This introduces a simple encoding scheme to produce abbreviated keys:
pack as many bytes of each UUID as will fit into a Datum. On
little-endian machines, a byteswap is also performed; the abbreviated
comparator can therefore just consist of a simple 3-way unsigned integer
comparison.
The purpose of this change is to speed up sorting data on a column
of type UUID.
Peter Geoghegan
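To make the scheme concrete, here is a minimal sketch of the idea, assuming
a 16-byte UUID and an 8-byte Datum; the names and the reliance on
__builtin_bswap64() are illustrative rather than the committed code:

/* Sketch only: pack the leading sizeof(Datum) bytes of a UUID into a
 * Datum, byteswapped on little-endian hardware so that unsigned integer
 * comparison agrees with memcmp() order on the original bytes. */
#include <stdint.h>
#include <string.h>

typedef uint64_t Datum;                 /* assume an 8-byte Datum */

static Datum
uuid_abbrev_convert(const unsigned char *uuid)      /* 16-byte input */
{
    Datum       res;

    memcpy(&res, uuid, sizeof(Datum));
#ifndef WORDS_BIGENDIAN
    res = __builtin_bswap64(res);       /* byteswap on little-endian */
#endif
    return res;
}

/* The abbreviated comparator is then a plain 3-way unsigned comparison. */
static int
uuid_cmp_abbrev(Datum x, Datum y)
{
    if (x > y)
        return 1;
    if (x < y)
        return -1;
    return 0;
}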
With include_realm=1 being set down in parse_hba_auth_opt, if multiple
options are passed on the pg_hba line, such as:
host all all 0.0.0.0/0 gss include_realm=0 krb_realm=XYZ.COM
We would mistakenly reset include_realm back to 1. Instead, we need to
set include_realm=1 up in parse_hba_line, prior to parsing any of the
additional options.
Discovered by Jeff McCormick during testing.
Bug introduced by 9a08841.
Back-patch to 9.5
Previously, negative values were always displayed in bytes, regardless
of how large they were.
Adrian Vondendriesch, reviewed by Julien Rouhaud and myself
The jsonb_path_ops code calculated hash values inconsistently in some cases
involving nested arrays and objects. This would result in queries possibly
not finding entries that they should find, when using a jsonb_path_ops GIN
index for the search. The problem cases involve JSONB values that contain
both scalars and sub-objects at the same nesting level, for example an
array containing both scalars and sub-arrays. To fix, reset the current
stack->hash after processing each value or sub-object, not before; and
don't try to be cute about the outermost level's initial hash.
Correcting this means that existing jsonb_path_ops indexes may now be
inconsistent with the new hash calculation code. The symptom is the same
--- searches not finding entries they should find --- but the specific
rows affected are likely to be different. Users will need to REINDEX
jsonb_path_ops indexes to make sure that all searches work as expected.
Per bug #13756 from Daniel Cheng. Back-patch to 9.4 where the faulty
logic was introduced.
Previously, plpython was in the habit of allocating a lot of stuff in
TopMemoryContext, and it was very slipshod about making sure that stuff
got cleaned up; in particular, use of TopMemoryContext as fn_mcxt for
function calls represents an unfixable leak, since we generally don't
know what the called function might have allocated in fn_mcxt. This
results in session-lifespan leakage in certain usage scenarios, as for
example in a case reported by Ed Behn back in July.
To fix, get rid of all the retail allocations in TopMemoryContext.
All long-lived allocations are now made in sub-contexts that are
associated with specific objects (either pl/python procedures, or
Python-visible objects such as cursors and plans). We can clean these
up when the associated object is deleted.
I went so far as to get rid of PLy_malloc completely. There were a
couple of places where it could still have been used safely, but on
the whole it was just an invitation to bad coding.
Haribabu Kommi, based on a draft patch by Heikki Linnakangas;
some further work by me
Up until now, the total amount of data that could be passed to a
background worker at startup was one datum, which can be as small as
4 bytes on some systems. That's enough to pass a dsm_handle or an
array index, but not much else. Add a bgw_extra field to the
BackgroundWorker struct, allowing up to 128 bytes to be passed to
a new worker on any platform.
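For illustration, registration code might pack a small fixed-size struct
into bgw_extra along these lines; this is a sketch, with MyWorkerState and
launch_example_worker() hypothetical:

/* Sketch: pass a small fixed-size struct to a new worker via bgw_extra,
 * which holds up to BGW_EXTRALEN (128) bytes. Error handling omitted. */
typedef struct MyWorkerState
{
    int         worker_index;   /* slot assigned by the leader */
    uint32      queue_id;       /* some other per-worker datum */
} MyWorkerState;

static void
launch_example_worker(int index, uint32 queue_id)
{
    BackgroundWorker worker;
    BackgroundWorkerHandle *handle;
    MyWorkerState state;

    memset(&worker, 0, sizeof(worker));
    snprintf(worker.bgw_name, BGW_MAXLEN, "example worker");
    state.worker_index = index;
    state.queue_id = queue_id;
    StaticAssertStmt(sizeof(state) <= BGW_EXTRALEN, "state too large");
    memcpy(worker.bgw_extra, &state, sizeof(state));
    /* ... set bgw_flags, bgw_start_time, bgw_main, etc. ... */
    RegisterDynamicBackgroundWorker(&worker, &handle);
}

The worker can then copy the struct back out of MyBgworkerEntry->bgw_extra.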
Use this to fix a problem I recently discovered with the parallel
context machinery added in 9.5: the master assigns each worker an
array index, and each worker subsequently assigns itself an array
index, and there's nothing to guarantee that the two sets of indexes
match, leading to chaos.
Normally, I would not back-patch the change to add bgw_extra, since it
is basically a feature addition. However, since 9.5 is still in beta
and there seems to be no other sensible way to repair the broken
parallel context machinery, back-patch to 9.5. Existing background
worker code can ignore the bgw_extra field without a problem, but
might need to be recompiled since the structure size has changed.
Report and patch by me. Review by Amit Kapila.
Rather than filling a temporary array and then copying values to the
output array, we can generate the required random permutation in-place
using the Fisher-Yates shuffle algorithm. This is shorter as well as
more efficient than before. It's pretty unlikely that anyone would
notice a speed improvement, but shorter code is better.
Nathan Wagner, edited a bit by me
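For reference, the in-place algorithm is only a few lines; a self-contained
sketch, with rand() standing in for whatever random source the caller
actually uses:

#include <stdlib.h>

/* Fill result[] with a random permutation of 0..n-1 using the
 * Fisher-Yates shuffle; no temporary array is needed. */
static void
random_permutation(int *result, int n)
{
    int         i;

    for (i = 0; i < n; i++)
        result[i] = i;
    for (i = n - 1; i > 0; i--)
    {
        int         j = rand() % (i + 1);   /* 0 <= j <= i */
        int         tmp = result[i];

        result[i] = result[j];
        result[j] = tmp;
    }
}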
The user can whitelist specified extension(s) in the foreign server's
options, whereupon we will treat immutable functions and operators of those
extensions as candidates to be sent for remote execution.
Whitelisting an extension in this way basically promises that the extension
exists on the remote server and behaves compatibly with the local instance.
We have no way to prove that formally, so we have to rely on the user to
get it right. But this seems like something that people can usually get
right in practice.
We might in future allow functions and operators to be whitelisted
individually, but extension granularity is a very convenient special case,
so it got done first.
The patch as-committed lacks any regression tests, which is unfortunate,
but introducing dependencies on other extensions for testing purposes
would break "make installcheck" scenarios, which is worse. I have some
ideas about klugy ways around that, but it seems like material for a
separate patch. For the moment, leave the problem open.
Paul Ramsey, hacked up a bit more by me
Fix some brain fade in commit a2dabf0e1dda93c8: erroneous variable names
in docs, rearrangements that made sentences less clear not more so,
undocumented and poorly-chosen-anyway API behaviors of subroutines,
bad grammar in error messages, copy-and-paste faults.
Albe Laurenz and Tom Lane
Commit a1480ec1d3 purported to fix the
problems with commit b2ccb5f4e6, but it
didn't completely fix them. The problem is that the checks were
performed in the wrong order, leading to a race condition. If the
sender attached, sent a message, and detached after the receiver
called shm_mq_get_sender and before the receiver called
shm_mq_counterparty_gone, we'd incorrectly return SHM_MQ_DETACHED
before all messages were read. Repair by reversing the order of
operations, and add a long comment explaining why this new logic is
(hopefully) correct.
Commit d1b7c1ffe7 introduced a mechanism
for serializing a ParamListInfo structure to be passed to a parallel
worker. However, this mechanism failed to handle external expanded
values, as pointed out by Noah Misch. Repair.
Moreover, plpgsql_param_fetch requires adjustment because the
serialization mechanism needs it to skip evaluating unused parameters
just as we would do when it is called from copyParamList, but params
== estate->paramLI in that case. To fix, make the bms_is_member test
in that function unconditional.
Finally, have setup_param_list set a new ParamListInfo field,
paramMask, to the parameters actually used in the expression, so that
we don't try to fetch those that are not needed when serializing a
parameter list. This isn't necessary for correctness, but it makes
the performance of the parallel executor code comparable to what we
do for cases involving cursors.
Design suggestions and extensive review by Noah Misch. Patch by me.
On insert the CheckForSerializableConflictIn() test was performed
before the page(s) which were going to be modified had been locked
(with an exclusive buffer content lock). If another process
acquired a relation SIReadLock on the heap and scanned to a page on
which an insert was going to occur before the page was so locked,
a rw-conflict would be missed, which could allow a serialization
anomaly to be missed. The window between the check and the page
lock was small, so the bug was generally not noticed unless there
was high concurrency with multiple processes inserting into the
same table.
This was reported by Peter Bailis as bug #11732, by Sean Chittenden
as bug #13667, and by others.
The race condition was eliminated in heap_insert() by moving the
check down below the acquisition of the buffer lock, which had been
the very next statement. Because of the loop locking and unlocking
multiple buffers in heap_multi_insert(), a check was added after all
inserts were completed. The check before the start of the inserts
was left because it might avoid a large amount of work to detect a
serialization anomaly before performing all of the inserts and
the related WAL logging.
While investigating this bug, other SSI bugs which were even harder
to hit in practice were noticed and fixed, an unnecessary check
(covered by another check, so redundant) was removed from
heap_update(), and comments were improved.
Back-patch to all supported branches.
Kevin Grittner and Thomas Munro
A lookbehind constraint is like a lookahead constraint in that it consumes
no text; but it checks for existence (or nonexistence) of a match *ending*
at the current point in the string, rather than one *starting* at the
current point. This is a long-requested feature since it exists in many
other regex libraries, but Henry Spencer had never got around to
implementing it in the code we use.
Just making it work is actually pretty trivial; but naive copying of the
logic for lookahead constraints leads to code that often spends O(N^2) time
to scan an N-character string, because we have to run the match engine
from string start to the current probe point each time the constraint is
checked. In typical use-cases a lookbehind constraint will be written at
the start of the regex and hence will need to be checked at every character
--- so O(N^2) work overall. To fix that, I introduced a third copy of the
core DFA matching loop, paralleling the existing longest() and shortest()
loops. This version, matchuntil(), can suspend and resume matching given
a couple of pointers' worth of storage space. So we need only run it
across the string once, stopping at each interesting probe point and then
resuming to advance to the next one.
I also put in an optimization that simplifies one-character lookahead and
lookbehind constraints, such as "(?=x)" or "(?<!\w)", into AHEAD and BEHIND
constraints, which already existed in the engine. This avoids the overhead
of the LACON machinery entirely for these rather common cases.
The net result is that lookbehind constraints run a factor of three or so
slower than Perl's for multi-character constraints, but faster than Perl's
for one-character constraints ... and they work fine for variable-length
constraints, which Perl gives up on entirely. So that's not bad from a
competitive perspective, and there's room for further optimization if
anyone cares. (In reality, raw scan rate across a large input string is
probably not that big a deal for Postgres usage anyway; so I'm happy if
it's linear.)
Commit b0b0d84b3d purported to make it
possible to relaunch workers using the same parallel context, but it had
an unpleasant race condition: we might reinitialize after the workers
have sent their last control message but before they have detached the
DSM, leading to crashes. Repair by introducing a new ParallelContext
operation, ReinitializeParallelDSM.
Adjust execParallel.c to use this new support, so that we can rescan a
Gather node by relaunching workers but without needing to recreate the
DSM.
Amit Kapila, with some adjustments by me. Extracted from latest parallel
sequential scan patch.
The original Gather code failed to mark a Gather node as not able to
do projection, but in fact it couldn't, even though it did initialize its
projection info via ExecAssignProjectionInfo. There doesn't seem to
be any good reason for this node not to have projection capability,
so clean things up so that it does. Without this, plans using Gather
nodes might need to carry extra Result nodes to do projection.
A bug in the original free space computation made it possible to
return a page which wasn't actually able to fit the item. Since the
insertion code isn't prepared to deal with PageAddItem failing, a PANIC
resulted ("failed to add BRIN tuple [to new page]"). Add a macro to
encapsulate the correct computation, and use it in
brin_getinsertbuffer's callers before calling that routine, to raise an
early error.
I became aware of the possibility of a problem in this area while working
on ccc4c07499. There's no archived discussion about it, but it's
easy to reproduce a problem in the unpatched code with something like
CREATE TABLE t (a text);
CREATE INDEX ti ON t USING brin (a) WITH (pages_per_range=1);
for length in `seq 8000 8196`
do
psql -f - <<EOF
TRUNCATE TABLE t;
INSERT INTO t VALUES ('z'), (repeat('a', $length));
EOF
done
Backpatch to 9.5, where BRIN was introduced.
Further tweak commit_ts.c so that on a standby the state is completely
consistent with that in the master, rather than behaving
differently in the cases where the settings differ. Now in standby and
master the module should always be active or inactive in lockstep.
Author: Petr Jelínek, with some further tweaks by Álvaro Herrera.
Backpatch to 9.5, where commit timestamps were introduced.
Discussion: http://www.postgresql.org/message-id/5622BF9D.2010409@2ndquadrant.com
Bernd Helmle complained that CreateReplicationSlot() was assigning the
same value to the same variable twice, so we could remove one of them.
Code inspection reveals that we can actually remove both assignments:
according to the author, the assignment was there for the beauty of the
strlen line only, and another possible fix to that is to put the strlen
in its own line, so do that.
To be consistent within the file, refactor all duplicated strlen()
calls, which is what we do elsewhere in the backend anyway. In
basebackup.c, snprintf already returns the right length; no need for
strlen afterwards.
Backpatch to 9.4, where replication slots were introduced, to keep code
identical. Some of this is older, but the patch doesn't apply cleanly
and it's only of cosmetic value anyway.
Discussion: http://www.postgresql.org/message-id/BE2FD71DEA35A2287EA5F018@eje.credativ.lan
If the counterparty writes some data into the queue and then detaches,
it's wrong to return SHM_MQ_DETACHED right away. If we do that, we
fail to read whatever was written.
The shm_mq mechanism was intended to optionally notice when the process
on the other end of the queue fails to attach to the queue. It does
this by allowing the user to pass a BackgroundWorkerHandle; if the
background worker in question is launched and dies without attaching
to the queue, then we know it never will. This logic works OK in
blocking mode, but when called with nowait = true we fail to notice
that this has happened due to an asymmetry in the logic. Repair.
Reported off-list by Rushabh Lathia. Patch by me.
Once upon a time we did not have a separate CREATEROLE privilege, and
CREATEUSER effectively meant SUPERUSER. When we invented CREATEROLE
(in 8.1) we also added SUPERUSER so as to have a less confusing keyword
for this role property. However, we left CREATEUSER in place as a
deprecated synonym for SUPERUSER, because of backwards-compatibility
concerns. It's still there and is still confusing people, as for example
in bug #13694 from Justin Catterson. 9.6 will be ten years or so later,
which surely ought to be long enough to end the deprecation and just
remove these old keywords. Hence, do so.
Commit 816e336f12 added the wrong error
check to async.c; sending notifications is restricted to the leader,
not altogether unsafe.
Commit 3bd909b220 added ExecShutdownNode
to traverse the planstate tree and call shutdown functions, but made
a Gather node, the only node that actually has such a function, abort
the tree traversal, which is wrong.
Commit bda76c1c8c caused both plus and
minus infinity to be rendered as "infinity", which is not only wrong
but inconsistent with the pre-9.4 behavior of to_json(). Fix that by
duplicating the coding in date_out/timestamp_out/timestamptz_out more
closely. Per bug #13687 from Stepan Perlov. Back-patch to 9.4, like
the previous commit.
In passing, also re-pgindent json.c, since it had gotten a bit messed up by
recent patches (and I was already annoyed by indentation-related problems
in back-patching this fix ...)
Per a report from Shay Rojansky, Npgsql sends ssl_renegotiation_limit=0
in the startup packet because it does not support renegotiation; other
clients which have not attempted to support renegotiation might well
behave similarly. The recent removal of this parameter forces them to
break compatibility with either current PostgreSQL versions, or
previous ones. Per discussion, the best solution is to accept the
parameter but only allow a value of 0.
Shay Rojansky, edited a little by me.
Commit 0e57b4d8bd contained some clever
logic that attempted to make sure that we couldn't get confused about
whether the last thing we cached was a strcoll() result or a strxfrm()
result, but it wasn't quite clever enough, because we can perform
further abbreviations after having already performed some comparisons.
Introduce an explicit flag in the hopes of making this watertight.
Peter Geoghegan, reviewed by me.
Instead, use transaction abort. Given an unlucky bout of latency, the
timeout would cancel the RESET itself. Buildfarm members gharial,
lapwing, mereswine, shearwater, and sungazer witness that. Back-patch
to 9.1 (all supported versions). The query_canceled test still could
time out before entering its subtransaction; for whatever reason, that
has yet to happen on the buildfarm.
pg_regprefix was doing nothing with lookahead constraints, which would
be fine if it were the right kind of nothing, but it isn't: we have to
terminate our search for a fixed prefix, not just pretend the LACON arc
isn't there. Otherwise, if the current state has both a LACON outarc and a
single plain-color outarc, we'd falsely conclude that the color represents
an addition to the fixed prefix, and generate an extracted index condition
that restricts the indexscan too much. (See added regression test case.)
Terminating the search is conservative: we could traverse the LACON arc
(thus assuming that the constraint can be satisfied at runtime) and then
examine the outarcs of the linked-to state. But that would be a lot more
work than it seems worth, because writing a LACON followed by a single
plain character is a pretty silly thing to do.
This makes a difference only in rather contrived cases, but it's a bug,
so back-patch to all supported branches.
Using this API, one backend can set up a ParallelHeapScanDesc to
which multiple backends can then attach. Each tuple in the relation
will be returned to exactly one of the scanning backends. Only
forward scans are supported, and rescans must be carefully
coordinated.
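A rough sketch of the intended call pattern, with the parallel-context and
DSM setup omitted and details abbreviated from the patch:

/* Sketch only: the leader sizes and initializes the shared scan state;
 * then the leader and workers all attach to it and scan cooperatively. */

/* Leader: carve the shared descriptor out of the DSM segment. */
Size        size = heap_parallelscan_estimate(snapshot);
ParallelHeapScanDesc pscan = shm_toc_allocate(toc, size);

heap_parallelscan_initialize(pscan, relation, snapshot);

/* Each participant: attach and scan; every tuple in the relation is
 * returned to exactly one of the attached backends. */
HeapScanDesc scan = heap_beginscan_parallel(relation, pscan);
HeapTuple   tuple;

while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
{
    /* process tuple */
}
heap_endscan(scan);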
This is not exposed to the planner or executor yet.
The original version of this code was written by me. Amit Kapila
reviewed it, tested it, and improved it, including adding support for
synchronized scans, per review comments from Jeff Davis. Extensive
testing of this and related patches was performed by Haribabu Kommi.
Final cleanup of this patch by me.
This may allow some callers to avoid the overhead involved in tearing
down a parallel context and then setting up a new one, which means
releasing the DSM and then allocating and populating a new one. I
suspect we'll want to revise the Gather node to make use of this new
capability, but even if not it may be useful elsewhere and requires
very little additional code.
Revert our previous addition of "all" flags to copyins() and copyouts();
they're no longer needed, and were never anything but an unsightly hack.
Improve a couple of infelicities in the REG_DEBUG code for dumping
the NFA data structure, including adding code to count the total
number of states and arcs.
Add a couple of missed error checks.
Add some more documentation in the README file, and some regression tests
illustrating cases that exceeded the state-count limit and/or took
unreasonable amounts of time before this set of patches.
Back-patch to all supported branches.
This code previously counted the number of NFA states it created, and
complained if a limit was exceeded, so as to prevent bizarre regex patterns
from consuming unreasonable time or memory. That's fine as far as it went,
but the code paid no attention to how many arcs linked those states. Since
regexes can be contrived that have O(N) states but will need O(N^2) arcs
after fixempties() processing, it was still possible to blow out memory,
and take a long time doing it too. To fix, modify the bookkeeping to count
space used by both states and arcs.
I did not bother with including the "color map" in the accounting; it
can only grow to a few megabytes, which is not a lot in comparison to
what we're allowing for states+arcs (about 150MB on 64-bit machines
or half that on 32-bit machines).
Looking at some of the larger real-world regexes captured in the Tcl
regression test suite suggests that the most that is likely to be needed
for regexes found in the wild is under 10MB, so I believe that the current
limit has enough headroom to make it okay to keep it as a hard-wired limit.
In connection with this, redefine REG_ETOOBIG as meaning "regular
expression is too complex"; the previous wording of "nfa has too many
states" was already somewhat inapropos because of the error code's use
for stack depth overrun, and it was not very user-friendly either.
Back-patch to all supported branches.
The previous coding would create a new intermediate state every time it
wanted to interchange the ordering of two constraint arcs. Certain regex
features such as \Y can generate large numbers of parallel constraint arcs,
and if we needed to reorder the results of that, we created unreasonable
numbers of intermediate states. To improve matters, keep a list of
already-created intermediate states associated with the state currently
being considered by the outer loop; we can re-use such states to place all
the new arcs leading to the same destination or source.
I also took the trouble to redefine push() and pull() to have a less risky
API: they no longer delete any state or arc that the caller might possibly
have a pointer to, except for the specifically-passed constraint arc.
This reduces the risk of re-introducing the same type of error seen in
the failed patch for CVE-2007-4772.
Back-patch to all supported branches.
The previous coding took something like O(N^4) time to fully process a
chain of N EMPTY arcs. We can't really do much better than O(N^2) because
we have to insert about that many arcs, but we can do lots better than
what's there now. The win comes partly from using mergeins() to amortize
de-duplication of arcs across multiple source states, and partly from
exploiting knowledge of the ordering of arcs for each state to avoid
looking at arcs we don't need to consider during the scan. We do have
to be a bit careful of the possible reordering of arcs introduced by
the sort-merge coding of the previous commit, but that's not hard to
deal with.
Back-patch to all supported branches.
Change the singly-linked in-arc and out-arc lists to be doubly-linked,
so that arc deletion is constant time rather than having worst-case time
proportional to the number of other arcs on the connected states.
Modify the bulk arc transfer operations copyins(), copyouts(), moveins(),
moveouts() so that they use a sort-and-merge algorithm whenever there's
more than a small number of arcs to be copied or moved. The previous
method is O(N^2) in the number of arcs involved, because it performs
duplicate checking independently for each copied arc. The new method may
change the ordering of existing arcs for the destination state, but nothing
really cares about that.
Provide another bulk arc copying method mergeins(), which is unused as
of this commit but is needed for the next one. It basically is like
copyins(), but the source arcs might not all come from the same state.
Replace the O(N^2) bubble-sort algorithm used in carcsort() with a qsort()
call.
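Schematically, the comparator-driven sort looks like this, with the struct
and ordering simplified from the real arc representation:

/* Simplified sketch: order arcs by color, then destination, and let
 * qsort() do the work the bubble sort used to do. */
struct carc_sketch
{
    int         co;             /* arc color */
    int         to;             /* destination state number */
};

static int
carc_cmp_sketch(const void *a, const void *b)
{
    const struct carc_sketch *aa = (const struct carc_sketch *) a;
    const struct carc_sketch *bb = (const struct carc_sketch *) b;

    if (aa->co != bb->co)
        return (aa->co < bb->co) ? -1 : 1;
    if (aa->to != bb->to)
        return (aa->to < bb->to) ? -1 : 1;
    return 0;
}

/* usage: qsort(arcs, n, sizeof(struct carc_sketch), carc_cmp_sketch); */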
These changes greatly improve the performance of regex compilation for
large or complex regexes, at the cost of extra space for arc storage during
compilation. The original tradeoff was probably fine when it was made, but
now we care more about speed and less about memory consumption.
Back-patch to all supported branches.
It's possible to construct regular expressions that contain loops of
constraint arcs (that is, ^ $ AHEAD BEHIND or LACON arcs). There's no use
in fully traversing such a loop at execution, since you'd just end up in
the same NFA state without having consumed any input. Worse, such a loop
leads to infinite looping in the pullback/pushfwd stage of compilation,
because we keep pushing or pulling the same constraints around the loop
in a vain attempt to move them to the pre or post state. Such looping was
previously recognized in CVE-2007-4772; but the fix only handled the case
of trivial single-state loops (that is, a constraint arc leading back to
its source state) ... and not only that, it was incorrect even for that
case, because it broke the admittedly-not-very-clearly-stated API contract
of the pull() and push() subroutines. The first two regression test cases
added by this commit exhibit patterns that result in assertion failures
because of that (though there seem to be no ill effects in non-assert
builds). The other new test cases exhibit multi-state constraint loops;
in an unpatched build they will run until the NFA state-count limit is
exceeded.
To fix, remove the code added for CVE-2007-4772, and instead create a
general-purpose constraint-loop-breaking phase of regex compilation that
executes before we do pullback/pushfwd. Since we never need to traverse
a constraint loop fully, we can just break the loop at any chosen spot,
if we add clone states that can replicate any sequence of arc transitions
that would've traversed just part of the loop.
Also add some commentary clarifying why we have to have all these
machinations in the first place.
This class of problems has been known for some time --- we had a report
from Marc Mamin about two years ago, for example, and there are related
complaints in the Tcl bug tracker. I had discussed a fix of this kind
off-list with Henry Spencer, but didn't get around to doing something
about it until the issue was rediscovered by Greg Stark recently.
Back-patch to all supported branches.
Prior to commit 0709b7ee72, access to
variables within a spinlock-protected critical section had to be done
through a volatile pointer, but that should no longer be necessary.
Michael Paquier
Prior to commit 0709b7ee72, access to
variables within a spinlock-protected critical section had to be done
through a volatile pointer, but that should no longer be necessary.
Thomas Munro
In order for this to be safe, the code which handles true serializability
will need to be taught that the SIRead locks taken by a parallel worker
pertain to the same transaction as those taken by the parallel leader.
Some further changes may be needed as well. Until the necessary
adaptations are made, don't generate parallel plans in serializable
mode, and if a previously-generated parallel plan is used after
serializable mode has been activated, run it serially.
This fixes a bug in commit 7aea8e4f2d.
In the previous coding, before returning from ExecutorRun, we'd shut
down all parallel workers. This was dead wrong if ExecutorRun was
called with a non-zero tuple count; it had the effect of truncating
the query output. To fix, give ExecutePlan control over whether to
enter parallel mode, and have it refuse to do so if the tuple count
is non-zero. Rewrite the Gather logic so that it can cope with being
called outside parallel mode.
Commit 7aea8e4f2d is largely to blame
for this problem, though this patch modifies some subsequently-committed
code which relied on the guarantees it purported to make.
Commit 7aea8e4f2d was overoptimistic
about the degree of safety associated with running various functions
in parallel mode. Functions that take a table name or OID as an
argument are at least parallel-restricted, because the table might be
temporary, and we currently don't allow parallel workers to touch
temporary tables. Functions that take a query as an argument are
outright unsafe, because the query could be anything, including a
parallel-unsafe query.
Also, the queue of pending notifications is backend-private, so adding
to it from a worker doesn't behave correctly. We could fix this by
transferring the worker's queue of pending notifications to the master
during worker cleanup, but that seems like more trouble than it's
worth for now. In addition to adjusting the pg_proc.h markings, also
add an explicit check for this in async.c.
check_role() tries to verify that the user has permission to become the
requested role, but this is inappropriate in a parallel worker, which
needs to exactly recreate the master's authorization settings. So skip
the check in that case.
This fixes a bug in commit 924bcf4f16.
Starting a parallel worker transaction changes our notion of which XIDs
are in-progress or committed, and our notion of the current command
counter ID. Therefore, our view of these caches prior to starting
this transaction may no longer be valid. Defend against that by clearing
them.
This fixes a bug in commit 924bcf4f16.
Commit 924bcf4f16 failed to enforce
parallel mode checks during the commit of a parallel worker, because
we exited parallel mode prior to ending the transaction so that we
could pop the active snapshot. Re-establish parallel mode during
parallel worker commit. Without this, it's far too easy for unsafe
actions during the pre-commit sequence to crash the server instead of
hitting the error checks as intended.
Just to be extra paranoid, adjust a couple of the sanity checks in
xact.c to check not only IsInParallelMode() but also
IsParallelWorker().
Commit 924bcf4f16 correctly forbade
parallel workers to modify the command counter while in parallel mode,
but it inexplicably neglected to actually transfer the current command
counter from leader to workers. This can result in the workers seeing
a different set of tuples from the leader, which is bad. Repair.
Commit 2bd9e412f9 introduced a mechanism
for relaying protocol messages from a background worker to another
backend via a shm_mq. However, there was no provision for shutting
down the communication channel. Therefore, a protocol message sent
late in the shutdown sequence, such as a DEBUG message resulting from
cranking up log_min_messages, could crash the server. To fix, install
an on_dsm_detach callback that disables sending messages to the shm_mq
when the associated DSM is detached.
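The shape of that fix, sketched with illustrative names (on_dsm_detach()
is the existing DSM callback hook; the flag and callbacks shown are
hypothetical):

/* Sketch: once the DSM segment holding the queue is detached, stop
 * redirecting protocol messages to the shm_mq. on_dsm_detach() is the
 * real API; the other names here are hypothetical. */
static bool pq_mq_usable = true;

static void
pq_mq_detach_callback(dsm_segment *seg, Datum arg)
{
    pq_mq_usable = false;       /* later sends become no-ops */
}

static void
setup_mq_redirect(dsm_segment *seg)
{
    on_dsm_detach(seg, pq_mq_detach_callback, (Datum) 0);
}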
The function failed to adhere to its specification that the "tcategory"
argument should not be examined when the input value is NULL. This
resulted in a crash in some cases. Per bug #13680 from Boyko Yordanov.
In passing, re-pgindent some recent changes in jsonb.c, and fix a rather
ungrammatical comment.
Diagnosis and patch by Michael Paquier, cosmetic changes by me
This fixes a long-standing bug which was discovered while investigating
the interaction between the new join pushdown code and the EvalPlanQual
machinery: if a ForeignScan appears on the inner side of a parameterized
nestloop, an EPQ recheck would re-return the original tuple even if
it no longer satisfied the pushed-down quals due to changed parameter
values.
This fix adds a new member to ForeignScan and ForeignScanState and a
new argument to make_foreignscan, and requires changes to FDWs which
push down quals to populate that new argument with a list of quals they
have chosen to push down. Therefore, I'm only back-patching to 9.5,
even though the bug is not new in 9.5.
Etsuro Fujita, reviewed by me and by Kyotaro Horiguchi.
pg_upgrade: reorder controldata checks to match program output
Also improve comment for how float8_pass_by_value is used.
Backpatch through 9.5
Per Mark Johnston, this resolves a build error on FreeBSD related
to the fact that dtrace is modifying the generated object files
under the hood. Consequently, without this, dtrace gets reinvoked
at install time because the object files have been updated. This
is a pretty hacky fix, but it shouldn't hurt anything, and it's
not clear that it's worth expending any more effort for a feature
that not too many people are using.
Patch by Mark Johnston. This is arguably back-patchable as a bug
fix to the build system, but I'm not certain enough of the
consequences to try that. Let's see what the buildfarm (and
our packagers) think of this change on master first.
Postmaster child processes that aren't supposed to be attached to shared
memory were not bothering to close the shared memory mapping handle they
inherit from the postmaster process. That's mostly harmless, since the
handle vanishes anyway when the child process exits -- but the syslogger
process, if used, doesn't get killed and restarted during recovery from a
backend crash. That meant that Windows doesn't see the shared memory
mapping as becoming free, so it doesn't delete it and the postmaster is
unable to create a new one, resulting in failure to recover from crashes
whenever logging_collector is turned on.
Per report from Dmitry Vasilyev. It's a bit astonishing that we'd not
figured this out long ago, since it's been broken from the very beginnings
of our native Windows support; probably some previously-unexplained trouble
reports trace to this.
A secondary problem is that on Cygwin (perhaps only in older versions?),
exec() may not detach from the shared memory segment after all, in which
case these child processes did remain attached to shared memory, posing
the risk of an unexpected shared memory clobber if they went off the rails
somehow. That may be a long-gone bug, but we can deal with it now if it's
still live, by detaching within the infrastructure introduced here to deal
with closing the handle.
Back-patch to all supported branches.
Tom Lane and Amit Kapila
pg_ctl start with -w previously relied on a heuristic that the postmaster
would surely always manage to create postmaster.pid within five seconds.
Unfortunately, that fails much more often than we would like on some of the
slower, more heavily loaded buildfarm members.
We have known for quite some time that we could remove the need for that
heuristic on Unix by using fork/exec instead of system() to launch the
postmaster. This allows us to know the exact PID of the postmaster, which
allows near-certain verification that the postmaster.pid file is the one
we want and not a leftover, and it also lets us use waitpid() to detect
reliably whether the child postmaster has exited or not.
What was blocking this change was not wanting to rewrite the Windows
version of start_postmaster() to avoid use of CMD.EXE. That's doable
in theory but would require fooling about with stdout/stderr redirection,
and getting the handling of quote-containing postmaster switches to
stay the same might be rather ticklish. However, we realized that
we don't have to do that to fix the problem, because we can test
whether the shell process has exited as a proxy for whether the
postmaster is still alive. That doesn't allow an exact check of the
PID in postmaster.pid, but we're no worse off than before in that
respect; and we do get to get rid of the heuristic about how long the
postmaster might take to create postmaster.pid.
On Unix, this change means that a second "pg_ctl start -w" immediately
after another such command will now reliably fail, whereas previously
it would succeed if done within two seconds of the earlier command.
Since that's a saner behavior anyway, it's fine. On Windows, the case can
still succeed within the same time window, since pg_ctl can't tell that the
earlier postmaster's postmaster.pid isn't the pidfile it is looking for.
To ensure stable test results on Windows, we can insert a short sleep into
the test script for pg_ctl, ensuring that the existing pidfile looks stale.
This hack can be removed if we ever do rewrite start_postmaster(), but that
no longer seems like a high-priority thing to do.
Back-patch to all supported versions, both because the current behavior
is buggy and because we must do that if we want the buildfarm failures
to go away.
Tom Lane and Michael Paquier
Many functions stored JsonbIteratorToken values in variables of other
integer types. Also, standardize order relative to other declarations.
Expect compilers to generate the same code before and after this change.
Cache strxfrm() blobs across calls made to the text SortSupport
abbreviation routine. This can speed up sorting if the same string
needs to be abbreviated many times in a row.
Also, cache the result of the previous strcoll() comparison, so that
if we're asked to compare the same strings again, we don't need to call
strcoll() again.
Perhaps surprisingly, these optimizations don't seem to hurt even when
they don't help. memcmp() is really cheap compared to strcoll() or
strxfrm().
Peter Geoghegan, reviewed by me.
If we do some byte-swapping while abbreviating, we can do comparisons
using integer arithmetic rather than memcmp.
Peter Geoghegan, reviewed and slightly revised by me.
This flag has proven to be a recipe for bugs, and it doesn't seem like
it can really buy anything in terms of performance. So let's just
*always* set the process latch when we receive SIGUSR1 instead of
trying to do it only when needed.
Per my recent proposal on pgsql-hackers.
During expand_security_quals, we take the security barrier quals on an
RTE and create a subquery which evaluates the quals. During this, we
have to replace any variables in the outer query which refer to the
original RTE with references to the columns from the subquery.
We need to also perform that replacement for any Vars in the
append_rel_list.
Only backpatching to 9.5 as we only go through this process in 9.4 for
auto-updatable security barrier views, which UNION ALL queries aren't.
Discovered by Haribabu Kommi
Patch by Dean Rasheed
For some reason, neither of the compilers I usually use noticed the
uninitialized-variable problem I introduced in commit 7e2a18a916.
That's hardly a good enough excuse though. Committing with brown paper bag
on head.
In addition to putting the operations in the right order, move the
declaration of "now" inside the loop; there's no need for it to be
outside, and that does wake up older gcc enough to notice any similar
future problem.
Back-patch to 9.4; earlier versions lack the time-to-SIGKILL stanza
so there's no bug.
This is like BSWAP32, but for 64-bit values. Since we've got two of
them now and they have use cases (like sortsupport) beyond CRCs, move
the definitions to their own header file.
Peter Geoghegan
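For reference, a portable fallback along these lines would do the job;
real implementations generally prefer a compiler builtin such as
__builtin_bswap64() when one is available, so this sketch is not the
committed macro:

#include <stdint.h>

/* Sketch of a portable 64-bit byte swap. */
static inline uint64_t
bswap64(uint64_t x)
{
    return ((x << 56) & UINT64_C(0xff00000000000000)) |
           ((x << 40) & UINT64_C(0x00ff000000000000)) |
           ((x << 24) & UINT64_C(0x0000ff0000000000)) |
           ((x << 8)  & UINT64_C(0x000000ff00000000)) |
           ((x >> 8)  & UINT64_C(0x00000000ff000000)) |
           ((x >> 24) & UINT64_C(0x0000000000ff0000)) |
           ((x >> 40) & UINT64_C(0x000000000000ff00)) |
           ((x >> 56) & UINT64_C(0x00000000000000ff));
}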
This lets us remove the large alternative results files for the main
json and jsonb tests, which makes modifying those tests simpler for
committers and patch submitters.
Backpatch to 9.4 for jsonb and 9.3 for json.
The postmaster now checks every minute or so (worst case, at most two
minutes) that postmaster.pid is still there and still contains its own PID.
If not, it performs an immediate shutdown, as though it had received
SIGQUIT.
The original goal behind this change was to ensure that failed buildfarm
runs would get fully cleaned up, even if the test scripts had left a
postmaster running, which is not an infrequent occurrence. When the
buildfarm script removes a test postmaster's $PGDATA directory, its next
check on postmaster.pid will fail and cause it to exit. Previously, manual
intervention was often needed to get rid of such orphaned postmasters,
since they'd block new test postmasters from obtaining the expected socket
address.
However, by checking postmaster.pid and not something else, we can provide
additional robustness: manual removal of postmaster.pid is a frequent DBA
mistake, and now we can at least limit the damage that will ensue if a new
postmaster is started while the old one is still alive.
Back-patch to all supported branches, since we won't get the desired
improvement in buildfarm reliability otherwise.
Prior to commit 0709b7ee72, access to
variables within a spinlock-protected critical section had to be done
through a volatile pointer, but that should no longer be necessary.
This continues work begun in df4077cda2
and 6ba4ecbf47.
Thomas Munro and Michael Paquier
For time masks, like HH24, MI, SS, CC, MM, do not count the negative
sign as part of the zero-padding length specified by the mask, e.g. have
to_char('-4 years'::interval, 'YY') return '-04', not '-4'.
Report by Craig Ringer
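The rule is easiest to see in isolation: zero-pad the digits to the mask
width and prepend the sign, instead of letting the sign consume a pad
column. A hypothetical sketch:

#include <stdio.h>

/* Illustrative only: the sign is prepended rather than counted against
 * the zero-padded width, so width 2 and value -4 yield "-04".
 * Assumes value > INT_MIN. */
static void
format_zero_padded(char *buf, size_t buflen, int value, int width)
{
    if (value < 0)
        snprintf(buf, buflen, "-%0*d", width, -value);
    else
        snprintf(buf, buflen, "%0*d", width, value);
}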
Some of the buildfarm members are evidently miserly enough of stack space
to pass the originally-committed form of this test. Increase the
requirement 10X to hopefully ensure that it fails as-expected everywhere.
Security: CVE-2015-5289
This adds regression tests which are specific to INSERT+RETURNING and
UPDATE+RETURNING to ensure that the SELECT policies are added as
WithCheckOptions (and should therefore throw an error when the policy is
violated).
Per suggestion from Andres.
Back-patch to 9.5 as the prior commit was.
The tsquery, ltxtquery and query_int data types have a common ancestor.
Having acquired check_stack_depth() calls independently, each was
missing at least one call. Back-patch to 9.0 (all supported versions).
A range type can name another range type as its subtype, and a record
type can bear a column of another record type. Consequently, functions
like range_cmp() and record_recv() are recursive. Functions at risk
include operator family members and referents of pg_type regproc
columns. Treat as recursive any such function that looks up and calls
the same-purpose function for a record column type or the range subtype.
Back-patch to 9.0 (all supported versions).
An array type's element type is never itself an array type, so array
functions are unaffected. Recursion depth proportional to array
dimensionality, found in array_dim_to_jsonb(), is fine thanks to MAXDIM.
Sufficiently-deep recursion heretofore elicited a SIGSEGV. If an
application constructs PostgreSQL json or jsonb values from arbitrary
user input, application users could have exploited this to terminate all
active database connections. That applies to 9.3, where the json parser
adopted recursive descent, and later versions. Only row_to_json() and
array_to_json() were at risk in 9.2, both in a non-security capacity.
Back-patch to 9.2, where the json type was introduced.
Oskari Saarenmaa, reviewed by Michael Paquier.
Security: CVE-2015-5289
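The pattern these fixes apply is uniform: a depth check at the top of each
at-risk recursive function. Sketched below with a hypothetical comparator;
check_stack_depth() is the existing backend helper that raises a clean
error on imminent overflow:

/* The recurring pattern: test stack depth on entry to each function
 * that can recurse to data-dependent depth. This function is
 * hypothetical. */
static int
record_cmp_recurse(Datum a, Datum b)
{
    check_stack_depth();        /* errors out before the stack overruns */

    /* ... compare fields, recursing into record-typed columns ... */
    return 0;
}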
Similar to 7d8db3e, given that INSERT+RETURNING requires SELECT rights
on the table, apply the SELECT policies as WCOs to the tuples being
inserted. Apply the same logic to UPDATE+RETURNING.
Back-patch to 9.5 where RLS was added.
The WithCheckOptions list in Query is only populated during rewrite and
does not need to be written out or read in as part of a Query structure.
Further, move WithCheckOptions to the bottom and add comments to clarify
that it is only populated during rewrite.
Back-patch to 9.5 with a catversion bump, as we are still in alpha.
In 020235a575 I lowered the autovacuum_*freeze_max_age minimums to
allow for easier testing of wraparounds. I did not touch the
corresponding per-table limits. While those don't matter for the purpose
of wraparound, it seems more consistent to lower them as well.
It's noteworthy that the previous reloption lower limit for
autovacuum_multixact_freeze_max_age was too high by one order of magnitude, even
before 020235a575.
Discussion: 26377.1443105453@sss.pgh.pa.us
Backpatch: back to 9.0 (in parts), like the prior patch
To allow users to force RLS to always be applied, even for table owners,
add ALTER TABLE .. FORCE ROW LEVEL SECURITY.
row_security=off overrides FORCE ROW LEVEL SECURITY, to ensure pg_dump
output is complete (by default).
Also add SECURITY_NOFORCE_RLS context to avoid data corruption when
ALTER TABLE .. FORCE ROW LEVEL SECURITY is being used. The
SECURITY_NOFORCE_RLS security context is used only during referential
integrity checks and is only considered in check_enable_rls() after we
have already checked that the current user is the owner of the relation
(which should always be the case during referential integrity checks).
Back-patch to 9.5 where RLS was added.
Due to b67aaf21e8 / CREATE EXTENSION ... CASCADE the test output
contains the extension name in yet another place, and that's variable
depending on the python version. So add yet another name mangling stanza
to regress-python3-mangle.mk.
Author: Petr Jelinek
On reflection, the submitted patch didn't really work to prevent the
request size from exceeding MaxAllocSize, because of the fact that we'd
happily round nbuckets up to the next power of 2 after we'd limited it to
max_pointers. The simplest way to enforce the limit correctly is to
round max_pointers down to a power of 2 when it isn't one already.
(Note that the constraint to INT_MAX / 2, if it were doing anything useful
at all, is properly applied after that.)
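Rounding down to a power of 2 is cheap; a minimal illustrative sketch of
that step (the backend has its own bit-twiddling helpers, so this is not
the committed code):

/* Illustrative: largest power of 2 not exceeding n (n >= 1 assumed),
 * so that a later round-up of nbuckets cannot overshoot the cap. */
static size_t
round_down_pow2(size_t n)
{
    size_t      result = 1;

    while (result <= n / 2)
        result *= 2;
    return result;
}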
Limit the size of the hashtable pointer array to not more than
MaxAllocSize, per reports from Kouhei Kaigai and others of "invalid memory
alloc request size" failures. There was discussion of allowing the array
to get larger than that by using the "huge" palloc API, but so far no proof
that that is actually a good idea, and at this point in the 9.5 cycle major
changes from old behavior don't seem like the way to go.
Fix a rather serious secondary bug in the new code, which was that it
didn't ensure nbuckets remained a power of 2 when recomputing it for the
multiple-batch case.
Clean up sloppy division of labor between ExecHashIncreaseNumBuckets and
its sole call site.
Null path elements and, where the object is an array, invalid integer
elements now cause an error.
Incorrect behaviour noted by Thom Brown, patch from Dmitry Dolgov.
Backpatch to 9.5 where jsonb_set was introduced
Specifically, make its effect independent from the row_security GUC, and
make it affect permission checks pertinent to views the BYPASSRLS role
owns. The row_security GUC thereby ceases to change successful-query
behavior; it can only make a query fail with an error. Back-patch to
9.5, where BYPASSRLS was introduced.
Without CASCADE, if an extension has an unfulfilled dependency on
another extension, CREATE EXTENSION ERRORs out with "required extension
... is not installed". That is annoying, especially when that dependency
is an implementation detail of the extension, rather than something the
extension's user can make sense of.
In addition to CASCADE this also includes a small set of regression
tests around CREATE EXTENSION.
Author: Petr Jelinek, editorialized by Michael Paquier, Andres Freund
Reviewed-By: Michael Paquier, Andres Freund, Jeff Janes
Discussion: 557E0520.3040800@2ndquadrant.com
The existing hint talked about "may only contain letters", but the
actual requirement is more strict: only lower case letters are allowed.
Reported-By: Rushabh Lathia
Author: Rushabh Lathia
Discussion: AGPqQf2x50qcwbYOBKzb4x75sO_V3g81ZsA8+Ji9iN5t_khFhQ@mail.gmail.com
Backpatch: 9.4-, where replication slots were added
Four related issues:
1) attnos/varnos/resnos for EXCLUDED were out of sync when a column
after one dropped in the underlying relation was referenced.
2) References to whole-row variables (i.e. EXCLUDED.*) lead to errors.
3) It was possible to reference system columns in the EXCLUDED pseudo
relations, even though they would not have valid contents.
4) References to EXCLUDED were rewritten by the RLS machinery, as
EXCLUDED was treated as if it were the underlying relation.
To fix the first two issues, generate the excluded targetlist with
dropped columns in mind and add an entry for whole row
variables. Instead of unconditionally adding a wholerow entry we could
pull up the expression if needed, but doing it unconditionally seems
simpler. The wholerow entry is only really needed for ruleutils/EXPLAIN
support anyway.
The remaining two issues are addressed by changing the EXCLUDED RTE to
have relkind = composite. That fits with EXCLUDED not actually being a
real relation, and allows treating it differently in the relevant
places. scanRTEForColumn now skips looking up system columns when the
RTE has a composite relkind; fireRIRrules() already had a corresponding
check, thereby preventing RLS expansion on EXCLUDED.
Also add tests for these issues, and improve a few comments around
excluded handling in setrefs.c.
Reported-By: Peter Geoghegan, Geoff Winkless
Author: Andres Freund, Amit Langote, Peter Geoghegan
Discussion: CAEzk6fdzJ3xYQZGbcuYM2rBd2BuDkUksmK=mY9UYYDugg_GgZg@mail.gmail.com,
CAM3SWZS+CauzbiCEcg-GdE6K6ycHE_Bz6Ksszy8AoixcMHOmsA@mail.gmail.com
Backpatch: 9.5, where ON CONFLICT was introduced
DST law changes in Cayman Islands, Fiji, Moldova, Morocco, Norfolk Island,
North Korea, Turkey, Uruguay. New zone America/Fort_Nelson for Canadian
Northern Rockies.
Some of the functions in regex compilation and execution recurse, and
therefore could in principle be driven to stack overflow. The Tcl crew
has seen this happen in practice in duptraverse(), though their fix was
to put in a hard-wired limit on the number of recursive levels, which is
not too appetizing --- fortunately, we have enough infrastructure to check
the actually available stack. Greg Stark has also seen it in other places
while fuzz testing on a machine with limited stack space. Let's put guards
in to prevent crashes in all these places.
Since the regex code would leak memory if we simply threw elog(ERROR),
we have to introduce an API that checks for stack depth without throwing
such an error. Fortunately that's not difficult.
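A sketch of the resulting pattern inside the regex code; stack_is_too_deep()
stands for the non-throwing check described above, and the surrounding
details are illustrative:

/* Sketch: report stack exhaustion through the regex library's own error
 * machinery instead of elog(ERROR), so compiled state gets freed
 * normally. Details other than the non-throwing depth check are
 * illustrative. */
static void
dup_traverse_sketch(struct nfa *nfa, struct state *s)
{
    if (stack_is_too_deep())
    {
        NERR(REG_ETOOBIG);      /* fail via regex error reporting */
        return;
    }
    /* ... recurse over the state's out-arcs ... */
}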
In cfindloop(), if the initial call to shortest() reports that a
zero-length match is possible at the current search start point, but then
it is unable to construct any actual match to that, it'll just loop around
with the same start point, and thus make no progress. We need to force the
start point to be advanced. This is safe because the loop over "begin"
points has already tried and failed to match starting at "close", so there
is surely no need to try that again.
This bug was introduced in commit e2bd904955,
wherein we allowed continued searching after we'd run out of match
possibilities, but evidently failed to think hard enough about exactly
where we needed to search next.
Because of the way this code works, such a match failure is only possible
in the presence of backrefs --- otherwise, shortest()'s judgment that a
match is possible should always be correct. That probably explains how
come the bug has escaped detection for several years.
The actual fix is a one-liner, but I took the trouble to add/improve some
comments related to the loop logic.
After fixing that, the submitted test case "()*\1" didn't loop anymore.
But it reported failure, though it seems like it ought to match a
zero-length string; both Tcl and Perl think it does. That seems to be from
overenthusiastic optimization on my part when I rewrote the iteration match
logic in commit 173e29aa5deefd9e71c183583ba37805c8102a72: we can't just
"declare victory" for a zero-length match without bothering to set match
data for capturing parens inside the iterator node.
Per fuzz testing by Greg Stark. The first part of this is a bug in all
supported branches, and the second part is a bug since 9.2 where the
iteration rewrite happened.
Commit 9662143f0c added infrastructure to
allow regular-expression operations to be terminated early in the event
of SIGINT etc. However, fuzz testing by Greg Stark disclosed that there
are still cases where regex compilation could run for a long time without
noticing a cancel request. Specifically, the fixempties() phase never
adds new states, only new arcs, so it doesn't hit the cancel check I'd put
in newstate(). Add one to newarc() as well to cover that.
Some experimentation of my own found that regex execution could also run
for a long time despite a pending cancel. We'd put a high-level cancel
check into cdissect(), but there was none inside the core text-matching
routines longest() and shortest(). Ordinarily those inner loops are very
very fast ... but in the presence of lookahead constraints, not so much.
As a compromise, stick a cancel check into the stateset cache-miss
function, which is enough to guarantee a cancel check at least once per
lookahead constraint test.
Making this work required more attention to error handling throughout the
regex executor. Henry Spencer had apparently originally intended longest()
and shortest() to be incapable of incurring errors while running, so
neither they nor their subroutines had well-defined error reporting
behaviors. However, that was already broken by the lookahead constraint
feature, since lacon() can surely suffer an out-of-memory failure ---
which, in the code as it stood, might never be reported to the user at all,
but just silently be treated as a non-match of the lookahead constraint.
Normalize all that by inserting explicit error tests as needed. I took the
opportunity to add some more comments to the code, too.
Back-patch to all supported branches, like the previous patch.
The output of a typical pg_rewind run contained a mix of capitalized and
not-capitalized and punctuated and not-punctuated phrases for no
apparent reason. Make that consistent. Also fix some problems in other
messages.
This case seems to have been overlooked when unvalidated check constraints
were introduced, in 9.2. The code would attempt to dump such constraints
over again for each child table, even though adding them to the parent
table is sufficient.
In 9.2 and 9.3, also fix contrib/pg_upgrade/Makefile so that the "make
clean" target fully cleans up after a failed test. This evidently got
dealt with at some point in 9.4, but it wasn't back-patched. I ran into
it while testing this fix ...
Per bug #13656 from Ingmar Brouns.
Module initialization was still not completely correct after commit
6b61955135, per crash report from Takashi Ohnishi. To fix, instead of
trying to monkey around with the value of the GUC setting directly, add
a separate boolean flag that enables the feature on a standby, but only
for the startup (recovery) process, when it sees that its master server
has the feature enabled.
Discussion: http://www.postgresql.org/message-id/ca44c6c7f9314868bdc521aea4f77cbf@MP-MSGSS-MBX004.msg.nttdata.co.jp
Also change the deactivation routine to delete all segment files rather
than leaving the last one around. (This doesn't need separate
WAL-logging, because on recovery we execute the same deactivation
routine anyway.)
In passing, clean up the code structure somewhat, particularly so that
xlog.c doesn't know so much about when to activate/deactivate the
feature.
Thanks to Fujii Masao for testing and Petr Jelínek for off-list discussion.
Back-patch to 9.5, where commit_ts was introduced.
Previously "GRANT * ON * TO " was tab-completed to add an extra "TO",
rather than with a list of roles. This is the bug that commit 2f88807
introduced unexpectedly. This commit fixes that incorrect tab-completion.
Thomas Munro, reviewed by Jeff Janes.
If some existing listener is far behind, incoming new listener sessions
would start from that session's read pointer and then need to advance over
many already-committed notification messages, which they have no interest
in. This was expensive in itself and also thrashed the pg_notify SLRU
buffers a lot more than necessary. We can improve matters considerably
in typical scenarios, without much added cost, by starting from the
furthest-ahead read pointer, not the furthest-behind one. We do have to
consider only sessions in our own database when doing this, which requires
an extra field in the data structure, but that's a pretty small cost.
Back-patch to 9.0 where the current LISTEN/NOTIFY logic was introduced.
Matt Newell, slightly adjusted by me
A Gather executor node runs any number of copies of a plan in an equal
number of workers and merges all of the results into a single tuple
stream. It can also run the plan itself, if the workers are
unavailable or haven't started up yet. It is intended to work with
the Partial Seq Scan node which will be added in future commits.
It could also be used to implement parallel query of a different sort
by itself, without help from Partial Seq Scan, if the single_copy mode
is used. In that mode, a worker executes the plan, and the parallel
leader does not, merely collecting the worker's results. So, a Gather
node could be inserted into a plan to split the execution of that plan
across two processes. Nested Gather nodes aren't currently supported,
but we might want to add support for that in the future.
There's nothing in the planner to actually generate Gather nodes yet,
so it's not quite time to break out the champagne. But we're getting
close.
Amit Kapila. Some design suggestions were provided by me, and I also
reviewed the patch. Single-copy mode, documentation, and other minor
changes also by me.
If a transaction or subtransaction creates a ParallelContext but ends
without calling InitializeParallelDSM, the previous code would
seg fault. Fix that.
When considering which policies should be included, rather than looking
at individual bits of the query (e.g., if a RETURNING clause exists, or
if a WHERE clause exists which references the table, or if it's a
FOR SHARE/UPDATE query), consider any case where we've determined
the user needs SELECT rights on the relation while doing an UPDATE or
DELETE to be a case where we apply SELECT policies, and any case where
we've determined that the user needs UPDATE rights on the relation while
doing a SELECT to be a case where we apply UPDATE policies.
This simplifies the logic and addresses concerns that a user could use
UPDATE or DELETE with a WHERE clause to determine whether rows exist, or
they could use SELECT .. FOR UPDATE to lock rows which they are not
actually allowed to modify through UPDATE policies.
Use list_append_unique() to avoid adding the same quals multiple times,
as, on balance, the cost of checking when adding the quals will almost
always be lower than that of keeping duplicates and doing busywork for
each tuple during execution.
Back-patch to 9.5 where RLS was added.
We seem to have lost a line somewhere along the way in the comment block
that discusses async.c's locks, because it suddenly refers to "both locks"
without previously having mentioned more than one. Add a sentence to make
that read more sanely. Also, refer to the "pos of the slowest backend"
not the "tail of the slowest backend", since we have no per-backend value
called "tail".
The tolerance (larger than actual tps number) increases as the number
of threads decreases. The bug has been there since the thread support
was introduced in 9.0. Because back patching introduces incompatible
behavior changes regarding the tps number, the fix is committed to
master and 9.5 stable branches only.
Problem spotted by me and fix proposed by Fabien COELHO. Note that his
original patch included more than the fix (a code refactoring), which is
not related to the problem; I omitted that part.
There are three main changes here:
1. No longer cause a start failure in a standby if the feature is
disabled in postgresql.conf but enabled in the master. This reverts one
part of commit 4f3924d9cd43; what we keep is the ability of the standby
to activate/deactivate the module (which includes creating and removing
segments as appropriate) during replay of such actions in the master.
2. Replay WAL records affecting commitTS even if the feature is
disabled. This means the standby will always have the same state as the
master after replay.
3. Have COMMIT PREPARE record the transaction commit time as well. We
were previously only applying it in the normal transaction commit path.
Author: Petr Jelínek
Discussion: http://www.postgresql.org/message-id/CAHGQGwHereDzzzmfxEBYcVQu3oZv6vZcgu1TPeERWbDc+gQ06g@mail.gmail.com
Discussion: http://www.postgresql.org/message-id/CAHGQGwFuzfO4JscM9LCAmCDCxp_MfLvN4QdB+xWsS-FijbjTYQ@mail.gmail.com
Additionally, I cleaned up nearby code related to replication origins,
which I found a bit hard to follow, and fixed a couple of typos.
Backpatch to 9.5, where this code was introduced.
Per bug reports from Fujii Masao and subsequent discussion.
We were passing error message texts to croak() verbatim, which turns out
not to work if the text contains non-ASCII characters; Perl mangles their
encoding, as reported in bug #13638 from Michal Leinweber. To fix, convert
the text into a UTF8-encoded SV first.
It's hard to test this without risking failures in different database
encodings; but we can follow the lead of plpython, which is already
assuming that no-break space (U+00A0) has an equivalent in all encodings
we care about running the regression tests in (cf commit 2dfa15de5).
Back-patch to 9.1. The code is quite different in 9.0, and anyway it seems
too risky to put something like this into 9.0's final minor release.
Alex Hunsaker, with suggestions from Tim Bunce and Tom Lane
This code provides infrastructure for a parallel leader to start up
parallel workers to execute subtrees of the plan tree being executed
in the master. User-supplied parameters from ParamListInfo are passed
down, but PARAM_EXEC parameters are not. Various other constructs,
such as initplans, subplans, and CTEs, are also not currently shared.
Nevertheless, there's enough here to support a basic implementation of
parallel query, and we can lift some of the current restrictions as
needed.
Amit Kapila and Robert Haas
Thom Brown reported that SSL connections didn't seem to work on Windows in
9.5. Asif Naeem figured out that the cause was my_sock_read() looking at
"errno" when it needs to look at "SOCK_ERRNO". This mistake was introduced
in commit 680513ab79, which cloned the
backend's custom SSL BIO code into libpq, and didn't translate the errno
handling properly. Moreover, it introduced unnecessary errno save/restore
logic, which was particularly confusing because it was incomplete; and it
failed to check for all three of EINTR, EAGAIN, and EWOULDBLOCK in
my_sock_write. (That might not be necessary; but since we're copying
well-tested backend code that does do that, it seems prudent to copy it
faithfully.)
To make sure that pg_dump/pg_restore function properly with RLS
policies, arrange to have a few of them left around at the end of the
regression tests.
Back-patch to 9.5 where RLS was added.
When taking the UPDATE path in INSERT .. ON CONFLICT .. UPDATE, tables
with oids were not supported. The tuple generated by the update target
list was projected without space for an oid - a simple oversight.
Reported-By: Peter Geoghegan
Author: Andres Freund
Backpatch: 9.5, where ON CONFLICT was introduced
Otherwise, we effectively act as if abs_top_builddir were the root
directory, which is quite dangerous if the user happens to have
permissions to do things there. This can crop up in PGXS builds,
for example.
Report by Sandro Santilli, patch by me, review by Noah Misch.
In 9.5 and master there is no need to support legacy truncation. This is
just committed separately to make it easier to backpatch the WAL logged
multixact truncation to 9.3 and 9.4 if we later decide to do so.
I bumped master's magic from 0xD086 to 0xD088 and 9.5's from 0xD085 to
0xD087 to avoid 9.5 reusing a value that has been in use on master while
keeping the numbers increasing between major versions.
Discussion: 20150621192409.GA4797@alap3.anarazel.de
Backpatch: 9.5
The fact that multixact truncations are not WAL logged has caused a fair
share of problems. Amongst others, it requires doing computations during
recovery while the database is not in a consistent state, delaying
truncations till checkpoints, and handling members being truncated but
offsets not.
We tried to put bandaids on lots of these issues over the last years,
but it seems time to change course. Thus this patch introduces WAL
logging for multixact truncations.
This allows:
1) performing the truncation directly during VACUUM, instead of delaying
it to the checkpoint;
2) avoiding inspection of the offsets SLRU for truncation during
recovery, since we can just use the master's values;
3) simplifying a fair amount of the logic needed to keep the in-memory
limits straight, which has gotten much easier.
During the course of fixing this a bunch of additional bugs had to be
fixed:
1) Data was not purged from the members SLRU in memory before deleting
segments. This happened to be hard or impossible to hit due to the
interlock between checkpoints and truncation.
2) find_multixact_start() relied on SimpleLruDoesPhysicalPageExist - but
that doesn't work for offsets that haven't yet been flushed to
disk. Add code to flush the SLRUs to fix. Not pretty, but it feels
slightly safer to only make decisions based on actual on-disk state.
3) find_multixact_start() could be called concurrently with a truncation
and thus fail. Via SetOffsetVacuumLimit() that could lead to a round
of emergency vacuuming. The problem remains in
pg_get_multixact_members(), but that's quite harmless.
For now this is going to only get applied to 9.5+, leaving the issues in
the older branches in place. It is quite possible that we need to
backpatch at a later point though.
In case this gets backpatched, we need to handle an updated standby
replaying WAL from a not-yet-upgraded primary. We have to recognize that
situation and use "old style" truncation (i.e. looking at the SLRUs)
during WAL replay. In contrast to before, this now happens in the
startup process, when replaying a checkpoint record, instead of the
checkpointer. Doing the truncation in a restartpoint would be incorrect,
since restartpoints can happen much later than the original checkpoint,
thereby leading to wraparound. To avoid "multixact_redo: unknown op code
48" errors, standbys would have to be upgraded before primaries.
A later patch will bump the WAL page magic, and remove the legacy
truncation codepaths. Legacy truncation support is just included to make
a possible future backpatch easier.
Discussion: 20150621192409.GA4797@alap3.anarazel.de
Reviewed-By: Robert Haas, Alvaro Herrera, Thomas Munro
Backpatch: 9.5 for now
This replaces ill-fated commit 5ddc72887a,
which was reverted because it broke active uses of FK cache entries. In
this patch, we still do nothing more to invalidatable cache entries than
mark them as needing revalidation, so we won't break active uses. To keep
down the overhead of InvalidateConstraintCacheCallBack(), keep a list of
just the currently-valid cache entries. (The entries are large enough that
some added space for list links doesn't seem like a big problem.) This
would still be O(N^2) when there are many valid entries, though, so when
the list gets too long, just force the "sinval reset" behavior to remove
everything from the list. I set the threshold at 1000 entries, somewhat
arbitrarily. Possibly that could be fine-tuned later. Another item for
future study is whether it's worth adding reference counting so that we
could safely remove invalidated entries. As-is, problem cases are likely
to end up with large and mostly invalid FK caches.
Like the previous attempt, backpatch to 9.3.
Jan Wieck and Tom Lane
(Third time's the charm, I hope.)
Additional testing disclosed that this code could mangle already-localized
output from the "money" datatype. We can't very easily skip applying it
to "money" values, because the logic is tied to column right-justification
and people expect "money" output to be right-justified. Short of
decoupling that, we can fix it in what should be a safe enough way by
testing to make sure the string doesn't contain any characters that would
not be expected in plain numeric output.
On closer inspection, those seemingly redundant atoi() calls were not so
much inefficient as just plain wrong: the author of this code either had
not read, or had not understood, the POSIX specification for localeconv().
The grouping field is *not* a textual digit string but separate integers
encoded as chars.
We'll follow the existing code as well as the backend's cash.c in only
honoring the first group width, but let's at least honor it correctly.
This doesn't actually result in any behavioral change in any of the
locales I have installed on my Linux box, which may explain why nobody's
complained; grouping width 3 is close enough to universal that it's barely
worth considering other cases. Still, wrong is wrong, so back-patch.
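For illustration, a small standalone C program that reads the grouping
field the way POSIX defines it (our own example, not psql's code):
#include <limits.h>
#include <locale.h>
#include <stdio.h>

int
main(void)
{
    struct lconv *lc;
    const char   *g;

    setlocale(LC_ALL, "");      /* adopt the environment's locale */
    lc = localeconv();

    /*
     * Each element of "grouping" is a small integer stored in a char,
     * not an ASCII digit: a value of 3 means groups of three digits.
     * CHAR_MAX means "no further grouping"; 0 means "repeat the previous
     * element for the remainder of the digits".
     */
    for (g = lc->grouping; *g != '\0' && *g != CHAR_MAX; g++)
        printf("group width: %d\n", (int) *g);

    return 0;
}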
This code did the wrong thing entirely for numbers with an exponent
but no decimal point (e.g., '1e6'), as reported by Jeff Janes in
bug #13636. More generally, it made lots of unverified assumptions
about what the input string could possibly look like. Rearrange so
that it only fools with leading digits that it's directly verified
are there, and an immediately adjacent decimal point. While at it,
get rid of some useless inefficiencies, like converting the grouping
count string to integer over and over (and over).
This has been broken for a long time, so back-patch to all supported
branches.
Previously, a function call appearing at the top level of WHERE had a
hard-wired selectivity estimate of 0.3333333, a kludge conveniently dated
in the source code itself to July 1992. The expectation at the time was
that somebody would soon implement estimator support functions analogous
to those for operators; but no such code has appeared, nor does it seem
likely to in the near future. We do have an alternative solution though,
at least for immutable functions on single relations: creating an
expression index on the function call will allow ANALYZE to gather stats
about the function's selectivity. But the code in clause_selectivity()
failed to make use of such data even if it exists.
Refactor so that that will happen. I chose to make it try this technique
for any clause type for which clause_selectivity() doesn't have a special
case, not just functions. To avoid adding unnecessary overhead in the
common case where we don't learn anything new, make selfuncs.c provide an
API that hooks directly to examine_variable() and then var_eq_const(),
rather than the previous coding which laboriously constructed an OpExpr
only so that it could be expensively deconstructed again.
I preserved the behavior that the default estimate for a function call
is 0.3333333. (For any other expression node type, it's 0.5, as before.)
I had originally thought to make the default be 0.5 across the board, but
changing a default estimate that's survived for twenty-three years seems
like something not to do without a lot more testing than I care to put
into it right now.
Per a complaint from Jehan-Guillaume de Rorthais. Back-patch into 9.5,
but not further, at least for the moment.
The comments here stated that this was just in case we ever had an
ALTER OPERATOR command that could remap an operator to a different
function. But those comments have been here for a long time, and no
such command has come about. In the absence of such a feature,
forcing the pg_proc OID to be looked up again each time we reread a
stored rule or similar is just a waste of cycles. Moreover, parallel
query needs a way to reread the exact same node tree that was written
out, not one that has been slightly stomped on. So just get rid of
this for now.
Per discussion with Tom Lane.
Previously pg_controldata didn't report newestCommitTs; this was
an oversight in commit 73c986a.
Also this patch changes pg_resetxlog so that it uses the same sentences
as pg_controldata does, regarding oldestCommitTs and newestCommitTs,
for the sake of consistency.
Back-patch to 9.5 where track_commit_timestamp was added.
Euler Taveira
The old minimum values are rather large, making it time consuming to
test related behaviour. Additionally the current limits, especially for
multixacts, can be problematic in space-constrained systems. 10000000
multixacts can contain a lot of members.
Since there's no good reason for the current limits, lower them a good
bit. Setting them to 0 would be a bad idea, triggering endless vacuums,
so still retain a limit.
While at it fix autovacuum_multixact_freeze_max_age to refer to
multixact.c instead of varsup.c.
Reviewed-By: Robert Haas
Discussion: CA+TgmoYmQPHcrc3GSs7vwvrbTkbcGD9Gik=OztbDGGrovkkEzQ@mail.gmail.com
Backpatch: back to 9.0 (in parts)
Previously, ANALYZE simply ignored columns of datatypes that have neither
a btree nor hash opclass (which means they have no recognized equality
operator). Without a notion of equality, we can't identify most-common
values nor estimate the number of distinct values. But we can still
count nulls and compute the average physical column width, and those
stats might be of value. Moreover there are some tools out there that
don't work so well if rows are missing from pg_statistic. So let's
add suitable logic for this case.
While this is arguably a bug fix, it also has the potential to change
query plans, and the gain seems not worth taking a risk of that in
stable branches. So back-patch into 9.5 but not further.
Oleksandr Shulgin, rewritten a bit by me.
For parallel query, we need to be able to pass a Plan to a worker, so
that it knows what it's supposed to do. We could invent our own way
of serializing plans for that purpose, but piggybacking on the
existing node infrastructure seems like a much better idea.
Initially, we'll probably only support a limited number of nodes
within parallel workers, but this commit adds support for everything
in plannodes.h except CustomScan, because doing it all at once seems
easier than doing it piecemeal, and it makes testing this code easier,
too. CustomScan is excluded because making that work requires a
larger rework of that facility.
Amit Kapila, reviewed and slightly revised by me.
It's declared as being an array of bool, but it's printed
differently from the way bool and arrays of bool are handled
elsewhere.
Patch by Amit Kapila. Anomaly noted independently by Amit Kapila
and KaiGai Kohei.
Commit e956808328 introduced adding pages
to the FSM on ordinary insert, but autoanalyze would just clean up the
pending list without adding its pages to the FSM.
Also fix a double call of IndexFreeSpaceMapVacuum() during
ginvacuumcleanup().
Report from Fujii Masao
Patch by me
Review by Jeff Janes
This logic was missing from ExplainPreScanNode, from which I derived
planstate_tree_walker. But it shouldn't be missing, especially not
from a generic walker function, so add it.
KaiGai Kohei
mul_var() postpones propagating carries until it risks overflow in its
internal digit array. However, the logic failed to account for the
possibility of overflow in the carry propagation step, allowing wrong
results to be generated in corner cases. We must slightly reduce the
when-to-propagate-carries threshold to avoid that.
Discovered and fixed by Dean Rasheed, with small adjustments by me.
This has been wrong since commit d72f6c7503,
so back-patch to all supported branches.
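The tightened rule can be sketched as follows, assuming numeric's
base-10000 digits accumulated in 32-bit ints (a sketch, not the
committed numeric.c code):
#include <limits.h>

#define NBASE 10000

/*
 * Each accumulator entry can reach maxdig * (NBASE - 1), and the carry
 * propagation pass can itself add a carry of up to INT_MAX / NBASE to
 * an entry.  So propagate carries before that sum can pass INT_MAX,
 * i.e. before maxdig * (NBASE - 1) exceeds INT_MAX - INT_MAX / NBASE.
 */
static int
must_propagate_carries(int maxdig)
{
    return maxdig > (INT_MAX - INT_MAX / NBASE) / (NBASE - 1);
}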
This commit's parent made superfluous the bit's sole usage. Referential
integrity checks have long run as the subject table's owner, and that
now implies RLS bypass. Safe use of the bit was tricky, requiring
strict control over the SQL expressions evaluating therein. Back-patch
to 9.5, where the bit was introduced.
Based on a patch by Stephen Frost.
Every query of a single ENABLE ROW SECURITY table has two meanings, with
the row_security GUC selecting between them. With row_security=force
available, every function author would have been advised to either set
the GUC locally or test both meanings. Non-compliance would have
threatened reliability and, for SECURITY DEFINER functions, security.
Authors already face an obligation to account for search_path, and we
should not mimic that example. With this change, only BYPASSRLS roles
need exercise the aforementioned care. Back-patch to 9.5, where the
row_security GUC was introduced.
Since this narrows the domain of pg_db_role_setting.setconfig and
pg_proc.proconfig, one might bump catversion. A row_security=force
setting in one of those columns will elicit a clear message, so don't.
RemoveLocalLock() must consider the possibility that LockAcquireExtended()
failed to palloc the initial space for a locallock's lockOwners array.
I had evidently meant to cope with this hazard when the code was originally
written (commit 1785acebf2), but missed that
the pfree needed to be protected with an if-test. Just to make sure things
are left in a clean state, reset numLockOwners as well.
Per low-memory testing by Andreas Seltenreich. Back-patch to all supported
branches.
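The shape of the fix, using illustrative types rather than lock.c's own:
#include <stdlib.h>

typedef struct
{
    int    numLockOwners;
    void  *lockOwners;          /* NULL if the initial allocation failed */
} LocalLockSketch;

/*
 * Only free the owners array when it was actually allocated, and reset
 * the count so the entry is left in a clean state either way.
 */
static void
remove_local_lock(LocalLockSketch *ll)
{
    if (ll->lockOwners != NULL)
    {
        free(ll->lockOwners);
        ll->lockOwners = NULL;
    }
    ll->numLockOwners = 0;
}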
BSD find is not very smart and ends up writing double slashes into the
output in those cases. Also, xgettext is not very smart and splits the
file names incorrectly in those cases, resulting in slightly incorrect
file names being written into the POT file.
The shm_mq mechanism was built to send error (and notice) messages and
tuples between backends. However, shm_mq itself only deals in raw
bytes. Since commit 2bd9e412f9, we have
had infrastructure for one backend to redirect protocol messages to a
queue and for another backend to parse them and do useful things with
them. This commit introduces a somewhat analogous facility for tuples
by adding a new type of DestReceiver, DestTupleQueue, which writes
each tuple generated by a query into a shm_mq, and a new
TupleQueueFunnel facility which reads raw tuples out of the queue and
reconstructs the HeapTuple format expected by the executor.
The TupleQueueFunnel abstraction supports reading from multiple tuple
streams at the same time, but only in round-robin fashion. Someone
could imaginably want other policies, but this should be good enough
to meet our short-term needs related to parallel query, and we can
always extend it later.
This also makes one minor addition to the shm_mq API that didn't
seem worth breaking out as a separate patch.
Extracted from Amit Kapila's parallel sequential scan patch. This
code was originally written by me, and then it was revised by Amit,
and then it was revised some more by me.
These functions have been looking up type info for every row they
process. Instead of doing that we only look them up the first time
through and stash the information in the aggregate state object.
Affects json_agg, json_object_agg, jsonb_agg and jsonb_object_agg.
There is plenty more work to do in making these more efficient,
especially the jsonb functions, but this is a virtually cost free
improvement that can be done right away.
Backpatch to 9.5 where the jsonb variants were introduced.
After an internal failure in shortest() or longest() while pinning down the
exact location of a match, find() forgot to free the DFA structure before
returning. This is pretty unlikely to occur, since we just successfully
ran the "search" variant of the DFA; but it could happen, and it would
result in a session-lifespan memory leak since this code uses malloc()
directly. Problem seems to have been aboriginal in Spencer's library,
so back-patch all the way.
In passing, correct a thinko in a comment I added awhile back about the
meaning of the "ntree" field.
I happened across these issues while comparing our code to Tcl's version
of the library.
Use IsBinaryCoercible() instead of the custom
is_expected_type/is_text_type functions, which were introduced when
tsearch2 was moved into core.
Per report by David E. Wheeler
Analysis by Tom Lane
Patch by me
This setting contains extra configuration for the temp instance, as used
in pg_regress' --temp-config flag.
Backpatch to 9.2 where test.sh was introduced.
ExplainPreScanNode knows how to iterate over a generic tree of plan
states; factor that logic out into a separate walker function so that
other code, such as upcoming patches for parallel query, can also use
it.
Patch by me, reviewed by Tom Lane.
Commit 013ebc0a7b introduced microvacuum for
GiST: deletion of tuples marked LP_DEAD used IndexPageMultiDelete, while
the recovery code used IndexPageTupleDelete in a loop. This caused a
difference in the offset numbers of the tuples to delete. This patch
uses IndexPageMultiDelete throughout GiST, except in gistplacetopage()
where only one tuple is deleted at a time. That also slightly improves
performance, because IndexPageMultiDelete is more efficient.
The patch changes the WAL format, so bump the WAL page magic.
Bug report from Jeff Janes
Diagnosis and patch by Anastasia Lubennikova and me
Commit 924bcf4f16 introduced a framework
for parallel computation in PostgreSQL that makes most but not all
built-in functions safe to execute in parallel mode. In order to have
parallel query, we'll need to be able to determine whether that query
contains functions (either built-in or user-defined) that cannot be
safely executed in parallel mode. This requires those functions to be
labeled, so this patch introduces an infrastructure for that. Some
functions currently labeled as safe may need to be revised depending on
how pending issues related to heavyweight locking under parallelism
are resolved.
Parallel plans can't be used except for the case where the query will
run to completion. If portal execution were suspended, the parallel
mode restrictions would need to remain in effect during that time, but
that might make other queries fail. Therefore, this patch introduces
a framework that enables consideration of parallel plans only when it
is known that the plan will be run to completion. This probably needs
some refinement; for example, at bind time, we do not know whether a
query run via the extended protocol will be executed to completion or
run with a limited fetch count. Having the client indicate its
intentions at bind time would constitute a wire protocol break. Some
contexts in which parallel mode would be safe are not adjusted by this
patch; the default is not to try parallel plans except from call sites
that have been updated to say that such plans are OK.
This commit doesn't introduce any parallel paths or plans; it just
provides a way to determine whether they could potentially be used.
I'm committing it on the theory that the remaining parallel sequential
scan patches will also get committed to this release, hopefully in the
not-too-distant future.
Robert Haas and Amit Kapila. Reviewed (in earlier versions) by Noah
Misch.
Sync our regex code with upstream changes since last time we did this,
which was Tcl 8.5.11 (see commit 08fd6ff37f).
The only functional change here is to disbelieve that an octal escape is
three digits long if it would exceed \377. That's a bug fix, but it's
a minor one and could change the interpretation of working regexes, so
don't back-patch.
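The escape rule looks roughly like this (an illustrative sketch, not
the regex engine's actual scanner):
#include <stdio.h>

/*
 * Consume up to three octal digits, but stop early if taking another
 * digit would push the value past \377 (255); that digit is then left
 * to be handled as an ordinary character.
 */
static int
scan_octal_escape(const char **sp)
{
    const char *s = *sp;
    int         val = 0;
    int         ndigits = 0;

    while (ndigits < 3 && *s >= '0' && *s <= '7')
    {
        int next = val * 8 + (*s - '0');

        if (next > 0377)
            break;
        val = next;
        s++;
        ndigits++;
    }
    *sp = s;
    return val;
}

int
main(void)
{
    const char *p = "3777";     /* as if following a backslash */

    printf("value: %d, rest: %s\n", scan_octal_escape(&p), p);
    return 0;
}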
In addition to that, s/INFINITY/DUPINF/ to eliminate the risk of collisions
with <math.h>'s macro, and s/LOCAL/NOPROP/ because that also seems like
an unnecessarily collision-prone macro name.
There were some other cosmetic changes in their copy that I did not adopt,
notably a rather half-hearted attempt at renaming some of the C functions
in a more verbose style. (I'm not necessarily against the concept, but
renaming just a few functions in the package is not an improvement.)
This patch adds an option to replace the "time since pgbench run
started" with a Unix epoch timestamp in the progress report so that,
for instance, it is easier to compare timelines with the PostgreSQL
server log.
Fabien COELHO <coelho@cri.ensmp.fr>
For the UPDATE/DELETE RETURNING case, filter the records which are not
visible to the user through ALL or SELECT policies from those considered
for the UPDATE or DELETE. This is similar to how the GRANT system
works, which prevents RETURNING unless the caller has SELECT rights on
the relation.
Per discussion with Robert, Dean, Tom, and Kevin.
Back-patch to 9.5 where RLS was introduced.
This refactors rewrite/rowsecurity.c to simplify the handling of the
default deny case (reducing the number of places where we check for and
add the default deny policy from three to one) by splitting up the
retrieval of the policies from the application of them.
This also allowed us to do away with the policy_id field. A policy_name
field was added for WithCheckOption policies and is used in error
reporting, when available.
Patch by Dean Rasheed, with various mostly cosmetic changes by me.
Back-patch to 9.5 where RLS was introduced to avoid unnecessary
differences, since we're still in alpha, per discussion with Robert.
Commit 5ddc72887a does not actually work
because it will happily blow away ri_constraint_cache entries that are
in active use in outer call levels. In any case, it's a very ugly,
brute-force solution to the problem of limiting the cache size.
Revert until it can be redesigned.
This patch changes the log message which is logged when the server
successfully renames backup_label file to *.old but fails to rename
tablespace_map file during the shutdown. Previously the WARNING
message "online backup mode was not canceled" was logged in that case.
However this message is confusing because the backup mode is treated
as canceled whenever backup_label is successfully renamed. So this
commit makes the server log the message "online backup mode canceled"
in that case.
Also this commit changes errdetail messages so that they follow the
error message style guide.
Back-patch to 9.5 where tablespace_map file is introduced.
Original patch by Amit Kapila, heavily modified by me.
The patch provides a command line option --strict-names, which requires
that at least one table/schema be present for each -t/-n option.
Pavel Stehule <pavel.stehule@gmail.com>
To prevent perverse results, we now only return the other operand if
it's not scalar, and if both operands are of the same kind (array or
object).
Original bug complaint and patch from Oskari Saarenmaa, extended by me
to cover the cases of different kinds of jsonb.
Backpatch to 9.5 where jsonb_concat was introduced.
Modify pg_dump to restore postgres/template1 databases to non-default
tablespaces by switching out of the database to be moved, then switching
back.
Also, to fix potential cases where the old/new tablespaces might not
match, fix pg_upgrade to process new/old tablespaces separately in all
cases.
Report by Marti Raudsepp
Patch by Marti Raudsepp, me
Backpatch through 9.0
I think this particular branch is actually dead, but the analysis to
prove that is not trivial, so instead take the weasel way.
Reported by Jinyu Zhang
Backpatch to 9.5, where BRIN was introduced.
Commit 45ba424f improved foreign key lookups during bulk updates
when the FK value does not change. When restoring a schema dump
from a database with many (say 100,000) foreign keys, this cache
would grow very big and every ALTER TABLE command was causing an
InvalidateConstraintCacheCallBack(), which uses a sequential hash
table scan. This could cause a severe performance regression in
restoring a schema dump (including during pg_upgrade).
The patch uses a heuristic method of detecting when the hash table
should be destroyed and recreated.
InvalidateConstraintCacheCallBack() adds the current size of the
hash table to a counter. When that sum reaches 1,000,000, the hash
table is flushed. This fixes the regression without noticeable
harm to the bulk update use case.
Jan Wieck
Backpatch to 9.3 where the performance regression was introduced.
Naming the individual lwlocks seems like something that may be useful
for other types of debugging, monitoring, or instrumentation output,
but this commit just implements it for the specific case of
trace_lwlocks.
Patch by me, reviewed by Amit Kapila and Kyotaro Horiguchi
The "typo" alleged in commit 1e460d4bd was actually a comment that was
correct when written, but I missed updating it in commit b5282aa89.
Use a slightly less specific (and hopefully more future-proof) description
of what is collected. Back-patch to 9.2 where that commit appeared, and
revert the comment to its then-entirely-correct state before that.
The list-wrangling here was done wrong, allowing the same state to get
put into the list twice. The following loop then would clone it twice.
The second clone would wind up with no inarcs, so that there was no
observable misbehavior AFAICT, but a useless state in the finished NFA
isn't an especially good thing.
Mark an index tuple as dead if it's pointed to by kill_prior_tuple
during an ordinary (search) scan, and remove it during the insert
process if there is not enough space for the new tuple. This improves
select performance, because the index will not return tuples marked as
dead, and improves insert performance, because it reduces the number of
page splits.
Anastasia Lubennikova <a.lubennikova@postgrespro.ru> with
minor editorialization by me
This commit makes postmaster forcibly remove the files signaling
a standby promotion request. Otherwise, the existence of those files
can trigger a promotion too early, whether a user wants that or not.
This removal of files is usually unnecessary because they can exist
only during a few moments during a standby promotion. However
there is a race condition: if pg_ctl promote is executed and creates
the files during a promotion, the files can stay around even after
the server is brought up as the new master. Then, if a new standby
starts by using the backup taken from that master, the files can exist
at server startup and should be removed in order to avoid
an unexpected promotion.
Back-patch to 9.1 where promote signal file was introduced.
Problem reported by Feike Steenbergen.
Original patch by Michael Paquier, modified by me.
Discussion: 20150528100705.4686.91426@wrigleys.postgresql.org
Even views considered "simple" enough to be automatically updatable may
have multiple relations involved (e.g., in a WHERE clause). We need to
make sure to lock those relations when rewriting the query.
Back-patch to 9.3 where updatable views were added.
Pointed out by Andres, patch thanks to Dean Rasheed.
This was forgotten in 8a3631f (commit that originally added the parameter)
and 0ca9907 (commit that added the documentation later that year).
Back-patch to all supported versions.
Per discussion, nowadays it is possible to have tablespaces that have
wildly different I/O characteristics from others. Setting different
effective_io_concurrency parameters for those has been measured to
improve performance.
Author: Julien Rouhaud
Reviewed by: Andres Freund
If a transaction never reaches the standby, later tests find unexpected
cluster state. A "tail-copy: query result matches" test failure has
been the usual symptom. Among the buildfarm members having run this
test suite, most have exhibited that symptom at least once. Back-patch
to 9.5, where pg_rewind was introduced.
Michael Paquier, reported by Christoph Berg.
Commit f828654e introduced the 'n' option, but it invoked
gettimeofday() independently of the 'm' option. If both options were
in use (or multiple 'n' options), or if 'n' was in use along with
csvlog, then the reported times could be different for the same log
message.
To fix, initialize a global variable with gettimeofday() once per log
message, and use that for both formats.
Don't bother coordinating the time for the 't' option, which has much
lower resolution.
Per complaint by Alvaro Herrera.
The cleanup process can be invoked by an ordinary insert/update and can
take a lot of time. Add vacuum_delay_point() to make this process
interruptible. Under vacuum, this call will also throttle the vacuum
process to decrease system load; called from insert/update, it will not
throttle, which reduces latency.
Backpatch for all supported branches.
Jeff Janes <jeff.janes@gmail.com>
Add pages deleted from GIN's pending list during cleanup to the free
space map immediately. The cleanup process can be initiated by an
ordinary insert, but adding pages to the FSM previously occurred only at
vacuum. On some workloads, like never-vacuumed insert-only tables, this
could cause huge bloat.
Jeff Janes <jeff.janes@gmail.com>
Since 6fcd885 it is possible to immediately reserve WAL when creating a
slot via pg_create_physical_replication_slot(). Extend the replication
protocol to allow that as well.
Although, in contrast to the SQL interface, it is possible to update
the reserved location via the replication interface, it is still useful
to be able to reserve WAL upon creation there. Otherwise the logic in
ReplicationSlotReserveWal() has to be repeated in slot-employing
clients.
Author: Michael Paquier
Discussion: CAB7nPqT0Wc1W5mdYGeJ_wbutbwNN+3qgrFR64avXaQCiJMGaYA@mail.gmail.com
The 'dow' and 'doy' tokens were previously categorized as RESERV.
RESERV is meant for tokens like "now", and having them in that category
throws errors like these when used as an input date:
stark=# SELECT 'doy'::timestamptz;
ERROR: unexpected dtype 33 while parsing timestamptz "doy"
LINE 1: SELECT 'doy'::timestamptz;
^
stark=# SELECT 'dow'::timestamptz;
ERROR: unexpected dtype 32 while parsing timestamptz "dow"
LINE 1: SELECT 'dow'::timestamptz;
^
Found by LLVM's Libfuzzer
This has been broken since 9.3 (commit 82b1b213ca to be exact),
which suggests that nobody is any longer using a Windows build system that
doesn't provide a symlink emulation. Still, it's wrong on its own terms,
so repair.
YUriy Zhuravlev
Remove the code in plpgsql that suppressed the innermost line of CONTEXT
for messages emitted by RAISE commands. That was never more than a quick
backwards-compatibility hack, and it's pretty silly in cases where the
RAISE is nested in several levels of function. What's more, it violated
our design theory that verbosity of error reports should be controlled
on the client side not the server side.
To alleviate the resulting noise increase, introduce a feature in libpq
and psql whereby the CONTEXT field of messages can be suppressed, either
always or only for non-error messages. Printing CONTEXT for errors only
is now their default behavior.
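In today's libpq this knob is PQsetErrorContextVisibility(); a minimal
sketch, assuming connection parameters come from the environment (psql
exposes the same choice through its SHOW_CONTEXT variable):
#include <stdio.h>
#include <libpq-fe.h>

int
main(void)
{
    PGconn *conn = PQconnectdb("");     /* settings from the environment */

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "%s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }

    /* Show CONTEXT fields in error reports only, not in notices. */
    PQsetErrorContextVisibility(conn, PQSHOW_CONTEXT_ERRORS);

    /* ... run queries here ... */

    PQfinish(conn);
    return 0;
}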
The actual code changes here are pretty small, but the effects on the
regression test outputs are widespread. I had to edit some of the
alternative expected outputs by hand; hopefully the buildfarm will soon
find anything I fat-fingered.
In passing, fix up (again) the output line counts in psql's various
help displays. Add some commentary about how to verify them.
Pavel Stehule, reviewed by Petr Jelínek, Jeevan Chalke, and others
If the number of heap blocks is not a multiple of pages per range,
summarization produces wrong summary information for the last BRIN
index tuple while vacuuming.
Problem reported by Tatsuo Ishii and fixed by Amit Langote.
Discussion: [HACKERS] BRIN INDEX value (message-id: 20150903.174935.1946402199422994347.t-ishii@sraoss.co.jp)
Backpatched to 9.5, in which BRIN indexes were added.
Formerly, we treated only portals created in the current subtransaction as
having failed during subtransaction abort. However, if the error occurred
while running a portal created in an outer subtransaction (ie, a cursor
declared before the last savepoint), that has to be considered broken too.
To allow reliable detection of which ones those are, add a bookkeeping
field to struct Portal that tracks the innermost subtransaction in which
each portal has actually been executed. (Without this, we'd end up
failing portals containing functions that had called the subtransaction,
thereby breaking plpgsql exception blocks completely.)
In addition, when we fail an outer-subtransaction Portal, transfer its
resources into the subtransaction's resource owner, so that they're
released early in cleanup of the subxact. This fixes a problem reported by
Jim Nasby in which a function executed in an outer-subtransaction cursor
could cause an Assert failure or crash by referencing a relation created
within the inner subtransaction.
The proximate cause of the Assert failure is that AtEOSubXact_RelationCache
assumed it could blow away a relcache entry without first checking that the
entry had zero refcount. That was a bad idea on its own terms, so add such
a check there, and to the similar coding in AtEOXact_RelationCache. This
provides an independent safety measure in case there are still ways to
provoke the situation despite the Portal-level changes.
This has been broken since subtransactions were invented, so back-patch
to all supported branches.
Tom Lane and Michael Paquier
Post-commit review by Andres Freund discovered a couple of concurrency
bugs in the original patch: specifically, if the leader cleared a
follower's XID before it reached PGSemaphoreLock, the semaphore would be
left in the wrong state; and if another process did PGSemaphoreUnlock
for some unrelated reason, we might resume execution before the fact
that our XID was cleared was globally visible.
Also, improve the wording of some comments, rename nextClearXidElem
to firstClearXidElem in PROC_HDR for clarity, and drop some volatile
qualifiers that aren't necessary.
Amit Kapila, reviewed and slightly revised by me.
The values of some parameters, including max_worker_processes,
must be equal to or higher than the values on the master. However,
previously max_worker_processes was not listed as such a parameter
in the documentation. So this commit adds it to that list.
Back-patch to 9.4 where max_worker_processes was added.
Most suites already did so via start_test_server(), but the pg_rewind,
pg_ctl and pg_controldata suites ran a postmaster or initdb with fsync
enabled. This halves the pg_rewind suite's runtime on buildfarm member
tern. It makes tern and that machine's other buildfarm members less
vulnerable to noise failures from postmaster startup overrunning the 60s
pg_ctl timeout. Back-patch to 9.5, where pg_rewind was introduced.
Currently, the in-memory posting list during the GIN build process is
limited to 1GB because of the use of repalloc. The patch replaces the
call to repalloc with repalloc_huge. This increases the posting-list
limit from 180 million items (1GB / sizeof(ItemPointerData)) to 4
billion, limited by the maxcount/count fields in GinEntryAccumulator
and subsequent calls; a check for that has been added.
Also, fix the accounting of allocatedMemory during the build to prevent
integer overflow with maintenance_work_mem > 4GB.
Robert Abraham <robert.abraham86@googlemail.com> with additions by me
Previously, if one background worker registered another background
worker and set bgw_notify_pid for the second background worker,
it would not receive notifications from the postmaster unless, at the
time the "parent" was registered, BGWORKER_BACKEND_DATABASE_CONNECTION
was set.
To fix, instead of including only those background workers that
requested database connections in the postmaster's BackendList, include
them all. There doesn't seem to be any reason not to do this, and indeed
it removes a significant amount of duplicated code. The other option
is to make PostmasterMarkPIDForWorkerNotify look at BackgroundWorkerList
in addition to BackendList, but that adds more code duplication instead
of getting rid of it.
Patch by me. Review and testing by Ashutosh Bapat.
Some googling turned up multiple sources saying that older versions of icc
do not accept gcc-compatible asm blocks on IA64, though asm does work on
x86[_64]. This is apparently fixed as of icc version 12.0 or so, but that
doesn't help us much; if we have to carry the extra implementation anyway,
we may as well just use it for icc rather than add a compiler version test.
Hence, revert commit 2c713d6ea2 (though I
separated the icc code from the gcc code completely, producing what seems
cleaner code). Document the state of affairs more explicitly, both in
s_lock.h and postgres.c, and make some cosmetic adjustments around the
IA64 code in s_lock.h.
The atomics headers were written under the impression that icc doesn't
handle gcc-style asm blocks, but this is demonstrably false on x86_[64],
because s_lock.h has done it that way for more than a decade. (The jury is
still out on whether this also works on ia64, so I'm leaving ia64-related
code alone for the moment.) Treat gcc and icc the same in these headers.
This is less code and it should improve the results for icc, because we
hadn't gotten around to providing icc-specific implementations for most
of the atomics.
Intel's icc is generally able to swallow asm blocks written for gcc.
We have a few places that don't seem to know that, though. Experiment
with removing the special case for icc in ia64_get_bsp(); if the buildfarm
likes this, I'll try more cleanup. This is a good test case because it
involves a "stop" notation that seems like it might not be very portable.
Remove configure's checks for HAVE_POSIX_SIGNALS, HAVE_SIGPROCMASK, and
HAVE_SIGSETJMP. These APIs are required by the Single Unix Spec v2
(POSIX 1997), which we generally consider to define our minimum required
set of Unix APIs. Moreover, no buildfarm member has reported not having
them since 2012 or before, which means that even if the code is still live
somewhere, it's untested --- and we've made plenty of signal-handling
changes of late. So just take these APIs as given and save the cycles for
configure probes for them.
However, we can't remove as much C code as I'd hoped, because the Windows
port evidently still uses the non-POSIX code paths for signal masking.
Since we're largely emulating these BSD-style APIs for Windows anyway, it
might be a good thing to switch over to POSIX-like notation and thereby
remove a few more #ifdefs. But I'm not in a position to code or test that.
In the meantime, we can at least make things a bit more transparent by
testing for WIN32 explicitly in these places.
C89 requires <signal.h> to define sig_atomic_t, and there is no evidence
in the buildfarm that any supported platforms don't comply. Remove the
configure test to stop wasting build cycles on a purely historical issue.
(Once upon a time, we cared about supporting C89-compliant compilers on
machines with pre-C89 system headers, but that use-case has been dead for
quite a few years.)
I have some other fixes planned in this area, but let's start with this
to see if the buildfarm produces any surprising results.
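For reference, the classic <signal.h> pattern this relies on (our own
minimal example):
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static volatile sig_atomic_t got_signal = 0;    /* from <signal.h> per C89 */

static void
handler(int signo)
{
    (void) signo;
    got_signal = 1;     /* a sig_atomic_t store is safe in a handler */
}

int
main(void)
{
    signal(SIGINT, handler);
    while (!got_signal)
        pause();                /* wait for Ctrl-C */
    printf("caught signal\n");
    return 0;
}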
On recent AIX it's necessary to configure gcc to use the native assembler
(because the GNU assembler hasn't been updated to handle AIX 6+). This
caused PG builds to fail with assembler syntax errors, because we'd try
to compile s_lock.h's gcc asm fragment for PPC, and that assembly code
relied on GNU-style local labels. We can't substitute normal labels
because it would fail in any file containing more than one inlined use of
tas(). Fortunately, that code is stable enough, and the PPC ISA is simple
enough, that it doesn't seem like too much of a maintenance burden to just
hand-code the branch offsets, removing the need for any labels.
Note that the AIX assembler only accepts "$" for the location counter
pseudo-symbol. The usual GNU convention is "."; but it appears that all
versions of gas for PPC also accept "$", so in theory this patch will not
break any other PPC platforms.
This has been reported by a few people, but Steve Underwood gets the credit
for being the first to pursue the problem far enough to understand why it
was failing. Thanks also to Noah Misch for additional testing.
During fireRIRrules(), get_row_security_policies can add to
securityQuals and withCheckOptions. Make sure to lock any relations
added at that point and before firing RIR rules on those expressions.
Back-patch to 9.5 where RLS was added.
Rather than consulting TransactionIdIsInProgress to see if an in-doubt
transaction is still running, consult XidInMVCCSnapshot. That requires
the same or fewer cycles as TransactionIdIsInProgress, and what's far
more important, it does not access shared data structures (at least in the
no-subxip-overflow case) so it incurs no contention. Furthermore, we would
have had to check XidInMVCCSnapshot anyway before deciding that we were
allowed to see the tuple.
There should never be a case where XidInMVCCSnapshot says a transaction is
done while TransactionIdIsInProgress says it's still running. The other
way around is quite possible though. The result of that difference is that
HeapTupleSatisfiesMVCC will no longer set hint bits on tuples whose source
transactions recently finished but are still running according to our
snapshot. The main cost of delaying the hint-bit setting is that repeated
visits to a just-committed tuple, by transactions none of which have
snapshots new enough to see the source transaction as done, will each
execute TransactionIdIsCurrentTransactionId, which they need not have done
before. However, that's normally just a small overhead, and no contention
costs are involved; so it seems well worth the benefit of removing
TransactionIdIsInProgress calls during the life of the source transaction.
The core idea for this patch is due to Jeff Janes, who also did the legwork
proving its performance benefits. His original proposal was to swap the
order of TransactionIdIsInProgress and XidInMVCCSnapshot calls in some
cases within HeapTupleSatisfiesMVCC. That was a bit messy though.
The idea that we could dispense with calling TransactionIdIsInProgress
altogether was mine, as is the final patch.
Until 9.4, pg_controldata output was all aligned. At some point
during 9.5 development, a new item was added, namely
"Current track_commit_timestamp setting:" which is two characters
too long to be aligned with the rest of the output. Fix this by
removing the noise word "Current" and adding the requisite number
of padding spaces. Since the six preceding items are also similar
in nature, remove "Current" and pad those as well in order to
maintain overall consistency. Backpatch to 9.5, where the new offending
item was added.
We had a report from Stefan Kaltenbrunner of a case in which postmaster
log files overran available disk space because multiple backends spewed
enormous context stats dumps upon hitting an out-of-memory condition.
Given the lack of similar reports, this isn't a common problem, but it
still seems worth doing something about. However, we don't want to just
blindly truncate the output, because that might prevent diagnosis of OOM
problems. What seems like a workable compromise is to limit the dump to
100 child contexts per parent, and summarize the space used within any
additional child contexts. That should help because practical cases where
the dump gets long will typically be huge numbers of siblings under the
same parent context; while the additional debugging value from seeing
details about individual siblings beyond 100 will not be large, we hope.
Anyway it doesn't take much code or memory space to do this, so let's try
it like this and see how things go.
Since the summarization mechanism requires passing totals back up anyway,
I took the opportunity to add a "grand total" line to the end of the
printout.
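The dumping rule can be sketched like this (illustrative types, not
mcxt.c itself):
#include <stddef.h>
#include <stdio.h>

typedef struct Ctx
{
    const char  *name;
    size_t       bytes;
    struct Ctx  *firstchild;
    struct Ctx  *nextsibling;
} Ctx;

/*
 * Print at most 100 children per parent and fold the remainder into a
 * single summary line, so a parent with a huge number of siblings can't
 * flood the log.
 */
static void
dump_children(const Ctx *parent)
{
    size_t extra_bytes = 0;
    int    shown = 0;
    int    extra = 0;

    for (const Ctx *c = parent->firstchild; c != NULL; c = c->nextsibling)
    {
        if (shown < 100)
        {
            printf("  %s: %zu bytes\n", c->name, c->bytes);
            shown++;
        }
        else
        {
            extra++;
            extra_bytes += c->bytes;
        }
    }
    if (extra > 0)
        printf("  %d more child contexts containing %zu total bytes\n",
               extra, extra_bytes);
}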
The results of the KNN-search test cases were indeterminate, as they asked
the system to sort pairs of points that are exactly equidistant from the
query reference point. It's a bit surprising that we've seen no
platform-specific failures from this in the buildfarm. Perhaps IEEE-float
math is well enough standardized that no such failures will ever occur on
supported platforms ... but since this entire regression test has yet to be
shipped in any non-alpha release, that seems like an unduly optimistic
assumption. Tweak the queries so that the correct output is uniquely
defined.
(The other queries in this test are also underdetermined; but it looks like
they are regurgitating index rows in insertion order, so for the moment
assume that that behavior is stable enough.)
Per Greg Stark's experiments with VAX. Back-patch to 9.5 where this test
script was introduced.
Previously, convert_one_string_to_scalar() would examine up to 20 bytes of
the input string, producing a scalar conversion with theoretical precision
far greater than is of any possible use considering the other limitations
on the accuracy of the resulting selectivity estimate. (I think this
choice might pre-date the caller-level logic that strips any common prefix
of the strings; before that, there could have been value in scanning the
strings far enough to use all the precision available in a double.)
Aside from wasting cycles to little purpose, this choice meant that the
"denom" variable could grow to as much as 256^21 = 3.74e50, which could
overflow in some non-IEEE float arithmetics. While we don't really support
any machines with non-IEEE arithmetic anymore, this still seems like quite
an unnecessary platform dependency. Limit the scan to 12 bytes instead,
thus limiting "denom" to 256^13 = 2.03e31, a value more likely to be
computable everywhere.
Per testing by Greg Stark, which showed overflow failures in our standard
regression tests on VAX.
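A sketch of the conversion under these limits (illustrative, not the
selfuncs.c source):
#include <stdio.h>

/*
 * Map a string prefix into [0, 1) by treating each byte as a base-256
 * digit after the radix point.  Stopping after 12 bytes caps the
 * divisor at 256^13, about 2.03e31, well inside even a ~1e38 float
 * range.
 */
static double
string_prefix_to_scalar(const unsigned char *s, int len)
{
    double num = 0.0;
    double denom = 256.0;

    for (int i = 0; i < len && i < 12; i++)
    {
        num += s[i] / denom;
        denom *= 256.0;
    }
    return num;
}

int
main(void)
{
    printf("%.17g\n",
           string_prefix_to_scalar((const unsigned char *) "abc", 3));
    return 0;
}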
Since the distances used in this algorithm are small integers (not more
than the size of the U set, in fact), there is no good reason to use float
arithmetic for them. Use short ints instead: they're smaller, faster, and
require no special portability assumptions.
Per testing by Greg Stark, which disclosed that the code got into an
infinite loop on VAX for lack of IEEE-style float infinities. We don't
really care all that much whether Postgres can run on a VAX anymore,
but there seems sufficient reason to change this code anyway.
In passing, make a few other small adjustments to make the code match
usual Postgres coding style a bit better.
For no obvious reason, spi_printtup() was coded to enlarge the tuple
pointer table by just 256 slots at a time, rather than doubling the size at
each reallocation, as is our usual habit. For very large SPI results, this
makes for O(N^2) time spent in repalloc(), which of course soon comes to
dominate the runtime. Use the standard doubling approach instead.
This is a longstanding performance bug, so back-patch to all active
branches.
Neil Conway
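The standard doubling approach, sketched with plain malloc/realloc
rather than palloc (zero-initialize the struct before first use):
#include <stdlib.h>

typedef struct
{
    void  **slots;
    size_t  used;
    size_t  alloced;
} TupleTableSketch;

/*
 * Double the array whenever it fills, giving amortized O(1) appends;
 * growing by a fixed 256 slots instead costs O(N^2) overall in
 * reallocation work for very large results.
 */
static void
append_tuple(TupleTableSketch *t, void *tuple)
{
    if (t->used >= t->alloced)
    {
        t->alloced = (t->alloced == 0) ? 256 : t->alloced * 2;
        t->slots = realloc(t->slots, t->alloced * sizeof(void *));
        if (t->slots == NULL)
            abort();            /* out of memory */
    }
    t->slots[t->used++] = tuple;
}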
With a bit of tweaking of the compile namestack data structure, we can
verify at compile time whether a CONTINUE or EXIT is legal. This is
surely better than leaving it to runtime, both because earlier is better
and because we can issue a proper error pointer. Also, we can get rid
of the ad-hoc old way of detecting the problem, which only took care of
CONTINUE not EXIT.
Jim Nasby, adjusted a bit by me
Having the roles remain after the test ends up causing repeated 'make
installcheck' runs to fail and may be risky from a security perspective
also, so remove them at the end of the test.
The code had bugs that would cause crashes if NULL was passed as that
argument (originally intended to mean not to bother returning its
value), and after inspection it turns out that nothing seems interested
in the case that *ts is NULL anyway. Therefore, remove the partial
checks intended to support that case.
Author: Michael Paquier
though I didn't include a proposed Assert.
Backpatch to 9.5.
PLyString_ToComposite() blithely overwrote proc->result.out.d, even though
for a composite result type the other union variant proc->result.out.r is
the one that should be valid. This could result in a crash if out.r had
in fact been filled in (proc->result.is_rowtype == 1) and then somebody
later attempted to use that data; as per bug #13579 from Paweł Michalak.
Just to add insult to injury, it didn't work for RECORD results anyway,
because record_in() would refuse the case.
Fix by doing the I/O function lookup in a local PLyTypeInfo variable,
as we were doing already in PLyObject_ToComposite(). This is not a great
technique because any fn_extra data allocated by the input function will
be leaked permanently (thanks to using TopMemoryContext as fn_mcxt).
But that's a pre-existing issue that is much less serious than a crash,
so leave it to be fixed separately.
This bug would be a potential security issue, except that plpython is
only available to superusers and the crash requires coding the function
in a way that didn't work before today's patches.
Add regression test cases covering all the supported methods of converting
composite results.
Back-patch to 9.1 where the faulty coding was introduced.
If we have the typmod that identifies a registered record type, there's no
reason that record_in() should refuse to perform input conversion for it.
Now, in direct SQL usage, record_in() will always be passed typmod = -1
with type OID RECORDOID, because no typmodin exists for type RECORD, so the
case can't arise. However, some InputFunctionCall users such as PLs may be
able to supply the right typmod, so we should allow this to support them.
Note: the previous coding and comment here predate commit 59c016aa9f.
There has been no case since 8.1 in which the passed type OID wouldn't be
valid; and if it weren't, this error message wouldn't be apropos anyway.
Better to let lookup_rowtype_tupdesc complain about it.
Back-patch to 9.1, as this is necessary for my upcoming plpython fix.
I'm committing it separately just to make it a bit more visible in the
commit history.
To avoid confusion, rename CreatePolicyStmt's 'cmd' to 'cmd_name',
parse_policy_command's 'cmd' to 'polcmd', and AlterPolicy's 'cmd_datum'
to 'polcmd_datum', per discussion with Noah and as a follow-up to his
correction of copynodes/equalnodes handling of the CreatePolicyStmt
'cmd' field.
Back-patch to 9.5 where the CreatePolicyStmt was introduced, as we
are still only in alpha.
When reworking bypassrls in AlterRole to operate the same way the other
attribute handling is done, I missed that the variable was incorrectly a
bool rather than an int. This meant that on platforms with an unsigned
char, we could end up with incorrect behavior during ALTER ROLE.
Pointed out by Andres, thanks to tests he did changing our bool to the
one from stdbool.h, which showed this and a number of other issues.
Add regression tests to test CREATE/ALTER role for the various role
attributes. Arrange to leave roles behind for testing pg_dumpall, but
none which have the LOGIN attribute.
Back-patch to 9.5 where the AlterRole bug exists.
Commit 8cce08f168 used a left-shift
on a literal of 1 that could (in large allocations) be shifted by
31 or more bits. This was assigned to a local variable that was
already declared to be a long to protect against overruns of int,
but the literal in this shift needs to be written as a long (1L) for it
to work correctly with some compilers.
Backpatch to 9.5, where the bug was introduced.
Report and patch by KaiGai Kohei, slightly modified based on
discussion.
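To illustrate the hazard (a sketch of the bug class, not the committed
fix): shifting the int literal 1 by 31 or more bits is undefined
behavior in C, regardless of the type the result is assigned to.

    #include <stdio.h>

    int
    main(void)
    {
        /* Undefined: the literal 1 is an int, so the shift is performed
         * in int width even though the result is stored in a long:
         *
         *     long bad = 1 << 40;
         */
        long good = 1L << 40;   /* safe where long is 64 bits wide */

        printf("%ld\n", good);
        return 0;
    }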
plpgsql's error location context messages ("PL/pgSQL function fn-name line
line-no at stmt-type") would misreport a CONTINUE statement as being an
EXIT, and misreport a MOVE statement as being a FETCH. These are clear
bugs that have been there a long time, so back-patch to all supported
branches.
In addition, in 9.5 and HEAD, change the description of EXECUTE from
"EXECUTE statement" to just plain EXECUTE; there seems no good reason why
this statement type should be described differently from others that have
a well-defined head keyword. And distinguish GET STACKED DIAGNOSTICS from
plain GET DIAGNOSTICS. These are a bit more of a judgment call, and also
affect existing regression-test outputs, so I did not back-patch into
stable branches.
Pavel Stehule and Tom Lane
My expanded-objects patch (commit 1dc5ebc907) included code to make
plpgsql pass expanded-object variables as R/W pointers to certain functions
that are trusted for modifying such variables in-place. However, that
optimization got broken by commit 6c82d8d1fd, which arranged to share
a single ParamListInfo across most expressions evaluated by a plpgsql
function. We don't want a R/W pointer to be passed to other functions
just because we decided one function was safe! Fortunately, the breakage
was in the other direction, of never passing a R/W pointer at all, because
we'd always have pre-initialized the shared array slot with a R/O pointer.
So it was still functionally correct, but we were back to O(N^2)
performance for repeated use of "a := a || x". To fix, force an unshared
param array to be used when the R/W param optimization is active.
Commit 6c82d8d1fd is in HEAD only, so no need for a back-patch.
DO blocks use private simple_eval_estates to avoid intra-transaction memory
leakage, cf commit c7b849a89. I had forgotten about that while writing
commit 0fc94a5ba, but it means that expression execution trees created
within a DO block disappear immediately on exiting the DO block, and hence
can't safely be linked into plpgsql's session-wide cast hash table.
To fix, give a DO block a private cast hash table to go with its private
simple_eval_estate. This is less efficient than one could wish, since
DO blocks can no longer share any cast lookup work with other plpgsql
execution, but it shouldn't be too bad; in any case it's no worse than
what happened in DO blocks before commit 0fc94a5ba.
Per bug #13571 from Feike Steenbergen. Preliminary analysis by
Oleksandr Shulgin.
This fixes a bunch of somewhat pedantic warnings with new
compilers. Since by far the majority of other function definitions use
the (void) style it just seems to be consistent to do so as well in the
remaining few places.
It was a bool, even though it should be CEOUC_WAIT_MODE. That's unlikely
to have a negative effect with the current definition of bool (char),
but it's definitely wrong.
Discussion: 20150812084351.GD8470@awork2.anarazel.de
Backpatch: 9.5, where ON CONFLICT was merged
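A minimal sketch of this bug class; the enum values below are
assumptions modeled on the ON CONFLICT code, not a verbatim excerpt.

    #include <stdbool.h>

    typedef enum
    {
        CEOUC_WAIT,
        CEOUC_NOWAIT,
        CEOUC_LIVELOCK_PREVENTING_WAIT
    } CEOUC_WAIT_MODE;

    void
    example(void)
    {
        /* Wrong: with stdbool.h's bool, storing 2 silently collapses
         * to 1, so a later comparison against the enum value fails. */
        bool wait_mode_wrong = CEOUC_LIVELOCK_PREVENTING_WAIT;

        /* Right: declare the variable with the enum type. */
        CEOUC_WAIT_MODE wait_mode = CEOUC_LIVELOCK_PREVENTING_WAIT;

        (void) wait_mode_wrong;
        (void) wait_mode;
    }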
Since a179232047 (vacuumdb: enable parallel mode) -1 has been assigned
to a boolean. That can, justifiably, trigger compiler warnings. There's
also no need for ternary logic; the result was only ever set to 0 or -1.
So don't use it.
Discussion: 20150812084351.GD8470@awork2.anarazel.de
Backpatch: 9.5
Doing so doesn't work if bool is a macro rather than a typedef.
Although c.h spends some effort to support configurations where bool is
a preexisting macro, help_config.c has existed this way since
2003 (b700a6), and there have not been any reports of
problems. Backpatch anyway since this is as riskless as it gets.
Discussion: 20150812084351.GD8470@awork2.anarazel.de
Backpatch: 9.0-master
relreplident was mistakenly stored as a bool. That works today, as c.h
typedefs bool to a char, but it isn't very future-proof.
Discussion: 20150812084351.GD8470@awork2.anarazel.de
Backpatch: 9.4 where replica identity was introduced.
This fixes presentation of non-ASCII messages to the Windows event log
and console in rare cases involving Korean locale. Processes like the
postmaster and checkpointer, but not processes attached to databases,
were affected. Back-patch to 9.4, where MessageEncoding was introduced.
The problem exists in all supported versions, but this change has no
effect in the absence of the code recognizing PG_UHC MessageEncoding.
Noticed while investigating bug #13427 from Dmitri Bourlatchkov.
Commit 49c817eab7 replaced with a hard
error the dubious pg_do_encoding_conversion() behavior when outside a
transaction. Reintroduce the historic soft failure locally within
pgwin32_message_to_UTF16(). This fixes errors when writing messages in
less-common encodings to the Windows event log or console. Back-patch
to 9.4, where the aforementioned commit first appeared.
Per bug #13427 from Dmitri Bourlatchkov.
Reduce lock levels down to ShareUpdateExclusiveLock for all autovacuum-related
relation options when setting them using ALTER TABLE.
Add infrastructure to allow varying lock levels for relation options in later
patches. Setting multiple options together uses the highest lock level required
for any option. Works for both main and toast tables.
Fabrízio Mello, reviewed by Michael Paquier, mild edit and additional regression
tests from myself
The error message wording for AttributeError has changed in Python 3.5.
For the plpython_error test, add a new expected file. In the
plpython_subtransaction test, we didn't really care what the exception
is, only that it is something coming from Python. So use a generic
exception instead, which has a message that doesn't vary across
versions.
This time, instead of using a core isolation test, put it in its own
test module; this way it can require the pageinspect module to be
present before running.
The module's Makefile is loosely modeled after test_decoding's, so that
it's easy to add further tests for either pg_regress or isolationtester
later.
Backpatch to 9.5.
In commit 95f4e59c32 I added a regression test case that examined
the plan of a query on system catalogs. That isn't a terribly great idea
because the catalogs tend to change from version to version, or even
within a version if someone makes an unrelated regression-test change that
populates the catalogs a bit differently. Usually I try to make planner
test cases rely on test tables that have not changed since Berkeley days,
but I got sloppy in this case because the submitted crasher example queried
the catalogs and I didn't spend enough time on rewriting it. But it was a
problem waiting to happen, as I was rudely reminded when I tried to port
that patch into Salesforce's Postgres variant :-(. So spend a little more
effort and rewrite the query to not use any system catalogs. I verified
that this version still provokes the Assert if 95f4e59c32866716's code fix
is reverted.
I also removed the EXPLAIN output from the test, as it turns out that the
assertion occurs while considering a plan that isn't the one ultimately
selected anyway; so there's no value in risking any cross-platform
variation in that printout.
Back-patch to 9.2, like the previous patch.
These are emitted by the new ax_pthread.m4 script version. They are not
used for anything in PostgreSQL, but let's keep the generated header file
up-to-date.
Andres Freund
One of the changes I made in commit 8703059c6b turns out not to have
been such a good idea: we still need the exception in join_is_legal() that
allows a join if both inputs already overlap the RHS of the special join
we're checking. Otherwise we can miss valid plans, and might indeed fail
to find a plan at all, as in recent report from Andreas Seltenreich.
That code was added way back in commit c17117649b, but I failed to
include a regression test case then; my bad. Put it back with a better
explanation, and a test this time. The logic does end up a bit different
than before though: I now believe it's appropriate to make this check
first, thereby allowing such a case whether or not we'd consider the
previous SJ(s) to commute with this one. (Presumably, we already decided
they did; but it was confusing to have this consideration in the middle
of the code that was handling the other case.)
Back-patch to all active branches, like the previous patch.
In some corner cases, it is possible for the BRIN index relation to be
extended by brin_getinsertbuffer but the new page not be used
immediately for anything by its callers; when this happens, the page is
initialized and the FSM is updated (by brin_getinsertbuffer) with the
info about that page, but these actions are not WAL-logged. A later
index insert/update can use the page, but since the page is already
initialized, the initialization itself is not WAL-logged then either.
Replay of this sequence of events causes recovery to fail altogether.
There is a related corner case within brin_getinsertbuffer itself, in
which we extend the relation to put a new index tuple there, but later
find out that we cannot do so, and do not return the buffer; the page
obtained from extension is not even initialized. The resulting page is
lost forever.
To fix, shuffle the code so that initialization is not the
responsibility of brin_getinsertbuffer anymore, in normal cases;
instead, the initialization is done by its callers (brin_doinsert and
brin_doupdate) once they're certain that the page is going to be used.
When either those functions determine that the new page cannot be used,
before bailing out they initialize the page as an empty regular page,
enter it in FSM and WAL-log all this. This way, the page is usable for
future index insertions, and WAL replay doesn't find itself trying to
insert tuples into pages whose initialization never made it to the WAL. The
same strategy is used in brin_getinsertbuffer when it cannot return the
new page.
Additionally, add a new step to vacuuming so that all pages of the index
are scanned; whenever an uninitialized page is found, it is initialized
as empty and WAL-logged. This closes the hole that the relation is
extended but the system crashes before anything is WAL-logged about it.
We also take this opportunity to update the FSM, in case it has gotten
out of date.
Thanks to Heikki Linnakangas for finding the problem that kicked some
additional analysis of BRIN page assignment code.
Backpatch to 9.5, where BRIN was introduced.
Discussion: https://www.postgresql.org/message-id/20150723204810.GY5596@postgresql.org
In 4b4b680c I passed a buffer index number (starting from 0) instead of
a proper Buffer id (which starts from 1 for shared buffers) in two
places.
This wasn't noticed so far, as one of those locations isn't compiled at
all (PrintPinnedBufs) and the other one (InvalidateBuffer) requires an
unlikely, but possible, set of circumstances to trigger a symptom.
To reduce the likelihood of such incidents a bit, also convert existing
open-coded mappings from buffer descriptors to buffer ids to use
BufferDescriptorGetBuffer().
Author: Qingqing Zhou
Reported-By: Qingqing Zhou
Discussion: CAJjS0u2ai9ooUisKtkV8cuVUtEkMTsbK8c7juNAjv8K11zeCQg@mail.gmail.com
Backpatch: 9.5 where the private ref count infrastructure was introduced
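A sketch of the mapping in question, with simplified stand-in types;
the real BufferDescriptorGetBuffer() lives in the buffer manager
headers.

    typedef int Buffer;             /* shared Buffer ids start at 1 */

    typedef struct BufferDesc
    {
        int buf_id;                 /* 0-based index into the descriptor array */
    } BufferDesc;

    static inline Buffer
    BufferDescriptorGetBuffer(const BufferDesc *bdesc)
    {
        return bdesc->buf_id + 1;   /* convert 0-based index to 1-based id */
    }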
newnfa() failed to set the regex error state when malloc() fails.
Several places in regcomp.c failed to check for an error after calling
subre(). Each of these mistakes could lead to null-pointer-dereference
crashes in memory-starved backends.
Report and patch by Andreas Seltenreich. Back-patch to all branches.
Until now we computed these Param ID sets at the end of subquery_planner,
but that approach depends on subquery_planner returning a concrete Plan
tree. We would like to switch over to returning one or more Paths for a
subquery, and in that representation the necessary details aren't fully
fleshed out (not to mention that we don't really want to do this work for
Paths that end up getting discarded). Hence, refactor so that we can
compute the param ID sets at the end of planning, just before
set_plan_references is run.
The main change necessary to make this work is that we need to capture
the set of outer-level Param IDs available to the current query level
before exiting subquery_planner, since the outer levels' plan_params lists
are transient. (That's not going to pose a problem for returning Paths,
since all the work involved in producing that data is part of expression
preprocessing, which will continue to happen before Paths are produced.)
On the plus side, this change gets rid of several existing kluges.
Eventually I'd like to get rid of SS_finalize_plan altogether in favor of
doing this work during set_plan_references, but that will require some
complex rejiggering because SS_finalize_plan needs to visit subplans and
initplans before the main plan. So leave that idea for another day.
When creating a physical slot it's often useful to immediately reserve
the current WAL position instead of only doing so after the first feedback
message arrives. That e.g. allows slots to guarantee that all the WAL
for a base backup will be available afterwards.
Logical slots already have to reserve WAL during creation, so generalize
that logic into being usable for both physical and logical slots.
Catversion bump because of the new parameter.
Author: Gurjeet Singh
Reviewed-By: Andres Freund
Discussion: CABwTF4Wh_dBCzTU=49pFXR6coR4NW1ynb+vBqT+Po=7fuq5iCw@mail.gmail.com
pg_dump produced fairly silly GRANT/REVOKE commands when dumping types from
pre-9.2 servers, and when dumping functions or procedural languages from
pre-7.3 servers. Those server versions lack the typacl, proacl, and/or
lanacl columns respectively, and pg_dump substituted default values that
were in fact incorrect. We ended up revoking all the owner's own
privileges for the object while granting all privileges to PUBLIC.
Of course the owner would then have those privileges again via PUBLIC, so
long as she did not try to revoke PUBLIC's privileges; which may explain
the lack of field reports. Nonetheless this is pretty silly behavior.
The stakes were raised by my recent patch to make pg_dump dump shell types,
because 9.2 and up pg_dump would proceed to emit bogus GRANT/REVOKE
commands for a shell type if dumping from a pre-9.2 server; and the server
will not accept GRANT/REVOKE commands for a shell type. (Perhaps it
should, but that's a topic for another day.) So the resulting dump script
wouldn't load without errors.
The right thing to do is to act as though these objects have default
privileges (null ACL entries), which causes pg_dump to print no
GRANT/REVOKE commands at all for them. That fixes the silly results
and also dodges the problem with shell types.
In passing, modify getProcLangs() to be less creatively different about
how to handle missing columns when dumping from older server versions.
Every other data-acquisition function in pg_dump does that by substituting
appropriate default values in the version-specific SQL commands, and I see
no reason why this one should march to its own drummer. Its use of
"SELECT *" was likewise not conformant with anyplace else, not to mention
it's not considered good SQL style for production queries.
Back-patch to all supported versions. Although 9.0 and 9.1 pg_dump don't
have the issue with typacl, they are more likely than newer versions to be
used to dump from ancient servers, so we ought to fix the proacl/lanacl
issues all the way back.
Commit 85e5e222b1 turns out not to have taken
care of all cases of the partially-evaluatable-PlaceHolderVar problem found
by Andreas Seltenreich's fuzz testing. I had set it up to check for risky
PHVs only in the event that we were making a star-schema-based exception to
the param_source_rels join ordering heuristic. However, it turns out that
the problem can occur even in joins that satisfy the param_source_rels
heuristic, in which case allow_star_schema_join() isn't consulted.
Refactor so that we check for risky PHVs whenever the proposed join has
any remaining parameterization.
Back-patch to 9.2, like the previous patch (except for the regression test
case, which only works back to 9.3 because it uses LATERAL).
Note that this discovery implies that problems of this sort could've
occurred in 9.2 and up even before the star-schema patch; though I've not
tried to prove that experimentally.
Several versions of the perl that comes with the Msys DTK have been
found to have a bug that fails to recognize a ' before a multiline $ in
some circumstances. To work around the problem, use a character class
for the '. Another solution would have been to use \n instead of $, but
that would have changed the test semantics very slightly.
Commit 2834855cb added a not-very-carefully-thought-out isolation test
to check a BRIN index bug fix. The test depended on the availability
of the pageinspect contrib module, which meant it did not work in
several common testing scenarios such as "make check-world". It's not
clear whether we want a core test depending on a contrib module like
that, but in any case, failing to deal with the possibility that the
module isn't present in the installation-under-test is not acceptable.
Remove that test pending some better solution.
There's no reason not to expose both restart_lsn and confirmed_flush
since they have rather distinct meanings. The former is the oldest WAL
still required and valid for both physical and logical slots, whereas
the latter is the location up to which a logical slot's consumer has
confirmed receiving data. Most of the time a slot will require older
WAL (i.e. restart_lsn) than the confirmed
position (i.e. confirmed_flush_lsn).
Author: Marko Tiikkaja, editorialized by me
Discussion: 559D110B.1020109@joh.to
XLogRecPtr was compared with InvalidTransactionId instead of
InvalidXLogRecPtr. As both are defined to the same value this doesn't
cause any actual problems, but it's still wrong.
Backpatch: 9.4-master, bug was introduced in 9.4
Immediately starting to stream after --create-slot is inconvenient in a
number of situations (e.g. when configuring a slot for use in
recovery.conf) and it's easy to just call pg_receivexlog twice in the
rest of the cases.
Author: Michael Paquier
Discussion: CAB7nPqQ9qEtuDiKY3OpNzHcz5iUA+DUX9FcN9K8GUkCZvG7+Ew@mail.gmail.com
Backpatch: 9.5, where the option was introduced
The allowed syntax for OVERLAPS, viz "row OVERLAPS row", is sufficiently
constrained that we don't actually need a precedence declaration for
OVERLAPS; indeed removing this declaration does not change the generated
gram.c file at all. Let's remove it to avoid confusion about whether
OVERLAPS has precedence or not. If we ever generalize what we allow for
OVERLAPS, we might need to put back a precedence declaration for it,
but we might want some other level than what it has today --- and leaving
the declaration there would just risk confusion about whether that would
be an incompatible change.
Likewise, remove OVERLAPS from the documentation's precedence table.
Per discussion with Noah Misch. Back-patch to 9.5 where we hacked up some
nearby precedence decisions.
Commit 9043fe390f4f0b4586cfe59cbd22314b9c3e2957 broke the multibyte
regression tests because it removed the warning message issued when
temporary hash indexes are created, which had been added by commit
07af523870.
Back-patched to the 9.5 stable tree.
In de6fd1c8 I moved the workaround from 53f73879 into the aix
template. The previous location was removed in the former commit, and I
thought that it would be nice to emit a warning when running configure.
That didn't turn out to work because at the point the template is
included we don't know whether we're compiling a 32/64 bit binary and
it's possible to install compilers for both on a 64 bit kernel/OS.
So go back to a less ambitious approach and define
PG_FORCE_DISABLE_INLINE in port/aix.h, without emitting a warning. We
could try a more fancy approach, but it doesn't seem worth it.
This requires moving the check for PG_FORCE_DISABLE_INLINE in c.h to
after the inclusion of the system headers included from therein, which
isn't perfect, as it seems slightly more robust to include all system
headers in a similar environment. Oh well.
Discussion: 20150807132000.GC13310@awork2.anarazel.de
A removed check in ba3deeefb made all threads but the main one busy-loop
when -P was used. All threads computed the time at which the next
progress report was due, but only the main thread printed it and
rescheduled the next one into the future, leaving the other threads
with an already-expired timeout.
Reported-By: Jesper Pedersen
Discussion: 55C4E190.3050104@redhat.com
A new test case from Andreas Seltenreich showed that we were still a bit
confused about removing PlaceHolderVars during join removal. Specifically,
remove_rel_from_query would remove a PHV that was used only underneath
the removable join, even if the place where it's used was the join partner
relation and not the join clause being deleted. This would lead to a
"too late to create a new PlaceHolderInfo" error later on. We can defend
against that by checking ph_eval_at to see if the PHV could possibly be
getting used at some partner rel.
Also improve some nearby LATERAL-related logic. I decided that the check
on ph_lateral needed to take precedence over the check on ph_needed, in
case there's a lateral reference underneath the join being considered.
(That may be impossible, but I'm not convinced of it, and it's easy enough
to defend against the case.) Also, I realized that remove_rel_from_query's
logic for updating LateralJoinInfos is dead code, because we don't build
those at all until after join removal.
Back-patch to 9.3. Previous versions didn't have the LATERAL issues, of
course, and they also didn't attempt to remove PlaceHolderInfos during join
removal. (I'm starting to wonder if changing that was really such a great
idea.)
Some frontend code, e.g. pg_xlogdump or pg_resetxlog, has to use
backend headers. Unfortunately, until now that code included most of the
locking code. It's generally not nice to expose such low level details,
but de6fd1c898 made that a hard problem. We fall back to defining
'inline' away if the compiler doesn't support it - that can cause linker
errors, like on buildfarm animal pademelon, if an inline function
references backend-only code.
To fix that problem, separate the definitions from lock.h that are
required by frontend code into lockdefs.h and use it in the relevant
places. I've only removed the minimal amount of necessary definitions
for now - it might turn out that we want more for other reasons.
To avoid such details being exposed again put some checks against being
included from frontend code into atomics.h, lock.h, lwlock.h and
s_lock.h. It's otherwise fairly easy to indirectly include these
headers.
Discussion: 20150806070902.GE12214@awork2.anarazel.de
Amit reviewed the replication origins patch and made some good
points. Address them. This fixes typos in error messages, docs and
comments and adds a missing error check (although in a
should-never-happen scenario).
Discussion: CAA4eK1JqUBVeWWKwUmBPryFaje4190ug0y-OAUHWQ6tD83V4xg@mail.gmail.com
Backpatch: 9.5, where replication origins were introduced.
Commit 9e7e29c75a introduced an Assert that
join removal didn't reduce the eval_at set of any PlaceHolderVar to empty.
At first glance it looks like join_is_removable ensures that's true --- but
actually, the loop in join_is_removable skips PlaceHolderVars that are not
referenced above the join due to be removed. So, if we don't want any
empty eval_at sets, the right thing to do is to delete any now-unreferenced
PlaceHolderVars from the data structure entirely.
Per fuzz testing by Andreas Seltenreich. Back-patch to 9.3 where the
aforesaid Assert was added.
Formerly, this function would always return "true" for an appendrel child
relation, because it would think that the appendrel parent was a potential
join target for the child. In principle that should only lead to some
inefficiency in planning, but fuzz testing by Andreas Seltenreich disclosed
that it could lead to "could not find pathkey item to sort" planner errors
in odd corner cases. Specifically, we would think that all columns of a
child table's multicolumn index were interesting pathkeys, causing us to
generate a MergeAppend path that sorts by all the columns. However, if any
of those columns weren't actually used above the level of the appendrel,
they would not get added to that rel's targetlist, which would result in
being unable to resolve the MergeAppend's sort keys against its targetlist
during createplan.c.
Backpatch to 9.3. In older versions, columns of an appendrel get added
to its targetlist even if they're not mentioned above the scan level,
so that the failure doesn't occur. It might be worth back-patching this
fix to older versions anyway, but I'll refrain for the moment.
Further testing revealed that commit f69b4b9495 was still a few
bricks shy of a load: minor tweaking of the previous test cases resulted
in the same wrong-outer-join-order problem coming back. After study
I concluded that my previous changes in make_outerjoininfo() were just
accidentally masking the problem, and should be reverted in favor of
forcing syntactic join order whenever an upper outer join's predicate
doesn't mention a lower outer join's LHS. This still allows the
chained-outer-joins style that is the normally optimizable case.
I also tightened things up some more in join_is_legal(). It seems to me
on review that what's really happening in the exception case where we
ignore a mismatched special join is that we're allowing the proposed join
to associate into the RHS of the outer join we're comparing it to. As
such, we should *always* insist that the proposed join be a left join,
which eliminates a bunch of rather dubious argumentation. The case where
we weren't enforcing that was the one that was already known buggy anyway
(it had a violatable Assert before the aforesaid commit) so it hardly
deserves a lot of deference.
Back-patch to all active branches, like the previous patch. The added
regression test case failed in all branches back to 9.1, and I think it's
only an unrelated change in costing calculations that kept 9.0 from
choosing a broken plan.
If some, but not all, of the length word has already been read, and the
next attempt to read sees exactly the number of bytes needed to complete
the length word, or fewer, then we'll incorrectly read less than all of
the available data.
Antonin Houska
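A sketch of the intended behavior, with hypothetical names: the reader
must keep making progress even when a read boundary falls inside the
4-byte length word, rather than discarding the bytes beyond it.

    #include <stdint.h>
    #include <string.h>

    typedef struct
    {
        uint8_t  lenbuf[4];
        int      lenbytes;      /* bytes of the length word seen so far */
        uint32_t msglen;        /* valid once lenbytes == 4 */
    } MsgReader;

    /* Consume length-word bytes from freshly read data; returns how
     * many bytes were taken, so the caller continues with the rest of
     * the available data instead of losing it. */
    static int
    consume_length_word(MsgReader *r, const uint8_t *data, int avail)
    {
        int need = 4 - r->lenbytes;
        int take = avail < need ? avail : need;

        memcpy(r->lenbuf + r->lenbytes, data, take);
        r->lenbytes += take;
        if (r->lenbytes == 4)
            r->msglen = ((uint32_t) r->lenbuf[0] << 24) |
                        ((uint32_t) r->lenbuf[1] << 16) |
                        ((uint32_t) r->lenbuf[2] << 8) |
                        ((uint32_t) r->lenbuf[3]);

        return take;
    }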
When a write transaction commits, it must clear its XID advertised via
the ProcArray, which requires that we hold ProcArrayLock in exclusive
mode in order to prevent concurrent processes running GetSnapshotData
from seeing inconsistent results. When many processes try to commit
at once, ProcArrayLock must change hands repeatedly, with each
concurrent process trying to commit waking up to acquire the lock in
turn. To make things more efficient, when more than one backend is
trying to commit a write transaction at the same time, have just one
of them acquire ProcArrayLock in exclusive mode and clear the XIDs of
all processes in the group. Benchmarking reveals that this is much
more efficient at very high client counts.
Amit Kapila, heavily revised by me, with some review also from Pavan
Deolasee.
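A toy sketch of the batching idea (the real patch works on the PGPROC
array with a lock-free request queue, and followers sleep rather than
acquire the lock; everything below, including names, is hypothetical):
each committing process advertises a pending request, and whichever
process holds the lock services the whole group, so later arrivals find
their work already done.

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdbool.h>

    #define MAX_PROCS 64

    static int xids[MAX_PROCS];                 /* advertised XIDs */
    static atomic_bool clear_pending[MAX_PROCS];
    static pthread_mutex_t proc_array_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Called by each committing process. One acquisition services every
     * queued request, so the lock changes hands far less often under
     * heavy commit load. */
    void
    toy_clear_xid_grouped(int myproc)
    {
        atomic_store(&clear_pending[myproc], true);

        pthread_mutex_lock(&proc_array_lock);
        for (int i = 0; i < MAX_PROCS; i++)
        {
            if (atomic_exchange(&clear_pending[i], false))
                xids[i] = 0;                    /* clear advertised XID */
        }
        pthread_mutex_unlock(&proc_array_lock);
    }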
Commit e5550d5fec added some new
tests for ALTER TABLE which involved table scans. When
default_transaction_isolation = 'serializable' these acquire
relation-level SIReadLocks. The test results didn't cope with
that. Add SIReadLock as the minimum lock level for purposes of
these tests.
This could also be fixed by excluding this type of lock from the
my_locks view, but it would be a bug for SIReadLock to show up for
a relation which was not otherwise locked, so do it this way to
allow that sort of condition to cause a regression test failure.
There is some question whether we could avoid taking SIReadLocks
during these operations, but confirming the safety of that and
figuring out how to avoid the locks is not trivial, and would be
a separate patch.
Backpatch to 9.4 where the new tests were added.
pg_resetxlog.c contained two superfluous includes, origin.h superfluously
depended on logical.h, and pg_xlogdump's rmgrdesc.h only indirectly
included origin.h.
Backpatch: 9.5, where replication origins were introduced.
A few of the discrepancies had semantic significance, but I did not
track down the resulting user-visible bugs, if any. Back-patch to 9.5,
where all but one discrepancy appeared. The _equalCreateEventTrigStmt()
situation dates to 9.3 but does not affect semantics.
catversion bump due to readfuncs.c field order changes.
Commit 0ffc201a51 included this object
unconditionally. Being unprepared for that, most external, single-file
modules failed to build. This better aligns the GNU make build system
with the heuristic in the MSVC build's Project::AddDirResourceFile().
In-tree, installed modules set PGFILEDESC, so they will see no change.
Also, under PGXS, omit the nonfunctioning rule to build win32ver.rc.
Back-patch to 9.5, where the aforementioned commit first appeared.
Older versions have rmtree but not remove_tree. The one-argument forms
of these are equivalent, so replace remove_tree with rmtree. This allows
the tests to be run on oldish Msys systems.
For correctness of summarization results, it is critical that the
snapshot used during the summarization scan is able to see all tuples
that are live to all transactions -- including tuples inserted or
deleted by in-progress transactions. Otherwise, it would be possible
for a transaction to insert a tuple, then idle for a long time while a
concurrent transaction executes summarization of the range: this would
result in the inserted value not being considered in the summary.
Previously we were trying to use an MVCC snapshot in conjunction with
adding a "placeholder" tuple in the index: the snapshot would see all
committed tuples, and the placeholder tuple would catch insertions by
any new inserters. The hole is that prior insertions by transactions
that were still in progress when the MVCC snapshot was taken were
ignored.
Kevin Grittner reported this as a bogus error message during vacuum with
default transaction isolation mode set to repeatable read (because the
error report mentioned a function name not being invoked during vacuum), but
the problem is larger than that.
To fix, tweak IndexBuildHeapRangeScan to have a new mode that behaves
the way we need using SnapshotAny visibility rules. This change
simplifies the BRIN code a bit, mainly by removing large comments that
were mistaken. Instead, rely on the SnapshotAny semantics to provide
what it needs. (The business about a placeholder tuple needs to remain:
that covers the case that a transaction inserts a tuple in a page that
summarization already scanned.)
Discussion: https://www.postgresql.org/message-id/20150731175700.GX2441@postgresql.org
In passing, remove a couple of unused declarations from brin.h and
reword a comment to be proper English. This part submitted by Kevin
Grittner.
Backpatch to 9.5, where BRIN was introduced.
Per the discussion in optimizer/README, it's unsafe to reassociate anything
into or out of the RHS of a SEMI or ANTI join. An example from Piotr
Stefaniak showed that join_is_legal() wasn't sufficiently enforcing this
rule, so lock it down a little harder.
I couldn't find a reasonably simple example of the optimizer trying to
do this, so no new regression test. (Piotr's example involved the random
search in GEQO accidentally trying an invalid case and triggering a sanity
check way downstream in clause selectivity estimation, which did not seem
like a sequence of events that would be useful to memorialize in a
regression test as-is.)
Back-patch to all active branches.
So far we have worked around the fact that some very old compilers do
not support 'inline' functions by only using inline functions
conditionally (or not at all). Since such compilers are very rare by
now, we have decided to rely on inline functions from 9.6 onwards.
To avoid breaking these old compilers, inline is defined away when not
supported. That'll cause "function x defined but not used" type of
warnings, but since nobody develops on such compilers anymore that's
ok.
This change in policy will allow us to more easily employ inline
functions.
I chose to remove code previously conditional on PG_USE_INLINE as it
seemed confusing to have code dependent on a define that's always
defined.
Blacklisting of compilers, like in c53f73879f, now has to be done
differently. A platform template can define PG_FORCE_DISABLE_INLINE to
force inline to be defined empty.
Discussion: 20150701161447.GB30708@awork2.anarazel.de
Previously the message erroneously printed the same LSN twice as the
assignment to the start_lsn variable was before the message. Correct
that.
Reported-By: Marko Tiikkaja
Author: Marko Tiikkaja
Backpatch: 9.5, where logical decoding was introduced
I appear to accidentally have switched the comments for
pg_atomic_write_u32 and pg_atomic_read_u32 around. Also fix some minor
typos I found while fixing.
Noticed-By: Amit Kapila
Backpatch: 9.5
Per discussion, it really ought to do this. The original choice to
exclude shell types was probably made in the dark ages before we made
it harder to accidentally create shell types; but that was in 7.3.
Also, cause the standard regression tests to leave a shell type behind,
for convenience in testing the case in pg_dump and pg_upgrade.
Back-patch to all supported branches.
The tuplesort/tuplestore memory management logic assumed that the chunk
allocation overhead for its memtuples array could not increase when
increasing the array size. This is and always was true for tuplesort,
but we (I, I think) blindly copied that logic into tuplestore.c without
noticing that the assumption failed to hold for the much smaller array
elements used by tuplestore. Given rather small work_mem, this could
result in an improper complaint about "unexpected out-of-memory situation",
as reported by Brent DeSpain in bug #13530.
The easiest way to fix this is just to increase tuplestore's initial
array size so that the assumption holds. Rather than relying on magic
constants, though, let's export a #define from aset.c that represents
the safe allocation threshold, and make tuplestore's calculation depend
on that.
Do the same in tuplesort.c to keep the logic looking parallel, even though
tuplesort.c isn't actually at risk at present. This will keep us from
breaking it if we ever muck with the allocation parameters in aset.c.
Back-patch to all supported versions. The error message doesn't occur
pre-9.3, not so much because the problem can't happen as because the
pre-9.3 tuplestore code neglected to check for it. (The chance of
trouble is a great deal larger as of 9.3, though, due to changes in the
array-size-increasing strategy.) However, allowing LACKMEM() to become
true unexpectedly could still result in less-than-desirable behavior,
so let's patch it all the way back.
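A sketch of the sizing rule, under the assumption that the exported
constant is the chunk-size threshold above which aset.c gives a chunk
its own block (the name below follows that description and may not
match the actual header):

    #include <stddef.h>

    /* Assumed: chunks at least this large get a dedicated block, so
     * their allocation overhead is a fixed fraction and cannot grow as
     * the array doubles. */
    #define ALLOCSET_SEPARATE_THRESHOLD 8192

    static size_t
    initial_memtuples_count(size_t elem_size)
    {
        size_t n = 1024;

        /* Start large enough that the array is always "separate". */
        while (n * elem_size < ALLOCSET_SEPARATE_THRESHOLD)
            n *= 2;
        return n;
    }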
In commit b514a7460d, I changed the planner
so that it would allow nestloop paths to remain partially parameterized,
ie the inner relation might need parameters from both the current outer
relation and some upper-level outer relation. That's fine so long as we're
talking about distinct parameters; but the patch also allowed creation of
nestloop paths for cases where the inner relation's parameter was a
PlaceHolderVar whose eval_at set included the current outer relation and
some upper-level one. That does *not* work.
In principle we could allow such a PlaceHolderVar to be evaluated at the
lower join node using values passed down from the upper relation along with
values from the join's own outer relation. However, nodeNestloop.c only
supports simple Vars not arbitrary expressions as nestloop parameters.
createplan.c is also a few bricks shy of being able to handle such cases;
it misplaces the PlaceHolderVar parameters in the plan tree, which is why
the visible symptoms of this bug are "plan should not reference subplan's
variable" and "failed to assign all NestLoopParams to plan nodes" planner
errors.
Adding the necessary complexity to make this work doesn't seem like it
would be repaid in significantly better plans, because in cases where such
a PHV exists, there is probably a corresponding join order constraint that
would allow a good plan to be found without using the star-schema exception.
Furthermore, adding complexity to nodeNestloop.c would create a run-time
penalty even for plans where this whole consideration is irrelevant.
So let's just reject such paths instead.
Per fuzz testing by Andreas Seltenreich; the added regression test is based
on his example query. Back-patch to 9.2, like the previous patch.
It must be possible to multiply wal_buffers by XLOG_BLCKSZ without
overflowing int, or calculations in StartupXLOG will go badly wrong
and crash the server. Avoid that by imposing a maximum value on
wal_buffers. This will be just under 2GB, assuming the usual value
for XLOG_BLCKSZ.
Josh Berkus, per an analysis by Andrew Gierth.
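A sketch of the cap, assuming the usual XLOG_BLCKSZ of 8192 (the macro
name XLOG_MAX_BUFFERS is illustrative); the point is simply that
wal_buffers * XLOG_BLCKSZ must stay within int range:

    #include <limits.h>

    #define XLOG_BLCKSZ 8192                         /* usual build value */
    #define XLOG_MAX_BUFFERS (INT_MAX / XLOG_BLCKSZ) /* just under 2GB */

    static int
    clamp_wal_buffers(int requested)
    {
        return (requested > XLOG_MAX_BUFFERS) ? XLOG_MAX_BUFFERS : requested;
    }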
If there are two different aggregates in the query with the same inputs,
and the aggregates have the same initial condition and transition
function, calculate the state value only once, and call only the final
functions separately. For example, AVG(x) and SUM(x) aggregates have the
same transition function, which accumulates the sum and number of input
tuples. For a query like "SELECT AVG(x), SUM(x) FROM x", we can
therefore compute the aggregate state only once, which gives a nice
speedup.
David Rowley, reviewed and edited by me.
Only remove the default deny policy when a permissive policy exists
(either from the hook or defined by the user). If only restrictive
policies exist then no rows will be visible, as restrictive policies
shouldn't make rows visible. To address this requirement, a single
"USING (true)" permissive policy can be created.
Update the test_rls_hooks regression tests to create the necessary
"USING (true)" permissive policy.
Back-patch to 9.5 where RLS was added.
Per discussion with Dean.
If a tablespace_map file is present without a backup_label file, it
serves no purpose. There is no harm in retaining it, but it is better to
get rid of the map file so that the data directory contains no redundant
file and no source of confusion. It seems prudent, though, to just
rename the file out of the way rather than delete it outright; we also
ignore any error from the rename, since a map file present without a
backup_label file is harmless anyway.
Back-patch to 9.5 where tablespace_map file was introduced.
Amit Kapila, reviewed by Robert Haas, Alvaro Herrera and me.
pg_xlog is often a symlink, typically to a different filesystem. Don't
get confused by that or complain about it; just always pretend that it's
a normal directory, even if it's really a symlink.
Also add a test case for this.
Backpatch to 9.5.
Since commit 01f6bb4b2, TestLib.pm has exported the path to the tmp_check directory,
so let's use that also for the pg_rewind test clusters etc.
Also, in master, the $tempdir_short variable has not been used since commit
13d856e17, which moved the initdb-running code to TestLib.pm.
Backpatch to 9.5.
It's against project policy to use elog() for user-facing errors, or to
omit an errcode() selection for errors that aren't supposed to be "can't
happen" cases. Fix all the violations of this policy that result in
ERRCODE_INTERNAL_ERROR log entries during the standard regression tests,
as errors that can reliably be triggered from SQL surely should be
considered user-facing.
I also looked through all the files touched by this commit and fixed
other nearby problems of the same ilk. I do not claim to have fixed
all violations of the policy, just the ones in these files.
In a few places I also changed existing ERRCODE choices that didn't
seem particularly appropriate; mainly replacing ERRCODE_SYNTAX_ERROR
by something more specific.
Back-patch to 9.5, but no further; changing ERRCODE assignments in
stable branches doesn't seem like a good idea.
The Msys DTK perl, which is required to run TAP tests under Msys as a
native perl won't recognize the correct virtual paths, has its osname
recorded in the Config module as 'msys' instead of 'MSWin32'. To avoid
having to repeat the test a variable is created that is true iff the
osname is either of these values, and is then used everywhere that
matters.
As in commit 0a52d378b0, avoid doing something that has undefined
results according to the C standard, even though in practice there does
not seem to be any problem with it.
This fixes two places in numeric.c that demonstrably could call memcpy()
with such arguments. I looked through that file and didn't see any other
places with similar hazards; this is not to claim that there are not such
places in other files.
Per report from Piotr Stefaniak. Back-patch to 9.5 which is where the
previous commit was added. We're more or less setting a precedent that
we will not worry about this type of issue in pre-9.5 branches unless
someone demonstrates a problem in the field.
Commit c9b0cbe98b accidentally broke the
order of operations during postmaster shutdown: it resulted in removing
the per-socket lockfiles after, not before, postmaster.pid. This creates
a race-condition hazard for a new postmaster that's started immediately
after observing that postmaster.pid has disappeared; if it sees the
socket lockfile still present, it will quite properly refuse to start.
This error appears to be the explanation for at least some of the
intermittent buildfarm failures we've seen in the pg_upgrade test.
Another problem, which has been there all along, is that the postmaster
has never bothered to close() its listen sockets, but has just allowed them
to close at process death. This creates a different race condition for an
incoming postmaster: it might be unable to bind to the desired listen
address because the old postmaster is still incumbent. This might explain
some odd failures we've seen in the past, too. (Note: this is not related
to the fact that individual backends don't close their client communication
sockets. That behavior is intentional and is not changed by this patch.)
Fix by adding an on_proc_exit function that closes the postmaster's ports
explicitly, and (in 9.3 and up) reshuffling the responsibility for where
to unlink the Unix socket files. Lock file unlinking can stay where it
is, but teach it to unlink the lock files in reverse order of creation.
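A sketch of the close-on-exit hook described above, with hypothetical
names for the socket bookkeeping; the point is that the ports are
released deliberately, before the lockfiles go away, rather than at
process death:

    #include <unistd.h>

    #define MAXLISTEN 64

    static int listen_sockets[MAXLISTEN];   /* hypothetical bookkeeping */
    static int n_listen_sockets = 0;

    /* Registered via on_proc_exit() so listen ports are closed before
     * the postmaster's lockfiles are removed. */
    static void
    close_listen_sockets(void)
    {
        for (int i = 0; i < n_listen_sockets; i++)
            close(listen_sockets[i]);
        n_listen_sockets = 0;
    }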
If a call to WaitXLogInsertionsToFinish() returned a value in the middle
of a page, and another backend then started to insert a record to the same
page, and then you called WaitXLogInsertionsToFinish() again, the second
call might return a smaller value than the first call. The problem was in
GetXLogBuffer(), which always updated the insertingAt value to the
beginning of the requested page, not the actual requested location. Because
of that, the second call might return a xlog pointer to the beginning of
the page, while the first one returned a later position on the same page.
XLogFlush() performs two calls to WaitXLogInsertionsToFinish() in
succession, and holds WALWriteLock on the second call, which can deadlock
if the second call to WaitXLogInsertionsToFinish() blocks.
Reported by Spiros Ioannou. Backpatch to 9.4, where the more scalable
WALInsertLock mechanism, and this bug, was introduced.
LWLockAttemptLock pointlessly read the lock's state in every loop
iteration, even though pg_atomic_compare_exchange_u32() returns the old
value. Instead, read it only once, before entering the loop.
Additionally there's no need for the expected_state variable;
old_state mostly had the same value anyway.
Noticed-By: Heikki Linnakangas
Backpatch: 9.5, no reason to let the branches diverge at this point
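A sketch of the pattern, using standard C atomics as stand-ins for the
pg_atomic_* wrappers: on failure, compare-exchange refreshes the
expected value, so no separate read per iteration is needed.

    #include <stdatomic.h>
    #include <stdbool.h>

    static bool
    attempt_lock(atomic_uint *state, unsigned lock_bit)
    {
        unsigned old = atomic_load(state);      /* single initial read */

        for (;;)
        {
            if (old & lock_bit)
                return false;                   /* lock already held */

            /* On failure, 'old' is updated with the current state, so
             * the loop re-evaluates without an explicit re-read. */
            if (atomic_compare_exchange_weak(state, &old, old | lock_bit))
                return true;
        }
    }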
The lwlock scalability work introduced two race conditions into the
lwlock variable support provided for xlog.c. First, and harmlessly on
most platforms, it set/read the variable without the spinlock in some
places. Secondly, due to the removal of the spinlock, it was possible
that a backend missed changes to the variable's state if it changed at
the wrong moment, because checking the lock's state, the variable's state
and the queuing are not protected by a single spinlock acquisition
anymore.
To fix, first move the resetting of the variable's value from
LWLockAcquireWithVar to WALInsertLockRelease, via a new function
LWLockReleaseClearVar. That
prevents issues around waiting for a variable's value to change when a
new locker has acquired the lock, but not yet set the value. Secondly
re-check that the variable hasn't changed after enqueueing; that
prevents the issue that the lock has been released and already
re-acquired by the time the woken-up backend checks the lock's state.
Reported-By: Jeff Janes
Analyzed-By: Heikki Linnakangas
Reviewed-By: Heikki Linnakangas
Discussion: 5592DB35.2060401@iki.fi
Backpatch: 9.5, where the lwlock scalability went in
An outer join clause that didn't actually reference the RHS (perhaps only
after constant-folding) could confuse the join order enforcement logic,
leading to wrong query results. Also, nested occurrences of such things
could trigger an Assertion that on reflection seems incorrect.
Per fuzz testing by Andreas Seltenreich. The practical use of such cases
seems thin enough that it's not too surprising we've not heard field
reports about it.
This has been broken for a long time, so back-patch to all active branches.
Per complaint from Peter Holzer. It's useful to cover this special case,
since for a boolean variable "foo", earlier parts of the planner will have
reduced variants like "foo = true" to just "foo", and thus we may fail
to recognize the applicability of a partial index with predicate
"foo IS NOT NULL".
Back-patch to 9.5, but not further; given the lack of previous complaints
this doesn't seem like behavior to change in stable branches.
In many cases, we can implement a semijoin as a plain innerjoin by first
passing the righthand-side relation through a unique-ification step.
However, one of the cases where this does NOT work is where the RHS has
a LATERAL reference to the LHS; that makes the RHS dependent on the LHS
so that unique-ification is meaningless. joinpath.c understood this,
and so would not generate any join paths of this kind ... but join_is_legal
neglected to check for the case, so it would think that we could do it.
The upshot would be a "could not devise a query plan for the given query"
failure once we had failed to generate any join paths at all for the bogus
join pair.
Back-patch to 9.3 where LATERAL was added.
The PGXS-case directory does not exist in the non-PGXS case, and vice
versa. Add one or the other, not both. This is essentially cosmetic.
It makes Makefile.win32 more like the similar Makefile.global code.
Responsibility was formerly split between Makefile.global and pgxs.mk.
As a result of commit b58233c71b, in the
PGXS case, these variables were unset while parsing Makefile.global and
callees. Inclusion of Makefile.custom did not work from PGXS, and the
subtle difference seemed like a recipe for future bugs. Back-patch to
9.4, where that commit first appeared.
They are marked stable, but since they act on instantaneous state and it
is possible to consult the state of transactions as they commit, the results
could change mid-query. They need to be marked volatile, and this
commit does so.
There would normally be a catversion bump here, but this is so much of
a niche feature, and I don't believe there's real damage from the
incorrect marking, that I refrained.
Backpatch to 9.5, where commit timestamps were introduced.
Per note from Fujii Masao.
The code was assuming that any NULL value in scan keys was due to IS
NULL or IS NOT NULL, but it turns out to be possible to get them with
other operators too, if they are used in contrived-enough ways. The
easiest way out of the problem seems to be to check explicitly for the
IS NOT NULL flag, instead of assuming it must be set if the IS NULL flag
is not set,
when a null scan key is found; if neither flag is set, follow the lead
of other index AMs and assume that all indexable operators must be
strict, and thus the query is never satisfiable.
Also, add a comment to try and lure some future hacker into improving
analysis of scan keys in brin.
Per report from Andreas Seltenreich; diagnosis by Tom Lane.
Backpatch to 9.5.
Discussion: http://www.postgresql.org/message-id/20646.1437919632@sss.pgh.pa.us
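A sketch of the corrected decision; the flag names follow PostgreSQL's
ScanKey conventions, but the struct and flag values are simplified
stand-ins.

    #define SK_ISNULL        0x0001
    #define SK_SEARCHNULL    0x0002
    #define SK_SEARCHNOTNULL 0x0004

    typedef struct
    {
        int sk_flags;
    } ToyScanKey;

    typedef enum
    {
        APPLY_OPERATOR,     /* non-null key: use the operator normally */
        MATCH_NULLS,        /* IS NULL */
        MATCH_NOTNULLS,     /* IS NOT NULL */
        UNSATISFIABLE       /* null argument to a strict operator */
    } KeyAction;

    static KeyAction
    classify_key(const ToyScanKey *key)
    {
        if (!(key->sk_flags & SK_ISNULL))
            return APPLY_OPERATOR;
        if (key->sk_flags & SK_SEARCHNULL)
            return MATCH_NULLS;
        if (key->sk_flags & SK_SEARCHNOTNULL)
            return MATCH_NOTNULLS;
        /* Neither flag set: assume indexable operators are strict, so
         * a null comparison value can never be satisfied. */
        return UNSATISFIABLE;
    }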
When retrieving policies, if not working on the root target relation,
we actually want the relation's SELECT policies, regardless of
the top level query command type. For example in UPDATE t1...FROM t2
we need to apply t1's UPDATE policies and t2's SELECT policies.
Previously, the top-level query command type was applied to all relations,
which was wrong. Add some regression coverage to ensure we don't
violate this principle in the future.
Report and patch by Dean Rasheed. Cherry picked from larger refactoring
patch and tweaked by me. Back-patched to 9.5 where RLS was introduced.
Although I think on all modern machines floating division by zero
results in Infinity not SIGFPE, we still don't want infinities
running around in the planner's costing estimates; too much risk
of that leading to insane behavior.
grouping_planner() failed to consider the possibility that final_rel
might be known dummy and hence have zero rowcount. (I wonder if it
would be better to set a rows estimate of 1 for dummy relations?
But at least in the back branches, changing this convention seems
like a bad idea, so I'll leave that for another day.)
Make certain that get_variable_numdistinct() produces a nonzero result.
The case that can be shown to be broken is with stadistinct < 0.0 and
small ntuples; we did not prevent the result from rounding to zero.
For good luck I applied clamp_row_est() to all the nonconstant return
values.
In ExecChooseHashTableSize(), Assert that we compute positive nbuckets
and nbatch. I know of no reason to think this isn't the case, but it
seems like a good safety check.
Per reports from Piotr Stefaniak. Back-patch to all active branches.
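A sketch of the clamp, following the commit's description; the real
clamp_row_est() lives in the planner's cost code.

    #include <math.h>

    /* Force a row estimate to be a positive, integral value. */
    static double
    clamp_row_est(double nrows)
    {
        if (nrows <= 1.0)
            nrows = 1.0;            /* never zero: guards later divisions */
        else
            nrows = rint(nrows);    /* keep estimates integral */
        return nrows;
    }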
When we loop back to the top of doCustom after processing a backslash
command, we must reset the "now" timestamp, because that's used to
calculate the time spent executing the previous command.
Report and fix by Fabien Coelho. Backpatch to 9.5, where this was broken.
The reverted changes did not narrow the semantic gap between the MSVC
build system and the GNU make build system. For targets old and new
that run multiple suites (contribcheck, modulescheck, tapcheck), restore
vcregress.pl to mimicking "make -k" rather than the "make -S" default.
Lack of "-k" would be more burdensome than lack of "-S". Keep changes
reflecting contemporary changes to the GNU make build system, and keep
updates to Makefile parsing. Keep the loss of --psqldir in "check" and
"ecpgcheck" targets; it had been a no-op when used alongside
--temp-install. No log message mentioned any of the reverted changes.
Based on a germ by Michael Paquier. Back-patch to 9.5.
This code relied on knowing exactly where in the source tree temporary
installations might appear. A reasonable hacker may not think to update
this code when adding use of a temporary installation, making it
fragile. Observe that commit 9fa8b0ee90
broke it unnoticed, and commit dcae5facca
fixed it unnoticed. Back-patch to 9.5 only; use of temporary
installations is unlikely to change in released versions.
Policy USING and WITH CHECK expressions were using EXPR_KIND_WHERE for
parse analysis, which results in inappropriate ERROR messages when
the expression contains unsupported constructs such as aggregates.
Create a new ParseExprKind called EXPR_KIND_POLICY and tailor the
related messages to fit.
Reported by Noah Misch. Reviewed by Dean Rasheed, Alvaro Herrera,
and Robert Haas. Back-patch to 9.5 where RLS was introduced.
A Salesforce colleague of mine griped that the regression tests don't
exercise EvalPlanQualFetchRowMarks() and allied routines. Which is
a fair complaint. Add test cases that go through the REFERENCE and COPY
code paths. Unfortunately we don't have sufficient infrastructure right
now to exercise the FDW code path in the isolation tests, but this is
surely better than before.
AlterPolicy() and CreatePolicy() lacked their respective hook invocations.
Noted by Noah Misch, review by Dean Rasheed. Back-patch to 9.5 where
RLS was introduced.
On Windows, use listen_addresses=127.0.0.1 to allow TCP connections. We were
already using "pg_regress --config-auth" to set up HBA appropriately. The
standard_initdb helper function now sets up the server's
unix_socket_directories or listen_addresses in the config file, so that
they don't need to be specified in the pg_ctl command line anymore. That
way, the pg_ctl invocations in test programs don't need to differ between
Windows and Unix.
Add another helper function to configure the server's pg_hba.conf to allow
replication connections. The configuration is done similarly to "pg_regress
--config-auth": trust on domain sockets on Unix, and SSPI authentication on
Windows.
Replace calls to "cat" and "touch" programs with built-in perl code, as
those programs don't normally exist on Windows.
Add instructions in the docs on how to install IPC::Run on Windows.
Adjust vcregress.pl to not replace PERL5LIB completely, because
otherwise one cannot easily install IPC::Run in a non-standard location.
Michael Paquier, reviewed by Noah Misch, some additional tweaking by me.
This option specifies a replication slot for WAL streaming (-X stream),
so that there can be continuous replication slot use between WAL
streaming during the base backup and the start of regular streaming
replication.
Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
When DefineQueryRewrite() is about to convert a table to a view, it checks
the table for features unavailable to views. For example, it rejects tables
having triggers. It omits to reject tables having relrowsecurity or a
pg_policy record. Fix that. To facilitate the repair, invent
relation_has_policies() which indicates the presence of policies on a
relation even when row security is disabled for that relation.
Reported by Noah Misch. Patch by me, review by Stephen Frost. Back-patch
to 9.5 where RLS was introduced.
CreatePolicy() and AlterPolicy() omit to create a pg_shdepend entry for
each role in the TO clause. Fix this by creating a new shared dependency
type called SHARED_DEPENDENCY_POLICY and assigning it to each role.
Reported by Noah Misch. Patch by me, reviewed by Alvaro Herrera.
Back-patch to 9.5 where RLS was introduced.
The previous code resulted in memory access beyond the path bounds. The
cure is to move it into a code branch that checks that the value of
lex_level is within the correct bounds.
Bug reported and diagnosed by Piotr Stefaniak.
Don't print a WARNING if we get ESRCH from a kill() that's attempting
to cancel an autovacuum worker. It's possible (and has been seen in the
buildfarm) that the worker is already gone by the time we are able to
execute the kill, in which case the failure is harmless. About the only
plausible reason for reporting such cases would be to help debug corrupted
lock table contents, but this is hardly likely to be the most important
symptom if that happens. Moreover issuing a WARNING might scare users
more than is warranted.
Also, since sending a signal to an autovacuum worker is now entirely a
routine thing, and the worker will log the query cancel on its end anyway,
reduce the message saying we're doing that from LOG to DEBUG1 level.
Very minor cosmetic cleanup as well.
Since the main practical reason for doing this is to avoid unnecessary
buildfarm failures, back-patch to all active branches.
As pointed out by Tom, since HEAD has progressed beyond 9.5 in terms of
its catalog, we need to be sure catversion of HEAD is advanced beyond
that of 9.5. Corrects my mistake in the pg_stats view commit cfa928ff.
The pg_stats view is supposed to be restricted to only show rows
about tables the user can read. However, it sometimes can leak
information which could not otherwise be seen when row level security
is enabled. Fix that by not showing pg_stats rows to users that would
be subject to RLS on the table the row is related to. This is done
by creating/using the newly introduced SQL visible function,
row_security_active().
Along the way, clean up three call sites of check_enable_rls(). The second
argument of that function should only be specified as other than
InvalidOid when we are checking as a different user than the current one,
as in when querying through a view. These sites were passing GetUserId()
instead of InvalidOid, which can cause the function to return incorrect
results if the current user has the BYPASSRLS privilege and row_security
has been set to OFF.
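As a sketch of that convention (relid and viewowner are illustrative
variables, not from the patch):

    /* Checking as the current user: pass InvalidOid, not GetUserId(). */
    rls = check_enable_rls(relid, InvalidOid, false);

    /* Checking as a different user, e.g. the owner of a view. */
    rls = check_enable_rls(relid, viewowner, false);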
Additionally fix a bug causing RI Trigger error messages to unintentionally
leak information when RLS is enabled, and other minor cleanup and
improvements. Also add WITH (security_barrier) to the definition of pg_stats.
Bumped CATVERSION due to new SQL functions and pg_stats view definition.
Back-patch to 9.5 where RLS was introduced. Reported by Yaroslav.
Patch by Joe Conway and Dean Rasheed with review and input by
Michael Paquier and Stephen Frost.
While postgres' use of SSL renegotiation is a good idea in theory, it
turned out to not work well in practice. The specification and openssl's
implementation of it have led to several security issues. Postgres' use
of renegotiation also had its share of bugs.
Additionally OpenSSL has a bunch of bugs around renegotiation, reported
and open for years, that regularly lead to connections breaking with
obscure error messages. We tried increasingly complex workarounds to get
around these bugs, but we didn't find anything complete.
Since these connection breakages often lead to hard to debug problems,
e.g. spuriously failing base backups and significant latency spikes when
synchronous replication is used, we have decided to change the default
setting for ssl renegotiation to 0 (disabled) in the released
backbranches and remove it entirely in 9.5 and master.
Author: Andres Freund
Discussion: 20150624144148.GQ4797@alap3.anarazel.de
Backpatch: 9.5 and master, 9.0-9.4 get a different patch
This code was originally written as part of the parallel query effort, but
it seems to have independent value, because if we make one decision
about where to get a PGPROC when we allocate and then put it back on a
different list at backend-exit time, bad things happen. This isn't
just a theoretical risk; we fixed an actual problem of this type in
commit e280c630a8.
join_clause_is_movable_into() is approximate, in the sense that it might
sometimes return "false" when actually it would be valid to push the given
join clause down to the specified level. This is okay ... but there was
an Assert in get_joinrel_parampathinfo() that's only safe if the answers
are always exact. Comment out the Assert, and add a bunch of commentary
to clarify what's going on.
Per fuzz testing by Andreas Seltenreich. The added regression test is
a pretty silly query, but it's based on his crasher example.
Back-patch to 9.2 where the faulty logic was introduced.
This was broken in 1bc90f7a, which removed the thread-emulation. With modest
-j and -c settings the results were usually close enough that you wouldn't
notice it easily, but with a high enough thread count it would access
uninitialized memory and crash.
Per report from Andres Freund offlist.
To avoid a race condition where the relation being COPY'd could be
changed into a view or otherwise modified, keep the original lock
on the relation. Further, fully qualify the relation when building
up the query.
Also remove the poorly thought-out Assert() and check the entire
relationOids list as, post-RLS, there can certainly be multiple
relations involved and the planner does not guarantee their ordering.
Per discussion with Noah and Andres.
Back-patch to 9.5 where RLS was introduced.
Fix additional bogosity in commit 9029f4b374. Include the
BackendSslStatusBuffer in the BackendStatusShmemSize calculation,
avoid ugly and error-prone casts to char* and back, put related
code stanzas into a consistent order (and fix a couple of previous
instances of that sin). All cosmetic except for the size oversight.
On some platforms, notably ARM and PowerPC, 'char' is unsigned by
default. This fixes an assertion failure at WAL replay on such platforms.
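The hazard in miniature (an illustrative standalone program, not part of
the patch):

    #include <stdio.h>

    int
    main(void)
    {
        char    c = 0xFF;   /* -1 where char is signed, 255 where unsigned */

        if (c < 0)
            printf("char is signed here (typical x86)\n");
        else
            printf("char is unsigned here (typical ARM/PowerPC)\n");
        return 0;
    }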
Reported by Noah Misch. Backpatch to 9.5, where this was broken.
It does currently, and I don't see us changing that any time soon, but we
don't make that assumption anywhere else.
Per Tom Lane's suggestion. Backpatch to 9.2, like the previous patch that
added this assumption.
XLogReaderFree failed to free the per-block data buffers when they
happened to not be used by the latest read WAL record.
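The fix is essentially to walk every per-block buffer unconditionally,
rather than only those the last record marked in use; roughly (a sketch
against the 9.5-era XLogReaderState layout):

    int     block_id;

    for (block_id = 0; block_id <= XLR_MAX_BLOCK_ID; block_id++)
    {
        if (state->blocks[block_id].data)
            pfree(state->blocks[block_id].data);
    }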
Michael Paquier. Backpatch to 9.5, where the per-block buffers were added.
In GIN, an all-zeros page would be leaked forever, and never reused. Just
add them to the FSM in vacuum, and they will be reinitialized when grabbed
from the FSM. On master and 9.5, attempting to access the page's opaque
struct also caused an assertion failure, although that was otherwise
harmless.
Reported by Jeff Janes. Backpatch to all supported versions.
SP-GiST initialized an all-zeros page at vacuum, but that was not
WAL-logged, which is not safe. You might get a torn page write, when it gets
flushed to disk, and end up with a half-initialized index page. To fix,
leave it in the all-zeros state, and add it to the FSM. It will be
initialized when reused. Also don't set the page-deleted flag when recycling
an empty page. That was also not WAL-logged, and a torn write of that would
cause the page to have an invalid checksum.
Backpatch to 9.2, where SP-GiST indexes were added.
That was otherwise harmless, but tripped the new assertion in
PageGetSpecialPointer().
Reported by Amit Langote. Backpatch to 9.5, where the assertion was added.
There is no full discussion of speculative insertions in the executor
README. There is a high-level explanation in execIndexing.c, but it doesn't
seem necessary to refer to it from here.
Peter Geoghegan
I missed a restriction that commit f4abd0241d
should have enforced: we can't pull up an empty-FROM subquery if it's under
an outer join, because then we'd need to wrap its output columns in
PlaceHolderVars. As the code currently stands, the PHVs end up with empty
relid sets, which doesn't work (and is correctly caught by an Assert).
It's possible that this could be fixed by assigning the PHVs the relid
sets of the parent FromExpr/JoinExpr, but getting that to work is more
complication than I care to add right now; indeed it's likely that
we'll never bother, since pulling up empty-FROM subqueries is a rather
marginal optimization anyway.
Per report from Andreas Seltenreich. Back-patch to 9.5 where the faulty
code was added.
The planner generally expects that the estimated rowcount of any relation
is at least one row, *unless* it has been proven empty by constraint
exclusion or similar mechanisms, which is marked by installing a dummy path
as the rel's cheapest path (cf. IS_DUMMY_REL). When I split up
allpaths.c's processing of base rels into separate set_base_rel_sizes and
set_base_rel_pathlists steps, the intention was that dummy rels would get
marked as such during the "set size" step; this is what justifies an Assert
in indxpath.c's get_loop_count that other relations should either be dummy
or have positive rowcount. Unfortunately I didn't get that quite right
for append relations: if all the child rels have been proven empty then
set_append_rel_size would come up with a rowcount of zero, which is
correct, but it didn't then do set_dummy_rel_pathlist. (We would have
ended up with the right state after set_append_rel_pathlist, but that's
too late, if we generate indexpaths for some other rel first.)
In addition to fixing the actual bug, I installed an Assert enforcing this
convention in set_rel_size; that then allows simplification of a couple
of now-redundant tests for zero rowcount in set_append_rel_size.
Also, to cover the possibility that third-party FDWs have been careless
about not returning a zero rowcount estimate, apply clamp_row_est to
whatever an FDW comes up with as the rows estimate.
Per report from Andreas Seltenreich. Back-patch to 9.2. Earlier branches
did not have the separation between set_base_rel_sizes and
set_base_rel_pathlists steps, so there was no intermediate state where an
appendrel would have had inconsistent rowcount and pathlist. It's possible
that adding the Assert to set_rel_size would be a good idea in older
branches too; but since they're not under development any more, it's likely
not worth the trouble.
ON CONFLICT unique index inference had a thinko that could affect cases
where the user-supplied inference clause required that an attribute
match a particular (user specified) collation and/or opclass.
infer_collation_opclass_match() has to check for opclass and/or
collation matches and that the attribute is in the list of attributes or
expressions known to be in the definition of the index under
consideration. The bug was that these two conditions weren't necessarily
evaluated for the same index attribute.
Author: Peter Geoghegan
Discussion: CAM3SWZR4uug=WvmGk7UgsqHn2MkEzy9YU-+8jKGO4JPhesyeWg@mail.gmail.com
Backpatch: 9.5, where ON CONFLICT was introduced
Previously nested grouping set specifications accidentally weren't
flattened, but instead contained the nested specification as an element
in the outer list.
Fix this by, as actually documented in comments, concatenating the
nested set specification into the outer one. Also add tests to prevent
this from breaking again.
Author: Andrew Gierth, with tests from Jeevan Chalke
Reported-By: Jeevan Chalke
Discussion: CAM2+6=V5YvuxB+EyN4iH=GbD-XTA435TCNvnDFSD--YvXs+pww@mail.gmail.com
Backpatch: 9.5, where grouping sets were introduced
Previously we disallowed pushing down quals to WHERE in the presence of
grouping sets. That's overly restrictive.
We now instead copy quals to WHERE if applicable, leaving the
one in HAVING in place. That's because, at that stage of the planning
process, it's nontrivial to determine if it's safe to remove the one in
HAVING.
Author: Andrew Gierth
Discussion: 874mkt3l59.fsf@news-spur.riddles.org.uk
Backpatch: 9.5, where grouping sets were introduced. This isn't exactly
a bugfix, but it seems better to keep the branches in sync at this point.
Previously GROUPING() was not recognized as an aggregate expression,
erroneously allowing the planner to move it from HAVING to WHERE.
Author: Jeevan Chalke
Reviewed-By: Andrew Gierth
Discussion: CAM2+6=WG9omG5rFOMAYBweJxmpTaapvVp5pCeMrE6BfpCwr4Og@mail.gmail.com
Backpatch: 9.5, where grouping sets were introduced
The previous coding frequently failed to fail because, for one, it's
unusual to have rollup clauses with only one column, and, for another,
the wrong mapping sometimes didn't cause obvious problems.
Author: Jeevan Chalke
Reviewed-By: Andrew Gierth
Discussion: CAM2+6=W=9=hQOipH0HAPbkun3Z3TFWij_EiHue0_6UX=oR=1kw@mail.gmail.com
Backpatch: 9.5, where grouping sets were introduced
Some of the older OS X critters in the buildfarm are failing regression,
with symptoms showing that a request for 100% sampling in BERNOULLI or
SYSTEM methods actually gets only around 50% of the table. gdb revealed
that the computation of the "cutoff" number was producing 0x7FFFFFFF
rather than the expected 0x100000000. Inspecting the assembly code,
it looks like gcc is trying to use lrint() instead of rint() and then
fumbling the conversion from long double to uint64. This seems like a
clear compiler bug, but assigning the intermediate result into a plain
double variable works around it, so let's just do that. (Another idea
would be to give up one bit of hash width so that we don't need to use
a uint64 cutoff, but let's see if this is enough.)
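A standalone illustration of the workaround (names are illustrative, not
the committed code):

    #include <math.h>
    #include <stdint.h>

    static uint64_t
    sample_cutoff(double fraction)
    {
        /*
         * Store rint()'s result in a plain double before converting, so
         * the compiler cannot substitute its buggy long-double lrint()
         * shortcut for the cast.
         */
        double  cutoff = rint(((double) UINT32_MAX + 1) * fraction);

        return (uint64_t) cutoff;
    }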
This was broken by commit 0e7e355f27 and
friends, which ignored the fact that gzopen() will treat the "-" in a
mode string like "wb-1" as an invalid character, which it ignores, and
the "1" as a flag for compression level 1. Now, when this value is
encountered, no compression level flag is passed to gzopen, leaving it
to use the zlib default.
Also, enforce the documented allowed range for pg_dump's -Z option,
namely 0 .. 9, and remove some consequently dead code from
pg_backup_tar.c.
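In outline, the corrected mode-string construction looks like this (an
illustrative fragment, not the committed code):

    char    mode[8];

    if (compression != Z_DEFAULT_COMPRESSION)   /* i.e. not -1 */
        snprintf(mode, sizeof(mode), "wb%d", compression);
    else
        strcpy(mode, "wb");     /* no digit: zlib picks its default */

    fp = gzopen(path, mode);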
Problem reported by Marc Mamin.
Backpatch to 9.1, like the patch that introduced the bug.
Any error other than ENOENT is a bit suspicious here, and perhaps should
not be grounds for assuming the postmaster has failed. For the moment
though, just report it, and don't change the behavior otherwise. The
intent is mainly to try to determine why we are seeing intermittent
failures in this area on some buildfarm members.
Back-patch to 9.5 where some of these failures have happened.
New FK relationships for pg_transform. Also findoidjoins now detects a few
relationships it didn't before for pre-existing catalogs, as a result of
new regression tests leaving entries in those catalogs that weren't there
before.
The original implementation of TABLESAMPLE modeled the tablesample method
API on index access methods, which wasn't a good choice because, without
specialized DDL commands, there's no way to build an extension that can
implement a TSM. (Raw inserts into system catalogs are not an acceptable
thing to do, because we can't undo them during DROP EXTENSION, nor will
pg_upgrade behave sanely.) Instead adopt an API more like procedural
language handlers or foreign data wrappers, wherein the only SQL-level
support object needed is a single handler function identified by having
a special return type. This lets us get rid of the supporting catalog
altogether, so that no custom DDL support is needed for the feature.
Adjust the API so that it can support non-constant tablesample arguments
(the original coding assumed we could evaluate the argument expressions at
ExecInitSampleScan time, which is undesirable even if it weren't outright
unsafe), and discourage sampling methods from looking at invisible tuples.
Make sure that the BERNOULLI and SYSTEM methods are genuinely repeatable
within and across queries, as required by the SQL standard, and deal more
honestly with methods that can't support that requirement.
Make a full code-review pass over the tablesample additions, and fix
assorted bugs, omissions, infelicities, and cosmetic issues (such as
failure to put the added code stanzas in a consistent ordering).
Improve EXPLAIN's output of tablesample plans, too.
Back-patch to 9.5 so that we don't have to support the original API
in production.
UPDATE ... WHERE CURRENT OF would not work in conjunction with
RLS. Arrange to allow the CURRENT OF expression to be pushed down.
Issue noted by Peter Geoghegan. Patch by Dean Rasheed. Back patch
to 9.5 where RLS was introduced.
The wrong is_null flag was being passed to datum_to_json. Also, null
object key values are not permitted, and this was not being checked
for. Add regression tests covering these cases, and also add those tests
to the json set, even though it was doing the right thing.
Fixes bug #13514, initially diagnosed by Tom Lane.
Handling of assigned-to expressions with indirection (e.g. set f1[1] =
3) was broken for ON CONFLICT DO UPDATE. The problem was that
ParseState was consulted to determine if an INSERT-appropriate or
UPDATE-appropriate behavior should be used when transforming expressions
with indirections. When the wrong path was taken the old row was
substituted with NULL, leading to wrong results.
To fix, remove p_is_update and only use p_is_insert to decide how to
transform the assignment expression, and set p_is_insert while parsing
the ON CONFLICT statement. This isn't particularly pretty, but it's not
any worse than before.
Author: Peter Geoghegan, slightly edited by me
Discussion: CAM3SWZS8RPvA=KFxADZWw3wAHnnbxMxDzkEC6fNaFc7zSm411w@mail.gmail.com
Backpatch: 9.5, where the feature was introduced
dbf2ec1a changed make check so that the installation logs get directed
to stdout and stderr. Per discussion on -hackers, this patch restores
saving them to a file. They are now saved in /tmp_install/log, which is
created once per invocation of any make target doing regression tests.
Along the way, add a missing /log/ entry to test_ddl_deparse's
.gitignore.
Michael Paquier.
If there were no subtransactions (or multixacts) active, we would calculate
the oldestxid == next xid. That's correct, but if next XID happens to be
on the next pg_subtrans (pg_multixact) page, the page does not exist yet,
and SimpleLruTruncate will produce an "apparent wraparound" warning. The
warning is harmless in this case, but looks very alarming to users.
Backpatch to all supported versions. Patch and analysis by Thomas Munro.
The TABLESAMPLE and row security patches each overlooked this function,
though their errors of omission were opposite: RLS failed to zero out the
securityQuals field, leading to wasteful copying of useless expression
trees in finished plans, while TABLESAMPLE neglected to add a comment
saying that it intentionally *isn't* deleting the tablesample subtree.
There probably should be a similar comment about ctename, too.
Back-patch as appropriate.
Remove HeapScanDescData.rs_initblock, which wasn't being used for anything
in the final version of the patch.
Fix IndexBuildHeapScan so that it supports syncscan again; the patch
broke synchronous scanning for index builds by forcing rs_startblk
to zero even when the caller did not care about that and had asked
for syncscan.
Add some commentary and usage defenses to heap_setscanlimits().
Fix heapam so that asking for rs_numblocks == 0 does what you would
reasonably expect. As coded it amounted to requesting a whole-table
scan, because those "--x <= 0" tests on an unsigned variable would
behave surprisingly.
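The surprise in miniature (an illustrative standalone program):

    #include <stdint.h>
    #include <stdio.h>

    int
    main(void)
    {
        uint32_t    numblocks = 0;  /* caller asked to scan zero blocks */

        /*
         * For unsigned x, "--x <= 0" is true only when x was exactly 1;
         * decrementing 0 wraps around to 4294967295, so a request for
         * zero blocks behaved like a request to scan the whole table.
         */
        if (--numblocks <= 0)
            printf("stop scanning\n");
        else
            printf("keep scanning, numblocks is now %u\n", numblocks);
        return 0;
    }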
initdb.log and postmaster.log were moved to within the temporary instance
path by commit dcae5fa. This directory now gets removed at the end
of the run of pg_regress when there are no failures found, which makes
analysis of after-run issues difficult in some cases, and reduces the
output verbosity of the buildfarm after a run.
Fix by Michael Paquier
Backpatch to 9.5
There was already a sanity-check in the other direction: if a page was
marked with WILL_INIT, it had to be initialized by the redo routine. It's
not strictly necessary for correctness that a page is marked with WILL_INIT
if it's going to be initialized at redo, but it's a missed optimization if
nothing else.
Fix a few instances of this issue in SP-GiST, where a block in WAL record
was not marked with WILL_INIT, but was in fact always initialized at redo.
We were creating a full-page image of the page unnecessarily in those
cases.
Backpatch to 9.5, where the new WILL_INIT flag was added.
Since those role specifiers are checked in the grammar, there's no need
for the old checks to remain in place after 31eae6028e. Remove them.
Backpatch to 9.5.
Noted and patch by Jeevan Chalke
As reported by Bill Parker, PL/Tcl did not validate some malloc() calls
against NULL return. Fix by using palloc() in a new long-lived memory
context instead. This allows us to simplify error handling too, by
simply deleting the memory context instead of doing retail frees.
There's still a lot that could be done to improve PL/Tcl's memory
handling ...
This is pretty ancient, so backpatch all the way back.
Author: Michael Paquier and Álvaro Herrera
Discussion: https://www.postgresql.org/message-id/CAFrbyQwyLDYXfBOhPfoBGqnvuZO_Y90YgqFM11T2jvnxjLFmqw@mail.gmail.com
This removes some info about support procedures being used, which was
obsoleted by commit db5f98ab4f, and adds some more documentation
on how to create new opclasses using the Minmax infrastructure.
(Hopefully we can get something similar for Inclusion as well.)
In passing, fix some obsolete mentions of "mmtuples" in source code
comments.
Backpatch to 9.5, where BRIN was introduced.
In passing, also move AT_ReAddComment to a more logical position in the
enum, after all the Constraint-related subcommands.
This fixes a compiler warning, added by commit e42375fc. Backpatch to 9.5,
like that patch.
In the previous coding, timeout would be noticed and reported only when
poll() or socket() returned zero (or the equivalent behavior on Windows).
Ordinarily that should work well enough, but it seems conceivable that we
could get into a state where poll() always returns a nonzero value --- for
example, if it is noticing a condition on one of the file descriptors that
we do not think is reason to exit the loop. If that happened, we'd be in a
busy-wait loop that would fail to terminate even when the timeout expires.
We can make this more robust at essentially no cost, by deciding to exit
of our own accord if we compute a zero or negative time-remaining-to-wait.
Previously the code noted this but just clamped the time-remaining to zero,
expecting that we'd detect timeout on the next loop iteration.
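In outline, the loop now decides the timeout itself (a sketch with
illustrative names; elapsed_ms is a hypothetical helper):

    for (;;)
    {
        long    remaining = timeout_ms - elapsed_ms(start_time);

        if (remaining <= 0)
        {
            result |= WL_TIMEOUT;   /* exit of our own accord ... */
            break;                  /* ... rather than clamping to zero */
        }
        rc = poll(pfds, npfds, (int) remaining);
        /* handle readiness and EINTR as before */
    }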
Back-patch to 9.2. While 9.1 had a version of WaitLatchOrSocket, it was
primitive compared to later versions, and did not guarantee reliable
detection of timeouts anyway. (Essentially, this is a refinement of
commit 3e7fdcffd6, which was back-patched only as far as 9.2.)
Previously, there was an inconsistency across json/jsonb operators that
operate on datums containing JSON arrays -- only some operators
supported negative array count-from-the-end subscripting. Specifically,
only a new-to-9.5 jsonb deletion operator had support (the new "jsonb -
integer" operator). This inconsistency seemed likely to be
counter-intuitive to users. To fix, allow all places where the user can
supply an integer subscript to accept a negative subscript value,
including path-orientated operators and functions, as well as other
extraction operators. This will need to be called out as an
incompatibility in the 9.5 release notes, since it's possible that users
are relying on certain established extraction operators changed here
yielding NULL in the event of a negative subscript.
For the json type, this requires adding a way of cheaply getting the
total JSON array element count ahead of time when parsing arrays with a
negative subscript involved, necessitating an ad-hoc lex and parse.
This is followed by a "conversion" from a negative subscript to its
equivalent positive-wise value using the count. From there on, it's as
if a positive-wise value was originally provided.
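The conversion itself is simple once the element count is in hand (an
illustrative helper, not the committed code):

    /* Map a possibly-negative subscript onto its positive equivalent. */
    static int
    effective_subscript(int subscript, int nelements)
    {
        if (subscript < 0)
            subscript += nelements;     /* -1 over 5 elements becomes 4 */
        return subscript;               /* caller still range-checks */
    }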
Note that there is still a minor inconsistency here across jsonb
deletion operators. Unlike the aforementioned new "-" deletion operator
that accepts an integer on its right-hand side, the new "#-"
path-orientated deletion variant does not throw an error when it appears
that an array subscript (input that could be recognized as an integer
literal) is being used on an object, which is wrong-headed. The reason
for not being stricter is that it could be the case that an object pair
happens to have a key value that looks like an integer; in general,
these two possibilities are impossible to differentiate with rhs path
text[] argument elements. However, we still don't allow the "#-"
path-orientated deletion operator to perform array-style subscripting.
Rather, we just return the original left operand value in the event of a
negative subscript (which seems analogous to how the established
"jsonb/json #> text[]" path-orientated operator may yield NULL in the
event of an invalid subscript).
In passing, make SetArrayPath() stricter about not accepting cases where
there are trailing non-numeric garbage bytes rather than a clean NUL
byte. This means, for example, that strings like "10e10" are now not
accepted as an array subscript of 10 by some new-to-9.5 path-orientated
jsonb operators (e.g. the new #- operator). Finally, remove dead code
for jsonb subscript deletion; arguably, this should have been done in
commit b81c7b409.
Peter Geoghegan and Andrew Dunstan
In commit 1345cc67bb, I introduced caching
of expressions representing type-cast operations into plpgsql. However,
I supposed that I could cache both the expression trees and the evaluation
state trees derived from them for the life of the session. This doesn't
work, because we execute the expressions in plpgsql's simple_eval_estate,
which has an ecxt_per_query_memory that is only transaction-lifespan.
Therefore we can end up putting pointers into the evaluation state tree
that point to transaction-lifespan memory; in particular this happens if
the cast expression calls a SQL-language function, as reported by Geoff
Winkless.
The minimum-risk fix seems to be to treat the state trees the same way
we do for "simple expression" trees in plpgsql, ie create them in the
simple_eval_estate's ecxt_per_query_memory, which means recreating them
once per transaction.
Since I had to introduce bookkeeping overhead for that anyway, I bought
back some of the added cost by sharing the read-only expression trees
across all functions in the session, instead of using a per-function
table as originally. The simple-expression bookkeeping takes care of
the recursive-usage risk that I was concerned about avoiding before.
At some point we should take a harder look at how all this works,
and see if we can't reduce the amount of tree reinitialization needed.
But that won't happen for 9.5.
Not only did this test fail to test what it was supposed to test, but it
left a user definition lying around, which caused subsequent runs of the
regression tests to fail.
This tells you what fraction of NOTIFY's queue is currently filled.
Brendan Jurd, reviewed by Merlin Moncure and Gurjeet Singh. A few
further tweaks by me.
xlc provides "long long" unconditionally at C99-compatible language
levels, and this option provokes a warning. The warning interferes with
"configure" tests that fail in response to any warning. Notably, before
commit 85a2a8903f, it interfered with the
test for -qnoansialias. Back-patch to 9.0 (all supported versions).
It's standard for quicksort implementations, after having partitioned the
input into two subgroups, to recurse to process the smaller partition and
then handle the larger partition by iterating. This method guarantees
that no more than log2(N) levels of recursion can be needed. However,
Bentley and McIlroy argued that checking to see which partition is smaller
isn't worth the cycles, and so their code doesn't do that but just always
recurses on the left partition. In most cases that's fine; but with
worst-case input we might need O(N) levels of recursion, and that means
that qsort could be driven to stack overflow. Such an overflow seems to
be the only explanation for today's report from Yiqing Jin of a SIGSEGV
in med3_tuple while creating an index of a couple billion entries with a
very large maintenance_work_mem setting. Therefore, let's spend the few
additional cycles and lines of code needed to choose the smaller partition
for recursion.
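The shape of the fix is the classic one (a sketch over int arrays;
partition() stands in for the real partitioning step):

    static void
    quicksort(int *a, size_t n)
    {
        while (n > 1)
        {
            size_t  p = partition(a, n);    /* pivot ends up at index p */

            if (p < n - p - 1)
            {
                quicksort(a, p);    /* recurse on the smaller, left side */
                a += p + 1;         /* then iterate on the larger one */
                n -= p + 1;
            }
            else
            {
                quicksort(a + p + 1, n - p - 1);  /* right side smaller */
                n = p;
            }
        }
    }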
Also, fix up the qsort code so that it properly uses size_t not int for
some intermediate values representing numbers of items. This would only
be a live risk when sorting more than INT_MAX bytes (in qsort/qsort_arg)
or tuples (in qsort_tuple), which I believe would never happen with any
caller in the current core code --- but perhaps it could happen with
call sites in third-party modules? In any case, this is trouble waiting
to happen, and the corrected code is probably if anything shorter and
faster than before, since it removes sign-extension steps that had to
happen when converting between int and size_t.
In passing, move a couple of CHECK_FOR_INTERRUPTS() calls so that it's
not necessary to preserve the value of "r" across them, and prettify
the output of gen_qsort_tuple.pl a little.
Back-patch to all supported branches. The odds of hitting this issue
are probably higher in 9.4 and up than before, due to the new ability
to allocate sort workspaces exceeding 1GB, but there's no good reason
to believe that it's impossible to crash older branches this way.
The result closely resembles linking of these modules for the "win32"
port. Augment the $(exports_file) header so the file is also usable as
an import file. Unfortunately, relocating an AIX installation will now
require adding $(pkglibdir) to LD_LIBRARY_PATH. Back-patch to 9.5,
where the modules were introduced.
This allows PostgreSQL modules and their dependencies to have undefined
symbols, resolved at runtime. Perl module shared objects rely on that
in Perl 5.8.0 and later. This fixes the crash when PL/PerlU loads such
modules, as the hstore_plperl test suite does. Module authors can link
using -Wl,-G to permit undefined symbols; by default, linking will fail
as it has. Back-patch to 9.0 (all supported versions).
Other options cannot be changed, as it's not totally clear if cached plans
would need to be invalidated if one of the other options changes. Selectivity
estimator functions only change plan costs, not correctness of plans, so
those should be safe.
Original patch by Uriy Zhuravlev, heavily edited by me.
In the test query I added for ALTER TABLE retaining comments, the order of
the result rows was not stable, and varied across systems. Add an ORDER BY
to make the order predictable. This should fix the buildfarm failures.
When a column's datatype is changed, ATExecAlterColumnType() rebuilds all
the affected indexes and constraints, and the comments from the old
indexes/constraints were not carried over.
To fix, create a synthetic COMMENT ON command in the work queue, to re-add
any comments on constraints. For indexes, there's a comment field in
IndexStmt that is used.
This fixes bug #13126, reported by Kirill Simonov. Original patch by
Michael Paquier, reviewed by Petr Jelinek and me. This bug is present in
all versions, but only backpatch to 9.5. Given how minor the issue is, it
doesn't seem worth the work and risk to backpatch further than that.
The code in ATPostAlterTypeParse was very deeply indented, mostly because
there were two nested switch-case statements, which add a lot of
indentation. Use if-else blocks instead, to make the code less indented
and more readable.
This is in preparation for the next patch, which makes some actual changes to
the function. These cosmetic parts have been separated to make it easier
to see the real changes in the other patch.
Per Coverity (not that any of these are so non-obvious that they should not
have been caught before commit). The extent of leakage is probably minor
to unnoticeable, but a leak is a leak. Back-patch as necessary.
Michael Paquier
pg_receivexlog and pg_recvlogical error out when --create-slot is
specified and a slot with the same name already exists. In some cases,
especially with pg_receivexlog, that's rather annoying and requires
additional scripting.
Backpatch to 9.5 as slot control functions have newly been added to
pg_receivexlog, and there doesn't seem to be much point in leaving it in a
less
useful state.
Discussion: 20150619144755.GG29350@alap3.anarazel.de
As noted by Noah Misch, CreatePolicy() and AlterPolicy() omit to call
assign_expr_collations() on the node trees. Fix the omission and add
his test case to the rowsecurity regression test.
Ordinarily, a failure (unexpected exit status) of the startup subprocess
should be considered fatal, so the postmaster should just close up shop
and quit. However, if we sent the startup process a SIGQUIT or SIGKILL
signal, the failure is hardly "unexpected", and we should attempt restart;
this is necessary for recovery from ordinary backend crashes in hot-standby
scenarios. I attempted to implement the latter rule with a two-line patch
in commit 442231d7f7, but it now emerges that
that patch was a few bricks shy of a load: it failed to distinguish the
case of a signaled startup process from the case where the new startup
process crashes before reaching database consistency. That resulted in
infinitely respawning a new startup process only to have it crash again.
To handle this properly, we really must track whether we have sent the
*current* startup process a kill signal. Rather than add yet another
ad-hoc boolean to the postmaster's state, I chose to unify this with the
existing RecoveryError flag into an enum tracking the startup process's
state. That seems more consistent with the postmaster's general state
machine design.
Back-patch to 9.0, like the previous patch.
When enabling wal_compression, there is a risk of leaking data, similarly
to the BREACH and CRIME attacks on SSL, where the compression ratio of
a full page image gives a hint about the existing data of that page.
This vulnerability is quite cumbersome to exploit in practice, but doable.
So this patch makes wal_compression PGC_SUSET in order to prevent
non-superusers from enabling it and exploiting the vulnerability even
when the DBA takes the risk seriously and has disabled it in
postgresql.conf.
Back-patch to 9.5 where wal_compression was introduced.
Create a log file for each test run. Stdout and stderr of the test script,
as well as any subprocesses run as part of the test, are redirected to
the log file. This makes it a lot easier to debug test failures. Also print
the test output (ok 12 - ... messages) to the log file, and the command
line of any external programs executed with the system_or_bail and run_log
functions, which also makes it easier to debug failing tests.
Modify some of the pg_ctl and other command invocations to not use 'silent'
or 'quiet' options, and don't redirect output to /dev/null, so that you get
all the information in the log instead.
In passing, construct some command lines in a way that works if $tempdir
contains quote-characters. I haven't systematically gone through all of
them or tested that, so I don't know if this is enough to make that work.
pg_rewind tests had a custom mechanism for creating a similar log file. Use
the new generic facility instead.
Michael Paquier and me.
This formalizes a decision implicit in commit
4ea51cdfe8 and adds clean detection of
affected systems. Vendor updates are available for each such known bug.
Back-patch to 9.5, where the aforementioned commit first appeared.
POSIX does not specify the -q option, and many implementations do not
offer it. Don't bother changing the MSVC build system, because having
non-GNU diff on Windows is vanishingly unlikely. Back-patch to 9.2,
where this invocation was introduced.
The psql crash happened when no current connection existed. (The second
new check is optional given today's undocumented NULL argument handling
in PQhost() etc.) Back-patch to 9.0 (all supported versions).
Test the interactions with permissions and LOCK TABLE. Specifically
ROW EXCLUSIVE, ACCESS SHARE, and ACCESS EXCLUSIVE modes against
SELECT, INSERT, UPDATE, DELETE, and TRUNCATE permissions. Discussed
by Stephen Frost and Michael Paquier, patch by the latter. Backpatch
to 9.5 where matching behavior was first committed.
SUSv2-era shells don't set the PWD variable, though anything more modern
does. In the buildfarm environment this could lead to test.sh executing
with PWD pointing to $HOME or another high-level directory, so that there
were conflicts between concurrent executions of the test in different
branch subdirectories. This appears to be the explanation for recent
intermittent failures on buildfarm members binturong and dingo (and might
well have something to do with the buildfarm script's failure to capture
log files from pg_upgrade tests, too).
To fix, just use `pwd` in place of $PWD. AFAICS test.sh is the only place
in our source tree that depended on $PWD. Back-patch to all versions
containing this script.
Per buildfarm. Thanks to Oskari Saarenmaa for diagnosing the problem.
If an allocation fails in the main message handling loop, pqParseInput3
or pqParseInput2, it should not be treated as "not enough data available
yet". Otherwise libpq will wait indefinitely for more data to arrive from
the server, and gets stuck forever.
This isn't a complete fix - getParamDescriptions and getCopyStart still
have the same issue, but it's a step in the right direction.
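Schematically, the parsing loop must distinguish the two failure modes (a
sketch, not the libpq source; oom_error is a hypothetical helper):

    buf = malloc(msglen);
    if (buf == NULL)
    {
        /*
         * Hard failure: report "out of memory" on the connection and
         * discard the message.  Returning "need more data" here would
         * leave libpq waiting forever for input that cannot help.
         */
        oom_error(conn);
        return;
    }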
Michael Paquier and me. Backpatch to all supported versions.
When spilling transaction data to disk a simple typo caused the output
file to be closed and reopened for every serialized change. That happens
to not have a huge impact on Linux, which is why it probably wasn't
noticed so far, but on Windows that appears to trigger actual disk
writes after every change. Not fun.
The bug fortunately does not have any impact besides speed. A change
could end up being in the wrong segment (last instead of next), but
since we read all files to the end, that's just ugly, not really
problematic. It's not a problem to upgrade, since transaction spill
files do not persist across restarts.
Bug: #13484
Reported-By: Olivier Gosseaume
Discussion: 20150703090217.1190.63940@wrigleys.postgresql.org
Backpatch to 9.4, where logical decoding was added.
The previous coding tried to handle possible failures when fsyncing a
tty or pipe fd by accepting EINVAL - but apparently some
platforms (Windows, OS X) don't reliably return that. So instead check
whether the output fd refers to a pipe or a tty when opening it.
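The approach amounts to classifying the fd once, up front (an
illustrative sketch):

    #include <sys/stat.h>
    #include <unistd.h>

    /* Decide at open time whether the output fd can be fsync'd at all. */
    static int
    fd_is_fsyncable(int fd)
    {
        struct stat st;

        if (isatty(fd))
            return 0;               /* a tty: don't try to fsync it */
        if (fstat(fd, &st) == 0 && S_ISFIFO(st.st_mode))
            return 0;               /* a pipe: likewise */
        return 1;
    }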
Reported-By: Olivier Gosseaume, Marko Tiikkaja
Discussion: 559AF98B.3050901@joh.to
Backpatch to 9.4, where pg_recvlogical was added.
Build.bat and vcregress.bat got similar treatment years ago. I'm not sure
why install.bat wasn't treated at the same time, but it seems like a good
idea anyway.
The immediate problem with the old install.bat was that it had quoting
issues, and wouldn't work if the target directory's name contained spaces.
This fixes that problem.
We're interested in the buffer size of the socket that's connected to the
client, not the one that's listening for new connections. It happened to
work, as the default buffer size is the same on both, but it was clearly
not right.
Spotted by Tom Lane
It's unnecessary to set it if the default is higher in the first place.
Furthermore, setting SO_SNDBUF disables the so-called "dynamic send
buffering" feature, which hurts performance further. This can be seen
especially when the network between the client and the server has high
latency.
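The guard looks roughly like this (illustrative; the actual change is
Windows-specific):

    int         oldsize;
    int         newsize = 32768;    /* illustrative target size */
    socklen_t   optlen = sizeof(oldsize);

    /*
     * Only raise SO_SNDBUF; a larger default is left alone, which also
     * keeps Windows' dynamic send buffering enabled.
     */
    if (getsockopt(sock, SOL_SOCKET, SO_SNDBUF,
                   (char *) &oldsize, &optlen) == 0 &&
        oldsize < newsize)
        setsockopt(sock, SOL_SOCKET, SO_SNDBUF,
                   (char *) &newsize, sizeof(newsize));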
Chen Huajun
This builds on commit 21dcda2713 by keeping
a plpgsql function's shared ParamListInfo's entries for simple variables
(PLPGSQL_DTYPE_VARs) valid at all times. That adds a few cycles to each
assignment to such variables, but saves significantly more cycles each time
they are used; so except in the pathological case of many dead stores, this
should always be a win. Initial testing says it's good for about a 10%
speedup of simple calculations; more in large functions with many datums.
We can't use this method for row/record references unfortunately, so what
we do for those is reset those ParamListInfo slots after use; which we
can skip doing unless some of them were actually evaluated during the
previous evaluation call. So this should frequently be a win as well,
while worst case is that it's similar cost to the previous approach.
Also, closer study suggests that the previous method of instantiating a
new ParamListInfo array per evaluation is actually probably optimal for
cursor-opening executor calls. The reason is that whatever is visible in
the array is going to get copied into the cursor portal via copyParamList.
So if we used the function's main ParamListInfo for those calls, we'd end
up with all of its DTYPE_VAR vars getting copied, which might well include
large pass-by-reference values that the cursor actually has no need for.
To avoid a possible net degradation in cursor cases, go back to creating
and filling a private ParamListInfo in those cases (which therefore will be
exactly the same speed as before 21dcda2713). We still get some benefit
out of this though, because this approach means that we only have to defend
against copyParamList's try-to-fetch-every-slot behavior in the case of an
unshared ParamListInfo; so plpgsql_param_fetch() can skip testing
expr->paramnos in the common case.
To ensure that the main ParamListInfo's image of a DTYPE_VAR datum is
always valid, all assignments to such variables are now funneled through
assign_simple_var(). But this makes for cleaner and shorter code anyway.
You can no longer use pgbench with multiple threads when compiled without
--enable-thread-safety. That's an acceptable limitation these days; it
still works fine with -j1, and all modern platforms support threads anyway.
This makes future maintenance and development of the code easier.
Fabien Coelho
There were two issues here. First, if a query got stuck so that it took
e.g. 5 seconds, and the progress interval was 1 second, no progress reports were
printed until the query returned. Fix so that we wake up specifically to
print the progress report. Secondly, if pgbench got stuck so that, despite
the above, it could not print a progress report on time, and enough time
passes that it's already time to print the next progress report, just skip
the one
that was missed. Before this patch, it would print the missed one with 0 TPS
immediately after the previous one.
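The catch-up rule reduces to advancing the target time past "now" (a
sketch with illustrative names):

    /*
     * After printing a report, schedule the next one; if whole intervals
     * were missed, skip them instead of reporting each with 0 TPS.
     */
    while (next_report <= now)
        next_report += progress_interval;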
Fabien Coelho. Backpatch to 9.4, where progress reports were added.
Commit de76884 changed an archive recovery so that the last WAL
segment with old timeline was renamed with suffix .partial. It should
have updated WAL-related utilities so that they can handle such
.partial WAL files, but we forgot that.
This patch changes pg_archivecleanup so that it can clean up even
archived WAL files with .partial suffix. Also it allows us to specify
.partial WAL file name as the command-line argument "oldestkeptwalfile".
This patch also changes pg_resetxlog so that it can remove .partial
WAL files in pg_xlog directory.
pg_xlogdump cannot handle .partial WAL files. Per discussion,
we decided only to document that limitation instead of adding a fix,
because a user can easily work around the limitation (i.e., just remove
the .partial suffix from the file name) and the fix seems complicated
for a very narrow use case.
Back-patch to 9.5 where the problem existed.
Review by Michael Paquier.
Discussion: http://www.postgresql.org/message-id/CAHGQGwGxMKnVHGgTfiig2Bt_2djec0in3-DLJmtg7+nEiidFdQ@mail.gmail.com
-t will now match views, foreign tables, materialized views, and sequences,
not only plain tables. This is more useful, and also more consistent with
the behavior of pg_dump's -t switch, which has always matched all relation
types.
We're still not there on matching pg_dump's behavior entirely, so mention
that in the docs.
Craig Ringer, reviewed by Pavel Stehule
Expose PG_VERSION_NUM (e.g., "90600") as a Make variable; but for
consistency with the other Make variables holding similar info,
call the variable just VERSION_NUM not PG_VERSION_NUM.
There was some discussion of making this value available as a pg_config
value as well. However, that would entail substantially more work than
this two-line patch. Given that there was not exactly universal consensus
that we need this at all, let's just do a minimal amount of work for now.
Michael Paquier, reviewed by Pavel Stehule
"TextDatumGetCString(PG_GETARG_TEXT_P(x))" is formally wrong: a text*
is not a Datum. Although this coding will accidentally fail to fail on
all known platforms, it risks leaking memory if a detoast step is needed,
unlike "TextDatumGetCString(PG_GETARG_DATUM(x))" which is what's used
elsewhere. Make pg_get_object_address() fall in line with other uses.
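Side by side, the two spellings (the second is the formally wrong one
being removed):

    char   *ok  = TextDatumGetCString(PG_GETARG_DATUM(0));  /* a Datum */
    char   *bad = TextDatumGetCString(PG_GETARG_TEXT_P(0)); /* text*, not Datum */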
Noted while reviewing two-arg current_setting() patch.
This allows convenient checking for existence of a GUC from SQL, which is
particularly useful when dealing with custom variables.
David Christensen, reviewed by Jeevan Chalke
These variants used the old-style 'n'/' ' NULL indicators. The new-style
functions have been available since version 8.1. That should be long enough
that if there is still any old external code using these functions, it
can just switch to the new functions without worrying about backwards
compatibility.
Peter Geoghegan
There's no point in trying to free every small allocation in these
programs that are used in a one-shot fashion, but these ones seem like
an improvement on readability grounds.
Michael Paquier, per Coverity report.