This complies with the perlcritic policy
Subroutines::RequireFinalReturn, which is a severity 4 policy. Since we
currently check only at severity level 5, the policy is raised to that
level until we move to level 4 or lower, so that any new infringements
will be caught.
A small cosmetic tidy-up of the pgperlcritic script is included.
Mike Blackwell
Discussion: https://postgr.es/m/CAESHdJpfFm_9wQnQ3koY3c91FoRQsO-fh02za9R3OEMndOn84A@mail.gmail.com
Print columns as "column C of <relation>" rather than "<relation> column
C". This seems to read noticeably better in English, as evidenced by the
regression test output changes, and the code change also makes it possible
for translators to adjust the phrase order in other languages.
Also change the output for OCLASS_DEFAULT from "default for %s" to
"default value for %s". This seems to read better and is also more
consistent with the output of, for instance, getObjectTypeDescription().
Kyotaro Horiguchi, per a complaint from me
Discussion: https://postgr.es/m/20180522.182020.114074746.horiguchi.kyotaro@lab.ntt.co.jp
getObjectDescription and getObjectIdentity failed to schema-qualify
the name of the published table, which is bad in getObjectDescription and
unforgivable in getObjectIdentity. Actually, getObjectIdentity failed to
emit the table's name at all unless "objname" output is requested, which
accidentally works for some (all?) extant callers but is clearly not the
intended API. Somebody had also not gotten the memo that the output of
getObjectIdentity is not to be translated.
To fix getObjectDescription, I made it call getRelationDescription, which
required refactoring the translatable string for the case, but is more
future-proof in case we ever publish relations that aren't plain tables.
While at it, I made the English output look like "publication of table X
in publication Y"; the added "of" seems to me to make it read much better.
Back-patch to v10 where publications were introduced.
Discussion: https://postgr.es/m/20180522.182020.114074746.horiguchi.kyotaro@lab.ntt.co.jp
Collations, conversions, extended statistics objects (in >= v10),
and all four types of text search objects have schema-qualified names.
getObjectDescription() ignored that and would emit just the base name of
the object, potentially producing wrong or at least highly misleading
output. Fix it to add the schema name whenever the object is not "visible"
in the current search path, as is the rule for other schema-qualifiable
object types.
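As a hedged illustration (the schema and collation names are made up), the
effect is visible through pg_describe_object():
    CREATE SCHEMA s;
    CREATE COLLATION s.mycoll (locale = 'C');
    SELECT pg_describe_object('pg_collation'::regclass, oid, 0)
      FROM pg_collation WHERE collname = 'mycoll';
    -- with s not in search_path, this now reports "collation s.mycoll"
    -- rather than just "collation mycoll"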
Although in common situations the output won't change, this seems to me
(tgl) to be a bug worthy of back-patching, hence do so.
Kyotaro Horiguchi, per a complaint from me
Discussion: https://postgr.es/m/20180522.182020.114074746.horiguchi.kyotaro@lab.ntt.co.jp
In commit 6bdf1303b, we ensured that power()/^ for float8 would honor
the NaN behaviors specified by POSIX standards released in this century,
ie NaN ^ 0 = 1 and 1 ^ NaN = 1. However, numeric_power() was not
touched and continued to follow the once-common behavior that every
case involving NaN input produces NaN. For consistency, let's switch
the numeric behavior to the modern spec in the same release that ensures
that behavior for float8.
(Note that while 6bdf1303b was initially back-patched, we later undid
that, concluding that any behavioral change should appear only in v11.)
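A sketch of the expected post-change behavior:
    SELECT 'NaN'::numeric ^ 0;   -- now 1, matching float8
    SELECT 1 ^ 'NaN'::numeric;   -- now 1
    SELECT 'NaN'::numeric ^ 2;   -- still NaN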
Discussion: https://postgr.es/m/10898.1526421338@sss.pgh.pa.us
Up to now, it's been safe for plpgsql to store TOAST pointers in its
variables because the ActiveSnapshot for whatever query called the plpgsql
function will surely protect such TOAST values from being vacuumed away,
even if the owning table rows are committed dead. With the introduction of
procedures, that assumption is no longer good in "non atomic" executions
of plpgsql code. We adopt the slightly brute-force solution of detoasting
all TOAST pointers at the time they are stored into variables, if we're in
a non-atomic context, just in case the owning row goes away.
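A hedged sketch of the hazard (table and procedure names are illustrative):
    CREATE TABLE big (t text);
    CREATE PROCEDURE p() LANGUAGE plpgsql AS $$
    DECLARE
      v text;
    BEGIN
      SELECT t INTO v FROM big;   -- v may be just a TOAST pointer
      DELETE FROM big;
      COMMIT;   -- non-atomic context: once the deletion commits, VACUUM
                -- could reclaim the TOAST data, so v must have been
                -- detoasted when it was stored
      RAISE NOTICE 'len = %', length(v);
    END;
    $$;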
Some care is needed to avoid long-term memory leaks, since plpgsql tends
to run with CurrentMemoryContext pointing to its call-lifespan context,
but we shouldn't assume that no memory is leaked by heap_tuple_fetch_attr.
In plpgsql proper, we can do the detoasting work in the "eval_mcontext".
Most of the code thrashing here is due to the need to add this capability
to expandedrecord.c as well as plpgsql proper. In expandedrecord.c,
we can't assume that the caller's context is short-lived, so make use of
the short-term sub-context that was already invented for checking domain
constraints. In view of this repurposing, it seems good to rename that
variable and associated code from "domain_check_cxt" to "short_term_cxt".
Peter Eisentraut and Tom Lane
Discussion: https://postgr.es/m/5AC06865.9050005@anastigmatix.net
canonicalize_ec_expression() is supposed to agree with coerce_type() as to
whether a RelabelType should be inserted to make a subexpression be valid
input for the operators of a given opclass. However, it did the wrong
thing with named-composite-type inputs to record_eq(): it put in a
RelabelType to RECORDOID, which the parser doesn't. In some cases this was
harmless because all code paths involving a particular equivalence class
did the same thing, but in other cases this would result in failing to
recognize a composite-type expression as being a member of an equivalence
class that it actually is a member of. The most obvious bad effect was to
fail to recognize that an index on a composite column could provide the
sort order needed for a mergejoin on that column, as reported by Teodor
Sigaev. I think there might be other, subtler, cases that result in
misoptimization. It also seems possible that an unwanted RelabelType
would sometimes get into an emitted plan --- but because record_eq and
friends don't examine the declared type of their input expressions, that
would not create any visible problems.
To fix, just treat RECORDOID as if it were a polymorphic type, which in
some sense it is. We might want to consider formalizing that a bit more
someday, but for the moment this seems to be the only place where an
IsPolymorphicType() test ought to include RECORDOID as well.
This has been broken for a long time, so back-patch to all supported
branches.
Discussion: https://postgr.es/m/a6b22369-e3bf-4d49-f59d-0c41d3551e81@sigaev.ru
Creating indexes on foreign tables is already forbidden, but local
partitioned indexes (commit 8b08f7d482) forgot to check for them. Add
a preliminary check to prevent wasting time.
Another school of thought says to allow the index to be created if it's
not a unique index; but since it may become possible to do better in the
future (enable indexing of foreign tables, somehow), we reject all cases
for now rather than painting ourselves into a corner, to avoid future
grief (a.k.a. backward-incompatible changes).
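A hedged sketch of the now-rejected case (server and table names are
illustrative):
    CREATE TABLE parted (a int) PARTITION BY LIST (a);
    CREATE FOREIGN TABLE parted_1 PARTITION OF parted
      FOR VALUES IN (1) SERVER some_server;
    CREATE INDEX ON parted (a);   -- now fails up front rather than after
                                  -- doing a pile of useless work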
Reported-by: Arseny Sher
Author: Amit Langote, Álvaro Herrera
Discussion: https://postgr.es/m/87sh71cakz.fsf@ars-thinkpad
match_clause_to_partition_key failed to consider COERCION_PATH_ARRAYCOERCE
cases in scalar-op-array expressions, so it was possible to crash the
server easily. To handle this case properly (i.e., prune partitions) we
would need to run a bit of executor code during planning. Maybe it can
be improved, but for now let's just not crash. Add a test case that
used to trigger the crash.
Author: Michaël Paquier
match_clause_to_partition_key failed to indicate that operators that
don't have a commutator in a btree opclass are unsupported. It is
possible for this to cause a crash later if such an operator is used in
a scalar-op-array expression. Add a test case that used to trigger the crash.
Author: Amit Langote
One caller of gen_partprune_steps_internal in
match_clause_to_partition_key was too optimistic about the former never
returning an empty step list. Rid it of its innocence. (Having fixed
the bug above, I no longer know how to exploit this, so no test case for
it, but it remained a bug.) Revise code flow a little bit, for
succinctness.
Author: Álvaro Herrera
Reported-by: Marina Polyakova
Reviewed-by: Michaël Paquier
Reviewed-by: Amit Langote
Reviewed-by: Álvaro Herrera
Discussion: https://postgr.es/m/ff8f9bfa485ff961d6bb43e54120485b@postgrespro.ru
The vertical tightness settings collapse vertical whitespace between
opening and closing brackets (parentheses, square brackets and braces).
This can make data structures in particular harder to read, and is not
very consistent with our style in non-Perl code. This patch restricts
that setting to parentheses only, and reformats all the perl code
accordingly. Not applying this to parentheses has some unfortunate
effects, so the consensus is to keep the setting for parentheses and not
for the others.
The diff for this patch does highlight some places where structures
should have trailing commas. They can be added manually, as there is no
automatic tool to do so.
Discussion: https://postgr.es/m/a2f2b87c-56be-c070-bfc0-36288b4b41c1@2ndQuadrant.com
Commit 6271fceb8 changed PostgresNode.pm to force log_min_messages = debug1
in all TAP tests, without any discussion and without a concrete need for
it. This makes some of the TAP tests noticeably slower (although much of
that may be due to poorly-written regexes), and for certain it's bloating
the buildfarm logs. Revert the change.
Discussion: https://postgr.es/m/32459.1525657786@sss.pgh.pa.us
Commit 86f575948 already manually updated the oidjoins test for the
new pg_constraint.conparentid => pg_constraint.oid relationship, but
failed to update findoidjoins/README, thus the apparent inconsistency
here.
Michael Paquier
Discussion: https://postgr.es/m/20180507001811.GA27389@paquier.xyz
This reverts commit 55e0e45817.
It's served its purpose of demonstrating what was wrong on
buildfarm member opossum. We could consider putting some kind
of single-purpose hack into ftod() to make the test pass there;
but I don't think it's worth the trouble, since there are surely
many other places where this platform bug could manifest.
The principle behind the locking was not very well thought-out, and not
documented. Add a section in the README to explain how it's supposed to
work, and change the code so that it actually works that way.
This fixes two bugs:
1. If fast update was turned on concurrently, subsequent inserts to the
pending list would not conflict with predicate locks that were acquired
earlier, on entry pages. The included 'predicate-gin-fastupdate' test
demonstrates that. To fix, make all scans acquire a predicate lock on
the metapage. That lock represents a scan of the pending list, whether
or not there is a pending list at the moment. Forget about the
optimization to skip locking/checking for locks, when fastupdate=off.
2. If a scan finds no match, it still needs to lock the entry page. The
point of predicate locks is to lock the gaps between values, whether
or not there is a match. The included 'predicate-gin-nomatch' test
tests that case.
In addition to those two bug fixes, this removes some unnecessary locking,
following the principle laid out in the README. Because all items in
a posting tree have the same key value, a lock on the posting tree root is
enough to cover all the items. (With a very large posting tree, it would
possibly be better to lock the posting tree leaf pages instead, so that in
a "skip scan" with a query like "A & B" you could avoid unnecessary conflicts
if a new tuple is inserted with A but !B. But let's keep this simple.)
Also, some spelling fixes.
Author: Heikki Linnakangas with some editorialization by me
Review: Andrey Borodin, Alexander Korotkov
Discussion: https://www.postgresql.org/message-id/0b3ad2c2-2692-62a9-3a04-5724f2af9114@iki.fi
While looking at a recent buildfarm failure in the ecpg tests, I wondered
why the pg_regress output claimed the stderr part of the test failed, when
the regression diffs were clearly for the stdout part. Looking into it,
the reason is that pg_regress.c's logic for iterating over three parallel
lists is wrong, and has been wrong since it was written: it advances the
"tag" pointer at a different place in the loop than the other two pointers.
Fix that.
Buildfarm results show that the modern POSIX rule that 1 ^ NaN = 1 was not
honored on *BSD until relatively recently, and really old platforms don't
believe that NaN ^ 0 = 1 either. (This is unsurprising, perhaps, since
SUSv2 doesn't require either behavior.) In hopes of getting to platform
independent behavior, let's deal with all the NaN-input cases explicitly
in dpow().
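Hedged examples of the cases now handled explicitly:
    SELECT 1.0::float8 ^ 'NaN'::float8;   -- 1, regardless of platform libm
    SELECT 'NaN'::float8 ^ 0.0::float8;   -- likewise 1
    SELECT 'NaN'::float8 ^ 2.0::float8;   -- NaN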
Note that numeric_power() doesn't know either of these special cases.
But since that behavior is platform-independent, I think it should be
addressed separately, and probably not back-patched.
Discussion: https://postgr.es/m/75DB81BEEA95B445AE6D576A0A5C9E936A73E741@BPXM05GP.gisp.nec.co.jp
Per spec, the result of power() should be NaN if either input is NaN.
It appears that on some versions of Windows, the libc function does
return NaN, but it also sets errno = EDOM, confusing our code that
attempts to work around shortcomings of other platforms. Hence, add
guard tests to avoid substituting a wrong result for the right one.
It's been like this for a long time (and the odd behavior only appears
in older MSVC releases, too) so back-patch to all supported branches.
Dang Minh Huong, reviewed by David Rowley
Discussion: https://postgr.es/m/75DB81BEEA95B445AE6D576A0A5C9E936A73E741@BPXM05GP.gisp.nec.co.jp
Commit 0927d2f46d didn't check that
consider_parallel was set for the target relation or account for
the possibility that required_outer might be non-empty.
To prevent future bugs of this ilk, add some assertions to
add_partial_path and do a bit of future-proofing of the code
recently added to recurse_set_operations.
Report by Andreas Seltenreich. Patch by Jeevan Chalke. Review
by Amit Kapila and by me.
Discussion: http://postgr.es/m/CAM2+6=U+9otsyF2fYB8x_2TBeHTR90itarqW=qAEjN-kHaC7kw@mail.gmail.com
This controls both plan-time and execution-time new-style partition
pruning. While finer-grain control is possible (maybe using an enum GUC
instead of boolean), there doesn't seem to be much need for that.
This new parameter controls partition pruning for all queries:
trivially, SELECT queries that affect partitioned tables are naturally
under its control since they are using the new technology. However,
while UPDATE/DELETE queries do not use the new code, we make the new GUC
control their behavior also (stealing control from
constraint_exclusion), because it is more natural, and it leads to a
more natural transition to the future in which those queries will also
use the new pruning code.
Constraint exclusion still controls pruning for regular inheritance
situations (those not involving partitioned tables).
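A minimal sketch, assuming the new GUC is named enable_partition_pruning:
    SET enable_partition_pruning = off;   -- disable new-style pruning
    SET enable_partition_pruning = on;    -- the default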
Author: David Rowley
Review: Amit Langote, Ashutosh Bapat, Justin Pryzby, David G. Johnston
Discussion: https://postgr.es/m/CAKJS1f_0HwsxJG9m+nzU+CizxSdGtfe6iF_ykPYBiYft302DCw@mail.gmail.com
Previously, you could partition by a boolean column as long as you
spelled the bound values as string literals, for instance FOR VALUES
IN ('t'). The trouble with this is that ruleutils.c printed that as
FOR VALUES IN (TRUE), which is reasonable syntax but wasn't accepted by
the grammar. That results in dump-and-reload failures for such cases.
Apply a minimal fix that just causes TRUE and FALSE to be converted to
strings 'true' and 'false'. This is pretty grotty, but it's too late for
a more principled fix in v11 (to say nothing of v10). We should revisit
the whole issue of how partition bound values are parsed for v12.
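A hedged example of the round-trip that used to fail:
    CREATE TABLE b (x bool) PARTITION BY LIST (x);
    CREATE TABLE b_true PARTITION OF b FOR VALUES IN ('t');
    -- ruleutils.c printed this bound as FOR VALUES IN (TRUE), which the
    -- grammar rejected on restore; it now prints FOR VALUES IN ('true')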
Amit Langote
Discussion: https://postgr.es/m/e05c5162-1103-7e37-d1ab-6de3e0afaf70@lab.ntt.co.jp
5c067521 erroneously coded bms_prev_member on the assumption that a
bitmapword would always hold 32 bits, and started its search on what it
thought was the highest 8 bits of the word. This was not the case if
bitmapwords were 64 bits.
In passing, add a test to exercise this function a little. Previously there
was no coverage at all.
David Rowley
After introducing use of the t_tid of an inner tuple or page high key for
storing the number of attributes of the tuple, validating the tuple's
ItemPointer with ItemPointerIsValid became incorrect; only the block number
of the ItemPointer needs to be validated. Missing this causes an incorrect
page deletion; fix that. A test is added.
BTW, the current contrib/amcheck doesn't fail on an index corrupted in this
way.
Also introduce BTreeTupleGetTopParent/BTreeTupleSetTopParent macros to
improve code readability and to avoid possible confusion with the page high
key: the high key is used to store the top-parent link for the branch to
remove.
Bug found by Michael Paquier, but the bug doesn't exist in previous versions
because t_tid was set to P_HIKEY.
Author: Teodor Sigaev
Reviewer: Peter Geoghegan
Discussion: https://www.postgresql.org/message-id/flat/20180419052436.GA16000%40paquier.xyz
Results from buildfarm member opossum suggest that this doesn't work
quite right on that platform. We've seen issues with NaN support on
MIPS/NetBSD before ... allegedly they fixed this stuff back in 2010,
but maybe only for small values of "fixed".
If, in fact, opossum fails this test then I plan to revert it;
it's mainly for diagnostic purposes rather than something we'd
necessarily keep long-term. I think that the failures in window.sql
could be worked around with some code duplication, but I want to
verify my theory about the cause first.
Since it sets up an event trigger that would fire on DDL done by any
concurrent test script, the original scheduling is just an invitation
to irreproducible test failures. (The fact that we found a bug through
exactly such irreproducible test failures doesn't really change the
calculus here: this script is a hazard to anything that runs in parallel
with it today or might be added to that parallel group in future. No,
I don't believe that the trigger is protecting itself sufficiently to
avoid all possible trouble.)
Discussion: https://postgr.es/m/5767.1523995174@sss.pgh.pa.us
In some cases a clause attached to an outer join can be pushed down into
the outer join's RHS even though the clause is not degenerate --- this
can happen if we choose to make a parameterized path for the RHS. If
the clause ends up attached to a lower outer join, we'd misclassify it
as being a "join filter" not a plain "filter" condition at that node,
leading to wrong query results.
To fix, teach extract_actual_join_clauses to examine each join clause's
required_relids, not just its is_pushed_down flag. (The latter now
seems vestigial, or at least in need of rethinking, but we won't do
anything so invasive as redefining it in a bug-fix patch.)
This has been wrong since we introduced parameterized paths in 9.2,
though it's evidently hard to hit given the lack of previous reports.
The test case used here involves a lateral function call, and I think
that a lateral reference may be required to get the planner to select
a broken plan; though I wouldn't swear to that. In any case, even if
LATERAL is needed to trigger the bug, it still affects all supported
branches, so back-patch to all.
Per report from Andreas Karlsson. Thanks to Andrew Gierth for
preliminary investigation.
Discussion: https://postgr.es/m/f8128b11-c5bf-3539-48cd-234178b2314d@proxel.se
Amit Langote reported that partition prune was unable to work with
arrays, enums, etc, which led him to research the appropriate way to
match query clauses to partition keys: instead of searching for an exact
match of the expression's type, it is better to rely on the fact that
the expression qual has already been resolved to a specific operator,
and that the partition key is linked to a specific operator family.
With that info, it's possible to figure out the strategy and comparison
function to use for the pruning clause in a manner that works reliably
for pseudo-types also.
Include new test cases that demonstrate pruning where pseudotypes are
involved.
Author: Amit Langote, Álvaro Herrera
Discussion: https://postgr.es/m/2b02f1e9-9812-9c41-972d-517bdc0f815d@lab.ntt.co.jp
Add several assertions that ensure that we're dealing with a pivot tuple
without non-key attributes where that's expected. Also, remove the
assertion within _bt_isequal(), restoring the v10 function signature. A
similar check will be performed for the page highkey within
_bt_moveright() in most cases. Also avoid dropping all objects within
regression tests, to increase pg_dump test coverage for INCLUDE indexes.
Rather than using infrastructure that's generally intended to be used
with reference counted heap tuple descriptors during truncation, use the
same function that was introduced to store flat TupleDescs in shared
memory (we use a temp palloc'd buffer). This isn't strictly necessary,
but seems more future-proof than the old approach. It also lets us
avoid including rel.h within indextuple.c, which was arguably a
modularity violation. Also, we now call index_deform_tuple() with the
truncated TupleDesc, not the source TupleDesc, since that's more robust,
and saves a few cycles.
In passing, fix a memory leak by pfree'ing truncated pivot tuple memory
during CREATE INDEX. Also pfree during a page split, just to be
consistent.
Refactor _bt_check_natts() to be more readable.
Author: Peter Geoghegan with some editorialization by me
Reviewed by: Alexander Korotkov, Teodor Sigaev
Discussion: https://www.postgresql.org/message-id/CAH2-Wz%3DkCWuXeMrBCopC-tFs3FbiVxQNjjgNKdG2sHxZ5k2y3w%40mail.gmail.com
A coverage report indicated that mark_invalid_subplans_as_finished() and
nearby code was not getting exercised by any tests. Add a new one which
has execution-time Params rather than only external Params to fix this.
In passing, David noticed that ab_q6 tests were not actually required to
have a generic plan. The tests were testing exec Params not external
Params, so there was no need for the PREPARE. Remove the PREPARE,
making these plain queries. (The new queries are called from
explain_parallel_append, which may be unnecessary since they don't
actually have a Parallel Append node, just an Append. But it doesn't
seem to hurt anything, either.)
Author: David Rowley
Discussion: https://postgr.es/m/CAKJS1f--hopb6JBSDY4wiXTS3ZcDp-wparXjTQ1nzNdBa04Fog@mail.gmail.com
This reverts commit 4d0f6d3f20 ("Attempt to stabilize partition_prune
test output (2)"), and attempts to stabilize the test by using string
replacement to hide any loop count difference in parallel nodes.
Discussion: https://postgr.es/m/4475.1523628300@sss.pgh.pa.us
We had an Assert() preventing whole-row expressions from being used in
the SET clause of INSERT ON CONFLICT, but it seems unnecessary, given
some tests, so remove it. Add a new test to exercise the case.
Still in ExecInitPartitionInfo, we used map_partition_varattnos (which
constructs an attribute map, then calls map_variable_attnos) using
the same two relations many times in different expressions and with
different parameters. Constructing the map over and over is a waste.
To avoid this repeated work, construct the map once, and use
map_variable_attnos() directly instead.
Author: Amit Langote, per comments by me (Álvaro)
Discussion: https://postgr.es/m/20180326142016.m4st5e34chrzrknk@alvherre.pgsql
I was dissatisfied with the code coverage report for expand_tuple() in the
wake of commit 7c44c46de: while better than no coverage at all, it was
still not exercising the core function of inserting out-of-line default
values, nor was the HeapTuple-output path covered. So far as I can find,
the only code path that reaches the latter at present is EvalPlanQual
fetches for non-locked tables. Hence, extend eval-plan-qual.spec to
test cases where out-of-line defaults must be inserted into a tuple
fetched from a non-locked table.
Discussion: https://postgr.es/m/87woxi24uw.fsf@ansel.ydns.eu
SELECT FOR UPDATE on a view should require UPDATE (as well as SELECT)
permissions on the view, and then the view's owner needs those same
permissions against the relations it references, and so on all the way
down to base tables. But ApplyRetrieveRule did things in the wrong order,
resulting in failure to mark intermediate view levels as needing UPDATE
permission. Thus for example, if user A creates a table T and an updatable
view V1 on T, then grants only SELECT permissions on V1 to user B, B could
create a second view V2 on V1 and then would be allowed to perform SELECT
FOR UPDATE via V2 (since V1 wouldn't be checked for UPDATE permissions).
To fix, just switch the order of expanding sub-views and marking referenced
objects as needing UPDATE permission. I think additional simplifications
are now possible, but that's distinct from the bug fix proper.
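A hedged sketch of the scenario (user and object names are illustrative):
    -- as user a:
    CREATE TABLE t (x int);
    CREATE VIEW v1 AS SELECT * FROM t;
    GRANT SELECT ON v1 TO b;              -- note: no UPDATE granted
    -- as user b:
    CREATE VIEW v2 AS SELECT * FROM v1;
    SELECT * FROM v2 FOR UPDATE;          -- wrongly succeeded before the fix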
This is certainly a security issue, but the consequences are pretty minor
(just the ability to lock rows that shouldn't be lockable). Against that
we have a small risk of breaking applications that are working as-desired,
since nested views have behaved this way since such cases worked at all.
On balance I'm inclined not to back-patch.
Per report from Alexander Lakhin.
Discussion: https://postgr.es/m/24db7b8f-3de5-e25f-7ab9-d8848351d42c@gmail.com
We need to call expand_function_arguments() to expand named and default
arguments.
In PL/pgSQL, we also need to deal with named and default INOUT arguments
when receiving the output values into variables.
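A hedged example of the call styles this enables (the procedure name is
illustrative):
    CREATE PROCEDURE p(a int, b int DEFAULT 10)
    LANGUAGE plpgsql AS $$
    BEGIN
      RAISE NOTICE 'a = %, b = %', a, b;
    END;
    $$;
    CALL p(1);               -- default for b gets expanded
    CALL p(b => 2, a => 1);  -- named notation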
Author: Pavel Stehule <pavel.stehule@gmail.com>
Commit 16828d5c forgot to check that it had a set of missing values
before trying to retrieve a value from it.
An additional query to add coverage for this code is added to the
regression test.
Per bug report from Andreas Seltenreich.
In passing, throw an error if the AF count is too small, rather than
just silently discarding extra affix entries.
Note that the new regression test cases require installing the
updated src/backend/tsearch/dicts files.
Arthur Zakirov
Discussion: https://postgr.es/m/20180413113447.GA32474@zakirov.localdomain
This custom opclass was already in use in other tests -- defined
independently in every such file. Move the definition to the earliest
test that uses it, and keep it around so that later tests can reuse it.
Use it in the tests for pruning of hash partitioning, and since this
makes the second expected file unnecessary, put those tests back in
partition_prune.sql whence they sprang.
Author: Amit Langote
Discussion: https://postgr.es/m/CA%2BTgmoZ0D5kJbt8eKXtvVdvTcGGWn6ehWCRSZbWytD-uzH92mQ%40mail.gmail.com
We were using CurrentMemoryContext to put the partsupfunc fmgr_info
into, which isn't right, because we want the PartitionKey as a whole to
be in the isolated Relation->rd_partkeycxt context. This can cause a
crash with user-defined support functions in the operator classes used
by partitioning keys. (Maybe this can cause problems with core-supplied
opclasses too, not sure.)
This is demonstrably broken in Postgres 10, too, but the initial
proposed fix runs afoul of a problem discussed back when 8a0596cb65
("Get rid of copy_partition_key") reorganized that code: namely that it
is possible to jump out of RelationBuildPartitionKey because of some
error and leave a dangling memory context child of CacheMemoryContext.
Also, while reviewing this I noticed that the removed-in-pg11
copy_partition_key was doing something wrong, unfixed in pg10, namely
doing memcpy() on the FmgrInfo, which is bogus (should be doing
fmgr_info_copy). Therefore, in branch pg10, the sane fix seems to be to
backpatch both the aforementioned 8a0596cb65 and its followup
be2343221f ("Protect against hypothetical memory leaks in
RelationGetPartitionKey"), so do that, then apply the fmgr_info memcxt
bugfix on top.
Add a test case exercising btree-based custom operator classes, which
causes a crash prior to this fix. This is not a security problem,
because in order to create an operator class you need superuser
privileges anyway.
Authors: Álvaro Herrera and Amit Langote
Reported and diagnosed by: Amit Langote
Discussion: https://postgr.es/m/3041e853-b1dd-a0c6-ff21-7cc5633bffd0@lab.ntt.co.jp
This reverts commits d204ef6377,
83454e3c2b and a few more commits thereafter
(complete list at the end) related to MERGE feature.
While the feature was fully functional, with sufficient test coverage and
necessary documentation, it was felt that some parts of the executor and
parse-analyzer can use a different design and it wasn't possible to do that in
the available time. So it was decided to revert the patch for PG11 and retry
again in the future.
Thanks again to all reviewers and bug reporters.
List of commits reverted, in reverse chronological order:
f1464c5380 Improve parse representation for MERGE
ddb4158579 MERGE syntax diagram correction
530e69e59b Allow cpluspluscheck to pass by renaming variable
01b88b4df5 MERGE minor errata
3af7b2b0d4 MERGE fix variable warning in non-assert builds
a5d86181ec MERGE INSERT allows only one VALUES clause
4b2d44031f MERGE post-commit review
4923550c20 Tab completion for MERGE
aa3faa3c7a WITH support in MERGE
83454e3c2b New files for MERGE
d204ef6377 MERGE SQL Command following SQL:2016
Author: Pavan Deolasee
Reviewed-by: Michael Paquier
If the table being attached contained values that contradict the default
partition's partition constraint, it would fail to complain, because
CommandCounterIncrement changes in 4dba331cb3 coupled with some bogus
coding in the existing ValidatePartitionConstraints prevented the
partition constraint from being validated after all -- or rather, it
caused the constraint to become an empty one, always succeeding.
Fix by not re-reading the OID of the default partition in
ATExecAttachPartition. To forestall similar problems, revise the
existing code:
* rename routine from ValidatePartitionConstraints() to
QueuePartitionConstraintValidation, to better represent what it
actually does.
* add an Assert() to make sure that when queueing a constraint for a
partition we're not overwriting a constraint previously queued.
* add an Assert() that we don't try to invoke the special-purpose
validation of the default partition when attaching the default
partition itself.
While at it, change some loops to obtain partition OIDs from
partdesc->oids rather than find_all_inheritors; reduce the lock level
of partitions being scanned from AccessExclusiveLock to ShareLock;
rewrite QueuePartitionConstraintValidation in a recursive fashion rather
than repetitive.
Author: Álvaro Herrera. Tests written by Amit Langote
Reported-by: Rushabh Lathia
Diagnosed-by: Kyotaro HORIGUCHI, who also provided the initial fix.
Reviewed-by: Kyotaro HORIGUCHI, Amit Langote, Jeevan Ladhe
Discussion: https://postgr.es/m/CAGPqQf0W+v-Ci_qNV_5R3A=Z9LsK4+jO7LzgddRncpp_rrnJqQ@mail.gmail.com
The bug is caused by the original IndexStmt that DefineIndex receives
being overwritten when processing the INCLUDE columns. Use a separate list
of index params to propagate to child tables. Add tests covering this case.
Amit Langote and Alexander Korotkov.
I'd hoped that commit 3b8f6e75f was sufficient to ensure parallel safety
even when a build started in a subdirectory requires rebuilding of
generated headers. This isn't so, because making submake-generated-headers
a prerequisite of "all" isn't enough to ensure it's completed before
starting on "all"'s other prerequisites. The explicit dependencies we put
on the recursive make targets ensure safe ordering before we recurse into
child directories, but they don't protect targets to be made in the current
directory. Hence, put back some ordering dependencies in directories that
we've traditionally expected to be starting points for "standalone" builds,
to wit src/pl/plpython and src/test/regress. (The former needs this in
order to minimize the work involved in building for both python 2 and
python 3; the latter to support packagings that make the regression tests
available for out-of-build-tree execution.) Adjust some other dependencies
so that these two cases work correctly even at high -j settings.
I'm not terribly happy with this partial solution, but I don't see a
way to do better without massive makefile restructuring, which we surely
aren't doing at this point in the development cycle. In any case, it's
little if any worse than what we had in prior releases.
Discussion: https://postgr.es/m/1523353963.8169.26.camel@gunduz.org
Commit 372728b0d created some problems for usages like building a
subdirectory without having first done "make all" at the top level,
or for proceeding directly to "make install" without "make all".
The only reasonably clean way to fix this seems to be to force the
submake-generated-headers rule to fire in *any* "make all" or "make
install" command anywhere in the tree. To avoid lots of redundant work,
as well as parallel make jobs possibly clobbering each others' output, we
still need to be sure that the rule fires only once in a recursive build.
For that, adopt the same MAKELEVEL hack previously used for "temp-install".
But try to document it a bit better.
The submake-errcodes mechanism previously used in src/port/ and src/common/
is subsumed by this, so we can get rid of those special cases. It was
inadequate for src/common/ anyway after the aforesaid commit, and it always
risked parallel attempts to build errcodes.h.
Discussion: https://postgr.es/m/E1f5FAB-0006LU-MB@gemulon.postgresql.org
This reverts the backend sides of commit 1fde38beaa.
I have, at least for now, left the pg_verify_checksums tool in place, as
this tool can be very valuable without the rest of the patch as well,
and since it's a read-only tool that only runs when the cluster is down
it should be a lot safer.
Everything of use to frontend code should now appear in the _d.h files,
and making this change frees us from needing to worry about whether the
catalog header files proper are frontend-safe.
Remove src/interfaces/ecpg/ecpglib/pg_type.h entirely, as the previous
commit reduced it to a confusingly-named wrapper around pg_type_d.h.
In passing, make test_rls_hooks.c follow project convention of including
our own files with #include "" not <>.
Discussion: https://postgr.es/m/23690.1523031777@sss.pgh.pa.us
This rectifies an oversight in commit 8224de4f4, by adding a new
property 'can_include' for pg_indexam_has_property, and adjusting the
results of pg_index_column_has_property to give more appropriate
results for INCLUDEd columns.
Allow the cluster to be optionally init'd with read access for the
group.
This means a relatively non-privileged user can perform a backup of the
cluster without requiring write privileges, which enhances security.
The mode of PGDATA is used to determine whether group permissions are
enabled for directory and file creates. This method was chosen as it's
simple and works well for the various utilities that write into PGDATA.
Changing the mode of PGDATA manually will not automatically change the
mode of all the files contained therein. If the user would like to
enable group access on an existing cluster then changing the mode of all
the existing files will be required. Note that pg_upgrade will
automatically change the mode of all migrated files if the new cluster
is init'd with the -g option.
Tests are included for the backend and all the utilities which operate
on the PG data directory to ensure that the correct mode is set based on
the data directory permissions.
Author: David Steele <david@pgmasters.net>
Reviewed-By: Michael Paquier, with discussion amongst many others.
Discussion: https://postgr.es/m/ad346fe6-b23e-59f1-ecb7-0e08390ad629%40pgmasters.net
Consolidate directory and file create permissions for tools which work
with the PG data directory by adding a new module (common/file_perm.c)
that contains variables (pg_file_create_mode, pg_dir_create_mode) and
constants to initialize them (0600 for files and 0700 for directories).
Convert mkdir() calls in the backend to MakePGDirectory() if the
original call used default permissions (always the case for regular PG
directories).
Add tests to make sure permissions in PGDATA are set correctly by the
tools which modify the PG data directory.
Authors: David Steele <david@pgmasters.net>,
Adam Brightwell <adam.brightwell@crunchydata.com>
Reviewed-By: Michael Paquier, with discussion amongst many others.
Discussion: https://postgr.es/m/ad346fe6-b23e-59f1-ecb7-0e08390ad629%40pgmasters.net
Existing partition pruning is only able to work at plan time, for query
quals that appear in the parsed query. This is good but limiting, as
there can be parameters that appear later that can usefully be applied to
further prune partitions.
This commit adds support for pruning subnodes of Append which cannot
possibly contain any matching tuples, during execution, by evaluating
Params to determine the minimum set of subnodes that can possibly match.
We support more than just simple Params in WHERE clauses. Support
additionally includes:
1. Parameterized Nested Loop Joins: The parameter from the outer side of the
join can be used to determine the minimum set of inner side partitions to
scan.
2. Initplans: Once an initplan has been executed we can then determine which
partitions match the value from the initplan.
Partition pruning is performed in two ways. When Params external to the plan
are found to match the partition key we attempt to prune away unneeded Append
subplans during the initialization of the executor. This allows us to bypass
the initialization of non-matching subplans meaning they won't appear in the
EXPLAIN or EXPLAIN ANALYZE output.
For parameters whose value is only known during actual execution, the
pruning of these subplans must wait. Subplans which are eliminated during
this stage of pruning are still visible in the EXPLAIN output. To determine
whether pruning has actually taken place, the EXPLAIN ANALYZE output must
be inspected. If a certain Append subplan was never executed due to the
elimination of the partition, the execution timing area will state "(never
executed)". In the case of parameterized nested loops, for example, the
number of loops stated in the EXPLAIN ANALYZE output for certain subplans
may appear lower than for others because those subplans were scanned fewer
times. This is because the list of matching subnodes must be re-evaluated
whenever a parameter which was found to match the partition key changes.
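A hedged sketch of executor-time pruning driven by an external Param:
    CREATE TABLE p (a int) PARTITION BY LIST (a);
    CREATE TABLE p1 PARTITION OF p FOR VALUES IN (1);
    CREATE TABLE p2 PARTITION OF p FOR VALUES IN (2);
    PREPARE q (int) AS SELECT * FROM p WHERE a = $1;
    -- once a generic plan is in use, EXPLAIN (ANALYZE) EXECUTE q(1)
    -- should show p2's subplan as "(never executed)"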
This commit required some additional infrastructure that permits the
building of a data structure which is able to perform the translation of
the matching partition IDs, as returned by get_matching_partitions, into
the list index of a subpaths list, as exist in node types such as
Append, MergeAppend and ModifyTable. This allows us to translate a list
of clauses into a Bitmapset of all the subpath indexes which must be
included to satisfy the clause list.
Author: David Rowley, based on an earlier effort by Beena Emerson
Reviewers: Amit Langote, Robert Haas, Amul Sul, Rajkumar Raghuwanshi,
Jesper Pedersen
Discussion: https://postgr.es/m/CAOG9ApE16ac-_VVZVvv0gePSgkg_BwYEV1NBqZFqDR2bBE0X0A@mail.gmail.com
When an update moves a row between partitions (supported since
2f17844104), our normal logic for following update chains in READ
COMMITTED mode doesn't work anymore. Cross partition updates are
modeled as a delete from the old and an insert into the new
partition. No ctid chain exists across partitions, and there's no
convenient space to introduce that link.
Not throwing an error in a partitioned context when one would have
been thrown without partitioning is obviously problematic. This commit
introduces infrastructure to detect when a tuple has been moved, not
just plainly deleted. That allows us to throw an error when encountering
a deletion that's actually a move, while attempting to follow a ctid
chain.
The row deleted as part of a cross partition update is marked by
pointing its t_ctid to an invalid block, instead of to itself as a normal
update would. That was deemed to be the least invasive and most
future-proof way to represent the knowledge, given how few infomask
bits are there to be recycled (there's also some locking issues with
using infomask bits).
External code following ctid chains should be updated to check for
moved tuples. The most likely consequence of not doing so is a missed
error.
Author: Amul Sul, editorialized by me
Reviewed-By: Amit Kapila, Pavan Deolasee, Andres Freund, Robert Haas
Discussion: http://postgr.es/m/CAAJ_b95PkwojoYfz0bzXU8OokcTVGzN6vYGCNVUukeUDrnF3dw@mail.gmail.com
This patch introduces the INCLUDE clause in index definitions. This clause
specifies a list of columns which will be included as a non-key part in
the index. The INCLUDE columns exist solely to allow more queries to
benefit from index-only scans. Also, such columns don't need to have
appropriate operator classes. Expressions are not supported as INCLUDE
columns since they cannot be used in index-only scans.
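A hedged example of the new syntax (names are illustrative):
    CREATE TABLE tbl (a int, b int, c box);
    CREATE UNIQUE INDEX tbl_a_idx ON tbl (a) INCLUDE (b, c);
    -- a is the only key column; b and c ride along solely to enable
    -- index-only scans, and need no btree opclass (note the box column)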
Index access methods supporting INCLUDE are indicated by amcaninclude flag
in IndexAmRoutine. For now, only B-tree indexes support INCLUDE clause.
In B-tree indexes, INCLUDE columns are truncated from pivot index tuples
(tuples located in non-leaf pages and high keys). Therefore, B-tree indexes
now might have a variable number of attributes. This patch also provides a
generic facility to support that: pivot tuples contain the number of their
attributes in t_tid.ip_posid, and the free 13th bit of t_info is used to
indicate that. This facility will simplify further support of index suffix
truncation. The above changes are backward-compatible; pg_upgrade doesn't
need special handling of B-tree indexes for them.
Bump catalog version
Author: Anastasia Lubennikova with contributions by Alexander Korotkov and me
Reviewed by: Peter Geoghegan, Tomas Vondra, Antonin Houska, Jeff Janes,
David Rowley, Alexander Korotkov
Discussion: https://www.postgresql.org/message-id/flat/56168952.4010101@postgrespro.ru
Jsonb has a complex nature, so there isn't a best-for-everything way to
convert it to tsvector for full text search. The current
to_tsvector(json(b)) converts only string values, but it's possible to
index keys, numerics and even boolean values. To solve that,
json(b)_to_tsvector has a second required argument containing a list of the
desired types of json fields. The second argument is a jsonb scalar or
array right now, with the possibility of adding new options in the future.
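A hedged example of the new two-argument form:
    SELECT jsonb_to_tsvector('english',
                             '{"a": "cat", "b": 123, "c": true}',
                             '["string", "numeric"]');
    -- indexes the string and numeric values but skips the boolean;
    -- passing '"all"' would index everything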
Bump catalog version
Author: Dmitry Dolgov with some editorialization by me
Reviewed by: Teodor Sigaev
Discussion: https://www.postgresql.org/message-id/CA+q6zcXJQbS1b4kJ_HeAOoOc=unfnOrUEL=KGgE32QKDww7d8g@mail.gmail.com
We need to wait for the initial sync of all subscriptions. On
some (faster?) machines, this didn't make a difference, but
the (slower?) buildfarm machines are upset.
Update the built-in logical replication system to make use of the
previously added logical decoding for TRUNCATE support. Add the
required truncate callback to pgoutput and a new logical replication
protocol message.
Publications get a new attribute to determine whether to replicate
truncate actions. When updating a publication via pg_dump from an older
version, this is not set, thus preserving the previous behavior.
Author: Simon Riggs <simon@2ndquadrant.com>
Author: Marco Nenciarini <marco.nenciarini@2ndquadrant.it>
Author: Peter Eisentraut <peter.eisentraut@2ndquadrant.com>
Reviewed-by: Petr Jelinek <petr.jelinek@2ndquadrant.com>
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Alvaro Herrera <alvherre@alvh.no-ip.org>
Hash index searches acquire predicate locks on the primary
page of a bucket. They acquire locks on both the old and new buckets
for scans that happen concurrently with page splits. During a bucket
split, a predicate lock is copied from the primary page of the old
bucket to the primary page of the new bucket.
Author: Shubham Barai, Amit Kapila
Reviewed by: Amit Kapila, Alexander Korotkov, Thomas Munro
Discussion: https://www.postgresql.org/message-id/flat/CALxAEPvNsM2GTiXdRgaaZ1Pjd1bs+sxfFsf7Ytr+iq+5JJoYXA@mail.gmail.com
The tests added as part of 9fdb675fc5 yield differing results
depending on endianness, causing buildfarm failures. As the differences
are expected, split the hash partitioning tests into a different file
and maintain alternative output. The separate file is so the amount of
duplicated output is reduced.
David produced the alternative output without a machine to test on, so
it's possible this'll require a buildfarm cycle or two to get right.
Author: David Rowley
Discussion: https://postgr.es/m/CAKJS1f-6f4c2Qhuipe-GY7BKmFd0FMBobRnLS7hVCoAmTszsBg@mail.gmail.com
The atomics fallback implementation for pg_atomic_flag was broken,
returning the inverted value from pg_atomic_test_set_flag(). This was
unnoticed because a) atomic flags were unused until recently b) the
test code wasn't run when the fallback implementation was in
use (because it didn't allow testing some edge cases).
Fix the bug, and improve the fallback so it has the same behaviour as
the non-fallback implementation in the problematic edge cases. That
breaks ABI compatibility in the back branches when fallbacks are in
use, but given they were broken until now...
Author: Andres Freund
Reported-by: Daniel Gustafsson
Discussion:
https://postgr.es/m/FB948276-7B32-4B77-83E6-D00167F8EEB4@yesql.se
https://postgr.es/m/20180406233854.uni2h3mbnveczl32@alap3.anarazel.de
Backpatch: 9.5-, where the atomics abstraction was introduced.
Add a new module backend/partitioning/partprune.c, implementing a more
sophisticated algorithm for partition pruning. The new module uses each
partition's "boundinfo" for pruning instead of constraint exclusion,
based on an idea proposed by Robert Haas of a "pruning program": a list
of steps generated from the query quals which are run iteratively to
obtain a list of partitions that must be scanned in order to satisfy
those quals.
At present, this targets planner-time partition pruning, but there exist
further patches to apply partition pruning at execution time as well.
This commit also moves some definitions from include/catalog/partition.h
to a new file include/partitioning/partbounds.h, in an attempt to
rationalize partitioning related code.
Authors: Amit Langote, David Rowley, Dilip Kumar
Reviewers: Robert Haas, Kyotaro Horiguchi, Ashutosh Bapat, Jesper Pedersen.
Discussion: https://postgr.es/m/098b9c71-1915-1a2a-8d52-1a7a50ce79e8@lab.ntt.co.jp
This makes it possible to turn checksums on in a live cluster, without
the previous need for dump/reload or logical replication (and to turn it
off).
Enabling checksums starts a background process in the form of a
launcher/worker combination that goes through the entire database and
recalculates checksums on each and every page. Only when all pages have
been checksummed are they fully enabled in the cluster. Any failure of
the process will revert to checksums off and the process has to be
restarted.
This adds a new WAL record that indicates the state of checksums, so
the process works across replicated clusters.
Authors: Magnus Hagander and Daniel Gustafsson
Review: Tomas Vondra, Michael Banck, Heikki Linnakangas, Andrey Borodin
Add an error-tolerant conversion function with web-like syntax for search
queries; it simplifies constraining the search engine with an interface
close to what users are accustomed to.
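A hedged example, assuming this is the websearch_to_tsquery function:
    SELECT websearch_to_tsquery('english', '"sad cat" or fat rat -mouse');
    -- roughly: ( 'sad' <-> 'cat' ) | ( 'fat' & 'rat' & !'mouse' )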
Bump catalog version
Authors: Victor Drobny, Dmitry Ivanov with editorialization by me
Reviewed by: Aleksander Alekseev, Tomas Vondra, Thomas Munro, Aleksandr Parfenov
Discussion: https://www.postgresql.org/message-id/flat/fe931111ff7e9ad79196486ada79e268@postgrespro.ru
This reworks how the tests to run are defined. Instead of having to
define all runs for all tests, we define those tests which should pass
(generally using one of the defined broad hashes), add in any which
should be specific for this test, and exclude any specific runs that
shouldn't pass for this test. This ends up removing some 4k+ lines
(more than half the file) but, more importantly, greatly simplifies the
way runs-to-be-tested are defined.
As discussed in the updated comments, for example, take the test which
does CREATE TABLE test_table. That CREATE TABLE should show up in all
'full' runs of pg_dump, except those cases where 'test_table' is
excluded, of course, and that's exactly how the test gets defined now
(modulo a few other related cases, like where we dump only that table,
or we dump the schema it's in, or we exclude the schema it's in):
like => {
%full_runs,
%dump_test_schema_runs,
only_dump_test_table => 1,
section_pre_data => 1, },
unlike => {
exclude_dump_test_schema => 1,
exclude_test_table => 1, }, },
Next, we no longer expect every run to be listed for every test. If a
run is listed in 'like' (directly or through a hash) then it's a 'like',
unless it's listed in 'unlike' in which case it's an 'unlike'. If it
isn't listed in either, then it's considered an 'unlike' automatically.
Lastly, this changes the code to no longer use like/unlike but rather to
use 'ok()' with 'diag()' which allows much more control over what gets
spit out to the screen. Gone are the days of the entire dump being sent
to the console, now you'll just get a couple of lines for each failing
test which say the test that failed and the run that it failed on.
This covers both the pg_dump TAP tests in src/bin/pg_dump and those in
src/test/modules/test_pg_dump.
Vacuum of an index consists of two stages: multiple (zero or more)
ambulkdelete calls and one amvacuumcleanup call. When the workload on a
particular table is append-only, autovacuum isn't intended to touch this
table. However, the user may run vacuum manually in order to fill the
visibility map and get the benefits of index-only scans. Then ambulkdelete
wouldn't be called for the indexes of such a table (because no heap tuples
were deleted); only amvacuumcleanup would be called. In this case,
amvacuumcleanup would perform a full index scan for two objectives: put
recyclable pages into the free space map, and update index statistics.
This patch allows btvacuumcleanup to skip the full index scan when two
conditions are satisfied: no pages are going to be put into the free space
map, and index statistics aren't stale. To check the first condition, we
store the oldest btpo_xact in the meta-page; when it precedes
RecentGlobalXmin, there are some recyclable pages. To check the second
condition, we store the number of heap tuples observed during the previous
full index scan by cleanup. If the fraction of newly inserted tuples is
less than vacuum_cleanup_index_scale_factor, the statistics aren't
considered stale. vacuum_cleanup_index_scale_factor can be defined as both
a reloption and a GUC (the default).
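A hedged example of the two ways to set it:
    SET vacuum_cleanup_index_scale_factor = 0.2;        -- GUC (the default)
    ALTER INDEX some_btree_index
      SET (vacuum_cleanup_index_scale_factor = 0.2);    -- per-index reloption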
This patch bumps the B-tree meta-page version. The meta-page is upgraded
"on the fly": during VACUUM the meta-page is rewritten with the new
version. No special handling in pg_upgrade is required.
Author: Masahiko Sawada, Alexander Korotkov
Review by: Peter Geoghegan, Kyotaro Horiguchi, Alexander Korotkov, Yura Sokolov
Discussion: https://www.postgresql.org/message-id/flat/CAD21AoAX+d2oD_nrd9O2YkpzHaFr=uQeGr9s1rKC3O4ENc568g@mail.gmail.com
Trigger cloning to partitions was supposed to occur for user-visible
triggers only, but during development the protection that prevented it
from occurring to internal triggers was lost. Reinstate it, as well as
add a test case to ensure internal triggers (in the tested case,
triggers implementing a deferred unique constraint) are not cloned.
Without the code fix, the partitions in the test end up with different
numbers of triggers, which is clearly wrong ...
Bug in 86f575948c.
Discussion: https://postgr.es/m/20180403214903.ozfagwjcpk337uw7@alvherre.pgsql
Make buffer 1 byte larger to fit a sign. It's actually impossible for
there to be a sign in practice, but this is still required to keep GCC 7
happy.
Cleanup from commit 51bc271790.
Based on a suggestion from Peter Eisentraut.
Author: Peter Geoghegan
Reported-By: Peter Eisentraut
Discussion: https://postgr.es/m/d1cc82ed-d07d-cef2-7c00-2e987f121648@2ndquadrant.com
Previous coding was passing the wrong table's tuple descriptor, which
accidentally failed to fail because no existing test case exercises a
foreign key in which the referenced attributes are further to the right
of the referencing attributes.
Add a test so that further breakage is visible.
This got broken in 16828d5c02.
Discussion: https://postgr.es/m/20180403204723.fqte755nukgm42uf@alvherre.pgsql
We were being careless in some places about the order of -L switches in
link command lines, such that -L switches referring to external directories
could come before those referring to directories within the build tree.
This made it possible to accidentally link a system-supplied library, for
example /usr/lib/libpq.so, in place of the one built in the build tree.
Hilarity ensued, the more so the older the system-supplied library is.
To fix, break LDFLAGS into two parts, a sub-variable LDFLAGS_INTERNAL
and the main LDFLAGS variable, both of which are "recursively expanded"
so that they can be incrementally adjusted by different makefiles.
Establish a policy that -L switches for directories in the build tree
must always be added to LDFLAGS_INTERNAL, while -L switches for external
directories must always be added to LDFLAGS. This is sufficient to
ensure a safe search order. For simplicity, we typically also put -l
switches for the respective libraries into those same variables.
(Traditional make usage would have us put -l switches into LIBS, but
cleaning that up is a project for another day, as there's no clear
need for it.)
This turns out to also require separating SHLIB_LINK into two variables,
SHLIB_LINK and SHLIB_LINK_INTERNAL, with a similar rule about which
switches go into which variable. And likewise for PG_LIBS.
Although this change might appear to affect external users of pgxs.mk,
I think it doesn't; they shouldn't have any need to touch the _INTERNAL
variables.
In passing, tweak src/common/Makefile so that the value of CPPFLAGS
recorded in pg_config lacks "-DFRONTEND" and the recorded value of
LDFLAGS lacks "-L../../../src/common". Both of those things are
mistakes, apparently introduced during prior code rearrangements,
as old versions of pg_config don't print them. In general we don't
want anything that's specific to the src/common subdirectory to
appear in those outputs.
This is certainly a bug fix, but in view of the lack of field
complaints, I'm unsure whether it's worth the risk of back-patching.
In any case it seems wise to see what the buildfarm makes of it first.
Discussion: https://postgr.es/m/25214.1522604295@sss.pgh.pa.us
The prefix operator, along with SP-GiST indexes, can be used as an
alternative for LIKE 'word%' clauses, and it doesn't have the string/prefix
length limitation that B-tree has.
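A hedged example, assuming the operator is the new ^@ (starts-with)
operator for text:
    SELECT 'postgresql' ^@ 'post';   -- true
    -- with an SP-GiST index on col, "col ^@ 'word'" can stand in for
    -- "col LIKE 'word%'" without any prefix-length limitation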
Bump catalog version
Author: Ildus Kurbangaliev with some editorialization by me
Review by: Arthur Zakirov, Alexander Korotkov, and me
Discussion: https://www.postgresql.org/message-id/flat/20180202180327.222b04b3@wp.localdomain
MERGE performs actions that modify rows in the target table
using a source table or query. MERGE provides a single SQL
statement that can conditionally INSERT/UPDATE/DELETE rows, a task that
would otherwise require multiple PL statements.
e.g.
MERGE INTO target AS t
USING source AS s
ON t.tid = s.sid
WHEN MATCHED AND t.balance > s.delta THEN
UPDATE SET balance = t.balance - s.delta
WHEN MATCHED THEN
DELETE
WHEN NOT MATCHED AND s.delta > 0 THEN
INSERT VALUES (s.sid, s.delta)
WHEN NOT MATCHED THEN
DO NOTHING;
MERGE works with regular and partitioned tables, including
column and row security enforcement, as well as support for
row, statement and transition triggers.
MERGE is optimized for OLTP and is parameterizable, though
also useful for large scale ETL/ELT. MERGE is not intended
to be used in preference to existing single SQL commands
for INSERT, UPDATE or DELETE since there is some overhead.
MERGE can be used statically from PL/pgSQL.
MERGE does not yet support inheritance, write rules,
RETURNING clauses, updatable views or foreign tables.
MERGE follows SQL Standard per the most recent SQL:2016.
Includes full tests and documentation, including full
isolation tests to demonstrate the concurrent behavior.
This version written from scratch in 2017 by Simon Riggs,
using docs and tests originally written in 2009. Later work
from Pavan Deolasee has been both complex and deep, leaving
the lead author credit now in his hands.
Extensive discussion of concurrency from Peter Geoghegan,
with thanks for the time and effort contributed.
Various issues reported via sqlsmith by Andreas Seltenreich
Authors: Pavan Deolasee, Simon Riggs
Reviewers: Peter Geoghegan, Amit Langote, Tomas Vondra, Simon Riggs
Discussion:
https://postgr.es/m/CANP8+jKitBSrB7oTgT9CY2i1ObfOt36z0XMraQc+Xrz8QB0nXA@mail.gmail.com
https://postgr.es/m/CAH2-WzkJdBuxj9PO=2QaO9-3h3xGbQPZ34kJH=HukRekwM-GZg@mail.gmail.com
MERGE performs actions that modify rows in the target table
using a source table or query. MERGE provides a single SQL
statement that can conditionally INSERT/UPDATE/DELETE rows, a task that
would otherwise require multiple PL statements.
e.g.
MERGE INTO target AS t
USING source AS s
ON t.tid = s.sid
WHEN MATCHED AND t.balance > s.delta THEN
UPDATE SET balance = t.balance - s.delta
WHEN MATCHED THEN
DELETE
WHEN NOT MATCHED AND s.delta > 0 THEN
INSERT VALUES (s.sid, s.delta)
WHEN NOT MATCHED THEN
DO NOTHING;
MERGE works with regular and partitioned tables, including
column and row security enforcement, as well as support for
row, statement and transition triggers.
MERGE is optimized for OLTP and is parameterizable, though
also useful for large scale ETL/ELT. MERGE is not intended
to be used in preference to existing single SQL commands
for INSERT, UPDATE or DELETE since there is some overhead.
MERGE can be used statically from PL/pgSQL.
MERGE does not yet support inheritance, write rules,
RETURNING clauses, updatable views or foreign tables.
MERGE follows SQL Standard per the most recent SQL:2016.
Includes full tests and documentation, including full
isolation tests to demonstrate the concurrent behavior.
This version written from scratch in 2017 by Simon Riggs,
using docs and tests originally written in 2009. Later work
from Pavan Deolasee has been both complex and deep, leaving
the lead author credit now in his hands.
Extensive discussion of concurrency from Peter Geoghegan,
with thanks for the time and effort contributed.
Various issues reported via sqlsmith by Andreas Seltenreich
Authors: Pavan Deolasee, Simon Riggs
Reviewers: Peter Geoghegan, Amit Langote, Tomas Vondra, Simon Riggs
Discussion:
https://postgr.es/m/CANP8+jKitBSrB7oTgT9CY2i1ObfOt36z0XMraQc+Xrz8QB0nXA@mail.gmail.comhttps://postgr.es/m/CAH2-WzkJdBuxj9PO=2QaO9-3h3xGbQPZ34kJH=HukRekwM-GZg@mail.gmail.com
A Bloom filter is a space-efficient, probabilistic data structure that
can be used to test set membership. Callers will sometimes incur false
positives, but never false negatives. The rate of false positives is a
function of the total number of elements and the amount of memory
available for the Bloom filter.
Two classic applications of Bloom filters are cache filtering, and data
synchronization testing. Any user of Bloom filters must accept the
possibility of false positives as a cost worth paying for the benefit in
space efficiency.
This commit adds a test harness extension module, test_bloomfilter. It
can be used to get a sense of how the Bloom filter implementation
performs under varying conditions.
This is infrastructure for the upcoming "heapallindexed" amcheck patch,
which verifies the consistency of a heap relation against one of its
indexes.
Author: Peter Geoghegan
Reviewed-By: Andrey Borodin, Michael Paquier, Thomas Munro, Andres Freund
Discussion: https://postgr.es/m/CAH2-Wzm5VmG7cu1N-H=nnS57wZThoSDQU+F5dewx3o84M+jY=g@mail.gmail.com
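A hedged usage sketch (the exact function signature is assumed from
the module, not stated in this message):
  CREATE EXTENSION test_bloomfilter;
  SET client_min_messages = 'debug1';
  -- ~2^23-bit filter, ~10% as many elements; reports false positive rate
  SELECT test_bloomfilter(power => 23, nelements => 838861,
                          seed => -1, tests => 1);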
Previously there was no way in the standby side to find out the host and port
of the sender server that the walreceiver was currently connected to when
multiple hosts and ports were specified in primary_conninfo. For that purpose,
this patch adds sender_host and sender_port columns into pg_stat_wal_receiver
view. They report the host and port that the active replication connection
currently uses.
Bump catalog version.
Author: Haribabu Kommi
Reviewed-by: Michael Paquier and me
Discussion: https://postgr.es/m/CAJrrPGcV_aq8=cdqkFhVDJKEnDQ70yRTTdY9RODzMnXNrCz2Ow@mail.gmail.com
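For example, on a standby:
  SELECT sender_host, sender_port FROM pg_stat_wal_receiver;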
Predicate locks are used on a per-page basis only if fastupdate = off;
otherwise a predicate lock on the pending list would effectively lock
the whole index, so to reduce locking overhead we just lock the relation
instead. Entry and posting trees are essentially B-trees, so locks are
acquired on leaf pages only.
Author: Shubham Barai with some editorization by me and Dmitry Ivanov
Review by: Alexander Korotkov, Dmitry Ivanov, Fedor Sigaev
Discussion: https://www.postgresql.org/message-id/flat/CALxAEPt5sWW+EwTaKUGFL5_XFcZ0MuGBcyJ70oqbWqr42YKR8Q@mail.gmail.com
If the toplevel scan/join target list is parallel-safe, postpone
generating Gather (or Gather Merge) paths until after the toplevel has
been adjusted to return it. This (correctly) makes queries with
expensive functions in the target list more likely to choose a
parallel plan, since the cost of the plan now reflects the fact that
the evaluation will happen in the workers rather than the leader.
The original complaint about this problem was from Jeff Janes.
If the toplevel scan/join relation is partitioned, recursively apply
the changes to all partitions. This sometimes allows us to get rid of
Result nodes, because Append is not projection-capable but its
children may be. It also cleans up what appears to be incorrect SRF
handling from commit e2f1eb0ee30d144628ab523432320f174a2c8966: the old
code had no knowledge of SRFs for child scan/join rels.
Because we now use create_projection_path() in some cases where we
formerly used apply_projection_to_path(), this changes the ordering
of columns in some queries generated by postgres_fdw. Update
regression outputs accordingly.
Patch by me, reviewed by Amit Kapila and by Ashutosh Bapat. Other
fixes for this problem (substantially different from this version)
were reviewed by Dilip Kumar, Amit Khandekar, and Marina Polyakova.
Discussion: http://postgr.es/m/CAMkU=1ycXNipvhWuweUVpKuyu6SpNjF=yHWu4c4US5JgVGxtZQ@mail.gmail.com
Add explicit casts from scalar jsonb to all numeric and bool types. It
would be better to have a cast from scalar jsonb to text too, but there
is already a cast from jsonb to text that simply gives the text
representation of the json. There is no way to have two different casts
for the same pair of types.
Bump catalog version
Author: Anastasia Lubennikova with editorization by Nikita Glukhov and me
Review by: Aleksander Alekseev, Nikita Glukhov, Darafei Praliaskouski
Discussion: https://www.postgresql.org/message-id/flat/0154d35a-24ae-f063-5273-9ffcdf1c7f2e@postgrespro.ru
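For example:
  SELECT '1.75'::jsonb::numeric;   -- 1.75
  SELECT 'true'::jsonb::boolean;   -- t
  SELECT '2'::jsonb::int4;         -- 2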
Resolve build farm failures from c203d6cf81,
diagnosed by Tom Lane.
The output of pg_stat_get_xact_tuples_hot_updated() and friends
is not guaranteed to show anything after the transaction completes.
Data is flushed to the stats collector only slowly, so relying on
these functions can cause timing issues.
Currently adding a column to a table with a non-NULL default results in
a rewrite of the table. For large tables this can be both expensive and
disruptive. This patch removes the need for the rewrite as long as the
default value is not volatile. The default expression is evaluated at
the time of the ALTER TABLE and the result stored in a new column
(attmissingval) in pg_attribute, and a new column (atthasmissing) is set
to true. Any existing row when fetched will be supplied with the
attmissingval. New rows will have the supplied value or the default and
so will never need the attmissingval.
Any time the table is rewritten all the atthasmissing and attmissingval
settings for the attributes are cleared, as they are no longer needed.
The most visible code change from this is in heap_attisnull, which
acquires a third TupleDesc argument, allowing it to detect a missing
value if there is one. In many cases where it is known that there will
not be any (e.g. catalog relations) NULL can be passed for this
argument.
Andrew Dunstan, heavily modified from an original patch from Serge
Rielau.
Reviewed by Tom Lane, Andres Freund, Tomas Vondra and David Rowley.
Discussion: https://postgr.es/m/31e2e921-7002-4c27-59f5-51f08404c858@2ndQuadrant.com
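For example (table name illustrative):
  -- no rewrite: the constant default is stored as attmissingval
  ALTER TABLE big_table ADD COLUMN flag boolean DEFAULT false;
  -- still rewrites: clock_timestamp() is volatile
  ALTER TABLE big_table ADD COLUMN seen timestamptz
    DEFAULT clock_timestamp();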
If the value of an index expression is unchanged after UPDATE,
allow HOT updates where previously we disallowed them, giving
a significant performance boost in those cases.
Particularly useful for indexes such as JSON->>field where the
JSON value changes but the indexed value does not.
Submitted as "surjective indexes" patch, now enabled by use
of new "recheck_on_update" parameter.
Author: Konstantin Knizhnik
Reviewer: Simon Riggs, with much wordsmithing and some cleanup
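A sketch of the intended usage (table and expression illustrative,
assuming boolean reloption syntax):
  CREATE TABLE orders (id int, info json);
  CREATE INDEX orders_status_idx ON orders ((info->>'status'))
    WITH (recheck_on_update = on);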
Add page-level predicate locking. Thanks to GiST's code organization,
the patch is close to trivial: add a check before changing a page, and
acquire a predicate lock before scanning a page. Choosing the right
place for the check is not simple, though: it must not fire during index
build, it must support insertion of a new downlink, and so on.
Author: Shubham Barai with editorization by me and Alexander Korotkov
Reviewed by: Alexander Korotkov, Andrey Borodin, me
Discussion: https://www.postgresql.org/message-id/flat/CALxAEPtdcANpw5ePU3LvnTP8HCENFw6wygupQAyNBgD-sG3h0g@mail.gmail.com
The 'triggers' test fails when another test creates triggers
concurrently at just the wrong moment, because of a missing WHERE clause.
Per buildfarm members snapper, desmoxytes.
Commit eb7ed3f306 enabled unique constraints on partitioned tables,
but one thing that was not working properly is INSERT/ON CONFLICT.
This commit introduces a new node that keeps state related to the ON CONFLICT
clause per partition, and fills it when that partition is about to be
used for tuple routing.
Author: Amit Langote, Álvaro Herrera
Reviewed-by: Etsuro Fujita, Pavan Deolasee
Discussion: https://postgr.es/m/20180228004602.cwdyralmg5ejdqkq@alvherre.pgsql
Remember the last page of an index insert if it's the rightmost leaf
page. If the next entry belongs on and can fit in the remembered page,
insert the new entry there as long as we can get a lock on the page.
Otherwise, fall back on the more expensive method of searching for
the right place to insert the entry.
This provides a performance improvement for the common case where an
index entry is for monotonically increasing or nearly monotonically
increasing values, such as an identity field or a current timestamp.
Pavan Deolasee
Reviewed by Claudio Freire, Simon Riggs and Peter Geoghegan
Discussion: https://postgr.es/m/CABOikdM9DrupjyKZZFM5k8-0RCDs1wk6JzEkg7UgSW6QzOwMZw@mail.gmail.com
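The benefiting workload looks like (illustrative):
  CREATE TABLE events (id bigserial PRIMARY KEY, payload text);
  -- each new id lands on the rightmost leaf; the cached-page fastpath
  -- avoids descending the tree from the root for most of these
  INSERT INTO events (payload) SELECT 'x' FROM generate_series(1, 100000);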
If random() returns a result sufficiently close to zero, float8out
switches to scientific notation, breaking this test case's expectation
that the output should look like '0.xxxxxxxxx'. Casting to numeric
should fix that. Per buildfarm member pogona.
Discussion: https://postgr.es/m/20180324212502.wt4serghfidge2on@alap3.anarazel.de
For years, our makefiles have correctly observed that "there is no correct
way to write a rule that generates two files". However, what we did is to
provide empty rules that "generate" the secondary output files from the
primary one, and that's not right either. Depending on the details of
the creating process, the primary file might end up timestamped later than
one or more secondary files, causing subsequent make runs to consider the
secondary file(s) out of date. That's harmless in a plain build, since
make will just re-execute the empty rule and nothing happens. But it's
fatal in a VPATH build, since make will expect the secondary file to be
rebuilt in the build directory. This would manifest as "file not found"
failures during VPATH builds from tarballs, if we were ever unlucky enough
to ship a tarball with apparently out-of-date secondary files. (It's not
clear whether that has ever actually happened, but it definitely could.)
To ensure that secondary output files have timestamps >= their primary's,
change our makefile convention to be that we provide a "touch $@" action
not an empty rule. Also, make sure that this rule actually gets invoked
during a distprep run, else the hazard remains.
It's been like this a long time, so back-patch to all supported branches.
In HEAD, I skipped the changes in src/backend/catalog/Makefile, because
those rules are due to get replaced soon in the bootstrap data format
patch, and there seems no need to create a merge issue for that patch.
If for some reason we fail to land that patch in v11, we'll need to
back-fill the changes in that one makefile from v10.
Discussion: https://postgr.es/m/18556.1521668179@sss.pgh.pa.us
Previously, FOR EACH ROW triggers were not allowed in partitioned
tables. Now we allow AFTER triggers on them, and on trigger creation we
cascade to create an identical trigger in each partition. We also clone
the triggers to each partition that is created or attached later.
This means that deferred unique keys are allowed on partitioned tables,
too.
Author: Álvaro Herrera
Reviewed-by: Peter Eisentraut, Simon Riggs, Amit Langote, Robert Haas,
Thomas Munro
Discussion: https://postgr.es/m/20171229225319.ajltgss2ojkfd3kp@alvherre.pgsql
If the partition keys of the input relation are part of the GROUP BY
clause, all the rows belonging to a given group come from a single
partition. This allows aggregation/grouping over a partitioned
relation to be broken down into aggregation/grouping on each
partition. This should be no worse, and often better, than the normal
approach.
If the GROUP BY clause does not contain all the partition keys, we can
still perform partial aggregation for each partition and then finalize
aggregation after appending the partial results. This is less certain
to be a win, but it's still useful.
Jeevan Chalke, Ashutosh Bapat, Robert Haas. The larger patch series
of which this patch is a part was also reviewed and tested by Antonin
Houska, Rajkumar Raghuwanshi, David Rowley, Dilip Kumar, Konstantin
Knizhnik, Pascal Legrand, and Rafia Sabih.
Discussion: http://postgr.es/m/CAM2+6=V64_xhstVHie0Rz=KPEQnLJMZt_e314P0jaT_oJ9MR8A@mail.gmail.com
Code that prints out the contents of setconfig or proconfig arrays in
SQL format needs to handle GUC_LIST_QUOTE variables differently from
other ones, because for those variables, flatten_set_variable_args()
already applied a layer of quoting. The value can therefore safely
be printed as-is, and indeed must be, or flatten_set_variable_args()
will muck it up completely on reload. For all other GUC variables,
it's necessary and sufficient to quote the value as a SQL literal.
We'd recognized the need for this long ago, but mis-analyzed the
need slightly, thinking that all GUC_LIST_INPUT variables needed
the special treatment. That's actually wrong, since a valid value
of a LIST variable might include characters that need quoting,
although no existing variables accept such values.
More to the point, we hadn't made any particular effort to keep the
various places that deal with this up-to-date with the set of variables
that actually need special treatment, meaning that we'd do the wrong
thing with, for example, temp_tablespaces values. This affects dumping
of SET clauses attached to functions, as well as ALTER DATABASE/ROLE SET
commands.
In ruleutils.c we can fix it reasonably honestly by exporting a guc.c
function that allows discovering the flags for a given GUC variable.
But pg_dump doesn't have easy access to that, so continue the old method
of having a hard-wired list of affected variable names. At least we can
fix it to have just one list not two, and update the list to match
current reality.
A remaining problem with this is that it only works for built-in
GUC variables. pg_dump's list obviously knows nothing of third-party
extensions, and even the "ask guc.c" method isn't bulletproof since
the relevant extension might not be loaded. There's no obvious
solution to that, so for now, we'll just have to discourage extension
authors from inventing custom GUCs that need GUC_LIST_QUOTE.
This has been busted for a long time, so back-patch to all supported
branches.
Michael Paquier and Tom Lane, reviewed by Kyotaro Horiguchi and
Pavel Stehule
Discussion: https://postgr.es/m/20180111064900.GA51030@paquier.xyz
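To illustrate the distinction (role and tablespace names illustrative):
  ALTER ROLE app_user SET temp_tablespaces = ts1, ts2;
  -- temp_tablespaces is GUC_LIST_QUOTE: flatten_set_variable_args()
  -- already quoted each list element, so dumping code must print the
  -- stored value as-is rather than wrapping it in an SQL literal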
Currently, if operator_predicate_proof() is given an operator clause like
"something op NULL", it just throws up its hands and reports it can't prove
anything. But we can often do better than that, if the operator is strict,
because then we know that the clause returns NULL overall. Depending on
whether we're trying to prove or refute something, and whether we need
weak or strong semantics for NULL, this may be enough to prove the
implication, especially when we rely on the standard rule that "false
implies anything". In particular, this lets us do something useful with
questions like "does X IN (1,3,5,NULL) imply X <= 5?" The null entry
in the IN list can effectively be ignored for this purpose, but the
proof rules were not previously smart enough to deduce that.
This patch is by me, but it owes something to previous work by
Amit Langote to try to solve problems of the form mentioned.
Thanks also to Emre Hasegeli and Ashutosh Bapat for review.
Discussion: https://postgr.es/m/3bad48fc-f257-c445-feeb-8a2b2fb622ba@lab.ntt.co.jp
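For instance (illustrative table and partial index):
  CREATE TABLE t (x int);
  CREATE INDEX t_small_idx ON t (x) WHERE x <= 5;
  -- the planner can now prove the index predicate from this clause,
  -- since "x = NULL" cannot yield true for the strict operator "="
  SELECT * FROM t WHERE x IN (1, 3, 5, NULL);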
If there were multiple grouping sets, none of them empty, all of which
were unsortable, then an oversight in consider_groupingsets_paths led
to a null pointer dereference. Fix, and add a regression test for this
case.
Per report from Dang Minh Huong, though I didn't use their patch.
Backpatch to 10.x where hashed grouping sets were added.
The new unlogged_reinit recovery tests create a new tablespace using
TestLib.pm's tempdir. However, on msys that function returns a virtual
path that isn't understood by Postgres. Here we add a new function to
TestLib.pm to turn such a path into a real path on the underlying file
system, and use it in the new test to create the tablespace. The new
function is essentially a NOOP everywhere but msys.
We make some changes to ModifyTableState and the EState it uses whenever
we route tuples to partitions; but we weren't restoring properly in all
cases, possibly causing crashes when partitions with different tuple
descriptors are targeted by tuples inserted in the same command.
Refactor some code, creating ExecPrepareTupleRouting, to encapsulate the
needed state changing logic, and have it invoked one level above its
current place (ie. put it in ExecModifyTable instead of ExecInsert);
this makes it all more readable.
Add a test case to exercise this.
We don't support having views as partitions; and since only views can
have INSTEAD OF triggers, there is no point in testing for INSTEAD OF
when processing insertions into a partitioned table. Remove code that
appears to support this (but which is actually never relevant.)
In passing, fix location of some very confusing comments in
ModifyTableState.
Reported-by: Amit Langote
Author: Etsuro Fujita, Amit Langote
Discussion: https://postgr.es/m/0473bf5c-57b1-f1f7-3d58-455c2230bc5f@lab.ntt.co.jp
"UPDATE/DELETE WHERE CURRENT OF cursor_name" failed, with an error message
like "cannot extract system attribute from virtual tuple", if the cursor
was using an index-only scan for the target table. Fix it by digging the
current TID out of the indexscan state.
It seems likely that the same failure could occur for CustomScan plans
and perhaps some FDW plan types, so that leaving this to be treated as an
internal error with an obscure message isn't as good an idea as it first
seemed. Hence, add a bit of heaptuple.c infrastructure to let us deliver
a more on-topic message. I chose to make the message match what you get
for the case where execCurrentOf can't identify the target scan node at
all, "cursor "foo" is not a simply updatable scan of table "bar"".
Perhaps it should be different, but we can always adjust that later.
In the future, it might be nice to provide hooks that would let custom
scan providers and/or FDWs deal with this in other ways; but that's
not a suitable topic for a back-patchable bug fix.
It's been like this all along, so back-patch to all supported branches.
Yugo Nagata and Tom Lane
Discussion: https://postgr.es/m/20180201013349.937dfc5f.nagata@sraoss.co.jp
This allows specifying an external command for prompting for or
otherwise obtaining passphrases for SSL key files. This is useful
because in many cases there is no TTY easily available during service
startup.
Also add a setting ssl_passphrase_command_supports_reload, which allows
supporting SSL configuration reload even if SSL files need passphrases.
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
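For example, one might configure (the command string is illustrative):
  ALTER SYSTEM SET ssl_passphrase_command =
    '/usr/local/bin/fetch-ssl-passphrase %p';
  ALTER SYSTEM SET ssl_passphrase_command_supports_reload = on;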
Part of the intent in commit fd1a421fe was to allow SQL functions that are
declared to return VOID to contain anything, including an unrelated final
SELECT, the same as SQL-language procedures can. However, the planner's
inlining logic didn't get that memo. Fix it, and add some regression tests
covering this area, since evidently we had none.
In passing, clean up some typos in comments in create_function_3.sql,
and get rid of its none-too-safe assumption that DROP CASCADE notice
output is immutably ordered.
Per report from Prabhat Sahu.
Discussion: https://postgr.es/m/CANEvxPqxAj6nNHVcaXxpTeEFPmh24Whu+23emgjiuKrhJSct0A@mail.gmail.com
Many of the objects we create during the regression tests are put in the
public schema, so that using the same names in different regression tests
creates a hazard of test failures if any two such scripts run concurrently.
This patch cleans up a bunch of latent hazards of that sort, as well as two
live hazards.
The current situation in this regard is far worse than it was a year or two
back, because practically all of the partitioning-related test cases have
reused table names with enthusiasm. I despaired of cleaning up that mess
within the five most-affected tests (create_table, alter_table, insert,
update, inherit); fortunately those don't run concurrently.
Other than partitioning problems, most of the issues boil down to using
names like "foo", "bar", "tmp", etc, without thought for the fact that
other test scripts might use similar names concurrently. I've made an
effort to make all such names more specific.
One of the live hazards was that commit 7421f4b8 caused with.sql to
create a table named "test", conflicting with a similarly-named table
in alter_table.sql; this was exposed in the buildfarm recently.
The other one was that join.sql and transactions.sql both create tables
named "foo" and "bar"; but join.sql's uses of those names date back
only to December or so.
Since commit 7421f4b8 was back-patched to v10, back-patch a minimal
fix for that problem. The rest of this is just future-proofing.
Discussion: https://postgr.es/m/4627.1521070268@sss.pgh.pa.us
Since these names are global, using the same ones in different regression
tests creates a hazard of test failures if any two such scripts run
concurrently. Let's establish a policy of not doing that. In the cases
where a conflict existed, I chose to rename both sides: in principle one
script or the other could've been left in possession of the common name,
but that seems to just invite more trouble of the same sort.
There are a number of places where scripts are using names that seem
unduly generic, but in the absence of actual conflicts I left them alone.
In addition, fix insert.sql's use of "someone_else" as a role name.
That's a flat out violation of longstanding project policy, so back-patch
that change to v10 where the usage appeared. The rest of this is just
future-proofing, as no two of these scripts are actually run concurrently
in the existing parallel_schedule.
Conflicts of schema-qualified names also exist, but will be dealt with
separately.
Discussion: https://postgr.es/m/4627.1521070268@sss.pgh.pa.us
In a top-level CALL, the values of INOUT arguments will be returned as a
result row. In PL/pgSQL, the values are assigned back to the input
arguments. In other languages, the same convention as for returning a
record from a function is used. That does not require any code changes
in the PL implementations.
Reviewed-by: Pavel Stehule <pavel.stehule@gmail.com>
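For example:
  CREATE PROCEDURE triple(INOUT x int) LANGUAGE plpgsql AS
  $$ BEGIN x := x * 3; END $$;
  CALL triple(14);  -- top-level CALL returns a one-row result: x = 42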
Previously there were no tests verifying that NULL handling in AND/OR
was correct (i.e. that NULL rather than false is returned if the
expression doesn't return true).
Author: Andres Freund
A simple UNION ALL gets flattened into an appendrel of subquery
RTEs, but up until now it's been impossible for the appendrel to use
the partial paths for the subqueries, so while we could implement the
appendrel as a Parallel Append, it could only be one with non-partial
paths as children.
There are three separate obstacles to removing that limitation.
First, when planning a subquery, propagate any partial paths to the
final_rel so that they are potentially visible to outer query levels
(but not if they have initPlans attached, because that wouldn't be
safe). Second, after planning a subquery, propagate any partial paths
for the final_rel to the subquery RTE in the outer query level in the
same way we do for non-partial paths. Third, teach finalize_plan() to
account for the possibility that the fake parameter we use for rescan
signalling when the plan contains a Gather (Merge) node may be
propagated from an outer query level.
Patch by me, reviewed and tested by Amit Khandekar, Rajkumar
Raghuwanshi, and Ashutosh Bapat. Test cases based on examples by
Rajkumar Raghuwanshi.
Discussion: http://postgr.es/m/CA+Tgmoa6L9A1nNCk3aTDVZLZ4KkHDn1+tm7mFyFvP+uQPS7bAg@mail.gmail.com
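A query shape that can now use the subqueries' partial paths
(table names illustrative):
  EXPLAIN (COSTS OFF)
  SELECT x FROM big1 UNION ALL SELECT y FROM big2;
  -- may now yield Gather over Parallel Append with partial child scans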
CREATE TABLE / LIKE with a bigint identity column would fail on
platforms where long is 32 bits. Copying the sequence values used
makeInteger(), which would truncate the 64-bit sequence data to 32 bits.
To fix, use makeFloat() instead, like the parser. (This does not
actually make use of floats, but stores the values as strings.)
Bug: #15096
Reviewed-by: Michael Paquier <michael@paquier.xyz>
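A reproducer sketch (names illustrative):
  CREATE TABLE src (id bigint GENERATED ALWAYS AS IDENTITY
                    (START WITH 300000000000));
  CREATE TABLE dst (LIKE src INCLUDING IDENTITY);
  -- previously, 32-bit-long platforms truncated the copied START value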
If a table containing a primary key is attached as a partition to a
partitioned table which has a primary key with a different definition,
we would happily create a second one in the new partition. Oops. It
turns out that this is because an error check in DefineIndex is executed
only if you tell it that it's being run by ALTER TABLE, and the original
code here wasn't. Change it so that it does.
Added a couple of test cases for this, also. A previously working test
started to fail in a different way than before the patch, because the
new check is called earlier; change the PK to plain UNIQUE so that the new
behavior isn't invoked, so that the test continues to verify what we
want it to verify.
Reported by: Noriyoshi Shinoda
Discussion: https://postgr.es/m/DF4PR8401MB102060EC2615EC9227CC73F7EEDF0@DF4PR8401MB1020.NAMPRD84.PROD.OUTLOOK.COM
One of the things canonicalize_qual() does is to remove constant-NULL
subexpressions of top-level AND/OR clauses. It does that on the assumption
that what it's given is a top-level WHERE clause, so that NULL can be
treated like FALSE. Although this is documented down inside a subroutine
of canonicalize_qual(), it wasn't mentioned in the documentation of that
function itself, and some callers hadn't gotten that memo.
Notably, commit d007a9505 caused get_relation_constraints() to apply
canonicalize_qual() to CHECK constraints. That allowed constraint
exclusion to misoptimize situations in which a CHECK constraint had a
provably-NULL subclause, as seen in the regression test case added here,
in which a child table that should be scanned is not. (Although this
thinko is ancient, the test case doesn't fail before 9.2, for reasons
I've not bothered to track down in detail. There may be related cases
that do fail before that.)
More recently, commit f0e44751d added an independent bug by applying
canonicalize_qual() to index expressions, which is even sillier since
those might not even be boolean. If they are, though, I think this
could lead to making incorrect index entries for affected index
expressions in v10. I haven't attempted to prove that though.
To fix, add an "is_check" parameter to canonicalize_qual() to specify
whether it should assume WHERE or CHECK semantics, and make it perform
NULL-elimination accordingly. Adjust the callers to apply the right
semantics, or remove the call entirely in cases where it's not known
that the expression has one or the other semantics. I also removed
the call in some cases involving partition expressions, where it should
be a no-op because such expressions should be canonical already ...
and was a no-op, independently of whether it could in principle have
done something, because it was being handed the qual in implicit-AND
format which isn't what it expects. In HEAD, add an Assert to catch
that type of mistake in future.
This represents an API break for external callers of canonicalize_qual().
While that's intentional in HEAD to make such callers think about which
case applies to them, it seems like something we probably wouldn't be
thanked for in released branches. Hence, in released branches, the
extra parameter is added to a new function canonicalize_qual_ext(),
and canonicalize_qual() is a wrapper that retains its old behavior.
Patch by me with suggestions from Dean Rasheed. Back-patch to all
supported branches.
Discussion: https://postgr.es/m/24475.1520635069@sss.pgh.pa.us
Commit b08df9cab left things rather poorly documented as far as the
exact semantics of "clause_is_check" mode went. Also, that mode did
not really work correctly for predicate_refuted_by; although given the
lack of specification as to what it should do, as well as the lack
of any actual use-case, that's perhaps not surprising.
Rename "clause_is_check" to "weak" proof mode, and provide specifications
for what it should do. I defined weak refutation as meaning "truth of A
implies non-truth of B", which makes it possible to use the mode in the
part of relation_excluded_by_constraints that checks for mutually
contradictory WHERE clauses. Fix up several places that did things wrong
for that definition. (As far as I can see, these errors would only lead
to failure-to-prove, not incorrect claims of proof, making them not
serious bugs even aside from the fact that v10 contains no use of this
mode. So there seems no need for back-patching.)
In addition, teach predicate_refuted_by_recurse that it can use
predicate_implied_by_recurse after all when processing a strong NOT-clause,
so long as it asks for the correct proof strength. This is an optimization
that could have been included in commit b08df9cab, but wasn't.
Also, simplify and generalize the logic that checks for whether nullness of
the argument of IS [NOT] NULL would force overall nullness of the predicate
or clause. (This results in a change in the partition_prune test's output,
as it is now able to prune an all-nulls partition that it did not recognize
before.)
In passing, in PartConstraintImpliedByRelConstraint, remove bogus
conversion of the constraint list to explicit-AND form and then right back
again; that accomplished nothing except forcing a useless extra level of
recursion inside predicate_implied_by.
Discussion: https://postgr.es/m/5983.1520487191@sss.pgh.pa.us
I'd initially supposed that predicate_refuted_by(..., true) ought to
say that "A refutes B" means "non-falsity of A implies non-truth of B".
But it seems better to define it as "truth of A implies non-truth of B".
This is more useful in the current system, slightly easier to prove,
and in closer correspondence to the existing code behavior.
With this change, test_predtest no longer claims that any existing
test cases show false proof reports, though there still are cases
where we could prove something and fail to.
Discussion: https://postgr.es/m/5983.1520487191@sss.pgh.pa.us
The predicate-proof code in predtest.c is a bit hard to test as-is:
you have to construct a query whose plan changes depending on the success
of a test, and in tests that have been around for awhile, it's always
possible that the plan shape is now being determined by some other factor.
Our existing regression tests aren't doing real well at providing full
code coverage of predtest.c, either. So, let's add a small test module
that allows directly inspecting the results of predicate_implied_by()
and predicate_refuted_by() for arbitrary boolean expressions.
I chose the set of tests committed here in order to get reasonably
complete code coverage of predtest.c just from running this test
module, and to cover some cases called out as being interesting in
the existing comments. We might want to add more later. But this
set already shows a few cases where perhaps things could be improved.
Indeed, this exercise proves that predicate_refuted_by() is buggy for
the case of clause_is_check = true, though fortunately we aren't using
that case anywhere yet. I'll look into doing something about that in
a separate commit. For now, just memorialize the current behavior.
Discussion: https://postgr.es/m/5983.1520487191@sss.pgh.pa.us
The branch that does not support tls-server-end-point runs more tests,
so we need to structure the test counting dynamically.
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Like the LDAP and SSL tests, these are not run by default but can be
selected via PG_TEST_EXTRA.
Reviewed-by: Thomas Munro <thomas.munro@enterprisedb.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
This is analogous to the syntax allowed for VACUUM. This allows us to
avoid making new options reserved keywords and makes it easier to
allow arbitrary argument order. Oh, and it's consistent with the other
commands, too.
Author: Nathan Bossart
Reviewed-By: Michael Paquier, Masahiko Sawada
Discussion: https://postgr.es/m/D3FC73E2-9B1A-4DB4-8180-55F57D116B4E@amazon.com
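Assuming this refers to ANALYZE, the new syntax looks like:
  ANALYZE (VERBOSE) mytable;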
The LIKE INCLUDING ALL clause to CREATE TABLE intuitively indicates
cloning of extended statistics on the source table, but it failed to do
so. Patch it up so that it does. Also include an INCLUDING STATISTICS
option to the LIKE clause, so that the behavior can be requested
individually, or excluded individually.
While at it, reorder the INCLUDING options, both in code and in docs, in
alphabetical order which makes more sense than feature-implementation
order that was previously used.
Backpatch this to Postgres 10, where extended statistics were
introduced, because this is seen as an oversight in a fresh feature
which is better to get consistent from the get-go instead of changing
only in pg11.
In pg11, comments on statistics objects are cloned too. In pg10 they
are not, because I (Álvaro) was too cowardly to change the parse node as
required to support it. Also, in pg10 I chose not to renumber the
parser symbols for the various INCLUDING options in LIKE, for the same
reason. Any corresponding user-visible changes (docs) are backpatched,
though.
Reported-by: Stephen Froehlich
Author: David Rowley
Reviewed-by: Álvaro Herrera, Tomas Vondra
Discussion: https://postgr.es/m/CY1PR0601MB1927315B45667A1B679D0FD5E5EF0@CY1PR0601MB1927.namprd06.prod.outlook.com
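For example (source table name illustrative):
  CREATE TABLE clone1 (LIKE src INCLUDING STATISTICS);
  CREATE TABLE clone2 (LIKE src INCLUDING ALL);  -- now also clones stats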
Add checks in each test file that the build supports the feature,
otherwise skip all the tests. Before, if someone were to (accidentally)
invoke these tests without build support, they would fail in confusing
ways.
based on patch from Michael Paquier <michael@paquier.xyz>
The SSL and LDAP test suites are not run by default, as they are not
secure for multi-user environments. This commit adds an extra make
variable to optionally enable them, for example:
make check-world PG_TEST_EXTRA='ldap ssl'
Author: Michael Paquier <michael@paquier.xyz>
The new column distinguishes normal functions, procedures, aggregates,
and window functions. This replaces the existing columns proisagg and
proiswindow, and replaces the convention that procedures are indicated
by prorettype == 0. Also change prorettype to be VOIDOID for procedures.
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
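For example:
  SELECT proname, prokind FROM pg_proc WHERE proname IN ('sum', 'upper');
  -- prokind is 'f' for plain functions, 'p' for procedures,
  -- 'a' for aggregates, and 'w' for window functions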
Commit a26116c6c accidentally changed the behavior of the SQL format_type()
function while refactoring. For the reasons explained in that function's
comment, a NULL typemod argument should behave differently from a -1
argument. Since we've managed to break this, add a regression test
memorializing the intended behavior.
In passing, be consistent about the type of the "flags" parameter.
Noted by Rushabh Lathia, though I revised the patch some more.
Discussion: https://postgr.es/m/CAGPqQf3RB2q-d2Awp_-x-Ur6aOxTUwnApt-vm-iTtceZxYnePg@mail.gmail.com
specscanner.l had a fixed limit of 1024 bytes on the length of
individual SQL stanzas in an isolation test script. People are
starting to run into that, so fix it by making the buffer resizable.
Once we allow this in HEAD, it seems inevitable that somebody will
try to back-patch a test that exceeds the old limit, so back-patch
this change as a preventive measure.
Daniel Gustafsson
Discussion: https://postgr.es/m/8D628BE4-6606-4FF6-A3FF-8B2B0E9B43D0@yesql.se
Parallel-aware plan nodes must be prepared to run without parallelism
if it's not possible at execution time for whatever reason. Commit
ab72716778, which introduced Parallel
Append, overlooked this.
Rajkumar Raghuwanshi reported this problem, and I included his test
case in this patch. The code changes are by me.
Discussion: http://postgr.es/m/CAKcux6=WqkUudLg1GLZZ7fc5ScWC1+Y9qD=pAHeqy32WoeJQvw@mail.gmail.com
Tom Kazimiers reported that transition tables don't work correctly when
they are scanned by more than one executor node. That's because commit
18ce3a4ab allocated separate read pointers for each executor node, as it
must, but failed to make them active at the appropriate times. Repair.
Thomas Munro
Discussion: https://postgr.es/m/20180224034748.bixarv6632vbxgeb%40dewberry.localdomain
A before-update row trigger may choose to return the "new" or "old" tuple
unmodified. ExecBRUpdateTriggers failed to consider the second
possibility, and would proceed to free the "old" tuple even if it was the
one returned, leading to subsequent access to already-deallocated memory.
In debug builds this reliably leads to an "invalid memory alloc request
size" failure; in production builds it might accidentally work, but data
corruption is also possible.
This is a very old bug. There are probably a couple of reasons it hasn't
been noticed up to now. It would be more usual to return NULL if one
wanted to suppress the update action; returning "old" is significantly less
efficient since the update will occur anyway. Also, none of the standard
PLs would ever cause this because they all returned freshly-manufactured
tuples even if they were just copying "old". But commit 4b93f5799 changed
that for plpgsql, making it possible to see the bug with a plpgsql trigger.
Still, this is certainly legal behavior for a trigger function, so it's
ExecBRUpdateTriggers's fault not plpgsql's.
It seems worth creating a test case that exercises returning "old" directly
with a C-language trigger; testing this through plpgsql seems unreliable
because its behavior might change again.
Report and fix by Rushabh Lathia; regression test case by me.
Back-patch to all supported branches.
Discussion: https://postgr.es/m/CAGPqQf1P4pjiNPrMof=P_16E-DFjt457j+nH2ex3=nBTew7tXw@mail.gmail.com
It's a bit silly to have test functions that aren't tested, so test
them.
In passing, rename int44in/int44out to city_budget_in/_out so that they
match how the regression tests use them. Also, fix city_budget_out
so that it emits the format city_budget_in expects to read; otherwise
we'd have dump/reload failures when testing pg_dump against the
regression database. (We avoided that in the past only because no
data of type city_budget was actually stored anywhere.)
Discussion: https://postgr.es/m/29322.1519701006@sss.pgh.pa.us
This patch removes five functions that presumably were once used in the
regression tests, but haven't been so used in many years. Nonetheless
we've been wasting maintenance effort on them (e.g., by converting them
to V1 function protocol). I see no reason to think that reviving them
would add any useful test coverage, so drop 'em.
In passing, mark regress_lseg_construct static, since it's not called
from outside this file.
Discussion: https://postgr.es/m/29322.1519701006@sss.pgh.pa.us
The ability to create like-named objects in different schemas opens up
the potential for users to change the behavior of other users' queries,
maliciously or accidentally. When you connect to a PostgreSQL server,
you should remove from your search_path any schema for which a user
other than yourself or superusers holds the CREATE privilege. If you do
not, other users holding CREATE privilege can redefine the behavior of
your commands, causing them to perform arbitrary SQL statements under
your identity. "SET search_path = ..." and "SELECT
pg_catalog.set_config(...)" are not vulnerable to such hijacking, so one
can use either as the first command of a session. As special
exceptions, the following client applications behave as documented
regardless of search_path settings and schema privileges: clusterdb
createdb createlang createuser dropdb droplang dropuser ecpg (not
programs it generates) initdb oid2name pg_archivecleanup pg_basebackup
pg_config pg_controldata pg_ctl pg_dump pg_dumpall pg_isready
pg_receivewal pg_recvlogical pg_resetwal pg_restore pg_rewind pg_standby
pg_test_fsync pg_test_timing pg_upgrade pg_waldump reindexdb vacuumdb
vacuumlo. Not included are core client programs that run user-specified
SQL commands, namely psql and pgbench. PostgreSQL encourages non-core
client applications to do likewise.
Document this in the context of libpq connections, psql connections,
dblink connections, ECPG connections, extension packaging, and schema
usage patterns. The principal defense for applications is "SELECT
pg_catalog.set_config('search_path', '', false)", and the principal
defense for databases is "REVOKE CREATE ON SCHEMA public FROM PUBLIC".
Either one is sufficient to prevent attack. After a REVOKE, consider
auditing the public schema for objects named like pg_catalog objects.
Authors of SECURITY DEFINER functions use some of the same defenses, and
the CREATE FUNCTION reference page already covered them thoroughly.
This is a good opportunity to audit SECURITY DEFINER functions for
robust security practice.
Back-patch to 9.3 (all supported versions).
Reviewed by Michael Paquier and Jonathan S. Katz. Reported by Arseniy
Sharoglazov.
Security: CVE-2018-1058
Historically, pg_dump has "set search_path = foo, pg_catalog" when
dumping an object in schema "foo", and has also caused that setting
to be used while restoring the object. This is problematic because
functions and operators in schema "foo" could capture references meant
to refer to pg_catalog entries, both in the queries issued by pg_dump
and those issued during the subsequent restore run. That could
result in dump/restore misbehavior, or in privilege escalation if a
nefarious user installs trojan-horse functions or operators.
This patch changes pg_dump so that it does not change the search_path
dynamically. The emitted restore script sets the search_path to what
was used at dump time, and then leaves it alone thereafter. Created
objects are placed in the correct schema, regardless of the active
search_path, by dint of schema-qualifying their names in the CREATE
commands, as well as in subsequent ALTER and ALTER-like commands.
Since this change requires a change in the behavior of pg_restore
when processing an archive file made according to this new convention,
bump the archive file version number; old versions of pg_restore will
therefore refuse to process files made with new versions of pg_dump.
Security: CVE-2018-1058
Commit 0a459cec9 left this for later, but since time's running out,
I went ahead and took care of it. There are more data types that
somebody might someday want RANGE support for, but this is enough
to satisfy all expectations of the SQL standard, which just says that
"numeric, datetime, and interval" types should have RANGE support.
In tests that check whether a connection fails, also check the error
message. That makes sure that the connection was rejected for the right
reason.
This discovered that two tests had their connection failing for the
wrong reason. One test failed because pg_hba.conf was not set up to
allow that user, one test failed because the client key file did not
have the right permissions. Fix those tests and add a new one that is
really supposed to check the file permission issue.
Reviewed-by: Michael Paquier <michael@paquier.xyz>
In the pgoutput plugin, skip changes for relations that are not
publishable, per is_publishable_class(). This concerns in particular
materialized views and information_schema tables. While those relations
cannot be part of a publication, per existing checks, they will be
considered by a FOR ALL TABLES publication. A subscription would not
actually apply changes for those relations, again per existing checks,
but trying to match incoming changes to local tables on the subscriber
would lead to errors if no matching local table exists. Skipping those
changes on the publisher avoids sending useless changes and eliminates
the error.
Bug: #15044
Reported-by: Chad Trabant <chad@iris.washington.edu>
Reviewed-by: Petr Jelinek <petr.jelinek@2ndquadrant.com>
This is mostly cosmetic, but it might fix build failures on some
platforms when copying from the documentation.
Back-patch to 9.3 (all supported versions).
Given overlapping or partially redundant join clauses, for example
t1 JOIN t2 ON t1.a = t2.x AND t1.b = t2.x
the planner's EquivalenceClass machinery will ordinarily refactor the
clauses as "t1.a = t1.b AND t1.a = t2.x", so that join processing doesn't
see multiple references to the same EquivalenceClass in a list of join
equality clauses. However, if the join is outer, it's incorrect to derive
a restriction clause on the outer side from the join conditions, so the
clause refactoring does not happen and we end up with overlapping join
conditions. The code that attempted to deal with such cases had several
subtle bugs, which could result in "left and right pathkeys do not match in
mergejoin" or "outer pathkeys do not match mergeclauses" planner errors,
if the selected join plan type was a mergejoin. (It does not appear that
any actually incorrect plan could have been emitted.)
The core of the problem really was failure to recognize that the outer and
inner relations' pathkeys have different relationships to the mergeclause
list. A join's mergeclause list is constructed by reference to the outer
pathkeys, so it will always be ordered the same as the outer pathkeys, but
this cannot be presumed true for the inner pathkeys. If the inner sides of
the mergeclauses contain multiple references to the same EquivalenceClass
({t2.x} in the above example) then a simplistic rendering of the required
inner sort order is like "ORDER BY t2.x, t2.x", but the pathkey machinery
recognizes that the second sort column is redundant and throws it away.
The mergejoin planning code failed to account for that behavior properly.
One error was to try to generate cut-down versions of the mergeclause list
from cut-down versions of the inner pathkeys in the same way as the initial
construction of the mergeclause list from the outer pathkeys was done; this
could lead to choosing a mergeclause list that fails to match the outer
pathkeys. The other problem was that the pathkey cross-checking code in
create_mergejoin_plan treated the inner and outer pathkey lists
identically, whereas actually the expectations for them must be different.
That led to false "pathkeys do not match" failures in some cases, and in
principle could have led to failure to detect bogus plans in other cases,
though there is no indication that such bogus plans could be generated.
Reported by Alexander Kuzmenkov, who also reviewed this patch. This has
been broken for years (back to around 8.3 according to my testing), so
back-patch to all supported branches.
Discussion: https://postgr.es/m/5dad9160-4632-0e47-e120-8e2082000c01@postgrespro.ru
To support parameters in CALL, move the parse analysis of the procedure
and arguments into the global transformation phase, so that the parser
hooks can be applied. And then at execution time pass the parameters
from ProcessUtility on to ExecuteCallStmt.
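For example, a plpgsql variable can now be resolved as a CALL argument
(using the triple() procedure sketched earlier):
  DO $$
  DECLARE v int := 14;
  BEGIN
    CALL triple(v);
    RAISE NOTICE 'v = %', v;  -- 42, assigned back to the INOUT argument
  END $$;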
Add the user-callable functions sha224, sha256, sha384, sha512. We
already had these in the C code to support SCRAM, but there was no test
coverage outside of the SCRAM tests. Adding these as user-callable
functions allows writing some tests. Also, we have a user-callable md5
function but no more modern alternative, which led to wide use of md5 as
a general-purpose hash function, which leads to occasional complaints
about using md5.
Also mark the existing md5 functions as leak-proof.
Reviewed-by: Michael Paquier <michael@paquier.xyz>
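For example:
  SELECT sha256('abc');
  -- \xba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad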
Commit 7d8ac9814b adjusted these
tests in the hope of preserving the plan shape, but I failed to
notice that the three partitions were, on my local machine, choosing
two different plan shapes. This is probably related to the fact
that all three tables have exactly the same row count. Try to
improve the situation by making pht1_e about half as large as
the other two.
Per Tom Lane and the buildfarm.
Discussion: http://postgr.es/m/25380.1519277713@sss.pgh.pa.us
Previously, Append didn't charge anything at all, and MergeAppend
charged only cpu_operator_cost, about half the value used here. This
change might make MergeAppend plans slightly more likely to be chosen
than before, since this commit increases the assumed cost for Append
-- with default values -- by 0.005 per tuple but MergeAppend by only
0.0025 per tuple. Since the comparisons required by MergeAppend are
costed separately, it's not clear why MergeAppend needs to be
otherwise more expensive than Append, so hopefully this is OK.
Prior to partition-wise join, it didn't really matter whether or not
an Append node had any cost of its own, because every plan had to use
the same number of Append or MergeAppend nodes and in the same places.
Only the relative cost of Append vs. MergeAppend made a difference.
Now, however, it is possible to avoid some of the Append nodes using a
partition-wise join, so it's worth making an effort. Pending patches
for partition-wise aggregate care too, because an Append of Aggregate
nodes will incur the Append overhead fewer times than an Aggregate
over an Append. Although in most cases this change will favor the use
of partition-wise techniques, it does the opposite when the join
cardinality is greater than the sum of the input cardinalities. Since
this situation arises in an existing regression test, I [rhaas]
adjusted it to keep the overall plan shape approximately the same.
Jeevan Chalke, per a suggestion from David Rowley. Reviewed by
Ashutosh Bapat. Some changes by me. The larger patch series of which
this patch is a part was also reviewed and tested by Antonin Houska,
Rajkumar Raghuwanshi, David Rowley, Dilip Kumar, Konstantin Knizhnik,
Pascal Legrand, Rafia Sabih, and me.
Discussion: http://postgr.es/m/CAKJS1f9UXdk6ZYyqbJnjFO9a9hyHKGW7B=ZRh-rxy9qxfPA5Gw@mail.gmail.com
An updating query that reads a CTE within an InitPlan or SubPlan could get
incorrect results if it updates rows that are concurrently being modified.
This is caused by CteScanNext supposing that nothing inside its recursive
ExecProcNode call could change which read pointer is selected in the CTE's
shared tuplestore. While that's normally true because of scoping
considerations, it can break down if an EPQ plan tree gets built during the
call, because EvalPlanQualStart builds execution trees for all subplans
whether they're going to be used during the recheck or not. And it seems
like a pretty shaky assumption anyway, so let's just reselect our own read
pointer here.
Per bug #14870 from Andrei Gorita. This has been broken since CTEs were
implemented, so back-patch to all supported branches.
Discussion: https://postgr.es/m/20171024155358.1471.82377@wrigleys.postgresql.org
If we restrict unique constraints on partitioned tables so that they
must always include the partition key, then our standard approach to
unique indexes already works --- each unique key is forced to exist
within a single partition, so enforcing the unique restriction in each
index individually is enough to have it enforced globally. Therefore we
can implement unique indexes on partitions by simply removing a few
restrictions (and adding others.)
Discussion: https://postgr.es/m/20171222212921.hi6hg6pem2w2t36z@alvherre.pgsql
Discussion: https://postgr.es/m/20171229230607.3iib6b62fn3uaf47@alvherre.pgsql
Reviewed-by: Simon Riggs, Jesper Pedersen, Peter Eisentraut, Jaime
Casanova, Amit Langote
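For example (illustrative):
  -- allowed: the unique key includes the partition key
  CREATE TABLE t (a int, b int, PRIMARY KEY (a)) PARTITION BY RANGE (a);
  -- rejected: the unique key does not include the partition key
  CREATE TABLE u (a int, b int, UNIQUE (b)) PARTITION BY RANGE (a);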
These features were never implemented previously for composite or record
variables ... not that the documentation admitted it, so there are no doc
updates here.
This also fixes some issues concerning enforcing DOMAIN NOT NULL
constraints against plpgsql variables, although I'm not sure that
that topic is completely dealt with.
I created a new plpgsql test file for these features, and moved the
one relevant existing test case into that file.
Tom Lane, reviewed by Daniel Gustafsson
Discussion: https://postgr.es/m/18362.1514605650@sss.pgh.pa.us
Formerly, DTYPE_REC was used only for variables declared as "record";
variables of named composite types used DTYPE_ROW, which is faster for
some purposes but much less flexible. In particular, the ROW code paths
are entirely incapable of dealing with DDL-caused changes to the number
or data types of the columns of a row variable, once a particular plpgsql
function has been parsed for the first time in a session. And, since the
stored representation of a ROW isn't a tuple, there wasn't any easy way
to deal with variables of domain-over-composite types, since the domain
constraint checking code would expect the value to be checked to be a
tuple. A lesser, but still real, annoyance is that ROW format cannot
represent a true NULL composite value, only a row of per-field NULL
values, which is not exactly the same thing.
Hence, switch to using DTYPE_REC for all composite-typed variables,
whether "record", named composite type, or domain over named composite
type. DTYPE_ROW remains but is used only for its native purpose, to
represent a fixed-at-compile-time list of variables, for instance the
targets of an INTO clause.
To accomplish this without taking significant performance losses, introduce
infrastructure that allows storing composite-type variables as "expanded
objects", similar to the "expanded array" infrastructure introduced in
commit 1dc5ebc90. A composite variable's value is thereby kept (most of
the time) in the form of separate Datums, so that field accesses and
updates are not much more expensive than they were in the ROW format.
This holds the line, more or less, on performance of variables of named
composite types in field-access-intensive microbenchmarks, and makes
variables declared "record" perform much better than before in similar
tests. In addition, the logic involved with enforcing composite-domain
constraints against updates of individual fields is in the expanded
record infrastructure not plpgsql proper, so that it might be reusable
for other purposes.
In further support of this, introduce a typcache feature for assigning a
unique-within-process identifier to each distinct tuple descriptor of
interest; in particular, DDL alterations on composite types result in a new
identifier for that type. This allows very cheap detection of the need to
refresh tupdesc-dependent data. This improves on the "tupDescSeqNo" idea
I had in commit 687f096ea: that assigned identifying sequence numbers to
successive versions of individual composite types, but the numbers were not
unique across different types, nor was there support for assigning numbers
to registered record types.
In passing, allow plpgsql functions to accept as well as return type
"record". There was no good reason for the old restriction, and it
was out of step with most of the other PLs.
Tom Lane, reviewed by Pavel Stehule
Discussion: https://postgr.es/m/8962.1514399547@sss.pgh.pa.us
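For example, this now works (names illustrative):
  CREATE TYPE pair AS (a int, b int);
  CREATE FUNCTION second(r record) RETURNS int LANGUAGE plpgsql AS
  $$ BEGIN RETURN r.b; END $$;
  SELECT second(ROW(1, 2)::pair);  -- 2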
Instead of issuing a reload after pg_hba.conf changes between test
cases, run a full restart. With a reload, an error in the new
pg_hba.conf is ignored and the tests will continue to run with the old
settings, invalidating the subsequent test cases. With a restart, a
faulty pg_hba.conf will lead to the test being aborted, which is what
we'd rather want.
Prematurely freeing the EState used to evaluate CALL arguments led, in some
cases, to passing dangling pointers to the procedure. This was masked in
trivial cases because the argument pointers would point to Const nodes in
the original expression tree, and in some other cases because the result
value would end up in the standalone ExprContext rather than in memory
belonging to the EState --- but that wasn't exactly high quality
programming either, because the standalone ExprContext was never
explicitly freed, breaking assorted API contracts.
In addition, using a separate EState for each argument was just silly.
So let's use just one EState, and one ExprContext, and make the latter
belong to the former rather than be standalone, and clean up the EState
(and hence the ExprContext) post-call.
While at it, improve the function's commentary a bit.
Discussion: https://postgr.es/m/29173.1518282748@sss.pgh.pa.us
CALL statements cannot support sub-SELECTs in the arguments of the called
procedure, since they just use ExecEvalExpr to evaluate such arguments.
Teach transformSubLink() to reject the case, as it already does for other
contexts in which subqueries are not supported.
In passing, s/EXPR_KIND_CALL/EXPR_KIND_CALL_ARGUMENT/ to make that enum
symbol line up more closely with the phrasing of the error messages it is
associated with. And fix someone's weak grasp of English grammar in the
preceding EXPR_KIND_PARTITION_EXPRESSION addition. Also update an
incorrect comment in resolve_unique_index_expr (possibly it was correct
when written, but nowadays transformExpr definitely does reject SRFs here).
Per report from Pavel Stehule --- but this resolves only one of the bugs
he mentions.
Discussion: https://postgr.es/m/CAFj8pRDxOwPPzpA8i+AQeDQFj7bhVw-dR2==rfWZ3zMGkm568Q@mail.gmail.com
Instead of using the psql/libpq connection string as the displayed test
name and relying on "notes" and source code comments to explain the
tests, give the tests self-explanatory names, like we do elsewhere.
Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Add explicit collation on the trigger name to avoid locale dependencies.
Also restrict the tables selected, to avoid interference from
concurrently running tests.
This patch adds the ability to use "RANGE offset PRECEDING/FOLLOWING"
frame boundaries in window functions. We'd punted on that back in the
original patch to add window functions, because it was not clear how to
do it in a reasonably data-type-extensible fashion. That problem is
resolved here by adding the ability for btree operator classes to provide
an "in_range" support function that defines how to add or subtract the
RANGE offset value. Factoring it this way also allows the operator class
to avoid overflow problems near the ends of the datatype's range, if it
wishes to expend effort on that. (In the committed patch, the integer
opclasses handle that issue, but it did not seem worth the trouble to
avoid overflow failures for datetime types.)
The patch includes in_range support for the integer_ops opfamily
(int2/int4/int8) as well as the standard datetime types. Support for
other numeric types has been requested, but that seems like suitable
material for a follow-on patch.
In addition, the patch adds GROUPS mode which counts the offset in
ORDER-BY peer groups rather than rows, and it adds the frame_exclusion
options specified by SQL:2011. As far as I can see, we are now fully
up to spec on window framing options.
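As a rough illustration (relation and column names hypothetical):
select ts, avg(val) over (order by ts
  range between interval '1 hour' preceding and current row)
from readings;
select g, sum(val) over (order by g
  groups between 1 preceding and 1 following exclude current row)
from grouped_data;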
Existing behaviors remain unchanged, except that I changed the errcode
for a couple of existing error reports to meet the SQL spec's expectation
that negative "offset" values should be reported as SQLSTATE 22013.
Internally and in relevant parts of the documentation, we now consistently
use the terminology "offset PRECEDING/FOLLOWING" rather than "value
PRECEDING/FOLLOWING", since the term "value" is confusingly vague.
Oliver Ford, reviewed and whacked around some by me
Discussion: https://postgr.es/m/CAGMVOdu9sivPAxbNN0X+q19Sfv9edEPv=HibOJhB14TJv_RCQg@mail.gmail.com
The previous code assumed that we'd always succeed in creating
child-joins for a joinrel for which partition-wise join was considered,
but that's not guaranteed, at least in the case where dummy rels
are involved.
Ashutosh Bapat, with some wordsmithing by me.
Discussion: http://postgr.es/m/CAFjFpRf8=uyMYYfeTBjWDMs1tR5t--FgOe2vKZPULxxdYQ4RNw@mail.gmail.com
Failure to advance the list pointer while reading partition expressions
from a list results in invoking an input function with inappropriate data,
possibly leading to crashes or, with carefully crafted input, disclosure
of arbitrary backend memory.
Bug discovered independently by Álvaro Herrera and David Rowley.
This patch is by Álvaro but owes something to David's proposed fix.
Back-patch to v10 where the issue was introduced.
Security: CVE-2018-1052
Investigation of 2d2d06b7e2 revealed that
identity values were not applied in some further cases, including
logical replication subscribers, VALUES RTEs, and ALTER TABLE ... ADD
COLUMN. To fix all that, apply the identity column expression in
build_column_default() instead of repeating the same logic at each call
site.
For ALTER TABLE ... ADD COLUMN ... IDENTITY, the previous coding
completely ignored that existing rows for the new column should have
values filled in from the identity sequence. The coding using
build_column_default() fails for this because the sequence ownership
isn't registered until after ALTER TABLE, and we can't do it before
because we don't have the column in the catalog yet. So we specially
remember in ColumnDef the sequence name that we decided on and build a
custom NextValueExpr using that.
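For instance (names hypothetical), this now backfills existing rows from
the new identity sequence:
alter table t add column id int generated always as identity;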
Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
The old code always generated a constraint of the form
col = ANY(ARRAY[val1, val2, ...]), but that's invalid when col is an
array type. Instead, generate col = val when there's only one value,
col = val1 OR col = val2 OR ... when there are multiple values and
col is of array type, and the old form when there are multiple values
and col is not of an array type.
As a side benefit, this makes constraint exclusion able to prune
a list partition declared to accept a single Boolean value, which
didn't work before.
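For example (names hypothetical):
create table flags (f boolean) partition by list (f);
create table flags_true partition of flags for values in (true);
-- the partition constraint is now "f = true", which constraint
-- exclusion can reason about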
Amit Langote, reviewed by Etsuro Fujita
Discussion: http://postgr.es/m/97267195-e235-89d1-a41a-c110198dfce9@lab.ntt.co.jp
The changes in b81b5a96f4 did not fully
address the issue, because the bit-mixing of the IV into the final
hash-key didn't prevent clustering in the input data from surviving in
the output data.
This didn't cause a lot of problems because of the additional growth
conditions added in d4c62a6b62. But as we want to rein those in due to
explosive growth in some edge cases, this needs to be fixed.
Author: Andres Freund
Discussion: https://postgr.es/m/20171127185700.1470.20362@wrigleys.postgresql.org
Backpatch: 10, where simplehash was introduced
Commit 09529a70b changed nodeIndexscan.c and nodeIndexonlyscan.c to
postpone initialization of the indexscan proper until the first tuple
fetch. It overlooked the question of mark/restore behavior, which means
that if some caller attempts to mark the scan before the first tuple fetch,
you get a null pointer dereference.
The only existing user of mark/restore is nodeMergejoin.c, which (somewhat
accidentally) will never attempt to set a mark before the first inner tuple
unless the inner child node is a Material node. Hence the case can't arise
normally, so it seems sufficient to document the assumption at both ends.
However, during an EvalPlanQual recheck, ExecScanFetch doesn't call
IndexNext but just returns the jammed-in test tuple. Therefore, if we're
doing a recheck in a plan tree with a mergejoin with inner indexscan,
it's possible to reach ExecIndexMarkPos with iss_ScanDesc still null,
as reported by Guo Xiang Tan in bug #15032.
Really, when there's a test tuple supplied during an EPQ recheck, touching
the index at all is the wrong thing: rather, the behavior of mark/restore
ought to amount to saving and restoring the es_epqScanDone flag. We can
avoid finding a place to actually save the flag, for the moment, because
given the assumption that no caller will set a mark before fetching a
tuple, es_epqScanDone must always be set by the time we try to mark.
So the actual behavior change required is just to not reach the index
access if a test tuple is supplied.
The set of plan node types that need to consider this issue are those
that support EPQ test tuples (i.e., call ExecScan()) and also support
mark/restore; which is to say, IndexScan, IndexOnlyScan, and perhaps
CustomScan. It's tempting to try to fix the problem in one place by
teaching ExecMarkPos() itself about EPQ; but ExecMarkPos supports some
plan types that aren't Scans, and also it seems risky to make assumptions
about what a CustomScan wants to do here. Also, the most likely future
change here is to decide that we do need to support marks placed before
the first tuple, which would require additional work in IndexScan and
IndexOnlyScan in any case. Hence, fix the EPQ issue in nodeIndexscan.c
and nodeIndexonlyscan.c, accepting the small amount of code duplicated
thereby, and leave it to CustomScan providers to fix this bug if they
have it.
Back-patch to v10 where commit 09529a70b came in. In earlier branches,
the index_markpos() call is a waste of cycles when EPQ is active, but
no more than that, so it doesn't seem appropriate to back-patch further.
Discussion: https://postgr.es/m/20180126074932.3098.97815@wrigleys.postgresql.org
We have a lot of code in which option names, which from the user's
viewpoint are logically keywords, are passed through the grammar as plain
identifiers, and then matched to string literals during command execution.
This approach avoids making words into lexer keywords unnecessarily. Some
places matched these strings using plain strcmp, some using pg_strcasecmp.
But the latter should be unnecessary since identifiers would have been
downcased on their way through the parser. Aside from any efficiency
concerns (probably not a big factor), the lack of consistency in this area
creates a hazard of subtle bugs due to different places coming to different
conclusions about whether two option names are the same or different.
Hence, standardize on using strcmp() to match any option names that are
expected to have been fed through the parser.
This does create a user-visible behavioral change, which is that while
formerly all of these would work:
alter table foo set (fillfactor = 50);
alter table foo set (FillFactor = 50);
alter table foo set ("fillfactor" = 50);
alter table foo set ("FillFactor" = 50);
now the last case will fail because that double-quoted identifier is
different from the others. However, none of our documentation says that
you can use a quoted identifier in such contexts at all, and we should
discourage doing so since it would break if we ever decide to parse such
constructs as true lexer keywords rather than poor man's substitutes.
So this shouldn't create a significant compatibility issue for users.
Daniel Gustafsson, reviewed by Michael Paquier, small changes by me
Discussion: https://postgr.es/m/29405B24-564E-476B-98C0-677A29805B84@yesql.se
get_relation_info() was too optimistic about opening indexes in
partitioned tables, which would raise errors when any queries were
planned on such tables. Fix by ignoring any indexes of the partitioned
kind.
CLUSTER (and ALTER TABLE CLUSTER ON) had a similar problem. Fix by
disallowing these commands in partitioned tables.
Fallout from 8b08f7d482.
Connection strings can have items with spaces in them, wrapped in
quotes. The tests, however, ran a SELECT '$connstr' upon connection,
which broke on the embedded quotes. Use dollar quotes on the connstr to
protect against this. This was hit during the development of the macOS
Secure Transport patch, but is independent of it.
Author: Daniel Gustafsson <daniel@yesql.se>
record_image_eq was covered a bit by the materialized view code that it
is meant to support, but record_image_cmp was not tested at all.
While we're here, add more tests to record_eq and record_cmp as well,
for symmetry.
Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
If we're inside a lateral subquery, there may be no unparameterized paths
for a particular child relation of an appendrel, in which case we *must*
be able to create similarly-parameterized paths for each other child
relation, else the planner will fail with "could not devise a query plan
for the given query". This means that there are situations where we'd
better be able to reparameterize at least one path for each child.
This calls into question the assumption in reparameterize_path() that
it can just punt if it feels like it. However, the only case that is
known broken right now is where the child is itself an appendrel so that
all its paths are AppendPaths. (I think possibly I disregarded that in
the original coding on the theory that nested appendrels would get folded
together --- but that only happens *after* reparameterize_path(), so it's
not excused from handling a child AppendPath.) Given that this code's been
like this since 9.3 when LATERAL was introduced, it seems likely we'd have
heard of other cases by now if there were a larger problem.
Per report from Elvis Pranskevichus. Back-patch to 9.3.
Discussion: https://postgr.es/m/5981018.zdth1YWmNy@hammer.magicstack.net
When an UPDATE causes a row to no longer match the partition
constraint, try to move it to a different partition where it does
match the partition constraint. In essence, the UPDATE is split into
a DELETE from the old partition and an INSERT into the new one. This
can lead to surprising behavior in concurrency scenarios because
EvalPlanQual rechecks won't work as they normally did; the known
problems are documented. (There is a pending patch to improve the
situation further, but it needs more review.)
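A minimal sketch of the new behavior (names hypothetical):
create table meas (val int) partition by range (val);
create table meas_low  partition of meas for values from (0) to (100);
create table meas_high partition of meas for values from (100) to (200);
insert into meas values (50);
update meas set val = 150 where val = 50;  -- row is moved to meas_high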
Amit Khandekar, reviewed and tested by Amit Langote, David Rowley,
Rajkumar Raghuwanshi, Dilip Kumar, Amul Sul, Thomas Munro, Álvaro
Herrera, Amit Kapila, and me. A few final revisions by me.
Discussion: http://postgr.es/m/CAJ3gD9do9o2ccQ7j7+tSgiE1REY65XRiMb=yJO3u3QhyP8EEPQ@mail.gmail.com
AclObjectKind was basically just another enumeration for object types,
and we already have a preferred one for that. It's only used in
aclcheck_error. By using ObjectType instead, we can also give some more
precise error messages, for example "index" instead of "relation".
Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
When CREATE INDEX is run on a partitioned table, create catalog entries
for an index on the partitioned table (which is just a placeholder since
the table proper has no data of its own), and recurse to create actual
indexes on the existing partitions; create them in future partitions
also.
As a convenience gadget, if the new index definition matches some
existing index in partitions, these are picked up and used instead of
creating new ones. Whichever way these indexes come about, they become
attached to the index on the parent table and are dropped alongside it,
and cannot be dropped in isolation unless they are detached first.
To support pg_dump'ing these indexes, add commands
CREATE INDEX ON ONLY <table>
(which creates the index on the parent partitioned table, without
recursing) and
ALTER INDEX ATTACH PARTITION
(which is used after the indexes have been created individually on each
partition, to attach them to the parent index). These reconstruct prior
database state exactly.
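For instance (names hypothetical):
create table parted (a int) partition by range (a);
create table parted1 partition of parted for values from (0) to (10);
create index on parted (a);       -- also creates a matching index on parted1
create index on only parted (a);  -- placeholder only; no recursion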
Reviewed-by: (in alphabetical order) Peter Eisentraut, Robert Haas, Amit
Langote, Jesper Pedersen, Simon Riggs, David Rowley
Discussion: https://postgr.es/m/20171113170646.gzweigyrgg6pwsg4@alvherre.pgsql
This was hardly tested at all. The trigger case was lightly tested by
the logical replication tests, but rules and event triggers were not
tested at all.
If a query against an inheritance tree runs concurrently with an ALTER
TABLE that's disinheriting one of the tree members, it's possible to get
a "could not find inherited attribute" error because after obtaining lock
on the removed member, make_inh_translation_list sees that its columns
have attinhcount=0 and decides they aren't the columns it's looking for.
An ideal fix, perhaps, would avoid including such a just-removed member
table in the query at all; but there seems no way to accomplish that
without adding expensive catalog rechecks or creating a likelihood of
deadlocks. Instead, let's just drop the check on attinhcount. In this
way, a query that's included a just-disinherited child will still
succeed, which is not a completely unreasonable behavior.
This problem has existed for a long time, so back-patch to all supported
branches. Also add an isolation test verifying related behaviors.
Patch by me; the new isolation test is based on Kyotaro Horiguchi's work.
Discussion: https://postgr.es/m/20170626.174612.23936762.horiguchi.kyotaro@lab.ntt.co.jp
If we flatten a subquery whose target list contains constants or
expressions, when those output columns are used in GROUPING SET columns,
the planner was capable of doing the wrong thing by merging a pulled-up
expression into the surrounding expression during const-simplification.
Then the late processing that attempts to match subexpressions to grouping
sets would fail to match those subexpressions to grouping sets, with the
effect that they'd not go to null when expected.
To fix, wrap such subquery outputs in PlaceHolderVars, ensuring that
they preserve their separate identity throughout the planner's expression
processing. This is a bit of a band-aid, because the wrapper defeats
const-simplification even in places where it would be safe to allow.
But a nicer fix would likely be too invasive to back-patch, and the
consequences of the missed optimizations probably aren't large in most
cases.
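A minimal sketch of the problematic shape:
select a, b
from (select 1 as a, 2 as b) s
group by grouping sets (a, b);
-- must yield (1, NULL) and (NULL, 2); with the bug, the pulled-up
-- constants could fail to go to null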
Back-patch to 9.5 where grouping sets were introduced.
Heikki Linnakangas, with small mods and better test cases by me;
additional review by Andrew Gierth
Discussion: https://postgr.es/m/7dbdcf5c-b5a6-ef89-4958-da212fe10176@iki.fi
This was nearly the same code. Extend wait_for_catchup to allow waiting
for pg_current_wal_lsn() and use that in the subscription tests. Also
change one use in the pg_rewind tests to use this.
Also remove some broken code in wait_for_catchup and
wait_for_slot_catchup. The error message in case the waiting failed
wanted to show the current LSN, but the way it was written never
worked. So since nobody ever cared, just remove it.
Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
These are compatible with Oracle and required for the datetime template
language for jsonpath in an upcoming patch.
Nikita Glukhov and Andrew Dunstan, reviewed by Pavel Stehule.
Commit ab7271677 introduced code that attempts to order the child
scans of a Parallel Append node in a way that will minimize execution
time, based on total cost and startup cost. However, it failed to
think hard about what to do when estimated costs are exactly equal;
a case that's particularly likely to occur when comparing on startup
cost. In such a case the ordering of the child paths would be left
to the whims of qsort, an algorithm that isn't even stable.
We can improve matters by applying the rule used elsewhere in the
planner: if total costs are equal, sort on startup cost, and
vice versa. When both cost estimates are exactly equal, rather
than letting qsort do something unpredictable, sort based on the
child paths' relids, which should typically result in sorting in
inheritance order. (The latter provision requires inventing a
qsort-style comparator for bitmapsets, but maybe we'll have use
for that for other reasons in future.)
This results in a few plan changes in the select_parallel test,
but those all look more reasonable than before, when the actual
underlying cost numbers are taken into account.
Discussion: https://postgr.es/m/4944.1515446989@sss.pgh.pa.us
Add a function to TestLib that allows us to check pg_config.h and then
decide the expected test outcome based on that.
Author: Michael Paquier <michael.paquier@gmail.com>
This adds a second standard channel binding type for SCRAM. It is
mainly intended for third-party clients that cannot implement
tls-unique, for example JDBC.
Author: Michael Paquier <michael.paquier@gmail.com>
Add some infrastructure (mostly macros) to make it easier to write
typical cases for constant-expression simplification. Add simplification
processing for ArrayRef, RowExpr, and ScalarArrayOpExpr node types,
which formerly went unsimplified even if all their inputs were constants.
Also teach it to simplify FieldSelect from a composite constant.
Make use of the new infrastructure to reduce the amount of code needed
for the existing ArrayExpr and ArrayCoerceExpr cases.
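For example (table name hypothetical), a qual like this can now be folded
to constant true at plan time instead of being evaluated per row:
explain (costs off)
select * from tab where 3 = any (array[1, 2, 3]);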
One existing test case changes output as a result of the fact that
RowExpr can now be folded to a constant. All the new code is exercised
by existing test cases according to gcov, so I feel no need to add
additional tests.
Tom Lane, reviewed by Dmitry Dolgov
Discussion: https://postgr.es/m/3be3b82c-e29c-b674-2163-bf47d98817b1@iki.fi
While ldaptls=1 provides an RFC 4513 conforming way to do LDAP
authentication with TLS encryption, there was an earlier de facto
standard way to do LDAP over SSL called LDAPS. Even though it's not
enshrined in a standard, it's still widely used and sometimes required
by organizations' network policies. There seems to be no reason not to
support it when available in the client library. Therefore, add support
when using OpenLDAP 2.4+ or Windows. It can be configured with
ldapscheme=ldaps or ldapurl=ldaps://...
Add tests for both ways of requesting LDAPS and a test for the
pre-existing ldaptls=1. Modify the 001_auth.pl test for "diagnostic
messages", which was previously relying on the server rejecting
ldaptls=1.
Author: Thomas Munro
Reviewed-By: Peter Eisentraut
Discussion: https://postgr.es/m/CAEepm=1s+pA-LZUjQ-9GQz0Z4rX_eK=DFXAF1nBQ+ROPimuOYQ@mail.gmail.com
Multiple sessions doing CREATE INDEX CONCURRENTLY simultaneously are
supposed to be able to work in parallel, as evidenced by fixes in commit
c3d09b3bd2 specifically to support this case. In reality, one of the
sessions would be aborted by a mysterious "deadlock detected" error.
Jeff Janes diagnosed that this is because of leftover snapshots used for
system catalog scans -- this was broken by 8aa3e47510 keeping track of
(registering) the catalog snapshot. To fix the deadlocks, it's enough
to de-register that snapshot prior to waiting.
Backpatch to 9.4, which introduced MVCC catalog scans.
Include an isolationtester spec that 8 out of 10 times reproduces the
deadlock with the unpatched code for me (Álvaro).
Author: Jeff Janes
Diagnosed-by: Jeff Janes
Reported-by: Jeremy Finzel
Discussion: https://postgr.es/m/CAMa1XUhHjCv8Qkx0WOr1Mpm_R4qxN26EibwCrj0Oor2YBUFUTg%40mail.gmail.com
Otherwise, the test fails with "Timed out while waiting for standby to
catch up". This happened rarely, perhaps only when autovacuum wrote WAL
between our choosing the recovery target and choosing the LSN to await.
Commit b26f7fa6ae fixed one case of this.
Fix two more. Back-patch to 9.6, which introduced the affected test.
Discussion: https://postgr.es/m/20180101055227.GA2952815@rfd.leadboat.com
I noticed that our code coverage report showed considerable deficiency
in test coverage for PL/pgSQL control statements. Notably, both
exec_stmt_block and most of the loop control statements had very poor
coverage of handling of return/exit/continue result codes from their
child statements; and exec_stmt_fori was seriously lacking in feature
coverage, having no test that exercised its BY or REVERSE features,
nor verification that its overflow defenses work.
Now that we have some infrastructure for plpgsql-specific test scripts,
the natural thing to do is make a new script rather than further extend
plpgsql.sql. So I created a new script plpgsql_control.sql with the
charter to test plpgsql control structures, and moved a few existing
tests there because they fell entirely under that charter. I then
added new test cases that exercise the bits of code complained of above.
Of the five kinds of loop statements, only exec_stmt_while's result code
handling is fully exercised by these tests. That would be a deficiency
as things stand, but a follow-on commit will merge the loop statements'
result code handling into one implementation. So testing each usage of
that implementation separately seems redundant.
In passing, also add a couple test cases to plpgsql.sql to more fully
exercise plpgsql's code related to expanded arrays --- I had thought
that area was sufficiently covered already, but the coverage report
showed a couple of un-executed code paths.
Discussion: https://postgr.es/m/26314.1514670401@sss.pgh.pa.us
The polygon opclass uses the compress method feature of SP-GiST added
earlier; for now it is the only operator class using this feature.
SP-GiST actually indexes the bounding boxes of the input polygons, so
some of the supported operations are lossy. The opclass reuses most
methods of the corresponding SP-GiST opclass over boxes, treating each
bounding box as a point in 4D space.
Bump catalog version.
Authors: Nikita Glukhov, Alexander Korotkov, with minor editing by me
Reviewed-By: all authors + Darafei Praliaskouski
Discussion: https://www.postgresql.org/message-id/flat/54907069.1030506@sigaev.ru
Since 9.4, we've allowed the syntax "select union select" and variants
of that. However, the planner wasn't expecting a no-column set operation
and ended up treating the set operation as if it were UNION ALL.
Turns out it's trivial to fix in v10 and later; we just need to be careful
about not generating a Sort node with no sort keys. However, since a weird
corner case like this is never going to be exercised by developers, we'd
better have thorough regression tests if we want to consider it supported.
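The cases in question, for reference:
select union select;      -- zero-column UNION; must produce a single row
select union all select;  -- zero-column UNION ALL; produces two rows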
Per report from Victor Yegorov.
Discussion: https://postgr.es/m/CAGnEbojGJrRSOgJwNGM7JSJZpVAf8xXcVPbVrGdhbVEHZ-BUMw@mail.gmail.com
Various Perl scripts we use to generate files were in the habit of
printing things like "generated by $0" into their output files.
That looks like a fine idea at first glance, but it results in
non-reproducible output, because in VPATH builds $0 won't be just
the name of the script file, but a full path for it. We'd prefer
that you get identical results whether using VPATH or not, so this
is a bad thing.
Some of these places also printed their input file name(s), causing
an additional hazard of the same type.
Hence, establish a policy that thou shalt not print $0, nor input file
pathnames, into output files (they're still allowed in error messages,
though). Instead just write the script name verbatim. While we are at
it, we can make these annotations more useful by giving the script's
full relative path name within the PG source tree, eg instead of
Gen_fmgrtab.pl let's print src/backend/utils/Gen_fmgrtab.pl.
Not all of the changes made here actually affect any files shipped
in finished tarballs today, but it seems best to apply the policy
everyplace so that nobody copies unsafe code into places where it
could matter.
Christoph Berg and Tom Lane
Discussion: https://postgr.es/m/20171215102223.GB31812@msg.df7cb.de
Introduce parallel-aware hash joins that appear in EXPLAIN plans as Parallel
Hash Join with Parallel Hash. While hash joins could already appear in
parallel queries, they were previously always parallel-oblivious and had a
partial subplan only on the outer side, meaning that the work of the inner
subplan was duplicated in every worker.
After this commit, the planner will consider using a partial subplan on the
inner side too, using the Parallel Hash node to divide the work over the
available CPU cores and combine its results in shared memory. If the join
needs to be split into multiple batches in order to respect work_mem, then
workers process different batches as much as possible and then work together
on the remaining batches.
The advantages of a parallel-aware hash join over a parallel-oblivious hash
join used in a parallel query are that it:
* avoids wasting memory on duplicated hash tables
* avoids wasting disk space on duplicated batch files
* divides the work of building the hash table over the CPUs
One disadvantage is that there is some communication between the participating
CPUs which might outweigh the benefits of parallelism in the case of small
hash tables. This is avoided by the planner's existing reluctance to supply
partial plans for small scans, but it may be necessary to estimate
synchronization costs in future if that situation changes. Another is that
outer batch 0 must be written to disk if multiple batches are required.
A potential future advantage of parallel-aware hash joins is that right and
full outer joins could be supported, since there is a single set of matched
bits for each hashtable, but that is not yet implemented.
A new GUC enable_parallel_hash is defined to control the feature, defaulting
to on.
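A hedged sketch of how to observe the feature (relation names
hypothetical):
set enable_parallel_hash = on;  -- the default
explain (costs off)
select count(*) from big b join also_big a using (id);
-- look for a Parallel Hash Join node over a Parallel Hash node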
Author: Thomas Munro
Reviewed-By: Andres Freund, Robert Haas
Tested-By: Rafia Sabih, Prabhat Sahu
Discussion:
https://postgr.es/m/CAEepm=2W=cOkiZxcg6qiFQP-dHUe09aqTrEMM7yJDrHMhDv_RA@mail.gmail.com
https://postgr.es/m/CAEepm=37HKyJ4U6XOLi=JgfSHM3o6B-GaeO-6hkOmneTDkH+Uw@mail.gmail.com
When a Gather or Gather Merge node is started and stopped multiple
times, accumulate instrumentation data only once, at the end, instead
of after each execution, to avoid recording inflated totals.
Commit 778e78ae9f, the previous attempt
at a fix, instead reset the state after every execution, which worked
for the general instrumentation data but had problems for the additional
instrumentation specific to Sort and Hash nodes.
Report by hubert depesz lubaczewski. Analysis and fix by Amit Kapila,
following a design proposal from Thomas Munro, with a comment tweak
by me.
Discussion: http://postgr.es/m/20171127175631.GA405@depesz.com
This parameter can be used to enforce the channel binding type used
during a SCRAM authentication. This can be useful to check code paths
where an invalid channel binding type is used by a client and will be
even more useful to allow testing other channel binding types when they
are added.
The default value is tls-unique, which is what RFC 5802 specifies.
Clients can optionally specify an empty value, which has the effect of
not using channel binding and using SCRAM-SHA-256 as the chosen SASL
mechanism.
More tests for SCRAM and channel binding are added to the SSL test
suite.
Author: Michael Paquier <michael.paquier@gmail.com>
Don't write to stdin of a psql process that could have already exited
with an authentication failure. Buildfarm members crake and mandrill
have failed once by doing so. Ignore SIGPIPE in all TAP tests.
Back-patch to v10, where these tests were introduced.
Reviewed by Michael Paquier.
Discussion: https://postgr.es/m/20171209210203.GC3362632@rfd.leadboat.com
Previously it was possible that a tuple was not pruned during vacuum,
even though its update xmax (i.e. the updating xid in a multixact with
both key share lockers and an updater) was below the cutoff horizon.
As the freezing code assumed, rightly so, that that's not supposed to
happen, xmax would be preserved (as a member of a new multixact or
xmax directly). That causes two problems: For one, the tuple is below
the xmin horizon, which can cause problems if the clog is truncated or
once there's an xid wraparound. The bigger problem is that this will
break HOT chains, which in turn can lead to two breakages: First,
failing index lookups, which in turn can e.g. lead to constraints being
violated. Second, future hot prunes / vacuums can end up making
invisible tuples visible again. There are other harmful scenarios.
Fix the problem by recognizing that tuples can be DEAD instead of
RECENTLY_DEAD, even if the multixactid has alive members, if the
update_xid is below the xmin horizon. That's safe because newer
versions of the tuple will contain the locking xids.
A followup commit will harden the code somewhat against future similar
bugs and already corrupted data.
Author: Andres Freund, with changes by Alvaro Herrera
Reported-By: Daniel Wood
Analyzed-By: Andres Freund, Alvaro Herrera, Robert Haas, Peter
Geoghegan, Daniel Wood, Yi Wen Wong, Michael Paquier
Reviewed-By: Alvaro Herrera, Robert Haas, Michael Paquier
Discussion:
https://postgr.es/m/E5711E62-8FDF-4DCA-A888-C200BF6B5742@amazon.com
https://postgr.es/m/20171102112019.33wb7g5wp4zpjelu@alap3.anarazel.de
Backpatch: 9.3-
The plpgsql.sql test file in the main regression tests is now by far the
largest after numeric_big, making editing and managing the test cases
very cumbersome. The other PLs have their own test suites split up into
smaller files by topic. It would be nice to have that for plpgsql as
well. So, to get that started, set up test infrastructure in
src/pl/plpgsql/src/ and split out the recently added procedure test
cases into a new file there. That file now mirrors the test cases added
to the other PLs, making managing those matching tests a bit easier too.
msvc build system changes with help from Michael Paquier
The test added by commit 390d58135 turns out to have different output
in CLOBBER_CACHE_ALWAYS builds: there's an extra CONTEXT line in the
error message as a result of detecting the error at a different place.
Possibly we should do something to make that more consistent. But as
a stopgap measure to make the buildfarm green again, adjust the test
to suppress CONTEXT entirely. We can revert this if we do something
in the backend to eliminate the inconsistency.
Discussion: https://postgr.es/m/31545.1512924904@sss.pgh.pa.us
If one exits and re-enters a DECLARE ... BEGIN ... END block within a
single execution of a plpgsql function, perhaps due to a surrounding loop,
the declared variables are supposed to get re-initialized to null (or
whatever their initializer is). But this failed to happen for variables
of type "record", because while exec_stmt_block() expected such variables
to be included in the block's initvarnos list, plpgsql_add_initdatums()
only adds DTYPE_VAR variables to that list. This bug appears to have
been there since the aboriginal addition of plpgsql to our tree.
Fix by teaching plpgsql_add_initdatums() to include DTYPE_REC variables
as well. (We don't need to consider other DTYPEs because they don't
represent separately-stored values.) I failed to resist the temptation
to make some nearby cosmetic adjustments, too.
No back-patch, because there have not been field complaints, and it
seems possible that somewhere out there someone has code depending
on the incorrect behavior. In any case this change would have no
impact on correctly-written code.
Discussion: https://postgr.es/m/22994.1512800671@sss.pgh.pa.us
Those cases currently crash and supporting them is more work than
originally thought, so we'll just prohibit these scenarios for now.
Author: Michael Paquier <michael.paquier@gmail.com>
Reviewed-by: Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>
Reported-by: Мансур Галиев <gomer94@yandex.ru>
Bug: #14866
A COPY into a table should apply identity sequence values just like it
does for ordinary defaults. This was previously forgotten, leading to
null values being inserted, which in turn would fail because identity
columns have not-null constraints.
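For example (names hypothetical), this now draws ids from the identity
sequence instead of failing on the not-null constraint:
create table items (id int generated always as identity, v text);
copy items (v) from stdin;
one
two
\.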
Author: Michael Paquier <michael.paquier@gmail.com>
Reported-by: Steven Winfield <steven.winfield@cantabcapital.com>
Bug: #14952
I suppose it is a copy-and-paste error that this test doesn't actually
test the "Parallel Append with both partial and non-partial subplans"
case (EXPLAIN alone surely doesn't qualify as a test of executor
behavior). Fix that.
Also, add cosmetic aliases to make it possible to tell apart these
otherwise-identical test cases in log_statement output.
When we create an Append node, we can spread out the workers over the
subplans instead of piling on to each subplan one at a time, which
should typically be a bit more efficient, both because the startup
cost of any plan executed entirely by one worker is paid only once and
also because of reduced contention. We can also construct Append
plans using a mix of partial and non-partial subplans, which may allow
for parallelism in places that otherwise couldn't support it.
Unfortunately, this patch doesn't handle the important case of
parallelizing UNION ALL by running each branch in a separate worker;
the executor infrastructure is added here, but more planner work is
needed.
Amit Khandekar, Robert Haas, Amul Sul, reviewed and tested by
Ashutosh Bapat, Amit Langote, Rafia Sabih, Amit Kapila, and
Rajkumar Raghuwanshi.
Discussion: http://postgr.es/m/CAJ3gD9dy0K_E8r727heqXoBmWZ83HwLFwdcaSSmBQ1+S+vRuUQ@mail.gmail.com
When a Gather or Gather Merge node is started and stopped multiple
times, the old code wouldn't reset the shared state between executions,
potentially resulting in dramatically inflated instrumentation data
for nodes beneath it. (The per-worker instrumentation ended up OK,
I think, but the overall totals were inflated.)
Report by hubert depesz lubaczewski. Analysis and fix by Amit Kapila,
reviewed and tweaked a bit by me.
Discussion: http://postgr.es/m/20171127175631.GA405@depesz.com
If a hash join appears in a parallel query, there may be no hash table
available for explain.c to inspect even though a hash table may have
been built in other processes. This could happen either because
parallel_leader_participation was set to off or because the leader
happened to hit the end of the outer relation immediately (even though
the complete relation is not empty) and decided not to build the hash
table.
Commit bf11e7ee introduced a way for workers to exchange
instrumentation via the DSM segment for Sort nodes even though they
are not parallel-aware. This commit does the same for Hash nodes, so
that explain.c has a way to find instrumentation data from an
arbitrary participant that actually built the hash table.
Author: Thomas Munro
Reviewed-By: Andres Freund
Discussion: https://postgr.es/m/CAEepm%3D3DUQC2-z252N55eOcZBer6DPdM%3DFzrxH9dZc5vYLsjaA%40mail.gmail.com
Hopefully, the additional logging will help avoid confusion that
could otherwise result.
Nathan Bossart, reviewed by Michael Paquier, Fabrízio Mello, and me
Without this, when partdesc->nparts == 0, we end up calling
ExecBuildSlotPartitionKeyDescription without initializing values
and isnull.
Reported by Coverity via Michael Paquier. Patch by Michael Paquier,
reviewed and revised by Amit Langote.
Discussion: http://postgr.es/m/CAB7nPqQ3mwkdMoPY-ocgTpPnjd8TKOadMxdTtMLvEzF8480Zfg@mail.gmail.com
This adds a new object type "procedure" that is similar to a function
but does not have a return type and is invoked by the new CALL statement
instead of SELECT or similar. This implementation is aligned with the
SQL standard and compatible with or similar to other SQL implementations.
This commit adds new commands CALL, CREATE/ALTER/DROP PROCEDURE, as well
as ALTER/DROP ROUTINE that can refer to either a function or a
procedure (or an aggregate function, as an extension to SQL). There is
also support for procedures in various utility commands such as COMMENT
and GRANT, as well as support in pg_dump and psql. Support for defining
procedures is available in all the languages supplied by the core
distribution.
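A minimal example (table name hypothetical):
create procedure insert_data(a integer)
language sql
as $$ insert into tbl values (a); $$;
call insert_data(1);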
While this commit is mainly syntax sugar around existing functionality,
future features will rely on having procedures as a separate object
type.
Reviewed-by: Andrew Dunstan <andrew.dunstan@2ndquadrant.com>
Invoking the Makefile without an explicit target was building every
possible target instead of just the "all" target. Back-patch to 9.3
(all supported versions).
Previously, this function estimated the selectivity as 1 minus eqjoinsel()
for the negator equality operator, regardless of join type (I think there
was an expectation that eqjoinsel would handle the join type). But
actually this is completely wrong for semijoin cases: the fraction of the
LHS that has a non-matching row is not one minus the fraction of the LHS
that has a matching row. In reality a semijoin with <> will nearly always
succeed: it can only fail when the RHS is empty, or it contains a single
distinct value that is equal to the particular LHS value, or the LHS value
is null. The only one of those things we should have much confidence in
estimating is the fraction of LHS values that are null, so let's just take
the selectivity as 1 minus outer nullfrac.
Per coding convention, antijoin should be estimated the same as semijoin.
Arguably this is a bug fix, but in view of the lack of field complaints
and the risk of destabilizing plans, no back-patch.
Thomas Munro, reviewed by Ashutosh Bapat
Discussion: https://postgr.es/m/CAEepm=270ze2hVxWkJw-5eKzc3AB4C9KpH3L2kih75R5pdSogg@mail.gmail.com
Although hash joins are already tested by many queries, these tests
systematically cover the four different states we can reach as part of
the strategy for respecting work_mem.
Author: Thomas Munro
Reviewed-By: Andres Freund
Currently, partition pruning happens via constraint exclusion, but
there are pending places to replace that with a different and
hopefully faster mechanism. To be sure that we don't change behavior
without realizing it, add extensive test coverage.
Note that not all of these behaviors are optimal; in some cases,
partitions are not pruned even though it would be safe to do so.
These tests therefore serve to memorialize the current state rather
than the ideal state. Patches that improve things can update the test
results as appropriate.
Amit Langote, adjusted by me. Review and testing of the larger patch
set of which this is a part by Ashutosh Bapat, David Rowley, Dilip
Kumar, Jesper Pedersen, Rajkumar Raghuwanshi, Beena Emerson, Amul Sul,
and Kyotaro Horiguchi.
Discussion: http://postgr.es/m/098b9c71-1915-1a2a-8d52-1a7a50ce79e8@lab.ntt.co.jp
Non-default range partitions have a constraint which includes null
tests, and both default and non-default list partitions also have a
constraint which includes null tests, but for some reason this was
missed for default range partitions. This could cause the partition
constraint to evaluate to false for rows that were (correctly) routed
to that partition by insert tuple routing, which could in turn
cause constraint exclusion to prune the default partition in cases
where it should not.
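A sketch of the affected case (names hypothetical):
create table r (a int) partition by range (a);
create table r1 partition of r for values from (0) to (10);
create table rdef partition of r default;
insert into r values (null);  -- routed to rdef, whose constraint must
                              -- not reject the null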
Amit Langote, reviewed by Kyotaro Horiguchi
Discussion: http://postgr.es/m/ba7aaeb1-4399-220e-70b4-62eade1522d0@lab.ntt.co.jp
Doing this suppresses Coverity warnings and might allow improved
code in some cases. The prospects of that are not so bright as
to warrant back-patching, though.
Michael Paquier, per Coverity
When nodeValuesscan.c was written, it was impossible to have a SubPlan in
VALUES --- any sub-SELECT there would have to be uncorrelated and thereby
would produce an InitPlan instead. We therefore took a shortcut in the
logic that throws away a ValuesScan's per-row expression evaluation data
structures. This was broken by the introduction of LATERAL however; a
sub-SELECT containing a lateral reference produces a correlated SubPlan.
The cleanest fix for this would be to give up the optimization of
discarding the expression eval state. But that still seems pretty
unappetizing for long VALUES lists. It seems to work to just prevent
the subexpressions from hooking into the ValuesScan node's subPlan
list, so let's do that and see how well it works. (If this breaks,
due to additional connections between the subexpressions and the outer
query structures, we might consider compromises like throwing away data
only for VALUES rows not containing SubPlans.)
Per bug #14924 from Christian Duta. Back-patch to 9.3 where LATERAL
was introduced.
Discussion: https://postgr.es/m/20171124120836.1463.5310@wrigleys.postgresql.org
The query didn't really have a preferred index, leading to
platform-specific choices of which one to use. Adjust it to make sure
tenk1_hundred is always chosen.
Per buildfarm.
When strict aggregate combine functions, used in multi-stage/parallel
aggregation, returned NULL, we didn't check for that, and would invoke
the combine function with a NULL input the next round, despite it being
strict.
The equivalent code invoking normal transition functions has a check
for that situation, which did not get copied in a7de3dc5c3. Fix the
bug by adding the equivalent check.
Based on a quick look I could not find any strict combine functions in
core actually returning NULL, and it doesn't seem very likely external
users have done so. So this isn't likely to have caused issues in
practice.
Add tests verifying that transition / combine functions returning NULL
are handled correctly.
Reported-By: Andres Freund
Author: Andres Freund
Discussion: https://postgr.es/m/20171121033642.7xvmjqrl4jdaaat3@alap3.anarazel.de
Backpatch: 9.6, where parallel aggregation was introduced
This hack closes a race condition in "make -j check-world" and "make -j
installcheck-world". Back-patch to v10, before which these parallel
invocations had worse problems.
Discussion: https://postgr.es/m/20171106080752.GA1298146@rfd.leadboat.com
Fix the function header comment to describe the actual behavior.
Check that table OID, modulus, and remainder arguments are not NULL
before accessing them. Check that the modulus and remainder are
sensible. If the table OID doesn't exist, return NULL instead of
emitting an internal error, similar to what we do elsewhere. Check
that the actual argument types match, or at least are binary coercible
to, the expected argument types. Correctly handle invocation of this
function using the VARIADIC syntax. Add regression tests.
Robert Haas and Amul Sul, per a report by Andreas Seltenreich and
subsequent followup investigation.
Discussion: http://postgr.es/m/871sl4sdrv.fsf@ansel.ydns.eu
This code evidently intended to treat backslash as an escape character
within double-quoted substrings, but it was sufficiently confused that
cases like ..."foo\\"... did not work right: the second backslash
managed to quote the double-quote after it, despite being quoted itself.
Rewrite to get that right, while preserving the existing behavior
outside double-quoted substrings, which is that backslash isn't special
except in the combination \".
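To illustrate the fixed behavior (the format string is arbitrary):
select to_char(date '2018-01-01', 'YYYY "a\"b\\c"');
-- now emits: 2018 a"b\c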
Comparing to Oracle, it seems that their version of to_char() for
timestamps allows literal alphanumerics only within double quotes, while
non-alphanumerics are allowed outside quotes; backslashes aren't special
anywhere; there is no way at all to emit a literal double quote.
(Bizarrely, their to_char() for numbers is different; it doesn't allow
literal text at all AFAICT.) The fact that they don't treat backslash
as special justifies our existing behavior for backslash outside double
quotes. I considered making backslash inside double quotes act the same
way (ie, special only if before "), which in a green field would be a
more consistent behavior. But that would likely break more existing SQL
code than what this patch does.
Add some test cases illustrating this behavior. (Only the last new
case actually changes behavior in this commit.)
Little of this behavior was documented, either, so fix that.
Discussion: https://postgr.es/m/3626.1510949486@sss.pgh.pa.us
This is the basic feature set using OpenSSL to support the feature. In
order to allow the frontend and the backend to fetch the sent and
expected TLS Finished messages, a PG-like API is added to be able to
make the interface pluggable for other SSL implementations.
This commit also adds infrastructure to facilitate the addition of
future channel binding types as well as libpq parameters to control the
SASL mechanism names and channel binding names. Those will be added by
upcoming commits.
Some tests are added to the SSL test suite to test SCRAM authentication
with channel binding.
Author: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Peter Eisentraut <peter.eisentraut@2ndquadrant.com>
Non-data template patterns would consume characters whether or not those
characters were what the pattern expected, for example
SELECT TO_NUMBER('1234', '9,999');
produced 134 because the '2' got eaten by the comma pattern. This seems
undesirable, not least because it doesn't happen in Oracle. For the ','
and 'G' template patterns, we can fix this by consuming characters only
if they match what the pattern would output. For non-data patterns such
as 'L' and 'TH', it seems impractical to tighten things up to the point of
consuming only exact matches to what the pattern would output; but we can
improve matters quite a lot by redefining the behavior as "consume only
characters that aren't digits, signs, decimal point, or comma".
Also, fix it so that the behavior is to consume the number of *characters*
the pattern would output, not the number of *bytes*. The old coding would
do surprising things with non-ASCII currency symbols, for example. (It
would be good to apply that rule for literal text as well, but this commit
only fixes it for non-data patterns.)
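With the fix, for instance:
select to_number('1234', '9,999');   -- now 1234, not 134
select to_number('1,234', '9,999');  -- still 1234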
Oliver Ford, reviewed by Thomas Munro and Nathan Wagner, and whacked around
a bit more by me
Discussion: https://postgr.es/m/CAGMVOdvpbMqPf9XWNzOwBpzJfErkydr_fEGhmuDGa015z97mwg@mail.gmail.com
It appears that proargmodes should always be set for variadic
functions, but satisfies_hash_partition had it as NULL. In addition to
fixing the problem, add a regression test to guard against future
mistakes of this type.
If a PARAM_EXEC parameter is used below a Gather (Merge) but the InitPlan
that computes it is attached to or above the Gather (Merge), force the
value to be computed before starting parallelism and pass it down to all
workers. This allows us to use parallelism in cases where it previously
would have had to be rejected as unsafe. We do - in this case - lose the
optimization that the value is only computed if it's actually used. An
alternative strategy would be to have the first worker that needs the value
compute it, but one downside of that approach is that we'd then need to
select a parallel-safe path to compute the parameter value; it couldn't for
example contain a Gather (Merge) node. At some point in the future, we
might want to consider both approaches.
Independent of that consideration, there is a great deal more work that
could be done to make more kinds of PARAM_EXEC parameters parallel-safe.
This infrastructure could be used to allow a Gather (Merge) on the inner
side of a nested loop (although that's not a very appealing plan) and
cases where the InitPlan is attached below the Gather (Merge) could be
addressed as well using various techniques. But this is a good start.
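A hedged example of a newly parallel-safe shape (names hypothetical):
the uncorrelated sub-select becomes an InitPlan above the Gather, and
its Param is now computed up front and shipped to the workers:
explain (costs off)
select count(*) from big
where x = (select max(y) from small);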
Amit Kapila, reviewed and revised by me. Reviewing and testing from
Kuntal Ghosh, Haribabu Kommi, and Tushar Ahuja.
Discussion: http://postgr.es/m/CAA4eK1LV0Y1AUV4cUCdC+sYOx0Z0-8NAJ2Pd9=UKsbQ5Sr7+JQ@mail.gmail.com
It's become apparent during testing that there are problems with at
least the testing regime. I don't think we should have it without a
working test regime, and the difficulties might indicate implementation
problems anyway, so I'm backing out the whole thing until that's sorted
out.
This reverts commits 7459484, 9989f92, and cd8ce3a.
Move the subroutine wrappers that check whether a connection to a
server works into the main test module. This is useful for other tests
that want to check connectivity to a server.
Author: Michael Paquier <michael@paquier.xyz>
The module requires a preloaded library and the defect can't be cured by
a LOAD instruction in the test script. To achieve this we override the
installcheck target in the module's Makefile, and exclude the module in
vcregress.pl.
Along the way, revert commit 9989f92aab.
Our initial work with int128 neglected alignment considerations, an
oversight that came back to bite us in bug #14897 from Vincent Lachenal.
It is unsurprising that int128 might have a 16-byte alignment requirement;
what's slightly more surprising is that even notoriously lax Intel chips
sometimes enforce that.
Raising MAXALIGN seems out of the question: the costs in wasted disk and
memory space would be significant, and there would also be an on-disk
compatibility break. Nor does it seem very practical to try to allow some
data structures to have more-than-MAXALIGN alignment requirement, as we'd
have to push knowledge of that throughout various code that copies data
structures around.
The only way out of the box is to make type int128 conform to the system's
alignment assumptions. Fortunately, gcc supports that via its
__attribute__(aligned()) pragma; and since we don't currently support
int128 on non-gcc-workalike compilers, we shouldn't be losing any platform
support this way.
Although we could have just done pg_attribute_aligned(MAXIMUM_ALIGNOF) and
called it a day, I did a little bit of extra work to make the code more
portable than that: it will also support int128 on compilers without
__attribute__(aligned()), if the native alignment of their 128-bit-int
type is no more than that of int64.
Add a regression test case that exercises the one known instance of the
problem, in parallel aggregation over a bigint column.
This will need to be back-patched, along with the preparatory commit
91aec93e6. But let's see what the buildfarm makes of it first.
Discussion: https://postgr.es/m/20171110185747.31519.28038@wrigleys.postgresql.org
When a value contained an XML declaration naming some other encoding,
this function interpreted UTF8 bytes as the named encoding, yielding
mojibake. xml_parse() already has similar logic. This would be
necessary but not sufficient for non-UTF8 databases, so preserve
behavior there until the xpath facility can support such databases
comprehensively. Back-patch to 9.3 (all supported versions).
Pavel Stehule and Noah Misch
Discussion: https://postgr.es/m/CAFj8pRC-dM=tT=QkGi+Achkm+gwPmjyOayGuUfXVumCxkDgYWg@mail.gmail.com
Hash partitioning is useful when you want to partition a growing data
set evenly. This can be useful to keep table sizes reasonable, which
makes maintenance operations such as VACUUM faster, or to enable
partition-wise join.
At present, we still depend on constraint exclusion for partition
pruning, and the shape of the partition constraints for hash
partitioning is such that that doesn't work. Work is underway to fix
that, which should both improve performance and make partition pruning
work with hash partitioning.
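The syntax, for reference:
create table orders (id int) partition by hash (id);
create table orders_p0 partition of orders
  for values with (modulus 4, remainder 0);
create table orders_p1 partition of orders
  for values with (modulus 4, remainder 1);
-- ... and likewise for remainders 2 and 3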
Amul Sul, reviewed and tested by Dilip Kumar, Ashutosh Bapat, Yugo
Nagata, Rajkumar Raghuwanshi, Jesper Pedersen, and by me. A few
final tweaks also by me.
Discussion: http://postgr.es/m/CAAJ_b96fhpJAP=ALbETmeLk1Uni_GFZD938zgenhF49qgDTjaQ@mail.gmail.com
While it's generally unwise to give permissions on these functions to
anyone but a superuser, we've been moving away from hard-wired permission
checks inside functions in favor of using the SQL permission system to
control access. Bring lo_import() and lo_export() into compliance with
that approach.
In particular, this removes the manual configuration option
ALLOW_DANGEROUS_LO_FUNCTIONS. That dates back to 1999 (commit 4cd4a54c8);
it's unlikely anyone has used it in many years. Moreover, if you really
want such behavior, now you can get it with GRANT ... TO PUBLIC instead.
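For example, to restore the old free-for-all behavior:
grant execute on function lo_import(text), lo_import(text, oid),
  lo_export(oid, text) to public;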
Michael Paquier
Discussion: https://postgr.es/m/CAB7nPqRHmNOYbETnc_2EjsuzSM00Z+BWKv9sy6tnvSd5gWT_JA@mail.gmail.com
The problem reported as CVE-2017-15098 was already resolved in HEAD by
commit 37a795a60, but let's add the relevant test cases anyway.
Michael Paquier and Tom Lane, per a report from David Rowley.
Security: CVE-2017-15098
The update path of an INSERT ... ON CONFLICT DO UPDATE requires SELECT
permission on the columns of the arbiter index, but it failed to check
for that in the case of an arbiter specified by constraint name.
In addition, for a table with row level security enabled, it failed to
check updated rows against the table's SELECT policies when the update
path was taken (regardless of how the arbiter index was specified).
Backpatch to 9.5 where ON CONFLICT DO UPDATE and RLS were introduced.
Security: CVE-2017-15099
Makefile.global assigns this prerequisite to every target named "check",
but similar targets must mention it explicitly. Affected targets
failed, tested $PATH binaries, or tested a stale temporary installation.
The src/test/modules examples worked properly when called as "make -C
src/test/modules/$FOO check", but "make -j" allowed the test to start
before the temporary installation was in place. Back-patch to 9.5,
where commit dcae5facca introduced the
shared temp-install.
When a publisher table has fewer columns than a subscriber, the update
of a row on the publisher should result in updating of only the columns
in common. The previous coding mistakenly reset the values of
additional columns on the subscriber to NULL because it failed to skip
updates of columns not found in the attribute map.
Author: Petr Jelinek <petr.jelinek@2ndquadrant.com>
This is only used in the pg_rewind tests, so only set it there. It's
better if other tests run closer to a default configuration.
Author: Michael Paquier <michael.paquier@gmail.com>
It turns out we misdiagnosed what the real problem was. Revert the
previous changes, because they may have worse consequences going
forward. A better fix is forthcoming.
The simplistic test case is kept, though disabled.
Discussion: https://postgr.es/m/20171102112019.33wb7g5wp4zpjelu@alap3.anarazel.de
If we don't have to return any columns from heap tuples, and there's
no need to recheck qual conditions, and the heap page is all-visible,
then we can skip fetching the heap page altogether.
Skip prefetching pages too, when possible, on the assumption that the
recheck flag will remain the same from one page to the next. While that
assumption is hardly bulletproof, it seems like a good bet most of the
time, and better than prefetching pages we don't need.
This commit installs the executor infrastructure, but doesn't change
any planner cost estimates, thus possibly causing bitmap scans to
not be chosen in cases where this change renders them the best choice.
I (tgl) am not entirely convinced that we need to account for this
behavior in the planner, because I think typically the bitmap scan would
get chosen anyway if it's the best bet. In any case the submitted patch
took way too many shortcuts, resulting in too many clearly-bad choices,
to be committable.
Alexander Kuzmenkov, reviewed by Alexey Chernyshov, and whacked around
rather heavily by me.
Discussion: https://postgr.es/m/239a8955-c0fc-f506-026d-c837e86c827b@postgrespro.ru
It's possible for dropping a column, or altering its type, to require
changes in domain CHECK constraint expressions; but the code was
previously only expecting to find dependent table CHECK constraints.
Make the necessary adjustments.
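A hypothetical sketch of the scenario (names invented):

create table src (a int, b int);
create domain dsrc as src check ((value).b is null or (value).b > 0);
alter table src drop column b cascade;
-- the dependent constraint found here is a domain CHECK constraint,
-- not a table CHECK constraint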
This is a fairly old oversight, but it's a lot easier to encounter
the problem in the context of domains over composite types than it
was before. Given the lack of field complaints, I'm not going to
bother with a back-patch, though I'd be willing to reconsider that
decision if someone does complain.
Patch by me, reviewed by Michael Paquier
Discussion: https://postgr.es/m/30656.1509128130@sss.pgh.pa.us
Without this fix, dropping a role can sometimes result in parallel
query failures in sessions that have used "SET ROLE" to assume the
dropped role, even if that setting isn't active any more.
Report by Pavan Deolasee. Patch by Amit Kapila, reviewed by me.
Discussion: http://postgr.es/m/CABOikdOomRcZsLsLK+Z+qENM1zxyaWnAvFh3MJZzZnnKiF+REg@mail.gmail.com
expandRTE() supposed that an RTE_SUBQUERY subquery must have exactly
as many non-junk tlist items as the RTE has column aliases for it.
This was true at the time the code was written, and is still true so
far as parse analysis is concerned --- but when the function is used
during planning, the subquery might have appeared through insertion
of a view that now has more columns than it did when the outer query
was parsed. This results in a core dump if, for instance, we have
to expand a whole-row Var that references the subquery.
To avoid crashing, we can either stop expanding the RTE when we run
out of aliases, or invent new aliases for the added columns. While
the latter might be more useful, the former is consistent with what
expandRTE() does for composite-returning functions in the RTE_FUNCTION
case, so it seems like we'd better do it that way.
Per bug #14876 from Samuel Horwitz. This has been busted since commit
ff1ea2173 allowed views to acquire more columns, so back-patch to all
supported branches.
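A rough sketch of the failure pattern (names invented):

create table t (a int);
create view v1 as select a from t;
create view v2 as select v1 as w from v1;  -- whole-row Var over v1
create or replace view v1 as select a, a + 1 as b from t;
select * from v2;  -- expanding the whole-row Var could crash here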
Discussion: https://postgr.es/m/20171026184035.1471.82810@wrigleys.postgresql.org
This was always intended to work, but due to an oversight in
max_parallel_hazard_walker, it didn't. In testing, we missed the
fact that it was only working for custom plans, where the parameter
value has been substituted for the parameter itself early enough
that everything worked. In a generic plan, the Param node survives
and must be treated as parallel-safe. SerializeParamList provides
for the transmission of parameter values to workers.
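For instance (hypothetical table; the exact point at which a generic
plan is chosen depends on the plancache heuristics):

prepare p (int) as select count(*) from big_tab where val > $1;
execute p (0);
-- repeated executions may switch to a generic plan, in which the
-- surviving Param is treated as parallel-safe and its value is
-- serialized for transmission to the workers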
Amit Kapila with help from Kuntal Ghosh. Some changes by me.
Discussion: http://postgr.es/m/CAA4eK1+_BuZrmVCeua5Eqnm4Co9DAXdM5HPAOE2J19ePbR912Q@mail.gmail.com
On closer investigation, commits f3ea3e3e8 et al were a few bricks
shy of a load. What we need is not so much to lock down the result
type of a FieldSelect, as to lock down the existence of the column
it's trying to extract. Otherwise, we can break it by dropping that
column. The dependency on the result type is then held indirectly
through the column, and doesn't need to be recorded explicitly.
Out of paranoia, I left in the code to record a dependency on the
result type, but it's used only if we can't identify the pg_class OID
for the column. That shouldn't ever happen right now, AFAICS, but
it seems possible that in future the input node could be marked as
being of type RECORD rather than some specific composite type.
Likewise for FieldStore.
Like the previous patch, back-patch to all supported branches.
Discussion: https://postgr.es/m/22571.1509064146@sss.pgh.pa.us
This is the last major omission in our domains feature: you can now
make a domain over anything that's not a pseudotype.
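For example (a minimal sketch):

create type complex as (r float8, i float8);
create domain dcomplex as complex check ((value).r is not null);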
The major complication from an implementation standpoint is that places
that might be creating tuples of a domain type now need to be prepared
to apply domain_check(). It seems better that unprepared code fail
with an error like "<type> is not composite" than that it silently fail
to apply domain constraints. Therefore, relevant infrastructure like
get_func_result_type() and lookup_rowtype_tupdesc() has been adjusted
to treat domain-over-composite as a distinct case that unprepared code
won't recognize, rather than just transparently treating it the same
as plain composite. This isn't a 100% solution to the possibility of
overlooked domain checks, but it catches most places.
In passing, improve typcache.c's support for domains (it can now cache
the identity of a domain's base type), and rewrite the argument handling
logic in jsonfuncs.c's populate_record[set]_worker to reduce duplicative
per-call lookups.
I believe this is code-complete so far as the core and contrib code go.
The PLs need varying amounts of work, which will be tackled in followup
patches.
Discussion: https://postgr.es/m/4206.1499798337@sss.pgh.pa.us
Previously, we skipped using search_indexed_tlist_for_sortgroupref()
if the tlist expression being sought in the child plan node was merely
a Var. This is purely an optimization, based on the theory that
search_indexed_tlist_for_var() is faster, and one copy of a Var should
be as good as another. However, the GROUPING SETS patch broke the
latter assumption: grouping columns containing the "same" Var can
sometimes have different outputs, as shown in the test case added here.
So do it the hard way whenever a ressortgroupref marking exists.
(If this seems like a bottleneck, we could imagine building a tlist index
data structure for ressortgroupref values, as we do for Vars. But I'll
let that idea go until there's some evidence it's worthwhile.)
Back-patch to 9.6. The problem also exists in 9.5 where GROUPING SETS
came in, but this patch is insufficient to resolve the problem in 9.5:
there is some obscure dependency on the upper-planner-pathification
work that happened in 9.6. Given that this is such a weird corner case,
and no end users have complained about it, it doesn't seem worth the work
to develop a fix for 9.5.
Patch by me, per a report from Heikki Linnakangas. (This does not fix
Heikki's original complaint, just the follow-on one.)
Discussion: https://postgr.es/m/aefc657e-edb2-64d5-6df1-a0828f6e9104@iki.fi
json_build_object and json_build_array and the jsonb equivalents did not
correctly process explicit VARIADIC arguments. They are modified to use
the new extract_variadic_args() utility function which abstracts away
the details of the call method.
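For example, an explicit-VARIADIC call such as this is now unpacked
correctly:

select json_build_object(variadic array['a','1','b','2']);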
Michael Paquier, reviewed by Tom Lane and Dmitry Dolgov.
Backpatch to 9.5 for the jsonb fixes and 9.4 for the json fixes, as
that's where they originated.
Although joinaliasvars lists coming out of the parser are quite simple,
those lists can contain arbitrarily complex expressions after subquery
pullup. We do not perform expression preprocessing on them, meaning that
expressions in those lists will not meet the expectations of later phases
of the planner (for example, that they do not contain SubLinks). This had
been thought pretty harmless, since we don't intentionally touch those
lists in later phases --- but Andreas Seltenreich found a case in which
adjust_appendrel_attrs() could recurse into a joinaliasvars list and then
die on its assertion that it never sees a SubLink. We considered a couple
of localized fixes to prevent that specific case from looking at the
joinaliasvars lists, but really this seems like a generic hazard for all
expression processing in the planner. Therefore, probably the best answer
is to delete the joinaliasvars lists from the parsetree at the end of
expression preprocessing, so that there are no reachable expressions that
haven't been through preprocessing.
The case Andreas found seems to be harmless in non-Assert builds, and so
far there are no field reports suggesting that there are user-visible
effects in other cases. I considered back-patching this anyway, but
it turns out that Andreas' test doesn't fail at all in 9.4-9.6, because
in those versions adjust_appendrel_attrs contains code (added in commit
842faa714 and removed again in commit 215b43cdc) to process SubLinks
rather than complain about them. Barring discovery of another path by
which unprocessed joinaliasvars lists can cause trouble, the most
prudent compromise seems to be to patch this into v10 but not further.
Patch by me, with thanks to Amit Langote for initial investigation
and review.
Discussion: https://postgr.es/m/87r2tvt9f1.fsf@ansel.ydns.eu
Like the similar logic for arrays and records, it's necessary to examine
the range's subtype to decide whether the range type can support hashing.
We can omit checking the subtype for btree-defined operations, though,
since range subtypes are required to have those operations. (Possibly
that simplification for btree cases led us to overlook that it does
not apply for hash cases.)
This is only an issue if the subtype lacks hash support, which is not
true of any built-in range type, but it's easy to demonstrate a problem
with a range type over, eg, money: you can get a "could not identify
a hash function" failure when the planner is misled into thinking that
hash join or aggregation would work.
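For example:

create type moneyrange as range (subtype = money);
-- money has btree support but no hash support, so hash-based plans
-- involving moneyrange could fail with the error quoted above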
This was born broken, so back-patch to all supported branches.
This is preparation for a future patch to extensively change how
reloptions work.
Author: Nikolay Shaplov
Reviewed-by: Michael Paquier
Discussion: https://postgr.es/m/2615372.orqtEn8VGB@x200m
setTargetTable threw an error if the proposed target RangeVar's relname
matched any visible CTE or ENR. This breaks backwards compatibility in
the CTE case, since pre-v10 we never looked for a CTE here at all, so that
CTE names did not mask regular tables. It does seem like a good idea to
throw an error for the ENR case, though, thus causing ENRs to mask tables
for this purpose; ENRs are new in v10 so we're not breaking existing code,
and we may someday want to allow them to be the targets of DML.
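With the fix, a query like this (names invented) once again targets the
real table, matching pre-v10 behavior:

create table tab (x int);
with tab as (select 1) insert into tab values (1);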
To fix that, replace use of getRTEForSpecialRelationTypes, which was
overkill anyway, with use of scanNameSpaceForENR.
A second problem was that the check neglected to verify null schemaname,
so that a CTE or ENR could incorrectly be thought to match a qualified
RangeVar. That happened because getRTEForSpecialRelationTypes relied
on its caller to have checked for null schemaname. Even though the one
remaining caller got it right, this is obviously bug-prone, so move
the check inside getRTEForSpecialRelationTypes.
Also, revert commit 18ce3a4ab's extremely poorly thought out decision to
add a NULL return case to parserOpenTable --- without either documenting
that or adjusting any of the callers to check for it. The current bug
seems to have arisen in part due to working around that bad idea.
In passing, remove the one-line shim functions transformCTEReference and
transformENRReference --- they don't seem to be adding any clarity or
functionality.
Per report from Hugo Mercier (via Julien Rouhaud). Back-patch to v10
where the bug was introduced.
Thomas Munro, with minor editing by me
Discussion: https://postgr.es/m/CAOBaU_YdPVH+PTtiKSSLOiiW3mVDYsnNUekK+XPbHXiP=wrFLA@mail.gmail.com
Flex generates a lot of functions that are not actually used. In order
to avoid coverage figures being ruined by that, mark up the part of the
.l files where the generated code appears by lcov exclusion markers.
That way, lcov will typically report coverage only for the .l file,
which is under our control, but not for the .c file.
Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
The built-in OSAs all share the same transition function, so they can
share transition state as long as the final functions cooperate to not
do the sort step more than once. To avoid running the tuplesort object
in randomAccess mode unnecessarily, add a bit of infrastructure to
nodeAgg.c to let the aggregate functions find out whether the transition
state is actually being shared or not.
This doesn't work for the hypothetical aggregates, since those inject
a hypothetical row that isn't traceable to the shared input state.
So they remain marked aggfinalmodify = 'w'.
Discussion: https://postgr.es/m/CAB4ELO5RZhOamuT9Xsf72ozbenDLLXZKSk07FiSVsuJNZB861A@mail.gmail.com
An aggregate's input expression(s) are not supposed to be evaluated
at all for a row where its FILTER test fails ... but commit 8ed3f11bb
overlooked that requirement. Reshuffle so that aggregates having a
filter clause evaluate their arguments separately from those without.
This still gets the benefit of doing only one ExecProject in the
common case of multiple Aggrefs, none of which have filters.
While at it, arrange for filter clauses to be included in the common
ExecProject evaluation, thus perhaps buying a little bit even when
there are filters.
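As an illustration of the requirement (hypothetical table):

select sum(1 / x) filter (where x <> 0) from t;
-- 1/x must not be evaluated for rows where x = 0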
Back-patch to v10 where the bug was introduced.
Discussion: https://postgr.es/m/30065.1508161354@sss.pgh.pa.us
Up to now, there's been hard-wired assumptions that normal aggregates'
final functions never modify their transition states, while ordered-set
aggregates' final functions always do. This has always been a bit
limiting, and in particular it's getting in the way of improving the
built-in ordered-set aggregates to allow merging of transition states.
Therefore, let's introduce catalog and CREATE AGGREGATE infrastructure
that lets the finalfn's behavior be declared explicitly.
There are now three possibilities for the finalfn behavior: it's purely
read-only, it trashes the transition state irrecoverably, or it changes
the state in such a way that no more transfn calls are possible but the
state can still be passed to other, compatible finalfns. There are no
examples of this third case today, but we'll shortly make the built-in
OSAs act like that.
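A minimal sketch of declaring the behavior explicitly (assuming the
option is spelled finalfunc_modify, with read_only as one of its
values):

create aggregate my_avg (float8) (
    sfunc = float8_accum,
    stype = float8[],
    finalfunc = float8_avg,
    finalfunc_modify = read_only,
    initcond = '{0,0,0}'
);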
This change allows user-defined aggregates to explicitly disclaim support
for use as window functions, and/or to prevent transition state merging,
if their implementations cannot handle that. While it was previously
possible to handle the window case with a run-time error check, there was
not any way to prevent transition state merging, which in retrospect is
something commit 804163bc2 should have provided for. But better late
than never.
In passing, split out pg_aggregate.c's extern function declarations into
a new header file pg_aggregate_fn.h, similarly to what we've done for
some other catalog headers, so that pg_aggregate.h itself can be safe
for frontend files to include. This lets pg_dump use the symbolic
names for relevant constants.
Discussion: https://postgr.es/m/4834.1507849699@sss.pgh.pa.us
A few command line options accepted by pg_regress were not being output
by help(), including --help itself. Add that one, as well as --version
and --bindir, and the corresponding short options for the first two.
We could consider this for backpatching, but it did not seem worthwhile
and no one else advocated for it, so apply only to master for now.
Author: Joe Conway
Reviewed-By: Tom Lane
Discussion: https://postgr.es/m/dd519469-06d7-2662-83ef-c926f6c4f0f1%40joeconway.com
This ought to work, but the built-in OSAs are not capable of coping,
because their final-functions destructively modify their transition
state (specifically, the contained tuplesort object). That was fine
when those functions were written, but commit 804163bc2 moved the
goalposts without telling orderedsetaggs.c.
We should fix the built-in OSAs to support this, but it will take
a little work, especially if we don't want to sacrifice performance
in the normal non-shared-state case. Given that it took a year after
9.6 release for anyone to notice this bug, we should not prioritize
sharable-state over nonsharable-state performance. And a proper fix
is likely to be more complicated than we'd want to back-patch, too.
Therefore, let's just put in this stop-gap patch to prevent nodeAgg.c
from choosing to use shared state for OSAs. We can revert it in HEAD
when we get a better fix.
Report from Lukas Eder, diagnosis by me, patch by David Rowley.
Back-patch to 9.6 where the problem was introduced.
Discussion: https://postgr.es/m/CAB4ELO5RZhOamuT9Xsf72ozbenDLLXZKSk07FiSVsuJNZB861A@mail.gmail.com
I noticed the tmp_check subdirectory wasn't getting cleaned up
after a check-world run. Apparently pgxs.mk will only do this
for you if you've defined REGRESS. The only other src/test/modules
Makefile that does not set that is snapshot_too_old, and it
does it like this.
If the operator is a strict btree equality operator, and X isn't volatile,
then the clause must yield true for any non-null value of X, or null if X
is null. At top level of a WHERE clause, we can ignore the distinction
between false and null results, so it's valid to simplify the clause to
"X IS NOT NULL". This is a useful improvement mainly because we'll get
a far better selectivity estimate in most cases.
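For example, given a table t with a non-volatile integer column x:

select * from t where x = x;
-- is now effectively planned as: where x is not null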
Because such cases seldom arise in well-written queries, it is unappetizing
to expend a lot of planner cycles looking for them ... but it turns out
that there's a place we can shoehorn this in practically for free, because
equivclass.c already has to detect and reject candidate equivalences of the
form X = X. That doesn't catch every place that it would be valid to
simplify to X IS NOT NULL, but it catches the typical case. Working harder
doesn't seem justified.
Patch by me, reviewed by Petr Jelinek
Discussion: https://postgr.es/m/CAMjNa7cC4X9YR-vAJS-jSYCajhRDvJQnN7m2sLH1wLh-_Z2bsw@mail.gmail.com
The previous coding here trashed the line buffer as it scanned it,
making it impossible to print the source line in subsequent error
messages. With a few save/restore/strdup pushups we can improve
that situation.
In passing, move the free'ing of the various strings that are collected
while processing one set of tests down to the bottom of the loop.
That's simpler, less surprising, and should make valgrind less unhappy
about the strings that were previously leaked by the last iteration.
We have a very old rule that parallel_schedule should have no more
than twenty tests in any one parallel group, so as to provide a
bound on the number of concurrently running processes needed to
pass the tests. But people keep forgetting the rule, so let's add
a few lines of code to check it.
Discussion: https://postgr.es/m/a37e9c57-22d4-1b82-1270-4501cd2e984e@2ndquadrant.com
The partition_join test was added to a parallel group that was already
at the maximum of 20 concurrent tests. The hash_func test wasn't
added to serial_schedule at all. The identity and partition_join tests
were added to serial_schedule with the aid of a dartboard, rather than
maintaining consistency with parallel_schedule.
There are proposals afoot to make these sorts of errors harder to make,
but in the meantime let's fix the ones already in place.
Discussion: https://postgr.es/m/a37e9c57-22d4-1b82-1270-4501cd2e984e@2ndquadrant.com
Instead of joining two partitioned tables in their entirety we can, if
it is an equi-join on the partition keys, join the matching partitions
individually. This involves teaching the planner about "other join"
rels, which are related to regular join rels in the same way that
other member rels are related to baserels. This can use significantly
more CPU time and memory than regular join planning, because there may
now be a set of "other" rels not only for every base relation but also
for every join relation. In most practical cases, this probably
shouldn't be a problem, because (1) it's probably unusual to join many
tables each with many partitions using the partition keys for all
joins, and (2) if you are in that scenario then you probably have a big
enough machine to handle the increased memory cost of planning and (3)
the resulting plan is highly likely to be better, so what you spend in
planning you'll make up on the execution side. All the same, for now,
turn this feature off by default.
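A sketch of turning it on (the GUC was named enable_partition_wise_join
as of this commit; check your version's documentation):

set enable_partition_wise_join = on;
-- an equi-join on the partition keys of two identically partitioned
-- tables can then be planned partition-by-partition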
Currently, we can only perform joins between two tables whose
partitioning schemes are absolutely identical. It would be nice to
cope with other scenarios, such as extra partitions on one side or the
other with no match on the other side, but that will have to wait for
a future patch.
Ashutosh Bapat, reviewed and tested by Rajkumar Raghuwanshi, Amit
Langote, Rafia Sabih, Thomas Munro, Dilip Kumar, Antonin Houska, Amit
Khandekar, and by me. A few final adjustments by me.
Discussion: http://postgr.es/m/CAFjFpRfQ8GrQvzp3jA2wnLqrHmaXna-urjm_UY9BqXj=EaDTSA@mail.gmail.com
Discussion: http://postgr.es/m/CAFjFpRcitjfrULr5jfuKWRPsGUX0LQ0k8-yG0Qw2+1LBGNpMdw@mail.gmail.com
If the table attached as a partition is itself partitioned, individual
partitions might have constraints strong enough to skip scanning the
table even if the table actually attached does not. This is pretty
cheap to check, and possibly a big win if it works out.
Amit Langote, with test case changes by me.
Discussion: http://postgr.es/m/1f08b844-0078-aa8d-452e-7af3bf77d05f@lab.ntt.co.jp
Haribabu Kommi, reviewed by Dilip Kumar and Rafia Sabih. Various
cosmetic changes by me to explain why this appears to be safe but
allowing inserts in parallel mode in general wouldn't be. Also, I
removed the REFRESH MATERIALIZED VIEW case from Haribabu's patch,
since I'm not convinced that case is OK, and hacked on the
documentation somewhat.
Discussion: http://postgr.es/m/CAJrrPGdo5bak6qnPWe8Kpi8g_jfQEs-G4SYmG9y+OFaw2-dPvA@mail.gmail.com
A lot of semi-internal code just prints out numeric SPI error codes,
which is not very helpful. We already have an API function to convert
the codes to a string, so let's make more use of that.
Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
Not much to say about this; does what it says on the tin.
However, formerly, if there was a column list then the ANALYZE action was
implied; now it must be specified, or you get an error. This is because
it would otherwise be a bit unclear what the user meant if some tables
have column lists and some don't.
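For example:

vacuum analyze t1 (a), t2;  -- OK: column list with explicit ANALYZE
vacuum t1 (a);              -- error: column list without ANALYZE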
Nathan Bossart, reviewed by Michael Paquier and Masahiko Sawada, with some
editorialization by me
Discussion: https://postgr.es/m/E061A8E3-5E3D-494D-94F0-E8A9B312BBFC@amazon.com
Allowing arrays with a domain type as their element type was left un-done
in the original domain patch, but not for any very good reason. This
omission leads to such surprising results as array_agg() not working on
a domain column, because the parser can't identify a suitable output type
for the polymorphic aggregate.
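For example, this now works:

create domain posint as int check (value > 0);
create table dt (v posint);
select array_agg(v) from dt;  -- result is of type posint[]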
In order to fix this, first clean up the APIs of coerce_to_domain() and
some internal functions in parse_coerce.c so that we consistently pass
around a CoercionContext along with CoercionForm. Previously, we sometimes
passed an "isExplicit" boolean flag instead, which is strictly less
information; and coerce_to_domain() didn't even get that, but instead had
to reverse-engineer isExplicit from CoercionForm. That's contrary to the
documentation in primnodes.h that says that CoercionForm only affects
display and not semantics. I don't think this change fixes any live bugs,
but it makes things more consistent. The main reason for doing it though
is that now build_coercion_expression() receives ccontext, which it needs
in order to be able to recursively invoke coerce_to_target_type().
Next, reimplement ArrayCoerceExpr so that the node does not directly know
any details of what has to be done to the individual array elements while
performing the array coercion. Instead, the per-element processing is
represented by a sub-expression whose input is a source array element and
whose output is a target array element. This simplifies life in
parse_coerce.c, because it can build that sub-expression by a recursive
invocation of coerce_to_target_type(). The executor now handles the
per-element processing as a compiled expression instead of hard-wired code.
The main advantage of this is that we can use a single ArrayCoerceExpr to
handle as many as three successive steps per element: base type conversion,
typmod coercion, and domain constraint checking. The old code used two
stacked ArrayCoerceExprs to handle type + typmod coercion, which was pretty
inefficient, and adding yet another array deconstruction to do domain
constraint checking seemed very unappetizing.
In the case where we just need a single, very simple coercion function,
doing this straightforwardly leads to a noticeable increase in the
per-array-element runtime cost. Hence, add an additional shortcut evalfunc
in execExprInterp.c that skips unnecessary overhead for that specific form
of expression. The runtime speed of simple cases is within 1% or so of
where it was before, while cases that previously required two levels of
array processing are significantly faster.
Finally, create an implicit array type for every domain type, as we do for
base types, enums, etc. Everything except the array-coercion case seems
to just work without further effort.
Tom Lane, reviewed by Andrew Dunstan
Discussion: https://postgr.es/m/9852.1499791473@sss.pgh.pa.us
For \d sequencename, the psql code just did SELECT * FROM sequencename
to get the information to display, but this does not contain much
interesting information anymore in PostgreSQL 10, because the metadata
has been moved to a separate system catalog.
This patch creates a newly designed sequence display that is not merely
an extension of the general relation/table display as it was previously.
Example:
PostgreSQL 9.6:
=> \d foobar
Sequence "public.foobar"
Column | Type | Value
---------------+---------+---------------------
sequence_name | name | foobar
last_value | bigint | 1
start_value | bigint | 1
increment_by | bigint | 1
max_value | bigint | 9223372036854775807
min_value | bigint | 1
cache_value | bigint | 1
log_cnt | bigint | 0
is_cycled | boolean | f
is_called | boolean | f
PostgreSQL 10 before this change:
=> \d foobar
Sequence "public.foobar"
Column | Type | Value
------------+---------+-------
last_value | bigint | 1
log_cnt | bigint | 0
is_called | boolean | f
New:
=> \d foobar
Sequence "public.foobar"
Type | Start | Minimum | Maximum | Increment | Cycles? | Cache
--------+-------+---------+---------------------+-----------+---------+-------
bigint | 1 | 1 | 9223372036854775807 | 1 | no | 1
Reviewed-by: Fabien COELHO <coelho@cri.ensmp.fr>
Add bgw_type field to background worker structure. It is intended to be
set to the same value for all workers of the same type, so they can be
grouped in pg_stat_activity, for example.
The backend_type column in pg_stat_activity now shows bgw_type for a
background worker. The ps listing also no longer calls out that a
process is a background worker but just shows the bgw_type. That way,
being a background worker is now more of an implementation detail that
is not shown to the user. However, most log messages still refer to
'background worker "%s"'; otherwise constructing sensible and
translatable log messages would become tricky.
Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
At the time replacement_sort_tuples was introduced, there were still
cases where replacement selection sort noticeably outperformed using
quicksort even for the first run. However, those cases seem to have
evaporated as a result of further improvements made since that time
(and perhaps also advances in CPU technology). So remove replacement
selection and the controlling GUC entirely. This makes tuplesort.c
noticeably simpler and probably paves the way for further
optimizations someone might want to do later.
Peter Geoghegan, with review and testing by Tomas Vondra and me.
Discussion: https://postgr.es/m/CAH2-WzmmNjG_K0R9nqYwMq3zjyJJK+hCbiZYNGhAy-Zyjs64GQ@mail.gmail.com
Vacuum calls page-level HOT prune to remove dead HOT tuples before doing
liveness checks (HeapTupleSatisfiesVacuum) on the remaining tuples. But
concurrent transaction commit/abort may turn DEAD some of the HOT tuples
that survived the prune, before HeapTupleSatisfiesVacuum tests them.
This happens to activate the code that decides to freeze the tuple ...
which resuscitates it, duplicating data.
(This is especially bad if there are any unique constraints, because those
are now internally violated due to the duplicate entries, though you
won't know until you try to REINDEX or dump/restore the table.)
One possible fix would be to simply skip doing anything to the tuple,
and hope that the next HOT prune would remove it. But there is a
problem: if the tuple is older than freeze horizon, this would leave an
unfrozen XID behind, and if no HOT prune happens to clean it up before
the containing pg_clog segment is truncated away, it'd later cause an
error when the XID is looked up.
Fix the problem by having the tuple freezing routines cope with the
situation: don't freeze the tuple (and keep it dead). In the cases that
the XID is older than the freeze age, set the HEAP_XMAX_COMMITTED flag
so that there is no need to look up the XID in pg_clog later on.
An isolation test is included, authored by Michael Paquier, loosely
based on Daniel Wood's original reproducer. It only tests one
particular scenario, though, not all the possible ways for this problem
to surface; it would be good to have a more reliable way to test this more
fully, but it'd require more work.
In message https://postgr.es/m/20170911140103.5akxptyrwgpc25bw@alvherre.pgsql
I outlined another test case (more closely matching Dan Wood's) that
exposed a few more ways for the problem to occur.
Backpatch all the way back to 9.3, where this problem was introduced by
multixact juggling. In branches 9.3 and 9.4, this includes a backpatch
of commit e5ff9fefcd50 (of 9.5 era), since the original is not
correctable without matching the coding pattern in 9.5 up.
Reported-by: Daniel Wood
Diagnosed-by: Daniel Wood
Reviewed-by: Yi Wen Wong, Michaël Paquier
Discussion: https://postgr.es/m/E5711E62-8FDF-4DCA-A888-C200BF6B5742@amazon.com
float8_numeric() and float4_numeric() failed to consider the possibility
that the input is an IEEE infinity. The results depended on the
platform-specific behavior of sprintf(): on most platforms you'd get
something like
ERROR: invalid input syntax for type numeric: "inf"
but at least on Windows it's possible for the conversion to succeed and
deliver a finite value (typically 1), due to a nonstandard output format
from sprintf and lack of syntax error checking in these functions.
Since our numeric type lacks the concept of infinity, a suitable conversion
is impossible; the best thing to do is throw an explicit error before
letting sprintf do its thing.
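For example (exact error wording may differ):

select 'Infinity'::float8::numeric;
-- ERROR: cannot convert infinity to numeric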
While at it, let's use snprintf not sprintf. Overrunning the buffer
should be impossible if sprintf does what it's supposed to, but this
is cheap insurance against a stack smash if it doesn't.
Problem reported by Taiki Kondo. Patch by me based on fix suggestion
from KaiGai Kohei. Back-patch to all supported branches.
Discussion: https://postgr.es/m/12A9442FBAE80D4E8953883E0B84E088C8C7A2@BPXM01GP.gisp.nec.co.jp
This reverts commit 15bc038f9, along with the followon commits 1635e80d3
and 984c92074 that tried to clean up the problems exposed by bug #14825.
The result was incomplete because it failed to address parallel-query
requirements. With 10.0 release so close upon us, now does not seem like
the time to be adding more code to fix that. I hope we can un-revert this
code and add the missing parallel query support during the v11 cycle.
Back-patch to v10.
Discussion: https://postgr.es/m/20170922185904.1448.16585@wrigleys.postgresql.org
The blacklist mechanism added by the preceding commit directly fixes
most of the practical cases that the same-transaction test was meant
to cover. What remains is use-cases like
begin;
create type e as enum('x');
alter type e add value 'y';
-- use 'y' somehow
commit;
However, because the same-transaction test is heuristic, it fails on
small variants of that, such as renaming the type or changing its
owner. Rather than try to explain the behavior to users, let's
remove it and just have a rule that the newly added value can't be
used before being committed, full stop. Perhaps later it will be
worth the implementation effort and overhead to have a more accurate
test for type-was-created-in-this-transaction. We'll wait for some
field experience with v10 before deciding to do that.
Back-patch to v10.
Discussion: https://postgr.es/m/20170922185904.1448.16585@wrigleys.postgresql.org
Commit 15bc038f9 allowed ALTER TYPE ADD VALUE to be executed inside
transaction blocks, by disallowing the use of the added value later
in the same transaction, except under limited circumstances. However,
the test for "limited circumstances" was heuristic and could reject
references to enum values that were created during CREATE TYPE AS ENUM,
not just later. This breaks the use-case of restoring pg_dump scripts
in a single transaction, as reported in bug #14825 from Balazs Szilfai.
We can improve this by keeping a "blacklist" table of enum value OIDs
created by ALTER TYPE ADD VALUE during the current transaction. Any
visible-but-uncommitted value whose OID is not in the blacklist must
have been created by CREATE TYPE AS ENUM, and can be used safely
because it could not have a lifespan shorter than its parent enum type.
This change also removes the restriction that a renamed enum value
can't be used before being committed (unless it was on the blacklist).
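For example, the single-transaction restore pattern now works:

begin;
create type rainbow as enum ('red', 'green');
select 'red'::rainbow;  -- OK: created by CREATE TYPE, not blacklisted
commit;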
Andrew Dunstan, with cosmetic improvements by me.
Back-patch to v10.
Discussion: https://postgr.es/m/20170922185904.1448.16585@wrigleys.postgresql.org
A FOR ALL TABLES publication naturally considers all base tables to be
candidates for replication. This includes transient heaps that are
created during a table rewrite during DDL. This causes failures on the
subscriber side because it will not have a table like pg_temp_16386 to
receive data (and if it did, it would be the wrong table).
To prevent this problem, we filter out any tables that match this
naming pattern and match an actual table from FOR ALL TABLES
publications. This is only a heuristic, meaning that user tables that
match that naming could accidentally be omitted. A more robust solution
might require an explicit marking of such tables in pg_class somehow.
Reported-by: yxq <yxq@o2.pl>
Bug: #14785
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Petr Jelinek <petr.jelinek@2ndquadrant.com>
If construct_array() or construct_md_array() were given a dimension of
zero, they'd produce an array that contains no elements but has positive
dimension. This violates a general expectation that empty arrays should
have ndims = 0; in particular, while arrays like this print as empty,
they don't compare equal to other empty arrays.
Up to now we've expected callers to avoid making such calls and instead
be careful to call construct_empty_array() if there would be no elements.
But this has always been an easily missed case, and we've repeatedly had to
fix callers to do it right. In bug #14826, Erwin Brandstetter pointed out
yet another such oversight, in ts_lexize(); and a bit of examination of
other call sites found at least two more with similar issues. So let's
fix the problem centrally and permanently by changing these two functions
to construct a proper zero-D empty array whenever the array would be empty.
This renders a few explicit calls of construct_empty_array() redundant,
but the only such place I found that really seemed worth changing was in
ExecEvalArrayExpr().
Although this fixes some very old bugs, no back-patch: the problem is
pretty minor and the risk of changing behavior seems to outweigh the
benefit in stable branches.
Discussion: https://postgr.es/m/20170923125723.1448.39412@wrigleys.postgresql.org
Discussion: https://postgr.es/m/20570.1506198383@sss.pgh.pa.us