Commit Graph

4376 Commits

Author SHA1 Message Date
Robert Haas e2f1eb0ee3 Implement partition-wise grouping/aggregation.
If the partition keys of the input relation are part of the GROUP BY
clause, all the rows belonging to a given group come from a single
partition.  This allows aggregation/grouping over a partitioned
relation to be broken down into aggregation/grouping on each
partition.  This should be no worse, and often better, than the normal
approach.

If the GROUP BY clause does not contain all the partition keys, we can
still perform partial aggregation for each partition and then finalize
aggregation after appending the partial results.  This is less certain
to be a win, but it's still useful.

Jeevan Chalke, Ashutosh Bapat, Robert Haas.  The larger patch series
of which this patch is a part was also reviewed and tested by Antonin
Houska, Rajkumar Raghuwanshi, David Rowley, Dilip Kumar, Konstantin
Knizhnik, Pascal Legrand, and Rafia Sabih.

Discussion: http://postgr.es/m/CAM2+6=V64_xhstVHie0Rz=KPEQnLJMZt_e314P0jaT_oJ9MR8A@mail.gmail.com
2018-03-22 12:49:48 -04:00
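An illustrative sketch of the partition-wise aggregation described above (table and column names are invented; the controlling setting is assumed to be named enable_partitionwise_aggregate and to default to off):

    -- The partition key appears in GROUP BY, so each group is confined to a
    -- single partition and grouping can be pushed down to the partitions.
    CREATE TABLE measurements (city_id int, reading numeric)
        PARTITION BY HASH (city_id);
    CREATE TABLE measurements_p0 PARTITION OF measurements
        FOR VALUES WITH (MODULUS 2, REMAINDER 0);
    CREATE TABLE measurements_p1 PARTITION OF measurements
        FOR VALUES WITH (MODULUS 2, REMAINDER 1);
    SET enable_partitionwise_aggregate = on;   -- assumed GUC name
    EXPLAIN (COSTS OFF)
    SELECT city_id, avg(reading) FROM measurements GROUP BY city_id;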
Tom Lane 742869946f Fix mishandling of quoted-list GUC values in pg_dump and ruleutils.c.
Code that prints out the contents of setconfig or proconfig arrays in
SQL format needs to handle GUC_LIST_QUOTE variables differently from
other ones, because for those variables, flatten_set_variable_args()
already applied a layer of quoting.  The value can therefore safely
be printed as-is, and indeed must be, or flatten_set_variable_args()
will muck it up completely on reload.  For all other GUC variables,
it's necessary and sufficient to quote the value as a SQL literal.

We'd recognized the need for this long ago, but mis-analyzed the
need slightly, thinking that all GUC_LIST_INPUT variables needed
the special treatment.  That's actually wrong, since a valid value
of a LIST variable might include characters that need quoting,
although no existing variables accept such values.

More to the point, we hadn't made any particular effort to keep the
various places that deal with this up-to-date with the set of variables
that actually need special treatment, meaning that we'd do the wrong
thing with, for example, temp_tablespaces values.  This affects dumping
of SET clauses attached to functions, as well as ALTER DATABASE/ROLE SET
commands.

In ruleutils.c we can fix it reasonably honestly by exporting a guc.c
function that allows discovering the flags for a given GUC variable.
But pg_dump doesn't have easy access to that, so continue the old method
of having a hard-wired list of affected variable names.  At least we can
fix it to have just one list not two, and update the list to match
current reality.

A remaining problem with this is that it only works for built-in
GUC variables.  pg_dump's list obviously knows nothing of third-party
extensions, and even the "ask guc.c" method isn't bulletproof since
the relevant extension might not be loaded.  There's no obvious
solution to that, so for now, we'll just have to discourage extension
authors from inventing custom GUCs that need GUC_LIST_QUOTE.

This has been busted for a long time, so back-patch to all supported
branches.

Michael Paquier and Tom Lane, reviewed by Kyotaro Horiguchi and
Pavel Stehule

Discussion: https://postgr.es/m/20180111064900.GA51030@paquier.xyz
2018-03-21 20:03:28 -04:00
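A hypothetical illustration of the quoting issue described above, using search_path (a GUC_LIST_QUOTE variable) attached to a function:

    -- flatten_set_variable_args() has already quoted the list elements stored
    -- in proconfig, so pg_dump and ruleutils.c must emit the stored value
    -- as-is rather than wrapping it in another layer of SQL-literal quoting.
    CREATE FUNCTION one() RETURNS int LANGUAGE sql
        SET search_path = "my schema", public
        AS 'SELECT 1';
    SELECT proconfig FROM pg_proc WHERE proname = 'one';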
Tom Lane 0f0deb7194 Improve predtest.c's handling of cases with NULL-constant inputs.
Currently, if operator_predicate_proof() is given an operator clause like
"something op NULL", it just throws up its hands and reports it can't prove
anything.  But we can often do better than that, if the operator is strict,
because then we know that the clause returns NULL overall.  Depending on
whether we're trying to prove or refute something, and whether we need
weak or strong semantics for NULL, this may be enough to prove the
implication, especially when we rely on the standard rule that "false
implies anything".  In particular, this lets us do something useful with
questions like "does X IN (1,3,5,NULL) imply X <= 5?"  The null entry
in the IN list can effectively be ignored for this purpose, but the
proof rules were not previously smart enough to deduce that.

This patch is by me, but it owes something to previous work by
Amit Langote to try to solve problems of the form mentioned.
Thanks also to Emre Hasegeli and Ashutosh Bapat for review.

Discussion: https://postgr.es/m/3bad48fc-f257-c445-feeb-8a2b2fb622ba@lab.ntt.co.jp
2018-03-21 18:30:46 -04:00
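A hypothetical case exercising the new proof rule, where the planner must decide whether the WHERE clause implies a partial-index predicate:

    CREATE TABLE t (x int);
    CREATE INDEX t_small_x ON t (x) WHERE x <= 5;
    -- With the improvement, "x IN (1,3,5,NULL)" can be proven to imply
    -- "x <= 5" (the NULL entry is effectively ignored), so the partial index
    -- is now considered usable for this query.
    EXPLAIN (COSTS OFF) SELECT * FROM t WHERE x IN (1, 3, 5, NULL);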
Andrew Gierth d2d79887ea Repair crash with unsortable grouping sets.
If there were multiple grouping sets, none of them empty, all of which
were unsortable, then an oversight in consider_groupingsets_paths led
to a null pointer dereference. Fix, and add a regression test for this
case.

Per report from Dang Minh Huong, though I didn't use their patch.

Backpatch to 10.x where hashed grouping sets were added.
2018-03-21 11:39:28 +00:00
Andrew Dunstan 9ad21a6957 Don't use an Msys virtual path to create a tablespace
The new unlogged_reinit recovery tests create a new tablespace using
TestLib.pm's tempdir. However, on msys that function returns a virtual
path that isn't understood by Postgres. Here we add a new function to
TestLib.pm to turn such a path into a real path on the underlying file
system, and use it in the new test to create the tablespace. The new
function is essentially a NOOP everywhere but msys.
2018-03-20 08:25:36 +10:30
Alvaro Herrera 6666ee49f4 Fix state reversal after partition tuple routing
We make some changes to ModifyTableState and the EState it uses whenever
we route tuples to partitions; but we weren't restoring properly in all
cases, possibly causing crashes when partitions with different tuple
descriptors are targeted by tuples inserted in the same command.
Refactor some code, creating ExecPrepareTupleRouting, to encapsulate the
needed state changing logic, and have it invoked one level above its
current place (ie. put it in ExecModifyTable instead of ExecInsert);
this makes it all more readable.

Add a test case to exercise this.

We don't support having views as partitions; and since only views can
have INSTEAD OF triggers, there is no point in testing for INSTEAD OF
when processing insertions into a partitioned table.  Remove code that
appears to support this (but which is actually never relevant.)

In passing, fix location of some very confusing comments in
ModifyTableState.

Reported-by: Amit Langote
Author: Etsuro Fujita, Amit Langote
Discussion: https://postgr.es/m/0473bf5c-57b1-f1f7-3d58-455c2230bc5f@lab.ntt.co.jp
2018-03-19 17:45:53 -03:00
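A hypothetical sketch of the kind of scenario described above, with partitions of different tuple descriptors receiving rows from one command:

    CREATE TABLE pt (a int, b text) PARTITION BY LIST (a);
    CREATE TABLE pt1 PARTITION OF pt FOR VALUES IN (1);
    CREATE TABLE pt2 (b text, a int);                -- different column order
    ALTER TABLE pt ATTACH PARTITION pt2 FOR VALUES IN (2);
    INSERT INTO pt VALUES (1, 'one'), (2, 'two');    -- routes to both partitions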
Tom Lane 8f5ac44043 Fix WHERE CURRENT OF when the referenced cursor uses an index-only scan.
"UPDATE/DELETE WHERE CURRENT OF cursor_name" failed, with an error message
like "cannot extract system attribute from virtual tuple", if the cursor
was using an index-only scan for the target table.  Fix it by digging the
current TID out of the indexscan state.

It seems likely that the same failure could occur for CustomScan plans
and perhaps some FDW plan types, so that leaving this to be treated as an
internal error with an obscure message isn't as good an idea as it first
seemed.  Hence, add a bit of heaptuple.c infrastructure to let us deliver
a more on-topic message.  I chose to make the message match what you get
for the case where execCurrentOf can't identify the target scan node at
all, "cursor "foo" is not a simply updatable scan of table "bar"".
Perhaps it should be different, but we can always adjust that later.

In the future, it might be nice to provide hooks that would let custom
scan providers and/or FDWs deal with this in other ways; but that's
not a suitable topic for a back-patchable bug fix.

It's been like this all along, so back-patch to all supported branches.

Yugo Nagata and Tom Lane

Discussion: https://postgr.es/m/20180201013349.937dfc5f.nagata@sraoss.co.jp
2018-03-17 14:59:49 -04:00
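A minimal sketch of the previously failing case (names invented; whether the cursor actually uses an index-only scan depends on the planner):

    CREATE TABLE tbl (id int PRIMARY KEY);
    INSERT INTO tbl SELECT generate_series(1, 1000);
    VACUUM tbl;                       -- encourage an index-only scan
    BEGIN;
    DECLARE c CURSOR FOR SELECT id FROM tbl WHERE id > 990;
    FETCH 1 FROM c;
    UPDATE tbl SET id = id WHERE CURRENT OF c;   -- failed before the fix
    COMMIT;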
Peter Eisentraut 8a3d942529 Add ssl_passphrase_command setting
This allows specifying an external command for prompting for or
otherwise obtaining passphrases for SSL key files.  This is useful
because in many cases there is no TTY easily available during service
startup.

Also add a setting ssl_passphrase_command_supports_reload, which allows
supporting SSL configuration reload even if SSL files need passphrases.

Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
2018-03-17 08:28:51 -04:00
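A hypothetical configuration sketch for the new settings (the script path is invented; %p is assumed to be replaced by a prompt string):

    ALTER SYSTEM SET ssl_passphrase_command = '/usr/local/bin/get_passphrase.sh %p';
    ALTER SYSTEM SET ssl_passphrase_command_supports_reload = on;
    SELECT pg_reload_conf();   -- both settings take effect on reload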
Peter Eisentraut 81148856b0 Improve savepoint error messages
Include the savepoint name in the error message and rephrase it a bit to
match common style.

Reviewed-by: Alvaro Herrera <alvherre@alvh.no-ip.org>
2018-03-16 13:18:06 -04:00
Tom Lane 877cdf11ea Mop-up for letting VOID-returning SQL functions end with a SELECT.
Part of the intent in commit fd1a421fe was to allow SQL functions that are
declared to return VOID to contain anything, including an unrelated final
SELECT, the same as SQL-language procedures can.  However, the planner's
inlining logic didn't get that memo.  Fix it, and add some regression tests
covering this area, since evidently we had none.

In passing, clean up some typos in comments in create_function_3.sql,
and get rid of its none-too-safe assumption that DROP CASCADE notice
output is immutably ordered.

Per report from Prabhat Sahu.

Discussion: https://postgr.es/m/CANEvxPqxAj6nNHVcaXxpTeEFPmh24Whu+23emgjiuKrhJSct0A@mail.gmail.com
2018-03-16 12:48:13 -04:00
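A small sketch of the now-permitted shape, which the planner's inlining logic must also handle:

    -- A VOID-returning SQL function whose last statement is an unrelated SELECT.
    CREATE FUNCTION voidfn() RETURNS void LANGUAGE sql AS 'SELECT 2 + 2';
    SELECT voidfn();   -- result is a single void value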
Tom Lane 2cf8c7aa48 Clean up duplicate table and function names in regression tests.
Many of the objects we create during the regression tests are put in the
public schema, so that using the same names in different regression tests
creates a hazard of test failures if any two such scripts run concurrently.
This patch cleans up a bunch of latent hazards of that sort, as well as two
live hazards.

The current situation in this regard is far worse than it was a year or two
back, because practically all of the partitioning-related test cases have
reused table names with enthusiasm.  I despaired of cleaning up that mess
within the five most-affected tests (create_table, alter_table, insert,
update, inherit); fortunately those don't run concurrently.

Other than partitioning problems, most of the issues boil down to using
names like "foo", "bar", "tmp", etc, without thought for the fact that
other test scripts might use similar names concurrently.  I've made an
effort to make all such names more specific.

One of the live hazards was that commit 7421f4b8 caused with.sql to
create a table named "test", conflicting with a similarly-named table
in alter_table.sql; this was exposed in the buildfarm recently.
The other one was that join.sql and transactions.sql both create tables
named "foo" and "bar"; but join.sql's uses of those names date back
only to December or so.

Since commit 7421f4b8 was back-patched to v10, back-patch a minimal
fix for that problem.  The rest of this is just future-proofing.

Discussion: https://postgr.es/m/4627.1521070268@sss.pgh.pa.us
2018-03-15 17:09:02 -04:00
Alvaro Herrera a446a1c70f test_ddl_deparse: rename matview
Should have done this in e69f5e0efc ...
Per note from Tom Lane.
2018-03-15 15:21:01 -03:00
Tom Lane fb7db40ad2 Clean up duplicate role and schema names in regression tests.
Since these names are global, using the same ones in different regression
tests creates a hazard of test failures if any two such scripts run
concurrently.  Let's establish a policy of not doing that.  In the cases
where a conflict existed, I chose to rename both sides: in principle one
script or the other could've been left in possession of the common name,
but that seems to just invite more trouble of the same sort.

There are a number of places where scripts are using names that seem
unduly generic, but in the absence of actual conflicts I left them alone.

In addition, fix insert.sql's use of "someone_else" as a role name.
That's a flat out violation of longstanding project policy, so back-patch
that change to v10 where the usage appeared.  The rest of this is just
future-proofing, as no two of these scripts are actually run concurrently
in the existing parallel_schedule.

Conflicts of schema-qualified names also exist, but will be dealt with
separately.

Discussion: https://postgr.es/m/4627.1521070268@sss.pgh.pa.us
2018-03-15 14:00:31 -04:00
Alvaro Herrera e69f5e0efc test_ddl_deparse: Don't use pg_class as source for a matview
Doing so causes a pg_upgrade of a database containing these objects to
fail whenever pg_class changes.  And it's pointless anyway: we have more
interesting tables anyhow.

Discussion: https://postgr.es/m/CAD5tBc+s8pW9WvH2+_z=B4x95FD4QuzZKcaMpff_9H4rS0VU1A@mail.gmail.com
2018-03-15 09:51:12 -03:00
Peter Eisentraut 33803f67f1 Support INOUT arguments in procedures
In a top-level CALL, the values of INOUT arguments will be returned as a
result row.  In PL/pgSQL, the values are assigned back to the input
arguments.  In other languages, the same convention as for returning a
record from a function is used.  That does not require any code changes
in the PL implementations.

Reviewed-by: Pavel Stehule <pavel.stehule@gmail.com>
2018-03-14 12:07:28 -04:00
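A minimal sketch of the new behavior (procedure name invented):

    CREATE PROCEDURE double_it(INOUT x int) LANGUAGE plpgsql AS $$
    BEGIN
        x := x * 2;
    END;
    $$;
    CALL double_it(21);   -- a top-level CALL returns a result row with x = 42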
Peter Eisentraut df411e7c66 Add tests for reinit.c
This tests how the different forks of unlogged tables behave in crash
recovery.

Author: David Steele <david@pgmasters.net>
2018-03-14 09:03:44 -04:00
Andres Freund 1e22166e5e Expand AND/OR regression tests around NULL handling.
Previously there were no tests verifying that NULL handling in AND/OR
was correct (i.e. that NULL rather than false is returned if
the expression doesn't return true).

Author: Andres Freund
2018-03-13 16:12:31 -07:00
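The behavior the new tests pin down, in one query:

    -- AND/OR yield NULL, not false, when the expression does not come out true.
    SELECT (NULL::boolean AND true) IS NULL AS and_is_null,
           (NULL::boolean OR false) IS NULL AS or_is_null;   -- both true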
Andres Freund 4f63e85eb1 Add COSTS OFF to two tests that use EXPLAIN.
Discussion: https://postgr.es/m/20180312222023.i4sgkbl4oqtstus3@alap3.anarazel.de
2018-03-13 16:12:31 -07:00
Robert Haas 0927d2f46d Let Parallel Append over simple UNION ALL have partial subpaths.
A simple UNION ALL gets flattened into an appendrel of subquery
RTEs, but up until now it's been impossible for the appendrel to use
the partial paths for the subqueries, so we can implement the
appendrel as a Parallel Append but only one with non-partial paths
as children.

There are three separate obstacles to removing that limitation.
First, when planning a subquery, propagate any partial paths to the
final_rel so that they are potentially visible to outer query levels
(but not if they have initPlans attached, because that wouldn't be
safe).  Second, after planning a subquery, propagate any partial paths
for the final_rel to the subquery RTE in the outer query level in the
same way we do for non-partial paths.  Third, teach finalize_plan() to
account for the possibility that the fake parameter we use for rescan
signalling when the plan contains a Gather (Merge) node may be
propagated from an outer query level.

Patch by me, reviewed and tested by Amit Khandekar, Rajkumar
Raghuwanshi, and Ashutosh Bapat.  Test cases based on examples by
Rajkumar Raghuwanshi.

Discussion: http://postgr.es/m/CA+Tgmoa6L9A1nNCk3aTDVZLZ4KkHDn1+tm7mFyFvP+uQPS7bAg@mail.gmail.com
2018-03-13 16:34:08 -04:00
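A hypothetical sketch of a query that can now use partial subpaths under Parallel Append (table names and sizes invented):

    CREATE TABLE big_t1 (a int);
    CREATE TABLE big_t2 (a int);
    INSERT INTO big_t1 SELECT generate_series(1, 1000000);
    INSERT INTO big_t2 SELECT generate_series(1, 1000000);
    -- May now produce a Parallel Append over partial scans of both subqueries.
    EXPLAIN (COSTS OFF)
    SELECT count(*)
    FROM (SELECT a FROM big_t1 UNION ALL SELECT a FROM big_t2) u;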
Peter Eisentraut 377b5ac484 Fix CREATE TABLE / LIKE with bigint identity column
CREATE TABLE / LIKE with a bigint identity column would fail on
platforms where long is 32 bits.  Copying the sequence values used
makeInteger(), which would truncate the 64-bit sequence data to 32 bits.
To fix, use makeFloat() instead, like the parser.  (This does not
actually make use of floats, but stores the values as strings.)

Bug: #15096
Reviewed-by: Michael Paquier <michael@paquier.xyz>
2018-03-13 09:41:30 -04:00
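The previously failing case, roughly (names invented):

    -- Copying a bigint identity column no longer truncates the sequence data
    -- on platforms where long is 32 bits.
    CREATE TABLE src (id bigint GENERATED ALWAYS AS IDENTITY);
    CREATE TABLE src_copy (LIKE src INCLUDING IDENTITY);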
Alvaro Herrera 1f8a3327a9 Avoid having two PKs in a partition
If a table containing a primary key is attached as a partition to a
partitioned table which has a primary key with a different definition,
we would happily create a second one in the new partition.  Oops.  It
turns out that this is because an error check in DefineIndex is executed
only if you tell it that it's being run by ALTER TABLE, and the original
code here didn't.  Change it so that it does.

Added a couple of test cases for this, also.  A previously working test
started to fail in a different way than before the patch because the new
check is called earlier; change the PK to plain UNIQUE so that the new
behavior isn't invoked, so that the test continues to verify what we
want it to verify.

Reported by: Noriyoshi Shinoda
Discussion: https://postgr.es/m/DF4PR8401MB102060EC2615EC9227CC73F7EEDF0@DF4PR8401MB1020.NAMPRD84.PROD.OUTLOOK.COM
2018-03-12 19:42:32 -03:00
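A hypothetical sketch of the newly rejected case:

    CREATE TABLE parted (a int, b int, PRIMARY KEY (a, b)) PARTITION BY LIST (a);
    CREATE TABLE part1 (a int NOT NULL, b int NOT NULL, PRIMARY KEY (b));
    -- Attaching a partition whose own primary key differs from the parent's is
    -- now rejected instead of silently leaving the partition with two PKs.
    ALTER TABLE parted ATTACH PARTITION part1 FOR VALUES IN (1);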
Alvaro Herrera 63cbee6a78 doc: Reword restriction on partition keys in unique indexes
New wording from David G. Johnston, who noticed the unreadable original
also.  Include his suggested test case as well.

Fix a typo I noticed elsewhere while doing this.

Discussion: https://postgr.es/m/CAKFQuwY4Ld7ecxL_KAmaxwt0FUu5VcPPN2L4dh+3BeYbrdBa5g@mail.gmail.com
2018-03-12 13:32:28 -03:00
Tom Lane 4a4e2442a7 Fix improper uses of canonicalize_qual().
One of the things canonicalize_qual() does is to remove constant-NULL
subexpressions of top-level AND/OR clauses.  It does that on the assumption
that what it's given is a top-level WHERE clause, so that NULL can be
treated like FALSE.  Although this is documented down inside a subroutine
of canonicalize_qual(), it wasn't mentioned in the documentation of that
function itself, and some callers hadn't gotten that memo.

Notably, commit d007a9505 caused get_relation_constraints() to apply
canonicalize_qual() to CHECK constraints.  That allowed constraint
exclusion to misoptimize situations in which a CHECK constraint had a
provably-NULL subclause, as seen in the regression test case added here,
in which a child table that should be scanned is not.  (Although this
thinko is ancient, the test case doesn't fail before 9.2, for reasons
I've not bothered to track down in detail.  There may be related cases
that do fail before that.)

More recently, commit f0e44751d added an independent bug by applying
canonicalize_qual() to index expressions, which is even sillier since
those might not even be boolean.  If they are, though, I think this
could lead to making incorrect index entries for affected index
expressions in v10.  I haven't attempted to prove that though.

To fix, add an "is_check" parameter to canonicalize_qual() to specify
whether it should assume WHERE or CHECK semantics, and make it perform
NULL-elimination accordingly.  Adjust the callers to apply the right
semantics, or remove the call entirely in cases where it's not known
that the expression has one or the other semantics.  I also removed
the call in some cases involving partition expressions, where it should
be a no-op because such expressions should be canonical already ...
and was a no-op, independently of whether it could in principle have
done something, because it was being handed the qual in implicit-AND
format which isn't what it expects.  In HEAD, add an Assert to catch
that type of mistake in future.

This represents an API break for external callers of canonicalize_qual().
While that's intentional in HEAD to make such callers think about which
case applies to them, it seems like something we probably wouldn't be
thanked for in released branches.  Hence, in released branches, the
extra parameter is added to a new function canonicalize_qual_ext(),
and canonicalize_qual() is a wrapper that retains its old behavior.

Patch by me with suggestions from Dean Rasheed.  Back-patch to all
supported branches.

Discussion: https://postgr.es/m/24475.1520635069@sss.pgh.pa.us
2018-03-11 18:10:42 -04:00
Tom Lane 5748f3a0aa Improve predtest.c's internal docs, and enhance its functionality a bit.
Commit b08df9cab left things rather poorly documented as far as the
exact semantics of "clause_is_check" mode went.  Also, that mode did
not really work correctly for predicate_refuted_by; although given the
lack of specification as to what it should do, as well as the lack
of any actual use-case, that's perhaps not surprising.

Rename "clause_is_check" to "weak" proof mode, and provide specifications
for what it should do.  I defined weak refutation as meaning "truth of A
implies non-truth of B", which makes it possible to use the mode in the
part of relation_excluded_by_constraints that checks for mutually
contradictory WHERE clauses.  Fix up several places that did things wrong
for that definition.  (As far as I can see, these errors would only lead
to failure-to-prove, not incorrect claims of proof, making them not
serious bugs even aside from the fact that v10 contains no use of this
mode.  So there seems no need for back-patching.)

In addition, teach predicate_refuted_by_recurse that it can use
predicate_implied_by_recurse after all when processing a strong NOT-clause,
so long as it asks for the correct proof strength.  This is an optimization
that could have been included in commit b08df9cab, but wasn't.

Also, simplify and generalize the logic that checks for whether nullness of
the argument of IS [NOT] NULL would force overall nullness of the predicate
or clause.  (This results in a change in the partition_prune test's output,
as it is now able to prune an all-nulls partition that it did not recognize
before.)

In passing, in PartConstraintImpliedByRelConstraint, remove bogus
conversion of the constraint list to explicit-AND form and then right back
again; that accomplished nothing except forcing a useless extra level of
recursion inside predicate_implied_by.

Discussion: https://postgr.es/m/5983.1520487191@sss.pgh.pa.us
2018-03-09 16:58:26 -05:00
Tom Lane a63c3274a6 Fix test_predtest's idea of what weak refutation means.
I'd initially supposed that predicate_refuted_by(..., true) ought to
say that "A refutes B" means "non-falsity of A implies non-truth of B".
But it seems better to define it as "truth of A implies non-truth of B".
This is more useful in the current system, slightly easier to prove,
and in closer correspondence to the existing code behavior.

With this change, test_predtest no longer claims that any existing
test cases show false proof reports, though there still are cases
where we could prove something and fail to.

Discussion: https://postgr.es/m/5983.1520487191@sss.pgh.pa.us
2018-03-08 19:44:23 -05:00
Tom Lane 44468f49bb Add test scaffolding for exercising optimizer's predicate-proof logic.
The predicate-proof code in predtest.c is a bit hard to test as-is:
you have to construct a query whose plan changes depending on the success
of a test, and in tests that have been around for awhile, it's always
possible that the plan shape is now being determined by some other factor.
Our existing regression tests aren't doing real well at providing full
code coverage of predtest.c, either.  So, let's add a small test module
that allows directly inspecting the results of predicate_implied_by()
and predicate_refuted_by() for arbitrary boolean expressions.

I chose the set of tests committed here in order to get reasonably
complete code coverage of predtest.c just from running this test
module, and to cover some cases called out as being interesting in
the existing comments.  We might want to add more later.  But this
set already shows a few cases where perhaps things could be improved.

Indeed, this exercise proves that predicate_refuted_by() is buggy for
the case of clause_is_check = true, though fortunately we aren't using
that case anywhere yet.  I'll look into doing something about that in
a separate commit.  For now, just memorialize the current behavior.

Discussion: https://postgr.es/m/5983.1520487191@sss.pgh.pa.us
2018-03-08 13:25:26 -05:00
Peter Eisentraut 2dadd061b3 Fix test counting in SSL tests
The branch that does not support tls-server-end-point runs more tests,
so we need to structure the test counting dynamically.

Reviewed-by: Michael Paquier <michael@paquier.xyz>
2018-03-07 00:53:25 -05:00
Peter Eisentraut 286c0ab257 Fix expected error message in test 2018-03-06 14:45:05 -05:00
Peter Eisentraut f0ae2fa88b Raise Test::More version requirement
Since ed8a7c6fcf, version 0.87 is
required.
2018-03-06 14:42:10 -05:00
Peter Eisentraut 4c831aeaa7 Tests for Kerberos/GSSAPI authentication
Like the LDAP and SSL tests, these are not run by default but can be
selected via PG_TEST_EXTRA.

Reviewed-by: Thomas Munro <thomas.munro@enterprisedb.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
2018-03-06 10:57:36 -05:00
Andres Freund 854dd8cff5 Add parenthesized options syntax for ANALYZE.
This is analogous to the syntax allowed for VACUUM. This allows us to
avoid making new options reserved keywords and makes it easier to
allow arbitrary argument order. Oh, and it's consistent with the other
commands, too.

Author: Nathan Bossart
Reviewed-By: Michael Paquier, Masahiko Sawada
Discussion: https://postgr.es/m/D3FC73E2-9B1A-4DB4-8180-55F57D116B4E@amazon.com
2018-03-05 16:21:05 -08:00
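The new syntax, in brief (table name invented):

    ANALYZE (VERBOSE) my_table;   -- parenthesized options, as already allowed for VACUUM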
Alvaro Herrera 5564c11815 Clone extended stats in CREATE TABLE (LIKE INCLUDING ALL)
The LIKE INCLUDING ALL clause to CREATE TABLE intuitively indicates
cloning of extended statistics on the source table, but it failed to do
so.  Patch it up so that it does.  Also include an INCLUDING STATISTICS
option to the LIKE clause, so that the behavior can be requested
individually, or excluded individually.

While at it, reorder the INCLUDING options, both in code and in docs, in
alphabetical order which makes more sense than feature-implementation
order that was previously used.

Backpatch this to Postgres 10, where extended statistics were
introduced, because this is seen as an oversight in a fresh feature
which is better to get consistent from the get-go instead of changing
only in pg11.

In pg11, comments on statistics objects are cloned too.  In pg10 they
are not, because I (Álvaro) was too coward to change the parse node as
required to support it.  Also, in pg10 I chose not to renumber the
parser symbols for the various INCLUDING options in LIKE, for the same
reason.  Any corresponding user-visible changes (docs) are backpatched,
though.

Reported-by: Stephen Froehlich
Author: David Rowley
Reviewed-by: Álvaro Herrera, Tomas Vondra
Discussion: https://postgr.es/m/CY1PR0601MB1927315B45667A1B679D0FD5E5EF0@CY1PR0601MB1927.namprd06.prod.outlook.com
2018-03-05 19:37:19 -03:00
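A short sketch of the new behavior (names invented):

    CREATE TABLE orders (a int, b int);
    CREATE STATISTICS orders_ab ON a, b FROM orders;
    -- Extended statistics are now cloned by INCLUDING STATISTICS and INCLUDING ALL.
    CREATE TABLE orders_clone (LIKE orders INCLUDING STATISTICS);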
Peter Eisentraut 39314efa4d Minor fixes for reloptions tests
Follow-up to 4b95cc1dc3

Author: Nikolay Shaplov <dhyan@nataraj.su>
2018-03-03 12:51:56 -05:00
Peter Eisentraut f96f48113f In SSL tests, restart after pg_hba.conf changes
This prevents silently using a wrong configuration, similar to
b4e2ada347.
2018-03-03 08:54:46 -05:00
Peter Eisentraut ff18115ae9 Prevent LDAP and SSL tests from running without support in build
Add checks in each test file that the build supports the feature,
otherwise skip all the tests.  Before, if someone were to (accidentally)
invoke these tests without build support, they would fail in confusing
ways.

based on patch from Michael Paquier <michael@paquier.xyz>
2018-03-03 08:52:21 -05:00
Peter Eisentraut fdb34824e0 Add PG_TEST_EXTRA to control optional test suites
The SSL and LDAP test suites are not run by default, as they are not
secure for multi-user environments.  This commit adds an extra make
variable to optionally enable them, for example:

make check-world PG_TEST_EXTRA='ldap ssl'

Author: Michael Paquier <michael@paquier.xyz>
2018-03-03 01:40:48 -05:00
Peter Eisentraut fd1a421fe6 Add prokind column, replacing proisagg and proiswindow
The new column distinguishes normal functions, procedures, aggregates,
and window functions.  This replaces the existing columns proisagg and
proiswindow, and replaces the convention that procedures are indicated
by prorettype == 0.  Also change prorettype to be VOIDOID for procedures.

Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
2018-03-02 13:48:33 -05:00
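A quick look at the new column (the single-letter codes, assumed here to be f/p/a/w, distinguish the four kinds):

    SELECT proname, prokind
    FROM pg_proc
    WHERE proname IN ('lower', 'count');   -- expect 'f' (function) and 'a' (aggregate)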
Tom Lane 8f72a57048 Fix format_type() to restore its old behavior.
Commit a26116c6c accidentally changed the behavior of the SQL format_type()
function while refactoring.  For the reasons explained in that function's
comment, a NULL typemod argument should behave differently from a -1
argument.  Since we've managed to break this, add a regression test
memorializing the intended behavior.

In passing, be consistent about the type of the "flags" parameter.

Noted by Rushabh Lathia, though I revised the patch some more.

Discussion: https://postgr.es/m/CAGPqQf3RB2q-d2Awp_-x-Ur6aOxTUwnApt-vm-iTtceZxYnePg@mail.gmail.com
2018-03-01 11:37:46 -05:00
Andres Freund 1437824564 pg_regress: Increase space available for test names.
A few isolationtester tests with reasonable names are too wide to
nicely align. Increase space.

Author: Thomas Munro
Discussion: https://postgr.es/m/CAEepm=2v7+EHs6zsJzFn+zJOT4F4Kb69Z1xJ7Zf5kgwLr1n=VA@mail.gmail.com
2018-03-01 02:48:33 -08:00
Tom Lane 38a1144a91 Remove restriction on SQL block length in isolationtester scanner.
specscanner.l had a fixed limit of 1024 bytes on the length of
individual SQL stanzas in an isolation test script.  People are
starting to run into that, so fix it by making the buffer resizable.

Once we allow this in HEAD, it seems inevitable that somebody will
try to back-patch a test that exceeds the old limit, so back-patch
this change as a preventive measure.

Daniel Gustafsson

Discussion: https://postgr.es/m/8D628BE4-6606-4FF6-A3FF-8B2B0E9B43D0@yesql.se
2018-02-28 16:57:37 -05:00
Robert Haas ce1663cdcd Fix assertion failure when Parallel Append is run serially.
Parallel-aware plan nodes must be prepared to run without parallelism
if it's not possible at execution time for whatever reason.  Commit
ab72716778, which introduced Parallel
Append, overlooked this.

Rajkumar Raghuwanshi reported this problem, and I included his test
case in this patch.  The code changes are by me.

Discussion: http://postgr.es/m/CAKcux6=WqkUudLg1GLZZ7fc5ScWC1+Y9qD=pAHeqy32WoeJQvw@mail.gmail.com
2018-02-28 10:58:27 -05:00
Tom Lane e98a4de7d2 Use the correct tuplestore read pointer in a NamedTuplestoreScan.
Tom Kazimiers reported that transition tables don't work correctly when
they are scanned by more than one executor node.  That's because commit
18ce3a4ab allocated separate read pointers for each executor node, as it
must, but failed to make them active at the appropriate times.  Repair.

Thomas Munro

Discussion: https://postgr.es/m/20180224034748.bixarv6632vbxgeb%40dewberry.localdomain
2018-02-27 15:56:51 -05:00
Tom Lane c40e20a83c Revert renaming of int44in/int44out.
This seemed like a good idea in commit be42eb9d6, but it causes more
trouble than it's worth for cross-branch upgrade testing.

Discussion: https://postgr.es/m/11927.1519756619@sss.pgh.pa.us
2018-02-27 15:15:35 -05:00
Tom Lane 25b692568f Prevent dangling-pointer access when update trigger returns old tuple.
A before-update row trigger may choose to return the "new" or "old" tuple
unmodified.  ExecBRUpdateTriggers failed to consider the second
possibility, and would proceed to free the "old" tuple even if it was the
one returned, leading to subsequent access to already-deallocated memory.
In debug builds this reliably leads to an "invalid memory alloc request
size" failure; in production builds it might accidentally work, but data
corruption is also possible.

This is a very old bug.  There are probably a couple of reasons it hasn't
been noticed up to now.  It would be more usual to return NULL if one
wanted to suppress the update action; returning "old" is significantly less
efficient since the update will occur anyway.  Also, none of the standard
PLs would ever cause this because they all returned freshly-manufactured
tuples even if they were just copying "old".  But commit 4b93f5799 changed
that for plpgsql, making it possible to see the bug with a plpgsql trigger.
Still, this is certainly legal behavior for a trigger function, so it's
ExecBRUpdateTriggers's fault not plpgsql's.

It seems worth creating a test case that exercises returning "old" directly
with a C-language trigger; testing this through plpgsql seems unreliable
because its behavior might change again.

Report and fix by Rushabh Lathia; regression test case by me.
Back-patch to all supported branches.

Discussion: https://postgr.es/m/CAGPqQf1P4pjiNPrMof=P_16E-DFjt457j+nH2ex3=nBTew7tXw@mail.gmail.com
2018-02-27 13:28:02 -05:00
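A plpgsql approximation of the trigger behavior in question (the committed test uses a C-language trigger instead, for the reason given above):

    CREATE TABLE audited (v int);
    CREATE FUNCTION keep_old() RETURNS trigger LANGUAGE plpgsql AS $$
    BEGIN
        RETURN OLD;    -- legal: the update proceeds, but stores the old values
    END;
    $$;
    CREATE TRIGGER audited_keep_old BEFORE UPDATE ON audited
        FOR EACH ROW EXECUTE PROCEDURE keep_old();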
Tom Lane be42eb9d62 Improve regression test coverage of regress.c.
It's a bit silly to have test functions that aren't tested, so test
them.

In passing, rename int44in/int44out to city_budget_in/_out so that they
match how the regression tests use them.  Also, fix city_budget_out
so that it emits the format city_budget_in expects to read; otherwise
we'd have dump/reload failures when testing pg_dump against the
regression database.  (We avoided that in the past only because no
data of type city_budget was actually stored anywhere.)

Discussion: https://postgr.es/m/29322.1519701006@sss.pgh.pa.us
2018-02-27 12:13:14 -05:00
Tom Lane db3af9feb1 Remove unused functions in regress.c.
This patch removes five functions that presumably were once used in the
regression tests, but haven't been so used in many years.  Nonetheless
we've been wasting maintenance effort on them (e.g., by converting them
to V1 function protocol).  I see no reason to think that reviving them
would add any useful test coverage, so drop 'em.

In passing, mark regress_lseg_construct static, since it's not called
from outside this file.

Discussion: https://postgr.es/m/29322.1519701006@sss.pgh.pa.us
2018-02-27 11:11:25 -05:00
Tom Lane fb533e45f9 Schema-qualify references in test_ddl_deparse test script.
This omission seems to be what is causing buildfarm failures on crake.

Security: CVE-2018-1058
2018-02-26 12:22:39 -05:00
Noah Misch 5770172cb0 Document security implications of search_path and the public schema.
The ability to create like-named objects in different schemas opens up
the potential for users to change the behavior of other users' queries,
maliciously or accidentally.  When you connect to a PostgreSQL server,
you should remove from your search_path any schema for which a user
other than yourself or superusers holds the CREATE privilege.  If you do
not, other users holding CREATE privilege can redefine the behavior of
your commands, causing them to perform arbitrary SQL statements under
your identity.  "SET search_path = ..." and "SELECT
pg_catalog.set_config(...)" are not vulnerable to such hijacking, so one
can use either as the first command of a session.  As special
exceptions, the following client applications behave as documented
regardless of search_path settings and schema privileges: clusterdb
createdb createlang createuser dropdb droplang dropuser ecpg (not
programs it generates) initdb oid2name pg_archivecleanup pg_basebackup
pg_config pg_controldata pg_ctl pg_dump pg_dumpall pg_isready
pg_receivewal pg_recvlogical pg_resetwal pg_restore pg_rewind pg_standby
pg_test_fsync pg_test_timing pg_upgrade pg_waldump reindexdb vacuumdb
vacuumlo.  Not included are core client programs that run user-specified
SQL commands, namely psql and pgbench.  PostgreSQL encourages non-core
client applications to do likewise.

Document this in the context of libpq connections, psql connections,
dblink connections, ECPG connections, extension packaging, and schema
usage patterns.  The principal defense for applications is "SELECT
pg_catalog.set_config('search_path', '', false)", and the principal
defense for databases is "REVOKE CREATE ON SCHEMA public FROM PUBLIC".
Either one is sufficient to prevent attack.  After a REVOKE, consider
auditing the public schema for objects named like pg_catalog objects.

Authors of SECURITY DEFINER functions use some of the same defenses, and
the CREATE FUNCTION reference page already covered them thoroughly.
This is a good opportunity to audit SECURITY DEFINER functions for
robust security practice.

Back-patch to 9.3 (all supported versions).

Reviewed by Michael Paquier and Jonathan S. Katz.  Reported by Arseniy
Sharoglazov.

Security: CVE-2018-1058
2018-02-26 07:39:44 -08:00
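The two principal defenses named above, as commands:

    SELECT pg_catalog.set_config('search_path', '', false);   -- per-application defense
    REVOKE CREATE ON SCHEMA public FROM PUBLIC;                -- per-database defense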
Tom Lane 3d2aed664e Avoid using unsafe search_path settings during dump and restore.
Historically, pg_dump has "set search_path = foo, pg_catalog" when
dumping an object in schema "foo", and has also caused that setting
to be used while restoring the object.  This is problematic because
functions and operators in schema "foo" could capture references meant
to refer to pg_catalog entries, both in the queries issued by pg_dump
and those issued during the subsequent restore run.  That could
result in dump/restore misbehavior, or in privilege escalation if a
nefarious user installs trojan-horse functions or operators.

This patch changes pg_dump so that it does not change the search_path
dynamically.  The emitted restore script sets the search_path to what
was used at dump time, and then leaves it alone thereafter.  Created
objects are placed in the correct schema, regardless of the active
search_path, by dint of schema-qualifying their names in the CREATE
commands, as well as in subsequent ALTER and ALTER-like commands.

Since this change requires a change in the behavior of pg_restore
when processing an archive file made according to this new convention,
bump the archive file version number; old versions of pg_restore will
therefore refuse to process files made with new versions of pg_dump.

Security: CVE-2018-1058
2018-02-26 10:18:21 -05:00
Peter Eisentraut fde03e8b55 Use croak instead of die in Perl code when appropriate 2018-02-24 14:54:16 -05:00
Tom Lane 8b29e88cdc Add window RANGE support for float4, float8, numeric.
Commit 0a459cec9 left this for later, but since time's running out,
I went ahead and took care of it.  There are more data types that
somebody might someday want RANGE support for, but this is enough
to satisfy all expectations of the SQL standard, which just says that
"numeric, datetime, and interval" types should have RANGE support.
2018-02-24 13:23:38 -05:00
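A small example of the newly supported framing (values invented):

    SELECT x,
           sum(x) OVER (ORDER BY x
                        RANGE BETWEEN 1.5 PRECEDING AND CURRENT ROW) AS windowed
    FROM (VALUES (1.0), (2.0), (3.5)) AS v(x);   -- numeric ORDER BY column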
Peter Eisentraut 081bfc19b3 Check error messages in SSL tests
In tests that check whether a connection fails, also check the error
message.  That makes sure that the connection was rejected for the right
reason.

This discovered that two tests had their connection failing for the
wrong reason.  One test failed because pg_hba.conf was not set up to
allow that user, one test failed because the client key file did not
have the right permissions.  Fix those tests and add a new one that is
really supposed to check the file permission issue.

Reviewed-by: Michael Paquier <michael@paquier.xyz>
2018-02-24 10:07:45 -05:00
Peter Eisentraut bc1adc651b Fix filtering of unsupported relations in logical replication
In the pgoutput plugin, skip changes for relations that are not
publishable, per is_publishable_class().  This concerns in particular
materialized views and information_schema tables.  While those relations
cannot be part of a publication, per existing checks, they will be
considered by a FOR ALL TABLES publication.  A subscription would not
actually apply changes for those relations, again per existing checks,
but trying to match incoming changes to local tables on the subscriber
would lead to errors if no matching local table exists.  Skipping those
changes on the publisher avoids sending useless changes and eliminates
the error.

Bug: #15044
Reported-by: Chad Trabant <chad@iris.washington.edu>
Reviewed-by: Petr Jelinek <petr.jelinek@2ndquadrant.com>
2018-02-23 22:13:21 -05:00
Noah Misch fe35cea7cf Synchronize doc/ copies of src/test/examples/.
This is mostly cosmetic, but it might fix build failures, on some
platform, when copying from the documentation.

Back-patch to 9.3 (all supported versions).
2018-02-23 11:24:04 -08:00
Tom Lane 9afd513df0 Fix planner failures with overlapping mergejoin clauses in an outer join.
Given overlapping or partially redundant join clauses, for example
	t1 JOIN t2 ON t1.a = t2.x AND t1.b = t2.x
the planner's EquivalenceClass machinery will ordinarily refactor the
clauses as "t1.a = t1.b AND t1.a = t2.x", so that join processing doesn't
see multiple references to the same EquivalenceClass in a list of join
equality clauses.  However, if the join is outer, it's incorrect to derive
a restriction clause on the outer side from the join conditions, so the
clause refactoring does not happen and we end up with overlapping join
conditions.  The code that attempted to deal with such cases had several
subtle bugs, which could result in "left and right pathkeys do not match in
mergejoin" or "outer pathkeys do not match mergeclauses" planner errors,
if the selected join plan type was a mergejoin.  (It does not appear that
any actually incorrect plan could have been emitted.)

The core of the problem really was failure to recognize that the outer and
inner relations' pathkeys have different relationships to the mergeclause
list.  A join's mergeclause list is constructed by reference to the outer
pathkeys, so it will always be ordered the same as the outer pathkeys, but
this cannot be presumed true for the inner pathkeys.  If the inner sides of
the mergeclauses contain multiple references to the same EquivalenceClass
({t2.x} in the above example) then a simplistic rendering of the required
inner sort order is like "ORDER BY t2.x, t2.x", but the pathkey machinery
recognizes that the second sort column is redundant and throws it away.
The mergejoin planning code failed to account for that behavior properly.
One error was to try to generate cut-down versions of the mergeclause list
from cut-down versions of the inner pathkeys in the same way as the initial
construction of the mergeclause list from the outer pathkeys was done; this
could lead to choosing a mergeclause list that fails to match the outer
pathkeys.  The other problem was that the pathkey cross-checking code in
create_mergejoin_plan treated the inner and outer pathkey lists
identically, whereas actually the expectations for them must be different.
That led to false "pathkeys do not match" failures in some cases, and in
principle could have led to failure to detect bogus plans in other cases,
though there is no indication that such bogus plans could be generated.

Reported by Alexander Kuzmenkov, who also reviewed this patch.  This has
been broken for years (back to around 8.3 according to my testing), so
back-patch to all supported branches.

Discussion: https://postgr.es/m/5dad9160-4632-0e47-e120-8e2082000c01@postgrespro.ru
2018-02-23 13:47:33 -05:00
Peter Eisentraut 76b6aa41f4 Support parameters in CALL
To support parameters in CALL, move the parse analysis of the procedure
and arguments into the global transformation phase, so that the parser
hooks can be applied.  And then at execution time pass the parameters
from ProcessUtility on to ExecuteCallStmt.
2018-02-22 21:36:48 -05:00
Peter Eisentraut 10cfce34c0 Add user-callable SHA-2 functions
Add the user-callable functions sha224, sha256, sha384, sha512.  We
already had these in the C code to support SCRAM, but there was no test
coverage outside of the SCRAM tests.  Adding these as user-callable
functions allows writing some tests.  Also, we have a user-callable md5
function but no more modern alternative, which led to wide use of md5 as
a general-purpose hash function, which leads to occasional complaints
about using md5.

Also mark the existing md5 functions as leak-proof.

Reviewed-by: Michael Paquier <michael@paquier.xyz>
2018-02-22 11:34:53 -05:00
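The new functions in use:

    SELECT sha256('abc'::bytea) AS sha256_digest,
           sha512('abc'::bytea) AS sha512_digest;   -- results are bytea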
Robert Haas 9a5c4f58f3 Try to stabilize EXPLAIN output in partition_check test.
Commit 7d8ac9814b adjusted these
tests in the hope of preserving the plan shape, but I failed to
notice that the three partitions were, on my local machine, choosing
two different plan shapes.  This is probably related to the fact
that all three tables have exactly the same row count.  Try to
improve the situation by making pht1_e about half as large as
the other two.

Per Tom Lane and the buildfarm.

Discussion: http://postgr.es/m/25380.1519277713@sss.pgh.pa.us
2018-02-22 08:51:00 -05:00
Robert Haas 7d8ac9814b Charge cpu_tuple_cost * 0.5 for Append and MergeAppend nodes.
Previously, Append didn't charge anything at all, and MergeAppend
charged only cpu_operator_cost, about half the value used here.  This
change might make MergeAppend plans slightly more likely to be chosen
than before, since this commit increases the assumed cost for Append
-- with default values -- by 0.005 per tuple but MergeAppend by only
0.0025 per tuple.  Since the comparisons required by MergeAppend are
costed separately, it's not clear why MergeAppend needs to be
otherwise more expensive than Append, so hopefully this is OK.

Prior to partition-wise join, it didn't really matter whether or not
an Append node had any cost of its own, because every plan had to use
the same number of Append or MergeAppend nodes and in the same places.
Only the relative cost of Append vs. MergeAppend made a difference.
Now, however, it is possible to avoid some of the Append nodes using a
partition-wise join, so it's worth making an effort.  Pending patches
for partition-wise aggregate care too, because an Append of Aggregate
nodes will incur the Append overhead fewer times than an Aggregate
over an Append.  Although in most cases this change will favor the use
of partition-wise techniques, it does the opposite when the join
cardinality is greater than the sum of the input cardinalities.  Since
this situation arises in an existing regression test, I [rhaas]
adjusted it to keep the overall plan shape approximately the same.

Jeevan Chalke, per a suggestion from David Rowley.  Reviewed by
Ashutosh Bapat.  Some changes by me.  The larger patch series of which
this patch is a part was also reviewed and tested by Antonin Houska,
Rajkumar Raghuwanshi, David Rowley, Dilip Kumar, Konstantin Knizhnik,
Pascal Legrand, Rafia Sabih, and me.

Discussion: http://postgr.es/m/CAKJS1f9UXdk6ZYyqbJnjFO9a9hyHKGW7B=ZRh-rxy9qxfPA5Gw@mail.gmail.com
2018-02-21 23:09:27 -05:00
Peter Eisentraut c2ff42c6c1 Error message improvement 2018-02-20 17:58:27 -05:00
Tom Lane 159efe4af4 Fix misbehavior of CTE-used-in-a-subplan during EPQ rechecks.
An updating query that reads a CTE within an InitPlan or SubPlan could get
incorrect results if it updates rows that are concurrently being modified.
This is caused by CteScanNext supposing that nothing inside its recursive
ExecProcNode call could change which read pointer is selected in the CTE's
shared tuplestore.  While that's normally true because of scoping
considerations, it can break down if an EPQ plan tree gets built during the
call, because EvalPlanQualStart builds execution trees for all subplans
whether they're going to be used during the recheck or not.  And it seems
like a pretty shaky assumption anyway, so let's just reselect our own read
pointer here.

Per bug #14870 from Andrei Gorita.  This has been broken since CTEs were
implemented, so back-patch to all supported branches.

Discussion: https://postgr.es/m/20171024155358.1471.82377@wrigleys.postgresql.org
2018-02-19 16:00:31 -05:00
Alvaro Herrera 4108a28d3a Fix expected output 2018-02-19 17:56:43 -03:00
Alvaro Herrera eb7ed3f306 Allow UNIQUE indexes on partitioned tables
If we restrict unique constraints on partitioned tables so that they
must always include the partition key, then our standard approach to
unique indexes already works --- each unique key is forced to exist
within a single partition, so enforcing the unique restriction in each
index individually is enough to have it enforced globally.  Therefore we
can implement unique indexes on partitions by simply removing a few
restrictions (and adding others.)

Discussion: https://postgr.es/m/20171222212921.hi6hg6pem2w2t36z@alvherre.pgsql
Discussion: https://postgr.es/m/20171229230607.3iib6b62fn3uaf47@alvherre.pgsql
Reviewed-by: Simon Riggs, Jesper Pedersen, Peter Eisentraut, Jaime
	Casanova, Amit Langote
2018-02-19 17:40:00 -03:00
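A brief sketch (names invented); note that the unique constraint must include the partition key:

    CREATE TABLE accounts (id int, region text, UNIQUE (region, id))
        PARTITION BY LIST (region);
    CREATE TABLE accounts_eu PARTITION OF accounts FOR VALUES IN ('eu');
    -- Uniqueness of (region, id) is enforced per partition, which suffices
    -- because each key can only ever fall into one partition.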
Alvaro Herrera cef60043dd Mention trigger name in trigger test
This makes it more explicit exactly what is going on, for further
proposed behavior changes.

Discussion: https://postgr.es/m/20180214212624.hm7of76flesodamf@alvherre.pgsql
2018-02-17 13:18:34 -03:00
Peter Eisentraut 2fb1abaeb0 Rename enable_partition_wise_join to enable_partitionwise_join
Discussion: https://www.postgresql.org/message-id/flat/ad24e4f4-6481-066e-e3fb-6ef4a3121882%402ndquadrant.com
2018-02-16 10:33:59 -05:00
Tom Lane f9263006d8 Support CONSTANT/NOT NULL/initial value for plpgsql composite variables.
These features were never implemented previously for composite or record
variables ... not that the documentation admitted it, so there are no doc
updates here.

This also fixes some issues concerning enforcing DOMAIN NOT NULL
constraints against plpgsql variables, although I'm not sure that
that topic is completely dealt with.

I created a new plpgsql test file for these features, and moved the
one relevant existing test case into that file.

Tom Lane, reviewed by Daniel Gustafsson

Discussion: https://postgr.es/m/18362.1514605650@sss.pgh.pa.us
2018-02-13 22:15:08 -05:00
Tom Lane 4b93f57999 Make plpgsql use its DTYPE_REC code paths for composite-type variables.
Formerly, DTYPE_REC was used only for variables declared as "record";
variables of named composite types used DTYPE_ROW, which is faster for
some purposes but much less flexible.  In particular, the ROW code paths
are entirely incapable of dealing with DDL-caused changes to the number
or data types of the columns of a row variable, once a particular plpgsql
function has been parsed for the first time in a session.  And, since the
stored representation of a ROW isn't a tuple, there wasn't any easy way
to deal with variables of domain-over-composite types, since the domain
constraint checking code would expect the value to be checked to be a
tuple.  A lesser, but still real, annoyance is that ROW format cannot
represent a true NULL composite value, only a row of per-field NULL
values, which is not exactly the same thing.

Hence, switch to using DTYPE_REC for all composite-typed variables,
whether "record", named composite type, or domain over named composite
type.  DTYPE_ROW remains but is used only for its native purpose, to
represent a fixed-at-compile-time list of variables, for instance the
targets of an INTO clause.

To accomplish this without taking significant performance losses, introduce
infrastructure that allows storing composite-type variables as "expanded
objects", similar to the "expanded array" infrastructure introduced in
commit 1dc5ebc90.  A composite variable's value is thereby kept (most of
the time) in the form of separate Datums, so that field accesses and
updates are not much more expensive than they were in the ROW format.
This holds the line, more or less, on performance of variables of named
composite types in field-access-intensive microbenchmarks, and makes
variables declared "record" perform much better than before in similar
tests.  In addition, the logic involved with enforcing composite-domain
constraints against updates of individual fields is in the expanded
record infrastructure not plpgsql proper, so that it might be reusable
for other purposes.

In further support of this, introduce a typcache feature for assigning a
unique-within-process identifier to each distinct tuple descriptor of
interest; in particular, DDL alterations on composite types result in a new
identifier for that type.  This allows very cheap detection of the need to
refresh tupdesc-dependent data.  This improves on the "tupDescSeqNo" idea
I had in commit 687f096ea: that assigned identifying sequence numbers to
successive versions of individual composite types, but the numbers were not
unique across different types, nor was there support for assigning numbers
to registered record types.

In passing, allow plpgsql functions to accept as well as return type
"record".  There was no good reason for the old restriction, and it
was out of step with most of the other PLs.

Tom Lane, reviewed by Pavel Stehule

Discussion: https://postgr.es/m/8962.1514399547@sss.pgh.pa.us
2018-02-13 18:52:21 -05:00
Peter Eisentraut 7a32ac8a66 Add procedure support to pg_get_functiondef
This also makes procedures work in psql's \ef and \sf commands.

Reported-by: Pavel Stehule <pavel.stehule@gmail.com>
2018-02-13 15:13:44 -05:00
Peter Eisentraut 7cd56f218d Add tests for pg_get_functiondef 2018-02-13 15:13:44 -05:00
Peter Eisentraut a7b8f0661d Fix typo 2018-02-13 15:13:44 -05:00
Peter Eisentraut b4e2ada347 In LDAP test, restart after pg_hba.conf changes
Instead of issuing a reload after pg_hba.conf changes between test
cases, run a full restart.  With a reload, an error in the new
pg_hba.conf is ignored and the tests will continue to run with the old
settings, invalidating the subsequent test cases.  With a restart, a
faulty pg_hba.conf will lead to the test being aborted, which is what
we'd rather want.
2018-02-13 09:12:45 -05:00
Tom Lane d02d4a6d4f Avoid premature free of pass-by-reference CALL arguments.
Prematurely freeing the EState used to evaluate CALL arguments led, in some
cases, to passing dangling pointers to the procedure.  This was masked in
trivial cases because the argument pointers would point to Const nodes in
the original expression tree, and in some other cases because the result
value would end up in the standalone ExprContext rather than in memory
belonging to the EState --- but that wasn't exactly high quality
programming either, because the standalone ExprContext was never
explicitly freed, breaking assorted API contracts.

In addition, using a separate EState for each argument was just silly.

So let's use just one EState, and one ExprContext, and make the latter
belong to the former rather than be standalone, and clean up the EState
(and hence the ExprContext) post-call.

While at it, improve the function's commentary a bit.

Discussion: https://postgr.es/m/29173.1518282748@sss.pgh.pa.us
2018-02-10 13:37:12 -05:00
Tom Lane 65b1d76785 Fix oversight in CALL argument handling, and do some minor cleanup.
CALL statements cannot support sub-SELECTs in the arguments of the called
procedure, since they just use ExecEvalExpr to evaluate such arguments.
Teach transformSubLink() to reject the case, as it already does for other
contexts in which subqueries are not supported.

In passing, s/EXPR_KIND_CALL/EXPR_KIND_CALL_ARGUMENT/ to make that enum
symbol line up more closely with the phrasing of the error messages it is
associated with.  And fix someone's weak grasp of English grammar in the
preceding EXPR_KIND_PARTITION_EXPRESSION addition.  Also update an
incorrect comment in resolve_unique_index_expr (possibly it was correct
when written, but nowadays transformExpr definitely does reject SRFs here).

Per report from Pavel Stehule --- but this resolves only one of the bugs
he mentions.

Discussion: https://postgr.es/m/CAFj8pRDxOwPPzpA8i+AQeDQFj7bhVw-dR2==rfWZ3zMGkm568Q@mail.gmail.com
2018-02-10 13:05:14 -05:00
Peter Eisentraut b3a101eff0 Refine SSL tests test name reporting
Instead of using the psql/libpq connection string as the displayed test
name and relying on "notes" and source code comments to explain the
tests, give the tests self-explanatory names, like we do elsewhere.

Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
2018-02-08 09:57:10 -05:00
Peter Eisentraut 7c44b75a2a Make new triggers tests more robust
Add explicit collation on the trigger name to avoid locale dependencies.
Also restrict the tables selected, to avoid interference from
concurrently running tests.
2018-02-07 14:57:19 -05:00
Peter Eisentraut 32ff269117 Add more information_schema columns
- table_constraints.enforced
- triggers.action_order
- triggers.action_reference_old_table
- triggers.action_reference_new_table

Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
2018-02-07 10:08:02 -05:00
Tom Lane 0a459cec96 Support all SQL:2011 options for window frame clauses.
This patch adds the ability to use "RANGE offset PRECEDING/FOLLOWING"
frame boundaries in window functions.  We'd punted on that back in the
original patch to add window functions, because it was not clear how to
do it in a reasonably data-type-extensible fashion.  That problem is
resolved here by adding the ability for btree operator classes to provide
an "in_range" support function that defines how to add or subtract the
RANGE offset value.  Factoring it this way also allows the operator class
to avoid overflow problems near the ends of the datatype's range, if it
wishes to expend effort on that.  (In the committed patch, the integer
opclasses handle that issue, but it did not seem worth the trouble to
avoid overflow failures for datetime types.)

The patch includes in_range support for the integer_ops opfamily
(int2/int4/int8) as well as the standard datetime types.  Support for
other numeric types has been requested, but that seems like suitable
material for a follow-on patch.

In addition, the patch adds GROUPS mode which counts the offset in
ORDER-BY peer groups rather than rows, and it adds the frame_exclusion
options specified by SQL:2011.  As far as I can see, we are now fully
up to spec on window framing options.

Existing behaviors remain unchanged, except that I changed the errcode
for a couple of existing error reports to meet the SQL spec's expectation
that negative "offset" values should be reported as SQLSTATE 22013.

Internally and in relevant parts of the documentation, we now consistently
use the terminology "offset PRECEDING/FOLLOWING" rather than "value
PRECEDING/FOLLOWING", since the term "value" is confusingly vague.

Oliver Ford, reviewed and whacked around some by me

Discussion: https://postgr.es/m/CAGMVOdu9sivPAxbNN0X+q19Sfv9edEPv=HibOJhB14TJv_RCQg@mail.gmail.com
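For example, a sketch of the new framing options (table and column names
here are illustrative, not taken from the patch):
    select salary,
           sum(salary) over (order by salary
                             range between 100 preceding and 100 following) as nearby_sum,
           sum(salary) over (order by salary
                             groups between 1 preceding and 1 following
                             exclude current row) as peer_sum
    from empsalary;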
2018-02-07 00:06:56 -05:00
Robert Haas f069c91a57 Fix possible crash in partition-wise join.
The previous code assumed that we'd always succeed in creating
child-joins for a joinrel for which partition-wise join was considered,
but that's not guaranteed, at least in the case where dummy rels
are involved.

Ashutosh Bapat, with some wordsmithing by me.

Discussion: http://postgr.es/m/CAFjFpRf8=uyMYYfeTBjWDMs1tR5t--FgOe2vKZPULxxdYQ4RNw@mail.gmail.com
2018-02-05 17:31:57 -05:00
Tom Lane 3492a0af0b Fix RelationBuildPartitionKey's processing of partition key expressions.
Failure to advance the list pointer while reading partition expressions
from a list results in invoking an input function with inappropriate data,
possibly leading to crashes or, with carefully crafted input, disclosure
of arbitrary backend memory.

Bug discovered independently by Álvaro Herrera and David Rowley.
This patch is by Álvaro but owes something to David's proposed fix.
Back-patch to v10 where the issue was introduced.

Security: CVE-2018-1052
2018-02-05 10:37:30 -05:00
Peter Eisentraut 533c5d8bdd Fix application of identity values in some cases
Investigation of 2d2d06b7e2 revealed that
identity values were not applied in some further cases, including
logical replication subscribers, VALUES RTEs, and ALTER TABLE ... ADD
COLUMN.  To fix all that, apply the identity column expression in
build_column_default() instead of repeating the same logic at each call
site.

For ALTER TABLE ... ADD COLUMN ... IDENTITY, the previous coding
completely ignored that existing rows for the new column should have
values filled in from the identity sequence.  The coding using
build_column_default() fails for this because the sequence ownership
isn't registered until after ALTER TABLE, and we can't do it before
because we don't have the column in the catalog yet.  So we specially
remember in ColumnDef the sequence name that we decided on and build a
custom NextValueExpr using that.

Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
2018-02-02 14:39:10 -05:00
Robert Haas 22757960bb Fix typo: colums -> columns.
Along the way, also fix code indentation.

Alexander Lakhin, reviewed by Michael Paquier

Discussion: http://postgr.es/m/45c44aa7-7cfa-7f3b-83fd-d8300677fdda@gmail.com
2018-01-31 16:45:37 -05:00
Robert Haas 3ccdc6f9a5 Fix list partition constraints for partition keys of array type.
The old code always generated a constraint of the form
col = ANY(ARRAY[val1, val2, ...]), but that's invalid when col is an
array type.  Instead, generate col = val when there's only one value,
col = val1 OR col = val2 OR ... when there are multiple values and
col is of array type, and the old form when there are multiple values
and col is not of an array type.

As a side benefit, this makes constraint exclusion able to prune
a list partition declared to accept a single Boolean value, which
didn't work before.

Amit Langote, reviewed by Etsuro Fujita

Discussion: http://postgr.es/m/97267195-e235-89d1-a41a-c110198dfce9@lab.ntt.co.jp
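A sketch of the Boolean single-value case (object names are illustrative):
    create table boolvals (b bool) partition by list (b);
    create table boolvals_t partition of boolvals for values in ('true');
    create table boolvals_f partition of boolvals for values in ('false');
    -- the single-value partition constraint is now a plain equality rather
    -- than "= ANY(...)", so constraint exclusion should be able to skip
    -- boolvals_f here:
    explain (costs off) select * from boolvals where b = true;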
2018-01-31 15:43:11 -05:00
Andres Freund c068f87723 Improve bit perturbation in TupleHashTableHash.
The changes in b81b5a96f4 did not fully
address the issue, because the bit-mixing of the IV into the final
hash key didn't prevent clustering in the input data from surviving
in the output data.

This didn't cause a lot of problems because of the additional growth
conditions added in d4c62a6b62.  But as we
want to rein those in due to explosive growth in some edge cases, this
needs to be fixed.

Author: Andres Freund
Discussion: https://postgr.es/m/20171127185700.1470.20362@wrigleys.postgresql.org
Backpatch: 10, where simplehash was introduced
2018-01-29 11:24:57 -08:00
Tom Lane 2e668c522e Avoid crash during EvalPlanQual recheck of an inner indexscan.
Commit 09529a70b changed nodeIndexscan.c and nodeIndexonlyscan.c to
postpone initialization of the indexscan proper until the first tuple
fetch.  It overlooked the question of mark/restore behavior, which means
that if some caller attempts to mark the scan before the first tuple fetch,
you get a null pointer dereference.

The only existing user of mark/restore is nodeMergejoin.c, which (somewhat
accidentally) will never attempt to set a mark before the first inner tuple
unless the inner child node is a Material node.  Hence the case can't arise
normally, so it seems sufficient to document the assumption at both ends.
However, during an EvalPlanQual recheck, ExecScanFetch doesn't call
IndexNext but just returns the jammed-in test tuple.  Therefore, if we're
doing a recheck in a plan tree with a mergejoin with inner indexscan,
it's possible to reach ExecIndexMarkPos with iss_ScanDesc still null,
as reported by Guo Xiang Tan in bug #15032.

Really, when there's a test tuple supplied during an EPQ recheck, touching
the index at all is the wrong thing: rather, the behavior of mark/restore
ought to amount to saving and restoring the es_epqScanDone flag.  We can
avoid finding a place to actually save the flag, for the moment, because
given the assumption that no caller will set a mark before fetching a
tuple, es_epqScanDone must always be set by the time we try to mark.
So the actual behavior change required is just to not reach the index
access if a test tuple is supplied.

The set of plan node types that need to consider this issue are those
that support EPQ test tuples (i.e., call ExecScan()) and also support
mark/restore; which is to say, IndexScan, IndexOnlyScan, and perhaps
CustomScan.  It's tempting to try to fix the problem in one place by
teaching ExecMarkPos() itself about EPQ; but ExecMarkPos supports some
plan types that aren't Scans, and also it seems risky to make assumptions
about what a CustomScan wants to do here.  Also, the most likely future
change here is to decide that we do need to support marks placed before
the first tuple, which would require additional work in IndexScan and
IndexOnlyScan in any case.  Hence, fix the EPQ issue in nodeIndexscan.c
and nodeIndexonlyscan.c, accepting the small amount of code duplicated
thereby, and leave it to CustomScan providers to fix this bug if they
have it.

Back-patch to v10 where commit 09529a70b came in.  In earlier branches,
the index_markpos() call is a waste of cycles when EPQ is active, but
no more than that, so it doesn't seem appropriate to back-patch further.

Discussion: https://postgr.es/m/20180126074932.3098.97815@wrigleys.postgresql.org
2018-01-27 13:52:24 -05:00
Tom Lane fb8697b31a Avoid unnecessary use of pg_strcasecmp for already-downcased identifiers.
We have a lot of code in which option names, which from the user's
viewpoint are logically keywords, are passed through the grammar as plain
identifiers, and then matched to string literals during command execution.
This approach avoids making words into lexer keywords unnecessarily.  Some
places matched these strings using plain strcmp, some using pg_strcasecmp.
But the latter should be unnecessary since identifiers would have been
downcased on their way through the parser.  Aside from any efficiency
concerns (probably not a big factor), the lack of consistency in this area
creates a hazard of subtle bugs due to different places coming to different
conclusions about whether two option names are the same or different.
Hence, standardize on using strcmp() to match any option names that are
expected to have been fed through the parser.

This does create a user-visible behavioral change, which is that while
formerly all of these would work:
	alter table foo set (fillfactor = 50);
	alter table foo set (FillFactor = 50);
	alter table foo set ("fillfactor" = 50);
	alter table foo set ("FillFactor" = 50);
now the last case will fail because that double-quoted identifier is
different from the others.  However, none of our documentation says that
you can use a quoted identifier in such contexts at all, and we should
discourage doing so since it would break if we ever decide to parse such
constructs as true lexer keywords rather than poor man's substitutes.
So this shouldn't create a significant compatibility issue for users.

Daniel Gustafsson, reviewed by Michael Paquier, small changes by me

Discussion: https://postgr.es/m/29405B24-564E-476B-98C0-677A29805B84@yesql.se
2018-01-26 18:25:14 -05:00
Alvaro Herrera 05fb5d6619 Ignore partitioned indexes where appropriate
get_relation_info() was too optimistic about opening indexes in
partitioned tables, which would raise errors when any queries were
planned on such tables.  Fix by ignoring any indexes of the partitioned
kind.

CLUSTER (and ALTER TABLE CLUSTER ON) had a similar problem.  Fix by
disallowing these commands in partitioned tables.

Fallout from 8b08f7d482.
2018-01-25 16:12:15 -03:00
Peter Eisentraut 4a3fdbdf76 Allow spaces in connection strings in SSL tests
Connection strings can have items with spaces in them, wrapped in
quotes.  The tests however ran a SELECT '$connstr' upon connection which
broke on the embedded quotes.  Use dollar quotes on the connstr to
protect against this.  This was hit during the development of the macOS
Secure Transport patch, but is independent of it.

Author: Daniel Gustafsson <daniel@yesql.se>
2018-01-25 09:14:24 -05:00
Peter Eisentraut a61116da8b Add tests for record_image_eq and record_image_cmp
record_image_eq was covered a bit by the materialized view code that it
is meant to support, but record_image_cmp was not tested at all.

While we're here, add more tests to record_eq and record_cmp as well,
for symmetry.

Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
2018-01-24 13:23:57 -05:00
Tom Lane bb94ce4d26 Teach reparameterize_path() to handle AppendPaths.
If we're inside a lateral subquery, there may be no unparameterized paths
for a particular child relation of an appendrel, in which case we *must*
be able to create similarly-parameterized paths for each other child
relation, else the planner will fail with "could not devise a query plan
for the given query".  This means that there are situations where we'd
better be able to reparameterize at least one path for each child.

This calls into question the assumption in reparameterize_path() that
it can just punt if it feels like it.  However, the only case that is
known broken right now is where the child is itself an appendrel so that
all its paths are AppendPaths.  (I think possibly I disregarded that in
the original coding on the theory that nested appendrels would get folded
together --- but that only happens *after* reparameterize_path(), so it's
not excused from handling a child AppendPath.)  Given that this code's been
like this since 9.3 when LATERAL was introduced, it seems likely we'd have
heard of other cases by now if there were a larger problem.

Per report from Elvis Pranskevichus.  Back-patch to 9.3.

Discussion: https://postgr.es/m/5981018.zdth1YWmNy@hammer.magicstack.net
2018-01-23 16:50:34 -05:00
Peter Eisentraut f5da5683a8 Add installcheck support to more test suites
Several of the test suites under src/test/ were missing an installcheck
target.
2018-01-23 07:11:38 -05:00
Robert Haas 2f17844104 Allow UPDATE to move rows between partitions.
When an UPDATE causes a row to no longer match the partition
constraint, try to move it to a different partition where it does
match the partition constraint.  In essence, the UPDATE is split into
a DELETE from the old partition and an INSERT into the new one.  This
can lead to surprising behavior in concurrency scenarios because
EvalPlanQual rechecks won't work as they normally did; the known
problems are documented.  (There is a pending patch to improve the
situation further, but it needs more review.)

Amit Khandekar, reviewed and tested by Amit Langote, David Rowley,
Rajkumar Raghuwanshi, Dilip Kumar, Amul Sul, Thomas Munro, Álvaro
Herrera, Amit Kapila, and me.  A few final revisions by me.

Discussion: http://postgr.es/m/CAJ3gD9do9o2ccQ7j7+tSgiE1REY65XRiMb=yJO3u3QhyP8EEPQ@mail.gmail.com
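A sketch of the new behavior (object names are illustrative):
    create table meas (logdate date, reading int) partition by range (logdate);
    create table meas_2017 partition of meas
        for values from ('2017-01-01') to ('2018-01-01');
    create table meas_2018 partition of meas
        for values from ('2018-01-01') to ('2019-01-01');
    insert into meas values ('2017-06-01', 1);
    -- previously this failed with a partition constraint violation; now the
    -- row should be moved (as a DELETE plus INSERT) into meas_2018:
    update meas set logdate = '2018-06-01' where reading = 1;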
2018-01-19 15:33:06 -05:00
Peter Eisentraut 8b9e9644dc Replace AclObjectKind with ObjectType
AclObjectKind was basically just another enumeration for object types,
and we already have a preferred one for that.  It's only used in
aclcheck_error.  By using ObjectType instead, we can also give some more
precise error messages, for example "index" instead of "relation".

Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
2018-01-19 14:01:15 -05:00
Alvaro Herrera 189d0ff588 Fix regression tests for better stability
Per buildfarm
2018-01-19 12:31:34 -03:00
Alvaro Herrera 8b08f7d482 Local partitioned indexes
When CREATE INDEX is run on a partitioned table, create catalog entries
for an index on the partitioned table (which is just a placeholder since
the table proper has no data of its own), and recurse to create actual
indexes on the existing partitions; create them in future partitions
also.

As a convenience gadget, if the new index definition matches some
existing index in partitions, these are picked up and used instead of
creating new ones.  Whichever way these indexes come about, they become
attached to the index on the parent table and are dropped alongside it,
and cannot be dropped in isolation unless they are detached first.

To support pg_dump'ing these indexes, add commands
    CREATE INDEX ON ONLY <table>
(which creates the index on the parent partitioned table, without
recursing) and
    ALTER INDEX ATTACH PARTITION
(which is used after the indexes have been created individually on each
partition, to attach them to the parent index).  These reconstruct prior
database state exactly.

Reviewed-by: (in alphabetical order) Peter Eisentraut, Robert Haas, Amit
	Langote, Jesper Pedersen, Simon Riggs, David Rowley
Discussion: https://postgr.es/m/20171113170646.gzweigyrgg6pwsg4@alvherre.pgsql
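Roughly, the dump/restore sequence these commands enable looks like this
(object names are illustrative):
    create table parted_tab (a int) partition by range (a);
    create table parted_tab_1 partition of parted_tab for values from (1) to (100);
    create index parted_tab_a_idx on only parted_tab (a);    -- parent index only, no recursion
    create index parted_tab_1_a_idx on parted_tab_1 (a);     -- index the partition separately
    alter index parted_tab_a_idx attach partition parted_tab_1_a_idx;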
2018-01-19 11:49:22 -03:00
Peter Eisentraut 958c7ae0b7 Fix typo and improve punctuation 2018-01-18 13:00:49 -05:00
Peter Eisentraut 77216cae47 Add tests for session_replication_role
This was hardly tested at all.  The trigger case was lightly tested by
the logical replication tests, but rules and event triggers were not
tested at all.
2018-01-18 11:24:07 -05:00
Tom Lane 680d540502 Avoid unnecessary failure in SELECT concurrent with ALTER NO INHERIT.
If a query against an inheritance tree runs concurrently with an ALTER
TABLE that's disinheriting one of the tree members, it's possible to get
a "could not find inherited attribute" error because after obtaining lock
on the removed member, make_inh_translation_list sees that its columns
have attinhcount=0 and decides they aren't the columns it's looking for.

An ideal fix, perhaps, would avoid including such a just-removed member
table in the query at all; but there seems no way to accomplish that
without adding expensive catalog rechecks or creating a likelihood of
deadlocks.  Instead, let's just drop the check on attinhcount.  In this
way, a query that's included a just-disinherited child will still
succeed, which is not a completely unreasonable behavior.

This problem has existed for a long time, so back-patch to all supported
branches.  Also add an isolation test verifying related behaviors.

Patch by me; the new isolation test is based on Kyotaro Horiguchi's work.

Discussion: https://postgr.es/m/20170626.174612.23936762.horiguchi.kyotaro@lab.ntt.co.jp
2018-01-12 15:46:37 -05:00
Tom Lane 90947674fc Fix incorrect handling of subquery pullup in the presence of grouping sets.
If we flatten a subquery whose target list contains constants or
expressions, when those output columns are used in GROUPING SET columns,
the planner was capable of doing the wrong thing by merging a pulled-up
expression into the surrounding expression during const-simplification.
Then the late processing that attempts to match subexpressions to grouping
sets would fail to match those subexpressions to grouping sets, with the
effect that they'd not go to null when expected.

To fix, wrap such subquery outputs in PlaceHolderVars, ensuring that
they preserve their separate identity throughout the planner's expression
processing.  This is a bit of a band-aid, because the wrapper defeats
const-simplification even in places where it would be safe to allow.
But a nicer fix would likely be too invasive to back-patch, and the
consequences of the missed optimizations probably aren't large in most
cases.

Back-patch to 9.5 where grouping sets were introduced.

Heikki Linnakangas, with small mods and better test cases by me;
additional review by Andrew Gierth

Discussion: https://postgr.es/m/7dbdcf5c-b5a6-ef89-4958-da212fe10176@iki.fi
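A sketch of the kind of query affected (table name is illustrative):
    select a, x
      from (select a, 1 as x from tab) ss
     group by grouping sets (a, x);
    -- rows produced for the (x) grouping set should show a as NULL, and rows
    -- for the (a) grouping set should show x as NULL, even though x is the
    -- pulled-up constant 1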
2018-01-12 12:24:50 -05:00
Bruce Momjian bdb70c12b3 C comment: fix "the the" mentions in C comments
Reported-by: Christoph Dreis

Discussion: https://postgr.es/m/007e01d3519e$2734ca10$759e5e30$@freenet.de

Author: Christoph Dreis
2018-01-11 21:50:21 -05:00
Peter Eisentraut bbd3363e12 Refactor subscription tests to use PostgresNode's wait_for_catchup
This was nearly the same code.  Extend wait_for_catchup to allow waiting
for pg_current_wal_lsn() and use that in the subscription tests.  Also
change one use in the pg_rewind tests to use this.

Also remove some broken code in wait_for_catchup and
wait_for_slot_catchup.  The error message in case the waiting failed
wanted to show the current LSN, but the way it was written never
worked.  So since nobody ever cared, just remove it.

Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
2018-01-11 13:35:38 -05:00
Peter Eisentraut 5115854170 Add tests for PL/pgSQL returning unnamed portals as refcursor
Existing tests only covered returning explicitly named portals as
refcursor.  The unnamed cursor case was recently broken without a test
failing.
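The newly covered case looks roughly like this (function name is illustrative):
    create function return_unnamed_refcursor() returns refcursor as $$
    declare
        rc refcursor;
    begin
        open rc for select 1 as one;
        return rc;   -- returns a generated portal name such as "<unnamed portal 1>"
    end;
    $$ language plpgsql;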
2018-01-10 16:39:13 -05:00
Andrew Dunstan 11b623dd0a Implement TZH and TZM timestamp format patterns
These are compatible with Oracle and required for the datetime template
language for jsonpath in an upcoming patch.

Nikita Glukhov and Andrew Dunstan, reviewed by Pavel Stehule.
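For example (the exact output depends on the session's TimeZone setting):
    set timezone = 'Asia/Kolkata';
    select to_char(current_timestamp, 'YYYY-MM-DD HH24:MI:SS TZH:TZM');
    -- e.g. "2018-01-09 19:55:05 +05:30"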
2018-01-09 14:25:05 -05:00
Tom Lane 624e440a47 Improve the heuristic for ordering child paths of a parallel append.
Commit ab7271677 introduced code that attempts to order the child
scans of a Parallel Append node in a way that will minimize execution
time, based on total cost and startup cost.  However, it failed to
think hard about what to do when estimated costs are exactly equal;
a case that's particularly likely to occur when comparing on startup
cost.  In such a case the ordering of the child paths would be left
to the whims of qsort, an algorithm that isn't even stable.

We can improve matters by applying the rule used elsewhere in the
planner: if total costs are equal, sort on startup cost, and
vice versa.  When both cost estimates are exactly equal, rather
than letting qsort do something unpredictable, sort based on the
child paths' relids, which should typically result in sorting in
inheritance order.  (The latter provision requires inventing a
qsort-style comparator for bitmapsets, but maybe we'll have use
for that for other reasons in future.)

This results in a few plan changes in the select_parallel test,
but those all look more reasonable than before, when the actual
underlying cost numbers are taken into account.

Discussion: https://postgr.es/m/4944.1515446989@sss.pgh.pa.us
2018-01-09 13:07:52 -05:00
Peter Eisentraut c3d41ccf59 Fix ssl tests for when tls-server-end-point is not supported
Add a function to TestLib that allows us to check pg_config.h and then
decide the expected test outcome based on that.

Author: Michael Paquier <michael.paquier@gmail.com>
2018-01-09 12:28:49 -05:00
Simon Riggs 6271fceb8a Add TIMELINE to backup_label file
Allows new test to confirm timelines match

Author: Michael Paquier
Reviewed-by: David Steele
2018-01-06 12:24:19 +00:00
Peter Eisentraut d3fb72ea6d Implement channel binding tls-server-end-point for SCRAM
This adds a second standard channel binding type for SCRAM.  It is
mainly intended for third-party clients that cannot implement
tls-unique, for example JDBC.

Author: Michael Paquier <michael.paquier@gmail.com>
2018-01-04 15:29:50 -05:00
Tom Lane 934c7986f4 Tweak parallel hash join test case in hopes of improving stability.
This seems to make things better on gaur, let's see what the rest
of the buildfarm thinks.

Thomas Munro

Discussion: https://postgr.es/m/CAEepm=1uuT8iJxMEsR=jL+3zEi87DB2v0+0H9o_rUXXCZPZT3A@mail.gmail.com
2018-01-04 01:06:58 -05:00
Alvaro Herrera 6c8be5962a Revert "Fix isolation test to be less timing-dependent"
This reverts commit 2268e6afd5.  It turned out that inconsistency in
the report is still possible, so go back to the simpler formulation of
the test and instead add an alternate expected output.

Discussion: https://postgr.es/m/20180103193728.ysqpcp2xjnqpiep7@alvherre.pgsql
2018-01-03 18:22:41 -03:00
Tom Lane 3decd150a2 Teach eval_const_expressions() to handle some more cases.
Add some infrastructure (mostly macros) to make it easier to write
typical cases for constant-expression simplification.  Add simplification
processing for ArrayRef, RowExpr, and ScalarArrayOpExpr node types,
which formerly went unsimplified even if all their inputs were constants.
Also teach it to simplify FieldSelect from a composite constant.
Make use of the new infrastructure to reduce the amount of code needed
for the existing ArrayExpr and ArrayCoerceExpr cases.

One existing test case changes output as a result of the fact that
RowExpr can now be folded to a constant.  All the new code is exercised
by existing test cases according to gcov, so I feel no need to add
additional tests.

Tom Lane, reviewed by Dmitry Dolgov

Discussion: https://postgr.es/m/3be3b82c-e29c-b674-2163-bf47d98817b1@iki.fi
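One way to observe the effect (a sketch, not taken from the regression tests):
    explain (verbose, costs off) select 2 = any (array[1, 2, 3]);
    -- the target list should now show the pre-folded constant "true" rather
    -- than a ScalarArrayOpExpr to be evaluated at run time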
2018-01-03 12:35:09 -05:00
Peter Eisentraut 35c0754fad Allow ldaps when using ldap authentication
While ldaptls=1 provides an RFC 4513 conforming way to do LDAP
authentication with TLS encryption, there was an earlier de facto
standard way to do LDAP over SSL called LDAPS.  Even though it's not
enshrined in a standard, it's still widely used and sometimes required
by organizations' network policies.  There seems to be no reason not to
support it when available in the client library.  Therefore, add support
when using OpenLDAP 2.4+ or Windows.  It can be configured with
ldapscheme=ldaps or ldapurl=ldaps://...

Add tests for both ways of requesting LDAPS and a test for the
pre-existing ldaptls=1.  Modify the 001_auth.pl test for "diagnostic
messages", which was previously relying on the server rejecting
ldaptls=1.

Author: Thomas Munro
Reviewed-By: Peter Eisentraut
Discussion: https://postgr.es/m/CAEepm=1s+pA-LZUjQ-9GQz0Z4rX_eK=DFXAF1nBQ+ROPimuOYQ@mail.gmail.com
2018-01-03 10:11:26 -05:00
Alvaro Herrera 2268e6afd5 Fix isolation test to be less timing-dependent
I did this by adding another locking process, which makes the other two
wait.  This way the output should be stable enough.

Per buildfarm and Andres Freund
Discussion: https://postgr.es/m/20180103034445.t3utrtrnrevfsghm@alap3.anarazel.de
2018-01-03 12:08:16 -03:00
Bruce Momjian 9d4649ca49 Update copyright for 2018
Backpatch-through: certain files through 9.3
2018-01-02 23:30:12 -05:00
Alvaro Herrera 54eff5311d Fix deadlock hazard in CREATE INDEX CONCURRENTLY
Multiple sessions doing CREATE INDEX CONCURRENTLY simultaneously are
supposed to be able to work in parallel, as evidenced by fixes in commit
c3d09b3bd2 specifically to support this case.  In reality, one of the
sessions would be aborted by a mysterious "deadlock detected" error.

Jeff Janes diagnosed that this is because of leftover snapshots used for
system catalog scans -- this was broken by 8aa3e47510 keeping track of
(registering) the catalog snapshot.  To fix the deadlocks, it's enough
to de-register that snapshot prior to waiting.

Backpatch to 9.4, which introduced MVCC catalog scans.

Include an isolationtester spec that 8 out of 10 times reproduces the
deadlock with the unpatched code for me (Álvaro).

Author: Jeff Janes
Diagnosed-by: Jeff Janes
Reported-by: Jeremy Finzel
Discussion: https://postgr.es/m/CAMa1XUhHjCv8Qkx0WOr1Mpm_R4qxN26EibwCrj0Oor2YBUFUTg%40mail.gmail.com
2018-01-02 19:16:16 -03:00
Noah Misch 6078770c1a In tests, await an LSN no later than the recovery target.
Otherwise, the test fails with "Timed out while waiting for standby to
catch up".  This happened rarely, perhaps only when autovacuum wrote WAL
between our choosing the recovery target and choosing the LSN to await.
Commit b26f7fa6ae fixed one case of this.
Fix two more.  Back-patch to 9.6, which introduced the affected test.

Discussion: https://postgr.es/m/20180101055227.GA2952815@rfd.leadboat.com
2017-12-31 21:58:29 -08:00
Tom Lane dd2243f2ad Improve regression tests' code coverage for plpgsql control structures.
I noticed that our code coverage report showed considerable deficiency
in test coverage for PL/pgSQL control statements.  Notably, both
exec_stmt_block and most of the loop control statements had very poor
coverage of handling of return/exit/continue result codes from their
child statements; and exec_stmt_fori was seriously lacking in feature
coverage, having no test that exercised its BY or REVERSE features,
nor verification that its overflow defenses work.

Now that we have some infrastructure for plpgsql-specific test scripts,
the natural thing to do is make a new script rather than further extend
plpgsql.sql.  So I created a new script plpgsql_control.sql with the
charter to test plpgsql control structures, and moved a few existing
tests there because they fell entirely under that charter.  I then
added new test cases that exercise the bits of code complained of above.

Of the five kinds of loop statements, only exec_stmt_while's result code
handling is fully exercised by these tests.  That would be a deficiency
as things stand, but a follow-on commit will merge the loop statements'
result code handling into one implementation.  So testing each usage of
that implementation separately seems redundant.

In passing, also add a couple test cases to plpgsql.sql to more fully
exercise plpgsql's code related to expanded arrays --- I had thought
that area was sufficiently covered already, but the coverage report
showed a couple of un-executed code paths.

Discussion: https://postgr.es/m/26314.1514670401@sss.pgh.pa.us
2017-12-31 17:04:11 -05:00
Teodor Sigaev ff963b393c Add polygon opclass for SP-GiST
The polygon opclass uses the compress-method feature of SP-GiST added earlier.
For now it is the only operator class that uses this feature.  SP-GiST actually
indexes the bounding boxes of the input polygons, so some of the supported
operations are lossy.  The opclass reuses most methods of the corresponding
SP-GiST opclass over boxes and treats each bounding box as a point in 4D space.

Bump catalog version.

Authors: Nikita Glukhov, Alexander Korotkov, with minor editing by me
Reviewed-By: all authors + Darafei Praliaskouski
Discussion: https://www.postgresql.org/message-id/flat/54907069.1030506@sigaev.ru
2017-12-25 18:59:38 +03:00
Tom Lane c4c2885cbb Fix UNION/INTERSECT/EXCEPT over no columns.
Since 9.4, we've allowed the syntax "select union select" and variants
of that.  However, the planner wasn't expecting a no-column set operation
and ended up treating the set operation as if it were UNION ALL.

Turns out it's trivial to fix in v10 and later; we just need to be careful
about not generating a Sort node with no sort keys.  However, since a weird
corner case like this is never going to be exercised by developers, we'd
better have thorough regression tests if we want to consider it supported.

Per report from Victor Yegorov.

Discussion: https://postgr.es/m/CAGnEbojGJrRSOgJwNGM7JSJZpVAf8xXcVPbVrGdhbVEHZ-BUMw@mail.gmail.com
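For reference, the syntax in question:
    select union select;        -- should now return one zero-column row (de-duplicated)
    select union all select;    -- two zero-column rows
    select except select;       -- zero rows
Before this fix, the plain UNION form behaved like UNION ALL.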
2017-12-22 12:08:06 -05:00
Tom Lane c98c35cd08 Avoid putting build-location-dependent strings into generated files.
Various Perl scripts we use to generate files were in the habit of
printing things like "generated by $0" into their output files.
That looks like a fine idea at first glance, but it results in
non-reproducible output, because in VPATH builds $0 won't be just
the name of the script file, but a full path for it.  We'd prefer
that you get identical results whether using VPATH or not, so this
is a bad thing.

Some of these places also printed their input file name(s), causing
an additional hazard of the same type.

Hence, establish a policy that thou shalt not print $0, nor input file
pathnames, into output files (they're still allowed in error messages,
though).  Instead just write the script name verbatim.  While we are at
it, we can make these annotations more useful by giving the script's
full relative path name within the PG source tree, eg instead of
Gen_fmgrtab.pl let's print src/backend/utils/Gen_fmgrtab.pl.

Not all of the changes made here actually affect any files shipped
in finished tarballs today, but it seems best to apply the policy
everyplace so that nobody copies unsafe code into places where it
could matter.

Christoph Berg and Tom Lane

Discussion: https://postgr.es/m/20171215102223.GB31812@msg.df7cb.de
2017-12-21 10:57:06 -05:00
Andres Freund 1804284042 Add parallel-aware hash joins.
Introduce parallel-aware hash joins that appear in EXPLAIN plans as Parallel
Hash Join with Parallel Hash.  While hash joins could already appear in
parallel queries, they were previously always parallel-oblivious and had a
partial subplan only on the outer side, meaning that the work of the inner
subplan was duplicated in every worker.

After this commit, the planner will consider using a partial subplan on the
inner side too, using the Parallel Hash node to divide the work over the
available CPU cores and combine its results in shared memory.  If the join
needs to be split into multiple batches in order to respect work_mem, then
workers process different batches as much as possible and then work together
on the remaining batches.

The advantages of a parallel-aware hash join over a parallel-oblivious hash
join used in a parallel query are that it:

 * avoids wasting memory on duplicated hash tables
 * avoids wasting disk space on duplicated batch files
 * divides the work of building the hash table over the CPUs

One disadvantage is that there is some communication between the participating
CPUs which might outweigh the benefits of parallelism in the case of small
hash tables.  This is avoided by the planner's existing reluctance to supply
partial plans for small scans, but it may be necessary to estimate
synchronization costs in future if that situation changes.  Another is that
outer batch 0 must be written to disk if multiple batches are required.

A potential future advantage of parallel-aware hash joins is that right and
full outer joins could be supported, since there is a single set of matched
bits for each hashtable, but that is not yet implemented.

A new GUC enable_parallel_hash is defined to control the feature, defaulting
to on.

Author: Thomas Munro
Reviewed-By: Andres Freund, Robert Haas
Tested-By: Rafia Sabih, Prabhat Sahu
Discussion:
    https://postgr.es/m/CAEepm=2W=cOkiZxcg6qiFQP-dHUe09aqTrEMM7yJDrHMhDv_RA@mail.gmail.com
    https://postgr.es/m/CAEepm=37HKyJ4U6XOLi=JgfSHM3o6B-GaeO-6hkOmneTDkH+Uw@mail.gmail.com
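A sketch of how this can surface in EXPLAIN output (table names and the
chosen plan are illustrative):
    set enable_parallel_hash = on;
    explain (costs off)
      select count(*) from big_a join big_b using (id);
    -- when a parallel plan is chosen, the join may now appear as
    --   Parallel Hash Join
    --     ->  Parallel Seq Scan on big_a
    --     ->  Parallel Hash
    --           ->  Parallel Seq Scan on big_b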
2017-12-21 00:43:41 -08:00
Robert Haas 7d3583ad9a Test instrumentation of Hash nodes with parallel query.
Commit 8526bcb2df fixed bugs related
to both Sort and Hash, but only added a test case for Sort.  This
adds a test case for Hash to match.

Thomas Munro

Discussion: http://postgr.es/m/CAEepm=2-LRnfwUBZDqQt+XAcd0af_ykNyyVvP3h1uB1AQ=e-eA@mail.gmail.com
2017-12-19 15:29:08 -05:00
Robert Haas 8526bcb2df Try again to fix accumulation of parallel worker instrumentation.
When a Gather or Gather Merge node is started and stopped multiple
times, accumulate instrumentation data only once, at the end, instead
of after each execution, to avoid recording inflated totals.

Commit 778e78ae9f, the previous attempt
at a fix, instead reset the state after every execution, which worked
for the general instrumentation data but had problems for the additional
instrumentation specific to Sort and Hash nodes.

Report by hubert depesz lubaczewski.  Analysis and fix by Amit Kapila,
following a design proposal from Thomas Munro, with a comment tweak
by me.

Discussion: http://postgr.es/m/20171127175631.GA405@depesz.com
2017-12-19 12:21:56 -05:00
Peter Eisentraut 4bbf110d2f Add libpq connection parameter "scram_channel_binding"
This parameter can be used to enforce the channel binding type used
during a SCRAM authentication.  This can be useful to check code paths
where an invalid channel binding type is used by a client and will be
even more useful to allow testing other channel binding types when they
are added.

The default value is tls-unique, which is what RFC 5802 specifies.
Clients can optionally specify an empty value, which has as effect to
not use channel binding and use SCRAM-SHA-256 as chosen SASL mechanism.

More tests for SCRAM and channel binding are added to the SSL test
suite.

Author: Michael Paquier <michael.paquier@gmail.com>
2017-12-19 10:12:36 -05:00
Noah Misch c757a3da0a Avoid and detect SIGPIPE race in TAP tests.
Don't write to stdin of a psql process that could have already exited
with an authentication failure.  Buildfarm members crake and mandrill
have failed once by doing so.  Ignore SIGPIPE in all TAP tests.
Back-patch to v10, where these tests were introduced.

Reviewed by Michael Paquier.

Discussion: https://postgr.es/m/20171209210203.GC3362632@rfd.leadboat.com
2017-12-16 10:03:35 -08:00
Andres Freund 9c2f0a6c3c Fix pruning of locked and updated tuples.
Previously it was possible that a tuple was not pruned during vacuum,
even though its update xmax (i.e. the updating xid in a multixact with
both key share lockers and an updater) was below the cutoff horizon.

As the freezing code assumed, rightly so, that that's not supposed to
happen, xmax would be preserved (as a member of a new multixact or
xmax directly). That causes two problems: For one the tuple is below
the xmin horizon, which can cause problems if the clog is truncated or
once there's an xid wraparound.  The bigger problem is that this will
break HOT chains, which in turn can lead to two breakages: first,
failing index lookups, which can e.g. lead to constraints being
violated; second, future hot prunes / vacuums can end up making
invisible tuples visible again.  There are other harmful scenarios.

Fix the problem by recognizing that tuples can be DEAD instead of
RECENTLY_DEAD, even if the multixactid has alive members, if the
update_xid is below the xmin horizon. That's safe because newer
versions of the tuple will contain the locking xids.

A followup commit will harden the code somewhat against future similar
bugs and already corrupted data.

Author: Andres Freund, with changes by Alvaro Herrera
Reported-By: Daniel Wood
Analyzed-By: Andres Freund, Alvaro Herrera, Robert Haas, Peter
   Geoghegan, Daniel Wood, Yi Wen Wong, Michael Paquier
Reviewed-By: Alvaro Herrera, Robert Haas, Michael Paquier
Discussion:
    https://postgr.es/m/E5711E62-8FDF-4DCA-A888-C200BF6B5742@amazon.com
    https://postgr.es/m/20171102112019.33wb7g5wp4zpjelu@alap3.anarazel.de
Backpatch: 9.3-
2017-12-14 18:20:47 -08:00
Robert Haas 1d6fb35ad6 Revert "Fix accumulation of parallel worker instrumentation."
This reverts commit 2c09a5c12a.  Per
further discussion, that doesn't seem to be the best possible fix.

Discussion: http://postgr.es/m/CAA4eK1LW2aFKzY3=vwvc=t-juzPPVWP2uT1bpx_MeyEqnM+p8g@mail.gmail.com
2017-12-13 15:19:28 -05:00
Peter Eisentraut 632b03da31 Start a separate test suite for plpgsql
The plpgsql.sql test file in the main regression tests is now by far the
largest after numeric_big, making editing and managing the test cases
very cumbersome.  The other PLs have their own test suites split up into
smaller files by topic.  It would be nice to have that for plpgsql as
well.  So, to get that started, set up test infrastructure in
src/pl/plpgsql/src/ and split out the recently added procedure test
cases into a new file there.  That file now mirrors the test cases added
to the other PLs, making managing those matching tests a bit easier too.

msvc build system changes with help from Michael Paquier
2017-12-13 11:02:29 -05:00
Peter Eisentraut 3d8874224f Fix crash when using CALL on an aggregate
Author: Ashutosh Bapat <ashutosh.bapat@enterprisedb.com>
Reported-by: Rushabh Lathia <rushabh.lathia@gmail.com>
2017-12-13 10:37:48 -05:00
Tom Lane 9edc97b712 Stabilize output of new regression test case.
The test added by commit 390d58135 turns out to have different output
in CLOBBER_CACHE_ALWAYS builds: there's an extra CONTEXT line in the
error message as a result of detecting the error at a different place.
Possibly we should do something to make that more consistent.  But as
a stopgap measure to make the buildfarm green again, adjust the test
to suppress CONTEXT entirely.  We can revert this if we do something
in the backend to eliminate the inconsistency.

Discussion: https://postgr.es/m/31545.1512924904@sss.pgh.pa.us
2017-12-10 12:44:03 -05:00
Tom Lane 390d58135b Fix plpgsql to reinitialize record variables at block re-entry.
If one exits and re-enters a DECLARE ... BEGIN ... END block within a
single execution of a plpgsql function, perhaps due to a surrounding loop,
the declared variables are supposed to get re-initialized to null (or
whatever their initializer is).  But this failed to happen for variables
of type "record", because while exec_stmt_block() expected such variables
to be included in the block's initvarnos list, plpgsql_add_initdatums()
only adds DTYPE_VAR variables to that list.  This bug appears to have
been there since the aboriginal addition of plpgsql to our tree.

Fix by teaching plpgsql_add_initdatums() to include DTYPE_REC variables
as well.  (We don't need to consider other DTYPEs because they don't
represent separately-stored values.)  I failed to resist the temptation
to make some nearby cosmetic adjustments, too.

No back-patch, because there have not been field complaints, and it
seems possible that somewhere out there someone has code depending
on the incorrect behavior.  In any case this change would have no
impact on correctly-written code.

Discussion: https://postgr.es/m/22994.1512800671@sss.pgh.pa.us
2017-12-09 12:03:04 -05:00
Magnus Hagander ce1468d02b Fix regression test output
Missed this in the last commit.
2017-12-09 13:45:06 +01:00
Peter Eisentraut 005ac298b1 Prohibit identity columns on typed tables and partitions
Those cases currently crash and supporting them is more work then
originally thought, so we'll just prohibit these scenarios for now.

Author: Michael Paquier <michael.paquier@gmail.com>
Reviewed-by: Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>
Reported-by: Мансур Галиев <gomer94@yandex.ru>
Bug: #14866
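A sketch of a now-prohibited case (type and table names are illustrative;
the corresponding column option in CREATE TABLE ... PARTITION OF is
rejected similarly):
    create type person_t as (id int, name text);
    create table persons of person_t
        (id with options generated always as identity);
    -- now fails with an error rather than crashing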
2017-12-08 12:13:04 -05:00
Peter Eisentraut 2d2d06b7e2 Apply identity sequence values on COPY
A COPY into a table should apply identity sequence values just like it
does for ordinary defaults.  This was previously forgotten, leading to
null values being inserted, which in turn would fail because identity
columns have not-null constraints.

Author: Michael Paquier <michael.paquier@gmail.com>
Reported-by: Steven Winfield <steven.winfield@cantabcapital.com>
Bug: #14952
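A minimal sketch of the fixed behavior (object names and the data file
path are illustrative):
    create table items (id int generated always as identity, name text);
    copy items (name) from '/tmp/items.txt';
    -- id should now be filled from the identity sequence for each copied row,
    -- instead of being NULL and violating the column's not-null constraint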
2017-12-08 09:18:18 -05:00
Robert Haas 0a3edbb330 Speed up isolation test for concurrent VACUUM/ANALYZE behavior.
Per Tom Lane, the old test sometimes times out with CLOBBER_CACHE_ALWAYS.

Nathan Bossart

Discussion: http://postgr.es/m/28614.1512583046@sss.pgh.pa.us
2017-12-07 11:11:07 -05:00
Tom Lane 979a36c389 Adjust regression test cases added by commit ab7271677.
I suppose it is a copy-and-paste error that this test doesn't actually
test the "Parallel Append with both partial and non-partial subplans"
case (EXPLAIN alone surely doesn't qualify as a test of executor
behavior).  Fix that.

Also, add cosmetic aliases to make it possible to tell apart these
otherwise-identical test cases in log_statement output.
2017-12-05 22:40:43 -05:00
Robert Haas ab72716778 Support Parallel Append plan nodes.
When we create an Append node, we can spread out the workers over the
subplans instead of piling on to each subplan one at a time, which
should typically be a bit more efficient, both because the startup
cost of any plan executed entirely by one worker is paid only once and
also because of reduced contention.  We can also construct Append
plans using a mix of partial and non-partial subplans, which may allow
for parallelism in places that otherwise couldn't support it.
Unfortunately, this patch doesn't handle the important case of
parallelizing UNION ALL by running each branch in a separate worker;
the executor infrastructure is added here, but more planner work is
needed.

Amit Khandekar, Robert Haas, Amul Sul, reviewed and tested by
Ashutosh Bapat, Amit Langote, Rafia Sabih, Amit Kapila, and
Rajkumar Raghuwanshi.

Discussion: http://postgr.es/m/CAJ3gD9dy0K_E8r727heqXoBmWZ83HwLFwdcaSSmBQ1+S+vRuUQ@mail.gmail.com
2017-12-05 17:28:39 -05:00
Robert Haas 2c09a5c12a Fix accumulation of parallel worker instrumentation.
When a Gather or Gather Merge node is started and stopped multiple
times, the old code wouldn't reset the shared state between executions,
potentially resulting in dramatically inflated instrumentation data
for nodes beneath it.  (The per-worker instrumentation ended up OK,
I think, but the overall totals were inflated.)

Report by hubert depesz lubaczewski.  Analysis and fix by Amit Kapila,
reviewed and tweaked a bit by me.

Discussion: http://postgr.es/m/20171127175631.GA405@depesz.com
2017-12-05 14:35:33 -05:00
Andres Freund 5bcf389ecf Fix EXPLAIN ANALYZE of hash join when the leader doesn't participate.
If a hash join appears in a parallel query, there may be no hash table
available for explain.c to inspect even though a hash table may have
been built in other processes.  This could happen either because
parallel_leader_participation was set to off or because the leader
happened to hit the end of the outer relation immediately (even though
the complete relation is not empty) and decided not to build the hash
table.

Commit bf11e7ee introduced a way for workers to exchange
instrumentation via the DSM segment for Sort nodes even though they
are not parallel-aware.  This commit does the same for Hash nodes, so
that explain.c has a way to find instrumentation data from an
arbitrary participant that actually built the hash table.

Author: Thomas Munro
Reviewed-By: Andres Freund
Discussion: https://postgr.es/m/CAEepm%3D3DUQC2-z252N55eOcZBer6DPdM%3DFzrxH9dZc5vYLsjaA%40mail.gmail.com
2017-12-05 10:55:56 -08:00
Robert Haas ab6eaee884 When VACUUM or ANALYZE skips a concurrently dropped table, log it.
Hopefully, the additional logging will help avoid confusion that
could otherwise result.

Nathan Bossart, reviewed by Michael Paquier, Fabrízio Mello, and me
2017-12-04 15:25:55 -05:00
Robert Haas 87c37e3291 Re-allow INSERT .. ON CONFLICT DO NOTHING on partitioned tables.
Commit 8355a011a0 was reverted in
f05230752d, but this attempt is
hopefully better-considered: we now pass the correct value to
ExecOpenIndices, which should avoid the crash that we hit before.

Amit Langote, reviewed by Simon Riggs and by me.  Some final
editing by me.

Discussion: http://postgr.es/m/7ff1e8ec-dc39-96b1-7f47-ff5965dceeac@lab.ntt.co.jp
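A sketch of the re-enabled case (object names are illustrative):
    create table evts (id int, payload text) partition by list (id);
    create table evts_1 partition of evts for values in (1);
    insert into evts values (1, 'a') on conflict do nothing;
    -- the ON CONFLICT DO NOTHING clause is accepted on the partitioned table
    -- again, rather than being rejected outright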
2017-12-01 12:53:21 -05:00
Robert Haas 59c8078744 Fix uninitialized memory reference.
Without this, when partdesc->nparts == 0, we end up calling
ExecBuildSlotPartitionKeyDescription without initializing values
and isnull.

Reported by Coverity via Michael Paquier.  Patch by Michael Paquier,
reviewed and revised by Amit Langote.

Discussion: http://postgr.es/m/CAB7nPqQ3mwkdMoPY-ocgTpPnjd8TKOadMxdTtMLvEzF8480Zfg@mail.gmail.com
2017-12-01 10:05:00 -05:00
Peter Eisentraut e4128ee767 SQL procedures
This adds a new object type "procedure" that is similar to a function
but does not have a return type and is invoked by the new CALL statement
instead of SELECT or similar.  This implementation is aligned with the
SQL standard and compatible with or similar to other SQL implementations.

This commit adds new commands CALL, CREATE/ALTER/DROP PROCEDURE, as well
as ALTER/DROP ROUTINE that can refer to either a function or a
procedure (or an aggregate function, as an extension to SQL).  There is
also support for procedures in various utility commands such as COMMENT
and GRANT, as well as support in pg_dump and psql.  Support for defining
procedures is available in all the languages supplied by the core
distribution.

While this commit is mainly syntax sugar around existing functionality,
future features will rely on having procedures as a separate object
type.

Reviewed-by: Andrew Dunstan <andrew.dunstan@2ndquadrant.com>
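A minimal example (table name is illustrative):
    create table audit_log (note text);
    create procedure log_note(n text)
    language sql
    as $$
        insert into audit_log values (n);
    $$;
    call log_note('hello');
    drop routine log_note(text);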
2017-11-30 11:03:20 -05:00
Noah Misch e21a556e13 Fix non-GNU makefiles for AIX make.
Invoking the Makefile without an explicit target was building every
possible target instead of just the "all" target.  Back-patch to 9.3
(all supported versions).
2017-11-30 00:57:22 -08:00
Tom Lane 7ca25b7de6 Fix neqjoinsel's behavior for semi/anti join cases.
Previously, this function estimated the selectivity as 1 minus eqjoinsel()
for the negator equality operator, regardless of join type (I think there
was an expectation that eqjoinsel would handle the join type).  But
actually this is completely wrong for semijoin cases: the fraction of the
LHS that has a non-matching row is not one minus the fraction of the LHS
that has a matching row.  In reality a semijoin with <> will nearly always
succeed: it can only fail when the RHS is empty, or it contains a single
distinct value that is equal to the particular LHS value, or the LHS value
is null.  The only one of those things we should have much confidence in
estimating is the fraction of LHS values that are null, so let's just take
the selectivity as 1 minus outer nullfrac.

Per coding convention, antijoin should be estimated the same as semijoin.

Arguably this is a bug fix, but in view of the lack of field complaints
and the risk of destabilizing plans, no back-patch.

Thomas Munro, reviewed by Ashutosh Bapat

Discussion: https://postgr.es/m/CAEepm=270ze2hVxWkJw-5eKzc3AB4C9KpH3L2kih75R5pdSogg@mail.gmail.com
2017-11-29 22:00:37 -05:00
Andres Freund fa330f9adf Add some regression tests that exercise hash join code.
Although hash joins are already tested by many queries, these tests
systematically cover the four different states we can reach as part of
the strategy for respecting work_mem.

Author: Thomas Munro
Reviewed-By: Andres Freund
2017-11-29 16:06:50 -08:00
Robert Haas 8d4e70a63b Add extensive tests for partition pruning.
Currently, partition pruning happens via constraint exclusion, but
there are pending patches to replace that with a different and
hopefully faster mechanism.  To be sure that we don't change behavior
without realizing it, add extensive test coverage.

Note that not all of these behaviors are optimal; in some cases,
partitions are not pruned even though it would be safe to do so.
These tests therefore serve to memorialize the current state rather
than the ideal state.  Patches that improve things can update the test
results as appropriate.

Amit Langote, adjusted by me.  Review and testing of the larger patch
set of which this is a part by Ashutosh Bapat, David Rowley, Dilip
Kumar, Jesper Pedersen, Rajkumar Raghuwanshi, Beena Emerson, Amul Sul,
and Kyotaro Horiguchi.

Discussion: http://postgr.es/m/098b9c71-1915-1a2a-8d52-1a7a50ce79e8@lab.ntt.co.jp
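The flavor of the new tests, roughly (object names are illustrative):
    create table lp (a int) partition by list (a);
    create table lp1 partition of lp for values in (1);
    create table lp2 partition of lp for values in (2);
    explain (costs off) select * from lp where a = 1;
    -- only lp1 should appear in the plan; lp2 is excluded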
2017-11-29 15:25:29 -05:00
Robert Haas eaedf0df71 Update typedefs.list and re-run pgindent
Discussion: http://postgr.es/m/CA+TgmoaA9=1RWKtBWpDaj+sF3Stgc8sHgf5z=KGtbjwPLQVDMA@mail.gmail.com
2017-11-29 09:24:24 -05:00
Robert Haas 2d7950f222 If a range-partitioned table has no default partition, reject null keys.
Commit 4e5fe9ad19 introduced this
problem.  Also add a test so it doesn't get broken again.

Report by Rushabh Lathia.  Fix by Amit Langote.  Reviewed by Rushabh
Lathia and Amul Sul.  Tweaked by me.

Discussion: http://postgr.es/m/CAGPqQf0Y1iJyk4QJBdMf=pS9i6Q0JUMM_h5-qkR3OMJ-e04PyA@mail.gmail.com
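A sketch of the now-rejected case (object names are illustrative):
    create table rparted (a int) partition by range (a);
    create table rparted_1 partition of rparted for values from (1) to (10);
    insert into rparted values (null);
    -- with no default partition, the NULL key is now rejected cleanly with a
    -- "no partition of relation ... found for row" style error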
2017-11-28 14:11:16 -05:00
Robert Haas 7b88d63a91 Add null test to partition constraint for default range partitions.
Non-default range partitions have a constraint which include null
tests, and both default and non-default list partitions also have a
constraint which includes null tests, but for some reason this was
missed for default range partitions.  This could cause the partition
constraint to evaluate to false for rows that were (correctly) routed
to that partition by insert tuple routing, which could in turn
cause constraint exclusion to prune the default partition in cases
where it should not.

Amit Langote, reviewed by Kyotaro Horiguchi

Discussion: http://postgr.es/m/ba7aaeb1-4399-220e-70b4-62eade1522d0@lab.ntt.co.jp
2017-11-28 10:51:01 -05:00
Tom Lane 0772c152b9 Mark some more functions as pg_attribute_noreturn().
Doing this suppresses Coverity warnings and might allow improved
code in some cases.  The prospects of that are not so bright as
to warrant back-patching, though.

Michael Paquier, per Coverity
2017-11-27 20:56:46 -05:00
Tom Lane 9b63c13f0a Repair failure with SubPlans in multi-row VALUES lists.
When nodeValuesscan.c was written, it was impossible to have a SubPlan in
VALUES --- any sub-SELECT there would have to be uncorrelated and thereby
would produce an InitPlan instead.  We therefore took a shortcut in the
logic that throws away a ValuesScan's per-row expression evaluation data
structures.  This was broken by the introduction of LATERAL however; a
sub-SELECT containing a lateral reference produces a correlated SubPlan.

The cleanest fix for this would be to give up the optimization of
discarding the expression eval state.  But that still seems pretty
unappetizing for long VALUES lists.  It seems to work to just prevent
the subexpressions from hooking into the ValuesScan node's subPlan
list, so let's do that and see how well it works.  (If this breaks,
due to additional connections between the subexpressions and the outer
query structures, we might consider compromises like throwing away data
only for VALUES rows not containing SubPlans.)

Per bug #14924 from Christian Duta.  Back-patch to 9.3 where LATERAL
was introduced.

Discussion: https://postgr.es/m/20171124120836.1463.5310@wrigleys.postgresql.org
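The failing shape, roughly (a sketch based on the description above):
    select *
      from (values (1), (2)) v(x),
           lateral (values ((select v.x)), (0)) u(y);
    -- the correlated sub-SELECT inside the multi-row VALUES list produces a
    -- SubPlan, the case that previously failed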
2017-11-25 14:15:48 -05:00