Commit Graph

3976 Commits

Tom Lane
877cdf11ea Mop-up for letting VOID-returning SQL functions end with a SELECT.
Part of the intent in commit fd1a421fe was to allow SQL functions that are
declared to return VOID to contain anything, including an unrelated final
SELECT, the same as SQL-language procedures can.  However, the planner's
inlining logic didn't get that memo.  Fix it, and add some regression tests
covering this area, since evidently we had none.
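
For illustration (not part of the commit; object names are made up), the newly-allowed shape is roughly:

    CREATE TABLE audit_log (created timestamptz DEFAULT now());
    CREATE FUNCTION voidtest() RETURNS void LANGUAGE sql AS $$
        INSERT INTO audit_log DEFAULT VALUES;
        SELECT 1;              -- unrelated final SELECT; its result is discarded
    $$;
    SELECT voidtest();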

In passing, clean up some typos in comments in create_function_3.sql,
and get rid of its none-too-safe assumption that DROP CASCADE notice
output is immutably ordered.

Per report from Prabhat Sahu.

Discussion: https://postgr.es/m/CANEvxPqxAj6nNHVcaXxpTeEFPmh24Whu+23emgjiuKrhJSct0A@mail.gmail.com
2018-03-16 12:48:13 -04:00
Tom Lane
2cf8c7aa48 Clean up duplicate table and function names in regression tests.
Many of the objects we create during the regression tests are put in the
public schema, so that using the same names in different regression tests
creates a hazard of test failures if any two such scripts run concurrently.
This patch cleans up a bunch of latent hazards of that sort, as well as two
live hazards.

The current situation in this regard is far worse than it was a year or two
back, because practically all of the partitioning-related test cases have
reused table names with enthusiasm.  I despaired of cleaning up that mess
within the five most-affected tests (create_table, alter_table, insert,
update, inherit); fortunately those don't run concurrently.

Other than partitioning problems, most of the issues boil down to using
names like "foo", "bar", "tmp", etc, without thought for the fact that
other test scripts might use similar names concurrently.  I've made an
effort to make all such names more specific.

One of the live hazards was that commit 7421f4b8 caused with.sql to
create a table named "test", conflicting with a similarly-named table
in alter_table.sql; this was exposed in the buildfarm recently.
The other one was that join.sql and transactions.sql both create tables
named "foo" and "bar"; but join.sql's uses of those names date back
only to December or so.

Since commit 7421f4b8 was back-patched to v10, back-patch a minimal
fix for that problem.  The rest of this is just future-proofing.

Discussion: https://postgr.es/m/4627.1521070268@sss.pgh.pa.us
2018-03-15 17:09:02 -04:00
Tom Lane
fb7db40ad2 Clean up duplicate role and schema names in regression tests.
Since these names are global, using the same ones in different regression
tests creates a hazard of test failures if any two such scripts run
concurrently.  Let's establish a policy of not doing that.  In the cases
where a conflict existed, I chose to rename both sides: in principle one
script or the other could've been left in possession of the common name,
but that seems to just invite more trouble of the same sort.

There are a number of places where scripts are using names that seem
unduly generic, but in the absence of actual conflicts I left them alone.

In addition, fix insert.sql's use of "someone_else" as a role name.
That's a flat out violation of longstanding project policy, so back-patch
that change to v10 where the usage appeared.  The rest of this is just
future-proofing, as no two of these scripts are actually run concurrently
in the existing parallel_schedule.

Conflicts of schema-qualified names also exist, but will be dealt with
separately.

Discussion: https://postgr.es/m/4627.1521070268@sss.pgh.pa.us
2018-03-15 14:00:31 -04:00
Peter Eisentraut
33803f67f1 Support INOUT arguments in procedures
In a top-level CALL, the values of INOUT arguments will be returned as a
result row.  In PL/pgSQL, the values are assigned back to the input
arguments.  In other languages, the same convention as for returning a
record from a function is used.  That does not require any code changes
in the PL implementations.
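
A minimal sketch (hypothetical procedure, not from the commit):

    CREATE PROCEDURE triple(INOUT x int) LANGUAGE plpgsql AS $$
    BEGIN
        x := x * 3;
    END;
    $$;
    CALL triple(14);           -- a top-level CALL returns a one-row result: x = 42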

Reviewed-by: Pavel Stehule <pavel.stehule@gmail.com>
2018-03-14 12:07:28 -04:00
Andres Freund
1e22166e5e Expand AND/OR regression tests around NULL handling.
Previously there were no tests verifying that NULL handling in AND/OR
was correct (i.e. that NULL rather than false is returned if
an expression doesn't return true).
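
For example, the behavior the new tests pin down is along these lines:

    SELECT (NULL::boolean AND true) IS NULL;    -- true: AND yields NULL, not false
    SELECT (NULL::boolean OR false) IS NULL;    -- true: OR yields NULL, not false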

Author: Andres Freund
2018-03-13 16:12:31 -07:00
Andres Freund
4f63e85eb1 Add COSTS OFF to two EXPLAIN-using tests.
Discussion: https://postgr.es/m/20180312222023.i4sgkbl4oqtstus3@alap3.anarazel.de
2018-03-13 16:12:31 -07:00
Robert Haas
0927d2f46d Let Parallel Append over simple UNION ALL have partial subpaths.
A simple UNION ALL gets flattened into an appendrel of subquery
RTEs, but up until now it's been impossible for the appendrel to use
the partial paths for the subqueries, so we could implement the
appendrel as a Parallel Append, but only one with non-partial paths
as children.

Three separate changes remove that limitation.
First, when planning a subquery, propagate any partial paths to the
final_rel so that they are potentially visible to outer query levels
(but not if they have initPlans attached, because that wouldn't be
safe).  Second, after planning a subquery, propagate any partial paths
for the final_rel to the subquery RTE in the outer query level in the
same way we do for non-partial paths.  Third, teach finalize_plan() to
account for the possibility that the fake parameter we use for rescan
signalling when the plan contains a Gather (Merge) node may be
propagated from an outer query level.
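
A sketch of the kind of query that benefits, assuming two large tables big_t1(a) and big_t2(b) (names made up):

    EXPLAIN (COSTS OFF)
    SELECT a FROM big_t1 UNION ALL SELECT b FROM big_t2;
    -- the flattened appendrel may now become a Parallel Append whose children
    -- are partial paths, e.g. Parallel Seq Scans on big_t1 and big_t2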

Patch by me, reviewed and tested by Amit Khandekar, Rajkumar
Raghuwanshi, and Ashutosh Bapat.  Test cases based on examples by
Rajkumar Raghuwanshi.

Discussion: http://postgr.es/m/CA+Tgmoa6L9A1nNCk3aTDVZLZ4KkHDn1+tm7mFyFvP+uQPS7bAg@mail.gmail.com
2018-03-13 16:34:08 -04:00
Peter Eisentraut
377b5ac484 Fix CREATE TABLE / LIKE with bigint identity column
CREATE TABLE / LIKE with a bigint identity column would fail on
platforms where long is 32 bits.  Copying the sequence values used
makeInteger(), which would truncate the 64-bit sequence data to 32 bits.
To fix, use makeFloat() instead, like the parser.  (This does not
actually make use of floats, but stores the values as strings.)
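
The failing case looked roughly like this (names made up):

    CREATE TABLE ident_src (id bigint GENERATED ALWAYS AS IDENTITY);
    CREATE TABLE ident_dst (LIKE ident_src INCLUDING IDENTITY);
    -- previously failed on platforms where long is 32 bits, because the
    -- copied 64-bit sequence parameters were truncated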

Bug: #15096
Reviewed-by: Michael Paquier <michael@paquier.xyz>
2018-03-13 09:41:30 -04:00
Alvaro Herrera
1f8a3327a9 Avoid having two PKs in a partition
If a table containing a primary key is attached as a partition to a
partitioned table which has a primary key with a different definition,
we would happily create a second one in the new partition.  Oops.  It
turns out that this is because an error check in DefineIndex is executed
only if you tell it that it's being run by ALTER TABLE, and the original
code here didn't.  Change it so that it does.
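
Roughly the scenario in question (hypothetical names):

    CREATE TABLE pk_parted (a int, b int, PRIMARY KEY (a, b)) PARTITION BY LIST (a);
    CREATE TABLE pk_part1  (a int, b int, PRIMARY KEY (a));
    ALTER TABLE pk_parted ATTACH PARTITION pk_part1 FOR VALUES IN (1);
    -- previously this quietly gave pk_part1 a second primary key;
    -- it now fails with the usual multiple-primary-keys error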

Also added a couple of test cases for this.  A previously working test
started to fail in a different way than before the patch because the new
check is called earlier; change its PK to plain UNIQUE so that the new
behavior isn't invoked and the test continues to verify what we want it
to verify.

Reported by: Noriyoshi Shinoda
Discussion: https://postgr.es/m/DF4PR8401MB102060EC2615EC9227CC73F7EEDF0@DF4PR8401MB1020.NAMPRD84.PROD.OUTLOOK.COM
2018-03-12 19:42:32 -03:00
Alvaro Herrera
63cbee6a78 doc: Reword restriction on partition keys in unique indexes
New wording from David G. Johnston, who noticed the unreadable original
also.  Include his suggested test case as well.

Fix a typo I noticed elsewhere while doing this.

Discussion: https://postgr.es/m/CAKFQuwY4Ld7ecxL_KAmaxwt0FUu5VcPPN2L4dh+3BeYbrdBa5g@mail.gmail.com
2018-03-12 13:32:28 -03:00
Tom Lane
4a4e2442a7 Fix improper uses of canonicalize_qual().
One of the things canonicalize_qual() does is to remove constant-NULL
subexpressions of top-level AND/OR clauses.  It does that on the assumption
that what it's given is a top-level WHERE clause, so that NULL can be
treated like FALSE.  Although this is documented down inside a subroutine
of canonicalize_qual(), it wasn't mentioned in the documentation of that
function itself, and some callers hadn't gotten that memo.

Notably, commit d007a9505 caused get_relation_constraints() to apply
canonicalize_qual() to CHECK constraints.  That allowed constraint
exclusion to misoptimize situations in which a CHECK constraint had a
provably-NULL subclause, as seen in the regression test case added here,
in which a child table that should be scanned is not.  (Although this
thinko is ancient, the test case doesn't fail before 9.2, for reasons
I've not bothered to track down in detail.  There may be related cases
that do fail before that.)
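
A sketch of the failure mode (hypothetical names): NULL passes a CHECK constraint, so it must not be treated as FALSE when the constraint is used for exclusion.

    CREATE TABLE cc_parent (f1 int);
    CREATE TABLE cc_child (CHECK (f1 = 1 OR (f1 = 2 AND NULL))) INHERITS (cc_parent);
    INSERT INTO cc_child VALUES (2);         -- allowed: the CHECK evaluates to NULL
    SELECT * FROM cc_parent WHERE f1 = 2;    -- must scan cc_child; the bug could exclude it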

More recently, commit f0e44751d added an independent bug by applying
canonicalize_qual() to index expressions, which is even sillier since
those might not even be boolean.  If they are, though, I think this
could lead to making incorrect index entries for affected index
expressions in v10.  I haven't attempted to prove that though.

To fix, add an "is_check" parameter to canonicalize_qual() to specify
whether it should assume WHERE or CHECK semantics, and make it perform
NULL-elimination accordingly.  Adjust the callers to apply the right
semantics, or remove the call entirely in cases where it's not known
that the expression has one or the other semantics.  I also removed
the call in some cases involving partition expressions, where it should
be a no-op because such expressions should be canonical already ...
and was a no-op, independently of whether it could in principle have
done something, because it was being handed the qual in implicit-AND
format which isn't what it expects.  In HEAD, add an Assert to catch
that type of mistake in future.

This represents an API break for external callers of canonicalize_qual().
While that's intentional in HEAD to make such callers think about which
case applies to them, it seems like something we probably wouldn't be
thanked for in released branches.  Hence, in released branches, the
extra parameter is added to a new function canonicalize_qual_ext(),
and canonicalize_qual() is a wrapper that retains its old behavior.

Patch by me with suggestions from Dean Rasheed.  Back-patch to all
supported branches.

Discussion: https://postgr.es/m/24475.1520635069@sss.pgh.pa.us
2018-03-11 18:10:42 -04:00
Tom Lane
5748f3a0aa Improve predtest.c's internal docs, and enhance its functionality a bit.
Commit b08df9cab left things rather poorly documented as far as the
exact semantics of "clause_is_check" mode went.  Also, that mode did
not really work correctly for predicate_refuted_by; although given the
lack of specification as to what it should do, as well as the lack
of any actual use-case, that's perhaps not surprising.

Rename "clause_is_check" to "weak" proof mode, and provide specifications
for what it should do.  I defined weak refutation as meaning "truth of A
implies non-truth of B", which makes it possible to use the mode in the
part of relation_excluded_by_constraints that checks for mutually
contradictory WHERE clauses.  Fix up several places that did things wrong
for that definition.  (As far as I can see, these errors would only lead
to failure-to-prove, not incorrect claims of proof, making them not
serious bugs even aside from the fact that v10 contains no use of this
mode.  So there seems no need for back-patching.)

In addition, teach predicate_refuted_by_recurse that it can use
predicate_implied_by_recurse after all when processing a strong NOT-clause,
so long as it asks for the correct proof strength.  This is an optimization
that could have been included in commit b08df9cab, but wasn't.

Also, simplify and generalize the logic that checks for whether nullness of
the argument of IS [NOT] NULL would force overall nullness of the predicate
or clause.  (This results in a change in the partition_prune test's output,
as it is now able to prune an all-nulls partition that it did not recognize
before.)

In passing, in PartConstraintImpliedByRelConstraint, remove bogus
conversion of the constraint list to explicit-AND form and then right back
again; that accomplished nothing except forcing a useless extra level of
recursion inside predicate_implied_by.

Discussion: https://postgr.es/m/5983.1520487191@sss.pgh.pa.us
2018-03-09 16:58:26 -05:00
Andres Freund
854dd8cff5 Add parenthesized options syntax for ANALYZE.
This is analogous to the syntax allowed for VACUUM. This allows us to
avoid making new options reserved keywords and makes it easier to
allow arbitrary argument order. Oh, and it's consistent with the other
commands, too.
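
For example (table name assumed):

    ANALYZE VERBOSE some_table;        -- old syntax, still accepted
    ANALYZE (VERBOSE) some_table;      -- new parenthesized options syntax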

Author: Nathan Bossart
Reviewed-By: Michael Paquier, Masahiko Sawada
Discussion: https://postgr.es/m/D3FC73E2-9B1A-4DB4-8180-55F57D116B4E@amazon.com
2018-03-05 16:21:05 -08:00
Alvaro Herrera
5564c11815 Clone extended stats in CREATE TABLE (LIKE INCLUDING ALL)
The LIKE INCLUDING ALL clause to CREATE TABLE intuitively indicates
cloning of extended statistics on the source table, but it failed to do
so.  Patch it up so that it does.  Also include an INCLUDING STATISTICS
option to the LIKE clause, so that the behavior can be requested
individually, or excluded individually.
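
A sketch of the new behavior (names made up):

    CREATE TABLE stats_src (a int, b int);
    CREATE STATISTICS stats_src_sd (dependencies) ON a, b FROM stats_src;
    CREATE TABLE stats_copy1 (LIKE stats_src INCLUDING STATISTICS);
    CREATE TABLE stats_copy2 (LIKE stats_src INCLUDING ALL);   -- now also clones the extended statistics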

While at it, reorder the INCLUDING options, both in code and in docs, in
alphabetical order which makes more sense than feature-implementation
order that was previously used.

Backpatch this to Postgres 10, where extended statistics were
introduced, because this is seen as an oversight in a fresh feature
which is better to get consistent from the get-go instead of changing
only in pg11.

In pg11, comments on statistics objects are cloned too.  In pg10 they
are not, because I (Álvaro) was too cowardly to change the parse node as
required to support it.  Also, in pg10 I chose not to renumber the
parser symbols for the various INCLUDING options in LIKE, for the same
reason.  Any corresponding user-visible changes (docs) are backpatched,
though.

Reported-by: Stephen Froehlich
Author: David Rowley
Reviewed-by: Álvaro Herrera, Tomas Vondra
Discussion: https://postgr.es/m/CY1PR0601MB1927315B45667A1B679D0FD5E5EF0@CY1PR0601MB1927.namprd06.prod.outlook.com
2018-03-05 19:37:19 -03:00
Peter Eisentraut
39314efa4d Minor fixes for reloptions tests
Follow-up to 4b95cc1dc3

Author: Nikolay Shaplov <dhyan@nataraj.su>
2018-03-03 12:51:56 -05:00
Peter Eisentraut
fd1a421fe6 Add prokind column, replacing proisagg and proiswindow
The new column distinguishes normal functions, procedures, aggregates,
and window functions.  This replaces the existing columns proisagg and
proiswindow, and replaces the convention that procedures are indicated
by prorettype == 0.  Also change prorettype to be VOIDOID for procedures.

Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
2018-03-02 13:48:33 -05:00
Tom Lane
8f72a57048 Fix format_type() to restore its old behavior.
Commit a26116c6c accidentally changed the behavior of the SQL format_type()
function while refactoring.  For the reasons explained in that function's
comment, a NULL typemod argument should behave differently from a -1
argument.  Since we've managed to break this, add a regression test
memorializing the intended behavior.
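
A sketch of the distinction being memorialized (output shown from memory of the bpchar quirk, so treat it as illustrative):

    SELECT format_type('bpchar'::regtype, NULL);    -- 'character'
    SELECT format_type('bpchar'::regtype, -1);      -- 'bpchar'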

In passing, be consistent about the type of the "flags" parameter.

Noted by Rushabh Lathia, though I revised the patch some more.

Discussion: https://postgr.es/m/CAGPqQf3RB2q-d2Awp_-x-Ur6aOxTUwnApt-vm-iTtceZxYnePg@mail.gmail.com
2018-03-01 11:37:46 -05:00
Andres Freund
1437824564 pg_regress: Increase space available for test names.
A few isolationtester tests with reasonable names are too wide to
nicely align. Increase space.

Author: Thomas Munro
Discussion: https://postgr.es/m/CAEepm=2v7+EHs6zsJzFn+zJOT4F4Kb69Z1xJ7Zf5kgwLr1n=VA@mail.gmail.com
2018-03-01 02:48:33 -08:00
Robert Haas
ce1663cdcd Fix assertion failure when Parallel Append is run serially.
Parallel-aware plan nodes must be prepared to run without parallelism
if it's not possible at execution time for whatever reason.  Commit
ab72716778, which introduced Parallel
Append, overlooked this.

Rajkumar Raghuwanshi reported this problem, and I included his test
case in this patch.  The code changes are by me.

Discussion: http://postgr.es/m/CAKcux6=WqkUudLg1GLZZ7fc5ScWC1+Y9qD=pAHeqy32WoeJQvw@mail.gmail.com
2018-02-28 10:58:27 -05:00
Tom Lane
e98a4de7d2 Use the correct tuplestore read pointer in a NamedTuplestoreScan.
Tom Kazimiers reported that transition tables don't work correctly when
they are scanned by more than one executor node.  That's because commit
18ce3a4ab allocated separate read pointers for each executor node, as it
must, but failed to make them active at the appropriate times.  Repair.
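
Roughly the pattern involved (table name made up; the statement-level trigger function audit_fn() is assumed to exist):

    CREATE TABLE tt (id int);
    CREATE TRIGGER tt_audit AFTER INSERT ON tt
        REFERENCING NEW TABLE AS new_rows
        FOR EACH STATEMENT EXECUTE PROCEDURE audit_fn();
    -- a query inside audit_fn() such as
    --   SELECT count(*) FROM new_rows a JOIN new_rows b USING (id)
    -- reads the transition table through two executor nodes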

Thomas Munro

Discussion: https://postgr.es/m/20180224034748.bixarv6632vbxgeb%40dewberry.localdomain
2018-02-27 15:56:51 -05:00
Tom Lane
c40e20a83c Revert renaming of int44in/int44out.
This seemed like a good idea in commit be42eb9d6, but it causes more
trouble than it's worth for cross-branch upgrade testing.

Discussion: https://postgr.es/m/11927.1519756619@sss.pgh.pa.us
2018-02-27 15:15:35 -05:00
Tom Lane
25b692568f Prevent dangling-pointer access when update trigger returns old tuple.
A before-update row trigger may choose to return the "new" or "old" tuple
unmodified.  ExecBRUpdateTriggers failed to consider the second
possibility, and would proceed to free the "old" tuple even if it was the
one returned, leading to subsequent access to already-deallocated memory.
In debug builds this reliably leads to an "invalid memory alloc request
size" failure; in production builds it might accidentally work, but data
corruption is also possible.
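
The trigger pattern that hits this, sketched in plpgsql for brevity (the committed test uses a C-language trigger; names made up):

    CREATE TABLE upd_t (a int);
    CREATE FUNCTION keep_old_row() RETURNS trigger LANGUAGE plpgsql AS $$
    BEGIN
        RETURN OLD;            -- return the "old" tuple unmodified
    END;
    $$;
    CREATE TRIGGER keep_old BEFORE UPDATE ON upd_t
        FOR EACH ROW EXECUTE PROCEDURE keep_old_row();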

This is a very old bug.  There are probably a couple of reasons it hasn't
been noticed up to now.  It would be more usual to return NULL if one
wanted to suppress the update action; returning "old" is significantly less
efficient since the update will occur anyway.  Also, none of the standard
PLs would ever cause this because they all returned freshly-manufactured
tuples even if they were just copying "old".  But commit 4b93f5799 changed
that for plpgsql, making it possible to see the bug with a plpgsql trigger.
Still, this is certainly legal behavior for a trigger function, so it's
ExecBRUpdateTriggers's fault not plpgsql's.

It seems worth creating a test case that exercises returning "old" directly
with a C-language trigger; testing this through plpgsql seems unreliable
because its behavior might change again.

Report and fix by Rushabh Lathia; regression test case by me.
Back-patch to all supported branches.

Discussion: https://postgr.es/m/CAGPqQf1P4pjiNPrMof=P_16E-DFjt457j+nH2ex3=nBTew7tXw@mail.gmail.com
2018-02-27 13:28:02 -05:00
Tom Lane
be42eb9d62 Improve regression test coverage of regress.c.
It's a bit silly to have test functions that aren't tested, so test
them.

In passing, rename int44in/int44out to city_budget_in/_out so that they
match how the regression tests use them.  Also, fix city_budget_out
so that it emits the format city_budget_in expects to read; otherwise
we'd have dump/reload failures when testing pg_dump against the
regression database.  (We avoided that in the past only because no
data of type city_budget was actually stored anywhere.)

Discussion: https://postgr.es/m/29322.1519701006@sss.pgh.pa.us
2018-02-27 12:13:14 -05:00
Tom Lane
db3af9feb1 Remove unused functions in regress.c.
This patch removes five functions that presumably were once used in the
regression tests, but haven't been so used in many years.  Nonetheless
we've been wasting maintenance effort on them (e.g., by converting them
to V1 function protocol).  I see no reason to think that reviving them
would add any useful test coverage, so drop 'em.

In passing, mark regress_lseg_construct static, since it's not called
from outside this file.

Discussion: https://postgr.es/m/29322.1519701006@sss.pgh.pa.us
2018-02-27 11:11:25 -05:00
Tom Lane
3d2aed664e Avoid using unsafe search_path settings during dump and restore.
Historically, pg_dump has "set search_path = foo, pg_catalog" when
dumping an object in schema "foo", and has also caused that setting
to be used while restoring the object.  This is problematic because
functions and operators in schema "foo" could capture references meant
to refer to pg_catalog entries, both in the queries issued by pg_dump
and those issued during the subsequent restore run.  That could
result in dump/restore misbehavior, or in privilege escalation if a
nefarious user installs trojan-horse functions or operators.

This patch changes pg_dump so that it does not change the search_path
dynamically.  The emitted restore script sets the search_path to what
was used at dump time, and then leaves it alone thereafter.  Created
objects are placed in the correct schema, regardless of the active
search_path, by dint of schema-qualifying their names in the CREATE
commands, as well as in subsequent ALTER and ALTER-like commands.

Since this change requires a change in the behavior of pg_restore
when processing an archive file made according to this new convention,
bump the archive file version number; old versions of pg_restore will
therefore refuse to process files made with new versions of pg_dump.

Security: CVE-2018-1058
2018-02-26 10:18:21 -05:00
Tom Lane
8b29e88cdc Add window RANGE support for float4, float8, numeric.
Commit 0a459cec9 left this for later, but since time's running out,
I went ahead and took care of it.  There are more data types that
somebody might someday want RANGE support for, but this is enough
to satisfy all expectations of the SQL standard, which just says that
"numeric, datetime, and interval" types should have RANGE support.
2018-02-24 13:23:38 -05:00
Tom Lane
9afd513df0 Fix planner failures with overlapping mergejoin clauses in an outer join.
Given overlapping or partially redundant join clauses, for example
	t1 JOIN t2 ON t1.a = t2.x AND t1.b = t2.x
the planner's EquivalenceClass machinery will ordinarily refactor the
clauses as "t1.a = t1.b AND t1.a = t2.x", so that join processing doesn't
see multiple references to the same EquivalenceClass in a list of join
equality clauses.  However, if the join is outer, it's incorrect to derive
a restriction clause on the outer side from the join conditions, so the
clause refactoring does not happen and we end up with overlapping join
conditions.  The code that attempted to deal with such cases had several
subtle bugs, which could result in "left and right pathkeys do not match in
mergejoin" or "outer pathkeys do not match mergeclauses" planner errors,
if the selected join plan type was a mergejoin.  (It does not appear that
any actually incorrect plan could have been emitted.)
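
Using the t1/t2 example above, the failing pattern was of this shape:

    SELECT * FROM t1 LEFT JOIN t2 ON t1.a = t2.x AND t1.b = t2.x;
    -- when a mergejoin was selected, planning could fail with
    -- "outer pathkeys do not match mergeclauses"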

The core of the problem really was failure to recognize that the outer and
inner relations' pathkeys have different relationships to the mergeclause
list.  A join's mergeclause list is constructed by reference to the outer
pathkeys, so it will always be ordered the same as the outer pathkeys, but
this cannot be presumed true for the inner pathkeys.  If the inner sides of
the mergeclauses contain multiple references to the same EquivalenceClass
({t2.x} in the above example) then a simplistic rendering of the required
inner sort order is like "ORDER BY t2.x, t2.x", but the pathkey machinery
recognizes that the second sort column is redundant and throws it away.
The mergejoin planning code failed to account for that behavior properly.
One error was to try to generate cut-down versions of the mergeclause list
from cut-down versions of the inner pathkeys in the same way as the initial
construction of the mergeclause list from the outer pathkeys was done; this
could lead to choosing a mergeclause list that fails to match the outer
pathkeys.  The other problem was that the pathkey cross-checking code in
create_mergejoin_plan treated the inner and outer pathkey lists
identically, whereas actually the expectations for them must be different.
That led to false "pathkeys do not match" failures in some cases, and in
principle could have led to failure to detect bogus plans in other cases,
though there is no indication that such bogus plans could be generated.

Reported by Alexander Kuzmenkov, who also reviewed this patch.  This has
been broken for years (back to around 8.3 according to my testing), so
back-patch to all supported branches.

Discussion: https://postgr.es/m/5dad9160-4632-0e47-e120-8e2082000c01@postgrespro.ru
2018-02-23 13:47:33 -05:00
Peter Eisentraut
76b6aa41f4 Support parameters in CALL
To support parameters in CALL, move the parse analysis of the procedure
and arguments into the global transformation phase, so that the parser
hooks can be applied.  And then at execution time pass the parameters
from ProcessUtility on to ExecuteCallStmt.
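
One case this enables, sketched with a made-up procedure:

    CREATE PROCEDURE log_value(p int) LANGUAGE plpgsql AS $$
    BEGIN
        RAISE NOTICE 'p = %', p;
    END;
    $$;
    DO $$
    DECLARE v int := 42;
    BEGIN
        CALL log_value(v);     -- a variable as a CALL argument
    END;
    $$;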
2018-02-22 21:36:48 -05:00
Peter Eisentraut
10cfce34c0 Add user-callable SHA-2 functions
Add the user-callable functions sha224, sha256, sha384, sha512.  We
already had these in the C code to support SCRAM, but there was no test
coverage outside of the SCRAM tests.  Adding these as user-callable
functions allows writing some tests.  Also, we have a user-callable md5
function but no more modern alternative, which led to wide use of md5 as
a general-purpose hash function, which leads to occasional complaints
about using md5.
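
For example (the result is a bytea, shown here in hex):

    SELECT sha256('abc');
    -- \xba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad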

Also mark the existing md5 functions as leak-proof.

Reviewed-by: Michael Paquier <michael@paquier.xyz>
2018-02-22 11:34:53 -05:00
Robert Haas
9a5c4f58f3 Try to stabilize EXPLAIN output in partition_check test.
Commit 7d8ac9814b adjusted these
tests in the hope of preserving the plan shape, but I failed to
notice that the three partitions were, on my local machine, choosing
two different plan shapes.  This is probably related to the fact
that all three tables have exactly the same row count.  Try to
improve the situation by making pht1_e about half as large as
the other two.

Per Tom Lane and the buildfarm.

Discussion: http://postgr.es/m/25380.1519277713@sss.pgh.pa.us
2018-02-22 08:51:00 -05:00
Robert Haas
7d8ac9814b Charge cpu_tuple_cost * 0.5 for Append and MergeAppend nodes.
Previously, Append didn't charge anything at all, and MergeAppend
charged only cpu_operator_cost, about half the value used here.  This
change might make MergeAppend plans slightly more likely to be chosen
than before, since this commit increases the assumed cost for Append
-- with default values -- by 0.005 per tuple but MergeAppend by only
0.0025 per tuple.  Since the comparisons required by MergeAppend are
costed separately, it's not clear why MergeAppend needs to be
otherwise more expensive than Append, so hopefully this is OK.

Prior to partition-wise join, it didn't really matter whether or not
an Append node had any cost of its own, because every plan had to use
the same number of Append or MergeAppend nodes and in the same places.
Only the relative cost of Append vs. MergeAppend made a difference.
Now, however, it is possible to avoid some of the Append nodes using a
partition-wise join, so it's worth making an effort.  Pending patches
for partition-wise aggregate care too, because an Append of Aggregate
nodes will incur the Append overhead fewer times than an Aggregate
over an Append.  Although in most cases this change will favor the use
of partition-wise techniques, it does the opposite when the join
cardinality is greater than the sum of the input cardinalities.  Since
this situation arises in an existing regression test, I [rhaas]
adjusted it to keep the overall plan shape approximately the same.

Jeevan Chalke, per a suggestion from David Rowley.  Reviewed by
Ashutosh Bapat.  Some changes by me.  The larger patch series of which
this patch is a part was also reviewed and tested by Antonin Houska,
Rajkumar Raghuwanshi, David Rowley, Dilip Kumar, Konstantin Knizhnik,
Pascal Legrand, Rafia Sabih, and me.

Discussion: http://postgr.es/m/CAKJS1f9UXdk6ZYyqbJnjFO9a9hyHKGW7B=ZRh-rxy9qxfPA5Gw@mail.gmail.com
2018-02-21 23:09:27 -05:00
Peter Eisentraut
c2ff42c6c1 Error message improvement 2018-02-20 17:58:27 -05:00
Alvaro Herrera
4108a28d3a Fix expected output 2018-02-19 17:56:43 -03:00
Alvaro Herrera
eb7ed3f306 Allow UNIQUE indexes on partitioned tables
If we restrict unique constraints on partitioned tables so that they
must always include the partition key, then our standard approach to
unique indexes already works --- each unique key is forced to exist
within a single partition, so enforcing the unique restriction in each
index individually is enough to have it enforced globally.  Therefore we
can implement unique indexes on partitions by simply removing a few
restrictions (and adding others.)
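
A sketch of what is now allowed, and what is still rejected (names made up):

    CREATE TABLE u_parted (a int, b text, PRIMARY KEY (a)) PARTITION BY RANGE (a);
    CREATE TABLE u_parted_1 PARTITION OF u_parted FOR VALUES FROM (1) TO (1000);
    -- a unique constraint that does not include the partition key is still rejected:
    CREATE TABLE u_parted2 (a int, b text, UNIQUE (b)) PARTITION BY RANGE (a);  -- error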

Discussion: https://postgr.es/m/20171222212921.hi6hg6pem2w2t36z@alvherre.pgsql
Discussion: https://postgr.es/m/20171229230607.3iib6b62fn3uaf47@alvherre.pgsql
Reviewed-by: Simon Riggs, Jesper Pedersen, Peter Eisentraut, Jaime
	Casanova, Amit Langote
2018-02-19 17:40:00 -03:00
Alvaro Herrera
cef60043dd Mention trigger name in trigger test
This makes it more explicit exactly what is going on, in preparation
for further proposed behavior changes.

Discussion: https://postgr.es/m/20180214212624.hm7of76flesodamf@alvherre.pgsql
2018-02-17 13:18:34 -03:00
Peter Eisentraut
2fb1abaeb0 Rename enable_partition_wise_join to enable_partitionwise_join
Discussion: https://www.postgresql.org/message-id/flat/ad24e4f4-6481-066e-e3fb-6ef4a3121882%402ndquadrant.com
2018-02-16 10:33:59 -05:00
Tom Lane
f9263006d8 Support CONSTANT/NOT NULL/initial value for plpgsql composite variables.
These features were never implemented previously for composite or record
variables ... not that the documentation admitted it, so there are no doc
updates here.
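
A sketch of the newly-supported declarations (type name made up):

    CREATE TYPE two_ints AS (x int, y int);
    DO $$
    DECLARE
        c CONSTANT two_ints NOT NULL := ROW(1, 2);
    BEGIN
        RAISE NOTICE 'c = %', c;      -- (1,2)
    END;
    $$;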

This also fixes some issues concerning enforcing DOMAIN NOT NULL
constraints against plpgsql variables, although I'm not sure that
that topic is completely dealt with.

I created a new plpgsql test file for these features, and moved the
one relevant existing test case into that file.

Tom Lane, reviewed by Daniel Gustafsson

Discussion: https://postgr.es/m/18362.1514605650@sss.pgh.pa.us
2018-02-13 22:15:08 -05:00
Tom Lane
4b93f57999 Make plpgsql use its DTYPE_REC code paths for composite-type variables.
Formerly, DTYPE_REC was used only for variables declared as "record";
variables of named composite types used DTYPE_ROW, which is faster for
some purposes but much less flexible.  In particular, the ROW code paths
are entirely incapable of dealing with DDL-caused changes to the number
or data types of the columns of a row variable, once a particular plpgsql
function has been parsed for the first time in a session.  And, since the
stored representation of a ROW isn't a tuple, there wasn't any easy way
to deal with variables of domain-over-composite types, since the domain
constraint checking code would expect the value to be checked to be a
tuple.  A lesser, but still real, annoyance is that ROW format cannot
represent a true NULL composite value, only a row of per-field NULL
values, which is not exactly the same thing.

Hence, switch to using DTYPE_REC for all composite-typed variables,
whether "record", named composite type, or domain over named composite
type.  DTYPE_ROW remains but is used only for its native purpose, to
represent a fixed-at-compile-time list of variables, for instance the
targets of an INTO clause.

To accomplish this without taking significant performance losses, introduce
infrastructure that allows storing composite-type variables as "expanded
objects", similar to the "expanded array" infrastructure introduced in
commit 1dc5ebc90.  A composite variable's value is thereby kept (most of
the time) in the form of separate Datums, so that field accesses and
updates are not much more expensive than they were in the ROW format.
This holds the line, more or less, on performance of variables of named
composite types in field-access-intensive microbenchmarks, and makes
variables declared "record" perform much better than before in similar
tests.  In addition, the logic involved with enforcing composite-domain
constraints against updates of individual fields is in the expanded
record infrastructure not plpgsql proper, so that it might be reusable
for other purposes.

In further support of this, introduce a typcache feature for assigning a
unique-within-process identifier to each distinct tuple descriptor of
interest; in particular, DDL alterations on composite types result in a new
identifier for that type.  This allows very cheap detection of the need to
refresh tupdesc-dependent data.  This improves on the "tupDescSeqNo" idea
I had in commit 687f096ea: that assigned identifying sequence numbers to
successive versions of individual composite types, but the numbers were not
unique across different types, nor was there support for assigning numbers
to registered record types.

In passing, allow plpgsql functions to accept as well as return type
"record".  There was no good reason for the old restriction, and it
was out of step with most of the other PLs.
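
For instance, something like this is now accepted (names made up):

    CREATE TABLE pts (x int, y int);
    INSERT INTO pts VALUES (1, 2);
    CREATE FUNCTION describe_row(r record) RETURNS text LANGUAGE plpgsql AS $$
    BEGIN
        RETURN r::text;
    END;
    $$;
    SELECT describe_row(pts.*) FROM pts;    -- '(1,2)'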

Tom Lane, reviewed by Pavel Stehule

Discussion: https://postgr.es/m/8962.1514399547@sss.pgh.pa.us
2018-02-13 18:52:21 -05:00
Peter Eisentraut
7a32ac8a66 Add procedure support to pg_get_functiondef
This also makes procedures work in psql's \ef and \sf commands.

Reported-by: Pavel Stehule <pavel.stehule@gmail.com>
2018-02-13 15:13:44 -05:00
Peter Eisentraut
7cd56f218d Add tests for pg_get_functiondef 2018-02-13 15:13:44 -05:00
Peter Eisentraut
a7b8f0661d Fix typo 2018-02-13 15:13:44 -05:00
Tom Lane
d02d4a6d4f Avoid premature free of pass-by-reference CALL arguments.
Prematurely freeing the EState used to evaluate CALL arguments led, in some
cases, to passing dangling pointers to the procedure.  This was masked in
trivial cases because the argument pointers would point to Const nodes in
the original expression tree, and in some other cases because the result
value would end up in the standalone ExprContext rather than in memory
belonging to the EState --- but that wasn't exactly high quality
programming either, because the standalone ExprContext was never
explicitly freed, breaking assorted API contracts.

In addition, using a separate EState for each argument was just silly.

So let's use just one EState, and one ExprContext, and make the latter
belong to the former rather than be standalone, and clean up the EState
(and hence the ExprContext) post-call.

While at it, improve the function's commentary a bit.

Discussion: https://postgr.es/m/29173.1518282748@sss.pgh.pa.us
2018-02-10 13:37:12 -05:00
Tom Lane
65b1d76785 Fix oversight in CALL argument handling, and do some minor cleanup.
CALL statements cannot support sub-SELECTs in the arguments of the called
procedure, since they just use ExecEvalExpr to evaluate such arguments.
Teach transformSubLink() to reject the case, as it already does for other
contexts in which subqueries are not supported.
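
A sketch of what is now rejected cleanly (names assumed):

    CALL some_proc((SELECT max(id) FROM some_table));
    -- errors out: sub-SELECTs are not supported as CALL arguments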

In passing, s/EXPR_KIND_CALL/EXPR_KIND_CALL_ARGUMENT/ to make that enum
symbol line up more closely with the phrasing of the error messages it is
associated with.  And fix someone's weak grasp of English grammar in the
preceding EXPR_KIND_PARTITION_EXPRESSION addition.  Also update an
incorrect comment in resolve_unique_index_expr (possibly it was correct
when written, but nowadays transformExpr definitely does reject SRFs here).

Per report from Pavel Stehule --- but this resolves only one of the bugs
he mentions.

Discussion: https://postgr.es/m/CAFj8pRDxOwPPzpA8i+AQeDQFj7bhVw-dR2==rfWZ3zMGkm568Q@mail.gmail.com
2018-02-10 13:05:14 -05:00
Peter Eisentraut
7c44b75a2a Make new triggers tests more robust
Add explicit collation on the trigger name to avoid locale dependencies.
Also restrict the tables selected, to avoid interference from
concurrently running tests.
2018-02-07 14:57:19 -05:00
Peter Eisentraut
32ff269117 Add more information_schema columns
- table_constraints.enforced
- triggers.action_order
- triggers.action_reference_old_table
- triggers.action_reference_new_table
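
For example, the new columns can be inspected with queries like:

    SELECT constraint_name, enforced
    FROM information_schema.table_constraints;
    SELECT trigger_name, action_order,
           action_reference_old_table, action_reference_new_table
    FROM information_schema.triggers;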

Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
2018-02-07 10:08:02 -05:00
Tom Lane
0a459cec96 Support all SQL:2011 options for window frame clauses.
This patch adds the ability to use "RANGE offset PRECEDING/FOLLOWING"
frame boundaries in window functions.  We'd punted on that back in the
original patch to add window functions, because it was not clear how to
do it in a reasonably data-type-extensible fashion.  That problem is
resolved here by adding the ability for btree operator classes to provide
an "in_range" support function that defines how to add or subtract the
RANGE offset value.  Factoring it this way also allows the operator class
to avoid overflow problems near the ends of the datatype's range, if it
wishes to expend effort on that.  (In the committed patch, the integer
opclasses handle that issue, but it did not seem worth the trouble to
avoid overflow failures for datetime types.)

The patch includes in_range support for the integer_ops opfamily
(int2/int4/int8) as well as the standard datetime types.  Support for
other numeric types has been requested, but that seems like suitable
material for a follow-on patch.

In addition, the patch adds GROUPS mode which counts the offset in
ORDER-BY peer groups rather than rows, and it adds the frame_exclusion
options specified by SQL:2011.  As far as I can see, we are now fully
up to spec on window framing options.
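
A sketch of the new GROUPS mode and frame exclusion together:

    SELECT x,
           sum(x) OVER (ORDER BY x
                        GROUPS BETWEEN 1 PRECEDING AND 1 FOLLOWING
                        EXCLUDE CURRENT ROW)
    FROM generate_series(1, 5) AS g(x);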

Existing behaviors remain unchanged, except that I changed the errcode
for a couple of existing error reports to meet the SQL spec's expectation
that negative "offset" values should be reported as SQLSTATE 22013.

Internally and in relevant parts of the documentation, we now consistently
use the terminology "offset PRECEDING/FOLLOWING" rather than "value
PRECEDING/FOLLOWING", since the term "value" is confusingly vague.

Oliver Ford, reviewed and whacked around some by me

Discussion: https://postgr.es/m/CAGMVOdu9sivPAxbNN0X+q19Sfv9edEPv=HibOJhB14TJv_RCQg@mail.gmail.com
2018-02-07 00:06:56 -05:00
Robert Haas
f069c91a57 Fix possible crash in partition-wise join.
The previous code assumed that we'd always succeed in creating
child-joins for a joinrel for which partition-wise join was considered,
but that's not guaranteed, at least in the case where dummy rels
are involved.

Ashutosh Bapat, with some wordsmithing by me.

Discussion: http://postgr.es/m/CAFjFpRf8=uyMYYfeTBjWDMs1tR5t--FgOe2vKZPULxxdYQ4RNw@mail.gmail.com
2018-02-05 17:31:57 -05:00
Tom Lane
3492a0af0b Fix RelationBuildPartitionKey's processing of partition key expressions.
Failure to advance the list pointer while reading partition expressions
from a list results in invoking an input function with inappropriate data,
possibly leading to crashes or, with carefully crafted input, disclosure
of arbitrary backend memory.

Bug discovered independently by Álvaro Herrera and David Rowley.
This patch is by Álvaro but owes something to David's proposed fix.
Back-patch to v10 where the issue was introduced.

Security: CVE-2018-1052
2018-02-05 10:37:30 -05:00
Peter Eisentraut
533c5d8bdd Fix application of identity values in some cases
Investigation of 2d2d06b7e2 revealed that
identity values were not applied in some further cases, including
logical replication subscribers, VALUES RTEs, and ALTER TABLE ... ADD
COLUMN.  To fix all that, apply the identity column expression in
build_column_default() instead of repeating the same logic at each call
site.

For ALTER TABLE ... ADD COLUMN ... IDENTITY, the previous coding
completely ignored that existing rows for the new column should have
values filled in from the identity sequence.  The coding using
build_column_default() fails for this because the sequence ownership
isn't registered until after ALTER TABLE, and we can't do it before
because we don't have the column in the catalog yet.  So we specially
remember in ColumnDef the sequence name that we decided on and build a
custom NextValueExpr using that.
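
The ALTER TABLE case, roughly (names made up):

    CREATE TABLE ident_t (val text);
    INSERT INTO ident_t VALUES ('a'), ('b');
    ALTER TABLE ident_t ADD COLUMN id int GENERATED ALWAYS AS IDENTITY;
    -- the existing rows are now filled in from the new identity sequence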

Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
2018-02-02 14:39:10 -05:00
Robert Haas
22757960bb Fix typo: colums -> columns.
Along the way, also fix code indentation.

Alexander Lakhin, reviewed by Michael Paquier

Discussion: http://postgr.es/m/45c44aa7-7cfa-7f3b-83fd-d8300677fdda@gmail.com
2018-01-31 16:45:37 -05:00
Robert Haas
3ccdc6f9a5 Fix list partition constraints for partition keys of array type.
The old code always generated a constraint of the form
col = ANY(ARRAY[val1, val2, ...]), but that's invalid when col is an
array type.  Instead, generate col = val when there's only one value,
col = val1 OR col = val2 OR ... when there are multiple values and
col is of array type, and the old form when there are multiple values
and col is not of an array type.
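
Roughly (names made up):

    CREATE TABLE lp (a int[]) PARTITION BY LIST (a);
    CREATE TABLE lp12 PARTITION OF lp FOR VALUES IN ('{1}', '{2}');
    -- the implied constraint is now of the form  a = '{1}' OR a = '{2}'
    -- rather than the invalid  a = ANY (ARRAY['{1}', '{2}'])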

As a side benefit, this makes constraint exclusion able to prune
a list partition declared to accept a single Boolean value, which
didn't work before.

Amit Langote, reviewed by Etsuro Fujita

Discussion: http://postgr.es/m/97267195-e235-89d1-a41a-c110198dfce9@lab.ntt.co.jp
2018-01-31 15:43:11 -05:00
Andres Freund
c068f87723 Improve bit perturbation in TupleHashTableHash.
The changes in b81b5a96f4 did not fully
address the issue, because the bit-mixing of the IV into the final
hash key didn't prevent clustering in the input data from surviving
in the output data.

This didn't cause a lot of problems because of the additional growth
conditions added in d4c62a6b62.  But as we
want to rein those in due to explosive growth in some edge cases, this
needs to be fixed.

Author: Andres Freund
Discussion: https://postgr.es/m/20171127185700.1470.20362@wrigleys.postgresql.org
Backpatch: 10, where simplehash was introduced
2018-01-29 11:24:57 -08:00
Tom Lane
fb8697b31a Avoid unnecessary use of pg_strcasecmp for already-downcased identifiers.
We have a lot of code in which option names, which from the user's
viewpoint are logically keywords, are passed through the grammar as plain
identifiers, and then matched to string literals during command execution.
This approach avoids making words into lexer keywords unnecessarily.  Some
places matched these strings using plain strcmp, some using pg_strcasecmp.
But the latter should be unnecessary since identifiers would have been
downcased on their way through the parser.  Aside from any efficiency
concerns (probably not a big factor), the lack of consistency in this area
creates a hazard of subtle bugs due to different places coming to different
conclusions about whether two option names are the same or different.
Hence, standardize on using strcmp() to match any option names that are
expected to have been fed through the parser.

This does create a user-visible behavioral change, which is that while
formerly all of these would work:
	alter table foo set (fillfactor = 50);
	alter table foo set (FillFactor = 50);
	alter table foo set ("fillfactor" = 50);
	alter table foo set ("FillFactor" = 50);
now the last case will fail because that double-quoted identifier is
different from the others.  However, none of our documentation says that
you can use a quoted identifier in such contexts at all, and we should
discourage doing so since it would break if we ever decide to parse such
constructs as true lexer keywords rather than poor man's substitutes.
So this shouldn't create a significant compatibility issue for users.

Daniel Gustafsson, reviewed by Michael Paquier, small changes by me

Discussion: https://postgr.es/m/29405B24-564E-476B-98C0-677A29805B84@yesql.se
2018-01-26 18:25:14 -05:00
Alvaro Herrera
05fb5d6619 Ignore partitioned indexes where appropriate
get_relation_info() was too optimistic about opening indexes in
partitioned tables, which would raise errors when any queries were
planned on such tables.  Fix by ignoring any indexes of the partitioned
kind.

CLUSTER (and ALTER TABLE CLUSTER ON) had a similar problem.  Fix by
disallowing these commands in partitioned tables.

Fallout from 8b08f7d482.
2018-01-25 16:12:15 -03:00
Peter Eisentraut
a61116da8b Add tests for record_image_eq and record_image_cmp
record_image_eq was covered a bit by the materialized view code that it
is meant to support, but record_image_cmp was not tested at all.

While we're here, add more tests to record_eq and record_cmp as well,
for symmetry.
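
For example, image comparison distinguishes values that ordinary comparison treats as equal:

    SELECT ROW(1.0::numeric) =  ROW(1.00::numeric);   -- true: the values are equal
    SELECT ROW(1.0::numeric) *= ROW(1.00::numeric);   -- false: the binary images differ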

Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
2018-01-24 13:23:57 -05:00
Tom Lane
bb94ce4d26 Teach reparameterize_path() to handle AppendPaths.
If we're inside a lateral subquery, there may be no unparameterized paths
for a particular child relation of an appendrel, in which case we *must*
be able to create similarly-parameterized paths for each other child
relation, else the planner will fail with "could not devise a query plan
for the given query".  This means that there are situations where we'd
better be able to reparameterize at least one path for each child.

This calls into question the assumption in reparameterize_path() that
it can just punt if it feels like it.  However, the only case that is
known broken right now is where the child is itself an appendrel so that
all its paths are AppendPaths.  (I think possibly I disregarded that in
the original coding on the theory that nested appendrels would get folded
together --- but that only happens *after* reparameterize_path(), so it's
not excused from handling a child AppendPath.)  Given that this code's been
like this since 9.3 when LATERAL was introduced, it seems likely we'd have
heard of other cases by now if there were a larger problem.

Per report from Elvis Pranskevichus.  Back-patch to 9.3.

Discussion: https://postgr.es/m/5981018.zdth1YWmNy@hammer.magicstack.net
2018-01-23 16:50:34 -05:00
Robert Haas
2f17844104 Allow UPDATE to move rows between partitions.
When an UPDATE causes a row to no longer match the partition
constraint, try to move it to a different partition where it does
match the partition constraint.  In essence, the UPDATE is split into
a DELETE from the old partition and an INSERT into the new one.  This
can lead to surprising behavior in concurrency scenarios because
EvalPlanQual rechecks won't work as they normally did; the known
problems are documented.  (There is a pending patch to improve the
situation further, but it needs more review.)
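
A minimal sketch (names made up):

    CREATE TABLE meas (a int) PARTITION BY RANGE (a);
    CREATE TABLE meas_low  PARTITION OF meas FOR VALUES FROM (0) TO (10);
    CREATE TABLE meas_high PARTITION OF meas FOR VALUES FROM (10) TO (20);
    INSERT INTO meas VALUES (5);
    UPDATE meas SET a = 15 WHERE a = 5;
    -- previously failed with a partition constraint violation;
    -- now the row is moved from meas_low to meas_high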

Amit Khandekar, reviewed and tested by Amit Langote, David Rowley,
Rajkumar Raghuwanshi, Dilip Kumar, Amul Sul, Thomas Munro, Álvaro
Herrera, Amit Kapila, and me.  A few final revisions by me.

Discussion: http://postgr.es/m/CAJ3gD9do9o2ccQ7j7+tSgiE1REY65XRiMb=yJO3u3QhyP8EEPQ@mail.gmail.com
2018-01-19 15:33:06 -05:00
Peter Eisentraut
8b9e9644dc Replace AclObjectKind with ObjectType
AclObjectKind was basically just another enumeration for object types,
and we already have a preferred one for that.  It's only used in
aclcheck_error.  By using ObjectType instead, we can also give some more
precise error messages, for example "index" instead of "relation".

Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
2018-01-19 14:01:15 -05:00
Alvaro Herrera
189d0ff588 Fix regression tests for better stability
Per buildfarm
2018-01-19 12:31:34 -03:00
Alvaro Herrera
8b08f7d482 Local partitioned indexes
When CREATE INDEX is run on a partitioned table, create catalog entries
for an index on the partitioned table (which is just a placeholder since
the table proper has no data of its own), and recurse to create actual
indexes on the existing partitions; create them in future partitions
also.

As a convenience gadget, if the new index definition matches some
existing index in partitions, these are picked up and used instead of
creating new ones.  Whichever way these indexes come about, they become
attached to the index on the parent table and are dropped alongside it,
and cannot be dropped in isolation unless they are detached first.
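
A sketch of the basic behavior (names made up):

    CREATE TABLE idx_parted (a int, b text) PARTITION BY RANGE (a);
    CREATE TABLE idx_parted_1 PARTITION OF idx_parted FOR VALUES FROM (1) TO (1000);
    CREATE INDEX ON idx_parted (b);
    -- creates a placeholder index on idx_parted and a real index on idx_parted_1,
    -- attached to the parent index; future partitions get one automatically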

To support pg_dump'ing these indexes, add commands
    CREATE INDEX ON ONLY <table>
(which creates the index on the parent partitioned table, without
recursing) and
    ALTER INDEX ATTACH PARTITION
(which is used after the indexes have been created individually on each
partition, to attach them to the parent index).  These reconstruct prior
database state exactly.

Reviewed-by: (in alphabetical order) Peter Eisentraut, Robert Haas, Amit
	Langote, Jesper Pedersen, Simon Riggs, David Rowley
Discussion: https://postgr.es/m/20171113170646.gzweigyrgg6pwsg4@alvherre.pgsql
2018-01-19 11:49:22 -03:00
Peter Eisentraut
77216cae47 Add tests for session_replication_role
This was hardly tested at all.  The trigger case was lightly tested by
the logical replication tests, but rules and event triggers were not
tested at all.
2018-01-18 11:24:07 -05:00
Tom Lane
90947674fc Fix incorrect handling of subquery pullup in the presence of grouping sets.
If we flatten a subquery whose target list contains constants or
expressions, when those output columns are used in GROUPING SET columns,
the planner was capable of doing the wrong thing by merging a pulled-up
expression into the surrounding expression during const-simplification.
Then the late processing that attempts to match subexpressions to grouping
sets would fail to match those subexpressions to grouping sets, with the
effect that they'd not go to null when expected.
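
A sketch of the symptom:

    SELECT a, x
    FROM (SELECT a, 1 AS x FROM (VALUES (10), (20)) AS v(a)) s
    GROUP BY GROUPING SETS (a, x);
    -- x must read as NULL in the rows grouped only by a;
    -- the merged-constant bug could leave it as 1 there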

To fix, wrap such subquery outputs in PlaceHolderVars, ensuring that
they preserve their separate identity throughout the planner's expression
processing.  This is a bit of a band-aid, because the wrapper defeats
const-simplification even in places where it would be safe to allow.
But a nicer fix would likely be too invasive to back-patch, and the
consequences of the missed optimizations probably aren't large in most
cases.

Back-patch to 9.5 where grouping sets were introduced.

Heikki Linnakangas, with small mods and better test cases by me;
additional review by Andrew Gierth

Discussion: https://postgr.es/m/7dbdcf5c-b5a6-ef89-4958-da212fe10176@iki.fi
2018-01-12 12:24:50 -05:00
Bruce Momjian
bdb70c12b3 C comment: fix "the the" mentions in C comments
Reported-by: Christoph Dreis

Discussion: https://postgr.es/m/007e01d3519e$2734ca10$759e5e30$@freenet.de

Author: Christoph Dreis
2018-01-11 21:50:21 -05:00
Peter Eisentraut
5115854170 Add tests for PL/pgSQL returning unnamed portals as refcursor
Existing tests only covered returning explicitly named portals as
refcursor.  The unnamed cursor case was recently broken without a test
failing.
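
The newly-covered shape, roughly:

    CREATE FUNCTION get_cursor() RETURNS refcursor LANGUAGE plpgsql AS $$
    DECLARE
        c refcursor;               -- no name assigned: an unnamed portal
    BEGIN
        OPEN c FOR SELECT 1;
        RETURN c;
    END;
    $$;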
2018-01-10 16:39:13 -05:00
Andrew Dunstan
11b623dd0a Implement TZH and TZM timestamp format patterns
These are compatible with Oracle and required for the datetime template
language for jsonpath in an upcoming patch.
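
For example:

    SELECT to_char(now(), 'YYYY-MM-DD HH24:MI:SSTZH:TZM');
    -- e.g. 2018-01-09 14:25:05-05:00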

Nikita Glukhov and Andrew Dunstan, reviewed by Pavel Stehule.
2018-01-09 14:25:05 -05:00
Tom Lane
624e440a47 Improve the heuristic for ordering child paths of a parallel append.
Commit ab7271677 introduced code that attempts to order the child
scans of a Parallel Append node in a way that will minimize execution
time, based on total cost and startup cost.  However, it failed to
think hard about what to do when estimated costs are exactly equal;
a case that's particularly likely to occur when comparing on startup
cost.  In such a case the ordering of the child paths would be left
to the whims of qsort, an algorithm that isn't even stable.

We can improve matters by applying the rule used elsewhere in the
planner: if total costs are equal, sort on startup cost, and
vice versa.  When both cost estimates are exactly equal, rather
than letting qsort do something unpredictable, sort based on the
child paths' relids, which should typically result in sorting in
inheritance order.  (The latter provision requires inventing a
qsort-style comparator for bitmapsets, but maybe we'll have use
for that for other reasons in future.)

This results in a few plan changes in the select_parallel test,
but those all look more reasonable than before, when the actual
underlying cost numbers are taken into account.

Discussion: https://postgr.es/m/4944.1515446989@sss.pgh.pa.us
2018-01-09 13:07:52 -05:00
Tom Lane
934c7986f4 Tweak parallel hash join test case in hopes of improving stability.
This seems to make things better on gaur, let's see what the rest
of the buildfarm thinks.

Thomas Munro

Discussion: https://postgr.es/m/CAEepm=1uuT8iJxMEsR=jL+3zEi87DB2v0+0H9o_rUXXCZPZT3A@mail.gmail.com
2018-01-04 01:06:58 -05:00
Tom Lane
3decd150a2 Teach eval_const_expressions() to handle some more cases.
Add some infrastructure (mostly macros) to make it easier to write
typical cases for constant-expression simplification.  Add simplification
processing for ArrayRef, RowExpr, and ScalarArrayOpExpr node types,
which formerly went unsimplified even if all their inputs were constants.
Also teach it to simplify FieldSelect from a composite constant.
Make use of the new infrastructure to reduce the amount of code needed
for the existing ArrayExpr and ArrayCoerceExpr cases.
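
For example (illustrative, not one of the existing test cases):

    EXPLAIN (VERBOSE, COSTS OFF) SELECT ('{1,2,3}'::int[])[2];
    -- the subscripting of a constant array (an ArrayRef) should now fold
    -- to the constant 2 at plan time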

One existing test case changes output as a result of the fact that
RowExpr can now be folded to a constant.  All the new code is exercised
by existing test cases according to gcov, so I feel no need to add
additional tests.

Tom Lane, reviewed by Dmitry Dolgov

Discussion: https://postgr.es/m/3be3b82c-e29c-b674-2163-bf47d98817b1@iki.fi
2018-01-03 12:35:09 -05:00
Bruce Momjian
9d4649ca49 Update copyright for 2018
Backpatch-through: certain files through 9.3
2018-01-02 23:30:12 -05:00
Tom Lane
dd2243f2ad Improve regression tests' code coverage for plpgsql control structures.
I noticed that our code coverage report showed considerable deficiency
in test coverage for PL/pgSQL control statements.  Notably, both
exec_stmt_block and most of the loop control statements had very poor
coverage of handling of return/exit/continue result codes from their
child statements; and exec_stmt_fori was seriously lacking in feature
coverage, having no test that exercised its BY or REVERSE features,
nor verification that its overflow defenses work.
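
For instance, the BY and REVERSE features now exercised look like:

    DO $$
    BEGIN
        FOR i IN REVERSE 10 .. 1 BY 3 LOOP
            RAISE NOTICE 'i = %', i;      -- 10, 7, 4, 1
        END LOOP;
    END;
    $$;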

Now that we have some infrastructure for plpgsql-specific test scripts,
the natural thing to do is make a new script rather than further extend
plpgsql.sql.  So I created a new script plpgsql_control.sql with the
charter to test plpgsql control structures, and moved a few existing
tests there because they fell entirely under that charter.  I then
added new test cases that exercise the bits of code complained of above.

Of the five kinds of loop statements, only exec_stmt_while's result code
handling is fully exercised by these tests.  That would be a deficiency
as things stand, but a follow-on commit will merge the loop statements'
result code handling into one implementation.  So testing each usage of
that implementation separately seems redundant.

In passing, also add a couple test cases to plpgsql.sql to more fully
exercise plpgsql's code related to expanded arrays --- I had thought
that area was sufficiently covered already, but the coverage report
showed a couple of un-executed code paths.

Discussion: https://postgr.es/m/26314.1514670401@sss.pgh.pa.us
2017-12-31 17:04:11 -05:00
Teodor Sigaev
ff963b393c Add polygon opclass for SP-GiST
The polygon opclass uses the compress-method feature of SP-GiST added earlier.
For now it is the only operator class that uses this feature.  SP-GiST actually
indexes the bounding boxes of the input polygons, so some of the supported
operations are lossy.  The opclass reuses most methods of the corresponding
SP-GiST opclass over boxes, treating each bounding box as a point in 4D space.
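
Basic usage, sketched with made-up names:

    CREATE TABLE polys (p polygon);
    CREATE INDEX ON polys USING spgist (p);
    SELECT * FROM polys
    WHERE p && '((0,0),(1,0),(1,1))'::polygon;
    -- with enough data, the planner can use the SP-GiST index for &&;
    -- matches are lossy and therefore rechecked against the heap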

Bump catalog version.

Authors: Nikita Glukhov, Alexander Korotkov with minor editing by me
Reviewed-By: all authors + Darafei Praliaskouski
Discussion: https://www.postgresql.org/message-id/flat/54907069.1030506@sigaev.ru
2017-12-25 18:59:38 +03:00
Tom Lane
c4c2885cbb Fix UNION/INTERSECT/EXCEPT over no columns.
Since 9.4, we've allowed the syntax "select union select" and variants
of that.  However, the planner wasn't expecting a no-column set operation
and ended up treating the set operation as if it were UNION ALL.

Turns out it's trivial to fix in v10 and later; we just need to be careful
about not generating a Sort node with no sort keys.  However, since a weird
corner case like this is never going to be exercised by developers, we'd
better have thorough regression tests if we want to consider it supported.

Per report from Victor Yegorov.

Discussion: https://postgr.es/m/CAGnEbojGJrRSOgJwNGM7JSJZpVAf8xXcVPbVrGdhbVEHZ-BUMw@mail.gmail.com
2017-12-22 12:08:06 -05:00
Andres Freund
1804284042 Add parallel-aware hash joins.
Introduce parallel-aware hash joins that appear in EXPLAIN plans as Parallel
Hash Join with Parallel Hash.  While hash joins could already appear in
parallel queries, they were previously always parallel-oblivious and had a
partial subplan only on the outer side, meaning that the work of the inner
subplan was duplicated in every worker.

After this commit, the planner will consider using a partial subplan on the
inner side too, using the Parallel Hash node to divide the work over the
available CPU cores and combine its results in shared memory.  If the join
needs to be split into multiple batches in order to respect work_mem, then
workers process different batches as much as possible and then work together
on the remaining batches.

The advantages of a parallel-aware hash join over a parallel-oblivious hash
join used in a parallel query are that it:

 * avoids wasting memory on duplicated hash tables
 * avoids wasting disk space on duplicated batch files
 * divides the work of building the hash table over the CPUs

One disadvantage is that there is some communication between the participating
CPUs which might outweigh the benefits of parallelism in the case of small
hash tables.  This is avoided by the planner's existing reluctance to supply
partial plans for small scans, but it may be necessary to estimate
synchronization costs in future if that situation changes.  Another is that
outer batch 0 must be written to disk if multiple batches are required.

A potential future advantage of parallel-aware hash joins is that right and
full outer joins could be supported, since there is a single set of matched
bits for each hashtable, but that is not yet implemented.

A new GUC enable_parallel_hash is defined to control the feature, defaulting
to on.
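A quick illustration of the GUC (a sketch; the actual plan shape depends on
table sizes and other settings):

    SET enable_parallel_hash = on;    -- the default
    -- EXPLAIN of a suitable parallel join may now show:
    --   Parallel Hash Join
    --     ->  ...
    --     ->  Parallel Hash
    SET enable_parallel_hash = off;   -- fall back to parallel-oblivious hash joins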

Author: Thomas Munro
Reviewed-By: Andres Freund, Robert Haas
Tested-By: Rafia Sabih, Prabhat Sahu
Discussion:
    https://postgr.es/m/CAEepm=2W=cOkiZxcg6qiFQP-dHUe09aqTrEMM7yJDrHMhDv_RA@mail.gmail.com
    https://postgr.es/m/CAEepm=37HKyJ4U6XOLi=JgfSHM3o6B-GaeO-6hkOmneTDkH+Uw@mail.gmail.com
2017-12-21 00:43:41 -08:00
Robert Haas
7d3583ad9a Test instrumentation of Hash nodes with parallel query.
Commit 8526bcb2df fixed bugs related
to both Sort and Hash, but only added a test case for Sort.  This
adds a test case for Hash to match.

Thomas Munro

Discussion: http://postgr.es/m/CAEepm=2-LRnfwUBZDqQt+XAcd0af_ykNyyVvP3h1uB1AQ=e-eA@mail.gmail.com
2017-12-19 15:29:08 -05:00
Robert Haas
8526bcb2df Try again to fix accumulation of parallel worker instrumentation.
When a Gather or Gather Merge node is started and stopped multiple
times, accumulate instrumentation data only once, at the end, instead
of after each execution, to avoid recording inflated totals.

Commit 778e78ae9f, the previous attempt
at a fix, instead reset the state after every execution, which worked
for the general instrumentation data but had problems for the additional
instrumentation specific to Sort and Hash nodes.

Report by hubert depesz lubaczewski.  Analysis and fix by Amit Kapila,
following a design proposal from Thomas Munro, with a comment tweak
by me.

Discussion: http://postgr.es/m/20171127175631.GA405@depesz.com
2017-12-19 12:21:56 -05:00
Robert Haas
1d6fb35ad6 Revert "Fix accumulation of parallel worker instrumentation."
This reverts commit 2c09a5c12a.  Per
further discussion, that doesn't seem to be the best possible fix.

Discussion: http://postgr.es/m/CAA4eK1LW2aFKzY3=vwvc=t-juzPPVWP2uT1bpx_MeyEqnM+p8g@mail.gmail.com
2017-12-13 15:19:28 -05:00
Peter Eisentraut
632b03da31 Start a separate test suite for plpgsql
The plpgsql.sql test file in the main regression tests is now by far the
largest after numeric_big, making editing and managing the test cases
very cumbersome.  The other PLs have their own test suites split up into
smaller files by topic.  It would be nice to have that for plpgsql as
well.  So, to get that started, set up test infrastructure in
src/pl/plpgsql/src/ and split out the recently added procedure test
cases into a new file there.  That file now mirrors the test cases added
to the other PLs, making managing those matching tests a bit easier too.

msvc build system changes with help from Michael Paquier
2017-12-13 11:02:29 -05:00
Peter Eisentraut
3d8874224f Fix crash when using CALL on an aggregate
Author: Ashutosh Bapat <ashutosh.bapat@enterprisedb.com>
Reported-by: Rushabh Lathia <rushabh.lathia@gmail.com>
2017-12-13 10:37:48 -05:00
Tom Lane
9edc97b712 Stabilize output of new regression test case.
The test added by commit 390d58135 turns out to have different output
in CLOBBER_CACHE_ALWAYS builds: there's an extra CONTEXT line in the
error message as a result of detecting the error at a different place.
Possibly we should do something to make that more consistent.  But as
a stopgap measure to make the buildfarm green again, adjust the test
to suppress CONTEXT entirely.  We can revert this if we do something
in the backend to eliminate the inconsistency.

Discussion: https://postgr.es/m/31545.1512924904@sss.pgh.pa.us
2017-12-10 12:44:03 -05:00
Tom Lane
390d58135b Fix plpgsql to reinitialize record variables at block re-entry.
If one exits and re-enters a DECLARE ... BEGIN ... END block within a
single execution of a plpgsql function, perhaps due to a surrounding loop,
the declared variables are supposed to get re-initialized to null (or
whatever their initializer is).  But this failed to happen for variables
of type "record", because while exec_stmt_block() expected such variables
to be included in the block's initvarnos list, plpgsql_add_initdatums()
only adds DTYPE_VAR variables to that list.  This bug appears to have
been there since the aboriginal addition of plpgsql to our tree.
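A minimal sketch of the behavior being fixed (names invented); after the fix,
the record variable is null again on each pass through the block:

    do $$
    begin
      for i in 1..2 loop
        declare
          r record;
        begin
          if i = 1 then
            select 1 as x into r;
          else
            raise notice 'r is %', r;   -- now prints <NULL>, not the previous pass's row
          end if;
        end;
      end loop;
    end$$;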

Fix by teaching plpgsql_add_initdatums() to include DTYPE_REC variables
as well.  (We don't need to consider other DTYPEs because they don't
represent separately-stored values.)  I failed to resist the temptation
to make some nearby cosmetic adjustments, too.

No back-patch, because there have not been field complaints, and it
seems possible that somewhere out there someone has code depending
on the incorrect behavior.  In any case this change would have no
impact on correctly-written code.

Discussion: https://postgr.es/m/22994.1512800671@sss.pgh.pa.us
2017-12-09 12:03:04 -05:00
Magnus Hagander
ce1468d02b Fix regression test output
Missed this in the last commit.
2017-12-09 13:45:06 +01:00
Peter Eisentraut
005ac298b1 Prohibit identity columns on typed tables and partitions
Those cases currently crash, and supporting them is more work than
originally thought, so we'll just prohibit these scenarios for now.

Author: Michael Paquier <michael.paquier@gmail.com>
Reviewed-by: Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>
Reported-by: Мансур Галиев <gomer94@yandex.ru>
Bug: #14866
2017-12-08 12:13:04 -05:00
Peter Eisentraut
2d2d06b7e2 Apply identity sequence values on COPY
A COPY into a table should apply identity sequence values just like it
does for ordinary defaults.  This was previously forgotten, leading to
null values being inserted, which in turn would fail because identity
columns have not-null constraints.
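A hypothetical sketch (table and data file invented); the identity column now
receives sequence values during COPY instead of failing its not-null constraint:

    CREATE TABLE copytest (id int GENERATED ALWAYS AS IDENTITY, val text);
    COPY copytest (val) FROM '/tmp/vals.txt';   -- hypothetical data file
    SELECT id, val FROM copytest;               -- id holds sequence values, not nulls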

Author: Michael Paquier <michael.paquier@gmail.com>
Reported-by: Steven Winfield <steven.winfield@cantabcapital.com>
Bug: #14952
2017-12-08 09:18:18 -05:00
Tom Lane
979a36c389 Adjust regression test cases added by commit ab7271677.
I suppose it is a copy-and-paste error that this test doesn't actually
test the "Parallel Append with both partial and non-partial subplans"
case (EXPLAIN alone surely doesn't qualify as a test of executor
behavior).  Fix that.

Also, add cosmetic aliases to make it possible to tell apart these
otherwise-identical test cases in log_statement output.
2017-12-05 22:40:43 -05:00
Robert Haas
ab72716778 Support Parallel Append plan nodes.
When we create an Append node, we can spread out the workers over the
subplans instead of piling on to each subplan one at a time, which
should typically be a bit more efficient, both because the startup
cost of any plan executed entirely by one worker is paid only once and
also because of reduced contention.  We can also construct Append
plans using a mix of partial and non-partial subplans, which may allow
for parallelism in places that otherwise couldn't support it.
Unfortunately, this patch doesn't handle the important case of
parallelizing UNION ALL by running each branch in a separate worker;
the executor infrastructure is added here, but more planner work is
needed.

Amit Khandekar, Robert Haas, Amul Sul, reviewed and tested by
Ashutosh Bapat, Amit Langote, Rafia Sabih, Amit Kapila, and
Rajkumar Raghuwanshi.

Discussion: http://postgr.es/m/CAJ3gD9dy0K_E8r727heqXoBmWZ83HwLFwdcaSSmBQ1+S+vRuUQ@mail.gmail.com
2017-12-05 17:28:39 -05:00
Robert Haas
2c09a5c12a Fix accumulation of parallel worker instrumentation.
When a Gather or Gather Merge node is started and stopped multiple
times, the old code wouldn't reset the shared state between executions,
potentially resulting in dramatically inflated instrumentation data
for nodes beneath it.  (The per-worker instrumentation ended up OK,
I think, but the overall totals were inflated.)

Report by hubert depesz lubaczewski.  Analysis and fix by Amit Kapila,
reviewed and tweaked a bit by me.

Discussion: http://postgr.es/m/20171127175631.GA405@depesz.com
2017-12-05 14:35:33 -05:00
Andres Freund
5bcf389ecf Fix EXPLAIN ANALYZE of hash join when the leader doesn't participate.
If a hash join appears in a parallel query, there may be no hash table
available for explain.c to inspect even though a hash table may have
been built in other processes.  This could happen either because
parallel_leader_participation was set to off or because the leader
happened to hit the end of the outer relation immediately (even though
the complete relation is not empty) and decided not to build the hash
table.

Commit bf11e7ee introduced a way for workers to exchange
instrumentation via the DSM segment for Sort nodes even though they
are not parallel-aware.  This commit does the same for Hash nodes, so
that explain.c has a way to find instrumentation data from an
arbitrary participant that actually built the hash table.

Author: Thomas Munro
Reviewed-By: Andres Freund
Discussion: https://postgr.es/m/CAEepm%3D3DUQC2-z252N55eOcZBer6DPdM%3DFzrxH9dZc5vYLsjaA%40mail.gmail.com
2017-12-05 10:55:56 -08:00
Robert Haas
87c37e3291 Re-allow INSERT .. ON CONFLICT DO NOTHING on partitioned tables.
Commit 8355a011a0 was reverted in
f05230752d, but this attempt is
hopefully better-considered: we now pass the correct value to
ExecOpenIndices, which should avoid the crash that we hit before.
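A minimal sketch of the re-enabled case (names invented):

    CREATE TABLE parted (a int) PARTITION BY LIST (a);
    CREATE TABLE parted_1 PARTITION OF parted FOR VALUES IN (1);
    INSERT INTO parted VALUES (1) ON CONFLICT DO NOTHING;   -- no longer rejected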

Amit Langote, reviewed by Simon Riggs and by me.  Some final
editing by me.

Discussion: http://postgr.es/m/7ff1e8ec-dc39-96b1-7f47-ff5965dceeac@lab.ntt.co.jp
2017-12-01 12:53:21 -05:00
Robert Haas
59c8078744 Fix uninitialized memory reference.
Without this, when partdesc->nparts == 0, we end up calling
ExecBuildSlotPartitionKeyDescription without initializing values
and isnull.

Reported by Coverity via Michael Paquier.  Patch by Michael Paquier,
reviewed and revised by Amit Langote.

Discussion: http://postgr.es/m/CAB7nPqQ3mwkdMoPY-ocgTpPnjd8TKOadMxdTtMLvEzF8480Zfg@mail.gmail.com
2017-12-01 10:05:00 -05:00
Peter Eisentraut
e4128ee767 SQL procedures
This adds a new object type "procedure" that is similar to a function
but does not have a return type and is invoked by the new CALL statement
instead of SELECT or similar.  This implementation is aligned with the
SQL standard and compatible with or similar to other SQL implementations.

This commit adds new commands CALL, CREATE/ALTER/DROP PROCEDURE, as well
as ALTER/DROP ROUTINE that can refer to either a function or a
procedure (or an aggregate function, as an extension to SQL).  There is
also support for procedures in various utility commands such as COMMENT
and GRANT, as well as support in pg_dump and psql.  Support for defining
procedures is available in all the languages supplied by the core
distribution.

While this commit is mainly syntax sugar around existing functionality,
future features will rely on having procedures as a separate object
type.
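A minimal sketch of the new syntax (procedure, table, and parameter names
invented):

    CREATE TABLE tbl (a integer);
    CREATE PROCEDURE insert_data(x integer)
    LANGUAGE SQL
    AS $$ INSERT INTO tbl VALUES (x); $$;

    CALL insert_data(1);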

Reviewed-by: Andrew Dunstan <andrew.dunstan@2ndquadrant.com>
2017-11-30 11:03:20 -05:00
Noah Misch
e21a556e13 Fix non-GNU makefiles for AIX make.
Invoking the Makefile without an explicit target was building every
possible target instead of just the "all" target.  Back-patch to 9.3
(all supported versions).
2017-11-30 00:57:22 -08:00
Tom Lane
7ca25b7de6 Fix neqjoinsel's behavior for semi/anti join cases.
Previously, this function estimated the selectivity as 1 minus eqjoinsel()
for the negator equality operator, regardless of join type (I think there
was an expectation that eqjoinsel would handle the join type).  But
actually this is completely wrong for semijoin cases: the fraction of the
LHS that has a non-matching row is not one minus the fraction of the LHS
that has a matching row.  In reality a semijoin with <> will nearly always
succeed: it can only fail when the RHS is empty, or it contains a single
distinct value that is equal to the particular LHS value, or the LHS value
is null.  The only one of those things we should have much confidence in
estimating is the fraction of LHS values that are null, so let's just take
the selectivity as 1 minus outer nullfrac.

Per coding convention, antijoin should be estimated the same as semijoin.
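For illustration, a hypothetical semijoin of this shape (table and column names
invented) now has its selectivity estimated as 1 minus the null fraction of t1.a:

    EXPLAIN SELECT * FROM t1 WHERE EXISTS (SELECT 1 FROM t2 WHERE t1.a <> t2.b);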

Arguably this is a bug fix, but in view of the lack of field complaints
and the risk of destabilizing plans, no back-patch.

Thomas Munro, reviewed by Ashutosh Bapat

Discussion: https://postgr.es/m/CAEepm=270ze2hVxWkJw-5eKzc3AB4C9KpH3L2kih75R5pdSogg@mail.gmail.com
2017-11-29 22:00:37 -05:00
Andres Freund
fa330f9adf Add some regression tests that exercise hash join code.
Although hash joins are already tested by many queries, these tests
systematically cover the four different states we can reach as part of
the strategy for respecting work_mem.

Author: Thomas Munro
Reviewed-By: Andres Freund
2017-11-29 16:06:50 -08:00
Robert Haas
8d4e70a63b Add extensive tests for partition pruning.
Currently, partition pruning happens via constraint exclusion, but
there are pending patches to replace that with a different and
hopefully faster mechanism.  To be sure that we don't change behavior
without realizing it, add extensive test coverage.

Note that not all of these behaviors are optimal; in some cases,
partitions are not pruned even though it would be safe to do so.
These tests therefore serve to memorialize the current state rather
than the ideal state.  Patches that improve things can update the test
results as appropriate.

Amit Langote, adjusted by me.  Review and testing of the larger patch
set of which this is a part by Ashutosh Bapat, David Rowley, Dilip
Kumar, Jesper Pedersen, Rajkumar Raghuwanshi, Beena Emerson, Amul Sul,
and Kyotaro Horiguchi.

Discussion: http://postgr.es/m/098b9c71-1915-1a2a-8d52-1a7a50ce79e8@lab.ntt.co.jp
2017-11-29 15:25:29 -05:00
Robert Haas
2d7950f222 If a range-partitioned table has no default partition, reject null keys.
Commit 4e5fe9ad19 introduced this
problem.  Also add a test so it doesn't get broken again.
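A hypothetical sketch of the now-rejected case (names invented):

    CREATE TABLE measurements (logdate date) PARTITION BY RANGE (logdate);
    CREATE TABLE measurements_2017 PARTITION OF measurements
      FOR VALUES FROM ('2017-01-01') TO ('2018-01-01');
    INSERT INTO measurements VALUES (NULL);   -- now fails: no partition accepts a null key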

Report by Rushabh Lathia.  Fix by Amit Langote.  Reviewed by Rushabh
Lathia and Amul Sul.  Tweaked by me.

Discussion: http://postgr.es/m/CAGPqQf0Y1iJyk4QJBdMf=pS9i6Q0JUMM_h5-qkR3OMJ-e04PyA@mail.gmail.com
2017-11-28 14:11:16 -05:00
Robert Haas
7b88d63a91 Add null test to partition constraint for default range partitions.
Non-default range partitions have a constraint which include null
tests, and both default and non-default list partitions also have a
constraint which includes null tests, but for some reason this was
missed for default range partitions.  This could cause the partition
constraint to evaluate to false for rows that were (correctly) routed
to that partition by insert tuple routing, which could in turn
cause constraint exclusion to prune the default partition in cases
where it should not.

Amit Langote, reviewed by Kyotaro Horiguchi

Discussion: http://postgr.es/m/ba7aaeb1-4399-220e-70b4-62eade1522d0@lab.ntt.co.jp
2017-11-28 10:51:01 -05:00
Tom Lane
9b63c13f0a Repair failure with SubPlans in multi-row VALUES lists.
When nodeValuesscan.c was written, it was impossible to have a SubPlan in
VALUES --- any sub-SELECT there would have to be uncorrelated and thereby
would produce an InitPlan instead.  We therefore took a shortcut in the
logic that throws away a ValuesScan's per-row expression evaluation data
structures.  This was broken by the introduction of LATERAL however; a
sub-SELECT containing a lateral reference produces a correlated SubPlan.

The cleanest fix for this would be to give up the optimization of
discarding the expression eval state.  But that still seems pretty
unappetizing for long VALUES lists.  It seems to work to just prevent
the subexpressions from hooking into the ValuesScan node's subPlan
list, so let's do that and see how well it works.  (If this breaks,
due to additional connections between the subexpressions and the outer
query structures, we might consider compromises like throwing away data
only for VALUES rows not containing SubPlans.)

Per bug #14924 from Christian Duta.  Back-patch to 9.3 where LATERAL
was introduced.

Discussion: https://postgr.es/m/20171124120836.1463.5310@wrigleys.postgresql.org
2017-11-25 14:15:48 -05:00
Tom Lane
e842791b0f Fix unstable regression test added by commits 59b71c6fe et al.
The query didn't really have a preferred index, leading to platform-
specific choices of which one to use.  Adjust it to make sure tenk1_hundred
is always chosen.

Per buildfarm.
2017-11-24 00:29:20 -05:00
Andres Freund
59b71c6fe6 Fix handling of NULLs returned by aggregate combine functions.
When a strict aggregate combine function, as used in multi-stage/parallel
aggregation, returned NULL, we didn't check for that, and went on to invoke
the combine function with a NULL state in the next round, despite its being
strict.

The equivalent code invoking normal transition functions has a check
for that situation, which did not get copied in a7de3dc5c3. Fix the
bug by adding the equivalent check.

Based on a quick look I could not find any strict combine functions in
core actually returning NULL, and it doesn't seem very likely external
users have done so. So this isn't likely to have caused issues in
practice.

Add tests verifying that transition / combine functions returning NULL
are handled correctly.

Reported-By: Andres Freund
Author: Andres Freund
Discussion: https://postgr.es/m/20171121033642.7xvmjqrl4jdaaat3@alap3.anarazel.de
Backpatch: 9.6, where parallel aggregation was introduced
2017-11-23 17:15:27 -08:00
Simon Riggs
3bae43ca4d Sort default partition to bottom of psql \d+
Minor patch to change sort order only

Author: Ashutosh Bapat
Reviewed-by:  Álvaro Herrera, Simon Riggs
2017-11-23 05:17:47 +11:00
Simon Riggs
05b6ec39d7 Show partition info from psql \d+
Author: Amit Langote, Ashutosh Bapat
Reviewed-by:  Álvaro Herrera, Simon Riggs
2017-11-23 05:10:39 +11:00
Robert Haas
f3b0897a12 Fix multiple problems with satisfies_hash_partition.
Fix the function header comment to describe the actual behavior.
Check that table OID, modulus, and remainder arguments are not NULL
before accessing them.  Check that the modulus and remainder are
sensible.  If the table OID doesn't exist, return NULL instead of
emitting an internal error, similar to what we do elsewhere.  Check
that the actual argument types match, or at least are binary coercible
to, the expected argument types.  Correctly handle invocation of this
function using the VARIADIC syntax.  Add regression tests.

Robert Haas and Amul Sul, per a report by Andreas Seltenreich and
subsequent followup investigation.

Discussion: http://postgr.es/m/871sl4sdrv.fsf@ansel.ydns.eu
2017-11-21 13:06:32 -05:00
Simon Riggs
56f3468622 Reduce test variability for toast_tuple_target test 2017-11-20 12:09:40 +11:00
Simon Riggs
c2513365a0 Parameter toast_tuple_target controls TOAST for new rows
Specifies the point at which we try to move long column values
into TOAST tables.

No effect on existing rows.
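A hypothetical usage sketch (table name and threshold values invented); the
parameter is a per-table storage option:

    CREATE TABLE messages (body text) WITH (toast_tuple_target = 256);
    ALTER TABLE messages SET (toast_tuple_target = 512);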

Discussion: https://postgr.es/m/CANP8+jKsVmw6CX6YP9z7zqkTzcKV1+Uzr3XjKcZW=2Ya00OyQQ@mail.gmail.com

Author: Simon Riggs <simon@2ndQudrant.com>
Reviewed-by: Andrew Dunstan <andrew.dunstan@2ndQuadrant.com>
2017-11-20 09:50:10 +11:00
Tom Lane
63ca86318d Fix quoted-substring handling in format parsing for to_char/to_number/etc.
This code evidently intended to treat backslash as an escape character
within double-quoted substrings, but it was sufficiently confused that
cases like ..."foo\\"... did not work right: the second backslash
managed to quote the double-quote after it, despite being quoted itself.
Rewrite to get that right, while preserving the existing behavior
outside double-quoted substrings, which is that backslash isn't special
except in the combination \".

Comparing to Oracle, it seems that their version of to_char() for
timestamps allows literal alphanumerics only within double quotes, while
non-alphanumerics are allowed outside quotes; backslashes aren't special
anywhere; there is no way at all to emit a literal double quote.
(Bizarrely, their to_char() for numbers is different; it doesn't allow
literal text at all AFAICT.)  The fact that they don't treat backslash
as special justifies our existing behavior for backslash outside double
quotes.  I considered making backslash inside double quotes act the same
way (ie, special only if before "), which in a green field would be a
more consistent behavior.  But that would likely break more existing SQL
code than what this patch does.

Add some test cases illustrating this behavior.  (Only the last new
case actually changes behavior in this commit.)

Little of this behavior was documented, either, so fix that.

Discussion: https://postgr.es/m/3626.1510949486@sss.pgh.pa.us
2017-11-18 12:16:37 -05:00
Tom Lane
e87d4965bd Prevent to_number() from losing data when template doesn't match exactly.
Non-data template patterns would consume characters whether or not those
characters were what the pattern expected, for example
	SELECT TO_NUMBER('1234', '9,999');
produced 134 because the '2' got eaten by the comma pattern.  This seems
undesirable, not least because it doesn't happen in Oracle.  For the ','
and 'G' template patterns, we can fix this by consuming characters only
if they match what the pattern would output.  For non-data patterns such
as 'L' and 'TH', it seems impractical to tighten things up to the point of
consuming only exact matches to what the pattern would output; but we can
improve matters quite a lot by redefining the behavior as "consume only
characters that aren't digits, signs, decimal point, or comma".
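With this change, the ',' pattern in the example above no longer consumes the
non-matching digit, so the query should now behave as sketched here:

    SELECT TO_NUMBER('1234', '9,999');   -- now yields 1234 rather than 134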

Also, fix it so that the behavior is to consume the number of *characters*
the pattern would output, not the number of *bytes*.  The old coding would
do surprising things with non-ASCII currency symbols, for example.  (It
would be good to apply that rule for literal text as well, but this commit
only fixes it for non-data patterns.)

Oliver Ford, reviewed by Thomas Munro and Nathan Wagner, and whacked around
a bit more by me

Discussion: https://postgr.es/m/CAGMVOdvpbMqPf9XWNzOwBpzJfErkydr_fEGhmuDGa015z97mwg@mail.gmail.com
2017-11-17 12:04:13 -05:00
Robert Haas
be92769e4e Set proargmodes for satisfies_hash_partition.
It appears that proargmodes should always be set for variadic
functions, but satisfies_hash_partition had it as NULL.  In addition to
fixing the problem, add a regression test to guard against future
mistakes of this type.
2017-11-17 11:53:00 -05:00
Robert Haas
e89a71fb44 Pass InitPlan values to workers via Gather (Merge).
If a PARAM_EXEC parameter is used below a Gather (Merge) but the InitPlan
that computes it is attached to or above the Gather (Merge), force the
value to be computed before starting parallelism and pass it down to all
workers.  This allows us to use parallelism in cases where it previously
would have had to be rejected as unsafe.  We do - in this case - lose the
optimization that the value is only computed if it's actually used.  An
alternative strategy would be to have the first worker that needs the value
compute it, but one downside of that approach is that we'd then need to
select a parallel-safe path to compute the parameter value; it couldn't for
example contain a Gather (Merge) node.  At some point in the future, we
might want to consider both approaches.

Independent of that consideration, there is a great deal more work that
could be done to make more kinds of PARAM_EXEC parameters parallel-safe.
This infrastructure could be used to allow a Gather (Merge) on the inner
side of a nested loop (although that's not a very appealing plan) and
cases where the InitPlan is attached below the Gather (Merge) could be
addressed as well using various techniques.  But this is a good start.

Amit Kapila, reviewed and revised by me.  Reviewing and testing from
Kuntal Ghosh, Haribabu Kommi, and Tushar Ahuja.

Discussion: http://postgr.es/m/CAA4eK1LV0Y1AUV4cUCdC+sYOx0Z0-8NAJ2Pd9=UKsbQ5Sr7+JQ@mail.gmail.com
2017-11-16 12:06:14 -05:00
Robert Haas
e5253fdc4f Add parallel_leader_participation GUC.
Sometimes, for testing, it's useful to have the leader do nothing but
read tuples from workers; and it's possible that could work out better
even in production.
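Usage is simply, for example:

    SET parallel_leader_participation = off;   -- leader only gathers tuples from workers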

Thomas Munro, reviewed by Amit Kapila and by me.  A few final tweaks
by me.

Discussion: http://postgr.es/m/CAEepm=2U++Lp3bNTv2Bv_kkr5NE2pOyHhxU=G0YTa4ZhSYhHiw@mail.gmail.com
2017-11-15 08:23:18 -05:00
Tom Lane
7518049980 Prevent int128 from requiring more than MAXALIGN alignment.
Our initial work with int128 neglected alignment considerations, an
oversight that came back to bite us in bug #14897 from Vincent Lachenal.
It is unsurprising that int128 might have a 16-byte alignment requirement;
what's slightly more surprising is that even notoriously lax Intel chips
sometimes enforce that.

Raising MAXALIGN seems out of the question: the costs in wasted disk and
memory space would be significant, and there would also be an on-disk
compatibility break.  Nor does it seem very practical to try to allow some
data structures to have more-than-MAXALIGN alignment requirement, as we'd
have to push knowledge of that throughout various code that copies data
structures around.

The only way out of the box is to make type int128 conform to the system's
alignment assumptions.  Fortunately, gcc supports that via its
__attribute__(aligned()) pragma; and since we don't currently support
int128 on non-gcc-workalike compilers, we shouldn't be losing any platform
support this way.

Although we could have just done pg_attribute_aligned(MAXIMUM_ALIGNOF) and
called it a day, I did a little bit of extra work to make the code more
portable than that: it will also support int128 on compilers without
__attribute__(aligned()), if the native alignment of their 128-bit-int
type is no more than that of int64.

Add a regression test case that exercises the one known instance of the
problem, in parallel aggregation over a bigint column.

This will need to be back-patched, along with the preparatory commit
91aec93e6.  But let's see what the buildfarm makes of it first.

Discussion: https://postgr.es/m/20171110185747.31519.28038@wrigleys.postgresql.org
2017-11-14 15:03:55 -05:00
Robert Haas
44ae64c388 Push target list evaluation through Gather Merge.
We already do this for Gather, but it got overlooked for Gather Merge.

Amit Kapila, with review and minor revisions by Rushabh Lathia
and by me.

Discussion: http://postgr.es/m/CAA4eK1KUC5Uyu7qaifxrjpHxbSeoQh3yzwN3bThnJsmJcZ-qtA@mail.gmail.com
2017-11-13 16:37:42 -05:00
Noah Misch
4b865aee25 Fix previous commit's test, for non-UTF8 databases with non-XML builds.
To ensure stable output, catch one more configuration-specific error.
Back-patch to 9.3, like the commit that added the test.
2017-11-11 13:07:46 -08:00
Noah Misch
2918fcedbf Ignore XML declaration in xpath_internal(), for UTF8 databases.
When a value contained an XML declaration naming some other encoding,
this function interpreted UTF8 bytes as the named encoding, yielding
mojibake.  xml_parse() already has similar logic.  This would be
necessary but not sufficient for non-UTF8 databases, so preserve
behavior there until the xpath facility can support such databases
comprehensively.  Back-patch to 9.3 (all supported versions).

Pavel Stehule and Noah Misch

Discussion: https://postgr.es/m/CAFj8pRC-dM=tT=QkGi+Achkm+gwPmjyOayGuUfXVumCxkDgYWg@mail.gmail.com
2017-11-11 11:10:53 -08:00
Peter Eisentraut
0e1539ba0d Add some const decorations to prototypes
Reviewed-by: Fabien COELHO <coelho@cri.ensmp.fr>
2017-11-10 13:38:57 -05:00
Robert Haas
1aba8e651a Add hash partitioning.
Hash partitioning is useful when you want to partition a growing data
set evenly.  This can be useful to keep table sizes reasonable, which
makes maintenance operations such as VACUUM faster, or to enable
partition-wise join.
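A minimal sketch of the new syntax (table and partition names invented):

    CREATE TABLE clients (id int, name text) PARTITION BY HASH (id);
    CREATE TABLE clients_p0 PARTITION OF clients FOR VALUES WITH (MODULUS 2, REMAINDER 0);
    CREATE TABLE clients_p1 PARTITION OF clients FOR VALUES WITH (MODULUS 2, REMAINDER 1);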

At present, we still depend on constraint exclusion for partitioning
pruning, and the shape of the partition constraints for hash
partitioning is such that that doesn't work.  Work is underway to fix
that, which should both improve performance and make partitioning
pruning work with hash partitioning.

Amul Sul, reviewed and tested by Dilip Kumar, Ashutosh Bapat, Yugo
Nagata, Rajkumar Raghuwanshi, Jesper Pedersen, and by me.  A few
final tweaks also by me.

Discussion: http://postgr.es/m/CAAJ_b96fhpJAP=ALbETmeLk1Uni_GFZD938zgenhF49qgDTjaQ@mail.gmail.com
2017-11-09 18:07:44 -05:00
Tom Lane
5ecc0d738e Restrict lo_import()/lo_export() via SQL permissions not hard-wired checks.
While it's generally unwise to give permissions on these functions to
anyone but a superuser, we've been moving away from hard-wired permission
checks inside functions in favor of using the SQL permission system to
control access.  Bring lo_import() and lo_export() into compliance with
that approach.

In particular, this removes the manual configuration option
ALLOW_DANGEROUS_LO_FUNCTIONS.  That dates back to 1999 (commit 4cd4a54c8);
it's unlikely anyone has used it in many years.  Moreover, if you really
want such behavior, now you can get it with GRANT ... TO PUBLIC instead.
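For example, the old behavior can be approximated (if you really want it) with
something like:

    GRANT EXECUTE ON FUNCTION lo_import(text), lo_export(oid, text) TO PUBLIC;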

Michael Paquier

Discussion: https://postgr.es/m/CAB7nPqRHmNOYbETnc_2EjsuzSM00Z+BWKv9sy6tnvSd5gWT_JA@mail.gmail.com
2017-11-09 12:36:58 -05:00
Tom Lane
b574228715 Add tests for json{b}_populate_recordset() crash case.
The problem reported as CVE-2017-15098 was already resolved in HEAD by
commit 37a795a60, but let's add the relevant test cases anyway.

Michael Paquier and Tom Lane, per a report from David Rowley.

Security: CVE-2017-15098
2017-11-06 10:29:37 -05:00
Dean Rasheed
87b2ebd352 Always require SELECT permission for ON CONFLICT DO UPDATE.
The update path of an INSERT ... ON CONFLICT DO UPDATE requires SELECT
permission on the columns of the arbiter index, but it failed to check
for that in the case of an arbiter specified by constraint name.

In addition, for a table with row level security enabled, it failed to
check updated rows against the table's SELECT policies when the update
path was taken (regardless of how the arbiter index was specified).

Backpatch to 9.5 where ON CONFLICT DO UPDATE and RLS were introduced.

Security: CVE-2017-15099
2017-11-06 09:19:22 +00:00
Noah Misch
c66b438db6 Add a temp-install prerequisite to "check"-like targets not having one.
Makefile.global assigns this prerequisite to every target named "check",
but similar targets must mention it explicitly.  Affected targets
failed, tested $PATH binaries, or tested a stale temporary installation.
The src/test/modules examples worked properly when called as "make -C
src/test/modules/$FOO check", but "make -j" allowed the test to start
before the temporary installation was in place.  Back-patch to 9.5,
where commit dcae5facca introduced the
shared temp-install.
2017-11-05 18:51:08 -08:00
Tom Lane
7c70996ebf Allow bitmap scans to operate as index-only scans when possible.
If we don't have to return any columns from heap tuples, and there's
no need to recheck qual conditions, and the heap page is all-visible,
then we can skip fetching the heap page altogether.

Skip prefetching pages too, when possible, on the assumption that the
recheck flag will remain the same from one page to the next.  While that
assumption is hardly bulletproof, it seems like a good bet most of the
time, and better than prefetching pages we don't need.

This commit installs the executor infrastructure, but doesn't change
any planner cost estimates, thus possibly causing bitmap scans to
not be chosen in cases where this change renders them the best choice.
I (tgl) am not entirely convinced that we need to account for this
behavior in the planner, because I think typically the bitmap scan would
get chosen anyway if it's the best bet.  In any case the submitted patch
took way too many shortcuts, resulting in too many clearly-bad choices,
to be committable.

Alexander Kuzmenkov, reviewed by Alexey Chernyshov, and whacked around
rather heavily by me.

Discussion: https://postgr.es/m/239a8955-c0fc-f506-026d-c837e86c827b@postgrespro.ru
2017-11-01 17:38:20 -04:00
Tom Lane
af20e2d728 Fix ALTER TABLE code to update domain constraints when needed.
It's possible for dropping a column, or altering its type, to require
changes in domain CHECK constraint expressions; but the code was
previously only expecting to find dependent table CHECK constraints.
Make the necessary adjustments.

This is a fairly old oversight, but it's a lot easier to encounter
the problem in the context of domains over composite types than it
was before.  Given the lack of field complaints, I'm not going to
bother with a back-patch, though I'd be willing to reconsider that
decision if someone does complain.

Patch by me, reviewed by Michael Paquier

Discussion: https://postgr.es/m/30656.1509128130@sss.pgh.pa.us
2017-11-01 13:32:23 -04:00
Robert Haas
cf7ab13bfb Fix code related to partitioning schemes for dropped columns.
The entry in appinfo->translated_vars can be NULL; if so, we must avoid
dereferencing it.

Ashutosh Bapat

Discussion: http://postgr.es/m/CAFjFpReL7+1ien=-21rhjpO3bV7aAm1rQ8XgLVk2csFagSzpZQ@mail.gmail.com
2017-10-31 14:43:05 +05:30
Robert Haas
35f059e9bd Add sanity check for pg_proc.provariadic
Check that the values from pg_proc.h match what ProcedureCreate would
have done.

Robert Haas and Amul Sul

Discussion: http://postgr.es/m/CA+TgmoZ_UGXfq5ygeDDMdUSJ4J_VX7nFnjC6mfY6BgOJ3qZCmw@mail.gmail.com
2017-10-31 09:52:39 +05:30
Robert Haas
846fcc8516 Fix problems with the "role" GUC and parallel query.
Without this fix, dropping a role can sometimes result in parallel
query failures in sessions that have used "SET ROLE" to assume the
dropped role, even if that setting isn't active any more.

Report by Pavan Deolasee.  Patch by Amit Kapila, reviewed by me.

Discussion: http://postgr.es/m/CABOikdOomRcZsLsLK+Z+qENM1zxyaWnAvFh3MJZzZnnKiF+REg@mail.gmail.com
2017-10-29 12:58:40 +05:30
Tom Lane
d5b760ecb5 Fix crash when columns have been added to the end of a view.
expandRTE() supposed that an RTE_SUBQUERY subquery must have exactly
as many non-junk tlist items as the RTE has column aliases for it.
This was true at the time the code was written, and is still true so
far as parse analysis is concerned --- but when the function is used
during planning, the subquery might have appeared through insertion
of a view that now has more columns than it did when the outer query
was parsed.  This results in a core dump if, for instance, we have
to expand a whole-row Var that references the subquery.

To avoid crashing, we can either stop expanding the RTE when we run
out of aliases, or invent new aliases for the added columns.  While
the latter might be more useful, the former is consistent with what
expandRTE() does for composite-returning functions in the RTE_FUNCTION
case, so it seems like we'd better do it that way.

Per bug #14876 from Samuel Horwitz.  This has been busted since commit
ff1ea2173 allowed views to acquire more columns, so back-patch to all
supported branches.

Discussion: https://postgr.es/m/20171026184035.1471.82810@wrigleys.postgresql.org
2017-10-27 17:28:54 -04:00
Robert Haas
682ce911f8 Allow parallel query for prepared statements with generic plans.
This was always intended to work, but due to an oversight in
max_parallel_hazard_walker, it didn't.  In testing, we missed the
fact that it was only working for custom plans, where the parameter
value has been substituted for the parameter itself early enough
that everything worked.  In a generic plan, the Param node survives
and must be treated as parallel-safe.  SerializeParamList provides
for the transmission of parameter values to workers.

Amit Kapila with help from Kuntal Ghosh.  Some changes by me.

Discussion: http://postgr.es/m/CAA4eK1+_BuZrmVCeua5Eqnm4Co9DAXdM5HPAOE2J19ePbR912Q@mail.gmail.com
2017-10-27 22:22:39 +02:00
Tom Lane
6784d7a1dc Rethink the dependencies recorded for FieldSelect/FieldStore nodes.
On closer investigation, commits f3ea3e3e8 et al were a few bricks
shy of a load.  What we need is not so much to lock down the result
type of a FieldSelect, as to lock down the existence of the column
it's trying to extract.  Otherwise, we can break it by dropping that
column.  The dependency on the result type is then held indirectly
through the column, and doesn't need to be recorded explicitly.

Out of paranoia, I left in the code to record a dependency on the
result type, but it's used only if we can't identify the pg_class OID
for the column.  That shouldn't ever happen right now, AFAICS, but
it seems possible that in future the input node could be marked as
being of type RECORD rather than some specific composite type.

Likewise for FieldStore.

Like the previous patch, back-patch to all supported branches.

Discussion: https://postgr.es/m/22571.1509064146@sss.pgh.pa.us
2017-10-27 12:19:09 -04:00
Tom Lane
37a795a60b Support domains over composite types.
This is the last major omission in our domains feature: you can now
make a domain over anything that's not a pseudotype.
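A minimal sketch of the new capability (type and domain names invented):

    CREATE TYPE complexpair AS (r float8, i float8);
    CREATE DOMAIN dcomplex AS complexpair CHECK ((VALUE).r IS NOT NULL);
    SELECT ROW(1.0, 2.0)::complexpair::dcomplex;   -- domain constraints are applied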

The major complication from an implementation standpoint is that places
that might be creating tuples of a domain type now need to be prepared
to apply domain_check().  It seems better that unprepared code fail
with an error like "<type> is not composite" than that it silently fail
to apply domain constraints.  Therefore, relevant infrastructure like
get_func_result_type() and lookup_rowtype_tupdesc() has been adjusted
to treat domain-over-composite as a distinct case that unprepared code
won't recognize, rather than just transparently treating it the same
as plain composite.  This isn't a 100% solution to the possibility of
overlooked domain checks, but it catches most places.

In passing, improve typcache.c's support for domains (it can now cache
the identity of a domain's base type), and rewrite the argument handling
logic in jsonfuncs.c's populate_record[set]_worker to reduce duplicative
per-call lookups.

I believe this is code-complete so far as the core and contrib code go.
The PLs need varying amounts of work, which will be tackled in followup
patches.

Discussion: https://postgr.es/m/4206.1499798337@sss.pgh.pa.us
2017-10-26 13:47:45 -04:00
Tom Lane
08f1e1f0a4 Make setrefs.c match by ressortgroupref even for plain Vars.
Previously, we skipped using search_indexed_tlist_for_sortgroupref()
if the tlist expression being sought in the child plan node was merely
a Var.  This is purely an optimization, based on the theory that
search_indexed_tlist_for_var() is faster, and one copy of a Var should
be as good as another.  However, the GROUPING SETS patch broke the
latter assumption: grouping columns containing the "same" Var can
sometimes have different outputs, as shown in the test case added here.
So do it the hard way whenever a ressortgroupref marking exists.

(If this seems like a bottleneck, we could imagine building a tlist index
data structure for ressortgroupref values, as we do for Vars.  But I'll
let that idea go until there's some evidence it's worthwhile.)

Back-patch to 9.6.  The problem also exists in 9.5 where GROUPING SETS
came in, but this patch is insufficient to resolve the problem in 9.5:
there is some obscure dependency on the upper-planner-pathification
work that happened in 9.6.  Given that this is such a weird corner case,
and no end users have complained about it, it doesn't seem worth the work
to develop a fix for 9.5.

Patch by me, per a report from Heikki Linnakangas.  (This does not fix
Heikki's original complaint, just the follow-on one.)

Discussion: https://postgr.es/m/aefc657e-edb2-64d5-6df1-a0828f6e9104@iki.fi
2017-10-26 12:17:40 -04:00
Andrew Dunstan
18fc4ecf4a Process variadic arguments consistently in json functions
json_build_object, json_build_array, and their jsonb equivalents did not
correctly process explicit VARIADIC arguments.  They are modified to use
the new extract_variadic_args() utility function which abstracts away
the details of the call method.
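For illustration (values invented), an explicit VARIADIC call of this sort is
now handled the same as the spelled-out form:

    SELECT json_build_object(VARIADIC ARRAY['a', '1', 'b', '2']);
    -- equivalent to json_build_object('a', '1', 'b', '2')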

Michael Paquier, reviewed by Tom Lane and Dmitry Dolgov.

Backpatch to 9.5 for the jsonb fixes and 9.4 for the json fixes, as
that's where they originated.
2017-10-25 07:34:00 -04:00
Tom Lane
896eb5efbd In the planner, delete joinaliasvars lists after we're done with them.
Although joinaliasvars lists coming out of the parser are quite simple,
those lists can contain arbitrarily complex expressions after subquery
pullup.  We do not perform expression preprocessing on them, meaning that
expressions in those lists will not meet the expectations of later phases
of the planner (for example, that they do not contain SubLinks).  This had
been thought pretty harmless, since we don't intentionally touch those
lists in later phases --- but Andreas Seltenreich found a case in which
adjust_appendrel_attrs() could recurse into a joinaliasvars list and then
die on its assertion that it never sees a SubLink.  We considered a couple
of localized fixes to prevent that specific case from looking at the
joinaliasvars lists, but really this seems like a generic hazard for all
expression processing in the planner.  Therefore, probably the best answer
is to delete the joinaliasvars lists from the parsetree at the end of
expression preprocessing, so that there are no reachable expressions that
haven't been through preprocessing.

The case Andreas found seems to be harmless in non-Assert builds, and so
far there are no field reports suggesting that there are user-visible
effects in other cases.  I considered back-patching this anyway, but
it turns out that Andreas' test doesn't fail at all in 9.4-9.6, because
in those versions adjust_appendrel_attrs contains code (added in commit
842faa714 and removed again in commit 215b43cdc) to process SubLinks
rather than complain about them.  Barring discovery of another path by
which unprocessed joinaliasvars lists can cause trouble, the most
prudent compromise seems to be to patch this into v10 but not further.

Patch by me, with thanks to Amit Langote for initial investigation
and review.

Discussion: https://postgr.es/m/87r2tvt9f1.fsf@ansel.ydns.eu
2017-10-24 18:42:47 -04:00
Tom Lane
36ea99c84d Fix typcache's failure to treat ranges as container types.
Like the similar logic for arrays and records, it's necessary to examine
the range's subtype to decide whether the range type can support hashing.
We can omit checking the subtype for btree-defined operations, though,
since range subtypes are required to have those operations.  (Possibly
that simplification for btree cases led us to overlook that it does
not apply for hash cases.)

This is only an issue if the subtype lacks hash support, which is not
true of any built-in range type, but it's easy to demonstrate a problem
with a range type over, eg, money: you can get a "could not identify
a hash function" failure when the planner is misled into thinking that
hash join or aggregation would work.

This was born broken, so back-patch to all supported branches.
2017-10-20 17:12:27 -04:00
Alvaro Herrera
4b95cc1dc3 Add more tests for reloptions
This is preparation for a future patch to extensively change how
reloptions work.

Author: Nikolay Shaplov
Reviewed-by: Michael Paquier
Discussion: https://postgr.es/m/2615372.orqtEn8VGB@x200m
2017-10-19 14:22:05 +02:00
Tom Lane
7421f4b89a Fix incorrect handling of CTEs and ENRs as DML target relations.
setTargetTable threw an error if the proposed target RangeVar's relname
matched any visible CTE or ENR.  This breaks backwards compatibility in
the CTE case, since pre-v10 we never looked for a CTE here at all, so that
CTE names did not mask regular tables.  It does seem like a good idea to
throw an error for the ENR case, though, thus causing ENRs to mask tables
for this purpose; ENRs are new in v10 so we're not breaking existing code,
and we may someday want to allow them to be the targets of DML.

To fix that, replace use of getRTEForSpecialRelationTypes, which was
overkill anyway, with use of scanNameSpaceForENR.

A second problem was that the check neglected to verify null schemaname,
so that a CTE or ENR could incorrectly be thought to match a qualified
RangeVar.  That happened because getRTEForSpecialRelationTypes relied
on its caller to have checked for null schemaname.  Even though the one
remaining caller got it right, this is obviously bug-prone, so move
the check inside getRTEForSpecialRelationTypes.

Also, revert commit 18ce3a4ab's extremely poorly thought out decision to
add a NULL return case to parserOpenTable --- without either documenting
that or adjusting any of the callers to check for it.  The current bug
seems to have arisen in part due to working around that bad idea.

In passing, remove the one-line shim functions transformCTEReference and
transformENRReference --- they don't seem to be adding any clarity or
functionality.

Per report from Hugo Mercier (via Julien Rouhaud).  Back-patch to v10
where the bug was introduced.

Thomas Munro, with minor editing by me

Discussion: https://postgr.es/m/CAOBaU_YdPVH+PTtiKSSLOiiW3mVDYsnNUekK+XPbHXiP=wrFLA@mail.gmail.com
2017-10-16 17:56:54 -04:00
Tom Lane
be0ebb65f5 Allow the built-in ordered-set aggregates to share transition state.
The built-in OSAs all share the same transition function, so they can
share transition state as long as the final functions cooperate to not
do the sort step more than once.  To avoid running the tuplesort object
in randomAccess mode unnecessarily, add a bit of infrastructure to
nodeAgg.c to let the aggregate functions find out whether the transition
state is actually being shared or not.

This doesn't work for the hypothetical aggregates, since those inject
a hypothetical row that isn't traceable to the shared input state.
So they remain marked aggfinalmodify = 'w'.

Discussion: https://postgr.es/m/CAB4ELO5RZhOamuT9Xsf72ozbenDLLXZKSk07FiSVsuJNZB861A@mail.gmail.com
2017-10-16 15:51:23 -04:00
Tom Lane
c3dfe0fec0 Repair breakage of aggregate FILTER option.
An aggregate's input expression(s) are not supposed to be evaluated
at all for a row where its FILTER test fails ... but commit 8ed3f11bb
overlooked that requirement.  Reshuffle so that aggregates having a
filter clause evaluate their arguments separately from those without.
This still gets the benefit of doing only one ExecProject in the
common case of multiple Aggrefs, none of which have filters.
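A hypothetical example of the requirement (table and column invented); with the
fix, the aggregate argument is not evaluated for rows the filter rejects:

    SELECT sum(1/x) FILTER (WHERE x <> 0) FROM t;   -- must not divide by zero for x = 0 rows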

While at it, arrange for filter clauses to be included in the common
ExecProject evaluation, thus perhaps buying a little bit even when
there are filters.

Back-patch to v10 where the bug was introduced.

Discussion: https://postgr.es/m/30065.1508161354@sss.pgh.pa.us
2017-10-16 15:24:36 -04:00
Tom Lane
4de2d4fba3 Explicitly track whether aggregate final functions modify transition state.
Up to now, there's been hard-wired assumptions that normal aggregates'
final functions never modify their transition states, while ordered-set
aggregates' final functions always do.  This has always been a bit
limiting, and in particular it's getting in the way of improving the
built-in ordered-set aggregates to allow merging of transition states.
Therefore, let's introduce catalog and CREATE AGGREGATE infrastructure
that lets the finalfn's behavior be declared explicitly.

There are now three possibilities for the finalfn behavior: it's purely
read-only, it trashes the transition state irrecoverably, or it changes
the state in such a way that no more transfn calls are possible but the
state can still be passed to other, compatible finalfns.  There are no
examples of this third case today, but we'll shortly make the built-in
OSAs act like that.

This change allows user-defined aggregates to explicitly disclaim support
for use as window functions, and/or to prevent transition state merging,
if their implementations cannot handle that.  While it was previously
possible to handle the window case with a run-time error check, there was
not any way to prevent transition state merging, which in retrospect is
something commit 804163bc2 should have provided for.  But better late
than never.

In passing, split out pg_aggregate.c's extern function declarations into
a new header file pg_aggregate_fn.h, similarly to what we've done for
some other catalog headers, so that pg_aggregate.h itself can be safe
for frontend files to include.  This lets pg_dump use the symbolic
names for relevant constants.

Discussion: https://postgr.es/m/4834.1507849699@sss.pgh.pa.us
2017-10-14 15:21:39 -04:00
Joe Conway
b81eba6a65 Add missing options to pg_regress help() output
A few command line options accepted by pg_regress were not being output
by help(), including --help itself. Add that one, as well as --version
and --bindir, and the corresponding short options for the first two.

We could consider this for backpatching, but it did not seem worthwhile
and no one else advocated for it, so apply only to master for now.

Author: Joe Conway
Reviewed-By: Tom Lane
Discussion: https://postgr.es/m/dd519469-06d7-2662-83ef-c926f6c4f0f1%40joeconway.com
2017-10-13 16:06:41 -07:00
Robert Haas
ad4a7ed099 Synchronize error messages.
Commits 6476b26115
and 14f67a8ee2 didn't use quite the
same error message for what is basically the same situation.

Amit Langote, pared back a bit by me.

Discussion: http://postgr.es/m/54dc76d0-3b5b-ba5a-27dc-fb31a3975b61@lab.ntt.co.jp
2017-10-12 15:14:22 -04:00
Tom Lane
52328727be Prevent sharing transition states between ordered-set aggregates.
This ought to work, but the built-in OSAs are not capable of coping,
because their final-functions destructively modify their transition
state (specifically, the contained tuplesort object).  That was fine
when those functions were written, but commit 804163bc2 moved the
goalposts without telling orderedsetaggs.c.

We should fix the built-in OSAs to support this, but it will take
a little work, especially if we don't want to sacrifice performance
in the normal non-shared-state case.  Given that it took a year after
9.6 release for anyone to notice this bug, we should not prioritize
sharable-state over nonsharable-state performance.  And a proper fix
is likely to be more complicated than we'd want to back-patch, too.

Therefore, let's just put in this stop-gap patch to prevent nodeAgg.c
from choosing to use shared state for OSAs.  We can revert it in HEAD
when we get a better fix.

Report from Lukas Eder, diagnosis by me, patch by David Rowley.
Back-patch to 9.6 where the problem was introduced.

Discussion: https://postgr.es/m/CAB4ELO5RZhOamuT9Xsf72ozbenDLLXZKSk07FiSVsuJNZB861A@mail.gmail.com
2017-10-11 22:18:10 -04:00
Peter Eisentraut
e9e0f78bde Fix whitespace 2017-10-11 09:15:20 -04:00
Tom Lane
8ec5429e2f Reduce "X = X" to "X IS NOT NULL", if it's easy to do so.
If the operator is a strict btree equality operator, and X isn't volatile,
then the clause must yield true for any non-null value of X, or null if X
is null.  At top level of a WHERE clause, we can ignore the distinction
between false and null results, so it's valid to simplify the clause to
"X IS NOT NULL".  This is a useful improvement mainly because we'll get
a far better selectivity estimate in most cases.
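For illustration (table and column invented), assuming = here is a strict btree
equality operator and a is not volatile:

    EXPLAIN SELECT * FROM t WHERE a = a;   -- now planned as if written: WHERE a IS NOT NULL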

Because such cases seldom arise in well-written queries, it is unappetizing
to expend a lot of planner cycles looking for them ... but it turns out
that there's a place we can shoehorn this in practically for free, because
equivclass.c already has to detect and reject candidate equivalences of the
form X = X.  That doesn't catch every place that it would be valid to
simplify to X IS NOT NULL, but it catches the typical case.  Working harder
doesn't seem justified.

Patch by me, reviewed by Petr Jelinek

Discussion: https://postgr.es/m/CAMjNa7cC4X9YR-vAJS-jSYCajhRDvJQnN7m2sLH1wLh-_Z2bsw@mail.gmail.com
2017-10-08 12:23:32 -04:00
Tom Lane
b11f0d36b2 Improve pg_regress's error reporting for schedule-file problems.
The previous coding here trashed the line buffer as it scanned it,
making it impossible to print the source line in subsequent error
messages.  With a few save/restore/strdup pushups we can improve
that situation.

In passing, move the free'ing of the various strings that are collected
while processing one set of tests down to the bottom of the loop.
That's simpler, less surprising, and should make valgrind less unhappy
about the strings that were previously leaked by the last iteration.
2017-10-07 18:04:25 -04:00
Tom Lane
ef73a8162a Enforce our convention about max number of parallel regression tests.
We have a very old rule that parallel_schedule should have no more
than twenty tests in any one parallel group, so as to provide a
bound on the number of concurrently running processes needed to
pass the tests.  But people keep forgetting the rule, so let's add
a few lines of code to check it.

Discussion: https://postgr.es/m/a37e9c57-22d4-1b82-1270-4501cd2e984e@2ndquadrant.com
2017-10-07 17:20:09 -04:00
Tom Lane
1fdab4d5aa Clean up sloppy maintenance of regression test schedule files.
The partition_join test was added to a parallel group that was already
at the maximum of 20 concurrent tests.  The hash_func test wasn't
added to serial_schedule at all.  The identity and partition_join tests
were added to serial_schedule with the aid of a dartboard, rather than
maintaining consistency with parallel_schedule.

There are proposals afoot to make these sorts of errors harder to make,
but in the meantime let's fix the ones already in place.

Discussion: https://postgr.es/m/a37e9c57-22d4-1b82-1270-4501cd2e984e@2ndquadrant.com
2017-10-07 13:19:13 -04:00
Robert Haas
f49842d1ee Basic partition-wise join functionality.
Instead of joining two partitioned tables in their entirety we can, if
it is an equi-join on the partition keys, join the matching partitions
individually.  This involves teaching the planner about "other join"
rels, which are related to regular join rels in the same way that
other member rels are related to baserels.  This can use significantly
more CPU time and memory than regular join planning, because there may
now be a set of "other" rels not only for every base relation but also
for every join relation.  In most practical cases, this probably
shouldn't be a problem, because (1) it's probably unusual to join many
tables each with many partitions using the partition keys for all
joins and (2) if you do that scenario then you probably have a big
enough machine to handle the increased memory cost of planning and (3)
the resulting plan is highly likely to be better, so what you spend in
planning you'll make up on the execution side.  All the same, for now,
turn this feature off by default.

Currently, we can only perform joins between two tables whose
partitioning schemes are absolutely identical.  It would be nice to
cope with other scenarios, such as extra partitions on one side or the
other with no match on the other side, but that will have to wait for
a future patch.

Ashutosh Bapat, reviewed and tested by Rajkumar Raghuwanshi, Amit
Langote, Rafia Sabih, Thomas Munro, Dilip Kumar, Antonin Houska, Amit
Khandekar, and by me.  A few final adjustments by me.

Discussion: http://postgr.es/m/CAFjFpRfQ8GrQvzp3jA2wnLqrHmaXna-urjm_UY9BqXj=EaDTSA@mail.gmail.com
Discussion: http://postgr.es/m/CAFjFpRcitjfrULr5jfuKWRPsGUX0LQ0k8-yG0Qw2+1LBGNpMdw@mail.gmail.com
2017-10-06 11:11:10 -04:00
Robert Haas
6476b26115 On CREATE TABLE, consider skipping validation of subpartitions.
This is just like commit 14f67a8ee2, but
for CREATE TABLE .. PARTITION OF rather than ALTER TABLE .. ATTACH PARTITION.

Jeevan Ladhe, with test case changes by me.

Discussion: http://postgr.es/m/CAOgcT0MWwG8WBw8frFMtRYHAgDD=tpt6U7WcsO_L2k0KYpm4Jg@mail.gmail.com
2017-10-05 13:23:28 -04:00
Robert Haas
14f67a8ee2 On attach, consider skipping validation of subpartitions individually.
If the table attached as a partition is itself partitioned, individual
partitions might have constraints strong enough to skip scanning the
table even if the table actually attached does not.  This is pretty
cheap to check, and possibly a big win if it works out.

Amit Langote, with test case changes by me.

Discussion: http://postgr.es/m/1f08b844-0078-aa8d-452e-7af3bf77d05f@lab.ntt.co.jp
2017-10-05 13:06:46 -04:00
Robert Haas
c31e9d4baf Improve error message when skipping scan of default partition.
It seems like a good idea to clearly distinguish between skipping the
scan of the new partition itself and skipping the scan of the default
partition.

Amit Langote

Discussion: http://postgr.es/m/1f08b844-0078-aa8d-452e-7af3bf77d05f@lab.ntt.co.jp
2017-10-05 12:19:40 -04:00
Robert Haas
e9baa5e9fa Allow DML commands that create tables to use parallel query.
Haribabu Kommi, reviewed by Dilip Kumar and Rafia Sabih.  Various
cosmetic changes by me to explain why this appears to be safe but
allowing inserts in parallel mode in general wouldn't be.  Also, I
removed the REFRESH MATERIALIZED VIEW case from Haribabu's patch,
since I'm not convinced that case is OK, and hacked on the
documentation somewhat.

Discussion: http://postgr.es/m/CAJrrPGdo5bak6qnPWe8Kpi8g_jfQEs-G4SYmG9y+OFaw2-dPvA@mail.gmail.com
2017-10-05 11:40:48 -04:00
Peter Eisentraut
036166f26e Document and use SPI_result_code_string()
A lot of semi-internal code just prints out numeric SPI error codes,
which is not very helpful.  We already have an API function to convert
the codes to a string, so let's make more use of that.

Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
2017-10-04 22:14:21 -04:00
Tom Lane
11d8d72c27 Allow multiple tables to be specified in one VACUUM or ANALYZE command.
Not much to say about this; does what it says on the tin.

However, formerly, if there was a column list then the ANALYZE action was
implied; now it must be specified, or you get an error.  This is because
it would otherwise be a bit unclear what the user meant if some tables
have column lists and some don't.
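
For example (table and column names are illustrative):

    VACUUM foo, bar;
    ANALYZE foo, bar (x);        -- per-table column lists are allowed
    VACUUM foo (x);              -- now an error: a column list no longer implies ANALYZE
    VACUUM (ANALYZE) foo (x);    -- spell it out instead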

Nathan Bossart, reviewed by Michael Paquier and Masahiko Sawada, with some
editorialization by me

Discussion: https://postgr.es/m/E061A8E3-5E3D-494D-94F0-E8A9B312BBFC@amazon.com
2017-10-03 18:53:44 -04:00
Tom Lane
c12d570fa1 Support arrays over domains.
Allowing arrays with a domain type as their element type was left un-done
in the original domain patch, but not for any very good reason.  This
omission leads to such surprising results as array_agg() not working on
a domain column, because the parser can't identify a suitable output type
for the polymorphic aggregate.
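
A minimal sketch of the formerly-failing case (names are illustrative):

    CREATE DOMAIN posint AS int CHECK (VALUE > 0);
    CREATE TABLE dtab (d posint);
    INSERT INTO dtab VALUES (1), (2);
    SELECT array_agg(d) FROM dtab;   -- previously failed; now yields a posint[] array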

In order to fix this, first clean up the APIs of coerce_to_domain() and
some internal functions in parse_coerce.c so that we consistently pass
around a CoercionContext along with CoercionForm.  Previously, we sometimes
passed an "isExplicit" boolean flag instead, which is strictly less
information; and coerce_to_domain() didn't even get that, but instead had
to reverse-engineer isExplicit from CoercionForm.  That's contrary to the
documentation in primnodes.h that says that CoercionForm only affects
display and not semantics.  I don't think this change fixes any live bugs,
but it makes things more consistent.  The main reason for doing it though
is that now build_coercion_expression() receives ccontext, which it needs
in order to be able to recursively invoke coerce_to_target_type().

Next, reimplement ArrayCoerceExpr so that the node does not directly know
any details of what has to be done to the individual array elements while
performing the array coercion.  Instead, the per-element processing is
represented by a sub-expression whose input is a source array element and
whose output is a target array element.  This simplifies life in
parse_coerce.c, because it can build that sub-expression by a recursive
invocation of coerce_to_target_type().  The executor now handles the
per-element processing as a compiled expression instead of hard-wired code.
The main advantage of this is that we can use a single ArrayCoerceExpr to
handle as many as three successive steps per element: base type conversion,
typmod coercion, and domain constraint checking.  The old code used two
stacked ArrayCoerceExprs to handle type + typmod coercion, which was pretty
inefficient, and adding yet another array deconstruction to do domain
constraint checking seemed very unappetizing.

In the case where we just need a single, very simple coercion function,
doing this straightforwardly leads to a noticeable increase in the
per-array-element runtime cost.  Hence, add an additional shortcut evalfunc
in execExprInterp.c that skips unnecessary overhead for that specific form
of expression.  The runtime speed of simple cases is within 1% or so of
where it was before, while cases that previously required two levels of
array processing are significantly faster.

Finally, create an implicit array type for every domain type, as we do for
base types, enums, etc.  Everything except the array-coercion case seems
to just work without further effort.

Tom Lane, reviewed by Andrew Dunstan

Discussion: https://postgr.es/m/9852.1499791473@sss.pgh.pa.us
2017-09-30 13:40:56 -04:00
Peter Eisentraut
2a14b9609d psql: Update \d sequence display
For \d sequencename, the psql code just did SELECT * FROM sequencename
to get the information to display, but this does not contain much
interesting information anymore in PostgreSQL 10, because the metadata
has been moved to a separate system catalog.

This patch creates a newly designed sequence display that is not merely
an extension of the general relation/table display as it was previously.

Example:

PostgreSQL 9.6:

=> \d foobar
           Sequence "public.foobar"
    Column     |  Type   |        Value
---------------+---------+---------------------
 sequence_name | name    | foobar
 last_value    | bigint  | 1
 start_value   | bigint  | 1
 increment_by  | bigint  | 1
 max_value     | bigint  | 9223372036854775807
 min_value     | bigint  | 1
 cache_value   | bigint  | 1
 log_cnt       | bigint  | 0
 is_cycled     | boolean | f
 is_called     | boolean | f

PostgreSQL 10 before this change:

=> \d foobar
   Sequence "public.foobar"
   Column   |  Type   | Value
------------+---------+-------
 last_value | bigint  | 1
 log_cnt    | bigint  | 0
 is_called  | boolean | f

New:

=> \d foobar
                           Sequence "public.foobar"
  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache
--------+-------+---------+---------------------+-----------+---------+-------
 bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1

Reviewed-by: Fabien COELHO <coelho@cri.ensmp.fr>
2017-09-29 13:37:30 -04:00
Robert Haas
8b304b8b72 Remove replacement selection sort.
At the time replacement_sort_tuples was introduced, there were still
cases where replacement selection sort noticeably outperformed using
quicksort even for the first run.  However, those cases seem to have
evaporated as a result of further improvements made since that time
(and perhaps also advances in CPU technology).  So remove replacement
selection and the controlling GUC entirely.  This makes tuplesort.c
noticeably simpler and probably paves the way for further
optimizations someone might want to do later.

Peter Geoghegan, with review and testing by Tomas Vondra and me.

Discussion: https://postgr.es/m/CAH2-WzmmNjG_K0R9nqYwMq3zjyJJK+hCbiZYNGhAy-Zyjs64GQ@mail.gmail.com
2017-09-29 10:25:44 -04:00
Tom Lane
7769fc000a Fix behavior when converting a float infinity to numeric.
float8_numeric() and float4_numeric() failed to consider the possibility
that the input is an IEEE infinity.  The results depended on the
platform-specific behavior of sprintf(): on most platforms you'd get
something like

ERROR:  invalid input syntax for type numeric: "inf"

but at least on Windows it's possible for the conversion to succeed and
deliver a finite value (typically 1), due to a nonstandard output format
from sprintf and lack of syntax error checking in these functions.

Since our numeric type lacks the concept of infinity, a suitable conversion
is impossible; the best thing to do is throw an explicit error before
letting sprintf do its thing.
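
For example, a conversion like the following now fails cleanly on every platform (the cast chain is just illustrative):

    SELECT 'infinity'::float8::numeric;
    -- now raises an explicit error instead of platform-dependent misbehavior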

While at it, let's use snprintf not sprintf.  Overrunning the buffer
should be impossible if sprintf does what it's supposed to, but this
is cheap insurance against a stack smash if it doesn't.

Problem reported by Taiki Kondo.  Patch by me based on fix suggestion
from KaiGai Kohei.  Back-patch to all supported branches.

Discussion: https://postgr.es/m/12A9442FBAE80D4E8953883E0B84E088C8C7A2@BPXM01GP.gisp.nec.co.jp
2017-09-27 17:05:53 -04:00
Tom Lane
28e0727076 Revert to 9.6 treatment of ALTER TYPE enumtype ADD VALUE.
This reverts commit 15bc038f9, along with the followon commits 1635e80d3
and 984c92074 that tried to clean up the problems exposed by bug #14825.
The result was incomplete because it failed to address parallel-query
requirements.  With 10.0 release so close upon us, now does not seem like
the time to be adding more code to fix that.  I hope we can un-revert this
code and add the missing parallel query support during the v11 cycle.

Back-patch to v10.

Discussion: https://postgr.es/m/20170922185904.1448.16585@wrigleys.postgresql.org
2017-09-27 16:14:43 -04:00
Tom Lane
9a50a93c7b Improve wording of error message added in commit 714805010.
Per suggestions from Peter Eisentraut and David Johnston.
Back-patch, like the previous commit.

Discussion: https://postgr.es/m/E1dv9jI-0006oT-Fn@gemulon.postgresql.org
2017-09-26 15:25:56 -04:00
Tom Lane
984c92074d Remove heuristic same-transaction test from check_safe_enum_use().
The blacklist mechanism added by the preceding commit directly fixes
most of the practical cases that the same-transaction test was meant
to cover.  What remains is use-cases like

	begin;
	create type e as enum('x');
	alter type e add value 'y';
	-- use 'y' somehow
	commit;

However, because the same-transaction test is heuristic, it fails on
small variants of that, such as renaming the type or changing its
owner.  Rather than try to explain the behavior to users, let's
remove it and just have a rule that the newly added value can't be
used before being committed, full stop.  Perhaps later it will be
worth the implementation effort and overhead to have a more accurate
test for type-was-created-in-this-transaction.  We'll wait for some
field experience with v10 before deciding to do that.

Back-patch to v10.

Discussion: https://postgr.es/m/20170922185904.1448.16585@wrigleys.postgresql.org
2017-09-26 13:14:46 -04:00
Tom Lane
1635e80d30 Use a blacklist to distinguish original from add-on enum values.
Commit 15bc038f9 allowed ALTER TYPE ADD VALUE to be executed inside
transaction blocks, by disallowing the use of the added value later
in the same transaction, except under limited circumstances.  However,
the test for "limited circumstances" was heuristic and could reject
references to enum values that were created during CREATE TYPE AS ENUM,
not just later.  This breaks the use-case of restoring pg_dump scripts
in a single transaction, as reported in bug #14825 from Balazs Szilfai.

We can improve this by keeping a "blacklist" table of enum value OIDs
created by ALTER TYPE ADD VALUE during the current transaction.  Any
visible-but-uncommitted value whose OID is not in the blacklist must
have been created by CREATE TYPE AS ENUM, and can be used safely
because it could not have a lifespan shorter than its parent enum type.
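
A rough sketch of the resulting behavior (type and value names are invented):

    BEGIN;
    CREATE TYPE rainbow AS ENUM ('red', 'green');
    SELECT 'red'::rainbow;        -- OK: created by CREATE TYPE, so not blacklisted
    ALTER TYPE rainbow ADD VALUE 'blue';
    -- SELECT 'blue'::rainbow;    -- would still fail until the transaction commits
    COMMIT;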

This change also removes the restriction that a renamed enum value
can't be used before being committed (unless it was on the blacklist).

Andrew Dunstan, with cosmetic improvements by me.
Back-patch to v10.

Discussion: https://postgr.es/m/20170922185904.1448.16585@wrigleys.postgresql.org
2017-09-26 13:14:46 -04:00
Tom Lane
716ea626a8 Make construct_[md_]array return a valid empty array for zero-size input.
If construct_array() or construct_md_array() were given a dimension of
zero, they'd produce an array that contains no elements but has positive
dimension.  This violates a general expectation that empty arrays should
have ndims = 0; in particular, while arrays like this print as empty,
they don't compare equal to other empty arrays.

Up to now we've expected callers to avoid making such calls and instead
be careful to call construct_empty_array() if there would be no elements.
But this has always been an easily missed case, and we've repeatedly had to
fix callers to do it right.  In bug #14826, Erwin Brandstetter pointed out
yet another such oversight, in ts_lexize(); and a bit of examination of
other call sites found at least two more with similar issues.  So let's
fix the problem centrally and permanently by changing these two functions
to construct a proper zero-D empty array whenever the array would be empty.

This renders a few explicit calls of construct_empty_array() redundant,
but the only such place I found that really seemed worth changing was in
ExecEvalArrayExpr().

Although this fixes some very old bugs, no back-patch: the problem is
pretty minor and the risk of changing behavior seems to outweigh the
benefit in stable branches.

Discussion: https://postgr.es/m/20170923125723.1448.39412@wrigleys.postgresql.org
Discussion: https://postgr.es/m/20570.1506198383@sss.pgh.pa.us
2017-09-25 11:55:24 -04:00
Andrew Dunstan
d57c7a7c50 Provide a test for variable existence in psql
"\if :{?variable_name}" will be translated to "\if TRUE" if the variable
exists and "\if FALSE" otherwise. Thus it will be possible to execute code
conditionally on the existence of the variable, regardless of its value.
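
For example, in a psql script (the variable name is illustrative):

    \if :{?myvar}
      \echo 'myvar is defined'
    \else
      \echo 'myvar is not defined'
    \endif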

Fabien Coelho, with some review by Robins Tharakan and some light text
editing by me.

Discussion: https://postgr.es/m/alpine.DEB.2.20.1708260835520.3627@lancre
2017-09-21 19:02:23 -04:00
Tom Lane
7148050105 Give a better error for duplicate entries in VACUUM/ANALYZE column list.
Previously, the code didn't think about this case and would just try to
analyze such a column twice.  That would fail at the point of inserting
the second version of the pg_statistic row, with obscure error messages
like "duplicate key value violates unique constraint" or "tuple already
updated by self", depending on context and PG version.  We could allow
the case by ignoring duplicate column specifications, but it seems better
to reject it explicitly.
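
For example (names are illustrative):

    ANALYZE foo (x, x);   -- now rejected up front with a clear error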

The bogus error messages seem like arguably a bug, so back-patch to
all supported versions.

Nathan Bossart, per a report from Michael Paquier, and whacked
around a bit by me.

Discussion: https://postgr.es/m/E061A8E3-5E3D-494D-94F0-E8A9B312BBFC@amazon.com
2017-09-21 18:13:11 -04:00
Robert Haas
57eebca03a Fix create_lateral_join_info to handle dead relations properly.
Commit 0a480502b0 broke it.

Report by Andreas Seltenreich.  Fix by Ashutosh Bapat.

Discussion: http://postgr.es/m/874ls2vrnx.fsf@ansel.ydns.eu
2017-09-20 10:20:10 -04:00
Tom Lane
2d484f9b05 Remove no-op GiST support functions in the core GiST opclasses.
The preceding patch allowed us to remove useless GiST support functions.
This patch actually does that for all the no-op cases in the core GiST
code.  This buys us whatever performance gain is to be had, and more
importantly exercises the preceding patch.

There remain no-op functions in the contrib GiST opclasses, but those
will take more work to remove.

Discussion: https://postgr.es/m/CAJEAwVELVx9gYscpE=Be6iJxvdW5unZ_LkcAaVNSeOwvdwtD=A@mail.gmail.com
2017-09-19 23:32:59 -04:00
Tom Lane
fd31f9f033 Ensure that BEFORE STATEMENT triggers fire the right number of times.
Commit 0f79440fb introduced mechanism to keep AFTER STATEMENT triggers
from firing more than once per statement, which was formerly possible
if more than one FK enforcement action had to be applied to a given
table.  Add a similar mechanism for BEFORE STATEMENT triggers, so that
we don't have the unexpected situation of firing BEFORE STATEMENT
triggers more often than AFTER STATEMENT.

As with the previous patch, back-patch to v10.

Discussion: https://postgr.es/m/22315.1505584992@sss.pgh.pa.us
2017-09-17 12:16:38 -04:00
Tom Lane
0f79440fb0 Fix SQL-spec incompatibilities in new transition table feature.
The standard says that all changes of the same kind (insert, update, or
delete) caused in one table by a single SQL statement should be reported
in a single transition table; and by that, they mean to include foreign key
enforcement actions cascading from the statement's direct effects.  It's
also reasonable to conclude that if the standard had wCTEs, they would say
that effects of wCTEs applying to the same table as each other or the outer
statement should be merged into one transition table.  We weren't doing it
like that.
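
For context, a sketch of the kind of trigger involved (table and function names are invented; audit_fn is assumed to exist):

    CREATE TRIGGER orders_audit
      AFTER UPDATE ON orders
      REFERENCING OLD TABLE AS old_rows NEW TABLE AS new_rows
      FOR EACH STATEMENT
      EXECUTE PROCEDURE audit_fn();
    -- with this patch, rows changed by FK cascades from one statement are merged
    -- into the same transition table(s) wherever possible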

Hence, arrange to merge tuples from multiple update actions into a single
transition table as much as we can.  There is a problem, which is that if
the firing of FK enforcement triggers and after-row triggers with
transition tables is interspersed, we might need to report more tuples
after some triggers have already seen the transition table.  It seems like
a bad idea for the transition table to be mutable between trigger calls.
There's no good way around this without a major redesign of the FK logic,
so for now, resolve it by opening a new transition table each time this
happens.

Also, ensure that AFTER STATEMENT triggers fire just once per statement,
or once per transition table when we're forced to make more than one.
Previous versions of Postgres have allowed each FK enforcement query
to cause an additional firing of the AFTER STATEMENT triggers for the
referencing table, but that's certainly not per spec.  (We're still
doing multiple firings of BEFORE STATEMENT triggers, though; is that
something worth changing?)

Also, forbid using transition tables with column-specific UPDATE triggers.
The spec requires such transition tables to show only the tuples for which
the UPDATE trigger would have fired, which means maintaining multiple
transition tables or else somehow filtering the contents at readout.
Maybe someday we'll bother to support that option, but it looks like a
lot of trouble for a marginal feature.

The transition tables are now managed by the AfterTriggers data structures,
rather than being directly the responsibility of ModifyTable nodes.  This
removes a subtransaction-lifespan memory leak introduced by my previous
band-aid patch 3c4359521.

In passing, refactor the AfterTriggers data structures to reduce the
management overhead for them, by using arrays of structs rather than
several parallel arrays for per-query-level and per-subtransaction state.

I failed to resist the temptation to do some copy-editing on the SGML
docs about triggers, above and beyond merely documenting the effects
of this patch.

Back-patch to v10, because we don't want the semantics of transition
tables to change post-release.

Patch by me, with help and review from Thomas Munro.

Discussion: https://postgr.es/m/20170909064853.25630.12825@wrigleys.postgresql.org
2017-09-16 13:20:36 -04:00
Robert Haas
9361f6f54e After a MINVALUE/MAXVALUE bound, allow only more of the same.
In the old syntax, which used UNBOUNDED, we had a similar restriction,
but commit d363d42bb9, which changed the
syntax, eliminated it.  Put it back.

Patch by me, reviewed by Dean Rasheed.

Discussion: http://postgr.es/m/CA+Tgmobs+pLPC27tS3gOpEAxAffHrq5w509cvkwTf9pF6cWYbg@mail.gmail.com
2017-09-15 21:15:55 -04:00
Peter Eisentraut
3012061b86 Apply pg_get_serial_sequence() to identity column sequences as well
Bug: #14813
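
For example (names are illustrative):

    CREATE TABLE items (id int GENERATED ALWAYS AS IDENTITY);
    SELECT pg_get_serial_sequence('items', 'id');   -- now returns the identity column's sequence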
2017-09-15 14:21:20 -04:00
Robert Haas
60cd2f8a2d Test coverage for CREATE/ALTER FOREIGN DATA WRAPPER .. HANDLER.
Amit Langote, per a suggestion from Mark Dilger.  Reviewed by
Mark Dilger and Ashutosh Bapat.

Discussion: http://postgr.es/m/CAFjFpReL0oeN7SCpnsEPbqJhB2Bp1wnH1uvbOF_w6KEuv6ZXvg@mail.gmail.com
2017-09-15 08:07:22 -04:00
Robert Haas
0a480502b0 Expand partitioned table RTEs level by level, without flattening.
Flattening the partitioning hierarchy at this stage makes various
desirable optimizations difficult.  The original use case for this
patch was partition-wise join, which wants to match up the partitions
in one partitioning hierarchy with those in another such hierarchy.
However, it now seems that it will also be useful in making partition
pruning work using the PartitionDesc rather than constraint exclusion,
because with a flattened expansion, we have no easy way to figure out
which PartitionDescs apply to which leaf tables in a multi-level
partition hierarchy.

As it turns out, we end up creating both rte->inh and !rte->inh RTEs
for each intermediate partitioned table, just as we previously did for
the root table.  This seems unnecessary since the partitioned tables
have no storage and are not scanned.  We might want to go back and
rejigger things so that no partitioned tables (including the parent)
need !rte->inh RTEs, but that seems to require some adjustments not
related to the core purpose of this patch.

Ashutosh Bapat, reviewed by me and by Amit Langote.  Some final
adjustments by me.

Discussion: http://postgr.es/m/CAFjFpRd=1venqLL7oGU=C1dEkuvk2DJgvF+7uKbnPHaum1mvHQ@mail.gmail.com
2017-09-14 15:41:08 -04:00
Tom Lane
76e134fefd Adjust unstable regression test case.
Test queries added by commit 69835bc89 are giving unexpected results
on some smaller buildfarm critters.  I think probably the seqscan
logic is kicking in to cause the scans to not start at the beginning
of the table.  Add ORDER BY to make them be indexscans instead.

Per buildfarm member chipmunk.
2017-09-13 12:27:01 -04:00
Tom Lane
69835bc898 Add psql variables to track success/failure of SQL queries.
This patch adds ERROR, SQLSTATE, and ROW_COUNT, which are updated after
every query, as well as LAST_ERROR_MESSAGE and LAST_ERROR_SQLSTATE,
which are updated only when a query fails.  The expected usage of these
is for scripting.
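
A small scripting sketch (the table name is invented):

    SELECT * FROM no_such_table;
    \if :ERROR
      \echo 'query failed with SQLSTATE' :SQLSTATE
    \endif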

Fabien Coelho, reviewed by Pavel Stehule

Discussion: https://postgr.es/m/alpine.DEB.2.20.1704042158020.12290@lancre
2017-09-12 19:27:48 -04:00
Tom Lane
b8060e41b5 Prefer argument name over "$n" for the refname of a plpgsql argument.
If a function argument has a name, use that as the "refname" of the
PLpgSQL_datum representing the argument, instead of $n as before.
This allows better error messages in some cases.

Pavel Stehule, reviewed by Jeevan Chalke

Discussion: https://postgr.es/m/CAFj8pRB9GyU2U1Sb2ssgP26DZ_yq-FYDfpvUvGQ=k4R=yOPVjg@mail.gmail.com
2017-09-11 16:24:43 -04:00
Peter Eisentraut
821fb8cdbf Message style fixes 2017-09-11 11:21:27 -04:00
Tom Lane
3c43595217 Quick-hack fix for foreign key cascade vs triggers with transition tables.
AFTER triggers using transition tables crashed if they were fired due
to a foreign key ON CASCADE update.  This is because ExecEndModifyTable
flushes the transition tables, on the assumption that any trigger that
could need them was already fired during ExecutorFinish.  Normally
that's true, because we don't allow transition-table-using triggers
to be deferred.  However, foreign key CASCADE updates force any
triggers on the referencing table to be deferred to the outer query
level, by means of the EXEC_FLAG_SKIP_TRIGGERS flag.  I don't recall
all the details of why it's like that and am pretty loath to redesign
it right now.  Instead, just teach ExecEndModifyTable to skip destroying
the TransitionCaptureState when that flag is set.  This will allow the
transition table data to survive until end of the current subtransaction.

This isn't a terribly satisfactory solution, because (1) we might be
leaking the transition tables for much longer than really necessary,
and (2) as things stand, an AFTER STATEMENT trigger will fire once per
RI updating query, ie once per row updated or deleted in the referenced
table.  I suspect that is not per SQL spec.  But redesigning this is a
research project that we're certainly not going to get done for v10.
So let's go with this hackish answer for now.

In passing, tweak AfterTriggerSaveEvent to not save the transition_capture
pointer into the event record for a deferrable trigger.  This is not
necessary to fix the current bug, but it avoids letting dangling pointers
to long-gone transition tables persist in the trigger event queue.  That's
at least a safety feature.  It might also allow merging shared trigger
states in more cases than before.

I added a regression test that demonstrates the crash on unpatched code,
and also exposes the behavior of firing the AFTER STATEMENT triggers
once per row update.

Per bug #14808 from Philippe Beaudoin.  Back-patch to v10.

Discussion: https://postgr.es/m/20170909064853.25630.12825@wrigleys.postgresql.org
2017-09-10 14:59:56 -04:00
Robert Haas
6f6b99d133 Allow a partitioned table to have a default partition.
Any tuples that don't route to any other partition will route to the
default partition.
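
For example (names are illustrative):

    CREATE TABLE t (a int) PARTITION BY LIST (a);
    CREATE TABLE t_one PARTITION OF t FOR VALUES IN (1);
    CREATE TABLE t_default PARTITION OF t DEFAULT;
    INSERT INTO t VALUES (1), (42);   -- 42 matches no other partition, so it goes to t_default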

Jeevan Ladhe, Beena Emerson, Ashutosh Bapat, Rahila Syed, and Robert
Haas, with review and testing at various stages by (at least) Rushabh
Lathia, Keith Fiske, Amit Langote, Amul Sul, Rajkumar Raghuwanshi, Sven
Kunze, Kyotaro Horiguchi, Thom Brown, Rafia Sabih, and Dilip Kumar.

Discussion: http://postgr.es/m/CAH2L28tbN4SYyhS7YV1YBWcitkqbhSWfQCy0G=apRcC_PEO-bg@mail.gmail.com
Discussion: http://postgr.es/m/CAOG9ApEYj34fWMcvBMBQ-YtqR9fTdXhdN82QEKG0SVZ6zeL1xg@mail.gmail.com
2017-09-08 17:28:04 -04:00
Tom Lane
6eb52da394 Fix handling of savepoint commands within multi-statement Query strings.
Issuing a savepoint-related command in a Query message that contains
multiple SQL statements led to a FATAL exit with a complaint about
"unexpected state STARTED".  This is a shortcoming of commit 4f896dac1,
which attempted to prevent such misbehaviors in multi-statement strings;
its quick hack of marking the individual statements as "not top-level"
does the wrong thing in this case, and isn't a very accurate description
of the situation anyway.

To fix, let's introduce into xact.c an explicit model of what happens for
multi-statement Query strings.  This is an "implicit transaction block
in progress" state, which for many purposes works like the normal
TBLOCK_INPROGRESS state --- in particular, IsTransactionBlock returns true,
causing the desired result that PreventTransactionChain will throw error.
But in case of error abort it works like TBLOCK_STARTED, allowing the
transaction to be cancelled without need for an explicit ROLLBACK command.
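
For instance, a multi-statement string sent as a single simple-protocol Query message (as psql -c does) behaves like this sketch:

    psql -c 'SELECT 1; SAVEPOINT s; SELECT 2;'
    -- previously: FATAL exit complaining about "unexpected state STARTED"
    -- now: an ordinary error, since savepoints aren't allowed in an implicit transaction block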

Commit 4f896dac1 is reverted in toto, so that we go back to treating the
individual statements as "top level".  We could have left it as-is, but
this allows sharpening the error message for PreventTransactionChain
calls inside functions.

Except for getting a normal error instead of a FATAL exit for savepoint
commands, this patch should result in no user-visible behavioral change
(other than that one error message rewording).  There are some things
we might want to do in the line of changing the appearance or wording of
error and warning messages around this behavior, which would be much
simpler to do now that it's an explicitly modeled state.  But I haven't
done them here.

Although this fixes a long-standing bug, no backpatch.  The consequences
of the bug don't seem severe enough to justify the risk that this commit
itself creates some new issue.

Patch by me, but it owes something to previous investigation by
Takayuki Tsunakawa, who also reported the bug in the first place.
Also thanks to Michael Paquier for reviewing.

Discussion: https://postgr.es/m/0A3221C70F24FB45833433255569204D1F6BE40D@G01JPEXMBYT05
2017-09-07 09:49:55 -04:00
Simon Riggs
5b6d13eec7 Allow SET STATISTICS on expression indexes
Index columns are referenced by ordinal number rather than name, e.g.
CREATE INDEX coord_idx ON measured (x, y, (z + t));
ALTER INDEX coord_idx ALTER COLUMN 3 SET STATISTICS 1000;

Incompatibility note for release notes:
\d+ for indexes now also displays Stats Target

Authors: Alexander Korotkov, with contribution by Adrien NAYRAT
Review: Adrien NAYRAT, Simon Riggs
Wordsmith: Simon Riggs
2017-09-06 13:46:01 -07:00
Tom Lane
49ca462eb1 Add \gdesc psql command.
This command acts somewhat like \g, but instead of executing the query
buffer, it merely prints a description of the columns that the query
result would have.  (Of course, this still requires parsing the query;
if parse analysis fails, you get an error anyway.)  We accomplish this
using an unnamed prepared statement, which should be invisible to psql
users.
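
For example:

    SELECT relname, relpages FROM pg_class \gdesc
    -- prints the output column names and types without executing the query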

Pavel Stehule, reviewed by Fabien Coelho

Discussion: https://postgr.es/m/CAFj8pRBhYVvO34FU=EKb=nAF5t3b++krKt1FneCmR0kuF5m-QA@mail.gmail.com
2017-09-05 18:17:47 -04:00
Robert Haas
81c5e46c49 Introduce 64-bit hash functions with a 64-bit seed.
This will be useful for hash partitioning, which needs a way to seed
the hash functions to avoid problems such as a hash index on a hash
partitioned table clumping all values into a small portion of the
bucket space; it's also useful for anything that wants a 64-bit hash
value rather than a 32-bit hash value.

Just in case somebody wants a 64-bit hash value that is compatible
with the existing 32-bit hash values, make the low 32-bits of the
64-bit hash value match the 32-bit hash value when the seed is 0.
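
A sketch of checking that property at the SQL level, assuming the extended hash functions are exposed under names like hashtextextended (an assumption, not confirmed by this log):

    SELECT (hashtextextended('abc', 0) & 4294967295)
         = (hashtext('abc')::bigint   & 4294967295);   -- expected to return true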

Robert Haas and Amul Sul

Discussion: http://postgr.es/m/CA+Tgmoafx2yoJuhCQQOL5CocEi-w_uG4S2xT0EtgiJnPGcHW3g@mail.gmail.com
2017-08-31 22:21:21 -04:00
Robert Haas
30833ba154 Expand partitioned tables in PartDesc order.
Previously, we expanded the inheritance hierarchy in the order in
which find_all_inheritors had locked the tables, but that turns out
to block quite a bit of useful optimization.  For example, a
partition-wise join can't count on two tables with matching bounds
to get expanded in the same order.

Where possible, this change results in expanding partitioned tables in
*bound* order.  Bound order isn't well-defined for a list-partitioned
table with a null-accepting partition or for a list-partitioned table
where the bounds for a single partition are interleaved with other
partitions.  However, when expansion in bound order is possible, it
opens up further opportunities for optimization, such as
strength-reducing MergeAppend to Append when the expansion order
matches the desired sort order.

Patch by me, with cosmetic revisions by Ashutosh Bapat.

Discussion: http://postgr.es/m/CA+TgmoZrKj7kEzcMSum3aXV4eyvvbh9WD=c6m=002WMheDyE3A@mail.gmail.com
2017-08-31 15:50:18 -04:00
Tom Lane
4b1dd62a25 Improve code coverage of select_parallel test.
Make sure that rescans of parallel indexscans are tested.
Per code coverage report.
2017-08-31 13:15:54 -04:00
Peter Eisentraut
b5c75feca7 Remove reference to pre-8.2 coding convention for PG_MODULE_MAGIC
PG_MODULE_MAGIC has been around since 8.2, and 8.1 has long since reached EOL,
so remove the mention of #ifdef guards for compiling against pre-8.2
sources from the documentation.

Author: Daniel Gustafsson <daniel@yesql.se>
2017-08-30 22:40:24 -04:00
Tom Lane
6c2c5bea3c Restore test case from a2b70c89ca.
Revert the reversion commits a20aac890 and 9b644745c.  In the wake of
commit 7df2c1f8d, we should get stable buildfarm results from this test;
if not, I'd like to know sooner not later.

Discussion: https://postgr.es/m/CAA4eK1JkByysFJNh9M349u_nNjqETuEnY_y1VUc_kJiU0bxtaQ@mail.gmail.com
2017-08-30 09:59:23 -04:00
Robert Haas
bf11e7ee2e Propagate sort instrumentation from workers back to leader.
Up until now, when parallel query was used, no details about the
sort method or space used by the workers were available; details
were shown only for any sorting done by the leader.  Fix that.

Commit 1177ab1dab forced the test case
added by commit 1f6d515a67 to run
without parallelism; now that we have this infrastructure, allow
that again, with a little tweaking to make it pass with and without
force_parallel_mode.

Robert Haas and Tom Lane

Discussion: http://postgr.es/m/CA+Tgmoa2VBZW6S8AAXfhpHczb=Rf6RqQ2br+zJvEgwJ0uoD_tQ@mail.gmail.com
2017-08-29 13:26:33 -04:00
Robert Haas
3452dc5240 Push tuple limits through Gather and Gather Merge.
If we only need, say, 10 tuples in total, then we certainly don't need
more than 10 tuples from any single process.  Pushing down the limit
lets workers exit early when possible.  For Gather Merge, there is
an additional benefit: a Sort immediately below the Gather Merge can
be done as a bounded sort if there is an applicable limit.

Robert Haas and Tom Lane

Discussion: http://postgr.es/m/CA+TgmoYa3QKKrLj5rX7UvGqhH73G1Li4B-EKxrmASaca2tFu9Q@mail.gmail.com
2017-08-29 13:16:55 -04:00
Tom Lane
95e28b7f0c Fix over-aggressive sanity check in misc_sanity.sql.
Fix thinko in commit 8be8510cf: it's okay to have dbid == 0 in normal
(non-pin) entries in pg_shdepend, because global objects such as
databases are entered that way.  The test would pass so long as it
was run in a cluster containing no databases/tablespaces owned by,
or granted to, roles other than the bootstrap superuser.  That's the
expected situation for "make check", but for "make installcheck", not
so much.

Reported by Ryan Murphy.

Discussion: https://postgr.es/m/CAHeEsBc6EQe0mxGBKDXAwJbntgfvoAd5MQC-5362SmC3Tng_6g@mail.gmail.com
2017-08-28 10:14:46 -04:00
Andres Freund
d36f7efb39 Add minimal regression test for blessed record type transfer.
Test that blessed records can be transferred through a TupleQueue and
correctly decoded by another backend.  While touching the file, make
sure that force_parallel_mode settings only cover relevant tests.

Author: Thomas Munro, editorialized by Andres Freund
Reviewed-By: Andres Freund
Discussion: https://postgr.es/m/20170823054644.efuzftxjpfi6wwqs%40alap3.anarazel.de
2017-08-24 17:42:49 -07:00
Stephen Frost
0cdc3e47be psql: Fix \gx when FETCH_COUNT is used
Set expanded output when requested through \gx in ExecQueryUsingCursor()
(used when FETCH_COUNT is set).

Discussion: https://www.postgresql.org/message-id/CB7A53AA-5645-4BDD-AB07-4D22CD9D8FF1%40gmx.net
Author: Tobias Bussmann
2017-08-24 16:20:50 -04:00
Tom Lane
1177ab1dab Make new regression test case parallel-safe, and improve its output.
The test case added by commit 1f6d515a6 fails on buildfarm members that
have force_parallel_mode turned on, because we currently don't report sort
performance details from worker processes back to the master.  To fix that,
just make the test table be temp rather than regular; that's a good idea
anyway to forestall any possible interference from auto-analyze.
(The restriction that workers can't access temp tables might go away
someday, but almost certainly not before the other thing gets fixed.)

Also, improve the test so that we retain as much as possible of the
EXPLAIN ANALYZE output.  This aids debugging failures, and might also
expose problems that the preceding version masked.

Discussion: http://postgr.es/m/CADE5jYLuugnEEUsyW6Q_4mZFYTxHxaVCQmGAsF0yiY8ZDggi-w@mail.gmail.com
2017-08-24 13:39:58 -04:00
Peter Eisentraut
27b89876c0 Fix up secondary expected files
for commit 237a0b87b1
2017-08-24 11:13:55 -04:00
Peter Eisentraut
237a0b87b1 Improve plural handling in error message
This does not use the normal plural handling, because no numbers appear
in the actual message.
2017-08-23 13:56:59 -04:00
Robert Haas
1f6d515a67 Push limit through subqueries to underlying sort, where possible.
Douglas Doole, reviewed by Ashutosh Bapat and by me.  Minor formatting
change by me.

Discussion: http://postgr.es/m/CADE5jYLuugnEEUsyW6Q_4mZFYTxHxaVCQmGAsF0yiY8ZDggi-w@mail.gmail.com
2017-08-21 14:19:44 -04:00
Andres Freund
2cd7084524 Change tupledesc->attrs[n] to TupleDescAttr(tupledesc, n).
This is a mechanical change in preparation for a later commit that
will change the layout of TupleDesc.  Introducing a macro to abstract
the details of where attributes are stored will allow us to change
that in separate step and revise it in future.

Author: Thomas Munro, editorialized by Andres Freund
Reviewed-By: Andres Freund
Discussion: https://postgr.es/m/CAEepm=0ZtQ-SpsgCyzzYpsXS6e=kZWqk3g5Ygn3MDV7A8dabUA@mail.gmail.com
2017-08-20 11:19:07 -07:00
Tom Lane
a20aac890a Temporarily revert test case from a2b70c89ca.
That code patch was good as far as it went, but the associated test case
has exposed fundamental brain damage in the parallel scan mechanism,
which is going to take nontrivial work to correct.  In the interests of
getting the buildfarm back to green so that unrelated work can proceed,
let's temporarily remove the test case.
2017-08-17 18:35:14 -04:00
Tom Lane
a2b70c89ca Fix ExecReScanGatherMerge.
Not surprisingly, since it'd never ever been tested, ExecReScanGatherMerge
didn't work.  Fix it, and add a regression test case to exercise it.

Amit Kapila

Discussion: https://postgr.es/m/CAA4eK1JkByysFJNh9M349u_nNjqETuEnY_y1VUc_kJiU0bxtaQ@mail.gmail.com
2017-08-17 13:49:22 -04:00
Andres Freund
d2bc501573 Expand coverage of parallel gather merge a bit.
Previously paths reaching heap_compare_slots weren't covered.

Author: Rushabh Lathia
Reviewed-By: Andres Freund
Discussion:
	https://postgr.es/m/CAGPqQf3C+3PBujb+7m=ceWeii4-vBY=XS99LjzrpkpefvzJbFg@mail.gmail.com
	https://postgr.es/m/27200.1502482851@sss.pgh.pa.us
Backpatch: 10, where gather merge was introduced
2017-08-14 15:27:47 -07:00
Tom Lane
3c8de95979 Add regression tests exercising more code paths in nodeLimit.c.
Perusal of the code coverage report shows that the existing regression
test cases for LIMIT/OFFSET don't exercise the nodeLimit code paths
involving backwards scan, empty results, or null values of LIMIT/OFFSET.
Improve the coverage.
2017-08-11 17:28:01 -04:00
Tom Lane
6efca23cc0 Add regression tests exercising the non-hashed code paths in nodeSetop.c.
Perusal of the code coverage report shows that the existing regression
test cases for INTERSECT and EXCEPT seemingly all prefer the SETOP_HASHED
implementation.  Add some test cases in which we force use of the
SETOP_SORTED mode.
2017-08-11 17:28:01 -04:00
Robert Haas
bb5d6e80b1 Improve the error message when creating an empty range partition.
The previous message didn't mention the name of the table or the
bounds.  Put the table name in the primary error message and the
bounds in the detail message.

Amit Langote, changed slightly by me.  Suggestions on the exact
phrasing from Tom Lane, David G. Johnston, and Dean Rasheed.

Discussion: http://postgr.es/m/CA+Tgmoae6bpwVa-1BMaVcwvCCeOoJ5B9Q9-RHWo-1gJxfPBZ5Q@mail.gmail.com
2017-08-10 13:46:56 -04:00
Tom Lane
749c7c4170 Fix handling of container types in find_composite_type_dependencies.
find_composite_type_dependencies correctly found columns that are of
the specified type, and columns that are of arrays of that type, but
not columns that are domains or ranges over the given type, its array
type, etc.  The most general way to handle this seems to be to assume
that any type that is directly dependent on the specified type can be
treated as a container type, and processed recursively (allowing us
to handle nested cases such as ranges over domains over arrays ...).
Since a type's array type already has such a dependency, we can drop
the existing special case for the array type.

The very similar logic in get_rels_with_domain was likewise a few
bricks shy of a load, as it supposed that a directly dependent type
could *only* be a sub-domain.  This is already wrong for ranges over
domains, and it'll someday be wrong for arrays over domains.

Add test cases illustrating the problems, and back-patch to all
supported branches.

Discussion: https://postgr.es/m/15268.1502309024@sss.pgh.pa.us
2017-08-09 17:03:09 -04:00
Tom Lane
8d9881911f Require update permission for the large object written by lo_put().
lo_put() surely should require UPDATE permission, the same as lowrite(),
but it failed to check for that, as reported by Chapman Flack.  Oversight
in commit c50b7c09d; backpatch to 9.4 where that was introduced.

Tom Lane and Michael Paquier

Security: CVE-2017-7548
2017-08-07 10:19:19 -04:00
Noah Misch
e568e1eee4 Again match pg_user_mappings to information_schema.user_mapping_options.
Commit 3eefc51053 claimed to make
pg_user_mappings enforce the qualifications user_mapping_options had
been enforcing, but its removal of a longstanding restriction left them
distinct when the current user is the subject of a mapping yet has no
server privileges.  user_mapping_options emits no rows for such a
mapping, but pg_user_mappings includes full umoptions.  Change
pg_user_mappings to show null for umoptions.  Back-patch to 9.2, like
the above commit.

Reviewed by Tom Lane.  Reported by Jeff Janes.

Security: CVE-2017-7547
2017-08-07 07:09:28 -07:00
Heikki Linnakangas
bf6b9e9444 Don't allow logging in with empty password.
Some authentication methods allowed it, others did not. On the client side,
libpq does not even try to authenticate with an empty password, which makes
using empty passwords hazardous: an administrator might think that an
account with an empty password cannot be used to log in, because psql
doesn't allow it, and not realize that a different client would in fact
allow it. To clear that confusion and to be consistent, disallow empty
passwords in all authentication methods.

All the authentication methods that used plaintext authentication over the
wire, except for BSD authentication, already checked that the password
received from the user was not empty. To avoid forgetting it in the future
again, move the check to the recv_password_packet function. That only
forbids using an empty password with plaintext authentication, however.
MD5 and SCRAM need a different fix:

* In stable branches, check that the MD5 hash stored for the user does
not correspond to an empty string. This adds some overhead to MD5
authentication, because the server needs to compute an extra MD5 hash, but
it is not noticeable in practice.

* In HEAD, modify CREATE and ALTER ROLE to clear the password if an empty
string, or a password hash that corresponds to an empty string, is
specified. The user-visible behavior is the same as in the stable branches,
the user cannot log in, but it seems better to stop the empty password from
entering the system in the first place. Secondly, it is fairly expensive to
check that a SCRAM hash doesn't correspond to an empty string, because
computing a SCRAM hash is much more expensive than an MD5 hash by design,
so better avoid doing that on every authentication.

We could clear the password on CREATE/ALTER ROLE also in stable branches,
but we would still need to check at authentication time, because even if we
prevent empty passwords from being stored in pg_authid, there might be
existing ones there already.

Reported by Jeroen van der Ham, Ben de Graaff and Jelte Fennema.

Security: CVE-2017-7546
2017-08-07 17:03:42 +03:00
Robert Haas
f85f88bcc2 Fix bug in deciding whether to scan newly-attached partition.
If the table being attached had different attribute numbers than the
parent, the old code could incorrectly decide it needed to be scanned.

Amit Langote, reviewed by Ashutosh Bapat

Discussion: http://postgr.es/m/CA+TgmobexgbBr2+Utw-pOMw9uxaBRKRjMW_-mmzKKx9PejPLMg@mail.gmail.com
2017-08-04 22:01:37 -04:00
Tom Lane
c30f1770a9 Apply ALTER ... SET NOT NULL recursively in ALTER ... ADD PRIMARY KEY.
If you do ALTER COLUMN SET NOT NULL against an inheritance parent table,
it will recurse to mark all the child columns as NOT NULL as well.  This
is necessary for consistency: if the column is labeled NOT NULL then
reading it should never produce nulls.

However, that didn't happen in the case where ALTER ... ADD PRIMARY KEY
marks a target column NOT NULL that wasn't before.  That was questionable
from the beginning, and now Tushar Ahuja points out that it can lead to
dump/restore failures in some cases.  So let's make that case recurse too.
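
A minimal sketch of the case in question (names are illustrative):

    CREATE TABLE parent (a int);
    CREATE TABLE child () INHERITS (parent);
    ALTER TABLE parent ADD PRIMARY KEY (a);
    -- parent.a becomes NOT NULL; with this change, child.a is now marked NOT NULL as well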

Although this is meant to fix a bug, it's enough of a behavioral change
that I'm pretty hesitant to back-patch, especially in view of the lack
of similar field complaints.  It doesn't seem to be too late to put it
into v10 though.

Michael Paquier, editorialized slightly by me

Discussion: https://postgr.es/m/b8794d6a-38f0-9d7c-ad4b-e85adf860fc9@enterprisedb.com
2017-08-04 11:45:18 -04:00
Peter Eisentraut
b374481221 Further unify ROLE and USER command grammar rules
ALTER USER ... SET did not support all the syntax variants of ALTER ROLE
...  SET.  Fix that, and to avoid further deviations of this kind, unify
many of the grammar rules for ROLE/USER/GROUP commands.

Reported-by: Pavel Golub <pavel@microolap.com>
2017-08-03 20:34:45 -04:00
Robert Haas
610e8ebb0f Teach map_partition_varattnos to handle whole-row expressions.
Otherwise, partitioned tables with RETURNING expressions or subject
to a WITH CHECK OPTION do not work properly.

Amit Langote, reviewed by Amit Khandekar and Etsuro Fujita.  A few
comment changes by me.

Discussion: http://postgr.es/m/9a39df80-871e-6212-0684-f93c83be4097@lab.ntt.co.jp
2017-08-03 11:21:29 -04:00
Tom Lane
32ca22b02d Revert test case added by commit 1e165d05fe.
The buildfarm is still showing at least three distinct behaviors for
a bad locale name in CREATE COLLATION.  Although this test was helpful
for getting the error reporting code into some usable shape, it doesn't
seem worth carrying multiple expected-files in order to support the
test in perpetuity.  So pull it back out.

Discussion: https://postgr.es/m/CAKKotZS-wcDcofXDCH=sidiuajE+nqHn2CGjLLX78anyDmi3gQ@mail.gmail.com
2017-08-01 20:15:10 -04:00
Tom Lane
8e7537261c Suppress less info in regression tests using DROP CASCADE.
DROP CASCADE doesn't currently promise to visit dependent objects in
a fixed order, so when the regression tests use it, we typically need
to suppress the details of which objects get dropped in order to have
predictable test output.  Traditionally we've done that by setting
client_min_messages higher than NOTICE, but there's a better way:
we can "\set VERBOSITY terse" in psql.  That suppresses the DETAIL
message with the object list, but we still get the basic notice telling
how many objects were dropped.  So at least the test case can verify
that the expected number of objects were dropped.
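
For example, in a regression script (the table name and object count are illustrative):

    \set VERBOSITY terse
    DROP TABLE some_parent CASCADE;
    -- NOTICE:  drop cascades to 2 other objects
    -- (the DETAIL line listing the dropped objects is suppressed)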

The VERBOSITY method was already in use in a few places; this patch goes
around and uses it wherever it makes sense.

Discussion: https://postgr.es/m/10766.1501608885@sss.pgh.pa.us
2017-08-01 16:49:23 -04:00
Tom Lane
1e165d05fe Try to deliver a sane message for _create_locale() failure on Windows.
We were just printing errno, which is certainly not gonna work on
Windows.  Now, it's not entirely clear from Microsoft's documentation
whether _create_locale() adheres to standard Windows error reporting
conventions, but let's assume it does and try to map the GetLastError
result to an errno.  If this turns out not to work, probably the best
thing to do will be to assume the error is always ENOENT on Windows.

This is a longstanding bug, but given the lack of previous field
complaints, I'm not excited about back-patching it.

Per report from Murtuza Zabuawala.

Discussion: https://postgr.es/m/CAKKotZS-wcDcofXDCH=sidiuajE+nqHn2CGjLLX78anyDmi3gQ@mail.gmail.com
2017-08-01 16:11:51 -04:00
Tom Lane
f97256570f Allow creation of C/POSIX collations without depending on libc behavior.
Most of our collations code has special handling for the locale names
"C" and "POSIX", allowing those collations to be used whether or not
the system libraries think those locale names are valid, or indeed
whether said libraries even have any locale support.  But we missed
handling things that way in CREATE COLLATION.  This meant you couldn't
clone the C/POSIX collations, nor explicitly define a new collation
using those locale names, unless the libraries allow it.  That's pretty
pointless, as well as being a violation of pg_newlocale_from_collation's
API specification.
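
For example, cloning one of these collations is a one-liner (the new collation name is invented):

    CREATE COLLATION posix_clone FROM "POSIX";   -- now works even where libc lacks locale support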

The practical effect of this change is quite limited: it allows creating
such collations even on platforms that don't HAVE_LOCALE_T, and it allows
making "POSIX" collation objects on Windows, which before this would only
let you make "C" collation objects.  Hence, even though this is a bug fix
IMO, it doesn't seem worth the trouble to back-patch.

In passing, suppress the DROP CASCADE detail messages at the end of the
collation regression test.  I'm surprised we've never been bit by
message ordering issues there.

Per report from Murtuza Zabuawala.

Discussion: https://postgr.es/m/CAKKotZS-wcDcofXDCH=sidiuajE+nqHn2CGjLLX78anyDmi3gQ@mail.gmail.com
2017-08-01 13:51:05 -04:00
Tom Lane
9dea962b3e Include publication owner's name in the output of \dRp+.
Without this, \dRp prints information that \dRp+ does not, which
seems pretty odd.

Daniel Gustafsson

Discussion: https://postgr.es/m/3641F19B-336A-431A-86CE-A80562505C5E@yesql.se
2017-07-28 17:44:48 -04:00
Robert Haas
4132dbec69 Fix partitioning crashes during error reporting.
In various places where we reverse-map a tuple before calling
ExecBuildSlotValueDescription, we neglected to ensure that the
slot descriptor matched the tuple stored in it.

Amit Langote and Amit Khandekar, reviewed by Etsuro Fujita

Discussion: http://postgr.es/m/CAJ3gD9cqpP=WvJj=dv1ONkPWjy8ZuUaOM4_x86i3uQPas=0_jg@mail.gmail.com
2017-07-24 18:08:08 -04:00
Tom Lane
b4af9e3f37 Ensure that pg_get_ruledef()'s output matches pg_get_viewdef()'s.
Various cases involving renaming of view columns are handled by having
make_viewdef pass down the view's current relation tupledesc to
get_query_def, which then takes care to use the column names from the
tupledesc for the output column names of the SELECT.  For some reason
though, we'd missed teaching make_ruledef to do similarly when it is
printing an ON SELECT rule, even though this is exactly the same case.
The results from pg_get_ruledef would then be different and arguably wrong.
In particular, this breaks pre-v10 versions of pg_dump, which in some
situations would define views by means of emitting a CREATE RULE ... ON
SELECT command.  Third-party tools might not be happy either.

In passing, clean up some crufty code in make_viewdef; we'd apparently
modernized the equivalent code in make_ruledef somewhere along the way,
and missed this copy.

Per report from Gilles Darold.  Back-patch to all supported versions.

Discussion: https://postgr.es/m/ec05659a-40ff-4510-fc45-ca9d965d0838@dalibo.com
2017-07-24 15:16:31 -04:00
Tom Lane
991c8b04fc Update expected results for collate.linux.utf8 regression test.
I believe this changed as a consequence of commit 54baa4813: trying to
clone the "C" collation now produces a true clone with collencoding -1,
hence the error message if it's duplicate no longer specifies an encoding.

Per buildfarm member crake, which apparently hadn't been running this
test for the last few weeks.
2017-07-22 12:15:19 -04:00
Dean Rasheed
68f785fd52 Make the new partition regression tests locale-independent.
The order of partitions listed by \d+ is in general locale-dependent.
Rename the partitions in the test added by d363d42bb9 to force them to
be listed in a consistent order.
2017-07-21 10:18:01 +01:00
Dean Rasheed
d363d42bb9 Use MINVALUE/MAXVALUE instead of UNBOUNDED for range partition bounds.
Previously, UNBOUNDED meant no lower bound when used in the FROM list,
and no upper bound when used in the TO list, which was OK for
single-column range partitioning, but problematic with multiple
columns. For example, an upper bound of (10.0, UNBOUNDED) would not be
collocated with a lower bound of (10.0, UNBOUNDED), thus making it
difficult or impossible to define contiguous multi-column range
partitions in some cases.

Fix this by using MINVALUE and MAXVALUE instead of UNBOUNDED to
represent a partition column that is unbounded below or above
respectively. This syntax removes any ambiguity, and ensures that if
one partition's lower bound equals another partition's upper bound,
then the partitions are contiguous.

Also drop the constraint prohibiting finite values after an unbounded
column, and just document the fact that any values after MINVALUE or
MAXVALUE are ignored. Previously it was necessary to repeat UNBOUNDED
multiple times, which was needlessly verbose.
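
A sketch of contiguous multi-column range partitions under the new syntax (names are illustrative):

    CREATE TABLE measurements (ts date, sensor int) PARTITION BY RANGE (ts, sensor);
    CREATE TABLE m_old PARTITION OF measurements
      FOR VALUES FROM (MINVALUE, MINVALUE) TO ('2017-01-01', MINVALUE);
    CREATE TABLE m_new PARTITION OF measurements
      FOR VALUES FROM ('2017-01-01', MINVALUE) TO (MAXVALUE, MAXVALUE);
    -- m_old's upper bound equals m_new's lower bound, so the partitions are contiguous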

Note: Forces a post-PG 10 beta2 initdb.

Report by Amul Sul, original patch by Amit Langote with some
additional hacking by me.

Discussion: https://postgr.es/m/CAAJ_b947mowpLdxL3jo3YLKngRjrq9+Ej4ymduQTfYR+8=YAYQ@mail.gmail.com
2017-07-21 09:20:47 +01:00
Tom Lane
eb145fdfea Fix dumping of outer joins with empty qual lists.
Normally, a JoinExpr would have empty "quals" only if it came from CROSS
JOIN syntax.  However, it's possible to get to this state by specifying
NATURAL JOIN between two tables with no common column names, and there
might be other ways too.  The code previously printed no ON clause if
"quals" was empty; that's right for CROSS JOIN but syntactically invalid
if it's some type of outer join.  Fix by printing ON TRUE in that case.
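
A sketch of one way to reach that state (names are illustrative):

    CREATE TABLE a (x int);
    CREATE TABLE b (y int);
    CREATE VIEW v AS SELECT * FROM a NATURAL LEFT JOIN b;   -- no common column names
    -- the view now deparses with an explicit ON TRUE clause rather than omitting ON entirely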

This got broken by commit 2ffa740be, which stopped using NATURAL JOIN
syntax in ruleutils output due to its brittleness in the face of
column renamings.  Back-patch to 9.3 where that commit appeared.

Per report from Tushar Ahuja.

Discussion: https://postgr.es/m/98b283cd-6dda-5d3f-f8ac-87db8c76a3da@enterprisedb.com
2017-07-20 11:29:36 -04:00
Robert Haas
c85ec643ff Reverse-convert row types in ExecWithCheckOptions.
Just as we already do in ExecConstraints, and for the same reason:
to improve the quality of error messages.

Etsuro Fujita, reviewed by Amit Langote

Discussion: http://postgr.es/m/56e0baa8-e458-2bbb-7936-367f7d832e43@lab.ntt.co.jp
2017-07-17 21:56:31 -04:00
Robert Haas
f81a91db4d Use a real RT index when setting up partition tuple routing.
Before, we always used a dummy value of 1, but that's not right when
the partitioned table being modified is inside of a WITH clause
rather than part of the main query.

Amit Langote, reported and reviewed by Etsuro Fujita, with a comment
change by me.

Discussion: http://postgr.es/m/ee12f648-8907-77b5-afc0-2980bcb0aa37@lab.ntt.co.jp
2017-07-17 21:29:45 -04:00
Tom Lane
a570feaf92 Merge large_object.sql test into largeobject.source.
It seems pretty confusing to have tests named both largeobject and
large_object.  The latter is of very recent vintage (commit ff992c074),
so get rid of it in favor of merging into the former.

Also, enable the LO comment test that was added by commit 70ad7ed4e,
since the later commit added the then-missing pg_upgrade functionality.
The large_object.sql test case is almost completely redundant with that,
but not quite: it seems like creating a user-defined LO with an OID in
the system range might be an interesting case for pg_upgrade, so let's
keep it.

Like the earlier patch, back-patch to all supported branches.

Discussion: https://postgr.es/m/18665.1500306372@sss.pgh.pa.us
2017-07-17 15:28:20 -04:00
Tom Lane
a3ca72ae9a Fix dumping of FUNCTION RTEs that contain non-function-call expressions.
The grammar will only accept something syntactically similar to a function
call in a function-in-FROM expression.  However, there are various ways
to input something that ruleutils.c won't deparse that way, potentially
leading to a view or rule that fails dump/reload.  Fix by inserting a
dummy CAST around anything that isn't going to deparse as a function
(which is one of the ways to get something like that in there in the
first place).

In HEAD, also make use of the infrastructure added by this to avoid
emitting unnecessary parentheses in CREATE INDEX deparsing.  I did
not change that in back branches, thinking that people might find it
to be unexpected/unnecessary behavioral change.

In HEAD, also fix incorrect logic for when to add extra parens to
partition key expressions.  Somebody apparently thought they could
get away with simpler logic than pg_get_indexdef_worker has, but
they were wrong --- a counterexample is PARTITION BY LIST ((a[1])).
Ignoring the prettyprint flag for partition expressions isn't exactly
a nice solution anyway.

This has been broken all along, so back-patch to all supported branches.

Discussion: https://postgr.es/m/10477.1499970459@sss.pgh.pa.us
2017-07-13 19:25:03 -04:00
Tom Lane
bc2d716ad0 Fix ruleutils.c for domain-over-array cases, too.
Further investigation shows that ruleutils isn't quite up to speed either
for cases where we have a domain-over-array: it needs to be prepared to
look past a CoerceToDomain at the top level of field and element
assignments, else it decompiles them incorrectly.  Potentially this would
result in failure to dump/reload a rule, if it looked like the one in the
new test case.  (I also added a test for EXPLAIN; that output isn't broken,
but clearly we need more test coverage here.)

Like commit b1cb32fb6, this bug is reachable in cases we already support,
so back-patch all the way.
2017-07-12 18:00:04 -04:00
Tom Lane
b1cb32fb62 Fix multiple assignments to a column of a domain type.
We allow INSERT and UPDATE commands to assign to the same column more than
once, as long as the assignments are to subfields or elements rather than
the whole column.  However, this failed when the target column was a domain
over array rather than plain array.  Fix by teaching process_matched_tle()
to look through CoerceToDomain nodes, and add relevant test cases.
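
A minimal sketch of the previously-failing case (names are illustrative):

    CREATE DOMAIN intarr AS int[];
    CREATE TABLE dt (a intarr);
    INSERT INTO dt VALUES ('{0,0}');
    UPDATE dt SET a[1] = 10, a[2] = 20;   -- multiple element assignments to a domain-over-array column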

Also add a group of test cases exercising domains over array of composite.
It's doubtless accidental that CREATE DOMAIN allows this case while not
allowing straight domain over composite; but it does, so we'd better make
sure we don't break it.  (I could not find any documentation mentioning
either side of that, so no doc changes.)

It's been like this for a long time, so back-patch to all supported
branches.

Discussion: https://postgr.es/m/4206.1499798337@sss.pgh.pa.us
2017-07-11 16:48:59 -04:00
Andrew Gierth
1add0b15f1 Fix COPY's handling of transition tables with indexes.
Commit c46c0e5202 failed to pass the
TransitionCaptureState object to ExecARInsertTriggers() in the case
where it's using heap_multi_insert and there are indexes.  Repair.

Thomas Munro, from a report by David Fetter
Discussion: https://postgr.es/m/20170708084213.GA14720%40fetter.org
2017-07-10 11:40:08 +01:00
Dean Rasheed
c03911d945 Simplify the logic checking new range partition bounds.
The previous logic, whilst not actually wrong, was overly complex and
involved doing two binary searches, where only one was really
necessary. This simplifies that logic and improves the comments.

One visible change is that if the new partition overlaps multiple
existing partitions, the error message now always reports the overlap
with the first existing partition (the one with the lowest
bounds). The old code would sometimes report the clash with the first
partition and sometimes with the last one.

Original patch idea from Amit Langote, substantially rewritten by me.

Discussion: https://postgr.es/m/CAAJ_b947mowpLdxL3jo3YLKngRjrq9+Ej4ymduQTfYR+8=YAYQ@mail.gmail.com
2017-07-06 09:58:06 +01:00
Andrew Gierth
8c55244ae3 Fix transition tables for ON CONFLICT.
We now disallow having triggers with both transition tables and ON
INSERT OR UPDATE (which was a PG extension to the spec anyway),
because in this case it's not at all clear how the transition tables
should work for an INSERT ... ON CONFLICT query.  Separate ON INSERT
and ON UPDATE triggers with transition tables are allowed, and the
transition tables for these reflect only the inserted and only the
updated tuples respectively.
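
Roughly, with invented trigger, function, and table names, the rejected
versus allowed forms look like this:

    -- rejected: one trigger with a transition table for both events
    CREATE TRIGGER t_ins_upd AFTER INSERT OR UPDATE ON tab
      REFERENCING NEW TABLE AS new_rows
      FOR EACH STATEMENT EXECUTE PROCEDURE audit_fn();

    -- allowed: separate triggers, each with its own transition table
    CREATE TRIGGER t_ins AFTER INSERT ON tab
      REFERENCING NEW TABLE AS new_rows
      FOR EACH STATEMENT EXECUTE PROCEDURE audit_fn();
    CREATE TRIGGER t_upd AFTER UPDATE ON tab
      REFERENCING NEW TABLE AS new_rows
      FOR EACH STATEMENT EXECUTE PROCEDURE audit_fn();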

Patch by Thomas Munro

Discussion: https://postgr.es/m/CAEepm%3D11KHQ0JmETJQihSvhZB5mUZL2xrqHeXbCeLhDiqQ39%3Dw%40mail.gmail.com
2017-06-28 19:00:55 +01:00
Andrew Gierth
c46c0e5202 Fix transition tables for wCTEs.
The original coding didn't handle this case properly; each separate
DML substatement needs its own set of transitions.

Patch by Thomas Munro

Discussion: https://postgr.es/m/CAL9smLCDQ%3D2o024rBgtD4WihzX8B3C6u_oSQ2K3%2BR5grJrV0bg%40mail.gmail.com
2017-06-28 18:59:01 +01:00
Andrew Gierth
501ed02cf6 Fix transition tables for partition/inheritance.
We disallow row-level triggers with transition tables on child tables.
Transition tables for triggers on the parent table contain only those
columns present in the parent.  (We can't mix tuple formats in a
single transition table.)

Patch by Thomas Munro

Discussion: https://postgr.es/m/CA%2BTgmoZzTBBAsEUh4MazAN7ga%3D8SsMC-Knp-6cetts9yNZUCcg%40mail.gmail.com
2017-06-28 18:55:03 +01:00
Tom Lane
9c7dc89282 Re-allow SRFs and window functions within sub-selects within aggregates.
check_agg_arguments_walker threw an error upon seeing a SRF or window
function, but that is too aggressive: if the function is within a
sub-select then it's perfectly fine.  I broke the SRF case in commit
0436f6bde by copying the logic for window functions ... but that was
broken too, and had been since commit eaccfded9.

Repair both cases in HEAD, and the window function case back to 9.3.
9.2 gets this right.
2017-06-27 17:51:11 -04:00
Tom Lane
8be8510cf8 Add testing to detect errors of omission in "pin" dependency creation.
It's essential that initdb.c's setup_depend() scan each system catalog
that could contain objects that need to have "p" (pin) entries in pg_depend
or pg_shdepend.  Forgetting to add that, either when a catalog is first
invented or when it first acquires DATA() entries, is an obvious bug
hazard.  We can detect such omissions at reasonable cost by probing every
OID-containing system catalog to see whether the lowest-numbered OID in it
is pinned.  If so, the catalog must have been properly accounted for in
setup_depend().  If the lowest OID is above FirstNormalObjectId then the
catalog must have been empty at the end of initdb, so it doesn't matter.
There are a small number of catalogs whose first entry is made later in
initdb than setup_depend(), resulting in nonempty expected output of the
test, but these can be manually inspected to see that they are OK.  Any
future mistake of this ilk will manifest as a new entry in the test's
output.

Since pg_conversion is already in the test's output, add it to the set of
catalogs scanned by setup_depend().  That has no effect today (hence, no
catversion bump here) but it will protect us if we ever do add pin-worthy
conversions.

This test is very much like the catalog sanity checks embodied in
opr_sanity.sql and type_sanity.sql, but testing pg_depend doesn't seem to
fit naturally into either of those scripts' charters.  Hence, invent a new
test script misc_sanity.sql, which can be a home for this as well as tests
on any other catalogs we might want in future.

Discussion: https://postgr.es/m/8068.1498155068@sss.pgh.pa.us
2017-06-23 11:03:04 -04:00
Alvaro Herrera
5dfd564b10 Fix IF NOT EXISTS in CREATE STATISTICS
I misplaced the IF NOT EXISTS clause in commit 7b504eb282, before the
word STATISTICS.  Put it where it belongs.
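
That is, with invented names, the accepted spelling is now

    CREATE STATISTICS IF NOT EXISTS stts_ab ON a, b FROM some_table;

rather than putting IF NOT EXISTS before the word STATISTICS.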

Patch written independently by Amit Langote and myself.  I adopted his
submitted test case with a slight edit also.

Reported-by: Bruno Wolff III
Discussion: https://postgr.es/m/20170621004237.GB8337@wolff.to
2017-06-22 13:17:08 -04:00
Tom Lane
382ceffdf7 Phase 3 of pgindent updates.
Don't move parenthesized lines to the left, even if that means they
flow past the right margin.

By default, BSD indent lines up statement continuation lines that are
within parentheses so that they start just to the right of the preceding
left parenthesis.  However, traditionally, if that resulted in the
continuation line extending to the right of the desired right margin,
then indent would push it left just far enough to not overrun the margin,
if it could do so without making the continuation line start to the left of
the current statement indent.  That makes for a weird mix of indentations
unless one has been completely rigid about never violating the 80-column
limit.

This behavior has been pretty universally panned by Postgres developers.
Hence, disable it with indent's new -lpl switch, so that parenthesized
lines are always lined up with the preceding left paren.

This patch is much less interesting than the first round of indent
changes, but also bulkier, so I thought it best to separate the effects.

Discussion: https://postgr.es/m/E1dAmxK-0006EE-1r@gemulon.postgresql.org
Discussion: https://postgr.es/m/30527.1495162840@sss.pgh.pa.us
2017-06-21 15:35:54 -04:00
Tom Lane
c7b8998ebb Phase 2 of pgindent updates.
Change pg_bsd_indent to follow upstream rules for placement of comments
to the right of code, and remove pgindent hack that caused comments
following #endif to not obey the general rule.

Commit e3860ffa4d wasn't actually using
the published version of pg_bsd_indent, but a hacked-up version that
tried to minimize the amount of movement of comments to the right of
code.  The situation of interest is where such a comment has to be
moved to the right of its default placement at column 33 because there's
code there.  BSD indent has always moved right in units of tab stops
in such cases --- but in the previous incarnation, indent was working
in 8-space tab stops, while now it knows we use 4-space tabs.  So the
net result is that in about half the cases, such comments are placed
one tab stop left of before.  This is better all around: it leaves
more room on the line for comment text, and it means that in such
cases the comment uniformly starts at the next 4-space tab stop after
the code, rather than sometimes one and sometimes two tabs after.

Also, ensure that comments following #endif are indented the same
as comments following other preprocessor commands such as #else.
That inconsistency turns out to have been self-inflicted damage
from a poorly-thought-through post-indent "fixup" in pgindent.

This patch is much less interesting than the first round of indent
changes, but also bulkier, so I thought it best to separate the effects.

Discussion: https://postgr.es/m/E1dAmxK-0006EE-1r@gemulon.postgresql.org
Discussion: https://postgr.es/m/30527.1495162840@sss.pgh.pa.us
2017-06-21 15:19:25 -04:00
Tom Lane
e3860ffa4d Initial pgindent run with pg_bsd_indent version 2.0.
The new indent version includes numerous fixes thanks to Piotr Stefaniak.
The main changes visible in this commit are:

* Nicer formatting of function-pointer declarations.
* No longer unexpectedly removes spaces in expressions using casts,
  sizeof, or offsetof.
* No longer wants to add a space in "struct structname *varname", as
  well as some similar cases for const- or volatile-qualified pointers.
* Declarations using PG_USED_FOR_ASSERTS_ONLY are formatted more nicely.
* Fixes bug where comments following declarations were sometimes placed
  with no space separating them from the code.
* Fixes some odd decisions for comments following case labels.
* Fixes some cases where comments following code were indented to less
  than the expected column 33.

On the less good side, it now tends to put more whitespace around typedef
names that are not listed in typedefs.list.  This might encourage us to
put more effort into typedef name collection; it's not really a bug in
indent itself.

There are more changes coming after this round, having to do with comment
indentation and alignment of lines appearing within parentheses.  I wanted
to limit the size of the diffs to something that could be reviewed without
one's eyes completely glazing over, so it seemed better to split up the
changes as much as practical.

Discussion: https://postgr.es/m/E1dAmxK-0006EE-1r@gemulon.postgresql.org
Discussion: https://postgr.es/m/30527.1495162840@sss.pgh.pa.us
2017-06-21 14:39:04 -04:00
Dean Rasheed
bcbf392ec8 Prevent table partitions from being turned into views.
A table partition must be a table, not a view, so don't allow a
"_RETURN" rule to be added that would convert an existing table
partition into a view.
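
A sketch of the now-rejected sequence (object names invented):

    CREATE TABLE parted (a int) PARTITION BY LIST (a);
    CREATE TABLE part1 PARTITION OF parted FOR VALUES IN (1);
    -- fails now, instead of quietly turning the partition into a view
    CREATE RULE "_RETURN" AS ON SELECT TO part1 DO INSTEAD SELECT 1 AS a;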

Amit Langote

Discussion: https://postgr.es/m/CAEZATCVzFcAjZwC1bTFvJ09skB_sgkF4SwPKMywev-XTnimp9Q%40mail.gmail.com
2017-06-21 10:43:17 +01:00
Tom Lane
d412f79381 Make opr_sanity test complain about built-in functions marked prosecdef.
Currently, there are no built-in functions that are SECURITY DEFINER.
But we just found an instance where one was mistakenly marked that way,
so it seems prudent to add a test about it.  If we ever grow some
functions that are intentionally SECURITY DEFINER, we can alter the
expected output of this test, or adjust the query to filter out functions
for which it's okay.
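
The new check amounts to a query along these lines (a paraphrase, not the
exact test query), whose expected output is empty:

    SELECT p.oid, p.proname
    FROM pg_proc AS p
    WHERE p.prosecdef
    ORDER BY 1;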

Per suggestion from Robert Haas.

Discussion: https://postgr.es/m/CA+TgmoYXg7McY33+jbWmG=rS-HNUur0S6W8Q8kVNFf7epFimVA@mail.gmail.com
2017-06-20 17:06:43 -04:00
Tom Lane
d8e6b84bd2 Avoid regressions in foreign-key-based selectivity estimates.
David Rowley found that the "use the smallest per-column selectivity"
heuristic applied in some cases by get_foreign_key_join_selectivity()
was badly off if the FK columns are independent, producing estimates
much worse than we got before that code was added in 9.6.

One case where that heuristic was used was for LEFT and FULL outer joins
with the referenced rel on the outside of the join.  But we should not
really need to special-case those here.  eqjoinsel() never has had such a
special case; the correction is applied by calc_joinrel_size_estimate()
instead.  Let's just estimate such cases like inner joins and rely on that
later adjustment.  (I think there was something of a thinko here, in that
the comments seem to be thinking about the selectivity as defined for
semi/anti joins; but that shouldn't apply to left/full joins.)  Add a
regression test exercising such a case to show that this is sane in
at least some cases.

The other case where we used that heuristic was for SEMI/ANTI outer joins,
either if the referenced rel was on the outside, or if it was on the inside
but was part of a join within the RHS.  In either case, the FK doesn't give
us a lot of traction towards estimating the selectivity.  To ensure that
we don't have regressions from what happened before 9.6, let's punt by
ignoring the FK in such cases and applying the traditional selectivity
calculation.  (We might be able to improve on that later, but for now
I just want to be sure it's not worse than 9.5.)

Report and patch by David Rowley, simplified a bit by me.  Back-patch
to 9.6 where this code was added.

Discussion: https://postgr.es/m/CAKJS1f8NO8oCDcxrteohG6O72uU1saEVT9qX=R8pENr5QWerXw@mail.gmail.com
2017-06-19 15:33:41 -04:00
Peter Eisentraut
94da2a6a9a Use RangeVarGetRelidExtended() in AlterSequence()
This allows us to combine the opening and the ownership check.

Reported-by: Robert Haas <robertmhaas@gmail.com>
2017-06-16 10:24:50 -04:00
Heikki Linnakangas
30681c830d Fix dependency, when changing a function's argument/return type.
When a new base type is created using the old-style procedure of first
creating the input/output functions with "opaque" in place of the base
type, the "opaque" argument/return type is changed to the final base type,
on CREATE TYPE. However, we did not create a pg_depend record when doing
that, so the functions were left not depending on the type.

Fixes bug #14706, reported by Karen Huddleston.

Discussion: https://www.postgresql.org/message-id/20170614232259.1424.82774@wrigleys.postgresql.org
2017-06-16 11:33:12 +03:00
Peter Eisentraut
915379c3c2 psql: Improve display of "for all tables" publications
Show "All tables" property in \dRp and \dRp+.  Don't list tables for
such publications in \dRp+, since it's redundant and the list could be
very long.

Author: Masahiko Sawada <sawada.mshk@gmail.com>
Author: Jeff Janes <jeff.janes@gmail.com>
2017-06-15 10:46:41 -04:00
Peter Eisentraut
b6966d4627 Use DEFACLOBJ_ macros in error message instead of hardcoding
2017-06-14 14:44:24 -04:00
Robert Haas
b08df9cab7 Teach predtest.c about CHECK clauses to fix partitioning bugs.
In a CHECK clause, a null result means true, whereas in a WHERE clause
it means false.  predtest.c provided different functions depending on
which set of semantics applied to the predicate being proved, but had
no option to control what a null meant in the clauses provided as
axioms.  Add one.

Use that in the partitioning code when figuring out whether the
validation scan on a new partition can be skipped.  Rip out the
old logic that attempted (not very successfully) to compensate
for the absence of the necessary support in predtest.c.
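
For example (table names invented), attaching a partition whose existing
CHECK constraint already implies the partition constraint should now allow
the validation scan to be skipped:

    CREATE TABLE ptab (a int) PARTITION BY RANGE (a);
    CREATE TABLE ptab_p1 (a int,
      CHECK (a IS NOT NULL AND a >= 1 AND a < 100));
    -- the CHECK constraint implies the partition constraint,
    -- so ATTACH need not scan the existing data
    ALTER TABLE ptab ATTACH PARTITION ptab_p1 FOR VALUES FROM (1) TO (100);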

Ashutosh Bapat and Robert Haas, reviewed by Amit Langote and
incorporating feedback from Tom Lane.

Discussion: http://postgr.es/m/CAFjFpReT_kq_uwU_B8aWDxR7jNGE=P0iELycdq5oupi=xSQTOw@mail.gmail.com
2017-06-14 13:13:11 -04:00
Tom Lane
8e72239e9d Fix no-longer-valid shortcuts in expression_returns_set().
expression_returns_set() used to short-circuit its recursion upon
seeing certain node types, such as DistinctExpr, that it knew the
executor did not support set-valued arguments for.  That was never
inherent, though, just a reflection of laziness in execQual.c.
With the new implementation of SRFs there is no reason to think
that any scalar-valued expression node could not have a set-valued
subexpression, except for AggRefs and WindowFuncs where we know there
is a parser check rejecting it.  And indeed, the shortcut causes
unexpected failures for cases such as a SRF underneath DistinctExpr,
because the planner stops looking for SRFs too soon.

Discussion: https://postgr.es/m/5259.1497044025@sss.pgh.pa.us
2017-06-14 11:10:05 -04:00
Dean Rasheed
d3c3f2b1e2 Teach PL/pgSQL about partitioned tables.
Table partitioning, introduced in commit f0e44751d7, added a new
relkind - RELKIND_PARTITIONED_TABLE. Update a couple of places in
PL/pgSQL to handle it. Specifically plpgsql_parse_cwordtype() and
build_row_from_class() needed updating in order to make table%ROWTYPE
and table.col%TYPE work for partitioned tables.
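
A minimal sketch of what now works (the partitioned table is illustrative
only):

    CREATE TABLE meas (city_id int, logdate date) PARTITION BY RANGE (logdate);

    DO $$
    DECLARE
      r meas%ROWTYPE;            -- row type of a partitioned table
      d meas.logdate%TYPE;       -- column type of a partitioned table
    BEGIN
      d := current_date;
    END
    $$;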

Dean Rasheed, reviewed by Amit Langote.

Discussion: https://postgr.es/m/CAEZATCUnNOKN8sLML9jUzxecALWpEXK3a3W7y0PgFR4%2Buhgc%3Dg%40mail.gmail.com
2017-06-14 09:00:01 +01:00
Dean Rasheed
f356ec5744 Teach RemoveRoleFromObjectPolicy() about partitioned tables.
Table partitioning, introduced in commit f0e44751d7, added a new
relkind - RELKIND_PARTITIONED_TABLE. Update
RemoveRoleFromObjectPolicy() to handle it, otherwise DROP OWNED BY
will fail if the role has any RLS policies referring to partitioned
tables.

Dean Rasheed, reviewed by Amit Langote.

Discussion: https://postgr.es/m/CAEZATCUnNOKN8sLML9jUzxecALWpEXK3a3W7y0PgFR4%2Buhgc%3Dg%40mail.gmail.com
2017-06-14 08:43:40 +01:00
Tom Lane
0436f6bde8 Disallow set-returning functions inside CASE or COALESCE.
When we reimplemented SRFs in commit 69f4b9c85, our initial choice was
to allow the behavior to vary from historical practice in cases where a
SRF call appeared within a conditional-execution construct (currently,
only CASE or COALESCE).  But that was controversial to begin with, and
subsequent discussion has resulted in a consensus that it's better to
throw an error instead of executing the query differently from before,
so long as we can provide a reasonably clear error message and a way to
rewrite the query.

Hence, add a parser mechanism to allow detection of such cases during
parse analysis.  The mechanism just requires storing, in the ParseState,
a pointer to the set-returning FuncExpr or OpExpr most recently emitted
by parse analysis.  Then the parsing functions for CASE and COALESCE can
detect the presence of a SRF in their arguments by noting whether this
pointer changes while analyzing their arguments.  Furthermore, if it does,
it provides a suitable error cursor location for the complaint.  (This
means that if there's more than one SRF in the arguments, the error will
point at the last one to be analyzed not the first.  While connoisseurs of
parsing behavior might find that odd, it's unlikely the average user would
ever notice.)

While at it, we can also provide more specific error messages than before
about some pre-existing restrictions, such as no-SRFs-within-aggregates.
Also, reject at parse time cases where a NULLIF or IS DISTINCT FROM
construct would need to return a set.  We've never supported that, but the
restriction is depended on in more subtle ways now, so it seems wise to
detect it at the start.

Also, provide some documentation about how to rewrite a SRF-within-CASE
query using a custom wrapper SRF.
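
Concretely, a query of this shape (table and column invented) is now
rejected with an error whose cursor points at the set-returning function,
rather than silently behaving differently from pre-v10 releases:

    SELECT x, CASE WHEN x > 0 THEN generate_series(1, x) ELSE 0 END
    FROM (VALUES (0), (2)) AS v(x);

The documented rewrite moves the conditional logic into a custom
set-returning wrapper function; a LATERAL FROM item can often serve as
well.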

It turns out that the information_schema.user_mapping_options view
contained an instance of exactly the behavior we're now forbidding; but
rewriting it makes it clearer and safer too.

initdb forced because of user_mapping_options change.

Patch by me, with error message suggestions from Alvaro Herrera and
Andres Freund, pursuant to a complaint from Regina Obe.

Discussion: https://postgr.es/m/000001d2d5de$d8d66170$8a832450$@pcorp.us
2017-06-13 23:46:39 -04:00
Peter Eisentraut
272171279f psql: Use more consistent capitalization of some output headings
2017-06-13 14:41:14 -04:00
Dean Rasheed
b6263cd851 Teach relation_is_updatable() about partitioned tables.
Table partitioning, introduced in commit f0e44751d7, added a new
relkind - RELKIND_PARTITIONED_TABLE. Update relation_is_updatable() to
handle it. Specifically, partitioned tables and simple views built on
top of them are updatable.

This affects the SQL-callable functions pg_relation_is_updatable() and
pg_column_is_updatable(), and the views information_schema.views and
information_schema.columns.
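
For example (relation name invented), a partitioned table now reports as
updatable:

    CREATE TABLE events (id int, ts timestamptz) PARTITION BY RANGE (ts);
    -- expected to return a nonzero events bitmask rather than 0
    SELECT pg_relation_is_updatable('events'::regclass, true);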

Dean Rasheed, reviewed by Ashutosh Bapat.

Discussion: https://postgr.es/m/CAEZATCXnbiFkMXgF4Ez1pmM2c-tS1z33bSq7OGbw7QQhHov%2B6Q%40mail.gmail.com
2017-06-13 17:30:36 +01:00
Robert Haas
ee252f074b Fix failure to remove dependencies when a partition is detached.
Otherwise, dropping the partitioned table will automatically drop
any previously-detached children, which would be unfortunate.

Ashutosh Bapat and Rahila Syed, reviewed by Amit Langote and by me.

Discussion: http://postgr.es/m/CAFjFpRdOwHuGj45i25iLQ4QituA0uH6RuLX1h5deD4KBZJ25yg@mail.gmail.com
2017-06-13 11:51:42 -04:00
Tom Lane
78a030a441 Fix confusion about number of subplans in partitioned INSERT setup.
ExecInitModifyTable() thought there was a plan per partition, but no,
there's only one.  The problem had escaped detection so far because there
would only be visible misbehavior if there were a SubPlan (not an InitPlan)
in the quals being duplicated for each partition.  However, valgrind
detected a bogus memory access in test cases added by commit 4f7a95be2,
and investigation of that led to discovery of the bug.  The additional
test case added here crashes without the patch.

Patch by Amit Langote, test case by me.

Discussion: https://postgr.es/m/10974.1497227727@sss.pgh.pa.us
2017-06-12 23:29:53 -04:00
Joe Conway
4f7a95be2c Apply RLS policies to partitioned tables.
The new partitioned table capability added a new relkind, namely
RELKIND_PARTITIONED_TABLE. Update fireRIRrules() to apply RLS
policies on RELKIND_PARTITIONED_TABLE as it does RELKIND_RELATION.

In addition, add RLS regression test coverage for partitioned tables.

Issue raised by Fakhroutdinov Evgenievich and patch by Mike Palmiotto.
Regression test editorializing by me.

Discussion: https://postgr.es/m/flat/20170601065959.1486.69906@wrigleys.postgresql.org
2017-06-11 08:51:18 -07:00
Robert Haas
15ce775faa Prevent BEFORE triggers from violating partitioning constraints.
Since tuple-routing implicitly checks the partitioning constraints
at least for the levels of the partitioning hierarchy it traverses,
there's normally no need to revalidate the partitioning constraint
after performing tuple routing.  However, if there's a BEFORE trigger
on the target partition, it could modify the tuple, causing the
partitioning constraint to be violated.  Catch that case.

Also, instead of checking the root table's partition constraint after
tuple-routing, check it beforehand.  Otherwise, the rules for when
the partitioning constraint gets checked get too complicated, because
you sometimes have to check part of the constraint but not all of it.
This effectively reverts commit 39162b2030
in favor of a different approach altogether.

Report by me.  Initial debugging by Jeevan Ladhe.  Patch by Amit
Langote, reviewed by me.

Discussion: http://postgr.es/m/CA+Tgmoa9DTgeVOqopieV8d1QRpddmP65aCdxyjdYDoEO5pS5KA@mail.gmail.com
2017-06-07 12:50:45 -04:00
Peter Eisentraut
9907b55ceb Fix ALTER SUBSCRIPTION grammar ambiguity
There was a grammar ambiguity between SET PUBLICATION name REFRESH and
SET PUBLICATION SKIP REFRESH, because SKIP is not a reserved word.  To
resolve that, fold the refresh choice into the WITH options.  Refreshing
is the default now.
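
So the two spellings become, roughly (subscription and publication names
are placeholders):

    ALTER SUBSCRIPTION mysub SET PUBLICATION mypub;
        -- refreshes by default
    ALTER SUBSCRIPTION mysub SET PUBLICATION mypub WITH (refresh = false);
        -- skips the refresh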

Reported-by: tushar <tushar.ahuja@enterprisedb.com>
2017-06-05 21:43:25 -04:00
Tom Lane
0d18852666 Disallow CREATE INDEX if table is already in use in current session.
If we allow this, whatever outer command has the table open will not know
about the new index and may fail to update it as needed, as shown in a
report from Laurenz Albe.  We already had such a prohibition in place for
ALTER TABLE, but the CREATE INDEX syntax missed the check.
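
A sketch of the kind of situation that is now rejected (function and table
names invented): a CREATE INDEX issued from a function while an outer query
still has the same table open.

    CREATE TABLE t (a int);
    INSERT INTO t VALUES (1);
    CREATE FUNCTION add_idx() RETURNS int LANGUAGE plpgsql AS
    $$ BEGIN EXECUTE 'CREATE INDEX ON t (a)'; RETURN 1; END $$;
    -- the outer SELECT has t open, so the nested CREATE INDEX now errors
    SELECT add_idx() FROM t;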

Fixing it requires an API change for DefineIndex(), which conceivably
would break third-party extensions if we were to back-patch it.  Given
how long this problem has existed without being noticed, fixing it in
the back branches doesn't seem worth that risk.

Discussion: https://postgr.es/m/A737B7A37273E048B164557ADEF4A58B53A4DC9A@ntex2010i.host.magwien.gv.at
2017-06-04 12:02:41 -04:00
Tom Lane
23886581b5 Fix old corner-case logic error in final_cost_nestloop().
When costing a nestloop with stop-at-first-inner-match semantics, and a
non-indexscan inner path, final_cost_nestloop() wants to charge the full
scan cost of the inner rel at least once, with additional scans charged
at inner_rescan_run_cost which might be less.  However the logic for
doing this effectively assumed that outer_matched_rows is at least 1.
If it's zero, which is not unlikely for a small outer rel, we ended up
charging inner_run_cost plus N times inner_rescan_run_cost, as much as
double the correct charge for an outer rel with only one row that
we're betting won't be matched.  (Unless the inner rel is materialized,
in which case it has very small inner_rescan_run_cost and the cost
is not so far off what it should have been.)

The upshot of this was that the planner had a tendency to select plans
that failed to make effective use of the stop-at-first-inner-match
semantics, and that might have Materialize nodes in them even when the
predicted number of executions of the Materialize subplan was only 1.
This was not so obvious before commit 9c7f5229a, because the case only
arose in connection with semi/anti joins where there's not freedom to
reverse the join order.  But with the addition of unique-inner joins,
it could result in some fairly bad planning choices, as reported by
Teodor Sigaev.  Indeed, some of the test cases added by that commit
have plans that look dubious on closer inspection, and are changed
by this patch.

Fix the logic to ensure that we don't charge for too many inner scans.
I chose to adjust it so that the full-freight scan cost is associated
with an unmatched outer row if possible, not a matched one, since that
seems like a better model of what would happen at runtime.

This is a longstanding bug, but given the lesser impact in back branches,
and the lack of field complaints, I won't risk a back-patch.

Discussion: https://postgr.es/m/CAKJS1f-LzkUsFxdJ_-Luy38orQ+AdEXM5o+vANR+-pHAWPSecg@mail.gmail.com
2017-06-03 13:48:15 -04:00
Tom Lane
d5cb3bab56 Fix improper quoting of format_type_be() output.
Per our message style guidelines, error messages incorporating the
results of format_type_be() and its siblings should not add quotes
around those results, because those functions already add quotes
at need.  Fix a few places that hadn't gotten that memo.
2017-05-29 21:48:26 -04:00
Tom Lane
68cff231e3 Make edge-case behavior of jsonb_populate_record match json_populate_record
json_populate_record throws an error if asked to convert a JSON scalar
or array into a composite type.  jsonb_populate_record was returning
a record full of NULL fields instead.  It seems better to make it
throw an error for this case as well.
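
A rough example of the now-aligned behavior (composite type invented for
the purpose):

    CREATE TYPE pair AS (a int, b int);
    -- both now raise an error; previously the jsonb variant quietly
    -- returned a row of NULLs
    SELECT * FROM json_populate_record(NULL::pair, '[1,2]');
    SELECT * FROM jsonb_populate_record(NULL::pair, '[1,2]');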

Nikita Glukhov

Discussion: https://postgr.es/m/fbd1d566-bba0-a3de-d6d0-d3b1d7c24ff2@postgrespro.ru
2017-05-29 19:29:42 -04:00
Tom Lane
76a3df6e5e Code review focused on new node types added by partitioning support.
Fix failure to check that we got a plain Const from const-simplification of
a coercion request.  This is the cause of bug #14666 from Tian Bing: there
is an int4 to money cast, but it's only stable not immutable (because of
dependence on lc_monetary), resulting in a FuncExpr that the code was
miserably unequipped to deal with, or indeed even to notice that it was
failing to deal with.  Add test cases around this coercion behavior.

In view of the above, sprinkle the code liberally with castNode() macros,
in hope of catching the next such bug a bit sooner.  Also, change some
functions that were randomly declared to take Node* to take more specific
pointer types.  And change some struct fields that were declared Node*
but could be given more specific types, allowing removal of assorted
explicit casts.

Place PARTITION_MAX_KEYS check a bit closer to the code it's protecting.
Likewise check only-one-key-for-list-partitioning restriction in a less
random place.

Avoid not-per-project-style usages like !strcmp(...).

Fix assorted failures to avoid scribbling on the input of parse
transformation.  I'm not sure how necessary this is, but it's entirely
silly for these functions to be expending cycles to avoid that and not
getting it right.

Add guards against partitioning on system columns.

Put backend/nodes/ support code into an order that matches handling
of these node types elsewhere.

Annotate the fact that somebody added location fields to PartitionBoundSpec
and PartitionRangeDatum but forgot to handle them in
outfuncs.c/readfuncs.c.  This is fairly harmless for production purposes
(since readfuncs.c would just substitute -1 anyway) but it's still bogus.
It's not worth forcing a post-beta1 initdb just to fix this, but if we
have another reason to force initdb before 10.0, we should go back and
clean this up.

Contrariwise, somebody added location fields to PartitionElem and
PartitionSpec but forgot to teach exprLocation() about them.

Consolidate duplicative code in transformPartitionBound().

Improve a couple of error messages.

Improve assorted commentary.

Re-pgindent the files touched by this patch; this affects a few comment
blocks that must have been added quite recently.

Report: https://postgr.es/m/20170524024550.29935.14396@wrigleys.postgresql.org
2017-05-28 23:20:28 -04:00
Tom Lane
eac0a6c7d3 Avoid locale-dependent output in select_views regression test.
Use 'COLLATE "C"' to force locale-independent sorting of the iexit
view results in select_views.sql.  We aren't particularly interested
in the exact sorting behavior here, and this doesn't change the shape
of the generated plan, so it seems like a wash as far as the goals
of this test go.

This is in response to bug #14637 from Tomasz Kontusz.  It doesn't
fully resolve his problem, because he also saw some diffs in the
create_index test.  But other people have had issues with select_views
too, and this fix lets us drop the select_views_1.out variant expected
file altogether, which is a nice win from a maintenance standpoint.

Emre Hasegeli

Discussion: https://postgr.es/m/20170501000609.24360.24248@wrigleys.postgresql.org
2017-05-28 14:52:18 -04:00
Tom Lane
94aced8cd0 Move autogenerated array types out of the way during ALTER ... RENAME.
Commit 9aa3c782c added code to allow CREATE TABLE/CREATE TYPE to not fail
when the desired type name conflicts with an autogenerated array type, by
dint of renaming the array type out of the way.  But I (tgl) overlooked
that the same case arises in ALTER TABLE/TYPE RENAME.  Fix that too.
Back-patch to all supported branches.

Report and patch by Vik Fearing, modified a bit by me

Discussion: https://postgr.es/m/0f4ade49-4f0b-a9a3-c120-7589f01d1eb8@2ndquadrant.com
2017-05-26 15:16:59 -04:00
Tom Lane
d761fe2182 Fix precision and rounding issues in money multiplication and division.
The cash_div_intX functions applied rint() to the result of the division.
That's not merely useless (because the result is already an integer) but
it causes precision loss for values larger than 2^52 or so, because of
the forced conversion to float8.

On the other hand, the cash_mul_fltX functions neglected to apply rint() to
their multiplication results, thus possibly causing off-by-one outputs.

Per C standard, arithmetic between any integral value and a float value is
performed in float format.  Thus, cash_mul_flt4 and cash_div_flt4 produced
answers good to only about six digits, even when the float value is exact.
We can improve matters noticeably by widening the float inputs to double.
(It's tempting to consider using "long double" arithmetic if available,
but that's probably too much of a stretch for a back-patched fix.)

Also, document that cash_div_intX operators truncate rather than round.
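
For instance (under a dollar-formatting lc_monetary):

    SELECT '7.00'::money / 3;   -- $2.33 (truncated), not $2.34

while money-times-float results are now rounded to the nearest cent rather
than occasionally coming out off by one.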

Per bug #14663 from Richard Pistole.  Back-patch to all supported branches.

Discussion: https://postgr.es/m/22403.1495223615@sss.pgh.pa.us
2017-05-21 13:05:16 -04:00
Robert Haas
b522759508 Copy partitioned_rels lists to avoid shared substructure.
Otherwise, set_plan_refs() can get applied to the same list
multiple times through different references, leading to chaos.

Amit Langote, Dilip Kumar, and Robert Haas, reviewed by Ashutosh
Bapat.  Original report by Sveinn Sveinsson.

Discussion: http://postgr.es/m/20170517141151.1435.79890@wrigleys.postgresql.org
2017-05-19 15:26:05 -04:00
Robert Haas
3ec76ff1f2 Don't explicitly mark range partitioning columns NOT NULL.
This seemed like a good idea originally because there's no way to mark
a range partition as accepting NULL, but that now seems more like a
current limitation than something we want to lock down for all time.
For example, there's a proposal to add the notion of a default
partition which accepts all rows not otherwise routed, which directly
conflicts with the idea that a range-partitioned table should never
allow nulls anywhere.  So let's change this while we still can, by
putting the NOT NULL test into the partition constraint instead of
changing the column properties.
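
Sketch of the visible difference (table names invented):

    CREATE TABLE rparted (a int) PARTITION BY RANGE (a);
    CREATE TABLE rparted_1 PARTITION OF rparted FOR VALUES FROM (1) TO (10);
    -- column "a" of rparted_1 is no longer force-marked NOT NULL;
    -- instead, the implicit partition constraint rejects NULLs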

Amit Langote and Robert Haas, reviewed by Amit Kapila

Discussion: http://postgr.es/m/8e2dd63d-c6fb-bb74-3c2b-ed6d63629c9d@lab.ntt.co.jp
2017-05-18 13:49:31 -04:00
Peter Eisentraut
6234569851 Improve CREATE SUBSCRIPTION option parsing
When creating a subscription with slot_name = NONE, we failed to check
that also create_slot = false and enabled = false were set.  This
created an invalid subscription and could later lead to a crash if a
NULL slot name was accessed.  Add more checks around that for
robustness.
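
Roughly, the rule now enforced is (connection string and names are
placeholders):

    -- rejected: slot_name = NONE without the other two settings
    CREATE SUBSCRIPTION s1 CONNECTION 'dbname=src' PUBLICATION p1
      WITH (slot_name = NONE);

    -- accepted
    CREATE SUBSCRIPTION s1 CONNECTION 'dbname=src' PUBLICATION p1
      WITH (slot_name = NONE, create_slot = false, enabled = false);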

Reported-by: tushar <tushar.ahuja@enterprisedb.com>
2017-05-17 20:47:37 -04:00
Bruce Momjian
a6fd7b7a5f Post-PG 10 beta1 pgindent run
perltidy run not included.
2017-05-17 16:31:56 -04:00
Peter Eisentraut
3db22794b7 Add more tests for CREATE SUBSCRIPTION
Add some tests for parsing different option combinations.  Fix some of
the resulting error messages for recent changes in option naming.

Author: Masahiko Sawada <sawada.mshk@gmail.com>
2017-05-17 12:24:48 -04:00
Kevin Grittner
a19ea9c660 Revert "Add a test for transition table usage in FOR EACH ROW trigger."
This reverts commit 4a03f935b3.
2017-05-16 17:15:33 -05:00
Kevin Grittner
4a03f935b3 Add a test for transition table usage in FOR EACH ROW trigger.
2017-05-16 16:09:55 -05:00
Robert Haas
59f40566ca Fix relcache leak when row triggers on partitions are fired by COPY.
Thomas Munro, reviewed by Amit Langote

Discussion: http://postgr.es/m/CAEepm=15Jss-yhFApuKzxcoCuFnb8TR8iQiWMjG=CLYPx48QLw@mail.gmail.com
2017-05-16 12:46:32 -04:00
Tom Lane
e3f67a5a17 Update oidjoins regression test for v10.
2017-05-15 14:04:11 -04:00
Tom Lane
12590c5d33 Fix unsafe reference into relcache in constructed CommentStmt.
The CommentStmt made by RebuildConstraintComment() has to pstrdup the
relation name, else it will contain a dangling pointer after that
relcache entry is flushed.  (I'm less sure that pstrdup'ing conname
is necessary, but let's be safe.)  Failure to do this leads to weird
errors or crashes, as reported by Marko Elezovic.

Bug introduced by commit e42375fc8, so back-patch to 9.5 as that was.

Fix by David Rowley, regression test by Michael Paquier

Discussion: https://postgr.es/m/DB6PR03MB30775D58E732D4EB0C13725B9AE00@DB6PR03MB3077.eurprd03.prod.outlook.com
2017-05-15 11:33:44 -04:00
Tom Lane
eda4ef8151 stats regression test's wait_for_stats() must check timestamp too.
pg_stat_get_snapshot_timestamp() returns the timestamp seen in the "global"
stats file.  Because pgstat_write_statsfiles() writes per-DB stats files
before the global file (or at least before renaming it into place), there
is a window where the test backend can see all the stats updates that
wait_for_stats() was checking for (all of which come from the per-DB file)
but also see the same global stats file it had seen at the start of the
test script.  This results in a failure in only the "snapshot_newer" query,
as reported by a couple of buildfarm members recently.

I suspect that this ought to be back-patched.  Commit 4e37b3e15 has
evidently increased the probability of this window getting hit, but
it's not apparent why it could not have been hit before.  I'll refrain
for the moment though.
2017-05-14 23:33:18 -04:00
Tom Lane
7606bbb3de Make stats regression test more robust in the face of parallel query.
Commit 60690a6fe attempted to fix the wait_for_stats() function in this
test so that it would wait properly if the tenk2 scans were done in
parallel workers instead of the main session (typically as a consequence of
force_parallel_mode being turned on).  However, we made it test for whether
the main session's actions had been reported by looking for inserts on
'trunc_stats_test'.  This is the Wrong Thing, because those aren't the last
updates we expect the main session to do.  As shown by recent failures on
buildfarm member frogmouth, it's entirely likely that the trunc_stats_test
updates will be reported in a separate message from later updates, which
means there can be a window in which wait_for_stats() will exit but not all
the updates we are expecting to see will have arrived.  We should test for
the last updates we're expecting, namely those on 'trunc_stats_test4'.

Unfortunately, I doubt that this explains frogmouth's failures, because
there's no reason to believe that it's running the tenk2 queries in
parallel.  Still, the test is wrong on its own terms, so fix and back-patch
to 9.6 where parallel query came in.
2017-05-14 21:39:10 -04:00
Tom Lane
b5b0db19b8 Fix handling of extended statistics during ALTER COLUMN TYPE.
ALTER COLUMN TYPE on a column used by a statistics object fails since
commit 928c4de30, because the relevant switch in ATExecAlterColumnType
is unprepared for columns to have dependencies from OCLASS_STATISTIC_EXT
objects.

Although the existing types of extended statistics don't actually need us
to do any work for a column type change, it seems completely indefensible
that that assumption is hidden behind the failure of an unrelated module
to contain any code for the case.  Hence, create and call an API function
in statscmds.c where the assumption can be explained, and where we could
add code to deal with the problem when it inevitably becomes real.

Also, the reason this wasn't handled before, neither for extended stats
nor for the last half-dozen new OCLASS kinds :-(, is that the default:
in that switch suppresses compiler warnings, allowing people to miss the
need to consider it when adding an OCLASS.  We don't really need a default
because surely getObjectClass should only return valid values of the enum;
so remove it, and add the missed OCLASS entries where they should be.

Discussion: https://postgr.es/m/20170512221010.nglatgt5azzdxjlj@alvherre.pgsql
2017-05-14 12:22:25 -04:00
Tom Lane
f04c9a6146 Standardize terminology for pg_statistic_ext entries.
Consistently refer to such an entry as a "statistics object", not just
"statistics" or "extended statistics".  Previously we had a mismash of
terms, accompanied by utter confusion as to whether the term was
singular or plural.  That's not only grating (at least to the ear of
a native English speaker) but could be outright misleading, eg in error
messages that seemed to be referring to multiple objects where only one
could be meant.

This commit fixes the code and a lot of comments (though I may have
missed a few).  I also renamed two new SQL functions,
pg_get_statisticsextdef -> pg_get_statisticsobjdef
pg_statistic_ext_is_visible -> pg_statistics_obj_is_visible
to conform better with this terminology.

I have not touched the SGML docs other than fixing those function
names; the docs certainly need work but it seems like a separable task.

Discussion: https://postgr.es/m/22676.1494557205@sss.pgh.pa.us
2017-05-14 10:55:01 -04:00
Robert Haas
1848b73d45 Teach \d+ to show partitioning constraints.
The fact that we didn't have this in the first place is likely why
the problem fixed by f8bffe9e6d
escaped detection.

Patch by Amit Langote, reviewed and slightly adjusted by me.

Discussion: http://postgr.es/m/CA+TgmoYWnV2GMnYLG-Czsix-E1WGAbo4D+0tx7t9NdfYBDMFsA@mail.gmail.com
2017-05-13 12:04:53 -04:00
Robert Haas
f8bffe9e6d Fix multi-column range partitioning constraints.
The old logic was just plain wrong.
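
For reference, the constraints in question are the implicit ones generated
for partitions such as (names invented):

    CREATE TABLE mr (a int, b int) PARTITION BY RANGE (a, b);
    CREATE TABLE mr_1 PARTITION OF mr FOR VALUES FROM (1, 1) TO (1, 10);
    -- a correct constraint for mr_1 must admit (1, 5)
    -- but reject (1, 10) and (2, 1)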

Report by Olaf Gawenda.  Patch by Amit Langote, reviewed by
Beena Emerson and by me.  Minor adjustments by me also.
2017-05-13 11:36:41 -04:00
Tom Lane
4e37b3e15c Avoid hard-wired sleep delays in stats regression test.
On faster machines, the overall runtime for running the core regression
tests is under twenty seconds these days, of which the hard-wired delays
in the stats test are a significant fraction.  But on closer inspection,
it seems like we shouldn't need those.

The initial 2-second delay is there only to reduce the risk of the test's
stats messages not getting sent due to contention.  But analysis of the
last ten years' worth of buildfarm runs shows no evidence that such
failures actually occur.  (We do see failures that look like stats
messages not getting sent, particularly on Windows; but there is little
reason to believe that the initial delay reduces their frequency.)

The later 1-second delay is there to ensure that our session's stats
will have gotten sent.  But we could also do that by starting a fresh
session, which takes well under 1 second even on very slow machines.

Hence, let's remove both delays and see what happens.  The first delay
was the only test of pg_sleep_for() in the regression tests, but we can
move that responsibility into wait_for_stats().

Discussion: https://postgr.es/m/17795.1493869423@sss.pgh.pa.us
2017-05-13 09:42:12 -04:00
Tom Lane
928c4de309 Fix dependencies for extended statistics objects.
A stats object ought to have a dependency on each individual column
it reads, not the entire table.  Doing this honestly lets us get rid
of the hard-wired logic in RemoveStatisticsExt, which seems to have
been misguidedly modeled on RemoveStatistics; and it will be far easier
to extend to multiple tables later.

Also, add overlooked dependency on owner, and make the dependency on
schema be NORMAL like every other such dependency.

There remains some unfinished work here, which is to allow statistics
objects to be extension members.  That takes more effort than just
adding the dependency call, though, so I left it out for now.

initdb forced because this changes the set of pg_depend records that
should exist for a statistics object.

Discussion: https://postgr.es/m/22676.1494557205@sss.pgh.pa.us
2017-05-12 16:26:31 -04:00
Alvaro Herrera
bc085205c8 Change CREATE STATISTICS syntax
Previously, we had the WITH clause in the middle of the command, where
you'd specify both generic options as well as statistic types.  Few
people liked this, so this commit changes it to remove the WITH keyword
from that clause and makes it accept statistic types only.  (We
currently don't have any generic options, but if we invent any in the
future, we will gain a new WITH clause, probably at the end of the
command).

Also, the column list is now specified without parens, which makes the
whole command look more similar to a SELECT command.  This change will
let us expand the command to supporting expressions (not just columns
names) as well as multiple tables and their join conditions.
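
Putting the two changes together, the command now reads roughly like this
(names invented):

    -- previously something like:
    --   CREATE STATISTICS s1 WITH (dependencies) ON (a, b) FROM t1;
    -- new form:
    CREATE STATISTICS s1 (ndistinct, dependencies) ON a, b FROM t1;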

Tom added lots of code comments and fixed some parts of the CREATE
STATISTICS reference page, too; more changes in this area are
forthcoming.  He also fixed a potential problem in the alter_generic
regression test, reducing verbosity on a cascaded drop to avoid
dependency on message ordering, as we do in other tests.

Tom also closed a security bug: we documented that table ownership was
required in order to create a statistics object on it, but didn't
actually implement it.

Implement tab-completion for statistics objects.  This can stand some
more improvement.

Authors: Alvaro Herrera, with lots of cleanup by Tom Lane
Discussion: https://postgr.es/m/20170420212426.ltvgyhnefvhixm6i@alvherre.pgsql
2017-05-12 14:59:35 -03:00
Peter Eisentraut
b807f59828 Rework the options syntax for logical replication commands
For CREATE/ALTER PUBLICATION/SUBSCRIPTION, use similar option style as
other statements that use a WITH clause for options.

Author: Petr Jelinek <petr.jelinek@2ndquadrant.com>
2017-05-12 08:57:49 -04:00
Tom Lane
d10c626de4 Rename WAL-related functions and views to use "lsn" not "location".
Per discussion, "location" is a rather vague term that could refer to
multiple concepts.  "LSN" is an unambiguous term for WAL locations and
should be preferred.  Some function names, view column names, and function
output argument names used "lsn" already, but others used "location",
as well as yet other terms such as "wal_position".  Since we've already
renamed a lot of things in this area from "xlog" to "wal" for v10,
we may as well incur a bit more compatibility pain and make these names
all consistent.
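
For example, the current-WAL-position function is now spelled

    SELECT pg_current_wal_lsn();
    -- replacing the interim pg_current_wal_location()
    -- (pg_current_xlog_location() before the xlog-to-wal renaming)

and similarly for the other renamed functions and view columns.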

David Rowley, minor additional docs hacking by me

Discussion: https://postgr.es/m/CAKJS1f8O0njDKe8ePFQ-LK5-EjwThsDws6ohJ-+c6nWK+oUxtg@mail.gmail.com
2017-05-11 11:49:59 -04:00
Robert Haas
9e6104c667 Prohibit transition tables on views and foreign tables.
Thomas Munro, per off-list report from Prabhat Sabu.  Changes
to the message wording for consistency with the existing
relkind check for partitioned tables by me.

Discussion: http://postgr.es/m/CAEepm=2xJFFpGM+N=gpWx-9Nft2q1oaFZX07_y23AHCrJQLt0g@mail.gmail.com
2017-05-09 23:34:02 -04:00
Robert Haas
29fd3d9da0 Don't permit transition tables with TRUNCATE triggers.
Prior to this prohibition, such a trigger caused a crash.

Thomas Munro, per a report from Neha Sharma.  I added a
regression test.

Discussion: http://postgr.es/m/CAEepm=0VR5W-N38eTkO_FqJbGqQ_ykbBRmzmvHyxDhy1p=0Csw@mail.gmail.com
2017-05-09 23:24:23 -04:00
Robert Haas
3439f84475 Disallow finite partition bound following earlier UNBOUNDED column.
Amit Langote, per an observation by me.

Discussion: http://postgr.es/m/CA+TgmoYWnV2GMnYLG-Czsix-E1WGAbo4D+0tx7t9NdfYBDMFsA@mail.gmail.com
2017-05-09 22:41:12 -04:00
Peter Eisentraut
013c1178fd Remove the NODROP SLOT option from DROP SUBSCRIPTION
It turned out this approach had problems, because a DROP command should
not have any options other than CASCADE and RESTRICT.  Instead, always
attempt to drop the slot if there is one configured, but also add an
ALTER SUBSCRIPTION action to set the slot to NONE.

Author: Petr Jelinek <petr.jelinek@2ndquadrant.com>
Reported-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://www.postgresql.org/message-id/29431.1493730652@sss.pgh.pa.us
2017-05-09 10:20:42 -04:00
Peter Eisentraut
fe974cc5a6 Check connection info string in ALTER SUBSCRIPTION
Previously it would allow an invalid connection string to be set.

Author: Petr Jelinek <petr.jelinek@2ndquadrant.com>
Reported-by: tushar <tushar.ahuja@enterprisedb.com>
2017-05-08 14:01:00 -04:00
Noah Misch
3eefc51053 Match pg_user_mappings limits to information_schema.user_mapping_options.
Both views replace the umoptions field with NULL when the user does not
meet qualifications to see it.  They used different qualifications, and
pg_user_mappings' documented qualifications did not match its implemented
qualifications.  Make its documentation and implementation match those
of user_mapping_options.  One might argue for stronger qualifications,
but these have long, documented tenure.  pg_user_mappings has always
exhibited this problem, so back-patch to 9.2 (all supported versions).

Michael Paquier and Feike Steenbergen.  Reviewed by Jeff Janes.
Reported by Andrew Wheelwright.

Security: CVE-2017-7486
2017-05-08 07:24:24 -07:00
Peter Eisentraut
e2d4ef8de8 Add security checks to selectivity estimation functions
Some selectivity estimation functions run user-supplied operators over
data obtained from pg_statistic without security checks, which allows
those operators to leak pg_statistic data without having privileges on
the underlying tables.  Fix by checking that one of the following is
satisfied: (1) the user has table or column privileges on the table
underlying the pg_statistic data, or (2) the function implementing the
user-supplied operator is leak-proof.  If neither is satisfied, planning
will proceed as if there are no statistics available.

At least one of these is satisfied in most cases in practice.  The only
situations that are negatively impacted are user-defined or
not-leak-proof operators on a security-barrier view.

Reported-by: Robert Haas <robertmhaas@gmail.com>
Author: Peter Eisentraut <peter_e@gmx.net>
Author: Tom Lane <tgl@sss.pgh.pa.us>

Security: CVE-2017-7484
2017-05-08 09:26:32 -04:00
Heikki Linnakangas
eb61136dc7 Remove support for password_encryption='off' / 'plain'.
Storing passwords in plaintext hasn't been a good idea for a very long
time, if ever. Now seems like a good time to finally forbid it, since we're
messing with this in PostgreSQL 10 anyway.

Remove the CREATE/ALTER USER UNENCRYPTED PASSWORD 'foo' syntax, since
storing passwords unencrypted is no longer supported. ENCRYPTED PASSWORD
'foo' is still accepted, but ENCRYPTED is now just a noise word; it does
the same as just PASSWORD 'foo'.

Likewise, remove the --unencrypted option from createuser, but accept
--encrypted as a no-op for backward compatibility. AFAICS, --encrypted was
a no-op even before this patch, because createuser encrypted the password
before sending it to the server even if --encrypted was not specified. It
added the ENCRYPTED keyword to the SQL command, but since the password was
already in encrypted form, it didn't make any difference. The documentation
was not clear on whether that was intended or not, but it's moot now.

Also, while password_encryption='on' is still accepted as an alias for
'md5', it is now marked as hidden, so that it is not listed as an accepted
value in error hints, for example. That's not directly related to removing
'plain', but it seems better this way.

Reviewed by Michael Paquier

Discussion: https://www.postgresql.org/message-id/16e9b768-fd78-0b12-cfc1-7b6b7f238fde@iki.fi
2017-05-08 11:26:07 +03:00
Heikki Linnakangas
0557a5dc2c Make SCRAM salts and nonces longer.
The salt is stored base64-encoded. With the old 10-byte raw length, it was
always padded to 16 bytes after encoding. We might as well use 12 raw bytes
for the salt, and it's still encoded into 16 bytes.
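
The padding arithmetic can be sanity-checked with built-in functions:

    SELECT length(encode(decode(repeat('00', 10), 'hex'), 'base64'));
        -- 16, with '==' padding
    SELECT length(encode(decode(repeat('00', 12), 'hex'), 'base64'));
        -- 16, no padding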

Similarly for the random nonces, use a raw length that's divisible by 3, so
that there's no padding after base64 encoding. Make the nonces longer while
we're at it. 10 bytes was probably enough to prevent replay attacks, but
there's no reason to be skimpy here.

Per suggestion from Álvaro Hernández Tortosa.

Discussion: https://www.postgresql.org/message-id/df8c6e27-4d8e-5281-96e5-131a4e638fc8@8kdata.com
2017-05-05 10:02:13 +03:00
Tom Lane
3f074845a8 Fix pfree-of-already-freed-tuple when rescanning a GiST index-only scan.
GiST's getNextNearest() function attempts to pfree the previously-returned
tuple if any (that is, scan->xs_hitup in HEAD, or scan->xs_itup in older
branches).  However, if we are rescanning a plan node after ending a
previous scan early, those tuple pointers could be pointing to garbage,
because they would be pointing into the scan's pageDataCxt or queueCxt
which has been reset.  In a debug build this reliably results in a crash,
although I think it might sometimes accidentally fail to fail in
production builds.

To fix, clear the pointer field anyplace we reset a context it might
be pointing into.  This may be overkill --- I think probably only the
queueCxt case is involved in this bug, so that resetting in gistrescan()
would be sufficient --- but dangling pointers are generally bad news,
so let's avoid them.

Another plausible answer might be to just not bother with the pfree in
getNextNearest().  The reconstructed tuples would go away anyway in the
context resets, and I'm far from convinced that freeing them a bit earlier
really saves anything meaningful.  I'll stick with the original logic in
this patch, but if we find more problems in the same area we should
consider that approach.

Per bug #14641 from Denis Smirnov.  Back-patch to 9.5 where this
logic was introduced.

Discussion: https://postgr.es/m/20170504072034.24366.57688@wrigleys.postgresql.org
2017-05-04 13:59:39 -04:00
Peter Eisentraut
0de791ed76 Fix cursor_to_xml in tableforest false mode
It only produced <row> elements but no wrapping <table> element.

By contrast, cursor_to_xmlschema produced a schema that is now correct
but did not previously match the XML data produced by cursor_to_xml.
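
A minimal way to see the difference (cursor name arbitrary; must run inside
a transaction block):

    BEGIN;
    DECLARE c CURSOR FOR SELECT 1 AS x;
    -- the result now includes a wrapping <table> element around the rows
    SELECT cursor_to_xml('c', 10, false, false, '');
    COMMIT;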

In passing, also fix a minor misunderstanding about moving cursors in
the tests related to this.

Reported-by: filip@jirsak.org
Based-on-patch-by: Thomas Munro <thomas.munro@enterprisedb.com>
2017-05-03 21:41:10 -04:00
Tom Lane
4dd4104342 Remove useless and rather expensive stanza in matview regression test.
This removes a test case added by commit b69ec7cc9, which was intended
to exercise a corner case involving the rule used at that time that
materialized views were unpopulated iff they had physical size zero.
We got rid of that rule very shortly later, in commit 1d6c72a55, but
kept the test case.  However, because the case now asks what VACUUM
will do to a zero-sized physical file, it would be pretty surprising
if the answer were ever anything but "nothing" ... and if things were
indeed that broken, surely we'd find it out from other tests.  Since
the test involves a table that's fairly large by regression-test
standards (100K rows), it's quite slow to run.  Dropping it should
save some buildfarm cycles, so let's do that.

Discussion: https://postgr.es/m/32386.1493831320@sss.pgh.pa.us
2017-05-03 19:37:01 -04:00
Alvaro Herrera
93bbeec6a2 extstats: change output functions to emit valid JSON
Manipulating extended statistics is more convenient as JSON than the
current ad-hoc format, so let's change it before it's too late.

Discussion: https://postgr.es/m/20170420193828.k3fliiock5hdnehn@alvherre.pgsql
2017-05-02 18:49:32 -03:00
Tom Lane
92a43e4857 Reduce semijoins with unique inner relations to plain inner joins.
If the inner relation can be proven unique, that is it can have no more
than one matching row for any row of the outer query, then we might as
well implement the semijoin as a plain inner join, allowing substantially
more freedom to the planner.  This is a form of outer join strength
reduction, but it can't be implemented in reduce_outer_joins() because
we don't have enough info about the individual relations at that stage.
Instead do it much like remove_useless_joins(): once we've built base
relations, we can make another pass over the SpecialJoinInfo list and
get rid of any entries representing reducible semijoins.
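
Illustrative shape of a query that can benefit (tables invented; the plan
change is visible with EXPLAIN):

    CREATE TABLE orders (id int PRIMARY KEY, cust int);
    CREATE TABLE shipped (order_id int PRIMARY KEY);
    -- since shipped.order_id is unique, this semijoin can now be planned
    -- as a plain inner join, giving the planner more freedom
    EXPLAIN SELECT *
    FROM orders o
    WHERE o.id IN (SELECT order_id FROM shipped);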

This is essentially a followon to the inner-unique patch (commit 9c7f5229a)
and makes use of the proof machinery that that patch created.  We need only
minor refactoring of innerrel_is_unique's API to support this usage.

Per performance complaint from Teodor Sigaev.

Discussion: https://postgr.es/m/f994fc98-389f-4a46-d1bc-c42e05cb43ed@sigaev.ru
2017-05-01 14:53:42 -04:00
Tom Lane
2057a58d16 Fix mis-optimization of semijoins with more than one LHS relation.
The inner-unique patch (commit 9c7f5229a) supposed that if we're
considering a JOIN_UNIQUE_INNER join path, we can always set inner_unique
for the join, because the inner path produced by create_unique_path should
be unique relative to the outer relation.  However, that's true only if
we're considering joining to the whole outer relation --- otherwise we may
be applying only some of the join quals, and so the inner path might be
non-unique from the perspective of this join.  Adjust the test to only
believe that we can set inner_unique if we have the whole semijoin LHS on
the outer side.

There is more that can be done in this area, but this commit is only
intended to provide the minimal fix needed to get correct plans.

Per report from Teodor Sigaev.  Thanks to David Rowley for preliminary
investigation.

Discussion: https://postgr.es/m/f994fc98-389f-4a46-d1bc-c42e05cb43ed@sigaev.ru
2017-05-01 14:39:11 -04:00