Commit Graph

3341 Commits

Author SHA1 Message Date
Teodor Sigaev 07d25a964b Improve error reporting in format()
Clarify invalid format conversion type error message and add hint.

Author: Jim Nasby
2016-02-11 18:11:11 +03:00
Robert Haas 4c9864b9b4 Add some isolation tests for deadlock detection and resolution.
Previously, we had no test coverage for the deadlock detector.
2016-02-11 08:38:09 -05:00
Robert Haas 38f8bdcac4 Modify the isolation tester so that multiple sessions can wait.
This allows testing of deadlock scenarios.  Scenarios that would
previously have been considered invalid are now simply taken as a
scenario in which more than one backend will wait.
2016-02-11 08:36:30 -05:00
Robert Haas c9882c60f4 Specify permutations for isolation tests with "invalid" permutations.
This is a necessary prerequisite for forthcoming changes to allow deadlock
scenarios to be tested by the isolation tester.  It is also a good idea on
general principle, since these scenarios add no useful test coverage not
provided by other scenarios, but do take time to execute.
2016-02-11 08:33:24 -05:00
Noah Misch 64d89a93c0 In pg_rewind test suite, triple promote timeout to 90s.
Thirty seconds was not consistently enough for promotion to complete on
buildfarm members sungazer and tern.  Experiments suggest 43s would have
been enough.  Back-patch to 9.5, where pg_rewind was introduced.
2016-02-10 20:34:57 -05:00
Tom Lane 3bb3f42f37 Fix some regex issues with out-of-range characters and large char ranges.
Previously, our regex code defined CHR_MAX as 0xfffffffe, which is a
bad choice because it is outside the range of type "celt" (int32).
Characters approaching that limit could lead to infinite loops in logic
such as "for (c = a; c <= b; c++)" where c is of type celt but the
range bounds are chr.  Such loops will work safely only if CHR_MAX+1
is representable in celt, since c must advance to beyond b before the
loop will exit.

Fortunately, there seems no reason not to restrict CHR_MAX to 0x7ffffffe.
It's highly unlikely that Unicode will ever assign codes that high, and
none of our other backend encodings need characters beyond that either.

In addition to modifying the macro, we have to explicitly enforce character
range restrictions on the values of \u, \U, and \x escape sequences, else
the limit is trivially bypassed.

Also, the code for expanding case-independent character ranges in bracket
expressions had a potential integer overflow in its calculation of the
number of characters it could generate, which could lead to allocating too
small a character vector and then overwriting memory.  An attacker with the
ability to supply arbitrary regex patterns could easily cause transient DOS
via server crashes, and the possibility for privilege escalation has not
been ruled out.

Quite aside from the integer-overflow problem, the range expansion code was
unnecessarily inefficient in that it always produced a result consisting of
individual characters, abandoning the knowledge that we had a range to
start with.  If the input range is large, this requires excessive memory.
Change it so that the original range is reported as-is, and then we add on
any case-equivalent characters that are outside that range.  With this
approach, we can bound the number of individual characters allowed without
sacrificing much.  This patch allows at most 100000 individual characters,
which I believe to be more than the number of case pairs existing in
Unicode, so that the restriction will never be hit in practice.

It's still possible for range() to take a while given a large character code
range, so also add statement-cancel detection to its loop.  The downstream
function dovec() also lacked cancel detection, and could take a long time
given a large output from range().

Per fuzz testing by Greg Stark.  Back-patch to all supported branches.

Security: CVE-2016-0773
2016-02-08 10:25:40 -05:00
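An illustrative sketch of the escape-range enforcement described above (the exact error wording is not taken from the commit):
    -- an in-range \u escape behaves as before (U+0078 is 'x')
    SELECT 'x' ~ '\u0078';        -- expected: true
    -- a \U escape far beyond any valid character code is now rejected
    SELECT 'x' ~ '\U7FFFFFFF';    -- expected to raise an error instead of silently misbehaving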
Fujii Masao f8a1c1d5a3 Make GIN regression test stable.
Commit 7f46eaf added the regression test which checks that
gin_clean_pending_list() cleans up the GIN pending list and returns >0.
This usually works fine. But if autovacuum comes along and cleans
the list before gin_clean_pending_list() starts, the function may
return 0, and then the regression test may fail.

To fix the problem, this commit disables autovacuum on the target
index of gin_clean_pending_list() by setting autovacuum_enabled
reloption to off when creating the table.

This commit also sets the gin_pending_list_limit reloption to 4MB on
the target index. Otherwise, when running "make installcheck" with a
small gin_pending_list_limit GUC, insertions of data may trigger
cleanup of the pending list before gin_clean_pending_list() starts,
and the function may return 0. This could cause the regression test
to fail.

Per buildfarm member spoonbill.

Reported-By: Tom Lane
2016-02-08 23:41:46 +09:00
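A minimal sketch of the two reloptions mentioned above (table and index names are made up for illustration):
    CREATE TABLE gin_test (v int4[]) WITH (autovacuum_enabled = off);
    -- gin_pending_list_limit is given in kilobytes, so 4MB is 4096
    CREATE INDEX gin_test_idx ON gin_test USING gin (v)
        WITH (gin_pending_list_limit = 4096);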
Andres Freund a6897efab9 Fix overeager pushdown of HAVING clauses when grouping sets are used.
In 61444bfb we started to allow HAVING clauses to be fully pushed down
into WHERE, even when grouping sets are in use. That turns out not to
work correctly, because grouping sets can "produce" NULLs, meaning that
filtering in WHERE and HAVING can have different results, even when no
aggregates or volatile functions are involved.

Instead only allow pushdown of empty grouping sets.

It'd be nice to do better, but the exact mechanics of deciding which
cases are safe are still being debated. It's important to give correct
results until we find a good solution, and such a solution might not be
appropriate for backpatching anyway.

Bug: #13863
Reported-By: 'wrb'
Diagnosed-By: Dean Rasheed
Author: Andrew Gierth
Reviewed-By: Dean Rasheed and Andres Freund
Discussion: 20160113183558.12989.56904@wrigleys.postgresql.org
Backpatch: 9.5, where grouping sets were introduced
2016-02-08 11:03:31 +01:00
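A sketch of the kind of query affected (table and column names are hypothetical): the empty grouping set emits a row with a NULL grouping column, so filtering in WHERE and in HAVING is not equivalent.
    SELECT a, count(*)
    FROM t
    GROUP BY GROUPING SETS ((a), ())
    HAVING a IS NOT NULL;   -- must not be pushed into WHERE: it has to remove
                            -- the grand-total row whose "a" is NULL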
Tom Lane cc2ca9319a Fix deparsing of ON CONFLICT arbiter WHERE clauses.
The parser doesn't allow qualification of column names appearing in
these clauses, but ruleutils.c would sometimes qualify them, leading
to dump/reload failures.  Per bug #13891 from Onder Kalaci.

(In passing, make stanzas in ruleutils.c that save/restore varprefix
more consistent.)

Peter Geoghegan
2016-02-07 14:57:24 -05:00
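For context, a hedged example of the kind of arbiter WHERE clause discussed above (names are illustrative); the column in that clause may not be table-qualified:
    CREATE TABLE t (a int, b int);
    CREATE UNIQUE INDEX t_a_partial ON t (a) WHERE a IS NOT NULL;
    INSERT INTO t (a, b) VALUES (1, 2)
    ON CONFLICT (a) WHERE a IS NOT NULL DO NOTHING;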
Tom Lane f867ce5518 ExecHashRemoveNextSkewBucket must physically copy tuples to main hashtable.
Commit 45f6240a8f added an assumption in ExecHashIncreaseNumBatches
and ExecHashIncreaseNumBuckets that they could find all tuples in the main
hash table by iterating over the "dense storage" introduced by that patch.
However, ExecHashRemoveNextSkewBucket continued its old practice of simply
re-linking deleted skew tuples into the main table's hashchains.  Hence,
such tuples got lost during any subsequent increase in nbatch or nbuckets,
and would never get joined, as reported in bug #13908 from Seth P.

I (tgl) think that the aforesaid commit has got multiple design issues
and should be reworked rather completely; but there is no time for that
right now, so band-aid the problem by making ExecHashRemoveNextSkewBucket
physically copy deleted skew tuples into the "dense storage" arena.

The added test case is able to exhibit the problem by means of fooling the
planner with a WHERE condition that it will underestimate the selectivity
of, causing the initial nbatch estimate to be too small.

Tomas Vondra and Tom Lane.  Thanks to David Johnston for initial
investigation into the bug report.
2016-02-07 12:29:32 -05:00
Tom Lane 6819514fca Add num_nulls() and num_nonnulls() to count NULL arguments.
An example use-case is "CHECK(num_nonnulls(a,b,c) = 1)" to assert that
exactly one of a,b,c isn't NULL.  The functions are variadic, so they
can also be pressed into service to count the number of null or nonnull
elements in an array.

Marko Tiikkaja, reviewed by Pavel Stehule
2016-02-04 23:03:37 -05:00
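An illustrative sketch of the new functions (column names are hypothetical, not from the commit):
    SELECT num_nulls(1, NULL, 2), num_nonnulls(1, NULL, 2);   -- expected: 1, 2
    -- the CHECK-constraint use-case from the commit message
    CREATE TABLE contact (
        email text,
        phone text,
        CHECK (num_nonnulls(email, phone) = 1)   -- exactly one must be set
    );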
Fujii Masao 7f46eaf035 Add gin_clean_pending_list function to clean up GIN pending list
This function cleans up the pending list of the GIN index by
moving entries in it to the main GIN data structure in bulk.
It returns the number of pages cleaned up from the pending list.

This function is useful, for example, when the pending list
needs to be cleaned up *quickly* to improve the performance of
searches using the GIN index. VACUUM can do the same thing, too,
but it may take days to run on a large table.

Jeff Janes,
reviewed by Julien Rouhaud, Jaime Casanova, Alvaro Herrera and me.

Discussion: CAMkU=1x8zFkpfnozXyt40zmR3Ub_kHu58LtRmwHUKRgQss7=iQ@mail.gmail.com
2016-01-28 12:57:52 +09:00
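A usage sketch (the index name is made up): the function takes a GIN index and returns the number of pending-list pages it cleaned up.
    SELECT gin_clean_pending_list('some_gin_index'::regclass);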
Tatsuo Ishii ad2e233385 Revert "Fix broken multibyte regression tests."
This reverts commit efc1610b64.
The commit was plain wrong as pointed out in:
http://www.postgresql.org/message-id/27771.1448736909@sss.pgh.pa.us
2016-01-26 08:29:15 +09:00
Tom Lane 73193d82d7 Adjust degree-based trig functions for more portability.
The buildfarm isn't very happy with the results of commit e1bd684a34.
To try to get the expected exact results everywhere:

* Replace M_PI / 180 subexpressions with a precomputed constant, so that
the compiler can't decide to rearrange that division with an adjacent
operation.  Hopefully this will fix failures to get exactly 0.5 from
sind(30) and cosd(60).

* Add scaling to ensure that tand(45) and cotd(45) give exactly 1; there
was nothing particularly guaranteeing that before.

* Replace minus zero by zero when tand() or cotd() would output that;
many machines did so for tand(180) and cotd(270), but not all.  We could
alternatively deem both results valid, but that doesn't seem likely to
be what users will want.
2016-01-23 11:26:07 -05:00
Tom Lane e1bd684a34 Add trigonometric functions that work in degrees.
The implementations go to some lengths to deliver exact results for values
where an exact result can be expected, such as sind(30) = 0.5 exactly.

Dean Rasheed, reviewed by Michael Paquier
2016-01-22 15:46:22 -05:00
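For example, the exactness goal described above (and tightened by the follow-up commit) can be checked directly:
    SELECT sind(30), cosd(60), tand(45);   -- expected: 0.5, 0.5, 1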
Tom Lane 647d87c56a Make extract() do something more reasonable with infinite datetimes.
Historically, extract() just returned zero for any case involving an
infinite timestamp[tz] input; even cases in which the unit name was
invalid.  This is not very sensible.  Instead, return infinity or
-infinity as appropriate when the requested field is one that is
monotonically increasing (e.g, year, epoch), or NULL when it is not
(e.g., day, hour).  Also, throw the expected errors for bad unit names.

BACKWARDS INCOMPATIBLE CHANGE

Vitaly Burovoy, reviewed by Vik Fearing
2016-01-21 22:26:20 -05:00
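A sketch of the new behavior described above:
    SELECT extract(year  FROM timestamptz 'infinity');    -- monotonic field: infinity
    SELECT extract(epoch FROM timestamptz '-infinity');   -- monotonic field: -infinity
    SELECT extract(hour  FROM timestamptz 'infinity');    -- non-monotonic field: NULL
    SELECT extract(fortnight FROM timestamptz 'infinity');-- bad unit name: now an error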
Tom Lane be44ed27b8 Improve index AMs' opclass validation procedures.
The amvalidate functions added in commit 65c5fcd353 were on the
crude side.  Improve them in a few ways:

* Perform signature checking for operators and support functions.

* Apply more thorough checks for missing operators and functions,
where possible.

* Instead of reporting problems as ERRORs, report most problems as INFO
messages and make the amvalidate function return FALSE.  This allows
more than one problem to be discovered per run.

* Report object names rather than OIDs, and work a bit harder on making
the messages understandable.

Also, remove a few more opr_sanity regression test queries that are
now superseded by the amvalidate checks.
2016-01-21 19:47:15 -05:00
Robert Haas a7de3dc5c3 Support multi-stage aggregation.
Aggregate nodes now have two new modes: a "partial" mode where they
output the unfinalized transition state, and a "finalize" mode where
they accept unfinalized transition states rather than individual
values as input.

These new modes are not used anywhere yet, but they will be necessary
for parallel aggregation.  The infrastructure also figures to be
useful for cases where we want to aggregate local data and remote
data via the FDW interface, and want to bring back partial aggregates
from the remote side that can then be combined with locally generated
partial aggregates to produce the final value.  It may also be useful
even when neither FDWs nor parallelism are in play, as explained in
the comments in nodeAgg.c.

David Rowley and Simon Riggs, reviewed by KaiGai Kohei, Heikki
Linnakangas, Haribabu Kommi, and me.
2016-01-20 13:46:50 -05:00
Alvaro Herrera c8642d909f PostgresNode: Add names to nodes
This makes the log files easier to follow when investigating a test
failure.

Author: Michael Paquier
Review: Noah Misch
2016-01-20 14:13:11 -03:00
Tom Lane 9ff60273e3 Fix assorted inconsistencies in GiST opclass support function declarations.
The conventions specified by the GiST SGML documentation were widely
ignored.  For example, the strategy-number argument for "consistent" and
"distance" functions is specified to be a smallint, but most of the
built-in support functions declared it as an integer, and for that matter
the core code passed it using Int32GetDatum not Int16GetDatum.  None of
that makes any real difference at runtime, but it's quite confusing for
newcomers to the code, and it makes it very hard to write an amvalidate()
function that checks support function signatures.  So let's try to instill
some consistency here.

Another similar issue is that the "query" argument is not of a single
well-defined type, but could have different types depending on the strategy
(corresponding to search operators with different righthand-side argument
types).  Some of the functions threw up their hands and declared the query
argument as being of "internal" type, which surely isn't right ("any" would
have been more appropriate); but the majority position seemed to be to
declare it as being of the indexed data type, corresponding to a search
operator with both input types the same.  So I've specified a convention
that that's what to do always.

Also, the result of the "union" support function actually must be of the
index's storage type, but the documentation suggested declaring it to
return "internal", and some of the functions followed that.  Standardize
on telling the truth, instead.

Similarly, standardize on declaring the "same" function's inputs as
being of the storage type, not "internal".

Also, somebody had forgotten to add the "recheck" argument to both
the documentation of the "distance" support function and all of their
SQL declarations, even though the C code was happily using that argument.
Clean that up too.

Fix up some other omissions in the docs too, such as documenting that
union's second input argument is vestigial.

So far as the errors in core function declarations go, we can just fix
pg_proc.h and bump catversion.  Adjusting the erroneous declarations in
contrib modules is more debatable: in principle any change in those
scripts should involve an extension version bump, which is a pain.
However, since these changes are purely cosmetic and make no functional
difference, I think we can get away without doing that.
2016-01-19 12:04:36 -05:00
Tom Lane 65c5fcd353 Restructure index access method API to hide most of it at the C level.
This patch reduces pg_am to just two columns, a name and a handler
function.  All the data formerly obtained from pg_am is now provided
in a C struct returned by the handler function.  This is similar to
the designs we've adopted for FDWs and tablesample methods.  There
are multiple advantages.  For one, the index AM's support functions
are now simple C functions, making them faster to call and much less
error-prone, since the C compiler can now check function signatures.
For another, this will make it far more practical to define index access
methods in installable extensions.

A disadvantage is that SQL-level code can no longer see attributes
of index AMs; in particular, some of the crosschecks in the opr_sanity
regression test are no longer possible from SQL.  We've addressed that
by adding a facility for the index AM to perform such checks instead.
(Much more could be done in that line, but for now we're content if the
amvalidate functions more or less replace what opr_sanity used to do.)
We might also want to expose some sort of reporting functionality, but
this patch doesn't do that.

Alexander Korotkov, reviewed by Petr Jelínek, and rather heavily
editorialized on by me.
2016-01-17 19:36:59 -05:00
Tom Lane 3ef16c46fb Add some checks on "char"-type columns to type_sanity and opr_sanity.
I noticed that the sanity checks in the regression tests omitted to
check a couple of "poor man's enum" columns that you'd reasonably
expect them to check.

There are other "char"-type columns in system catalogs that are not
covered by either type_sanity or opr_sanity, e.g. pg_rewrite.ev_type.
However, those catalogs are not populated with any manually-created
data during bootstrap, so it seems less necessary to check them
this way.
2016-01-09 17:20:58 -05:00
Tom Lane 1cb63c791c Clean up code for widget_in() and widget_out().
Given syntactically wrong input, widget_in() could call atof() with an
indeterminate pointer argument, typically leading to a crash; or if it
didn't do that, it might return a NULL pointer, which again would lead
to a crash since old-style C functions aren't supposed to do things
that way.  Fix that by correcting the off-by-one syntax test and
throwing a proper error rather than just returning NULL.

Also, since widget_in and widget_out have been marked STRICT for a
long time, their tests for null inputs are just dead code; remove 'em.
In the oldest branches, also improve widget_out to use snprintf not
sprintf, just to be sure.

In passing, get rid of a long-since-useless sprintf into a local buffer
that nothing further is done with, and make some other minor coding
style cleanups.

In the intended regression-testing usage of these functions, none of
this is very significant; but if the regression test database were
left around in a production installation, these bugs could amount
to a minor security hazard.

Piotr Stefaniak, Michael Paquier, and Tom Lane
2016-01-09 13:44:49 -05:00
Tom Lane 529baf6a2f Add STRICT to some C functions created by the regression tests.
These functions readily crash when passed a NULL input value.  The tests
themselves do not pass NULL values to them; but when the regression
database is used as a basis for fuzz testing, they cause a lot of noise.
Also, if someone were to leave a regression database lying about in a
production installation, these would create a minor security hazard.

Andreas Seltenreich
2016-01-09 13:02:54 -05:00
Magnus Hagander c662ef1d03 Remove redundant include of TestLib
Kyotaro HORIGUCHI
2016-01-08 08:53:00 +01:00
Alvaro Herrera b1a9bad9e7 pgstat: add WAL receiver status view & SRF
This new view provides insight into the state of a running WAL receiver
on a hot standby node.
The information returned includes the PID of the WAL receiver process,
its status (stopped, starting, streaming, etc), start LSN and TLI, last
received LSN and TLI, timestamp of last message send and receipt, latest
end-of-WAL LSN and time, and the name of the slot (if any).

Access to the detailed data is only granted to superusers; others only
get the PID.

Author: Michael Paquier
Reviewer: Haribabu Kommi
2016-01-07 16:21:19 -03:00
Alvaro Herrera abb1733922 Add scale(numeric)
Author: Marko Tiikkaja
2016-01-05 19:02:13 -03:00
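A one-line usage sketch of the new function:
    SELECT scale(8.4100);   -- expected: 4, the number of digits in the fractional part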
Tom Lane 419400c5da Remove some ancient and unmaintained encoding-conversion test cruft.
In commit 921191912c I claimed that we weren't testing encoding
conversion functions, but further poking around reveals that we did
have an equivalent though hard-wired set of tests in conversion.sql.
AFAICS there is no advantage to doing it like that as compared to letting
the catalog contents drive the test, so let the opr_sanity addition stand
and remove the now-redundant tests in conversion.sql.

Also, remove some infrastructure in src/backend/utils/mb/conversion_procs
for building conversion.sql's list of tests.  That was unmaintained, and
had not corresponded to the actual contents of conversion.sql since 2007
or perhaps even further back.
2016-01-05 16:43:40 -05:00
Tom Lane 921191912c In opr_sanity regression test, check for unexpected uses of cstring.
In light of commit ea0d494dae, it seems like a good idea to add
a regression test that will complain about random functions taking or
returning cstring.  Only I/O support functions and encoding conversion
functions should be declared that way.

While at it, add some checks that encoding conversion functions are
declared properly.  Since pg_conversion isn't populated manually,
it's not quite as necessary to check its contents as it is for catalogs
like pg_proc; but one thing we definitely have not tested in the past
is whether the identified conproc for a conversion actually does that
conversion vs. some other one.
2016-01-05 15:00:54 -05:00
Tom Lane 5d35438273 Adjust behavior of row_security GUC to match the docs.
Some time back we agreed that row_security=off should not be a way to
bypass RLS entirely, but only a way to get an error if it was being
applied.  However, the code failed to act that way for table owners.
Per discussion, this is a must-fix bug for 9.5.0.

Adjust the logic in rls.c to behave as expected; also, modify the
error message to be more consistent with the new interpretation.
The regression tests need minor corrections as well.  Also update
the comments about row_security in ddl.sgml to be correct.  (The
official description of the GUC in config.sgml is already correct.)

I failed to resist the temptation to do some other very minor
cleanup as well, such as getting rid of a duplicate extern declaration.
2016-01-04 12:21:41 -05:00
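Under the interpretation described above, the GUC works roughly like this hedged sketch (table name hypothetical):
    SET row_security = off;
    -- If the query would require row-level security to be applied, it now
    -- fails with an error rather than silently bypassing RLS.
    SELECT * FROM rls_protected_table;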
Tom Lane fb1227af67 Fix regrole and regnamespace types to honor quoting like other reg* types.
Aside from any consistency arguments, this is logically necessary because
the I/O functions for these types also handle numeric OID values.  Without
a quoting rule it is impossible to distinguish numeric OIDs from role or
namespace names that happen to contain only digits.

Also change the to_regrole and to_regnamespace functions to dequote their
arguments.  While not logically essential, this seems like a good idea
since the other to_reg* functions do it.  Anyone who really wants raw
lookup of an uninterpreted name can fall back on the time-honored solution
of (SELECT oid FROM pg_namespace WHERE nspname = whatever).

Report and patch by Jim Nasby, reviewed by Michael Paquier
2016-01-04 01:03:53 -05:00
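A hedged sketch of the quoting rule described above (assuming a role named "postgres" exists; the numeric examples are arbitrary):
    SELECT 'postgres'::regrole;    -- plain name lookup
    SELECT '12345'::regrole;       -- all digits: interpreted as a numeric OID
    SELECT '"12345"'::regrole;     -- quoted: a role literally named 12345 (error if absent)
    SELECT to_regrole('"12345"');  -- to_regrole now dequotes too; returns NULL if absent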
Tom Lane 48c9f2889a Update copyright for 2016
On closer inspection, the reason copyright.pl was missing files is
that it was looking for 'Copyright (c)' while the files had 'Copyright (C)'.
Fix that, and update a couple more that grepping for that revealed.
2016-01-02 14:19:48 -05:00
Tom Lane ad08bf5c8b Update copyright for 2016
Manually fix some copyright lines missed by the automated script.
2016-01-02 14:08:55 -05:00
Bruce Momjian ee94300446 Update copyright for 2016
Backpatch certain files through 9.1
2016-01-02 13:33:40 -05:00
Tom Lane 0dab5ef39b Fix ALTER OPERATOR to update dependencies properly.
Fix an oversight in commit 321eed5f0f7563a0: replacing an operator's
selectivity functions needs to result in a corresponding update in
pg_depend.  We have a function that can handle that, but it was not
called by AlterOperator().

To fix this without enlarging pg_operator.h's #include list beyond
what clients can safely include, split off the function definitions
into a new file pg_operator_fn.h, similarly to what we've done for
some other catalog header files.  It's not entirely clear whether
any client-side code needs to include pg_operator.h, but it seems
prudent to assume that there is some such code somewhere.
2015-12-31 17:37:31 -05:00
Tom Lane 870df2b3b7 Fix omission of -X (--no-psqlrc) in some psql invocations.
As of commit d5563d7df, psql -c no longer implies -X, but not all of
our regression testing scripts had gotten that memo.

To ensure consistency of results across different developers, make
sure that *all* invocations of psql in all scripts in our tree
use -X, even where this is not what previously happened.

Michael Paquier and Tom Lane
2015-12-28 11:46:43 -05:00
Tom Lane fec1ad94df Include typmod when complaining about inherited column type mismatches.
MergeAttributes() rejects cases where columns to be merged have the same
type but different typmod, which is correct; but the error message it
printed didn't show either typmod, which is unhelpful.  Changing this
requires using format_type_with_typemod() in place of TypeNameToString(),
which will have some minor side effects on the way some type names are
printed, but on balance this is an improvement: the old code sometimes
printed one type according to one set of rules and the other type according
to the other set, which could be confusing in its own way.

Oddly, there were no regression test cases covering any of this behavior,
so add some.

Complaint and fix by Amit Langote
2015-12-26 13:41:29 -05:00
Tom Lane 3d2b31e30e Fix brin_summarize_new_values() to check index type and ownership.
brin_summarize_new_values() did not check that the passed OID was for
an index at all, much less that it was a BRIN index, and would fail in
obscure ways if it wasn't (possibly damaging data first?).  It also
lacked any permissions test; by analogy to VACUUM, we should only allow
the table's owner to summarize.

Noted by Jeff Janes, fix by Michael Paquier and me
2015-12-26 12:56:09 -05:00
Tom Lane 6efbded6e4 Allow omitting one or both boundaries in an array slice specifier.
Omitted boundaries represent the upper or lower limit of the corresponding
array subscript.  This allows simpler specification of many common
use-cases.

(Revised version of commit 9246af6799)

YUriy Zhuravlev
2015-12-22 21:05:29 -05:00
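A sketch of the new slice syntax:
    SELECT ('{1,2,3,4,5}'::int[])[2:];   -- expected: {2,3,4,5}
    SELECT ('{1,2,3,4,5}'::int[])[:3];   -- expected: {1,2,3}
    SELECT ('{1,2,3,4,5}'::int[])[:];    -- expected: {1,2,3,4,5}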
Stephen Frost 6f8cb1e234 Make viewquery a copy in rewriteTargetView()
Rather than expect the Query returned by get_view_query() to be
read-only and then copy bits and pieces of it out, simply copy the
entire structure when we get it.  This addresses an issue where
AcquireRewriteLocks, which is called by acquireLocksOnSubLinks(),
scribbles on the parsetree passed in, which was actually an entry
in relcache, leading to segfaults with certain view definitions.
This also future-proofs us a bit for anyone adding more code to this
path.

acquireLocksOnSubLinks() was added in commit c3e0ddd40.

Back-patch to 9.3 as that commit was.
2015-12-21 10:34:14 -05:00
Tom Lane 654218138b Add missing COSTS OFF to EXPLAIN commands in rowsecurity.sql.
Commit e5e11c8cc added a bunch of EXPLAIN statements without COSTS OFF
to the regression tests.  This is contrary to project policy since it
results in unnecessary platform dependencies in the output (it's just
luck that we didn't get buildfarm failures from it).  Per gripe from
Mike Wilson.
2015-12-19 16:55:14 -05:00
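For reference, the project convention mentioned above is simply to write test EXPLAINs as, for example (query illustrative):
    EXPLAIN (COSTS OFF) SELECT * FROM some_table WHERE id = 1;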
Teodor Sigaev bbbd807097 Revert 9246af6799
I missed too much.  The patch is returned to the commitfest process.
2015-12-18 21:35:22 +03:00
Teodor Sigaev 9246af6799 Allow omitting boundaries in array subscripts
Allow omitting the lower or upper boundary, or both, in an array subscript
when selecting a slice of an array.

Author: YUriy Zhuravlev
2015-12-18 15:18:58 +03:00
Alvaro Herrera 756e7b4c9d Rework internals of changing a type's ownership
This is necessary so that REASSIGN OWNED does the right thing with
composite types, to wit, that it also alters ownership of the type's
pg_class entry -- previously, the pg_class entry remained owned by the
original user, which later caused other failures, such as the new owner's
inability to use ALTER TYPE to rename an attribute of the affected
composite.  Also, if the original owner is later dropped, the pg_class
entry becomes owned by a nonexistent user, which is bogus.

To fix, create a new routine AlterTypeOwner_oid which knows whether to
pass the request to ATExecChangeOwner or deal with it directly, and use
that in shdepReassignOwner rather than calling AlterTypeOwnerInternal
directly.  AlterTypeOwnerInternal is now simpler in that it only
modifies the pg_type entry and recurses to handle a possible array type;
higher-level tasks are handled by either AlterTypeOwner directly or
AlterTypeOwner_oid.

I took the opportunity to add a few more objects to the test rig for
REASSIGN OWNED, so that more cases are exercised.  Additional ones could
be added for superuser-only-ownable objects (such as FDWs and event
triggers) but I didn't want to push my luck by adding a new superuser to
the tests on a backpatchable bug fix.

Per bug #13666 reported by Chris Pacejo.

Backpatch to 9.5.

(I would back-patch this all the way back, except that it doesn't apply
cleanly in 9.4 and earlier because 59367fdf9 wasn't backpatched.  If we
decide that we need this in earlier branches too, we should backpatch
both.)
2015-12-17 14:25:41 -03:00
Robert Haas f27a6b15e6 Mark CHECK constraints declared NOT VALID valid if created with table.
FOREIGN KEY constraints have behaved this way for a long time, but for
some reason the behavior of CHECK constraints has been inconsistent up
until now.

Amit Langote and Amul Sul, with assorted tweaks by me.
2015-12-16 07:43:56 -05:00
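A hedged sketch of the now-consistent behavior (table name made up):
    CREATE TABLE t (
        a int,
        CONSTRAINT a_positive CHECK (a > 0) NOT VALID   -- accepted here, but as with
                                                        -- FOREIGN KEY the constraint is
                                                        -- marked valid, since a new table
                                                        -- has no pre-existing rows to check
    );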
Stephen Frost e5e11c8cca Collect the global OR of hasRowSecurity flags for plancache
We carry around information about whether a given query has row security,
to allow the plancache to use that information to invalidate a
planned query in the event that the environment changes.

Previously, the flag of one of the subqueries was simply being copied
into place to indicate whether the query overall included RLS components.
That's wrong, as we need the global OR of all subqueries.  Fix by
changing the code to match how fireRIRules works, which results
in OR'ing all of the flags.

Noted by Tom.

Back-patch to 9.5 where RLS was introduced.
2015-12-14 20:05:43 -05:00
Kevin Grittner e2f1765ce0 Remove xmlparse(document '') test
This one test was behaving differently between the ubuntu fix for
CVE-2015-7499 and the base "expected" file.  It's not worth having
yet another version of the expected file for this test, so drop it.
Perhaps at some point when all distros have settled down to the
same behavior on this test, it can be restored.

Problem found by me on libxml2 (2.9.1+dfsg1-3ubuntu4.6).
Solution suggested by Tom Lane.
Backpatch to 9.5, where the test was added.
2015-12-14 11:37:26 -06:00
Tom Lane 085423e3e3 Add an expected-file to match behavior of latest libxml2.
Recent releases of libxml2 do not provide error context reports for errors
detected at the very end of the input string.  This appears to be a bug, or
at least an infelicity, introduced by the fix for libxml2's CVE-2015-7499.
We can hope that this behavioral change will get undone before too long;
but the security patch is likely to spread a lot faster/further than any
follow-on cleanup, which means this behavior is likely to be present in the
wild for some time to come.  As a stopgap, add a variant regression test
expected-file that matches what you get with a libxml2 that acts this way.
2015-12-11 19:09:04 -05:00
Alvaro Herrera 8c1615531f Make REASSIGN OWNED ignore foreign user mappings
As reported in bug #13809 by Alexander Ashurkov, the code for REASSIGN
OWNED hadn't gotten word about user mappings.  Deal with them in the
same way default ACLs do, which is to ignore them altogether; they are
handled just fine by DROP OWNED.  The other foreign object cases are
already handled correctly by both commands.

Also add a REASSIGN OWNED statement to the foreign_data test to exercise the
foreign data objects.  (The changes are just before the "cleanup" phase,
so they shouldn't remove any existing live tests.)

Reported by Alexander Ashurkov, then independently by Jaime Casanova.
2015-12-11 18:39:09 -03:00
Stephen Frost 833728d4c8 Handle policies during DROP OWNED BY
DROP OWNED BY handled GRANT-based ACLs but was not removing roles from
policies.  Fix that by having DROP OWNED BY remove the role specified
from the list of roles the policy (or policies) apply to, or the entire
policy (or policies) if it only applied to the role specified.

As with ACLs, the DROP OWNED BY caller must have permission to modify
the policy or a WARNING is thrown and no change is made to the policy.
2015-12-11 16:12:25 -05:00
Stephen Frost ed8bec915e Handle dependencies properly in ALTER POLICY
ALTER POLICY hadn't fully considered partial policy alteration
(e.g., changing just the roles on the policy, or just one of
the expressions) when rebuilding the dependencies.  Instead, it
would happily remove all dependencies which existed for the
policy and then only recreate the dependencies for the objects
referred to in the specific ALTER POLICY command.

Correct that by extracting and building the dependencies for all
objects referenced by the policy, regardless of whether they were
provided as part of the ALTER POLICY command or were already in
place as part of the pre-existing policy.
2015-12-11 15:43:03 -05:00
Tom Lane acfcd45cac Still more fixes for planner's handling of LATERAL references.
More fuzz testing by Andreas Seltenreich exposed that the planner did not
cope well with chains of lateral references.  If relation X references Y
laterally, and Y references Z laterally, then we will have to scan X on the
inside of a nestloop with Z, so for all intents and purposes X is laterally
dependent on Z too.  The planner did not understand this and would generate
intermediate joins that could not be used.  While that was usually harmless
except for wasting some planning cycles, under the right circumstances it
would lead to "failed to build any N-way joins" or "could not devise a
query plan" planner failures.

To fix that, convert the existing per-relation lateral_relids and
lateral_referencers relid sets into their transitive closures; that is,
they now show all relations on which a rel is directly or indirectly
laterally dependent.  This not only fixes the chained-reference problem
but allows some of the relevant tests to be made substantially simpler
and faster, since they can be reduced to simple bitmap manipulations
instead of searches of the LateralJoinInfo list.

Also, when a PlaceHolderVar that is due to be evaluated at a join contains
lateral references, we should treat those references as indirect lateral
dependencies of each of the join's base relations.  This prevents us from
trying to join any individual base relations to the lateral reference
source before the join is formed, which again cannot work.

Andreas' testing also exposed another oversight in the "dangerous
PlaceHolderVar" test added in commit 85e5e222b1.  Simply rejecting
unsafe join paths in joinpath.c is insufficient, because in some cases
we will end up rejecting *all* possible paths for a particular join, again
leading to "could not devise a query plan" failures.  The restriction has
to be known also to join_is_legal and its cohort functions, so that they
will not select a join for which that will happen.  I chose to move the
supporting logic into joinrels.c where the latter functions are.

Back-patch to 9.3 where LATERAL support was introduced.
2015-12-11 14:22:20 -05:00
Peter Eisentraut a351705d8a Improve some messages 2015-12-10 22:05:27 -05:00
Andres Freund 84ac126ee7 Fix ON CONFLICT UPDATE bug breaking AFTER UPDATE triggers.
ExecOnConflictUpdate() passed t_ctid of the to-be-updated tuple to
ExecUpdate(). That's problematic primarily for two reasons: first
and foremost, t_ctid could point to a different tuple.  Secondly, and
that's what triggered the complaint by Stanislav, t_ctid is changed by
heap_update() to point to the new tuple version.  The behavior of AFTER
UPDATE triggers was therefore broken, with NEW.* and OLD.* tuples
spuriously identical within AFTER UPDATE triggers.

To fix both issues, pass a pointer to t_self of an on-stack HeapTuple
instead.

Fixing this bug led to one change in the regression tests, which previously
failed due to the first issue mentioned above.  There's a reasonable
expectation that the test fails, as it updates one row repeatedly within one
INSERT ... ON CONFLICT statement.  That is only possible if the second
update is triggered via ON CONFLICT ... SET, ON CONFLICT ... WHERE, or
by a WITH CHECK expression, as those are executed after
ExecOnConflictUpdate() does a visibility check.  That could easily be
prohibited, but given that it's allowed for plain UPDATEs and is a rare
corner case, it doesn't seem worthwhile.

Reported-By: Stanislav Grozev
Author: Andres Freund and Peter Geoghegan
Discussion: CAA78GVqy1+LisN-8DygekD_Ldfy=BJLarSpjGhytOsgkpMavfQ@mail.gmail.com
Backpatch: 9.5, where ON CONFLICT was introduced
2015-12-10 16:29:26 +01:00
Tom Lane 938d797b84 Avoid odd portability problem in TestLib.pm's slurp_file function.
For unclear reasons, this function doesn't always read the expected data
in some old Perl versions.  Rewriting it to avoid use of ARGV seems to
dodge the problem, and this version is clearer anyway if you ask me.

In passing, also improve error message in adjacent append_to_file function.
2015-12-08 16:58:05 -05:00
Robert Haas d5563d7df9 psql: Support multiple -c and -f options, and allow mixing them.
To support this, we must reconcile some historical anomalies in the
behavior of -c.  In particular, as a backward-incompatibility, -c no
longer implies --no-psqlrc.

Pavel Stehule (code) and Catalin Iacob (documentation).  Review by
Michael Paquier and myself.  Proposed behavior per Tom Lane.
2015-12-08 14:04:08 -05:00
Alvaro Herrera 7ac5d9b316 PostgresNode: wrap correctly around port number range end
Per note from Tom Lane
2015-12-07 20:06:21 -03:00
Tom Lane 7e19db0c09 Fix another oversight in checking if a join with LATERAL refs is legal.
It was possible for the planner to decide to join a LATERAL subquery to
the outer side of an outer join before the outer join itself is completed.
Normally that's fine because of the associativity rules, but it doesn't
work if the subquery contains a lateral reference to the inner side of the
outer join.  In such a situation the outer join *must* be done first.
join_is_legal() missed this consideration and would allow the join to be
attempted, but the actual path-building code correctly decided that no
valid join path could be made, sometimes leading to planner errors such as
"failed to build any N-way joins".

Per report from Andreas Seltenreich.  Back-patch to 9.3 where LATERAL
support was added.
2015-12-07 17:42:11 -05:00
Alvaro Herrera 9821492ee4 Cleanup some problems in new Perl test code
Noted by Tom Lane:
- PostgresNode had a BEGIN block which created files, contrary to
  perlmod suggestions to do that only on INIT blocks.
- Assign ports randomly rather than starting from 90600.

Noted by Noah Misch:
- Change use of no-longer-set PGPORT environment variable to $node->port
- Don't start a server in pg_controldata test
- PostgresNode was reading the PID file incorrectly; test the right
  thing, and chomp the line we read from the PID file.
- Remove an unused $devnull variable
- Use 'pg_ctl kill' instead of "kill" directly, for Windows portability.
- Make server log names more informative.

Author: Michael Paquier
2015-12-07 19:39:57 -03:00
Tom Lane db0723631e Create TestLib.pm's tempdir underneath tmp_check/, not out in the open.
This way, existing .gitignore entries and makefile clean actions will
automatically apply to the tempdir, should it survive a TAP test run
(which can happen if the user control-C's out of the run, for example).

Michael Paquier, per a complaint from me
2015-12-05 13:23:48 -05:00
Alvaro Herrera a2983cfd9d Fix broken subroutine call in TestLib
Michael Paquier
2015-12-02 23:16:22 -03:00
Alvaro Herrera 1caef31d9e Refactor Perl test code
The original code was a bit clunky; make it more amenable for further
reuse by creating a new Perl package PostgresNode, which is an
object-oriented representation of a single server, with some support
routines such as init, start, stop, psql.  This serves as a better basis
on which to build further test code, and enables writing tests that use
more than one server without too much complication.

This commit modifies a lot of the existing test files, mostly to remove
explicit calls to system commands (pg_ctl) replacing them with method
calls of a PostgresNode object.  The result is quite a bit more
straightforward.

Also move some initialization code to BEGIN and INIT blocks instead of
having it straight in as top-level code.

This commit also introduces package RecursiveCopy so that we can copy
whole directories without having to depend on packages that may not be
present on vanilla Perl 5.8 installations.

I also ran perltidy on the modified files, which changes some code sites
that are not otherwise touched by this patch.  I tried to avoid this,
but it ended up being more trouble than it's worth.

Authors: Michael Paquier, Álvaro Herrera
Review: Noah Misch
2015-12-02 18:46:16 -03:00
Tom Lane 95708e1d8e Further tweaking of print_aligned_vertical().
Don't force the data width to extend all the way to the right margin if it
doesn't need to.  This reverts the behavior in non-wrapping cases to be
what it was in 9.4.  Also, make the logic that ensures the data line width
is at least equal to the record-header line width a little less obscure.

In passing, avoid possible calculation of log10(0).  Probably that's
harmless, given the lack of field complaints, but it seems risky:
conversion of NaN to an integer isn't well defined.
2015-12-01 14:47:13 -05:00
Tom Lane 0e0776bc99 Rework wrap-width calculation in psql's print_aligned_vertical() function.
This area was rather heavily whacked around in 6513633b9 and follow-on
commits, and it was showing it, because the logic to calculate the
allowable data width in wrapped expanded mode had only the vaguest
relationship to the logic that was actually printing the data.  It was
not very close to being right about the conditions requiring overhead
columns to be added.  Aside from being wrong, it was pretty unreadable
and under-commented.  Rewrite it so it corresponds to what the printing
code actually does.

In passing, remove a couple of dead tests in the printing logic, too.

Per a complaint from Jeff Janes, though this doesn't look much like his
patch because it fixes a number of other corner-case bogosities too.
One such fix that's visible in the regression test results is that
although the code was attempting to enforce a minimum data width of
3 columns, it sometimes left less space than that available.
2015-11-30 17:53:32 -05:00
Tom Lane ec7eef6b11 Avoid caching expression state trees for domain constraints across queries.
In commit 8abb3cda0d I attempted to cache
the expression state trees constructed for domain CHECK constraints for
the life of the backend (assuming the domain's constraints don't get
redefined).  However, this turns out not to work very well, because
execQual.c will run those state trees with ecxt_per_query_memory pointing
to a query-lifespan context, and in some situations we'll end up with
pointers into that context getting stored into the state trees.  This
happens in particular with SQL-language functions, as reported by
Emre Hasegeli, but there are many other cases.

To fix, keep only the expression plan trees for domain CHECK constraints
in the typcache's data structure, and revert to performing ExecInitExpr
(at least) once per query to set up expression state trees in the query's
context.

Eventually it'd be nice to undo this, but that will require some careful
thought about memory management for expression state trees, and it seems
far too late for any such redesign in 9.5.  This way is still much more
efficient than what happened before 8abb3cda0.
2015-11-29 18:18:42 -05:00
Teodor Sigaev 92e38182d7 COPY (INSERT/UPDATE/DELETE .. RETURNING ..)
This allows COPY (query) where the query is an INSERT, UPDATE, or DELETE
with a RETURNING clause, without having to wrap it in a CTE.

Author: Marko Tiikkaja
Review: Michael Paquier
2015-11-27 19:11:22 +03:00
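A usage sketch of the new capability (table and column names are made up):
    COPY (INSERT INTO t (a) VALUES (1), (2) RETURNING a) TO STDOUT;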
Tom Lane 074c5cfbfb Fix handling of inherited check constraints in ALTER COLUMN TYPE (again).
The previous way of reconstructing check constraints was to do a separate
"ALTER TABLE ONLY tab ADD CONSTRAINT" for each table in an inheritance
hierarchy.  However, that way has no hope of reconstructing the check
constraints' own inheritance properties correctly, as pointed out in
bug #13779 from Jan Dirk Zijlstra.  What we should do instead is to do
a regular "ALTER TABLE", allowing recursion, at the topmost table that
has a particular constraint, and then suppress the work queue entries
for inherited instances of the constraint.

Annoyingly, we'd tried to fix this behavior before, in commit 5ed6546cf,
but we failed to notice that it wasn't reconstructing the pg_constraint
field values correctly.

As long as I'm touching pg_get_constraintdef_worker anyway, tweak it to
always schema-qualify the target table name; this seems like useful backup
to the protections installed by commit 5f173040.

In HEAD/9.5, get rid of get_constraint_relation_oids, which is now unused.
(I could alternatively have modified it to also return conislocal, but that
seemed like a pretty single-purpose API, so let's not pretend it has some
other use.)  It's unused in the back branches as well, but I left it in
place just in case some third-party code has decided to use it.

In HEAD/9.5, also rename pg_get_constraintdef_string to
pg_get_constraintdef_command, as the previous name did nothing to explain
what that entry point did differently from others (and its comment was
equally useless).  Again, that change doesn't seem like material for
back-patching.

I did a bit of re-pgindenting in tablecmds.c in HEAD/9.5, as well.

Otherwise, back-patch to all supported branches.
2015-11-20 14:55:47 -05:00
Robert Haas bc4996e61b Make ALTER .. SET SCHEMA do nothing, instead of throwing an ERROR.
This was already true for CREATE EXTENSION, but historically has not
been true for other object types.  Therefore, this is a backward
incompatibility.  Per discussion on pgsql-hackers, everyone seems to
agree that the new behavior is better.

Marti Raudsepp, reviewed by Haribabu Kommi and myself
2015-11-19 10:49:25 -05:00
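In other words (schema and table names illustrative):
    -- previously an error; with this change it is silently a no-op
    ALTER TABLE public.t SET SCHEMA public;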
Peter Eisentraut c5ec406412 Message style fix
from Euler Taveira
2015-11-17 06:53:07 -05:00
Peter Eisentraut 5db837d3f2 Message improvements 2015-11-16 21:39:23 -05:00
Tom Lane 8004953b5a Speed up ruleutils' name de-duplication code, and fix overlength-name case.
Since commit 11e131854f, ruleutils.c has
attempted to ensure that each RTE in a query or plan tree has a unique
alias name.  However, the code that was added for this could be quite slow,
even as bad as O(N^3) if N identical RTE names must be replaced, as noted
by Jeff Janes.  Improve matters by building a transient hash table within
set_rtable_names.  The hash table in itself reduces the cost of detecting a
duplicate from O(N) to O(1), and we can save another factor of N by storing
the number of de-duplicated names already created for each entry, so that
we don't have to re-try names already created.  This way is probably a bit
slower overall for small range tables, but almost by definition, such cases
should not be a performance problem.

In principle the same problem applies to the column-name-de-duplication
code; but in practice that seems to be less of a problem, first because
N is limited since we don't support extremely wide tables, and second
because duplicate column names within an RTE are fairly rare, so that in
practice the cost is more like O(N^2) not O(N^3).  It would be very much
messier to fix the column-name code, so for now I've left that alone.

An independent problem in the same area was that the de-duplication code
paid no attention to the identifier length limit, and would happily produce
identifiers that were longer than NAMEDATALEN and wouldn't be unique after
truncation to NAMEDATALEN.  This could result in dump/reload failures, or
perhaps even views that silently behaved differently than before.  We can
fix that by shortening the base name as needed.  Fix it for both the
relation and column name cases.

In passing, check for interrupts in set_rtable_names, just in case it's
still slow enough to be an issue.

Back-patch to 9.3 where this code was introduced.
2015-11-16 13:45:17 -05:00
Tom Lane 7745bc352a Fix ruleutils.c's dumping of whole-row Vars in ROW() and VALUES() contexts.
Normally ruleutils prints a whole-row Var as "foo.*".  We already knew that
that doesn't work at top level of a SELECT list, because the parser would
treat the "*" as a directive to expand the reference into separate columns,
not a whole-row Var.  However, Joshua Yanovski points out in bug #13776
that the same thing happens at top level of a ROW() construct; and some
nosing around in the parser shows that the same is true in VALUES().
Hence, apply the same workaround already devised for the SELECT-list case,
namely to add a forced cast to the appropriate rowtype in these cases.
(The alternative of just printing "foo" was rejected because it is
difficult to avoid ambiguity against plain columns named "foo".)

Back-patch to all supported branches.
2015-11-15 14:41:09 -05:00
Tom Lane 7d9a4737c2 Improve type numeric's calculations for ln(), log(), exp(), pow().
Set the "rscales" for intermediate-result calculations to ensure that
suitable numbers of significant digits are maintained throughout.  The
previous coding hadn't thought this through in any detail, and as a result
could deliver results with many inaccurate digits, or in the worst cases
even fail with divide-by-zero errors as a result of losing all nonzero
digits of intermediate results.

In exp_var(), get rid entirely of the logic that separated the calculation
into integer and fractional parts: that was neither accurate nor
particularly fast.  The existing range-reduction method of dividing by 2^n
can be applied across the full input range instead of only 0..1, as long as
we are careful to set an appropriate rscale for each step.

Also fix the logic in mul_var() for shortening the calculation when the
caller asks for fewer output digits than an exact calculation would
require.  This bug doesn't affect simple multiplications since that code
path asks for an exact result, but it does contribute to accuracy issues
in the transcendental math functions.

In passing, improve performance of mul_var() a bit by forcing the shorter
input to be on the left, thus reducing the number of iterations of the
outer loop and probably also reducing the number of carry-propagation
steps needed.

This is arguably a bug fix, but in view of the lack of field complaints,
it does not seem worth the risk of back-patching.

Dean Rasheed
2015-11-14 14:55:46 -05:00
Tom Lane c5e86ea932 Add "xid <> xid" and "xid <> int4" operators.
The corresponding "=" operators have been there a long time, and not
having their negators is a bit of a nuisance.

Michael Paquier
2015-11-07 16:40:15 -05:00
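A sketch of the new negators (values arbitrary):
    SELECT '100'::xid <> '101'::xid;   -- expected: true
    SELECT '100'::xid <> 100;          -- the new xid <> int4 operator; expected: false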
Tom Lane a43b4ab111 Fix enforcement of restrictions inside regexp lookaround constraints.
Lookahead and lookbehind constraints aren't allowed to contain backrefs,
and parentheses within them are always considered non-capturing.  Or so
says the manual.  But the regexp parser forgot about these rules once
inside a parenthesized subexpression, so that constructs like (\w)(?=(\1))
were accepted (but then not correctly executed --- a case like this acted
like (\w)(?=\w), without any enforcement that the two \w's match the same
text).  And in (?=((foo))) the innermost parentheses would be counted as
capturing parentheses, though no text would ever be captured for them.

To fix, properly pass down the "type" argument to the recursive invocation
of parse().

Back-patch to all supported branches; it was agreed that silent
misexecution of such patterns is worse than throwing an error, even though
new errors in minor releases are generally not desirable.
2015-11-07 12:43:24 -05:00
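A hedged sketch of the enforcement described above, using the pattern from the commit message:
    -- a backreference inside a lookahead constraint is now rejected
    SELECT 'aa' ~ '(\w)(?=(\1))';   -- expected to raise an invalid-regular-expression error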
Robert Haas 8a1fab36ab pg_size_pretty: Format negative values similar to positive ones.
Previously, negative values were always displayed in bytes, regardless
of how large they were.

Adrian Vondendriesch, reviewed by Julien Rouhaud and myself
2015-11-06 11:03:02 -05:00
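A hedged example (the exact rounding is not taken from the commit):
    SELECT pg_size_pretty(-123456::bigint);   -- now something like "-121 kB" rather than raw bytes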
Tom Lane b23af45875 Fix erroneous hash calculations in gin_extract_jsonb_path().
The jsonb_path_ops code calculated hash values inconsistently in some cases
involving nested arrays and objects.  This would result in queries possibly
not finding entries that they should find, when using a jsonb_path_ops GIN
index for the search.  The problem cases involve JSONB values that contain
both scalars and sub-objects at the same nesting level, for example an
array containing both scalars and sub-arrays.  To fix, reset the current
stack->hash after processing each value or sub-object, not before; and
don't try to be cute about the outermost level's initial hash.

Correcting this means that existing jsonb_path_ops indexes may now be
inconsistent with the new hash calculation code.  The symptom is the same
--- searches not finding entries they should find --- but the specific
rows affected are likely to be different.  Users will need to REINDEX
jsonb_path_ops indexes to make sure that all searches work as expected.

Per bug #13756 from Daniel Cheng.  Back-patch to 9.4 where the faulty
logic was introduced.
2015-11-05 18:15:48 -05:00
Peter Eisentraut 7bd099d511 Update spelling of COPY options
The preferred spelling was changed from FORCE QUOTE to FORCE_QUOTE and
the like, but some code was still referring to the old spellings.
2015-11-04 21:01:26 -05:00
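For reference, the preferred spelling looks like this (table name illustrative):
    COPY t TO STDOUT (FORMAT csv, FORCE_QUOTE *);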
Tom Lane 12c9a04008 Implement lookbehind constraints in our regular-expression engine.
A lookbehind constraint is like a lookahead constraint in that it consumes
no text; but it checks for existence (or nonexistence) of a match *ending*
at the current point in the string, rather than one *starting* at the
current point.  This is a long-requested feature since it exists in many
other regex libraries, but Henry Spencer had never got around to
implementing it in the code we use.

Just making it work is actually pretty trivial; but naive copying of the
logic for lookahead constraints leads to code that often spends O(N^2) time
to scan an N-character string, because we have to run the match engine
from string start to the current probe point each time the constraint is
checked.  In typical use-cases a lookbehind constraint will be written at
the start of the regex and hence will need to be checked at every character
--- so O(N^2) work overall.  To fix that, I introduced a third copy of the
core DFA matching loop, paralleling the existing longest() and shortest()
loops.  This version, matchuntil(), can suspend and resume matching given
a couple of pointers' worth of storage space.  So we need only run it
across the string once, stopping at each interesting probe point and then
resuming to advance to the next one.

I also put in an optimization that simplifies one-character lookahead and
lookbehind constraints, such as "(?=x)" or "(?<!\w)", into AHEAD and BEHIND
constraints, which already existed in the engine.  This avoids the overhead
of the LACON machinery entirely for these rather common cases.

The net result is that lookbehind constraints run a factor of three or so
slower than Perl's for multi-character constraints, but faster than Perl's
for one-character constraints ... and they work fine for variable-length
constraints, which Perl gives up on entirely.  So that's not bad from a
competitive perspective, and there's room for further optimization if
anyone cares.  (In reality, raw scan rate across a large input string is
probably not that big a deal for Postgres usage anyway; so I'm happy if
it's linear.)
2015-10-30 19:14:19 -04:00
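A brief sketch of the new constraints:
    SELECT 'foobar'  ~ '(?<=foo)bar';   -- positive lookbehind: true
    SELECT 'fooxbar' ~ '(?<=foo)bar';   -- "bar" must immediately follow "foo": false
    SELECT 'bar'     ~ '(?<!foo)bar';   -- negative lookbehind: true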
Peter Eisentraut a8d585c091 Message style improvements
Message style, plurals, quoting, spelling, consistency with similar
messages
2015-10-28 20:38:36 -04:00
Alvaro Herrera dbe6f434ee Fix secondary expected output for commit_ts test
Per red wall in buildfarm
2015-10-27 23:03:22 -03:00
Tom Lane d371bebd3d Remove redundant CREATEUSER/NOCREATEUSER options in CREATE ROLE et al.
Once upon a time we did not have a separate CREATEROLE privilege, and
CREATEUSER effectively meant SUPERUSER.  When we invented CREATEROLE
(in 8.1) we also added SUPERUSER so as to have a less confusing keyword
for this role property.  However, we left CREATEUSER in place as a
deprecated synonym for SUPERUSER, because of backwards-compatibility
concerns.  It's still there and is still confusing people, as for example
in bug #13694 from Justin Catterson.  9.6 will be ten years or so later,
which surely ought to be long enough to end the deprecation and just
remove these old keywords.  Hence, do so.
2015-10-22 09:34:03 -07:00
Tom Lane d435542583 Fix incorrect translation of minus-infinity datetimes for json/jsonb.
Commit bda76c1c8c caused both plus and
minus infinity to be rendered as "infinity", which is not only wrong
but inconsistent with the pre-9.4 behavior of to_json().  Fix that by
duplicating the coding in date_out/timestamp_out/timestamptz_out more
closely.  Per bug #13687 from Stepan Perlov.  Back-patch to 9.4, like
the previous commit.
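
As a quick illustration of the corrected behavior:

    SELECT to_json('-infinity'::timestamptz);
    -- now rendered as "-infinity" rather than "infinity"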

In passing, also re-pgindent json.c, since it had gotten a bit messed up by
recent patches (and I was already annoyed by indentation-related problems
in back-patching this fix ...)
2015-10-20 11:07:04 -07:00
Noah Misch 8e3b4d9d40 Eschew "RESET statement_timeout" in tests.
Instead, use transaction abort.  Given an unlucky bout of latency, the
timeout would cancel the RESET itself.  Buildfarm members gharial,
lapwing, mereswine, shearwater, and sungazer witness that.  Back-patch
to 9.1 (all supported versions).  The query_canceled test still could
time out before entering its subtransaction; for whatever reason, that
has yet to happen on the buildfarm.
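
One way to write the pattern the commit describes (timeout and sleep
values illustrative; not the exact test text):

    BEGIN;
    SET statement_timeout = '100ms';
    SELECT pg_sleep(10);  -- expected to be canceled by the timeout
    ROLLBACK;             -- aborting the transaction undoes the SET,
                          -- so no RESET remains to be canceled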
2015-10-20 00:37:22 -04:00
Tom Lane 9f1e642d50 Fix incorrect handling of lookahead constraints in pg_regprefix().
pg_regprefix was doing nothing with lookahead constraints, which would
be fine if it were the right kind of nothing, but it isn't: we have to
terminate our search for a fixed prefix, not just pretend the LACON arc
isn't there.  Otherwise, if the current state has both a LACON outarc and a
single plain-color outarc, we'd falsely conclude that the color represents
an addition to the fixed prefix, and generate an extracted index condition
that restricts the indexscan too much.  (See added regression test case.)

Terminating the search is conservative: we could traverse the LACON arc
(thus assuming that the constraint can be satisfied at runtime) and then
examine the outarcs of the linked-to state.  But that would be a lot more
work than it seems worth, because writing a LACON followed by a single
plain character is a pretty silly thing to do.

This makes a difference only in rather contrived cases, but it's a bug,
so back-patch to all supported branches.
2015-10-19 13:54:53 -07:00
Tom Lane afdfcd3f76 Miscellaneous cleanup of regular-expression compiler.
Revert our previous addition of "all" flags to copyins() and copyouts();
they're no longer needed, and were never anything but an unsightly hack.

Improve a couple of infelicities in the REG_DEBUG code for dumping
the NFA data structure, including adding code to count the total
number of states and arcs.

Add a couple of missed error checks.

Add some more documentation in the README file, and some regression tests
illustrating cases that exceeded the state-count limit and/or took
unreasonable amounts of time before this set of patches.

Back-patch to all supported branches.
2015-10-16 15:55:59 -04:00
Tom Lane 48789c5d23 Fix regular-expression compiler to handle loops of constraint arcs.
It's possible to construct regular expressions that contain loops of
constraint arcs (that is, ^ $ AHEAD BEHIND or LACON arcs).  There's no use
in fully traversing such a loop at execution, since you'd just end up in
the same NFA state without having consumed any input.  Worse, such a loop
leads to infinite looping in the pullback/pushfwd stage of compilation,
because we keep pushing or pulling the same constraints around the loop
in a vain attempt to move them to the pre or post state.  Such looping was
previously recognized in CVE-2007-4772; but the fix only handled the case
of trivial single-state loops (that is, a constraint arc leading back to
its source state) ... and not only that, it was incorrect even for that
case, because it broke the admittedly-not-very-clearly-stated API contract
of the pull() and push() subroutines.  The first two regression test cases
added by this commit exhibit patterns that result in assertion failures
because of that (though there seem to be no ill effects in non-assert
builds).  The other new test cases exhibit multi-state constraint loops;
in an unpatched build they will run until the NFA state-count limit is
exceeded.
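
A hypothetical example of such a pattern (not one of the commit's own
test cases): the starred group below consists only of constraints, so
its NFA contains a loop of constraint arcs, yet the overall match is
still well-defined:

    SELECT 'x' ~ '(^|$)*x';  -- true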

To fix, remove the code added for CVE-2007-4772, and instead create a
general-purpose constraint-loop-breaking phase of regex compilation that
executes before we do pullback/pushfwd.  Since we never need to traverse
a constraint loop fully, we can just break the loop at any chosen spot,
if we add clone states that can replicate any sequence of arc transitions
that would've traversed just part of the loop.

Also add some commentary clarifying why we have to have all these
machinations in the first place.

This class of problems has been known for some time --- we had a report
from Marc Mamin about two years ago, for example, and there are related
complaints in the Tcl bug tracker.  I had discussed a fix of this kind
off-list with Henry Spencer, but didn't get around to doing something
about it until the issue was rediscovered by Greg Stark recently.

Back-patch to all supported branches.
2015-10-16 15:55:58 -04:00
Tom Lane 3587cbc34f Fix NULL handling in datum_to_jsonb().
The function failed to adhere to its specification that the "tcategory"
argument should not be examined when the input value is NULL.  This
resulted in a crash in some cases.  Per bug #13680 from Boyko Yordanov.
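
One simple path that exercises NULL handling in jsonb construction
(illustrative only; not necessarily the reporter's exact case):

    SELECT jsonb_build_object('k', NULL);  -- {"k": null}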

In passing, re-pgindent some recent changes in jsonb.c, and fix a rather
ungrammatical comment.

Diagnosis and patch by Michael Paquier, cosmetic changes by me
2015-10-15 13:46:09 -04:00
Robert Haas b8dd19af50 Improve INSERT .. ON CONFLICT error message.
Peter Geoghegan, reviewed by me.
2015-10-13 15:33:07 -04:00
Peter Eisentraut f20b26960a Fix whitespace
2015-10-11 21:44:27 -04:00
Robert Haas db0f6cad48 Remove set_latch_on_sigusr1 flag.
This flag has proven to be a recipe for bugs, and it doesn't seem like
it can really buy anything in terms of performance.  So let's just
*always* set the process latch when we receive SIGUSR1 instead of
trying to do it only when needed.

Per my recent proposal on pgsql-hackers.
2015-10-09 14:31:04 -04:00
Stephen Frost b7aac36245 Handle append_rel_list in expand_security_qual
During expand_security_quals, we take the security barrier quals on an
RTE and create a subquery which evaluates the quals.  During this, we
have to replace any variables in the outer query which refer to the
original RTE with references to the columns from the subquery.

We need to also perform that replacement for any Vars in the
append_rel_list.

Only back-patching to 9.5, since in 9.4 we go through this process only
for auto-updatable security barrier views, which UNION ALL queries are not.
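
A hypothetical shape of an affected query: an RLS-protected table
appearing as a UNION ALL branch, whose security quals get expanded
into a subquery (table and column names illustrative):

    SELECT id FROM protected_tbl WHERE flag
    UNION ALL
    SELECT id FROM protected_tbl WHERE NOT flag;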

Discovered by Haribabu Kommi

Patch by Dean Rasheed
2015-10-09 10:49:02 -04:00
Andrew Dunstan b6363772fd Factor out encoding specific tests for json
This lets us remove the large alternative results files for the main
json and jsonb tests, which makes modifying those tests simpler for
committers and patch submitters.

Backpatch to 9.4 for jsonb and 9.3 for json.
2015-10-07 22:18:27 -04:00
Bruce Momjian b943f502b7 Have CREATE TABLE LIKE add OID column if any LIKEd table has one
Also, process constraints for LIKEd tables at the end so an OID column
can be referenced in a constraint.
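
A sketch of the described behavior (table names illustrative):

    CREATE TABLE src (a int) WITH OIDS;
    CREATE TABLE dst (LIKE src);  -- dst now gets an OID column as well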

Report by Tom Lane
2015-10-05 21:19:16 -04:00
Bruce Momjian 2d87eedc1d to_char(): Do not count negative sign as a digit for time values
For time masks, like HH24, MI, SS, CC, MM, do not count the negative
sign as part of the zero-padding length specified by the mask, e.g. have
to_char('-4 years'::interval, 'YY') return '-04', not '-4'.
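
In runnable form:

    SELECT to_char('-4 years'::interval, 'YY');  -- '-04' (previously '-4')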

Report by Craig Ringer
2015-10-05 20:51:46 -04:00
Tom Lane 9e36c91b46 Fix insufficiently-portable regression test case.
Some of the buildfarm members are evidently miserly enough of stack space
to pass the originally-committed form of this test.  Increase the
requirement 10X to hopefully ensure that it fails as-expected everywhere.

Security: CVE-2015-5289
2015-10-05 12:19:14 -04:00
Stephen Frost be400cd25c Add regression tests for INSERT/UPDATE+RETURNING
This adds regression tests which are specific to INSERT+RETURNING and
UPDATE+RETURNING to ensure that the SELECT policies are added as
WithCheckOptions (and should therefore throw an error when the policy is
violated).
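
A hypothetical sketch of the behavior being tested, for a table t with
RLS enabled and a boolean column "visible" (names illustrative):

    CREATE POLICY p_ins ON t FOR INSERT WITH CHECK (true);
    CREATE POLICY p_sel ON t FOR SELECT USING (visible);
    INSERT INTO t (visible) VALUES (false) RETURNING *;
    -- expected to fail: the returned row is not visible under p_sel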

Per suggestion from Andres.

Back-patch to 9.5 as the prior commit was.
2015-10-05 10:14:49 -04:00
Noah Misch 08fa47c485 Prevent stack overflow in json-related functions.
Sufficiently-deep recursion heretofore elicited a SIGSEGV.  If an
application constructs PostgreSQL json or jsonb values from arbitrary
user input, application users could have exploited this to terminate all
active database connections.  That applies to 9.3, where the json parser
adopted recursive descent, and later versions.  Only row_to_json() and
array_to_json() were at risk in 9.2, both in a non-security capacity.
Back-patch to 9.2, where the json type was introduced.
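
A hypothetical reproducer shape (nesting depth illustrative): deeply
nested input should now fail cleanly with a stack-depth error rather
than crashing the backend:

    SELECT (repeat('[', 10000000) || '1' || repeat(']', 10000000))::json;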

Oskari Saarenmaa, reviewed by Michael Paquier.

Security: CVE-2015-5289
2015-10-05 10:06:29 -04:00
Stephen Frost 088c83363a ALTER TABLE .. FORCE ROW LEVEL SECURITY
To allow users to force RLS to always be applied, even for table owners,
add ALTER TABLE .. FORCE ROW LEVEL SECURITY.
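
For example (table name illustrative):

    ALTER TABLE accounts ENABLE ROW LEVEL SECURITY;
    ALTER TABLE accounts FORCE ROW LEVEL SECURITY;
    -- policies now apply to the table owner as well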

row_security=off overrides FORCE ROW LEVEL SECURITY, to ensure pg_dump
output is complete (by default).

Also add SECURITY_NOFORCE_RLS context to avoid data corruption when
ALTER TABLE .. FORCE ROW LEVEL SECURITY is being used.  The
SECURITY_NOFORCE_RLS security context is used only during referential
integrity checks and is only considered in check_enable_rls() after we
have already checked that the current user is the owner of the relation
(which should always be the case during referential integrity checks).

Back-patch to 9.5 where RLS was added.
2015-10-04 21:05:08 -04:00
Andrew Dunstan 1edd4ec831 Disallow invalid path elements in jsonb_set
Null path elements and, where the object is an array, invalid integer
elements now cause an error.
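
Illustrative examples of the newly-rejected cases (error wording not
shown verbatim):

    SELECT jsonb_set('[1,2,3]'::jsonb, '{foo}', '"x"');
    -- error: path element for an array must be an integer
    SELECT jsonb_set('{"a":1}'::jsonb, ARRAY[NULL]::text[], '"x"');
    -- error: null path element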

Incorrect behaviour noted by Thom Brown, patch from Dmitry Dolgov.

Backpatch to 9.5 where jsonb_set was introduced
2015-10-04 13:28:16 -04:00