Commit Graph

1997 Commits

Tom Lane
0cd787802f Rename a function to avoid naming conflict in parallel regression tests.
Commit 31a891857a added some tests in
plpgsql.sql that used a function rather unthinkingly named "foo()".
However, rangefuncs.sql has some much older tests that create a function
of that name, and since these test scripts run in parallel, there is a
chance of failures if the timing is just right.  Use another name to
avoid that.  Per buildfarm (failure seen today on "hamerkop", but
probably it's happened before and not been noticed).
2013-07-06 11:16:50 -04:00
Noah Misch
02d2b694ee Update messages, comments and documentation for materialized views.
Updates all instances where the verbiage was lagging the code.
Back-patch to 9.3, where materialized views were introduced.
2013-07-05 15:37:51 -04:00
Magnus Hagander
c87ff71f37 Expose the estimation of number of changed tuples since last analyze
This value, now pg_stat_all_tables.n_mod_since_analyze, was already
tracked and used by autovacuum, but not exposed to the user.
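
For illustration, a minimal query against the new column (the table name is a hypothetical placeholder):

    SELECT relname, n_mod_since_analyze
    FROM pg_stat_all_tables
    WHERE relname = 'my_table';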

Mark Kirkwood, review by Laurenz Albe
2013-07-05 15:10:15 +02:00
Fujii Masao
2ef085d0e6 Get rid of pg_class.reltoastidxid.
Treat the TOAST index just the same as a normal one and get the OID
of the TOAST index from pg_index rather than pg_class.reltoastidxid.
This change allows us to handle multiple TOAST indexes, which is
required infrastructure for the upcoming REINDEX CONCURRENTLY
feature.

Patch by Michael Paquier, reviewed by Andres Freund and me.
2013-07-04 03:24:09 +09:00
Robert Haas
f33c53ec5b Revert "Hopefully-portable regression tests for CREATE/ALTER/DROP COLLATION."
This reverts commit 263645305b.

The buildfarm is sad.
2013-07-03 13:27:50 -04:00
Robert Haas
263645305b Hopefully-portable regression tests for CREATE/ALTER/DROP COLLATION.
The collate.linux.utf8 test covers some of the same territory, but
isn't portable and so probably does not get run often, or on
non-Linux platforms.  If this approach turns out to be sufficiently
portable, we may want to look at trimming the redundant tests out
of that file to avoid duplication.

Robins Tharakan, reviewed by Michael Paquier and Fabien Coelho,
with further changes and cleanup by me.
2013-07-03 12:31:27 -04:00
Tom Lane
5530a82643 Fix handling of auto-updatable views on inherited tables.
An INSERT into such a view should work just like an INSERT into its base
table, ie the insertion should go directly into that table ... not be
duplicated into each child table, as was happening before, per bug #8275
from Rushabh Lathia.  On the other hand, the current behavior for
UPDATE/DELETE seems reasonable: the update/delete traverses the child
tables, or not, depending on whether the view specifies ONLY or not.
Add some regression tests covering this area.

Dean Rasheed
2013-07-03 12:26:52 -04:00
Robert Haas
00a7767fcc Regression tests for LISTEN/NOTIFY/UNLISTEN/pg_notify.
Robins Tharakan, reviewed by Szymon Guz
2013-07-03 11:07:08 -04:00
Robert Haas
ada3e776c2 Additional regression tests for CREATE OPERATOR.
Robins Tharakan, reviewed by Szymon Guz
2013-07-03 10:48:26 -04:00
Noah Misch
7cd9b1371d Expose object name error fields in PL/pgSQL.
Specifically, permit attaching them to the error in RAISE and retrieving
them from a caught error in GET STACKED DIAGNOSTICS.  RAISE enforces
nothing about the content of the fields; for its purposes, they are just
additional string fields.  Consequently, clarify in the protocol and
libpq documentation that the usual relationships between error fields,
like a schema name appearing wherever a table name appears, are not
universal.  This freedom has other applications; consider a FDW
propagating an error from an RDBMS having no schema support.

Back-patch to 9.3, where core support for the error fields was
introduced.  This prevents the confusion of having a release where libpq
exposes the fields and PL/pgSQL does not.
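
A minimal sketch of attaching and retrieving the fields (the table and schema names are hypothetical):

    DO $$
    DECLARE
      v_table text;
    BEGIN
      RAISE EXCEPTION 'custom failure'
        USING TABLE = 'my_table', SCHEMA = 'public';
    EXCEPTION WHEN OTHERS THEN
      GET STACKED DIAGNOSTICS v_table = TABLE_NAME;
      RAISE NOTICE 'failing table: %', v_table;
    END $$;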

Pavel Stehule, lexical revisions by Noah Misch.
2013-07-03 07:29:56 -04:00
Robert Haas
3682025015 Add support for multiple kinds of external toast datums.
To that end, support tags rather than lengths for external datums.
As an example of how this can be used, add support for "indirect"
tuples which point to some externally allocated memory containing
a toast tuple.  Similar infrastructure could be used for other
purposes, including, perhaps, support for alternative compression
algorithms.

Andres Freund, reviewed by Hitoshi Harada and myself
2013-07-02 13:38:55 -04:00
Simon Riggs
f177cbfe67 ALTER TABLE ... ALTER CONSTRAINT for FKs
Allow constraint attributes to be altered,
so the default setting of NOT DEFERRABLE
can be altered to DEFERRABLE and back.
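
A usage sketch, assuming a table "orders" with a foreign-key constraint "orders_customer_id_fkey" (both names hypothetical):

    ALTER TABLE orders
        ALTER CONSTRAINT orders_customer_id_fkey DEFERRABLE INITIALLY DEFERRED;
    -- and back to the default:
    ALTER TABLE orders
        ALTER CONSTRAINT orders_customer_id_fkey NOT DEFERRABLE;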

Review by Abhijit Menon-Sen
2013-06-29 00:27:30 +01:00
Robert Haas
5893ffa79c Make the OVER keyword unreserved.
This results in a slightly less specific error message when OVER
is used in a context where we don't accept window functions, but
per discussion, it's worth it to get the benefit of not needing
to reserve this keyword any more.  This same refactoring will
also let us avoid reserving some other keywords that we expect
to add in upcoming patches (specifically, IGNORE, RESPECT, and
FILTER).

Troels Nielsen, with minor changes by me
2013-06-28 11:11:00 -04:00
Simon Riggs
4f14c86d74 Reverting previous commit, pending investigation
of sporadic seg faults from various build farm members.
2013-06-24 21:21:18 +01:00
Simon Riggs
b577a57d41 ALTER TABLE ... ALTER CONSTRAINT for FKs
Allow constraint attributes to be altered,
so the default setting of NOT DEFERRABLE
can be altered to DEFERRABLE and back.

Review by Abhijit Menon-Sen
2013-06-24 20:07:41 +01:00
Kevin Grittner
8791627b8f Fix the create_index regression test for Danish collation.
In Danish collations, there are letter combinations which sort
higher than 'Z'.  A test for values > 'WA' was picking up rows
where the value started with 'AA', causing the test to fail.

Backpatch to 9.2, where the failing test was added.

Per report from Svenne Krap and analysis by Jeff Janes
2013-06-19 10:36:45 -05:00
Andrew Dunstan
78ed8e03c6 Fix unescaping of JSON Unicode escapes, especially for non-UTF8.
Per discussion  on -hackers. We treat Unicode escapes when unescaping
them similarly to the way we treat them in PostgreSQL string literals.
Escapes in the ASCII range are always accepted, no matter what the
database encoding. Escapes for higher code points are only processed in
UTF8 databases, and attempts to process them in other databases will
result in an error. \u0000 is never unescaped, since it would result in
an impermissible null byte.
2013-06-12 13:35:24 -04:00
Tom Lane
a4424c57c3 Remove unnecessary restrictions about RowExprs in transformAExprIn().
When the existing code here was written, it made sense to special-case
RowExprs because that was the only way that we could handle row comparisons
at all.  Now that we have record_eq() and arrays of composites, the generic
logic for "scalar" types will in fact work on RowExprs too, so there's no
reason to throw error for combinations of RowExprs and other ways of
forming composite values, nor to ignore the possibility of using a
ScalarArrayOpExpr.  But keep using the old logic when comparing two
RowExprs, for consistency with the main transformAExprOp() logic.  (This
allows some cases with not-quite-identical rowtypes to succeed, so we might
get push-back if we removed it.)  Per bug #8198 from Rafal Rzepecki.

Back-patch to all supported branches, since this works fine as far back as
8.4.

Rafal Rzepecki and Tom Lane
2013-06-09 18:39:20 -04:00
Andrew Dunstan
94e3311b97 Handle Unicode surrogate pairs correctly when processing JSON.
In 9.2, Unicode escape sequences are not analysed at all other than
to make sure that they are in the form \uXXXX. But in 9.3 many of the
new operators and functions try to turn JSON text values into text in
the server encoding, and this includes de-escaping Unicode escape
sequences. This processing had not taken into account the possibility
that this might contain a surrogate pair to designate a character
outside the BMP. That is now handled correctly.

This also enforces correct use of surrogate pairs, something that is not
done by the type's input routines. This fact is noted in the docs.
2013-06-08 09:12:48 -04:00
Tom Lane
964c0d0f80 Prevent pushing down WHERE clauses into unsafe UNION/INTERSECT nests.
The planner is aware that it mustn't push down upper-level quals into
subqueries if the quals reference subquery output columns that contain
set-returning functions or volatile functions, or are non-DISTINCT outputs
of a DISTINCT ON subquery.  However, it missed making this check when
there were one or more levels of UNION or INTERSECT above the dangerous
expression.  This could lead to "set-valued function called in context that
cannot accept a set" errors, as seen in bug #8213 from Eric Soroos, or to
silently wrong answers in the other cases.

To fix, refactor the checks so that we make the column-is-unsafe checks
during subquery_is_pushdown_safe(), which already has to recursively
inspect all arms of a set-operation tree.  This makes
qual_is_pushdown_safe() considerably simpler, at the cost that we will
spend some cycles checking output columns that possibly aren't referenced
in any upper qual.  But the cases where this code gets executed at all
are already nontrivial queries, so it's unlikely anybody will notice any
slowdown of planning.

This has been broken since commit 05f916e6ad,
which makes the bug over ten years old.  A bit surprising nobody noticed it
before now.
2013-06-05 23:45:11 -04:00
Noah Misch
97c4d9b7c7 Don't emit non-canonical empty arrays in array_remove().
Dean Rasheed
2013-05-31 21:50:59 -04:00
Peter Eisentraut
8b5a3998a1 Remove whitespace from end of lines 2013-05-30 21:05:07 -04:00
Heikki Linnakangas
d0cab7903b Remove unused regression test files.
euc_* and mule_internal test cases were identical to the ones in
src/test/mb. sql_ascii didn't exist elsewhere, but has been broken since
2001, and doesn't seem very interesting anyway. drop.sql hasn't been used
since 2000, when regress.sh was removed.
2013-05-18 22:35:37 +03:00
Tom Lane
403bd6a18b Fix crash when trying to display a NOTIFY rule action.
Fixes oversight in commit 2ffa740be9.
Per report from Josh Kupershmidt.

I think we've broken this case before, so let's add a regression test
this time.
2013-05-16 16:47:26 -04:00
Tom Lane
e7bfc7e42c Fix some uses of "the quick brown fox".
If we're going to quote a well-known pangram, we should quote it
accurately.  Per gripe from Thom Brown.
2013-05-16 12:30:41 -04:00
Tom Lane
904af8db8a Fix handling of strict non-set functions with NULLs in set-valued inputs.
In a construct like "select plain_function(set_returning_function(...))",
the plain function is applied to each output row of the SRF successively.
If some of the SRF outputs are NULL, and the plain function is strict,
you'd expect to get NULL results for such rows ... but what actually
happened was that such rows were omitted entirely from the result set.
This was due to confusion of this case with what should happen for nested
set-returning functions; a strict SRF is indeed supposed to yield an empty
set for null input.  Per bug #8150 from Erwin Brandstetter.
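
A sketch of the affected construct (upper() is strict):

    SELECT upper(unnest(ARRAY['a', NULL, 'b']));
    -- now yields three rows: A, NULL, B (previously the NULL row was omitted)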

Although this has been broken forever, we're not back-patching because
of the possibility that some apps out there expect the incorrect behavior.
This change should be listed as a possible incompatibility in the 9.3
release notes.
2013-05-12 13:08:12 -04:00
Tom Lane
284e28f228 Update collate.linux.utf8.out for ruleutils.c line-wrapping changes.
Missed in commit 62e666400d.
2013-05-08 22:47:33 -04:00
Tom Lane
a7b965382c Better fix for permissions tests in excluded subqueries.
This reverts the code changes in 50c137487c,
which turned out to induce crashes and not completely fix the problem
anyway.  That commit only considered single subqueries that were excluded
by constraint-exclusion logic, but actually the problem also exists for
subqueries that are appendrel members (ie part of a UNION ALL list).  In
such cases we can't add a dummy subpath to the appendrel's AppendPath list
without defeating the logic that recognizes when an appendrel is completely
excluded.  Instead, fix the problem by having setrefs.c scan the rangetable
an extra time looking for subqueries that didn't get into the plan tree.
(This approach depends on the 9.2 change that made set_subquery_pathlist
generate dummy paths for excluded single subqueries, so that the exclusion
behavior is the same for single subqueries and appendrel members.)

Note: it turns out that the appendrel form of the missed-permissions-checks
bug exists as far back as 8.4.  However, since the practical effect of that
bug seems pretty minimal, consensus is to not attempt to fix it in the back
branches, at least not yet.  Possibly we could back-port this patch once
it's gotten a reasonable amount of testing in HEAD.  For the moment I'm
just going to revert the previous patch in 9.2.
2013-05-08 16:59:58 -04:00
Heikki Linnakangas
cb953d8b1b Use the term "radix tree" instead of "suffix tree" for SP-GiST text opclass.
What we have implemented is a radix tree (or a radix trie or a patricia
trie), but the docs and code comments incorrectly called it a "suffix tree".

Alexander Korotkov
2013-05-08 14:34:26 +03:00
Tom Lane
1d6c72a55b Move materialized views' is-populated status into their pg_class entries.
Previously this state was represented by whether the view's disk file had
zero or nonzero size, which is problematic for numerous reasons, since it's
breaking a fundamental assumption about heap storage.  This was done to
allow unlogged matviews to revert to unpopulated status after a crash
despite our lack of any ability to update catalog entries post-crash.
However, this poses enough risk of future problems that it seems better to
not support unlogged matviews until we can find another way.  Accordingly,
revert that choice as well as a number of existing kluges forced by it
in favor of creating a pg_class.relispopulated flag column.
2013-05-06 13:27:22 -04:00
Tom Lane
3223b25ff7 Disallow unlogged materialized views.
The initial implementation of this feature was really unsupportable,
because it's relying on the physical size of an on-disk file to carry the
relation's populated/unpopulated state, which is at least a modularity
violation and could have serious long-term consequences.  We could say that
an unlogged matview goes to empty on crash, but not everybody likes that
definition, so let's just remove the feature for 9.3.  We can add it back
when we have a less klugy implementation.

I left the grammar and tab-completion support for CREATE UNLOGGED
MATERIALIZED VIEW in place, since it's harmless and allows delivering a
more specific error message about the unsupported feature.

I'm committing this separately to ease identification of what should be
reverted when/if we are able to re-enable the feature.
2013-05-06 12:00:06 -04:00
Kevin Grittner
b69ec7cc99 Prevent (auto)vacuum from truncating first page of populated matview.
Per report from Fujii Masao, with regression test using his example.
2013-05-02 17:33:03 -05:00
Tom Lane
50c137487c Fix permission tests for views/tables proven empty by constraint exclusion.
A view defined as "select <something> where false" had the curious property
that the system wouldn't check whether users had the privileges necessary
to select from it.  More generally, permissions checks could be skipped
for tables referenced in sub-selects or views that were proven empty by
constraint exclusion (although some quick testing suggests this seldom
happens in cases of practical interest).  This happened because the planner
failed to include rangetable entries for such tables in the finished plan.

This was noticed in connection with erroneous handling of materialized
views, but actually the issue is quite unrelated to matviews.  Therefore,
revert commit 200ba1667b in favor of a more
direct test for the real problem.

Back-patch to 9.2 where the bug was introduced (by commit
7741dd6590).
2013-05-01 18:26:50 -04:00
Kevin Grittner
200ba1667b Add regression test for bug fixed by recent refactoring.
Test case by Andres Freund for bug fixed by Tom Lane's refactoring
in commit 5194024d72
2013-04-30 15:02:43 -05:00
Tom Lane
db9f0e1d9a Postpone creation of pathkeys lists to fix bug #8049.
This patch gets rid of the concept of, and infrastructure for,
non-canonical PathKeys; we now only ever create canonical pathkey lists.

The need for non-canonical pathkeys came from the desire to have
grouping_planner initialize query_pathkeys and related pathkey lists before
calling query_planner.  However, since query_planner didn't actually *do*
anything with those lists before they'd been made canonical, we can get rid
of the whole mess by just not creating the lists at all until the point
where we formerly canonicalized them.

There are several ways in which we could implement that without making
query_planner itself deal with grouping/sorting features (which are
supposed to be the province of grouping_planner).  I chose to add a
callback function to query_planner's API; other alternatives would have
required adding more fields to PlannerInfo, which while not bad in itself
would create an ABI break for planner-related plugins in the 9.2 release
series.  This still breaks ABI for anything that calls query_planner
directly, but it seems somewhat unlikely that there are any such plugins.

I had originally conceived of this change as merely a step on the way to
fixing bug #8049 from Teun Hoogendoorn; but it turns out that this fixes
that bug all by itself, as per the added regression test.  The reason is
that now get_eclass_for_sort_expr is adding the ORDER BY expression at the
end of EquivalenceClass creation not the start, and so anything that is in
a multi-member EquivalenceClass has already been created with correct
em_nullable_relids.  I am suspicious that there are related scenarios in
which we still need to teach get_eclass_for_sort_expr to compute correct
nullable_relids, but am not eager to risk destabilizing either 9.2 or 9.3
to fix bugs that are only hypothetical.  So for the moment, do this and
stop here.

Back-patch to 9.2 but not to earlier branches, since they don't exhibit
this bug for lack of join-clause-movement logic that depends on
em_nullable_relids being correct.  (We might have to revisit that choice
if any related bugs turn up.)  In 9.2, don't change the signature of
make_pathkeys_for_sortclauses nor remove canonicalize_pathkeys, so as
not to risk more plugin breakage than we have to.
2013-04-29 14:50:03 -04:00
Tom Lane
41a2760f61 Fix collation assignment for aggregates with ORDER BY.
ORDER BY expressions were being treated the same as regular aggregate
arguments for purposes of collation determination, but really they should
not affect the aggregate's collation at all; only collations of the
aggregate's regular arguments should affect it.

In many cases this mistake would lead to incorrectly throwing a "collation
conflict" error; but in some cases the corrected code will silently assign
a different collation to the aggregate than before, for example
	agg(foo ORDER BY bar COLLATE "x")
which will now use foo's collation rather than "x" for the aggregate.
Given this risk and the lack of field complaints about the issue, it
doesn't seem prudent to back-patch.

In passing, rearrange code in assign_collations_walker so that we don't
need multiple copies of the standard logic for computing collation of a
node with children.  (Previously, CaseExpr duplicated the standard logic,
and we would have needed a third copy for Aggref without this change.)

Andrew Gierth and David Fetter
2013-04-26 15:48:53 -04:00
Peter Eisentraut
cc26ea9fe2 Clean up references to SQL92
In most cases, these were just references to the SQL standard in
general.  In a few cases, a contrast was made between SQL92 and later
standards -- those have been kept unchanged.
2013-04-20 11:04:41 -04:00
Peter Eisentraut
e08fdf1310 Add serial comma 2013-04-14 11:12:30 -04:00
Alvaro Herrera
6cd18a88b6 Remove quotes around SQL statement in error message 2013-04-11 12:00:09 -03:00
Andrew Dunstan
a570c98d7f Add new JSON processing functions and parser API.
The JSON parser is converted into a recursive descent parser, and
exposed for use by other modules such as extensions. The API provides
hooks for all the significant parser events, such as the beginning and end
of objects and arrays, and providing functions to handle these hooks
allows for fairly simple construction of a wide variety of JSON
processing functions. A set of new basic processing functions and
operators is also added, which use this API, including operations to
extract array elements, object fields, get the length of arrays and the
set of keys of a field, deconstruct an object into a set of key/value
pairs, and create records from JSON objects and arrays of objects.
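
A few usage sketches of the new operators and functions:

    SELECT '{"a": 1, "b": [10, 20]}'::json -> 'b' -> 1;     -- 20
    SELECT '{"a": 1, "b": [10, 20]}'::json ->> 'a';         -- text '1'
    SELECT json_array_length('[1, 2, 3]'::json);            -- 3
    SELECT * FROM json_each('{"a": 1, "b": 2}'::json);      -- key/value pairs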

Catalog version bumped.

Andrew Dunstan, with some documentation assistance from Merlin Moncure.
2013-03-29 14:12:13 -04:00
Alvaro Herrera
473ab40c8b Add sql_drop event for event triggers
This event takes place just before ddl_command_end, and is fired if and
only if at least one object has been dropped by the command.  (For
instance, DROP TABLE IF EXISTS of a table that does not in fact exist
will not lead to such a trigger firing).  Commands that drop multiple
objects (such as DROP SCHEMA or DROP OWNED BY) will cause a single event
to fire.  Some firings might be surprising, such as
ALTER TABLE DROP COLUMN.

The trigger is fired after the drop has taken place, because that has
been deemed the safest design, to avoid exposing possibly-inconsistent
internal state (system catalogs as well as current transaction) to the
user function code.  This means that careful tracking of object
identification is required during the object removal phase.

Like other currently existing events, there is support for tag
filtering.

To support the new event, add a new pg_event_trigger_dropped_objects()
set-returning function, which returns a set of rows comprising the
objects affected by the command.  This is to be used within the user
function code, and is mostly modelled after the recently introduced
pg_identify_object() function.
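
A minimal sketch of a trigger using the new event and function (the function and trigger names are hypothetical):

    CREATE OR REPLACE FUNCTION log_drops() RETURNS event_trigger
    LANGUAGE plpgsql AS $$
    DECLARE
      obj record;
    BEGIN
      FOR obj IN SELECT * FROM pg_event_trigger_dropped_objects()
      LOOP
        RAISE NOTICE 'dropped: % %', obj.object_type, obj.object_identity;
      END LOOP;
    END $$;

    CREATE EVENT TRIGGER report_drops ON sql_drop EXECUTE PROCEDURE log_drops();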

Catalog version bumped due to the new function.

Dimitri Fontaine and Álvaro Herrera
Review by Robert Haas, Tom Lane
2013-03-28 13:05:48 -03:00
Heikki Linnakangas
f897c4744f Fix "element <@ range" cost estimation.
The statistics-based cost estimation patch for range types broke that, by
incorrectly assuming that the left operand of all range operators is a
range. That led to a "type x is not a range type" error. Because it took so
long for anyone to notice, add a regression test for that case.

We still don't do proper statistics-based cost estimation for that, so you
just get a default constant estimate. We should look into implementing that,
but this patch at least fixes the regression.

Spotted by Tom Lane, when testing query from Josh Berkus.
2013-03-21 11:21:51 +02:00
Kevin Grittner
241139ae4b Use ORDER BY on matview definitions where needed for stable plans.
Per report from Hadi Moshayedi of matview regression test failure
with optimization of aggregates.  A few ORDER BY clauses improve
code coverage for matviews while solving that problem.
2013-03-19 10:33:37 -05:00
Tom Lane
73e7025bd8 Extend format() to handle field width and left/right alignment.
This change adds some more standard sprintf() functionality to format().
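
For example:

    SELECT format('|%10s|', 'abc');    -- '|       abc|'
    SELECT format('|%-10s|', 'abc');   -- '|abc       |'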

Pavel Stehule, reviewed by Dean Rasheed and Kyotaro Horiguchi
2013-03-14 22:56:56 -04:00
Tom Lane
1a1832eb08 Avoid inserting no-op Limit plan nodes.
This was discussed in connection with the patch to avoid inserting no-op
Result nodes, but not actually implemented therein.
2013-03-14 15:11:05 -04:00
Kevin Grittner
8d7ff13ed5 Add regression test for MV join to view.
This would have caught a bug in the initial patch, and seems like
a good thing to test going forward.

Per bug report by Erik Rijkers and fix by Tom Lane
2013-03-14 13:34:51 -05:00
Tom Lane
4387cf956b Avoid inserting Result nodes that only compute identity projections.
The planner sometimes inserts Result nodes to perform column projections
(ie, arbitrary scalar calculations) above plan nodes that lack projection
logic of their own.  However, we did that even if the lower plan node was
in fact producing the required column set already; which is a pretty common
case given the popularity of "SELECT * FROM ...".  Measurements show that
the useless plan node adds non-negligible overhead, especially when there
are many columns in the result.  So add a check to avoid inserting a Result
node unless there's something useful for it to do.

There are a couple of remaining places where unnecessary Result nodes
could get inserted, but they are (a) much less performance-critical,
and (b) coded in such a way that it's hard to avoid inserting a Result,
because the desired tlist is changed on-the-fly in subsequent logic.
We'll leave those alone for now.

Kyotaro Horiguchi; reviewed and further hacked on by Amit Kapila and
Tom Lane.
2013-03-14 13:43:18 -04:00
Peter Eisentraut
788bce13d3 Add regression tests for XML mapping of domains
Pavel Stěhule
2013-03-13 22:42:57 -04:00
Tom Lane
a0c6dfeecf Allow default expressions to be attached to columns of foreign tables.
There's still some discussion about exactly how postgres_fdw ought to
handle this case, but there seems no debate that we want to allow defaults
to be used for inserts into foreign tables.  So remove the core-code
restrictions that prevented it.
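
A sketch, assuming a foreign server "film_server" has already been defined for some FDW:

    CREATE FOREIGN TABLE films (
        id    integer,
        title text,
        added timestamptz DEFAULT now()
    ) SERVER film_server;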

While at it, get rid of the special grammar productions for CREATE FOREIGN
TABLE, and instead add explicit FEATURE_NOT_SUPPORTED error checks for the
disallowed cases.  This makes the grammar a shade smaller, and more
importantly results in much more intelligible error messages for
unsupported cases.  It's also one less thing to fix if we ever start
supporting constraints on foreign tables.
2013-03-12 17:37:07 -04:00
Andrew Dunstan
38fb4d978c JSON generation improvements.
This adds the following:

    json_agg(anyrecord) -> json
    to_json(any) -> json
    hstore_to_json(hstore) -> json (also used as a cast)
    hstore_to_json_loose(hstore) -> json

The last provides heuristic treatment of numbers and booleans.

Also, in json generation, if any non-builtin type has a cast to json,
that function is used instead of the type's output function.
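
Usage sketches:

    SELECT to_json('Hello, "world"'::text);
    -- "Hello, \"world\""
    SELECT json_agg(t) FROM (VALUES (1, 'a'), (2, 'b')) AS t(id, val);
    -- roughly [{"id":1,"val":"a"},{"id":2,"val":"b"}]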

Andrew Dunstan, reviewed by Steve Singer.

Catalog version bumped.
2013-03-10 17:35:36 -04:00
Heikki Linnakangas
23f10b6473 SP-GiST support of the range adjacent operator -|-
Alexander Korotkov, reviewed by Jeff Davis.
2013-03-08 15:03:19 +02:00
Tom Lane
a7b61d4f5a Fix infinite-loop risk in fixempties() stage of regex compilation.
The previous coding of this function could get into situations where it
would never terminate, because successive passes would re-add EMPTY arcs
that had been removed by the previous pass.  Rewrite the function
completely using a new algorithm that is guaranteed to terminate, and
also seems to be usually faster than the old one.  Per Tcl bugs 3604074
and 3606683.

Tom Lane and Don Porter
2013-03-07 11:51:03 -05:00
Kevin Grittner
3bf3ab8c56 Add materialized view relations.
A materialized view has a rule just like a view and a heap and
other physical properties like a table.  The rule is only used to
populate the table, references in queries refer to the
materialized data.

This is a minimal implementation, but should still be useful in
many cases.  Currently data is only populated "on demand" by the
CREATE MATERIALIZED VIEW and REFRESH MATERIALIZED VIEW statements.
It is expected that future releases will add incremental updates
with various timings, and that a more refined concept of defining
what is "fresh" data will be developed.  At some point it may even
be possible to have queries use a materialized view in place of
references to underlying tables, but that requires the other
above-mentioned features to be working first.
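
A minimal usage sketch (the table and column names are hypothetical):

    CREATE MATERIALIZED VIEW order_totals AS
        SELECT customer_id, sum(amount) AS total
        FROM orders
        GROUP BY customer_id;

    REFRESH MATERIALIZED VIEW order_totals;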

Much of the documentation work by Robert Haas.
Review by Noah Misch, Thom Brown, Robert Haas, Marko Tiikkaja
Security review by KaiGai Kohei, with a decision on how best to
implement sepgsql still pending.
2013-03-03 18:23:31 -06:00
Tom Lane
b15a6da292 Get rid of any toast table when converting a table to a view.
Also make sure other fields of the view's pg_class entry are appropriate
for a view; it shouldn't have relfrozenxid set for instance.

This ancient omission isn't believed to have any serious consequences in
versions 8.4-9.2, so no backpatch.  But let's fix it before it does bite
us in some serious way.  It's just luck that the case doesn't cause
problems for autovacuum.  (It did cause problems in 8.3, but that's out
of support.)

Andres Freund
2013-03-03 19:05:47 -05:00
Tom Lane
71627f3d19 Fix CVE-2013-0255 properly.
Revert commit ab0f7b6089 (in HEAD only)
in favor of the proper solution, which is to declare enum_recv() correctly
in the system catalogs.  It should be declared to take type "internal"
not "cstring".

Also improve the type_sanity regression test, which should have caught
this typo, so that it actually would.  Most of the relevant checks on
the signature of type I/O functions should not have been restricted to
basetypes/pseudotypes, as they should apply to any type's I/O functions.
2013-02-13 16:20:01 -05:00
Tom Lane
c61e26ee3e Add support for ALTER RULE ... RENAME TO.
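
A sketch of the new syntax (the rule and table names are hypothetical):

    ALTER RULE notify_me ON mytable RENAME TO notify_others;
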
Ali Dar, reviewed by Dean Rasheed.
2013-02-08 23:58:40 -05:00
Tom Lane
3c29b196b0 Fix gist_box_same and gist_point_consistent to handle fuzziness correctly.
While there's considerable doubt that we want fuzzy behavior in the
geometric operators at all (let alone as currently implemented), nobody is
stepping forward to redesign that stuff.  In the meantime it behooves us
to make sure that index searches agree with the behavior of the underlying
operators.  This patch fixes two problems in this area.

First, gist_box_same was using fuzzy equality, but it really needs to use
exact equality to prevent not-quite-identical upper index keys from being
treated as identical, which for example would prevent an existing upper
key from being extended by an amount less than epsilon.  This would result
in inconsistent indexes.  (The next release notes will need to recommend
that users reindex GiST indexes on boxes, polygons, circles, and points,
since all four opclasses use gist_box_same.)

Second, gist_point_consistent used exact comparisons for upper-page
comparisons in ~= searches, when it needs to use fuzzy comparisons to
ensure it finds all matches; and it used fuzzy comparisons for point <@ box
searches, when it needs to use exact comparisons because that's what the
<@ operator (rather inconsistently) does.

The added regression test cases illustrate all three misbehaviors.

Back-patch to all active branches.  (8.4 did not have GiST point_ops,
but it still seems prudent to apply the gist_box_same patch to it.)

Alexander Korotkov, reviewed by Noah Misch
2013-02-08 18:03:17 -05:00
Tom Lane
62e666400d Perform line wrapping and indenting by default in ruleutils.c.
This patch changes pg_get_viewdef() and allied functions so that
PRETTY_INDENT processing is always enabled.  Per discussion, only the
PRETTY_PAREN processing (that is, stripping of "unnecessary" parentheses)
poses any real forward-compatibility risk, so we may as well make dump
output look as nice as we safely can.

Also, set the default wrap length to zero (i.e., wrap after each SELECT
or FROM list item), since there's no very principled argument for the
former default of 80-column wrapping, and most people seem to agree this
way looks better.

Marko Tiikkaja, reviewed by Jeevan Chalke, further hacking by Tom Lane
2013-02-03 15:56:45 -05:00
Tom Lane
d2d153fdb0 Create a psql command \gset to store query results into psql variables.
This eases manipulation of query results in psql scripts.
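
A minimal psql sketch:

    SELECT count(*) AS nclass FROM pg_class \gset
    \echo number of pg_class rows: :nclass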

Pavel Stehule, reviewed by Piyush Newe, Shigeru Hanada, and Tom Lane
2013-02-02 17:06:38 -05:00
Bruce Momjian
e8ae019661 Adjust COPY FREEZE error message to be more accurate and consistent.
Per suggestions from Noah and Tom.
2013-02-02 12:56:52 -05:00
Peter Eisentraut
5839052693 Add CREATE RECURSIVE VIEW syntax
This is specified in the SQL standard.  The CREATE RECURSIVE VIEW
specification is transformed into a normal CREATE VIEW statement with a
WITH RECURSIVE clause.
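
A minimal sketch of the new syntax:

    CREATE RECURSIVE VIEW nums (n) AS
        VALUES (1)
        UNION ALL
        SELECT n + 1 FROM nums WHERE n < 5;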

reviewed by Abhijit Menon-Sen and Stephen Frost
2013-01-31 22:31:58 -05:00
Tom Lane
0900ac2d0d Fix plpgsql's reporting of plan-time errors in possibly-simple expressions.
exec_simple_check_plan and exec_eval_simple_expr attempted to call
GetCachedPlan directly.  This meant that if an error was thrown during
planning, the resulting context traceback would not include the line
normally contributed by _SPI_error_callback.  This is already inconsistent,
but just to be really odd, a re-execution of the very same expression
*would* show the additional context line, because we'd already have cached
the plan and marked the expression as non-simple.

The problem is easy to demonstrate in 9.2 and HEAD because planning of a
cached plan doesn't occur at all until GetCachedPlan is done.  In earlier
versions, it could only be an issue if initial planning had succeeded, then
a replan was forced (already somewhat improbable for a simple expression),
and the replan attempt failed.  Since the issue is mainly cosmetic in older
branches anyway, it doesn't seem worth the risk of trying to fix it there.
It is worth fixing in 9.2 since the instability of the context printout can
affect the results of GET STACKED DIAGNOSTICS, as per a recent discussion
on pgsql-novice.

To fix, introduce a SPI function that wraps GetCachedPlan while installing
the correct callback function.  Use this instead of calling GetCachedPlan
directly from plpgsql.

Also introduce a wrapper function for extracting a SPI plan's
CachedPlanSource list.  This lets us stop including spi_priv.h in
pl_exec.c, which was never a very good idea from a modularity standpoint.

In passing, fix a similar inconsistency that could occur in SPI_cursor_open,
which was also calling GetCachedPlan without setting up a context callback.
2013-01-30 20:02:23 -05:00
Tom Lane
670a6c7a22 Fix grammar for subscripting or field selection from a sub-SELECT result.
Such cases should work, but the grammar failed to accept them because of
our ancient precedence hacks to convince bison that extra parentheses
around a sub-SELECT in an expression are unambiguous.  (Formally, they
*are* ambiguous, but we don't especially care whether they're treated as
part of the sub-SELECT or part of the expression.  Bison cares, though.)
Fix by adding a redundant-looking production for this case.

This is a fine example of why fixing shift/reduce conflicts via
precedence declarations is more dangerous than it looks: you can easily
cause the parser to reject cases that should work.

This has been wrong since commit 3db4056e22
or maybe before, and apparently some people have been working around it
by inserting no-op casts.  That method introduces a dump/reload hazard,
as illustrated in bug #7838 from Jan Mate.  Hence, back-patch to all
active branches.
2013-01-30 14:17:48 -05:00
Tom Lane
2378d79ab2 Make LATERAL implicit for functions in FROM.
The SQL standard does not have general functions-in-FROM, but it does
allow UNNEST() there (see the <collection derived table> production),
and the semantics of that are defined to include lateral references.
So spec compliance requires allowing lateral references within UNNEST()
even without an explicit LATERAL keyword.  Rather than making UNNEST()
a special case, it seems best to extend this flexibility to any
function-in-FROM.  We'll still allow LATERAL to be written explicitly
for clarity's sake, but it's now a noise word in this context.
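
A sketch, assuming a table "t" with an array column "tags" (both hypothetical):

    SELECT t.id, tag
    FROM t, unnest(t.tags) AS tag;    -- no explicit LATERAL needed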

In theory this change could result in a change in behavior of existing
queries, by allowing what had been an outer reference in a function-in-FROM
to be captured by an earlier FROM-item at the same level.  However, all
pre-9.3 PG releases have a bug that causes them to match variable
references to earlier FROM-items in preference to outer references (and
then throw an error).  So no previously-working query could contain the
type of ambiguity that would risk a change of behavior.

Per a suggestion from Andrew Gierth, though I didn't use his patch.
2013-01-26 16:18:42 -05:00
Bruce Momjian
4deb57de7d Issue ERROR if FREEZE mode can't be honored by COPY
Previously, a FREEZE mode that could not be honored was silently ignored.  This also issues an
appropriate error message based on the cause of the failure, per
suggestion from Tom.  Additional regression test case added.
2013-01-26 13:33:24 -05:00
Tom Lane
0d5fbdc157 Change plan caching to honor, not resist, changes in search_path.
In the initial implementation of plan caching, we saved the active
search_path when a plan was first cached, then reinstalled that path
anytime we needed to reparse or replan.  The idea of that was to try to
reselect the same referenced objects, in somewhat the same way that views
continue to refer to the same objects in the face of schema or name
changes.  Of course, that analogy doesn't bear close inspection, since
holding the search_path fixed doesn't cope with object drops or renames.
Moreover sticking with the old path seems to create more surprises than
it avoids.  So instead of doing that, consider that the cached plan depends
on search_path, and force reparse/replan if the active search_path is
different than it was when we last saved the plan.

This gets us fairly close to having "transparency" of plan caching, in the
sense that the cached statement acts the same as if you'd just resubmitted
the original query text for another execution.  There are still some corner
cases where this fails though: a new object added in the search path
schema(s) might capture a reference in the query text, but we'd not realize
that and force a reparse.  We might try to fix that in the future, but for
the moment it looks too expensive and complicated.
2013-01-25 14:14:41 -05:00
Tom Lane
760f3c043a Fix concat() and format() to handle VARIADIC-labeled arguments correctly.
Previously, the VARIADIC labeling was effectively ignored, but now these
functions act as though the array elements had all been given as separate
arguments.
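
For example:

    SELECT concat(VARIADIC ARRAY['a', 'b', 'c']);          -- abc
    SELECT format('%s and %s', VARIADIC ARRAY['x', 'y']);  -- x and y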

Pavel Stehule
2013-01-25 00:19:56 -05:00
Robert Haas
601e2935e2 Update comments and output for event_trigger regression test. 2013-01-23 06:49:30 -05:00
Robert Haas
841a5150c5 Add ddl_command_end support for event triggers.
Dimitri Fontaine, with slight changes by me
2013-01-21 18:00:24 -05:00
Alvaro Herrera
765cbfdc92 Refactor ALTER some-obj RENAME implementation
Remove duplicate implementations of catalog munging and miscellaneous
privilege checks.  Instead rely on already existing data in
objectaddress.c to do the work.

Author: KaiGai Kohei, changes by me
Reviewed by: Robert Haas, Álvaro Herrera, Dimitri Fontaine
2013-01-21 12:06:41 -03:00
Alvaro Herrera
7ac5760fa2 Rework order of checks in ALTER / SET SCHEMA
When attempting to move an object into the schema in which it already
was, for most objects classes we were correctly complaining about
exactly that ("object is already in schema"); but for some other object
classes, such as functions, we were instead complaining of a name
collision ("object already exists in schema").  The latter is wrong and
misleading, per complaint from Robert Haas in
CA+TgmoZ0+gNf7RDKRc3u5rHXffP=QjqPZKGxb4BsPz65k7qnHQ@mail.gmail.com

To fix, refactor the way these checks are done.  As a bonus, the
resulting code is smaller and can also share some code with Rename
cases.

While at it, remove use of getObjectDescriptionOids() in error messages.
These are normally disallowed because of translatability considerations,
but this one had slipped through since 9.1.  (Not sure that this is
worth backpatching, though, as it would create some untranslated
messages in back branches.)

This is loosely based on a patch by KaiGai Kohei, heavily reworked by
me.
2013-01-15 13:23:43 -03:00
Tom Lane
2ffa740be9 Fix ruleutils to cope with conflicts from adding/dropping/renaming columns.
In commit 11e131854f, we improved the
rule/view dumping code so that it would produce valid query representations
even if some of the tables involved in a query had been renamed since the
query was parsed.  This patch extends that idea to fix problems that occur
when individual columns are renamed, or added or dropped.  As before, the
core of the fix is to assign unique new aliases when a name conflict has
been created.  This is complicated by the JOIN USING feature, which
requires the same column alias to be used in both input relations, but we
can handle that with a sufficiently complex approach to assigning aliases.

A fortiori, this patch takes care of situations where the query didn't have
unique column names to begin with, such as in a recent complaint from Bryan
Nuse.  (Because of expansion of "SELECT *", re-parsing a dumped query can
require column name uniqueness even though the original text did not.)
2012-12-31 15:13:26 -05:00
Peter Eisentraut
740ee42da5 Make some messages more consistent in style 2012-12-21 00:10:46 -05:00
Tom Lane
a99c42f291 Support automatically-updatable views.
This patch makes "simple" views automatically updatable, without the need
to create either INSTEAD OF triggers or INSTEAD rules.  "Simple" views
are those classified as updatable according to SQL-92 rules.  The rewriter
transforms INSERT/UPDATE/DELETE commands on such views directly into an
equivalent command on the underlying table, which will generally have
noticeably better performance than is possible with either triggers or
user-written rules.  A view that has INSTEAD OF triggers or INSTEAD rules
continues to operate the same as before.
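
A minimal sketch (the table and view names are hypothetical):

    CREATE TABLE items (id serial PRIMARY KEY, name text, price numeric);
    CREATE VIEW cheap_items AS
        SELECT id, name, price FROM items WHERE price < 10;

    INSERT INTO cheap_items (name, price) VALUES ('widget', 5);
    UPDATE cheap_items SET price = 7 WHERE name = 'widget';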

For the moment, security_barrier views are not considered simple.
Also, we do not support WITH CHECK OPTION.  These features may be
added in future.

Dean Rasheed, reviewed by Amit Kapila
2012-12-08 18:26:21 -05:00
Simon Riggs
ef754fb51b Correct xmax test for COPY FREEZE 2012-12-07 14:18:47 +00:00
Simon Riggs
1f023f9297 Optimize COPY FREEZE with CREATE TABLE also.
Jeff Davis, additional test by me
2012-12-07 13:26:52 +00:00
Tom Lane
31a891857a Improve pl/pgsql to support composite-type expressions in RETURN.
For some reason lost in the mists of prehistory, RETURN was only coded to
allow a simple reference to a composite variable when the function's return
type is composite.  Allow an expression instead, while preserving the
efficiency of the original code path in the case where the expression is
indeed just a composite variable's name.  Likewise for RETURN NEXT.

As is true in various other places, the supplied expression must yield
exactly the number and data types of the required columns.  There was some
discussion of relaxing that for pl/pgsql, but no consensus yet, so this
patch doesn't address that.

Asif Rehman, reviewed by Pavel Stehule
2012-12-06 23:09:52 -05:00
Simon Riggs
5457a130d3 Reduce scope of changes for COPY FREEZE.
Allow tuples to be frozen only by explicit command.  The previous
coding mistakenly extended slightly beyond what was agreed as correct
on -hackers, so this is essentially a partial revert of earlier work,
leaving just the COPY FREEZE command.
2012-12-02 20:52:52 +00:00
Tom Lane
7b90469b71 Allow adding values to an enum type created in the current transaction.
Normally it is unsafe to allow ALTER TYPE ADD VALUE in a transaction block,
because instances of the value could be added to indexes later in the same
transaction, and then they would still be accessible even if the
transaction rolls back.  However, we can allow this if the enum type itself
was created in the current transaction, because then any such indexes would
have to go away entirely on rollback.

The reason for allowing this is to support pg_upgrade's new usage of
pg_restore --single-transaction: in --binary-upgrade mode, pg_dump emits
enum types as a succession of ALTER TYPE ADD VALUE commands so that it can
preserve the values' OIDs.  The support is a bit limited, so we'll leave
it undocumented.
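
A sketch of the now-permitted sequence:

    BEGIN;
    CREATE TYPE mood AS ENUM ('sad', 'ok');
    ALTER TYPE mood ADD VALUE 'happy';   -- allowed: mood was created in this transaction
    COMMIT;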

Andres Freund
2012-12-01 14:27:30 -05:00
Simon Riggs
02aea36414 Second tweak of COPY FREEZE 2012-12-01 14:55:35 +00:00
Simon Riggs
ddf509eb4a Tweak tests in COPY FREEZE 2012-12-01 13:46:41 +00:00
Simon Riggs
8de72b66a2 COPY FREEZE and mark committed on fresh tables.
When a relfilenode is created in this subtransaction or a committed
child transaction and it cannot otherwise be seen by our own process,
mark tuples committed ahead of transaction commit for all COPY
commands in the same transaction.  If FREEZE is specified on COPY and
the pre-conditions are met, rows will also be frozen.  Both options
are designed to avoid revisiting rows after commit, increasing the
performance of subsequent commands after data load and upgrade.
pg_restore changes will follow later.
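
A usage sketch (the table name and file path are hypothetical):

    BEGIN;
    CREATE TABLE measurements (ts timestamptz, value float8);
    COPY measurements FROM '/tmp/measurements.csv' WITH (FORMAT csv, FREEZE);
    COMMIT;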

Simon Riggs, review comments from Heikki Linnakangas, Noah Misch and design
input from Tom Lane, Robert Haas and Kevin Grittner
2012-12-01 12:54:20 +00:00
Tom Lane
d3237e04ca Fix SELECT DISTINCT with index-optimized MIN/MAX on inheritance trees.
In a query such as "SELECT DISTINCT min(x) FROM tab", the DISTINCT is
pretty useless (there being only one output row), but nonetheless it
shouldn't fail.  But it could fail if "tab" is an inheritance parent,
because planagg.c's code for fixing up equivalence classes after making the
index-optimized MIN/MAX transformation wasn't prepared to find child-table
versions of the aggregate expression.  The least ugly fix seems to be
to add an option to mutate_eclass_expressions() to skip child-table
equivalence class members, which aren't used anymore at this stage of
planning so it's not really necessary to fix them.  Since child members
are ignored in many cases already, it seems plausible for
mutate_eclass_expressions() to have an option to ignore them too.

Per bug #7703 from Maxim Boguk.

Back-patch to 9.1.  Although the same code exists before that, it cannot
encounter child-table aggregates AFAICS, because the index optimization
transformation cannot succeed on inheritance trees before 9.1 (for lack
of MergeAppend).
2012-11-26 12:57:58 -05:00
Tom Lane
1f7cb5c309 Improve handling of INT_MIN / -1 and related cases.
Some platforms throw an exception for this division, rather than returning
a necessarily-overflowed result.  Since we were testing for overflow after
the fact, an exception isn't nice.  We can avoid the problem by treating
division by -1 as negation.

Add some regression tests so that we'll find out if any compilers try to
optimize away the overflow check conditions.

This ought to be back-patched, but I'm going to see what the buildfarm
reports about the regression tests first.

Per discussion with Xi Wang, though this is different from the patch he
submitted.
2012-11-19 12:24:25 -05:00
Tom Lane
5ed6546cf7 Fix handling of inherited check constraints in ALTER COLUMN TYPE.
This case got broken in 8.4 by the addition of an error check that
complains if ALTER TABLE ONLY is used on a table that has children.
We do use ONLY for this situation, but it's okay because the necessary
recursion occurs at a higher level.  So we need to have a separate
flag to suppress recursion without making the error check.

Reported and patched by Pavan Deolasee, with some editorial adjustments by
me.  Back-patch to 8.4, since this is a regression of functionality that
worked in earlier branches.
2012-11-05 13:36:16 -05:00
Kevin Grittner
6868ed7491 Throw error if expiring tuple is again updated or deleted.
This prevents surprising behavior when a FOR EACH ROW trigger
BEFORE UPDATE or BEFORE DELETE directly or indirectly updates or
deletes the old row.  Prior to this patch the requested action
on the row could be silently ignored while all triggered actions
based on the occurrence of the requested action could be committed.
One example of how this could happen is if the BEFORE DELETE
trigger for a "parent" row deleted "children" which had trigger
functions to update summary or status data on the parent.

This also prevents similar surprising problems if the query has a
volatile function which updates a target row while it is already
being updated.

There are related issues present in FOR UPDATE cursors and READ
COMMITTED queries which are not handled by this patch.  These
issues need further evalution to determine what change, if any, is
needed.

Where the new error messages are generated, in most cases the best
fix will be to move code from the BEFORE trigger to an AFTER
trigger.  Where this is not feasible, the trigger can avoid the
error by re-issuing the triggering statement and returning NULL.

Documentation changes will be submitted in a separate patch.

Kevin Grittner and Tom Lane with input from Florian Pflug and
Robert Haas, based on problems encountered during conversion of
Wisconsin Circuit Court trigger logic to plpgsql triggers.
2012-10-26 14:55:36 -05:00
Tom Lane
a4e8680a6c When converting a table to a view, remove its system columns.
Views should not have any pg_attribute entries for system columns.
However, we forgot to remove such entries when converting a table to a
view.  This could lead to crashes later on, if someone attempted to
reference such a column, as reported by Kohei KaiGai.

Patch in HEAD only.  This bug has been there forever, but in the back
branches we will have to defend against existing mis-converted views,
so it doesn't seem worthwhile to change the conversion code too.
2012-10-24 13:39:37 -04:00
Tom Lane
72a4231f0c Fix planning of non-strict equivalence clauses above outer joins.
If a potential equivalence clause references a variable from the nullable
side of an outer join, the planner needs to take care that derived clauses
are not pushed to below the outer join; else they may use the wrong value
for the variable.  (The problem arises only with non-strict clauses, since
if an upper clause can be proven strict then the outer join will get
simplified to a plain join.)  The planner attempted to prevent this type
of error by checking that potential equivalence clauses aren't
outerjoin-delayed as a whole, but actually we have to check each side
separately, since the two sides of the clause will get moved around
separately if it's treated as an equivalence.  Bugs of this type can be
demonstrated as far back as 7.4, even though releases before 8.3 had only
a very ad-hoc notion of equivalence clauses.

In addition, we neglected to account for the possibility that such clauses
might have nonempty nullable_relids even when not outerjoin-delayed; so the
equivalence-class machinery lacked logic to compute correct nullable_relids
values for clauses it constructs.  This oversight was harmless before 9.2
because we were only using RestrictInfo.nullable_relids for OR clauses;
but as of 9.2 it could result in pushing constructed equivalence clauses
to incorrect places.  (This accounts for bug #7604 from Bill MacArthur.)

Fix the first problem by adding a new test check_equivalence_delay() in
distribute_qual_to_rels, and fix the second one by adding code in
equivclass.c and called functions to set correct nullable_relids for
generated clauses.  Although I believe the second part of this is not
currently necessary before 9.2, I chose to back-patch it anyway, partly to
keep the logic similar across branches and partly because it seems possible
we might find other reasons why we need valid values of nullable_relids in
the older branches.

Add regression tests illustrating these problems.  In 9.0 and up, also
add test cases checking that we can push constants through outer joins,
since we've broken that optimization before and I nearly broke it again
with an overly simplistic patch for this problem.
2012-10-18 12:30:10 -04:00
Tom Lane
8b728e5c6e Fix oversight in new code for printing rangetable aliases.
In commit 11e131854f, I missed the case of
a CTE RTE that doesn't have a user-defined alias, but does have an
alias assigned by set_rtable_names().  Per report from Peter Eisentraut.

While at it, refactor slightly to reduce code duplication.
2012-10-12 16:14:43 -04:00
Tom Lane
4816d2ea32 Fix cross-type case in partial row matching for hashed subplans.
When hashing a subplan like "WHERE (a, b) NOT IN (SELECT x, y FROM ...)",
findPartialMatch() attempted to match rows using the hashtable's internal
equality operators, which of course are for x and y's datatypes.  What we
need to use are the potentially cross-type operators for a=x, b=y, etc.
Failure to do that leads to wrong answers or even crashes.  The scope for
problems is limited to cases where we have different types with compatible
hash functions (else we'd not be using a hashed subplan), but for example
int4 vs int8 can cause the problem.

Per bug #7597 from Bo Jensen.  This has been wrong since the hashed-subplan
code was written, so patch all the way back.
2012-10-11 12:22:13 -04:00
Tom Lane
707263542e Fix parse location tracking for lists that can be empty.
The previous coding of the YYLLOC_DEFAULT macro behaved strangely for empty
productions, assigning the previous nonterminal's location as the parse
location of the result.  The usefulness of that was (at best) debatable
already, but the real problem is that in list-generating nonterminals like
	OptFooList: /* EMPTY */ { ... } | OptFooList Foo { ... } ;
the initially-identified location would get copied up, so that even a
nonempty list would be given a bogus parse location.  Document how to work
around that, and do so for OptSchemaEltList, so that the error condition
just added for CREATE SCHEMA IF NOT EXISTS produces a sane error cursor.
So far as I can tell, there are currently no other cases where the
situation arises, so we don't need other instances of this coding yet.
2012-10-04 17:15:29 -04:00
Tom Lane
fb34e94d21 Support CREATE SCHEMA IF NOT EXISTS.
Per discussion, schema-element subcommands are not allowed together with
this option, since it's not very obvious what should happen to the element
objects.
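
For example:

    CREATE SCHEMA IF NOT EXISTS reporting;
    -- schema-element subcommands are not accepted together with IF NOT EXISTS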

Fabrízio de Royes Mello
2012-10-03 19:47:11 -04:00
Alvaro Herrera
ece26987c2 Remove collations from generic ALTER test
The error messages they generate are not portable enough.

Also, since the only point of the alter_generic_1 expected file was to
cover platforms with no collation support, it's now useless, so remove
it.
2012-10-01 10:57:58 -03:00
Alvaro Herrera
811ca1300b Add alternative expected output for alter_generic
The original (and only) expected file failed to consider machines
without non-default collation support.  Per buildfarm.

Also, move the test to another parallel group; the one it was originally
put in is already full according to comments in the schedule file.  Per
note from Tom Lane.
2012-09-29 00:37:13 -03:00
Alvaro Herrera
ff7e5b4841 Add alter_generic regression test
This makes refactoring of parts of the ALTER command safe(r) because we
ensure no change in functionality.

Author: KaiGai Kohei
2012-09-28 18:39:12 -03:00
Tom Lane
31510194cc Minor corrections for ALTER TYPE ADD VALUE IF NOT EXISTS patch.
Produce a NOTICE when the label already exists, for consistency with other
CREATE IF NOT EXISTS commands.  Also, fix the code so it produces something
more user-friendly than an index violation when the label already exists.
This not incidentally enables making a regression test that the previous
patch didn't make for fear of exposing an unpredictable OID in the results.
Also some wordsmithing on the documentation.
2012-09-22 18:35:22 -04:00
Andrew Dunstan
6d12b68cd7 Allow IF NOT EXISTS when adding a new enum label.
If the label is already in the enum the statement becomes a no-op.
This will reduce the pain that comes from our not allowing this
operation inside a transaction block.
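
A sketch, assuming an enum type "mood" (hypothetical):

    ALTER TYPE mood ADD VALUE IF NOT EXISTS 'happy';  -- NOTICE and no-op if 'happy' already exists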

Andrew Dunstan, reviewed by Tom Lane and Magnus Hagander.
2012-09-22 12:53:31 -04:00
Tom Lane
11e131854f Improve ruleutils.c's heuristics for dealing with rangetable aliases.
The previous scheme had bugs in some corner cases involving tables that had
been renamed since a view was made.  This could result in dumped views that
failed to reload or reloaded incorrectly, as seen in bug #7553 from Lloyd
Albin, as well as in some pgsql-hackers discussion back in January.  Also,
its behavior for printing EXPLAIN plans was sometimes confusing because of
willingness to use the same alias for multiple RTEs (it was Ashutosh
Bapat's complaint about that aspect that started the January thread).

To fix, ensure that each RTE in the query has a unique unqualified alias,
by modifying the alias if necessary (we add "_" and digits as needed to
create a non-conflicting name).  Then we can just print its variables with
that alias, avoiding the confusing and bug-prone scheme of sometimes
schema-qualifying variable names.  In EXPLAIN, it proves to be expedient to
take the further step of only assigning such aliases to RTEs that are
actually referenced in the query, since the planner has a habit of
generating extra RTEs with the same alias in situations such as
inheritance-tree expansion.

Although this fixes a bug of very long standing, I'm hesitant to back-patch
such a noticeable behavioral change.  My experiments while creating a
regression test convinced me that actually incorrect output (as opposed to
confusing output) occurs only in very narrow cases, which is backed up by
the lack of previous complaints from the field.  So we may be better off
living with it in released branches; and in any case it'd be smart to let
this ripen awhile in HEAD before we consider back-patching it.
2012-09-21 19:03:10 -04:00
Tom Lane
807a40c551 Fix planning of btree index scans using ScalarArrayOpExpr quals.
In commit 9e8da0f757, I improved btree
to handle ScalarArrayOpExpr quals natively, so that constructs like
"indexedcol IN (list)" could be supported by index-only scans.  Using
such a qual results in multiple scans of the index, under the hood.
I went to some lengths to ensure that this still produces rows in index
order ... but I failed to recognize that if a higher-order index column
is lacking an equality constraint, rescans can produce out-of-order
data from that column.  Tweak the planner to not expect sorted output
in that case.  Per trouble report from Robert McGehee.
2012-09-18 12:20:34 -04:00
Tom Lane
3b8968f252 Rethink heuristics for choosing index quals for parameterized paths.
Some experimentation with examples similar to bug #7539 has convinced me
that indxpath.c's original implementation of parameterized-path generation
was several bricks shy of a load.  In general, if we are relying on a
particular outer rel or set of outer rels for a parameterized path, the
path should use every indexable join clause that's available from that rel
or rels.  Any join clauses that get left out of the indexqual will end up
getting applied as plain filter quals (qpquals), and that's generally a
significant loser compared to having the index AM enforce them.  (This is
particularly true with btree, which can skip the index scan entirely if
it can see that the given indexquals are mutually contradictory.)  The
original heuristics failed to ensure this, though, and were overly
complicated anyway.  Rewrite to make the code explicitly identify each
useful set of outer rels and then select all applicable join clauses for
each one.  The one plan that changes in the regression tests is in fact
for the better according to the planner's cost estimates.

(Note: this is not a correctness issue but just a matter of plan quality.
I don't yet know what is going on in bug #7539, but I don't expect this
change to fix that.)
2012-09-16 17:58:09 -04:00
Tom Lane
a20993608a Fix case of window function + aggregate + GROUP BY expression.
In commit 1bc16a9460 I added a minor
optimization to drop the component variables of a GROUP BY expression from
the target list computed at the aggregation level of a query, if those Vars
weren't referenced elsewhere in the tlist.  However, I overlooked that the
window-function planning code would deconstruct such expressions and thus
need to have access to their component variables.  Fix it to not do that.

While at it, I removed the distinction between volatile and nonvolatile
window partition/order expressions: the code now computes all of them
at the aggregation level.  This saves a relatively expensive check for
volatility, and it's unclear that the resulting plan isn't better anyway.

Per bug #7535 from Louis-David Mitterrand.  Back-patch to 9.2.
2012-09-13 11:32:25 -04:00
Tom Lane
46c508fbcf Fix PARAM_EXEC assignment mechanism to be safe in the presence of WITH.
The planner previously assumed that parameter Vars having the same absolute
query level, varno, and varattno could safely be assigned the same runtime
PARAM_EXEC slot, even though they might be different Vars appearing in
different subqueries.  This was (probably) safe before the introduction of
CTEs, but the lazy-evaluation mechanism used for CTEs means that a CTE can
be executed during execution of some other subquery, causing the lifespan
of Params at the same syntactic nesting level as the CTE to overlap with
use of the same slots inside the CTE.  In 9.1 we created additional hazards
by using the same parameter-assignment technology for nestloop inner scan
parameters, but it was broken before that, as illustrated by the added
regression test.

To fix, restructure the planner's management of PlannerParamItems so that
items having different semantic lifespans are kept rigorously separated.
This will probably result in complex queries using more runtime PARAM_EXEC
slots than before, but the slots are cheap enough that this hardly matters.
Also, stop generating PlannerParamItems containing Params for subquery
outputs: all we really need to do is reserve the PARAM_EXEC slot number,
and that now only takes incrementing a counter.  The planning code is
simpler and probably faster than before, as well as being more correct.

Per report from Vik Reykja.

These changes will mostly also need to be made in the back branches, but
I'm going to hold off on that until after 9.2.0 wraps.
2012-09-05 12:55:01 -04:00
Tom Lane
6d2c8c0e2a Drop cheap-startup-cost paths during add_path() if we don't need them.
We can detect whether the planner top level is going to care at all about
cheap startup cost (it will only do so if query_planner's tuple_fraction
argument is greater than zero).  If it isn't, we might as well discard
paths immediately whose only advantage over others is cheap startup cost.
This turns out to get rid of quite a lot of paths in complex queries ---
I saw planner runtime reduction of more than a third on one large query.

Since add_path isn't currently passed the PlannerInfo "root", the easiest
way to tell it whether to do this was to add a bool flag to RelOptInfo.
That's a bit redundant, since all relations in a given query level will
have the same setting.  But in the future it's possible that we'd refine
the control decision to work on a per-relation basis, so this seems like
a good arrangement anyway.

Per my suggestion of a few months ago.
2012-09-01 18:16:24 -04:00
Tom Lane
4da6439bd8 Fix mark_placeholder_maybe_needed to handle LATERAL references.
If a PlaceHolderVar contains a pulled-up LATERAL reference, its minimum
possible evaluation level might be higher in the join tree than its
original syntactic location.  That in turn affects the ph_needed level for
any contained PlaceHolderVars (that is, those PHVs had better propagate up
the join tree at least to the evaluation level of the outer PHV).  We got
this mostly right, but mark_placeholder_maybe_needed() failed to account
for the effect, and in consequence could leave the inner PHVs with
ph_may_need less than what their ultimate ph_needed value will be.  That's
bad because it could lead to failure to select a join order that will allow
evaluation of the inner PHV at a valid location.  Fix that, and add an
Assert that checks that we don't ever set ph_needed to more than
ph_may_need.
2012-09-01 13:56:46 -04:00
Tom Lane
da3df99870 Fix LATERAL references to join alias variables.
I had thought this case worked already, but perhaps I didn't re-test it
after adding extract_lateral_references() ...
2012-08-31 17:44:31 -04:00
Tom Lane
d1a4db8d25 Improve EXPLAIN's ability to cope with LATERAL references in plans.
push_child_plan/pop_child_plan didn't bother to adjust the "ancestors"
list of parent plan nodes when descending to a child plan node.  I think
this was okay when it was written, but it's not okay in the presence of
LATERAL references, since a subplan node could easily be returning a
LATERAL value back up to the same nestloop node that provides the value.
Per changed regression test results, the omission led to failure to
interpret Param nodes that have perfectly good interpretations.
2012-08-30 12:56:50 -04:00
Tom Lane
e83bb10d6d Adjust definition of cheapest_total_path to work better with LATERAL.
In the initial cut at LATERAL, I kept the rule that cheapest_total_path
was always unparameterized, which meant it had to be NULL if the relation
has no unparameterized paths.  It turns out to work much more nicely if
we always have *some* path nominated as cheapest-total for each relation.
In particular, let's still say it's the cheapest unparameterized path if
there is one; if not, take the cheapest-total-cost path among those of
the minimum available parameterization.  (The first rule is actually
a special case of the second.)

This allows reversion of some temporary lobotomizations I'd put in place.
In particular, the planner can now consider hash and merge joins for
joins below a parameter-supplying nestloop, even if there aren't any
unparameterized paths available.  This should bring planning of
LATERAL-containing queries to the same level as queries not using that
feature.

Along the way, simplify management of parameterized paths in add_path()
and friends.  In the original coding for parameterized paths in 9.2,
I tried to minimize the logic changes in add_path(), so it just treated
parameterization as yet another dimension of comparison for paths.
We later made it ignore pathkeys (sort ordering) of parameterized paths,
on the grounds that ordering isn't a useful property for the path on the
inside of a nestloop, so we might as well get rid of useless parameterized
paths as quickly as possible.  But we didn't take that reasoning as far as
we should have.  Startup cost isn't a useful property inside a nestloop
either, so add_path() ought to discount startup cost of parameterized paths
as well.  Having done that, the secondary sorting I'd implemented (in
add_parameterized_path) is no longer needed --- any parameterized path that
survives add_path() at all is worth considering at higher levels.  So this
should be a bit faster as well as simpler.
2012-08-29 22:06:07 -04:00
Tom Lane
e323c55301 Fix DROP INDEX CONCURRENTLY IF EXISTS.
This threw ERROR, not the expected NOTICE, if the index didn't exist.
The bug was actually visible in not-as-expected regression test output,
so somebody wasn't paying too close attention in commit
8cb53654db.
Per report from Brendan Byrd.
2012-08-27 12:45:43 -04:00
Tom Lane
9ff79b9d4e Fix up planner infrastructure to support LATERAL properly.
This patch takes care of a number of problems having to do with failure
to choose valid join orders and incorrect handling of lateral references
pulled up from subqueries.  Notable changes:

* Add a LateralJoinInfo data structure similar to SpecialJoinInfo, to
represent join ordering constraints created by lateral references.
(I first considered extending the SpecialJoinInfo structure, but the
semantics are different enough that a separate data structure seems
better.)  Extend join_is_legal() and related functions to prevent trying
to form unworkable joins, and to ensure that we will consider joins that
satisfy lateral references even if the joins would be clauseless.

* Fill in the infrastructure needed for the last few types of relation scan
paths to support parameterization.  We'd have wanted this eventually
anyway, but it is necessary now because a relation that gets pulled up out
of a UNION ALL subquery may acquire a reltargetlist containing lateral
references, meaning that its paths *have* to be parameterized whether or
not we have any code that can push join quals down into the scan.

* Compute data about lateral references early in query_planner(), and save
in RelOptInfo nodes, to avoid repetitive calculations later.

* Assorted corner-case bug fixes.

There's probably still some bugs left, but this is a lot closer to being
real than it was before.
2012-08-26 22:50:23 -04:00
Tom Lane
ec8a0135c3 Fix cascading privilege revoke to notice when privileges are still held.
If we revoke a grant option from some role X, but X still holds the option
via another grant, we should not recursively revoke the privilege from
role(s) Y that X had granted it to.  This was supposedly fixed as one
aspect of commit 4b2dafcc0b, but I must not
have tested it, because in fact that code never worked: it forgot to shift
the grant-option bits back over when masking the bits being revoked.

Per bug #6728 from Daniel German.  Back-patch to all active branches,
since this has been wrong since 8.0.
2012-08-23 17:25:10 -04:00
Tom Lane
9b2a237cee Fix typo in comment. 2012-08-19 22:56:17 -04:00
Tom Lane
092d7ded29 Allow OLD and NEW in multi-row VALUES within rules.
Now that we have LATERAL, it's fairly painless to allow this case, which
was left as a TODO in the original multi-row VALUES implementation.
2012-08-19 14:12:16 -04:00
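A hedged sketch of the construct this enables (table and rule names are hypothetical): NEW may now appear inside a multi-row VALUES list in a rule body.

    CREATE TABLE t (id int, val text);
    CREATE TABLE audit (id int, note text);
    CREATE RULE t_audit AS ON INSERT TO t DO ALSO
        INSERT INTO audit VALUES (NEW.id, 'inserted'), (NEW.id, 'copy');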
Tom Lane
084a29c94f Another round of planner fixes for LATERAL.
Formerly, subquery pullup had no need to examine other entries in the range
table, since they could not contain any references to the subquery being
pulled up.  That's no longer true with LATERAL, so now we need to be able
to visit rangetable subexpressions to replace Vars referencing the
pulled-up subquery.  Also, this means that extract_lateral_references must
be unsurprised at encountering lateral PlaceHolderVars, since such might be
created when pulling up a subquery that's underneath an outer join with
respect to the lateral reference.
2012-08-18 14:10:17 -04:00
Tom Lane
f5983923d8 Allow create_index_paths() to consider multiple join bitmapscan paths.
In the initial cut at the "parameterized paths" feature, I'd simplified
create_index_paths() to the point where it would only generate a single
parameterized bitmap path per relation.  Experimentation with an example
supplied by Josh Berkus convinces me that that's not good enough: we really
need to consider a bitmap path for each possible outer relation.  Otherwise
we have regressions relative to pre-9.2 versions, in which the planner
picks a plain indexscan where it should have used a bitmap scan in queries
involving three or more tables.  Indeed, after fixing this, several queries
in the regression tests show improved plans as a result of using bitmap not
plain indexscans.
2012-08-16 13:03:54 -04:00
Heikki Linnakangas
317dd55a9c Add SP-GiST support for range types.
The implementation is a quad-tree, largely copied from the quad-tree
implementation for points. The lower and upper bound of ranges are the 2d
coordinates, with some extra code to handle empty ranges.

I left out the support for adjacent operator, -|-, from the original patch.
Not because there was necessarily anything wrong with it, but it was more
complicated than the other operators, and I only have limited time for
reviewing. That will follow as a separate patch.

Alexander Korotkov, reviewed by Jeff Davis and me.
2012-08-16 14:30:45 +03:00
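For illustration (table, column, and index names are assumptions), an SP-GiST index on a range column that can serve overlap queries such as the one below:

    CREATE TABLE reservations (room int, during tsrange);
    CREATE INDEX reservations_during_spg ON reservations USING spgist (during);
    SELECT room FROM reservations
    WHERE during && tsrange('2012-08-16 10:00', '2012-08-16 11:00');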
Tom Lane
4c5316931f Fix rescan logic in nodeCtescan.
The previous coding essentially assumed that nodes would be rescanned in
the same order they were initialized in; or at least that the "leader" of
a group of CTEscans would be rescanned before any others were required to
execute.  Unfortunately, that isn't even a little bit true.  It's possible
to devise queries in which the leader isn't rescanned until other CTEscans
on the same CTE have run to completion, or even in which the leader never
gets a rescan call at all.

The fix makes the leader specially responsible only for initial creation
and final destruction of the tuplestore; rescan resets are now a
symmetrically shared responsibility.  This means that we might reset the
tuplestore multiple times when restarting a plan subtree containing
multiple CTEscans; but resetting an already-empty tuplestore is cheap
enough that that doesn't seem like a problem.

Per report from Adam Mackler; the new regression test cases are based on
his example query.

Back-patch to 8.4 where CTE scans were introduced.
2012-08-15 19:02:33 -04:00
Tom Lane
17351fce4e Prevent access to external files/URLs via XML entity references.
xml_parse() would attempt to fetch external files or URLs as needed to
resolve DTD and entity references in an XML value, thus allowing
unprivileged database users to attempt to fetch data with the privileges
of the database server.  While the external data wouldn't get returned
directly to the user, portions of it could be exposed in error messages
if the data didn't parse as valid XML; and in any case the mere ability
to check existence of a file might be useful to an attacker.

The ideal solution to this would still allow fetching of references that
are listed in the host system's XML catalogs, so that documents can be
validated according to installed DTDs.  However, doing that with the
available libxml2 APIs appears complex and error-prone, so we're not going
to risk it in a security patch that necessarily hasn't gotten wide review.
So this patch merely shuts off all access, causing any external fetch to
silently expand to an empty string.  A future patch may improve this.

In HEAD and 9.2, also suppress warnings about undefined entities, which
would otherwise occur as a result of not loading referenced DTDs.  Previous
branches don't show such warnings anyway, due to different error handling
arrangements.

Credit to Noah Misch for first reporting the problem, and for much work
towards a solution, though this simplistic approach was not his preference.
Also thanks to Daniel Veillard for consultation.

Security: CVE-2012-3489
2012-08-14 18:31:16 -04:00
Tom Lane
c1774d2c81 More fixes for planner's handling of LATERAL.
Re-allow subquery pullup for LATERAL subqueries, except when the subquery
is below an outer join and contains lateral references to relations outside
that outer join.  If we pull up in such a case, we risk introducing lateral
cross-references into outer joins' ON quals, which is something the code is
entirely unprepared to cope with right now; and I'm not sure it'll ever be
worth coping with.

Support lateral refs in VALUES (this seems to be the only additional path
type that needs such support as a consequence of re-allowing subquery
pullup).

Put in a slightly hacky fix for joinpath.c's refusal to consider
parameterized join paths even when there cannot be any unparameterized
ones.  This was causing "could not devise a query plan for the given query"
failures in queries involving more than two FROM items.

Put in an even more hacky fix for distribute_qual_to_rels() being unhappy
with join quals that contain references to rels outside their syntactic
scope; which is to say, disable that test altogether.  Need to think about
how to preserve some sort of debugging cross-check here, while not
expending more cycles than befits a debugging cross-check.
2012-08-12 16:01:26 -04:00
Tom Lane
e76af54137 Fix some issues with LATERAL(SELECT UNION ALL SELECT).
The LATERAL marking has to be propagated down to the UNION leaf queries
when we pull them up.  Also, fix the formerly stubbed-off
set_append_rel_pathlist().  It does already have enough smarts to cope with
making a parameterized Append path at need; it just has to not assume that
there *must* be an unparameterized path.
2012-08-11 18:42:56 -04:00
Tom Lane
eaccfded98 Centralize the logic for detecting misplaced aggregates, window funcs, etc.
Formerly we relied on checking after-the-fact to see if an expression
contained aggregates, window functions, or sub-selects when it shouldn't.
This is grotty, easily forgotten (indeed, we had forgotten to teach
DefineIndex about rejecting window functions), and none too efficient
since it requires extra traversals of the parse tree.  To improve matters,
define an enum type that classifies all SQL sub-expressions, store it in
ParseState to show what kind of expression we are currently parsing, and
make transformAggregateCall, transformWindowFuncCall, and transformSubLink
check the expression type and throw error if the type indicates the
construct is disallowed.  This allows removal of a large number of ad-hoc
checks scattered around the code base.  The enum type is sufficiently
fine-grained that we can still produce error messages of at least the
same specificity as before.

Bringing these error checks together revealed that we'd been none too
consistent about phrasing of the error messages, so standardize the wording
a bit.

Also, rewrite checking of aggregate arguments so that it requires only one
traversal of the arguments, rather than up to three as before.

In passing, clean up some more comments left over from add_missing_from
support, and annotate some tests that I think are dead code now that that's
gone.  (I didn't risk actually removing said dead code, though.)
2012-08-10 11:36:15 -04:00
Tom Lane
5ebaaa4944 Implement SQL-standard LATERAL subqueries.
This patch implements the standard syntax of LATERAL attached to a
sub-SELECT in FROM, and also allows LATERAL attached to a function in FROM,
since set-returning function calls are expected to be one of the principal
use-cases.

The main change here is a rewrite of the mechanism for keeping track of
which relations are visible for column references while the FROM clause is
being scanned.  The parser "namespace" lists are no longer lists of bare
RTEs, but are lists of ParseNamespaceItem structs, which carry an RTE
pointer as well as some visibility-controlling flags.  Aside from
supporting LATERAL correctly, this lets us get rid of the ancient hacks
that required rechecking subqueries and JOIN/ON and function-in-FROM
expressions for invalid references after they were initially parsed.
Invalid column references are now always correctly detected on sight.

In passing, remove assorted parser error checks that are now dead code by
virtue of our having gotten rid of add_missing_from, as well as some
comments that are obsolete for the same reason.  (It was mainly
add_missing_from that caused so much fudging here in the first place.)

The planner support for this feature is very minimal, and will be improved
in future patches.  It works well enough for testing purposes, though.

catversion bump forced due to new field in RangeTblEntry.
2012-08-07 19:02:54 -04:00
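A brief sketch of the syntax being introduced (the orders/order_lines tables are invented for the example): a LATERAL subquery in FROM may refer to columns of FROM items to its left.

    SELECT o.id, top.amount
    FROM orders o,
         LATERAL (SELECT l.amount
                  FROM order_lines l
                  WHERE l.order_id = o.id
                  ORDER BY l.amount DESC
                  LIMIT 3) AS top;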
Bruce Momjian
ac78c4178b Fix to_char(), to_date(), and to_timestamp() to handle negative/BC
century specifications just like positive/AD centuries.  Previously the
behavior was either wrong or inconsistent with positive/AD handling.

Centuries without years now always assume the first year of the century,
which is now documented.
2012-08-07 13:34:44 -04:00
Tom Lane
3152bf722f Fix bugs with parsing signed hh:mm and hh:mm:ss fields in interval input.
DecodeInterval() failed to honor the "range" parameter (the special SQL
syntax for indicating which fields appear in the literal string) if the
time was signed.  This seems inappropriate, so make it work like the
not-signed case.  The inconsistency was introduced in my commit
f867339c01, which as noted in its log message
was only really focused on making SQL-compliant literals work per spec.
Including a sign here is not per spec, but if we're going to allow it
then it's reasonable to expect it to work like the not-signed case.

Also, remove bogus setting of tmask, which caused subsequent processing to
think that what had been given was a timezone and not an hh:mm(:ss) field,
thus confusing checks for redundant fields.  This seems to be an aboriginal
mistake in Lockhart's commit 2cf1642461.

Add regression test cases to illustrate the changed behaviors.

Back-patch as far as 8.4, where support for spec-compliant interval
literals was added.

Range problem reported and diagnosed by Amit Kapila, tmask problem by me.
2012-08-03 17:40:43 -04:00
Tom Lane
f6ce81f55a Fix WITH attached to a nested set operation (UNION/INTERSECT/EXCEPT).
Parse analysis neglected to cover the case of a WITH clause attached to an
intermediate-level set operation; it only handled WITH at the top level
or WITH attached to a leaf-level SELECT.  Per report from Adam Mackler.

In HEAD, I rearranged the order of SelectStmt's fields to put withClause
with the other fields that can appear on non-leaf SelectStmts.  In back
branches, leave it alone to avoid a possible ABI break for third-party
code.

Back-patch to 8.4 where WITH support was added.
2012-07-31 17:56:21 -04:00
Tom Lane
af026b5d9b Fix longstanding crash-safety bug with newly-created-or-reset sequences.
If a crash occurred immediately after the first nextval() call for a serial
column, WAL replay would restore the sequence to a state in which it
appeared that no nextval() had been done, thus allowing the first sequence
value to be returned again by the next nextval() call; as reported in
bug #6748 from Xiangming Mei.

More generally, the problem would occur if an ALTER SEQUENCE was executed
on a freshly created or reset sequence.  (The manifestation with serial
columns was introduced in 8.2 when we added an ALTER SEQUENCE OWNED BY step
to serial column creation.)  The cause is that sequence creation attempted
to save one WAL entry by writing out a WAL record that made it appear that
the first nextval() had already happened (viz, with is_called = true),
while marking the sequence's in-database state with log_cnt = 1 to show
that the first nextval() need not emit a WAL record.  However, ALTER
SEQUENCE would emit a new WAL entry reflecting the actual in-database state
(with is_called = false).  Then, nextval would allocate the first sequence
value and set is_called = true, but it would trust the log_cnt value and
not emit any WAL record.  A crash at this point would thus restore the
sequence to its post-ALTER state, causing the next nextval() call to return
the first sequence value again.

To fix, get rid of the idea of logging an is_called status different from
reality.  This means that the first nextval-driven WAL record will happen
at the first nextval call not the second, but the marginal cost of that is
pretty negligible.  In addition, make sure that ALTER SEQUENCE resets
log_cnt to zero in any case where it touches sequence parameters that
affect future nextval results.  This will result in some user-visible
changes in the contents of a sequence's log_cnt column, as reflected in the
patch's regression test changes; but no application should be depending on
that anyway, since it was already true that log_cnt changes rather
unpredictably depending on checkpoint timing.

In addition, make some basically-cosmetic improvements to get rid of
sequence.c's undesirable intimacy with page layout details.  It was always
really trying to WAL-log the contents of the sequence tuple, so we should
have it do that directly using a HeapTuple's t_data and t_len, rather than
backing into it with some magic assumptions about where the tuple would be
on the sequence's page.

Back-patch to all supported branches.
2012-07-25 17:42:23 -04:00
Alvaro Herrera
d7b47e5155 Change syntax of new CHECK NO INHERIT constraints
The initially implemented syntax, "CHECK NO INHERIT (expr)" was not
deemed very good, so switch to "CHECK (expr) NO INHERIT" instead.  This
way it looks similar to a SQL-standard-compliant constraint attribute.

Backport to 9.2, where the new syntax and feature were introduced.

Per discussion.
2012-07-24 16:01:32 -04:00
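A small sketch of the revised syntax (table names are placeholders): the NO INHERIT attribute now follows the CHECK expression, and the constraint is not copied to child tables.

    CREATE TABLE parent (x int CHECK (x > 0) NO INHERIT);
    CREATE TABLE child () INHERITS (parent);
    INSERT INTO child VALUES (-1);   -- allowed: the constraint does not inherit
    INSERT INTO parent VALUES (-1);  -- rejected by the CHECK constraint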
Tom Lane
b71258af56 Fix name collision between concurrent regression tests.
Commit f5bcd398ad introduced a test using
a table named "circles" in inherit.sql.  Unfortunately, the concurrently
executed constraints test was already using that table name, so the
parallel regression tests would sometimes fail.  Rename table to dodge
the problem.  Per buildfarm.
2012-07-22 00:01:19 -04:00
Alvaro Herrera
f5bcd398ad connoinherit may be true only for CHECK constraints
The code was setting it true for other constraints, which is
bogus.  Doing so caused bogus catalog entries for such constraints, and
in particular caused an error to be raised when trying to drop a
constraint of types other than CHECK from a table that has children,
such as reported in bug #6712.

In 9.2, additionally ignore connoinherit=true for other constraint
types, to avoid having to force initdb; existing databases might already
contain bogus catalog entries.

Includes a catversion bump (in HEAD only).

Bug report from Miroslav Šulc
Analysis from Amit Kapila and Noah Misch; Amit also contributed the patch.
2012-07-20 14:08:07 -04:00
Tom Lane
8e617e29aa Fix whole-row Var evaluation to cope with resjunk columns (again).
When a whole-row Var is reading the result of a subquery, we need it to
ignore any "resjunk" columns that the subquery might have evaluated for
GROUP BY or ORDER BY purposes.  We've hacked this area before, in commit
68e40998d0, but that fix only covered
whole-row Vars of named composite types, not those of RECORD type; and it
was mighty klugy anyway, since it just assumed without checking that any
extra columns in the result must be resjunk.  A proper fix requires getting
hold of the subquery's targetlist so we can actually see which columns are
resjunk (whereupon we can use a JunkFilter to get rid of them).  So bite
the bullet and add some infrastructure to make that possible.

Per report from Andrew Dunstan and additional testing by Merlin Moncure.
Back-patch to all supported branches.  In 8.3, also back-patch commit
292176a118, which for some reason I had
not done at the time, but it's a prerequisite for this change.
2012-07-20 13:10:58 -04:00
Robert Haas
3a0e4d36eb Make new event trigger facility actually do something.
Commit 3855968f32 added syntax, pg_dump,
psql support, and documentation, but the triggers didn't actually fire.
With this commit, they now do.  This is still a pretty basic facility
overall because event triggers do not get a whole lot of information
about what the user is trying to do unless you write them in C; and
there's still no option to fire them anywhere except at the very
beginning of the execution sequence, but it's better than nothing,
and a good building block for future work.

Along the way, add a regression test for ALTER LARGE OBJECT, since
testing of event triggers reveals that we haven't got one.

Dimitri Fontaine and Robert Haas
2012-07-20 11:39:01 -04:00
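A minimal, hedged example of the facility as it stands at this point (function and trigger names are invented): a PL/pgSQL function attached to ddl_command_start, the one firing point currently available.

    CREATE FUNCTION log_ddl_start() RETURNS event_trigger
    LANGUAGE plpgsql AS $$
    BEGIN
        RAISE NOTICE 'DDL command starting: %', TG_TAG;
    END;
    $$;

    CREATE EVENT TRIGGER log_ddl ON ddl_command_start
        EXECUTE PROCEDURE log_ddl_start();

    CREATE TABLE demo (id int);  -- expected to emit: NOTICE: DDL command starting: CREATE TABLE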
Robert Haas
3855968f32 Syntax support and documentation for event triggers.
They don't actually do anything yet; that will get fixed in a
follow-on commit.  But this gets the basic infrastructure in place,
including CREATE/ALTER/DROP EVENT TRIGGER; support for COMMENT,
SECURITY LABEL, and ALTER EXTENSION .. ADD/DROP EVENT TRIGGER;
pg_dump and psql support; and documentation for the anticipated
initial feature set.

Dimitri Fontaine, with review and a bunch of additional hacking by me.
Thom Brown extensively reviewed earlier versions of this patch set,
but there's not a whole lot of that code left in this commit, as it
turns out.
2012-07-18 10:16:16 -04:00
Peter Eisentraut
a84bf4922e Avoid extra newlines in XML mapping in table forest mode
found by P. Broennimann
2012-07-12 23:52:50 +03:00
Tom Lane
84a42560c8 Add array_remove() and array_replace() functions.
These functions support removing or replacing array element value(s)
matching a given search value.  Although intended mainly to support a
future array-foreign-key feature, they seem useful in their own right.

Marco Nenciarini and Gabriele Bartolini, reviewed by Alex Hunsaker
2012-07-11 13:59:35 -04:00
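For illustration, the two new functions behave roughly like this (all occurrences of the search value are affected):

    SELECT array_remove(ARRAY[1, 2, 3, 2], 2);      -- {1,3}
    SELECT array_replace(ARRAY[1, 2, 3, 2], 2, 9);  -- {1,9,3,9}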
Tom Lane
628cbb50ba Re-implement extraction of fixed prefixes from regular expressions.
To generate btree-indexable conditions from regex WHERE conditions (such as
WHERE indexed_col ~ '^foo'), we need to be able to identify any fixed
prefix that a regex might have; that is, find any string that must be a
prefix of all strings satisfying the regex.  We used to do that with
entirely ad-hoc code that looked at the source text of the regex.  It
didn't know very much about regex syntax, which mostly meant that it would
fail to identify some optimizable cases; but Viktor Rosenfeld reported that
it would produce actively wrong answers for quantified parenthesized
subexpressions, such as '^(foo)?bar'.  Rather than trying to extend the
ad-hoc code to cover this, let's get rid of it altogether in favor of
identifying prefixes by examining the compiled form of a regex.

To do this, I've added a new entry point "pg_regprefix" to the regex library;
hopefully it is defined in a sufficiently general fashion that it can remain
in the library when/if that code gets split out as a standalone project.

Since this bug has been there for a very long time, this fix needs to get
back-patched.  However it depends on some other recent commits (particularly
the addition of wchar-to-database-encoding conversion), so I'll commit this
separately and then go to work on back-porting the necessary fixes.
2012-07-10 14:54:37 -04:00
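To make the optimization concrete (table, column, and index names are hypothetical): with a pattern-ops btree index, the fixed prefix of an anchored regex can be turned into index range conditions, now including forms like '^(foo)?bar' that the old source-text analysis mishandled.

    CREATE TABLE docs (title text);
    CREATE INDEX docs_title_idx ON docs (title text_pattern_ops);
    -- the planner can derive range bounds on 'foo' from the prefix of the pattern
    SELECT title FROM docs WHERE title ~ '^foo';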
Robert Haas
d7c734841b Reduce messages about implicit indexes and sequences to DEBUG1.
Per recent discussion on pgsql-hackers, these messages are too
chatty for most users.
2012-07-04 20:35:29 -04:00
Peter Eisentraut
2b44306315 Assorted message style improvements 2012-07-02 21:12:46 +03:00
Robert Haas
0caa0d04db Make DROP FUNCTION hint more informative.
If you decide you want to take the hint, this gives you something you
can paste right back to the server.

Dean Rasheed
2012-06-26 13:33:23 -04:00
Tom Lane
fe3db74002 Share RI trigger code between NO ACTION and RESTRICT cases.
These triggers are identical except for whether ri_Check_Pk_Match is to be
called, so factor out the common code to save a couple hundred lines.

Also, eliminate null-column checks in ri_Check_Pk_Match, since they're
duplicates of checks made by the calling functions and require unnecessary
complication in its API.

Simplify the way code is shared between RI_FKey_check_ins and
RI_FKey_check_upd, too.
2012-06-19 14:31:54 -04:00
Tom Lane
e8c9fd5fdf Allow ON UPDATE/DELETE SET DEFAULT plans to be cached.
Once upon a time, somebody was worried that cached RI plans wouldn't get
remade with new default values after ALTER TABLE ... SET DEFAULT, so they
didn't allow caching of plans for ON UPDATE/DELETE SET DEFAULT actions.
That time is long gone, though (and even at the time I doubt this was the
greatest hazard posed by ALTER TABLE...).  So allow these triggers to cache
their plans just like the others.

The cache_plan argument to ri_PlanCheck is now vestigial, since there
are no callers that don't pass "true"; but I left it alone in case there
is any future need for it.
2012-06-18 19:37:23 -04:00
Tom Lane
c75be2ad60 Change ON UPDATE SET NULL/SET DEFAULT referential actions to meet SQL spec.
Previously, when executing an ON UPDATE SET NULL or SET DEFAULT action for
a multicolumn MATCH SIMPLE foreign key constraint, we would set only those
referencing columns corresponding to referenced columns that were changed.
This is what the SQL92 standard said to do --- but more recent versions
of the standard say that all referencing columns should be set to null or
their default values, no matter exactly which referenced columns changed.
At least for SET DEFAULT, that is clearly saner behavior.  It's somewhat
debatable whether it's an improvement for SET NULL, but it appears that
other RDBMS systems read the spec this way.  So let's do it like that.

This is a release-notable behavioral change, although considering that
our documentation already implied it was done this way, the lack of
complaints suggests few people use such cases.
2012-06-18 12:12:52 -04:00
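A sketch of the behavioral change (table names invented): with a two-column MATCH SIMPLE foreign key and ON UPDATE SET NULL, updating just one referenced column now nulls both referencing columns, not only the one whose referenced counterpart changed.

    CREATE TABLE pk (a int, b int, PRIMARY KEY (a, b));
    CREATE TABLE fk (x int, y int,
                     FOREIGN KEY (x, y) REFERENCES pk (a, b) ON UPDATE SET NULL);
    INSERT INTO pk VALUES (1, 1);
    INSERT INTO fk VALUES (1, 1);
    UPDATE pk SET a = 2 WHERE a = 1;
    SELECT * FROM fk;  -- now expected to be (NULL, NULL) rather than (NULL, 1)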
Tom Lane
f5297bdfe4 Refer to the default foreign key match style as MATCH SIMPLE internally.
Previously we followed the SQL92 wording, "MATCH <unspecified>", but since
SQL99 there's been a less awkward way to refer to the default style.

In addition to the code changes, pg_constraint.confmatchtype now stores
this match style as 's' (SIMPLE) rather than 'u' (UNSPECIFIED).  This
doesn't affect pg_dump or psql because they use pg_get_constraintdef()
to reconstruct foreign key definitions.  But other client-side code might
examine that column directly, so this change will have to be marked as
an incompatibility in the 9.3 release notes.
2012-06-17 20:16:44 -04:00
Peter Eisentraut
15b1918e7d Improve reporting of permission errors for array types
Because permissions are assigned to element types, not array types,
complaining about permission denied on an array type would be
misleading to users.  So adjust the reporting to refer to the element
type instead.

In order not to duplicate the required logic in two dozen places,
refactor the permission denied reporting for types a bit.

pointed out by Yeb Havinga during the review of the type privilege
feature
2012-06-15 22:55:03 +03:00
Tom Lane
80edfd7659 Revisit error message details for JSON input parsing.
Instead of identifying error locations only by line number (which could
be entirely unhelpful with long input lines), provide a fragment of the
input text too, placing this info in a new CONTEXT entry.  Make the
error detail messages conform more closely to style guidelines, fix
failure to expose some of them for translation, ensure compiler can
check formats against supplied parameters.
2012-06-13 19:43:35 -04:00
Simon Riggs
28ac797287 Revert error message on GLOBAL/LOCAL pending further discussion 2012-06-10 08:41:01 +01:00
Simon Riggs
72335a2015 Add ERROR msg saying GLOBAL/LOCAL TEMP is not yet implemented 2012-06-09 16:35:26 +01:00
Tom Lane
3dd8e59681 Fix bogus handling of control characters in json_lex_string().
The original coding misbehaved if "char" is signed, and also made the
extremely poor decision to print control characters literally when trying
to complain about them.  Report and patch by Shigeru Hanada.

In passing, also fix core dump risk in report_parse_error() should the
parse state be something other than what it expects.
2012-06-04 20:43:57 -04:00
Tom Lane
2a4c46e0ba Fix array overrun in regex code.
zaptreesubs() was coded to unconditionally reset a capture subre's
corresponding pmatch[] entry.  However, in regexes without backrefs, that
array is caller-supplied and might not have as many entries as the regex
has capturing parens.  So check the array length and do nothing if there
is no corresponding entry, much as subset() does.  Failure to check this
resulted in a stack clobber in the case reported by Marko Kreen.

This bug appears to have been latent in the regex library from the
beginning.  It was not exposed because find() called dissect() not
cdissect(), and the dissect() code path didn't ever call zaptreesubs()
(formerly zapmem()).  When I unified dissect() and cdissect() in commit
4dd78bf37a, the problem was exposed.

Now that I've seen this, I'm rather suspicious that we might need to
back-patch it; but will refrain for now, for lack of evidence that
the case can be hit in the previous coding.
2012-05-24 13:56:16 -04:00
Peter Eisentraut
f1f6737e15 Fix incorrect logic in JSON number lexer
Detectable by gcc -Wlogical-op.

Add two regression test cases that would previously allow incorrect
values to pass.
2012-05-20 02:24:46 +03:00
Tom Lane
488c6dd170 Improve error message for ALTER COLUMN TYPE coercion failure.
Per recent discussion, the error message for this was actually a trifle
inaccurate, since it said "cannot be cast" which might be incorrect.
Adjust that wording, and add a HINT suggesting that a USING clause might
be needed.
2012-05-16 07:28:25 -04:00
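For context, the kind of statement whose failure now carries the clearer message and the new HINT (names are illustrative); supplying a USING clause resolves it.

    CREATE TABLE t (c text);
    INSERT INTO t VALUES ('42');
    -- ALTER TABLE t ALTER COLUMN c TYPE integer;   -- fails; the HINT suggests a USING clause
    ALTER TABLE t ALTER COLUMN c TYPE integer USING c::integer;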
Peter Eisentraut
c8e086795a Remove whitespace from end of lines
pgindent and perltidy should clean up the rest.
2012-05-15 22:19:41 +03:00
Tom Lane
809e7e21af Converge all SQL-level statistics timing values to float8 milliseconds.
This patch adjusts the core statistics views to match the decision already
taken for pg_stat_statements, that values representing elapsed time should
be represented as float8 and measured in milliseconds.  By using float8,
we are no longer tied to a specific maximum precision of timing data.
(Internally, it's still microseconds, but we could now change that without
needing changes at the SQL level.)

The columns affected are
pg_stat_bgwriter.checkpoint_write_time
pg_stat_bgwriter.checkpoint_sync_time
pg_stat_database.blk_read_time
pg_stat_database.blk_write_time
pg_stat_user_functions.total_time
pg_stat_user_functions.self_time
pg_stat_xact_user_functions.total_time
pg_stat_xact_user_functions.self_time

The first four of these are new in 9.2, so there is no compatibility issue
from changing them.  The others require a release note comment that they
are now double precision (and can show a fractional part) rather than
bigint as before; also their underlying statistics functions now match
the column definitions, instead of returning bigint microseconds.
2012-04-30 14:03:33 -04:00
Tom Lane
1dd89eadcd Rename I/O timing statistics columns to blk_read_time and blk_write_time.
This seems more consistent with the pre-existing choices for names of
other statistics columns.  Rename assorted internal identifiers to match.
2012-04-29 18:13:33 -04:00
Tom Lane
7c85aa39fc Fix oversight in recent parameterized-path patch.
bitmap_scan_cost_est() has to be able to cope with a BitmapOrPath, but
I'd taken a shortcut that didn't work for that case.  Noted by Heikki.
Add some regression tests since this area is evidently under-covered.
2012-04-26 14:17:44 -04:00
Tom Lane
d6d5f67b5b Modify create_index regression test to avoid intermittent failures.
We have been seeing intermittent buildfarm failures due to a query
sometimes not using an index-only scan plan, because a background
auto-ANALYZE prevented the table's all-visible bits from being set
immediately, thereby causing the estimated cost of an index-only scan
to go up considerably.  Adjust the test case so that a bitmap index scan is
preferred instead, which serves equally well for the purpose the test case
is actually meant for.  (Of course, it would be better to eliminate the
interference from auto-ANALYZE, but I see no low-risk way to do that,
so any such fix will have to be left for 9.3 or later.)
2012-04-25 22:57:48 -04:00
Tom Lane
9fa82c9809 Fix planner's handling of RETURNING lists in writable CTEs.
setrefs.c failed to do "rtoffset" adjustment of Vars in RETURNING lists,
which meant they were left with the wrong varnos when the RETURNING list
was in a subquery.  That was never possible before writable CTEs, of
course, but now it's broken.  The executor fails to notice any problem
because ExecEvalVar just references the ecxt_scantuple for any normal
varno; but EXPLAIN breaks when the varno is wrong, as illustrated in a
recent complaint from Bartosz Dmytrak.

Since the eventual rtoffset of the subquery is not known at the time
we are preparing its plan node, the previous scheme of executing
set_returning_clause_references() at that time cannot handle this
adjustment.  Fortunately, it turns out that we don't really need to do it
that way, because all the needed information is available during normal
setrefs.c execution; we just have to dig it out of the ModifyTable node.
So, do that, and get rid of the kluge of early setrefs processing of
RETURNING lists.  (This is a little bit of a cheat in the case of inherited
UPDATE/DELETE, because we are not passing a "root" struct that corresponds
exactly to what the subplan was built with.  But that doesn't matter, and
anyway this is less ugly than early setrefs processing was.)

Back-patch to 9.1, where the problem became possible to hit.
2012-04-25 20:20:33 -04:00
Robert Haas
3ce7f18e92 Casts to or from a domain type are ignored; warn and document.
Prohibiting this outright would break dumps taken from older versions
that contain such casts, which would create far more pain than is
justified here.

Per report by Jaime Casanova and subsequent discussion.
2012-04-24 09:20:53 -04:00
Robert Haas
5d4b60f2f2 Lots of doc corrections.
Josh Kupershmidt
2012-04-23 22:43:09 -04:00
Tom Lane
33e99153e9 Use fuzzy not exact cost comparison for the final tie-breaker in add_path.
Instead of an exact cost comparison, use a fuzzy comparison with 1e-10
delta after all other path metrics have proved equal.  This is to avoid
having platform-specific roundoff behaviors determine the choice when
two paths are really the same to our cost estimators.  Adjust the
recently-added test case that made it obvious we had a problem here.
2012-04-21 00:51:14 -04:00
Alvaro Herrera
09ff76fcdb Recast "ONLY" column CHECK constraints as NO INHERIT
The original syntax wasn't universally loved, and it didn't allow its
usage in CREATE TABLE, only ALTER TABLE.  It now works everywhere, and
it also allows using ALTER TABLE ONLY to add an uninherited CHECK
constraint, per discussion.

The pg_constraint column has accordingly been renamed connoinherit.

This commit partly reverts some of the changes in
61d81bd28d, particularly some pg_dump and
psql bits, because now pg_get_constraintdef includes the necessary NO
INHERIT within the constraint definition.

Author: Nikhil Sontakke
Some tweaks by me
2012-04-20 23:56:57 -03:00
Tom Lane
5b7b5518d0 Revise parameterized-path mechanism to fix assorted issues.
This patch adjusts the treatment of parameterized paths so that all paths
with the same parameterization (same set of required outer rels) for the
same relation will have the same rowcount estimate.  We cache the rowcount
estimates to ensure that property, and hopefully save a few cycles too.
Doing this makes it practical for add_path_precheck to operate without
a rowcount estimate: it need only assume that paths with different
parameterizations never dominate each other, which is close enough to
true anyway for coarse filtering, because normally a more-parameterized
path should yield fewer rows thanks to having more join clauses to apply.

In add_path, we do the full nine yards of comparing rowcount estimates
along with everything else, so that we can discard parameterized paths that
don't actually have an advantage.  This fixes some issues I'd found with
add_path rejecting parameterized paths on the grounds that they were more
expensive than not-parameterized ones, even though they yielded many fewer
rows and hence would be cheaper once subsequent joining was considered.

To make the same-rowcounts assumption valid, we have to require that any
parameterized path enforce *all* join clauses that could be obtained from
the particular set of outer rels, even if not all of them are useful for
indexing.  This is required at both base scans and joins.  It's a good
thing anyway since the net impact is that join quals are checked at the
lowest practical level in the join tree.  Hence, discard the original
rather ad-hoc mechanism for choosing parameterization joinquals, and build
a better one that has a more principled rule for when clauses can be moved.
The original rule was actually buggy anyway for lack of knowledge about
which relations are part of an outer join's outer side; getting this right
requires adding an outer_relids field to RestrictInfo.
2012-04-19 15:53:47 -04:00
Tom Lane
e3ffd05b02 Weaken the planner's tests for relevant joinclauses.
We should be willing to cross-join two small relations if that allows us
to use an inner indexscan on a large relation (that is, the potential
indexqual for the large table requires both smaller relations).  This
worked in simple cases but fell apart as soon as there was a join clause
to a fourth relation, because the existence of any two-relation join clause
caused the planner to not consider clauseless joins between other base
relations.  The added regression test shows an example case adapted from
a recent complaint from Benoit Delbosc.

Adjust have_relevant_joinclause, have_relevant_eclass_joinclause, and
has_relevant_eclass_joinclause to consider that a join clause mentioning
three or more relations is sufficient grounds for joining any subset of
those relations, even if we have to do so via a cartesian join.  Since such
clauses are relatively uncommon, this shouldn't affect planning speed on
typical queries; in fact it should help a bit, because the latter two
functions in particular get significantly simpler.

Although this is arguably a bug fix, I'm not going to risk back-patching
it, since it might have currently-unforeseen consequences.
2012-04-13 16:07:17 -04:00
Peter Eisentraut
c0cc526e8b Rename bytea_agg to string_agg and add delimiter argument
Per mailing list discussion, we would like to keep the bytea functions
parallel to the text functions, so rename bytea_agg to string_agg,
which already exists for text.

Also, to satisfy the rule that we don't want aggregate functions of
the same name with a different number of arguments, add a delimiter
argument, just like string_agg for text already has.
2012-04-13 21:36:59 +03:00
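A short sketch of the renamed aggregate (table and column names invented), which now takes an explicit bytea delimiter like its text counterpart:

    CREATE TABLE chunks (seq int, data bytea);
    INSERT INTO chunks VALUES (1, '\x0102'), (2, '\x0304');
    SELECT string_agg(data, ''::bytea ORDER BY seq) FROM chunks;  -- \x01020304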
Tom Lane
880bfc3287 Silently ignore any nonexistent schemas that are listed in search_path.
Previously we attempted to throw an error or at least warning for missing
schemas, but this was done inconsistently because of implementation
restrictions (in many cases, GUC settings are applied outside transactions
so that we can't do system catalog lookups).  Furthermore, there were
exceptions to the rule even in the beginning, and we'd been poking more
and more holes in it as time went on, because it turns out that there are
lots of use-cases for having some irrelevant items in a common search_path
value.  It seems better to just adopt a philosophy similar to what's always
been done with Unix PATH settings, wherein nonexistent or unreadable
directories are silently ignored.

This commit also fixes the documentation to point out that schemas for
which the user lacks USAGE privilege are silently ignored.  That's always
been true but was previously not documented.

This is mostly in response to Robert Haas' complaint that 9.1 started to
throw errors or warnings for missing schemas in cases where prior releases
had not.  We won't adopt such a significant behavioral change in a back
branch, so something different will be needed in 9.1.
2012-04-11 12:02:50 -04:00
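The practical effect, sketched with a hypothetical schema name: a nonexistent search_path entry is now simply skipped.

    SET search_path = not_created_yet, public;   -- accepted without error or warning
    SELECT current_schemas(false);               -- lists only schemas that exist, e.g. {public}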
Tom Lane
263d9de66b Allow statistics to be collected for foreign tables.
ANALYZE now accepts foreign tables and allows the table's FDW to control
how the sample rows are collected.  (But only manual ANALYZEs will touch
foreign tables, for the moment, since among other things it's not very
clear how to handle remote permissions checks in an auto-analyze.)

contrib/file_fdw is extended to support this.

Etsuro Fujita, reviewed by Shigeru Hanada, some further tweaking by me.
2012-04-06 15:02:35 -04:00
Simon Riggs
8cb53654db Add DROP INDEX CONCURRENTLY [IF EXISTS], uses ShareUpdateExclusiveLock 2012-04-06 10:21:40 +01:00
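A sketch of the added form (the index name is a placeholder); it must be run outside a transaction block, and the IF EXISTS part is what commit e323c55301, earlier in this listing, later fixes to report a NOTICE instead of an error for a missing index.

    DROP INDEX CONCURRENTLY IF EXISTS orders_customer_idx;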
Robert Haas
21cc529698 checkopint -> checkpoint
Report by Guillaume Lelarge.
2012-04-05 21:37:33 -04:00
Robert Haas
b736aef2ec Publish checkpoint timing information to pg_stat_bgwriter.
Greg Smith, Peter Geoghegan, and Robert Haas
2012-04-05 14:04:37 -04:00
Robert Haas
644828908f Expose track_iotiming data via the statistics collector.
Ants Aasma's original patch to add timing information for buffer I/O
requests exposed this data at the relation level, which was judged too
costly.  I've here exposed it at the database level instead.
2012-04-05 11:40:24 -04:00
Tom Lane
05dbd4a773 Fix plpgsql named-cursor-parameter feature for variable name conflicts.
The parser got confused if a cursor parameter had the same name as
a plpgsql variable.  Reported and diagnosed by Yeb Havinga, though
this isn't exactly his proposed fix.

Also, some mostly-but-not-entirely-cosmetic adjustments to the original
named-cursor-parameter patch, for code readability and better error
diagnostics.
2012-04-04 21:50:31 -04:00
Peter Eisentraut
38b9693fd9 Add support for renaming domain constraints 2012-04-03 08:11:51 +03:00
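A minimal sketch of the new ALTER DOMAIN form (domain and constraint names invented):

    CREATE DOMAIN posint AS int CONSTRAINT posint_positive CHECK (VALUE > 0);
    ALTER DOMAIN posint RENAME CONSTRAINT posint_positive TO posint_gt_zero;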
Tom Lane
e8476f46fc Fix COPY FROM for null marker strings that correspond to invalid encoding.
The COPY documentation says "COPY FROM matches the input against the null
string before removing backslashes".  It is therefore reasonable to presume
that null markers like E'\\0' will work ... and they did, until someone put
the tests in the wrong order during microoptimization-driven rewrites.
Since then, we've been failing if the null marker is something that would
de-escape to an invalidly-encoded string.  Since null markers generally
need to be something that can't appear in the data, this represents a
nontrivial loss of functionality; it's surprising nobody noticed it earlier.

Per report from Jeff Davis.  Backpatch to 8.4 where this got broken.
2012-03-25 23:17:22 -04:00
Tom Lane
8279eb4191 Fix planner's handling of outer PlaceHolderVars within subqueries.
For some reason, in the original coding of the PlaceHolderVar mechanism
I had supposed that PlaceHolderVars couldn't propagate into subqueries.
That is of course entirely possible.  When it happens, we need to treat
an outer-level PlaceHolderVar much like an outer Var or Aggref, that is
SS_replace_correlation_vars() needs to replace the PlaceHolderVar with
a Param, and then when building the finished SubPlan we have to provide
the PlaceHolderVar expression as an actual parameter for the SubPlan.
The handling of the contained expression is a bit delicate but it can be
treated exactly like an Aggref's expression.

In addition to the missing logic in subselect.c, prepjointree.c was failing
to search subqueries for PlaceHolderVars that need their relids adjusted
during subquery pullup.  It looks like everyplace else that touches
PlaceHolderVars got it right, though.

Per report from Mark Murawski.  In 9.1 and HEAD, queries affected by this
oversight would fail with "ERROR: Upper-level PlaceHolderVar found where
not expected".  But in 9.0 and 8.4, you'd silently get possibly-wrong
answers, since the value transmitted into the subquery wouldn't go to null
when it should.
2012-03-24 16:21:39 -04:00
Tom Lane
0339047bc9 Code review for protransform patches.
Fix loss of previous expression-simplification work when a transform
function fires: we must not simply revert to untransformed input tree.
Instead build a dummy FuncExpr node to pass to the transform function.
This has the additional advantage of providing a simpler, more uniform
API for transform functions.

Move documentation to a somewhat less buried spot, relocate some
poorly-placed code, be more wary of null constants and invalid typmod
values, add an opr_sanity check on protransform function signatures,
and some other minor cosmetic adjustments.

Note: although this patch touches pg_proc.h, no need for catversion
bump, because the changes are cosmetic and don't actually change the
intended catalog contents.
2012-03-23 17:29:57 -04:00
Tom Lane
9dbf2b7d75 Restructure SELECT INTO's parsetree representation into CreateTableAsStmt.
Making this operation look like a utility statement seems generally a good
idea, and particularly so in light of the desire to provide command
triggers for utility statements.  The original choice of representing it as
SELECT with an IntoClause appendage had metastasized into rather a lot of
places, unfortunately, so that this patch is a great deal more complicated
than one might at first expect.

In particular, keeping EXPLAIN working for SELECT INTO and CREATE TABLE AS
subcommands required restructuring some EXPLAIN-related APIs.  Add-on code
that calls ExplainOnePlan or ExplainOneUtility, or uses
ExplainOneQuery_hook, will need adjustment.

Also, the cases PREPARE ... SELECT INTO and CREATE RULE ... SELECT INTO,
which formerly were accepted though undocumented, are no longer accepted.
The PREPARE case can be replaced with use of CREATE TABLE AS EXECUTE.
The CREATE RULE case doesn't seem to have much real-world use (since the
rule would work only once before failing with "table already exists"),
so we'll not bother with that one.

Both SELECT INTO and CREATE TABLE AS still return a command tag of
"SELECT nnnn".  There was some discussion of returning "CREATE TABLE nnnn",
but for the moment backwards compatibility wins the day.

Andres Freund and Tom Lane
2012-03-19 21:38:12 -04:00
Tom Lane
dd4134ea56 Revisit handling of UNION ALL subqueries with non-Var output columns.
In commit 57664ed25e I tried to fix a bug
reported by Teodor Sigaev by making non-simple-Var output columns distinct
(by wrapping their expressions with dummy PlaceHolderVar nodes).  This did
not work too well.  Commit b28ffd0fcc fixed
some ensuing problems with matching to child indexes, but per a recent
report from Claus Stadler, constraint exclusion of UNION ALL subqueries was
still broken, because constant-simplification didn't handle the injected
PlaceHolderVars well either.  On reflection, the original patch was quite
misguided: there is no reason to expect that EquivalenceClass child members
will be distinct.  So instead of trying to make them so, we should ensure
that we can cope with the situation when they're not.

Accordingly, this patch reverts the code changes in the above-mentioned
commits (though the regression test cases they added stay).  Instead, I've
added assorted defenses to make sure that duplicate EC child members don't
cause any problems.  Teodor's original problem ("MergeAppend child's
targetlist doesn't match MergeAppend") is addressed more directly by
revising prepare_sort_from_pathkeys to let the parent MergeAppend's sort
list guide creation of each child's sort list.

In passing, get rid of add_sort_column; as far as I can tell, testing for
duplicate sort keys at this stage is dead code.  Certainly it doesn't
trigger often enough to be worth expending cycles on in ordinary queries.
And keeping the test would've greatly complicated the new logic in
prepare_sort_from_pathkeys, because comparing pathkey list entries against
a previous output array requires that we not skip any entries in the list.

Back-patch to 9.1, like the previous patches.  The only known issue in
this area that wasn't caused by the ill-advised previous patches was the
MergeAppend planning failure, which of course is not relevant before 9.1.
It's possible that we need some of the new defenses against duplicate child
EC entries in older branches, but until there's some clear evidence of that
I'm going to refrain from back-patching further.
2012-03-16 13:11:55 -04:00
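Illustrative only: roughly the constraint-exclusion-over-UNION-ALL shape the above fix concerns, with made-up tables whose CHECK constraints let one arm be pruned; the constant tag columns are the non-Var outputs in question:
    CREATE TABLE t1 (a int CHECK (a < 100));
    CREATE TABLE t2 (a int CHECK (a >= 100));
    SELECT *
    FROM (SELECT a, 'one'::text AS tag FROM t1
          UNION ALL
          SELECT a, 'two'::text AS tag FROM t2) u
    WHERE u.a = 42;   -- the t2 arm can be excluded via its CHECK constraint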
Tom Lane
c6a11b89e4 Teach SPGiST to store nulls and do whole-index scans.
This patch fixes the other major compatibility-breaking limitation of
SPGiST, that it didn't store anything for null values of the indexed
column, and so could not support whole-index scans or "x IS NULL"
tests.  The approach is to create a wholly separate search tree for
the null entries, and use fixed "allTheSame" insertion and search
rules when processing this tree, instead of calling the index opclass
methods.  This way the opclass methods do not need to worry about
dealing with nulls.

Catversion bump is for pg_am updates as well as the change in on-disk
format of SPGiST indexes; there are some tweaks in SPGiST WAL records
as well.

Heavily rewritten version of a patch by Oleg Bartunov and Teodor Sigaev.
(The original also stored nulls separately, but it reused GIN code to do
so, which required undesirable compromises in the on-disk format, and
would likely lead to bugs due to the GIN code being required to work in
two very different contexts.)
2012-03-11 16:29:59 -04:00
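A small sketch, with hypothetical names, of the cases SP-GiST can now handle:
    CREATE TABLE pts (p point);
    CREATE INDEX pts_spgist ON pts USING spgist (p);
    SELECT count(*) FROM pts WHERE p IS NULL;   -- IS NULL tests and whole-index scans now possible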
Peter Eisentraut
39d74e346c Add support for renaming constraints
reviewed by Josh Berkus and Dimitri Fontaine
2012-03-10 20:19:13 +02:00
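A one-line sketch of the new capability (hypothetical table and constraint names):
    ALTER TABLE orders RENAME CONSTRAINT orders_chk TO orders_amount_check;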
Tom Lane
0e5e167aae Collect and use element-frequency statistics for arrays.
This patch improves selectivity estimation for the array <@, &&, and @>
(containment and overlaps) operators.  It enables collection of statistics
about individual array element values by ANALYZE, and introduces
operator-specific estimators that use these stats.  In addition,
ScalarArrayOpExpr constructs of the forms "const = ANY/ALL (array_column)"
and "const <> ANY/ALL (array_column)" are estimated by treating them as
variants of the containment operators.

Since we still collect scalar-style stats about the array values as a
whole, the pg_stats view is expanded to show both these stats and the
array-style stats in separate columns.  This creates an incompatible change
in how stats for tsvector columns are displayed in pg_stats: the stats
about lexemes are now displayed in the array-related columns instead of the
original scalar-related columns.

There are a few loose ends here, notably that it'd be nice to be able to
suppress either the scalar-style stats or the array-element stats for
columns for which they're not useful.  But the patch is in good enough
shape to commit for wider testing.

Alexander Korotkov, reviewed by Noah Misch and Nathan Boley
2012-03-03 20:20:57 -05:00
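Queries of roughly this shape (hypothetical table docs with a text[] column tags) are what the new estimators target:
    SELECT * FROM docs WHERE tags @> ARRAY['postgres'];      -- containment
    SELECT * FROM docs WHERE tags && ARRAY['sql', 'json'];   -- overlap
    SELECT * FROM docs WHERE 'postgres' = ANY (tags);        -- estimated as a containment variant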
Peter Eisentraut
b59ca98209 Allow CREATE TABLE (LIKE ...) from composite type
The only reason this didn't work before was that parserOpenTable()
rejects composite types.  So use relation_openrv() directly and
manually do the errposition() setup that parserOpenTable() does.
2012-03-03 16:03:05 +02:00
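A minimal sketch of what now works (hypothetical type and table names):
    CREATE TYPE person AS (name text, age integer);
    CREATE TABLE people (LIKE person);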
Peter Eisentraut
6688d2878e Add COLLATION FOR expression
reviewed by Jaime Casanova
2012-03-02 21:12:16 +02:00
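A brief illustration of the new expression:
    SELECT COLLATION FOR ('foo'::text);        -- "default"
    SELECT COLLATION FOR ('foo' COLLATE "C");  -- "C"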
Peter Eisentraut
9cfd800aab Add some enumeration commas, for consistency 2012-02-24 11:04:45 +02:00
Tom Lane
173e29aa5d Fix the general case of quantified regex back-references.
Cases where a back-reference is part of a larger subexpression that
is quantified have never worked in Spencer's regex engine, because
he used a compile-time transformation that neglected the need to
check the back-reference match in iterations before the last one.
(That was okay for capturing parens, and we still do it if the
regex has *only* capturing parens ... but it's not okay for backrefs.)

To make this work properly, we have to add an "iteration" node type
to the regex engine's vocabulary of sub-regex nodes.  Since this is a
moderately large change with a fair risk of introducing new bugs of its
own, apply to HEAD only, even though it's a fix for a longstanding bug.
2012-02-24 01:41:03 -05:00
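Illustrative only (assuming standard_conforming_strings, the default): a back-reference inside a larger quantified subexpression, the case this change makes work:
    SELECT 'ababa' ~ '^(a)(b\1)+$';   -- true: each repetition of (b\1) must re-match \1 = 'a'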
Andrew Dunstan
83fcaffea2 Fix a couple of cases of JSON output.
First, as noted by Itagaki Takahiro, a datum of type JSON doesn't
need to be escaped. Second, ensure that numeric output not in
the form of a legal JSON number is quoted and escaped.
2012-02-20 15:01:03 -05:00
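Illustrative of the second fix, using row_to_json (added in a commit further down this log) and a made-up column name n; numeric output that is not a legal JSON number is emitted quoted:
    SELECT row_to_json(t) FROM (SELECT 'NaN'::numeric AS n) t;   -- {"n":"NaN"}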
Tom Lane
5223f96d92 Fix regex back-references that are directly quantified with *.
The syntax "\n*", that is a backref with a * quantifier directly applied
to it, has never worked correctly in Spencer's library.  This has been an
open bug in the Tcl bug tracker since 2005:
https://sourceforge.net/tracker/index.php?func=detail&aid=1115587&group_id=10894&atid=110894

The core of the problem is in parseqatom(), which first changes "\n*" to
"\n+|" and then applies repeat() to the NFA representing the backref atom.
repeat() thinks that any arc leading into its "rp" argument is part of the
sub-NFA to be repeated.  Unfortunately, since parseqatom() already created
the arc that was intended to represent the empty bypass around "\n+", this
arc gets moved too, so that it now leads into the state loop created by
repeat().  Thus, what was supposed to be an "empty" bypass gets turned into
something that represents zero or more repetitions of the NFA representing
the backref atom.  In the original example, in place of
	^([bc])\1*$
we now have something that acts like
	^([bc])(\1+|[bc]*)$
At runtime, the branch involving the actual backref fails, as it's supposed
to, but then the other branch succeeds anyway.

We could no doubt fix this by some rearrangement of the operations in
parseqatom(), but that code is plenty ugly already, and what's more the
whole business of converting "x*" to "x+|" probably needs to go away to fix
another problem I'll mention in a moment.  Instead, this patch suppresses
the *-conversion when the target is a simple backref atom, leaving the case
of m == 0 to be handled at runtime.  This makes the patch in regcomp.c a
one-liner, at the cost of having to tweak cbrdissect() a little.  In the
event I went a bit further than that and rewrote cbrdissect() to check all
the string-length-related conditions before it starts comparing characters.
It seems a bit stupid to possibly iterate through many copies of an
n-character backreference, only to fail at the end because the target
string's length isn't a multiple of n --- we could have found that out
before starting.  The existing coding could only be a win if integer
division is hugely expensive compared to character comparison, but I don't
know of any modern machine where that might be true.

This does not fix all the problems with quantified back-references.  In
particular, the code is still broken for back-references that appear within
a larger expression that is quantified (so that direct insertion of the
quantification limits into the BACKREF node doesn't apply).  I think fixing
that will take some major surgery on the NFA code, specifically introducing
an explicit iteration node type instead of trying to transform iteration
into concatenation of modified regexps.

Back-patch to all supported branches.  In HEAD, also add a regression test
case for this.  (It may seem a bit silly to create a regression test file
for just one test case; but I'm expecting that we will soon import a whole
bunch of regex regression tests from Tcl, so might as well create the
infrastructure now.)
2012-02-20 00:52:33 -05:00
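The same example phrased as queries (the pre-fix behavior incorrectly accepted the second string):
    SELECT 'bb' ~ '^([bc])\1*$';   -- true
    SELECT 'bc' ~ '^([bc])\1*$';   -- false once the empty bypass is handled correctly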
Andrew Dunstan
2f582f76b1 Improve pretty printing of viewdefs.
Some line feeds are added to target lists and FROM lists to make
them more readable. By default they wrap at 80 columns if possible,
but the wrap column is also selectable - if 0 it wraps after every
item.

Andrew Dunstan, reviewed by Hitoshi Harada.
2012-02-19 11:43:46 -05:00
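A sketch of selecting the wrap column, assuming it is exposed as the integer argument of pg_get_viewdef (hypothetical view name):
    SELECT pg_get_viewdef('myview'::regclass, 80);   -- wrap at 80 columns where possible
    SELECT pg_get_viewdef('myview'::regclass, 0);    -- wrap after every item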
Tom Lane
759c95c45b Update expected/collate.linux.utf8.out for recent plpgsql changes.
This file was missed in commit 4c6cedd1b0.
2012-02-18 18:08:02 -05:00
Tom Lane
4bfe68dfab Run a portal's cleanup hook immediately when pushing it to FAILED state.
This extends the changes of commit 6252c4f9e2
so that we run the cleanup hook earlier for failure cases as well as
success cases.  As before, the point is to avoid an assertion failure from
an Assert I added in commit a874fe7b4c, which
was meant to check that no user-written code can be called during portal
cleanup.  This fixes a case reported by Pavan Deolasee in which the Assert
could be triggered during backend exit (see the new regression test case),
and also prevents the possibility that the cleanup hook is run after
portions of the portal's state have already been recycled.  That doesn't
really matter in current usage, but it foreseeably could matter in the
future.

Back-patch to 9.1 where the Assert in question was added.
2012-02-15 16:19:01 -05:00
Robert Haas
ac9100f8cf More regression tests for LEAKPROOF/NOT LEAKPROOF stuff.
Along the way, move create_function_3 into a parallel schedule.

KaiGai Kohei
2012-02-15 10:56:26 -05:00
Tom Lane
398f70ec07 Preserve column names in the execution-time tupledesc for a RowExpr.
The hstore and json datatypes both have record-conversion functions that
pay attention to column names in the composite values they're handed.
We used to not worry about inserting correct field names into tuple
descriptors generated at runtime, but given these examples it seems
useful to do so.  Observe the nicer-looking results in the regression
tests whose results changed.

catversion bump because there is a subtle change in requirements for stored
rule parsetrees: RowExprs from ROW() constructs now have to include field
names.

Andrew Dunstan and Tom Lane
2012-02-14 17:34:56 -05:00
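A small illustration via row_to_json, which consumes the field names now preserved (alias names are arbitrary):
    SELECT row_to_json(ROW(1, 'foo'));                                  -- {"f1":1,"f2":"foo"}
    SELECT row_to_json(t) FROM (SELECT 1 AS id, 'a'::text AS label) t;  -- {"id":1,"label":"a"}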
Robert Haas
dc66f1c5f2 Remove new, intermittently failing regression test.
Per buildfarm.
2012-02-13 23:43:24 -05:00
Robert Haas
e37e448650 Fix new create_function_3 regression tests not to rely on tuple order.
Per buildfarm.
2012-02-13 22:49:07 -05:00
Robert Haas
cd30728fb2 Allow LEAKPROOF functions for better performance of security views.
We don't normally allow quals to be pushed down into a view created
with the security_barrier option, but functions without side effects
are an exception: they're OK.  This allows much better performance in
common cases, such as when using an equality operator (that might
even be indexable).

There is an outstanding issue here with the CREATE FUNCTION / ALTER
FUNCTION syntax: there's no way to use ALTER FUNCTION to unset the
leakproof flag.  But I'm committing this as-is so that it doesn't
have to be rebased again; we can fix up the grammar in a future
commit.

KaiGai Kohei, with some wordsmithing by me.
2012-02-13 22:21:14 -05:00
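A sketch of the new marking (hypothetical objects); only side-effect-free functions explicitly marked LEAKPROOF are pushed below a security_barrier view:
    CREATE VIEW visible_accounts WITH (security_barrier) AS
        SELECT * FROM accounts WHERE owner = current_user;
    CREATE FUNCTION is_zero(integer) RETURNS boolean
        AS 'SELECT $1 = 0' LANGUAGE sql IMMUTABLE LEAKPROOF;   -- marking requires superuser
    SELECT * FROM visible_accounts WHERE is_zero(balance);     -- qual may now be pushed down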
Tom Lane
d06e2d2005 Add ORDER BY to a query to prevent occasional regression test failures.
Per buildfarm, we sometimes get row-ordering variations in the output.
This also makes this query look more like numerous other ones in the same
test file.
2012-02-10 02:33:00 -05:00
Tom Lane
cbba55d6d7 Support min/max index optimizations on boolean columns.
Since bool_and() is equivalent to min(), and bool_or() to max(), we might
as well let them be index-optimized in the same way.  The practical value
of this is debatable at best, but it seems nearly cost-free to enable it.
Code-wise, we need only adjust the entries in pg_aggregate.  There is a
measurable planning speed penalty for a query involving one of these
aggregates, but it is only a few percent in simple cases, so that seems
acceptable.

Marti Raudsepp, reviewed by Abhijit Menon-Sen
2012-02-08 12:41:48 -05:00
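Illustrative (hypothetical table and index): these aggregates can now be planned like min()/max() over an indexed column:
    CREATE TABLE flags (f boolean);
    CREATE INDEX ON flags (f);
    SELECT bool_and(f), bool_or(f) FROM flags;   -- each can be answered from the index ends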
Tom Lane
3db6524fe6 Mark some more I/O-conversion-invoking functions as stable not volatile.
When written, textanycat, anytextcat, quote_literal, and quote_nullable
were marked volatile, because they could invoke arbitrary type-specific
output functions as part of casting their anyelement arguments to text.
Since then, we have defined a project policy that I/O functions must not
be volatile, as per commit aab353a60b.
So these functions can safely be downgraded to stable.  Most of the time
this makes no difference since they'll get inlined anyway, but as noted
by Andrew Dunstan, there are cases where the volatile marking prevents
optimizations that the planner does before function inlining.  (I think
I might have overlooked these functions in the earlier commit on the
grounds that inlining would make it moot, but not so --- tgl)

This change results in a change in the expected output of the json
regression tests, because the planner can now flatten a sub-select
that it failed to before.  The old output is preferable, but getting
that back will require some as-yet-unfinished work on RowExpr handling.

Marti Raudsepp
2012-02-08 11:29:29 -05:00
Tom Lane
342b83fdca Revert "Add some regression test cases for denormalized float8 input."
This reverts commit 500cf66d55.  As was
more or less expected, a small minority of platforms won't accept
denormalized input even with the recent changes.  It doesn't seem
especially helpful to test this if we're going to have to provide an
alternate expected-file to allow failure.
2012-02-04 15:52:09 -05:00
Andrew Dunstan
39909d1d39 Add array_to_json and row_to_json functions.
Also move the escape_json function from explain.c to json.c where it
seems to belong.

Andrew Dunstan, reviewed by Abhijit Menon-Sen.
2012-02-03 12:11:16 -05:00
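The two new functions in brief:
    SELECT array_to_json(ARRAY[1, 2, 3]);                          -- [1,2,3]
    SELECT row_to_json(t) FROM (SELECT 1 AS a, 'x'::text AS b) t;  -- {"a":1,"b":"x"}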
Tom Lane
500cf66d55 Add some regression test cases for denormalized float8 input.
This was submitted with the previous patch, but I'm committing it
separately to ease backing it out if these results prove too unportable.

Marti Raudsepp, after a proposal by Jeroen Vermeulen
2012-02-01 13:13:54 -05:00
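Illustrative of the kind of input these tests exercise; acceptance of denormalized (subnormal) doubles is platform-dependent, which is why the tests were later reverted (see above):
    SELECT '1e-320'::float8;   -- a denormalized value; some platforms reject this input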
Tom Lane
bef47331b6 Code review for plpgsql fn_signature patch.
Don't quote the output of format_procedure(); it's already quoted quite
enough.  Remove the fn_name field, which was now just dead weight.  Fix
remaining expected-output files.
2012-02-01 02:14:37 -05:00
Robert Haas
5384a73f98 Built-in JSON data type.
Like the XML data type, we simply store JSON data as text, after checking
that it is valid.  More complex operations such as canonicalization and
comparison may come later, but this is enough for now.

There are a few open issues here, such as whether we should attempt to
detect UTF-8 surrogate pairs represented as \uXXXX\uYYYY, but this gets
the basic framework in place.
2012-01-31 11:48:23 -05:00
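A minimal illustration of the new type:
    SELECT '{"a": [1, 2, 3]}'::json;   -- stored as text after validity checking
    SELECT '{"a": 1,}'::json;          -- fails: not valid JSON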