Commit Graph

1997 Commits

Tom Lane
a20993608a Fix case of window function + aggregate + GROUP BY expression.
In commit 1bc16a9460 I added a minor
optimization to drop the component variables of a GROUP BY expression from
the target list computed at the aggregation level of a query, if those Vars
weren't referenced elsewhere in the tlist.  However, I overlooked that the
window-function planning code would deconstruct such expressions and thus
need to have access to their component variables.  Fix it to not do that.

While at it, I removed the distinction between volatile and nonvolatile
window partition/order expressions: the code now computes all of them
at the aggregation level.  This saves a relatively expensive check for
volatility, and it's unclear that the resulting plan isn't better anyway.

Per bug #7535 from Louis-David Mitterrand.  Back-patch to 9.2.
2012-09-13 11:32:25 -04:00
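
A minimal sketch of the query shape involved in the commit above (table and
column names are hypothetical): a grouped query whose window function sorts by
the same expression that appears in GROUP BY, so the planner must still be
able to see the expression's component columns.

    SELECT a + b AS ab,
           sum(c)  AS total,
           rank() OVER (ORDER BY a + b) AS rnk
    FROM t
    GROUP BY a + b;
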
Tom Lane
46c508fbcf Fix PARAM_EXEC assignment mechanism to be safe in the presence of WITH.
The planner previously assumed that parameter Vars having the same absolute
query level, varno, and varattno could safely be assigned the same runtime
PARAM_EXEC slot, even though they might be different Vars appearing in
different subqueries.  This was (probably) safe before the introduction of
CTEs, but the lazy-evaluation mechanism used for CTEs means that a CTE can
be executed during execution of some other subquery, causing the lifespan
of Params at the same syntactic nesting level as the CTE to overlap with
use of the same slots inside the CTE.  In 9.1 we created additional hazards
by using the same parameter-assignment technology for nestloop inner scan
parameters, but it was broken before that, as illustrated by the added
regression test.

To fix, restructure the planner's management of PlannerParamItems so that
items having different semantic lifespans are kept rigorously separated.
This will probably result in complex queries using more runtime PARAM_EXEC
slots than before, but the slots are cheap enough that this hardly matters.
Also, stop generating PlannerParamItems containing Params for subquery
outputs: all we really need to do is reserve the PARAM_EXEC slot number,
and that now only takes incrementing a counter.  The planning code is
simpler and probably faster than before, as well as being more correct.

Per report from Vik Reykja.

These changes will mostly also need to be made in the back branches, but
I'm going to hold off on that until after 9.2.0 wraps.
2012-09-05 12:55:01 -04:00
Tom Lane
6d2c8c0e2a Drop cheap-startup-cost paths during add_path() if we don't need them.
We can detect whether the planner top level is going to care at all about
cheap startup cost (it will only do so if query_planner's tuple_fraction
argument is greater than zero).  If it isn't, we might as well discard
paths immediately whose only advantage over others is cheap startup cost.
This turns out to get rid of quite a lot of paths in complex queries ---
I saw planner runtime reduction of more than a third on one large query.

Since add_path isn't currently passed the PlannerInfo "root", the easiest
way to tell it whether to do this was to add a bool flag to RelOptInfo.
That's a bit redundant, since all relations in a given query level will
have the same setting.  But in the future it's possible that we'd refine
the control decision to work on a per-relation basis, so this seems like
a good arrangement anyway.

Per my suggestion of a few months ago.
2012-09-01 18:16:24 -04:00
Tom Lane
4da6439bd8 Fix mark_placeholder_maybe_needed to handle LATERAL references.
If a PlaceHolderVar contains a pulled-up LATERAL reference, its minimum
possible evaluation level might be higher in the join tree than its
original syntactic location.  That in turn affects the ph_needed level for
any contained PlaceHolderVars (that is, those PHVs had better propagate up
the join tree at least to the evaluation level of the outer PHV).  We got
this mostly right, but mark_placeholder_maybe_needed() failed to account
for the effect, and in consequence could leave the inner PHVs with
ph_may_need less than what their ultimate ph_needed value will be.  That's
bad because it could lead to failure to select a join order that will allow
evaluation of the inner PHV at a valid location.  Fix that, and add an
Assert that checks that we don't ever set ph_needed to more than
ph_may_need.
2012-09-01 13:56:46 -04:00
Tom Lane
da3df99870 Fix LATERAL references to join alias variables.
I had thought this case worked already, but perhaps I didn't re-test it
after adding extract_lateral_references() ...
2012-08-31 17:44:31 -04:00
Tom Lane
d1a4db8d25 Improve EXPLAIN's ability to cope with LATERAL references in plans.
push_child_plan/pop_child_plan didn't bother to adjust the "ancestors"
list of parent plan nodes when descending to a child plan node.  I think
this was okay when it was written, but it's not okay in the presence of
LATERAL references, since a subplan node could easily be returning a
LATERAL value back up to the same nestloop node that provides the value.
Per changed regression test results, the omission led to failure to
interpret Param nodes that have perfectly good interpretations.
2012-08-30 12:56:50 -04:00
Tom Lane
e83bb10d6d Adjust definition of cheapest_total_path to work better with LATERAL.
In the initial cut at LATERAL, I kept the rule that cheapest_total_path
was always unparameterized, which meant it had to be NULL if the relation
has no unparameterized paths.  It turns out to work much more nicely if
we always have *some* path nominated as cheapest-total for each relation.
In particular, let's still say it's the cheapest unparameterized path if
there is one; if not, take the cheapest-total-cost path among those of
the minimum available parameterization.  (The first rule is actually
a special case of the second.)

This allows reversion of some temporary lobotomizations I'd put in place.
In particular, the planner can now consider hash and merge joins for
joins below a parameter-supplying nestloop, even if there aren't any
unparameterized paths available.  This should bring planning of
LATERAL-containing queries to the same level as queries not using that
feature.

Along the way, simplify management of parameterized paths in add_path()
and friends.  In the original coding for parameterized paths in 9.2,
I tried to minimize the logic changes in add_path(), so it just treated
parameterization as yet another dimension of comparison for paths.
We later made it ignore pathkeys (sort ordering) of parameterized paths,
on the grounds that ordering isn't a useful property for the path on the
inside of a nestloop, so we might as well get rid of useless parameterized
paths as quickly as possible.  But we didn't take that reasoning as far as
we should have.  Startup cost isn't a useful property inside a nestloop
either, so add_path() ought to discount startup cost of parameterized paths
as well.  Having done that, the secondary sorting I'd implemented (in
add_parameterized_path) is no longer needed --- any parameterized path that
survives add_path() at all is worth considering at higher levels.  So this
should be a bit faster as well as simpler.
2012-08-29 22:06:07 -04:00
Tom Lane
e323c55301 Fix DROP INDEX CONCURRENTLY IF EXISTS.
This threw ERROR, not the expected NOTICE, if the index didn't exist.
The bug was actually visible in not-as-expected regression test output,
so somebody wasn't paying too close attention in commit
8cb53654db.
Per report from Brendan Byrd.
2012-08-27 12:45:43 -04:00
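
With the fix, a missing index yields the usual notice instead of an error;
for example (index name hypothetical):

    DROP INDEX CONCURRENTLY IF EXISTS no_such_index;
    -- NOTICE:  index "no_such_index" does not exist, skipping
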
Tom Lane
9ff79b9d4e Fix up planner infrastructure to support LATERAL properly.
This patch takes care of a number of problems having to do with failure
to choose valid join orders and incorrect handling of lateral references
pulled up from subqueries.  Notable changes:

* Add a LateralJoinInfo data structure similar to SpecialJoinInfo, to
represent join ordering constraints created by lateral references.
(I first considered extending the SpecialJoinInfo structure, but the
semantics are different enough that a separate data structure seems
better.)  Extend join_is_legal() and related functions to prevent trying
to form unworkable joins, and to ensure that we will consider joins that
satisfy lateral references even if the joins would be clauseless.

* Fill in the infrastructure needed for the last few types of relation scan
paths to support parameterization.  We'd have wanted this eventually
anyway, but it is necessary now because a relation that gets pulled up out
of a UNION ALL subquery may acquire a reltargetlist containing lateral
references, meaning that its paths *have* to be parameterized whether or
not we have any code that can push join quals down into the scan.

* Compute data about lateral references early in query_planner(), and save
in RelOptInfo nodes, to avoid repetitive calculations later.

* Assorted corner-case bug fixes.

There's probably still some bugs left, but this is a lot closer to being
real than it was before.
2012-08-26 22:50:23 -04:00
Tom Lane
ec8a0135c3 Fix cascading privilege revoke to notice when privileges are still held.
If we revoke a grant option from some role X, but X still holds the option
via another grant, we should not recursively revoke the privilege from
role(s) Y that X had granted it to.  This was supposedly fixed as one
aspect of commit 4b2dafcc0b, but I must not
have tested it, because in fact that code never worked: it forgot to shift
the grant-option bits back over when masking the bits being revoked.

Per bug #6728 from Daniel German.  Back-patch to all active branches,
since this has been wrong since 8.0.
2012-08-23 17:25:10 -04:00
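
A sketch of the scenario described above, with hypothetical roles and table:
x holds the grant option from two grantors, so revoking only one of them must
not cascade to y.

    -- as the table owner:
    GRANT SELECT ON t TO x WITH GRANT OPTION;
    GRANT SELECT ON t TO z WITH GRANT OPTION;
    -- as z:
    GRANT SELECT ON t TO x WITH GRANT OPTION;
    -- as x:
    GRANT SELECT ON t TO y;
    -- as the owner again: x still holds the option via z,
    -- so y's privilege should survive the cascade
    REVOKE GRANT OPTION FOR SELECT ON t FROM x CASCADE;
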
Tom Lane
9b2a237cee Fix typo in comment. 2012-08-19 22:56:17 -04:00
Tom Lane
092d7ded29 Allow OLD and NEW in multi-row VALUES within rules.
Now that we have LATERAL, it's fairly painless to allow this case, which
was left as a TODO in the original multi-row VALUES implementation.
2012-08-19 14:12:16 -04:00
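
For example, a rule action that uses NEW (or OLD) inside a multi-row VALUES
list is now accepted (names hypothetical):

    CREATE RULE t_audit AS ON UPDATE TO t
        DO ALSO INSERT INTO t_log (label, val)
            VALUES ('old', OLD.x), ('new', NEW.x);
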
Tom Lane
084a29c94f Another round of planner fixes for LATERAL.
Formerly, subquery pullup had no need to examine other entries in the range
table, since they could not contain any references to the subquery being
pulled up.  That's no longer true with LATERAL, so now we need to be able
to visit rangetable subexpressions to replace Vars referencing the
pulled-up subquery.  Also, this means that extract_lateral_references must
be unsurprised at encountering lateral PlaceHolderVars, since such might be
created when pulling up a subquery that's underneath an outer join with
respect to the lateral reference.
2012-08-18 14:10:17 -04:00
Tom Lane
f5983923d8 Allow create_index_paths() to consider multiple join bitmapscan paths.
In the initial cut at the "parameterized paths" feature, I'd simplified
create_index_paths() to the point where it would only generate a single
parameterized bitmap path per relation.  Experimentation with an example
supplied by Josh Berkus convinces me that that's not good enough: we really
need to consider a bitmap path for each possible outer relation.  Otherwise
we have regressions relative to pre-9.2 versions, in which the planner
picks a plain indexscan where it should have used a bitmap scan in queries
involving three or more tables.  Indeed, after fixing this, several queries
in the regression tests show improved plans as a result of using bitmap not
plain indexscans.
2012-08-16 13:03:54 -04:00
Heikki Linnakangas
317dd55a9c Add SP-GiST support for range types.
The implementation is a quad-tree, largely copied from the quad-tree
implementation for points. The lower and upper bound of ranges are the 2d
coordinates, with some extra code to handle empty ranges.

I left out support for the adjacent operator, -|-, from the original patch.
Not because there was necessarily anything wrong with it, but because it was
more complicated than the other operators and I only have limited time for
reviewing. That will follow as a separate patch.

Alexander Korotkov, reviewed by Jeff Davis and me.
2012-08-16 14:30:45 +03:00
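
A minimal sketch of the new capability (table name hypothetical): a range
column can now carry an SP-GiST index and use it for the supported operators
(not -|-, per the note above).

    CREATE TABLE reservations (during tsrange);
    CREATE INDEX reservations_during_idx ON reservations USING spgist (during);

    SELECT * FROM reservations
    WHERE during && tsrange('2012-08-16 00:00', '2012-08-17 00:00');
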
Tom Lane
4c5316931f Fix rescan logic in nodeCtescan.
The previous coding essentially assumed that nodes would be rescanned in
the same order they were initialized in; or at least that the "leader" of
a group of CTEscans would be rescanned before any others were required to
execute.  Unfortunately, that isn't even a little bit true.  It's possible
to devise queries in which the leader isn't rescanned until other CTEscans
on the same CTE have run to completion, or even in which the leader never
gets a rescan call at all.

The fix makes the leader specially responsible only for initial creation
and final destruction of the tuplestore; rescan resets are now a
symmetrically shared responsibility.  This means that we might reset the
tuplestore multiple times when restarting a plan subtree containing
multiple CTEscans; but resetting an already-empty tuplestore is cheap
enough that that doesn't seem like a problem.

Per report from Adam Mackler; the new regression test cases are based on
his example query.

Back-patch to 8.4 where CTE scans were introduced.
2012-08-15 19:02:33 -04:00
Tom Lane
17351fce4e Prevent access to external files/URLs via XML entity references.
xml_parse() would attempt to fetch external files or URLs as needed to
resolve DTD and entity references in an XML value, thus allowing
unprivileged database users to attempt to fetch data with the privileges
of the database server.  While the external data wouldn't get returned
directly to the user, portions of it could be exposed in error messages
if the data didn't parse as valid XML; and in any case the mere ability
to check existence of a file might be useful to an attacker.

The ideal solution to this would still allow fetching of references that
are listed in the host system's XML catalogs, so that documents can be
validated according to installed DTDs.  However, doing that with the
available libxml2 APIs appears complex and error-prone, so we're not going
to risk it in a security patch that necessarily hasn't gotten wide review.
So this patch merely shuts off all access, causing any external fetch to
silently expand to an empty string.  A future patch may improve this.

In HEAD and 9.2, also suppress warnings about undefined entities, which
would otherwise occur as a result of not loading referenced DTDs.  Previous
branches don't show such warnings anyway, due to different error handling
arrangements.

Credit to Noah Misch for first reporting the problem, and for much work
towards a solution, though this simplistic approach was not his preference.
Also thanks to Daniel Veillard for consultation.

Security: CVE-2012-3489
2012-08-14 18:31:16 -04:00
Tom Lane
c1774d2c81 More fixes for planner's handling of LATERAL.
Re-allow subquery pullup for LATERAL subqueries, except when the subquery
is below an outer join and contains lateral references to relations outside
that outer join.  If we pull up in such a case, we risk introducing lateral
cross-references into outer joins' ON quals, which is something the code is
entirely unprepared to cope with right now; and I'm not sure it'll ever be
worth coping with.

Support lateral refs in VALUES (this seems to be the only additional path
type that needs such support as a consequence of re-allowing subquery
pullup).

Put in a slightly hacky fix for joinpath.c's refusal to consider
parameterized join paths even when there cannot be any unparameterized
ones.  This was causing "could not devise a query plan for the given query"
failures in queries involving more than two FROM items.

Put in an even more hacky fix for distribute_qual_to_rels() being unhappy
with join quals that contain references to rels outside their syntactic
scope; which is to say, disable that test altogether.  Need to think about
how to preserve some sort of debugging cross-check here, while not
expending more cycles than befits a debugging cross-check.
2012-08-12 16:01:26 -04:00
Tom Lane
e76af54137 Fix some issues with LATERAL(SELECT UNION ALL SELECT).
The LATERAL marking has to be propagated down to the UNION leaf queries
when we pull them up.  Also, fix the formerly stubbed-off
set_append_rel_pathlist().  It does already have enough smarts to cope with
making a parameterized Append path at need; it just has to not assume that
there *must* be an unparameterized path.
2012-08-11 18:42:56 -04:00
Tom Lane
eaccfded98 Centralize the logic for detecting misplaced aggregates, window funcs, etc.
Formerly we relied on checking after-the-fact to see if an expression
contained aggregates, window functions, or sub-selects when it shouldn't.
This is grotty, easily forgotten (indeed, we had forgotten to teach
DefineIndex about rejecting window functions), and none too efficient
since it requires extra traversals of the parse tree.  To improve matters,
define an enum type that classifies all SQL sub-expressions, store it in
ParseState to show what kind of expression we are currently parsing, and
make transformAggregateCall, transformWindowFuncCall, and transformSubLink
check the expression type and throw error if the type indicates the
construct is disallowed.  This allows removal of a large number of ad-hoc
checks scattered around the code base.  The enum type is sufficiently
fine-grained that we can still produce error messages of at least the
same specificity as before.

Bringing these error checks together revealed that we'd been none too
consistent about phrasing of the error messages, so standardize the wording
a bit.

Also, rewrite checking of aggregate arguments so that it requires only one
traversal of the arguments, rather than up to three as before.

In passing, clean up some more comments left over from add_missing_from
support, and annotate some tests that I think are dead code now that that's
gone.  (I didn't risk actually removing said dead code, though.)
2012-08-10 11:36:15 -04:00
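
For example, an aggregate in a clause where it is disallowed is now rejected
up front with a context-specific message, roughly:

    SELECT a FROM t WHERE sum(b) > 10;
    -- ERROR:  aggregate functions are not allowed in WHERE
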
Tom Lane
5ebaaa4944 Implement SQL-standard LATERAL subqueries.
This patch implements the standard syntax of LATERAL attached to a
sub-SELECT in FROM, and also allows LATERAL attached to a function in FROM,
since set-returning function calls are expected to be one of the principal
use-cases.

The main change here is a rewrite of the mechanism for keeping track of
which relations are visible for column references while the FROM clause is
being scanned.  The parser "namespace" lists are no longer lists of bare
RTEs, but are lists of ParseNamespaceItem structs, which carry an RTE
pointer as well as some visibility-controlling flags.  Aside from
supporting LATERAL correctly, this lets us get rid of the ancient hacks
that required rechecking subqueries and JOIN/ON and function-in-FROM
expressions for invalid references after they were initially parsed.
Invalid column references are now always correctly detected on sight.

In passing, remove assorted parser error checks that are now dead code by
virtue of our having gotten rid of add_missing_from, as well as some
comments that are obsolete for the same reason.  (It was mainly
add_missing_from that caused so much fudging here in the first place.)

The planner support for this feature is very minimal, and will be improved
in future patches.  It works well enough for testing purposes, though.

catversion bump forced due to new field in RangeTblEntry.
2012-08-07 19:02:54 -04:00
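
A sketch of the two supported forms (names hypothetical): LATERAL attached to
a sub-SELECT, and LATERAL attached to a set-returning function in FROM.

    SELECT p.id, best.max_score
    FROM products p,
         LATERAL (SELECT max(score) AS max_score
                  FROM reviews r
                  WHERE r.product_id = p.id) AS best;

    SELECT t.id, s.n
    FROM t, LATERAL generate_series(1, t.cnt) AS s(n);
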
Bruce Momjian
ac78c4178b Fix to_char(), to_date(), and to_timestamp() to handle negative/BC
century specifications just like positive/AD centuries.  Previously the
behavior was either wrong or inconsistent with positive/AD handling.

Centuries without years now always assume the first year of the century,
which is now documented.
2012-08-07 13:34:44 -04:00
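
For example, a century specification without a year should now resolve to the
first year of that century, per the documentation change mentioned above:

    SELECT to_date('20', 'CC');   -- expected result: 1901-01-01
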
Tom Lane
3152bf722f Fix bugs with parsing signed hh:mm and hh:mm:ss fields in interval input.
DecodeInterval() failed to honor the "range" parameter (the special SQL
syntax for indicating which fields appear in the literal string) if the
time was signed.  This seems inappropriate, so make it work like the
not-signed case.  The inconsistency was introduced in my commit
f867339c01, which as noted in its log message
was only really focused on making SQL-compliant literals work per spec.
Including a sign here is not per spec, but if we're going to allow it
then it's reasonable to expect it to work like the not-signed case.

Also, remove bogus setting of tmask, which caused subsequent processing to
think that what had been given was a timezone and not an hh:mm(:ss) field,
thus confusing checks for redundant fields.  This seems to be an aboriginal
mistake in Lockhart's commit 2cf1642461.

Add regression test cases to illustrate the changed behaviors.

Back-patch as far as 8.4, where support for spec-compliant interval
literals was added.

Range problem reported and diagnosed by Amit Kapila, tmask problem by me.
2012-08-03 17:40:43 -04:00
Tom Lane
f6ce81f55a Fix WITH attached to a nested set operation (UNION/INTERSECT/EXCEPT).
Parse analysis neglected to cover the case of a WITH clause attached to an
intermediate-level set operation; it only handled WITH at the top level
or WITH attached to a leaf-level SELECT.  Per report from Adam Mackler.

In HEAD, I rearranged the order of SelectStmt's fields to put withClause
with the other fields that can appear on non-leaf SelectStmts.  In back
branches, leave it alone to avoid a possible ABI break for third-party
code.

Back-patch to 8.4 where WITH support was added.
2012-07-31 17:56:21 -04:00
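
A sketch of the previously failing shape: a WITH clause attached to an inner
set operation rather than to a leaf SELECT or to the top level.

    (WITH w AS (SELECT 1 AS x)
     SELECT x FROM w
     UNION ALL
     SELECT 2)
    UNION ALL
    SELECT 3;
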
Tom Lane
af026b5d9b Fix longstanding crash-safety bug with newly-created-or-reset sequences.
If a crash occurred immediately after the first nextval() call for a serial
column, WAL replay would restore the sequence to a state in which it
appeared that no nextval() had been done, thus allowing the first sequence
value to be returned again by the next nextval() call; as reported in
bug #6748 from Xiangming Mei.

More generally, the problem would occur if an ALTER SEQUENCE was executed
on a freshly created or reset sequence.  (The manifestation with serial
columns was introduced in 8.2 when we added an ALTER SEQUENCE OWNED BY step
to serial column creation.)  The cause is that sequence creation attempted
to save one WAL entry by writing out a WAL record that made it appear that
the first nextval() had already happened (viz, with is_called = true),
while marking the sequence's in-database state with log_cnt = 1 to show
that the first nextval() need not emit a WAL record.  However, ALTER
SEQUENCE would emit a new WAL entry reflecting the actual in-database state
(with is_called = false).  Then, nextval would allocate the first sequence
value and set is_called = true, but it would trust the log_cnt value and
not emit any WAL record.  A crash at this point would thus restore the
sequence to its post-ALTER state, causing the next nextval() call to return
the first sequence value again.

To fix, get rid of the idea of logging an is_called status different from
reality.  This means that the first nextval-driven WAL record will happen
at the first nextval call not the second, but the marginal cost of that is
pretty negligible.  In addition, make sure that ALTER SEQUENCE resets
log_cnt to zero in any case where it touches sequence parameters that
affect future nextval results.  This will result in some user-visible
changes in the contents of a sequence's log_cnt column, as reflected in the
patch's regression test changes; but no application should be depending on
that anyway, since it was already true that log_cnt changes rather
unpredictably depending on checkpoint timing.

In addition, make some basically-cosmetic improvements to get rid of
sequence.c's undesirable intimacy with page layout details.  It was always
really trying to WAL-log the contents of the sequence tuple, so we should
have it do that directly using a HeapTuple's t_data and t_len, rather than
backing into it with some magic assumptions about where the tuple would be
on the sequence's page.

Back-patch to all supported branches.
2012-07-25 17:42:23 -04:00
Alvaro Herrera
d7b47e5155 Change syntax of new CHECK NO INHERIT constraints
The initially implemented syntax, "CHECK NO INHERIT (expr)", was not
deemed very good, so switch to "CHECK (expr) NO INHERIT" instead.  This
way it looks similar to SQL-standard constraint attributes.

Backport to 9.2, where the new syntax and feature were introduced.

Per discussion.
2012-07-24 16:01:32 -04:00
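
The adjusted placement, with hypothetical names:

    -- new form:
    ALTER TABLE parent ADD CONSTRAINT parent_chk CHECK (f1 > 0) NO INHERIT;
    -- formerly spelled: CHECK NO INHERIT (f1 > 0)
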
Tom Lane
b71258af56 Fix name collision between concurrent regression tests.
Commit f5bcd398ad introduced a test using
a table named "circles" in inherit.sql.  Unfortunately, the concurrently
executed constraints test was already using that table name, so the
parallel regression tests would sometimes fail.  Rename table to dodge
the problem.  Per buildfarm.
2012-07-22 00:01:19 -04:00
Alvaro Herrera
f5bcd398ad connoinherit may be true only for CHECK constraints
The code was setting it true for other constraints, which is
bogus.  Doing so caused bogus catalog entries for such constraints, and
in particular caused an error to be raised when trying to drop a
constraint of a type other than CHECK from a table that has children,
as reported in bug #6712.

In 9.2, additionally ignore connoinherit=true for other constraint
types, to avoid having to force initdb; existing databases might already
contain bogus catalog entries.

Includes a catversion bump (in HEAD only).

Bug report from Miroslav Šulc
Analysis from Amit Kapila and Noah Misch; Amit also contributed the patch.
2012-07-20 14:08:07 -04:00
Tom Lane
8e617e29aa Fix whole-row Var evaluation to cope with resjunk columns (again).
When a whole-row Var is reading the result of a subquery, we need it to
ignore any "resjunk" columns that the subquery might have evaluated for
GROUP BY or ORDER BY purposes.  We've hacked this area before, in commit
68e40998d0, but that fix only covered
whole-row Vars of named composite types, not those of RECORD type; and it
was mighty klugy anyway, since it just assumed without checking that any
extra columns in the result must be resjunk.  A proper fix requires getting
hold of the subquery's targetlist so we can actually see which columns are
resjunk (whereupon we can use a JunkFilter to get rid of them).  So bite
the bullet and add some infrastructure to make that possible.

Per report from Andrew Dunstan and additional testing by Merlin Moncure.
Back-patch to all supported branches.  In 8.3, also back-patch commit
292176a118, which for some reason I had
not done at the time, but it's a prerequisite for this change.
2012-07-20 13:10:58 -04:00
Robert Haas
3a0e4d36eb Make new event trigger facility actually do something.
Commit 3855968f32 added syntax, pg_dump,
psql support, and documentation, but the triggers didn't actually fire.
With this commit, they now do.  This is still a pretty basic facility
overall because event triggers do not get a whole lot of information
about what the user is trying to do unless you write them in C; and
there's still no option to fire them anywhere except at the very
beginning of the execution sequence, but it's better than nothing,
and a good building block for future work.

Along the way, add a regression test for ALTER LARGE OBJECT, since
testing of event triggers reveals that we haven't got one.

Dimitri Fontaine and Robert Haas
2012-07-20 11:39:01 -04:00
Robert Haas
3855968f32 Syntax support and documentation for event triggers.
They don't actually do anything yet; that will get fixed in a
follow-on commit.  But this gets the basic infrastructure in place,
including CREATE/ALTER/DROP EVENT TRIGGER; support for COMMENT,
SECURITY LABEL, and ALTER EXTENSION .. ADD/DROP EVENT TRIGGER;
pg_dump and psql support; and documentation for the anticipated
initial feature set.

Dimitri Fontaine, with review and a bunch of additional hacking by me.
Thom Brown extensively reviewed earlier versions of this patch set,
but there's not a whole lot of that code left in this commit, as it
turns out.
2012-07-18 10:16:16 -04:00
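
A minimal sketch of the syntax added here (function body and names are
hypothetical; the triggers only start firing with the follow-on commit listed
above):

    CREATE FUNCTION notice_ddl() RETURNS event_trigger
    LANGUAGE plpgsql AS $$
    BEGIN
        RAISE NOTICE 'command tag: %', TG_TAG;
    END $$;

    CREATE EVENT TRIGGER notice_ddl_trigger
        ON ddl_command_start
        EXECUTE PROCEDURE notice_ddl();
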
Peter Eisentraut
a84bf4922e Avoid extra newlines in XML mapping in table forest mode
found by P. Broennimann
2012-07-12 23:52:50 +03:00
Tom Lane
84a42560c8 Add array_remove() and array_replace() functions.
These functions support removing or replacing array element value(s)
matching a given search value.  Although intended mainly to support a
future array-foreign-key feature, they seem useful in their own right.

Marco Nenciarini and Gabriele Bartolini, reviewed by Alex Hunsaker
2012-07-11 13:59:35 -04:00
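
For example:

    SELECT array_remove(ARRAY[1,2,3,2], 2);      -- {1,3}
    SELECT array_replace(ARRAY[1,2,3,2], 2, 9);  -- {1,9,3,9}
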
Tom Lane
628cbb50ba Re-implement extraction of fixed prefixes from regular expressions.
To generate btree-indexable conditions from regex WHERE conditions (such as
WHERE indexed_col ~ '^foo'), we need to be able to identify any fixed
prefix that a regex might have; that is, find any string that must be a
prefix of all strings satisfying the regex.  We used to do that with
entirely ad-hoc code that looked at the source text of the regex.  It
didn't know very much about regex syntax, which mostly meant that it would
fail to identify some optimizable cases; but Viktor Rosenfeld reported that
it would produce actively wrong answers for quantified parenthesized
subexpressions, such as '^(foo)?bar'.  Rather than trying to extend the
ad-hoc code to cover this, let's get rid of it altogether in favor of
identifying prefixes by examining the compiled form of a regex.

To do this, I've added a new entry point "pg_regprefix" to the regex library;
hopefully it is defined in a sufficiently general fashion that it can remain
in the library when/if that code gets split out as a standalone project.

Since this bug has been there for a very long time, this fix needs to get
back-patched.  However it depends on some other recent commits (particularly
the addition of wchar-to-database-encoding conversion), so I'll commit this
separately and then go to work on back-porting the necessary fixes.
2012-07-10 14:54:37 -04:00
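
The optimization in question, sketched with hypothetical names: a
left-anchored regex lets the planner derive a range condition on a
pattern-ops btree index from the extracted fixed prefix.

    CREATE INDEX t_col_pattern_idx ON t (col text_pattern_ops);
    EXPLAIN SELECT * FROM t WHERE col ~ '^foo';
    -- the fixed prefix 'foo' should yield an index condition roughly like
    -- col ~>=~ 'foo' AND col ~<~ 'fop', with the regex kept as a filter
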
Robert Haas
d7c734841b Reduce messages about implicit indexes and sequences to DEBUG1.
Per recent discussion on pgsql-hackers, these messages are too
chatty for most users.
2012-07-04 20:35:29 -04:00
Peter Eisentraut
2b44306315 Assorted message style improvements 2012-07-02 21:12:46 +03:00
Robert Haas
0caa0d04db Make DROP FUNCTION hint more informative.
If you decide you want to take the hint, this gives you something you
can paste right back to the server.

Dean Rasheed
2012-06-26 13:33:23 -04:00
Tom Lane
fe3db74002 Share RI trigger code between NO ACTION and RESTRICT cases.
These triggers are identical except for whether ri_Check_Pk_Match is to be
called, so factor out the common code to save a couple hundred lines.

Also, eliminate null-column checks in ri_Check_Pk_Match, since they
duplicate checks in the calling functions and require unnecessary
complication in its API.

Simplify the way code is shared between RI_FKey_check_ins and
RI_FKey_check_upd, too.
2012-06-19 14:31:54 -04:00
Tom Lane
e8c9fd5fdf Allow ON UPDATE/DELETE SET DEFAULT plans to be cached.
Once upon a time, somebody was worried that cached RI plans wouldn't get
remade with new default values after ALTER TABLE ... SET DEFAULT, so they
didn't allow caching of plans for ON UPDATE/DELETE SET DEFAULT actions.
That time is long gone, though (and even at the time I doubt this was the
greatest hazard posed by ALTER TABLE...).  So allow these triggers to cache
their plans just like the others.

The cache_plan argument to ri_PlanCheck is now vestigial, since there
are no callers that don't pass "true"; but I left it alone in case there
is any future need for it.
2012-06-18 19:37:23 -04:00
Tom Lane
c75be2ad60 Change ON UPDATE SET NULL/SET DEFAULT referential actions to meet SQL spec.
Previously, when executing an ON UPDATE SET NULL or SET DEFAULT action for
a multicolumn MATCH SIMPLE foreign key constraint, we would set only those
referencing columns corresponding to referenced columns that were changed.
This is what the SQL92 standard said to do --- but more recent versions
of the standard say that all referencing columns should be set to null or
their default values, no matter exactly which referenced columns changed.
At least for SET DEFAULT, that is clearly saner behavior.  It's somewhat
debatable whether it's an improvement for SET NULL, but it appears that
other RDBMSes read the spec this way.  So let's do it like that.

This is a release-notable behavioral change, although considering that
our documentation already implied it was done this way, the lack of
complaints suggests few people use such cases.
2012-06-18 12:12:52 -04:00
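
A sketch of the behavioral change with a hypothetical two-column foreign key:

    CREATE TABLE pk (a int, b int, PRIMARY KEY (a, b));
    CREATE TABLE fk (x int, y int,
                     FOREIGN KEY (x, y) REFERENCES pk (a, b)
                         ON UPDATE SET NULL);
    -- Updating only pk.a now sets both fk.x and fk.y to null in matching
    -- rows, rather than just fk.x.
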
Tom Lane
f5297bdfe4 Refer to the default foreign key match style as MATCH SIMPLE internally.
Previously we followed the SQL92 wording, "MATCH <unspecified>", but since
SQL99 there's been a less awkward way to refer to the default style.

In addition to the code changes, pg_constraint.confmatchtype now stores
this match style as 's' (SIMPLE) rather than 'u' (UNSPECIFIED).  This
doesn't affect pg_dump or psql because they use pg_get_constraintdef()
to reconstruct foreign key definitions.  But other client-side code might
examine that column directly, so this change will have to be marked as
an incompatibility in the 9.3 release notes.
2012-06-17 20:16:44 -04:00
Peter Eisentraut
15b1918e7d Improve reporting of permission errors for array types
Because permissions are assigned to element types, not array types,
complaining about permission denied on an array type would be
misleading to users.  So adjust the reporting to refer to the element
type instead.

In order not to duplicate the required logic in two dozen places,
refactor the permission denied reporting for types a bit.

pointed out by Yeb Havinga during the review of the type privilege
feature
2012-06-15 22:55:03 +03:00
Tom Lane
80edfd7659 Revisit error message details for JSON input parsing.
Instead of identifying error locations only by line number (which could
be entirely unhelpful with long input lines), provide a fragment of the
input text too, placing this info in a new CONTEXT entry.  Make the
error detail messages conform more closely to style guidelines, fix
failure to expose some of them for translation, ensure compiler can
check formats against supplied parameters.
2012-06-13 19:43:35 -04:00
Simon Riggs
28ac797287 Revert error message on GLOBAL/LOCAL pending further discussion 2012-06-10 08:41:01 +01:00
Simon Riggs
72335a2015 Add ERROR msg for GLOBAL/LOCAL TEMP is not yet implemented 2012-06-09 16:35:26 +01:00
Tom Lane
3dd8e59681 Fix bogus handling of control characters in json_lex_string().
The original coding misbehaved if "char" is signed, and also made the
extremely poor decision to print control characters literally when trying
to complain about them.  Report and patch by Shigeru Hanada.

In passing, also fix core dump risk in report_parse_error() should the
parse state be something other than what it expects.
2012-06-04 20:43:57 -04:00
Tom Lane
2a4c46e0ba Fix array overrun in regex code.
zaptreesubs() was coded to unconditionally reset a capture subre's
corresponding pmatch[] entry.  However, in regexes without backrefs, that
array is caller-supplied and might not have as many entries as the regex
has capturing parens.  So check the array length and do nothing if there
is no corresponding entry, much as subset() does.  Failure to check this
resulted in a stack clobber in the case reported by Marko Kreen.

This bug appears to have been latent in the regex library from the
beginning.  It was not exposed because find() called dissect() not
cdissect(), and the dissect() code path didn't ever call zaptreesubs()
(formerly zapmem()).  When I unified dissect() and cdissect() in commit
4dd78bf37a, the problem was exposed.

Now that I've seen this, I'm rather suspicious that we might need to
back-patch it; but will refrain for now, for lack of evidence that
the case can be hit in the previous coding.
2012-05-24 13:56:16 -04:00
Peter Eisentraut
f1f6737e15 Fix incorrect logic in JSON number lexer
Detectable by gcc -Wlogical-op.

Add two regression test cases that would previously allow incorrect
values to pass.
2012-05-20 02:24:46 +03:00
Tom Lane
488c6dd170 Improve error message for ALTER COLUMN TYPE coercion failure.
Per recent discussion, the error message for this was actually a trifle
inaccurate, since it said "cannot be cast" which might be incorrect.
Adjust that wording, and add a HINT suggesting that a USING clause might
be needed.
2012-05-16 07:28:25 -04:00
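
The sort of rewrite the new HINT points toward (names hypothetical):

    ALTER TABLE t ALTER COLUMN flag TYPE boolean USING flag <> 0;
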
Peter Eisentraut
c8e086795a Remove whitespace from end of lines
pgindent and perltidy should clean up the rest.
2012-05-15 22:19:41 +03:00
Tom Lane
809e7e21af Converge all SQL-level statistics timing values to float8 milliseconds.
This patch adjusts the core statistics views to match the decision already
taken for pg_stat_statements, that values representing elapsed time should
be represented as float8 and measured in milliseconds.  By using float8,
we are no longer tied to a specific maximum precision of timing data.
(Internally, it's still microseconds, but we could now change that without
needing changes at the SQL level.)

The columns affected are
pg_stat_bgwriter.checkpoint_write_time
pg_stat_bgwriter.checkpoint_sync_time
pg_stat_database.blk_read_time
pg_stat_database.blk_write_time
pg_stat_user_functions.total_time
pg_stat_user_functions.self_time
pg_stat_xact_user_functions.total_time
pg_stat_xact_user_functions.self_time

The first four of these are new in 9.2, so there is no compatibility issue
from changing them.  The others require a release note comment that they
are now double precision (and can show a fractional part) rather than
bigint as before; also their underlying statistics functions now match
the column definitions, instead of returning bigint microseconds.
2012-04-30 14:03:33 -04:00
Tom Lane
1dd89eadcd Rename I/O timing statistics columns to blk_read_time and blk_write_time.
This seems more consistent with the pre-existing choices for names of
other statistics columns.  Rename assorted internal identifiers to match.
2012-04-29 18:13:33 -04:00
Tom Lane
7c85aa39fc Fix oversight in recent parameterized-path patch.
bitmap_scan_cost_est() has to be able to cope with a BitmapOrPath, but
I'd taken a shortcut that didn't work for that case.  Noted by Heikki.
Add some regression tests since this area is evidently under-covered.
2012-04-26 14:17:44 -04:00
Tom Lane
d6d5f67b5b Modify create_index regression test to avoid intermittent failures.
We have been seeing intermittent buildfarm failures due to a query
sometimes not using an index-only scan plan, because a background
auto-ANALYZE prevented the table's all-visible bits from being set
immediately, thereby causing the estimated cost of an index-only scan
to go up considerably.  Adjust the test case so that a bitmap index scan is
preferred instead, which serves equally well for the purpose the test case
is actually meant for.  (Of course, it would be better to eliminate the
interference from auto-ANALYZE, but I see no low-risk way to do that,
so any such fix will have to be left for 9.3 or later.)
2012-04-25 22:57:48 -04:00
Tom Lane
9fa82c9809 Fix planner's handling of RETURNING lists in writable CTEs.
setrefs.c failed to do "rtoffset" adjustment of Vars in RETURNING lists,
which meant they were left with the wrong varnos when the RETURNING list
was in a subquery.  That was never possible before writable CTEs, of
course, but now it's broken.  The executor fails to notice any problem
because ExecEvalVar just references the ecxt_scantuple for any normal
varno; but EXPLAIN breaks when the varno is wrong, as illustrated in a
recent complaint from Bartosz Dmytrak.

Since the eventual rtoffset of the subquery is not known at the time
we are preparing its plan node, the previous scheme of executing
set_returning_clause_references() at that time cannot handle this
adjustment.  Fortunately, it turns out that we don't really need to do it
that way, because all the needed information is available during normal
setrefs.c execution; we just have to dig it out of the ModifyTable node.
So, do that, and get rid of the kluge of early setrefs processing of
RETURNING lists.  (This is a little bit of a cheat in the case of inherited
UPDATE/DELETE, because we are not passing a "root" struct that corresponds
exactly to what the subplan was built with.  But that doesn't matter, and
anyway this is less ugly than early setrefs processing was.)

Back-patch to 9.1, where the problem became possible to hit.
2012-04-25 20:20:33 -04:00
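
The affected shape is a RETURNING list inside a writable CTE, for example
(names hypothetical):

    EXPLAIN
    WITH moved AS (
        DELETE FROM queue WHERE done RETURNING id, payload
    )
    INSERT INTO archive (id, payload)
    SELECT id, payload FROM moved;
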
Robert Haas
3ce7f18e92 Casts to or from a domain type are ignored; warn and document.
Prohibiting this outright would break dumps taken from older versions
that contain such casts, which would create far more pain than is
justified here.

Per report by Jaime Casanova and subsequent discussion.
2012-04-24 09:20:53 -04:00
Robert Haas
5d4b60f2f2 Lots of doc corrections.
Josh Kupershmidt
2012-04-23 22:43:09 -04:00
Tom Lane
33e99153e9 Use fuzzy not exact cost comparison for the final tie-breaker in add_path.
Instead of an exact cost comparison, use a fuzzy comparison with 1e-10
delta after all other path metrics have proved equal.  This is to avoid
having platform-specific roundoff behaviors determine the choice when
two paths are really the same to our cost estimators.  Adjust the
recently-added test case that made it obvious we had a problem here.
2012-04-21 00:51:14 -04:00
Alvaro Herrera
09ff76fcdb Recast "ONLY" column CHECK constraints as NO INHERIT
The original syntax wasn't universally loved, and it didn't allow its
use in CREATE TABLE, only in ALTER TABLE.  It now works everywhere, and
it also allows using ALTER TABLE ONLY to add an uninherited CHECK
constraint, per discussion.

The pg_constraint column has accordingly been renamed connoinherit.

This commit partly reverts some of the changes in
61d81bd28d, particularly some pg_dump and
psql bits, because now pg_get_constraintdef includes the necessary NO
INHERIT within the constraint definition.

Author: Nikhil Sontakke
Some tweaks by me
2012-04-20 23:56:57 -03:00
Tom Lane
5b7b5518d0 Revise parameterized-path mechanism to fix assorted issues.
This patch adjusts the treatment of parameterized paths so that all paths
with the same parameterization (same set of required outer rels) for the
same relation will have the same rowcount estimate.  We cache the rowcount
estimates to ensure that property, and hopefully save a few cycles too.
Doing this makes it practical for add_path_precheck to operate without
a rowcount estimate: it need only assume that paths with different
parameterizations never dominate each other, which is close enough to
true anyway for coarse filtering, because normally a more-parameterized
path should yield fewer rows thanks to having more join clauses to apply.

In add_path, we do the full nine yards of comparing rowcount estimates
along with everything else, so that we can discard parameterized paths that
don't actually have an advantage.  This fixes some issues I'd found with
add_path rejecting parameterized paths on the grounds that they were more
expensive than not-parameterized ones, even though they yielded many fewer
rows and hence would be cheaper once subsequent joining was considered.

To make the same-rowcounts assumption valid, we have to require that any
parameterized path enforce *all* join clauses that could be obtained from
the particular set of outer rels, even if not all of them are useful for
indexing.  This is required at both base scans and joins.  It's a good
thing anyway since the net impact is that join quals are checked at the
lowest practical level in the join tree.  Hence, discard the original
rather ad-hoc mechanism for choosing parameterization joinquals, and build
a better one that has a more principled rule for when clauses can be moved.
The original rule was actually buggy anyway for lack of knowledge about
which relations are part of an outer join's outer side; getting this right
requires adding an outer_relids field to RestrictInfo.
2012-04-19 15:53:47 -04:00
Tom Lane
e3ffd05b02 Weaken the planner's tests for relevant joinclauses.
We should be willing to cross-join two small relations if that allows us
to use an inner indexscan on a large relation (that is, the potential
indexqual for the large table requires both smaller relations).  This
worked in simple cases but fell apart as soon as there was a join clause
to a fourth relation, because the existence of any two-relation join clause
caused the planner to not consider clauseless joins between other base
relations.  The added regression test shows an example case adapted from
a recent complaint from Benoit Delbosc.

Adjust have_relevant_joinclause, have_relevant_eclass_joinclause, and
has_relevant_eclass_joinclause to consider that a join clause mentioning
three or more relations is sufficient grounds for joining any subset of
those relations, even if we have to do so via a cartesian join.  Since such
clauses are relatively uncommon, this shouldn't affect planning speed on
typical queries; in fact it should help a bit, because the latter two
functions in particular get significantly simpler.

Although this is arguably a bug fix, I'm not going to risk back-patching
it, since it might have currently-unforeseen consequences.
2012-04-13 16:07:17 -04:00
Peter Eisentraut
c0cc526e8b Rename bytea_agg to string_agg and add delimiter argument
Per mailing list discussion, we would like to keep the bytea functions
parallel to the text functions, so rename bytea_agg to string_agg,
which already exists for text.

Also, to satisfy the rule that we don't want aggregate functions of
the same name with a different number of arguments, add a delimiter
argument, just like string_agg for text already has.
2012-04-13 21:36:59 +03:00
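
For example, with the renamed aggregate and its new delimiter argument
(names hypothetical):

    SELECT string_agg(chunk, ''::bytea) FROM blobs;   -- bytea flavor
    SELECT string_agg(name, ', ') FROM users;         -- text flavor, as before
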
Tom Lane
880bfc3287 Silently ignore any nonexistent schemas that are listed in search_path.
Previously we attempted to throw an error or at least warning for missing
schemas, but this was done inconsistently because of implementation
restrictions (in many cases, GUC settings are applied outside transactions
so that we can't do system catalog lookups).  Furthermore, there were
exceptions to the rule even in the beginning, and we'd been poking more
and more holes in it as time went on, because it turns out that there are
lots of use-cases for having some irrelevant items in a common search_path
value.  It seems better to just adopt a philosophy similar to what's always
been done with Unix PATH settings, wherein nonexistent or unreadable
directories are silently ignored.

This commit also fixes the documentation to point out that schemas for
which the user lacks USAGE privilege are silently ignored.  That's always
been true but was previously not documented.

This is mostly in response to Robert Haas' complaint that 9.1 started to
throw errors or warnings for missing schemas in cases where prior releases
had not.  We won't adopt such a significant behavioral change in a back
branch, so something different will be needed in 9.1.
2012-04-11 12:02:50 -04:00
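
For example, this now succeeds silently even though one listed schema does
not exist (names hypothetical):

    SET search_path = app, does_not_exist, public;
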
Tom Lane
263d9de66b Allow statistics to be collected for foreign tables.
ANALYZE now accepts foreign tables and allows the table's FDW to control
how the sample rows are collected.  (But only manual ANALYZEs will touch
foreign tables, for the moment, since among other things it's not very
clear how to handle remote permissions checks in an auto-analyze.)

contrib/file_fdw is extended to support this.

Etsuro Fujita, reviewed by Shigeru Hanada, some further tweaking by me.
2012-04-06 15:02:35 -04:00
Simon Riggs
8cb53654db Add DROP INDEX CONCURRENTLY [IF EXISTS], uses ShareUpdateExclusiveLock 2012-04-06 10:21:40 +01:00
Robert Haas
21cc529698 checkopint -> checkpoint
Report by Guillaume Lelarge.
2012-04-05 21:37:33 -04:00
Robert Haas
b736aef2ec Publish checkpoint timing information to pg_stat_bgwriter.
Greg Smith, Peter Geoghegan, and Robert Haas
2012-04-05 14:04:37 -04:00
Robert Haas
644828908f Expose track_iotiming data via the statistics collector.
Ants Aasma's original patch to add timing information for buffer I/O
requests exposed this data at the relation level, which was judged too
costly.  I've here exposed it at the database level instead.
2012-04-05 11:40:24 -04:00
Tom Lane
05dbd4a773 Fix plpgsql named-cursor-parameter feature for variable name conflicts.
The parser got confused if a cursor parameter had the same name as
a plpgsql variable.  Reported and diagnosed by Yeb Havinga, though
this isn't exactly his proposed fix.

Also, some mostly-but-not-entirely-cosmetic adjustments to the original
named-cursor-parameter patch, for code readability and better error
diagnostics.
2012-04-04 21:50:31 -04:00
Peter Eisentraut
38b9693fd9 Add support for renaming domain constraints 2012-04-03 08:11:51 +03:00
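
The new command, with hypothetical names:

    ALTER DOMAIN posint RENAME CONSTRAINT posint_check TO posint_positive;
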
Tom Lane
e8476f46fc Fix COPY FROM for null marker strings that correspond to invalid encoding.
The COPY documentation says "COPY FROM matches the input against the null
string before removing backslashes".  It is therefore reasonable to presume
that null markers like E'\\0' will work ... and they did, until someone put
the tests in the wrong order during microoptimization-driven rewrites.
Since then, we've been failing if the null marker is something that would
de-escape to an invalidly-encoded string.  Since null markers generally
need to be something that can't appear in the data, this represents a
nontrivial loss of functionality; it's surprising nobody noticed it earlier.

Per report from Jeff Davis.  Backpatch to 8.4 where this got broken.
2012-03-25 23:17:22 -04:00
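
For example, a null marker that de-escapes to an invalidly-encoded string
works again (file path and table hypothetical):

    COPY t FROM '/tmp/data.txt' WITH (FORMAT text, NULL E'\\0');
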
Tom Lane
8279eb4191 Fix planner's handling of outer PlaceHolderVars within subqueries.
For some reason, in the original coding of the PlaceHolderVar mechanism
I had supposed that PlaceHolderVars couldn't propagate into subqueries.
That is of course entirely possible.  When it happens, we need to treat
an outer-level PlaceHolderVar much like an outer Var or Aggref, that is
SS_replace_correlation_vars() needs to replace the PlaceHolderVar with
a Param, and then when building the finished SubPlan we have to provide
the PlaceHolderVar expression as an actual parameter for the SubPlan.
The handling of the contained expression is a bit delicate but it can be
treated exactly like an Aggref's expression.

In addition to the missing logic in subselect.c, prepjointree.c was failing
to search subqueries for PlaceHolderVars that need their relids adjusted
during subquery pullup.  It looks like everyplace else that touches
PlaceHolderVars got it right, though.

Per report from Mark Murawski.  In 9.1 and HEAD, queries affected by this
oversight would fail with "ERROR: Upper-level PlaceHolderVar found where
not expected".  But in 9.0 and 8.4, you'd silently get possibly-wrong
answers, since the value transmitted into the subquery wouldn't go to null
when it should.
2012-03-24 16:21:39 -04:00
Tom Lane
0339047bc9 Code review for protransform patches.
Fix loss of previous expression-simplification work when a transform
function fires: we must not simply revert to untransformed input tree.
Instead build a dummy FuncExpr node to pass to the transform function.
This has the additional advantage of providing a simpler, more uniform
API for transform functions.

Move documentation to a somewhat less buried spot, relocate some
poorly-placed code, be more wary of null constants and invalid typmod
values, add an opr_sanity check on protransform function signatures,
and some other minor cosmetic adjustments.

Note: although this patch touches pg_proc.h, no need for catversion
bump, because the changes are cosmetic and don't actually change the
intended catalog contents.
2012-03-23 17:29:57 -04:00
Tom Lane
9dbf2b7d75 Restructure SELECT INTO's parsetree representation into CreateTableAsStmt.
Making this operation look like a utility statement seems generally a good
idea, and particularly so in light of the desire to provide command
triggers for utility statements.  The original choice of representing it as
SELECT with an IntoClause appendage had metastasized into rather a lot of
places, unfortunately, so that this patch is a great deal more complicated
than one might at first expect.

In particular, keeping EXPLAIN working for SELECT INTO and CREATE TABLE AS
subcommands required restructuring some EXPLAIN-related APIs.  Add-on code
that calls ExplainOnePlan or ExplainOneUtility, or uses
ExplainOneQuery_hook, will need adjustment.

Also, the cases PREPARE ... SELECT INTO and CREATE RULE ... SELECT INTO,
which formerly were accepted though undocumented, are no longer accepted.
The PREPARE case can be replaced with use of CREATE TABLE AS EXECUTE.
The CREATE RULE case doesn't seem to have much real-world use (since the
rule would work only once before failing with "table already exists"),
so we'll not bother with that one.

Both SELECT INTO and CREATE TABLE AS still return a command tag of
"SELECT nnnn".  There was some discussion of returning "CREATE TABLE nnnn",
but for the moment backwards compatibility wins the day.

Andres Freund and Tom Lane
2012-03-19 21:38:12 -04:00
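
The suggested replacement for the no-longer-accepted PREPARE ... SELECT INTO
case, with hypothetical names:

    PREPARE q AS SELECT * FROM src WHERE active;
    -- instead of:  PREPARE q2 AS SELECT * INTO dst FROM src;
    CREATE TABLE dst AS EXECUTE q;
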
Tom Lane
dd4134ea56 Revisit handling of UNION ALL subqueries with non-Var output columns.
In commit 57664ed25e I tried to fix a bug
reported by Teodor Sigaev by making non-simple-Var output columns distinct
(by wrapping their expressions with dummy PlaceHolderVar nodes).  This did
not work too well.  Commit b28ffd0fcc fixed
some ensuing problems with matching to child indexes, but per a recent
report from Claus Stadler, constraint exclusion of UNION ALL subqueries was
still broken, because constant-simplification didn't handle the injected
PlaceHolderVars well either.  On reflection, the original patch was quite
misguided: there is no reason to expect that EquivalenceClass child members
will be distinct.  So instead of trying to make them so, we should ensure
that we can cope with the situation when they're not.

Accordingly, this patch reverts the code changes in the above-mentioned
commits (though the regression test cases they added stay).  Instead, I've
added assorted defenses to make sure that duplicate EC child members don't
cause any problems.  Teodor's original problem ("MergeAppend child's
targetlist doesn't match MergeAppend") is addressed more directly by
revising prepare_sort_from_pathkeys to let the parent MergeAppend's sort
list guide creation of each child's sort list.

In passing, get rid of add_sort_column; as far as I can tell, testing for
duplicate sort keys at this stage is dead code.  Certainly it doesn't
trigger often enough to be worth expending cycles on in ordinary queries.
And keeping the test would've greatly complicated the new logic in
prepare_sort_from_pathkeys, because comparing pathkey list entries against
a previous output array requires that we not skip any entries in the list.

Back-patch to 9.1, like the previous patches.  The only known issue in
this area that wasn't caused by the ill-advised previous patches was the
MergeAppend planning failure, which of course is not relevant before 9.1.
It's possible that we need some of the new defenses against duplicate child
EC entries in older branches, but until there's some clear evidence of that
I'm going to refrain from back-patching further.
2012-03-16 13:11:55 -04:00
Tom Lane
c6a11b89e4 Teach SPGiST to store nulls and do whole-index scans.
This patch fixes the other major compatibility-breaking limitation of
SPGiST, that it didn't store anything for null values of the indexed
column, and so could not support whole-index scans or "x IS NULL"
tests.  The approach is to create a wholly separate search tree for
the null entries, and use fixed "allTheSame" insertion and search
rules when processing this tree, instead of calling the index opclass
methods.  This way the opclass methods do not need to worry about
dealing with nulls.

Catversion bump is for pg_am updates as well as the change in on-disk
format of SPGiST indexes; there are some tweaks in SPGiST WAL records
as well.

Heavily rewritten version of a patch by Oleg Bartunov and Teodor Sigaev.
(The original also stored nulls separately, but it reused GIN code to do
so, which required undesirable compromises in the on-disk format, and
would likely lead to bugs due to the GIN code being required to work in
two very different contexts.)
2012-03-11 16:29:59 -04:00
Peter Eisentraut
39d74e346c Add support for renaming constraints
reviewed by Josh Berkus and Dimitri Fontaine
2012-03-10 20:19:13 +02:00
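
For example (names hypothetical):

    ALTER TABLE orders RENAME CONSTRAINT orders_chk TO orders_amount_chk;
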
Tom Lane
0e5e167aae Collect and use element-frequency statistics for arrays.
This patch improves selectivity estimation for the array <@, &&, and @>
(containment and overlaps) operators.  It enables collection of statistics
about individual array element values by ANALYZE, and introduces
operator-specific estimators that use these stats.  In addition,
ScalarArrayOpExpr constructs of the forms "const = ANY/ALL (array_column)"
and "const <> ANY/ALL (array_column)" are estimated by treating them as
variants of the containment operators.

Since we still collect scalar-style stats about the array values as a
whole, the pg_stats view is expanded to show both these stats and the
array-style stats in separate columns.  This creates an incompatible change
in how stats for tsvector columns are displayed in pg_stats: the stats
about lexemes are now displayed in the array-related columns instead of the
original scalar-related columns.

There are a few loose ends here, notably that it'd be nice to be able to
suppress either the scalar-style stats or the array-element stats for
columns for which they're not useful.  But the patch is in good enough
shape to commit for wider testing.

Alexander Korotkov, reviewed by Noah Misch and Nathan Boley
2012-03-03 20:20:57 -05:00
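Query shapes like the following (hypothetical table and column names) are the ones whose
estimates benefit from the new element statistics:

    -- containment and overlap operators
    SELECT * FROM documents WHERE tags @> ARRAY['postgres'];
    SELECT * FROM documents WHERE tags && ARRAY['gin', 'gist'];
    -- ScalarArrayOpExpr forms now estimated as containment variants
    SELECT * FROM documents WHERE 'postgres' = ANY (tags);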
Peter Eisentraut
b59ca98209 Allow CREATE TABLE (LIKE ...) from composite type
The only reason this didn't work before was that parserOpenTable()
rejects composite types.  So use relation_openrv() directly and
manually do the errposition() setup that parserOpenTable() does.
2012-03-03 16:03:05 +02:00
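A minimal sketch of what now works (names hypothetical); only column names and types
are copied from the type:

    CREATE TYPE address AS (street text, city text, zip text);
    CREATE TABLE mailing_list (LIKE address);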
Peter Eisentraut
6688d2878e Add COLLATION FOR expression
reviewed by Jaime Casanova
2012-03-02 21:12:16 +02:00
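A quick illustration (assuming the "de_DE" collation exists in the cluster):

    SELECT COLLATION FOR ('foo' COLLATE "de_DE");   -- returns "de_DE"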
Peter Eisentraut
9cfd800aab Add some enumeration commas, for consistency 2012-02-24 11:04:45 +02:00
Tom Lane
173e29aa5d Fix the general case of quantified regex back-references.
Cases where a back-reference is part of a larger subexpression that
is quantified have never worked in Spencer's regex engine, because
he used a compile-time transformation that neglected the need to
check the back-reference match in iterations before the last one.
(That was okay for capturing parens, and we still do it if the
regex has *only* capturing parens ... but it's not okay for backrefs.)

To make this work properly, we have to add an "iteration" node type
to the regex engine's vocabulary of sub-regex nodes.  Since this is a
moderately large change with a fair risk of introducing new bugs of its
own, apply to HEAD only, even though it's a fix for a longstanding bug.
2012-02-24 01:41:03 -05:00
Andrew Dunstan
83fcaffea2 Fix a couple of cases of JSON output.
First, as noted by Itagaki Takahiro, a datum of type JSON doesn't
need to be escaped. Second, ensure that numeric output not in
the form of a legal JSON number is quoted and escaped.
2012-02-20 15:01:03 -05:00
Tom Lane
5223f96d92 Fix regex back-references that are directly quantified with *.
The syntax "\n*", that is a backref with a * quantifier directly applied
to it, has never worked correctly in Spencer's library.  This has been an
open bug in the Tcl bug tracker since 2005:
https://sourceforge.net/tracker/index.php?func=detail&aid=1115587&group_id=10894&atid=110894

The core of the problem is in parseqatom(), which first changes "\n*" to
"\n+|" and then applies repeat() to the NFA representing the backref atom.
repeat() thinks that any arc leading into its "rp" argument is part of the
sub-NFA to be repeated.  Unfortunately, since parseqatom() already created
the arc that was intended to represent the empty bypass around "\n+", this
arc gets moved too, so that it now leads into the state loop created by
repeat().  Thus, what was supposed to be an "empty" bypass gets turned into
something that represents zero or more repetitions of the NFA representing
the backref atom.  In the original example, in place of
	^([bc])\1*$
we now have something that acts like
	^([bc])(\1+|[bc]*)$
At runtime, the branch involving the actual backref fails, as it's supposed
to, but then the other branch succeeds anyway.

We could no doubt fix this by some rearrangement of the operations in
parseqatom(), but that code is plenty ugly already, and what's more the
whole business of converting "x*" to "x+|" probably needs to go away to fix
another problem I'll mention in a moment.  Instead, this patch suppresses
the *-conversion when the target is a simple backref atom, leaving the case
of m == 0 to be handled at runtime.  This makes the patch in regcomp.c a
one-liner, at the cost of having to tweak cbrdissect() a little.  In the
event I went a bit further than that and rewrote cbrdissect() to check all
the string-length-related conditions before it starts comparing characters.
It seems a bit stupid to possibly iterate through many copies of an
n-character backreference, only to fail at the end because the target
string's length isn't a multiple of n --- we could have found that out
before starting.  The existing coding could only be a win if integer
division is hugely expensive compared to character comparison, but I don't
know of any modern machine where that might be true.

This does not fix all the problems with quantified back-references.  In
particular, the code is still broken for back-references that appear within
a larger expression that is quantified (so that direct insertion of the
quantification limits into the BACKREF node doesn't apply).  I think fixing
that will take some major surgery on the NFA code, specifically introducing
an explicit iteration node type instead of trying to transform iteration
into concatenation of modified regexps.

Back-patch to all supported branches.  In HEAD, also add a regression test
case for this.  (It may seem a bit silly to create a regression test file
for just one test case; but I'm expecting that we will soon import a whole
bunch of regex regression tests from Tcl, so might as well create the
infrastructure now.)
2012-02-20 00:52:33 -05:00
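An illustrative pair of cases that exercise the repaired behaviour:

    SELECT 'bb' ~ '^([bc])\1*$';   -- true: zero or more repeats of the captured 'b'
    SELECT 'bc' ~ '^([bc])\1*$';   -- false: 'c' cannot match the back-reference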
Andrew Dunstan
2f582f76b1 Improve pretty printing of viewdefs.
Some line feeds are added to target lists and from lists to make
them more readable. By default they wrap at 80 columns if possible,
but the wrap column is also selectable - if 0 it wraps after every
item.

Andrew Dunstan, reviewed by Hitoshi Harada.
2012-02-19 11:43:46 -05:00
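Assuming the wrap column is exposed as the integer argument of pg_get_viewdef()
(view name hypothetical):

    SELECT pg_get_viewdef('my_view'::regclass, 80);   -- wrap lists at 80 columns where possible
    SELECT pg_get_viewdef('my_view'::regclass, 0);    -- wrap after every item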
Tom Lane
759c95c45b Update expected/collate.linux.utf8.out for recent plpgsql changes.
This file was missed in commit 4c6cedd1b0.
2012-02-18 18:08:02 -05:00
Tom Lane
4bfe68dfab Run a portal's cleanup hook immediately when pushing it to FAILED state.
This extends the changes of commit 6252c4f9e2
so that we run the cleanup hook earlier for failure cases as well as
success cases.  As before, the point is to avoid an assertion failure from
an Assert I added in commit a874fe7b4c, which
was meant to check that no user-written code can be called during portal
cleanup.  This fixes a case reported by Pavan Deolasee in which the Assert
could be triggered during backend exit (see the new regression test case),
and also prevents the possibility that the cleanup hook is run after
portions of the portal's state have already been recycled.  That doesn't
really matter in current usage, but it foreseeably could matter in the
future.

Back-patch to 9.1 where the Assert in question was added.
2012-02-15 16:19:01 -05:00
Robert Haas
ac9100f8cf More regression tests for LEAKPROOF/NOT LEAKPROOF stuff.
Along the way, move create_function_3 into a parallel schedule.

KaiGai Kohei
2012-02-15 10:56:26 -05:00
Tom Lane
398f70ec07 Preserve column names in the execution-time tupledesc for a RowExpr.
The hstore and json datatypes both have record-conversion functions that
pay attention to column names in the composite values they're handed.
We used to not worry about inserting correct field names into tuple
descriptors generated at runtime, but given these examples it seems
useful to do so.  Observe the nicer-looking results in the regression
tests whose results changed.

catversion bump because there is a subtle change in requirements for stored
rule parsetrees: RowExprs from ROW() constructs now have to include field
names.

Andrew Dunstan and Tom Lane
2012-02-14 17:34:56 -05:00
Robert Haas
dc66f1c5f2 Remove new, intermittently failing regression test.
Per buildfarm.
2012-02-13 23:43:24 -05:00
Robert Haas
e37e448650 Fix new create_function_3 regression tests not to rely on tuple order.
Per buildfarm.
2012-02-13 22:49:07 -05:00
Robert Haas
cd30728fb2 Allow LEAKPROOF functions for better performance of security views.
We don't normally allow quals to be pushed down into a view created
with the security_barrier option, but functions without side effects
are an exception: they're OK.  This allows much better performance in
common cases, such as when using an equality operator (that might
even be indexable).

There is an outstanding issue here with the CREATE FUNCTION / ALTER
FUNCTION syntax: there's no way to use ALTER FUNCTION to unset the
leakproof flag.  But I'm committing this as-is so that it doesn't
have to be rebased again; we can fix up the grammar in a future
commit.

KaiGai Kohei, with some wordsmithing by me.
2012-02-13 22:21:14 -05:00
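A sketch of the intended usage (names hypothetical; marking a function LEAKPROOF
requires superuser privileges):

    CREATE FUNCTION is_positive(integer) RETURNS boolean
        AS 'SELECT $1 > 0' LANGUAGE sql IMMUTABLE LEAKPROOF;

    -- A qual built from LEAKPROOF functions/operators may now be pushed down
    -- into a view created WITH (security_barrier).
    SELECT * FROM visible_accounts WHERE is_positive(balance);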
Tom Lane
d06e2d2005 Add ORDER BY to a query to prevent occasional regression test failures.
Per buildfarm, we sometimes get row-ordering variations in the output.
This also makes this query look more like numerous other ones in the same
test file.
2012-02-10 02:33:00 -05:00
Tom Lane
cbba55d6d7 Support min/max index optimizations on boolean columns.
Since bool_and() is equivalent to min(), and bool_or() to max(), we might
as well let them be index-optimized in the same way.  The practical value
of this is debatable at best, but it seems nearly cost-free to enable it.
Code-wise, we need only adjust the entries in pg_aggregate.  There is a
measurable planning speed penalty for a query involving one of these
aggregates, but it is only a few percent in simple cases, so that seems
acceptable.

Marti Raudsepp, reviewed by Abhijit Menon-Sen
2012-02-08 12:41:48 -05:00
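With the pg_aggregate change, a query such as the following (table hypothetical) can use
the same index-probe transformation as min()/max(), given an index on the column:

    CREATE INDEX ON events (is_error);
    SELECT bool_or(is_error) FROM events;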
Tom Lane
3db6524fe6 Mark some more I/O-conversion-invoking functions as stable not volatile.
When written, textanycat, anytextcat, quote_literal, and quote_nullable
were marked volatile, because they could invoke arbitrary type-specific
output functions as part of casting their anyelement arguments to text.
Since then, we have defined a project policy that I/O functions must not
be volatile, as per commit aab353a60b.
So these functions can safely be downgraded to stable.  Most of the time
this makes no difference since they'll get inlined anyway, but as noted
by Andrew Dunstan, there are cases where the volatile marking prevents
optimizations that the planner does before function inlining.  (I think
I might have overlooked these functions in the earlier commit on the
grounds that inlining would make it moot, but not so --- tgl)

This change results in a change in the expected output of the json
regression tests, because the planner can now flatten a sub-select
that it failed to before.  The old output is preferable, but getting
that back will require some as-yet-unfinished work on RowExpr handling.

Marti Raudsepp
2012-02-08 11:29:29 -05:00
Tom Lane
342b83fdca Revert "Add some regression test cases for denormalized float8 input."
This reverts commit 500cf66d55.  As was
more or less expected, a small minority of platforms won't accept
denormalized input even with the recent changes.  It doesn't seem
especially helpful to test this if we're going to have to provide an
alternate expected-file to allow failure.
2012-02-04 15:52:09 -05:00
Andrew Dunstan
39909d1d39 Add array_to_json and row_to_json functions.
Also move the escape_json function from explain.c to json.c where it
seems to belong.

Andrew Dunstan, reviewed by Abhijit Menon-Sen.
2012-02-03 12:11:16 -05:00
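Quick illustrations of the two new functions:

    SELECT array_to_json(ARRAY[1, 2, 3]);   -- [1,2,3]
    SELECT row_to_json(ROW(1, 'a'));        -- {"f1":1,"f2":"a"}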
Tom Lane
500cf66d55 Add some regression test cases for denormalized float8 input.
This was submitted with the previous patch, but I'm committing it
separately to ease backing it out if these results prove too unportable.

Marti Raudsepp, after a proposal by Jeroen Vermeulen
2012-02-01 13:13:54 -05:00
Tom Lane
bef47331b6 Code review for plpgsql fn_signature patch.
Don't quote the output of format_procedure(); it's already quoted quite
enough.  Remove the fn_name field, which was now just dead weight.  Fix
remaining expected-output files.
2012-02-01 02:14:37 -05:00
Robert Haas
5384a73f98 Built-in JSON data type.
Like the XML data type, we simply store JSON data as text, after checking
that it is valid.  More complex operations such as canonicalization and
comparison may come later, but this is enough for now.

There are a few open issues here, such as whether we should attempt to
detect UTF-8 surrogate pairs represented as \uXXXX\uYYYY, but this gets
the basic framework in place.
2012-01-31 11:48:23 -05:00
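A sketch of the validate-on-input behaviour (table name hypothetical):

    CREATE TABLE events (payload json);
    INSERT INTO events VALUES ('{"user": "alice", "action": "login"}');   -- accepted
    INSERT INTO events VALUES ('{not valid json');                        -- rejected with an error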
Heikki Linnakangas
4c6cedd1b0 Print function signature, not just name, in PL/pgSQL error messages.
This makes it unambiguous which function the message is coming from, if you
have overloaded functions.

Pavel Stehule, reviewed by Abhijit Menon-Sen.
2012-01-31 10:36:20 +02:00
Tom Lane
b28ffd0fcc Fix pushing of index-expression qualifications through UNION ALL.
In commit 57664ed25e, I made the planner
wrap non-simple-variable outputs of appendrel children (IOW, child SELECTs
of UNION ALL subqueries) inside PlaceHolderVars, in order to solve some
issues with EquivalenceClass processing.  However, this means that any
upper-level WHERE clauses mentioning such outputs will now contain
PlaceHolderVars after they're pushed down into the appendrel child,
and that prevents indxpath.c from recognizing that they could be matched
to index expressions.  To fix, add explicit stripping of PlaceHolderVars
from index operands, same as we have long done for RelabelType nodes.
Add a regression test covering both this and the plain-UNION case (which
is a totally different code path, but should also be able to do it).

Per bug #6416 from Matteo Beccati.  Back-patch to 9.1, same as the
previous change.
2012-01-29 16:31:23 -05:00
Tom Lane
4ec6581c0c Fix handling of init_plans list in inheritance_planner().
Formerly we passed an empty list to each per-child-table invocation of
grouping_planner, and then merged the results into the global list.
However, that fails if there's a CTE attached to the statement, because
create_ctescan_plan uses the list to find the plan referenced by a CTE
reference; so it was unable to find any CTEs attached to the outer UPDATE
or DELETE.  But there's no real reason not to use the same list throughout
the process, and doing so is simpler and faster anyway.

Per report from Josh Berkus of "could not find plan for CTE" failures.
Back-patch to 9.1 where we added support for WITH attached to UPDATE or
DELETE.  Add some regression test cases, too.
2012-01-28 20:24:42 -05:00
Peter Eisentraut
b376ec6fa5 Show default privileges in information schema
Hitherto, the information schema only showed explicitly granted
privileges that were visible in the *acl catalog columns.  If no
privileges had been granted, the implicit privileges were not shown.

To fix that, add an SQL-accessible version of the acldefault()
function, and use that inside the aclexplode() calls to substitute the
catalog-specific default privilege set for null values.

reviewed by Abhijit Menon-Sen
2012-01-27 21:58:51 +02:00
Robert Haas
2d1371d3ee Be more clear when a new column name collides with a system column name.
We now use the same error message for ALTER TABLE .. ADD COLUMN or
ALTER TABLE .. RENAME COLUMN that we do for CREATE TABLE.  The old
message was accurate, but might be confusing to users not aware of our
system columns.

Vik Reykja, with some changes by me, and further proofreading by Tom Lane
2012-01-26 12:44:30 -05:00
Magnus Hagander
61cb8c5abb Add deadlock counter to pg_stat_database
Adds a counter that tracks number of deadlocks that occurred in
each database to pg_stat_database.

Magnus Hagander, reviewed by Jaime Casanova
2012-01-26 15:58:19 +01:00
Magnus Hagander
bc3347484a Track temporary file count and size in pg_stat_database
Add counters for number and size of temporary files used
for spill-to-disk queries for each database to the
pg_stat_database view.

Tomas Vondra, review by Magnus Hagander
2012-01-26 14:41:19 +01:00
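The counters added by this and the preceding commit can be inspected together,
assuming the column names deadlocks, temp_files and temp_bytes:

    SELECT datname, deadlocks, temp_files, temp_bytes
    FROM pg_stat_database
    ORDER BY datname;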
Robert Haas
9d35116611 Damage control for yesterday's CheckIndexCompatible changes.
Rip out a regression test that doesn't play well with settings put in
place by the build farm, and rewrite the code in CheckIndexCompatible
in a hopefully more transparent style.
2012-01-26 08:21:31 -05:00
Alvaro Herrera
08146775ac Have \copy go through SendQuery
This enables a bunch of features, notably ON_ERROR_ROLLBACK.  It also
makes COPY failure (either in the server or psql) as a whole behave more
sanely in psql.

Additionally, having more commands in the same command line as COPY
works better (though since psql splits lines at semicolons, this doesn't
matter much unless you're using -c).

Also tighten a couple of switches on PQresultStatus() to add
PGRES_COPY_BOTH support and stop assuming that unknown statuses received
are errors; have those print diagnostics where warranted.

Author: Noah Misch
2012-01-25 18:22:00 -03:00
Robert Haas
6eb71ac552 Make CheckIndexCompatible simpler and more bullet-proof.
This gives up the "don't rewrite the index" behavior in a couple of
relatively unimportant cases, such as changing between an array type
and an unconstrained domain over that array type, in return for
making this code more future-proof.

Noah Misch
2012-01-25 15:28:07 -05:00
Alvaro Herrera
74ab96a45e Add pg_trigger_depth() function
This reports the depth level of triggers currently in execution, or zero
if not called from inside a trigger.

No catversion bump in this patch, but you have to initdb if you want
access to the new function.

Author: Kevin Grittner
2012-01-25 13:22:54 -03:00
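A typical use is keeping a trigger from firing recursively; a sketch with hypothetical
table and function names:

    CREATE TRIGGER audit_once
        AFTER UPDATE ON accounts
        FOR EACH ROW
        WHEN (pg_trigger_depth() = 0)
        EXECUTE PROCEDURE record_audit();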
Robert Haas
49562f5eb6 Adjustments to regression tests for security_barrier views.
Drop the role we create, so regression tests pass even when run more
than once against the same cluster, a problem noted by Tom Lane and
Jeff Janes.  Also, rename the temporary role so that it starts with
"regress_", to make it unlikely that we'll collide with an existing
role name while running "make installcheck", per further gripe from
Tom Lane.
2012-01-24 08:46:32 -05:00
Simon Riggs
b8a91d9d1c ALTER <thing> [IF EXISTS] ... allows silent DDL if required,
e.g. ALTER FOREIGN TABLE IF EXISTS foo RENAME TO bar

Pavel Stehule
2012-01-23 23:25:04 +00:00
Robert Haas
cc53a1e7cc Add bitwise AND, OR, and NOT operators for macaddr data type.
Brendan Jurd, reviewed by Fujii Masao
2012-01-19 15:25:14 -05:00
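For example, masking a MAC address down to its vendor prefix:

    SELECT macaddr '08:00:2b:01:02:03' & macaddr 'ff:ff:ff:00:00:00';   -- 08:00:2b:00:00:00
    SELECT ~ macaddr '08:00:2b:01:02:03';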
Magnus Hagander
4f42b546fd Separate state from query string in pg_stat_activity
This separates the state (running/idle/idleintransaction etc) into
its own field ("state"), and leaves the query field containing just
query text.

The query text will now mean "current query" when a query is running
and "last query" in other states. Accordingly,the field has been
renamed from current_query to query.

Since backwards compatibility was broken anyway to make that, the procpid
field has also been renamed to pid - along with the same field in
pg_stat_replication for consistency.

Scott Mead and Magnus Hagander, review work from Greg Smith
2012-01-19 14:19:20 +01:00
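A monitoring query against the reworked view now reads:

    SELECT pid, state, query
    FROM pg_stat_activity
    WHERE state <> 'idle';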
Robert Haas
504f0c5d5d Regression tests for security_barrier views.
KaiGai Kohei
2012-01-17 22:07:24 -05:00
Tom Lane
de5a08c59d Tweak duplicate-index-column regression test to avoid locale sensitivity.
The originally-chosen test case gives different results in es_EC locale
because of unusual rule for sorting strings beginning with "LL".  Adjust
the comparison value to avoid that, while hopefully not introducing new
locale dependencies elsewhere.  Per report from Jaime Casanova.
2012-01-12 14:18:08 -05:00
Peter Eisentraut
a9f2e31cf6 Support CREATE TABLE (LIKE ...) with foreign tables and views
Composite types are not yet supported, because parserOpenTable()
rejects them.
2012-01-10 21:46:29 +02:00
Peter Eisentraut
db49517c62 Rename the internal structures of the CREATE TABLE (LIKE ...) facility
The original implementation of this interpreted it as a kind of
"inheritance" facility and named all the internal structures
accordingly.  This turned out to be very confusing, because it has
nothing to do with the INHERITS feature.  So rename all the internal
parser infrastructure, update the comments, adjust the error messages,
and split up the regression tests.
2012-01-07 23:02:33 +02:00
Peter Eisentraut
104e7dac28 Improve ALTER DOMAIN / DROP CONSTRAINT with nonexistent constraint
ALTER DOMAIN / DROP CONSTRAINT on a nonexistent constraint name did
not report any error.  Now it reports an error.  The IF EXISTS option
was added to get the usual behavior of ignoring nonexistent objects to
drop.
2012-01-05 19:48:55 +02:00
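A sketch of the new behaviour (domain and constraint names hypothetical):

    CREATE DOMAIN posint AS integer CHECK (VALUE > 0);
    ALTER DOMAIN posint DROP CONSTRAINT no_such_constraint;             -- now raises an error
    ALTER DOMAIN posint DROP CONSTRAINT IF EXISTS no_such_constraint;   -- notice only, succeeds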
Tom Lane
dfd26f9c5f Make executor's SELECT INTO code save and restore original tuple receiver.
As previously coded, the QueryDesc's dest pointer was left dangling
(pointing at an already-freed receiver object) after ExecutorEnd.  It's a
bit astonishing that it took us this long to notice, and I'm not sure that
the known problem case with SQL functions is the only one.  Fix it by
saving and restoring the original receiver pointer, which seems the most
bulletproof way of ensuring any related bugs are also covered.

Per bug #6379 from Paul Ramsey.  Back-patch to 8.4 where the current
handling of SELECT INTO was introduced.
2012-01-04 18:30:55 -05:00
Tom Lane
ac7a5a3f25 Fix coerce_to_target_type for coerce_type's klugy handling of COLLATE.
Because coerce_type recurses into the argument of a CollateExpr,
coerce_to_target_type's longstanding code for detecting whether coerce_type
had actually done anything (to wit, returned a different node than it
passed in) was broken in 9.1.  This resulted in unexpected failures in
hide_coercion_node; which was not the latter's fault, since it's critical
that we never call it on anything that wasn't inserted by coerce_type.
(Else we might decide to "hide" a user-written function call.)

Fix by removing and replacing the CollateExpr in coerce_to_target_type
itself.  This is all pretty ugly but I don't immediately see a way to make
it nicer.

Per report from Jean-Yves F. Barbier.
2012-01-02 14:43:45 -05:00
Tom Lane
15ba590792 Adjust SP-GiST regression tests to be less locale-sensitive.
The original test cases gave varying results depending on whether the
locale sorts digits before or after letters.  Since that's not really
what we wish to test here, adjust the test data to not contain any strings
beginning with digits.  Per report from Pavel Stehule.
2011-12-29 17:04:36 -05:00
Tom Lane
e2c2c2e8b1 Improve planner's handling of duplicated index column expressions.
It's potentially useful for an index to repeat the same indexable column
or expression in multiple index columns, if the columns have different
opclasses.  (If they share opclasses too, the duplicate column is pretty
useless, but nonetheless we've allowed such cases since 9.0.)  However,
the planner failed to cope with this, because createplan.c was relying on
simple equal() matching to figure out which index column each index qual
is intended for.  We do have that information available upstream in
indxpath.c, though, so the fix is to not flatten the multi-level indexquals
list when putting it into an IndexPath.  Then we can rely on the sublist
structure to identify target index columns in createplan.c.  There's a
similar issue for index ORDER BYs (the KNNGIST feature), so introduce a
multi-level-list representation for that too.  This adds a bit more
representational overhead, but we might more or less buy that back by not
having to search for matching index columns anymore in createplan.c;
likewise btcostestimate saves some cycles.

Per bug #6351 from Christian Rudolph.  Likely symptoms include the "btree
index keys must be ordered by attribute" failure shown there, as well as
"operator MMMM is not a member of opfamily NNNN".

Although this is a pre-existing problem that can be demonstrated in 9.0 and
9.1, I'm not going to back-patch it, because the API changes in the planner
seem likely to break things such as index plugins.  The corner cases where
this matters seem too narrow to justify possibly breaking things in a minor
release.
2011-12-23 18:45:14 -05:00
Robert Haas
d5448c7d31 Add bytea_agg, parallel to string_agg.
Pavel Stehule
2011-12-23 08:40:25 -05:00
Robert Haas
0e4611c023 Add a security_barrier option for views.
When a view is marked as a security barrier, it will not be pulled up
into the containing query, and no quals will be pushed down into it,
so that no function or operator chosen by the user can be applied to
rows not exposed by the view.  Views not configured with this
option cannot provide robust row-level security, but will perform far
better.

Patch by KaiGai Kohei; original problem report by Heikki Linnakangas
(in October 2009!).  Review (in earlier versions) by Noah Misch and
others.  Design advice by Tom Lane and myself.  Further review and
cleanup by me.
2011-12-22 16:16:31 -05:00
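A minimal sketch, with hypothetical names:

    CREATE VIEW my_rows WITH (security_barrier = true) AS
        SELECT * FROM accounts WHERE owner = current_user;
    -- Quals the caller attaches to my_rows are evaluated only over rows that
    -- pass the owner check, so user-chosen functions cannot leak hidden rows.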
Peter Eisentraut
f90dd28062 Add ALTER DOMAIN ... RENAME
You could already rename domains using ALTER TYPE, but with this new
command it is more consistent with how other commands treat domains as
a subcategory of types.
2011-12-22 22:43:56 +02:00
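For example (domain name hypothetical):

    ALTER DOMAIN posint RENAME TO positive_int;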
Tom Lane
1db5af2794 Fix gincostestimate to handle ScalarArrayOpExpr reasonably.
The original coding of this function overlooked the possibility that
it could be passed anything except simple OpExpr indexquals.  But
ScalarArrayOpExpr is possible too, and the code would probably crash
(and surely give ridiculous answers) in such a case.  Add logic to try
to estimate sanely for such cases.

In passing, fix the treatment of inner-indexscan cost estimation: it was
failing to scale up properly for multiple iterations of a nestloop.
(I think somebody might've thought that index_pages_fetched() is linear,
but of course it's not.)

Report, diagnosis, and preliminary patch by Marti Raudsepp; I refactored
it a bit and fixed the cost estimation.

Back-patch into 9.1 where the bogus code was introduced.
2011-12-20 19:57:34 -05:00
Peter Eisentraut
729205571e Add support for privileges on types
This adds support for the more or less SQL-conforming USAGE privilege
on types and domains.  The intent is to be able restrict which users
can create dependencies on types, which restricts the way in which
owners can alter types.

reviewed by Yeb Havinga
2011-12-20 00:05:19 +02:00
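Usage follows the other GRANT/REVOKE forms; for example, with hypothetical names:

    REVOKE USAGE ON TYPE address FROM PUBLIC;
    GRANT USAGE ON TYPE address TO app_role;   -- app_role may now create columns, domains, etc. of this type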
Alvaro Herrera
61d81bd28d Allow CHECK constraints to be declared ONLY
This makes them enforceable only on the parent table, not on children
tables.  This is useful in various situations, per discussion involving
people bitten by the restrictive behavior introduced in 8.4.

Message-Id:
8762mp93iw.fsf@comcast.net
CAFaPBrSMMpubkGf4zcRL_YL-AERUbYF_-ZNNYfb3CVwwEqc9TQ@mail.gmail.com

Authors: Nikhil Sontakke, Alex Hunsaker
Reviewed by Robert Haas and myself
2011-12-19 17:30:23 -03:00
Tom Lane
9220362493 Teach SP-GiST to do index-only scans.
Operator classes can specify whether or not they support this; this
preserves the flexibility to use lossy representations within an index.

In passing, move constant data about a given index into the rd_amcache
cache area, instead of doing fresh lookups each time we start an index
operation.  This is mainly to try to make sure that spgcanreturn() has
insignificant cost; I still don't have any proof that it matters for
actual index accesses.  Also, get rid of useless copying of FmgrInfo
pointers; we can perfectly well use the relcache's versions in-place.
2011-12-19 14:58:41 -05:00
Tom Lane
5577ca5bfb Remove bogus entries in gist point_ops operator class.
These entries could never be matched to an index clause because they don't
have the index datatype on the left-hand side of the operator.  (Their
commutators are in the opclass, which is sensible, but that doesn't mean
these operators should be.)  Spotted by a test that I recently added to
opr_sanity to catch exactly this type of thinko.  AFAICT there is no code
in gistproc.c that is specifically meant to cover these cases, so nothing
to remove at that level.
2011-12-17 18:51:00 -05:00
Tom Lane
8daeb5ddd6 Add SP-GiST (space-partitioned GiST) index access method.
SP-GiST is comparable to GiST in flexibility, but supports non-balanced
partitioned search structures rather than balanced trees.  As described at
PGCon 2011, this new indexing structure can beat GiST in both index build
time and query speed for search problems that it is well matched to.

There are a number of areas that could still use improvement, but at this
point the code seems committable.

Teodor Sigaev and Oleg Bartunov, with considerable revisions by Tom Lane
2011-12-17 16:42:30 -05:00
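Once built, an SP-GiST index is declared like any other access method; e.g. with the
point opclass supplied by the patch (table name hypothetical):

    CREATE TABLE places (id serial, location point);
    CREATE INDEX places_location_idx ON places USING spgist (location);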
Robert Haas
d039fd51f7 Don't leave regress_test_role_super lying around.
Fixes an oversight in commit fc6d1006bd.

Noted by Tom Lane.
2011-12-15 18:45:02 -05:00
Heikki Linnakangas
4adead1d22 Add support for passing cursor parameters in named notation in PL/pgSQL.
Yeb Havinga, reviewed by Kevin Grittner, with small changes by me.
2011-12-14 15:55:37 +02:00
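A sketch of the new notation inside a PL/pgSQL block:

    DO $$
    DECLARE
        c CURSOR (lo integer, hi integer) FOR SELECT generate_series(lo, hi);
        v integer;
    BEGIN
        OPEN c (hi := 10, lo := 1);   -- named notation, arguments in any order
        FETCH c INTO v;
        CLOSE c;
    END
    $$;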
Peter Eisentraut
5bcf8ede45 Add ALTER FOREIGN DATA WRAPPER / RENAME and ALTER SERVER / RENAME 2011-12-09 20:42:30 +02:00
Tom Lane
c6e3ac11b6 Create a "sort support" interface API for faster sorting.
This patch creates an API whereby a btree index opclass can optionally
provide non-SQL-callable support functions for sorting.  In the initial
patch, we only use this to provide a directly-callable comparator function,
which can be invoked with a bit less overhead than the traditional
SQL-callable comparator.  While that should be of value in itself, the real
reason for doing this is to provide a datatype-extensible framework for
more aggressive optimizations, as in Peter Geoghegan's recent work.

Robert Haas and Tom Lane
2011-12-07 00:19:39 -05:00
Tom Lane
65d9aedb1b Fix getTypeIOParam to support type record[].
Since record[] uses array_in, it needs to have its element type passed
as typioparam.  In HEAD and 9.1, this fix essentially reverts commit
9bc933b212, which was a hack that is no
longer needed since domains don't set their typelem anymore.  Before
that, adjust the logic so that only domains are excluded from being
treated like arrays, rather than assuming that only base types should
be included.  Add a regression test to demonstrate the need for this.
Per report from Maxim Boguk.

Back-patch to 8.4, where type record[] was added.
2011-12-01 12:44:16 -05:00
Tom Lane
f225e4bc54 When a row fails a not-null constraint, show row's contents in errdetail.
Simple extension of previous patch for CHECK constraints.
2011-11-29 18:29:18 -05:00
Tom Lane
f1e13001b2 When a row fails a CHECK constraint, show row's contents in errdetail.
This should make it easier to identify which row is problematic when an
insert or update is processing many rows.

The formatting is similar to that for unique-index violation messages,
except that we limit field widths to 64 bytes since otherwise the message
could get unreasonably long.  (In particular, there's currently no attempt
to quote or escape field values that contain commas etc.)

Jan Kundrát, reviewed by Royce Ausburn, somewhat rewritten by me.
2011-11-29 15:02:49 -05:00
Tom Lane
c66e4f138b Improve GiST range-contained-by searches by adding a flag for empty ranges.
In the original implementation, a range-contained-by search had to scan
the entire index because an empty range could be lurking anywhere.
Improve that by adding a flag to upper GiST entries that says whether the
represented subtree contains any empty ranges.

Also, make a simple mod to the penalty function to discourage empty ranges
from getting pushed into subtrees without any.  This needs more work, and
the picksplit function should be taught about it too, but that code can be
improved without causing an on-disk compatibility break; so we'll leave it
for another day.

Since we're breaking on-disk compatibility of range values anyway, I took
the opportunity to reorganize the range flags bits; the unused
RANGE_xB_NULL bits are now adjacent, which might open the door for using
them in some other way later.

In passing, remove the GiST range opclass entry for <>, which doesn't seem
like it can really be indexed usefully.

Alexander Korotkov, with some editorializing by Tom
2011-11-27 16:51:29 -05:00
Tom Lane
df73584431 Remove zero- and one-argument range constructor functions.
Per discussion, the zero-argument forms aren't really worth the catalog
space (just write 'empty' instead).  The one-argument forms have some use,
but they also have a serious problem with looking too much like functional
cast notation; to the point where in many real use-cases, the parser would
misinterpret what was wanted.

Committing this as a separate patch, with the thought that we might want
to revert part or all of it if we can think of some way around the cast
ambiguity.
2011-11-22 20:45:05 -05:00
Tom Lane
cddc819e45 Improve implementation of range-contains-element tests.
Implement these tests directly instead of constructing a singleton range
and then applying range-contains.  This saves a range serialize/deserialize
cycle as well as a couple of redundant bound-comparison steps, and adds
very little code on net.

Remove elem_contained_by_range from the GiST opclass: it doesn't belong
there because there is no way to use it in an index clause (where the
indexed column would have to be on the left).  Its commutator is in the
opclass, and that's what counts.
2011-11-22 17:45:37 -05:00
Robert Haas
f1b4aa2a84 Check for INSERT privileges in SELECT INTO / CREATE TABLE AS.
In the normal course of events, this matters only if ALTER DEFAULT
PRIVILEGES has been used to revoke default INSERT permission.  Whether
or not the new behavior is more or less likely to be what the user wants
when dealing only with the built-in privilege facilities is arguable,
but it's clearly better when using a loadable module such as sepgsql
that may use the hook in ExecCheckRTPerms to enforce additional
permissions checks.

KaiGai Kohei, reviewed by Albe Laurenz
2011-11-22 16:16:26 -05:00
Tom Lane
766948bedd Still more review for range-types patch.
Per discussion, relax the range input/construction rules so that the
only hard error is lower bound > upper bound.  Cases where the lower
bound is <= upper bound, but the range nonetheless normalizes to empty,
are now permitted.

Fix core dump in range_adjacent when bounds are infinite.  Marginal
cleanup of regression test cases, some more code commenting.
2011-11-22 16:06:26 -05:00
Tom Lane
a4ffcc8e11 More code review for rangetypes patch.
Fix up some infelicitous coding in DefineRange, and add some missing error
checks.  Rearrange operator strategy number assignments for GiST anyrange
opclass so that they don't make such a mess of opr_sanity's table of
operator names associated with different strategy numbers.  Assign
hopefully-temporary selectivity estimators to range operators that didn't
have one --- poor as the estimates are, they're still a lot better than the
default 0.5 estimate, and they'll shut up the opr_sanity test that wants to
see selectivity estimators on all built-in operators.
2011-11-21 16:19:53 -05:00
Tom Lane
a1a233af66 Further review of range-types patch.
Lots of documentation cleanup today, and still more type_sanity tests.
2011-11-18 18:24:32 -05:00
Robert Haas
fc6d1006bd Further consolidation of DROP statement handling.
This gets rid of an impressive amount of duplicative code, with only
minimal behavior changes.  DROP FOREIGN DATA WRAPPER now requires object
ownership rather than superuser privileges, matching the documentation
we already have.  We also eliminate the historical warning about dropping
a built-in function as unuseful.  All operations are now performed in the
same order for all object types handled by dropcmds.c.

KaiGai Kohei, with minor revisions by me
2011-11-17 21:32:34 -05:00
Tom Lane
709aca5960 Declare range inclusion operators as taking anyelement not anynonarray.
Use of anynonarray was a crude hack to get around ambiguity versus the
array inclusion operators of the same names.  My previous patch to extend
the parser's type resolution heuristics makes that unnecessary, so use
the more general declaration instead.  This eliminates a wart that these
operators couldn't be used with ranges over arrays, which are otherwise
supported just fine.

Also, mark range_before and range_after as commutator operators,
per discussion with Jeff Davis.
2011-11-17 18:56:33 -05:00
Tom Lane
bf4f96b5e2 Fix range_cmp_bounds for the case of equal-valued exclusive bounds.
Also improve its comments and related regression tests.

Jeff Davis, with some further adjustments by Tom
2011-11-17 16:51:20 -05:00
Tom Lane
4509033a00 Code review for range-types catalog entries.
Fix assorted infelicities, such as dependency on OIDs that aren't
hardwired, as well as outright misdeclaration of daterange_canonical(),
which resulted in crashes if you invoked it directly.  Add some more
regression tests to try to catch similar mistakes in future.
2011-11-16 18:21:34 -05:00
Tom Lane
ad50934eaa Fix alignment and toasting bugs in range types.
A range type whose element type has 'd' alignment must have 'd' alignment
itself, else there is no guarantee that the element value can be used
in-place.  (Because range_deserialize uses att_align_pointer which forcibly
aligns the given pointer, violations of this rule did not lead to SIGBUS
but rather to garbage data being extracted, as in one of the added
regression test cases.)

Also, you can't put a toast pointer inside a range datum, since the
referenced value could disappear with the range datum still present.
For consistency with the handling of arrays and records, I also forced
decompression of in-line-compressed bound values.  It would work to store
them as-is, but our policy is to avoid situations that might result in
double compression.

Add assorted regression tests for this, and bump catversion because of
fixes to built-in pg_type entries.

Also some marginal cleanup of inconsistent/unnecessary error checks.
2011-11-14 21:42:04 -05:00
Tom Lane
4165d5b6d7 Update oidjoins regression test to match git HEAD.
This is mostly to add some sanity checking for the pg_range catalog.
2011-11-14 20:28:38 -05:00
Tom Lane
851c83fc81 Return FALSE instead of throwing error for comparisons with empty ranges.
Change range_before, range_after, range_adjacent to return false rather
than throwing an error when one or both input ranges are empty.

The original definition is unnecessarily difficult to use, and also can
result in undesirable planner failures since the planner could try to
compare an empty range to something else while deriving statistical
estimates.  (This was, in fact, the cause of repeatable regression test
failures on buildfarm member jaguar, as well as intermittent failures
elsewhere.)

Also tweak rangetypes regression test to not drop all the objects it
creates, so that the final state of the regression database contains
some rangetype objects for pg_dump testing.
2011-11-14 15:15:53 -05:00
Tom Lane
f158536285 Fix copyright notices, other minor editing in new range-types code.
No functional changes in this commit (except I could not resist the
temptation to re-word a couple of error messages).  This is just manual
cleanup after pgindent to make the code look reasonably like other PG
code, in preparation for more detailed code review to come.
2011-11-14 13:59:34 -05:00
Tom Lane
2c30f96103 Tweak new regression test case for more portability.
Ensure that same index gets selected on 32-bit and 64-bit machines.
Per buildfarm results.
2011-11-09 00:13:37 -05:00
Tom Lane
57664ed25e Wrap appendrel member outputs in PlaceHolderVars in additional cases.
Add PlaceHolderVar wrappers as needed to make UNION ALL sub-select output
expressions appear non-constant and distinct from each other.  This makes
the world safe for add_child_rel_equivalences to do what it does.  Before,
it was possible for that function to add identical expressions to different
EquivalenceClasses, which logically should imply merging such ECs, which
would be wrong; or to improperly add a constant to an EquivalenceClass,
drastically changing its behavior.  Per report from Teodor Sigaev.

The only currently known consequence of this bug is "MergeAppend child's
targetlist doesn't match MergeAppend" planner failures in 9.1 and later.
I am suspicious that there may be other failure modes that could affect
older release branches; but in the absence of any hard evidence, I'll
refrain from back-patching further than 9.1.
2011-11-08 21:14:21 -05:00
Tom Lane
f62be400c0 On second thought, we'd better just drop these tests altogether.
Further experimentation reveals that my previous change didn't fix the
issue entirely: these tests would still fail at the spring-forward DST
transition.  There doesn't seem to be any great value in testing this
specific issue for both timestamp and timestamptz, so just lose the
latter tests.
2011-11-06 20:12:20 -05:00
Tom Lane
362f731dde Un-break horology regression test.
Adjust ill-considered timezone-dependent tests added in commit
8a3d33c8e6 so that they won't fail on DST
transition days.  Per all-pink buildfarm.
2011-11-06 18:20:26 -05:00
Magnus Hagander
3a6e4076b7 Update regression tests for \d+ modification
Noted by Tom
2011-11-05 16:01:04 +01:00
Heikki Linnakangas
4429f6a9e3 Support range data types.
Selectivity estimation functions are missing for some range type operators,
which is a TODO.

Jeff Davis
2011-11-03 13:42:15 +02:00
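A few quick examples of the new types and operators (the reservation table is hypothetical):

    SELECT int4range(10, 20) @> 15;                    -- true
    SELECT numrange(1.0, 2.0) && numrange(1.5, 3.0);   -- true (ranges overlap)
    CREATE TABLE reservation (room int, during tsrange);
    SELECT * FROM reservation WHERE during @> '2011-11-03 09:00'::timestamp;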
Tom Lane
7e3bf99baa Fix handling of PlaceHolderVars in nestloop parameter management.
If we use a PlaceHolderVar from the outer relation in an inner indexscan,
we need to reference the PlaceHolderVar as such as the value to be passed
in from the outer relation.  The previous code effectively tried to
reconstruct the PHV from its component expression, which doesn't work since
(a) the Vars therein aren't necessarily bubbled up far enough, and (b) it
would be the wrong semantics anyway because of the possibility that the PHV
is supposed to have gone to null at some point before the current join.
Point (a) led to "variable not found in subplan target list" planner
errors, but point (b) would have led to silently wrong answers.
Per report from Roger Niederland.
2011-11-03 00:50:58 -04:00
Tom Lane
882368e854 Fix btree stop-at-nulls logic properly.
As pointed out by Naoya Anzai, my previous try at this was a few bricks
shy of a load, because I had forgotten that the initial-positioning logic
might not try to skip over nulls at the end of the index the scan will
start from.  We ought to fix that, because it represents an unnecessary
inefficiency, but first let's get the scan-stop logic back to a safe
state.  With this patch, we preserve the performance benefit requested
in bug #6278 for the case of scanning forward into NULLs (in a NULLS
LAST index), but the reverse case of scanning backward across NULLs
when there's no suitable initial-positioning qual is still inefficient.
2011-11-02 17:53:49 -04:00
Tom Lane
58958726ff Change FK trigger creation order to better support self-referential FKs.
When a foreign-key constraint references another column of the same table,
row updates will queue both the PK's ON UPDATE action and the FK's CHECK
action in the same event.  The ON UPDATE action must execute first, else
the CHECK will check a non-final state of the row and possibly throw an
inappropriate error, as seen in bug #6268 from Roman Lytovchenko.

Now, the firing order of multiple triggers for the same event is determined
by the sort order of their pg_trigger.tgnames, and the auto-generated names
we use for FK triggers are "RI_ConstraintTrigger_NNNN" where NNNN is the
trigger OID.  So most of the time the firing order is the same as creation
order, and so rearranging the creation order fixes it.

This patch will fail to fix the problem if the OID counter wraps around or
adds a decimal digit (eg, from 99999 to 100000) while we are creating the
triggers for an FK constraint.  Given the small odds of that, and the low
usage of self-referential FKs, we'll live with that solution in the back
branches.  A better fix is to change the auto-generated names for FK
triggers, but it seems unwise to do that in stable branches because there
may be client code that depends on the naming convention.  We'll fix it
that way in HEAD in a separate patch.

Back-patch to all supported branches, since this bug has existed for a long
time.
2011-10-26 13:02:28 -04:00
Robert Haas
980261929f Fix DROP OPERATOR FAMILY IF EXISTS.
Essentially, the "IF EXISTS" portion was being ignored, and an error
thrown anyway if the opfamily did not exist.

I broke this in commit fd1843ff8979c0461fb3f1a9eab61140c977e32d; so
backpatch to 9.1.X.

Report and diagnosis by KaiGai Kohei.
2011-10-21 09:12:23 -04:00
Robert Haas
1d751018d8 Add "skipping" to the NOTICE produced by DROP OPERATOR CLASS IF EXISTS.
This makes this message consistent with all the other similar notices
produced by other DROP IF EXISTS commands.

Noted by KaiGai Kohei
2011-10-19 23:45:31 -04:00
Robert Haas
0bf0899493 Remove a few of the new DROP-IF-EXISTS regression tests.
Commit 3301c83536 broke the build farm.
Let's try to fix that.
2011-10-19 23:38:06 -04:00
Robert Haas
3301c83536 Add some more regression tests for DROP IF EXISTS.
KaiGai Kohei
2011-10-19 21:00:45 -04:00
Tom Lane
e661c3dfd3 Fix collate.linux.utf8 expected output for recent error message change.
Noted by Jeff Davis.
2011-10-16 16:07:40 -04:00
Tom Lane
e6858e6657 Measure the number of all-visible pages for use in index-only scan costing.
Add a column pg_class.relallvisible to remember the number of pages that
were all-visible according to the visibility map as of the last VACUUM
(or ANALYZE, or some other operations that update pg_class.relpages).
Use relallvisible/relpages, instead of an arbitrary constant, to estimate
how many heap page fetches can be avoided during an index-only scan.

This is pretty primitive and will no doubt see refinements once we've
acquired more field experience with the index-only scan mechanism, but
it's way better than using a constant.

Note: I had to adjust an underspecified query in the window.sql regression
test, because it was changing answers when the plan changed to use an
index-only scan.  Some of the adjacent tests perhaps should be adjusted
as well, but I didn't do that here.
2011-10-14 17:23:46 -04:00
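The new column can be examined directly, e.g.:

    SELECT relname, relpages, relallvisible
    FROM pg_class
    WHERE relkind = 'r'
    ORDER BY relpages DESC
    LIMIT 10;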
Robert Haas
3e9a2672d2 Attempt to reduce local dependencies in regression tests.
This appears to be another case where the relative sort order of letters
vs. numbers can throw things off.

Pavel Stehule
2011-10-10 08:01:40 -04:00
Robert Haas
c0f03aae04 Fix ALTER TABLE ONLY .. DROP CONSTRAINT.
When I consolidated two copies of the HOT-chain search logic in commit
4da99ea423, I introduced a behavior
change: the old code wouldn't necessarily traverse the entire chain,
if the most recently returned tuple were updated while the HOT chain
traversal is in progress.  The new behavior seems more correct, but
unfortunately, the code here relies on a scan with SnapshotNow failing
to see its own updates.  That seems pretty shaky even with the old HOT
chain traversal behavior, since there's no guarantee that these
updates will always be HOT, but it's trivial to provoke a failure with
the new HOT search logic.  Fix by updating just the first matching
pg_constraint tuple, rather than all of them, since there should be
only one anyway.  But since nobody has reproduced this failure on older
versions, no back-patch for now.

Report and test case by Alex Hunsaker; tablecmds.c changes by me.
2011-10-09 23:39:52 -04:00
Tom Lane
45401c1c25 Prevent index-only scans in stats regression test.
This bollixes the test because it's expecting to see the idx_tup_fetch
counter increase, which won't happen if heap fetches were avoided by use
of an index-only scan.  Per buildfarm results.

While at it, let's just make sure that enable_seqscan and enable_indexscan
are ON for this test ...
2011-10-08 23:45:58 -04:00
Tom Lane
a2822fb933 Support index-only scans using the visibility map to avoid heap fetches.
When a btree index contains all columns required by the query, and the
visibility map shows that all tuples on a target heap page are
visible-to-all, we don't need to fetch that heap page.  This patch depends
on the previous patches that made the visibility map reliable.

There's a fair amount left to do here, notably trying to figure out a less
chintzy way of estimating the cost of an index-only scan, but the core
functionality seems ready to commit.

Robert Haas and Ibrar Ahmed, with some previous work by Heikki Linnakangas.
2011-10-07 20:14:13 -04:00
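A sketch of a query shape that can use the new path (table hypothetical; a VACUUM is
needed so the visibility map is populated):

    CREATE INDEX ON orders (customer_id);
    VACUUM orders;
    EXPLAIN SELECT customer_id FROM orders WHERE customer_id = 42;
    -- expected to show an Index Only Scan when every referenced column is in the index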
Tom Lane
5ec6b7f1b8 Improve generated column names for cases involving sub-SELECTs.
We'll now use "exists" for EXISTS(SELECT ...), "array" for ARRAY(SELECT
...), or the sub-select's own result column name for a simple expression
sub-select.  Previously, you usually got "?column?" in such cases.

Marti Raudsepp, reviewed by Kyotaro Horiguchi
2011-10-01 14:01:46 -04:00
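For instance:

    SELECT EXISTS (SELECT 1 FROM pg_class);    -- result column is now named "exists"
    SELECT ARRAY (SELECT oid FROM pg_class);   -- result column is now named "array"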
Tom Lane
269c5dd2f4 Fix window functions that sort by expressions involving aggregates.
In commit c1d9579dd8, I changed things so
that the output of the Agg node that feeds the window functions would not
list any ungrouped Vars directly.  Formerly, for example, the Agg tlist
might have included both "x" and "sum(x)", which is not really valid if
"x" isn't a grouping column.  If we then had a window function ordering on
something like "sum(x) + 1", prepare_sort_from_pathkeys would find no exact
match for this in the Agg tlist, and would conclude that it must recompute
the expression.  But it would break the expression down to just the Var
"x", which it would find in the tlist, and then rebuild the ORDER BY
expression using a reference to the subplan's "x" output.  Now, after the
above-referenced changes, "x" isn't in the Agg tlist if it's not a grouping
column, so that prepare_sort_from_pathkeys fails with "could not find
pathkey item to sort", as reported by Bricklen Anderson.

The fix is to not break down Aggrefs into their component parts, but just
treat them as irreducible expressions to be sought in the subplan tlist.
This is definitely OK for the use with respect to window functions in
grouping_planner, since it just built the tlist being used on the same
basis.  AFAICT it is safe for other uses too; most of the other call sites
couldn't encounter Aggrefs anyway.
2011-09-26 23:48:39 -04:00
Tom Lane
16762b519c Speed up array element assignment in plpgsql by caching type information.
Cache assorted data in the PLpgSQL_arrayelem struct to avoid repetitive
catalog lookups over multiple executions of the same statement.

Pavel Stehule
2011-09-26 15:38:07 -04:00
Robert Haas
4893552e21 Fix another bit of unlogged-table-induced breakage.
Per bug #6205, reported by Abel Abraham Camarillo Ojeda.  This isn't a
particularly elegant fix, but I'm trying to minimize the chances of
causing yet another round of breakage.

Adjust regression tests to exercise this case.
2011-09-21 10:48:31 -04:00
Tom Lane
f0bedf3e45 Fix corner case bug in numeric to_char().
Trailing-zero stripping applied by the FM specifier could strip zeroes
to the left of the decimal point, for a format with no digit positions
after the decimal point (such as "FM999.").

Reported and diagnosed by Marti Raudsepp, though I didn't use his patch.
2011-09-07 17:07:20 -04:00
Robert Haas
8a3d33c8e6 Fix parsing of time string followed by yesterday/today/tomorrow.
Previously, 'yesterday 04:00:00'::timestamp didn't do the same thing as
'04:00:00 yesterday'::timestamp, and the return value from the latter
was midnight rather than the specified time.

Dean Rasheed, with some stylistic changes
2011-08-30 11:38:42 -04:00
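With the fix, both orderings yield the specified time of day on the relative date:

    SELECT 'yesterday 04:00:00'::timestamp;
    SELECT '04:00:00 yesterday'::timestamp;   -- now also 04:00 of the previous day, not midnight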
Robert Haas
0371d4d063 Change format of SQL/MED generic options in psql backslash commands.
Rather than dumping out the raw array as PostgreSQL represents it
internally, we now print it out in a format similar to the one in
which the user input it, which seems a lot more user friendly.

Shigeru Hanada
2011-08-25 12:47:30 -04:00
Tom Lane
b33f78df17 Fix trigger WHEN conditions when both BEFORE and AFTER triggers exist.
Due to tuple-slot mismanagement, evaluation of WHEN conditions for AFTER
ROW UPDATE triggers could crash if there had been a BEFORE ROW trigger
fired for the same update.  Fix by not trying to overload the use of
estate->es_trig_tuple_slot.  Per report from Yoran Heling.

Back-patch to 9.0, when trigger WHEN conditions were introduced.
2011-08-21 18:15:55 -04:00
Peter Eisentraut
5845f42721 Adjust regression tests for error message change 2011-08-15 17:27:10 +03:00
Robert Haas
d82d84864c Display both per-table and per-column FDW options in psql's \d output.
Along the way, rename "Options" to "FDW Options" in various places for
consistency and clarity.

Shigeru Hanada
2011-08-11 11:45:47 -04:00
Tom Lane
77ba232564 Fix nested PlaceHolderVar expressions that appear only in targetlists.
A PlaceHolderVar's expression might contain another, lower-level
PlaceHolderVar.  If the outer PlaceHolderVar is used, the inner one
certainly will be also, and so we have to make sure that both of them get
into the placeholder_list with correct ph_may_need values during the
initial pre-scan of the query (before deconstruct_jointree starts).
We did this correctly for PlaceHolderVars appearing in the query quals,
but overlooked the issue for those appearing in the top-level targetlist;
with the result that nested placeholders referenced only in the targetlist
did not work correctly, as illustrated in bug #6154.

While at it, add some error checking to find_placeholder_info to ensure
that we don't try to create new placeholders after it's too late to do so;
they have to all be created before deconstruct_jointree starts.

Back-patch to 8.4 where the PlaceHolderVar mechanism was introduced.
2011-08-09 00:50:07 -04:00
Robert Haas
d82a9d2a60 Teach psql to display the comments on SQL/MED objects in verbose mode.
The relevant backslash commands already exist, so we're just adding an
additional column.  With this commit, all objects that have psql backslash
commands and accept comments should now display those comments at least
in verbose mode.

Josh Kupershmidt, with doc additions by me.
2011-08-08 16:30:39 -04:00
Robert Haas
c4096c7639 Allow per-column foreign data wrapper options.
Shigeru Hanada, with fairly minor editing by me.
2011-08-05 13:24:03 -04:00
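A sketch of the syntax; the server name is hypothetical and the option names accepted
depend entirely on the foreign data wrapper in use:

    CREATE FOREIGN TABLE remote_orders (
        id    integer OPTIONS (column_name 'order_id'),
        total numeric
    ) SERVER my_server OPTIONS (table_name 'orders');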
Robert Haas
6f1be5a67a Unbreak unlogged tables.
I broke this in commit 5da79169d3, which
was obviously insufficiently well tested.  Add some regression tests
in the hope of making future slip-ups more likely to be noticed.
2011-07-22 16:15:43 -04:00
Tom Lane
0ce7676aa0 Make xpath() do something useful with XPath expressions that return scalars.
Previously, xpath() simply returned an empty array if the expression did
not yield a node set.  This is useless for expressions that return scalars,
such as one with name() at the top level.  Arrange to return the scalar
value as a single-element xml array, instead.  (String values will be
suitably escaped.)

This change will also cause xpath_exists() to return true, not false,
for such expressions.

Florian Pflug, reviewed by Radoslaw Smogura
2011-07-21 11:32:46 -04:00
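An illustrative expression that yields a scalar rather than a node set:

    SELECT xpath('name(/*)', '<my-tag/>'::xml);   -- now {my-tag} instead of an empty array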
Tom Lane
aaf15e5c1c Ensure that xpath() escapes special characters in string values.
Without this it's possible for the output to not be legal XML, as
illustrated by the added regression test cases.

NB: this change will need to be called out as an incompatibility in the
9.2 release notes, since it's possible somebody was relying on the old
behavior, even though it's clearly wrong.

Florian Pflug, reviewed by Radoslaw Smogura
2011-07-20 18:44:35 -04:00
Robert Haas
463f2625a5 Support SECURITY LABEL on databases, tablespaces, and roles.
This requires a new shared catalog, pg_shseclabel.

Along the way, fix the security_label regression tests so that they
don't monkey with the labels of any pre-existing objects.  This is
unlikely to matter in practice, since only the label for the "dummy"
provider was being manipulated.  But this way still seems cleaner.

KaiGai Kohei, with fairly extensive hacking by me.
2011-07-20 13:18:24 -04:00
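The extended syntax mirrors the existing per-object form; the provider and labels here
are hypothetical and must be supplied by a loaded label provider:

    SECURITY LABEL FOR my_provider ON DATABASE template1 IS 'classified';
    SECURITY LABEL FOR my_provider ON ROLE alice IS 'staff';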
Tom Lane
cacd42d62c Rewrite libxml error handling to be more robust.
libxml reports some errors (like invalid xmlns attributes) via the error
handler hook, but still returns a success indicator to the library caller.
This causes us to miss some errors that are important to report.  Since the
"generic" error handler hook doesn't know whether the message it's getting
is for an error, warning, or notice, stop using that and instead start
using the "structured" error handler hook, which gets enough information
to be useful.

While at it, arrange to save and restore the error handler hook setting in
each libxml-using function, rather than assuming we can set and forget the
hook.  This should improve the odds of working nicely with third-party
libraries that also use libxml.

In passing, volatile-ize some local variables that get modified within
PG_TRY blocks.  I noticed this while testing with an older gcc version
than I'd previously tried to compile xml.c with.

Florian Pflug and Tom Lane, with extensive review/testing by Noah Misch
2011-07-20 13:03:49 -04:00
Peter Eisentraut
6307fff358 Fix typo 2011-07-19 08:02:34 +03:00
Tom Lane
3d4890c0c5 Add GET STACKED DIAGNOSTICS plpgsql command to retrieve exception info.
This is more SQL-spec-compliant, more easily extensible, and better
performing than the old method of inventing special variables.

Pavel Stehule, reviewed by Shigeru Hanada and David Wheeler
2011-07-18 14:47:18 -04:00
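A short PL/pgSQL sketch of the new command; the function name and the provoked error are hypothetical:

    CREATE OR REPLACE FUNCTION report_error() RETURNS text AS $$
    DECLARE
        msg   text;
        state text;
    BEGIN
        PERFORM 1 / 0;   -- force an error
        RETURN 'no error';
    EXCEPTION WHEN OTHERS THEN
        -- only legal inside an exception handler
        GET STACKED DIAGNOSTICS msg   = MESSAGE_TEXT,
                                state = RETURNED_SQLSTATE;
        RETURN state || ': ' || msg;
    END;
    $$ LANGUAGE plpgsql;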
Robert Haas
3cba8999b3 Create a "fast path" for acquiring weak relation locks.
When an AccessShareLock, RowShareLock, or RowExclusiveLock is requested
on an unshared database relation, and we can verify that no conflicting
locks can possibly be present, record the lock in a per-backend queue,
stored within the PGPROC, rather than in the primary lock table.  This
eliminates a great deal of contention on the lock manager LWLocks.

This patch also refactors the interface between GetLockStatusData() and
pg_lock_status() to be a bit more abstract, so that we don't rely so
heavily on the lock manager's internal representation details.  The new
fast path lock structures don't have a LOCK or PROCLOCK structure to
return, so we mustn't depend on that for purposes of listing outstanding
locks.

Review by Jeff Davis.
2011-07-18 00:49:28 -04:00
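The GetLockStatusData()/pg_lock_status() refactoring is internal, but the effect is visible through pg_locks; a hedged sketch, assuming the view exposes a fastpath column as part of this work:

    -- locks held by the current backend; fast-path locks are listed
    -- alongside those recorded in the primary lock table
    SELECT locktype, relation::regclass, mode, fastpath
    FROM pg_locks
    WHERE pid = pg_backend_pid();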
Tom Lane
c1d9579dd8 Avoid listing ungrouped Vars in the targetlist of Agg-underneath-Window.
Regular aggregate functions in combination with, or within the arguments
of, window functions are OK per spec; they have the semantics that the
aggregate output rows are computed and then we run the window functions
over that row set.  (Thus, this combination is not really useful unless
there's a GROUP BY so that more than one aggregate output row is possible.)
The case without GROUP BY could fail, as recently reported by Jeff Davis,
because sloppy construction of the Agg node's targetlist resulted in extra
references to possibly-ungrouped Vars appearing outside the aggregate
function calls themselves.  See the added regression test case for an
example.

Fixing this requires modifying the API of flatten_tlist and its underlying
function pull_var_clause.  I chose to make pull_var_clause's API for
aggregates identical to what it was already doing for placeholders, since
the useful behaviors turn out to be the same (error, report node as-is, or
recurse into it).  I also tightened the error checking in this area a bit:
if it was ever valid to see an uplevel Var, Aggref, or PlaceHolderVar here,
that was a long time ago, so complain instead of ignoring them.

Backpatch into 9.1.  The failure exists in 8.4 and 9.0 as well, but seeing
that it only occurs in a basically-useless corner case, it doesn't seem
worth the risks of changing a function API in a minor release.  There might
be third-party code using pull_var_clause.
2011-07-12 18:24:39 -04:00
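A hedged example of the sort of query this fixes, with a hypothetical table; a plain aggregate (no GROUP BY) feeding a window function:

    -- the aggregate produces one output row, then the window function
    -- runs over that row set
    SELECT sum(amount) AS total,
           rank() OVER (ORDER BY sum(amount)) AS r
    FROM sales;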
Peter Eisentraut
f05c65090a Message style improvements 2011-07-08 07:37:04 +03:00
Heikki Linnakangas
928408d9e5 Fix a bug with SSI and prepared transactions:
If there's a dangerous structure T0 ---> T1 ---> T2, and T2 commits first,
we need to abort something. If T2 commits before both conflicts appear,
then it should be caught by OnConflict_CheckForSerializationFailure. If
both conflicts appear before T2 commits, it should be caught by
PreCommit_CheckForSerializationFailure. But that is actually run when
T2 *prepares*. Fix that in OnConflict_CheckForSerializationFailure, by
treating a prepared T2 as if it committed already.

This is mostly a problem for prepared transactions, which are in prepared
state for some time, but also for regular transactions because they also go
through the prepared state in the SSI code for a short moment when they're
committed.

Kevin Grittner and Dan Ports
2011-07-07 18:12:15 +03:00
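A minimal sketch of the prepared-transaction path involved (requires max_prepared_transactions > 0; table and gid names are hypothetical):

    BEGIN ISOLATION LEVEL SERIALIZABLE;
    UPDATE accounts SET balance = balance - 100 WHERE id = 1;
    PREPARE TRANSACTION 'tx1';
    -- the transaction now sits in the prepared state, and is treated as
    -- committed for SSI conflict-checking purposes
    COMMIT PREPARED 'tx1';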
Peter Eisentraut
27af66162b Message style tweaks 2011-07-05 00:01:35 +03:00
Simon Riggs
f563afd433 Alter test results to comply with new ALTER TABLE behaviour. 2011-07-04 11:09:51 +01:00
Robert Haas
5da79169d3 Fix bugs in relpersistence handling during table creation.
Unlike the relistemp field which it replaced, relpersistence must be
set correctly quite early during the table creation process, as we
rely on it quite early on for a number of purposes, including security
checks.  Normally, this is set based on whether the user enters CREATE
TABLE, CREATE UNLOGGED TABLE, or CREATE TEMPORARY TABLE, but a
relation may also be made implicitly temporary by creating it in
pg_temp.  This patch fixes the handling of that case, and also
disables creation of unlogged tables in temporary tablespace (such
tables indeed skip WAL-logging, but we reject an explicit
specification) and creation of relations in the temporary schemas of
other sessions (which is not very sensible, and didn't work right
anyway).

Report by Amit Khandekar.
2011-07-03 17:34:47 -04:00
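A hedged sketch of the cases touched by this fix, with hypothetical names:

    -- creating a relation in pg_temp makes it implicitly temporary
    CREATE TABLE pg_temp.scratch (id int);
    -- an explicit UNLOGGED specification there is now rejected
    CREATE UNLOGGED TABLE pg_temp.scratch2 (id int);  -- error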
Alvaro Herrera
897795240c Enable CHECK constraints to be declared NOT VALID
This means that they can initially be added to a large existing table
without checking its initial contents, but new tuples must comply with
them; a separate pass invoked by ALTER TABLE / VALIDATE can verify
existing data and ensure it complies with the constraint, at which point
it is marked validated and becomes a normal part of the table ecosystem.

A non-validated CHECK constraint is ignored in the planner for
constraint_exclusion purposes; when validated, cached plans are
recomputed so that partitioning starts working right away.

This patch also enables domains to have unvalidated CHECK constraints
attached to them as well by way of ALTER DOMAIN / ADD CONSTRAINT / NOT
VALID, which can later be validated with ALTER DOMAIN / VALIDATE
CONSTRAINT.

Thanks to Thom Brown, Dean Rasheed and Jaime Casanova for the various
reviews, and Robert Haas for documentation wording improvement
suggestions.

This patch was sponsored by Enova Financial.
2011-06-30 11:24:31 -04:00
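A minimal sketch of the syntax described above, against hypothetical objects:

    -- add the constraint without scanning existing rows; new tuples must
    -- satisfy it immediately
    ALTER TABLE measurements ADD CONSTRAINT positive_qty CHECK (qty > 0) NOT VALID;
    -- later, scan the table and mark the constraint validated
    ALTER TABLE measurements VALIDATE CONSTRAINT positive_qty;

    -- domains get the same treatment
    ALTER DOMAIN percentage ADD CONSTRAINT in_range
        CHECK (VALUE BETWEEN 0 AND 100) NOT VALID;
    ALTER DOMAIN percentage VALIDATE CONSTRAINT in_range;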
Tom Lane
a5652d3e05 Restore correct btree preprocessing of "indexedcol IS NULL" conditions.
Such a condition is unsatisfiable in combination with any other type of
btree-indexable condition (since we assume btree operators are always
strict).  8.3 and 8.4 had an explicit test for this, which I removed in
commit 29c4ad9829, mistakenly thinking that
the case would be subsumed by the more general handling of IS (NOT) NULL
added in that patch.  Put it back, and improve the comments about it, and
add a regression test case.

Per bug #6079 from Renat Nasyrov, and analysis by Dean Rasheed.
2011-06-29 19:46:47 -04:00
Peter Eisentraut
21f1e15aaf Unify spelling of "canceled", "canceling", "cancellation"
We had previously (af26857a27)
established the U.S. spellings as standard.
2011-06-29 09:28:46 +03:00
Tom Lane
c6c4177f10 Update alternative expected file for recent sequence test changes. 2011-06-22 19:28:51 -04:00
Tom Lane
6aab24d7bc Revert "Don't select log_cnt in sequence regression tests."
This reverts commit addf11f9a2.
The right fix for the problem is to update the alternative expected
file, not to lobotomize the test case.
2011-06-22 19:21:29 -04:00
Robert Haas
addf11f9a2 Don't select log_cnt in sequence regression tests.
It's not entirely stable.

Per suggestion from Josh Kupershmidt.
2011-06-22 09:42:58 -04:00
Robert Haas
431ab0e828 Add some regression tests for serial pseudotypes.
Josh Kupershmidt
2011-06-21 22:57:08 -04:00
Peter Eisentraut
e2a0cb1a80 Message style and spelling improvements 2011-06-22 00:45:34 +03:00
Heikki Linnakangas
38c0e72180 Adjust the alternative expected output file for prepared_xacts test case,
used when max_prepared_transactions=0, for the recent changes in the test
case.
2011-06-21 17:07:10 +03:00
Heikki Linnakangas
1eea8e8a06 Fix bug in PreCommit_CheckForSerializationFailure. A transaction that has
already been marked as PREPARED cannot be killed. Kill the current
transaction instead.

One of the prepared_xacts regression tests actually hits this bug. I
removed the anomaly from the duplicate-gids test so that it fails in the
intended way, and added a new test to check serialization failures with
a prepared transaction.

Dan Ports
2011-06-21 14:49:50 +03:00
Tom Lane
cd1f0d04bf Fix thinko in previous patch for optimizing EXISTS-within-EXISTS.
When recursing after an optimization in pull_up_sublinks_qual_recurse, the
available_rels value passed down must include only the relations that are
in the righthand side of the new SEMI or ANTI join; it's incorrect to pull
up a sub-select that refers to other relations, as seen in the added test
case.  Per report from BangarRaju Vadapalli.

While at it, rethink the idea of recursing below a NOT EXISTS.  That is
essentially the same situation as pulling up ANY/EXISTS sub-selects that
are in the ON clause of an outer join, and it has the same disadvantage:
we'd force the two joins to be evaluated according to the syntactic nesting
order, because the lower join will most likely not be able to commute with
the ANTI join.  That could result in having to form a rather large join
product, whereas the handling of a correlated subselect is not quite that
dumb.  So until we can handle those cases better, #ifdef NOT_USED that
case.  (I think it's okay to pull up in the EXISTS/ANY cases, because SEMI
joins aren't so inflexible about ordering.)

Back-patch to 8.4, same as for previous patch in this area.  Fortunately
that patch hadn't made it into any shipped releases yet.
2011-06-20 14:33:20 -04:00
Tom Lane
b7e8feb33e Allow domains over arrays to match ANYARRAY parameters again.
This use-case was broken in commit 529cb267a6
of 2010-10-21, in which I commented "For the moment, we just forbid such
matching.  We might later wish to insert an automatic downcast to the
underlying array type, but such a change should also change matching of
domains to ANYELEMENT for consistency".  We still lack consensus about what
to do with ANYELEMENT; but not matching ANYARRAY is a clear loss of
functionality compared to prior releases, so let's go ahead and make that
happen.  Per complaint from Regina Obe and extensive subsequent discussion.
2011-06-08 12:52:58 -04:00
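A hedged sketch of the restored behaviour, with hypothetical names; a domain over an array type once again matches ANYARRAY parameters:

    CREATE DOMAIN int_list AS int[];
    CREATE TABLE lists (vals int_list);
    -- array_length() is declared with an anyarray parameter
    SELECT array_length(vals, 1) FROM lists;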
Tom Lane
fc1286d3cb Fix rewriter to cope (more or less) with CTEs in the query being rewritten.
Since the original implementation of CTEs only allowed them in SELECT
queries, the rule rewriter did not expect to find any CTEs in statements
being rewritten by ON INSERT/UPDATE/DELETE rules.  We had dealt with this
to some extent but the code was still several bricks shy of a load, as
illustrated in bug #6051 from Jehan-Guillaume de Rorthais.

In particular, we have to be able to copy CTEs from the original query's
cteList into that of a rule action, in case the rule action references the
CTE (which it pretty much always will).  This also implies we were doing
things in the wrong order in RewriteQuery: we have to recursively rewrite
the CTE queries before expanding the main query, so that we have the
rewritten queries available to copy.

There are unpleasant limitations yet to resolve here, but at least we now
throw understandable FEATURE_NOT_SUPPORTED errors for them instead of just
failing with bizarre implementation-dependent errors.  In particular, we
can't handle propagating the same CTE into multiple post-rewrite queries
(because then the CTE would be evaluated multiple times), and we can't cope
with conflicts between CTE names in the original query and in the rule
actions.
2011-06-07 00:08:31 -04:00
Tom Lane
ea8e42f3a0 Fix failure to check whether a rowtype's component types are sortable.
The existence of a btree opclass accepting composite types caused us to
assume that every composite type is sortable.  This isn't true of course;
we need to check if the column types are all sortable.  There was logic
for this for the case of array comparison (ie, check that the element
type is sortable), but we missed the point for rowtypes.  Per Teodor's
report of an ANALYZE failure for an unsortable composite type.

Rather than just add some more ad-hoc logic for this, I moved knowledge of
the issue into typcache.c.  The typcache will now only report out array_eq,
record_cmp, and friends as usable operators if the array or composite type
will work with those functions.

Unfortunately we don't have enough info to do this for anonymous RECORD
types; in that case, just assume it will work, and take the runtime failure
as before if it doesn't.

This patch might be a candidate for back-patching at some point, but
given the lack of complaints from the field, I'd rather just test it in
HEAD for now.

Note: most of the places touched in this patch will need further work
when we get around to supporting hashing of record types.
2011-06-03 15:39:17 -04:00
Tom Lane
aff97b1f4e Handle domains when checking for recursive inclusion of composite types.
We need this now because we allow domains over arrays, and we'll probably
allow domains over composites pretty soon, which makes the problem even
more obvious.

Although domains over arrays also exist in previous versions, this does not
need to be back-patched, because the coding used in older versions
successfully "looked through" domains over arrays.  The problem is exposed
by not treating a domain as having a typelem.

Problem identified by Noah Misch, though I did not use his patch, since
it would require additional work to handle domains over composites that
way.  This approach is more future-proof.
2011-06-02 18:37:57 -04:00
Heikki Linnakangas
27c7875d95 In binary-upgrade mode, dump dropped attributes of composite types.
Noah Misch
2011-05-21 08:24:34 -04:00
Peter Eisentraut
bcf63a51e3 Message style improvements 2011-05-21 00:50:35 +03:00
Peter Eisentraut
c13dc6402b Spell checking and markup refinement 2011-05-19 01:14:45 +03:00
Magnus Hagander
27525b1831 Unbreak the regression tests from my previous commit 2011-05-05 23:02:03 +02:00
Andrew Dunstan
625744ac43 Use terse mode to avoid variable order dependency output in foreign data regression check.
Per Tom Lane's suggestion, addressing my gripe about occasional errors noticed on the buildfarm.
2011-04-25 14:43:26 -04:00
Robert Haas
97e8346851 Regression tests for TOAST.
Kevin Grittner, per discussion of bug #5989
2011-04-25 09:48:10 -04:00
Tom Lane
795c382e8c Improve findoidjoins to cover more cases.
Teach the program and script to deal with OID-array referencing columns,
which we now have several of.  Also, modify the recommended usage process
to specify that the program should be run against the regression database
rather than template1.  This lets it find numerous joins that cannot be
found in template1 because the relevant catalogs are entirely empty.

Together these changes add seventeen formerly-missed cases to the oidjoins
regression test.
2011-04-23 19:33:17 -04:00
Tom Lane
970d8a3973 Update oidjoins regression test for 9.1 catalog schema additions. 2011-04-23 14:49:54 -04:00
Tom Lane
1abd146ddd Adjust comments about collate.linux.utf8 regression test.
This test should now work in any database with UTF8 encoding, regardless
of the database's default locale.  The former restriction was really
"doesn't work if default locale is C", and that was because of not handling
mbstowcs/wcstombs correctly.
2011-04-23 12:51:47 -04:00
Robert Haas
a0e8df527e Allow ALTER TYPE .. ADD ATTRIBUTE .. CASCADE to recurse to descendants.
Without this, adding an attribute to a typed table with an inheritance
child fails, which is surprising.

Noah Misch, with minor changes by me.
2011-04-20 22:49:37 -04:00
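A hedged sketch with hypothetical names; CASCADE propagates the new attribute to the typed table and, with this commit, to its inheritance child as well:

    CREATE TYPE employee AS (name text);
    CREATE TABLE staff OF employee;
    CREATE TABLE interns () INHERITS (staff);
    ALTER TYPE employee ADD ATTRIBUTE salary numeric CASCADE;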
Robert Haas
68739ba856 Allow ALTER TABLE name {OF type | NOT OF}.
This syntax allows a standalone table to be made into a typed table,
or a typed table to be made standalone.  This is possibly a mildly
useful feature in its own right, but the real motivation for this
change is that we need it to make pg_upgrade work with typed tables.
This doesn't actually fix that problem, but it's necessary
infrastructure.

Noah Misch
2011-04-20 21:38:47 -04:00
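A minimal sketch of the new syntax with hypothetical names; the table's columns must match the composite type:

    CREATE TYPE person AS (name text, age int);
    CREATE TABLE people (name text, age int);
    ALTER TABLE people OF person;    -- turn it into a typed table
    ALTER TABLE people NOT OF;       -- make it standalone again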
Tom Lane
918854cc08 Fix handling of collations in multi-row VALUES constructs.
Per spec we ought to apply select_common_collation() across the expressions
in each column of the VALUES table.  The original coding was just taking
the first row and assuming it was representative.

This patch adds a field to struct RangeTblEntry to carry the resolved
collations, so initdb is forced for changes in stored rule representation.
2011-04-18 15:31:52 -04:00
Robert Haas
04db0fdbfa Only allow typed tables to hang off composite types, not e.g. tables.
This also ensures that we take a relation lock on the composite type when
creating a typed table, which is necessary to prevent the composite type
and the typed table from getting out of step in the face of concurrent
DDL.

Noah Misch, with some changes.
2011-04-18 10:19:46 -04:00
Tom Lane
c29abc8b6f Fix assorted infelicities in collation handling in psql's describe.c.
In \d, be more careful to print collation only if it's not the default for
the column's data type.  Avoid assuming that the name "default" is magic.

Fix \d on a composite type so that it will print per-column collations.
It's no longer the case that a composite type cannot have modifiers.
(In consequence, the expected outputs for composite-type regression tests
change.)

Fix \dD so that it will print collation for a domain, again only if it's
not the same as the base type's collation.
2011-04-17 18:09:22 -04:00
Tom Lane
c947325856 Support a COLLATE clause in plpgsql variable declarations.
This allows the usual rules for assigning a collation to a local variable
to be overridden.  Per discussion, it seems appropriate to support this
rather than forcing all local variables to have the argument-derived
collation.
2011-04-17 14:54:19 -04:00
Robert Haas
39a68e5c6c Fix toast table creation.
Instead of using slightly-too-clever heuristics to decide when we must
create a TOAST table, just check whether one is needed every time the
table is altered.  Checking whether a toast table is needed is cheap
enough that we needn't worry about doing it on every ALTER TABLE command,
and the previous coding is apparently prone to accidental breakage:
commit 04e17bae50 broke ALTER TABLE ..
SET STORAGE, which moved some actions from AT_PASS_COL_ATTRS to
AT_PASS_MISC, and commit 6c57239985 broke
ALTER TABLE .. ADD COLUMN by changing the way that adding columns
recurses into child tables.

Noah Misch, with one comment change by me
2011-04-13 18:17:52 -07:00
Tom Lane
921b993677 Fix RI_Initial_Check to use a COLLATE clause when needed in its query.
If the referencing and referenced columns have different collations,
the parser will be unable to resolve which collation to use unless it's
helped out in this way.  The effects are sometimes masked, if we end up
using a non-collation-sensitive plan; but if we do use a mergejoin
we'll see a failure, as recently noted by Robert Haas.

The SQL spec states that the referenced column's collation should be used
to resolve RI checks, so that's what we do.  Note however that we currently
don't append a COLLATE clause when writing a query that examines only the
referencing column.  If we ever support collations that have varying
notions of equality, that will have to be changed.  For the moment, though,
it's preferable to leave it off so that we can use a normal index on the
referencing column.
2011-04-11 21:32:53 -04:00
Tom Lane
1e16a8107d Teach regular expression operators to honor collations.
This involves getting the character classification and case-folding
functions in the regex library to use the collations infrastructure.
Most of this work had been done already in connection with the upper/lower
and LIKE logic, so it was a simple matter of transposition.

While at it, split out these functions into a separate source file
regc_pg_locale.c, so that they can be correctly labeled with the Postgres
project's license rather than the Scriptics license.  These functions are
100% Postgres-written code whereas what remains in regc_locale.c is still
mostly not ours, so lumping them both under the same copyright notice was
getting more and more misleading.
2011-04-10 18:03:09 -04:00
Tom Lane
f89e4dfa75 Remove collate.linux.utf8.sql's assumptions about ".utf8" in locale names.
Tweak the test so that it does not depend on the platform using ".utf8" as
the extension signifying that a locale uses UTF8 encoding.  For the most
part this just requires using the abbreviated collation names "en_US" etc,
though I had to work a bit harder on the collation creation tests.

This opens the door to using the test on platforms that spell locales
differently, for example ".utf-8" or ".UTF-8".  Also, the test is now
somewhat useful with server encodings other than UTF8; though depending on
which encoding is selected, different subsets of it will fail for lack of
character set support.
2011-04-09 16:24:36 -04:00
Tom Lane
a19002d4e5 Adjust collation determination rules as per discussion.
Remove crude hack that tried to propagate collation through a
function-returning-record, ie, from the function's arguments to individual
fields selected from its result record.  That is just plain inconsistent,
because the function result is composite and cannot have a collation;
and there's no hope of making this kind of action-at-a-distance work
consistently.  Adjust regression test cases that expected this to happen.

Meanwhile, the behavior of casting to a domain with a declared collation
stays the same as it was, since that seemed to be the consensus.
2011-04-09 14:40:09 -04:00
Tom Lane
2594cf0e8c Revise the API for GUC variable assign hooks.
The previous functions of assign hooks are now split between check hooks
and assign hooks, where the former can fail but the latter shouldn't.
Aside from being conceptually clearer, this approach exposes the
"canonicalized" form of the variable value to guc.c without having to do
an actual assignment.  And that lets us fix the problem recently noted by
Bernd Helmle that the auto-tune patch for wal_buffers resulted in bogus
log messages about "parameter "wal_buffers" cannot be changed without
restarting the server".  There may be some speed advantage too, because
this design lets hook functions avoid re-parsing variable values when
restoring a previous state after a rollback (they can store a pre-parsed
representation of the value instead).  This patch also resolves a
longstanding annoyance about custom error messages from variable assign
hooks: they should modify, not appear separately from, guc.c's own message
about "invalid parameter value".
2011-04-07 00:12:02 -04:00
Robert Haas
f5e524d92b Add casts from int4 and int8 to numeric.
Joey Adams, per gripe from Ramanujam.  Review by myself and Tom Lane.
2011-04-05 09:35:43 -04:00
Robert Haas
6c57239985 Rearrange "add column" logic to merge columns at exec time.
The previous coding set attinhcount too high in some cases, resulting in
an undumpable, undroppable column.  Per bug #5856, reported by Naoya
Anzai.  See also commit 31b6fc06d8, which
fixes a similar bug in ALTER TABLE .. ADD CONSTRAINT.

Patch by Noah Misch.
2011-04-03 21:53:32 -04:00
Robert Haas
50533a6dc5 Support comments on FOREIGN DATA WRAPPER and SERVER objects.
This mostly involves making it work with the objectaddress.c framework,
which does most of the heavy lifting.  In that vein, change
GetForeignDataWrapperOidByName to get_foreign_data_wrapper_oid and
GetForeignServerOidByName to get_foreign_server_oid, to match the
pattern we use for other object types.

Robert Haas and Shigeru Hanada
2011-04-01 11:28:28 -04:00
Tom Lane
eb51af71f2 Prevent a rowtype from being included in itself.
Eventually we might be able to allow that, but it's not clear how many
places need to be fixed to prevent infinite recursion when there's a direct
or indirect inclusion of a rowtype in itself.  One such place is
CheckAttributeType(), which will recurse to stack overflow in cases such as
those exhibited in bug #5950 from Alex Perepelica.  If we were sure it was
the only such place, we could easily modify the code added by this patch to
stop the recursion without a complaint ... but it probably isn't the only
such place.  Hence, throw error until such time as someone is excited
enough about this type of usage to put work into making it safe.

Back-patch as far as 8.3.  8.2 doesn't have the recursive call in
CheckAttributeType in the first place, so I see no need to add code there
in the absence of clear evidence of a problem elsewhere.
2011-03-28 15:46:04 -04:00
Tom Lane
a4425e3200 Fix collation handling in plpgsql functions.
Make plpgsql treat the input collation as a polymorphism variable, so
that we cache separate plans for each input collation that's used in a
particular session, as per recent discussion.  Propagate the input
collation to all collatable input parameters.

I chose to also propagate the input collation to all declared variables of
collatable types, which is a bit more debatable but seems to be necessary
for non-astonishing behavior.  (Copying a parameter into a separate local
variable shouldn't result in a change of behavior, for example.)  There is
enough infrastructure here to support declaring a collation for each local
variable to override that default, but I thought we should wait to see what
the field demand is before adding such a feature.

In passing, remove exec_get_rec_fieldtype(), which wasn't used anywhere.

Documentation patch to follow.
2011-03-25 15:06:36 -04:00
Tom Lane
27dc7e240b Fix handling of collation in SQL-language functions.
Ensure that parameter symbols receive collation from the function's
resolved input collation, and fix inlining to behave properly.

BTW, this commit lays about 90% of the infrastructure needed to support
use of argument names in SQL functions.  Parsing of parameters is now
done via the parser-hook infrastructure ... we'd just need to supply
a column-ref hook ...
2011-03-24 20:30:23 -04:00
Tom Lane
3bba9ce945 Clean up handling of COLLATE clauses in index column definitions.
Ensure that COLLATE at the top level of an index expression is treated the
same as a grammatically separate COLLATE.  Fix bogus reverse-parsing logic
in pg_get_indexdef.
2011-03-24 15:29:52 -04:00
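A small sketch against a hypothetical table; after this change the two index definitions below should be treated equivalently:

    CREATE TABLE msgs (body text);
    CREATE INDEX msgs_body_idx1 ON msgs (body COLLATE "C");
    CREATE INDEX msgs_body_idx2 ON msgs ((body COLLATE "C"));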
Tom Lane
6e197cb2e5 Improve reporting of run-time-detected indeterminate-collation errors.
pg_newlocale_from_collation does not have enough context to give an error
message that's even a little bit useful, so move the responsibility for
complaining up to its callers.  Also, reword ERRCODE_INDETERMINATE_COLLATION
error messages in a less jargony, more message-style-guide-compliant
fashion.
2011-03-22 16:55:32 -04:00
Tom Lane
37d6d07dda Throw error for indeterminate collation of an ORDER/GROUP/DISTINCT target.
This restores a parse error that was thrown (though only in the ORDER BY
case) by the original collation patch.  I had removed it in my recent
revisions because it was thrown at a place where collations now haven't
been computed yet; but I thought of another way to handle it.

Throwing the error at parse time, rather than leaving it to be done at
runtime, is good because a syntax error pointer is helpful for localizing
the problem.  We can reasonably assume that the comparison function for a
collatable datatype will complain if it doesn't have a collation to use.
Now the planner might choose to implement GROUP or DISTINCT via hashing,
in which case no runtime error would actually occur, but it seems better
to throw error consistently rather than let the error depend on what the
planner chooses to do.  Another possible objection is that the user might
specify a nondefault sort operator that doesn't care about collation
... but that's surely an uncommon usage, and it wouldn't hurt him to throw
in a COLLATE clause anyway.  This change also makes the ORDER BY/GROUP
BY/DISTINCT case more consistent with the UNION/INTERSECT/EXCEPT case,
which was already coded to throw this error even though the same objections
could be raised there.
2011-03-22 15:58:03 -04:00
Tom Lane
8df08c8489 Reimplement planner's handling of MIN/MAX aggregate optimization (again).
Instead of playing cute games with pathkeys, just build a direct
representation of the intended sub-select, and feed it through
query_planner to get a Path for the index access.  This is a bit slower
than 9.1's previous method, since we'll duplicate most of the overhead of
query_planner; but since the whole optimization only applies to rather
simple single-table queries, that probably won't be much of a problem in
practice.  The advantage is that we get to do the right thing when there's
a partial index that needs the implicit IS NOT NULL clause to be usable.
Also, although this makes planagg.c be a bit more closely tied to the
ordering of operations in grouping_planner, we can get rid of some coupling
to lower-level parts of the planner.  Per complaint from Marti Raudsepp.
2011-03-22 00:34:31 -04:00
Tom Lane
9b095fbea4 Add some platform-independent tests for the collation feature.
There's a lot we can't test very well without platform dependencies,
but the C/POSIX collations should now work the same way everywhere.
2011-03-20 14:35:39 -04:00
Tom Lane
b310b6e31c Revise collation derivation method and expression-tree representation.
All expression nodes now have an explicit output-collation field, unless
they are known to only return a noncollatable data type (such as boolean
or record).  Also, nodes that can invoke collation-aware functions store
a separate field that is the collation value to pass to the function.
This avoids confusion that arises when a function has collatable inputs
and noncollatable output type, or vice versa.

Also, replace the parser's on-the-fly collation assignment method with
a post-pass over the completed expression tree.  This allows us to use
a more complex (and hopefully more nearly spec-compliant) assignment
rule without paying for it in extra storage in every expression node.

Fix assorted bugs in the planner's handling of collations by making
collation one of the defining properties of an EquivalenceClass and
by converting CollateExprs into discardable RelabelType nodes during
expression preprocessing.
2011-03-19 20:30:08 -04:00
Tom Lane
f12423d75a Adjust regression test to avoid platform-dependent failure.
We have a test that verifies that max(anyarray) will cope if the array
column elements aren't all the same array type.  However, it's now possible
for that to produce a collation-related error message instead of the
expected one, if the first two column elements happen to be of the same
type and it's one that expects to be given collation info.  Tweak the test
to ensure this doesn't happen.  Per buildfarm member pika.
2011-03-14 12:52:31 -04:00
Peter Eisentraut
3d9f7ec1ff Add test case for collation mismatch in recursive query
This isn't very important by itself, but was left on my list of things
without test coverage for the collation feature.
2011-03-12 10:07:23 +02:00
Tom Lane
8acdb8bf9c Split CollateClause into separate raw and analyzed node types.
CollateClause is now used only in raw grammar output, and CollateExpr after
parse analysis.  This is for clarity and to avoid carrying collation names
in post-analysis parse trees: that's both wasteful and possibly misleading,
since the collation's name could be changed while the parsetree still
exists.

Also, clean up assorted infelicities and omissions in processing of the
node type.
2011-03-11 16:28:18 -05:00
Tom Lane
e3c732a85c Create an explicit concept of collations that work for any encoding.
Use collencoding = -1 to represent such a collation in pg_collation.
We need this to make the "default" entry work sanely, and a later
patch will fix the C/POSIX entries to be represented this way instead
of duplicating them across all encodings.  All lookup operations now
search first for an entry that's database-encoding-specific, and then
for the same name with collencoding = -1.

Also some incidental code cleanup in collationcmds.c and pg_collation.c.
2011-03-11 13:20:11 -05:00
Tom Lane
a051ef699c Remove collation information from TypeName, where it does not belong.
The initial collations patch treated a COLLATE spec as part of a TypeName,
following what can only be described as brain fade on the part of the SQL
committee.  It's a lot more reasonable to treat COLLATE as a syntactically
separate object, so that it can be added in only the productions where it
actually belongs, rather than needing to reject it in a boatload of places
where it doesn't belong (something the original patch mostly failed to do).
In addition this change lets us meet the spec's requirement to allow
COLLATE anywhere in the clauses of a ColumnDef, and it avoids unfriendly
behavior for constructs such as "foo::type COLLATE collation".

To do this, pull collation information out of TypeName and put it in
ColumnDef instead, thus reverting most of the collation-related changes in
parse_type.c's API.  I made one additional structural change, which was to
use a ColumnDef as an intermediate node in AT_AlterColumnType AlterTableCmd
nodes.  This provides enough room to get rid of the "transform" wart in
AlterTableCmd too, since the ColumnDef can carry the USING expression
easily enough.

Also fix some other minor bugs that have crept in in the same areas,
like failure to copy recently-added fields of ColumnDef in copyfuncs.c.

While at it, document the formerly secret ability to specify a collation
in ALTER TABLE ALTER COLUMN TYPE, ALTER TYPE ADD ATTRIBUTE, and
ALTER TYPE ALTER ATTRIBUTE TYPE; and correct some misstatements about
what the default collation selection will be when COLLATE is omitted.

BTW, the three-parameter form of format_type() should go away too,
since it just contributes to the confusion in this area; but I'll do
that in a separate patch.
2011-03-09 22:39:20 -05:00
Simon Riggs
a8a8a3e096 Efficient transaction-controlled synchronous replication.
If a standby is broadcasting reply messages and we have named
one or more standbys in synchronous_standby_names then allow
users who set synchronous_replication to wait for commit, which
then provides strict data integrity guarantees. The design avoids
sending and receiving transaction state information, so it minimises
bookkeeping overheads. We synchronize with the highest-priority
standby that is connected and ready to synchronize. Other standbys
can be defined to take over in case of standby failure.

This version has very strict behaviour; more relaxed options
may be added at a later date.

Simon Riggs and Fujii Masao, with reviews by Yeb Havinga, Jaime
Casanova, Heikki Linnakangas and Robert Haas, plus the assistance
of many other design reviewers.
2011-03-06 22:49:16 +00:00
Peter Eisentraut
b9cff97fdf Don't allow CREATE TABLE AS to create a column with invalid collation
It is possible that an expression ends up with a collatable type but
without a collation.  CREATE TABLE AS could then create a table based
on that.  But such a column cannot be dumped with valid SQL syntax, so
we disallow creating such a column.

per test report from Noah Misch
2011-03-04 23:42:07 +02:00
Tom Lane
8d3b421f5f Allow non-superusers to create (some) extensions.
Remove the unconditional superuser permissions check in CREATE EXTENSION,
and instead define a "superuser" extension property, which when false
(not the default) skips the superuser permissions check.  In this case
the calling user only needs enough permissions to execute the commands
in the extension's installation script.  The superuser property is also
enforced in the same way for ALTER EXTENSION UPDATE cases.

In other ALTER EXTENSION cases and DROP EXTENSION, test ownership of
the extension rather than superuserness.  ALTER EXTENSION ADD/DROP needs
to insist on ownership of the target object as well; to do that without
duplicating code, refactor comment.c's big switch for permissions checks
into a separate function in objectaddress.c.

I also removed the superuserness checks in pg_available_extensions and
related functions; there's no strong reason why everybody shouldn't
be able to see that info.

Also invent an IF NOT EXISTS variant of CREATE EXTENSION, and use that
in pg_dump, so that dumps won't fail for installed-by-default extensions.
We don't have any of those yet, but we will soon.

This is all per discussion of wrapping the standard procedural languages
into extensions.  I'll make those changes in a separate commit; this is
just putting the core infrastructure in place.
2011-03-04 16:08:53 -05:00
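A hedged sketch; myext is a hypothetical extension whose control file sets the new superuser property to false:

    -- with superuser = false in myext.control, a user who can execute the
    -- script's commands may install it without being superuser; the new
    -- IF NOT EXISTS form avoids an error if it is already installed
    CREATE EXTENSION IF NOT EXISTS myext;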
Peter Eisentraut
4442e1975d When creating a collation, check that the locales can be loaded
This is the same check that would happen later when the collation is
used, but it's friendlier to check the collation already when it is
created.
2011-03-04 22:14:37 +02:00
Tom Lane
908ab80286 Further refine patch for commenting operator implementation functions.
Instead of manually maintaining the "implementation of XXX operator"
comments in pg_proc.h, delete all those entries and let initdb create
them via a join.  To let initdb figure out which name to use when there
is a conflict, change the comments for deprecated operators to say they
are deprecated --- which seems like a good thing to do anyway.
2011-03-03 15:55:47 -05:00
Tom Lane
94133a9354 Mark operator implementation functions as such in their comments.
Historically, we've not had separate comments for built-in pg_operator
entries, but relied on the comments for the underlying functions.  The
trouble with this approach is that there isn't much of anything to suggest
to users that they'd be better off using the operators instead.  So, move
all the relevant comments into pg_operator, and give each underlying
function a comment that just says "implementation of XXX operator".
There are only about half a dozen cases where it seems reasonable to use
the underlying function interchangeably with the operator; in these cases
I left the same comment in place on the function as on the operator.

While at it, establish a policy that every built-in function and operator
entry should have a comment: there are now queries in the opr_sanity
regression test that will complain if one doesn't.  This only required
adding a dozen or two more entries than would have been there anyway.

I also spent some time trying to eliminate gratuitous inconsistencies in
the style of the comments, though it's hopeless to suppose that more won't
creep in soon enough.

Per my proposal of 2010-10-15.
2011-03-03 01:34:17 -05:00
Robert Haas
92c30fd2ed Rename pg_stat_replication.apply_location to replay_location.
For consistency with pg_last_xlog_replay_location.  Per discussion.
2011-02-28 12:49:57 -05:00
Tom Lane
a874fe7b4c Refactor the executor's API to support data-modifying CTEs better.
The originally committed patch for modifying CTEs didn't interact well
with EXPLAIN, as noted by myself, and also had corner-case problems with
triggers, as noted by Dean Rasheed.  Those problems show it is really not
practical for ExecutorEnd to call any user-defined code; so split the
cleanup duties out into a new function ExecutorFinish, which must be called
between the last ExecutorRun call and ExecutorEnd.  Some Asserts have been
added to these functions to help verify correct usage.

It is no longer necessary for callers of the executor to call
AfterTriggerBeginQuery/AfterTriggerEndQuery for themselves, as this is now
done by ExecutorStart/ExecutorFinish respectively.  If you really need to
suppress that and do it for yourself, pass EXEC_FLAG_SKIP_TRIGGERS to
ExecutorStart.

Also, refactor portal commit processing to allow for the possibility that
PortalDrop will invoke user-defined code.  I think this is not actually
necessary just yet, since the portal-execution-strategy logic forces any
non-pure-SELECT query to be run to completion before we will consider
committing.  But it seems like good future-proofing.
2011-02-27 13:44:12 -05:00
Tom Lane
000128bc7f Fix order of shutdown processing when CTEs contain inter-references.
We need ExecutorEnd to run the ModifyTable nodes to completion in
reverse order of initialization, not forward order.  Easily done
by constructing the list back-to-front.
2011-02-25 23:53:34 -05:00
Tom Lane
389af95155 Support data-modifying commands (INSERT/UPDATE/DELETE) in WITH.
This patch implements data-modifying WITH queries according to the
semantics that the updates all happen with the same command counter value,
and in an unspecified order.  Therefore one WITH clause can't see the
effects of another, nor can the outer query see the effects other than
through the RETURNING values.  And attempts to do conflicting updates will
have unpredictable results.  We'll need to document all that.

This commit just fixes the code; documentation updates are waiting on
author.

Marko Tiikkaja and Hitoshi Harada
2011-02-25 18:58:02 -05:00
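A minimal sketch of a data-modifying WITH query, with hypothetical tables; the inner statement's effects are visible to the outer query only through RETURNING:

    WITH moved AS (
        DELETE FROM old_events
        WHERE created < now() - interval '1 year'
        RETURNING *
    )
    INSERT INTO archived_events SELECT * FROM moved;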
Itagaki Takahiro
3cba8240a1 Add ENCODING option to COPY TO/FROM and file_fdw.
File encodings can be specified separately from client encoding.
If not specified, client encoding is used for backward compatibility.

Cases where the encoding doesn't match the client encoding are slower
than matched cases because we don't have conversion procs for other
encodings. Performance improvement would be future work.

Original patch by Hitoshi Harada, and modified by me.
2011-02-21 14:32:40 +09:00
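A hedged sketch of the new option, with a hypothetical table and path:

    -- write the file in LATIN1 regardless of the client encoding
    COPY reports TO '/tmp/reports.csv' WITH (FORMAT csv, ENCODING 'LATIN1');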
Tom Lane
bb74240794 Implement an API to let foreign-data wrappers actually be functional.
This commit provides the core code and documentation needed.  A contrib
module test case will follow shortly.

Shigeru Hanada, Jan Urbanski, Heikki Linnakangas
2011-02-20 00:18:14 -05:00
Tom Lane
327e025071 Create the catalog infrastructure for foreign-data-wrapper handlers.
Add a fdwhandler column to pg_foreign_data_wrapper, plus HANDLER options
in the CREATE FOREIGN DATA WRAPPER and ALTER FOREIGN DATA WRAPPER commands,
plus pg_dump support for same.  Also invent a new pseudotype fdw_handler
with properties similar to language_handler.

This is split out of the "FDW API" patch for ease of review; it's all stuff
we will certainly need, regardless of any other details of the FDW API.
FDW handler functions will not actually get called yet.

In passing, fix some omissions and infelicities in foreigncmds.c.

Shigeru Hanada, Jan Urbanski, Heikki Linnakangas
2011-02-19 00:07:15 -05:00
Itagaki Takahiro
62c7bd31c8 Add transaction-level advisory locks.
They share the same locking namespace with the existing session-level
advisory locks, but they are automatically released at the end of the
current transaction and cannot be released explicitly via unlock
functions.

Marko Tiikkaja, reviewed by me.
2011-02-18 14:05:12 +09:00
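A minimal sketch; the xact variants are released automatically at commit or rollback and have no explicit unlock function:

    BEGIN;
    SELECT pg_advisory_xact_lock(42);       -- blocks until the lock is free
    SELECT pg_try_advisory_xact_lock(43);   -- non-blocking, returns boolean
    COMMIT;                                 -- both locks released here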
Robert Haas
4a25bc145a Add client_hostname field to pg_stat_activity.
Peter Eisentraut, reviewed by Steve Singer, Alvaro Herrera, and me.
2011-02-17 16:03:28 -05:00
Tom Lane
6e02755b22 Add FOREACH IN ARRAY looping to plpgsql.
(I'm not entirely sure that we've finished bikeshedding the syntax details,
but the functionality seems OK.)

Pavel Stehule, reviewed by Stephen Frost and Tom Lane
2011-02-16 01:53:03 -05:00
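A short PL/pgSQL sketch of the new loop, with a hypothetical function:

    CREATE OR REPLACE FUNCTION sum_elements(arr int[]) RETURNS int AS $$
    DECLARE
        x     int;
        total int := 0;
    BEGIN
        FOREACH x IN ARRAY arr LOOP
            total := total + x;
        END LOOP;
        RETURN total;
    END;
    $$ LANGUAGE plpgsql;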
Tom Lane
555353c0c5 Rearrange extension-related views as per recent discussion.
The original design of pg_available_extensions did not consider the
possibility of version-specific control files.  Split it into two views:
pg_available_extensions shows information that is generic about an
extension, while pg_available_extension_versions shows all available
versions together with information that could be version-dependent.
Also, add an SRF pg_extension_update_paths() to assist in checking that
a collection of update scripts provide sane update path sequences.
2011-02-14 19:22:36 -05:00
Peter Eisentraut
b313bca0af DDL support for collations
- collowner field
- CREATE COLLATION
- ALTER COLLATION
- DROP COLLATION
- COMMENT ON COLLATION
- integration with extensions
- pg_dump support for the above
- dependency management
- psql tab completion
- psql \dO command
2011-02-12 15:55:18 +02:00
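A hedged sketch of the commands listed above; whether a given locale name is available is platform-dependent:

    CREATE COLLATION german (locale = 'de_DE.utf8');
    COMMENT ON COLLATION german IS 'German, UTF-8';
    ALTER COLLATION german RENAME TO german_std;
    DROP COLLATION german_std;
    -- in psql, \dO lists collations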
Heikki Linnakangas
b186523fd9 Send status updates back from standby server to master, indicating how far
the standby has written, flushed, and applied the WAL. At the moment, this
is for informational purposes only, the values are only shown in
pg_stat_replication system view, but in the future they will also be needed
for synchronous replication.

Extracted from Simon Riggs' synchronous replication patch by Robert Haas, with
some tweaking by me.
2011-02-10 21:04:02 +02:00
Magnus Hagander
4c468b37a2 Track last time for statistics reset on databases and bgwriter
Tracks one counter for each database, which is reset whenever
the statistics for any individual object inside the database is
reset, and one counter for the background writer.

Tomas Vondra, reviewed by Greg Smith
2011-02-10 15:14:04 +01:00
Tom Lane
0bc0bd07d4 Fix merge conflict. 2011-02-08 16:22:20 -05:00
Tom Lane
d9572c4e3b Core support for "extensions", which are packages of SQL objects.
This patch adds the server infrastructure to support extensions.
There is still one significant loose end, namely how to make it play nice
with pg_upgrade, so I am not yet committing the changes that would make
all the contrib modules depend on this feature.

In passing, fix a disturbingly large amount of breakage in
AlterObjectNamespace() and callers.

Dimitri Fontaine, reviewed by Anssi Kääriäinen,
Itagaki Takahiro, Tom Lane, and numerous others
2011-02-08 16:13:22 -05:00
Peter Eisentraut
414c5a2ea6 Per-column collation support
This adds collation support for columns and domains, a COLLATE clause
to override it per expression, and B-tree index support.

Peter Eisentraut
reviewed by Pavel Stehule, Itagaki Takahiro, Robert Haas, Noah Misch
2011-02-08 23:04:18 +02:00
Simon Riggs
722bf7017b Extend ALTER TABLE to allow Foreign Keys to be added without initial validation.
FK constraints that are marked NOT VALID may later be VALIDATED, which uses an
ShareUpdateExclusiveLock on constraint table and RowShareLock on referenced
table. Significantly reduces lock strength and duration when adding FKs.
New state visible from psql.

Simon Riggs, with reviews from Marko Tiikkaja and Robert Haas
2011-02-08 12:23:20 +00:00
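A minimal sketch with hypothetical tables; the validation step runs later with the reduced lock strength described above:

    ALTER TABLE orders
        ADD CONSTRAINT orders_customer_fk
        FOREIGN KEY (customer_id) REFERENCES customers (id) NOT VALID;
    ALTER TABLE orders VALIDATE CONSTRAINT orders_customer_fk;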
Heikki Linnakangas
dafaa3efb7 Implement genuine serializable isolation level.
Until now, our Serializable mode has in fact been what's called Snapshot
Isolation, which allows some anomalies that could not occur in any
serialized ordering of the transactions. This patch fixes that using a
method called Serializable Snapshot Isolation, based on research papers by
Michael J. Cahill (see README-SSI for full references). In Serializable
Snapshot Isolation, transactions run like they do in Snapshot Isolation,
but a predicate lock manager observes the reads and writes performed and
aborts transactions if it detects that an anomaly might occur. This method
produces some false positives, ie. it sometimes aborts transactions even
though there is no anomaly.

To track reads we implement predicate locking, see storage/lmgr/predicate.c.
Whenever a tuple is read, a predicate lock is acquired on the tuple. Shared
memory is finite, so when a transaction takes many tuple-level locks on a
page, the locks are promoted to a single page-level lock, and further to a
single relation level lock if necessary. To lock key values with no matching
tuple, a sequential scan always takes a relation-level lock, and an index
scan acquires a page-level lock that covers the search key, whether or not
there are any matching keys at the moment.

A predicate lock doesn't conflict with any regular locks or with other
predicate locks in the normal sense. They're only used by the predicate lock
manager to detect the danger of anomalies. Only serializable transactions
participate in predicate locking, so there should be no extra overhead
for other transactions.

Predicate locks can't be released at commit, but must be remembered until
all the transactions that overlapped with it have completed. That means that
we need to remember an unbounded amount of predicate locks, so we apply a
lossy but conservative method of tracking locks for committed transactions.
If we run short of shared memory, we overflow to a new "pg_serial" SLRU
pool.

We don't currently allow Serializable transactions in Hot Standby mode.
That would be hard, because even read-only transactions can cause anomalies
that wouldn't otherwise occur.

Serializable isolation mode now means the new fully serializable level.
Repeatable Read gives you the old Snapshot Isolation level that we have
always had.

Kevin Grittner and Dan Ports, reviewed by Jeff Davis, Heikki Linnakangas and
Anssi Kääriäinen
2011-02-08 00:09:08 +02:00
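A hedged sketch of the user-visible change; clients using the new level should be prepared to retry on serialization failures (SQLSTATE 40001):

    -- now means true serializability (SSI)
    BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;
    -- ... reads and writes ...
    COMMIT;   -- may raise a serialization_failure that the client retries

    -- the old Snapshot Isolation behaviour remains available as
    BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
    COMMIT;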
Robert Haas
65377e0b9c Tighten ALTER FOREIGN TABLE .. SET DATA TYPE checks.
If the foreign table's rowtype is being used as the type of a column in
another table, we can't just up and change its data type.  This was
already checked for composite types and ordinary tables, but we
previously failed to enforce it for foreign tables.
2011-02-06 00:26:27 -05:00
Robert Haas
a40b1e0bf3 Restore ALTER TABLE .. ADD COLUMN w/DEFAULT restriction.
This reverts commit a06e41deeb of 2011-01-26.
Per discussion, this behavior is not wanted, as it would need to change if
we ever made composite types support DEFAULT.
2011-01-27 08:35:34 -05:00
Robert Haas
a06e41deeb Remove arbitrary ALTER TABLE .. ADD COLUMN restriction.
The previous coding prevented ALTER TABLE .. ADD COLUMN from being used
with a non-NULL default in situations where the table's rowtype was being
used elsewhere.  But this is a completely arbitrary restriction since
you could do the same operation in multiple steps (add the column, add
the default, update the table).

Inspired by a patch from Noah Misch, though I didn't use his code.
2011-01-26 06:37:08 -05:00
Tom Lane
88452d5ba6 Implement ALTER TABLE ADD UNIQUE/PRIMARY KEY USING INDEX.
This feature allows a unique or pkey constraint to be created using an
already-existing unique index.  While the constraint isn't very
functionally different from the bare index, it's nice to be able to do that
for documentation purposes.  The main advantage over just issuing a plain
ALTER TABLE ADD UNIQUE/PRIMARY KEY is that the index can be created with
CREATE INDEX CONCURRENTLY, so that there is not a long interval where the
table is locked against updates.

On the way, refactor some of the code in DefineIndex() and index_create()
so that we don't have to pass through those functions in order to create
the index constraint's catalog entries.  Also, in parse_utilcmd.c, pass
around the ParseState pointer in struct CreateStmtContext to save on
notation, and add error location pointers to some error reports that didn't
have one before.

Gurjeet Singh, reviewed by Steve Singer and Tom Lane
2011-01-25 15:43:05 -05:00
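A minimal sketch of the new syntax with hypothetical names; the index can be built with CREATE INDEX CONCURRENTLY beforehand:

    CREATE UNIQUE INDEX CONCURRENTLY accounts_id_idx ON accounts (id);
    ALTER TABLE accounts
        ADD CONSTRAINT accounts_pkey PRIMARY KEY USING INDEX accounts_id_idx;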
Robert Haas
6f59777c65 Code cleanup for assign_transaction_read_only.
As in commit fb4c5d2798 on 2011-01-21,
this avoids spurious debug messages and allows idempotent changes at
any time.  Along the way, make assign_XactIsoLevel allow idempotent
changes even when not within a subtransaction, to be consistent with
the new coding of assign_transaction_read_only and because there's
no compelling reason to do otherwise.

Kevin Grittner, with some adjustments.
2011-01-22 20:55:50 -05:00
Robert Haas
8ceb245680 Make ALTER TABLE revalidate uniqueness and exclusion constraints.
Failure to do so can lead to constraint violations.  This was broken by
commit 1ddc2703a9 on 2010-02-07, so
back-patch to 9.0.

Noah Misch.  Regression test by me.
2011-01-20 22:44:10 -05:00
Andrew Dunstan
b7a0b42641 Unbreak regression tests, apparently broken by commit 4c8e20f 2011-01-11 22:27:20 -05:00
Tom Lane
73912e7fbd Fix GIN to support null keys, empty and null items, and full index scans.
Per my recent proposal(s).  Null key datums can now be returned by
extractValue and extractQuery functions, and will be stored in the index.
Also, placeholder entries are made for indexable items that are NULL or
contain no keys according to extractValue.  This means that the index is
now always complete, having at least one entry for every indexed heap TID,
and so we can get rid of the prohibition on full-index scans.  A full-index
scan is implemented much the same way as partial-match scans were already:
we build a bitmap representing all the TIDs found in the index, and then
drive the results off that.

Also, introduce a concept of a "search mode" that can be requested by
extractQuery when the operator requires matching to empty items (this is
just as cheap as matching to a single key) or requires a full index scan
(which is not so cheap, but it sure beats failing or giving wrong answers).
The behavior remains backward compatible for opclasses that don't return
any null keys or request a non-default search mode.

Using these features, we can now make the GIN index opclass for anyarray
behave in a way that matches the actual anyarray operators for &&, <@, @>,
and = ... which it failed to do before in assorted corner cases.

This commit fixes the core GIN code and ginarrayprocs.c, updates the
documentation, and adds some simple regression test cases for the new
behaviors using the array operators.  The tsearch and contrib GIN opclass
support functions still need to be looked over and probably fixed.

Another thing I intend to fix separately is that this is pretty inefficient
for cases where more than one scan condition needs a full-index search:
we'll run duplicate GinScanEntrys, each one of which builds a large bitmap.
There is some existing logic to merge duplicate GinScanEntrys but it needs
refactoring to make it work for entries belonging to different scan keys.

Note that most of gin.h has been split out into a new file gin_private.h,
so that gin.h doesn't export anything that's not supposed to be used by GIN
opclasses or the rest of the backend.  I did quite a bit of other code
beautification work as well, mostly fixing comments and choosing more
appropriate names for things.
2011-01-07 19:16:24 -05:00
Itagaki Takahiro
a755ea33ae New system view pg_stat_replication displays activity of wal sender processes.
Itagaki Takahiro and Simon Riggs.
2011-01-07 20:35:38 +09:00
Tom Lane
6c596c29a3 Update sequence_1.out for recent changes in sequence regression test. 2011-01-06 10:58:32 -05:00
Magnus Hagander
40d9e94bd7 Add views and functions to monitor hot standby query conflicts
Add the view pg_stat_database_conflicts and a column to pg_stat_database,
and the underlying functions to provide the information.
2011-01-03 12:46:03 +01:00
Peter Eisentraut
39b8843296 Implement remaining fields of information_schema.sequences view
Add new function pg_sequence_parameters that returns a sequence's start,
minimum, maximum, increment, and cycle values, and use that in the view.
(bug #5662; design suggestion by Tom Lane)

Also slightly adjust the view's column order and permissions after review of
SQL standard.
2011-01-02 15:15:21 +02:00
Robert Haas
0d692a0dc9 Basic foreign table support.
Foreign tables are a core component of SQL/MED.  This commit does
not provide a working SQL/MED infrastructure, because foreign tables
cannot yet be queried.  Support for foreign table scans will need to
be added in a future patch.  However, this patch creates the necessary
system catalog structure, syntax support, and support for ancillary
operations such as COMMENT and SECURITY LABEL.

Shigeru Hanada, heavily revised by Robert Haas
2011-01-01 23:48:11 -05:00
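A hedged sketch of the syntax this makes possible (hypothetical wrapper, server, and table; such tables cannot be scanned yet, per the message above):

    CREATE FOREIGN DATA WRAPPER dummy_fdw;
    CREATE SERVER film_server FOREIGN DATA WRAPPER dummy_fdw;
    CREATE FOREIGN TABLE films (
        code  char(5),
        title text
    ) SERVER film_server;
    COMMENT ON FOREIGN TABLE films IS 'example foreign table';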
Peter Eisentraut
6a208aa404 Allow casting a table's row type to the table's supertype if it's a typed table
This is analogous to the existing facility that allows casting a row type to a
supertable's row type.
2011-01-01 23:04:14 +02:00
Magnus Hagander
9b8aff8c19 Add REPLICATION privilege for ROLEs
This privilege, rather than superuser, is now required to do Streaming
Replication, making it possible to set up an SR slave that doesn't
have write permissions on the master.

Superuser privileges do NOT override this check, so in order to
use the default superuser account for replication it must be
explicitly granted the REPLICATION permissions. This is a
backwards-incompatible change, in the interest of higher default security.
2010-12-29 11:05:03 +01:00
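A minimal sketch; as noted above, even a superuser role must have the attribute granted explicitly to use it for replication:

    CREATE ROLE standby_user REPLICATION LOGIN PASSWORD 'secret';
    -- or grant it to an existing role, including the default superuser
    ALTER ROLE postgres REPLICATION;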
Tom Lane
554506871b KNNGIST, otherwise known as order-by-operator support for GIST.
This commit represents a rather heavily editorialized version of
Teodor's builtin_knngist_itself-0.8.2 and builtin_knngist_proc-0.8.1
patches.  I redid the opclass API to add a separate Distance method
instead of turning the Consistent method into an illogical mess,
fixed some bit-rot in the rbtree interfaces, and generally worked over
the code style and comments.

There's still no non-code documentation to speak of, but I'll work on
that separately.  Some contrib-module changes are also yet to come
(right now, point <-> point is the only KNN-ified operator).

Teodor Sigaev and Tom Lane
2010-12-03 20:53:29 -05:00
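A hedged sketch of an order-by-operator scan; per the message above, point <-> point is the only KNN-ified operator so far, and the names are hypothetical:

    CREATE TABLE places (name text, loc point);
    CREATE INDEX places_loc_gist ON places USING gist (loc);
    -- nearest-neighbour search driven by the index's new Distance support
    SELECT name FROM places ORDER BY loc <-> point '(0,0)' LIMIT 10;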
Tom Lane
225f0aa3df Prevent inlining a SQL function with multiple OUT parameters.
There were corner cases in which the planner would attempt to inline such
a function, which would result in a failure at runtime due to loss of
information about exactly what the result record type is.  Fix by disabling
inlining when the function's recorded result type is RECORD.  There might
be some sub-cases where inlining could still be allowed, but this is a
simple and backpatchable fix, so leave refinements for another day.
Per bug #5777 from Nate Carson.

Back-patch to all supported branches.  8.1 happens to avoid a core-dump
here, but it still does the wrong thing.
2010-12-01 00:53:18 -05:00
Robert Haas
55109313f9 Add more ALTER <object> .. SET SCHEMA commands.
This adds support for changing the schema of a conversion, operator,
operator class, operator family, text search configuration, text search
dictionary, text search parser, or text search template.

Dimitri Fontaine, with assorted corrections and other kibitzing.
2010-11-26 17:31:54 -05:00
Tom Lane
725d52d0c2 Create the system catalog infrastructure needed for KNNGIST.
This commit adds columns amoppurpose and amopsortfamily to pg_amop, and
column amcanorderbyop to pg_am.  For the moment all the entries in
amcanorderbyop are "false", since the underlying support isn't there yet.

Also, extend the CREATE OPERATOR CLASS/ALTER OPERATOR FAMILY commands with
[ FOR SEARCH | FOR ORDER BY sort_operator_family ] clauses to allow the new
columns of pg_amop to be populated, and create pg_dump support for dumping
that information.

I also added some documentation, although it's perhaps a bit premature
given that the feature doesn't do anything useful yet.

Teodor Sigaev, Robert Haas, Tom Lane
2010-11-24 14:22:17 -05:00
Peter Eisentraut
f2a4278330 Propagate ALTER TYPE operations to typed tables
This adds RESTRICT/CASCADE flags to ALTER TYPE ... ADD/DROP/ALTER/
RENAME ATTRIBUTE to control whether to alter typed tables as well.
2010-11-23 22:50:17 +02:00
Peter Eisentraut
fc946c39ae Remove useless whitespace at end of lines 2010-11-23 22:34:55 +02:00