libxml reports some errors (like invalid xmlns attributes) via the error
handler hook, but still returns a success indicator to the library caller.
This causes us to miss some errors that are important to report. Since the
"generic" error handler hook doesn't know whether the message it's getting
is for an error, warning, or notice, stop using that and instead start
using the "structured" error handler hook, which gets enough information
to be useful.
While at it, arrange to save and restore the error handler hook setting in
each libxml-using function, rather than assuming we can set and forget the
hook. This should improve the odds of working nicely with third-party
libraries that also use libxml.
In passing, volatile-ize some local variables that get modified within
PG_TRY blocks. I noticed this while testing with an older gcc version
than I'd previously tried to compile xml.c with.
Florian Pflug and Tom Lane, with extensive review/testing by Noah Misch
This is more SQL-spec-compliant, more easily extensible, and better
performing than the old method of inventing special variables.
Pavel Stehule, reviewed by Shigeru Hanada and David Wheeler
Regular aggregate functions in combination with, or within the arguments
of, window functions are OK per spec; they have the semantics that the
aggregate output rows are computed and then we run the window functions
over that row set. (Thus, this combination is not really useful unless
there's a GROUP BY so that more than one aggregate output row is possible.)
The case without GROUP BY could fail, as recently reported by Jeff Davis,
because sloppy construction of the Agg node's targetlist resulted in extra
references to possibly-ungrouped Vars appearing outside the aggregate
function calls themselves. See the added regression test case for an
example.
Fixing this requires modifying the API of flatten_tlist and its underlying
function pull_var_clause. I chose to make pull_var_clause's API for
aggregates identical to what it was already doing for placeholders, since
the useful behaviors turn out to be the same (error, report node as-is, or
recurse into it). I also tightened the error checking in this area a bit:
if it was ever valid to see an uplevel Var, Aggref, or PlaceHolderVar here,
that was a long time ago, so complain instead of ignoring them.
Backpatch into 9.1. The failure exists in 8.4 and 9.0 as well, but seeing
that it only occurs in a basically-useless corner case, it doesn't seem
worth the risks of changing a function API in a minor release. There might
be third-party code using pull_var_clause.
If there's a dangerous structure T0 ---> T1 ---> T2, and T2 commits first,
we need to abort something. If T2 commits before both conflicts appear,
then it should be caught by OnConflict_CheckForSerializationFailure. If
both conflicts appear before T2 commits, it should be caught by
PreCommit_CheckForSerializationFailure. But that is actually run when
T2 *prepares*. Fix that in OnConflict_CheckForSerializationFailure, by
treating a prepared T2 as if it committed already.
This is mostly a problem for prepared transactions, which remain in the
prepared state for some time, but it also affects regular transactions,
which pass through the prepared state in the SSI code for a short moment
when they're committed.
Kevin Grittner and Dan Ports
This means that they can initially be added to a large existing table
without checking its initial contents, but new tuples must comply with
them; a separate pass invoked by ALTER TABLE / VALIDATE can verify
existing data and ensure it complies with the constraint, at which point
it is marked validated and becomes a normal part of the table ecosystem.
A non-validated CHECK constraint is ignored in the planner for
constraint_exclusion purposes; when validated, cached plans are
recomputed so that partitioning starts working right away.
This patch also enables domains to have unvalidated CHECK constraints
attached to them as well by way of ALTER DOMAIN / ADD CONSTRAINT / NOT
VALID, which can later be validated with ALTER DOMAIN / VALIDATE
CONSTRAINT.
Thanks to Thom Brown, Dean Rasheed and Jaime Casanova for the various
reviews, and Robert Haas for documentation wording improvement
suggestions.
This patch was sponsored by Enova Financial.
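A usage sketch (table and constraint names here are hypothetical):
    ALTER TABLE big_table
        ADD CONSTRAINT positive_qty CHECK (qty > 0) NOT VALID;
    -- existing rows are not checked; new and updated rows must comply
    ALTER TABLE big_table VALIDATE CONSTRAINT positive_qty;
    -- scans the table; on success the constraint becomes fully valid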
Such a condition is unsatisfiable in combination with any other type of
btree-indexable condition (since we assume btree operators are always
strict). 8.3 and 8.4 had an explicit test for this, which I removed in
commit 29c4ad9829, mistakenly thinking that
the case would be subsumed by the more general handling of IS (NOT) NULL
added in that patch. Put it back, and improve the comments about it, and
add a regression test case.
Per bug #6079 from Renat Nasyrov, and analysis by Dean Rasheed.
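For illustration, the sort of query affected (hypothetical table):
    -- with a btree index on x, this combination is provably
    -- unsatisfiable: x IS NULL excludes any row that x < 42 could match
    SELECT * FROM t WHERE x IS NULL AND x < 42;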
already been marked as PREPARED cannot be killed. Kill the current
transaction instead.
One of the prepared_xacts regression tests actually hits this bug. I
removed the anomaly from the duplicate-gids test so that it fails in the
intended way, and added a new test to check serialization failures with
a prepared transaction.
Dan Ports
When recursing after an optimization in pull_up_sublinks_qual_recurse, the
available_rels value passed down must include only the relations that are
in the righthand side of the new SEMI or ANTI join; it's incorrect to pull
up a sub-select that refers to other relations, as seen in the added test
case. Per report from BangarRaju Vadapalli.
While at it, rethink the idea of recursing below a NOT EXISTS. That is
essentially the same situation as pulling up ANY/EXISTS sub-selects that
are in the ON clause of an outer join, and it has the same disadvantage:
we'd force the two joins to be evaluated according to the syntactic nesting
order, because the lower join will most likely not be able to commute with
the ANTI join. That could result in having to form a rather large join
product, whereas the handling of a correlated subselect is not quite that
dumb. So until we can handle those cases better, #ifdef NOT_USED that
case. (I think it's okay to pull up in the EXISTS/ANY cases, because SEMI
joins aren't so inflexible about ordering.)
Back-patch to 8.4, same as for previous patch in this area. Fortunately
that patch hadn't made it into any shipped releases yet.
This use-case was broken in commit 529cb267a6
of 2010-10-21, in which I commented "For the moment, we just forbid such
matching. We might later wish to insert an automatic downcast to the
underlying array type, but such a change should also change matching of
domains to ANYELEMENT for consistency". We still lack consensus about what
to do with ANYELEMENT; but not matching ANYARRAY is a clear loss of
functionality compared to prior releases, so let's go ahead and make that
happen. Per complaint from Regina Obe and extensive subsequent discussion.
Since the original implementation of CTEs only allowed them in SELECT
queries, the rule rewriter did not expect to find any CTEs in statements
being rewritten by ON INSERT/UPDATE/DELETE rules. We had dealt with this
to some extent but the code was still several bricks shy of a load, as
illustrated in bug #6051 from Jehan-Guillaume de Rorthais.
In particular, we have to be able to copy CTEs from the original query's
cteList into that of a rule action, in case the rule action references the
CTE (which it pretty much always will). This also implies we were doing
things in the wrong order in RewriteQuery: we have to recursively rewrite
the CTE queries before expanding the main query, so that we have the
rewritten queries available to copy.
There are unpleasant limitations yet to resolve here, but at least we now
throw understandable FEATURE_NOT_SUPPORTED errors for them instead of just
failing with bizarre implementation-dependent errors. In particular, we
can't handle propagating the same CTE into multiple post-rewrite queries
(because then the CTE would be evaluated multiple times), and we can't cope
with conflicts between CTE names in the original query and in the rule
actions.
The existence of a btree opclass accepting composite types caused us to
assume that every composite type is sortable. This isn't true of course;
we need to check if the column types are all sortable. There was logic
for this for the case of array comparison (ie, check that the element
type is sortable), but we missed the point for rowtypes. Per Teodor's
report of an ANALYZE failure for an unsortable composite type.
Rather than just add some more ad-hoc logic for this, I moved knowledge of
the issue into typcache.c. The typcache will now report array_eq,
record_cmp, and friends as usable operators only if the array or composite
type will work with those functions.
Unfortunately we don't have enough info to do this for anonymous RECORD
types; in that case, just assume it will work, and take the runtime failure
as before if it doesn't.
This patch might be a candidate for back-patching at some point, but
given the lack of complaints from the field, I'd rather just test it in
HEAD for now.
Note: most of the places touched in this patch will need further work
when we get around to supporting hashing of record types.
We need this now because we allow domains over arrays, and we'll probably
allow domains over composites pretty soon, which makes the problem even
more obvious.
Although domains over arrays also exist in previous versions, this does not
need to be back-patched, because the coding used in older versions
successfully "looked through" domains over arrays. The problem is exposed
by not treating a domain as having a typelem.
Problem identified by Noah Misch, though I did not use his patch, since
it would require additional work to handle domains over composites that
way. This approach is more future-proof.
Teach the program and script to deal with OID-array referencing columns,
which we now have several of. Also, modify the recommended usage process
to specify that the program should be run against the regression database
rather than template1. This lets it find numerous joins that cannot be
found in template1 because the relevant catalogs are entirely empty.
Together these changes add seventeen formerly-missed cases to the oidjoins
regression test.
This test should now work in any database with UTF8 encoding, regardless
of the database's default locale. The former restriction was really
"doesn't work if default locale is C", and that was because of not handling
mbstowcs/wcstombs correctly.
This syntax allows a standalone table to be made into a typed table,
or a typed table to be made standalone. This is possibly a mildly
useful feature in its own right, but the real motivation for this
change is that we need it to make pg_upgrade work with typed tables.
This doesn't actually fix that problem, but it's necessary
infrastructure.
Noah Misch
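A sketch of the new syntax (hypothetical names; the table's columns must
match the composite type exactly):
    CREATE TYPE person_t AS (name text, age integer);
    CREATE TABLE people (name text, age integer);
    ALTER TABLE people OF person_t;   -- standalone table becomes typed
    ALTER TABLE people NOT OF;        -- typed table becomes standalone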
Per spec we ought to apply select_common_collation() across the expressions
in each column of the VALUES table. The original coding was just taking
the first row and assuming it was representative.
This patch adds a field to struct RangeTblEntry to carry the resolved
collations, so initdb is forced for changes in stored rule representation.
This also ensures that we take a relation lock on the composite type when
creating a typed table, which is necessary to prevent the composite type
and the typed table from getting out of step in the face of concurrent
DDL.
Noah Misch, with some changes.
This allows the usual rules for assigning a collation to a local variable
to be overridden. Per discussion, it seems appropriate to support this
rather than forcing all local variables to have the argument-derived
collation.
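A minimal sketch (hypothetical function; assumes the "C" collation is
usable for text on the platform):
    CREATE FUNCTION norm(s text) RETURNS text AS $$
    DECLARE
        t text COLLATE "C";  -- overrides the collation t would otherwise
                             -- derive from the input parameter s
    BEGIN
        t := s;
        RETURN t;
    END
    $$ LANGUAGE plpgsql;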
Instead of using slightly-too-clever heuristics to decide when we must
create a TOAST table, just check whether one is needed every time the
table is altered. Checking whether a toast table is needed is cheap
enough that we needn't worry about doing it on every ALTER TABLE command,
and the previous coding is apparently prone to accidental breakage:
commit 04e17bae50 broke ALTER TABLE ..
SET STORAGE, which moved some actions from AT_PASS_COL_ATTRS to
AT_PASS_MISC, and commit 6c57239985 broke
ALTER TABLE .. ADD COLUMN by changing the way that adding columns
recurses into child tables.
Noah Misch, with one comment change by me
If the referencing and referenced columns have different collations,
the parser will be unable to resolve which collation to use unless it's
helped out in this way. The effects are sometimes masked, if we end up
using a non-collation-sensitive plan; but if we do use a mergejoin
we'll see a failure, as recently noted by Robert Haas.
The SQL spec states that the referenced column's collation should be used
to resolve RI checks, so that's what we do. Note however that we currently
don't append a COLLATE clause when writing a query that examines only the
referencing column. If we ever support collations that have varying
notions of equality, that will have to be changed. For the moment, though,
it's preferable to leave it off so that we can use a normal index on the
referencing column.
This involves getting the character classification and case-folding
functions in the regex library to use the collations infrastructure.
Most of this work had been done already in connection with the upper/lower
and LIKE logic, so it was a simple matter of transposition.
While at it, split out these functions into a separate source file
regc_pg_locale.c, so that they can be correctly labeled with the Postgres
project's license rather than the Scriptics license. These functions are
100% Postgres-written code whereas what remains in regc_locale.c is still
mostly not ours, so lumping them both under the same copyright notice was
getting more and more misleading.
Tweak the test so that it does not depend on the platform using ".utf8" as
the extension signifying that a locale uses UTF8 encoding. For the most
part this just requires using the abbreviated collation names "en_US" etc,
though I had to work a bit harder on the collation creation tests.
This opens the door to using the test on platforms that spell locales
differently, for example ".utf-8" or ".UTF-8". Also, the test is now
somewhat useful with server encodings other than UTF8; though depending on
which encoding is selected, different subsets of it will fail for lack of
character set support.
Remove crude hack that tried to propagate collation through a
function-returning-record, ie, from the function's arguments to individual
fields selected from its result record. That is just plain inconsistent,
because the function result is composite and cannot have a collation;
and there's no hope of making this kind of action-at-a-distance work
consistently. Adjust regression test cases that expected this to happen.
Meanwhile, the behavior of casting to a domain with a declared collation
stays the same as it was, since that seemed to be the consensus.
The previous coding set attinhcount too high in some cases, resulting in
an undumpable, undroppable column. Per bug #5856, reported by Naoya
Anzai. See also commit 31b6fc06d8, which
fixes a similar bug in ALTER TABLE .. ADD CONSTRAINT.
Patch by Noah Misch.
This mostly involves making it work with the objectaddress.c framework,
which does most of the heavy lifting. In that vein, change
GetForeignDataWrapperOidByName to get_foreign_data_wrapper_oid and
GetForeignServerOidByName to get_foreign_server_oid, to match the
pattern we use for other object types.
Robert Haas and Shigeru Hanada
Eventually we might be able to allow that, but it's not clear how many
places need to be fixed to prevent infinite recursion when there's a direct
or indirect inclusion of a rowtype in itself. One such place is
CheckAttributeType(), which will recurse to stack overflow in cases such as
those exhibited in bug #5950 from Alex Perepelica. If we were sure it was
the only such place, we could easily modify the code added by this patch to
stop the recursion without a complaint ... but it probably isn't the only
such place. Hence, throw error until such time as someone is excited
enough about this type of usage to put work into making it safe.
Back-patch as far as 8.3. 8.2 doesn't have the recursive call in
CheckAttributeType in the first place, so I see no need to add code there
in the absence of clear evidence of a problem elsewhere.
Make plpgsql treat the input collation as a polymorphism variable, so
that we cache separate plans for each input collation that's used in a
particular session, as per recent discussion. Propagate the input
collation to all collatable input parameters.
I chose to also propagate the input collation to all declared variables of
collatable types, which is a bit more debatable but seems to be necessary
for non-astonishing behavior. (Copying a parameter into a separate local
variable shouldn't result in a change of behavior, for example.) There is
enough infrastructure here to support declaring a collation for each local
variable to override that default, but I thought we should wait to see what
the field demand is before adding such a feature.
In passing, remove exec_get_rec_fieldtype(), which wasn't used anywhere.
Documentation patch to follow.
Ensure that parameter symbols receive collation from the function's
resolved input collation, and fix inlining to behave properly.
BTW, this commit lays about 90% of the infrastructure needed to support
use of argument names in SQL functions. Parsing of parameters is now
done via the parser-hook infrastructure ... we'd just need to supply
a column-ref hook ...
Ensure that COLLATE at the top level of an index expression is treated the
same as a grammatically separate COLLATE. Fix bogus reverse-parsing logic
in pg_get_indexdef.
This restores a parse error that was thrown (though only in the ORDER BY
case) by the original collation patch. I had removed it in my recent
revisions because it was thrown at a place where collations now haven't
been computed yet; but I thought of another way to handle it.
Throwing the error at parse time, rather than leaving it to be done at
runtime, is good because a syntax error pointer is helpful for localizing
the problem. We can reasonably assume that the comparison function for a
collatable datatype will complain if it doesn't have a collation to use.
Now the planner might choose to implement GROUP or DISTINCT via hashing,
in which case no runtime error would actually occur, but it seems better
to throw error consistently rather than let the error depend on what the
planner chooses to do. Another possible objection is that the user might
specify a nondefault sort operator that doesn't care about collation
... but that's surely an uncommon usage, and it wouldn't hurt him to throw
in a COLLATE clause anyway. This change also makes the ORDER BY/GROUP
BY/DISTINCT case more consistent with the UNION/INTERSECT/EXCEPT case,
which was already coded to throw this error even though the same objections
could be raised there.
Instead of playing cute games with pathkeys, just build a direct
representation of the intended sub-select, and feed it through
query_planner to get a Path for the index access. This is a bit slower
than 9.1's previous method, since we'll duplicate most of the overhead of
query_planner; but since the whole optimization only applies to rather
simple single-table queries, that probably won't be much of a problem in
practice. The advantage is that we get to do the right thing when there's
a partial index that needs the implicit IS NOT NULL clause to be usable.
Also, although this makes planagg.c be a bit more closely tied to the
ordering of operations in grouping_planner, we can get rid of some coupling
to lower-level parts of the planner. Per complaint from Marti Raudsepp.
All expression nodes now have an explicit output-collation field, unless
they are known to only return a noncollatable data type (such as boolean
or record). Also, nodes that can invoke collation-aware functions store
a separate field that is the collation value to pass to the function.
This avoids confusion that arises when a function has collatable inputs
and noncollatable output type, or vice versa.
Also, replace the parser's on-the-fly collation assignment method with
a post-pass over the completed expression tree. This allows us to use
a more complex (and hopefully more nearly spec-compliant) assignment
rule without paying for it in extra storage in every expression node.
Fix assorted bugs in the planner's handling of collations by making
collation one of the defining properties of an EquivalenceClass and
by converting CollateExprs into discardable RelabelType nodes during
expression preprocessing.
We have a test that verifies that max(anyarray) will cope if the array
column elements aren't all the same array type. However, it's now possible
for that to produce a collation-related error message instead of the
expected one, if the first two column elements happen to be of the same
type and it's one that expects to be given collation info. Tweak the test
to ensure this doesn't happen. Per buildfarm member pika.
It is possible that an expression ends up with a collatable type but
without a collation. CREATE TABLE AS could then create a table based
on that. But such a column cannot be dumped with valid SQL syntax, so
we disallow creating such a column.
Per test report from Noah Misch
Instead of manually maintaining the "implementation of XXX operator"
comments in pg_proc.h, delete all those entries and let initdb create
them via a join. To let initdb figure out which name to use when there
is a conflict, change the comments for deprecated operators to say they
are deprecated --- which seems like a good thing to do anyway.
Historically, we've not had separate comments for built-in pg_operator
entries, but relied on the comments for the underlying functions. The
trouble with this approach is that there isn't much of anything to suggest
to users that they'd be better off using the operators instead. So, move
all the relevant comments into pg_operator, and give each underlying
function a comment that just says "implementation of XXX operator".
There are only about half a dozen cases where it seems reasonable to use
the underlying function interchangeably with the operator; in these cases
I left the same comment in place on the function as on the operator.
While at it, establish a policy that every built-in function and operator
entry should have a comment: there are now queries in the opr_sanity
regression test that will complain if one doesn't. This only required
adding a dozen or two more entries than would have been there anyway.
I also spent some time trying to eliminate gratuitous inconsistencies in
the style of the comments, though it's hopeless to suppose that more won't
creep in soon enough.
Per my proposal of 2010-10-15.
The originally committed patch for modifying CTEs didn't interact well
with EXPLAIN, as noted by myself, and also had corner-case problems with
triggers, as noted by Dean Rasheed. Those problems show it is really not
practical for ExecutorEnd to call any user-defined code; so split the
cleanup duties out into a new function ExecutorFinish, which must be called
between the last ExecutorRun call and ExecutorEnd. Some Asserts have been
added to these functions to help verify correct usage.
It is no longer necessary for callers of the executor to call
AfterTriggerBeginQuery/AfterTriggerEndQuery for themselves, as this is now
done by ExecutorStart/ExecutorFinish respectively. If you really need to
suppress that and do it for yourself, pass EXEC_FLAG_SKIP_TRIGGERS to
ExecutorStart.
Also, refactor portal commit processing to allow for the possibility that
PortalDrop will invoke user-defined code. I think this is not actually
necessary just yet, since the portal-execution-strategy logic forces any
non-pure-SELECT query to be run to completion before we will consider
committing. But it seems like good future-proofing.
We need ExecutorEnd to run the ModifyTable nodes to completion in
reverse order of initialization, not forward order. Easily done
by constructing the list back-to-front.
This patch implements data-modifying WITH queries according to the
semantics that the updates all happen with the same command counter value,
and in an unspecified order. Therefore one WITH clause can't see the
effects of another, nor can the outer query see the effects other than
through the RETURNING values. And attempts to do conflicting updates will
have unpredictable results. We'll need to document all that.
This commit just fixes the code; documentation updates are waiting on
author.
Marko Tiikkaja and Hitoshi Harada
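A sketch of the feature (hypothetical tables):
    WITH moved AS (
        DELETE FROM active_items WHERE done RETURNING *
    )
    INSERT INTO archived_items SELECT * FROM moved;
Both the DELETE and the INSERT run with the same command counter value, so
neither sees the other's effects except through the RETURNING rows.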
File encodings can be specified separately from client encoding.
If not specified, client encoding is used for backward compatibility.
Cases where the file encoding doesn't match the client encoding are slower
than matched cases, because we don't have conversion procs for other
encodings. Performance improvement is left as future work.
Original patch by Hitoshi Harada, and modified by me.
Add a fdwhandler column to pg_foreign_data_wrapper, plus HANDLER options
in the CREATE FOREIGN DATA WRAPPER and ALTER FOREIGN DATA WRAPPER commands,
plus pg_dump support for same. Also invent a new pseudotype fdw_handler
with properties similar to language_handler.
This is split out of the "FDW API" patch for ease of review; it's all stuff
we will certainly need, regardless of any other details of the FDW API.
FDW handler functions will not actually get called yet.
In passing, fix some omissions and infelicities in foreigncmds.c.
Shigeru Hanada, Jan Urbanski, Heikki Linnakangas
They share the same locking namespace with the existing session-level
advisory locks, but they are automatically released at the end of the
current transaction and cannot be released explicitly via unlock
functions.
Marko Tiikkaja, reviewed by me.
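A usage sketch:
    BEGIN;
    SELECT pg_advisory_xact_lock(12345);  -- same lock space as the
                                          -- session-level pg_advisory_lock(12345)
    -- ... do work ...
    COMMIT;  -- the lock is released here automatically; there is no
             -- transaction-level unlock function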
(I'm not entirely sure that we've finished bikeshedding the syntax details,
but the functionality seems OK.)
Pavel Stehule, reviewed by Stephen Frost and Tom Lane
- collowner field
- CREATE COLLATION
- ALTER COLLATION
- DROP COLLATION
- COMMENT ON COLLATION
- integration with extensions
- pg_dump support for the above
- dependency management
- psql tab completion
- psql \dO command
This adds collation support for columns and domains, a COLLATE clause
to override it per expression, and B-tree index support.
Peter Eisentraut
reviewed by Pavel Stehule, Itagaki Takahiro, Robert Haas, Noah Misch
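A sketch of the new capabilities (the locale name is platform-dependent
and hypothetical here):
    CREATE COLLATION german (LOCALE = 'de_DE.utf8');
    CREATE TABLE greetings (phrase text COLLATE german);
    SELECT phrase FROM greetings
        ORDER BY phrase COLLATE "C";  -- per-expression override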
FK constraints that are marked NOT VALID may later be VALIDATED, which uses an
ShareUpdateExclusiveLock on constraint table and RowShareLock on referenced
table. Significantly reduces lock strength and duration when adding FKs.
New state visible from psql.
Simon Riggs, with reviews from Marko Tiikkaja and Robert Haas
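A usage sketch (hypothetical tables):
    ALTER TABLE orders
        ADD CONSTRAINT orders_cust_fk FOREIGN KEY (cust_id)
        REFERENCES customers (id) NOT VALID;  -- skips the initial check
    ALTER TABLE orders VALIDATE CONSTRAINT orders_cust_fk;  -- weaker locks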
Until now, our Serializable mode has in fact been what's called Snapshot
Isolation, which allows some anomalies that could not occur in any
serialized ordering of the transactions. This patch fixes that using a
method called Serializable Snapshot Isolation, based on research papers by
Michael J. Cahill (see README-SSI for full references). In Serializable
Snapshot Isolation, transactions run like they do in Snapshot Isolation,
but a predicate lock manager observes the reads and writes performed and
aborts transactions if it detects that an anomaly might occur. This method
produces some false positives, ie. it sometimes aborts transactions even
though there is no anomaly.
To track reads we implement predicate locking, see storage/lmgr/predicate.c.
Whenever a tuple is read, a predicate lock is acquired on the tuple. Shared
memory is finite, so when a transaction takes many tuple-level locks on a
page, the locks are promoted to a single page-level lock, and further to a
single relation level lock if necessary. To lock key values with no matching
tuple, a sequential scan always takes a relation-level lock, and an index
scan acquires a page-level lock that covers the search key, whether or not
there are any matching keys at the moment.
A predicate lock doesn't conflict with any regular locks or with other
predicate locks in the normal sense. They're only used by the predicate lock
manager to detect the danger of anomalies. Only serializable transactions
participate in predicate locking, so there should be no extra overhead
for other transactions.
Predicate locks can't be released at commit, but must be remembered until
all the transactions that overlapped with the holding transaction have
completed. That means we need to remember an unbounded number of predicate
locks, so we apply a
lossy but conservative method of tracking locks for committed transactions.
If we run short of shared memory, we overflow to a new "pg_serial" SLRU
pool.
We don't currently allow Serializable transactions in Hot Standby mode.
That would be hard, because even read-only transactions can cause anomalies
that wouldn't otherwise occur.
Serializable isolation mode now means the new fully serializable level.
Repeatable Read gives you the old Snapshot Isolation level that we have
always had.
Kevin Grittner and Dan Ports, reviewed by Jeff Davis, Heikki Linnakangas and
Anssi Kääriäinen
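In terms of user-visible syntax, the change is only in which behavior each
level name selects:
    BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;    -- new SSI behavior
    BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ; -- old snapshot
                                                       -- isolation behavior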
If the foreign table's rowtype is being used as the type of a column in
another table, we can't just up and change its data type. This was
already checked for composite types and ordinary tables, but we
previously failed to enforce it for foreign tables.
This reverts commit a06e41deeb of 2011-01-26.
Per discussion, this behavior is not wanted, as it would need to change if
we ever made composite types support DEFAULT.
The previous coding prevented ALTER TABLE .. ADD COLUMN from being used
with a non-NULL default in situations where the table's rowtype was being
used elsewhere. But this is a completely arbitrary restriction since
you could do the same operation in multiple steps (add the column, add
the default, update the table).
Inspired by a patch from Noah Misch, though I didn't use his code.
This feature allows a unique or pkey constraint to be created using an
already-existing unique index. While the constraint isn't very
functionally different from the bare index, it's nice to be able to do that
for documentation purposes. The main advantage over just issuing a plain
ALTER TABLE ADD UNIQUE/PRIMARY KEY is that the index can be created with
CREATE INDEX CONCURRENTLY, so that there is not a long interval where the
table is locked against updates.
On the way, refactor some of the code in DefineIndex() and index_create()
so that we don't have to pass through those functions in order to create
the index constraint's catalog entries. Also, in parse_utilcmd.c, pass
around the ParseState pointer in struct CreateStmtContext to save on
notation, and add error location pointers to some error reports that didn't
have one before.
Gurjeet Singh, reviewed by Steve Singer and Tom Lane
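A usage sketch (hypothetical names):
    CREATE UNIQUE INDEX CONCURRENTLY items_sku_idx ON items (sku);
    ALTER TABLE items
        ADD CONSTRAINT items_sku_key UNIQUE USING INDEX items_sku_idx;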
As in commit fb4c5d2798 on 2011-01-21,
this avoids spurious debug messages and allows idempotent changes at
any time. Along the way, make assign_XactIsoLevel allow idempotent
changes even when not within a subtransaction, to be consistent with
the new coding of assign_transaction_read_only and because there's
no compelling reason to do otherwise.
Kevin Grittner, with some adjustments.
Failure to do so can lead to constraint violations. This was broken by
commit 1ddc2703a9 on 2010-02-07, so
back-patch to 9.0.
Noah Misch. Regression test by me.
Per my recent proposal(s). Null key datums can now be returned by
extractValue and extractQuery functions, and will be stored in the index.
Also, placeholder entries are made for indexable items that are NULL or
contain no keys according to extractValue. This means that the index is
now always complete, having at least one entry for every indexed heap TID,
and so we can get rid of the prohibition on full-index scans. A full-index
scan is implemented much the same way as partial-match scans were already:
we build a bitmap representing all the TIDs found in the index, and then
drive the results off that.
Also, introduce a concept of a "search mode" that can be requested by
extractQuery when the operator requires matching to empty items (this is
just as cheap as matching to a single key) or requires a full index scan
(which is not so cheap, but it sure beats failing or giving wrong answers).
The behavior remains backward compatible for opclasses that don't return
any null keys or request a non-default search mode.
Using these features, we can now make the GIN index opclass for anyarray
behave in a way that matches the actual anyarray operators for &&, <@, @>,
and = ... which it failed to do before in assorted corner cases.
This commit fixes the core GIN code and ginarrayprocs.c, updates the
documentation, and adds some simple regression test cases for the new
behaviors using the array operators. The tsearch and contrib GIN opclass
support functions still need to be looked over and probably fixed.
Another thing I intend to fix separately is that this is pretty inefficient
for cases where more than one scan condition needs a full-index search:
we'll run duplicate GinScanEntrys, each one of which builds a large bitmap.
There is some existing logic to merge duplicate GinScanEntrys but it needs
refactoring to make it work for entries belonging to different scan keys.
Note that most of gin.h has been split out into a new file gin_private.h,
so that gin.h doesn't export anything that's not supposed to be used by GIN
opclasses or the rest of the backend. I did quite a bit of other code
beautification work as well, mostly fixing comments and choosing more
appropriate names for things.
Add new function pg_sequence_parameters that returns a sequence's start,
minimum, maximum, increment, and cycle values, and use that in the view.
(bug #5662; design suggestion by Tom Lane)
Also slightly adjust the view's column order and permissions after review
of the SQL standard.
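For example (hypothetical sequence name):
    SELECT * FROM pg_sequence_parameters('myseq'::regclass);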
Foreign tables are a core component of SQL/MED. This commit does
not provide a working SQL/MED infrastructure, because foreign tables
cannot yet be queried. Support for foreign table scans will need to
be added in a future patch. However, this patch creates the necessary
system catalog structure, syntax support, and support for ancillary
operations such as COMMENT and SECURITY LABEL.
Shigeru Hanada, heavily revised by Robert Haas
This commit represents a rather heavily editorialized version of
Teodor's builtin_knngist_itself-0.8.2 and builtin_knngist_proc-0.8.1
patches. I redid the opclass API to add a separate Distance method
instead of turning the Consistent method into an illogical mess,
fixed some bit-rot in the rbtree interfaces, and generally worked over
the code style and comments.
There's still no non-code documentation to speak of, but I'll work on
that separately. Some contrib-module changes are also yet to come
(right now, point <-> point is the only KNN-ified operator).
Teodor Sigaev and Tom Lane
There were corner cases in which the planner would attempt to inline such
a function, which would result in a failure at runtime due to loss of
information about exactly what the result record type is. Fix by disabling
inlining when the function's recorded result type is RECORD. There might
be some sub-cases where inlining could still be allowed, but this is a
simple and backpatchable fix, so leave refinements for another day.
Per bug #5777 from Nate Carson.
Back-patch to all supported branches. 8.1 happens to avoid a core-dump
here, but it still does the wrong thing.
This adds support for changing the schema of a conversion, operator,
operator class, operator family, text search configuration, text search
dictionary, text search parser, or text search template.
Dimitri Fontaine, with assorted corrections and other kibitzing.
This commit adds columns amoppurpose and amopsortfamily to pg_amop, and
column amcanorderbyop to pg_am. For the moment all the entries in
amcanorderbyop are "false", since the underlying support isn't there yet.
Also, extend the CREATE OPERATOR CLASS/ALTER OPERATOR FAMILY commands with
[ FOR SEARCH | FOR ORDER BY sort_operator_family ] clauses to allow the new
columns of pg_amop to be populated, and create pg_dump support for dumping
that information.
I also added some documentation, although it's perhaps a bit premature
given that the feature doesn't do anything useful yet.
Teodor Sigaev, Robert Haas, Tom Lane
Currently, three conversion format specifiers are supported: %s for a
string, %L for an SQL literal, and %I for an SQL identifier. The latter
two are deliberately designed not to overlap with what sprintf() already
supports, in case we want to add more of sprintf()'s functionality here
later.
Patch by Pavel Stehule, heavily revised by me. Reviewed by Jeff Janes
and, in earlier versions, by Itagaki Takahiro and Tom Lane.
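For example (hypothetical identifiers):
    SELECT format('SELECT %I FROM %I WHERE %I = %L',
                  'col', 'tab', 'col', $$O'Reilly$$);
    -- produces: SELECT col FROM tab WHERE col = 'O''Reilly'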
Avoid depending on LL notation, which is likely to not work in pre-C99
compilers; don't pointlessly use INT32_MIN/INT64_MIN in code that has
the numerical value hard-wired into it anyway; remove some gratuitous
style inconsistencies between pg_ltoa and pg_lltoa; fix int2 test case
so it actually tests int2.
This eliminates the need for inefficient implementations of this
functionality in both contrib/dblink and contrib/tablefunc, so remove
them. The upcoming patch implementing an in-core format() function
will also require this functionality.
In passing, add some regression tests.
Use INT_MIN rather than INT32_MIN as we do elsewhere in the code, and
try to work around nonexistence of INT64_MIN if necessary. Adjust the
new regression tests to something hopefully saner, per observation by
Tom Lane.
A hand-coded implementation turns out to be much faster than calling
printf(). In passing, add a few more regression tests.
Andres Freund, with assorted, mostly cosmetic changes.
In the previous coding, we simply issued ALTER SEQUENCE RESTART commands,
which do not roll back on error. This meant that an error between
truncating and committing left the sequences out of sync with the table
contents, with potentially bad consequences as were noted in a Warning on
the TRUNCATE man page.
To fix, create a new storage file (relfilenode) for a sequence that is to
be reset due to RESTART IDENTITY. If the transaction aborts, we'll
automatically revert to the old storage file. This acts just like a
rewriting ALTER TABLE operation. A penalty is that we have to take
exclusive lock on the sequence, but since we've already got exclusive lock
on its owning table, that seems unlikely to be much of a problem.
The interaction of this with usual nontransactional behaviors of sequence
operations is a bit weird, but it's hard to see what would be completely
consistent. Our choice is to discard cached-but-unissued sequence values
both when the RESTART is executed, and at rollback if any; but to not touch
the currval() state either time.
In passing, move the sequence reset operations to happen before not after
any AFTER TRUNCATE triggers are fired. The previous ordering was not
logically sensible, but was forced by the need to minimize inconsistency
if the triggers caused an error. Transactional rollback is a much better
solution to that.
Patch by Steve Singer, rather heavily adjusted by me.
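The user-visible consequence is that this now rolls back cleanly
(hypothetical table):
    BEGIN;
    TRUNCATE t RESTART IDENTITY;  -- owned sequences get new storage files
    ROLLBACK;                     -- table contents and sequence state
                                  -- both revert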
PG 8.4 added a built-in feature for casting pretty much any data type to
string types (text, varchar, etc). We allowed this to work in any of the
historically-allowed syntaxes: CAST(x AS text), x::text, text(x), or
x.text. However, multiple complaints have shown that it's too easy to
invoke such casts unintentionally in the latter two styles, particularly
field selection. To cure the problem with the narrowest possible change
of behavior, disallow use of I/O conversion casts from composite types to
string types via functional/attribute syntax. The new functionality is
still available via cast syntax.
In passing, document the equivalence of functional and attribute syntax
in a more visible place.
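To illustrate, with t a whole-row reference to a hypothetical table:
    SELECT t.text FROM mytable t;           -- attribute syntax: now an error
    SELECT text(t) FROM mytable t;          -- functional syntax: now an error
    SELECT t::text FROM mytable t;          -- still allowed
    SELECT CAST(t AS text) FROM mytable t;  -- still allowed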
Rather than considering this result as meaning "unknown", report LONG_MAX.
This won't change what superusers can set max_stack_depth to, but it will
cause InitializeGUCOptions() to set the built-in default to 2MB not 100kB.
The latter seems like a fairly unreasonable interpretation of "infinity".
Per my investigation of odd buildfarm results as well as an old complaint
from Heikki.
Since this should persuade all the buildfarm animals to use a reasonable
stack depth setting during "make check", revert previous patch that dumbed
down a recursive regression test to only 5 levels.
Per my recent proposal, get rid of all the direct inspection of indexes
and manual generation of paths in planagg.c. Instead, set up
EquivalenceClasses for the aggregate argument expressions, and let the
regular path generation logic deal with creating paths that can satisfy
those sort orders. This makes planagg.c a bit more visible to the rest
of the planner than it was originally, but the approach is basically a lot
cleaner than before. A major advantage of doing it this way is that we get
MIN/MAX optimization on inheritance trees (using MergeAppend of indexscans)
practically for free, whereas in the old way we'd have had to add a whole
lot more duplicative logic.
One small disadvantage of this approach is that MIN/MAX aggregates can no
longer exploit partial indexes having an "x IS NOT NULL" predicate, unless
that restriction or something that implies it is specified in the query.
The previous implementation was able to use the added "x IS NOT NULL"
condition as an extra predicate proof condition, but in this version we
rely entirely on indexes that are considered usable by the main planning
process. That seems a fair tradeoff for the simplicity and functionality
gained.
Some buildfarm members fail the test with the original depth of 10 levels,
apparently because they are running at the minimum max_stack_depth setting
of 100kB and using ~ 10k per recursion level. While it might be
interesting to try to figure out why they're eating so much stack, it isn't
likely that any fix for that would be back-patchable. So just change the
test to recurse only 5 levels. The extra levels don't prove anything
correctness-wise anyway.
In general, expression execution state trees aren't re-entrantly usable,
since functions can store private state information in them.
For efficiency reasons, plpgsql tries to cache and reuse state trees for
"simple" expressions. It can get away with that most of the time, but it
can fail if the state tree is dirty from a previous failed execution (as
in an example from Alvaro) or is being used recursively (as noted by me).
Fix by tracking whether a state tree is in use, and falling back to the
"non-simple" code path if so. This results in a pretty considerable speed
hit when the non-simple path is taken, but the available alternatives seem
even more unpleasant because they add overhead in the simple path. Per
idea from Heikki.
Back-patch to all supported branches.
After much expenditure of effort, we've got this to the point where the
performance penalty is pretty minimal in typical cases.
Andrew Dunstan, reviewed by Brendan Jurd, Dean Rasheed, and Tom Lane
This patch eliminates various bizarre behaviors caused by sloppy thinking
about the difference between a domain type and its underlying array type.
In particular, the operation of updating one element of such an array
has to be considered as yielding a value of the underlying array type,
*not* a value of the domain, because there's no assurance that the
domain's CHECK constraints are still satisfied. If we're intending to
store the result back into a domain column, we have to re-cast to the
domain type so that constraints are re-checked.
For similar reasons, such a domain can't be blindly matched to an ANYARRAY
polymorphic parameter, because the polymorphic function is likely to apply
array-ish operations that could invalidate the domain constraints. For the
moment, we just forbid such matching. We might later wish to insert an
automatic downcast to the underlying array type, but such a change should
also change matching of domains to ANYELEMENT for consistency.
To ensure that all such logic is rechecked, this patch removes the original
hack of setting a domain's pg_type.typelem field to match its base type;
the typelem will always be zero instead. In those places where it's really
okay to look through the domain type with no other logic changes, use the
newly added get_base_element_type function in place of get_element_type.
catversion bumped due to change in pg_type contents.
Per bug #5717 from Richard Huxton and subsequent discussion.
A couple of places in the planner need to generate whole-row Vars, and were
cutting corners by setting vartype = RECORDOID in the Vars, even in cases
where there's an identifiable named composite type for the RTE being
referenced. While we mostly got away with this, it failed when there was
also a parser-generated whole-row reference to the same RTE, because the
two Vars weren't equal() due to the difference in vartype. Fix by
providing a subroutine the planner can call to generate whole-row Vars
the same way the parser does.
Per bug #5716 from Andrew Tipton. Back-patch to 9.0 where one of the bogus
calls was introduced (the other one is new in HEAD).
This is not the hoped-for facility of using INSERT/UPDATE/DELETE inside
a WITH, but rather the other way around. It seems useful in its own
right anyway.
Note: catversion bumped because, although the contents of stored rules
might look compatible, there's actually a subtle semantic change.
A single Query containing a WITH and INSERT...VALUES now represents
writing the WITH before the INSERT, not before the VALUES. While it's
not clear that that matters to anyone, it seems like a good idea to
have it cited in the git history for catversion.h.
Original patch by Marko Tiikkaja, with updating and cleanup by
Hitoshi Harada.
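A sketch of what is now accepted (hypothetical tables):
    WITH recent AS (
        SELECT * FROM log WHERE ts > now() - interval '1 day'
    )
    INSERT INTO log_archive SELECT * FROM recent;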
This patch eliminates the former need to sort the output of an Append scan
when an ordered scan of an inheritance tree is wanted. This should be
particularly useful for fast-start cases such as queries with LIMIT.
Original patch by Greg Stark, with further hacking by Hans-Jurgen Schonig,
Robert Haas, and Tom Lane.
This patch adds the SQL-standard concept of an INSTEAD OF trigger, which
is fired instead of performing a physical insert/update/delete. The
trigger function is passed the entire old and/or new rows of the view,
and must figure out what to do to the underlying tables to implement
the update. So this feature can be used to implement updatable views
using trigger programming style rather than rule hacking.
In passing, this patch corrects the names of some columns in the
information_schema.triggers view. It seems the SQL committee renamed
them somewhere between SQL:99 and SQL:2003.
Dean Rasheed, reviewed by Bernd Helmle; some additional hacking by me.
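A usage sketch (hypothetical view and trigger function):
    CREATE TRIGGER my_view_ins
        INSTEAD OF INSERT ON my_view
        FOR EACH ROW EXECUTE PROCEDURE my_view_insert_row();
The trigger function receives the would-be new row and performs whatever
inserts into the underlying tables are appropriate.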
The point of a PlaceHolderVar is to allow a non-strict expression to be
evaluated below an outer join, after which its value bubbles up like a Var
and can be forced to NULL when the outer join's semantics require that.
However, there was a serious design oversight in that, namely that we
didn't ensure that there was actually a correct place in the plan tree
to evaluate the placeholder :-(. It may be necessary to delay evaluation
of an outer join to ensure that a placeholder that should be evaluated
below the join can be evaluated there. Per recent bug report from Kirill
Simonov.
Back-patch to 8.4 where the PlaceHolderVar mechanism was introduced.
This is intended as infrastructure to support integration with label-based
mandatory access control systems such as SE-Linux. Further changes (mostly
hooks) will be needed, but this is a big chunk of it.
KaiGai Kohei and Robert Haas
The previous coding would decide that join removal was unsafe upon finding
a PlaceHolderVar that needed to be evaluated at the inner rel and then used
above the join. However, this fails to cover the case of PlaceHolderVars
that refer to both the inner rel and some other rels. Per bug report from
Andrus.
This was unintentionally broken in 8.4 while tightening up checking of
ordinary non-Julian date inputs to forbid references to "year zero".
Per bug #5672 from Benjamin Gigot.
In these cases a qual can get marked with the removable rel in its
required_relids, but this is just to schedule its evaluation correctly, not
because it really depends on the rel. We were assuming that, in effect,
we could throw away *all* quals so marked, which is nonsense. Tighten up
the logic to be a little more paranoid about which quals belong to the
outer join being considered for removal, and arrange for all quals that
don't belong to be updated so they will still get evaluated correctly.
Also fix another problem that happened to be exposed by this test case,
which was that make_join_rel() was failing to notice some cases where
a constant-false qual could be used to prove a join relation empty. If it's
a pushed-down constant false, then the relation is empty even if it's an
outer join, because the qual applies after the outer join expansion.
Per report from Nathan Grange. Back-patch into 9.0.
Since the code underlying pg_get_expr() is not secure against malformed
input, and can't practically be made so, we need to prevent miscreants
from feeding arbitrary data to it. We can do this securely by declaring
pg_get_expr() to take a new datatype "pg_node_tree" and declaring the
system catalog columns that hold nodeToString output to be of that type.
There is no way at SQL level to create a non-null value of type pg_node_tree.
Since the backend-internal operations that fill those catalog columns
operate below the SQL level, they are oblivious to the datatype relabeling
and don't need any changes.
functions to the core XML code. Per discussion, the former depends on
XMLOPTION while the others do not. These supersede a version previously
offered by contrib/xml2.
Mike Fowler, reviewed by Pavel Stehule
better handling of NULL elements within the arrays. The third parameter
is a string that should be used to represent a NULL element, or should
be translated into a NULL element, respectively. If the third parameter
is NULL it behaves the same as the two-parameter form.
There are two incompatible changes in the behavior of the two-parameter form
of string_to_array. First, it will return an empty (zero-element) array
rather than NULL when the input string is of zero length. Second, if the
field separator is NULL, the function splits the string into individual
characters, rather than returning NULL as before. These two changes make
this form fully compatible with the behavior of the new three-parameter form.
Pavel Stehule, reviewed by Brendan Jurd
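To illustrate the new behaviors:
    SELECT string_to_array('a,,c', ',', '');      -- {a,NULL,c}
    SELECT array_to_string(ARRAY['a',NULL,'c'], ',', '*');  -- a,*,c
    SELECT string_to_array('', ',');  -- {} (empty array; formerly NULL)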
expressions. We need to deal with this when handling subscripts in an array
assignment, and also when catching an exception. In an Assert-enabled build
these omissions led to Assert failures, but I think in a normal build the
only consequence would be short-term memory leakage; which may explain why
this wasn't reported from the field long ago.
Back-patch to all supported versions. 7.4 doesn't have exceptions, but
otherwise these bugs go all the way back.
Heikki Linnakangas and Tom Lane
can be caught in the same places that could catch an ordinary RAISE ERROR
in the same location. The previous coding insisted on throwing the error
from the block containing the active exception handler; which is arguably
more surprising, and definitely unlike Oracle's behavior.
Not back-patching, since this is a pretty obscure corner case. The risk
of breaking somebody's code in a minor version update seems to outweigh
any possible benefit.
Piyush Newe, reviewed by David Fetter
other columns to be referenced without listing them in GROUP BY, so long as
the primary key column(s) are listed in GROUP BY.
Eventually we should also allow functional dependency on a UNIQUE constraint
when the columns are marked NOT NULL, but that has to wait until NOT NULL
constraints are represented in pg_constraint, because we need to have
pg_constraint OIDs for all the conditions needed to ensure functional
dependency.
Peter Eisentraut, reviewed by Alex Hunsaker and Tom Lane
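For example (hypothetical tables; items.id is the primary key):
    SELECT i.id, i.name, count(*)
    FROM items i JOIN sales s ON s.item_id = i.id
    GROUP BY i.id;   -- i.name is allowed without being listed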
functionality, while creating an ambiguity in usage with ORDER BY that at
least two people have already gotten seriously confused by. Also, add an
opr_sanity test to check that we don't in future violate the newly minted
policy of not having built-in aggregates with the same name and different
numbers of parameters. Per discussion of a complaint from Thom Brown.
a pass-by-reference datatype with a nontrivial projection step.
We were using the same memory context for the projection operation as for
the temporary context used by the hashtable routines in execGrouping.c.
However, the hashtable routines feel free to reset their temp context at
any time, which'd lead to destroying input data that was still needed.
Report and diagnosis by Tao Ma.
Back-patch to 8.1, where the problem was introduced by the changes that
allowed us to work with "virtual" tuples instead of materializing intermediate
tuple values everywhere. The earlier code looks quite similar, but it doesn't
suffer the problem because the data gets copied into another context as a
result of having to materialize ExecProject's output tuple.
any implicit casting previously applied to the targetlist item. This is
reasonable because the implicit cast, by definition, wasn't written by the
user; so we are preserving the expected behavior that ORDER BY items match
textually equivalent tlist items. The case never arose before because there
couldn't be any implicit casting of a top-level SELECT item before we process
ORDER BY etc. But now it can arise in the context of aggregates containing
ORDER BY clauses, since the "targetlist" is the already-casted list of
arguments for the aggregate. The net effect is that the datatype used for
ORDER BY/DISTINCT purposes is the aggregate's declared input type, not that
of the original input column; which is a bit debatable but not horrendous,
and to do otherwise would require major rework that doesn't seem justified.
Per bug #5564 from Daniel Grace. Back-patch to 9.0 where aggregate ORDER BY
was implemented.
relation using the general PARAM_EXEC executor parameter mechanism, rather
than the ad-hoc kluge of passing the outer tuple down through ExecReScan.
The previous method was hard to understand and could never be extended to
handle parameters coming from multiple join levels. This patch doesn't
change the set of possible plans nor have any significant performance effect,
but it's necessary infrastructure for future generalization of the concept
of an inner indexscan plan.
ExecReScan's second parameter is now unused, so it's removed.
sub-select contains a join alias reference that expands into an expression
containing another sub-select. Per yesterday's report from Merlin Moncure
and subsequent off-list investigation.
Back-patch to 7.4. Older versions didn't attempt to flatten sub-selects in
ways that would trigger this problem.
of YYSTYPE, and hence returning the wrong answer for cases where a plpgsql
"unreserved keyword" really does conflict with a variable name. Obviously
I didn't test this enough :-(. Per bug #5524 from Peter Gagarinov.
If such a Var appeared within a nested sub-select, we failed to translate it
correctly during pullup of the view, because the recursive call to
replace_rte_variables_mutator was looking for the wrong sublevels_up value.
Bug was introduced during the addition of the PlaceHolderVar mechanism.
Per bug #5514 from Marcos Castedo.
"val AS name" to "name := val", as per recent discussion.
This patch catches everything in the original named-parameters patch,
but I'm not certain that no other dependencies snuck in later (grepping
the source tree for all uses of AS soon proved unworkable).
In passing I note that we've dropped the ball at least once on keeping
ecpg's lexer (as opposed to parser) in sync with the backend. It would
be a good idea to go through all of pgc.l and see if it's in sync now.
I didn't attempt that at the moment.
for sure ;-)). It now also optimizes more cases, such as %_%_. Improve
comments too. Per bug #5478.
In passing, also rename the TCHAR macro to GETCHAR, because pgindent is
messing with the formatting of the former (apparently it now thinks TCHAR
is a typedef name).
Back-patch to 8.3, where the bug was introduced.
to RFC 3986. In particular, these characters now terminate the path part
of a URL: '"', '<', '>', '\', '^', '`', '{', '|', '}'. The previous behavior
was inconsistent and depended on whether a "?" was present in the path.
Per gripe from Donald Fraser and spec research by Kevin Grittner.
This is a pre-existing bug, but not back-patching since the risks of
breaking existing applications seem to outweigh the benefits.
the fact that NetBSD/mips is currently broken, as per buildfarm member pika.
Also add regression tests to ensure that get_float8_nan and get_float4_nan
are exercised even on platforms where they are not needed by
float8in/float4in.
Zoltán Böszörményi and Tom Lane
being assigned to, in case the expression to be assigned is a FieldStore that
would need to modify that value. The need for this was foreseen some time
ago, but not implemented then because we did not have arrays of composites.
Now we do, but the point evidently got overlooked in that patch. Net result
is that updating a field of an array element doesn't work right, as
illustrated if you try the new regression test on an unpatched backend.
Noted while experimenting with EXPLAIN VERBOSE, which has also got some issues
in this area.
Backpatch to 8.3, where arrays of composites were introduced.
In addition, add support for a "payload" string to be passed along with
each notify event.
This implementation should be significantly more efficient than the old one,
and is also more compatible with Hot Standby usage. There is not yet any
facility for HS slaves to receive notifications generated on the master,
although such a thing is possible in future.
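A minimal sketch of the payload usage (channel name invented):
    LISTEN my_channel;
    NOTIFY my_channel, 'row 42 changed';
    -- or, from within an expression:
    SELECT pg_notify('my_channel', 'row 42 changed');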
Joachim Wieland, reviewed by Jeff Davis; also hacked on by me.
This patch allows the frame to start from CURRENT ROW (in either RANGE or
ROWS mode), and it also adds support for ROWS n PRECEDING and ROWS n FOLLOWING
start and end points. (RANGE value PRECEDING/FOLLOWING isn't there yet ---
the grammar works, but that's all.)
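For example (invented table), using the newly allowed start and end points:
    SELECT sum(x) OVER (ORDER BY x ROWS BETWEEN 2 PRECEDING AND 1 FOLLOWING)
    FROM t;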
Hitoshi Harada, reviewed by Pavel Stehule
VACUUM FULL INPLACE), along with a boatload of subsidiary code and complexity.
Per discussion, the use case for this method of vacuuming is no longer large
enough to justify maintaining it; not to mention that we don't wish to invest
the work that would be needed to make it play nicely with Hot Standby.
Aside from the code directly related to old-style VACUUM FULL, this commit
removes support for certain WAL record types that could only be generated
within VACUUM FULL, redirect-pointer removal in heap_page_prune, and
nontransactional generation of cache invalidation sinval messages (the last
being the sticking point for Hot Standby).
We still have to retain all code that copes with finding HEAP_MOVED_OFF and
HEAP_MOVED_IN flag bits on existing tuples. This can't be removed as long
as we want to support in-place update from pre-9.0 databases.
DROP USER at the end of the cluster.sql test could fail, if the temp
table created in the previous session hadn't finished getting dropped.
Unluckily, I didn't see this in several repetitions of the parallel
regression tests, but it's popping up on quite a few buildfarm machines.
relations (they don't live in pg_toast). This caused an Assert failure in
assert-enabled builds. So far as I can see, in a non-assert build it would
only have messed up the checks for conflicting names, so a failure would be
quite improbable but perhaps not impossible.
When a column is renamed, we recursively rename the same column in
all descendent tables. But if one of those tables also inherits that
column from a table outside the inheritance hierarchy rooted at the
named table, we must throw an error. The previous coding correctly
prohibited the rename when the parent had inherited the column from
elsewhere, but overlooked the case where the parent was OK but a child
table also inherited the same column from a second, unrelated parent.
For now, not backpatched due to lack of complaints from the field.
KaiGai Kohei, with further changes by me.
Reviewed by Bernd Helmle and Tom Lane.
the input values into a string. The two argument version also does the same
thing, but inserts delimiters between elements.
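That is string_agg; a sketch with invented names:
    SELECT string_agg(name) FROM t;        -- plain concatenation
    SELECT string_agg(name, ', ') FROM t;  -- with delimiters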
Original patch by Pavel Stehule, reviewed by David E. Wheeler and me.
default of "plpgsql". This is more reasonable than it was when the DO patch
was written, because we have since decided that plpgsql should be installed
by default. Per discussion, having a parameter for this doesn't seem useful
enough to justify the risk of application breakage if the value is changed
unexpectedly.
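So a block can now omit the LANGUAGE clause:
    DO $$
    BEGIN
        RAISE NOTICE 'plpgsql assumed by default';
    END
    $$;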
and implement OVERLAY() for bit strings and bytea.
In passing also convert text OVERLAY() to a true built-in, instead of
relying on a SQL function.
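For example:
    SELECT overlay(B'01010101' placing B'11' from 2 for 2);   -- 01110101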
Leonardo F, reviewed by Kevin Grittner
This is the last EXECUTE-like plpgsql statement that was missing
the capability of inserting parameter values via USING.
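A sketch inside a plpgsql function (the statement in question is OPEN ...
FOR EXECUTE; cursor, table, and variable names invented):
    OPEN curs FOR EXECUTE 'SELECT * FROM tab WHERE col = $1' USING val;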
Pavel Stehule, reviewed by Itagaki Takahiro
VACUUM FULL was renamed to VACUUM FULL INPLACE. Also added a new
option -i, --inplace for vacuumdb to perform FULL INPLACE vacuuming.
Since the new VACUUM FULL uses CLUSTER infrastructure, we cannot
use it for system tables. VACUUM FULL for system tables always
falls back to VACUUM FULL INPLACE silently.
Itagaki Takahiro, reviewed by Jeff Davis and Simon Riggs.
peculiar variant of UNION ALL, and so wouldn't likely get written directly
as-is, it's possible for it to arise as a result of simplification of
less-obviously-silly queries. In particular, now that we can do flattening
of subqueries that have constant outputs and are underneath an outer join,
it's possible for the case to result from simplification of queries of the
type exhibited in bug #5263. Back-patch to 8.4 to avoid a functionality
regression for this type of query.
to be just a minor extension of the previous patch that made "x IS NULL"
indexable, because we can treat the IS NOT NULL condition as if it were
"x < NULL" or "x > NULL" (depending on the index's NULLS FIRST/LAST option),
just like IS NULL is treated like "x = NULL". Aside from any possible
usefulness in its own right, this is an important improvement for
index-optimized MAX/MIN aggregates: it is now reliably possible to get
a column's min or max value cheaply, even when there are a lot of nulls
cluttering the interesting end of the index.
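For instance (invented names):
    CREATE INDEX t_x_idx ON t (x);
    SELECT max(x) FROM t;
    -- can now be answered by scanning from the non-null end of the index,
    -- skipping nulls via the implicit "x IS NOT NULL" condition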
choose an index name the same as it would do for an unnamed index constraint.
(My recent changes to the index naming logic have helped to ensure that this
will be a reasonable choice.) Per a suggestion from Peter.
A necessary side-effect is to promote CONCURRENTLY to type_func_name_keyword
status, ie, it can't be a table/column/index name anymore unless quoted.
This is not all bad, since we have heard more than once of people typing
CREATE INDEX CONCURRENTLY ON foo (...) and getting a normal index build of
an index named "concurrently", which was not what they wanted. Now this
syntax will result in a concurrent build of an index with system-chosen
name; which they can rename afterwards if they want something else.
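That is:
    CREATE INDEX CONCURRENTLY ON foo (bar);
    -- concurrent build of an index whose name the system chooses
    -- (e.g. foo_bar_idx)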
and composite types, which are the only relkinds for which pg_dump support
exists for dumping column comments. There is no obvious usefulness for
comments on columns of sequences or toast tables; and while comments on
index columns might have some value, it's not worth the risk of compatibility
problems due to possible changes in the algorithm for assigning names to
index columns. Per discussion.
In consequence, remove now-dead code for copying such comments in CREATE TABLE
LIKE.
Enabled by recovery_connections = on (default) and forcing archive recovery
using a recovery.conf. Recovery processing now emulates the original
transactions as they are replayed, providing full locking and MVCC behaviour
for read-only queries. Recovery must enter consistent state before
connections are allowed, so there is a delay, typically short, before
connections succeed. Replay of recovering transactions can conflict and in
some cases deadlock with queries during recovery; these result in query
cancellation after max_standby_delay seconds have expired. Infrastructure
changes have minor effects on normal running, though they introduce four new
types of WAL record.
New test mode "make standbycheck" allows regression tests of static command
behaviour on a standby server while in recovery. Typical and extreme dynamic
behaviours have been checked via code inspection and manual testing. Few
port-specific behaviours have been utilised, though primary testing has been
on Linux only so far.
This commit is the basic patch. Additional changes will follow in this
release to enhance some aspects of behaviour, notably improved handling of
conflicts, deadlock detection and query cancellation. Changes to VACUUM FULL
are also required.
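A minimal sketch of the settings named above (value illustrative only):
    # postgresql.conf on the standby
    recovery_connections = on
    max_standby_delay = 30    # seconds before conflicting queries are cancelled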
Simon Riggs, with significant and lengthy review by Heikki Linnakangas,
including streamlined redesign of snapshot creation and two-phase commit.
Important contributions from Florian Pflug, Mark Kirkwood, Merlin Moncure,
Greg Stark, Gianni Ciolli, Gabriele Bartolini, Hannu Krosing, Robert Haas,
Tatsuo Ishii, Hiroyuki Yamada plus support and feedback from many other
community members.
presented with an UNKNOWN-type Var, which can happen in cases where an
unknown literal appeared in a subquery. While many such cases will fail
later on anyway in the planner, there are some cases where the planner is
able to flatten the query and replace the Var by the constant before it has
to coerce the union column to the final type. I had added this check in 8.4
to provide earlier/better error detection, but it causes a regression for
some cases that worked OK before. Fix by not making the check if the input
node is UNKNOWN type and not a Const or Param. If it isn't going to work,
it will fail anyway at plan time, with the only real loss being inability to
provide an error cursor. Per gripe from Britt Piehler.
In passing, rename a couple of variables to remove confusion from an
inner scope masking the same variable names in an outer scope.
non-kluge method for controlling the order in which values are fed to an
aggregate function. At the same time eliminate the old implementation
restriction that DISTINCT was only supported for single-argument aggregates.
Possibly release-notable behavioral change: formerly, agg(DISTINCT x)
dropped null values of x unconditionally. Now, it does so only if the
agg transition function is strict; otherwise nulls are treated as DISTINCT
normally would, ie, you get one copy.
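For example (invented names):
    SELECT array_agg(x ORDER BY y DESC) FROM t;  -- caller-controlled input order
    SELECT corr(DISTINCT a, b) FROM t;           -- DISTINCT with two arguments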
Andrew Gierth, reviewed by Hitoshi Harada
we have to cope with the possibility that the declared result rowtype contains
dropped columns. This fails in 8.4, as per bug #5240.
While at it, be more paranoid about inserting binary coercions when inlining.
The pre-8.4 code did not really need to worry about that because it could not
inline at all in any case where an added coercion could change the behavior
of the function's statement. However, when inlining a SRF we allow sorting,
grouping, and set-ops such as UNION. In these cases, modifying one of the
targetlist entries that the sort/group/setop depends on could conceivably
change the behavior of the function's statement --- so don't inline when
such a case applies.
checked to determine whether the trigger should be fired.
For BEFORE triggers this is mostly a matter of spec compliance; but for AFTER
triggers it can provide a noticeable performance improvement, since queuing of
a deferred trigger event and re-fetching of the row(s) at end of statement can
be short-circuited if the trigger does not need to be fired.
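A sketch of such a row-level condition (this is the trigger WHEN clause;
names invented):
    CREATE TRIGGER log_balance_change
        AFTER UPDATE ON accounts
        FOR EACH ROW
        WHEN (OLD.balance IS DISTINCT FROM NEW.balance)
        EXECUTE PROCEDURE log_account_change();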
Takahiro Itagaki, reviewed by KaiGai Kohei.
adopted for EXPLAIN. This will allow additional options to be implemented
in future without having to make them fully-reserved keywords. The old syntax
remains available for existing options, however.
Itagaki Takahiro
default be "throw error on conflict", as per discussions. The GUC variable
is plpgsql.variable_conflict, with values "error", "use_variable",
"use_column". The behavior can also be specified per-function by inserting
one of
#variable_conflict error
#variable_conflict use_variable
#variable_conflict use_column
at the start of the function body.
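For example (function and table names invented):
    CREATE FUNCTION count_matches(id int) RETURNS bigint AS $$
    #variable_conflict use_variable
    BEGIN
        -- unqualified "id" is the parameter; "tab.id" is the column
        RETURN (SELECT count(*) FROM tab WHERE tab.id = id);
    END
    $$ LANGUAGE plpgsql;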
The 8.5 release notes will need to mention using "use_variable" to retain
backward-compatible behavior, although we should encourage people to migrate
to the much less mistake-prone "error" setting.
Update the plpgsql documentation to match this and other recent changes.
directly. This was a lot of trouble, but should be worth it in terms of
not having to keep the plpgsql lexer in step with core anymore. In addition
the handling of keywords is significantly better-structured, allowing us to
de-reserve a number of words that plpgsql formerly treated as reserved.
can be the name of a plpgsql cursor variable, which formerly was converted
to $N before the core parser saw it, but that's no longer the case.
Deal with plain name references to plpgsql variables, and add a regression
test case that exposes the failure.
into SQL expressions, to using the newly added parser callback hooks.
This allows us to do the substitutions in a more semantically-aware way:
a variable reference will only be recognized where it can validly go,
ie, a place where a column value or parameter would be legal, instead of
the former behavior that would replace any textual match including
table names and column aliases (leading to syntax errors later on).
A release-note-worthy fine point is that plpgsql variable names that match
fully-reserved words will now need to be quoted.
This commit preserves the former behavior that variable references take
precedence over any possible match to a column name. The infrastructure
is in place to support the reverse precedence or throwing an error on
ambiguity, but those behaviors aren't accessible yet.
Most of the code changes here are associated with making the namespace
data structure persist so that it can be consulted at runtime, instead
of throwing it away at the end of initial function parsing.
The plpgsql scanner is still doing name lookups, but that behavior is
now irrelevant for SQL expressions. A future commit will deal with
removing unnecessary lookups.
behavior, and is so little used that no one has been interested in fixing it.
To ensure that possible uses are covered, remove the ALIAS declaration's
arbitrary restriction that only $n identifiers can be aliased.
(We could alternatively make RENAME act just like ALIAS, but per discussion
having two different ways to do the same thing is probably more confusing than
helpful.)
As proof of concept, modify plpgsql to use the hooks. plpgsql is still
inserting $n symbols textually, but the "back end" of the parsing process now
goes through the ParamRef hook instead of using a fixed parameter-type array,
and then execution only fetches actually-referenced parameters, using a hook
added to ParamListInfo.
Although there's a lot left to be done in plpgsql, this already cures the
"if (TG_OP = 'INSERT' and NEW.foo ...)" problem, as illustrated by the
changed regression test.
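That is, a trigger body like this (names invented) no longer fails for
DELETE events, where NEW is not defined:
    IF TG_OP = 'INSERT' AND NEW.foo IS NOT NULL THEN
        RETURN NEW;   -- parameters are now fetched only when actually used
    END IF;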
style by default. Per discussion, there seems to be hardly anything that
really relies on being able to change the regex flavor, so the ability to
select it via embedded options ought to be enough for any stragglers.
Also, if we didn't remove the GUC, we'd really be morally obligated to
mark the regex functions non-immutable, which'd possibly create performance
issues.
Per recent discussion, add_missing_from has been deprecated for long enough to
consider removing, and it's getting in the way of planned parser refactoring.
The system now always behaves as though add_missing_from were OFF.
are named in the UPDATE's SET list.
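For example (names invented), such a column-specific trigger is written:
    CREATE TRIGGER check_price
        BEFORE UPDATE OF price ON items
        FOR EACH ROW EXECUTE PROCEDURE check_price_trigger();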
Note: the schema of pg_trigger has not actually changed; we've just started
to use a column that was there all along. catversion bumped anyway so that
this commit is included in the history of potentially interesting changes
to system catalog contents.
Itagaki Takahiro
Partially revert the previous patch I installed and replace it with a more
general fix: any time a snapshot is pushed as Active, we need to ensure that it
will not be modified in the future. This means that if the same snapshot is
used as CurrentSnapshot, it needs to be copied separately. This affects
serializable transactions only, because CurrentSnapshot has already been copied
by RegisterSnapshot and so PushActiveSnapshot does not think it needs another
copy. However, CommandCounterIncrement would modify CurrentSnapshot, whereas
ActiveSnapshots must not have their command counters incremented.
I say "partially" because the regression test I added for the previous bug
has been kept.
(This restores 8.3 behavior, because before snapmgr.c existed, any snapshot set
as Active was copied.)
Per bug report from Stuart Bishop in
6bc73d4c0910042358k3d1adff3qa36f8df75198ecea@mail.gmail.com
inheritance parent tables are compared using equal(), instead of doing
strcmp() on the nodeToString representation. The old implementation was
always a tad cheesy, and it finally fails completely as of 8.4, now that the
node tree might contain syntax location information. equal() knows it's
supposed to ignore those fields, but strcmp() hardly can. Per recent
report from Scott Ribe.
the privileges that will be applied to subsequently-created objects.
Such adjustments are always per owning role, and can be restricted to objects
created in particular schemas too. A notable benefit is that users can
override the traditional default privilege settings, eg, the PUBLIC EXECUTE
privilege traditionally granted by default for functions.
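For instance (schema name invented):
    ALTER DEFAULT PRIVILEGES IN SCHEMA myschema
        REVOKE EXECUTE ON FUNCTIONS FROM PUBLIC;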
Petr Jelinek
The old coding was using a regular snapshot, referenced elsewhere, that was
subject to having its command counter updated. Fix by creating a private copy
of the snapshot exclusively for the cursor.
Backpatch to 8.4, which is when the bug was introduced during the snapshot
management rewrite.
The original coding correctly noted that these aren't just redundancies
(they're effectively X IS NOT NULL, assuming = is strict). However, they
got treated that way if X happened to be in a single-member EquivalenceClass
already, which could happen if there was an ORDER BY X clause, for instance.
The simplest and most reliable solution seems to be to not try to process
such clauses through the EquivalenceClass machinery; just throw them back
for traditional processing. The amount of work that'd be needed to be
smarter than that seems out of proportion to the benefit.
Per bug #5084 from Bernt Marius Johnsen, and analysis by Andrew Gierth.
to create a function for it.
Procedural languages now have an additional entry point, namely a function
to execute an inline code block. This seemed a better design than trying
to hide the transient-ness of the code from the PL. As of this patch, only
plpgsql has an inline handler, but probably people will soon write handlers
for the other standard PLs.
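For example:
    DO LANGUAGE plpgsql $$
    BEGIN
        RAISE NOTICE 'executed without creating a function';
    END
    $$;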
In passing, remove the long-dead LANCOMPILER option of CREATE LANGUAGE.
Petr Jelinek
This is intentionally similar to the recently revised syntax for EXPLAIN
options, ie, (name value, ...). The old syntax is still supported for
backwards compatibility, but we intend that any options added in future
will be provided only in the new syntax.
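A sketch, assuming this is the COPY options change (table name invented):
    COPY tab FROM STDIN (FORMAT csv, HEADER true);
    -- old equivalent, still accepted: COPY tab FROM STDIN CSV HEADER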
Robert Haas, Emmanuel Cecchet
In this case we generate two PathKey references to the expression (one for
DISTINCT and one for ORDER BY) and they really need to refer to the same
EquivalenceClass. However get_eclass_for_sort_expr was being overly paranoid
and creating two different EC's. Correct behavior is to use the SortGroupRef
index to decide whether two references to volatile expressions that are
equal() (ie textually equivalent) should be considered the same.
Backpatch to 8.4. Possibly this should be changed in 8.3 as well, but
I'll refrain in the absence of evidence of a visible failure in that branch.
Per bug #5049.
code was already okay with this, but the hack that obtained the output
column types of a recursive union in advance of doing real parse analysis
of the recursive union forgot to handle the case where there was an inner
WITH clause available to the non-recursive term. Best fix seems to be to
refactor so that we don't need the "throwaway" parse analysis step at all.
Instead, teach the transformSetOperationStmt code to set up the CTE's output
column information after it's processed the non-recursive term normally.
Per report from David Fetter.
that's generated for a whole-row Var referencing the subquery, when the
subquery is in the nullable side of an outer join. The previous coding
instead put PlaceHolderVars around the elements of the RowExpr. The effect
was that when the outer join made the subquery outputs go to null, the
whole-row Var produced ROW(NULL,NULL,...) rather than just NULL. There
are arguments afoot about whether those things ought to be semantically
indistinguishable, but for the moment they are not entirely so, and the
planner needs to take care that its machinations preserve the difference.
Per bug #5025.
Making this feasible required refactoring ResolveNew() to allow more caller
control over what is substituted for a Var. I chose to make ResolveNew()
a wrapper around a new general-purpose function replace_rte_variables().
I also fixed the ancient bogosity that ResolveNew might fail to set
a query's hasSubLinks field after inserting a SubLink in it. Although
all current callers make sure that happens anyway, we've had bugs of that
sort before, and it seemed like a good time to install a proper solution.
Back-patch to 8.4. The problem can be demonstrated clear back to 8.0,
but the fix would be too invasive in earlier branches; not to mention
that people may be depending on the subtly-incorrect behavior. The
8.4 series is new enough that fixing this probably won't cause complaints,
but it might in older branches. Also, 8.4 shows the incorrect behavior
in more cases than older branches do, because it is able to flatten
subqueries in more cases.
so that their elements are always taken as simple expressions over the
query's input columns. It originally seemed like a good idea to make them
act exactly like GROUP BY and ORDER BY, right down to the SQL92-era behavior
of accepting output column names or numbers. However, that was not such a
great idea, for two reasons:
1. It permits circular references, as exhibited in bug #5018: the output
column could be the one containing the window function itself. (We actually
had a regression test case illustrating this, but nobody thought twice about
how confusing that would be.)
2. It doesn't seem like a good idea for, eg, "lead(foo) OVER (ORDER BY foo)"
to potentially use two completely different meanings for "foo".
Accordingly, narrow down the behavior of window clauses to use only the
SQL99-compliant interpretation that the expressions are simple expressions.
by supporting conversions in places that used to demand exact rowtype match.
Since this issue is certain to come up elsewhere (in fact, already has,
in ExecEvalConvertRowtype), factor out the support code into new core
functions for tuple conversion. I chose to put these in a new source
file since heaptuple.c is already overly long.
Heavily revised version of a patch by Pavel Stehule.
Both hex format and the traditional "escape" format are automatically
handled on input. The output format is selected by the new GUC variable
bytea_output.
As committed, bytea_output defaults to HEX, which is an *incompatible
change*. We will keep it this way for awhile for testing purposes, but
should consider whether to switch to the more backwards-compatible
default of ESCAPE before 8.5 is released.
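For example:
    SET bytea_output = 'hex';
    SELECT 'abc'::bytea;    -- \x616263
    SET bytea_output = 'escape';
    SELECT 'abc'::bytea;    -- abc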
Peter Eisentraut
Add family of functions that did not exist earlier,
mainly due to historical omission. Original patch by
Abhijit Menon-Sen, with review and modifications by
Joe Conway. catversion.h bumped.
The current implementation fires an AFTER ROW trigger for each tuple that
looks like it might be non-unique according to the index contents at the
time of insertion. This works well as long as there aren't many conflicts,
but won't scale to massive unique-key reassignments. Improving that case
is a TODO item.
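A sketch of the deferred-uniqueness behavior (names invented):
    CREATE TABLE t (i int UNIQUE DEFERRABLE INITIALLY DEFERRED);
    BEGIN;
    UPDATE t SET i = i + 1;   -- transient duplicates are not rejected here
    COMMIT;                   -- uniqueness is verified at commit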
Dean Rasheed
This is a simple test to see whether COSTS OFF will help much with getting
EXPLAIN output that's sufficiently platform-independent for use in the
regression tests. The planner does have some freedom of choice in these
examples (plain vs. bitmap indexscan), so I'm not sure what will happen.
by unique-ifying the RHS and then inner-joining to some other relation,
that is not grounds for violating the RHS of some other outer join.
Noticed while regression-testing new GEQO code, which will blindly follow
any path that join_is_legal says is legal, and then complain later if that
leads to a dead end.
I'm not certain that this can result in any visible failure in 8.4: the
mistake may always be masked by the fact that subsequent attempts to join
the rest of the RHS of the other join will fail. But I'm not certain it
can't, either, and it's definitely not operating as intended. So back-patch.
The added regression test depends on the new no-failures-allowed logic
that I'm about to commit in GEQO, so no point back-patching that.
that the sanity checking I added to create_mergejoin_plan() in 8.3 was a
few bricks shy of a load: the mergeclauses could reference pathkeys in a
noncanonical order such as x,y,x, not only cases like x,x,y which is all
that the code had allowed for. The odd cases only turn up when using
redundant clauses in an outer join condition, which is why no one had
noticed before.
the "cteParam" as a proxy for the possibility that the underlying CTE plan
depends on outer-level variables or Params, but that doesn't work very well
because it sometimes causes calling subqueries to be treated as SubPlans when
they could be InitPlans. This is inefficient and also causes the outright
failure exhibited in bug #4902. Instead, leave the cteParam out of it and
copy the underlying CTE plan's extParams directly. Per bug #4902 from
Marko Tiikkaja.
I wrote this one while chasing down some bugs in the closing days of 8.4. It
could be useful in the long run. This area of the code had no test coverage
at all before.
even when not in FM mode. This improves compatibility with Oracle and with
our pre-8.4 behavior, as per bug #4862.
Brendan Jurd
Add a couple of regression test cases for this. In passing, get rid of the
labeling of the individual test cases; doesn't seem to be good for anything
except causing extra work when inserting a test...
Tom Lane
This prevents autovacuum from reclaiming free space in them and causing
the test's output row order to change, which is causing intermittent
bogus failure reports in the buildfarm.
Backpatch to 8.3. The issue exists further back, but since autovacuum was
not on by default before 8.3, it's not a problem for buildfarm testing.
function returning setof record. This used to work, more or less
accidentally, but I had broken it while extending the code to allow
materialize-mode functions to be called in select lists. Add a regression
test case so it doesn't get broken again. Per gripe from Greg Davidson.
grounds that they don't fit into the specified interval qualifier (typmod).
This behavior, while of long standing, is clearly wrong per spec --- for
example the value INTERVAL '999' SECOND means 999 seconds and should not be
reduced to less than 60 seconds.
In some cases there could be grounds to raise an error if higher-order field
values are not given as zero; for example '1 year 1 month'::INTERVAL MONTH
should arguably be taken as an error rather than equivalent to 13 months.
However our internal representation doesn't allow us to do that in a fashion
that would consistently reject all and only the cases that a strict reading
of the spec would suggest. Also, seeing that for example INTERVAL '13' MONTH
will print out as '1 year 1 mon', we have to be careful not to create a
situation where valid data will fail to dump and reload. The present patch
therefore takes the attitude of not throwing an error in any such case.
We might want to revisit that in future but it would take more redesign
than seems prudent in late beta.
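For example:
    SELECT INTERVAL '999' SECOND;            -- 999 seconds, i.e. 00:16:39
    SELECT INTERVAL '1 year 1 month' MONTH;  -- accepted; prints '1 year 1 mon'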
Per a complaint from Sebastien Flaesch and subsequent discussion. While
at other times we might have just postponed such an issue to the next
development cycle, 8.4 already has changed the parsing of interval literals
quite a bit in an effort to accept all spec-compliant cases correctly.
This seems like a change that should be part of that rather than coming
along later.
this case is worth a special code path, but a special code path that gets
the boundary condition wrong is definitely no good. Per bug #4821 from
Andrew Gierth.
In passing, clean up some minor code formatting issues (excess parentheses
and blank lines in odd places).
Back-patch to 8.3, where the bug was introduced.
aggregate function. By definition, such a sub-SELECT cannot reference any
variables of query levels between itself and the aggregate's semantic level
(else the aggregate would've been assigned to that lower level instead).
So the correct, most efficient implementation is to treat the sub-SELECT as
being a sub-select of that outer query level, not the level the aggregate
syntactically appears in. Not doing so also confuses the heck out of our
parameter-passing logic, as illustrated in bug report from Daniel Grace.
Fortunately, we were already copying the whole Aggref expression up to the
outer query level, so all that's needed is to delay SS_process_sublinks
processing of the sub-SELECT until control returns to the outer level.
This has been broken since we introduced spec-compliant treatment of
outer aggregates in 7.4; so patch all the way back.
documentation warnings against setting it nonzero unless active use of
prepared transactions is intended and a suitable transaction manager has been
installed. This should help to prevent the type of scenario we've seen
several times now where a prepared transaction is forgotten and eventually
causes severe maintenance problems (or even anti-wraparound shutdown).
The only real reason we had the default be nonzero in the first place was to
support regression testing of the feature. To still be able to do that,
tweak pg_regress to force a nonzero value during "make check". Since we
cannot force a nonzero value in "make installcheck", add a variant regression
test "expected" file that shows the results that will be obtained when
max_prepared_transactions is zero.
Also, extend the HINT messages for transaction wraparound warnings to mention
the possibility that old prepared transactions are causing the problem.
All per today's discussion.
more nearly matching the core SQL scanner. The user-visible effects are:
* Block comments (slash-star comments) now nest, as per SQL spec.
* In standard_conforming_strings mode, backslash as the last character of a
non-E string literal is now correctly taken as an ordinary character;
formerly it was misinterpreted as escaping the ending quote. (Since the
string also had to pass through the core scanner, this invariably led
to syntax errors.)
* Formerly, backslashes in the format string of RAISE were always treated as
quoting the next character, regardless of mode. Now, they are ordinary
characters with standard_conforming_strings on, while with it off, they
introduce the same set of escapes as in the core SQL scanner. Also,
escape_string_warning is now effective for RAISE format strings. These
changes make RAISE format strings work just like any other string literal.
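For instance, with standard_conforming_strings = on:
    RAISE NOTICE 'log dir is C:\logs';   -- backslash is an ordinary character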
This is implemented by copying and pasting a lot of logic from the core
scanner. It would be a good idea to look into getting rid of plpgsql's
scanner entirely in favor of using the core scanner. However, that involves
more change than I can justify making during beta --- in particular, the core
scanner would have to become re-entrant.
In passing, remove the kluge that made the plpgsql scanner emit T_FUNCTION or
T_TRIGGER as a made-up first token. That presumably had some value once upon
a time, but now it's just useless complication for both the scanner and the
grammar.
how this ought to behave for multi-dimensional arrays. Per discussion,
not having it at all seems better than having it with what might prove
to be the wrong behavior. We can always add it later when we have consensus
on the correct behavior.
by my patch of 2007-01-28 to use per-subtransaction ExprContexts/EStates:
since we re-prepared any expression tree when the current subtransaction ID
changed, we'd accumulate more and more leaked expression state trees in the
outermost subtransaction if the same function was executed at multiple levels
of subtransaction nesting. To fix, go back to the previous scheme where
there was only one EState per transaction for simple plpgsql expressions.
We really only need an ExprContext per subtransaction, not a whole EState,
so it's possible to keep prepared expression state trees in the one EState
throughout the transaction. This should be more efficient as well as not
leaking memory for cases involving lots of subtransactions.
The added regression test is the case that inspired the 2007-01-28 patch in
the first place, just to make sure we didn't go backwards. The current
memory leak complaint is unfortunately hard to test for in the regression
test framework, though manual testing shows it's fixed.
Although this is a pre-existing bug, I'm not back-patching because I'd like to
see this method get some field testing first. Consider back-patching if it
gets through 8.4beta unscathed.
interval_eq() considers equal. I'm not sure how that fundamental requirement
escaped us through multiple revisions of this hash function, but there it is;
it's been wrong since interval_hash was first written for PG 7.1.
Per bug #4748 from Roman Kononov.
Backpatch to all supported releases.
This patch changes the contents of hash indexes for interval columns. That's
no particular problem for PG 8.4, since we've broken on-disk compatibility
of hash indexes already; but it will require a migration warning note in
the next minor releases of all existing branches: "if you have any hash
indexes on columns of type interval, REINDEX them after updating".
TupleTableSlots. We have functions for retrieving a minimal tuple from a slot
after storing a regular tuple in it, or vice versa; but these were implemented
by converting the internal storage from one format to the other. The problem
with that is it invalidates any pass-by-reference Datums that were already
fetched from the slot, since they'll be pointing into the just-freed version
of the tuple. The known problem cases involve fetching both a whole-row
variable and a pass-by-reference value from a slot that is fed from a
tuplestore or tuplesort object. The added regression tests illustrate some
simple cases, but there may be other failure scenarios traceable to the same
bug. Note that the added tests probably only fail on unpatched code if it's
built with --enable-cassert; otherwise the bug leads to fetching from freed
memory, which will not have been overwritten without additional conditions.
Fix by allowing a slot to contain both formats simultaneously; which turns out
not to complicate the logic much at all, if anything it seems less contorted
than before.
Back-patch to 8.2, where minimal tuples were introduced.
noise words for the last twelve years, for compatibility with Berkeley-era
output formatting of the special INVALID values for those datatypes.
Considering that the datatypes themselves have been deprecated for awhile,
this is taking backwards compatibility a little far. Per gripe from Josh
Berkus.
by the planning process. This prevents the "failed to locate grouping columns"
error recently reported by Dickson Guedes. That happens because planning
replaces SubLinks by SubPlans in the subquery's targetlist, and exprTypmod()
is smarter about the former than the latter, causing the apparent type of
the subquery's output columns to change. This seems to be a deficiency we
should fix in exprTypmod(), but that will be a much more invasive patch
with possible side-effects elsewhere, so I'll do that only in HEAD.
Back-patch to 8.3. Arguably the lack of a copying step is broken/dangerous
all the way back, but in the absence of known problems I'll refrain from
making the older branches pay the extra cost. (The reason this particular
symptom didn't appear before is that exprTypmod() wasn't smart about SubLinks
either, until 8.3.)
attribute numbering. Also, a parent whole-row reference should not require
select privilege on child columns that aren't inherited from the parent.
Problem diagnosed by KaiGai Kohei, though this isn't exactly his patch.
to 100ms (from 1000). This still seems to be comfortably larger than the
useful range of the parameter, and it should help discourage people from
picking uselessly large values. Tweak the documentation to recommend small
values, too. Per discussion of a couple weeks ago.
making pull_up_sublinks() construct a full-blown JoinExpr tree representation
of IN/EXISTS SubLinks that it is able to convert to semi or anti joins.
This makes pull_up_sublinks() a shade more complex, but the gain in semantic
clarity is worth it. I still have more to do in this area to address the
previously-discussed problems, but this commit in itself fixes at least one
bug in HEAD, as shown by added regression test case.
wrappers (similar to procedural languages). This way we don't need to retain
the nearly empty libraries, and we are more free in how to implement the
wrapper API in the future.
instead of vice versa. Update the regression test expectations to support
that. In the plpgsql test, adjust the test data so that this isn't an
issue. In the char and varchar tests, add new expected files.
get rid of the OID column. This eliminates the problem discovered by Heikki
back in November that 8.4's suppression of "unnecessary" junk filtering in
INSERT/SELECT could lead to an Assert failure, or storing of oids into a table
that shouldn't have them if Asserts are off. While that particular problem
could have been solved in other ways, it seems likely to be just a forerunner
of things to come if we continue to allow tables to contain rows that disagree
with the pg_class.relhasoids setting. It's better to make this operation slow
than to sacrifice performance or risk bugs in more common code paths.
Also, add ALTER TABLE SET WITH OIDS to rewrite the table to add oids.
This was a bit more controversial, but in view of the very small amount of
extra code needed given the current ALTER TABLE infrastructure, it seems best
to eliminate the asymmetry in features.
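That is (table name invented):
    ALTER TABLE tab SET WITHOUT OIDS;  -- rewrites the table, removing the OIDs
    ALTER TABLE tab SET WITH OIDS;     -- rewrites it to add an OID column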
has_column_privilege and has_any_column_privilege SQL functions; fix the
information_schema views that are supposed to pay attention to column
privileges; adjust pg_stats to show stats for any column you have select
privilege on; and fix COPY to allow copying a subset of columns if the user
has suitable per-column privileges for all the columns.
To improve efficiency of some of the information_schema views, extend the
has_xxx_privilege functions to allow inquiring about the OR of a set of
privileges in just one call. This is just exposing capability that already
existed in the underlying aclcheck routines.
In passing, make the information_schema views report the owner's own
privileges as being grantable, since Postgres assumes this even when the grant
option bit is not set in the ACL. This is a longstanding oversight.
Also, make the new has_xxx_privilege functions for foreign data objects follow
the same coding conventions used by the older ones.
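For illustration (role, table, and column names invented):
    GRANT SELECT (id, name) ON customers TO clerk;
    SELECT has_column_privilege('clerk', 'customers', 'name', 'SELECT');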
Stephen Frost and Tom Lane
CREATE/ALTER/DROP USER MAPPING are now allowed either by the server owner or
by a user with USAGE privileges for his own user name. This is more or less
what the SQL standard wants anyway (plus "implementation-defined").
Hide information_schema.user_mapping_options.option_value, unless the current
user is the one associated with the user mapping, or is the server owner and
the mapping is for PUBLIC, or is a superuser. This is to protect passwords.
Also, fix a bug in information_schema._pg_foreign_servers, which hid servers
using wrappers where the current user did not have privileges on the wrapper.
The correct behavior is to hide servers where the current user has no
privileges on the server.
patch. This includes the ability to force the frame to cover the whole
partition, and the ability to make the frame end exactly on the current row
rather than its last ORDER BY peer. Supporting any more of the full SQL
frame-clause syntax will require nontrivial hacking on the window aggregate
code, so it'll have to wait for 8.5 or beyond.
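For example (invented names):
    -- frame covering the whole partition:
    SELECT sum(x) OVER (ORDER BY x RANGE BETWEEN UNBOUNDED PRECEDING
                                             AND UNBOUNDED FOLLOWING) FROM t;
    -- frame ending exactly at the current row rather than its last peer:
    SELECT sum(x) OVER (ORDER BY x ROWS UNBOUNDED PRECEDING) FROM t;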
This doesn't do any remote or external things yet, but it gives modules
like plproxy and dblink a standardized and future-proof system for
managing their connection information.
Martin Pihlak and Peter Eisentraut
actual argument type of ANYARRAY to match an argument declared ANYARRAY,
so long as ANYELEMENT etc aren't used. I had overlooked the fact that this
is a possible case while fixing bug #3852; but it is possible because
pg_statistic contains columns declared ANYARRAY. Per gripe from Corey Horton.
as LIKE. I oversimplified this code when removing support for plan-time
determination of index operator lossiness back in April --- I had thought
create_bitmap_subplan could stop returning two separate lists of qual
conditions, but it still must so that we can treat special operators
correctly in create_bitmap_scan_plan. Per report from Rushabh Lathia.
locate the target row, if the cursor was declared with FOR UPDATE or FOR
SHARE. This approach is more flexible and reliable than digging through the
plan tree; for instance it can cope with join cursors. But we still provide
the old code for use with non-FOR-UPDATE cursors. Per gripe from Robert Haas.
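A sketch (names invented):
    BEGIN;
    DECLARE c CURSOR FOR SELECT * FROM accounts FOR UPDATE;
    FETCH c;
    UPDATE accounts SET balance = 0 WHERE CURRENT OF c;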
return the tableoid as well as the ctid for any FOR UPDATE targets that
have child tables. All child tables are listed in the ExecRowMark list,
but the executor just skips the ones that didn't produce the current row.
Curiously, this longstanding restriction doesn't seem to have been documented
anywhere; so no doc changes.
anyelement. This lacks the WITH ORDINALITY option, as well as the multiple
input arrays option added in the most recent SQL specs. But it's still a
pretty useful subset of the spec's functionality, and it is enough to
allow obsoleting contrib/intagg.
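That is unnest; for example:
    SELECT unnest(ARRAY[1,2,3]);   -- three rows: 1, 2, 3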
function as a special case.
This version still has the suspicious behavior of returning null for an
empty array (rather than zero), but this may need a wholesale revision of
empty array behavior, currently under discussion.
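That is array_length; for example:
    SELECT array_length(ARRAY[1,2,3], 1);  -- 3
    SELECT array_length('{}'::int[], 1);   -- null, per the caveat above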
Jim Nasby, Robert Haas, Peter Eisentraut
in "postgres_verbose" intervalstyle, and the equally arbitrary decision to
show at least two fractional-seconds digits in most other datetime display
styles. This results in some minor changes in the expected regression test
outputs.
Also, coalesce a lot of repetitive code in datetime.c into subroutines,
for clarity and ease of maintenance. In particular this roughly halves
the number of #ifdef HAVE_INT64_TIMESTAMP segments.
Ron Mayer, with some additional kibitzing from Tom Lane
specifically, we can input either the "format with designators" or the
"alternative format", and we can output the former when IntervalStyle is set
to iso_8601.
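For example:
    SET IntervalStyle = iso_8601;
    SELECT INTERVAL '1 year 2 mons 3 days 04:05:06';  -- P1Y2M3DT4H5M6S
    SELECT INTERVAL 'P1Y2M3DT4H5M6S';                 -- accepted on input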
Ron Mayer
from DateStyle, and create a new interval style that produces output matching
the SQL standard (at least for interval values that fall within the standard's
restrictions). IntervalStyle is also used to resolve the conflict between the
standard and traditional Postgres rules for interpreting negative interval
input.
Ron Mayer
RETURNING clause, not just a SELECT as formerly.
A side effect of this patch is that when a set-returning SQL function is used
in a FROM clause, performance is improved because the output is collected into
a tuplestore within the function, rather than using the less efficient
value-per-call mechanism.
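A sketch (table and column names invented):
    CREATE FUNCTION add_item(text) RETURNS SETOF items AS $$
        INSERT INTO items(name) VALUES ($1) RETURNING *;
    $$ LANGUAGE sql;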
the timestamp types. Turns out this doesn't even reduce the available
range of dates, since the restriction to dates that work for Julian-date
arithmetic is much tighter than the int32 range anyway. Per a longstanding
TODO item.
depth-first search order. Upon close reading of SQL:2008, it seems that the
spec's SEARCH DEPTH FIRST and SEARCH BREADTH FIRST options do not actually
guarantee any particular result order: what they do is provide a constructed
column that the user can then sort on in the outer query. So this is actually
just as much functionality ...
pseudo-type record[] to represent arrays of possibly-anonymous composite
types. Since composite datums carry their own type identification, no
extra knowledge is needed at the array level.
The main reason for doing this right now is that it is necessary to support
the general case of detection of cycles in recursive queries: if you need to
compare more than one column to detect a cycle, you need to compare a ROW()
to an array built from ROW()s, at least if you want to do it as the spec
suggests. Add some documentation and regression tests concerning the cycle
detection issue.
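A sketch of the multi-column cycle check (table and column names invented):
    WITH RECURSIVE search_graph(id, path, cycle) AS (
        SELECT g.id, ARRAY[ROW(g.f1, g.f2)], false
        FROM graph g
      UNION ALL
        SELECT g.id, path || ROW(g.f1, g.f2), ROW(g.f1, g.f2) = ANY(path)
        FROM graph g JOIN search_graph sg ON g.link = sg.id
        WHERE NOT cycle
    )
    SELECT * FROM search_graph;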
RecursiveUnion to which it refers. It turns out that we can just postpone the
relevant initialization steps until the first exec call for the node, by which
time the ancestor node must surely be initialized. Per report from Greg Stark.
implementation uses an in-memory hash table, so it will poop out for very
large recursive results ... but the performance characteristics of a
sort-based implementation would be pretty unpleasant too.
recursive CTE that we're still in progress of analyzing. Add a similar guard
to the similar code in expandRecordVariable(), and tweak regression tests to
cover this case. Per report from Dickson S. Guedes.
There are some unimplemented aspects: recursive queries must use UNION ALL
(should allow UNION too), and we don't have SEARCH or CYCLE clauses.
These might or might not get done for 8.4, but even without them it's a
pretty useful feature.
There are also a couple of small loose ends and definitional quibbles,
which I'll send a memo about to pgsql-hackers shortly. But let's land
the patch now so we can get on with other development.
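A minimal example:
    WITH RECURSIVE t(n) AS (
        VALUES (1)
      UNION ALL
        SELECT n + 1 FROM t WHERE n < 100
    )
    SELECT sum(n) FROM t;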
Yoshiyuki Asaba, with lots of help from Tatsuo Ishii and Tom Lane
element types. Since the backend doesn't actually pay attention to the array
type's delimiter, this has no functional effect, but it seems better for the
catalog entries to be consistent. Per gripe from Greg Mullane and subsequent
discussion.
interpreted as expected (the sign should affect months too), and get rid of
hard-wired assumption that unmarked signed values must be hours (if integers)
or seconds (if floats). The former was just a bug in my previous patch,
while the latter may have made sense at one time but seems illogical now
that we support determination of the units from typmod information.
Ron Mayer and myself.
and the literal syntax INTERVAL 'string' ... SECOND(n), as required by the
SQL standard. Our old syntax put (n) directly after INTERVAL, which was
a mistake, but will still be accepted for backward compatibility as well
as symmetry with the TIMESTAMP cases.
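For example:
    SELECT INTERVAL '10.123' SECOND(2);  -- precision attached to SECOND, per spec
    SELECT INTERVAL(2) '10.123';         -- old placement, still accepted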
Change intervaltypmodout to show it in the spec's way, too. (This could
potentially affect clients, if there are any that analyze the typmod of an
INTERVAL in any detail.)
Also fix interval input to handle 'min:sec.frac' properly; I had overlooked
this case in my previous patch.
Document the use of the interval fields qualifier, which up to now we had
never mentioned in the docs. (I think the omission was intentional because
it didn't work per spec; but it does now, or at least close enough to be
credible.)
a lot closer than it was before). To do this, tweak coerce_type() to pass
through the typmod information when invoking interval_in() on an UNKNOWN
constant; then fix DecodeInterval to pay attention to the typmod when deciding
how to interpret a units-less integer value. I changed one or two other
details as well. I believe the code now reacts as expected by spec for all
the literal syntaxes that are specifically enumerated in the spec. There
are corner cases involving strings that don't exactly match the set of fields
called out by the typmod, for which we might want to tweak the behavior some
more; but I think this is an area of user friendliness rather than spec
compliance. There remain some non-compliant details about the SQL syntax
(as opposed to what's inside the literal string); but at least we'll throw
error rather than silently doing the wrong thing in those cases.
and anti joins. To do this, pass the SpecialJoinInfo struct for the current
join as an additional optional argument to operator join selectivity
estimation functions. This allows the estimator to tell not only what kind
of join is being formed, but which variable is on which side of the join;
a requirement long recognized but not dealt with till now. This also leaves
the door open for future improvements in the estimators, such as accounting
for the null-insertion effects of lower outer joins. I didn't do anything
about that in the current patch but the information is in principle deducible
from what's passed.
The patch also clarifies the definition of join selectivity for semi/anti
joins: it's the fraction of the left input that has (at least one) match
in the right input. This allows getting rid of some very fuzzy thinking
that I had committed in the original 7.4-era IN-optimization patch.
There's probably room to estimate this better than the present patch does,
but at least we know what to estimate.
Since I had to touch CREATE OPERATOR anyway to allow a variant signature
for join estimator functions, I took the opportunity to add a couple of
additional checks that were missing, per my recent message to -hackers:
* Check that estimator functions return float8;
* Require execute permission at the time of CREATE OPERATOR on the
operator's function as well as the estimator functions;
* Require ownership of any pre-existing operator that's modified by
the command.
I also moved the lookup of the functions out of OperatorCreate() and
into operatorcmds.c, since that seemed more consistent with most of
the other catalog object creation processes, eg CREATE TYPE.
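A sketch (my_point_eq is a hypothetical function; the estimators shown are
the standard ones):
    CREATE OPERATOR === (
        LEFTARG = point, RIGHTARG = point,
        PROCEDURE = my_point_eq,
        RESTRICT = eqsel, JOIN = eqjoinsel
    );
    -- CREATE OPERATOR now checks execute permission on my_point_eq and on
    -- the estimator functions, and that the estimators return float8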
This completes my project of improving usage of hashing for duplicate
elimination (aggregate functions with DISTINCT remain undone, but that's
for some other day).
As with the previous patches, this means we can INTERSECT/EXCEPT on datatypes
that can hash but not sort, and it means that INTERSECT/EXCEPT without ORDER
BY are no longer certain to produce sorted output.
but seem like a separate patch since most of the remaining work is on the
executor side.) I took the opportunity to push selection of the grouping
operators for set operations into the parser where it belongs. Otherwise this
is just a small exercise in making prepunion.c consider both alternatives.
As with the recent DISTINCT patch, this means we can UNION on datatypes that
can hash but not sort, and it means that UNION without ORDER BY is no longer
certain to produce sorted output.
as methods for implementing the DISTINCT step. This eliminates the former
performance gap between DISTINCT and GROUP BY, and also makes it possible
to do SELECT DISTINCT on datatypes that only support hashing not sorting.
SELECT DISTINCT ON is still always implemented by sorting; it would take
executor changes to support hashing that, and it's not clear it's worth
the trouble.
This is a release-note-worthy incompatibility from previous PG versions,
since SELECT DISTINCT can no longer be counted on to deliver sorted output
without explicitly saying ORDER BY. (Anyone who can't cope with that
can consider turning off enable_hashagg.)
Several regression test queries needed to have ORDER BY added to preserve
stable output order. I fixed the ones that manifested here, but there
might be some other cases that show up on other platforms.
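For example (invented names):
    SELECT DISTINCT city FROM addresses ORDER BY city;
    -- the ORDER BY must now be written explicitly if sorted output is wanted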
only type categories in which the previous coding made *every* type
preferred; so there is no change in effective behavior, because the function
resolution rules only do something different when faced with a choice
between preferred and non-preferred types in the same category. It just
seems safer and less surprising to have CREATE TYPE default to non-preferred
status ...
with system catalog lookups, as was foreseen to be necessary almost since
their creation. Instead put the information into two new pg_type columns,
typcategory and typispreferred. Add support for setting these when
creating a user-defined base type.
The category column is just a "char" (i.e. a poor man's enum), allowing
a crude form of user extensibility of the category list: just use an
otherwise-unused character. This seems sufficient for foreseen uses,
but we could upgrade to having an actual category catalog someday, if
there proves to be a huge demand for custom type categories.
In this patch I have attempted to hew exactly to the behavior of the
previous hardwired logic, except for introducing new type categories for
arrays, composites, and enums. In particular the default preferred state
for user-defined types remains TRUE. That seems worth revisiting, but it
should be done as a separate patch from introducing the infrastructure.
Likewise, any adjustment of the standard set of categories should be done
separately.
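The new columns can be inspected directly, for example:
    SELECT typname, typcategory, typispreferred
    FROM pg_type
    WHERE typname IN ('int4', 'text');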
need to deconstruct proargmodes for each pg_proc entry inspected by
FuncnameGetCandidates(). Fixes function lookup performance regression
caused by yesterday's variadic-functions patch.
In passing, make pg_proc.probin be NULL, rather than a dummy value '-',
in cases where it is not actually used for the particular type of function.
This should buy back some of the space cost of the extra column.
so long as all the trailing arguments are of the same (non-array) type.
The function receives them as a single array argument (which is why they
have to all be the same type).
It might be useful to extend this facility to aggregates, but this patch
doesn't do that.
This patch imposes a noticeable slowdown on function lookup --- a follow-on
patch will fix that by adding a redundant column to pg_proc.
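For example (function name invented):
    CREATE FUNCTION mleast(VARIADIC numeric[]) RETURNS numeric AS $$
        SELECT min($1[i]) FROM generate_subscripts($1, 1) AS g(i);
    $$ LANGUAGE sql;
    SELECT mleast(10, -1, 5.4);   -- arguments arrive as the array {10,-1,5.4}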
Pavel Stehule
bug #4290. The fundamental bug is that masking extParam by outer_params,
as finalize_plan had been doing, caused us to lose the information that
an initPlan depended on the output of a sibling initPlan. On reflection
the best thing to do seemed to be not to try to adjust outer_params for
this case but get rid of it entirely. The only thing it was really doing
for us was to filter out param IDs associated with SubPlan nodes, and that
can be done (with greater accuracy) while processing individual SubPlan
nodes in finalize_primnode. This approach was vindicated by the discovery
that the masking method was hiding a second bug: SS_finalize_plan failed to
remove extParam bits for initPlan output params that were referenced in the
main plan tree (it only got rid of those referenced by other initPlans).
It's not clear that this caused any real problems, given the limited use
of extParam by the executor, but it's certainly not what was intended.
I originally thought that there was also a problem with needing to include
indirect dependencies on external params in initPlans' param sets, but it
turns out that the executor handles this correctly so long as the depended-on
initPlan is earlier in the initPlans list than the one using its output.
That seems a bit of a fragile assumption, but it is true at the moment,
so I just documented it in some code comments rather than making what would
be rather invasive changes to remove the assumption.
Back-patch to 8.1. Previous versions don't have the case of initPlans
referring to other initPlans' outputs, so while the existing logic is still
questionable for them, there are not any known bugs to be fixed. So I'll
refrain from changing them for now.
of any lower outer join, even if it also references the non-nullable side and
so could not get pushed below the outer join anyway. We need this in case
the clause is an OR clause: if it doesn't get marked outerjoin_delayed,
create_or_index_quals() could pull an indexable restriction for the nullable
side out of it, leading to wrong results as demonstrated by today's bug
report from toruvinn. (See added regression test case for an example.)
In principle this has been wrong for quite a while. In practice I don't
think any branch before 8.3 can really show the failure, because
create_or_index_quals() will only pull out indexable conditions, and before
8.3 those were always strict. So though we might have improperly generated
null-extended rows in the outer join, they'd get discarded from the result
anyway. The gating factor that makes the failure visible is that 8.3
considers "col IS NULL" to be indexable. Hence I'm not going to risk
back-patching further than 8.3.
objects are specified, we drop them all in a single performMultipleDeletions
call. This makes the RESTRICT/CASCADE checks more relaxed: it's not counted
as a cascade if one of the later objects has a dependency on an earlier one.
NOTICE messages about such cases go away, too.
In passing, fix the permissions check for DROP CONVERSION, which for some
reason was never made role-aware, and omitted the namespace-owner exemption
too.
Alex Hunsaker, with further fiddling by me.
cases. Recent buildfarm experience shows that it is sometimes possible
to execute several SQL commands in less time than the granularity of
Windows' not-very-high-resolution gettimeofday(), leading to a failure
because the tests expect the value of now() to change and it doesn't.
Also, it was recognized some time ago that the same area of the tests
could fail if local midnight passes between the insertion and the checking
of the values for 'yesterday', 'tomorrow', etc. Clean all this up per
ideas from myself and Greg Stark.
There remains a window for failure if the transaction block is entered
exactly at local midnight (so that 'now' and 'today' have the same value),
but that seems low-probability enough to live with.
Since the point of this change is mostly to eliminate buildfarm noise,
back-patch to all versions we are still actively testing.
sequence to be reset to its original starting value. This requires adding the
original start value to the set of parameters (columns) of a sequence object,
which is a user-visible change with potential compatibility implications;
it also forces initdb.
Also add hopefully-SQL-compatible RESTART/CONTINUE IDENTITY options to
TRUNCATE TABLE. RESTART IDENTITY executes ALTER SEQUENCE RESTART for all
sequences "owned by" any of the truncated relations. CONTINUE IDENTITY is
a no-op option.
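For instance (hypothetical names):

    CREATE SEQUENCE s START WITH 100;
    SELECT nextval('s');          -- 100
    SELECT nextval('s');          -- 101
    ALTER SEQUENCE s RESTART;     -- back to the stored original start value
    SELECT nextval('s');          -- 100 again

    TRUNCATE TABLE orders RESTART IDENTITY;  -- also restarts sequences
                                             -- owned by columns of orders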
Zoltan Boszormenyi
HINT fields to a user-thrown error message, and to specify the SQLSTATE
error code to use. The syntax has also been tweaked so that the
Oracle-compatible case "RAISE exception_name" works (though you won't get a
very nice error message if you just write that much). Lastly, support
the Oracle-compatible syntax "RAISE" with no parameters to re-throw
the current error from within an EXCEPTION block.
In passing, allow the syntax SQLSTATE 'nnnnn' within EXCEPTION lists,
so that there is a way to trap errors with custom SQLSTATE codes.
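A sketch tying these pieces together (hypothetical function name and
SQLSTATE code):

    CREATE FUNCTION check_qty(qty int) RETURNS void AS $$
    BEGIN
        IF qty < 0 THEN
            RAISE EXCEPTION 'invalid quantity: %', qty
                USING HINT = 'Quantity must be nonnegative.',
                      ERRCODE = 'P0123';
        END IF;
    EXCEPTION
        WHEN SQLSTATE 'P0123' THEN
            -- handle or log here, then re-throw the current error
            RAISE;
    END;
    $$ LANGUAGE plpgsql;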
Pavel Stehule and Tom Lane
as those for inherited columns; that is, it's no longer allowed for a child
table to not have a check constraint matching one that exists on a parent.
This satisfies the principle of least surprise (rows selected from the parent
will always appear to meet its check constraints) and eliminates some
longstanding bogosity in pg_dump, which formerly had to guess about whether
check constraints were really inherited or not.
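For example (the constraint name below follows the default naming rule):

    CREATE TABLE parent (x int CHECK (x > 0));
    CREATE TABLE child () INHERITS (parent);

    ALTER TABLE child DROP CONSTRAINT parent_x_check;
    -- now fails with "cannot drop inherited constraint", where previously
    -- the child was allowed to drop it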
The implementation involves adding conislocal and coninhcount columns to
pg_constraint (paralleling attislocal and attinhcount in pg_attribute)
and refactoring various ALTER TABLE actions to be more like those for
columns.
Alex Hunsaker, Nikhil Sontakke, Tom Lane
checked to see if it's been initialized to all non-nulls. The implicit NOT
NULL constraint was not being checked during the ALTER (in fact, not even
when an explicit NOT NULL was also given), because ATExecAddColumn neglected to
set the flag needed to make the test happen. This has been broken since
the capability was first added, in 8.0.
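For example, assuming the fix, the implicit NOT NULL that comes with
adding a PRIMARY KEY column is now enforced:

    CREATE TABLE t (a int);
    INSERT INTO t VALUES (1);

    ALTER TABLE t ADD COLUMN b int PRIMARY KEY;
    -- fails: column "b" contains null values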
Brendan Jurd, per a report from Kaloyan Iliev.
output is not of the same type that's needed for the IN comparison (ie,
where the parser inserted an implicit coercion above the subselect result).
We should record the coerced expression, not just a raw Var referencing
the subselect output, as the quantity that needs to be unique-ified if
we choose to implement the IN as Unique followed by a plain join.
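For illustration, a hypothetical query of the shape in question, where the
parser coerces the subselect's smallint output up to integer for the
comparison:

    SELECT * FROM t WHERE t.int4col IN (SELECT s.int2col FROM s);
    -- the expression to unique-ify must be the coerced value
    -- (effectively int2col::int4), not the raw subselect output column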
As of 8.3 this error was causing crashes, as seen in bug #4113 from Javier
Hernandez, because the executor was being told to hash or sort the raw
subselect output column using operators appropriate to the coerced type.
In prior versions there was no crash because the executor chose the
hash or sort operators for itself based on the column type it saw.
However, that's still not really right, because what's unique for one data
type might not be unique for another. In corner cases we could get multiple
outputs of a row that should appear only once, as demonstrated by the
regression test case included in this commit.
However, this patch doesn't apply cleanly to 8.2 or before, and the code
involved has shifted enough over time that I'm hesitant to try to back-patch.
Given the lack of complaints from the field about such corner cases, I think
the bug may not be important enough to risk breaking other things with a
back-patch.
where Datum is 8 bytes wide. Since this will break old-style C functions
(those still using version 0 calling convention) that have arguments or
results of these types, provide a configure option to disable it and retain
the old pass-by-reference behavior. Likewise, provide a configure option
to disable the recently-committed float4 pass-by-value change.
Zoltan Boszormenyi, plus configurability stuff by me.
indexscan always occurs in one call, and the results are returned in a
TIDBitmap instead of a limited-size array of TIDs. This should improve
speed a little by reducing AM entry/exit overhead, and it is necessary
infrastructure if we are ever to support bitmap indexes.
In an only slightly related change, add support for TIDBitmaps to preserve
(somewhat lossily) the knowledge that particular TIDs reported by an index
need to have their quals rechecked when the heap is visited. This facility
is not really used yet; we'll need to extend the forced-recheck feature to
plain indexscans before it's useful, and that hasn't been coded yet.
The intent is to use it to clean up 8.3's horrid @@@ kluge for text search
with weighted queries. There might be other uses in future, but that one
alone is sufficient reason.
Heikki Linnakangas, with some adjustments by me.
data. This makes for a significant speedup at the cost that the results
now vary between little-endian and big-endian machines, which forces us
to add explicit ORDER BYs in a couple of regression tests to preserve
machine-independent comparison results. Also, force initdb by bumping
catversion, since the contents of hash indexes will change (at least on
big-endian machines).
Kenneth Marshall and Tom Lane, based on work from Bob Jenkins. This commit
does not adopt Bob's new faster mix() algorithm, however, since we still need
to convince ourselves that that doesn't degrade the quality of the hashing.
algorithm. This is a good deal slower than our old roundoff-error-prone
code for long inputs, so we keep the old code for use in the transcendental
functions, where everything is approximate anyway. Also create a
user-accessible function div(numeric, numeric) to provide access to the
exact result of trunc(x/y) --- since the regular numeric / operator will
round off its result, simply computing that expression in SQL doesn't
reliably give the desired answer. This fixes bug #3387 and various related
corner cases, and improves the usefulness of PG for high-precision integer
arithmetic.
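For example:

    SELECT div(11, 4);               -- 2: the exact truncated quotient
    SELECT trunc(11::numeric / 4);   -- 2 here as well, but because "/"
                                     -- rounds its result, trunc(x/y) can
                                     -- be off by one in corner cases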
directly to all the member expressions, instead of the previous implementation
where the ARRAY[] constructor would infer a common element type and then we'd
coerce the finished array after the fact. This has a number of benefits,
one being that we can allow an empty ARRAY[] construct so long as its
element type is specified by such a cast.
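For example:

    SELECT ARRAY[]::integer[];          -- empty array; the cast supplies
                                        -- the element type
    SELECT ARRAY[1, 2, 3]::numeric[];   -- the cast is applied to each
                                        -- element directly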
Brendan Jurd, minor fixes by me.
this adds support for 64-bit tzdata files, which is needed to support DST
calculations beyond 2038. Add a regression test case to give some minimal
confidence that that really works.
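A quick sanity check, assuming a DST-observing zone:

    SET timezone = 'America/New_York';
    SELECT '2045-07-01 12:00'::timestamptz;  -- displayed with the
                                             -- daylight-time offset (-04),
                                             -- well past 2038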
Heikki Linnakangas
checking of argument compatibility right, although the problem is only exposed
with multiple-input aggregates in which some arguments are polymorphic and
some are not. Per bug #3852 from Sokolov Yura.
of poorer planning in 8.3 than 8.2:
1. After pushing a constant across an outer join --- ie, given
"a LEFT JOIN b ON (a.x = b.y) WHERE a.x = 42", we can deduce that b.y is
sort of equal to 42, in the sense that we needn't fetch any b rows where
it isn't 42 --- loop to see if any additional deductions can be made.
Previous releases did that by recursing, but I had mistakenly thought that
this was no longer necessary given the EquivalenceClass machinery.
2. Allow pushing constants across outer join conditions even if the
condition is outerjoin_delayed due to a lower outer join. This is safe
as long as the condition is strict and we re-test it at the upper join.
3. Keep the outer-join clause even if we successfully push a constant
across it. This is *necessary* in the outerjoin_delayed case, but
even in the simple case, it seems better to do this to ensure that the
join search order heuristics will consider the join as reasonable to
make. Mark such a clause as having selectivity 1.0, though, since it's
not going to eliminate very many rows after application of the constant
condition.
4. Tweak have_relevant_eclass_joinclause to report that two relations
are joinable when they have vars that are equated to the same constant.
We won't actually generate any joinclause from such an EquivalenceClass,
but again it seems that in such a case it's a good idea to consider
the join as worth costing out.
5. Fix a bug in select_mergejoin_clauses that was exposed by these
changes: we have to reject candidate mergejoin clauses if either side was
equated to a constant, because we can't construct a canonical pathkey list
for such a clause. This is an implementation restriction that might be
worth fixing someday, but it doesn't seem critical to get it done for 8.3.
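To illustrate points 1 and 3 with the query shape from above:

    EXPLAIN SELECT * FROM a LEFT JOIN b ON a.x = b.y WHERE a.x = 42;
    -- b can be scanned with the derived condition b.y = 42, yet the join
    -- clause a.x = b.y is kept (at selectivity 1.0) so the join-order
    -- heuristics still see the relations as connected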
but no database changes have been made since the last CommandCounterIncrement.
This should result in a significant improvement in the number of "commands"
that can typically be performed within a transaction before hitting the 2^32
CommandId size limit. In particular this buys back (and more) the possible
adverse consequences of my previous patch to fix plan caching behavior.
The implementation requires tracking whether the current CommandCounter
value has been "used" to mark any tuples. CommandCounter values stored into
snapshots are presumed not to be used for this purpose. This requires some
small executor changes, since the executor used to conflate the curcid of
the snapshot it was using with the command ID used to mark output tuples.
Separating these concepts allows some small simplifications in executor APIs.
Something for the TODO list: look into having CommandCounterIncrement not do
AcceptInvalidationMessages. It seems fairly bogus to be doing it there,
but exactly where to do it instead isn't clear, and I'm disinclined to mess
with asynchronous behavior during late beta.
plan before the effects of DDL executed in an immediately prior SPI operation
had been absorbed. Per report from Chris Wood.
This patch has the unpleasant side effect of causing the number of
CommandCounterIncrement()s done by a typical plpgsql function to
approximately double. Amelioration of the consequences of that
will be undertaken in a separate patch.