handling many-way scans: instead of re-evaluating all prior indexscan
quals to see if a tuple has been fetched more than once, use a hash table
indexed by tuple CTID. But fall back to the old way if the hash table
grows to exceed SortMem.
as well as the hash function (formerly the comparison function was hardwired
as memcmp()). This makes it possible to eliminate the special-purpose
hashtable management code in execGrouping.c in favor of using dynahash to
manage tuple hashtables, which is a win because dynahash knows how to expand
a hashtable when the original size estimate was too small, whereas the
special-purpose code was too stupid to do that. (See recent gripe from
Stephan Szabo about poor performance when hash table size estimate is way
off.) Free side benefit: when using string_hash, the default comparison
function is now strncmp() instead of memcmp(). This should eliminate some
part of the overhead associated with larger NAMEDATALEN values.
be anything yielding an array of the proper kind, not only sub-ARRAY[]
constructs; do subscript checking at runtime, not parse time. Also,
adjust array_cat to make array || array comply with the SQL99 spec.
Joe Conway
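For illustration, the new flexibility looks like this (the table and
column names are hypothetical):
    -- element expressions may be anything array-valued, not just
    -- ARRAY[] literals (assuming intarr_col is a two-element int[]):
    SELECT ARRAY[intarr_col, ARRAY[3,4]] FROM mytab;
    SELECT ARRAY[1,2] || ARRAY[3,4];   -- array || array per SQL99: {1,2,3,4}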
yet, though). Avoid using nth() to fetch tlist entries; provide a
common routine get_tle_by_resno() to search a tlist for a particular
resno. This replaces a couple uses of nth() and a dozen hand-coded
search loops. Also, replace a few uses of nth(length-1, list) with
llast().
error, if any input element is NULL. This is not what we ultimately want,
but until arrays can have NULL elements, it will have to do. Patch from
Joe Conway.
It also works to create a non-polymorphic aggregate from polymorphic
functions, should you want to do that. Regression test added, docs still
lacking. By Joe Conway, with some kibitzing from Tom Lane.
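For example, a polymorphic aggregate can now be built from the polymorphic
array_append function (a sketch; the aggregate and table names are made up):
    CREATE AGGREGATE myagg (
        basetype = anyelement,
        sfunc = array_append,
        stype = anyarray,
        initcond = '{}'
    );
    SELECT myagg(f1) FROM mytab;  -- collects f1 values of any type into an array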
ANYELEMENT. The effect is to postpone typechecking of the function
body until runtime. Documentation is still lacking.
Original patch by Joe Conway, modified to postpone type checking
by Tom Lane.
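A minimal sketch of such a function (the name is made up):
    CREATE FUNCTION identity(anyelement) RETURNS anyelement
        AS 'SELECT $1' LANGUAGE sql;
    -- the body is typechecked afresh for each actual argument type:
    SELECT identity(42), identity('foo'::text);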
'scalar op ALL (array)', where the operator is applied between the
lefthand scalar and each element of the array. The operator must
yield boolean; the result of the construct is the OR or AND of the
per-element results, respectively.
Original coding by Joe Conway, after an idea of Peter's. Rewritten
by Tom to keep the implementation strictly separate from subqueries.
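For instance:
    SELECT 5 = ANY (ARRAY[1, 5, 9]);   -- true: OR of the per-element results
    SELECT 5 < ALL (ARRAY[6, 7, 8]);   -- true: AND of the per-element results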
comparison functions), replacing the highly bogus bitwise array_eq. Create
a btree index opclass for ANYARRAY --- it is now possible to create indexes
on array columns.
Arrange to cache the results of catalog lookups across multiple array
operations, instead of repeating the lookups on every call.
Add string_to_array and array_to_string functions.
Remove singleton_array, array_accum, array_assign, and array_subscript
functions, since these were for proof-of-concept and not intended to become
supported functions.
Minor adjustments to behavior in some corner cases with empty or
zero-dimensional arrays.
Joe Conway (with some editorializing by Tom Lane).
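A few illustrations of the new capabilities (table and column names
hypothetical):
    SELECT ARRAY[1,2,3] = ARRAY[1,2,3];        -- uses element comparison functions
    CREATE INDEX mytab_arr_idx ON mytab (arrcol);  -- btree index on an array column
    SELECT string_to_array('a,b,c', ',');      -- {a,b,c}
    SELECT array_to_string(ARRAY[1,2,3], '~'); -- 1~2~3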
specific hash functions used by hash indexes, rather than the old
not-datatype-aware ComputeHashFunc routine. This makes it safe to do
hash joining on several datatypes that previously couldn't use hashing.
The sets of datatypes that are hash indexable and hash joinable are now
exactly the same, whereas before each had some that weren't in the other.
extensions to support our historical behavior. An aggregate belongs
to the closest query level of any of the variables in its argument,
or the current query level if there are no variables (e.g., COUNT(*)).
The implementation involves adding an agglevelsup field to Aggref,
and treating outer aggregates like outer variables at planning time.
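A sketch of the rule (table names hypothetical):
    -- count(*) contains no variables, so it belongs to the sub-select itself:
    SELECT f1 FROM t1 WHERE 0 < (SELECT count(*) FROM t2);
    -- max(o.f1) references only the outer query, so it is an outer-level
    -- aggregate (agglevelsup = 1) and aggregates over t1's rows:
    SELECT (SELECT max(o.f1)) FROM t1 o;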
when the plan is ReScanned, we don't have to rebuild the hash table
if there is no parameter change for its child node. This idea has
been used for a long time in Sort and Material nodes, but was not in
the hash code till now.
introducing new 'FastList' list-construction subroutines to use in
hot spots. This avoids the O(N^2) behavior of repeated lappend's
by keeping a tail pointer, while not changing behavior by reversing
list order as the lcons() method would do.
of an index can now be a computed expression instead of a simple variable.
Restrictions on expressions are the same as for predicates (only immutable
functions, no sub-selects). This fixes problems recently introduced with
inlining SQL functions, because the inlining transformation is applied to
both expression trees so the planner can still match them up. Along the
way, improve efficiency of handling index predicates (both predicates and
index expressions are now cached by the relcache) and fix 7.3 oversight
that didn't record dependencies of predicate expressions.
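For example (index and table names hypothetical):
    CREATE INDEX people_lower_idx ON people (lower(name));
    -- an extra set of parentheses is needed when the expression is not
    -- a bare function call:
    CREATE INDEX mytab_sum_idx ON mytab ((a + b));
    -- a query writing the same expression can use the index:
    SELECT * FROM people WHERE lower(name) = 'smith';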
handle multiple 'formats' for data I/O. Restructure CommandDest and
DestReceiver stuff one more time (it's finally starting to look a bit
clean though). Code now matches latest 3.0 protocol document as far
as message formats go --- but there is no support for binary I/O yet.
DestReceiver pointers instead of just CommandDest values. The DestReceiver
is made at the point where the destination is selected, rather than
deep inside the executor. This cleans up the original kluge implementation
of tstoreReceiver.c, and makes it easy to support retrieving results
from utility statements inside portals. Thus, you can now do fun things
like Bind and Execute a FETCH or EXPLAIN command, and it'll all work
as expected (e.g., you can Describe the portal, or use Execute's count
parameter to suspend the output partway through). Implementation involves
stuffing the utility command's output into a Tuplestore, which would be
kind of annoying for huge output sets, but should be quite acceptable
for typical uses of utility commands.
the column by table OID and column number, if it's a simple column
reference. Along the way, get rid of reskey/reskeyop fields in Resdoms.
Turns out that representation was not convenient for either the planner
or the executor; we can make the planner deliver exactly what the
executor wants with no more effort.
initdb forced due to change in stored rule representation.
which does the same thing. Perhaps at one time there was a reason to
allow plan nodes to store their result types in different places, but
AFAICT that's been unnecessary for a good while.
Both plannable queries and utility commands are now always executed
within Portals, which have been revamped so that they can handle the
load (they used to be good only for single SELECT queries). Restructure
code to push command-completion-tag selection logic out of postgres.c,
so that it won't have to be duplicated between simple and extended queries.
initdb forced due to addition of a field to Query nodes.
that the types of untyped string-literal constants are deduced (ie,
when coerce_type is applied to 'em, that's what the type must be).
Remove the ancient hack of storing the input Param-types array as a
global variable, and put the info into ParseState instead. This touches
a lot of files because of adjustment of routine parameter lists, but
it's really not a large patch. Note: PREPARE statement still insists on
exact specification of parameter types, but that could easily be relaxed
now, if we wanted to do so.
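For reference, PREPARE's current exact-types form looks like this (names
hypothetical):
    PREPARE getuser (int) AS
        SELECT * FROM users WHERE usrid = $1;
    EXECUTE getuser(42);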
I had inadvertently omitted it while rearranging things to support
length-counted incoming messages. Also, change the parser's API back
to accepting a 'char *' query string instead of 'StringInfo', as the
latter wasn't buying us anything except overhead. (I think when I put
it in I had some notion of making the parser API 8-bit-clean, but
seeing that flex depends on null-terminated input, that's not really
ever gonna happen.)
rewritten and the protocol is changed, but most elog calls are still
elog calls. Also, we need to contemplate mechanisms for controlling
all this functionality --- eg, how much stuff should appear in the
postmaster log? And what API should libpq expose for it?
expressions, ARRAY(sub-SELECT) expressions, some array functions.
Polymorphic functions using ANYARRAY/ANYELEMENT argument and return
types. Some regression tests in place, documentation is lacking.
Joe Conway, with some kibitzing from Tom Lane.
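E.g.:
    SELECT ARRAY[1, 2, 3 + 4];                  -- ARRAY[] expression
    SELECT ARRAY(SELECT oid FROM pg_proc
                 WHERE proname LIKE 'bytea%');  -- ARRAY(sub-SELECT) expression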
(materialization into a tuple store) discussed on pgsql-hackers earlier.
I've updated the documentation and the regression tests.
Notes on the implementation:
- I needed to change the tuple store API slightly -- it assumes that it
won't be used to hold data across transaction boundaries, so the temp
files that it uses for on-disk storage are automatically reclaimed at
end-of-transaction. I added a flag to tuplestore_begin_heap() to control
this behavior. Is changing the tuple store API in this fashion OK?
- in order to store executor results in a tuple store, I added a new
CommandDest. This works well for the most part, with one exception: the
current DestFunction API doesn't provide enough information to allow the
Executor to store results into an arbitrary tuple store (where the
particular tuple store to use is chosen by the call site of
ExecutorRun). To work around this, I've temporarily hacked up a solution
that works, but is not ideal: since the receiveTuple DestFunction is
passed the portal name, we can use that to look up the Portal data
structure for the cursor and then use that to get at the tuple store the
Portal is using. This unnecessarily ties the Portal code to the
tupleReceiver code, but it works...
The proper fix for this is probably to change the DestFunction API --
Tom suggested passing the full QueryDesc to the receiveTuple function.
In that case, callers of ExecutorRun could "subclass" QueryDesc to add
any additional fields that their particular CommandDest needed to get
access to. This approach would work, but I'd like to think about it for
a little bit longer before deciding which route to go. In the meantime,
the code works fine, so I don't think a fix is urgent.
- (semi-related) I added a NO SCROLL keyword to DECLARE CURSOR, and
adjusted the behavior of SCROLL in accordance with the discussion on
-hackers.
- (unrelated) Cleaned up some SGML markup in sql.sgml, copy.sgml
Neil Conway
utility statement (DeclareCursorStmt) with a SELECT query dangling from
it, rather than a SELECT query with a few unusual fields in it. Add
code to determine whether a planned query can safely be run backwards.
If DECLARE CURSOR specifies SCROLL, ensure that the plan can be run
backwards by adding a Materialize plan node if it can't. Without SCROLL,
you get an error if you try to fetch backwards from a cursor that can't
handle it. (There is still some discussion about what the exact
behavior should be, but this is necessary infrastructure in any case.)
Along the way, make EXPLAIN DECLARE CURSOR work.
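A quick illustration (table name hypothetical):
    BEGIN;
    DECLARE c SCROLL CURSOR FOR SELECT * FROM mytab ORDER BY f1;
    FETCH FORWARD 5 FROM c;
    FETCH BACKWARD 2 FROM c;   -- guaranteed to work because SCROLL was given
    CLOSE c;
    COMMIT;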
entire contents of the subplan into the tuplestore before we can return
any tuples. Instead, the tuplestore holds what we've already read, and
we fetch additional rows from the subplan as needed. Random access to
the previously-read rows works with the tuplestore, and doesn't affect
the state of the partially-read subplan. This is a step towards fixing
the problems with cursors over complex queries --- we don't want to
stick in Materialize nodes if they'll prevent quick startup for a cursor.
rid of the assumption that sizeof(Oid)==sizeof(int). This is one small
step towards someday supporting 8-byte OIDs. For the moment, it doesn't
do much except get rid of a lot of unsightly casts.
locParam lists can be converted to bitmapsets to speed updating. Also,
replace 'locParam' with 'allParam', which contains all the paramIDs
relevant to the node (i.e., the union of extParam and locParam); this
saves a step during SetChangedParamList() without costing anything
elsewhere.
startup, not in the parser; this allows ALTER DOMAIN to work correctly
with domain constraint operations stored in rules. Rod Taylor;
code review by Tom Lane.
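E.g. (domain and constraint names made up):
    CREATE DOMAIN posint AS integer;
    ALTER DOMAIN posint ADD CONSTRAINT positive CHECK (VALUE > 0);
    -- the new constraint is now applied even where the domain is used
    -- inside a stored rule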
nodes where it's not really necessary. In many cases where the scan node
is not the topmost plan node (eg, joins, aggregation), it's possible to
just return the table tuple directly instead of generating an intermediate
projection tuple. In preliminary testing, this reduced the CPU time
needed for 'SELECT COUNT(*) FROM foo' by about 10%.
Try to model the effect of rescanning input tuples in mergejoins;
account for JOIN_IN short-circuiting where appropriate. Also, recognize
that mergejoin and hashjoin clauses may now be more than single operator
calls, so we have to charge appropriate execution costs.
that's selecting into a RECORD variable returns zero rows, make it
assign an all-nulls row to the RECORD; this is consistent with what
happens when the SELECT INTO target is not a RECORD. In support of
this, tweak the SPI code so that a valid tuple descriptor is returned
even when a SPI select returns no rows.
There are two implementation techniques: the executor understands a new
JOIN_IN jointype, which emits at most one matching row per left-hand row,
or the result of the IN's sub-select can be fed through a DISTINCT filter
and then joined as an ordinary relation.
Along the way, some minor code cleanup in the optimizer; notably, break
out most of the jointree-rearrangement preprocessing in planner.c and
put it in a new file prep/prepjointree.c.
Simplify SubLink by storing just a List of operator OIDs, instead of
a list of incomplete OpExprs --- that was a bizarre and bulky choice,
with no redeeming social value since we have to build new OpExprs
anyway when forming the plan tree.
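For example, a query like this one (names hypothetical) can now be planned
either way:
    SELECT * FROM orders
    WHERE customer_id IN (SELECT id FROM customers WHERE state = 'CA');
    -- either join with JOIN_IN, emitting at most one row per orders row,
    -- or run the sub-select through DISTINCT and join it as a plain relation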
'NOT (x IN (subselect))', that is 'NOT (x = ANY (subselect))',
rather than 'x <> ALL (subselect)' as we formerly did. This
opens the door to optimizing NOT IN the same way as IN, whereas
there's no hope of optimizing the expression using <>. Also,
convert 'x <> ALL (subselect)' to the NOT(IN) style, so that
the optimization will be available when processing rules dumped
by older Postgres versions.
initdb forced due to small change in SubLink node representation.
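That is:
    SELECT * FROM t WHERE x NOT IN (SELECT y FROM u);
    -- is now represented as though written:
    SELECT * FROM t WHERE NOT (x = ANY (SELECT y FROM u));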
computation: reduce the bucket number mod nbatch. This changes the
association between original bucket numbers and batches, but that
doesn't matter. Minor other cleanups in hashjoin code to help
centralize decisions.
given any malloc block until something is first allocated in it; but
thereafter, MemoryContextReset won't release that first malloc block.
This preserves the quick-reset property of the original policy, without
forcing 8K to be allocated to every context whether any of it is ever
used or not. Also, remove some more no-longer-needed explicit freeing
during ExecEndPlan.
a per-query memory context created by CreateExecutorState --- and destroyed
by FreeExecutorState. This provides a final solution to the longstanding
problem of memory leaked by various ExecEndNode calls.
in the planned representation of a subplan at all any more, only SubPlan.
This means subselect.c doesn't scribble on its input anymore, which seems
like a good thing; and there are no longer three different possible
interpretations of a SubLink. Simplify node naming and improve comments
in primnodes.h. No change to stored rules, though.
execution state trees, and ExecEvalExpr takes an expression state tree
not an expression plan tree. The plan tree is now read-only as far as
the executor is concerned. Next step is to begin actually exploiting
this property.
make VALUE a non-reserved word again, use less invasive method of passing
ConstraintTestValue into transformExpr, fix problems with nested constraint
testing, do correct thing with NULL result from a constraint expression,
remove memory leak. Domain checks still need much more work if we are going
to allow ALTER DOMAIN, however.
so that all executable expression nodes inherit from a common supertype
Expr. This is somewhat of an exercise in code purity rather than any
real functional advance, but getting rid of the extra Oper or Func node
formerly used in each operator or function call should provide at least
a little space and speed improvement.
initdb forced by changes in stored-rules representation.
to plan nodes, not vice-versa. All executor state nodes now inherit from
struct PlanState. Copying of plan trees has been simplified by not
storing a list of SubPlans in Plan nodes (eliminating duplicate links).
The executor still needs such a list, but it can build it during
ExecutorStart since it has to scan the plan tree anyway.
No initdb forced since no stored-on-disk structures changed, but you
will need a full recompile because of node-numbering changes.
well as function calls. This is needed for cases where the planner has
constant-folded or inlined the original function call. Possibly we should
back-patch this change into 7.3 branch as well.
logic, dissuade planner from thinking that 'x IS DISTINCT FROM 42' may
be optimized into 'x = 42' (!!), cause dependency on = operator to be
recorded correctly, minor other improvements.
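The reason the first 'optimization' was bogus, in brief:
    SELECT NULL = 42;                  -- yields NULL
    SELECT NULL IS DISTINCT FROM 42;   -- yields true: NULLs are treated as
                                       -- ordinary values, so the two forms
                                       -- are not interchangeable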
instead of only one. This should speed up planning (only one hash path
to consider for a given pair of relations) as well as allow more effective
hashing, when there are multiple hashable joinclauses.
operations: make sure we use operators that are compatible, as determined
by a mergejoin link in pg_operator. Also, add code to planner to ensure
we don't try to use hashed grouping when the grouping operators aren't
marked hashable.
sublink results and COPY's domain constraint checking. A Const that
isn't really constant is just a Bad Idea(tm). Remove hacks in
parse_coerce and other places that were needed because of the former
klugery.
-hackers a couple days ago.
Notes/caveats:
- added regression tests for the new functionality, all
regression tests pass on my machine
- added pg_dump support
- updated PL/PgSQL to support per-statement triggers; didn't
look at the other procedural languages.
- there's (even) more code duplication in trigger.c than there
was previously. Any suggestions on how to refactor the
ExecXXXTriggers() functions to reuse more code would be
welcome -- I took a brief look at it, but couldn't see an
easy way to do it (there are several subtly-different
versions of the code in question)
- updated the documentation. I also took the liberty of
removing a big chunk of duplicated syntax documentation in
the Programmer's Guide on triggers, and moving that
information to the CREATE TRIGGER reference page.
- I also included some spelling fixes and similar small
cleanups I noticed while making the changes. If you'd like
me to split those into a separate patch, let me know.
Neil Conway
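A minimal example of the new per-statement level (all names made up):
    CREATE TRIGGER audit_stmt
        AFTER UPDATE ON accounts
        FOR EACH STATEMENT
        EXECUTE PROCEDURE audit_update();
    -- fires once per UPDATE statement, however many rows are affected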
one more row from the subplan than the COUNT would appear to require.
This costs a little more logic but a number of people have complained
about the old implementation.
of groups produced by GROUP BY. This improves the accuracy of planning
estimates for grouped subselects, and is needed to check whether a
hashed aggregation plan risks memory overflow.
before commit, not after :-( --- the original coding is not only unsafe
if an error occurs while it's processing, but it generates an invalid
sequence of WAL entries. Resurrect 7.2 logic for deleting items when
no longer needed. Use an enum instead of random macros. Editorialize
on names used for routines and constants. Teach backend/nodes routines
about new field in CreateTable struct. Add a regression test.
node now does its own grouping of the input rows, and has no need for a
preceding GROUP node in the plan pipeline. This allows elimination of
the misnamed tuplePerGroup option for GROUP, and actually saves more code
in nodeGroup.c than it costs in nodeAgg.c, as well as being presumably
faster. Restructure the API of query_planner so that we do not commit to
using a sorted or unsorted plan in query_planner; instead grouping_planner
makes the decision. (Right now it isn't any smarter than query_planner
was, but that will change as soon as it has the option to select a hash-
based aggregation step.) Despite all the hackery, no initdb needed since
only in-memory node types changed.
command status at the interactive level. SPI_processed, etc are set
in the same way as the returned command status would have been set if
the same querystring were issued interactively. Per gripe from
Michael Paesold 25-Sep-02.
query that uses it. This ensures that triggers will be applied consistently
throughout a query even if someone commits changes to the relation's
pg_class.reltriggers field meanwhile. Per crash report from Laurette Cisneros.
While at it, simplify memory management in relcache.c, which no longer
needs the old hack to try to keep trigger info in the same place over
a relcache entry rebuild. (Should try to fix rd_att and rewrite-rule
access similarly, someday.) And make RelationBuildTriggers simpler and
more robust by making it build the trigdesc in working memory and then
CopyTriggerDesc() into cache memory.
just the significant fields of FunctionCallInfoData, rather than MemSet'ing
the whole struct to zero. Unused positions in the arg[] array will
thereby contain garbage rather than zeroes. This buys back some of the
performance hit from increasing FUNC_MAX_ARGS. Also tweak tuplesort.c
code for more speed by marking some routines 'inline'. All together
these changes speed up simple sorts, like count(distinct int4column),
by about 25% on a P4 running RH Linux 7.2.
executor should not return the tuple as successfully marked, because in
fact it's been deleted. Not clear that this case has ever been seen
in practice (I think you'd have to write a SELECT FOR UPDATE that calls
a function that deletes some row the SELECT will visit later...) but we
should be consistent. Also add comments to several other places that
got it right but didn't explain what they were doing.
to be flexible about assignment casts without introducing ambiguity in
operator/function resolution. Introduce a well-defined promotion hierarchy
for numeric datatypes (int2->int4->int8->numeric->float4->float8).
Change make_const to initially label numeric literals as int4, int8, or
numeric (never float8 anymore).
Explicitly mark Func and RelabelType nodes to indicate whether they came
from a function call, explicit cast, or implicit cast; use this to do
reverse-listing more accurately and without so many heuristics.
Explicit casts to char, varchar, bit, varbit will truncate or pad without
raising an error (the pre-7.2 behavior), while assigning to a column without
any explicit cast will still raise an error for wrong-length data like 7.3.
This more nearly follows the SQL spec than 7.2 behavior (we should be
reporting a 'completion condition' in the explicit-cast cases, but we have
no mechanism for that, so just do silent truncation).
Fix some problems with enforcement of typmod for array elements;
it didn't work at all in 'UPDATE ... SET array[n] = foo', for example.
Provide a generalized array_length_coerce() function to replace the
specialized per-array-type functions that used to be needed (and were
missing for NUMERIC as well as all the datetime types).
Add missing conversions int8<->float4, text<->numeric, oid<->int8.
initdb forced.
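To illustrate the explicit-cast versus assignment behavior (table name
hypothetical):
    SELECT 'abcdef'::varchar(3);     -- explicit cast: silently truncates to 'abc'
    CREATE TABLE t (v varchar(3));
    INSERT INTO t VALUES ('abcdef'); -- assignment without a cast: error, as in 7.3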
(overlaying low byte of page size) and add HEAP_HASOID bit to t_infomask,
per earlier discussion. Simplify scheme for overlaying fields in tuple
header (no need for cmax to live in more than one place). Don't try to
clear infomask status bits in tqual.c --- not safe to do it there. Don't
try to force output table of a SELECT INTO to have OIDs, either. Get rid
of unnecessarily complex three-state scheme for TupleDesc.tdhasoids, which
has already caused one recent failure. Improve documentation.
type for runtime constraint checks, instead of misusing the parse-time
Constraint node for the purpose. Fix some damage introduced into type
coercion logic; in particular ensure that a coerced expression tree will
read out the correct result type when inspected (patch had broken some
RelabelType cases). Enforce domain NOT NULL constraints against columns
that are omitted from an INSERT.
to the table function, thus preventing memory leakage from accumulating across
calls. This means that SRFs need to be careful to distinguish permanent
and local storage; adjust code and documentation accordingly. Patch by
Joe Conway, very minor tweaks by Tom Lane.
should be pretty safe in practice, but it's probably better to be safe
than sorry.
I was actually looking for cases where NAMEDATALEN is assumed to be
32, but only found one. That's fixed too, as well as a few bits of
code cleanup.
Neil Conway
array header, and to compute sizing and alignment of array elements
the same way normal tuple access operations do --- viz, using the
tupmacs.h macros att_addlength and att_align. This makes the world
safe for arrays of cstrings or intervals, and should make it much
easier to write array-type-polymorphic functions; as examples see
the cleanups of array_out and contrib/array_iterator. By Joe Conway
and Tom Lane.
value '-2' is used to indicate a variable-width type whose width is
computed as strlen(datum)+1. Everything that looks at typlen is updated
except for array support, which Joe Conway is working on; at the moment
it wouldn't work to try to create an array of cstring.
> There's no longer a separate call to heap_storage_create in that routine
> --- the right place to make the test is now in the storage_create
> boolean parameter being passed to heap_create. A simple change, but
> it passeth patch's understanding ...
Thanks.
Attached is a patch against cvs tip as of 8:30 PM PST or so. Turned out
that even after fixing the failed hunks, there was a new spot in
bufmgr.c which needed to be fixed (related to temp relations;
RelationUpdateNumberOfBlocks). But thankfully the regression test code
caught it :-)
Joe Conway
The local buffer manager is no longer used for newly-created relations
(unless they are TEMP); a new non-TEMP relation goes through the shared
bufmgr and thus will participate normally in checkpoints. But TEMP relations
use the local buffer manager throughout their lifespan. Also, operations
in TEMP relations are not logged in WAL, thus improving performance.
Since it's no longer necessary to fsync relations as they move out of the
local buffers into shared buffers, quite a lot of smgr.c/md.c/fd.c code
is no longer needed and has been removed: there's no concept of a dirty
relation anymore in md.c/fd.c, and we never fsync anything but WAL.
Still TODO: improve local buffer management algorithms so that it would
be reasonable to increase NLocBuffer.
of functions returning domain types, update documentation for typtype,
move get_typtype to lsyscache.c (actually, resurrect the old version),
add defense against creating pseudo-typed table columns, fix some
bogus list-parsing in grammar. Issues remain with respect to alias
handling and type checking; Joe is on those.
types for Table Functions, as previously proposed on HACKERS. Here is a
brief explanation:
1. Creates a new pg_type typtype: 'p' for pseudo type (currently either
'b' for base or 'c' for catalog, i.e. a class).
2. Creates new builtin type of typtype='p' named RECORD. This is the
first of potentially several pseudo types.
3. Modify FROM clause grammar to accept:
SELECT * FROM my_func() AS m(colname1 type1, colname2 type2, ...)
where m is the table alias, colname1, etc are the column names, and
type1, etc are the column types.
4. When typtype == 'p' and the function return type is RECORD, a list
of column defs is required, and when typtype != 'p', it is
disallowed.
5. A check was added to ensure that the tupdesc provided via the parser
and the actual return tupdesc match in number and type of
attributes.
When creating a function you can do:
CREATE FUNCTION foo(text) RETURNS setof RECORD ...
When using it you can do:
SELECT * from foo(sqlstmt) AS (f1 int, f2 text, f3 timestamp)
or
SELECT * from foo(sqlstmt) AS f(f1 int, f2 text, f3 timestamp)
or
SELECT * from foo(sqlstmt) f(f1 int, f2 text, f3 timestamp)
Included in the patches are adjustments to the regression test sql and
expected files, and documentation.
p.s.
This potentially solves (or at least improves) the issue of builtin
Table Functions. They can be bootstrapped as returning RECORD, and
we can wrap system views around them with properly specified column
defs. For example:
CREATE VIEW pg_settings AS
SELECT s.name, s.setting
FROM show_all_settings() AS s(name text, setting text);
Then we can also add the UPDATE RULE that I previously posted to
pg_settings, and have pg_settings act like a virtual table, allowing
settings to be queried and set.
Joe Conway
ERROR: ExecInsert: rejected due to CHECK constraint insert_con
To be like this:
ERROR: ExecInsert: rejected due to CHECK constraint "insert_con" on
"insert_tbl"
Updated regression tests to match.
I got sick of seeing 'rejected due to CHECK constraint "$1"' in my log
and not being able to find the bug in our website code...
Christopher Kings-Lynne
> submitted on July 9:
>
> http://archives.postgresql.org/pgsql-patches/2002-07/msg00056.php
>
> Please disregard that one *if* this one is applied. If this one is
> rejected please go ahead with the July 9th patch.
The July 9th Table Function API patch mentioned above is now in CVS, so
here is an updated version of the guc patch which should apply cleanly
against CVS tip.
Joe Conway
bitmap, if present).
Per Tom Lane's suggestion the information whether a tuple has an oid
or not is carried in the tuple descriptor. For debugging reasons
tdhasoid is of type char, not bool. There are predefined values for
WITHOID, WITHOUTOID and UNDEFOID.
This patch has been generated against a cvs snapshot from last week
and I don't expect it to apply cleanly to current sources. While I
post it here for public review, I'm working on a new version against a
current snapshot. (There's been heavy activity recently; hope to
catch up some day ...)
This is a long patch; if it is too hard to swallow, I can provide it
in smaller pieces:
Part 1: Accessor macros
Part 2: tdhasoid in TupDesc
Part 3: Regression test
Part 4: Parameter withoid to heap_addheader
Part 5: Eliminate t_oid from HeapTupleHeader
Part 2 is the most hairy part because of changes in the executor and
even in the parser; the other parts are straightforward.
Up to part 4 the patched postmaster stays binary compatible with
databases created with an unpatched version. Part 5 is small (100
lines) and finally breaks compatibility.
Manfred Koizar
Implements between (symmetric / asymmetric) as a node.
Executes the left or right expression once, makes a Const out of the
resulting Datum, and executes the >=/<= comparisons against those Consts.
Of course, the parser does a fair amount of preparatory work for this to
happen.
Rod Taylor
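Sketch of the two flavors:
    SELECT 2 BETWEEN 1 AND 3;            -- asymmetric (default): bounds as given
    SELECT 2 BETWEEN SYMMETRIC 3 AND 1;  -- symmetric: true, bounds may be swapped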
Conway (BuildTupleFromCStrings sets NULL for pass-by-value types when
intended value is 0). It also implements some other improvements
suggested by Neil.
Joe Conway
are managed as per request.
Moved from merging with table attributes to being applied during
coerce_type() and coerce_type_typmod().
Regression tests altered to test the cast() scenarios.
Rod Taylor
Reused the Expr node to hold DISTINCT, which strongly resembles
the existing OP info. Define DISTINCT_EXPR, an opType much like
the existing OPER_EXPR but with the handling for NULLs required
by SQL99.
We have explicit support for single-element DISTINCT comparisons
all the way through to the executor. But, multi-element DISTINCTs
are handled by expanding into a comparison tree in gram.y as is done for
other row comparisons. Per discussions, it might be desirable to move
this into one or more purpose-built nodes to be handled in the backend.
Define the optional ROW keyword and token per SQL99.
This allows single-element row constructs, which were formerly disallowed
due to shift/reduce conflicts with parenthesized a_expr clauses.
Define the SQL99 TREAT() function. Currently it is simply a synonym for CAST().
comments on one of the optimizer functions a lot more
clear, adds a summary of the recent KSQO discussion to the
comments in the code, adds regression tests for the bug with
sequence state Tom fixed recently and another reg. test, and
removes some PostQuel legacy stuff: ExecAppend -> ExecInsert,
ExecRetrieve -> ExecSelect, etc. This was changed because the
elog() messages from this routine are user-visible, so we
should be using the SQL terms.
Neil Conway
For example, if I run a query that uses an index scan, and call
MemoryContextStats(CurrentMemoryContext) before ExecutorStart() and
after ExecutorEnd() in ProcessQuery(), I am consistently seeing that
the 'after' call shows 256 bytes more used than 'before'...
The problem seems to be in ExecEndIndexScan - it does not release
scanstate, indexstate, indexstate->iss_RelationDescs and
indexstate->iss_ScanDescs...
Dmitry Tkach
two small changes to the API since last patch, which hopefully completes
the decoupling of composite function support from SRF specific support.
Joe Conway
transaction, so as to avoid returning them out of the index AM. Saves
repeated heap_fetch operations on frequently-updated rows. Also detect
queries on unique keys (equality to all columns of a unique index), and
don't bother continuing scan once we have found first match.
Killing is implemented in the btree and hash AMs, but not yet in rtree
or gist, because there isn't an equally convenient place to do it in
those AMs (the outer amgetnext routine can't do it without re-pinning
the index page).
Did some small cleanup on APIs of HeapTupleSatisfies, heap_fetch, and
index_insert to make this a little easier.
in snapshots, per my proposal of a few days ago. Also, tweak heapam.c
routines (heap_insert, heap_update, heap_delete, heap_mark4update) to
be passed the command ID to use, instead of doing GetCurrentCommandID.
For catalog updates they'll still get passed current command ID, but
for updates generated from the main executor they'll get passed the
command ID saved in the snapshot the query is using. This should fix
some corner cases associated with functions and triggers that advance
current command ID while an outer query is still in progress.
yesterday's proposal to pghackers. Also remove unnecessary parameters
to heap_beginscan, heap_rescan. I modified pg_proc.h to reflect the
new numbers of parameters for the AM interface routines, but did not
force an initdb because nothing actually looks at those fields.
returns-set boolean field in Func and Oper nodes. This allows cleaner,
more reliable tests for expressions returning sets in the planner and
parser. For example, a WHERE clause returning a set is now detected
and complained of in the parser, not only at runtime.
some kibitzing from Tom Lane. Not everything works yet, and there's
no documentation or regression test, but let's commit this so Joe
doesn't need to cope with tracking changes in so many files ...
pg_database, pg_shadow, pg_group, all of which now have potentially-long
fields. Along the way, get rid of SharedSystemRelationNames list: shared
rels are now identified in their include/pg_catalog/*.h files by a
BKI_SHARED_RELATION macro, while indexes and toast rels inherit sharedness
automatically from their parent table. Fix some bugs with failure to detoast
pg_group.grolist during ALTER GROUP.
messages more uniform and internationalizable: the global array
aclcheck_error_strings[] is gone in favor of a subroutine
aclcheck_error(). Partial implementation of namespace-related
permission checks --- not all done yet.
qualified operator names directly, for example CREATE OPERATOR myschema.+
( ... ). To qualify an operator name in an expression you need to write
OPERATOR(myschema.+) (thanks to Peter for suggesting an escape hatch).
I also took advantage of having to reformat pg_operator to fix something
that'd been bugging me for a while: mergejoinable operators should have
explicit links to the associated cross-data-type comparison operators,
rather than hardwiring an assumption that they are named < and >.
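For example (the schema name is hypothetical; int4pl is the built-in int4
addition function):
    CREATE OPERATOR myschema.+ (
        leftarg = int4, rightarg = int4, procedure = int4pl
    );
    SELECT 2 OPERATOR(myschema.+) 3;   -- qualified use needs the escape hatch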
have been divided according to the type of object manipulated - so ALTER
TABLE code is in tablecmds.c, aggregate commands in aggregatecmds.c and
so on.
A few common support routines remain in define.c (prototypes in
src/include/commands/defrem.h).
No code has been changed except for includes to reflect the new files.
The prototypes for aggregatecmds.c, functioncmds.c, operatorcmds.c,
and typecmds.c remain in src/include/commands/defrem.h.
From John Gray <jgray@azuli.co.uk>
entries, per pghackers discussion. This fixes aggregates to live in
namespaces, and also simplifies/speeds up lookup in parse_func.c.
Also, add a 'proimplicit' flag to pg_proc that controls whether a type
coercion function may be invoked implicitly, or only explicitly. The
current settings of these flags are more permissive than I would like,
but we will need to debate and refine the behavior; for now, I avoided
breaking regression tests as much as I could.
in schemas other than the system namespace; however, there's no search
path yet, and not all operations work yet on tables outside the system
namespace.
objects to be privilege-checked. Some change in their APIs would be
necessary no matter what in the schema environment, and simply getting
rid of the name-based interface entirely seems like the best way.
the parsetree representation. As yet we don't *do* anything with schema
names, just drop 'em on the floor; but you can enter schema-compatible
command syntax, and there's even a primitive CREATE SCHEMA command.
No doc updates yet, except to note that you can now extract a field
from a function-returning-row's result with (foo(...)).fieldname.
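For instance (the function name is hypothetical):
    CREATE SCHEMA myschema;
    CREATE TABLE myschema.mytab (f1 int);
    SELECT (getfoo(42)).fieldname;   -- field extraction from a function
                                     -- returning a row type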
PostgreSQL. This hash function replaces the one used by hash indexes and
the catalog cache. Hash joins use a different, relatively poor-quality
hash function, but I'll fix that later.
As suggested by Tom Lane, this patch also changes the size of the fixed
hash table used by the catalog cache to be a power-of-2 (instead of a
prime: I chose 256 instead of 257). This allows the catcache to look up
hash buckets using a simple bitmask. This should improve the performance
of the catalog cache slightly, since the previous method (modulo a
prime) was slow.
In my tests, this improves the performance of hash indexes by between 4%
and 8%; the performance when using btree indexes or seqscans is
basically unchanged.
Neil Conway <neilconway@rogers.com>
o Change all current CVS messages of NOTICE to WARNING. We were going
to do this just before 7.3 beta but it has to be done now, as you will
see below.
o Change current INFO messages that should be controlled by
client_min_messages to NOTICE.
o Force remaining INFO messages, like from EXPLAIN, VACUUM VERBOSE, etc.
to always go to the client.
o Remove INFO from the client_min_messages options and add NOTICE.
Seems we do need three non-ERROR elog levels to handle the various
behaviors we need for these messages.
Regression passed.
now just below FATAL in server_min_messages. Added more text to
highlight ordering difference between it and client_min_messages.
---------------------------------------------------------------------------
REALLYFATAL => PANIC
STOP => PANIC
New INFO level that prints to the client by default
New LOG level that prints to the server log by default
Cause VACUUM information to print only to the client
NOTICE => INFO where purely informational messages are sent
DEBUG => LOG for purely server status messages
DEBUG removed, kept for backward compatibility
DEBUG5, DEBUG4, DEBUG3, DEBUG2, DEBUG1 added
DebugLvl removed in favor of new DEBUG[1-5] symbols
New server_min_messages GUC parameter with values:
DEBUG[5-1], INFO, NOTICE, ERROR, LOG, FATAL, PANIC
New client_min_messages GUC parameter with values:
DEBUG[5-1], LOG, INFO, NOTICE, ERROR, FATAL, PANIC
Server startup now logged with LOG instead of DEBUG
Remove debug_level GUC parameter
elog() numbers now start at 10
Add test to print error message if older elog() values are passed to elog()
Bootstrap mode now has a -d that requires an argument, like postmaster
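The new parameters are set like any other GUC variable, e.g.:
    SET client_min_messages = 'DEBUG1';  -- see debug output in this session
    SET server_min_messages = 'LOG';     -- control what reaches the server log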
both input streams to the end. If one variable's range is much less
than the other, an indexscan-based merge can win by not scanning all
of the other table. Per example from Reinhard Max.
are now both invoked once per received SQL command (raw parsetree) from
pg_exec_query_string. BeginCommand is actually just an empty routine
at the moment --- all its former operations have been pushed into tuple
receiver setup routines in printtup.c. This makes for a clean distinction
between BeginCommand/EndCommand (once per command) and the tuple receiver
setup/teardown routines (once per ExecutorRun call), whereas the old code
was quite ad hoc. Along the way, clean up the calling conventions for
ExecutorRun a little bit.
report for each received SQL command, regardless of rewriting activity.
Also ensure that this report comes from the 'original' command, not the
last command generated by rewrite; this fixes 7.2 breakage for INSERT
commands that have actions added by rules. Fernando Nasser and Tom Lane.
Improve 'pg_internal.init' relcache entry preload mechanism so that it is
safe to use for all system catalogs, and arrange to preload a realistic
set of system-catalog entries instead of only the three nailed-in-cache
indexes that were formerly loaded this way. Fix mechanism for deleting
out-of-date pg_internal.init files: this must be synchronized with transaction
commit, not just done at random times within transactions. Drive it off
relcache invalidation mechanism so that no special-case tests are needed.
Cache additional information in relcache entries for indexes (their pg_index
tuples and index-operator OIDs) to eliminate repeated lookups. Also cache
index opclass info at the per-opclass level to avoid repeated lookups during
relcache load.
Generalize 'systable scan' utilities originally developed by Hiroshi,
move them into genam.c, use in a number of places where there was formerly
ugly code for choosing either heap or index scan. In particular this allows
simplification of the logic that prevents infinite recursion between syscache
and relcache during startup: we can easily switch to heapscans in relcache.c
when and where needed to avoid recursion, so IndexScanOK becomes simpler and
does not need any expensive initialization.
Eliminate useless opening of a heapscan data structure while doing an indexscan
(this saves an mdnblocks call and thus at least one kernel call).
originally created with, so that the set of visible tuples does not
change as a result of other activity. This essentially makes PG cursors
INSENSITIVE per the SQL92 definition. See bug report of 13-Feb-02.
inner indexscan (ie, one with runtime keys). ExecIndexReScan must
compute or recompute runtime keys even if we are rescanning in the
EPQ case. TidScan seems to have comparable problems. Per bug
noted by Barry Lind 11-Feb-02.
readability. Bizarre '(long *) TRUE' return convention is gone,
in favor of just raising an error internally in dynahash.c when
we detect hashtable corruption. HashTableWalk is gone, in favor
of using hash_seq_search directly, since it had no hope of working
with non-LONGALIGNable datatypes. Simplify some other code that was
made undesirably grotty by proximity to HashTableWalk.
Frank Miles 7-Sep-01. This is really just sticking a finger in the dike.
Frank's case works now, but we still couldn't support a recursive function
returning a set. Really need to restructure querytrees and execution
state so that the querytree is *read only*. We've run into this over and
over and over again ... it has to happen sometime soon.
default, but OIDS are removed from many system catalogs that don't need them.
Some interesting side effects: TOAST pointers are 20 bytes not 32 now;
pg_description has a three-column key instead of one.
Bugs fixed in passing: BINARY cursors work again; pg_class.relhaspkey
has some usefulness; pg_dump dumps comments on indexes, rules, and
triggers in a valid order.
initdb forced.
Note: I didn't force an initdb, figuring that one today was enough.
However, there is a new function in pg_proc.h, and pg_dump won't be
able to dump partial indexes until you add that function.
per previous discussion on pghackers. Most of the duplicate code in
different AMs' ambuild routines has been moved out to a common routine
in index.c; this means that all index types now do the right things about
inserting recently-dead tuples, etc. (I also removed support for EXTEND
INDEX in the ambuild routines, since that's about to go away anyway, and
it cluttered the code a lot.) The retail indextuple deletion routines have
been replaced by a "bulk delete" routine in which the indexscan is inside
the access method. I haven't pushed this change as far as it should go yet,
but it should allow considerable simplification of the internal bookkeeping
for deletions. Also, add flag columns to pg_am to eliminate various
hardcoded tests on AM OIDs, and remove unused pg_am columns.
Fix rtree and gist index types to not attempt to store NULLs; before this,
gist usually crashed, while rtree managed not to crash but computed wacko
bounding boxes for NULL entries (which might have had something to do with
the performance problems we've heard about occasionally).
Add AtEOXact routines to hash, rtree, and gist, all of which have static
state that needs to be reset after an error. We discovered this need long
ago for btree, but missed the other guys.
Oh, one more thing: concurrent VACUUM is now the default.
tests to return the correct results per SQL9x when given NULL inputs.
Reimplement these tests as well as IS [NOT] NULL to have their own
expression node types, instead of depending on special functions.
From Joe Conway, with a little help from Tom Lane.
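The NULL-input results now follow SQL, e.g.:
    SELECT NULL IS TRUE;        -- false
    SELECT NULL IS NOT FALSE;   -- true
    SELECT NULL IS UNKNOWN;     -- true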
report on old-style functions invoked by RI triggers. We had a number of
other places that were being sloppy about which memory context FmgrInfo
subsidiary data will be allocated in. Turns out none of them actually
cause a problem in 7.1, but this is for arcane reasons such as the fact
that old-style triggers aren't supported anyway. To avoid getting burnt
later, I've restructured the trigger support so that we don't keep trigger
FmgrInfo structs in relcache memory. Some other related cleanups too:
it's not really necessary to call fmgr_info at all while setting up
the index support info in relcache entries, because those ScanKeyEntry
structs are never used to invoke the functions. This should speed up
relcache initialization a tiny bit.
the same tuple slot that the raw tuple came from, because that slot has
the wrong tuple descriptor. Store it into its own slot with the correct
descriptor, instead. This repairs problems with SPI functions seeing
inappropriate tuple descriptors --- for example, plpgsql code failing to
cope with SELECT FOR UPDATE.
- New functions to create a portal using a prepared/saved
SPI plan or lookup an existing portal by name.
- Functions to fetch/move from/in portals. Results are placed
in the usual SPI_processed and SPI_tuptable, so the entire
set of utility functions can be used to gain attribute access.
- Prepared/saved SPI plans now use their own memory context
and SPI_freeplan(plan) can remove them.
- Tuple result sets (SPI_tuptable) now use their own memory
context and can be freed by SPI_freetuptable(tuptab).
Enhancement of PL/pgSQL
- Uses generic named portals internally in FOR ... SELECT
loops to avoid running out of memory on huge result sets.
- Support for CURSOR and REFCURSOR syntax using the new SPI
functionality. Cursors used only internally need no explicit
transaction block. Refcursor variables can be used inside an
explicit transaction block to pass cursors between the main
application and functions.
Jan
trees (mostly my fault). Repair. Also fix long-standing bug in ExecReplace:
after recomputing a concurrently updated tuple, we must recheck constraints.
Make EvalPlanQual leak memory with somewhat less enthusiasm than before,
although plugging leaks fully will require more changes than I care to risk
in a dot-release.
to their children, leading to misbehavior if they had any children that paid
attention to chgParam (most plan node types don't). Append's bug has been
there a long time, but nobody had noticed because it used to be difficult
to create a query where an Append would be used below the top level of a
plan; so there were never any parameters getting passed down. SubqueryScan
is new in 7.1 ... and I'd modeled its behavior on Append :-(
a separate statement (though it can still be invoked as part of VACUUM, too).
pg_statistic redesigned to be more flexible about what statistics are
stored. ANALYZE now collects a list of several of the most common values,
not just one, plus a histogram (not just the min and max values). Random
sampling is used to make the process reasonably fast even on very large
tables. The number of values and histogram bins collected is now
user-settable via an ALTER TABLE command.
There is more still to do; the new stats are not being used everywhere
they could be in the planner. But the remaining changes for this project
should be localized, and the behavior is already better than before.
A not-very-related change is that sorting now makes use of btree comparison
routines if it can find one, rather than invoking '<' twice.
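Usage sketch (table and column names hypothetical):
    ANALYZE mytab;   -- standalone now; no VACUUM required
    ALTER TABLE mytab ALTER COLUMN f1 SET STATISTICS 100;
    -- collect more common values and histogram bins for f1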
function arguments in join queries: copy the tuples into
TransactionCommandContext so they don't get recycled too soon. This is
horrid, but not any worse than 7.0 or before, which also leaked such
tuples until end of query. A proper fix will require allowing tuple
datums to be physically stored inside larger tuple datums, which opens
up a bunch of issues that can't realistically be solved for 7.1.1.
1. If there is exactly one pg_operator entry of the right name and oprkind,
oper() and related routines would return that entry whether its input type
had anything to do with the request or not. This is just premature
optimization: we shouldn't return the single candidate until after we verify
that it really is a valid candidate, ie, is at least coercion-compatible
with the given types.
2. oper() and related routines only promise a coercion-compatible result.
Unfortunately, there were quite a few callers that assumed the returned
operator is binary-compatible with the given datatype; they would proceed
to call it without making any datatype coercions. These callers include
sorting, grouping, aggregation, and VACUUM ANALYZE. In general I think
it is appropriate for these callers to require an exact or binary-compatible
match, so I've added a new routine compatible_oper() that only succeeds if
it can find an operator that doesn't require any run-time conversions.
Callers now call oper() or compatible_oper() depending on whether they are
prepared to deal with type conversion or not.
The upshot of these bugs is revealed by the following silliness in PL/Tcl's
selftest: it creates an operator @< on int4, and then tries to use it to
sort a char(N) column. The system would let it do that :-( (and evidently
has done so since 6.3 :-( :-(). The result in this case was just a silly
sort order, but the reverse combination would've provoked coredump from
trying to dereference integers. With this fix you get more reasonable
behavior:
pltcl_test=# select * from T_pkey1 order by key1, key2 using @<;
ERROR: Unable to identify an operator '@<' for types 'bpchar' and 'bpchar'
You will have to retype this query using an explicit cast
allocated by plan nodes are not leaked at end of query. This doesn't
really matter for normal queries, but it sure does for queries invoked
repetitively inside SQL functions. Clean up some other grotty code
associated with tupdescs, and fix a few other memory leaks exposed by
tests with simple SQL functions.
are treated more like 'cancel' interrupts: the signal handler sets a
flag that is examined at well-defined spots, rather than trying to cope
with an interrupt that might happen anywhere. See pghackers discussion
of 1/12/01.
1. Distinguish cases where a Datum representing a tuple datatype is an OID
from cases where it is a pointer to TupleTableSlot, and make sure we use
the right typlen in each case.
2. Make fetchatt() and related code support 8-byte by-value datatypes on
machines where Datum is 8 bytes. Centralize knowledge of the available
by-value datatype sizes in two macros in tupmacs.h, so that this will be
easier if we ever have to do it again.
an error as we used to. In an OUTER JOIN scenario, retrieving a null
CTID from one of the input relations is entirely expected. We still
want to lock the input rows from the other relations, so just ignore
the null and keep going.
maintained for each cache entry. A cache entry will not be freed until
the matching ReleaseSysCache call has been executed. This eliminates
worries about cache entries getting dropped while still in use. See
my posting to pg-hackers of even date for more info.
joins, and clean things up a good deal at the same time. Append plan node
no longer hacks on rangetable at runtime --- instead, all child tables are
given their own RT entries during planning. Concept of multiple target
tables pushed up into execMain, replacing bug-prone implementation within
nodeAppend. Planner now supports generating Append plans for inheritance
sets either at the top of the plan (the old way) or at the bottom. Expanding
at the bottom is appropriate for tables used as sources, since they may
appear inside an outer join; but we must still expand at the top when the
target of an UPDATE or DELETE is an inheritance set, because we actually need
a different targetlist and junkfilter for each target table in that case.
Fortunately a target table can't be inside an outer join... Bizarre mutual
recursion between union_planner and prepunion.c is gone --- in fact,
union_planner doesn't really have much to do with union queries anymore,
so I renamed it grouping_planner.
(WAL logging for this is not done yet, however.) Clean up a number of really
crufty things that are no longer needed now that DROP behaves nicely. Make
temp table mapper do the right things when drop or rename affecting a temp
table is rolled back. Also, remove "relation modified while in use" error
check, in favor of locking tables at first reference and holding that lock
throughout the statement.
ExecutorRun. This allows LIMIT to work in a view. Also, LIMIT in a
cursor declaration will behave in a reasonable fashion, whereas before
it was overridden by the FETCH count.
SQL92 semantics, including support for ALL option. All three can be used
in subqueries and views. DISTINCT and ORDER BY work now in views, too.
This rewrite fixes many problems with cross-datatype UNIONs and INSERT/SELECT
where the SELECT yields different datatypes than the INSERT needs. I did
that by making UNION subqueries and SELECT in INSERT be treated like
subselects-in-FROM, thereby allowing an extra level of targetlist where the
datatype conversions can be inserted safely.
INITDB NEEDED!
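A couple of constructs that now behave properly (names hypothetical):
    SELECT 1 UNION SELECT 2.5;   -- cross-datatype UNION resolves correctly
    CREATE VIEW v AS
        SELECT DISTINCT f1 FROM t1 ORDER BY f1;  -- DISTINCT/ORDER BY in a view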
(Don't forget that an alias is required.) Views reimplemented as expanding
to subselect-in-FROM. Grouping, aggregates, DISTINCT in views actually
work now (he says optimistically). No UNION support in subselects/views
yet, but I have some ideas about that. Rule-related permissions checking
moved out of rewriter and into executor.
INITDB REQUIRED!
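Example of the new FROM capability (names hypothetical):
    SELECT ss.a FROM (SELECT f1 AS a FROM t1) AS ss WHERE ss.a > 0;
    -- note the alias "ss" is required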