Commit Graph

528 Commits

Author SHA1 Message Date
Tom Lane 48239e156f Avoid useless work during set_plain_rel_pathlist() when the relation
will be excluded by constraint exclusion anyway.  Greg Stark
2007-04-21 06:18:52 +00:00
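A minimal sketch of the situation this targets, using a hypothetical inheritance child whose CHECK constraint contradicts the query's WHERE clause (table and column names are illustrative only):

    -- With constraint exclusion enabled, the planner can prove the child
    -- irrelevant and skip building paths for it.
    CREATE TABLE measurements (id int, yr int);
    CREATE TABLE measurements_2007 (CHECK (yr = 2007)) INHERITS (measurements);
    SET constraint_exclusion = on;
    EXPLAIN SELECT * FROM measurements WHERE yr = 1999;
    -- measurements_2007 should not appear in the resulting plan.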
Tom Lane ca3d14f2a9 Tweak set_rel_width() to avoid redundant executions of getrelid().
In very large queries this accounts for a noticeable fraction of
planning time.  Per an example from Greg Stark.
2007-04-21 02:41:13 +00:00
Tom Lane 2e824a8ea9 Rewrite choose_bitmap_and() to make it more robust in the presence of
competing alternatives for indexes to use in a bitmap scan.  The former
coding took estimated selectivity as an overriding factor, causing it to
sometimes choose indexes that were much slower to scan than ones with a
slightly worse selectivity.  It was also too narrow-minded about which
combinations of indexes to consider ANDing.  The rewrite makes it pay more
attention to index scan cost than selectivity; this seems sane since it's
impossible to have very bad selectivity with low cost, whereas the reverse
isn't true.  Also, we now consider each index alone, as well as adding
each index to an AND-group led by each prior index, for a total of about
O(N^2) rather than O(N) combinations considered.  This makes the results
much less dependent on the exact order in which the indexes are
considered.  It's still a lot cheaper than an O(2^N) exhaustive search.
A prefilter step eliminates all but the cheapest of those indexes using
the same set of WHERE conditions, to keep the effective value of N down in
scenarios where the DBA has created lots of partially-redundant indexes.
2007-04-17 20:03:03 +00:00
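For context, a hypothetical example of the kind of query where choose_bitmap_and() must pick among competing indexes to AND in a bitmap scan (schema and data are assumed, not taken from the commit):

    CREATE TABLE orders (customer_id int, status text, created date);
    CREATE INDEX orders_cust_idx   ON orders (customer_id);
    CREATE INDEX orders_status_idx ON orders (status);
    -- Given suitable data, EXPLAIN may show a BitmapAnd combining both
    -- indexes; the rewritten choose_bitmap_and() decides which of the
    -- candidate combinations is actually worth the extra index scans.
    EXPLAIN SELECT * FROM orders
        WHERE customer_id = 42 AND status = 'shipped';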
Tom Lane fa92d21a48 Avoid running build_index_pathkeys() in situations where there cannot
possibly be any useful pathkeys --- to wit, queries with neither any
join clauses nor any ORDER BY request.  It's nearly free to check for
this case and it saves a useful fraction of the planning time for simple
queries.
2007-04-15 20:09:28 +00:00
Tom Lane f02a82b6ad Make 'col IS NULL' clauses be indexable conditions.
Teodor Sigaev, with some kibitzing from Tom Lane.
2007-04-06 22:33:43 +00:00
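A small sketch of what this makes possible (hypothetical table and index):

    CREATE TABLE people (id int, middle_name text);
    CREATE INDEX people_middle_idx ON people (middle_name);
    -- The IS NULL condition can now be used as an index qual rather than
    -- only as a filter applied after fetching rows.
    EXPLAIN SELECT * FROM people WHERE middle_name IS NULL;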
Tom Lane bf94076348 Fix array coercion expressions to ensure that the correct volatility is
seen by code inspecting the expression.  The best way to do this seems
to be to drop the original representation as a function invocation, and
instead make a special expression node type that represents applying
the element-type coercion function to each array element.  In this way
the element function is exposed and will be checked for volatility.
Per report from Guillaume Smet.
2007-03-27 23:21:12 +00:00
Tom Lane 54d20024c1 Fix some problems with selectivity estimation for partial indexes.
First, genericcostestimate() was being way too liberal about including
partial-index conditions in its selectivity estimate, resulting in
substantial underestimates for situations such as an indexqual "x = 42"
used with an index on x "WHERE x >= 40 AND x < 50".  While the code is
intentionally set up to favor selecting partial indexes when available,
this was too much...

Second, choose_bitmap_and() was likewise easily fooled by cases of this
type, since it would similarly think that the partial index had selectivity
independent of the indexqual.

Fixed by using predicate_implied_by() rather than simple equality checks
to determine redundancy.  This is a good deal more expensive but I don't
see much alternative.  At least the extra cost is only paid when there's
actually a partial index under consideration.

Per report from Jeff Davis.  I'm not going to risk back-patching this,
though.
2007-03-21 22:18:12 +00:00
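The problem scenario from the message, reconstructed against a hypothetical schema:

    CREATE TABLE t (x int, payload text);
    CREATE INDEX t_x_partial ON t (x) WHERE x >= 40 AND x < 50;
    -- The indexqual implies the index predicate; counting the predicate's
    -- selectivity on top of the qual's would underestimate the rows, which
    -- is what the predicate_implied_by() check now prevents.
    EXPLAIN SELECT * FROM t WHERE x = 42;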
Tom Lane 0f4ff460c4 Fix up the remaining places where the expression node structure would lose
available information about the typmod of an expression; namely, Const,
ArrayRef, ArrayExpr, and EXPR and ARRAY SubLinks.  In the ArrayExpr and
SubLink cases it wasn't really the data structure's fault, but exprTypmod()
being lazy.  This seems like a good idea in view of the expected increase in
typmod usage from Teodor's work to allow user-defined types to have typmods.
In particular this responds to the concerns we had about eliminating the
special-purpose hack that exprTypmod() used to have for BPCHAR Consts.
We can now tell whether or not such a Const has been cast to a specific
length, and report or display properly if so.

initdb forced due to changes in stored rules.
2007-03-17 00:11:05 +00:00
Tom Lane eab6b8b27e Turn the rangetable used by the executor into a flat list, and avoid storing
useless substructure for its RangeTblEntry nodes.  (I chose to keep using the
same struct node type and just zero out the link fields for unneeded info,
rather than making a separate ExecRangeTblEntry type --- it seemed too
fragile to have two different rangetable representations.)

Along the way, put subplans into a list in the toplevel PlannedStmt node,
and have SubPlan nodes refer to them by list index instead of direct pointers.
Vadim wanted to do that years ago, but I never understood what he was on about
until now.  It makes things a *whole* lot more robust, because we can stop
worrying about duplicate processing of subplans during expression tree
traversals.  That's been a constant source of bugs, and it's finally gone.

There are some consequent simplifications yet to be made, like not using
a separate EState for subplans in the executor, but I'll tackle that later.
2007-02-22 22:00:26 +00:00
Tom Lane 9cbd0c155d Remove the Query structure from the executor's API. This allows us to stop
storing mostly-redundant Query trees in prepared statements, portals, etc.
To replace Query, a new node type called PlannedStmt is inserted by the
planner at the top of a completed plan tree; this carries just the fields of
Query that are still needed at runtime.  The statement lists kept in portals
etc. now consist of intermixed PlannedStmt and bare utility-statement nodes
--- no Query.  This incidentally allows us to remove some fields from Query
and Plan nodes that shouldn't have been there in the first place.

Still to do: simplify the execution-time range table; at the moment the
range table passed to the executor still contains Query trees for subqueries.

initdb forced due to change of stored rules.
2007-02-20 17:32:18 +00:00
Tom Lane 7c5e5439d2 Get rid of some old and crufty global variables in the planner. When
this code was last gone over, there wasn't really any alternative to
globals because we didn't have the PlannerInfo struct being passed all
through the planner code.  Now that we do, we can restructure things
to avoid non-reentrancy.  I'm fooling with this because otherwise I'd
have had to add another global variable for the planned compact
range table list.
2007-02-19 07:03:34 +00:00
Tom Lane 6bef118b01 Restructure code that is responsible for ensuring that clauseless joins are
considered when it is necessary to do so because of a join-order restriction
(that is, an outer-join or IN-subselect construct).  The former coding was a
bit ad-hoc and inconsistent, and it missed some cases, as exposed by Mario
Weilguni's recent bug report.  His specific problem was that an IN could be
turned into a "clauseless" join due to constant-propagation removing the IN's
joinclause, and if the IN's subselect involved more than one relation and
there was more than one such IN linking to the same upper relation, then the
only valid join orders involve "bushy" plans but we would fail to consider the
specific paths needed to get there.  (See the example case added to the join
regression test.)  On examining the code I wonder if there weren't some other
problem cases too; in particular it seems that GEQO was defending against a
different set of corner cases than the main planner was.  There was also an
efficiency problem, in that when we did realize we needed a clauseless join
because of an IN, we'd consider clauseless joins against every other relation
whether this was sensible or not.  It seems a better design is to use the
outer-join and in-clause lists as a backup heuristic, just as the rule of
joining only where there are joinclauses is a heuristic: we'll join two
relations if they have a usable joinclause *or* this might be necessary to
satisfy an outer-join or IN-clause join order restriction.  I refactored the
code to have just one place considering this instead of three, and made sure
that it covered all the cases that any of them had been considering.

Backpatch as far as 8.1 (which has only the IN-clause form of the disease).
By rights 8.0 and 7.4 should have the bug too, but they accidentally fail
to fail, because the joininfo structure used in those releases preserves some
memory of there having once been a joinclause between the inner and outer
sides of an IN, and so it leads the code in the right direction anyway.
I'll be conservative and not touch them.
2007-02-16 00:14:01 +00:00
Tom Lane c17117649b Repair bug in 8.2's new logic for planning outer joins: we have to allow joins
that overlap an outer join's min_righthand but aren't fully contained in it,
to support joining within the RHS after having performed an outer join that
can commute with this one.  Aside from the direct fix in make_join_rel(),
fix has_join_restriction() and GEQO's desirable_join() to consider this
possibility.  Per report from Ian Harding.
2007-02-13 02:31:03 +00:00
Tom Lane 2b8758a389 Repair oversight in creation of "append relations": we should set up
rel->tuples as well as rel->rows, since some estimation functions expect both
to be valid in every baserel.  Per report from Dave Dutcher.
2007-01-28 18:50:40 +00:00
Tom Lane 4f06c688c7 Put back planner's ability to cache the results of mergejoinscansel(),
which I had removed in the first cut of the EquivalenceClass rewrite to
simplify that patch a little.  But it's still important --- in a four-way
join problem mergejoinscansel() was eating about 40% of the planning time
according to gprof.  Also, improve the EquivalenceClass code to re-use
join RestrictInfos rather than generating fresh ones for each join
considered.  This saves some memory space but more importantly improves
the effectiveness of caching planning info in RestrictInfos.
2007-01-22 20:00:40 +00:00
Tom Lane 5a7471c307 Add COST and ROWS options to CREATE/ALTER FUNCTION, plus underlying pg_proc
columns procost and prorows, to allow simple user adjustment of the estimated
cost of a function call, as well as control of the estimated number of rows
returned by a set-returning function.  We might eventually wish to extend this
to allow function-specific estimation routines, but there seems to be
consensus that we should try a simple constant estimate first.  In particular
this provides a relatively simple way to control the order in which different
WHERE clauses are applied in a plan node, which is a Good Thing in view of the
fact that the recent EquivalenceClass planner rewrite made that much less
predictable than before.
2007-01-22 01:35:23 +00:00
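A sketch of the new syntax; the function names and bodies are made up for illustration:

    -- An expensive filter function: a high COST pushes its evaluation
    -- after cheaper WHERE clauses in the same plan node.
    CREATE FUNCTION is_eligible(customer_id int) RETURNS boolean
        AS $$ SELECT $1 % 2 = 0 $$ LANGUAGE sql COST 1000;

    -- A set-returning function whose typical result is about 10 rows.
    CREATE FUNCTION expand_tags(csv text) RETURNS SETOF text
        AS $$ SELECT regexp_split_to_table($1, ',') $$ LANGUAGE sql ROWS 10;

    ALTER FUNCTION is_eligible(int) COST 1;  -- adjust the estimate later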
Tom Lane 066926dfbb Refactor some lsyscache routines to eliminate duplicate code and save
a couple of syscache lookups in make_pathkey_from_sortinfo().
2007-01-21 00:57:15 +00:00
Tom Lane f41803bb39 Refactor planner's pathkeys data structure to create a separate, explicit
representation of equivalence classes of variables.  This is an extensive
rewrite, but it brings a number of benefits:
* planner no longer fails in the presence of "incomplete" operator families
that don't offer operators for every possible combination of datatypes.
* avoid generating and then discarding redundant equality clauses.
* remove bogus assumption that derived equalities always use operators
named "=".
* mergejoins can work with a variety of sort orders (e.g., descending) now,
instead of tying each mergejoinable operator to exactly one sort order.
* better recognition of redundant sort columns.
* can make use of equalities appearing underneath an outer join.
2007-01-20 20:45:41 +00:00
Peter Eisentraut 2cc01004c6 Remove remains of old depend target. 2007-01-20 17:16:17 +00:00
Tom Lane a191a169d6 Change the planner-to-executor API so that the planner tells the executor
which comparison operators to use for plan nodes involving tuple comparison
(Agg, Group, Unique, SetOp).  Formerly the executor looked up the default
equality operator for the datatype, which was really pretty shaky, since it's
possible that the data being fed to the node is sorted according to some
nondefault operator class that could have an incompatible idea of equality.
The planner knows what it has sorted by and therefore can provide the right
equality operator to use.  Also, this change moves a couple of catalog lookups
out of the executor and into the planner, which should help startup time for
pre-planned queries by some small amount.  Modify the planner to remove some
other cavalier assumptions about always being able to use the default
operators.  Also add "nulls first/last" info to the Plan node for a mergejoin
--- neither the executor nor the planner can cope yet, but at least the API is
in place.
2007-01-10 18:06:05 +00:00
Tom Lane 4431758229 Support ORDER BY ... NULLS FIRST/LAST, and add ASC/DESC/NULLS FIRST/NULLS LAST
per-column options for btree indexes.  The planner's support for this is still
pretty rudimentary; it does not yet know how to plan mergejoins with
nondefault ordering options.  The documentation is pretty rudimentary, too.
I'll work on improving that stuff later.

Note incompatible change from prior behavior: ORDER BY ... USING will now be
rejected if the operator is not a less-than or greater-than member of some
btree opclass.  This prevents less-than-sane behavior if an operator that
doesn't actually define a proper sort ordering is selected.
2007-01-09 02:14:16 +00:00
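The added syntax, sketched against a hypothetical table:

    CREATE TABLE scores (player text, score int);
    -- Sort descending but keep NULL scores at the end:
    SELECT player, score FROM scores ORDER BY score DESC NULLS LAST;
    -- Matching per-column options on a btree index:
    CREATE INDEX scores_score_idx ON scores (score DESC NULLS LAST);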
Tom Lane 9a9a143a98 Remove cost_hashjoin's very ancient hack to discourage (once, entirely forbid)
hash joins with the estimated-larger relation on the inside.  There are
several cases where doing that makes perfect sense, and in cases where it
doesn't, the regular cost computation really ought to be able to figure that
out.  Make some marginal tweaks in said computation to try to get results
approximating reality a bit better.  Per an example from Shane Ambler.

Also, fix an oversight in the original patch to add seq_page_cost: the costs
of spilling a hash join to disk should be scaled by seq_page_cost.
2007-01-08 16:09:22 +00:00
Bruce Momjian 29dccf5fe0 Update CVS HEAD for 2007 copyright. Back branches are typically not
back-stamped for this.
2007-01-05 22:20:05 +00:00
Tom Lane a78fcfb512 Restructure operator classes to allow improved handling of cross-data-type
cases.  Operator classes now exist within "operator families".  While most
families are equivalent to a single class, related classes can be grouped
into one family to represent the fact that they are semantically compatible.
Cross-type operators are now naturally adjunct parts of a family, without
having to wedge them into a particular opclass as we had done originally.

This commit restructures the catalogs and cleans up enough of the fallout so
that everything still works at least as well as before, but most of the work
needed to actually improve the planner's behavior will come later.  Also,
there are not yet CREATE/DROP/ALTER OPERATOR FAMILY commands; the only way
to create a new family right now is to allow CREATE OPERATOR CLASS to make
one by default.  I owe some more documentation work, too.  But that can all
be done in smaller pieces once this infrastructure is in place.
2006-12-23 00:43:13 +00:00
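One way to see the new structure is to query the catalogs it introduces; for example, listing the operator classes grouped under the integer_ops families (output will vary by installation, and the catalog column names here are as they appear in released versions):

    SELECT opf.opfname AS family,
           opc.opcname AS class,
           opc.opcintype::regtype AS input_type
    FROM pg_opclass opc
    JOIN pg_opfamily opf ON opc.opcfamily = opf.oid
    WHERE opf.opfname = 'integer_ops'
    ORDER BY 1, 2;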
Tom Lane 281f40187f Fix some planner bugs exposed by reports from Arjen van der Meijden. These
are all in new-in-8.2 logic associated with indexability of ScalarArrayOpExpr
(IN-clauses) or amortization of indexscan costs across repeated indexscans
on the inside of a nestloop.  In particular:

Fix some logic errors in the estimation for multiple scans induced by a
ScalarArrayOpExpr indexqual.

Include a small cost component in bitmap index scans to reflect the costs of
manipulating the bitmap itself; this is mainly to prevent a bitmap scan from
appearing to have the same cost as a plain indexscan for fetching a single
tuple.

Also add a per-index-scan-startup CPU cost component; while prior releases
were clearly too pessimistic about the cost of repeated indexscans, the
original 8.2 coding allowed the cost of an indexscan to effectively go to zero
if repeated often enough, which is overly optimistic.

Pay some attention to index correlation when estimating costs for a nestloop
inner indexscan: this is significant when the plan fetches multiple heap
tuples per iteration, since high correlation means those tuples are probably
on the same or adjacent heap pages.
2006-12-15 18:42:26 +00:00
Tom Lane f18c57fdf1 Fix planner to do the right thing when a degenerate outer join (one whose
joinclause doesn't use any outer-side vars) requires a "bushy" plan to be
created.  The normal heuristic to avoid joins with no joinclause has to be
overridden in that case.  Problem is new in 8.2; before that we forced the
outer join order anyway.  Per example from Teodor.
2006-12-12 21:31:02 +00:00
Tom Lane a46ca619f8 Suppress a few 'uninitialized variable' warnings that gcc emits only at
-O3 or higher (presumably because it inlines more things).  Per gripe
from Mark Mielke.
2006-11-11 01:14:19 +00:00
Tom Lane d19798e584 Fix set_joinrel_size_estimates() to estimate outer-join sizes more
accurately: we have to distinguish the effects of the join's own ON
clauses from the effects of pushed-down clauses.  Failing to do so
was a quick hack long ago, but it's time to be smarter.  Per example
from Thomas H.
2006-11-10 01:21:41 +00:00
Tom Lane 4df8de7a68 Fix check for whether a clauseless join has to be forced in the presence of
outer joins.  Originally it was only looking for overlap of the righthand
side of a left join, but we have to do it on the lefthand side too.
Per example from Jean-Pierre Pelletier.
2006-10-24 17:50:22 +00:00
Bruce Momjian f99a569a2e pgindent run for 8.2. 2006-10-04 00:30:14 +00:00
Tom Lane b74c543685 Improve usage of effective_cache_size parameter by assuming that all the
tables in the query compete for cache space, not just the one we are
currently costing an indexscan for.  This seems more realistic, and it
definitely will help in examples recently exhibited by Stefan
Kaltenbrunner.  To get the total size of all the tables involved, we must
tweak the handling of 'append relations' a bit --- formerly we looked up
information about the child tables on-the-fly during set_append_rel_pathlist,
but it needs to be done before we start doing any cost estimation, so
push it into the add_base_rels_to_query scan.
2006-09-19 22:49:53 +00:00
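For reference, the parameter in question (the value shown is only an example):

    SHOW effective_cache_size;
    -- A planner hint, not an allocation: it tells the planner how much of
    -- the data it can expect to find cached, and after this change that
    -- expectation is shared by all the tables referenced in the query.
    SET effective_cache_size = '4GB';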
Tom Lane fb9e56eea1 Suppress subquery pullup/pushdown when a subquery contains volatile
functions in its targetlist, to avoid introducing multiple evaluations
of volatile functions that textually appear only once.  This is a
slightly tighter version of Jaime Casanova's recent patch.
2006-08-19 02:48:53 +00:00
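A hypothetical example of the pattern being protected:

    CREATE TABLE raw_data (id int);
    -- random() is volatile; because the subquery is not flattened,
    -- it is evaluated once per row of raw_data, not once per reference
    -- to s.r in the outer WHERE clause.
    SELECT *
    FROM (SELECT id, random() AS r FROM raw_data) AS s
    WHERE s.r > 0.1 AND s.r < 0.5;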
Tom Lane 92c651f8b3 Fix an oversight in mergejoin planning: the planner would reject a
mergejoin possibility where the inner rel was less well sorted than
the outer (ie, it matches some but not all of the merge clauses that
can work with the outer), if the inner path in question is also the
overall cheapest path for its rel.  This is an old bug, but I'm not
sure it's worth back-patching, because it's such a corner case.
Noted while investigating a test case from Peter Hardman.
2006-08-17 17:06:37 +00:00
Tom Lane 144b0ae8ee Teach convert_subquery_pathkeys() to handle the case where the
subquery's pathkey is a RelabelType applied to something that appears
in the subquery's output; for example where the subquery returns a
varchar Var and the sort order is shown as that Var coerced to text.
This comes up because varchar doesn't have its own sort operator.
Per example from Peter Hardman.
2006-08-17 17:02:49 +00:00
Tom Lane 0ee26100b6 Fix UNION/INTERSECT/EXCEPT so that when two inputs being merged have
same data type and same typmod, we show that typmod as the output
typmod, rather than generic -1.  This responds to several complaints
over the past few years about UNIONs unexpectedly dropping length or
precision info.
2006-08-10 02:36:29 +00:00
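A sketch of the visible effect, using hypothetical tables:

    CREATE TABLE authors_a (name varchar(20));
    CREATE TABLE authors_b (name varchar(20));
    -- Both inputs are varchar(20), so the UNION's output column now keeps
    -- that typmod instead of degrading to plain varchar:
    CREATE VIEW all_authors AS
        SELECT name FROM authors_a UNION SELECT name FROM authors_b;
    -- psql's \d all_authors should report "character varying(20)".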
Joe Conway 9caafda579 Add support for multi-row VALUES clauses as part of INSERT statements
(e.g. "INSERT ... VALUES (...), (...), ...") and elsewhere as allowed
by the spec. (e.g. similar to a FROM clause subselect). initdb required.
Joe Conway and Tom Lane.
2006-08-02 01:59:48 +00:00
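The added syntax, with a made-up table:

    CREATE TABLE colors (id int, name text);
    INSERT INTO colors VALUES (1, 'red'), (2, 'green'), (3, 'blue');
    -- VALUES lists are also allowed where the spec permits them elsewhere,
    -- for example as a FROM-clause subselect:
    SELECT * FROM (VALUES (4, 'cyan'), (5, 'magenta')) AS v(id, name);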
Peter Eisentraut 79bc99a467 Convert effective_cache_size to an integer, for better integration with
upcoming units feature.
2006-07-26 11:35:56 +00:00
Tom Lane 98359c3e3f In the recent changes to make the planner account better for cache
effects in a nestloop inner indexscan, I had only dealt with plain index
scans and the index portion of bitmap scans.  But there will be cache
benefits for the heap accesses of bitmap scans too, so fix
cost_bitmap_heap_scan() to account for that.
2006-07-22 15:41:56 +00:00
Bruce Momjian e0522505bd Remove 576 references of include files that were not needed. 2006-07-14 14:52:27 +00:00
Bruce Momjian b844dd3f9e More include file adjustments. 2006-07-13 17:47:02 +00:00
Bruce Momjian fa601357fb Sort reference of include files, "A" - "F". 2006-07-11 16:35:33 +00:00
Tom Lane 08ccdf020e Fix oversight in planning for multiple indexscans driven by
ScalarArrayOpExpr index quals: we were estimating the right total
number of rows returned, but treating the index-access part of the
cost as if a single scan were fetching that many consecutive index
tuples.  Actually we should treat it as a multiple indexscan, and
if there are enough of 'em the Mackert-Lohman discount should kick in.
2006-07-01 22:07:23 +00:00
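A hypothetical query of the shape being costed here:

    CREATE TABLE items (id int PRIMARY KEY, label text);
    -- An IN list becomes a ScalarArrayOpExpr index qual: logically several
    -- probes of the same index, which is how the cost model now treats it.
    EXPLAIN SELECT * FROM items WHERE id IN (1, 7, 42);
    EXPLAIN SELECT * FROM items WHERE id = ANY (ARRAY[1, 7, 42]);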
Tom Lane cffd89ca73 Revise the planner's handling of "pseudoconstant" WHERE clauses, that is
clauses containing no variables and no volatile functions.  Such a clause
can be used as a one-time qual in a gating Result plan node, to suppress
plan execution entirely when it is false.  Even when the clause is true,
putting it in a gating node wins by avoiding repeated evaluation of the
clause.  In previous PG releases, query_planner() would do this for
pseudoconstant clauses appearing at the top level of the jointree, but
there was no ability to generate a gating Result deeper in the plan tree.
To fix it, get rid of the special case in query_planner(), and instead
process pseudoconstant clauses through the normal RestrictInfo qual
distribution mechanism.  When a pseudoconstant clause is found attached to
a path node in create_plan(), pull it out and generate a gating Result at
that point.  This requires special-casing pseudoconstants in selectivity
estimation and cost_qual_eval, but on the whole it's pretty clean.
It probably even makes the planner a bit faster than before for the normal
case of no pseudoconstants, since removing pull_constant_clauses saves one
useless traversal of the qual tree.  Per gripe from Phil Frost.
2006-07-01 18:38:33 +00:00
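A small example of a pseudoconstant qual (hypothetical table; exact EXPLAIN output may vary by version):

    CREATE TABLE events (id int, payload text);
    -- now() is stable and the clause references no columns, so it is
    -- evaluated once per execution; EXPLAIN should show a gating Result
    -- node with a "One-Time Filter" above the scan.
    EXPLAIN SELECT * FROM events WHERE now() > '2030-01-01';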
Tom Lane ae0c8d09fb Remove "fuzzy comparison" logic in qsort comparison function for
choose_bitmap_and().  It was way too fuzzy --- per comment, it was meant to be
1% relative difference, but was actually coded as 0.01 absolute difference,
thus causing selectivities of say 0.001 and 0.000000000001 to be treated as
equal.  I believe this thinko explains Maxim Boguk's recent complaint.  While
we could change it to a relative test coded like compare_fuzzy_path_costs(),
there's a bigger problem here, which is that any fuzziness at all renders the
comparison function non-transitive, which could confuse qsort() to the point
of delivering completely wrong results.  So forget the whole thing and just
do an exact comparison.
2006-06-07 17:08:07 +00:00
Tom Lane 8a30cc2127 Make the planner estimate costs for nestloop inner indexscans on the basis
that the Mackert-Lohman formula applies across all the repetitions of the
nestloop, not just each scan independently.  We use the M-L formula to
estimate the number of pages fetched from the index as well as from the table;
that isn't what it was designed for, but it seems reasonably applicable
anyway.  This makes large numbers of repetitions look much cheaper than
before, which accords with many reports we've received of overestimation
of the cost of a nestloop.  Also, change the index access cost model to
charge random_page_cost per index leaf page touched, while explicitly
not counting anything for access to metapage or upper tree pages.  This
may all need tweaking after we get some field experience, but in simple
tests it seems to be giving saner results than before.  The main thing
is to get the infrastructure in place to let cost_index() and amcostestimate
functions take repeated scans into account at all.  Per my recent proposal.

Note: this patch changes pg_proc.h, but I did not force initdb because
the changes are basically cosmetic --- the system does not look into
pg_proc to decide how to call an index amcostestimate function, and
there's no way to call such a function from SQL at all.
2006-06-06 17:59:58 +00:00
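For reference, the Mackert-Lohman approximation as described in the costsize.c comments (reproduced here from memory, so treat the details as indicative rather than authoritative), where T is the number of pages in the table, N the number of tuples, s the selectivity, and b the effective cache size in pages:

    PF \approx
    \begin{cases}
      \min\!\left(\frac{2TNs}{2T + Ns},\, T\right) & T \le b \\
      \frac{2TNs}{2T + Ns} & T > b,\ Ns \le \frac{2Tb}{2T - b} \\
      b + \left(Ns - \frac{2Tb}{2T - b}\right)\frac{T - b}{T} & T > b,\ Ns > \frac{2Tb}{2T - b}
    \end{cases}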
Tom Lane 7868590c61 While making the seq_page_cost changes, I was struck by the fact that
cost_nonsequential_access() is really totally inappropriate for its only
remaining use, namely estimating I/O costs in cost_sort().  The routine
was designed on the assumption that disk caching might eliminate the need
for some re-reads on a random basis, but there's nothing very random in
that sense about sort's access pattern --- it'll always be picking up the
oldest outputs.  If we had a good fix on the effective cache size we
might consider charging zero for I/O unless the sort temp file size
exceeds it, but that's probably putting much too much faith in the
parameter.  Instead just drop the logic in favor of a fixed compromise
between seq_page_cost and random_page_cost per page of sort I/O.
2006-06-05 20:56:33 +00:00
Tom Lane eed6c9ed7e Add a GUC parameter seq_page_cost, and use that everywhere we formerly
assumed that a sequential page fetch has cost 1.0.  This patch doesn't
in itself change the system's behavior at all, but it opens the door to
people adopting other units of measurement for EXPLAIN costs.  Also, if
we ever decide it's worth inventing per-tablespace access cost settings,
this change provides a workable intellectual framework for that.
2006-06-05 02:49:58 +00:00
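The new knob, for reference:

    -- seq_page_cost replaces the old hard-wired assumption that a
    -- sequential page fetch costs exactly 1.0; the default preserves the
    -- previous behavior and merely makes the unit of measurement explicit.
    SHOW seq_page_cost;
    SHOW random_page_cost;
    SET seq_page_cost = 1.0;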
Tom Lane eed57b1b92 Fix choose_bitmap_and() so that partial index predicates are considered when
deciding whether a potential additional indexscan is redundant or not.  As now
coded, any use of a partial index that was already used in a previous AND arm
will be rejected as redundant.  This might be overly restrictive, but not
considering the point at all is definitely bad, as per example in bug #2441
from Arjen van der Meijden.  In particular, a clauseless scan of a partial
index was *never* considered redundant by the previous coding, and that's
surely wrong.  Being more flexible would also require some consideration
of how not to double-count the index predicate's selectivity.
2006-05-18 19:56:46 +00:00
Tom Lane d18e334c65 Fix thinko in recent changes to handle ScalarArrayOpExpr as an indexable
condition: when there are multiple possible index paths involving
ScalarArrayOpExprs, they are logically to be ANDed together not ORed.
This thinko was a direct consequence of trying to put the processing
inside generate_bitmap_or_paths(), which I now see was a bit too cute.
So pull it out and make the callers do it separately (there are only two
that need it anyway).  Partially responds to bug #2441 from Arjen van der Meijden.
There are some additional infelicities exposed by his example, but they
are also in 8.1.x, while this mistake is not.
2006-05-18 17:12:10 +00:00
Tom Lane 427c6b5b98 Avoid assuming that statistics for a parent relation reflect the properties of
the union of its child relations as well.  This might have been a good idea
when it was originally coded, but it's a fatally bad idea when inheritance is
being used for partitioning.  It's better to have no stats at all than
completely misleading stats.  Per report from Mark Liberman.

The bug arguably exists all the way back, but I've only patched HEAD and 8.1
because we weren't particularly trying to support partitioning before 8.1.

Eventually we ought to look at deriving union statistics instead of just
punting, but for now the drop kick looks good.
2006-05-02 04:34:18 +00:00