postgresql/src/include/nodes/relation.h


/*-------------------------------------------------------------------------
*
* relation.h
* Definitions for planner's internal data structures.
*
*
* Portions Copyright (c) 1996-2015, PostgreSQL Global Development Group
* Portions Copyright (c) 1994, Regents of the University of California
*
* src/include/nodes/relation.h
*
*-------------------------------------------------------------------------
*/
#ifndef RELATION_H
#define RELATION_H
#include "access/sdir.h"
#include "lib/stringinfo.h"
#include "nodes/params.h"
#include "nodes/parsenodes.h"
#include "storage/block.h"
/*
* Relids
* Set of relation identifiers (indexes into the rangetable).
*/
typedef Bitmapset *Relids;
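
/*
 * A minimal, hypothetical sketch (not part of this header): Relids values
 * are built and tested with the ordinary Bitmapset primitives declared in
 * nodes/bitmapset.h, e.g.:
 *
 *     Relids relids = NULL;
 *
 *     relids = bms_add_member(relids, outer_rti);
 *     relids = bms_add_member(relids, inner_rti);
 *     if (bms_is_member(some_rti, relids))
 *         ... rel is part of the set ...
 */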
/*
* When looking for a "cheapest path", this enum specifies whether we want
* cheapest startup cost or cheapest total cost.
*/
typedef enum CostSelector
{
STARTUP_COST, TOTAL_COST
} CostSelector;
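
/*
 * A hypothetical usage sketch: callers pass a CostSelector to helpers such
 * as get_cheapest_path_for_pathkeys() to say which cost to minimize, e.g.:
 *
 *     path = get_cheapest_path_for_pathkeys(rel->pathlist, pathkeys,
 *                                           NULL, TOTAL_COST);
 */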
/*
* The cost estimate produced by cost_qual_eval() includes both a one-time
* (startup) cost, and a per-tuple cost.
*/
typedef struct QualCost
{
Cost startup; /* one-time cost */
Cost per_tuple; /* per-evaluation cost */
} QualCost;
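
/*
 * An illustrative sketch (not part of this header) of how costsize.c
 * typically folds a QualCost into a path estimate, where "tuples" is the
 * expected number of evaluations:
 *
 *     QualCost qcost;
 *
 *     cost_qual_eval(&qcost, quals, root);
 *     startup_cost += qcost.startup;
 *     run_cost += qcost.per_tuple * tuples;
 */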
/*
* Costing aggregate function execution requires these statistics about
* the aggregates to be executed by a given Agg node. Note that the costs
* include the execution costs of the aggregates' argument expressions as
* well as the aggregate functions themselves.
*/
typedef struct AggClauseCosts
{
int numAggs; /* total number of aggregate functions */
int numOrderedAggs; /* number w/ DISTINCT/ORDER BY/WITHIN GROUP */
QualCost transCost; /* total per-input-row execution costs */
Cost finalCost; /* total per-aggregated-row costs */
Size transitionSpace; /* space for pass-by-ref transition data */
} AggClauseCosts;
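
/*
 * A simplified, hedged sketch of how cost_agg() in costsize.c consumes
 * these numbers (the real function also charges for grouping comparisons,
 * hashing, etc.):
 *
 *     cost += aggcosts->transCost.startup;
 *     cost += aggcosts->transCost.per_tuple * input_tuples;
 *     cost += aggcosts->finalCost * output_groups;
 */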
/*----------
* PlannerGlobal
* Global information for planning/optimization
*
* PlannerGlobal holds state for an entire planner invocation; this state
* is shared across all levels of sub-Queries that exist in the command being
* planned.
*----------
*/
typedef struct PlannerGlobal
{
NodeTag type;
ParamListInfo boundParams; /* Param values provided to planner() */
List *subplans; /* Plans for SubPlan nodes */
List *subroots; /* PlannerInfos for SubPlan nodes */
Bitmapset *rewindPlanIDs; /* indices of subplans that require REWIND */
List *finalrtable; /* "flat" rangetable for executor */
List *finalrowmarks; /* "flat" list of PlanRowMarks */
List *resultRelations; /* "flat" list of integer RT indexes */
List *relationOids; /* OIDs of relations the plan depends on */
List *invalItems; /* other dependencies, as PlanInvalItems */
int nParamExec; /* number of PARAM_EXEC Params used */
Index lastPHId; /* highest PlaceHolderVar ID assigned */
Index lastRowMarkId; /* highest PlanRowMark ID assigned */
int lastPlanNodeId; /* highest plan node ID assigned */
bool transientPlan; /* redo plan when TransactionXmin changes? */
bool hasRowSecurity; /* row security applied? */
bool parallelModeOK; /* parallel mode potentially OK? */
bool parallelModeNeeded; /* parallel mode actually required? */
} PlannerGlobal;
/* macro for fetching the Plan associated with a SubPlan node */
#define planner_subplan_get_plan(root, subplan) \
((Plan *) list_nth((root)->glob->subplans, (subplan)->plan_id - 1))
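
/*
 * A hypothetical example: given a SubPlan node "sp" found while walking an
 * expression tree, with the PlannerInfo "root" in hand, the corresponding
 * Plan is fetched as:
 *
 *     Plan *plan = planner_subplan_get_plan(root, sp);
 */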
/*----------
* PlannerInfo
* Per-query information for planning/optimization
*
* This struct is conventionally called "root" in all the planner routines.
* It holds links to all of the planner's working state, in addition to the
* original Query. Note that at present the planner extensively modifies
* the passed-in Query data structure; someday that should stop.
*----------
*/
typedef struct PlannerInfo
{
NodeTag type;
Query *parse; /* the Query being planned */
PlannerGlobal *glob; /* global info for current planner run */
Index query_level; /* 1 at the outermost Query */
struct PlannerInfo *parent_root; /* NULL at outermost Query */
/*
* plan_params contains the expressions that this query level needs to
* make available to a lower query level that is currently being planned.
* outer_params contains the paramIds of PARAM_EXEC Params that outer
* query levels will make available to this query level.
*/
List *plan_params; /* list of PlannerParamItems, see below */
Bitmapset *outer_params;
/*
* simple_rel_array holds pointers to "base rels" and "other rels" (see
* comments for RelOptInfo for more info). It is indexed by rangetable
* index (so entry 0 is always wasted). Entries can be NULL when an RTE
* does not correspond to a base relation, such as a join RTE or an
* unreferenced view RTE; or if the RelOptInfo hasn't been made yet.
*/
struct RelOptInfo **simple_rel_array; /* All 1-rel RelOptInfos */
int simple_rel_array_size; /* allocated size of array */
/*
* simple_rte_array is the same length as simple_rel_array and holds
* pointers to the associated rangetable entries. This lets us avoid
* rt_fetch(), which can be a bit slow once large inheritance sets have
* been expanded.
*/
RangeTblEntry **simple_rte_array; /* rangetable as an array */
/*
* all_baserels is a Relids set of all base relids (but not "other"
* relids) in the query; that is, the Relids identifier of the final join
* we need to form. This is computed in make_one_rel, just before we
* start making Paths.
*/
Relids all_baserels;
/*
* nullable_baserels is a Relids set of base relids that are nullable by
* some outer join in the jointree; these are rels that are potentially
* nullable below the WHERE clause, SELECT targetlist, etc. This is
* computed in deconstruct_jointree.
*/
Relids nullable_baserels;
/*
* join_rel_list is a list of all join-relation RelOptInfos we have
* considered in this planning run. For small problems we just scan the
* list to do lookups, but when there are many join relations we build a
* hash table for faster lookups. The hash table is present and valid
* when join_rel_hash is not NULL. Note that we still maintain the list
* even when using the hash table for lookups; this simplifies life for
* GEQO.
*/
List *join_rel_list; /* list of join-relation RelOptInfos */
struct HTAB *join_rel_hash; /* optional hashtable for join relations */
/*
* When doing a dynamic-programming-style join search, join_rel_level[k]
* is a list of all join-relation RelOptInfos of level k, and
* join_cur_level is the current level. New join-relation RelOptInfos are
* automatically added to the join_rel_level[join_cur_level] list.
* join_rel_level is NULL if not in use.
*/
List **join_rel_level; /* lists of join-relation RelOptInfos */
int join_cur_level; /* index of list being extended */
List *init_plans; /* init SubPlans for query */
List *cte_plan_ids; /* per-CTE-item list of subplan IDs */
List *multiexpr_params; /* List of Lists of Params for
* MULTIEXPR subquery outputs */
List *eq_classes; /* list of active EquivalenceClasses */
List *canon_pathkeys; /* list of "canonical" PathKeys */
List *left_join_clauses; /* list of RestrictInfos for
* mergejoinable outer join clauses
* w/nonnullable var on left */
List *right_join_clauses; /* list of RestrictInfos for
* mergejoinable outer join clauses
* w/nonnullable var on right */
List *full_join_clauses; /* list of RestrictInfos for
* mergejoinable full join clauses */
List *join_info_list; /* list of SpecialJoinInfos */
List *lateral_info_list; /* list of LateralJoinInfos */
List *append_rel_list; /* list of AppendRelInfos */
List *rowMarks; /* list of PlanRowMarks */
List *placeholder_list; /* list of PlaceHolderInfos */
List *query_pathkeys; /* desired pathkeys for query_planner(), and
* actual pathkeys after planning */
List *group_pathkeys; /* groupClause pathkeys, if any */
List *window_pathkeys; /* pathkeys of bottom window, if any */
List *distinct_pathkeys; /* distinctClause pathkeys, if any */
List *sort_pathkeys; /* sortClause pathkeys, if any */
List *minmax_aggs; /* List of MinMaxAggInfos */
List *initial_rels; /* RelOptInfos we are now trying to join */
MemoryContext planner_cxt; /* context holding PlannerInfo */
double total_table_pages; /* # of pages in all tables of query */
double tuple_fraction; /* tuple_fraction passed to query_planner */
double limit_tuples; /* limit_tuples passed to query_planner */
bool hasInheritedTarget; /* true if parse->resultRelation is an
* inheritance child rel */
bool hasJoinRTEs; /* true if any RTEs are RTE_JOIN kind */
bool hasLateralRTEs; /* true if any RTEs are marked LATERAL */
bool hasDeletedRTEs; /* true if any RTE was deleted from jointree */
bool hasHavingQual; /* true if havingQual was non-null */
bool hasPseudoConstantQuals; /* true if any RestrictInfo has
* pseudoconstant = true */
bool hasRecursion; /* true if planning a recursive WITH item */
/* These fields are used only when hasRecursion is true: */
int wt_param_id; /* PARAM_EXEC ID for the work table */
struct Plan *non_recursive_plan; /* plan for non-recursive term */
/* These fields are workspace for createplan.c */
Relids curOuterRels; /* outer rels above current node */
List *curOuterParams; /* not-yet-assigned NestLoopParams */
/* optional private data for join_search_hook, e.g., GEQO */
void *join_search_private;
/* for GroupingFunc fixup in setrefs */
AttrNumber *grouping_map;
} PlannerInfo;
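
/*
 * An illustrative sketch (not part of this header): code that wants the
 * RelOptInfo of every base rel typically walks simple_rel_array, skipping
 * the wasted zero entry and any NULL or non-baserel entries:
 *
 *     for (rti = 1; rti < root->simple_rel_array_size; rti++)
 *     {
 *         RelOptInfo *rel = root->simple_rel_array[rti];
 *
 *         if (rel == NULL || rel->reloptkind != RELOPT_BASEREL)
 *             continue;
 *         ... process rel ...
 *     }
 */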
/*
* In places where it's known that simple_rte_array[] must have been prepared
* already, we just index into it to fetch RTEs. In code that might be
* executed before or after entering query_planner(), use this macro.
*/
#define planner_rt_fetch(rti, root) \
((root)->simple_rte_array ? (root)->simple_rte_array[rti] : \
rt_fetch(rti, (root)->parse->rtable))
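
/*
 * A hypothetical usage example, safe both before and after
 * simple_rte_array has been built:
 *
 *     RangeTblEntry *rte = planner_rt_fetch(rti, root);
 *
 *     if (rte->rtekind == RTE_RELATION)
 *         ... rti names a plain table ...
 */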
/*----------
* RelOptInfo
* Per-relation information for planning/optimization
*
* For planning purposes, a "base rel" is either a plain relation (a table)
* or the output of a sub-SELECT or function that appears in the range table.
* In either case it is uniquely identified by an RT index. A "joinrel"
* is the joining of two or more base rels. A joinrel is identified by
* the set of RT indexes for its component baserels. We create RelOptInfo
* nodes for each baserel and joinrel, and store them in the PlannerInfo's
* simple_rel_array and join_rel_list respectively.
*
* Note that there is only one joinrel for any given set of component
* baserels, no matter what order we assemble them in; so an unordered
* set is the right datatype to identify it with.
*
* We also have "other rels", which are like base rels in that they refer to
* single RT indexes; but they are not part of the join tree, and are given
* a different RelOptKind to identify them. Lastly, there is a RelOptKind
* for "dead" relations, which are base rels that we have proven we don't
* need to join after all.
*
* Currently the only kind of otherrels are those made for member relations
* of an "append relation", that is an inheritance set or UNION ALL subquery.
* An append relation has a parent RTE that is a base rel, which represents
* the entire append relation. The member RTEs are otherrels. The parent
* is present in the query join tree but the members are not. The member
* RTEs and otherrels are used to plan the scans of the individual tables or
* subqueries of the append set; then the parent baserel is given Append
* and/or MergeAppend paths comprising the best paths for the individual
* member rels. (See comments for AppendRelInfo for more information.)
*
* At one time we also made otherrels to represent join RTEs, for use in
* handling join alias Vars. Currently this is not needed because all join
* alias Vars are expanded to non-aliased form during preprocess_expression.
*
* Parts of this data structure are specific to various scan and join
* mechanisms. It didn't seem worth creating new node types for them.
*
* relids - Set of base-relation identifiers; it is a base relation
* if there is just one, a join relation if more than one
* rows - estimated number of tuples in the relation after restriction
* clauses have been applied (ie, output rows of a plan for it)
* width - avg. number of bytes per tuple in the relation after the
* appropriate projections have been done (ie, output width)
* consider_startup - true if there is any value in keeping plain paths for
* this rel on the basis of having cheap startup cost
* consider_param_startup - the same for parameterized paths
* reltargetlist - List of Var and PlaceHolderVar nodes for the values
* we need to output from this relation.
* List is in no particular order, but all rels of an
* appendrel set must use corresponding orders.
* NOTE: in an appendrel child relation, may contain
* arbitrary expressions pulled up from a subquery!
* pathlist - List of Path nodes, one for each potentially useful
* method of generating the relation
* ppilist - ParamPathInfo nodes for parameterized Paths, if any
* cheapest_startup_path - the pathlist member with lowest startup cost
* (regardless of ordering) among the unparameterized paths;
* or NULL if there is no unparameterized path
* cheapest_total_path - the pathlist member with lowest total cost
* (regardless of ordering) among the unparameterized paths;
* or if there is no unparameterized path, the path with lowest
* total cost among the paths with minimum parameterization
* cheapest_unique_path - for caching cheapest path to produce unique
* (no duplicates) output from relation; NULL if not yet requested
* cheapest_parameterized_paths - best paths for their parameterizations;
* always includes cheapest_total_path, even if that's unparameterized
*
* If the relation is a base relation it will have these fields set:
*
* relid - RTE index (this is redundant with the relids field, but
* is provided for convenience of access)
* rtekind - distinguishes plain relation, subquery, or function RTE
* min_attr, max_attr - range of valid AttrNumbers for rel
* attr_needed - array of bitmapsets indicating the highest joinrel
* in which each attribute is needed; if bit 0 is set then
* the attribute is needed as part of final targetlist
* attr_widths - cache space for per-attribute width estimates;
* zero means not computed yet
* lateral_vars - lateral cross-references of rel, if any (list of
* Vars and PlaceHolderVars)
* lateral_relids - required outer rels for LATERAL, as a Relids set
* (for child rels this can be more than lateral_vars)
* lateral_referencers - relids of rels that reference this one laterally
* indexlist - list of IndexOptInfo nodes for relation's indexes
* (always NIL if it's not a table)
* pages - number of disk pages in relation (zero if not a table)
* tuples - number of tuples in relation (not considering restrictions)
* allvisfrac - fraction of disk pages that are marked all-visible
* subplan - plan for subquery (NULL if it's not a subquery)
* subroot - PlannerInfo for subquery (NULL if it's not a subquery)
* subplan_params - list of PlannerParamItems to be passed to subquery
*
* Note: for a subquery, tuples, subplan, subroot are not set immediately
* upon creation of the RelOptInfo object; they are filled in when
* set_subquery_pathlist processes the object.
*
* For otherrels that are appendrel members, these fields are filled
* in just as for a baserel.
*
* If the relation is either a foreign table or a join of foreign tables that
* all belong to the same foreign server, these fields will be set:
*
* serverid - OID of the foreign server, for the table or join (else InvalidOid)
* fdwroutine - function hooks for the FDW, for the table or join (else NULL)
* fdw_private - private state for the FDW, for the table or join (else NULL)
*
* The presence of the remaining fields depends on the restrictions
* and joins that the relation participates in:
*
* baserestrictinfo - List of RestrictInfo nodes, containing info about
* each non-join qualification clause in which this relation
* participates (only used for base rels)
* baserestrictcost - Estimated cost of evaluating the baserestrictinfo
* clauses at a single tuple (only used for base rels)
* joininfo - List of RestrictInfo nodes, containing info about each
* join clause in which this relation participates (but
* note this excludes clauses that might be derivable from
* EquivalenceClasses)
* has_eclass_joins - flag that EquivalenceClass joins are possible
*
* Note: Keeping a restrictinfo list in the RelOptInfo is useful only for
* base rels, because for a join rel the set of clauses that are treated as
* restrict clauses varies depending on which sub-relations we choose to join.
* (For example, in a 3-base-rel join, a clause relating rels 1 and 2 must be
* treated as a restrictclause if we join {1} and {2 3} to make {1 2 3}; but
* if we join {1 2} and {3} then that clause will be a restrictclause in {1 2}
* and should not be processed again at the level of {1 2 3}.) Therefore,
* the restrictinfo list in the join case appears in individual JoinPaths
* (field joinrestrictinfo), not in the parent relation. But it's OK for
* the RelOptInfo to store the joininfo list, because that is the same
* for a given rel no matter how we form it.
*
* We store baserestrictcost in the RelOptInfo (for base relations) because
* we know we will need it at least once (to price the sequential scan)
* and may need it multiple times to price index scans.
*----------
*/
typedef enum RelOptKind
{
RELOPT_BASEREL,
RELOPT_JOINREL,
RELOPT_OTHER_MEMBER_REL,
RELOPT_DEADREL
} RelOptKind;
typedef struct RelOptInfo
{
NodeTag type;
RelOptKind reloptkind;
/* all relations included in this RelOptInfo */
Relids relids; /* set of base relids (rangetable indexes) */
/* size estimates generated by planner */
double rows; /* estimated number of result tuples */
int width; /* estimated avg width of result tuples */
/* per-relation planner control flags */
bool consider_startup; /* keep cheap-startup-cost paths? */
bool consider_param_startup; /* ditto, for parameterized paths? */
/* materialization information */
List *reltargetlist; /* Vars to be output by scan of relation */
List *pathlist; /* Path structures */
List *ppilist; /* ParamPathInfos used in pathlist */
struct Path *cheapest_startup_path;
struct Path *cheapest_total_path;
struct Path *cheapest_unique_path;
List *cheapest_parameterized_paths;
/* information about a base rel (not set for join rels!) */
Index relid;
Oid reltablespace; /* containing tablespace */
RTEKind rtekind; /* RELATION, SUBQUERY, or FUNCTION */
AttrNumber min_attr; /* smallest attrno of rel (often <0) */
AttrNumber max_attr; /* largest attrno of rel */
Relids *attr_needed; /* array indexed [min_attr .. max_attr] */
int32 *attr_widths; /* array indexed [min_attr .. max_attr] */
List *lateral_vars; /* LATERAL Vars and PHVs referenced by rel */
Relids lateral_relids; /* minimum parameterization of rel */
Relids lateral_referencers; /* rels that reference me laterally */
List *indexlist; /* list of IndexOptInfo */
BlockNumber pages; /* size estimates derived from pg_class */
double tuples;
double allvisfrac;
/* use "struct Plan" to avoid including plannodes.h here */
struct Plan *subplan; /* if subquery */
PlannerInfo *subroot; /* if subquery */
List *subplan_params; /* if subquery */
/* Information about foreign tables and foreign joins */
Oid serverid; /* identifies server for the table or join */
/* use "struct FdwRoutine" to avoid including fdwapi.h here */
struct FdwRoutine *fdwroutine;
void *fdw_private;
/* used by various scans and joins: */
List *baserestrictinfo; /* RestrictInfo structures (if base
* rel) */
QualCost baserestrictcost; /* cost of evaluating the above */
List *joininfo; /* RestrictInfo structures for join clauses
* involving this rel */
bool has_eclass_joins; /* T means joininfo is incomplete */
1999-05-26 00:43:53 +02:00
} RelOptInfo;
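
/*
 * The sketch below is illustrative only, not part of this header's API: a
 * hypothetical debugging helper showing typical read-only use of the fields
 * above.  The function name and message wording are invented for the example.
 */
#ifdef NOT_USED
static void
report_rel(RelOptInfo *rel)
{
	if (rel->reloptkind == RELOPT_BASEREL)
		elog(DEBUG1, "base rel %u: %.0f rows, %d restriction clauses",
			 rel->relid, rel->rows,
			 list_length(rel->baserestrictinfo));
	else
		elog(DEBUG1, "join rel: %.0f rows", rel->rows);
}
#endif   /* NOT_USED */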
/*
* IndexOptInfo
* Per-index information for planning/optimization
*
* indexkeys[], indexcollations[], opfamily[], and opcintype[]
* each have ncolumns entries.
*
* sortopfamily[], reverse_sort[], and nulls_first[] likewise have
* ncolumns entries, if the index is ordered; but if it is unordered,
* those pointers are NULL.
*
* Zeroes in the indexkeys[] array indicate index columns that are
* expressions; there is one element in indexprs for each such column.
*
* For an ordered index, reverse_sort[] and nulls_first[] describe the
* sort ordering of a forward indexscan; we can also consider a backward
* indexscan, which will generate the reverse ordering.
*
* The indexprs and indpred expressions have been run through
* prepqual.c and eval_const_expressions() for ease of matching to
* WHERE clauses. indpred is in implicit-AND form.
*
* indextlist is a TargetEntry list representing the index columns.
* It provides an equivalent base-relation Var for each simple column,
* and links to the matching indexprs element for each expression column.
*/
typedef struct IndexOptInfo
{
NodeTag type;
Oid indexoid; /* OID of the index relation */
Oid reltablespace; /* tablespace of index (not table) */
RelOptInfo *rel; /* back-link to index's table */
/* index-size statistics (from pg_class and elsewhere) */
BlockNumber pages; /* number of disk pages in index */
double tuples; /* number of index tuples in index */
int tree_height; /* index tree height, or -1 if unknown */
/* index descriptor information */
int ncolumns; /* number of columns in index */
int *indexkeys; /* column numbers of index's keys, or 0 */
Oid *indexcollations; /* OIDs of collations of index columns */
Oid *opfamily; /* OIDs of operator families for columns */
Oid *opcintype; /* OIDs of opclass declared input data types */
Oid *sortopfamily; /* OIDs of btree opfamilies, if orderable */
bool *reverse_sort; /* is sort order descending? */
bool *nulls_first; /* do NULLs come first in the sort order? */
bool *canreturn; /* which index cols can be returned in an
* index-only scan? */
Oid relam; /* OID of the access method (in pg_am) */
RegProcedure amcostestimate; /* OID of the access method's cost fcn */
List *indexprs; /* expressions for non-simple index columns */
List *indpred; /* predicate if a partial index, else NIL */
List *indextlist; /* targetlist representing index columns */
bool predOK; /* true if predicate matches query */
bool unique; /* true if a unique index */
bool immediate; /* is uniqueness enforced immediately? */
bool hypothetical; /* true if index doesn't really exist */
bool amcanorderbyop; /* does AM support order by operator result? */
bool amoptionalkey; /* can query omit key for the first column? */
bool amsearcharray; /* can AM handle ScalarArrayOpExpr quals? */
bool amsearchnulls; /* can AM search for NULL/NOT NULL entries? */
bool amhasgettuple; /* does AM have amgettuple interface? */
bool amhasgetbitmap; /* does AM have amgetbitmap interface? */
} IndexOptInfo;
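
/*
 * Illustrative sketch (an assumption, not planner code): walking an index's
 * columns using the convention documented above, i.e. indexkeys[i] == 0
 * marks an expression column whose expression is the next indexprs entry.
 */
#ifdef NOT_USED
static void
walk_index_columns(IndexOptInfo *index)
{
	ListCell   *expr_item = list_head(index->indexprs);
	int			i;

	for (i = 0; i < index->ncolumns; i++)
	{
		if (index->indexkeys[i] != 0)
		{
			/* simple column: indexkeys[i] is the table attribute number */
		}
		else
		{
			/* expression column: consume the matching indexprs element */
			Expr	   *indexvar = (Expr *) lfirst(expr_item);

			expr_item = lnext(expr_item);
			(void) indexvar;	/* examine the expression here */
		}
	}
}
#endif   /* NOT_USED */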
/*
* EquivalenceClasses
*
* Whenever we can determine that a mergejoinable equality clause A = B is
* not delayed by any outer join, we create an EquivalenceClass containing
* the expressions A and B to record this knowledge. If we later find another
* equivalence B = C, we add C to the existing EquivalenceClass; this may
* require merging two existing EquivalenceClasses. At the end of the qual
* distribution process, we have sets of values that are known to be transitively
* equal to each other, where "equal" is according to the rules of the btree
* operator family(s) shown in ec_opfamilies, as well as the collation shown
* by ec_collation. (We restrict an EC to contain only equalities whose
* operators belong to the same set of opfamilies. This could probably be
* relaxed, but for now it's not worth the trouble, since nearly all equality
* operators belong to only one btree opclass anyway. Similarly, we suppose
* that all or none of the input datatypes are collatable, so that a single
* collation value is sufficient.)
*
* We also use EquivalenceClasses as the base structure for PathKeys, letting
* us represent knowledge about different sort orderings being equivalent.
* Since every PathKey must reference an EquivalenceClass, we will end up
* with single-member EquivalenceClasses whenever a sort key expression has
* not been equivalenced to anything else. It is also possible that such an
* EquivalenceClass will contain a volatile expression ("ORDER BY random()"),
* which is a case that can't arise otherwise since clauses containing
* volatile functions are never considered mergejoinable. We mark such
* EquivalenceClasses specially to prevent them from being merged with
* ordinary EquivalenceClasses. Also, for volatile expressions we have
* to be careful to match the EquivalenceClass to the correct targetlist
* entry: consider SELECT random() AS a, random() AS b ... ORDER BY b,a.
* So we record the SortGroupRef of the originating sort clause.
*
* We allow equality clauses appearing below the nullable side of an outer join
* to form EquivalenceClasses, but these have a slightly different meaning:
* the included values might be all NULL rather than all the same non-null
* values. See src/backend/optimizer/README for more on that point.
*
* NB: if ec_merged isn't NULL, this class has been merged into another, and
* should be ignored in favor of using the pointed-to class.
*/
typedef struct EquivalenceClass
{
NodeTag type;
List *ec_opfamilies; /* btree operator family OIDs */
Oid ec_collation; /* collation, if datatypes are collatable */
List *ec_members; /* list of EquivalenceMembers */
List *ec_sources; /* list of generating RestrictInfos */
List *ec_derives; /* list of derived RestrictInfos */
Relids ec_relids; /* all relids appearing in ec_members, except
* for child members (see below) */
bool ec_has_const; /* any pseudoconstants in ec_members? */
bool ec_has_volatile; /* the (sole) member is a volatile expr */
bool ec_below_outer_join; /* equivalence applies below an OJ */
bool ec_broken; /* failed to generate needed clauses? */
Index ec_sortref; /* originating sortclause label, or 0 */
struct EquivalenceClass *ec_merged; /* set if merged into another EC */
} EquivalenceClass;
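
/*
 * Per the NB above, holders of an EquivalenceClass pointer should chase
 * ec_merged links to reach the surviving class.  A minimal sketch of that
 * rule (the helper name is invented for illustration):
 */
#ifdef NOT_USED
static EquivalenceClass *
canonical_ec(EquivalenceClass *ec)
{
	/* follow merge links until we reach a class not merged into another */
	while (ec->ec_merged != NULL)
		ec = ec->ec_merged;
	return ec;
}
#endif   /* NOT_USED */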
/*
* If an EC contains a const and isn't below-outer-join, any PathKey depending
* on it must be redundant, since there's only one possible value of the key.
*/
#define EC_MUST_BE_REDUNDANT(eclass) \
((eclass)->ec_has_const && !(eclass)->ec_below_outer_join)
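
/*
 * Hedged usage sketch: a sort key whose equivalence class is pinned to a
 * constant cannot affect the output ordering, so pathkey-building code can
 * skip it.  The function name is invented for illustration.
 */
#ifdef NOT_USED
static bool
pathkey_is_interesting(EquivalenceClass *eclass)
{
	/* no point in sorting on a value known to be a single constant */
	if (EC_MUST_BE_REDUNDANT(eclass))
		return false;
	return true;
}
#endif   /* NOT_USED */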
/*
* EquivalenceMember - one member expression of an EquivalenceClass
*
* em_is_child signifies that this element was built by transposing a member
* for an appendrel parent relation to represent the corresponding expression
* for an appendrel child. These members are used for determining the
* pathkeys of scans on the child relation and for explicitly sorting the
* child when necessary to build a MergeAppend path for the whole appendrel
* tree. An em_is_child member has no impact on the properties of the EC as a
* whole; in particular the EC's ec_relids field does NOT include the child
* relation. An em_is_child member should never be marked em_is_const nor
* cause ec_has_const or ec_has_volatile to be set, either. Thus, em_is_child
* members are not really full-fledged members of the EC, but just reflections
* or doppelgangers of real members. Most operations on EquivalenceClasses
* should ignore em_is_child members, and those that don't should test
* em_relids to make sure they only consider relevant members.
*
* em_datatype is usually the same as exprType(em_expr), but can be
* different when dealing with a binary-compatible opfamily; in particular
* anyarray_ops would never work without this. Use em_datatype when
* looking up a specific btree operator to work with this expression.
*/
typedef struct EquivalenceMember
{
NodeTag type;
Expr *em_expr; /* the expression represented */
Relids em_relids; /* all relids appearing in em_expr */
Relids em_nullable_relids; /* nullable by lower outer joins */
bool em_is_const; /* expression is pseudoconstant? */
bool em_is_child; /* derived version for a child relation? */
Oid em_datatype; /* the "nominal type" used by the opfamily */
} EquivalenceMember;
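
/*
 * Illustrative sketch (not actual planner code): scanning an EC's members
 * while honoring the em_is_child rules described above.  The relid-set
 * parameter and function name are assumptions for the example.
 */
#ifdef NOT_USED
static void
scan_ec_members(EquivalenceClass *ec, Relids target_relids)
{
	ListCell   *lc;

	foreach(lc, ec->ec_members)
	{
		EquivalenceMember *em = (EquivalenceMember *) lfirst(lc);

		/* ignore child members unless they belong to the rels of interest */
		if (em->em_is_child &&
			!bms_is_subset(em->em_relids, target_relids))
			continue;
		/* ... examine em->em_expr, em->em_datatype, etc. here ... */
	}
}
#endif   /* NOT_USED */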
/*
* PathKeys
*
* The sort ordering of a path is represented by a list of PathKey nodes.
* An empty list implies no known ordering. Otherwise the first item
* represents the primary sort key, the second the first secondary sort key,
* etc. The value being sorted is represented by linking to an
* EquivalenceClass containing that value and including pk_opfamily among its
* ec_opfamilies. The EquivalenceClass tells which collation to use, too.
* This is a convenient method because it makes it trivial to detect
* equivalent and closely-related orderings. (See optimizer/README for more
* information.)
*
* Note: pk_strategy is either BTLessStrategyNumber (for ASC) or
* BTGreaterStrategyNumber (for DESC). We assume that all ordering-capable
* index types will use btree-compatible strategy numbers.
*/
typedef struct PathKey
{
NodeTag type;
EquivalenceClass *pk_eclass; /* the value that is ordered */
Oid pk_opfamily; /* btree opfamily defining the ordering */
int pk_strategy; /* sort direction (ASC or DESC) */
bool pk_nulls_first; /* do NULLs come before normal values? */
} PathKey;
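
/*
 * For example (illustrative): "ORDER BY x DESC" produces a one-element
 * pathkey list whose PathKey has pk_eclass containing x, pk_strategy =
 * BTGreaterStrategyNumber, and pk_nulls_first = true (DESC implies NULLS
 * FIRST by default).
 */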
/*
* ParamPathInfo
*
* All parameterized paths for a given relation with given required outer rels
* link to a single ParamPathInfo, which stores common information such as
* the estimated rowcount for this parameterization. We do this partly to
* avoid recalculations, but mostly to ensure that the estimated rowcount
* is in fact the same for every such path.
*
* Note: ppi_clauses is only used in ParamPathInfos for base relation paths;
* in join cases it's NIL because the set of relevant clauses varies depending
* on how the join is formed. The relevant clauses will appear in each
* parameterized join path's joinrestrictinfo list, instead.
*/
typedef struct ParamPathInfo
{
NodeTag type;
Relids ppi_req_outer; /* rels supplying parameters used by path */
double ppi_rows; /* estimated number of result tuples */
List *ppi_clauses; /* join clauses available from outer rels */
} ParamPathInfo;
/*
* Type "Path" is used as-is for sequential-scan paths, as well as some other
* simple plan types that we don't need any extra information in the path for.
* For other path types it is the first component of a larger struct.
*
* "pathtype" is the NodeTag of the Plan node we could build from this Path.
* It is partially redundant with the Path's NodeTag, but allows us to use
* the same Path type for multiple Plan types when there is no need to
* distinguish the Plan type during path processing.
*
* "param_info", if not NULL, links to a ParamPathInfo that identifies outer
* relation(s) that provide parameter values to each scan of this path.
* That means this path can only be joined to those rels by means of nestloop
* joins with this path on the inside. Also note that a parameterized path
* is responsible for testing all "movable" joinclauses involving this rel
* and the specified outer rel(s).
*
* "rows" is the same as parent->rows in simple paths, but in parameterized
* paths and UniquePaths it can be less than parent->rows, reflecting the
* fact that we've filtered by extra join conditions or removed duplicates.
*
* "pathkeys" is a List of PathKey nodes (see above), describing the sort
* ordering of the path's output rows.
*/
typedef struct Path
{
NodeTag type;
NodeTag pathtype; /* tag identifying scan/join method */
RelOptInfo *parent; /* the relation this path can build */
ParamPathInfo *param_info; /* parameterization info, or NULL if none */
/* estimated size/costs for path (see costsize.c for more info) */
double rows; /* estimated number of result tuples */
Cost startup_cost; /* cost expended before fetching any tuples */
Cost total_cost; /* total cost (assuming all tuples fetched) */
List *pathkeys; /* sort ordering of path's output */
/* pathkeys is a List of PathKey nodes; see above */
} Path;
/* Macro for extracting a path's parameterization relids; beware double eval */
#define PATH_REQ_OUTER(path) \
((path)->param_info ? (path)->param_info->ppi_req_outer : (Relids) NULL)
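
/*
 * The double-evaluation warning matters because the macro substitutes its
 * argument twice, so pass only a side-effect-free expression.  A hedged
 * sketch of safe usage:
 *
 *	Path	   *path = (Path *) lfirst(lc);
 *	Relids		outer_relids = PATH_REQ_OUTER(path);
 *
 * whereas PATH_REQ_OUTER((Path *) lfirst(lc = lnext(lc))) could advance
 * lc twice when the path is parameterized.
 */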
/*----------
* IndexPath represents an index scan over a single index.
*
* This struct is used for both regular indexscans and index-only scans;
* path.pathtype is T_IndexScan or T_IndexOnlyScan to show which is meant.
*
* 'indexinfo' is the index to be scanned.
*
* 'indexclauses' is a list of index qualification clauses, with implicit
* AND semantics across the list. Each clause is a RestrictInfo node from
* the query's WHERE or JOIN conditions. An empty list implies a full
* index scan.
*
* 'indexquals' has the same structure as 'indexclauses', but it contains
* the actual index qual conditions that can be used with the index.
* In simple cases this is identical to 'indexclauses', but when special
* indexable operators appear in 'indexclauses', they are replaced by the
* derived indexscannable conditions in 'indexquals'.
*
* 'indexqualcols' is an integer list of index column numbers (zero-based)
* of the same length as 'indexquals', showing which index column each qual
* is meant to be used with. 'indexquals' is required to be ordered by
* index column, so 'indexqualcols' must form a nondecreasing sequence.
* (The order of multiple quals for the same index column is unspecified.)
*
* 'indexorderbys', if not NIL, is a list of ORDER BY expressions that have
* been found to be usable as ordering operators for an amcanorderbyop index.
* The list must match the path's pathkeys, ie, one expression per pathkey
* in the same order. These are not RestrictInfos, just bare expressions,
* since they generally won't yield booleans. Also, unlike the case for
* quals, it's guaranteed that each expression has the index key on the left
* side of the operator.
*
* 'indexorderbycols' is an integer list of index column numbers (zero-based)
* of the same length as 'indexorderbys', showing which index column each
* ORDER BY expression is meant to be used with. (There is no restriction
* on which index column each ORDER BY can be used with.)
*
* 'indexscandir' is one of:
* ForwardScanDirection: forward scan of an ordered index
* BackwardScanDirection: backward scan of an ordered index
* NoMovementScanDirection: scan of an unordered index, or don't care
* (The executor doesn't care whether it gets ForwardScanDirection or
* NoMovementScanDirection for an indexscan, but the planner wants to
* distinguish ordered from unordered indexes for building pathkeys.)
*
* 'indextotalcost' and 'indexselectivity' are saved in the IndexPath so that
* we need not recompute them when considering using the same index in a
* bitmap index/heap scan (see BitmapHeapPath). The costs of the IndexPath
* itself represent the costs of an IndexScan or IndexOnlyScan plan type.
*----------
*/
typedef struct IndexPath
{
Path path;
IndexOptInfo *indexinfo;
List *indexclauses;
List *indexquals;
List *indexqualcols;
List *indexorderbys;
List *indexorderbycols;
ScanDirection indexscandir;
Cost indextotalcost;
Selectivity indexselectivity;
} IndexPath;
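
/*
 * Illustrative sketch (hypothetical helper, not part of the original
 * header): 'indexquals' and 'indexqualcols' are parallel lists, and the
 * column numbers must form a nondecreasing sequence, so a sanity check
 * over an IndexPath can be written like this.
 */
#ifdef RELATION_H_EXAMPLES
static void
check_indexqualcols_ordered(IndexPath *ipath)
{
	ListCell   *lc;
	int			prevcol = -1;	/* column numbers are zero-based */

	foreach(lc, ipath->indexqualcols)
	{
		int			indexcol = lfirst_int(lc);

		Assert(indexcol >= prevcol);	/* quals are ordered by column */
		prevcol = indexcol;
	}
}
#endif
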
/*
* BitmapHeapPath represents one or more indexscans that generate TID bitmaps
* instead of directly accessing the heap, followed by AND/OR combinations
* to produce a single bitmap, followed by a heap scan that uses the bitmap.
* Note that the output is always considered unordered, since it will come
* out in physical heap order no matter what the underlying indexes did.
*
* The individual indexscans are represented by IndexPath nodes, and any
* logic on top of them is represented by a tree of BitmapAndPath and
* BitmapOrPath nodes. Notice that we can use the same IndexPath node both
* to represent a regular (or index-only) index scan plan, and as the child
* of a BitmapHeapPath that represents scanning the same index using a
* BitmapIndexScan. The startup_cost and total_cost figures of an IndexPath
* always represent the costs to use it as a regular (or index-only)
* IndexScan. The costs of a BitmapIndexScan can be computed using the
* IndexPath's indextotalcost and indexselectivity.
*/
typedef struct BitmapHeapPath
{
Path path;
Path *bitmapqual; /* IndexPath, BitmapAndPath, BitmapOrPath */
} BitmapHeapPath;
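
/*
 * Illustrative sketch (hypothetical index names): a clause such as
 * "(a = 1 OR b = 2) AND c = 3", with usable indexes on a, b, and c, could
 * produce a bitmap path tree of this shape:
 *
 *	BitmapHeapPath
 *	   bitmapqual = BitmapAndPath
 *	      bitmapquals = [BitmapOrPath
 *	                        bitmapquals = [IndexPath(a_idx),
 *	                                       IndexPath(b_idx)],
 *	                     IndexPath(c_idx)]
 */
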
/*
* BitmapAndPath represents a BitmapAnd plan node; it can only appear as
* part of the substructure of a BitmapHeapPath. The Path structure is
* a bit more heavyweight than we really need for this, but for simplicity
* we make it a derivative of Path anyway.
*/
typedef struct BitmapAndPath
{
Path path;
List *bitmapquals; /* IndexPaths and BitmapOrPaths */
Selectivity bitmapselectivity;
} BitmapAndPath;
/*
* BitmapOrPath represents a BitmapOr plan node; it can only appear as
* part of the substructure of a BitmapHeapPath. The Path structure is
* a bit more heavyweight than we really need for this, but for simplicity
* we make it a derivative of Path anyway.
*/
typedef struct BitmapOrPath
{
Path path;
List *bitmapquals; /* IndexPaths and BitmapAndPaths */
Selectivity bitmapselectivity;
} BitmapOrPath;
/*
* TidPath represents a scan by TID
*
* tidquals is an implicitly OR'ed list of qual expressions of the form
* "CTID = pseudoconstant" or "CTID = ANY(pseudoconstant_array)".
* Note they are bare expressions, not RestrictInfos.
*/
typedef struct TidPath
{
Path path;
List *tidquals; /* qual(s) involving CTID = something */
} TidPath;
/*
* ForeignPath represents a potential scan of a foreign table
*
* fdw_private stores FDW private data about the scan. While fdw_private is
* not actually touched by the core code during normal operations, it's
* generally a good idea to use a representation that can be dumped by
* nodeToString(), so that you can examine the structure during debugging
* with tools like pprint().
*/
typedef struct ForeignPath
{
Path path;
List *fdw_private;
} ForeignPath;
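
/*
 * Illustrative sketch (hypothetical FDW helper and field values): building
 * fdw_private from ordinary Value nodes keeps the list dumpable by
 * nodeToString() and printable with pprint().
 */
#ifdef RELATION_H_EXAMPLES
static void
stash_fdw_private(ForeignPath *fpath, long fetch_size, char *remote_relname)
{
	/* both list members are plain Value nodes, so debug tools can show them */
	fpath->fdw_private = list_make2(makeInteger(fetch_size),
									makeString(remote_relname));
}
#endif
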
/*
* CustomPath represents a table scan done by some out-of-core extension.
*
* We provide a set of hooks here - which the provider must take care to set
* up correctly - to allow extensions to supply their own methods of scanning
* a relation. For example, a provider might provide GPU acceleration, a
* cache-based scan, or some other kind of logic we haven't dreamed up yet.
*
* CustomPaths can be injected into the planning process for a relation by
* set_rel_pathlist_hook functions.
*
* Core code must avoid assuming that the CustomPath is only as large as
* the structure declared here; providers are allowed to make it the first
* element in a larger structure. (Since the planner never copies Paths,
* this doesn't add any complication.) However, for consistency with the
* FDW case, we provide a "custom_private" field in CustomPath; providers
* may prefer to use that rather than define another struct type.
*/
struct CustomPath;
#define CUSTOMPATH_SUPPORT_BACKWARD_SCAN 0x0001
#define CUSTOMPATH_SUPPORT_MARK_RESTORE 0x0002
typedef struct CustomPathMethods
{
const char *CustomName;
/* Convert Path to a Plan */
struct Plan *(*PlanCustomPath) (PlannerInfo *root,
RelOptInfo *rel,
struct CustomPath *best_path,
List *tlist,
List *clauses,
List *custom_plans);
/* Optional: print additional fields besides "private" */
void (*TextOutCustomPath) (StringInfo str,
const struct CustomPath *node);
} CustomPathMethods;
typedef struct CustomPath
{
Path path;
uint32 flags; /* mask of CUSTOMPATH_* flags, see above */
List *custom_paths; /* list of child Path nodes, if any */
List *custom_private;
const CustomPathMethods *methods;
} CustomPath;
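
/*
 * Illustrative sketch (hypothetical provider): because the planner never
 * copies Paths, a provider can embed CustomPath as the first field of a
 * larger struct and recover its own state with a simple cast.
 */
#ifdef RELATION_H_EXAMPLES
typedef struct GpuScanPath
{
	CustomPath	cpath;			/* must be the first field */
	int			gpu_device_id;	/* provider-specific state */
} GpuScanPath;

static int
gpuscan_device_id(struct CustomPath *cp)
{
	return ((GpuScanPath *) cp)->gpu_device_id;
}
#endif
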
/*
* AppendPath represents an Append plan, ie, successive execution of
* several member plans.
*
* Note: it is possible for "subpaths" to contain only one, or even no,
* elements. These cases are optimized during create_append_plan.
* In particular, an AppendPath with no subpaths is a "dummy" path that
* is created to represent the case that a relation is provably empty.
*/
typedef struct AppendPath
{
Path path;
List *subpaths; /* list of component Paths */
} AppendPath;
#define IS_DUMMY_PATH(p) \
(IsA((p), AppendPath) && ((AppendPath *) (p))->subpaths == NIL)
/* A relation that's been proven empty will have one path that is dummy */
#define IS_DUMMY_REL(r) \
((r)->cheapest_total_path != NULL && \
IS_DUMMY_PATH((r)->cheapest_total_path))
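
/*
 * Illustrative sketch (hypothetical helper): these macros let planning
 * code short-circuit work on provably empty inputs.
 */
#ifdef RELATION_H_EXAMPLES
static bool
inner_join_is_provably_empty(RelOptInfo *outerrel, RelOptInfo *innerrel)
{
	/* an empty input makes an inner join's output provably empty too */
	return IS_DUMMY_REL(outerrel) || IS_DUMMY_REL(innerrel);
}
#endif
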
/*
* MergeAppendPath represents a MergeAppend plan, ie, the merging of sorted
* results from several member plans to produce similarly-sorted output.
*/
typedef struct MergeAppendPath
{
Path path;
List *subpaths; /* list of component Paths */
double limit_tuples; /* hard limit on output tuples, or -1 */
} MergeAppendPath;
/*
* ResultPath represents use of a Result plan node to compute a variable-free
* targetlist with no underlying tables (a "SELECT expressions" query).
* The query could have a WHERE clause, too, represented by "quals".
*
* Note that quals is a list of bare clauses, not RestrictInfos.
*/
typedef struct ResultPath
{
Path path;
List *quals;
} ResultPath;
/*
* MaterialPath represents use of a Material plan node, i.e., caching of
* the output of its subpath. This is used when the subpath is expensive
* and needs to be scanned repeatedly, or when we need mark/restore ability
* and the subpath doesn't have it.
*/
typedef struct MaterialPath
{
Path path;
Path *subpath;
} MaterialPath;
/*
* UniquePath represents elimination of distinct rows from the output of
* its subpath.
*
* This is unlike the other Path nodes in that it can actually generate
* different plans: either hash-based or sort-based implementation, or a
* no-op if the input path can be proven distinct already. The decision
* is sufficiently localized that it's not worth having separate Path node
* types. (Note: in the no-op case, we could eliminate the UniquePath node
* entirely and just return the subpath; but it's convenient to have a
* UniquePath in the path tree to signal upper-level routines that the input
* is known distinct.)
*/
typedef enum
{
UNIQUE_PATH_NOOP, /* input is known unique already */
UNIQUE_PATH_HASH, /* use hashing */
UNIQUE_PATH_SORT /* use sorting */
} UniquePathMethod;
typedef struct UniquePath
{
Path path;
Path *subpath;
UniquePathMethod umethod;
List *in_operators; /* equality operators of the IN clause */
List *uniq_exprs; /* expressions to be made unique */
} UniquePath;
/*
* All join-type paths share these fields.
*/
typedef struct JoinPath
{
Path path;
JoinType jointype;
Path *outerjoinpath; /* path for the outer side of the join */
Path *innerjoinpath; /* path for the inner side of the join */
List *joinrestrictinfo; /* RestrictInfos to apply to join */
/*
* See the notes for RelOptInfo and ParamPathInfo to understand why
* joinrestrictinfo is needed in JoinPath, and can't be merged into the
* parent RelOptInfo.
*/
} JoinPath;
/*
* A nested-loop path needs no special fields.
*/
typedef JoinPath NestPath;
/*
* A mergejoin path has these fields.
*
* Unlike other path types, a MergePath node doesn't represent just a single
* run-time plan node: it can represent up to four. Aside from the MergeJoin
* node itself, there can be a Sort node for the outer input, a Sort node
* for the inner input, and/or a Material node for the inner input. We could
* represent these nodes by separate path nodes, but considering how many
* different merge paths are investigated during a complex join problem,
* it seems better to avoid unnecessary palloc overhead.
*
* path_mergeclauses lists the clauses (in the form of RestrictInfos)
* that will be used in the merge.
*
* Note that the mergeclauses are a subset of the parent relation's
* restriction-clause list. Any join clauses that are not mergejoinable
* appear only in the parent's restrict list, and must be checked by a
* qpqual at execution time.
*
* outersortkeys (resp. innersortkeys) is NIL if the outer path
* (resp. inner path) is already ordered appropriately for the
* mergejoin. If it is not NIL then it is a PathKeys list describing
* the ordering that must be created by an explicit Sort node.
*
* materialize_inner is TRUE if a Material node should be placed atop the
* inner input. This may appear with or without an inner Sort step.
*/
typedef struct MergePath
{
JoinPath jpath;
List *path_mergeclauses; /* join clauses to be used for merge */
List *outersortkeys; /* keys for explicit sort, if any */
List *innersortkeys; /* keys for explicit sort, if any */
bool materialize_inner; /* add Materialize to inner? */
} MergePath;
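
/*
 * Illustrative sketch (hypothetical helper): the "up to four plan nodes"
 * that a MergePath can expand to follow directly from its fields.
 */
#ifdef RELATION_H_EXAMPLES
static int
mergepath_plan_node_count(MergePath *mpath)
{
	int			count = 1;		/* the MergeJoin node itself */

	if (mpath->outersortkeys != NIL)
		count++;				/* explicit Sort on the outer input */
	if (mpath->innersortkeys != NIL)
		count++;				/* explicit Sort on the inner input */
	if (mpath->materialize_inner)
		count++;				/* Material node atop the inner input */
	return count;
}
#endif
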
/*
* A hashjoin path has these fields.
*
* The remarks above for mergeclauses apply for hashclauses as well.
*
* Hashjoin does not care what order its inputs appear in, so we have
* no need for sortkeys.
*/
typedef struct HashPath
{
JoinPath jpath;
List *path_hashclauses; /* join clauses used for hashing */
int num_batches; /* number of batches expected */
} HashPath;
/*
* Restriction clause info.
*
* We create one of these for each AND sub-clause of a restriction condition
* (WHERE or JOIN/ON clause). Since the restriction clauses are logically
* ANDed, we can use any one of them or any subset of them to filter out
* tuples, without having to evaluate the rest. The RestrictInfo node itself
* stores data used by the optimizer while choosing the best query plan.
*
* If a restriction clause references a single base relation, it will appear
* in the baserestrictinfo list of the RelOptInfo for that base rel.
*
* If a restriction clause references more than one base rel, it will
* appear in the joininfo list of every RelOptInfo that describes a strict
* subset of the base rels mentioned in the clause. The joininfo lists are
* used to drive join tree building by selecting plausible join candidates.
* The clause cannot actually be applied until we have built a join rel
* containing all the base rels it references, however.
*
* When we construct a join rel that includes all the base rels referenced
* in a multi-relation restriction clause, we place that clause into the
* joinrestrictinfo lists of paths for the join rel, if neither left nor
* right sub-path includes all base rels referenced in the clause. The clause
* will be applied at that join level, and will not propagate any further up
* the join tree. (Note: the "predicate migration" code was once intended to
* push restriction clauses up and down the plan tree based on evaluation
* costs, but it's dead code and is unlikely to be resurrected in the
* foreseeable future.)
*
* Note that in the presence of more than two rels, a multi-rel restriction
* might reach different heights in the join tree depending on the join
* sequence we use. So, these clauses cannot be associated directly with
* the join RelOptInfo, but must be kept track of on a per-join-path basis.
*
* RestrictInfos that represent equivalence conditions (i.e., mergejoinable
* equalities that are not outerjoin-delayed) are handled a bit differently.
* Initially we attach them to the EquivalenceClasses that are derived from
* them. When we construct a scan or join path, we look through all the
* EquivalenceClasses and generate derived RestrictInfos representing the
* minimal set of conditions that need to be checked for this particular scan
* or join to enforce that all members of each EquivalenceClass are in fact
* equal in all rows emitted by the scan or join.
*
* When dealing with outer joins we have to be very careful about pushing qual
* clauses up and down the tree. An outer join's own JOIN/ON conditions must
* be evaluated exactly at that join node, unless they are "degenerate"
* conditions that reference only Vars from the nullable side of the join.
* Quals appearing in WHERE or in a JOIN above the outer join cannot be pushed
* down below the outer join, if they reference any nullable Vars.
* RestrictInfo nodes contain a flag to indicate whether a qual has been
* pushed down to a lower level than its original syntactic placement in the
* join tree would suggest. If an outer join prevents us from pushing a qual
* down to its "natural" semantic level (the level associated with just the
* base rels used in the qual) then we mark the qual with a "required_relids"
* value including more than just the base rels it actually uses. By
* pretending that the qual references all the rels required to form the outer
* join, we prevent it from being evaluated below the outer join's joinrel.
* When we do form the outer join's joinrel, we still need to distinguish
* those quals that are actually in that join's JOIN/ON condition from those
* that appeared elsewhere in the tree and were pushed down to the join rel
* because they used no other rels. That's what the is_pushed_down flag is
* for; it tells us that a qual is not an OUTER JOIN qual for the set of base
* rels listed in required_relids. A clause that originally came from WHERE
* or an INNER JOIN condition will *always* have its is_pushed_down flag set.
* It's possible for an OUTER JOIN clause to be marked is_pushed_down too,
* if we decide that it can be pushed down into the nullable side of the join.
* In that case it acts as a plain filter qual for wherever it gets evaluated.
* (In short, is_pushed_down is only false for non-degenerate outer join
* conditions. Possibly we should rename it to reflect that meaning?)
*
* RestrictInfo nodes also contain an outerjoin_delayed flag, which is true
* if the clause's applicability must be delayed due to any outer joins
* appearing below it (ie, it has to be postponed to some join level higher
* than the set of relations it actually references).
*
* There is also an outer_relids field, which is NULL except for outer join
* clauses; for those, it is the set of relids on the outer side of the
* clause's outer join. (These are rels that the clause cannot be applied to
* in parameterized scans, since pushing it into the join's outer side would
* lead to wrong answers.)
*
* There is also a nullable_relids field, which is the set of rels the clause
* references that can be forced null by some outer join below the clause.
*
* outerjoin_delayed = true is subtly different from nullable_relids != NULL:
* a clause might reference some nullable rels and yet not be
* outerjoin_delayed because it also references all the other rels of the
* outer join(s). A clause that is not outerjoin_delayed can be enforced
* anywhere it is computable.
*
* In general, the referenced clause might be arbitrarily complex. The
* kinds of clauses we can handle as indexscan quals, mergejoin clauses,
* or hashjoin clauses are limited (e.g., no volatile functions). The code
* for each kind of path is responsible for identifying the restrict clauses
* it can use and ignoring the rest. Clauses not implemented by an indexscan,
* mergejoin, or hashjoin will be placed in the plan qual or joinqual field
* of the finished Plan node, where they will be enforced by general-purpose
* qual-expression-evaluation code. (But we are still entitled to count
* their selectivity when estimating the result tuple count, if we
* can guess what it is...)
*
* When the referenced clause is an OR clause, we generate a modified copy
* in which additional RestrictInfo nodes are inserted below the top-level
* OR/AND structure. This is a convenience for OR indexscan processing:
* indexquals taken from either the top level or an OR subclause will have
* associated RestrictInfo nodes.
*
* The can_join flag is set true if the clause looks potentially useful as
* a merge or hash join clause, that is if it is a binary opclause with
* nonoverlapping sets of relids referenced in the left and right sides.
* (Whether the operator is actually merge or hash joinable isn't checked,
* however.)
*
* The pseudoconstant flag is set true if the clause contains no Vars of
* the current query level and no volatile functions. Such a clause can be
* pulled out and used as a one-time qual in a gating Result node. We keep
* pseudoconstant clauses in the same lists as other RestrictInfos so that
* the regular clause-pushing machinery can assign them to the correct join
* level, but they need to be treated specially for cost and selectivity
* estimates. Note that a pseudoconstant clause can never be an indexqual
* or merge or hash join clause, so it's of no interest to large parts of
* the planner.
*
* When join clauses are generated from EquivalenceClasses, there may be
* several equally valid ways to enforce join equivalence, of which we need
* apply only one. We mark clauses of this kind by setting parent_ec to
* point to the generating EquivalenceClass. Multiple clauses with the same
* parent_ec in the same join are redundant.
*/
typedef struct RestrictInfo
{
NodeTag type;
Expr *clause; /* the represented clause of WHERE or JOIN */
bool is_pushed_down; /* TRUE if clause was pushed down in level */
bool outerjoin_delayed; /* TRUE if delayed by lower outer join */
bool can_join; /* see comment above */
bool pseudoconstant; /* see comment above */
/* The set of relids (varnos) actually referenced in the clause: */
Relids clause_relids;
/* The set of relids required to evaluate the clause: */
Relids required_relids;
/* If an outer-join clause, the outer-side relations, else NULL: */
Relids outer_relids;
/* The relids used in the clause that are nullable by lower outer joins: */
Relids nullable_relids;
/* These fields are set for any binary opclause: */
Relids left_relids; /* relids in left side of clause */
Relids right_relids; /* relids in right side of clause */
/* This field is NULL unless clause is an OR clause: */
Expr *orclause; /* modified clause with RestrictInfos */
/* This field is NULL unless clause is potentially redundant: */
EquivalenceClass *parent_ec; /* generating EquivalenceClass */
/* cache space for cost and selectivity */
QualCost eval_cost; /* eval cost of clause; -1 if not yet set */
Selectivity norm_selec; /* selectivity for "normal" (JOIN_INNER)
* semantics; -1 if not yet set; >1 means a
* redundant clause */
Selectivity outer_selec; /* selectivity for outer join semantics; -1 if
* not yet set */
/* valid if clause is mergejoinable, else NIL */
List *mergeopfamilies; /* opfamilies containing clause operator */
/* cache space for mergeclause processing; NULL if not yet set */
EquivalenceClass *left_ec; /* EquivalenceClass containing lefthand */
EquivalenceClass *right_ec; /* EquivalenceClass containing righthand */
EquivalenceMember *left_em; /* EquivalenceMember for lefthand */
EquivalenceMember *right_em; /* EquivalenceMember for righthand */
List *scansel_cache; /* list of MergeScanSelCache structs */
/* transient workspace for use while considering a specific join path */
bool outer_is_left; /* T = outer var on left, F = on right */
/* valid if clause is hashjoinable, else InvalidOid: */
Oid hashjoinoperator; /* copy of clause operator */
/* cache space for hashclause processing; -1 if not yet set */
Selectivity left_bucketsize; /* avg bucketsize of left side */
Selectivity right_bucketsize; /* avg bucketsize of right side */
} RestrictInfo;
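
/*
 * Illustrative sketch (hypothetical helper): required_relids is what
 * decides where a clause may be evaluated; bms_is_subset() expresses
 * that test directly.
 */
#ifdef RELATION_H_EXAMPLES
static bool
clause_is_computable_at(RestrictInfo *rinfo, Relids joinrelids)
{
	/* every rel the clause requires must be present at this join level */
	return bms_is_subset(rinfo->required_relids, joinrelids);
}
#endif
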
/*
* Since mergejoinscansel() is a relatively expensive function, and would
* otherwise be invoked many times while planning a large join tree,
* we go out of our way to cache its results. Each mergejoinable
* RestrictInfo carries a list of the specific sort orderings that have
* been considered for use with it, and the resulting selectivities.
*/
typedef struct MergeScanSelCache
{
/* Ordering details (cache lookup key) */
Oid opfamily; /* btree opfamily defining the ordering */
Oid collation; /* collation for the ordering */
int strategy; /* sort direction (ASC or DESC) */
bool nulls_first; /* do NULLs come before normal values? */
/* Results */
Selectivity leftstartsel; /* first-join fraction for clause left side */
Selectivity leftendsel; /* last-join fraction for clause left side */
Selectivity rightstartsel; /* first-join fraction for clause right side */
Selectivity rightendsel; /* last-join fraction for clause right side */
} MergeScanSelCache;
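
/*
 * Illustrative sketch (hypothetical helper, not the planner's actual
 * implementation): a cache probe is just a linear search of scansel_cache
 * for a matching ordering; on a miss the caller computes fresh results
 * and appends a new entry.
 */
#ifdef RELATION_H_EXAMPLES
static MergeScanSelCache *
lookup_mergescansel_cache(RestrictInfo *rinfo, Oid opfamily, Oid collation,
						  int strategy, bool nulls_first)
{
	ListCell   *lc;

	foreach(lc, rinfo->scansel_cache)
	{
		MergeScanSelCache *cache = (MergeScanSelCache *) lfirst(lc);

		if (cache->opfamily == opfamily &&
			cache->collation == collation &&
			cache->strategy == strategy &&
			cache->nulls_first == nulls_first)
			return cache;		/* cache hit */
	}
	return NULL;				/* cache miss */
}
#endif
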
/*
* Placeholder node for an expression to be evaluated below the top level
* of a plan tree. This is used during planning to represent the contained
* expression. At the end of the planning process it is replaced by either
* the contained expression or a Var referring to a lower-level evaluation of
* the contained expression. Typically the evaluation occurs below an outer
* join, and Var references above the outer join might thereby yield NULL
* instead of the expression value.
*
* Although the planner treats this as an expression node type, it is not
* recognized by the parser or executor, so we declare it here rather than
* in primnodes.h.
*/
typedef struct PlaceHolderVar
{
Expr xpr;
Expr *phexpr; /* the represented expression */
Relids phrels; /* base relids syntactically within expr src */
Index phid; /* ID for PHV (unique within planner run) */
Index phlevelsup; /* > 0 if PHV belongs to outer query */
} PlaceHolderVar;
/*
* "Special join" info.
*
* One-sided outer joins constrain the order of joining partially but not
* completely. We flatten such joins into the planner's top-level list of
* relations to join, but record information about each outer join in a
* SpecialJoinInfo struct. These structs are kept in the PlannerInfo node's
* join_info_list.
*
* Similarly, semijoins and antijoins created by flattening IN (subselect)
* and EXISTS(subselect) clauses create partial constraints on join order.
* These are likewise recorded in SpecialJoinInfo structs.
*
* We make SpecialJoinInfos for FULL JOINs even though there is no flexibility
* of planning for them, because this simplifies make_join_rel()'s API.
*
* min_lefthand and min_righthand are the sets of base relids that must be
* available on each side when performing the special join. lhs_strict is
* true if the special join's condition cannot succeed when the LHS variables
* are all NULL (this means that an outer join can commute with upper-level
* outer joins even if it appears in their RHS). We don't bother to set
* lhs_strict for FULL JOINs, however.
*
* It is not valid for either min_lefthand or min_righthand to be empty sets;
* if they were, this would break the logic that enforces join order.
*
* syn_lefthand and syn_righthand are the sets of base relids that are
* syntactically below this special join. (These are needed to help compute
* min_lefthand and min_righthand for higher joins.)
*
* delay_upper_joins is set TRUE if we detect a pushed-down clause that has
* to be evaluated after this join is formed (because it references the RHS).
* Any outer joins that have such a clause and this join in their RHS cannot
 * commute with this join, because that would leave no place to check the
* pushed-down clause. (We don't track this for FULL JOINs, either.)
*
* For a semijoin, we also extract the join operators and their RHS arguments
* and set semi_operators, semi_rhs_exprs, semi_can_btree, and semi_can_hash.
* This is done in support of possibly unique-ifying the RHS, so we don't
* bother unless at least one of semi_can_btree and semi_can_hash can be set
* true. (You might expect that this information would be computed during
* join planning; but it's helpful to have it available during planning of
* parameterized table scans, so we store it in the SpecialJoinInfo structs.)
*
* jointype is never JOIN_RIGHT; a RIGHT JOIN is handled by switching
* the inputs to make it a LEFT JOIN. So the allowed values of jointype
* in a join_info_list member are only LEFT, FULL, SEMI, or ANTI.
*
* For purposes of join selectivity estimation, we create transient
* SpecialJoinInfo structures for regular inner joins; so it is possible
* to have jointype == JOIN_INNER in such a structure, even though this is
* not allowed within join_info_list. We also create transient
* SpecialJoinInfos with jointype == JOIN_INNER for outer joins, since for
* cost estimation purposes it is sometimes useful to know the join size under
* plain innerjoin semantics. Note that lhs_strict, delay_upper_joins, and
* of course the semi_xxx fields are not set meaningfully within such structs.
*/
typedef struct SpecialJoinInfo
{
NodeTag type;
Relids min_lefthand; /* base relids in minimum LHS for join */
Relids min_righthand; /* base relids in minimum RHS for join */
Relids syn_lefthand; /* base relids syntactically within LHS */
Relids syn_righthand; /* base relids syntactically within RHS */
JoinType jointype; /* always INNER, LEFT, FULL, SEMI, or ANTI */
bool lhs_strict; /* joinclause is strict for some LHS rel */
bool delay_upper_joins; /* can't commute with upper RHS */
/* Remaining fields are set only for JOIN_SEMI jointype: */
bool semi_can_btree; /* true if semi_operators are all btree */
bool semi_can_hash; /* true if semi_operators are all hash */
List *semi_operators; /* OIDs of equality join operators */
List *semi_rhs_exprs; /* righthand-side expressions of these ops */
} SpecialJoinInfo;
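/*
 * Illustrative sketch, never compiled here (the RELATION_H_EXAMPLES guard is
 * not defined anywhere): one way a consumer such as create_unique_path()
 * might consult the semi_* fields when deciding whether a semijoin's RHS can
 * be unique-ified.  The function name is hypothetical.
 */
#ifdef RELATION_H_EXAMPLES
static bool
example_semi_rhs_uniqueifiable(SpecialJoinInfo *sjinfo)
{
	/* the semi_* fields are only valid for JOIN_SEMI */
	if (sjinfo->jointype != JOIN_SEMI)
		return false;
	/* need either sort-based or hash-based unique-ification to work */
	return sjinfo->semi_can_btree || sjinfo->semi_can_hash;
}
#endif   /* RELATION_H_EXAMPLES */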
/*
* "Lateral join" info.
*
* Lateral references constrain the join order in a way that's somewhat like
* outer joins, though different in detail. We construct a LateralJoinInfo
* for each lateral cross-reference, placing them in the PlannerInfo node's
* lateral_info_list.
*
* For unflattened LATERAL RTEs, we generate LateralJoinInfo(s) in which
* lateral_rhs is the relid of the LATERAL baserel, and lateral_lhs is a set
* of relids of baserels it references, all of which must be present on the
* LHS to compute a parameter needed by the RHS. Typically, lateral_lhs is
* a singleton, but it can include multiple rels if the RHS references a
* PlaceHolderVar with a multi-rel ph_eval_at level. We disallow joining to
* only part of the LHS in such cases, since that would result in a join tree
* with no convenient place to compute the PHV.
*
* When an appendrel contains lateral references (eg "LATERAL (SELECT x.col1
* UNION ALL SELECT y.col2)"), the LateralJoinInfos reference the parent
* baserel not the member otherrels, since it is the parent relid that is
* considered for joining purposes.
*
* If any LATERAL RTEs were flattened into the parent query, it is possible
* that the query now contains PlaceHolderVars containing lateral references,
* representing expressions that need to be evaluated at particular spots in
* the jointree but contain lateral references to Vars from elsewhere. These
* give rise to LateralJoinInfos in which lateral_rhs is the evaluation point
* of a PlaceHolderVar and lateral_lhs is the set of lateral rels it needs.
*/
typedef struct LateralJoinInfo
{
NodeTag type;
Relids lateral_lhs; /* rels needed to compute a lateral value */
Relids lateral_rhs; /* rel where lateral value is needed */
} LateralJoinInfo;
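/*
 * Illustrative sketch, never compiled here: a simplified version of the
 * join-order rule this struct encodes.  A proposed join that includes any of
 * a LateralJoinInfo's lateral_rhs rels must include all of its lateral_lhs
 * rels as well.  The function name is hypothetical; the real (and more
 * elaborate) enforcement lives in the join search code.
 */
#ifdef RELATION_H_EXAMPLES
static bool
example_lateral_refs_satisfied(PlannerInfo *root, Relids proposed_joinrelids)
{
	ListCell   *lc;

	foreach(lc, root->lateral_info_list)
	{
		LateralJoinInfo *ljinfo = (LateralJoinInfo *) lfirst(lc);

		/* if any RHS rel is in the join, every LHS rel must be too */
		if (bms_overlap(ljinfo->lateral_rhs, proposed_joinrelids) &&
			!bms_is_subset(ljinfo->lateral_lhs, proposed_joinrelids))
			return false;
	}
	return true;
}
#endif   /* RELATION_H_EXAMPLES */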
/*
* Append-relation info.
*
* When we expand an inheritable table or a UNION-ALL subselect into an
* "append relation" (essentially, a list of child RTEs), we build an
* AppendRelInfo for each child RTE. The list of AppendRelInfos indicates
* which child RTEs must be included when expanding the parent, and each
* node carries information needed to translate Vars referencing the parent
* into Vars referencing that child.
*
* These structs are kept in the PlannerInfo node's append_rel_list.
* Note that we just throw all the structs into one list, and scan the
* whole list when desiring to expand any one parent. We could have used
* a more complex data structure (eg, one list per parent), but this would
* be harder to update during operations such as pulling up subqueries,
* and not really any easier to scan. Considering that typical queries
* will not have many different append parents, it doesn't seem worthwhile
* to complicate things.
*
* Note: after completion of the planner prep phase, any given RTE is an
* append parent having entries in append_rel_list if and only if its
* "inh" flag is set. We clear "inh" for plain tables that turn out not
* to have inheritance children, and (in an abuse of the original meaning
* of the flag) we set "inh" for subquery RTEs that turn out to be
* flattenable UNION ALL queries. This lets us avoid useless searches
* of append_rel_list.
*
* Note: the data structure assumes that append-rel members are single
* baserels. This is OK for inheritance, but it prevents us from pulling
* up a UNION ALL member subquery if it contains a join. While that could
* be fixed with a more complex data structure, at present there's not much
* point because no improvement in the plan could result.
*/
typedef struct AppendRelInfo
{
NodeTag type;
/*
* These fields uniquely identify this append relationship. There can be
* (in fact, always should be) multiple AppendRelInfos for the same
* parent_relid, but never more than one per child_relid, since a given
* RTE cannot be a child of more than one append parent.
*/
Index parent_relid; /* RT index of append parent rel */
Index child_relid; /* RT index of append child rel */
/*
* For an inheritance appendrel, the parent and child are both regular
* relations, and we store their rowtype OIDs here for use in translating
* whole-row Vars. For a UNION-ALL appendrel, the parent and child are
* both subqueries with no named rowtype, and we store InvalidOid here.
*/
Oid parent_reltype; /* OID of parent's composite type */
Oid child_reltype; /* OID of child's composite type */
/*
* The N'th element of this list is a Var or expression representing the
* child column corresponding to the N'th column of the parent. This is
* used to translate Vars referencing the parent rel into references to
* the child. A list element is NULL if it corresponds to a dropped
* column of the parent (this is only possible for inheritance cases, not
* UNION ALL). The list elements are always simple Vars for inheritance
* cases, but can be arbitrary expressions in UNION ALL cases.
*
* Notice we only store entries for user columns (attno > 0). Whole-row
* Vars are special-cased, and system columns (attno < 0) need no special
* translation since their attnos are the same for all tables.
*
* Caution: the Vars have varlevelsup = 0. Be careful to adjust as needed
* when copying into a subquery.
*/
List *translated_vars; /* Expressions in terms of the child's Vars */
/*
* We store the parent table's OID here for inheritance, or InvalidOid for
* UNION ALL. This is only needed to help in generating error messages if
* an attempt is made to reference a dropped parent column.
*/
Oid parent_reloid; /* OID of parent relation */
} AppendRelInfo;
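/*
 * Illustrative sketch, never compiled here: fetching the child expression
 * for a parent user column via translated_vars.  Assumes utils/lsyscache.h
 * for get_rel_name(); the function name is hypothetical (the real
 * translation machinery is adjust_appendrel_attrs() in prepunion.c).
 */
#ifdef RELATION_H_EXAMPLES
static Node *
example_translate_parent_column(AppendRelInfo *appinfo, AttrNumber parent_attno)
{
	/* translated_vars is indexed by parent attno - 1 (user columns only) */
	Node	   *child_expr = (Node *) list_nth(appinfo->translated_vars,
											   parent_attno - 1);

	/* a NULL element means the parent column was dropped */
	if (child_expr == NULL)
		elog(ERROR, "attribute %d of relation \"%s\" does not exist",
			 (int) parent_attno, get_rel_name(appinfo->parent_reloid));
	return child_expr;
}
#endif   /* RELATION_H_EXAMPLES */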
/*
* For each distinct placeholder expression generated during planning, we
* store a PlaceHolderInfo node in the PlannerInfo node's placeholder_list.
* This stores info that is needed centrally rather than in each copy of the
* PlaceHolderVar. The phid fields identify which PlaceHolderInfo goes with
* each PlaceHolderVar. Note that phid is unique throughout a planner run,
* not just within a query level --- this is so that we need not reassign IDs
* when pulling a subquery into its parent.
*
* The idea is to evaluate the expression at (only) the ph_eval_at join level,
* then allow it to bubble up like a Var until the ph_needed join level.
* ph_needed has the same definition as attr_needed for a regular Var.
*
* The PlaceHolderVar's expression might contain LATERAL references to vars
* coming from outside its syntactic scope. If so, those rels are *not*
* included in ph_eval_at, but they are recorded in ph_lateral.
*
* Notice that when ph_eval_at is a join rather than a single baserel, the
* PlaceHolderInfo may create constraints on join order: the ph_eval_at join
* has to be formed below any outer joins that should null the PlaceHolderVar.
*
* We create a PlaceHolderInfo only after determining that the PlaceHolderVar
* is actually referenced in the plan tree, so that unreferenced placeholders
* don't result in unnecessary constraints on join order.
*/
typedef struct PlaceHolderInfo
{
NodeTag type;
Index phid; /* ID for PH (unique within planner run) */
PlaceHolderVar *ph_var; /* copy of PlaceHolderVar tree */
Relids ph_eval_at; /* lowest level we can evaluate value at */
Relids ph_lateral; /* relids of contained lateral refs, if any */
Relids ph_needed; /* highest level the value is needed at */
int32 ph_width; /* estimated attribute width */
} PlaceHolderInfo;
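/*
 * Illustrative sketch, never compiled here: the basic evaluation rule that
 * ph_eval_at implies.  A proposed join relation can compute the
 * PlaceHolderVar's value only once it contains all of ph_eval_at; below
 * that level the value simply does not exist.  Hypothetical helper name.
 */
#ifdef RELATION_H_EXAMPLES
static bool
example_can_evaluate_phv(PlaceHolderInfo *phinfo, Relids proposed_joinrelids)
{
	/* the evaluation level must lie wholly within the proposed join */
	return bms_is_subset(phinfo->ph_eval_at, proposed_joinrelids);
}
#endif   /* RELATION_H_EXAMPLES */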
/*
* For each potentially index-optimizable MIN/MAX aggregate function,
* root->minmax_aggs stores a MinMaxAggInfo describing it.
*/
typedef struct MinMaxAggInfo
{
NodeTag type;
Oid aggfnoid; /* pg_proc Oid of the aggregate */
Oid aggsortop; /* Oid of its sort operator */
Expr *target; /* expression we are aggregating on */
PlannerInfo *subroot; /* modified "root" for planning the subquery */
Path *path; /* access path for subquery */
Cost pathcost; /* estimated cost to fetch first row */
Param *param; /* param for subplan's output */
} MinMaxAggInfo;
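/*
 * For illustration, the rewrite this supports (see planagg.c) turns, e.g.,
 *		SELECT MIN(x) FROM tab
 * into an ordered subquery of roughly the form
 *		SELECT x FROM tab WHERE x IS NOT NULL ORDER BY x LIMIT 1
 * "subroot" and "path" describe the plan for that subquery, and "param"
 * carries its single output value back into the outer plan.
 */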
/*
* At runtime, PARAM_EXEC slots are used to pass values around from one plan
* node to another. They can be used to pass values down into subqueries (for
* outer references in subqueries), or up out of subqueries (for the results
* of a subplan), or from a NestLoop plan node into its inner relation (when
* the inner scan is parameterized with values from the outer relation).
* The planner is responsible for assigning nonconflicting PARAM_EXEC IDs to
* the PARAM_EXEC Params it generates.
*
* Outer references are managed via root->plan_params, which is a list of
* PlannerParamItems. While planning a subquery, each parent query level's
* plan_params contains the values required from it by the current subquery.
* During create_plan(), we use plan_params to track values that must be
* passed from outer to inner sides of NestLoop plan nodes.
*
* The item a PlannerParamItem represents can be one of three kinds:
*
* A Var: the slot represents a variable of this level that must be passed
* down because subqueries have outer references to it, or must be passed
* from a NestLoop node to its inner scan. The varlevelsup value in the Var
* will always be zero.
*
* A PlaceHolderVar: this works much like the Var case, except that the
* entry is a PlaceHolderVar node with a contained expression. The PHV
* will have phlevelsup = 0, and the contained expression is adjusted
* to match in level.
*
* An Aggref (with an expression tree representing its argument): the slot
* represents an aggregate expression that is an outer reference for some
* subquery. The Aggref itself has agglevelsup = 0, and its argument tree
* is adjusted to match in level.
*
* Note: we detect duplicate Var and PlaceHolderVar parameters and coalesce
* them into one slot, but we do not bother to do that for Aggrefs.
* The scope of duplicate-elimination only extends across the set of
* parameters passed from one query level into a single subquery, or for
* nestloop parameters across the set of nestloop parameters used in a single
* query level. So there is no possibility of a PARAM_EXEC slot being used
* for conflicting purposes.
*
* In addition, PARAM_EXEC slots are assigned for Params representing outputs
* from subplans (values that are setParam items for those subplans). These
* IDs need not be tracked via PlannerParamItems, since we do not need any
* duplicate-elimination nor later processing of the represented expressions.
* Instead, we just record the assignment of the slot number by incrementing
* root->glob->nParamExec.
*/
typedef struct PlannerParamItem
{
NodeTag type;
Node *item; /* the Var, PlaceHolderVar, or Aggref */
int paramId; /* its assigned PARAM_EXEC slot number */
} PlannerParamItem;
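/*
 * Illustrative sketch, never compiled here: as described above, reserving a
 * PARAM_EXEC slot for a subplan output requires no PlannerParamItem, only a
 * counter bump (compare SS_assign_special_param() in subselect.c).  The
 * function name is hypothetical.
 */
#ifdef RELATION_H_EXAMPLES
static int
example_reserve_exec_param(PlannerInfo *root)
{
	/* allocate and return the next PARAM_EXEC slot number */
	return root->glob->nParamExec++;
}
#endif   /* RELATION_H_EXAMPLES */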
/*
* When making cost estimates for a SEMI or ANTI join, there are some
* correction factors that are needed in both nestloop and hash joins
* to account for the fact that the executor can stop scanning inner rows
* as soon as it finds a match to the current outer row. These numbers
* depend only on the selected outer and inner join relations, not on the
* particular paths used for them, so it's worthwhile to calculate them
* just once per relation pair not once per considered path. This struct
* is filled by compute_semi_anti_join_factors and must be passed along
* to the join cost estimation functions.
*
* outer_match_frac is the fraction of the outer tuples that are
* expected to have at least one match.
* match_count is the average number of matches expected for
* outer tuples that have at least one match.
*/
typedef struct SemiAntiJoinFactors
{
Selectivity outer_match_frac;
Selectivity match_count;
} SemiAntiJoinFactors;
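/*
 * Illustrative sketch, never compiled here: roughly how these factors feed a
 * nestloop SEMI/ANTI estimate.  Outer tuples with a match stop the inner
 * scan early, so they are charged only a fraction of one inner scan, while
 * unmatched outer tuples are charged a full scan.  This is simplified from
 * the real logic in costsize.c; names are illustrative.
 */
#ifdef RELATION_H_EXAMPLES
static Cost
example_semi_inner_cost(Cost inner_run_cost, double outer_rows,
						SemiAntiJoinFactors *semifactors)
{
	double		matched_rows = outer_rows * semifactors->outer_match_frac;
	/* with match_count matches per scan, expect to stop partway through */
	double		inner_scan_frac = 2.0 / (semifactors->match_count + 1.0);

	return matched_rows * inner_run_cost * inner_scan_frac +
		(outer_rows - matched_rows) * inner_run_cost;
}
#endif   /* RELATION_H_EXAMPLES */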
/*
* Struct for extra information passed to subroutines of add_paths_to_joinrel
*
* restrictlist contains all of the RestrictInfo nodes for restriction
* clauses that apply to this join
* mergeclause_list is a list of RestrictInfo nodes for available
* mergejoin clauses in this join
* sjinfo is extra info about special joins for selectivity estimation
* semifactors is as shown above (only valid for SEMI or ANTI joins)
* param_source_rels are OK targets for parameterization of result paths
* extra_lateral_rels are additional parameterization for result paths
*/
typedef struct JoinPathExtraData
{
List *restrictlist;
List *mergeclause_list;
SpecialJoinInfo *sjinfo;
SemiAntiJoinFactors semifactors;
Relids param_source_rels;
Relids extra_lateral_rels;
} JoinPathExtraData;
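/*
 * Illustrative sketch, never compiled here: a simplified form of the check
 * joinpath.c applies with param_source_rels.  A parameterized join path is
 * normally worth keeping only if its required outer rels overlap that
 * precomputed set.  The function name is hypothetical.
 */
#ifdef RELATION_H_EXAMPLES
static bool
example_param_source_ok(Relids required_outer, JoinPathExtraData *extra)
{
	/* unparameterized paths are always acceptable on this score */
	if (required_outer == NULL)
		return true;
	return bms_overlap(required_outer, extra->param_source_rels);
}
#endif   /* RELATION_H_EXAMPLES */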
/*
* For speed reasons, cost estimation for join paths is performed in two
* phases: the first phase tries to quickly derive a lower bound for the
* join cost, and then we check if that's sufficient to reject the path.
* If not, we come back for a more refined cost estimate. The first phase
* fills a JoinCostWorkspace struct with its preliminary cost estimates
* and possibly additional intermediate values. The second phase takes
* these values as inputs to avoid repeating work.
*
* (Ideally we'd declare this in cost.h, but it's also needed in pathnode.h,
* so seems best to put it here.)
*/
typedef struct JoinCostWorkspace
{
/* Preliminary cost estimates --- must not be larger than final ones! */
Cost startup_cost; /* cost expended before fetching any tuples */
Cost total_cost; /* total cost (assuming all tuples fetched) */
/* Fields below here should be treated as private to costsize.c */
Cost run_cost; /* non-startup cost components */
/* private for cost_nestloop code */
Cost inner_run_cost; /* also used by cost_mergejoin code */
Cost inner_rescan_run_cost;
/* private for cost_mergejoin code */
double outer_rows;
double inner_rows;
double outer_skip_rows;
double inner_skip_rows;
/* private for cost_hashjoin code */
int numbuckets;
int numbatches;
} JoinCostWorkspace;
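/*
 * Illustrative sketch, never compiled here: the two-phase costing pattern as
 * it might appear for a nestloop (simplified from joinpath.c; assumes
 * optimizer/cost.h and optimizer/pathnode.h for the function declarations).
 * Phase one fills the workspace with a cheap lower bound; only if
 * add_path_precheck() says such a path could survive do we build the path,
 * whose constructor runs the refined phase-two costing.
 */
#ifdef RELATION_H_EXAMPLES
static void
example_try_nestloop_path(PlannerInfo *root, RelOptInfo *joinrel,
						  JoinType jointype, Path *outer_path,
						  Path *inner_path, List *pathkeys,
						  Relids required_outer, JoinPathExtraData *extra)
{
	JoinCostWorkspace workspace;

	/* phase 1: quick lower-bound estimate */
	initial_cost_nestloop(root, &workspace, jointype,
						  outer_path, inner_path,
						  extra->sjinfo, &extra->semifactors);
	/* bail out if even the lower bound cannot beat existing paths */
	if (!add_path_precheck(joinrel,
						   workspace.startup_cost, workspace.total_cost,
						   pathkeys, required_outer))
		return;
	/* phase 2: create_nestloop_path() runs final_cost_nestloop() inside */
	add_path(joinrel, (Path *)
			 create_nestloop_path(root, joinrel, jointype, &workspace,
								  extra->sjinfo, &extra->semifactors,
								  outer_path, inner_path,
								  extra->restrictlist,
								  pathkeys, required_outer));
}
#endif   /* RELATION_H_EXAMPLES */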
#endif /* RELATION_H */