/*-------------------------------------------------------------------------
 *
 * nodeSubqueryscan.c
 *	  Support routines for scanning subqueries (subselects in rangetable).
 *
 * This is just enough different from sublinks (nodeSubplan.c) to mean that
 * we need two sets of code.  Ought to look at trying to unify the cases.
 *
 *
 * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
 * Portions Copyright (c) 1994, Regents of the University of California
 *
 *
 * IDENTIFICATION
 *	  src/backend/executor/nodeSubqueryscan.c
 *
 *-------------------------------------------------------------------------
 */
/*
 * INTERFACE ROUTINES
 *		ExecSubqueryScan		scans a subquery.
 *		ExecSubqueryNext		retrieve next tuple in sequential order.
 *		ExecInitSubqueryScan	creates and initializes a subqueryscan node.
 *		ExecEndSubqueryScan		releases any storage allocated.
 *		ExecReScanSubqueryScan	rescans the relation
 *
 */
#include "postgres.h"

#include "executor/execdebug.h"
#include "executor/nodeSubqueryscan.h"

static TupleTableSlot *SubqueryNext(SubqueryScanState *node);

/* ----------------------------------------------------------------
 *						Scan Support
 * ----------------------------------------------------------------
 */
/* ----------------------------------------------------------------
 *		SubqueryNext
 *
 *		This is a workhorse for ExecSubqueryScan
 * ----------------------------------------------------------------
 */
static TupleTableSlot *
SubqueryNext(SubqueryScanState *node)
{
	TupleTableSlot *slot;

	/*
	 * Get the next tuple from the sub-query.
	 */
	slot = ExecProcNode(node->subplan);

	/*
	 * We just return the subplan's result slot, rather than expending extra
	 * cycles for ExecCopySlot().  (Our own ScanTupleSlot is used only for
	 * EvalPlanQual rechecks.)
	 */
	return slot;
}

/*
 * SubqueryRecheck -- access method routine to recheck a tuple in EvalPlanQual
 */
static bool
SubqueryRecheck(SubqueryScanState *node, TupleTableSlot *slot)
{
	/* nothing to check */
	return true;
}

/* ----------------------------------------------------------------
 *		ExecSubqueryScan(node)
 *
 *		Scans the subquery sequentially and returns the next qualifying
 *		tuple.
 *		We call the ExecScan() routine and pass it the appropriate
 *		access method functions.
 * ----------------------------------------------------------------
 */
static TupleTableSlot *
ExecSubqueryScan(PlanState *pstate)
{
	SubqueryScanState *node = castNode(SubqueryScanState, pstate);

	return ExecScan(&node->ss,
					(ExecScanAccessMtd) SubqueryNext,
					(ExecScanRecheckMtd) SubqueryRecheck);
}

/* ----------------------------------------------------------------
 *		ExecInitSubqueryScan
 * ----------------------------------------------------------------
 */
SubqueryScanState *
ExecInitSubqueryScan(SubqueryScan *node, EState *estate, int eflags)
{
	SubqueryScanState *subquerystate;

	/* check for unsupported flags */
	Assert(!(eflags & EXEC_FLAG_MARK));

	/* SubqueryScan should not have any "normal" children */
	Assert(outerPlan(node) == NULL);
	Assert(innerPlan(node) == NULL);

	/*
	 * create state structure
	 */
	subquerystate = makeNode(SubqueryScanState);
	subquerystate->ss.ps.plan = (Plan *) node;
	subquerystate->ss.ps.state = estate;
	subquerystate->ss.ps.ExecProcNode = ExecSubqueryScan;

	/*
	 * Miscellaneous initialization
	 *
	 * create expression context for node
	 */
	ExecAssignExprContext(estate, &subquerystate->ss.ps);

	/*
	 * initialize subquery
	 */
	subquerystate->subplan = ExecInitNode(node->subplan, estate, eflags);

	/*
	 * Initialize scan slot and type (needed by ExecAssignScanProjectionInfo)
	 */
	ExecInitScanTupleSlot(estate, &subquerystate->ss,
						  ExecGetResultType(subquerystate->subplan));

	/*
	 * Initialize result type and projection.
	 */
	ExecInitResultTypeTL(&subquerystate->ss.ps);
	ExecAssignScanProjectionInfo(&subquerystate->ss);

	/*
	 * initialize child expressions
	 */
	subquerystate->ss.ps.qual =
		ExecInitQual(node->scan.plan.qual, (PlanState *) subquerystate);

	return subquerystate;
}

/* ----------------------------------------------------------------
 *		ExecEndSubqueryScan
 *
 *		frees any storage allocated through C routines.
 * ----------------------------------------------------------------
 */
void
ExecEndSubqueryScan(SubqueryScanState *node)
{
	/*
	 * Free the exprcontext
	 */
	ExecFreeExprContext(&node->ss.ps);

	/*
	 * clean out the upper tuple table
	 */
	if (node->ss.ps.ps_ResultTupleSlot)
		ExecClearTuple(node->ss.ps.ps_ResultTupleSlot);
	ExecClearTuple(node->ss.ss_ScanTupleSlot);

	/*
	 * close down subquery
	 */
	ExecEndNode(node->subplan);
}

/* ----------------------------------------------------------------
 *		ExecReScanSubqueryScan
 *
 *		Rescans the relation.
 * ----------------------------------------------------------------
 */
void
ExecReScanSubqueryScan(SubqueryScanState *node)
{
	ExecScanReScan(&node->ss);

	/*
	 * ExecReScan doesn't know about my subplan, so I have to do
	 * changed-parameter signaling myself.  This is just as well, because the
	 * subplan has its own memory context in which its chgParam state lives.
	 */
	if (node->ss.ps.chgParam != NULL)
		UpdateChangedParamSet(node->subplan, node->ss.ps.chgParam);

	/*
	 * if chgParam of subnode is not null then plan will be re-scanned by
	 * first ExecProcNode.
	 */
	if (node->subplan->chgParam == NULL)
		ExecReScan(node->subplan);
}