/*-------------------------------------------------------------------------
 *
 * execParallel.c
 *	  Support routines for parallel execution.
 *
 * Portions Copyright (c) 1996-2020, PostgreSQL Global Development Group
 * Portions Copyright (c) 1994, Regents of the University of California
 *
 * This file contains routines that are intended to support setting up,
 * using, and tearing down a ParallelContext from within the PostgreSQL
 * executor.  The ParallelContext machinery will handle starting the
 * workers and ensuring that their state generally matches that of the
 * leader; see src/backend/access/transam/README.parallel for details.
 * However, we must save and restore relevant executor state, such as
 * any ParamListInfo associated with the query, buffer usage info, and
 * the actual plan to be passed down to the worker.
 *
 * IDENTIFICATION
 *	  src/backend/executor/execParallel.c
 *
 *-------------------------------------------------------------------------
 */

#include "postgres.h"

#include "executor/execParallel.h"
#include "executor/executor.h"
#include "executor/nodeAppend.h"
#include "executor/nodeBitmapHeapscan.h"
#include "executor/nodeCustom.h"
#include "executor/nodeForeignscan.h"
#include "executor/nodeHash.h"
#include "executor/nodeHashjoin.h"
#include "executor/nodeIndexonlyscan.h"
#include "executor/nodeIndexscan.h"
#include "executor/nodeSeqscan.h"
#include "executor/nodeSort.h"
#include "executor/nodeSubplan.h"
#include "executor/tqueue.h"
#include "jit/jit.h"
#include "nodes/nodeFuncs.h"
#include "pgstat.h"
#include "storage/spin.h"
#include "tcop/tcopprot.h"
#include "utils/datum.h"
#include "utils/dsa.h"
#include "utils/lsyscache.h"
#include "utils/memutils.h"
#include "utils/snapmgr.h"

/*
 * Magic numbers for parallel executor communication.  We use constants
 * greater than any 32-bit integer here so that values < 2^32 can be used
 * by individual parallel nodes to store their own state.
 */
#define PARALLEL_KEY_EXECUTOR_FIXED		UINT64CONST(0xE000000000000001)
#define PARALLEL_KEY_PLANNEDSTMT		UINT64CONST(0xE000000000000002)
#define PARALLEL_KEY_PARAMLISTINFO		UINT64CONST(0xE000000000000003)
#define PARALLEL_KEY_BUFFER_USAGE		UINT64CONST(0xE000000000000004)
#define PARALLEL_KEY_TUPLE_QUEUE		UINT64CONST(0xE000000000000005)
#define PARALLEL_KEY_INSTRUMENTATION	UINT64CONST(0xE000000000000006)
#define PARALLEL_KEY_DSA				UINT64CONST(0xE000000000000007)
#define PARALLEL_KEY_QUERY_TEXT			UINT64CONST(0xE000000000000008)
#define PARALLEL_KEY_JIT_INSTRUMENTATION UINT64CONST(0xE000000000000009)

#define PARALLEL_TUPLE_QUEUE_SIZE		65536

/*
 * Fixed-size random stuff that we need to pass to parallel workers.
 */
typedef struct FixedParallelExecutorState
{
	int64		tuples_needed;	/* tuple bound, see ExecSetTupleBound */
	dsa_pointer param_exec;
	int			eflags;
	int			jit_flags;
} FixedParallelExecutorState;

/*
 * DSM structure for accumulating per-PlanState instrumentation.
 *
 * instrument_options: Same meaning here as in instrument.c.
 *
 * instrument_offset: Offset, relative to the start of this structure,
 * of the first Instrumentation object.  This will depend on the length of
 * the plan_node_id array.
 *
 * num_workers: Number of workers.
 *
 * num_plan_nodes: Number of plan nodes.
 *
 * plan_node_id: Array of plan nodes for which we are gathering instrumentation
 * from parallel workers.  The length of this array is given by num_plan_nodes.
 */
struct SharedExecutorInstrumentation
{
	int			instrument_options;
	int			instrument_offset;
	int			num_workers;
	int			num_plan_nodes;
	int			plan_node_id[FLEXIBLE_ARRAY_MEMBER];
	/* array of num_plan_nodes * num_workers Instrumentation objects follows */
};
#define GetInstrumentationArray(sei) \
	(AssertVariableIsOfTypeMacro(sei, SharedExecutorInstrumentation *), \
	 (Instrumentation *) (((char *) sei) + sei->instrument_offset))

/* Context object for ExecParallelEstimate. */
typedef struct ExecParallelEstimateContext
{
	ParallelContext *pcxt;
	int			nnodes;
} ExecParallelEstimateContext;

/* Context object for ExecParallelInitializeDSM. */
typedef struct ExecParallelInitializeDSMContext
{
	ParallelContext *pcxt;
	SharedExecutorInstrumentation *instrumentation;
	int			nnodes;
} ExecParallelInitializeDSMContext;

/* Helper functions that run in the parallel leader. */
static char *ExecSerializePlan(Plan *plan, EState *estate);
static bool ExecParallelEstimate(PlanState *node,
								 ExecParallelEstimateContext *e);
static bool ExecParallelInitializeDSM(PlanState *node,
									  ExecParallelInitializeDSMContext *d);
static shm_mq_handle **ExecParallelSetupTupleQueues(ParallelContext *pcxt,
													bool reinitialize);
static bool ExecParallelReInitializeDSM(PlanState *planstate,
										ParallelContext *pcxt);
static bool ExecParallelRetrieveInstrumentation(PlanState *planstate,
												SharedExecutorInstrumentation *instrumentation);

/* Helper function that runs in the parallel worker. */
static DestReceiver *ExecParallelGetReceiver(dsm_segment *seg, shm_toc *toc);

/*
 * Create a serialized representation of the plan to be sent to each worker.
 */
static char *
ExecSerializePlan(Plan *plan, EState *estate)
{
	PlannedStmt *pstmt;
	ListCell   *lc;

	/* We can't scribble on the original plan, so make a copy. */
	plan = copyObject(plan);

	/*
	 * The worker will start its own copy of the executor, and that copy will
	 * insert a junk filter if the toplevel node has any resjunk entries. We
	 * don't want that to happen, because while resjunk columns shouldn't be
	 * sent back to the user, here the tuples are coming back to another
	 * backend which may very well need them.  So mutate the target list
	 * accordingly.  This is sort of a hack; there might be better ways to do
	 * this...
	 */
	foreach(lc, plan->targetlist)
	{
		TargetEntry *tle = lfirst_node(TargetEntry, lc);

		tle->resjunk = false;
	}

	/*
	 * Create a dummy PlannedStmt.  Most of the fields don't need to be valid
	 * for our purposes, but the worker will need at least a minimal
	 * PlannedStmt to start the executor.
	 */
	pstmt = makeNode(PlannedStmt);
	pstmt->commandType = CMD_SELECT;
	pstmt->queryId = UINT64CONST(0);
	pstmt->hasReturning = false;
	pstmt->hasModifyingCTE = false;
	pstmt->canSetTag = true;
	pstmt->transientPlan = false;
	pstmt->dependsOnRole = false;
	pstmt->parallelModeNeeded = false;
	pstmt->planTree = plan;
	pstmt->rtable = estate->es_range_table;
	pstmt->resultRelations = NIL;
	pstmt->rootResultRelations = NIL;
	pstmt->appendRelations = NIL;

	/*
	 * Transfer only parallel-safe subplans, leaving a NULL "hole" in the list
	 * for unsafe ones (so that the list indexes of the safe ones are
	 * preserved).  This positively ensures that the worker won't try to run,
	 * or even do ExecInitNode on, an unsafe subplan.  That's important to
	 * protect, eg, non-parallel-aware FDWs from getting into trouble.
	 */
	pstmt->subplans = NIL;
	foreach(lc, estate->es_plannedstmt->subplans)
	{
		Plan	   *subplan = (Plan *) lfirst(lc);

		if (subplan && !subplan->parallel_safe)
			subplan = NULL;
		pstmt->subplans = lappend(pstmt->subplans, subplan);
	}

	pstmt->rewindPlanIDs = NULL;
	pstmt->rowMarks = NIL;
	pstmt->relationOids = NIL;
	pstmt->invalItems = NIL;	/* workers can't replan anyway... */
	pstmt->paramExecTypes = estate->es_plannedstmt->paramExecTypes;
	pstmt->utilityStmt = NULL;
	pstmt->stmt_location = -1;
	pstmt->stmt_len = -1;

	/* Return serialized copy of our dummy PlannedStmt. */
	return nodeToString(pstmt);
}

|
|
|
|
|
|
|
|
/*
 * Parallel-aware plan nodes (and occasionally others) may need some state
 * which is shared across all parallel workers.  Before we size the DSM, give
 * them a chance to call shm_toc_estimate_chunk or shm_toc_estimate_keys on
 * &pcxt->estimator.
 *
 * While we're at it, count the number of PlanState nodes in the tree, so
 * we know how many Instrumentation structures we need.
 */
static bool
ExecParallelEstimate(PlanState *planstate, ExecParallelEstimateContext *e)
{
	if (planstate == NULL)
		return false;

	/* Count this node. */
	e->nnodes++;

	switch (nodeTag(planstate))
	{
		case T_SeqScanState:
			if (planstate->plan->parallel_aware)
				ExecSeqScanEstimate((SeqScanState *) planstate,
									e->pcxt);
			break;
		case T_IndexScanState:
			if (planstate->plan->parallel_aware)
				ExecIndexScanEstimate((IndexScanState *) planstate,
									  e->pcxt);
			break;
		case T_IndexOnlyScanState:
			if (planstate->plan->parallel_aware)
				ExecIndexOnlyScanEstimate((IndexOnlyScanState *) planstate,
										  e->pcxt);
			break;
		case T_ForeignScanState:
			if (planstate->plan->parallel_aware)
				ExecForeignScanEstimate((ForeignScanState *) planstate,
										e->pcxt);
			break;
		case T_AppendState:
			if (planstate->plan->parallel_aware)
				ExecAppendEstimate((AppendState *) planstate,
								   e->pcxt);
			break;
		case T_CustomScanState:
			if (planstate->plan->parallel_aware)
				ExecCustomScanEstimate((CustomScanState *) planstate,
									   e->pcxt);
			break;
		case T_BitmapHeapScanState:
			if (planstate->plan->parallel_aware)
				ExecBitmapHeapEstimate((BitmapHeapScanState *) planstate,
									   e->pcxt);
			break;
		case T_HashJoinState:
			if (planstate->plan->parallel_aware)
				ExecHashJoinEstimate((HashJoinState *) planstate,
									 e->pcxt);
			break;
		case T_HashState:
			/* even when not parallel-aware, for EXPLAIN ANALYZE */
			ExecHashEstimate((HashState *) planstate, e->pcxt);
			break;
		case T_SortState:
			/* even when not parallel-aware, for EXPLAIN ANALYZE */
			ExecSortEstimate((SortState *) planstate, e->pcxt);
			break;

		default:
			break;
	}

	return planstate_tree_walker(planstate, ExecParallelEstimate, e);
}

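The estimate pass above is an instance of a generic recursive tree walk: do the per-node work (count it, let parallel-aware nodes add to the DSM estimate), then let a walker function recurse into the children. A minimal standalone sketch of that pattern follows; the names `Node`, `WalkContext`, `walk_tree`, and `count_walker` are illustrative stand-ins, not PostgreSQL's actual `PlanState`/`planstate_tree_walker` API.

```c
#include <assert.h>
#include <stddef.h>

/* A toy binary tree standing in for the PlanState tree. */
typedef struct Node
{
	struct Node *left;
	struct Node *right;
} Node;

typedef struct WalkContext
{
	int			nnodes;			/* analogous to e->nnodes */
} WalkContext;

/*
 * Generic walker: invokes the callback on each child.  Returns nonzero to
 * abort the walk early, zero to continue (mirroring the walker convention).
 */
static int
walk_tree(Node *node, int (*walker) (Node *, WalkContext *), WalkContext *cxt)
{
	if (node == NULL)
		return 0;
	if (walker(node->left, cxt))
		return 1;
	if (walker(node->right, cxt))
		return 1;
	return 0;
}

/* Per-node callback: count the node, then recurse via the generic walker. */
static int
count_walker(Node *node, WalkContext *cxt)
{
	if (node == NULL)
		return 0;
	cxt->nnodes++;
	return walk_tree(node, count_walker, cxt);
}
```

The same shape carries the real work in ExecParallelEstimate: the switch on `nodeTag()` is the per-node step, and `planstate_tree_walker` is the recursion.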
/*
 * Estimate the amount of space required to serialize the indicated parameters.
 */
static Size
EstimateParamExecSpace(EState *estate, Bitmapset *params)
{
	int			paramid;
	Size		sz = sizeof(int);

	paramid = -1;
	while ((paramid = bms_next_member(params, paramid)) >= 0)
	{
		Oid			typeOid;
		int16		typLen;
		bool		typByVal;
		ParamExecData *prm;

		prm = &(estate->es_param_exec_vals[paramid]);
		typeOid = list_nth_oid(estate->es_plannedstmt->paramExecTypes,
							   paramid);

		sz = add_size(sz, sizeof(int)); /* space for paramid */

		/* space for datum/isnull */
		if (OidIsValid(typeOid))
			get_typlenbyval(typeOid, &typLen, &typByVal);
		else
		{
			/* If no type OID, assume by-value, like copyParamList does. */
			typLen = sizeof(Datum);
			typByVal = true;
		}
		sz = add_size(sz,
					  datumEstimateSpace(prm->value, prm->isnull,
										 typByVal, typLen));
	}
	return sz;
}

/*
 * Serialize specified PARAM_EXEC parameters.
 *
 * We write the number of parameters first, as a 4-byte integer, and then
 * write details for each parameter in turn.  The details for each parameter
 * consist of a 4-byte paramid (location of param in execution time internal
 * parameter array) and then the datum as serialized by datumSerialize().
 */
static dsa_pointer
SerializeParamExecParams(EState *estate, Bitmapset *params, dsa_area *area)
{
	Size		size;
	int			nparams;
	int			paramid;
	ParamExecData *prm;
	dsa_pointer handle;
	char	   *start_address;

	/* Allocate enough space for the current parameter values. */
	size = EstimateParamExecSpace(estate, params);
	handle = dsa_allocate(area, size);
	start_address = dsa_get_address(area, handle);

	/* First write the number of parameters as a 4-byte integer. */
	nparams = bms_num_members(params);
	memcpy(start_address, &nparams, sizeof(int));
	start_address += sizeof(int);

	/* Write details for each parameter in turn. */
	paramid = -1;
	while ((paramid = bms_next_member(params, paramid)) >= 0)
	{
		Oid			typeOid;
		int16		typLen;
		bool		typByVal;

		prm = &(estate->es_param_exec_vals[paramid]);
		typeOid = list_nth_oid(estate->es_plannedstmt->paramExecTypes,
							   paramid);

		/* Write paramid. */
		memcpy(start_address, &paramid, sizeof(int));
		start_address += sizeof(int);

		/* Write datum/isnull */
		if (OidIsValid(typeOid))
			get_typlenbyval(typeOid, &typLen, &typByVal);
		else
		{
			/* If no type OID, assume by-value, like copyParamList does. */
			typLen = sizeof(Datum);
			typByVal = true;
		}
		datumSerialize(prm->value, prm->isnull, typByVal, typLen,
					   &start_address);
	}

	return handle;
}

/*
 * Restore specified PARAM_EXEC parameters.
 */
static void
RestoreParamExecParams(char *start_address, EState *estate)
{
	int			nparams;
	int			i;
	int			paramid;

	memcpy(&nparams, start_address, sizeof(int));
	start_address += sizeof(int);

	for (i = 0; i < nparams; i++)
	{
		ParamExecData *prm;

		/* Read paramid */
		memcpy(&paramid, start_address, sizeof(int));
		start_address += sizeof(int);
		prm = &(estate->es_param_exec_vals[paramid]);

		/* Read datum/isnull. */
		prm->value = datumRestore(&start_address, &prm->isnull);
		prm->execPlan = NULL;
	}
}

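The wire framing used by the serialize/restore pair above is simple: a 4-byte parameter count, then for each parameter a 4-byte paramid followed by the serialized datum. The sketch below demonstrates just that framing as a standalone round trip; it is not PostgreSQL's implementation — a plain `long` stands in for the Datum that `datumSerialize()`/`datumRestore()` would handle, and the function names are hypothetical.

```c
#include <assert.h>
#include <string.h>

/* Write: count, then (paramid, value) pairs.  Returns the advanced cursor. */
static char *
write_params(char *buf, const int *paramids, const long *values, int n)
{
	int			i;

	memcpy(buf, &n, sizeof(int));
	buf += sizeof(int);
	for (i = 0; i < n; i++)
	{
		memcpy(buf, &paramids[i], sizeof(int));
		buf += sizeof(int);
		memcpy(buf, &values[i], sizeof(long));	/* stand-in for datumSerialize */
		buf += sizeof(long);
	}
	return buf;
}

/* Read back what write_params produced, mirroring RestoreParamExecParams. */
static void
read_params(const char *buf, int *paramids, long *values, int *n)
{
	int			i;

	memcpy(n, buf, sizeof(int));
	buf += sizeof(int);
	for (i = 0; i < *n; i++)
	{
		memcpy(&paramids[i], buf, sizeof(int));
		buf += sizeof(int);
		memcpy(&values[i], buf, sizeof(long));	/* stand-in for datumRestore */
		buf += sizeof(long);
	}
}
```

Because leader and workers run the same binary on the same machine, the real code can rely on matching struct layout and byte order; nothing here needs to be portable across hosts.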
/*
 * Initialize the dynamic shared memory segment that will be used to control
 * parallel execution.
 */
static bool
ExecParallelInitializeDSM(PlanState *planstate,
						  ExecParallelInitializeDSMContext *d)
{
	if (planstate == NULL)
		return false;

	/* If instrumentation is enabled, initialize slot for this node. */
	if (d->instrumentation != NULL)
		d->instrumentation->plan_node_id[d->nnodes] =
			planstate->plan->plan_node_id;

	/* Count this node. */
	d->nnodes++;

	/*
	 * Call initializers for DSM-using plan nodes.
	 *
	 * Most plan nodes won't do anything here, but plan nodes that allocated
	 * DSM may need to initialize shared state in the DSM before parallel
	 * workers are launched.  They can allocate the space they previously
	 * estimated using shm_toc_allocate, and add the keys they previously
	 * estimated using shm_toc_insert, in each case targeting pcxt->toc.
	 */
	switch (nodeTag(planstate))
	{
		case T_SeqScanState:
			if (planstate->plan->parallel_aware)
				ExecSeqScanInitializeDSM((SeqScanState *) planstate,
										 d->pcxt);
			break;
		case T_IndexScanState:
			if (planstate->plan->parallel_aware)
				ExecIndexScanInitializeDSM((IndexScanState *) planstate,
										   d->pcxt);
			break;
		case T_IndexOnlyScanState:
			if (planstate->plan->parallel_aware)
				ExecIndexOnlyScanInitializeDSM((IndexOnlyScanState *) planstate,
											   d->pcxt);
			break;
		case T_ForeignScanState:
			if (planstate->plan->parallel_aware)
				ExecForeignScanInitializeDSM((ForeignScanState *) planstate,
											 d->pcxt);
			break;
		case T_AppendState:
			if (planstate->plan->parallel_aware)
				ExecAppendInitializeDSM((AppendState *) planstate,
										d->pcxt);
			break;
		case T_CustomScanState:
			if (planstate->plan->parallel_aware)
				ExecCustomScanInitializeDSM((CustomScanState *) planstate,
											d->pcxt);
			break;
		case T_BitmapHeapScanState:
			if (planstate->plan->parallel_aware)
				ExecBitmapHeapInitializeDSM((BitmapHeapScanState *) planstate,
											d->pcxt);
			break;
		case T_HashJoinState:
			if (planstate->plan->parallel_aware)
				ExecHashJoinInitializeDSM((HashJoinState *) planstate,
										  d->pcxt);
			break;
		case T_HashState:
			/* even when not parallel-aware, for EXPLAIN ANALYZE */
			ExecHashInitializeDSM((HashState *) planstate, d->pcxt);
			break;
		case T_SortState:
			/* even when not parallel-aware, for EXPLAIN ANALYZE */
			ExecSortInitializeDSM((SortState *) planstate, d->pcxt);
			break;

		default:
			break;
	}

	return planstate_tree_walker(planstate, ExecParallelInitializeDSM, d);
}

/*
 * Set up the response queues that backend workers will use to return tuples
 * to the main backend.
 */
static shm_mq_handle **
ExecParallelSetupTupleQueues(ParallelContext *pcxt, bool reinitialize)
{
	shm_mq_handle **responseq;
	char	   *tqueuespace;
	int			i;

	/* Skip this if no workers. */
	if (pcxt->nworkers == 0)
		return NULL;

	/* Allocate memory for shared memory queue handles. */
	responseq = (shm_mq_handle **)
		palloc(pcxt->nworkers * sizeof(shm_mq_handle *));

	/*
	 * If not reinitializing, allocate space from the DSM for the queues;
	 * otherwise, find the already allocated space.
	 */
	if (!reinitialize)
		tqueuespace =
			shm_toc_allocate(pcxt->toc,
							 mul_size(PARALLEL_TUPLE_QUEUE_SIZE,
									  pcxt->nworkers));
	else
		tqueuespace = shm_toc_lookup(pcxt->toc, PARALLEL_KEY_TUPLE_QUEUE, false);

	/* Create the queues, and become the receiver for each. */
	for (i = 0; i < pcxt->nworkers; ++i)
	{
		shm_mq	   *mq;

		mq = shm_mq_create(tqueuespace +
						   ((Size) i) * PARALLEL_TUPLE_QUEUE_SIZE,
						   (Size) PARALLEL_TUPLE_QUEUE_SIZE);

		shm_mq_set_receiver(mq, MyProc);
		responseq[i] = shm_mq_attach(mq, pcxt->seg, NULL);
	}

	/* Add array of queues to shm_toc, so others can find it. */
	if (!reinitialize)
		shm_toc_insert(pcxt->toc, PARALLEL_KEY_TUPLE_QUEUE, tqueuespace);

	/* Return array of handles. */
	return responseq;
}

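The queue-creation loop above relies on one flat DSM chunk being carved into equal fixed-size slots: worker i's queue starts at `tqueuespace + i * PARALLEL_TUPLE_QUEUE_SIZE`. A standalone sketch of that address arithmetic, with an arbitrary stand-in size and a static buffer standing in for the DSM chunk:

```c
#include <assert.h>
#include <stddef.h>

/* Stand-in for PARALLEL_TUPLE_QUEUE_SIZE; the real value is a PG constant. */
#define QUEUE_SIZE 65536

/* Stand-in for the flat chunk that shm_toc_allocate would hand back. */
static char tqueuespace[3 * QUEUE_SIZE];

/* Address of worker i's queue slot within the flat chunk. */
static char *
worker_queue(char *base, int i)
{
	return base + ((size_t) i) * QUEUE_SIZE;
}
```

Because the slots are fixed-size and contiguous, reinitialization only needs the base pointer (looked up via `shm_toc_lookup`) to find every queue again.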
/*
|
|
|
|
* Sets up the required infrastructure for backend workers to perform
|
|
|
|
* execution and return results to the main backend.
|
|
|
|
*/
|
|
|
|
ParallelExecutorInfo *
|
2017-11-16 18:06:14 +01:00
|
|
|
ExecInitParallelPlan(PlanState *planstate, EState *estate,
|
|
|
|
Bitmapset *sendParams, int nworkers,
|
2017-08-29 19:12:23 +02:00
|
|
|
int64 tuples_needed)
|
2015-09-29 03:55:57 +02:00
|
|
|
{
|
|
|
|
ParallelExecutorInfo *pei;
|
|
|
|
ParallelContext *pcxt;
|
|
|
|
ExecParallelEstimateContext e;
|
|
|
|
ExecParallelInitializeDSMContext d;
|
2017-08-29 19:12:23 +02:00
|
|
|
FixedParallelExecutorState *fpes;
|
2015-09-29 03:55:57 +02:00
|
|
|
char *pstmt_data;
|
|
|
|
char *pstmt_space;
|
2017-11-16 18:06:14 +01:00
|
|
|
char *paramlistinfo_space;
|
2015-09-29 03:55:57 +02:00
|
|
|
BufferUsage *bufusage_space;
|
|
|
|
SharedExecutorInstrumentation *instrumentation = NULL;
|
2018-09-25 21:54:29 +02:00
|
|
|
SharedJitInstrumentation *jit_instrumentation = NULL;
|
2015-09-29 03:55:57 +02:00
|
|
|
int pstmt_len;
|
2017-11-16 18:06:14 +01:00
|
|
|
int paramlistinfo_len;
|
2015-09-29 03:55:57 +02:00
|
|
|
int instrumentation_len = 0;
|
2018-09-25 21:54:29 +02:00
|
|
|
int jit_instrumentation_len = 0;
|
2015-12-09 19:18:09 +01:00
|
|
|
int instrument_offset = 0;
|
2016-12-19 22:47:15 +01:00
|
|
|
Size dsa_minsize = dsa_minimum_size();
|
2017-02-22 07:45:17 +01:00
|
|
|
char *query_string;
|
|
|
|
int query_len;
|
2015-09-29 03:55:57 +02:00
|
|
|
|
Fix failure with initplans used conditionally during EvalPlanQual rechecks.
The EvalPlanQual machinery assumes that any initplans (that is,
uncorrelated sub-selects) used during an EPQ recheck would have already
been evaluated during the main query; this is implicit in the fact that
execPlan pointers are not copied into the EPQ estate's es_param_exec_vals.
But it's possible for that assumption to fail, if the initplan is only
reached conditionally. For example, a sub-select inside a CASE expression
could be reached during a recheck when it had not been previously, if the
CASE test depends on a column that was just updated.
This bug is old, appearing to date back to my rewrite of EvalPlanQual in
commit 9f2ee8f28, but was not detected until Kyle Samson reported a case.
To fix, force all not-yet-evaluated initplans used within the EPQ plan
subtree to be evaluated at the start of the recheck, before entering the
EPQ environment. This could be inefficient, if such an initplan is
expensive and goes unused again during the recheck --- but that's piling
one layer of improbability atop another. It doesn't seem worth adding
more complexity to prevent that, at least not in the back branches.
It was convenient to use the new-in-v11 ExecEvalParamExecParams function
to implement this, but I didn't like either its name or the specifics of
its API, so revise that.
Back-patch all the way. Rather than rewrite the patch to avoid depending
on bms_next_member() in the oldest branches, I chose to back-patch that
function into 9.4 and 9.3. (This isn't the first time back-patches have
needed that, and it exhausted my patience.) I also chose to back-patch
some test cases added by commits 71404af2a and 342a1ffa2 into 9.4 and 9.3,
so that the 9.x versions of eval-plan-qual.spec are all the same.
Andrew Gierth diagnosed the problem and contributed the added test cases,
though the actual code changes are by me.
Discussion: https://postgr.es/m/A033A40A-B234-4324-BE37-272279F7B627@tripadvisor.com
2018-09-15 19:42:33 +02:00
|
|
|
/*
|
|
|
|
* Force any initplan outputs that we're going to pass to workers to be
|
|
|
|
* evaluated, if they weren't already.
|
|
|
|
*
|
|
|
|
* For simplicity, we use the EState's per-output-tuple ExprContext here.
|
|
|
|
* That risks intra-query memory leakage, since we might pass through here
|
|
|
|
* many times before that ExprContext gets reset; but ExecSetParamPlan
|
|
|
|
* doesn't normally leak any memory in the context (see its comments), so
|
|
|
|
* it doesn't seem worth complicating this function's API to pass it a
|
|
|
|
* shorter-lived ExprContext. This might need to change someday.
|
|
|
|
*/
|
|
|
|
ExecSetParamPlanMulti(sendParams, GetPerTupleExprContext(estate));
|
2017-11-16 18:06:14 +01:00
|
|
|
|
2015-09-29 03:55:57 +02:00
|
|
|
/* Allocate object for return value. */
|
|
|
|
pei = palloc0(sizeof(ParallelExecutorInfo));
|
2015-11-18 18:35:25 +01:00
|
|
|
pei->finished = false;
|
2015-09-29 03:55:57 +02:00
|
|
|
pei->planstate = planstate;
|
|
|
|
|
|
|
|
/* Fix up and serialize plan to be sent to workers. */
|
Add a Gather executor node.
A Gather executor node runs any number of copies of a plan in an equal
number of workers and merges all of the results into a single tuple
stream. It can also run the plan itself, if the workers are
unavailable or haven't started up yet. It is intended to work with
the Partial Seq Scan node which will be added in future commits.
It could also be used to implement parallel query of a different sort
by itself, without help from Partial Seq Scan, if the single_copy mode
is used. In that mode, a worker executes the plan, and the parallel
leader does not, merely collecting the worker's results. So, a Gather
node could be inserted into a plan to split the execution of that plan
across two processes. Nested Gather nodes aren't currently supported,
but we might want to add support for that in the future.
There's nothing in the planner to actually generate Gather nodes yet,
so it's not quite time to break out the champagne. But we're getting
close.
Amit Kapila. Some designs suggestions were provided by me, and I also
reviewed the patch. Single-copy mode, documentation, and other minor
changes also by me.
2015-10-01 01:23:36 +02:00
|
|
|
pstmt_data = ExecSerializePlan(planstate->plan, estate);
|
2015-09-29 03:55:57 +02:00
|
|
|
|
|
|
|
/* Create a parallel context. */
|
Enable parallel query with SERIALIZABLE isolation.
Previously, the SERIALIZABLE isolation level prevented parallel query
from being used. Allow the two features to be used together by
sharing the leader's SERIALIZABLEXACT with parallel workers.
An extra per-SERIALIZABLEXACT LWLock is introduced to make it safe to
share, and new logic is introduced to coordinate the early release
of the SERIALIZABLEXACT required for the SXACT_FLAG_RO_SAFE
optimization, as follows:
The first backend to observe the SXACT_FLAG_RO_SAFE flag (set by
some other transaction) will 'partially release' the SERIALIZABLEXACT,
meaning that the conflicts and locks it holds are released, but the
SERIALIZABLEXACT itself will remain active because other backends
might still have a pointer to it.
Whenever any backend notices the SXACT_FLAG_RO_SAFE flag, it clears
its own MySerializableXact variable and frees local resources so that
it can skip SSI checks for the rest of the transaction. In the
special case of the leader process, it transfers the SERIALIZABLEXACT
to a new variable SavedSerializableXact, so that it can be completely
released at the end of the transaction after all workers have exited.
Remove the serializable_okay flag added to CreateParallelContext() by
commit 9da0cc35, because it's now redundant.
Author: Thomas Munro
Reviewed-by: Haribabu Kommi, Robert Haas, Masahiko Sawada, Kevin Grittner
Discussion: https://postgr.es/m/CAEepm=0gXGYhtrVDWOTHS8SQQy_=S9xo+8oCxGLWZAOoeJ=yzQ@mail.gmail.com
2019-03-15 04:23:46 +01:00
|
|
|
pcxt = CreateParallelContext("postgres", "ParallelQueryMain", nworkers);
|
2015-09-29 03:55:57 +02:00
|
|
|
pei->pcxt = pcxt;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Before telling the parallel context to create a dynamic shared memory
|
|
|
|
* segment, we need to figure out how big it should be. Estimate space
|
|
|
|
* for the various things we need to store.
|
|
|
|
*/
|
|
|
|
|
2017-08-29 19:12:23 +02:00
|
|
|
/* Estimate space for fixed-size state. */
|
|
|
|
shm_toc_estimate_chunk(&pcxt->estimator,
|
|
|
|
sizeof(FixedParallelExecutorState));
|
|
|
|
shm_toc_estimate_keys(&pcxt->estimator, 1);
|
|
|
|
|
2017-02-22 07:45:17 +01:00
|
|
|
/* Estimate space for query text. */
|
|
|
|
query_len = strlen(estate->es_sourceText);
|
2017-12-20 23:21:55 +01:00
|
|
|
shm_toc_estimate_chunk(&pcxt->estimator, query_len + 1);
|
2017-02-22 07:45:17 +01:00
|
|
|
shm_toc_estimate_keys(&pcxt->estimator, 1);
|
|
|
|
|
2015-09-29 03:55:57 +02:00
|
|
|
/* Estimate space for serialized PlannedStmt. */
|
|
|
|
pstmt_len = strlen(pstmt_data) + 1;
|
|
|
|
shm_toc_estimate_chunk(&pcxt->estimator, pstmt_len);
|
|
|
|
shm_toc_estimate_keys(&pcxt->estimator, 1);
|
|
|
|
|
|
|
|
/* Estimate space for serialized ParamListInfo. */
|
2017-11-16 18:06:14 +01:00
|
|
|
paramlistinfo_len = EstimateParamListSpace(estate->es_param_list_info);
|
|
|
|
shm_toc_estimate_chunk(&pcxt->estimator, paramlistinfo_len);
|
2015-09-29 03:55:57 +02:00
|
|
|
shm_toc_estimate_keys(&pcxt->estimator, 1);
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Estimate space for BufferUsage.
|
|
|
|
*
|
|
|
|
* If EXPLAIN is not in use and there are no extensions loaded that care,
|
|
|
|
* we could skip this. But we have no way of knowing whether anyone's
|
|
|
|
* looking at pgBufferUsage, so do it unconditionally.
|
|
|
|
*/
|
|
|
|
shm_toc_estimate_chunk(&pcxt->estimator,
|
2016-05-06 20:23:47 +02:00
|
|
|
mul_size(sizeof(BufferUsage), pcxt->nworkers));
|
2015-09-29 03:55:57 +02:00
|
|
|
shm_toc_estimate_keys(&pcxt->estimator, 1);
|
|
|
|
|
|
|
|
	/* Estimate space for tuple queues. */
	shm_toc_estimate_chunk(&pcxt->estimator,
						   mul_size(PARALLEL_TUPLE_QUEUE_SIZE, pcxt->nworkers));
	shm_toc_estimate_keys(&pcxt->estimator, 1);
	/*
	 * Give parallel-aware nodes a chance to add to the estimates, and get a
	 * count of how many PlanState nodes there are.
	 */
	e.pcxt = pcxt;
	e.nnodes = 0;
	ExecParallelEstimate(planstate, &e);
	/* Estimate space for instrumentation, if required. */
	if (estate->es_instrument)
	{
		instrumentation_len =
			offsetof(SharedExecutorInstrumentation, plan_node_id) +
			sizeof(int) * e.nnodes;
		instrumentation_len = MAXALIGN(instrumentation_len);
		instrument_offset = instrumentation_len;
		instrumentation_len +=
			mul_size(sizeof(Instrumentation),
					 mul_size(e.nnodes, nworkers));
		shm_toc_estimate_chunk(&pcxt->estimator, instrumentation_len);
		shm_toc_estimate_keys(&pcxt->estimator, 1);

		/* Estimate space for JIT instrumentation, if required. */
		if (estate->es_jit_flags != PGJIT_NONE)
		{
			jit_instrumentation_len =
				offsetof(SharedJitInstrumentation, jit_instr) +
				sizeof(JitInstrumentation) * nworkers;
			shm_toc_estimate_chunk(&pcxt->estimator, jit_instrumentation_len);
			shm_toc_estimate_keys(&pcxt->estimator, 1);
		}
	}
	/* Estimate space for DSA area. */
	shm_toc_estimate_chunk(&pcxt->estimator, dsa_minsize);
	shm_toc_estimate_keys(&pcxt->estimator, 1);
	/* Everyone's had a chance to ask for space, so now create the DSM. */
	InitializeParallelDSM(pcxt);

	/*
	 * OK, now we have a dynamic shared memory segment, and it should be big
	 * enough to store all of the data we estimated we would want to put into
	 * it, plus whatever general stuff (not specifically executor-related) the
	 * ParallelContext itself needs to store there.  None of the space we
	 * asked for has been allocated or initialized yet, though, so do that.
	 */
	/* Store fixed-size state. */
	fpes = shm_toc_allocate(pcxt->toc, sizeof(FixedParallelExecutorState));
	fpes->tuples_needed = tuples_needed;
	fpes->param_exec = InvalidDsaPointer;
	fpes->eflags = estate->es_top_eflags;
	fpes->jit_flags = estate->es_jit_flags;
	shm_toc_insert(pcxt->toc, PARALLEL_KEY_EXECUTOR_FIXED, fpes);
	/* Store query string */
	query_string = shm_toc_allocate(pcxt->toc, query_len + 1);
	memcpy(query_string, estate->es_sourceText, query_len + 1);
	shm_toc_insert(pcxt->toc, PARALLEL_KEY_QUERY_TEXT, query_string);
	/* Store serialized PlannedStmt. */
	pstmt_space = shm_toc_allocate(pcxt->toc, pstmt_len);
	memcpy(pstmt_space, pstmt_data, pstmt_len);
	shm_toc_insert(pcxt->toc, PARALLEL_KEY_PLANNEDSTMT, pstmt_space);
	/* Store serialized ParamListInfo. */
	paramlistinfo_space = shm_toc_allocate(pcxt->toc, paramlistinfo_len);
	shm_toc_insert(pcxt->toc, PARALLEL_KEY_PARAMLISTINFO, paramlistinfo_space);
	SerializeParamList(estate->es_param_list_info, &paramlistinfo_space);
	/* Allocate space for each worker's BufferUsage; no need to initialize. */
	bufusage_space = shm_toc_allocate(pcxt->toc,
									  mul_size(sizeof(BufferUsage), pcxt->nworkers));
	shm_toc_insert(pcxt->toc, PARALLEL_KEY_BUFFER_USAGE, bufusage_space);
	pei->buffer_usage = bufusage_space;
	/* Set up the tuple queues that the workers will write into. */
	pei->tqueue = ExecParallelSetupTupleQueues(pcxt, false);

	/* We don't need the TupleQueueReaders yet, though. */
	pei->reader = NULL;
	/*
	 * If instrumentation options were supplied, allocate space for the data.
	 * It only gets partially initialized here; the rest happens during
	 * ExecParallelInitializeDSM.
	 */
	if (estate->es_instrument)
	{
		Instrumentation *instrument;
		int			i;

		instrumentation = shm_toc_allocate(pcxt->toc, instrumentation_len);
		instrumentation->instrument_options = estate->es_instrument;
		instrumentation->instrument_offset = instrument_offset;
		instrumentation->num_workers = nworkers;
		instrumentation->num_plan_nodes = e.nnodes;
		instrument = GetInstrumentationArray(instrumentation);
		for (i = 0; i < nworkers * e.nnodes; ++i)
			InstrInit(&instrument[i], estate->es_instrument);
		shm_toc_insert(pcxt->toc, PARALLEL_KEY_INSTRUMENTATION,
					   instrumentation);
		pei->instrumentation = instrumentation;

		if (estate->es_jit_flags != PGJIT_NONE)
		{
			jit_instrumentation = shm_toc_allocate(pcxt->toc,
												   jit_instrumentation_len);
			jit_instrumentation->num_workers = nworkers;
			memset(jit_instrumentation->jit_instr, 0,
				   sizeof(JitInstrumentation) * nworkers);
			shm_toc_insert(pcxt->toc, PARALLEL_KEY_JIT_INSTRUMENTATION,
						   jit_instrumentation);
			pei->jit_instrumentation = jit_instrumentation;
		}
	}
	/*
	 * Create a DSA area that can be used by the leader and all workers.
	 * (However, if we failed to create a DSM and are using private memory
	 * instead, then skip this.)
	 */
	if (pcxt->seg != NULL)
	{
		char	   *area_space;

		area_space = shm_toc_allocate(pcxt->toc, dsa_minsize);
		shm_toc_insert(pcxt->toc, PARALLEL_KEY_DSA, area_space);
		pei->area = dsa_create_in_place(area_space, dsa_minsize,
										LWTRANCHE_PARALLEL_QUERY_DSA,
										pcxt->seg);

		/*
		 * Serialize parameters, if any, using DSA storage.  We don't dare use
		 * the main parallel query DSM for this because we might relaunch
		 * workers after the values have changed (and thus the amount of
		 * storage required has changed).
		 */
		if (!bms_is_empty(sendParams))
		{
			pei->param_exec = SerializeParamExecParams(estate, sendParams,
													   pei->area);
			fpes->param_exec = pei->param_exec;
		}
	}
	/*
	 * Give parallel-aware nodes a chance to initialize their shared data.
	 * This also initializes the elements of instrumentation->ps_instrument,
	 * if it exists.
	 */
	d.pcxt = pcxt;
	d.instrumentation = instrumentation;
	d.nnodes = 0;

	/* Install our DSA area while initializing the plan. */
	estate->es_query_dsa = pei->area;
	ExecParallelInitializeDSM(planstate, &d);
	estate->es_query_dsa = NULL;
	/*
	 * Make sure that the world hasn't shifted under our feet.  This could
	 * probably just be an Assert(), but let's be conservative for now.
	 */
	if (e.nnodes != d.nnodes)
		elog(ERROR, "inconsistent count of PlanState nodes");

	/* OK, we're ready to rock and roll. */
	return pei;
}
/*
 * Set up tuple queue readers to read the results of a parallel subplan.
 *
 * This is separate from ExecInitParallelPlan() because we can launch the
 * worker processes and let them start doing something before we do this.
 */
void
ExecParallelCreateReaders(ParallelExecutorInfo *pei)
{
	int			nworkers = pei->pcxt->nworkers_launched;
	int			i;

	Assert(pei->reader == NULL);

	if (nworkers > 0)
	{
		pei->reader = (TupleQueueReader **)
			palloc(nworkers * sizeof(TupleQueueReader *));

		for (i = 0; i < nworkers; i++)
		{
			shm_mq_set_handle(pei->tqueue[i],
							  pei->pcxt->worker[i].bgwhandle);
			pei->reader[i] = CreateTupleQueueReader(pei->tqueue[i]);
		}
	}
}
/*
 * Re-initialize the parallel executor shared memory state before launching
 * a fresh batch of workers.
 */
void
ExecParallelReinitialize(PlanState *planstate,
						 ParallelExecutorInfo *pei,
						 Bitmapset *sendParams)
{
	EState	   *estate = planstate->state;
	FixedParallelExecutorState *fpes;

	/* Old workers must already be shut down */
	Assert(pei->finished);
	/*
	 * Force any initplan outputs that we're going to pass to workers to be
	 * evaluated, if they weren't already (see comments in
	 * ExecInitParallelPlan).
	 */
	ExecSetParamPlanMulti(sendParams, GetPerTupleExprContext(estate));
	ReinitializeParallelDSM(pei->pcxt);
	pei->tqueue = ExecParallelSetupTupleQueues(pei->pcxt, true);
	pei->reader = NULL;
	pei->finished = false;
	fpes = shm_toc_lookup(pei->pcxt->toc, PARALLEL_KEY_EXECUTOR_FIXED, false);

	/* Free any serialized parameters from the last round. */
	if (DsaPointerIsValid(fpes->param_exec))
	{
		dsa_free(pei->area, fpes->param_exec);
		fpes->param_exec = InvalidDsaPointer;
	}

	/* Serialize current parameter values if required. */
	if (!bms_is_empty(sendParams))
	{
		pei->param_exec = SerializeParamExecParams(estate, sendParams,
												   pei->area);
		fpes->param_exec = pei->param_exec;
	}
	/* Traverse plan tree and let each child node reset associated state. */
	estate->es_query_dsa = pei->area;
	ExecParallelReInitializeDSM(planstate, pei->pcxt);
	estate->es_query_dsa = NULL;
}
/*
|
|
|
|
* Traverse plan tree to reinitialize per-node dynamic shared memory state
|
|
|
|
*/
|
|
|
|
static bool
|
|
|
|
ExecParallelReInitializeDSM(PlanState *planstate,
|
|
|
|
ParallelContext *pcxt)
|
|
|
|
{
|
|
|
|
if (planstate == NULL)
|
|
|
|
return false;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Call reinitializers for DSM-using plan nodes.
|
|
|
|
*/
|
|
|
|
switch (nodeTag(planstate))
|
|
|
|
{
|
|
|
|
case T_SeqScanState:
|
|
|
|
if (planstate->plan->parallel_aware)
|
|
|
|
ExecSeqScanReInitializeDSM((SeqScanState *) planstate,
|
|
|
|
pcxt);
|
|
|
|
break;
|
|
|
|
case T_IndexScanState:
|
|
|
|
if (planstate->plan->parallel_aware)
|
|
|
|
ExecIndexScanReInitializeDSM((IndexScanState *) planstate,
|
|
|
|
pcxt);
|
|
|
|
break;
|
|
|
|
case T_IndexOnlyScanState:
|
|
|
|
if (planstate->plan->parallel_aware)
|
|
|
|
ExecIndexOnlyScanReInitializeDSM((IndexOnlyScanState *) planstate,
|
|
|
|
pcxt);
|
|
|
|
break;
|
|
|
|
case T_ForeignScanState:
|
|
|
|
if (planstate->plan->parallel_aware)
|
|
|
|
ExecForeignScanReInitializeDSM((ForeignScanState *) planstate,
|
|
|
|
pcxt);
|
|
|
|
break;
|
Support Parallel Append plan nodes.
When we create an Append node, we can spread out the workers over the
subplans instead of piling on to each subplan one at a time, which
should typically be a bit more efficient, both because the startup
cost of any plan executed entirely by one worker is paid only once and
also because of reduced contention. We can also construct Append
plans using a mix of partial and non-partial subplans, which may allow
for parallelism in places that otherwise couldn't support it.
Unfortunately, this patch doesn't handle the important case of
parallelizing UNION ALL by running each branch in a separate worker;
the executor infrastructure is added here, but more planner work is
needed.
Amit Khandekar, Robert Haas, Amul Sul, reviewed and tested by
Ashutosh Bapat, Amit Langote, Rafia Sabih, Amit Kapila, and
Rajkumar Raghuwanshi.
Discussion: http://postgr.es/m/CAJ3gD9dy0K_E8r727heqXoBmWZ83HwLFwdcaSSmBQ1+S+vRuUQ@mail.gmail.com
2017-12-05 23:28:39 +01:00
|
|
|
case T_AppendState:
|
|
|
|
if (planstate->plan->parallel_aware)
|
|
|
|
ExecAppendReInitializeDSM((AppendState *) planstate, pcxt);
|
|
|
|
break;
|
		case T_CustomScanState:
			if (planstate->plan->parallel_aware)
				ExecCustomScanReInitializeDSM((CustomScanState *) planstate,
											  pcxt);
			break;
		case T_BitmapHeapScanState:
			if (planstate->plan->parallel_aware)
				ExecBitmapHeapReInitializeDSM((BitmapHeapScanState *) planstate,
											  pcxt);
			break;
		case T_HashJoinState:
			if (planstate->plan->parallel_aware)
				ExecHashJoinReInitializeDSM((HashJoinState *) planstate,
											pcxt);
			break;
		case T_HashState:
		case T_SortState:
			/* these nodes have DSM state, but no reinitialization is required */
			break;

		default:
			break;
	}

	return planstate_tree_walker(planstate, ExecParallelReInitializeDSM, pcxt);
}

/*
 * Copy instrumentation information about this node and its descendants from
 * dynamic shared memory.
 */
static bool
ExecParallelRetrieveInstrumentation(PlanState *planstate,
									SharedExecutorInstrumentation *instrumentation)
{
	Instrumentation *instrument;
	int			i;
	int			n;
	int			ibytes;
	int			plan_node_id = planstate->plan->plan_node_id;
	MemoryContext oldcontext;

	/* Find the instrumentation for this node. */
	for (i = 0; i < instrumentation->num_plan_nodes; ++i)
		if (instrumentation->plan_node_id[i] == plan_node_id)
			break;
	if (i >= instrumentation->num_plan_nodes)
		elog(ERROR, "plan node %d not found", plan_node_id);

	/* Accumulate the statistics from all workers. */
	instrument = GetInstrumentationArray(instrumentation);
	instrument += i * instrumentation->num_workers;
	for (n = 0; n < instrumentation->num_workers; ++n)
		InstrAggNode(planstate->instrument, &instrument[n]);

	/*
	 * Also store the per-worker detail.
	 *
	 * Worker instrumentation should be allocated in the same context as the
	 * regular instrumentation information, which is the per-query context.
	 * Switch into per-query memory context.
	 */
	oldcontext = MemoryContextSwitchTo(planstate->state->es_query_cxt);
	ibytes = mul_size(instrumentation->num_workers, sizeof(Instrumentation));
	planstate->worker_instrument =
		palloc(ibytes + offsetof(WorkerInstrumentation, instrument));
	MemoryContextSwitchTo(oldcontext);

	planstate->worker_instrument->num_workers = instrumentation->num_workers;
	memcpy(&planstate->worker_instrument->instrument, instrument, ibytes);

	/* Perform any node-type-specific work that needs to be done. */
	switch (nodeTag(planstate))
	{
		case T_SortState:
			ExecSortRetrieveInstrumentation((SortState *) planstate);
			break;
		case T_HashState:
			ExecHashRetrieveInstrumentation((HashState *) planstate);
			break;
		default:
			break;
	}

	return planstate_tree_walker(planstate, ExecParallelRetrieveInstrumentation,
								 instrumentation);
}

/*
 * Add up the workers' JIT instrumentation from dynamic shared memory.
 */
static void
ExecParallelRetrieveJitInstrumentation(PlanState *planstate,
									   SharedJitInstrumentation *shared_jit)
{
	JitInstrumentation *combined;
	int			ibytes;
	int			n;

	/*
	 * Accumulate worker JIT instrumentation into the combined JIT
	 * instrumentation, allocating it if required.
	 */
	if (!planstate->state->es_jit_worker_instr)
		planstate->state->es_jit_worker_instr =
			MemoryContextAllocZero(planstate->state->es_query_cxt, sizeof(JitInstrumentation));
	combined = planstate->state->es_jit_worker_instr;

	/* Accumulate all the workers' instrumentations. */
	for (n = 0; n < shared_jit->num_workers; ++n)
		InstrJitAgg(combined, &shared_jit->jit_instr[n]);

	/*
	 * Store the per-worker detail.
	 *
	 * Similar to ExecParallelRetrieveInstrumentation(), allocate the
	 * instrumentation in per-query context.
	 */
	ibytes = offsetof(SharedJitInstrumentation, jit_instr)
		+ mul_size(shared_jit->num_workers, sizeof(JitInstrumentation));
	planstate->worker_jit_instrument =
		MemoryContextAlloc(planstate->state->es_query_cxt, ibytes);

	memcpy(planstate->worker_jit_instrument, shared_jit, ibytes);
}

/*
 * Finish parallel execution.  We wait for parallel workers to finish, and
 * accumulate their buffer usage.
 */
void
ExecParallelFinish(ParallelExecutorInfo *pei)
{
	int			nworkers = pei->pcxt->nworkers_launched;
	int			i;

	/* Make this be a no-op if called twice in a row. */
	if (pei->finished)
		return;

	/*
	 * Detach from tuple queues ASAP, so that any still-active workers will
	 * notice that no further results are wanted.
	 */
	if (pei->tqueue != NULL)
	{
		for (i = 0; i < nworkers; i++)
			shm_mq_detach(pei->tqueue[i]);
		pfree(pei->tqueue);
		pei->tqueue = NULL;
	}

	/*
	 * While we're waiting for the workers to finish, let's get rid of the
	 * tuple queue readers.  (Any other local cleanup could be done here too.)
	 */
	if (pei->reader != NULL)
	{
		for (i = 0; i < nworkers; i++)
			DestroyTupleQueueReader(pei->reader[i]);
		pfree(pei->reader);
		pei->reader = NULL;
	}

	/* Now wait for the workers to finish. */
	WaitForParallelWorkersToFinish(pei->pcxt);

	/*
	 * Next, accumulate buffer usage.  (This must wait for the workers to
	 * finish, or we might get incomplete data.)
	 */
	for (i = 0; i < nworkers; i++)
		InstrAccumParallelQuery(&pei->buffer_usage[i]);

	pei->finished = true;
}

/*
 * Accumulate instrumentation, and then clean up whatever ParallelExecutorInfo
 * resources still exist after ExecParallelFinish.  We separate these
 * routines because someone might want to examine the contents of the DSM
 * after ExecParallelFinish and before calling this routine.
 */
void
ExecParallelCleanup(ParallelExecutorInfo *pei)
{
	/* Accumulate instrumentation, if any. */
	if (pei->instrumentation)
		ExecParallelRetrieveInstrumentation(pei->planstate,
											pei->instrumentation);

	/* Accumulate JIT instrumentation, if any. */
	if (pei->jit_instrumentation)
		ExecParallelRetrieveJitInstrumentation(pei->planstate,
											   pei->jit_instrumentation);

	/* Free any serialized parameters. */
	if (DsaPointerIsValid(pei->param_exec))
	{
		dsa_free(pei->area, pei->param_exec);
		pei->param_exec = InvalidDsaPointer;
	}
	if (pei->area != NULL)
	{
		dsa_detach(pei->area);
		pei->area = NULL;
	}
	if (pei->pcxt != NULL)
	{
		DestroyParallelContext(pei->pcxt);
		pei->pcxt = NULL;
	}
	pfree(pei);
}

/*
 * Create a DestReceiver to write tuples we produce to the shm_mq designated
 * for that purpose.
 */
static DestReceiver *
ExecParallelGetReceiver(dsm_segment *seg, shm_toc *toc)
{
	char	   *mqspace;
	shm_mq	   *mq;

	mqspace = shm_toc_lookup(toc, PARALLEL_KEY_TUPLE_QUEUE, false);
	mqspace += ParallelWorkerNumber * PARALLEL_TUPLE_QUEUE_SIZE;
	mq = (shm_mq *) mqspace;
	shm_mq_set_sender(mq, MyProc);
	return CreateTupleQueueDestReceiver(shm_mq_attach(mq, seg, NULL));
}

/*
 * Create a QueryDesc for the PlannedStmt we are to execute, and return it.
 */
static QueryDesc *
ExecParallelGetQueryDesc(shm_toc *toc, DestReceiver *receiver,
						 int instrument_options)
{
	char	   *pstmtspace;
	char	   *paramspace;
	PlannedStmt *pstmt;
	ParamListInfo paramLI;
	char	   *queryString;

	/* Get the query string from shared memory */
	queryString = shm_toc_lookup(toc, PARALLEL_KEY_QUERY_TEXT, false);

	/* Reconstruct leader-supplied PlannedStmt. */
	pstmtspace = shm_toc_lookup(toc, PARALLEL_KEY_PLANNEDSTMT, false);
	pstmt = (PlannedStmt *) stringToNode(pstmtspace);

	/* Reconstruct ParamListInfo. */
	paramspace = shm_toc_lookup(toc, PARALLEL_KEY_PARAMLISTINFO, false);
	paramLI = RestoreParamList(&paramspace);

	/* Create a QueryDesc for the query. */
	return CreateQueryDesc(pstmt,
						   queryString,
						   GetActiveSnapshot(), InvalidSnapshot,
						   receiver, paramLI, NULL, instrument_options);
}

/*
 * Copy instrumentation information from this node and its descendants into
 * dynamic shared memory, so that the parallel leader can retrieve it.
 */
static bool
ExecParallelReportInstrumentation(PlanState *planstate,
								  SharedExecutorInstrumentation *instrumentation)
{
	int			i;
	int			plan_node_id = planstate->plan->plan_node_id;
	Instrumentation *instrument;

	InstrEndLoop(planstate->instrument);

	/*
	 * If we shuffled the plan_node_id values in ps_instrument into sorted
	 * order, we could use binary search here.  This might matter someday if
	 * we're pushing down sufficiently large plan trees.  For now, do it the
	 * slow, dumb way.
	 */
	for (i = 0; i < instrumentation->num_plan_nodes; ++i)
		if (instrumentation->plan_node_id[i] == plan_node_id)
			break;
	if (i >= instrumentation->num_plan_nodes)
		elog(ERROR, "plan node %d not found", plan_node_id);

	/*
	 * Add our statistics to the per-node, per-worker totals.  It's possible
	 * that this could happen more than once if we relaunched workers.
	 */
	instrument = GetInstrumentationArray(instrumentation);
	instrument += i * instrumentation->num_workers;
	Assert(IsParallelWorker());
	Assert(ParallelWorkerNumber < instrumentation->num_workers);
	InstrAggNode(&instrument[ParallelWorkerNumber], planstate->instrument);

	return planstate_tree_walker(planstate, ExecParallelReportInstrumentation,
								 instrumentation);
}

/*
 * Initialize the PlanState and its descendants with the information
 * retrieved from shared memory.  This has to be done once the PlanState
 * is allocated and initialized by executor; that is, after ExecutorStart().
 */
static bool
ExecParallelInitializeWorker(PlanState *planstate, ParallelWorkerContext *pwcxt)
{
	if (planstate == NULL)
		return false;

	switch (nodeTag(planstate))
	{
		case T_SeqScanState:
			if (planstate->plan->parallel_aware)
				ExecSeqScanInitializeWorker((SeqScanState *) planstate, pwcxt);
			break;
		case T_IndexScanState:
			if (planstate->plan->parallel_aware)
				ExecIndexScanInitializeWorker((IndexScanState *) planstate,
											  pwcxt);
			break;
		case T_IndexOnlyScanState:
			if (planstate->plan->parallel_aware)
				ExecIndexOnlyScanInitializeWorker((IndexOnlyScanState *) planstate,
												  pwcxt);
			break;
		case T_ForeignScanState:
			if (planstate->plan->parallel_aware)
				ExecForeignScanInitializeWorker((ForeignScanState *) planstate,
												pwcxt);
			break;
		case T_AppendState:
			if (planstate->plan->parallel_aware)
				ExecAppendInitializeWorker((AppendState *) planstate, pwcxt);
			break;
		case T_CustomScanState:
			if (planstate->plan->parallel_aware)
				ExecCustomScanInitializeWorker((CustomScanState *) planstate,
											   pwcxt);
			break;
		case T_BitmapHeapScanState:
			if (planstate->plan->parallel_aware)
				ExecBitmapHeapInitializeWorker((BitmapHeapScanState *) planstate,
											   pwcxt);
			break;
		case T_HashJoinState:
			if (planstate->plan->parallel_aware)
				ExecHashJoinInitializeWorker((HashJoinState *) planstate,
											 pwcxt);
			break;
		case T_HashState:
			/* even when not parallel-aware, for EXPLAIN ANALYZE */
			ExecHashInitializeWorker((HashState *) planstate, pwcxt);
			break;
		case T_SortState:
			/* even when not parallel-aware, for EXPLAIN ANALYZE */
			ExecSortInitializeWorker((SortState *) planstate, pwcxt);
			break;

		default:
			break;
	}

	return planstate_tree_walker(planstate, ExecParallelInitializeWorker,
								 pwcxt);
}

/*
 * Main entrypoint for parallel query worker processes.
 *
 * We reach this function from ParallelWorkerMain, so the setup necessary to
 * create a sensible parallel environment has already been done;
 * ParallelWorkerMain worries about stuff like the transaction state, combo
 * CID mappings, and GUC values, so we don't need to deal with any of that
 * here.
 *
 * Our job is to deal with concerns specific to the executor.  The parallel
 * group leader will have stored a serialized PlannedStmt, and it's our job
 * to execute that plan and write the resulting tuples to the appropriate
 * tuple queue.  Various bits of supporting information that we need in order
 * to do this are also stored in the dsm_segment and can be accessed through
 * the shm_toc.
 */
void
ParallelQueryMain(dsm_segment *seg, shm_toc *toc)
{
	FixedParallelExecutorState *fpes;
	BufferUsage *buffer_usage;
	DestReceiver *receiver;
	QueryDesc  *queryDesc;
	SharedExecutorInstrumentation *instrumentation;
	SharedJitInstrumentation *jit_instrumentation;
	int			instrument_options = 0;
	void	   *area_space;
	dsa_area   *area;
	ParallelWorkerContext pwcxt;

	/* Get fixed-size state. */
	fpes = shm_toc_lookup(toc, PARALLEL_KEY_EXECUTOR_FIXED, false);

	/* Set up DestReceiver, SharedExecutorInstrumentation, and QueryDesc. */
	receiver = ExecParallelGetReceiver(seg, toc);
	instrumentation = shm_toc_lookup(toc, PARALLEL_KEY_INSTRUMENTATION, true);
	if (instrumentation != NULL)
		instrument_options = instrumentation->instrument_options;
	jit_instrumentation = shm_toc_lookup(toc, PARALLEL_KEY_JIT_INSTRUMENTATION,
										 true);
	queryDesc = ExecParallelGetQueryDesc(toc, receiver, instrument_options);

	/* Setting debug_query_string for individual workers */
	debug_query_string = queryDesc->sourceText;

	/* Report workers' query for monitoring purposes */
	pgstat_report_activity(STATE_RUNNING, debug_query_string);

	/* Attach to the dynamic shared memory area. */
	area_space = shm_toc_lookup(toc, PARALLEL_KEY_DSA, false);
	area = dsa_attach_in_place(area_space, seg);

	/* Start up the executor */
	queryDesc->plannedstmt->jitFlags = fpes->jit_flags;
	ExecutorStart(queryDesc, fpes->eflags);

	/* Special executor initialization steps for parallel workers */
	queryDesc->planstate->state->es_query_dsa = area;
	if (DsaPointerIsValid(fpes->param_exec))
	{
		char	   *paramexec_space;

		paramexec_space = dsa_get_address(area, fpes->param_exec);
		RestoreParamExecParams(paramexec_space, queryDesc->estate);
	}
	pwcxt.toc = toc;
	pwcxt.seg = seg;
	ExecParallelInitializeWorker(queryDesc->planstate, &pwcxt);

	/* Pass down any tuple bound */
	ExecSetTupleBound(fpes->tuples_needed, queryDesc->planstate);

	/*
	 * Prepare to track buffer usage during query execution.
	 *
	 * We do this after starting up the executor to match what happens in the
	 * leader, which also doesn't count buffer accesses that occur during
	 * executor startup.
	 */
	InstrStartParallelQuery();

	/*
	 * Run the plan.  If we specified a tuple bound, be careful not to demand
	 * more tuples than that.
	 */
	ExecutorRun(queryDesc,
				ForwardScanDirection,
				fpes->tuples_needed < 0 ? (int64) 0 : fpes->tuples_needed,
				true);

	/* Shut down the executor */
	ExecutorFinish(queryDesc);
|
|
|
|
|
|
|
|
/* Report buffer usage during parallel execution. */
|
2017-06-05 18:05:42 +02:00
|
|
|
buffer_usage = shm_toc_lookup(toc, PARALLEL_KEY_BUFFER_USAGE, false);
|
2015-09-29 03:55:57 +02:00
|
|
|
InstrEndParallelQuery(&buffer_usage[ParallelWorkerNumber]);
|
|
|
|
|
|
|
|
/* Report instrumentation data if any instrumentation options are set. */
|
|
|
|
if (instrumentation != NULL)
|
|
|
|
ExecParallelReportInstrumentation(queryDesc->planstate,
|
|
|
|
instrumentation);
|
|
|
|
|
2018-09-25 21:54:29 +02:00
|
|
|
/* Report JIT instrumentation data if any */
|
|
|
|
if (queryDesc->estate->es_jit && jit_instrumentation != NULL)
|
|
|
|
{
|
|
|
|
Assert(ParallelWorkerNumber < jit_instrumentation->num_workers);
|
|
|
|
jit_instrumentation->jit_instr[ParallelWorkerNumber] =
|
|
|
|
queryDesc->estate->es_jit->instr;
|
|
|
|
}
|
|
|
|
|

	/* Must do this after capturing instrumentation. */
	ExecutorEnd(queryDesc);

	/* Cleanup. */
	dsa_detach(area);
	FreeQueryDesc(queryDesc);
	receiver->rDestroy(receiver);
}