Allow parallel query for prepared statements with generic plans.

This was always intended to work, but due to an oversight in
max_parallel_hazard_walker, it didn't.  In testing, we missed the
fact that it was only working for custom plans, where the parameter
value has been substituted for the parameter itself early enough
that everything worked.  In a generic plan, the Param node survives
and must be treated as parallel-safe.  SerializeParamList provides
for the transmission of parameter values to workers.

Amit Kapila with help from Kuntal Ghosh.  Some changes by me.

Discussion: http://postgr.es/m/CAA4eK1+_BuZrmVCeua5Eqnm4Co9DAXdM5HPAOE2J19ePbR912Q@mail.gmail.com
Robert Haas 2017-10-27 22:22:39 +02:00
parent 6784d7a1dc
commit 682ce911f8
4 changed files with 37 additions and 7 deletions
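
A quick way to observe the behavior the commit message describes, sketched
here for illustration and not part of the commit: force a generic plan for a
prepared statement and check that the plan still uses workers.  This assumes
the standard regression table tenk1; plan_cache_mode exists only on
PostgreSQL 12 and later, while versions contemporary with this commit
normally switch to a generic plan only after about five executions.

    set plan_cache_mode = force_generic_plan;  -- PostgreSQL 12+ only
    prepare q(integer) as select count(*) from tenk1 where hundred > $1;
    explain (costs off) execute q(1);
    -- expect a Gather over a Parallel Seq Scan, with Filter: (hundred > $1)
    execute q(1);
    deallocate q;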

src/backend/optimizer/util/clauses.c

@@ -1223,13 +1223,17 @@ max_parallel_hazard_walker(Node *node, max_parallel_hazard_context *context)
 	/*
 	 * We can't pass Params to workers at the moment either, so they are also
-	 * parallel-restricted, unless they are PARAM_EXEC Params listed in
-	 * safe_param_ids, meaning they could be generated within the worker.
+	 * parallel-restricted, unless they are PARAM_EXTERN Params or are
+	 * PARAM_EXEC Params listed in safe_param_ids, meaning they could be
+	 * generated within the worker.
 	 */
 	else if (IsA(node, Param))
 	{
 		Param	   *param = (Param *) node;
 
+		if (param->paramkind == PARAM_EXTERN)
+			return false;
+
 		if (param->paramkind != PARAM_EXEC ||
 			!list_member_int(context->safe_param_ids, param->paramid))
 		{
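
In a custom plan the parameter value replaces the Param before this walker
ever runs, which is why parallelism already worked there; in a generic plan
the PARAM_EXTERN Param reaches this check and is now reported parallel-safe.
A sketch of the visible difference (illustrative, not part of the commit;
it reuses the tenk1_count prepared statement added by the new regression
test below):

    -- Custom plan: the literal is substituted before planning, so no
    -- Param survives and the query could always run in parallel:
    --     Filter: (hundred > 1)
    --
    -- Generic plan (after repeated executions, or forced with
    -- plan_cache_mode = force_generic_plan on PostgreSQL 12+): the Param
    -- survives into the plan and, with this fix, no longer blocks workers:
    --     Filter: (hundred > $1)
    explain (costs off) execute tenk1_count(1);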

src/pl/plpgsql/src/pl_exec.c

@@ -6588,8 +6588,8 @@ exec_save_simple_expr(PLpgSQL_expr *expr, CachedPlan *cplan)
 	 * force_parallel_mode is on, the planner might've stuck a Gather node
 	 * atop that.  The simplest way to deal with this is to look through the
 	 * Gather node.  The Gather node's tlist would normally contain a Var
-	 * referencing the child node's output ... but setrefs.c might also have
-	 * copied a Const as-is.
+	 * referencing the child node's output, but it could also be a Param, or
+	 * it could be a Const that setrefs.c copied as-is.
 	 */
 	plan = stmt->planTree;
 	for (;;)
@@ -6616,9 +6616,9 @@ exec_save_simple_expr(PLpgSQL_expr *expr, CachedPlan *cplan)
 		/* If setrefs.c copied up a Const, no need to look further */
 		if (IsA(tle_expr, Const))
 			break;
-		/* Otherwise, it better be an outer Var */
-		Assert(IsA(tle_expr, Var));
-		Assert(((Var *) tle_expr)->varno == OUTER_VAR);
+		/* Otherwise, it had better be a Param or an outer Var */
+		Assert(IsA(tle_expr, Param) || (IsA(tle_expr, Var) &&
+										((Var *) tle_expr)->varno == OUTER_VAR));
 		/* Descend to the child node */
 		plan = plan->lefttree;
 	}
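
This plpgsql change matters because exec_save_simple_expr() walks the plan
for a candidate "simple expression", and now that PARAM_EXTERN Params are
parallel-safe, force_parallel_mode can put a Gather on top of even a trivial
expression, with the Param copied into the Gather's tlist.  A minimal
reproduction sketch, mine rather than the commit's: it assumes the
force_parallel_mode GUC (renamed debug_parallel_query in PostgreSQL 16),
and whether the tlist ends up holding a bare Param depends on planner
details, so treat it as illustrative.

    set force_parallel_mode = on;  -- debug_parallel_query on PostgreSQL 16+

    create function echo_int(x integer) returns integer
    language plpgsql as $$
    begin
        return x;  -- "x" is cached as a simple expression; the planner may
                   -- stick a Gather atop it, with the Param in the tlist
    end;
    $$;

    select echo_int(42);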

src/test/regress/expected/select_parallel.out

@@ -101,6 +101,26 @@ explain (costs off)
          ->  Parallel Index Only Scan using tenk1_unique1 on tenk1
 (5 rows)
 
+-- test prepared statement
+prepare tenk1_count(integer) As select count((unique1)) from tenk1 where hundred > $1;
+explain (costs off) execute tenk1_count(1);
+                  QUERY PLAN                  
+----------------------------------------------
+ Finalize Aggregate
+   ->  Gather
+         Workers Planned: 4
+         ->  Partial Aggregate
+               ->  Parallel Seq Scan on tenk1
+                     Filter: (hundred > 1)
+(6 rows)
+
+execute tenk1_count(1);
+ count 
+-------
+  9800
+(1 row)
+
+deallocate tenk1_count;
 -- test parallel plans for queries containing un-correlated subplans.
 alter table tenk2 set (parallel_workers = 0);
 explain (costs off)

src/test/regress/sql/select_parallel.sql

@@ -39,6 +39,12 @@ explain (costs off)
 select sum(parallel_restricted(unique1)) from tenk1
   group by(parallel_restricted(unique1));
 
+-- test prepared statement
+prepare tenk1_count(integer) As select count((unique1)) from tenk1 where hundred > $1;
+explain (costs off) execute tenk1_count(1);
+execute tenk1_count(1);
+deallocate tenk1_count;
+
 -- test parallel plans for queries containing un-correlated subplans.
 alter table tenk2 set (parallel_workers = 0);
 explain (costs off)