Fix array size allocation for HashAggregate hash keys.

When there were duplicate columns in the hash key list, the array
sizes could be miscomputed, resulting in access off the end of the
array. Adjust the computation to ensure the array is always large
enough.
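
(To make the arithmetic concrete, here is a minimal standalone sketch with
invented column numbers, not taken from the patch: the hash key list names
column 3 twice while column 7 is also needed, so an array sized for the two
distinct columns receives three entries.)

#include <assert.h>

/* Illustration only, not PostgreSQL code: duplicate hash keys {3, 3}
 * plus one other needed column, 7. */
int
main(void)
{
	int		distinct = 2;		/* distinct columns {3, 7}: the old array size */
	int		written = 2 + 1;	/* one entry per hash key, plus column 7 */

	assert(written > distinct);	/* the third store runs off the array */
	return 0;
}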

(I considered whether the duplicates could be removed in planning, but
I can't rule out the possibility that duplicate columns might have
different hash functions assigned. Simpler to just make sure it works
at execution time regardless.)

Bug apparently introduced in fc4b3dea2 as part of narrowing down the
tuples stored in the hashtable. Reported by Colm McHugh of Salesforce,
though I didn't use their patch. Backpatch to version 10 where
the bug was introduced.

Discussion: https://postgr.es/m/CAFeeJoKKu0u+A_A9R9316djW-YW3-+Gtgvy3ju655qRHR3jtdA@mail.gmail.com
Andrew Gierth 2019-05-23 15:26:01 +01:00
parent 156c0c2dff
commit 44e95b5728
3 changed files with 47 additions and 7 deletions

src/backend/executor/nodeAgg.c

@@ -1312,9 +1312,14 @@ build_hash_table(AggState *aggstate)
  * by themselves, and secondly ctids for row-marks.
  *
  * To eliminate duplicates, we build a bitmapset of the needed columns, and
- * then build an array of the columns included in the hashtable. Note that
- * the array is preserved over ExecReScanAgg, so we allocate it in the
- * per-query context (unlike the hash table itself).
+ * then build an array of the columns included in the hashtable. We might
+ * still have duplicates if the passed-in grpColIdx has them, which can happen
+ * in edge cases from semijoins/distinct; these can't always be removed,
+ * because it's not certain that the duplicate cols will be using the same
+ * hash function.
+ *
+ * Note that the array is preserved over ExecReScanAgg, so we allocate it in
+ * the per-query context (unlike the hash table itself).
  */
 static void
 find_hash_columns(AggState *aggstate)
@@ -1335,6 +1340,7 @@ find_hash_columns(AggState *aggstate)
 		AttrNumber *grpColIdx = perhash->aggnode->grpColIdx;
 		List	   *hashTlist = NIL;
 		TupleDesc	hashDesc;
+		int			maxCols;
 		int			i;
 
 		perhash->largestGrpColIdx = 0;
@@ -1359,15 +1365,24 @@ find_hash_columns(AggState *aggstate)
 				colnos = bms_del_member(colnos, attnum);
 			}
 		}
-		/* Add in all the grouping columns */
-		for (i = 0; i < perhash->numCols; i++)
-			colnos = bms_add_member(colnos, grpColIdx[i]);
+
+		/*
+		 * Compute maximum number of input columns accounting for possible
+		 * duplications in the grpColIdx array, which can happen in some edge
+		 * cases where HashAggregate was generated as part of a semijoin or a
+		 * DISTINCT.
+		 */
+		maxCols = bms_num_members(colnos) + perhash->numCols;
 
 		perhash->hashGrpColIdxInput =
-			palloc(bms_num_members(colnos) * sizeof(AttrNumber));
+			palloc(maxCols * sizeof(AttrNumber));
 		perhash->hashGrpColIdxHash =
 			palloc(perhash->numCols * sizeof(AttrNumber));
 
+		/* Add all the grouping columns to colnos */
+		for (i = 0; i < perhash->numCols; i++)
+			colnos = bms_add_member(colnos, grpColIdx[i]);
+
 		/*
 		 * First build mapping for columns directly hashed. These are the
 		 * first, because they'll be accessed when computing hash values and
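
For illustration, a standalone sketch of the corrected sizing logic
(hypothetical code, not part of the commit: a plain bit mask stands in for
PostgreSQL's Bitmapset, malloc for palloc, and __builtin_popcount, a
GCC/Clang builtin, for bms_num_members). It mirrors the fill order of
find_hash_columns: every grpColIdx entry consumes a slot, duplicates
included, then each remaining distinct column consumes one, so maxCols is
always sufficient where the distinct-column count alone was not.

#include <stdio.h>
#include <stdlib.h>

int
main(void)
{
	unsigned	colnos = 1u << 7;		/* other needed column: 7 */
	int			grpColIdx[] = {3, 3};	/* duplicate hash key columns */
	int			numCols = 2;
	/* the fix: one slot per hash key plus the distinct non-key columns */
	int			maxCols = __builtin_popcount(colnos) + numCols;	/* 1 + 2 */
	int		   *input = malloc(maxCols * sizeof(int));
	int			n = 0;

	/* first, one entry per grouping column, duplicates included */
	for (int i = 0; i < numCols; i++)
	{
		input[n++] = grpColIdx[i];
		colnos &= ~(1u << grpColIdx[i]);	/* this column is now covered */
	}
	/* then one entry per remaining needed column */
	for (int i = 0; i < 32; i++)
	{
		if (colnos & (1u << i))
			input[n++] = i;
	}

	/* 3 entries written into 3 slots; sizing by the distinct-column
	 * count of {3, 7} alone (2) would have overflowed on the third. */
	printf("wrote %d entries into %d slots\n", n, maxCols);
	free(input);
	return 0;
}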

src/test/regress/expected/aggregates.out

@@ -2270,3 +2270,21 @@ select v||'a', case when v||'a' = 'aa' then 1 else 0 end, count(*)
  ba       |    0 |     1
 (2 rows)
 
+-- Make sure that generation of HashAggregate for uniqification purposes
+-- does not lead to array overflow due to unexpected duplicate hash keys
+-- see CAFeeJoKKu0u+A_A9R9316djW-YW3-+Gtgvy3ju655qRHR3jtdA@mail.gmail.com
+explain (costs off)
+  select 1 from tenk1
+   where (hundred, thousand) in (select twothousand, twothousand from onek);
+                         QUERY PLAN
+-------------------------------------------------------------
+ Hash Join
+   Hash Cond: (tenk1.hundred = onek.twothousand)
+   ->  Seq Scan on tenk1
+         Filter: (hundred = thousand)
+   ->  Hash
+         ->  HashAggregate
+               Group Key: onek.twothousand, onek.twothousand
+               ->  Seq Scan on onek
+(8 rows)
+

src/test/regress/sql/aggregates.sql

@@ -988,3 +988,10 @@ select v||'a', case v||'a' when 'aa' then 1 else 0 end, count(*)
 select v||'a', case when v||'a' = 'aa' then 1 else 0 end, count(*)
   from unnest(array['a','b']) u(v)
   group by v||'a' order by 1;
+
+-- Make sure that generation of HashAggregate for uniqification purposes
+-- does not lead to array overflow due to unexpected duplicate hash keys
+-- see CAFeeJoKKu0u+A_A9R9316djW-YW3-+Gtgvy3ju655qRHR3jtdA@mail.gmail.com
+explain (costs off)
+  select 1 from tenk1
+   where (hundred, thousand) in (select twothousand, twothousand from onek);