Limit Parallel Hash's bucket array to MaxAllocSize.

Make sure that we don't exceed MaxAllocSize when increasing the number of
buckets.  Perhaps later we'll remove that limit and use DSA_ALLOC_HUGE, but
for now just prevent further increases, as the non-parallel code already
does.  This change avoids the error from bug report #15225.  (A minimal
sketch of the new guard's arithmetic follows the diff below.)

Author: Thomas Munro
Reviewed-By: Tom Lane
Reported-by: Frits Jalvingh
Discussion: https://postgr.es/m/152802081668.26724.16985037679312485972%40wrigleys.postgresql.org
Thomas Munro 2018-06-10 20:30:25 +12:00
parent f6b95ff434
commit 86a2218eb0
1 changed file with 4 additions and 1 deletion

@@ -2818,9 +2818,12 @@ ExecParallelHashTupleAlloc(HashJoinTable hashtable, size_t size,
 		{
 			hashtable->batches[0].shared->ntuples += hashtable->batches[0].ntuples;
 			hashtable->batches[0].ntuples = 0;
+			/* Guard against integer overflow and alloc size overflow */
 			if (hashtable->batches[0].shared->ntuples + 1 >
 				hashtable->nbuckets * NTUP_PER_BUCKET &&
-				hashtable->nbuckets < (INT_MAX / 2))
+				hashtable->nbuckets < (INT_MAX / 2) &&
+				hashtable->nbuckets * 2 <=
+				MaxAllocSize / sizeof(dsa_pointer_atomic))
 			{
 				pstate->growth = PHJ_GROWTH_NEED_MORE_BUCKETS;
 				LWLockRelease(&pstate->lock);
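
For illustration only (not part of the commit): a minimal standalone C sketch
of the arithmetic behind the new guard.  MaxAllocSize is hard-coded to its
usual value from memutils.h, and dsa_pointer_atomic is replaced by a plain
uint64_t stand-in, so the sizes below are assumptions rather than values taken
from the PostgreSQL headers.

/*
 * Illustrative sketch, not PostgreSQL source.  The shared bucket array holds
 * one dsa_pointer_atomic per bucket, so nbuckets may only double while the
 * doubled array still fits in MaxAllocSize and the count cannot overflow int.
 */
#include <limits.h>
#include <stdint.h>
#include <stdio.h>

#define MaxAllocSize ((size_t) 0x3fffffff)	/* usual value from memutils.h */

typedef uint64_t dsa_pointer_atomic;	/* stand-in; the real type is an atomic */

int
main(void)
{
	int		nbuckets = 1024;	/* arbitrary starting bucket count */

	/* Same shape as the condition the commit adds to ExecParallelHashTupleAlloc. */
	while (nbuckets < (INT_MAX / 2) &&
		   nbuckets * 2 <= MaxAllocSize / sizeof(dsa_pointer_atomic))
		nbuckets *= 2;

	printf("bucket growth stops at %d buckets (%zu bytes)\n",
		   nbuckets, nbuckets * sizeof(dsa_pointer_atomic));
	return 0;
}

With an 8-byte entry this stops growth at 2^26 buckets (512 MB of bucket
array), which is in the same ballpark as the cap the non-parallel code derives
from MaxAllocSize / sizeof(HashJoinTuple).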