Make sure that hash join's bulk-tuple-transfer loops are interruptible.

The loops in ExecHashJoinNewBatch(), ExecHashIncreaseNumBatches(), and
ExecHashRemoveNextSkewBucket() are all capable of iterating over many
tuples without ever doing a CHECK_FOR_INTERRUPTS, so that the backend
might fail to respond to SIGINT or SIGTERM for an unreasonably long time.
Fix that.  In the case of ExecHashJoinNewBatch(), it seems useful to put
the added CHECK_FOR_INTERRUPTS into ExecHashJoinGetSavedTuple() rather
than directly in the loop, because that will also ensure that both
principal code paths through ExecHashJoinOuterGetTuple() will do a
CHECK_FOR_INTERRUPTS, which seems like a good idea to avoid surprises.
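
As a standalone illustration of the pattern (a minimal sketch; none of
the names below are backend code), the fix is simply to poll for
pending interrupts once per iteration of any loop that can run for a
long time.  In the backend, CHECK_FOR_INTERRUPTS() tests the
InterruptPending flag and calls ProcessInterrupts(); the sketch models
that with a flag set from a SIGINT handler:

    #include <signal.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* stand-in for the backend's InterruptPending flag */
    static volatile sig_atomic_t interrupt_pending = 0;

    static void
    sigint_handler(int signo)
    {
        (void) signo;
        interrupt_pending = 1;
    }

    /* stand-in for CHECK_FOR_INTERRUPTS(): poll the flag, bail out */
    static void
    check_for_interrupts(void)
    {
        if (interrupt_pending)
        {
            fprintf(stderr, "canceled\n");
            exit(1);
        }
    }

    int
    main(void)
    {
        signal(SIGINT, sigint_handler);

        /* a bulk loop that would otherwise ignore SIGINT to the end */
        for (long long i = 0; i < 4000000000LL; i++)
        {
            /* ... transfer one tuple ... */

            /* allow this loop to be cancellable */
            check_for_interrupts();
        }
        return 0;
    }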

Back-patch to all supported branches.

Tom Lane and Thomas Munro

Discussion: https://postgr.es/m/6044.1487121720@sss.pgh.pa.us
commit f2ec57dee9
parent 2b18743614
Tom Lane	2017-02-15 16:40:05 -05:00

2 changed files with 13 additions and 0 deletions

src/backend/executor/nodeHash.c

@@ -720,6 +720,9 @@ ExecHashIncreaseNumBatches(HashJoinTable hashtable)
 
 			/* next tuple in this chunk */
 			idx += MAXALIGN(hashTupleSize);
+
+			/* allow this loop to be cancellable */
+			CHECK_FOR_INTERRUPTS();
 		}
 
 		/* we're done with this chunk - free it and proceed to the next one */
@@ -1599,6 +1602,9 @@ ExecHashRemoveNextSkewBucket(HashJoinTable hashtable)
 		}
 
 		hashTuple = nextHashTuple;
+
+		/* allow this loop to be cancellable */
+		CHECK_FOR_INTERRUPTS();
 	}
 
 	/*

src/backend/executor/nodeHashjoin.c

@@ -856,6 +856,13 @@ ExecHashJoinGetSavedTuple(HashJoinState *hjstate,
 	size_t		nread;
 	MinimalTuple tuple;
 
+	/*
+	 * We check for interrupts here because this is typically taken as an
+	 * alternative code path to an ExecProcNode() call, which would include
+	 * such a check.
+	 */
+	CHECK_FOR_INTERRUPTS();
+
 	/*
 	 * Since both the hash value and the MinimalTuple length word are uint32,
 	 * we can read them both in one BufFileRead() call without any type
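
For context, a simplified sketch of why this placement covers both
principal paths (hypothetical stand-in names; only the control flow
mirrors ExecHashJoinOuterGetTuple()): for the first batch the outer
tuple comes from ExecProcNode(), which performs its own interrupt
check, while later batches re-read tuples from the batch temp file
through the saved-tuple reader, which now checks as well.

    /* hypothetical stand-ins; not the real executor functions */
    typedef struct Tuple Tuple;

    static volatile int interrupt_pending;

    static void
    check_for_interrupts(void)
    {
        if (interrupt_pending)
        {
            /* in the backend: ProcessInterrupts() */
        }
    }

    /* models ExecProcNode(): pulling from the plan already checks */
    static Tuple *
    next_tuple_from_plan(void)
    {
        check_for_interrupts();
        return NULL;            /* stub: next outer tuple */
    }

    /* models ExecHashJoinGetSavedTuple(): checks too, per this patch */
    static Tuple *
    next_tuple_from_batch_file(void)
    {
        check_for_interrupts();
        return NULL;            /* stub: tuple re-read from temp file */
    }

    /* models ExecHashJoinOuterGetTuple(): both paths hit a check */
    static Tuple *
    outer_get_tuple(int curbatch)
    {
        return (curbatch == 0) ? next_tuple_from_plan()
                               : next_tuple_from_batch_file();
    }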