Hold interrupts while running dsm_detach() callbacks.

While cleaning up after a parallel query or parallel index creation that
created temporary files, we could be interrupted by a statement timeout.
The error handling path would then fail to clean up the files when it
ran dsm_detach() again, because the callback was already popped off the
list.  Prevent this hazard by holding interrupts while the cleanup code
runs.
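
A minimal sketch of the cleanup mechanism involved (assumed backend context, not verbatim PostgreSQL source): dsm_segment, Datum, and on_dsm_detach() are the real API from storage/dsm.h, while TempFileCleanupState, temp_file_cleanup_callback(), remove_temp_files(), and register_temp_file_cleanup() are hypothetical illustrations of how temporary-file cleanup gets hooked onto segment detach.

#include "postgres.h"
#include "storage/dsm.h"

/* Hypothetical shared state describing the temp files to remove. */
typedef struct TempFileCleanupState
{
	char		dirpath[MAXPGPATH];	/* directory holding the temp files */
} TempFileCleanupState;

/* Hypothetical helper that unlinks the files under dirpath. */
extern void remove_temp_files(const char *dirpath);

/*
 * Runs from dsm_detach() when the segment goes away.  If the callback loop
 * is interrupted (e.g. by a statement timeout) after this entry has been
 * popped off the on_detach list but before it has run, the files are never
 * removed -- the hazard this commit closes by holding interrupts around
 * the loop.
 */
static void
temp_file_cleanup_callback(dsm_segment *segment, Datum arg)
{
	TempFileCleanupState *state = (TempFileCleanupState *) DatumGetPointer(arg);

	remove_temp_files(state->dirpath);
}

/* Registered while setting up the parallel operation. */
static void
register_temp_file_cleanup(dsm_segment *seg, TempFileCleanupState *state)
{
	on_dsm_detach(seg, temp_file_cleanup_callback, PointerGetDatum(state));
}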

Thanks to Heikki Linnakangas for this suggestion, and also to Kyotaro
Horiguchi, Masahiko Sawada, Justin Pryzby and Tom Lane for discussion of
this and earlier ideas on how to fix the problem.

Back-patch to all supported releases.

Reported-by: Justin Pryzby <pryzby@telsasoft.com>
Discussion: https://postgr.es/m/20191212180506.GR2082@telsasoft.com
Thomas Munro 2021-02-15 13:32:58 +13:00
parent c3dc311ffd
commit 840eda04eb
1 changed file with 6 additions and 1 deletion

@@ -660,8 +660,12 @@ dsm_detach(dsm_segment *seg)
 	/*
 	 * Invoke registered callbacks.  Just in case one of those callbacks
 	 * throws a further error that brings us back here, pop the callback
-	 * before invoking it, to avoid infinite error recursion.
+	 * before invoking it, to avoid infinite error recursion.  Don't allow
+	 * interrupts while running the individual callbacks in non-error code
+	 * paths, to avoid leaving cleanup work unfinished if we're interrupted by
+	 * a statement timeout or similar.
 	 */
+	HOLD_INTERRUPTS();
 	while (!slist_is_empty(&seg->on_detach))
 	{
 		slist_node *node;
@@ -677,6 +681,7 @@ dsm_detach(dsm_segment *seg)
 		function(seg, arg);
 	}
+	RESUME_INTERRUPTS();

 	/*
 	 * Try to remove the mapping, if one exists.  Normally, there will be, but