From 5f12bc94dcc60403bdfec6ea3d97aa616a512d9e Mon Sep 17 00:00:00 2001
From: Tom Lane
Date: Tue, 13 Apr 2021 12:17:24 -0400
Subject: [PATCH] Avoid improbable PANIC during heap_update.

heap_update needs to clear any existing "all visible" flag on the old
tuple's page (and on the new page too, if different).  Per coding rules,
to do this it must acquire pin on the appropriate visibility-map page
while not holding exclusive buffer lock; which creates a race condition
since someone else could set the flag whenever we're not holding the
buffer lock.  The code is supposed to handle that by re-checking the
flag after acquiring buffer lock and retrying if it became set.
However, one code path through heap_update itself, as well as one in
its subroutine RelationGetBufferForTuple, failed to do this.  The end
result, in the unlikely event that a concurrent VACUUM did set the flag
while we're transiently not holding lock, is a non-recurring
"PANIC: wrong buffer passed to visibilitymap_clear" failure.

This has been seen a few times in the buildfarm since recent VACUUM
changes that added code paths that could set the all-visible flag while
holding only exclusive buffer lock.  Previously, the flag was (usually?)
set only after doing LockBufferForCleanup, which would insist on buffer
pin count zero, thus preventing the flag from becoming set partway
through heap_update.  However, it's clear that it's heap_update not
VACUUM that's at fault here.

What's less clear is whether there is any hazard from these bugs in
released branches.  heap_update is certainly violating API expectations,
but if there is no code path that can set all-visible without a cleanup
lock then it's only a latent bug.  That's not 100% certain though,
besides which we should worry about extensions or future back-patch
fixes that could introduce such code paths.

I chose to back-patch to v12.  Fixing RelationGetBufferForTuple before
that would require also back-patching portions of older fixes (notably
0d1fe9f74), which is more code churn than seems prudent to fix a
hypothetical issue.

Discussion: https://postgr.es/m/2247102.1618008027@sss.pgh.pa.us
---
 src/backend/access/heap/heapam.c | 44 ++++++++++++++++++++------------
 src/backend/access/heap/hio.c    | 24 ++++++++++++-----
 2 files changed, 45 insertions(+), 23 deletions(-)

diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c
index 86a2b8b57d..666eec36d3 100644
--- a/src/backend/access/heap/heapam.c
+++ b/src/backend/access/heap/heapam.c
@@ -3476,7 +3476,7 @@ l2:
          * overhead would be unchanged, that doesn't seem necessarily
          * worthwhile.
          */
-        if (PageIsAllVisible(BufferGetPage(buffer)) &&
+        if (PageIsAllVisible(page) &&
             visibilitymap_clear(relation, block, vmbuffer,
                                 VISIBILITYMAP_ALL_FROZEN))
             cleared_all_frozen = true;
@@ -3538,36 +3538,46 @@ l2:
          * first".  To implement this, we must do RelationGetBufferForTuple
          * while not holding the lock on the old page, and we must rely on it
          * to get the locks on both pages in the correct order.
+         *
+         * Another consideration is that we need visibility map page pin(s) if
+         * we will have to clear the all-visible flag on either page.  If we
+         * call RelationGetBufferForTuple, we rely on it to acquire any such
+         * pins; but if we don't, we have to handle that here.  Hence we need
+         * a loop.
          */
-        if (newtupsize > pagefree)
-        {
-            /* Assume there's no chance to put heaptup on same page. */
-            newbuf = RelationGetBufferForTuple(relation, heaptup->t_len,
-                                               buffer, 0, NULL,
-                                               &vmbuffer_new, &vmbuffer);
-        }
-        else
+        for (;;)
         {
+            if (newtupsize > pagefree)
+            {
+                /* It doesn't fit, must use RelationGetBufferForTuple. */
+                newbuf = RelationGetBufferForTuple(relation, heaptup->t_len,
+                                                   buffer, 0, NULL,
+                                                   &vmbuffer_new, &vmbuffer);
+                /* We're all done. */
+                break;
+            }
+            /* Acquire VM page pin if needed and we don't have it. */
+            if (vmbuffer == InvalidBuffer && PageIsAllVisible(page))
+                visibilitymap_pin(relation, block, &vmbuffer);
             /* Re-acquire the lock on the old tuple's page. */
             LockBuffer(buffer, BUFFER_LOCK_EXCLUSIVE);
             /* Re-check using the up-to-date free space */
             pagefree = PageGetHeapFreeSpace(page);
-            if (newtupsize > pagefree)
+            if (newtupsize > pagefree ||
+                (vmbuffer == InvalidBuffer && PageIsAllVisible(page)))
             {
                 /*
-                 * Rats, it doesn't fit anymore.  We must now unlock and
-                 * relock to avoid deadlock.  Fortunately, this path should
-                 * seldom be taken.
+                 * Rats, it doesn't fit anymore, or somebody just now set the
+                 * all-visible flag.  We must now unlock and loop to avoid
+                 * deadlock.  Fortunately, this path should seldom be taken.
                  */
                 LockBuffer(buffer, BUFFER_LOCK_UNLOCK);
-                newbuf = RelationGetBufferForTuple(relation, heaptup->t_len,
-                                                   buffer, 0, NULL,
-                                                   &vmbuffer_new, &vmbuffer);
             }
             else
             {
-                /* OK, it fits here, so we're done. */
+                /* We're all done. */
                 newbuf = buffer;
+                break;
             }
         }
     }
diff --git a/src/backend/access/heap/hio.c b/src/backend/access/heap/hio.c
index d41d318eef..cf2d7743b5 100644
--- a/src/backend/access/heap/hio.c
+++ b/src/backend/access/heap/hio.c
@@ -284,9 +284,13 @@ RelationAddExtraBlocks(Relation relation, BulkInsertState bistate)
  * happen if space is freed in that page after heap_update finds there's not
  * enough there).  In that case, the page will be pinned and locked only once.
  *
- * For the vmbuffer and vmbuffer_other arguments, we avoid deadlock by
- * locking them only after locking the corresponding heap page, and taking
- * no further lwlocks while they are locked.
+ * We also handle the possibility that the all-visible flag will need to be
+ * cleared on one or both pages.  If so, pin on the associated visibility map
+ * page must be acquired before acquiring buffer lock(s), to avoid possibly
+ * doing I/O while holding buffer locks.  The pins are passed back to the
+ * caller using the input-output arguments vmbuffer and vmbuffer_other.
+ * Note that in some cases the caller might have already acquired such pins,
+ * which is indicated by these arguments not being InvalidBuffer on entry.
  *
  * We normally use FSM to help us find free space.  However,
  * if HEAP_INSERT_SKIP_FSM is specified, we just append a new empty page to
@@ -631,6 +635,8 @@ loop:
     if (otherBuffer != InvalidBuffer)
     {
         Assert(otherBuffer != buffer);
+        targetBlock = BufferGetBlockNumber(buffer);
+        Assert(targetBlock > otherBlock);
 
         if (unlikely(!ConditionalLockBuffer(otherBuffer)))
         {
@@ -639,10 +645,16 @@ loop:
             LockBuffer(buffer, BUFFER_LOCK_EXCLUSIVE);
 
             /*
-             * Because the buffer was unlocked for a while, it's possible,
-             * although unlikely, that the page was filled.  If so, just retry
-             * from start.
+             * Because the buffers were unlocked for a while, it's possible,
+             * although unlikely, that an all-visible flag became set or that
+             * somebody used up the available space in the new page.  We can
+             * use GetVisibilityMapPins to deal with the first case.  In the
+             * second case, just retry from start.
              */
+            GetVisibilityMapPins(relation, otherBuffer, buffer,
+                                 otherBlock, targetBlock, vmbuffer_other,
+                                 vmbuffer);
+
             if (len > PageGetHeapFreeSpace(page))
             {
                 LockBuffer(otherBuffer, BUFFER_LOCK_UNLOCK);
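
A note for readers not steeped in the buffer-manager rules the patch is
enforcing: the shape of the fix in both files is "pin the visibility-map page
first, take the buffer lock second, re-check the flag third, and loop if the
re-check fails".  The sketch below illustrates that protocol in isolation.
It is not part of the patch: the helper name
lock_buffer_and_clear_all_visible() is invented for illustration, and it
assumes the caller already holds a pin (but no lock) on the heap buffer, with
*vmbuffer either InvalidBuffer or already pinned on the matching VM page.

/*
 * Illustrative sketch only -- NOT part of the patch.  Shows the
 * pin-then-recheck protocol that both fixed code paths now follow.
 * Assumes the caller holds a pin (but no lock) on "buf", and that
 * "*vmbuffer", if valid, is pinned on the VM page covering "buf".
 */
#include "postgres.h"

#include "access/visibilitymap.h"
#include "storage/bufmgr.h"
#include "storage/bufpage.h"
#include "utils/rel.h"

static void
lock_buffer_and_clear_all_visible(Relation rel, Buffer buf, Buffer *vmbuffer)
{
    BlockNumber block = BufferGetBlockNumber(buf);

    for (;;)
    {
        /*
         * If the page looks all-visible and we have no VM pin yet, get one
         * now, before taking the buffer lock: visibilitymap_pin() can do
         * I/O, which must not happen while a buffer lock is held.
         */
        if (*vmbuffer == InvalidBuffer &&
            PageIsAllVisible(BufferGetPage(buf)))
            visibilitymap_pin(rel, block, vmbuffer);

        LockBuffer(buf, BUFFER_LOCK_EXCLUSIVE);

        /*
         * Re-check under the lock.  If we hold the pin, or the flag is
         * (still) clear, we can proceed; nobody can set the flag while we
         * hold the exclusive lock.
         */
        if (*vmbuffer != InvalidBuffer ||
            !PageIsAllVisible(BufferGetPage(buf)))
            break;

        /*
         * The flag became set while we weren't holding the lock and we have
         * no VM pin: unlock and retry.  Skipping this retry is what produced
         * "wrong buffer passed to visibilitymap_clear".
         */
        LockBuffer(buf, BUFFER_LOCK_UNLOCK);
    }

    /*
     * Exclusive lock held; if the flag is set we also hold the matching VM
     * page pin, so visibilitymap_clear() will accept the buffer.  (The real
     * callers in heapam.c also update PD_ALL_VISIBLE and WAL-log as
     * appropriate; that is omitted here.)  The buffer stays locked for the
     * caller's further work.
     */
    if (PageIsAllVisible(BufferGetPage(buf)))
        visibilitymap_clear(rel, block, *vmbuffer, VISIBILITYMAP_VALID_BITS);
}

The same ordering is what the new for (;;) loop in heap_update and the added
GetVisibilityMapPins() call in RelationGetBufferForTuple implement:
visibilitymap_pin() may read a VM page from disk, so it must run before the
buffer lock is (re)taken, and because the flag can change during the window
when the lock is not held, the check has to be repeated after relocking.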