Fix race in SSI interaction with bitmap heap scan.

When performing a bitmap heap scan, we don't want to miss concurrent
writes that occurred after we observed the heap's rs_nblocks, but before
we took predicate locks on index pages.  Therefore, we can't skip
fetching any heap tuples that are referenced by the index, because we
need to test them all with CheckForSerializableConflictOut().  The
old optimization that would ignore any references to blocks >=
rs_nblocks gets in the way of that requirement, because it means that
concurrent writes in that window are ignored.
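
The shape of the fix is to take that shortcut only outside SERIALIZABLE
isolation.  A minimal sketch of the guarded test (the actual change is in
the diff below; tbmres, scan, and node are the bitmap heap scan's local
state in BitmapHeapNext()):

    /* Skip out-of-range blocks only when SSI conflict checks aren't needed. */
    if (!IsolationIsSerializable() &&
        tbmres->blockno >= scan->rs_nblocks)
    {
        node->tbmres = tbmres = NULL;   /* ignore this entry, keep scanning */
        continue;
    }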

Removing that optimization shouldn't affect correctness at any isolation
level, because any new tuples shouldn't be visible to an MVCC snapshot.
There also shouldn't be any error-causing references to heap blocks past
the end, because we should have held at least an AccessShareLock on the
table before the index scan, and the table can't get smaller while our
transaction is running.  For now, though, we'll keep the optimization at
lower isolation levels to avoid making unnecessary changes in a bug fix.
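
Because the skip is no longer taken under SERIALIZABLE, the later sanity
check on the block number must tolerate references at or beyond the
rs_nblocks recorded at scan start.  A minimal sketch of the relaxed
assertion, matching the bitgetpage() hunk in the diff below:

    /* Past-the-end block references are now expected under SERIALIZABLE. */
    Assert(IsolationIsSerializable() || page < scan->rs_nblocks);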

Back-patch to all supported releases.  In release 11, the code is in a
different place but not fundamentally different.  Fixes one aspect of
bug #17949.

Reported-by: Artem Anisimov <artem.anisimov.255@gmail.com>
Reviewed-by: Dmitry Dolgov <9erthalion6@gmail.com>
Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
Discussion: https://postgr.es/m/17949-a0f17035294a55e2%40postgresql.org
Thomas Munro 2023-07-03 16:18:20 +12:00
parent 0048c3b515
commit 814f3c8e48
1 changed file with 7 additions and 3 deletions

@@ -39,6 +39,7 @@
 #include "access/relscan.h"
 #include "access/transam.h"
+#include "access/xact.h"
 #include "access/visibilitymap.h"
 #include "executor/execdebug.h"
 #include "executor/nodeBitmapHeapscan.h"
@@ -217,9 +218,12 @@ BitmapHeapNext(BitmapHeapScanState *node)
 			 * Ignore any claimed entries past what we think is the end of the
 			 * relation.  (This is probably not necessary given that we got at
 			 * least AccessShareLock on the table before performing any of the
-			 * indexscans, but let's be safe.)
+			 * indexscans, but let's be safe.)  We don't take this optimization
+			 * in SERIALIZABLE isolation though, as we need to examine all
+			 * invisible tuples reachable by the index.
 			 */
-			if (tbmres->blockno >= scan->rs_nblocks)
+			if (!IsolationIsSerializable() &&
+				tbmres->blockno >= scan->rs_nblocks)
 			{
 				node->tbmres = tbmres = NULL;
 				continue;
@@ -390,7 +394,7 @@ bitgetpage(HeapScanDesc scan, TBMIterateResult *tbmres)
 	/*
 	 * Acquire pin on the target heap page, trading in any pin we held before.
 	 */
-	Assert(page < scan->rs_nblocks);
+	Assert(IsolationIsSerializable() || page < scan->rs_nblocks);
 	scan->rs_cbuf = ReleaseAndReadBuffer(scan->rs_cbuf,
 										 scan->rs_rd,