Remove unneeded nbtree latestRemovedXid comments.

Discussing the low level issue of nbtree VACUUM and recovery conflicts
in btvacuumpage() now seems inappropriate.  The same issue is discussed
in nbtxlog.h, as well as in a comment block above _bt_delitems_vacuum().

The comment block made more sense when it was part of a broader
discussion of nbtree VACUUM "pin scans".  These were removed by commit
9f83468b.
Peter Geoghegan 2021-09-26 20:25:14 -07:00
parent e6a7600202
commit 895267a326
1 changed file with 1 addition and 26 deletions


@@ -1183,12 +1183,6 @@ backtrack:
 			opaque->btpo_next < scanblkno)
 			backtrack_to = opaque->btpo_next;
 
-		/*
-		 * When each VACUUM begins, it determines an OldestXmin cutoff value.
-		 * Tuples before the cutoff are removed by VACUUM. Scan over all
-		 * items to see which ones need to be deleted according to cutoff
-		 * point using callback.
-		 */
 		ndeletable = 0;
 		nupdatable = 0;
 		minoff = P_FIRSTDATAKEY(opaque);
@@ -1197,6 +1191,7 @@ backtrack:
 		nhtidslive = 0;
 		if (callback)
 		{
+			/* btbulkdelete callback tells us what to delete (or update) */
 			for (offnum = minoff;
 				 offnum <= maxoff;
 				 offnum = OffsetNumberNext(offnum))
@@ -1206,26 +1201,6 @@ backtrack:
 				itup = (IndexTuple) PageGetItem(page,
 												PageGetItemId(page, offnum));
 
-				/*
-				 * Hot Standby assumes that it's okay that XLOG_BTREE_VACUUM
-				 * records do not produce their own conflicts. This is safe
-				 * as long as the callback function only considers whether the
-				 * index tuple refers to pre-cutoff heap tuples that were
-				 * certainly already pruned away during VACUUM's initial heap
-				 * scan by the time we get here. (heapam's XLOG_HEAP2_PRUNE
-				 * records produce conflicts using a latestRemovedXid value
-				 * for the pointed-to heap tuples, so there is no need to
-				 * produce our own conflict now.)
-				 *
-				 * Backends with snapshots acquired after a VACUUM starts but
-				 * before it finishes could have visibility cutoff with a
-				 * later xid than VACUUM's OldestXmin cutoff. These backends
-				 * might happen to opportunistically mark some index tuples
-				 * LP_DEAD before we reach them, even though they may be after
-				 * our cutoff. We don't try to kill these "extra" index
-				 * tuples in _bt_delitems_vacuum(). This keep things simple,
-				 * and allows us to always avoid generating our own conflicts.
-				 */
 				Assert(!BTreeTupleIsPivot(itup));
 				if (!BTreeTupleIsPosting(itup))
 				{