Standardize ItemIdData terminology.

The term "item pointer" should not be used to refer to ItemIdData
variables, since that is needlessly ambiguous.  Only
ItemPointerData/ItemPointer variables should be called item pointers.

To fix, establish the convention that ItemIdData variables should always
be referred to either as "item identifiers" or "line pointers".  The
term "item identifier" already predominates in docs and translatable
messages, and so should be the preferred alternative there.

Discussion: https://postgr.es/m/CAH2-Wz=c=MZQjUzde3o9+2PLAPuHTpVZPPdYxN=E4ndQ2--8ew@mail.gmail.com
Peter Geoghegan 2019-05-13 15:53:39 -07:00
parent 08ca9d7fec
commit ae7291acbc
14 changed files with 52 additions and 73 deletions
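As background for the hunks below, a minimal sketch of the two types whose naming the commit separates (illustrative declarations only; the variable names are hypothetical):

	#include "postgres.h"
	#include "storage/itemid.h"
	#include "storage/itemptr.h"

	/* "item pointer": ItemPointerData, a (block number, offset number)
	 * pair naming a tuple, e.g. a heap tuple's t_self or t_ctid */
	ItemPointerData tid;

	/* "line pointer" (or "item identifier" in docs and translatable
	 * messages): ItemIdData, one slot of a page's line pointer array */
	ItemIdData	lp;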


@@ -2164,7 +2164,7 @@ invariant_l_offset(BtreeCheckState *state, BTScanInsert key,
* Does the invariant hold that the key is less than or equal to a given upper
* bound offset item?
*
- * Caller should have verified that upperbound's item pointer is consistent
+ * Caller should have verified that upperbound's line pointer is consistent
* using PageGetItemIdCareful() call.
*
* If this function returns false, convention is that caller throws error due
@@ -2187,7 +2187,7 @@ invariant_leq_offset(BtreeCheckState *state, BTScanInsert key,
* Does the invariant hold that the key is strictly greater than a given lower
* bound offset item?
*
- * Caller should have verified that lowerbound's item pointer is consistent
+ * Caller should have verified that lowerbound's line pointer is consistent
* using PageGetItemIdCareful() call.
*
* If this function returns false, convention is that caller throws error due
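As the comments above note, the invariant functions only test; reporting is the caller's job. A hedged sketch of that calling convention inside verify_nbtree.c (the message text here is illustrative, not amcheck's actual wording):

	if (!invariant_leq_offset(state, key, upperbound))
		ereport(ERROR,
				(errcode(ERRCODE_INDEX_CORRUPTED),
				 errmsg("item order invariant violated for index \"%s\"",
						RelationGetRelationName(state->rel))));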


@@ -149,8 +149,8 @@ the descendant heap-only tuple. It is conceivable that someone prunes
the heap-only tuple before that, and even conceivable that the line pointer
is re-used for another purpose. Therefore, when following a HOT chain,
it is always necessary to be prepared for the possibility that the
- linked-to item pointer is unused, dead, or redirected; and if it is a
- normal item pointer, we still have to check that XMIN of the tuple matches
+ linked-to line pointer is unused, dead, or redirected; and if it is a
+ normal line pointer, we still have to check that XMIN of the tuple matches
the XMAX of the tuple we left. Otherwise we should assume that we have
come to the end of the HOT chain. Note that this sort of XMIN/XMAX
matching is required when following ordinary update chains anyway.
@@ -171,14 +171,14 @@ bit: there can be at most one visible tuple in the chain, so we can stop
when we find it. This rule does not work for non-MVCC snapshots, though.)
Sequential scans do not need to pay attention to the HOT links because
- they scan every item pointer on the page anyway. The same goes for a
+ they scan every line pointer on the page anyway. The same goes for a
bitmap heap scan with a lossy bitmap.
Pruning
-------
- HOT pruning means updating item pointers so that HOT chains are
+ HOT pruning means updating line pointers so that HOT chains are
reduced in length, by collapsing out line pointers for intermediate dead
tuples. Although this makes those line pointers available for re-use,
it does not immediately make the space occupied by their tuples available.
@@ -271,7 +271,7 @@ physical tuple by eliminating an intermediate heap-only tuple or
replacing a physical root tuple by a redirect pointer, a decrement in
the table's number of dead tuples is reported to pgstats, which may
postpone autovacuuming. Note that we do not count replacing a root tuple
- by a DEAD item pointer as decrementing n_dead_tuples; we still want
+ by a DEAD line pointer as decrementing n_dead_tuples; we still want
autovacuum to run to clean up the index entries and DEAD item.
This area probably needs further work ...
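A condensed sketch of the chain-following rule described above, modeled on heap_hot_search_buffer() (pins, locks, visibility tests, and bounds checks omitted; page, offnum, and priorXmax are assumed to be in scope):

	for (;;)
	{
		ItemId		lp = PageGetItemId(page, offnum);
		HeapTupleHeader htup;

		/* the linked-to line pointer may have been redirected by pruning */
		if (ItemIdIsRedirected(lp))
		{
			offnum = ItemIdGetRedirect(lp);
			continue;
		}

		/* ... or reclaimed entirely: treat as end of chain */
		if (!ItemIdIsNormal(lp))
			break;

		htup = (HeapTupleHeader) PageGetItem(page, lp);

		/*
		 * XMIN must match the XMAX of the tuple we just left; otherwise
		 * the line pointer was re-used for an unrelated tuple.
		 */
		if (TransactionIdIsValid(priorXmax) &&
			!TransactionIdEquals(HeapTupleHeaderGetXmin(htup), priorXmax))
			break;

		/* ... examine the tuple here ... */

		if (!HeapTupleHeaderIsHotUpdated(htup))
			break;				/* end of HOT chain */
		priorXmax = HeapTupleHeaderGetUpdateXid(htup);
		offnum = ItemPointerGetOffsetNumber(&htup->t_ctid);
	}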


@@ -7163,7 +7163,7 @@ log_heap_clean(Relation reln, Buffer buffer,
* arrays need not be stored too. Note that even if all three arrays are
* empty, we want to expose the buffer as a candidate for whole-page
* storage, since this record type implies a defragmentation operation
- * even if no item pointers changed state.
+ * even if no line pointers changed state.
*/
if (nredirected > 0)
XLogRegisterBufData(0, (char *) redirected,
@@ -7724,7 +7724,7 @@ heap_xlog_clean(XLogReaderState *record)
nunused = (end - nowunused);
Assert(nunused >= 0);
- /* Update all item pointers per the record, and repair fragmentation */
+ /* Update all line pointers per the record, and repair fragmentation */
heap_page_prune_execute(buffer,
redirected, nredirected,
nowdead, ndead,
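On replay, the three arrays come back in the same order from the registered block data; a condensed sketch of the unpacking that precedes the heap_xlog_clean() fragment above (nredirected and ndead are read from the record's main data, per the real function):

	Size		datalen;
	OffsetNumber *redirected = (OffsetNumber *)
		XLogRecGetBlockData(record, 0, &datalen);
	OffsetNumber *end = (OffsetNumber *) ((char *) redirected + datalen);

	/* the arrays are packed back to back: redirect pairs, dead, unused */
	OffsetNumber *nowdead = redirected + (nredirected * 2);
	OffsetNumber *nowunused = nowdead + ndead;
	int			nunused = (int) (end - nowunused);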


@@ -2162,7 +2162,7 @@ heapam_scan_bitmap_next_block(TableScanDesc scan,
else
{
/*
- * Bitmap is lossy, so we must examine each item pointer on the page.
+ * Bitmap is lossy, so we must examine each line pointer on the page.
* But we can ignore HOT chains, since we'll check each tuple anyway.
*/
Page dp = (Page) BufferGetPage(buffer);
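In the lossy case the scan therefore walks the page's whole line pointer array rather than the bitmap's offset list; a minimal self-contained sketch of that loop, repeating the dp declaration shown above (tuple fetch and visibility test elided):

	Page		dp = (Page) BufferGetPage(buffer);
	OffsetNumber maxoff = PageGetMaxOffsetNumber(dp);
	OffsetNumber offnum;

	for (offnum = FirstOffsetNumber; offnum <= maxoff;
		 offnum = OffsetNumberNext(offnum))
	{
		ItemId		lp = PageGetItemId(dp, offnum);

		/* skip unused, dead, and redirect line pointers */
		if (!ItemIdIsNormal(lp))
			continue;

		/* ... PageGetItem() and per-tuple visibility check here ... */
	}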


@@ -324,7 +324,7 @@ heap_page_prune(Relation relation, Buffer buffer, TransactionId OldestXmin,
/*
- * Prune specified item pointer or a HOT chain originating at that item.
+ * Prune specified line pointer or a HOT chain originating at line pointer.
*
* If the item is an index-referenced tuple (i.e. not a heap-only tuple),
* the HOT chain is pruned by removing all DEAD tuples at the start of the HOT
@@ -454,7 +454,7 @@ heap_prune_chain(Relation relation, Buffer buffer, OffsetNumber rootoffnum,
}
/*
- * Likewise, a dead item pointer can't be part of the chain. (We
+ * Likewise, a dead line pointer can't be part of the chain. (We
* already eliminated the case of dead root tuple outside this
* function.)
*/
@@ -630,7 +630,7 @@ heap_prune_record_prunable(PruneState *prstate, TransactionId xid)
prstate->new_prune_xid = xid;
}
- /* Record item pointer to be redirected */
+ /* Record line pointer to be redirected */
static void
heap_prune_record_redirect(PruneState *prstate,
OffsetNumber offnum, OffsetNumber rdoffnum)
@@ -645,7 +645,7 @@ heap_prune_record_redirect(PruneState *prstate,
prstate->marked[rdoffnum] = true;
}
- /* Record item pointer to be marked dead */
+ /* Record line pointer to be marked dead */
static void
heap_prune_record_dead(PruneState *prstate, OffsetNumber offnum)
{
@@ -656,7 +656,7 @@ heap_prune_record_dead(PruneState *prstate, OffsetNumber offnum)
prstate->marked[offnum] = true;
}
- /* Record item pointer to be marked unused */
+ /* Record line pointer to be marked unused */
static void
heap_prune_record_unused(PruneState *prstate, OffsetNumber offnum)
{
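The three record functions above fill the arrays that heap_page_prune_execute() later applies; a compressed sketch of that application step (the real function finishes by calling PageRepairFragmentation()):

	int			i;

	/* redirected[] holds (from, to) offset number pairs */
	for (i = 0; i < nredirected; i++)
		ItemIdSetRedirect(PageGetItemId(page, redirected[i * 2]),
						  redirected[i * 2 + 1]);

	/* dead line pointers keep their slot but reference no storage */
	for (i = 0; i < ndead; i++)
		ItemIdSetDead(PageGetItemId(page, nowdead[i]));

	/* unused line pointers become available for re-use */
	for (i = 0; i < nunused; i++)
		ItemIdSetUnused(PageGetItemId(page, nowunused[i]));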


@@ -509,7 +509,7 @@ lazy_scan_heap(Relation onerel, VacuumParams *params, LVRelStats *vacrelstats,
live_tuples, /* live tuples (reltuples estimate) */
tups_vacuumed, /* tuples cleaned up by vacuum */
nkeep, /* dead-but-not-removable tuples */
- nunused; /* unused item pointers */
+ nunused; /* unused line pointers */
IndexBulkDeleteResult **indstats;
int i;
PGRUsage ru0;
@@ -1017,7 +1017,7 @@ lazy_scan_heap(Relation onerel, VacuumParams *params, LVRelStats *vacrelstats,
ItemPointerSet(&(tuple.t_self), blkno, offnum);
/*
- * DEAD item pointers are to be vacuumed normally; but we don't
+ * DEAD line pointers are to be vacuumed normally; but we don't
* count them in tups_vacuumed, else we'd be double-counting (at
* least in the common case where heap_page_prune() just freed up
* a non-HOT tuple).
@@ -1483,7 +1483,7 @@ lazy_scan_heap(Relation onerel, VacuumParams *params, LVRelStats *vacrelstats,
appendStringInfo(&buf,
_("%.0f dead row versions cannot be removed yet, oldest xmin: %u\n"),
nkeep, OldestXmin);
- appendStringInfo(&buf, _("There were %.0f unused item pointers.\n"),
+ appendStringInfo(&buf, _("There were %.0f unused item identifiers.\n"),
nunused);
appendStringInfo(&buf, ngettext("Skipped %u page due to buffer pins, ",
"Skipped %u pages due to buffer pins, ",


@@ -38,32 +38,6 @@
* This file contains the index_ routines which used
* to be a scattered collection of stuff in access/genam.
*
- *
- * old comments
- * Scans are implemented as follows:
- *
- * `0' represents an invalid item pointer.
- * `-' represents an unknown item pointer.
- * `X' represents a known item pointers.
- * `+' represents known or invalid item pointers.
- * `*' represents any item pointers.
- *
- * State is represented by a triple of these symbols in the order of
- * previous, current, next. Note that the case of reverse scans works
- * identically.
- *
- *      State    Result
- * (1)  + + -    + 0 0    (if the next item pointer is invalid)
- * (2)           + X -    (otherwise)
- * (3)  * 0 0    * 0 0    (no change)
- * (4)  + X 0    X 0 0    (shift)
- * (5)  * + X    + X -    (shift, add unknown)
- *
- * All other states cannot occur.
- *
- * Note: It would be possible to cache the status of the previous and
- * next item pointer using the flags.
- *
*-------------------------------------------------------------------------
*/


@@ -1689,7 +1689,7 @@ _bt_split(Relation rel, BTScanInsert itup_key, Buffer buf, Buffer cbuf,
* Direct access to page is not good but faster - we should implement
* some new func in page API. Note we only store the tuples
* themselves, knowing that they were inserted in item-number order
- * and so the item pointers can be reconstructed. See comments for
+ * and so the line pointers can be reconstructed. See comments for
* _bt_restore_page().
*/
XLogRegisterBufData(1,


@@ -337,7 +337,7 @@ vacuumLeafPage(spgBulkDeleteState *bds, Relation index, Buffer buffer,
InvalidBlockNumber, InvalidOffsetNumber);
/*
- * We implement the move step by swapping the item pointers of the source
+ * We implement the move step by swapping the line pointers of the source
* and target tuples, then replacing the newly-source tuples with
* placeholders. This is perhaps unduly friendly with the page data
* representation, but it's fast and doesn't risk page overflow when a


@@ -65,7 +65,7 @@ PageInit(Page page, Size pageSize, Size specialSize)
* Check that the page header and checksum (if any) appear valid.
*
* This is called when a page has just been read in from disk. The idea is
- * to cheaply detect trashed pages before we go nuts following bogus item
+ * to cheaply detect trashed pages before we go nuts following bogus line
* pointers, testing invalid transaction identifiers, etc.
*
* It turns out to be necessary to allow zeroed pages here too. Even though
@@ -170,12 +170,12 @@ PageIsVerified(Page page, BlockNumber blkno)
* reason. A WARNING is issued indicating the reason for the refusal.
*
* offsetNumber must be either InvalidOffsetNumber to specify finding a
- * free item pointer, or a value between FirstOffsetNumber and one past
- * the last existing item, to specify using that particular item pointer.
+ * free line pointer, or a value between FirstOffsetNumber and one past
+ * the last existing item, to specify using that particular line pointer.
*
* If offsetNumber is valid and flag PAI_OVERWRITE is set, we just store
* the item at the specified offsetNumber, which must be either a
- * currently-unused item pointer, or one past the last existing item.
+ * currently-unused line pointer, or one past the last existing item.
*
* If offsetNumber is valid and flag PAI_OVERWRITE is not set, insert
* the item at the specified offsetNumber, moving existing items later
@@ -314,7 +314,7 @@ PageAddItemExtended(Page page,
memmove(itemId + 1, itemId,
(limit - offsetNumber) * sizeof(ItemIdData));
- /* set the item pointer */
+ /* set the line pointer */
ItemIdSetNormal(itemId, upper, size);
/*
@@ -529,7 +529,7 @@ PageRepairFragmentation(Page page)
itemidptr->itemoff >= (int) pd_special))
ereport(ERROR,
(errcode(ERRCODE_DATA_CORRUPTED),
- errmsg("corrupted item pointer: %u",
+ errmsg("corrupted line pointer: %u",
itemidptr->itemoff)));
itemidptr->alignedlen = MAXALIGN(ItemIdGetLength(lp));
totallen += itemidptr->alignedlen;
@@ -763,7 +763,7 @@ PageIndexTupleDelete(Page page, OffsetNumber offnum)
offset != MAXALIGN(offset))
ereport(ERROR,
(errcode(ERRCODE_DATA_CORRUPTED),
- errmsg("corrupted item pointer: offset = %u, size = %u",
+ errmsg("corrupted line pointer: offset = %u, size = %u",
offset, (unsigned int) size)));
/* Amount of space to actually be deleted */
@@ -881,7 +881,7 @@ PageIndexMultiDelete(Page page, OffsetNumber *itemnos, int nitems)
pd_lower, pd_upper, pd_special)));
/*
- * Scan the item pointer array and build a list of just the ones we are
+ * Scan the line pointer array and build a list of just the ones we are
* going to keep. Notice we do not modify the page yet, since we are
* still validity-checking.
*/
@@ -901,7 +901,7 @@ PageIndexMultiDelete(Page page, OffsetNumber *itemnos, int nitems)
offset != MAXALIGN(offset))
ereport(ERROR,
(errcode(ERRCODE_DATA_CORRUPTED),
- errmsg("corrupted item pointer: offset = %u, length = %u",
+ errmsg("corrupted line pointer: offset = %u, length = %u",
offset, (unsigned int) size)));
if (nextitm < nitems && offnum == itemnos[nextitm])
@@ -989,14 +989,14 @@ PageIndexTupleDeleteNoCompact(Page page, OffsetNumber offnum)
offset != MAXALIGN(offset))
ereport(ERROR,
(errcode(ERRCODE_DATA_CORRUPTED),
- errmsg("corrupted item pointer: offset = %u, size = %u",
+ errmsg("corrupted line pointer: offset = %u, size = %u",
offset, (unsigned int) size)));
/* Amount of space to actually be deleted */
size = MAXALIGN(size);
/*
- * Either set the item pointer to "unused", or zap it if it's the last
+ * Either set the line pointer to "unused", or zap it if it's the last
* one. (Note: it's possible that the next-to-last one(s) are already
* unused, but we do not trouble to try to compact them out if so.)
*/
@@ -1054,7 +1054,7 @@ PageIndexTupleDeleteNoCompact(Page page, OffsetNumber offnum)
* other tuples' data up or down as needed to keep the page compacted.
* This is better than deleting and reinserting the tuple, because it
* avoids any data shifting when the tuple size doesn't change; and
- * even when it does, we avoid moving the item pointers around.
+ * even when it does, we avoid moving the line pointers around.
* Conceivably this could also be of use to an index AM that cares about
* the physical order of tuples as well as their ItemId order.
*
@@ -1099,7 +1099,7 @@ PageIndexTupleOverwrite(Page page, OffsetNumber offnum,
offset != MAXALIGN(offset))
ereport(ERROR,
(errcode(ERRCODE_DATA_CORRUPTED),
- errmsg("corrupted item pointer: offset = %u, size = %u",
+ errmsg("corrupted line pointer: offset = %u, size = %u",
offset, (unsigned int) oldsize)));
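Each errmsg() site above guards essentially the same sanity condition before trusting a line pointer; pulled out on its own it looks roughly like this (phdr is the page's PageHeader; the exact checks vary slightly per caller):

	ItemId		tup = PageGetItemId(page, offnum);
	unsigned	offset = ItemIdGetOffset(tup);
	unsigned	size = ItemIdGetLength(tup);

	/*
	 * A sane line pointer references maxaligned space lying entirely
	 * between pd_upper and pd_special.
	 */
	if (offset < phdr->pd_upper ||
		(offset + size) > phdr->pd_special ||
		offset != MAXALIGN(offset))
		ereport(ERROR,
				(errcode(ERRCODE_DATA_CORRUPTED),
				 errmsg("corrupted line pointer: offset = %u, size = %u",
						offset, size)));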
/*


@@ -564,7 +564,7 @@ do { \
* MaxHeapTuplesPerPage is an upper bound on the number of tuples that can
* fit on one heap page. (Note that indexes could have more, because they
* use a smaller tuple header.) We arrive at the divisor because each tuple
- * must be maxaligned, and it must have an associated item pointer.
+ * must be maxaligned, and it must have an associated line pointer.
*
* Note: with HOT, there could theoretically be more line pointers (not actual
* tuples) than this on a heap page. However we constrain the number of line
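For reference, the macro below this comment derives the bound exactly as described, charging each slot one maxaligned minimal tuple plus one line pointer; with the default 8 kB BLCKSZ and 8-byte MAXALIGN that works out to (8192 - 24) / (24 + 4) = 291:

	#define MaxHeapTuplesPerPage	\
		((int) ((BLCKSZ - SizeOfPageHeaderData) / \
				(MAXALIGN(SizeofHeapTupleHeader) + sizeof(ItemIdData))))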


@@ -131,7 +131,7 @@ typedef IndexAttributeBitMapData * IndexAttributeBitMap;
* fit on one index page. An index tuple must have either data or a null
* bitmap, so we can safely assume it's at least 1 byte bigger than a bare
* IndexTupleData struct. We arrive at the divisor because each tuple
- * must be maxaligned, and it must have an associated item pointer.
+ * must be maxaligned, and it must have an associated line pointer.
*
* To be index-type-independent, this does not account for any special space
* on the page, and is thus conservative.
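The corresponding macro in itup.h applies the same arithmetic, substituting the minimal index tuple (header plus at least one byte of data or null bitmap) for the heap tuple header:

	#define MaxIndexTuplesPerPage	\
		((int) ((BLCKSZ - SizeOfPageHeaderData) / \
				(MAXALIGN(sizeof(IndexTupleData) + 1) + sizeof(ItemIdData))))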


@@ -53,14 +53,18 @@
*
* NOTES:
*
- * linp1..N form an ItemId array. ItemPointers point into this array
- * rather than pointing directly to a tuple. Note that OffsetNumbers
+ * linp1..N form an ItemId (line pointer) array. ItemPointers point
+ * to a physical block number and a logical offset (line pointer
+ * number) within that block/page. Note that OffsetNumbers
* conventionally start at 1, not 0.
*
- * tuple1..N are added "backwards" on the page. because a tuple's
- * ItemPointer points to its ItemId entry rather than its actual
+ * tuple1..N are added "backwards" on the page. Since an ItemPointer
+ * offset is used to access an ItemId entry rather than an actual
* byte-offset position, tuples can be physically shuffled on a page
- * whenever the need arises.
+ * whenever the need arises. This indirection also keeps crash recovery
+ * relatively simple, because the low-level details of page space
+ * management can be controlled by standard buffer page code during
+ * logging, and during recovery.
*
* AM-generic per-page information is kept in PageHeaderData.
*
@@ -233,7 +237,7 @@ typedef PageHeaderData *PageHeader;
/*
* PageGetContents
- * To be used in case the page does not contain item pointers.
+ * To be used in cases where the page does not contain line pointers.
*
* Note: prior to 8.3 this was not guaranteed to yield a MAXALIGN'd result.
* Now it is. Beware of old code that might think the offset to the contents
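A minimal sketch of the indirection described above (hypothetical helper; assumes the caller already holds the buffer containing the TID's block). Because a TID names a line pointer slot rather than a byte position, defragmentation only rewrites lp_off and never invalidates a TID:

	#include "postgres.h"
	#include "access/htup_details.h"
	#include "storage/bufpage.h"
	#include "storage/itemptr.h"

	static HeapTupleHeader
	tid_to_tuple_sketch(Page page, ItemPointer tid)
	{
		/* the offset number selects a slot in the linp array */
		OffsetNumber offnum = ItemPointerGetOffsetNumber(tid);
		ItemId		lp = PageGetItemId(page, offnum);

		if (!ItemIdIsNormal(lp))
			return NULL;		/* slot is unused, dead, or redirected */

		/* lp_off gives the tuple's current byte position in the page */
		return (HeapTupleHeader) PageGetItem(page, lp);
	}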


@@ -1,7 +1,7 @@
/*-------------------------------------------------------------------------
*
* itemid.h
- * Standard POSTGRES buffer page item identifier definitions.
+ * Standard POSTGRES buffer page item identifier/line pointer definitions.
*
*
* Portions Copyright (c) 1996-2019, PostgreSQL Global Development Group
@@ -15,16 +15,17 @@
#define ITEMID_H
/*
- * An item pointer (also called line pointer) on a buffer page
+ * A line pointer on a buffer page. See buffer page definitions and comments
+ * for an explanation of how line pointers are used.
*
- * In some cases an item pointer is "in use" but does not have any associated
- * storage on the page. By convention, lp_len == 0 in every item pointer
+ * In some cases a line pointer is "in use" but does not have any associated
+ * storage on the page. By convention, lp_len == 0 in every line pointer
* that does not have storage, independently of its lp_flags state.
*/
typedef struct ItemIdData
{
unsigned lp_off:15, /* offset to tuple (from start of page) */
- lp_flags:2, /* state of item pointer, see below */
+ lp_flags:2, /* state of line pointer, see below */
lp_len:15; /* byte length of tuple */
} ItemIdData;
@@ -72,7 +73,7 @@ typedef uint16 ItemLength;
/*
* ItemIdGetRedirect
- * In a REDIRECT pointer, lp_off holds the link to the next item pointer
+ * In a REDIRECT pointer, lp_off holds offset number for next line pointer
*/
#define ItemIdGetRedirect(itemId) \
((itemId)->lp_off)
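The four lp_flags states defined in this header (LP_UNUSED, LP_NORMAL, LP_REDIRECT, LP_DEAD) are normally consumed through the ItemIdIs* macros; a small dispatch sketch (page and offnum assumed in scope):

	ItemId		lp = PageGetItemId(page, offnum);

	if (!ItemIdIsUsed(lp))
	{
		/* LP_UNUSED: free slot, never has storage */
	}
	else if (ItemIdIsRedirected(lp))
	{
		/* LP_REDIRECT: lp_off holds the next offset number, not a byte
		 * offset; never has storage */
		offnum = ItemIdGetRedirect(lp);
	}
	else if (ItemIdIsDead(lp))
	{
		/* LP_DEAD: dead to all transactions; may or may not have storage */
	}
	else
	{
		/* LP_NORMAL: live line pointer with storage */
		Assert(ItemIdIsNormal(lp) && ItemIdHasStorage(lp));
	}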