Refactor nbtinsert.c so that the final itemsz of each new non-pivot
tuple (the MAXALIGN()'d size) is determined once. Most of the functions
used by leaf page inserts used the insertstate.itemsz value already.
This commit makes everything use insertstate.itemsz as standard
practice. The goal is to decouple tuple size from "effective" tuple
size. Making this distinction isn't truly necessary right now, but that
might change in the future.
Also explain why we consistently apply MAXALIGN() to get an effective
index tuple size. This was rather unclear, in part because it isn't
actually strictly necessary right now.
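As a rough illustration of the convention (a sketch only; the helper name
cache_effective_itemsz is made up, not part of the patch), the effective size
cached in the insertion state is the MAXALIGN()'d tuple size:

    /* Sketch: caching the "effective" (MAXALIGN'd) size of a new leaf tuple */
    #include "postgres.h"
    #include "access/itup.h"
    #include "access/nbtree.h"

    static void
    cache_effective_itemsz(BTInsertState insertstate, IndexTuple itup)
    {
        insertstate->itup = itup;
        /* MAXALIGN() is the space the item will actually consume on the page */
        insertstate->itemsz = MAXALIGN(IndexTupleSize(itup));
    }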
Commit 074251db added an assertion that verified the fastpath/rightmost
page insert optimization's assumption about free space: There should
always be enough free space on the page to insert the new item without
splitting the page. Otherwise, we end up using the "concurrent root
page split" phony/fake stack path in _bt_insert_parent(). This does not
lead to incorrect behavior, but it is likely to be far slower than
simply using the regular _bt_search() path. The assertion catches
serious performance bugs that would probably take a long time to detect
any other way.
It seems much more natural to make this assertion just before the point
that we generate a fake/phony descent stack. Move the assert there.
This also makes _bt_insertonpg() a bit more readable.
Only a very basic logic bug in a _bt_insertonpg() caller could lead to a
violation of this invariant. Besides, any newitemoff used for an
internal page is sanitized using other "can't happen" errors in
_bt_getstackbuf() or its callers, before _bt_insertonpg() even gets
called.
Also, move the error/assertion from the insert-without-split path of
_bt_insertonpg() to the top of the same function. There is no reason
why this invariant only applies to insertions that happen to not result
in a page split; cover every insertion. The assertion naturally belongs
next to the existing generic assertions that document relatively
high-level invariants for the item being inserted.
_bt_split() is passed NULL as its insertion scankey for internal page
splits. Two recently added Assert() statements failed to consider this,
leading to a crash with pg_upgrade'd BTREE_VERSION < 4 indexes. Remove
the assertions.
The assertions in question were added by commit 0d861bbb, which added
nbtree deduplication. It would be possible to fix the assertions
directly instead, but they weren't adding much anyway.
Deduplication reduces the storage overhead of duplicates in indexes that
use the standard nbtree index access method. The deduplication process
is applied lazily, after the point where opportunistic deletion of
LP_DEAD-marked index tuples occurs. Deduplication is only applied at
the point where a leaf page split would otherwise be required. New
posting list tuples are formed by merging together existing duplicate
tuples. The physical representation of the items on an nbtree leaf page
is made more space efficient by deduplication, but the logical contents
of the page are not changed. Even unique indexes make use of
deduplication as a way of controlling bloat from duplicates whose TIDs
point to different versions of the same logical table row.
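For illustration, a posting list tuple can be thought of as one key plus a
sorted array of heap TIDs. A sketch of how duplicates get merged, assuming the
nbtdedup.c helper _bt_form_posting() (the wrapper function below is
hypothetical):

    /* Sketch: merging duplicate tuples into a single posting list tuple */
    #include "postgres.h"
    #include "access/nbtree.h"

    static IndexTuple
    merge_duplicates_sketch(IndexTuple base, ItemPointer htids, int nhtids)
    {
        /*
         * The result carries the same key as "base", followed by an array of
         * heap TIDs.  The physical representation is smaller, but the logical
         * contents of the page are unchanged.
         */
        return _bt_form_posting(base, htids, nhtids);
    }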
The lazy approach taken by nbtree has significant advantages over a GIN
style eager approach. Most individual inserts of index tuples have
exactly the same overhead as before. The extra overhead of
deduplication is amortized across insertions, just like the overhead of
page splits. The key space of indexes works in the same way as it has
since commit dd299df8 (the commit that made heap TID a tiebreaker
column).
Testing has shown that nbtree deduplication can generally make indexes
with about 10 or 15 tuples for each distinct key value about 2.5X - 4X
smaller, even with single column integer indexes (e.g., an index on a
referencing column that accompanies a foreign key). The final size of
single column nbtree indexes comes close to the final size of a similar
contrib/btree_gin index, at least in cases where GIN's posting list
compression isn't very effective. This can significantly improve
transaction throughput, and significantly reduce the cost of vacuuming
indexes.
A new index storage parameter (deduplicate_items) controls the use of
deduplication. The default setting is 'on', so all new B-Tree indexes
automatically use deduplication where possible. This decision will be
reviewed at the end of the Postgres 13 beta period.
There is a regression of approximately 2% in transaction throughput with
synthetic workloads that consist of append-only inserts into a table
with several non-unique indexes, where all indexes have few or no
repeated values. The underlying issue is that cycles are wasted on
unsuccessful attempts at deduplicating items in non-unique indexes.
There doesn't seem to be a way around it short of disabling
deduplication entirely. Note that deduplication of items in unique
indexes is fairly well targeted in general, which avoids the problem
there (we can use a special heuristic to trigger deduplication passes in
unique indexes, since we're specifically targeting "version bloat").
Bump XLOG_PAGE_MAGIC because xl_btree_vacuum changed.
No bump in BTREE_VERSION, since the representation of posting list
tuples works in a way that's backwards compatible with version 4 indexes
(i.e. indexes built on PostgreSQL 12). However, users must still
REINDEX a pg_upgrade'd index to use deduplication, regardless of the
Postgres version they've upgraded from. This is the only way to set the
new nbtree metapage flag indicating that deduplication is generally
safe.
Author: Anastasia Lubennikova, Peter Geoghegan
Reviewed-By: Peter Geoghegan, Heikki Linnakangas
Discussion:
https://postgr.es/m/55E4051B.7020209@postgrespro.ru
https://postgr.es/m/4ab6e2db-bcee-f4cf-0916-3a06e6ccbb55@postgrespro.ru
Invent the concept of a B-Tree equalimage ("equality implies image
equality") support function, registered as support function 4. This
indicates whether it is safe (or not safe) to apply optimizations that
assume that any two datums considered equal by an operator class's order
method must be interchangeable without any loss of semantic information.
This is static information about an operator class and a collation.
Register an equalimage routine for almost all of the existing B-Tree
opclasses. We only need two trivial routines for all of the opclasses
that are included with the core distribution. There is one routine for
opclasses that index non-collatable types (which returns 'true'
unconditionally), plus another routine for collatable types (which
returns 'true' when the collation is a deterministic collation).
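Roughly, the two trivial routines have the following shape (a sketch modeled
on the in-core btequalimage()/btvarstrequalimage() routines, not the verbatim
source):

    #include "postgres.h"
    #include "fmgr.h"
    #include "utils/lsyscache.h"

    /* Non-collatable types: equality always implies image equality */
    Datum
    equalimage_noncollatable_sketch(PG_FUNCTION_ARGS)
    {
        PG_RETURN_BOOL(true);
    }

    /* Collatable types: only safe when the collation is deterministic */
    Datum
    equalimage_collatable_sketch(PG_FUNCTION_ARGS)
    {
        PG_RETURN_BOOL(get_collation_isdeterministic(PG_GET_COLLATION()));
    }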
This patch is infrastructure for an upcoming patch that adds B-Tree
deduplication.
Author: Peter Geoghegan, Anastasia Lubennikova
Discussion: https://postgr.es/m/CAH2-Wzn3Ee49Gmxb7V1VJ3-AC8fWn-Fr8pfWQebHe8rYRxt5OQ@mail.gmail.com
btbuild() has nothing to say about how NULL values compare in nbtree.
Besides, there are _bt_compare() header comments that describe how NULL
values are handled.
If no DSM slots are available, a ParallelContext can still be
created, but its seg pointer is NULL. Teach parallel btree build
to cope with that by falling back to a regular non-parallel build,
to avoid crashing with a segmentation fault.
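The shape of the fix is roughly as follows (a sketch of the fallback inside
_bt_begin_parallel(); the exact cleanup steps in the real patch may differ):

    /* After InitializeParallelDSM(pcxt) in _bt_begin_parallel(): */
    if (pcxt->seg == NULL)
    {
        /* No DSM slot available: clean up and fall back to a serial build */
        if (IsMVCCSnapshot(snapshot))
            UnregisterSnapshot(snapshot);
        DestroyParallelContext(pcxt);
        ExitParallelMode();
        return;
    }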
Back-patch to 11, where parallel CREATE INDEX landed.
Reported-by: Nicola Contu
Reviewed-by: Peter Geoghegan
Discussion: https://postgr.es/m/CA%2BhUKGJgJEBnkuODBVomyK3MWFvDBbMVj%3Dgdt6DnRPU-5sQ6UQ%40mail.gmail.com
The following changes make the predicate locking functions more
generic and suitable for use by future access methods (rough shapes of
the new entry points are sketched after this list):
- PredicateLockTuple() is renamed to PredicateLockTID(). It takes
ItemPointer and inserting transaction ID instead of HeapTuple.
- CheckForSerializableConflictIn() takes blocknum instead of buffer.
- CheckForSerializableConflictOut() no longer takes HeapTuple or buffer.
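A sketch of the new prototypes (not copied verbatim from predicate.h):

    #include "postgres.h"
    #include "storage/block.h"
    #include "storage/itemptr.h"
    #include "utils/relcache.h"
    #include "utils/snapshot.h"

    /* was PredicateLockTuple(): now keyed by TID plus the inserting XID */
    extern void PredicateLockTID(Relation relation, ItemPointer tid,
                                 Snapshot snapshot, TransactionId tuple_xid);

    /* now takes a block number rather than a buffer */
    extern void CheckForSerializableConflictIn(Relation relation,
                                               ItemPointer tid,
                                               BlockNumber blkno);

    /* no longer needs the HeapTuple or buffer */
    extern void CheckForSerializableConflictOut(Relation relation,
                                                TransactionId xid,
                                                Snapshot snapshot);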
Author: Ashwin Agrawal
Reviewed-by: Andres Freund, Kuntal Ghosh, Thomas Munro
Discussion: https://postgr.es/m/CALfoeiv0k3hkEb3Oqk%3DziWqtyk2Jys1UOK5hwRBNeANT_yX%2Bng%40mail.gmail.com
Introduce new fields amusemaintenanceworkmem and amparallelvacuumoptions
in IndexAmRoutine for parallel vacuum. The amusemaintenanceworkmem tells
whether a particular IndexAM uses maintenance_work_mem or not. This will
help in controlling the memory used by individual workers as otherwise,
each worker can consume memory equal to maintenance_work_mem. The
amparallelvacuumoptions tells whether a particular IndexAM participates in
a parallel vacuum and if so in which phase (bulkdelete, vacuumcleanup) of
vacuum.
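As an example, a hypothetical index AM handler might fill the new fields like
this (a sketch; the VACUUM_OPTION_* flags come from commands/vacuum.h, and the
handler name is made up):

    #include "postgres.h"
    #include "access/amapi.h"
    #include "commands/vacuum.h"
    #include "fmgr.h"
    #include "nodes/nodes.h"

    Datum
    myam_handler_sketch(PG_FUNCTION_ARGS)
    {
        IndexAmRoutine *amroutine = makeNode(IndexAmRoutine);

        /* ... other am* fields set here ... */

        /* This AM's bulkdelete/vacuumcleanup do not use maintenance_work_mem */
        amroutine->amusemaintenanceworkmem = false;

        /*
         * Do bulkdelete in parallel; do cleanup in parallel only when no
         * bulkdelete pass already scanned the index.
         */
        amroutine->amparallelvacuumoptions =
            VACUUM_OPTION_PARALLEL_BULKDEL | VACUUM_OPTION_PARALLEL_COND_CLEANUP;

        PG_RETURN_POINTER(amroutine);
    }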
Author: Masahiko Sawada and Amit Kapila
Reviewed-by: Dilip Kumar, Amit Kapila, Tomas Vondra and Robert Haas
Discussion:
https://postgr.es/m/CAD21AoDTPMgzSkV4E3SFo1CH_x50bf5PqZFQf4jmqjk-C03BWg@mail.gmail.com
https://postgr.es/m/CAA4eK1LmcD5aPogzwim5Nn58Ki+74a6Edghx4Wd8hAskvHaq5A@mail.gmail.com
The fastpath insert optimization's incomplete split flag Assert() is
redundant. We'll reach the more general Assert() within
_bt_findinsertloc() in all cases. (Besides, Assert()'ing that the
rightmost page doesn't have the flag set never made much sense.)
Commit 558a9165e0 taught _bt_delitems_delete() to produce its own XID
horizon on the primary. Standbys no longer needed to generate their own
latestRemovedXid, since they could just use the explicitly logged value
from the primary instead. The deleted offset numbers array from the
xl_btree_delete WAL record was no longer used by the REDO routine for
anything other than deleting the items.
This enables a minor optimization: We now treat the array as buffer
state, not generic WAL data, following _bt_delitems_vacuum()'s example.
This should be a minor win, since it allows us to avoid including the
deleted items array in cases where XLogInsert() stores the whole buffer
anyway. The primary goal here is to make the code more maintainable,
though. Removing inessential differences between the two functions
highlights the fundamental differences that remain.
Also change xl_btree_delete to use uint32 for the size of the array of
item offsets being deleted. This brings xl_btree_delete closer to
xl_btree_vacuum. Furthermore, it seems like a good idea to use an
explicit-width integer type (the field was previously an "int").
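As a sketch of the resulting WAL-logging pattern (field names approximate the
post-change xl_btree_delete layout; the function below is illustrative, not
the patched _bt_delitems_delete() itself):

    #include "postgres.h"
    #include "access/nbtxlog.h"
    #include "access/rmgr.h"
    #include "access/xloginsert.h"
    #include "storage/buf.h"
    #include "storage/off.h"

    typedef struct xl_btree_delete_sketch
    {
        TransactionId latestRemovedXid;
        uint32        ndeleted;
        /* DELETED TARGET OFFSET NUMBERS FOLLOW, registered as buffer data */
    } xl_btree_delete_sketch;

    static void
    log_delete_sketch(Buffer buf, OffsetNumber *deletable, uint32 ndeletable,
                      TransactionId latestRemovedXid)
    {
        xl_btree_delete_sketch xlrec;

        xlrec.latestRemovedXid = latestRemovedXid;
        xlrec.ndeleted = ndeletable;

        XLogBeginInsert();
        XLogRegisterData((char *) &xlrec, sizeof(xlrec));

        /*
         * Register the offset array as buffer data rather than generic WAL
         * data; it is omitted automatically whenever XLogInsert() decides to
         * store a full-page image of the buffer anyway.
         */
        XLogRegisterBuffer(0, buf, REGBUF_STANDARD);
        XLogRegisterBufData(0, (char *) deletable,
                            ndeletable * sizeof(OffsetNumber));

        (void) XLogInsert(RM_BTREE_ID, XLOG_BTREE_DELETE);
    }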
Bump XLOG_PAGE_MAGIC because xl_btree_delete changed.
Discussion: https://postgr.es/m/CAH2-Wzkz4TjmezzfAbaV1zYrh=fr0bCpzuJTvBe5iUQ3aUPsCQ@mail.gmail.com
Adjust a comment that describes how alignment of the new left page high
key works in btree_xlog_split(), the nbtree page split REDO routine.
The wording used before commit 2c03216d83 is much clearer, so go back
to that.
_bt_delitems_vacuum() comments claimed that it isn't worth another scan
of the page to avoid falsely unsetting the BTP_HAS_GARBAGE page flag
hint (this happens to be the same wording that was removed from
_bt_delitems_delete() by my recent commit fe97c61c). The comments made
little sense, though. The issue can't have much to do with performing a
second scan of the target leaf page, since an LP_DEAD test could easily
be performed in the first scan of the page anyway (the scan that takes
place in btvacuumpage() caller).
Revise the explanation. It makes much more sense to frame this as an
issue about recovery conflicts. _bt_delitems_vacuum() cannot easily
generate an XID cutoff in the same way that _bt_delitems_delete() is
designed to.
Falsely unsetting the page flag is not ideal, and is likely to happen
more often than was supposed by the original comments. Explain why it
usually isn't a problem in practice. There may be an argument for
_bt_delitems_vacuum() not clearing the BTP_HAS_GARBAGE bit, removing the
question of it being falsely unset by VACUUM (there may even be an
argument for not using a page level hint at all). This can be revisited
later.
Commit fe97c61c updated LP_DEAD item deletion comments, but missed a
minor discrepancy on the REDO side. Fix it now.
In passing, don't talk about the btree_xlog_vacuum() behavior within
btree_xlog_delete(). The reliance on XLOG_HEAP2_CLEANUP_INFO records
for recovery conflicts is already discussed within btvacuumpage() and
mentioned again in passing above btree_xlog_vacuum(), which seems
sufficient.
This follows multiple complaints from Peter Geoghegan, Andres Freund and
Alvaro Herrera that this issue ought to be investigated further before
actually going ahead with it, if it happens at all.
Discussion: https://postgr.es/m/20191226144606.GA5659@alvherre.pgsql
The following renaming is done so as source files related to index
access methods are more consistent with table access methods (the
original names used for index AMs were too generic, and could be
confused as including features related to table AMs):
- amapi.h -> indexam.h.
- amapi.c -> indexamapi.c. Here we have an equivalent with
backend/access/table/tableamapi.c.
- amvalidate.c -> indexamvalidate.c.
- amvalidate.h -> indexamvalidate.h.
- genam.c -> indexgenam.c.
- genam.h -> indexgenam.h.
This has been discussed during the development of v12 when table AM was
worked on, but the renaming never happened.
Author: Michael Paquier
Reviewed-by: Fabien Coelho, Julien Rouhaud
Discussion: https://postgr.es/m/20191223053434.GF34339@paquier.xyz
Comments about the consequences of clearing the BTP_HAS_GARBAGE page
flag bit that apply only to VACUUM were added to code that deals with
opportunistic deletion of LP_DEAD items by commit a760893d. The same
comment block was added to both _bt_delitems_vacuum() and
_bt_delitems_delete(). Correct _bt_delitems_delete()'s copy of the
comment block.
_bt_delitems_delete() reliably deletes items that were found by caller
to have their LP_DEAD bit set. There is no question about whether or
not unsetting the BTP_HAS_GARBAGE bit can miss some LP_DEAD items that
were set recently.
Also tweak a related section of the nbtree README.
The REDO routine for nbtree's xl_btree_vacuum record type hasn't
performed a "pin scan" since commit 3e4b7d87 went in, so clearly there
isn't any point in VACUUM WAL-logging information that won't actually be
used. Finish off the work of commit 3e4b7d87 (and the closely related
preceding commit 687f2cd7) by removing the code that generates this
unused information. Also remove the REDO routine code disabled by
commit 3e4b7d87.
Replace the unneeded lastBlockVacuumed field in xl_btree_vacuum with a
new "ndeleted" field. The new field isn't actually needed right now,
since we could continue to infer the array length from the overall
record length. However, an upcoming patch to add deduplication to
nbtree needs to add an "items updated" field to xl_btree_vacuum, so we
might as well start being explicit about the number of items now.
(Besides, it doesn't seem like a good idea to leave the xl_btree_vacuum
struct without any fields; the C standard says that that's undefined.)
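A sketch of the resulting record header (illustrative layout only):

    #include "postgres.h"

    /* Sketch: xl_btree_vacuum with lastBlockVacuumed removed */
    typedef struct xl_btree_vacuum_sketch
    {
        uint32      ndeleted;   /* number of deleted item offsets */

        /* DELETED TARGET OFFSET NUMBERS FOLLOW */
    } xl_btree_vacuum_sketch;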
nbtree VACUUM no longer forces writing a WAL record for the last block
in the index. Writing out a WAL record with no items for the final
block was supposed to force processing of a lastBlockVacuumed field by a
pin scan.
Bump XLOG_PAGE_MAGIC because xl_btree_vacuum changed.
Discussion: https://postgr.es/m/CAH2-WzmY_mT7UnTzFB5LBQDBkKpdV5UxP3B5bLb7uP%3D%3D6UQJRQ%40mail.gmail.com
This reworks the reloption parsing and build of a couple of index AMs by
creating new structures for each index AM's options. This split was
already done for BRIN, GIN and GiST (which actually has a fillfactor
parameter), but not for hash, B-tree and SPGiST which relied on
StdRdOptions due to an overlap with the default option set.
This saves a couple of bytes for rd_options in each relcache entry with
indexes making use of relation options, and brings more consistency
between all index AMs. While on it, add a couple of AssertMacro() calls
to make sure that utility macros to grab values of reloptions are used
with the expected index AM.
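For instance, the B-tree side ends up with roughly the following shape (a
sketch; the struct and macro names below carry a Sketch suffix to make clear
they are illustrative, not the patched definitions):

    #include "postgres.h"
    #include "access/nbtree.h"
    #include "catalog/pg_am_d.h"    /* BTREE_AM_OID */
    #include "utils/rel.h"

    /* Per-AM reloptions struct, instead of piggybacking on StdRdOptions */
    typedef struct BTOptionsSketch
    {
        int32       vl_len_;        /* varlena header (do not touch directly!) */
        int         fillfactor;     /* page fill factor, in percent */
    } BTOptionsSketch;

    /* Accessor macro guarded by AssertMacro() against use on the wrong AM */
    #define BTGetFillFactorSketch(relation) \
        (AssertMacro((relation)->rd_rel->relam == BTREE_AM_OID), \
         (relation)->rd_options ? \
         ((BTOptionsSketch *) (relation)->rd_options)->fillfactor : \
         BTREE_DEFAULT_FILLFACTOR)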
Author: Nikolay Shaplov
Reviewed-by: Amit Langote, Michael Paquier, Álvaro Herrera, Dent John
Discussion: https://postgr.es/m/4127670.gFlpRb6XCm@x200m
Make it clear that _bt_pgaddtup() truncates the first data item on an
internal page because its key is supposed to be treated as minus
infinity within _bt_compare().
An upcoming patch that adds deduplication to the nbtree AM will rely on
_bt_keep_natts_fast() understanding that differences in TOAST input
state can never affect its answer. In particular, two opclass-equal
datums (with opclasses deemed safe for deduplication) should never be
treated as unequal by _bt_keep_natts_fast() due to TOAST input
differences.
This also seems like a good idea on general principle. nbtsplitloc.c
will now occasionally make better decisions about where to split a leaf
page. The behavior of _bt_keep_natts_fast() is now somewhat closer to
the behavior of _bt_keep_natts().
Discussion: https://postgr.es/m/CAH2-Wzn3Ee49Gmxb7V1VJ3-AC8fWn-Fr8pfWQebHe8rYRxt5OQ@mail.gmail.com
nbtree index builds once stashed the "minimum key" for a page, which was
used as the basis of the pivot tuple that gets placed in the next level
up (i.e. the tuple that stores the downlink to the page in question).
It doesn't quite work that way anymore, so the "minimum key" terminology
now seems misleading (these days the minimum key is actually a straight
copy of the high key from the left sibling, which is a distinct thing in
subtle but important ways). Rename this concept to "low key". This
name is a lot clearer given that there is now a sharp distinction
between pivot and non-pivot tuples. Also remove comments that describe
obsolete details about how the minimum key concept used to work.
Rather than generating the minus infinity item for the leftmost page on
a level by copying the new item and truncating that copy, simply
allocate a small buffer. The old approach confusingly created the
impression that the new item had some kind of significance. This was
another artifact of how things used to work before commits 8224de4f and
dd299df8.
When maintaining or merging patches, one of the most common sources
of conflicts is the list of objects in makefiles. Especially when
the split across lines has been changed on both sides, which is
somewhat common due to attempting to stay below 80 columns, those
conflicts are unnecessarily laborious to resolve.
By splitting, and alphabetically sorting, OBJS style lines into one
object per line, conflicts should be less frequent, and easier to
resolve when they still occur.
Author: Andres Freund
Discussion: https://postgr.es/m/20191029200901.vww4idgcxv74cwes@alap3.anarazel.de
Assert that _bt_binsrch() is never used to perform a binary search on a
leaf page when scantid is set in the insertion scankey. Leaf-level
binary searches where scantid is set must use _bt_binsrch_insert()
instead.
_bt_binsrch_insert() is likely to have additional responsibilities in
the future, such as searching within GIN-style posting lists using
scantid. It seems like a good idea to tighten things up now.
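The new invariant boils down to roughly the following check near the top of
_bt_binsrch() (a sketch, not the exact assertion added):

    /* scantid-aware searches of leaf pages must use _bt_binsrch_insert() */
    Assert(!P_ISLEAF(opaque) || key->scantid == NULL);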
Commit efada2b8e9, which made the nbtree page deletion algorithm more
robust, removed the concept of a half-dead internal page. Remove a
comment about half dead parent pages that was overlooked.
The Postgres approach to coupling locks during an ascent of the tree is
slightly different to the approach taken by Lehman and Yao. Add a new
paragraph to the "Differences to the Lehman & Yao algorithm" section of
the nbtree README that explains the similarities and differences.
Adjust the struct comment that describes how page splits use their
descent stack to cascade up the tree from the leaf level.
In passing, fix up some unrelated nbtree comments that had typos or were
obsolete.
The initial value of the nbtree stack downlink block number field
recorded during an initial descent of the tree wasn't actually used.
Both _bt_getstackbuf() callers overwrote the value with their own value.
Remove the block number field from the stack struct, and add a child
block number argument to _bt_getstackbuf() in its place. This makes the
overall design of _bt_getstackbuf() clearer.
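In terms of the interface, the change is roughly this (a sketch of the before
and after prototypes):

    /* Before: each stack entry carried its own notion of the child block */
    Buffer _bt_getstackbuf(Relation rel, BTStack stack);

    /* After: callers pass the child block whose downlink is being sought */
    Buffer _bt_getstackbuf(Relation rel, BTStack stack, BlockNumber child);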
Author: Peter Geoghegan
Reviewed-By: Anastasia Lubennikova
Discussion: https://postgr.es/m/CAH2-Wzmx+UbXt2YNOUCZ-a04VdXU=S=OHuAuD7Z8uQq-PXTYUg@mail.gmail.com
Commit d2086b08b0 removed almost all cases where nbtree must release a
read buffer lock and acquire a write buffer lock instead, so remaining
cases in which that's still necessary are not notable enough to appear
in the nbtree README.
More importantly, holding on to a buffer pin in cases where nbtree must
trade a read lock for a write lock is very unlikely to save any I/O.
This seems to have been a long overlooked throwback to a time when
nbtree cared about write-ordering dependencies, and performed
synchronous buffer writes. It hasn't worked that way in many years.
Use the PageIndexTupleOverwrite() bufpage.c routine within nbtree
instead of deleting a tuple and re-inserting its replacement. This
makes the intent of affected code slightly clearer. It also makes
CREATE INDEX slightly faster, since there is no longer a need to shift
every leaf page's line pointer array back and forth during index builds.
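The adopted pattern looks roughly like this (a sketch, assuming the caller
already holds an exclusive lock on the buffer; the helper name is made up):

    #include "postgres.h"
    #include "access/itup.h"
    #include "storage/bufpage.h"

    /* Sketch: overwrite an item in place instead of delete-then-reinsert */
    static void
    overwrite_item_sketch(Page page, OffsetNumber offnum,
                          IndexTuple newtup, Size newsize)
    {
        if (!PageIndexTupleOverwrite(page, offnum, (Item) newtup, newsize))
            elog(ERROR, "failed to overwrite index item");
    }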
Author: Peter Geoghegan, Anastasia Lubennikova
Reviewed-By: Anastasia Lubennikova
Discussion: https://postgr.es/m/CAH2-Wz=Zk=B9+Vwm376WuO7YTjFc2SSskifQm4Nme3RRRPtOSQ@mail.gmail.com