postgresql/src/backend/access/heap/pruneheap.c

/*-------------------------------------------------------------------------
*
* pruneheap.c
* heap page pruning and HOT-chain management code
*
* Portions Copyright (c) 1996-2023, PostgreSQL Global Development Group
* Portions Copyright (c) 1994, Regents of the University of California
*
*
* IDENTIFICATION
* src/backend/access/heap/pruneheap.c
*
*-------------------------------------------------------------------------
*/
#include "postgres.h"
#include "access/heapam.h"
#include "access/heapam_xlog.h"
#include "access/htup_details.h"
#include "access/transam.h"
#include "access/xlog.h"
#include "access/xloginsert.h"
#include "catalog/catalog.h"
#include "miscadmin.h"
#include "pgstat.h"
#include "storage/bufmgr.h"
#include "utils/snapmgr.h"
#include "utils/rel.h"
#include "utils/snapmgr.h"
/* Working data for heap_page_prune and subroutines */
typedef struct
{
Relation rel;
/* tuple visibility test, initialized for the relation */
GlobalVisState *vistest;
/*
* Thresholds set by TransactionIdLimitedForOldSnapshots() if they have
* been computed (done on demand, and only if
* OldSnapshotThresholdActive()). The first time a tuple is about to be
* removed based on the limited horizon, old_snap_used is set to true, and
* SetOldSnapshotThresholdTimestamp() is called. See
* heap_prune_satisfies_vacuum().
*/
TimestampTz old_snap_ts;
TransactionId old_snap_xmin;
bool old_snap_used;
TransactionId new_prune_xid; /* new prune hint value for page */
TransactionId snapshotConflictHorizon; /* latest xid removed */
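/*
 * A note on snapshotConflictHorizon: it is propagated into the prune
 * operation's WAL record so that, during hot standby, queries that might
 * still need the removed tuple versions can be cancelled.
 */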
int nredirected; /* numbers of entries in arrays below */
int ndead;
int nunused;
/* arrays that accumulate indexes of items to be changed */
OffsetNumber redirected[MaxHeapTuplesPerPage * 2];
OffsetNumber nowdead[MaxHeapTuplesPerPage];
OffsetNumber nowunused[MaxHeapTuplesPerPage];
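/*
 * Note: redirected[] is twice as long as the other arrays because each
 * redirect is recorded as a pair of entries: the offset of the item being
 * redirected, followed by the offset of the tuple it now points to.
 */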
/*
* marked[i] is true if item i is entered in one of the above arrays.
*
* This needs to be MaxHeapTuplesPerPage + 1 long as FirstOffsetNumber is
* 1. Otherwise every access would need to subtract 1.
*/
bool marked[MaxHeapTuplesPerPage + 1];
/*
* Tuple visibility is only computed once for each tuple, for correctness
* and efficiency reasons; see comment in heap_page_prune() for details.
* This is of type int8[], instead of HTSV_Result[], so we can use -1 to
* indicate no visibility has been computed, e.g. for LP_DEAD items.
*
* Same indexing as ->marked.
*/
int8 htsv[MaxHeapTuplesPerPage + 1];
} PruneState;
/* Local functions */
static HTSV_Result heap_prune_satisfies_vacuum(PruneState *prstate,
HeapTuple tup,
Buffer buffer);
static int heap_prune_chain(Buffer buffer,
OffsetNumber rootoffnum,
PruneState *prstate);
static void heap_prune_record_prunable(PruneState *prstate, TransactionId xid);
static void heap_prune_record_redirect(PruneState *prstate,
OffsetNumber offnum, OffsetNumber rdoffnum);
static void heap_prune_record_dead(PruneState *prstate, OffsetNumber offnum);
static void heap_prune_record_unused(PruneState *prstate, OffsetNumber offnum);
static void page_verify_redirects(Page page);
/*
* Optionally prune and repair fragmentation in the specified page.
*
* This is an opportunistic function. It will perform housekeeping
* only if the page heuristically looks like a candidate for pruning and we
* can acquire buffer cleanup lock without blocking.
*
* Note: this is called quite often. It's important that it fall out quickly
* if there's not any use in pruning.
*
* Caller must have pin on the buffer, and must *not* have a lock on it.
*/
void
heap_page_prune_opt(Relation relation, Buffer buffer)
{
Page page = BufferGetPage(buffer);
TransactionId prune_xid;
GlobalVisState *vistest;
TransactionId limited_xmin = InvalidTransactionId;
TimestampTz limited_ts = 0;
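/*
 * limited_xmin/limited_ts stay unset unless pruning turns out to be
 * possible only because of old_snapshot_threshold; in that case they are
 * passed down so that heap_page_prune() can record that the limited
 * horizon was relied upon (see heap_prune_satisfies_vacuum()).
 */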
Size minfree;
/*
* We can't write WAL in recovery mode, so there's no point trying to
* clean the page. The primary will likely issue a cleaning WAL record
* soon anyway, so this is no particular loss.
*/
if (RecoveryInProgress())
return;
/*
* XXX: Magic to keep the old_snapshot_threshold tests appearing to
* "work". They are currently broken, and discussion of what to do about
* them is ongoing. See
* https://www.postgresql.org/message-id/20200403001235.e6jfdll3gh2ygbuc%40alap3.anarazel.de
*/
if (old_snapshot_threshold == 0)
SnapshotTooOldMagicForTest();
/*
* First check whether there's any chance there's something to prune;
* determining the appropriate horizon is a waste if there's no prune_xid
* (i.e. no updates/deletes have left potentially dead tuples behind).
*/
prune_xid = ((PageHeader) page)->pd_prune_xid;
if (!TransactionIdIsValid(prune_xid))
return;
/*
* Check whether prune_xid indicates that there may be dead rows that can
* be cleaned up.
*
* It is OK to check the old snapshot limit before acquiring the cleanup
* lock because the worst that can happen is that we are not quite as
* aggressive about the cleanup (by however many transaction IDs are
* consumed between this point and acquiring the lock). This allows us to
* save significant overhead in the case where the page is found not to be
* prunable.
*
* Even if old_snapshot_threshold is set, we first check whether the page
* can be pruned without it: TransactionIdLimitedForOldSnapshots() is not
* cheap, and not relying on old_snapshot_threshold unnecessarily avoids
* causing conflicts.
*/
vistest = GlobalVisTestFor(relation);
if (!GlobalVisTestIsRemovableXid(vistest, prune_xid))
{
if (!OldSnapshotThresholdActive())
return;
if (!TransactionIdLimitedForOldSnapshots(GlobalVisTestNonRemovableHorizon(vistest),
relation,
&limited_xmin, &limited_ts))
return;
if (!TransactionIdPrecedes(prune_xid, limited_xmin))
return;
}
/*
* We prune when a previous UPDATE failed to find enough space on the page
* for a new tuple version, or when free space falls below the relation's
* fill-factor target (but not less than 10%).
*
* Checking free space here is questionable since we aren't holding any
* lock on the buffer; in the worst case we could get a bogus answer. It's
* unlikely to be *seriously* wrong, though, since reading either pd_lower
* or pd_upper is probably atomic. Avoiding taking a lock seems more
* important than sometimes getting a wrong answer in what is after all
* just a heuristic estimate.
*/
minfree = RelationGetTargetPageFreeSpace(relation,
HEAP_DEFAULT_FILLFACTOR);
minfree = Max(minfree, BLCKSZ / 10);
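/*
 * For example, with the default 8 kB block size and the default heap
 * fillfactor of 100, the fillfactor target is 0 bytes of free space, so
 * the BLCKSZ / 10 floor (819 bytes) is what applies here.
 */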
if (PageIsFull(page) || PageGetHeapFreeSpace(page) < minfree)
{
/* OK, try to get exclusive buffer lock */
if (!ConditionalLockBufferForCleanup(buffer))
return;
/*
* Now that we have buffer lock, get accurate information about the
* page's free space, and recheck the heuristic about whether to
* prune.
*/
if (PageIsFull(page) || PageGetHeapFreeSpace(page) < minfree)
{
int ndeleted,
nnewlpdead;
ndeleted = heap_page_prune(relation, buffer, vistest, limited_xmin,
limited_ts, &nnewlpdead, NULL);
/*
* Report the number of tuples reclaimed to pgstats. This is
* ndeleted minus the number of newly-LP_DEAD-set items.
*
* We derive the number of dead tuples like this to avoid totally
* forgetting about items that were set to LP_DEAD, since they
* still need to be cleaned up by VACUUM. We only want to count
* heap-only tuples that just became LP_UNUSED in our report, since
* those don't need any such cleanup.
*
* VACUUM doesn't have to compensate in the same way when it
* tracks ndeleted, since it will set the same LP_DEAD items to
* LP_UNUSED separately.
*/
if (ndeleted > nnewlpdead)
pgstat_update_heap_dead_tuples(relation,
ndeleted - nnewlpdead);
}
/* And release buffer lock */
LockBuffer(buffer, BUFFER_LOCK_UNLOCK);
/*
* We avoid reuse of any free space created on the page by unrelated
* UPDATEs/INSERTs by opting to not update the FSM at this point. The
* free space should be reused by UPDATEs to *this* page.
*/
}
}
/*
* Prune and repair fragmentation in the specified page.
*
* Caller must have pin and buffer cleanup lock on the page. Note that we
* don't update the FSM information for the page on the caller's behalf. Caller might
* also need to account for a reduction in the length of the line pointer
* array following array truncation by us.
*
* vistest is used to distinguish whether tuples are DEAD or RECENTLY_DEAD
* (see heap_prune_satisfies_vacuum and HeapTupleSatisfiesVacuum).
* old_snap_xmin / old_snap_ts must either have been set by
* TransactionIdLimitedForOldSnapshots, or be InvalidTransactionId/0
* respectively.
*
* Sets *nnewlpdead for caller, indicating the number of items that were
* newly set LP_DEAD during the prune operation.
*
* off_loc, if not NULL, is updated to the offset of the item currently
* being processed, for use in the caller's error callback.
*
* Returns the number of tuples deleted from the page during this call.
*/
int
heap_page_prune(Relation relation, Buffer buffer,
GlobalVisState *vistest,
TransactionId old_snap_xmin,
TimestampTz old_snap_ts,
int *nnewlpdead,
OffsetNumber *off_loc)
{
int ndeleted = 0;
Page page = BufferGetPage(buffer);
BlockNumber blockno = BufferGetBlockNumber(buffer);
OffsetNumber offnum,
maxoff;
PruneState prstate;
HeapTupleData tup;
/*
* Our strategy is to scan the page and make lists of items to change,
* then apply the changes within a critical section. This keeps as much
* logic as possible out of the critical section, and also ensures that
* WAL replay will work the same as the normal case.
*
* First, initialize the new pd_prune_xid value to zero (indicating no
* prunable tuples). If we find any tuples which may soon become
* prunable, we will save the lowest relevant XID in new_prune_xid. Also
* initialize the rest of our working state.
*/
prstate.new_prune_xid = InvalidTransactionId;
prstate.rel = relation;
prstate.vistest = vistest;
prstate.old_snap_xmin = old_snap_xmin;
prstate.old_snap_ts = old_snap_ts;
prstate.old_snap_used = false;
prstate.snapshotConflictHorizon = InvalidTransactionId;
prstate.nredirected = prstate.ndead = prstate.nunused = 0;
memset(prstate.marked, 0, sizeof(prstate.marked));
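/* marked[i] is set once item i is entered in one of the arrays above */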
maxoff = PageGetMaxOffsetNumber(page);
tup.t_tableOid = RelationGetRelid(prstate.rel);
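/*
* tup is reused as scratch space for each visibility check below; only
* t_data, t_len and t_self change from item to item.
*/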
/*
* Determine HTSV for all tuples.
*
* This is required for correctness to deal with cases where running HTSV
* twice could result in different results (e.g. RECENTLY_DEAD can turn to
* DEAD if another checked item causes GlobalVisTestIsRemovableFullXid()
* to update the horizon, INSERT_IN_PROGRESS can change to DEAD if the
* inserting transaction aborts, ...). That in turn could cause
* heap_prune_chain() to behave incorrectly if a tuple is reached twice,
* once directly via heap_prune_chain() and once by following a HOT chain.
*
* It's also good for performance. Most commonly tuples within a page are
* stored at decreasing offsets (while the items are stored at increasing
* offsets). When processing all tuples on a page this leads to reading
* memory at decreasing offsets within a page, with a variable stride.
* That's hard for CPU prefetchers to deal with. Processing the items in
* reverse order (and thus the tuples in increasing order) increases
* prefetching efficiency significantly / decreases the number of cache
* misses.
*/
for (offnum = maxoff;
offnum >= FirstOffsetNumber;
offnum = OffsetNumberPrev(offnum))
{
ItemId itemid = PageGetItemId(page, offnum);
HeapTupleHeader htup;
/* Nothing to do if slot doesn't contain a tuple */
if (!ItemIdIsNormal(itemid))
{
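/* -1 signals that no visibility status has been computed */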
prstate.htsv[offnum] = -1;
continue;
}
htup = (HeapTupleHeader) PageGetItem(page, itemid);
tup.t_data = htup;
tup.t_len = ItemIdGetLength(itemid);
ItemPointerSet(&(tup.t_self), blockno, offnum);
/*
* Set the offset number so that we can display it along with any
* error that occurred while processing this tuple.
*/
if (off_loc)
*off_loc = offnum;
prstate.htsv[offnum] = heap_prune_satisfies_vacuum(&prstate, &tup,
buffer);
}
/* Scan the page */
for (offnum = FirstOffsetNumber;
offnum <= maxoff;
offnum = OffsetNumberNext(offnum))
{
ItemId itemid;
/* Ignore items already processed as part of an earlier chain */
if (prstate.marked[offnum])
continue;
/* see preceding loop */
if (off_loc)
*off_loc = offnum;
/* Nothing to do if slot is empty or already dead */
itemid = PageGetItemId(page, offnum);
if (!ItemIdIsUsed(itemid) || ItemIdIsDead(itemid))
continue;
/* Process this item or chain of items */
ndeleted += heap_prune_chain(buffer, offnum, &prstate);
}
/* Clear the offset information once we have processed the given page. */
if (off_loc)
*off_loc = InvalidOffsetNumber;
/* Any error while applying the changes is critical */
START_CRIT_SECTION();
/* Have we found any prunable items? */
if (prstate.nredirected > 0 || prstate.ndead > 0 || prstate.nunused > 0)
{
/*
* Apply the planned item changes, then repair page fragmentation, and
* update the page's hint bit about whether it has free line pointers.
*/
heap_page_prune_execute(buffer,
prstate.redirected, prstate.nredirected,
prstate.nowdead, prstate.ndead,
prstate.nowunused, prstate.nunused);
/*
* Update the page's pd_prune_xid field to either zero, or the lowest
* XID of any soon-prunable tuple.
*/
((PageHeader) page)->pd_prune_xid = prstate.new_prune_xid;
/*
* Also clear the "page is full" flag, since there's no point in
* repeating the prune/defrag process until something else happens to
* the page.
*/
PageClearFull(page);
MarkBufferDirty(buffer);
/*
* Emit a WAL XLOG_HEAP2_PRUNE record showing what we did
*/
if (RelationNeedsWAL(relation))
{
xl_heap_prune xlrec;
XLogRecPtr recptr;
xlrec.isCatalogRel = RelationIsAccessibleInLogicalDecoding(relation);
xlrec.snapshotConflictHorizon = prstate.snapshotConflictHorizon;
xlrec.nredirected = prstate.nredirected;
xlrec.ndead = prstate.ndead;
XLogBeginInsert();
XLogRegisterData((char *) &xlrec, SizeOfHeapPrune);
XLogRegisterBuffer(0, buffer, REGBUF_STANDARD);
/*
* The OffsetNumber arrays are not actually in the buffer, but we
* pretend that they are. When XLogInsert stores the whole
* buffer, the offset arrays need not be stored too.
*/
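/*
* Each redirected entry is a pair of offsets (the redirected item and
* its target), hence the factor of two below.
*/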
if (prstate.nredirected > 0)
XLogRegisterBufData(0, (char *) prstate.redirected,
prstate.nredirected *
sizeof(OffsetNumber) * 2);
if (prstate.ndead > 0)
XLogRegisterBufData(0, (char *) prstate.nowdead,
prstate.ndead * sizeof(OffsetNumber));
if (prstate.nunused > 0)
XLogRegisterBufData(0, (char *) prstate.nowunused,
prstate.nunused * sizeof(OffsetNumber));
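/*
* Insert the record and stamp the page with its LSN so that the buffer
* cannot be written out before the WAL record is flushed.
*/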
recptr = XLogInsert(RM_HEAP2_ID, XLOG_HEAP2_PRUNE);
PageSetLSN(BufferGetPage(buffer), recptr);
}
}
else
{
/*
* If we didn't prune anything, but have found a new value for the
* pd_prune_xid field, update it and mark the buffer dirty. This is
* treated as a non-WAL-logged hint.
*
* Also clear the "page is full" flag if it is set, since there's no
* point in repeating the prune/defrag process until something else
* happens to the page.
*/
if (((PageHeader) page)->pd_prune_xid != prstate.new_prune_xid ||
PageIsFull(page))
{
((PageHeader) page)->pd_prune_xid = prstate.new_prune_xid;
PageClearFull(page);
MarkBufferDirtyHint(buffer, true);
}
}
END_CRIT_SECTION();
/* Record number of newly-set-LP_DEAD items for caller */
*nnewlpdead = prstate.ndead;
return ndeleted;
}
/*
* Perform visibility checks for heap pruning.
*
* This is more complicated than just using GlobalVisTestIsRemovableXid()
* because of old_snapshot_threshold. We only want to increase the threshold
* that triggers errors for old snapshots when we actually decide to remove a
* row based on the limited horizon.
*
* Due to its cost we also only want to call
* TransactionIdLimitedForOldSnapshots() if necessary, i.e. we might not have
* done so in heap_page_prune_opt() if pd_prune_xid was old enough. But we
* still want to be able to remove rows that are too new to be removed
* according to prstate->vistest, but that can be removed based on
* old_snapshot_threshold. So we call TransactionIdLimitedForOldSnapshots() on
* demand in here, if appropriate.
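*
* The resulting decision ladder, as a rough sketch of the code below:
*
*   already relying on the limited xmin?   -> compare dead_after with
*                                             old_snap_xmin
*   removable per prstate->vistest?        -> HEAPTUPLE_DEAD
*   old_snapshot_threshold active?         -> compute the limited xmin on
*                                             demand and retry the comparison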
*/
static HTSV_Result
heap_prune_satisfies_vacuum(PruneState *prstate, HeapTuple tup, Buffer buffer)
{
HTSV_Result res;
TransactionId dead_after;

res = HeapTupleSatisfiesVacuumHorizon(tup, buffer, &dead_after);
if (res != HEAPTUPLE_RECENTLY_DEAD)
return res;
/*
* If we are already relying on the limited xmin, there is no need to
* delay doing so anymore.
*/
if (prstate->old_snap_used)
{
Assert(TransactionIdIsValid(prstate->old_snap_xmin));
if (TransactionIdPrecedes(dead_after, prstate->old_snap_xmin))
res = HEAPTUPLE_DEAD;
return res;
}
/*
* First check if GlobalVisTestIsRemovableXid() is sufficient to find the
* row dead. If not, and old_snapshot_threshold is enabled, try to use the
* lowered horizon.
*/
if (GlobalVisTestIsRemovableXid(prstate->vistest, dead_after))
res = HEAPTUPLE_DEAD;
else if (OldSnapshotThresholdActive())
{
/* haven't determined the limited horizon yet, so compute it now */
if (!TransactionIdIsValid(prstate->old_snap_xmin))
{
TransactionId horizon =
GlobalVisTestNonRemovableHorizon(prstate->vistest);

TransactionIdLimitedForOldSnapshots(horizon, prstate->rel,
&prstate->old_snap_xmin,
&prstate->old_snap_ts);
}
if (TransactionIdIsValid(prstate->old_snap_xmin) &&
TransactionIdPrecedes(dead_after, prstate->old_snap_xmin))
{
/*
* About to remove a row based on snapshot_too_old. Need to raise
* the threshold so that problematic accesses will raise an error.
*/
Assert(!prstate->old_snap_used);
SetOldSnapshotThresholdTimestamp(prstate->old_snap_ts,
prstate->old_snap_xmin);
prstate->old_snap_used = true;
res = HEAPTUPLE_DEAD;
}
}
return res;
}

/*
* Prune the specified line pointer or a HOT chain originating at that line
* pointer.
*
* If the item is an index-referenced tuple (i.e. not a heap-only tuple),
* the HOT chain is pruned by removing all DEAD tuples at the start of the HOT
* chain. We also prune any RECENTLY_DEAD tuples preceding a DEAD tuple.
* This is OK because a RECENTLY_DEAD tuple preceding a DEAD tuple is really
* DEAD; our visibility test is just too coarse to detect it.
*
* In general, pruning must never leave behind a DEAD tuple that still has
* tuple storage. VACUUM isn't prepared to deal with that case. That's why
* VACUUM prunes the same heap page a second time (without dropping its lock
* in the interim) when it sees a newly DEAD tuple that we initially saw as
* in-progress. Retrying pruning like this can only happen when an inserting
* transaction concurrently aborts.
*
* The root line pointer is redirected to the tuple immediately after the
* latest DEAD tuple. If all tuples in the chain are DEAD, the root line
* pointer is marked LP_DEAD. (This includes the case of a DEAD simple
* tuple, which we treat as a chain of length 1.)
*
*
* We don't actually change the page here. We just add entries to the arrays in
* prstate showing the changes to be made. Items to be redirected are added
* to the redirected[] array (two entries per redirection); items to be set to
* LP_DEAD state are added to nowdead[]; and items to be set to LP_UNUSED
* state are added to nowunused[].
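*
* For instance, redirecting item 1 to item 4 while reclaiming items 2 and 3
* would be recorded as redirected[] = {1, 4} and nowunused[] = {2, 3}.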
*
* Returns the number of tuples (to be) deleted from the page.
*/
static int
heap_prune_chain(Buffer buffer, OffsetNumber rootoffnum, PruneState *prstate)
{
int ndeleted = 0;
Page dp = (Page) BufferGetPage(buffer);
TransactionId priorXmax = InvalidTransactionId;
ItemId rootlp;
HeapTupleHeader htup;
OffsetNumber latestdead = InvalidOffsetNumber,
maxoff = PageGetMaxOffsetNumber(dp),
offnum;
OffsetNumber chainitems[MaxHeapTuplesPerPage];
int nchain = 0,
i;

rootlp = PageGetItemId(dp, rootoffnum);
/*
* If it's a heap-only tuple, then it is not the start of a HOT chain.
*/
if (ItemIdIsNormal(rootlp))
{
Assert(prstate->htsv[rootoffnum] != -1);
htup = (HeapTupleHeader) PageGetItem(dp, rootlp);
if (HeapTupleHeaderIsHeapOnly(htup))
{
/*
* If the tuple is DEAD and doesn't chain to anything else, mark
* it unused immediately. (If it does chain, we can only remove
* it as part of pruning its chain.)
*
* We need this primarily to handle aborted HOT updates, that is,
* XMIN_INVALID heap-only tuples. Those might not be linked to by
* any chain, since the parent tuple might be re-updated before
* any pruning occurs. So we have to be able to reap them
* separately from chain-pruning. (Note that
* HeapTupleHeaderIsHotUpdated will never return true for an
* XMIN_INVALID tuple, so this code will work even when there were
* sequential updates within the aborted transaction.)
*
* Note that we might first arrive at a dead heap-only tuple
* either here or while following a chain below. Whichever path
* gets there first will mark the tuple unused.
*/
if (prstate->htsv[rootoffnum] == HEAPTUPLE_DEAD &&
!HeapTupleHeaderIsHotUpdated(htup))
{
heap_prune_record_unused(prstate, rootoffnum);
HeapTupleHeaderAdvanceConflictHorizon(htup,
&prstate->snapshotConflictHorizon);
ndeleted++;
}
/* Nothing more to do */
return ndeleted;
}
}
/* Start from the root tuple */
offnum = rootoffnum;
/* while not end of the chain */
for (;;)
{
ItemId lp;
bool tupdead,
recent_dead;
/* Sanity check (pure paranoia) */
if (offnum < FirstOffsetNumber)
break;
/*
* An offset past the end of the page's line pointer array is possible
* when the array was truncated (original item must have been unused)
*/
if (offnum > maxoff)
break;
/* If the item is already processed, stop --- it must not be the same chain */
if (prstate->marked[offnum])
break;
lp = PageGetItemId(dp, offnum);
/* Unused item obviously isn't part of the chain */
if (!ItemIdIsUsed(lp))
break;
/*
* If we are looking at the redirected root line pointer, jump to the
* first normal tuple in the chain. If we find a redirect somewhere
* else, stop --- it must not be the same chain.
*/
if (ItemIdIsRedirected(lp))
{
if (nchain > 0)
break; /* not at start of chain */
chainitems[nchain++] = offnum;
offnum = ItemIdGetRedirect(rootlp);
continue;
}
/*
* Likewise, a dead line pointer can't be part of the chain. (We
* already eliminated the case of a dead root tuple outside this
* function.)
*/
if (ItemIdIsDead(lp))
break;
Assert(ItemIdIsNormal(lp));
Assert(prstate->htsv[offnum] != -1);
htup = (HeapTupleHeader) PageGetItem(dp, lp);
/*
* Check the tuple XMIN against prior XMAX, if any
*/
if (TransactionIdIsValid(priorXmax) &&
!TransactionIdEquals(HeapTupleHeaderGetXmin(htup), priorXmax))
break;
/*
* OK, this tuple is indeed a member of the chain.
*/
chainitems[nchain++] = offnum;
/*
* Check tuple's visibility status.
*/
tupdead = recent_dead = false;

switch ((HTSV_Result) prstate->htsv[offnum])
{
case HEAPTUPLE_DEAD:
tupdead = true;
break;
case HEAPTUPLE_RECENTLY_DEAD:
recent_dead = true;

/*
* This tuple may soon become DEAD. Update the hint field so
* that the page is reconsidered for pruning in the future.
*/
heap_prune_record_prunable(prstate,
HeapTupleHeaderGetUpdateXid(htup));
break;
case HEAPTUPLE_DELETE_IN_PROGRESS:
/*
* This tuple may soon become DEAD. Update the hint field so
* that the page is reconsidered for pruning in the future.
*/
heap_prune_record_prunable(prstate,
HeapTupleHeaderGetUpdateXid(htup));
break;
case HEAPTUPLE_LIVE:
case HEAPTUPLE_INSERT_IN_PROGRESS:

/*
* If we wanted to optimize for aborts, we might consider
* marking the page prunable when we see INSERT_IN_PROGRESS.
* But we don't. See related decisions about when to mark the
* page prunable in heapam.c.
*/
break;
default:
elog(ERROR, "unexpected HeapTupleSatisfiesVacuum result");
break;
}
/*
* Remember the last DEAD tuple seen. We will advance past
* RECENTLY_DEAD tuples just in case there's a DEAD one after them;
* but we can't advance past anything else. We have to make sure that
* we don't miss any DEAD tuples, since DEAD tuples that still have
* tuple storage after pruning will confuse VACUUM.
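*
* For example, walking a chain DEAD -> RECENTLY_DEAD -> DEAD -> LIVE
* advances latestdead to the third member, so the intervening
* RECENTLY_DEAD tuple is removed along with the DEAD ones.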
*/
if (tupdead)
{
latestdead = offnum;
HeapTupleHeaderAdvanceConflictHorizon(htup,
&prstate->snapshotConflictHorizon);
}
else if (!recent_dead)
break;
/*
* If the tuple is not HOT-updated, then we are at the end of this
* HOT-update chain.
*/
if (!HeapTupleHeaderIsHotUpdated(htup))
break;

/* HOT implies it can't have moved to a different partition */
Assert(!HeapTupleHeaderIndicatesMovedPartitions(htup));
/*
* Advance to next chain member.
*/
Assert(ItemPointerGetBlockNumber(&htup->t_ctid) ==
BufferGetBlockNumber(buffer));
offnum = ItemPointerGetOffsetNumber(&htup->t_ctid);
priorXmax = HeapTupleHeaderGetUpdateXid(htup);
}
/*
* If we found a DEAD tuple in the chain, adjust the HOT chain so that all
* the DEAD tuples at the start of the chain are removed and the root line
* pointer is appropriately redirected.
*/
if (OffsetNumberIsValid(latestdead))
{
/*
* Mark as unused each intermediate item that we are able to remove
* from the chain.
*
* When the previous item is the last dead tuple seen, we are at the
* right candidate for redirection.
*/
for (i = 1; (i < nchain) && (chainitems[i - 1] != latestdead); i++)
{
heap_prune_record_unused(prstate, chainitems[i]);
ndeleted++;
}
/*
* If the root entry had been a normal tuple, we are deleting it, so
* count it in the result. But changing a redirect (even to DEAD
* state) doesn't count.
*/
if (ItemIdIsNormal(rootlp))
ndeleted++;
/*
* If the DEAD tuple is at the end of the chain, the entire chain is
* dead and the root line pointer can be marked dead. Otherwise just
* redirect the root to the correct chain member.
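*
* For example, with chainitems = {root, d1, d2, live}, where d2 is the
* latest DEAD member, the loop above leaves i at the index of "live" and
* the root is redirected there; had every member been DEAD, i would have
* reached nchain and the root would be marked LP_DEAD instead.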
*/
if (i >= nchain)
heap_prune_record_dead(prstate, rootoffnum);
else
heap_prune_record_redirect(prstate, rootoffnum, chainitems[i]);
}
else if (nchain < 2 && ItemIdIsRedirected(rootlp))
{
/*
* We found a redirect item that doesn't point to a valid follow-on
* item. This can happen if the loop in heap_page_prune caused us to
* visit the dead successor of a redirect item before visiting the
* redirect item. We can clean up by setting the redirect item to
* DEAD state.
*/
heap_prune_record_dead(prstate, rootoffnum);
}
return ndeleted;
}

/* Record lowest soon-prunable XID */
static void
heap_prune_record_prunable(PruneState *prstate, TransactionId xid)
{
/*
* This should exactly match the PageSetPrunable macro. We can't store
* directly into the page header yet, so we update working state.
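*
* For example, after seeing RECENTLY_DEAD tuples with update XIDs 100 and
* then 90, new_prune_xid ends up as 90; the lowest such XID is what later
* becomes the page's pd_prune_xid hint.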
*/
Assert(TransactionIdIsNormal(xid));
if (!TransactionIdIsValid(prstate->new_prune_xid) ||
TransactionIdPrecedes(xid, prstate->new_prune_xid))
prstate->new_prune_xid = xid;
}

/* Record line pointer to be redirected */
static void
heap_prune_record_redirect(PruneState *prstate,
OffsetNumber offnum, OffsetNumber rdoffnum)
{
Assert(prstate->nredirected < MaxHeapTuplesPerPage);
prstate->redirected[prstate->nredirected * 2] = offnum;
prstate->redirected[prstate->nredirected * 2 + 1] = rdoffnum;
prstate->nredirected++;
Assert(!prstate->marked[offnum]);
prstate->marked[offnum] = true;
Assert(!prstate->marked[rdoffnum]);
prstate->marked[rdoffnum] = true;
}

/* Record line pointer to be marked dead */
static void
heap_prune_record_dead(PruneState *prstate, OffsetNumber offnum)
{
Assert(prstate->ndead < MaxHeapTuplesPerPage);
prstate->nowdead[prstate->ndead] = offnum;
prstate->ndead++;
Assert(!prstate->marked[offnum]);
prstate->marked[offnum] = true;
}

/* Record line pointer to be marked unused */
static void
heap_prune_record_unused(PruneState *prstate, OffsetNumber offnum)
{
Assert(prstate->nunused < MaxHeapTuplesPerPage);
prstate->nowunused[prstate->nunused] = offnum;
prstate->nunused++;
Assert(!prstate->marked[offnum]);
prstate->marked[offnum] = true;
}

/*
* Perform the actual page changes needed by heap_page_prune.
* It is expected that the caller has a full cleanup lock on the
* buffer.
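*
* A rough sketch of the expected call pattern (simplified from
* heap_page_prune): collect changes with heap_prune_chain(), then apply
* them here inside a critical section, before WAL-logging the change:
*
*   START_CRIT_SECTION();
*   heap_page_prune_execute(buffer,
*                           prstate.redirected, prstate.nredirected,
*                           prstate.nowdead, prstate.ndead,
*                           prstate.nowunused, prstate.nunused);
*   MarkBufferDirty(buffer);
*   ... emit WAL record if the relation is logged ...
*   END_CRIT_SECTION();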
*/
void
heap_page_prune_execute(Buffer buffer,
OffsetNumber *redirected, int nredirected,
OffsetNumber *nowdead, int ndead,
OffsetNumber *nowunused, int nunused)
{
Page page = (Page) BufferGetPage(buffer);
OffsetNumber *offnum;
HeapTupleHeader htup PG_USED_FOR_ASSERTS_ONLY;

/* Shouldn't be called unless there's something to do */
Assert(nredirected > 0 || ndead > 0 || nunused > 0);
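/*
 * The redirected array is consumed in pairs: entry 2*i is a 'from'
 * offset and entry 2*i + 1 is the corresponding 'to' offset.  The
 * nowdead and nowunused arrays hold one offset per item.
 */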
/* Update all redirected line pointers */
offnum = redirected;
for (int i = 0; i < nredirected; i++)
{
OffsetNumber fromoff = *offnum++;
OffsetNumber tooff = *offnum++;
ItemId fromlp = PageGetItemId(page, fromoff);
ItemId tolp PG_USED_FOR_ASSERTS_ONLY;
#ifdef USE_ASSERT_CHECKING
/*
* Any existing item that we set as an LP_REDIRECT (any 'from' item)
* must be the first item from a HOT chain. If the item has tuple
* storage then it can't be a heap-only tuple. Otherwise we are just
* maintaining an existing LP_REDIRECT from an existing HOT chain that
* has been pruned at least once before now.
*/
if (!ItemIdIsRedirected(fromlp))
{
Assert(ItemIdHasStorage(fromlp) && ItemIdIsNormal(fromlp));
htup = (HeapTupleHeader) PageGetItem(page, fromlp);
Assert(!HeapTupleHeaderIsHeapOnly(htup));
}
else
{
/* We shouldn't need to redundantly set the redirect */
Assert(ItemIdGetRedirect(fromlp) != tooff);
}
/*
* The item that we're about to set as an LP_REDIRECT (the 'from'
* item) will point to an existing item (the 'to' item) that is
* already a heap-only tuple. There can be at most one LP_REDIRECT
* item per HOT chain.
*
* We need to keep around an LP_REDIRECT item (after the original
* non-heap-only root tuple gets pruned away) so that it's always
* possible for VACUUM to easily figure out what TID to delete from
* indexes when an entire HOT chain becomes dead. A heap-only tuple
* can never become LP_DEAD; an LP_REDIRECT item or a regular heap
* tuple can.
*
* This check may miss problems, e.g. the target of a redirect could
* be marked as unused subsequently. The page_verify_redirects() check
* below will catch such problems.
*/
tolp = PageGetItemId(page, tooff);
Assert(ItemIdHasStorage(tolp) && ItemIdIsNormal(tolp));
htup = (HeapTupleHeader) PageGetItem(page, tolp);
Assert(HeapTupleHeaderIsHeapOnly(htup));
#endif
ItemIdSetRedirect(fromlp, tooff);
}
/* Update all now-dead line pointers */
offnum = nowdead;
for (int i = 0; i < ndead; i++)
{
OffsetNumber off = *offnum++;
ItemId lp = PageGetItemId(page, off);
#ifdef USE_ASSERT_CHECKING
/*
* An LP_DEAD line pointer must be left behind when the original item
* (which is dead to everybody) could still be referenced by a TID in
* an index. This should never be necessary with any individual
* heap-only tuple item, though. (It's not clear how much of a problem
* that would be, but there is no reason to allow it.)
*/
if (ItemIdHasStorage(lp))
{
Assert(ItemIdIsNormal(lp));
htup = (HeapTupleHeader) PageGetItem(page, lp);
Assert(!HeapTupleHeaderIsHeapOnly(htup));
}
else
{
/* Whole HOT chain becomes dead */
Assert(ItemIdIsRedirected(lp));
}
#endif
ItemIdSetDead(lp);
}
/* Update all now-unused line pointers */
offnum = nowunused;
for (int i = 0; i < nunused; i++)
{
OffsetNumber off = *offnum++;
ItemId lp = PageGetItemId(page, off);
#ifdef USE_ASSERT_CHECKING
/*
* Only heap-only tuples can become LP_UNUSED during pruning. They
* don't need to be left in place as LP_DEAD items until VACUUM gets
* around to doing index vacuuming.
*/
Assert(ItemIdHasStorage(lp) && ItemIdIsNormal(lp));
htup = (HeapTupleHeader) PageGetItem(page, lp);
Assert(HeapTupleHeaderIsHeapOnly(htup));
#endif
ItemIdSetUnused(lp);
}
/*
* Finally, repair any fragmentation, and update the page's hint bit about
* whether it has free pointers.
*/
PageRepairFragmentation(page);
/*
* Now that the page has been modified, assert that redirect items still
* point to valid targets.
*/
page_verify_redirects(page);
}
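/*
 * A sketch of the expected caller sequence, loosely following what
 * heap_page_prune() does (the PruneState field names below are
 * assumptions; see that function for the authoritative sequence):
 *
 *     START_CRIT_SECTION();
 *     heap_page_prune_execute(buffer,
 *                             prstate.redirected, prstate.nredirected,
 *                             prstate.nowdead, prstate.ndead,
 *                             prstate.nowunused, prstate.nunused);
 *     MarkBufferDirty(buffer);
 *     ... emit an XLOG_HEAP2_PRUNE record here if the relation is logged ...
 *     END_CRIT_SECTION();
 *
 * The XLOG_HEAP2_PRUNE redo routine applies the same changes through this
 * function, so primary and standby make identical page modifications.
 */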
/*
* If built with assertions, verify that all LP_REDIRECT items point to a
* valid item.
*
* One way that bugs related to HOT pruning show up is redirect items
* pointing to removed tuples. It's not trivial to reliably check that
* marking an item
* unused will not orphan a redirect item during heap_prune_chain() /
* heap_page_prune_execute(), so we additionally check the whole page after
* pruning. Without this check such bugs would typically only cause asserts
* later, potentially well after the corruption has been introduced.
*
* Also see the comments in heap_page_prune_execute()'s redirection loop.
*/
static void
page_verify_redirects(Page page)
{
#ifdef USE_ASSERT_CHECKING
OffsetNumber offnum;
OffsetNumber maxoff;
maxoff = PageGetMaxOffsetNumber(page);
for (offnum = FirstOffsetNumber;
offnum <= maxoff;
offnum = OffsetNumberNext(offnum))
{
ItemId itemid = PageGetItemId(page, offnum);
OffsetNumber targoff;
ItemId targitem;
HeapTupleHeader htup;
if (!ItemIdIsRedirected(itemid))
continue;
targoff = ItemIdGetRedirect(itemid);
targitem = PageGetItemId(page, targoff);
Assert(ItemIdIsUsed(targitem));
Assert(ItemIdIsNormal(targitem));
Assert(ItemIdHasStorage(targitem));
htup = (HeapTupleHeader) PageGetItem(page, targitem);
Assert(HeapTupleHeaderIsHeapOnly(htup));
}
#endif
}
/*
* For all items in this page, find their respective root line pointers.
* If item k is part of a HOT-chain with root at item j, then we set
* root_offsets[k - 1] = j.
*
* The passed-in root_offsets array must have MaxHeapTuplesPerPage entries.
* Unused entries are filled with InvalidOffsetNumber (zero).
*
* The function must be called with at least share lock on the buffer, to
* prevent concurrent prune operations.
*
* Note: The information collected here is valid only as long as the caller
* holds a pin on the buffer. Once the pin is released, a tuple might be pruned
* and reused by a completely unrelated tuple.
*/
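/*
 * Usage sketch, assuming a caller resembling the index-build heap scan
 * (index entries must reference root TIDs, never heap-only tuples):
 *
 *     OffsetNumber root_offsets[MaxHeapTuplesPerPage];
 *
 *     heap_get_root_tuples(page, root_offsets);
 *     ...
 *     if (HeapTupleHeaderIsHeapOnly(heapTuple->t_data))
 *         offnum = root_offsets[offnum - 1];
 */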
void
heap_get_root_tuples(Page page, OffsetNumber *root_offsets)
{
OffsetNumber offnum,
maxoff;
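/* InvalidOffsetNumber is zero, so a byte-wise MemSet marks every entry invalid */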
MemSet(root_offsets, InvalidOffsetNumber,
MaxHeapTuplesPerPage * sizeof(OffsetNumber));
maxoff = PageGetMaxOffsetNumber(page);
for (offnum = FirstOffsetNumber; offnum <= maxoff; offnum = OffsetNumberNext(offnum))
{
ItemId lp = PageGetItemId(page, offnum);
HeapTupleHeader htup;
OffsetNumber nextoffnum;
TransactionId priorXmax;
/* skip unused and dead items */
if (!ItemIdIsUsed(lp) || ItemIdIsDead(lp))
continue;
if (ItemIdIsNormal(lp))
{
htup = (HeapTupleHeader) PageGetItem(page, lp);
/*
* Check if this tuple is part of a HOT-chain rooted at some other
* tuple. If so, skip it for now; we'll process it when we find
* its root.
*/
if (HeapTupleHeaderIsHeapOnly(htup))
continue;
/*
* This is either a plain tuple or the root of a HOT-chain.
* Remember it in the mapping.
*/
root_offsets[offnum - 1] = offnum;
/* If it's not the start of a HOT-chain, we're done with it */
if (!HeapTupleHeaderIsHotUpdated(htup))
continue;
/* Set up to scan the HOT-chain */
nextoffnum = ItemPointerGetOffsetNumber(&htup->t_ctid);
priorXmax = HeapTupleHeaderGetUpdateXid(htup);
}
else
{
/* Must be a redirect item. We do not set its root_offsets entry */
Assert(ItemIdIsRedirected(lp));
/* Set up to scan the HOT-chain */
nextoffnum = ItemIdGetRedirect(lp);
priorXmax = InvalidTransactionId;
}
/*
* Now follow the HOT-chain and collect other tuples in the chain.
*
* Note: Even though this is a nested loop, the complexity of the
* function is O(N) because each tuple on the page is visited at most
* twice: once in the outer loop and once during HOT-chain chases.
*/
for (;;)
{
/* Sanity check (pure paranoia) */
if (nextoffnum < FirstOffsetNumber)
break;
/*
* An offset past the end of the page's line pointer array is possible
* when the array was truncated
*/
if (nextoffnum > maxoff)
break;
lp = PageGetItemId(page, nextoffnum);
/* Check for broken chains */
if (!ItemIdIsNormal(lp))
break;
htup = (HeapTupleHeader) PageGetItem(page, lp);
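/*
 * Validate the chain link: the member's xmin must match the prior
 * tuple's update xid, else the chain is broken (e.g. the updater
 * aborted and the line pointer was recycled).
 */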
if (TransactionIdIsValid(priorXmax) &&
!TransactionIdEquals(priorXmax, HeapTupleHeaderGetXmin(htup)))
break;
/* Remember the root line pointer for this item */
root_offsets[nextoffnum - 1] = offnum;
/* Advance to next chain member, if any */
if (!HeapTupleHeaderIsHotUpdated(htup))
break;
/* HOT implies it can't have moved to different partition */
Assert(!HeapTupleHeaderIndicatesMovedPartitions(htup));
nextoffnum = ItemPointerGetOffsetNumber(&htup->t_ctid);
priorXmax = HeapTupleHeaderGetUpdateXid(htup);
}
}
}