/*-------------------------------------------------------------------------
 *
 * nbtree.c
 *	  Implementation of Lehman and Yao's btree management algorithm for
 *	  Postgres.
 *
 * NOTES
 *	  This file contains only the public interface routines.
 *
 *
 * Portions Copyright (c) 1996-2022, PostgreSQL Global Development Group
 * Portions Copyright (c) 1994, Regents of the University of California
 *
 * IDENTIFICATION
 *	  src/backend/access/nbtree/nbtree.c
 *
 *-------------------------------------------------------------------------
 */
#include "postgres.h"

#include "access/nbtree.h"
#include "access/nbtxlog.h"
#include "access/relscan.h"
#include "access/xlog.h"
#include "access/xloginsert.h"
#include "commands/progress.h"
#include "commands/vacuum.h"
#include "miscadmin.h"
#include "nodes/execnodes.h"
#include "pgstat.h"
#include "postmaster/autovacuum.h"
#include "storage/condition_variable.h"
#include "storage/indexfsm.h"
#include "storage/ipc.h"
#include "storage/lmgr.h"
#include "storage/smgr.h"
#include "utils/builtins.h"
#include "utils/index_selfuncs.h"
#include "utils/memutils.h"

/*
 * BTPARALLEL_NOT_INITIALIZED indicates that the scan has not started.
 *
 * BTPARALLEL_ADVANCING indicates that some process is advancing the scan to
 * a new page; others must wait.
 *
 * BTPARALLEL_IDLE indicates that no backend is currently advancing the scan
 * to a new page; some process can start doing that.
 *
 * BTPARALLEL_DONE indicates that the scan is complete (including error exit).
 * We reach this state once for every distinct combination of array keys.
 */
typedef enum
{
	BTPARALLEL_NOT_INITIALIZED,
	BTPARALLEL_ADVANCING,
	BTPARALLEL_IDLE,
	BTPARALLEL_DONE
} BTPS_State;

/*
 * BTParallelScanDescData contains btree specific shared information required
 * for parallel scan.
 */
typedef struct BTParallelScanDescData
{
	BlockNumber btps_scanPage;		/* latest or next page to be scanned */
	BTPS_State	btps_pageStatus;	/* indicates whether next page is
									 * available for scan. see above for
									 * possible states of parallel scan. */
	int			btps_arrayKeyCount; /* count indicating number of array scan
									 * keys processed by parallel scan */
	slock_t		btps_mutex;			/* protects above variables */
	ConditionVariable btps_cv;		/* used to synchronize parallel scan */
} BTParallelScanDescData;

typedef struct BTParallelScanDescData *BTParallelScanDesc;
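
/*
 * Illustrative sketch, not part of the original file: one plausible way a
 * backend could seize the parallel scan using the btps_mutex and btps_cv
 * fields above.  The real logic lives in _bt_parallel_seize(); the body
 * below is a simplified assumption, kept out of the build via NOT_USED.
 */
#ifdef NOT_USED
static bool
example_parallel_seize(BTParallelScanDesc btscan, BlockNumber *pageno)
{
	bool		exit_loop = false;
	bool		status = true;

	while (true)
	{
		SpinLockAcquire(&btscan->btps_mutex);
		if (btscan->btps_pageStatus == BTPARALLEL_DONE)
		{
			/* Scan is over; tell caller there is no page to read. */
			status = false;
			exit_loop = true;
		}
		else if (btscan->btps_pageStatus != BTPARALLEL_ADVANCING)
		{
			/* Become the advancing backend and take the current page. */
			btscan->btps_pageStatus = BTPARALLEL_ADVANCING;
			*pageno = btscan->btps_scanPage;
			exit_loop = true;
		}
		SpinLockRelease(&btscan->btps_mutex);
		if (exit_loop)
			break;
		/* Somebody else is advancing the scan; sleep until they are done. */
		ConditionVariableSleep(&btscan->btps_cv, WAIT_EVENT_BTREE_PAGE);
	}
	ConditionVariableCancelSleep();

	return status;
}
#endif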

static void btvacuumscan(IndexVacuumInfo *info, IndexBulkDeleteResult *stats,
						 IndexBulkDeleteCallback callback, void *callback_state,
						 BTCycleId cycleid);
static void btvacuumpage(BTVacState *vstate, BlockNumber scanblkno);
static BTVacuumPosting btreevacuumposting(BTVacState *vstate,
										  IndexTuple posting,
										  OffsetNumber updatedoffset,
										  int *nremaining);

/*
 * Btree handler function: return IndexAmRoutine with access method parameters
 * and callbacks.
 */
Datum
bthandler(PG_FUNCTION_ARGS)
{
	IndexAmRoutine *amroutine = makeNode(IndexAmRoutine);

	amroutine->amstrategies = BTMaxStrategyNumber;
	amroutine->amsupport = BTNProcs;
	amroutine->amoptsprocnum = BTOPTIONS_PROC;
	amroutine->amcanorder = true;
	amroutine->amcanorderbyop = false;
	amroutine->amcanbackward = true;
	amroutine->amcanunique = true;
	amroutine->amcanmulticol = true;
	amroutine->amoptionalkey = true;
	amroutine->amsearcharray = true;
	amroutine->amsearchnulls = true;
	amroutine->amstorage = false;
	amroutine->amclusterable = true;
	amroutine->ampredlocks = true;
	amroutine->amcanparallel = true;
	amroutine->amcaninclude = true;
	amroutine->amusemaintenanceworkmem = false;
	amroutine->amhotblocking = true;
	amroutine->amparallelvacuumoptions =
		VACUUM_OPTION_PARALLEL_BULKDEL | VACUUM_OPTION_PARALLEL_COND_CLEANUP;
	amroutine->amkeytype = InvalidOid;

	amroutine->ambuild = btbuild;
	amroutine->ambuildempty = btbuildempty;
	amroutine->aminsert = btinsert;
	amroutine->ambulkdelete = btbulkdelete;
	amroutine->amvacuumcleanup = btvacuumcleanup;
	amroutine->amcanreturn = btcanreturn;
	amroutine->amcostestimate = btcostestimate;
	amroutine->amoptions = btoptions;
	amroutine->amproperty = btproperty;
	amroutine->ambuildphasename = btbuildphasename;
	amroutine->amvalidate = btvalidate;
	amroutine->amadjustmembers = btadjustmembers;
	amroutine->ambeginscan = btbeginscan;
	amroutine->amrescan = btrescan;
	amroutine->amgettuple = btgettuple;
	amroutine->amgetbitmap = btgetbitmap;
	amroutine->amendscan = btendscan;
	amroutine->ammarkpos = btmarkpos;
	amroutine->amrestrpos = btrestrpos;
	amroutine->amestimateparallelscan = btestimateparallelscan;
	amroutine->aminitparallelscan = btinitparallelscan;
	amroutine->amparallelrescan = btparallelrescan;

	PG_RETURN_POINTER(amroutine);
}
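
/*
 * Illustrative sketch, not part of the original file: nothing calls
 * bthandler() directly.  The pg_am.amhandler catalog entry points at it,
 * the relcache invokes it via GetIndexAmRoutine() (see access/amapi.h), and
 * the result is cached in Relation.rd_indam.  A hypothetical consumer:
 */
#ifdef NOT_USED
static bool
example_btree_supports_bitmap_scans(Relation indexRel)
{
	/* rd_indam is the cached IndexAmRoutine built from bthandler's result */
	IndexAmRoutine *routine = indexRel->rd_indam;

	return routine->amgetbitmap != NULL;
}
#endif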

/*
 * btbuildempty() -- build an empty btree index in the initialization fork
 */
void
btbuildempty(Relation index)
{
	Page		metapage;

	/* Construct metapage. */
	metapage = (Page) palloc(BLCKSZ);
	_bt_initmetapage(metapage, P_NONE, 0, _bt_allequalimage(index, false));

	/*
	 * Write the page and log it.  It might seem that an immediate sync would
	 * be sufficient to guarantee that the file exists on disk, but recovery
	 * itself might remove it while replaying, for example, an
	 * XLOG_DBASE_CREATE* or XLOG_TBLSPC_CREATE record.  Therefore, we need
	 * this even when wal_level=minimal.
	 */
	PageSetChecksumInplace(metapage, BTREE_METAPAGE);
	smgrwrite(RelationGetSmgr(index), INIT_FORKNUM, BTREE_METAPAGE,
			  (char *) metapage, true);
	log_newpage(&RelationGetSmgr(index)->smgr_rnode.node, INIT_FORKNUM,
				BTREE_METAPAGE, metapage, true);

	/*
	 * An immediate sync is required even if we xlog'd the page, because the
	 * write did not go through shared_buffers and therefore a concurrent
	 * checkpoint may have moved the redo pointer past our xlog record.
	 */
	smgrimmedsync(RelationGetSmgr(index), INIT_FORKNUM);
}
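
/*
 * Illustrative sketch, not part of the original file: ambuildempty is only
 * reached for unlogged indexes, where the init fork written above replaces
 * the main fork after a crash.  A simplified assumption of the call path in
 * index_build() (log_smgrcreate() would need catalog/storage_xlog.h):
 */
#ifdef NOT_USED
static void
example_build_unlogged_init_fork(Relation indexRel)
{
	/* assumption: caller has checked that indexRel is unlogged */
	smgrcreate(RelationGetSmgr(indexRel), INIT_FORKNUM, false);
	log_smgrcreate(&RelationGetSmgr(indexRel)->smgr_rnode.node, INIT_FORKNUM);
	indexRel->rd_indam->ambuildempty(indexRel);
}
#endif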

/*
 *	btinsert() -- insert an index tuple into a btree.
 *
 *		Descend the tree recursively, find the appropriate location for our
 *		new tuple, and put it there.
 */
bool
btinsert(Relation rel, Datum *values, bool *isnull,
		 ItemPointer ht_ctid, Relation heapRel,
		 IndexUniqueCheck checkUnique,
		 bool indexUnchanged,
		 IndexInfo *indexInfo)
{
	bool		result;
	IndexTuple	itup;

	/* generate an index tuple */
	itup = index_form_tuple(RelationGetDescr(rel), values, isnull);
	itup->t_tid = *ht_ctid;

	result = _bt_doinsert(rel, itup, checkUnique, indexUnchanged, heapRel);

	pfree(itup);

	return result;
}
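
/*
 * Illustrative sketch, not part of the original file: callers reach
 * btinsert() through the generic index_insert() entry point in indexam.c,
 * e.g. from ExecInsertIndexTuples().  A hypothetical direct caller:
 */
#ifdef NOT_USED
static void
example_insert_via_am(Relation heapRel, Relation indexRel,
					  Datum *values, bool *isnull,
					  ItemPointer heapTid, IndexInfo *indexInfo)
{
	/* UNIQUE_CHECK_YES would instead raise an error on duplicate keys */
	(void) index_insert(indexRel, values, isnull, heapTid, heapRel,
						UNIQUE_CHECK_NO,
						false,	/* indexUnchanged hint, see btinsert() */
						indexInfo);
}
#endif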
|
|
|
|
|
|
|
|
/*
|
|
|
|
* btgettuple() -- Get the next tuple in the scan.
|
|
|
|
*/
|
Restructure index access method API to hide most of it at the C level.
This patch reduces pg_am to just two columns, a name and a handler
function. All the data formerly obtained from pg_am is now provided
in a C struct returned by the handler function. This is similar to
the designs we've adopted for FDWs and tablesample methods. There
are multiple advantages. For one, the index AM's support functions
are now simple C functions, making them faster to call and much less
error-prone, since the C compiler can now check function signatures.
For another, this will make it far more practical to define index access
methods in installable extensions.
A disadvantage is that SQL-level code can no longer see attributes
of index AMs; in particular, some of the crosschecks in the opr_sanity
regression test are no longer possible from SQL. We've addressed that
by adding a facility for the index AM to perform such checks instead.
(Much more could be done in that line, but for now we're content if the
amvalidate functions more or less replace what opr_sanity used to do.)
We might also want to expose some sort of reporting functionality, but
this patch doesn't do that.
Alexander Korotkov, reviewed by Petr Jelínek, and rather heavily
editorialized on by me.
2016-01-18 01:36:59 +01:00
|
|
|
bool
|
|
|
|
btgettuple(IndexScanDesc scan, ScanDirection dir)
|
1996-07-09 08:22:35 +02:00
|
|
|
{
|
2002-05-24 20:57:57 +02:00
|
|
|
BTScanOpaque so = (BTScanOpaque) scan->opaque;
|
|
|
|
bool res;
|
1997-09-07 07:04:48 +02:00
|
|
|
|
2008-04-13 21:18:14 +02:00
|
|
|
/* btree indexes are never lossy */
|
|
|
|
scan->xs_recheck = false;
|
|
|
|
|
1996-07-09 08:22:35 +02:00
|
|
|
/*
|
2011-10-16 21:39:24 +02:00
|
|
|
* If we have any array keys, initialize them during first call for a
|
|
|
|
* scan. We can't do this in btrescan because we don't know the scan
|
|
|
|
* direction at that time.
|
1996-07-09 08:22:35 +02:00
|
|
|
*/
|
2011-10-16 21:39:24 +02:00
|
|
|
if (so->numArrayKeys && !BTScanPosIsValid(so->currPos))
|
|
|
|
{
|
|
|
|
/* punt if we have any unsatisfiable array keys */
|
|
|
|
if (so->numArrayKeys < 0)
|
Restructure index access method API to hide most of it at the C level.
This patch reduces pg_am to just two columns, a name and a handler
function. All the data formerly obtained from pg_am is now provided
in a C struct returned by the handler function. This is similar to
the designs we've adopted for FDWs and tablesample methods. There
are multiple advantages. For one, the index AM's support functions
are now simple C functions, making them faster to call and much less
error-prone, since the C compiler can now check function signatures.
For another, this will make it far more practical to define index access
methods in installable extensions.
A disadvantage is that SQL-level code can no longer see attributes
of index AMs; in particular, some of the crosschecks in the opr_sanity
regression test are no longer possible from SQL. We've addressed that
by adding a facility for the index AM to perform such checks instead.
(Much more could be done in that line, but for now we're content if the
amvalidate functions more or less replace what opr_sanity used to do.)
We might also want to expose some sort of reporting functionality, but
this patch doesn't do that.
Alexander Korotkov, reviewed by Petr Jelínek, and rather heavily
editorialized on by me.
2016-01-18 01:36:59 +01:00
|
|
|
return false;
|
2011-10-16 21:39:24 +02:00
|
|
|
|
|
|
|
_bt_start_array_keys(scan, dir);
|
|
|
|
}
|
|
|
|
|
|
|
|
/* This loop handles advancing to the next array elements, if any */
|
|
|
|
do
|
1998-07-30 07:05:05 +02:00
|
|
|
{
|
2002-05-24 20:57:57 +02:00
|
|
|
/*
|
2011-10-16 21:39:24 +02:00
|
|
|
* If we've already initialized this scan, we can just advance it in
|
|
|
|
* the appropriate direction. If we haven't done so yet, we call
|
|
|
|
* _bt_first() to get the first item in the scan.
|
2002-05-24 20:57:57 +02:00
|
|
|
*/
|
2011-10-16 21:39:24 +02:00
|
|
|
if (!BTScanPosIsValid(so->currPos))
|
|
|
|
res = _bt_first(scan, dir);
|
|
|
|
else
|
2002-05-24 20:57:57 +02:00
|
|
|
{
|
|
|
|
/*
|
2011-10-16 21:39:24 +02:00
|
|
|
* Check to see if we should kill the previously-fetched tuple.
|
2002-05-24 20:57:57 +02:00
|
|
|
*/
|
2011-10-16 21:39:24 +02:00
|
|
|
if (scan->kill_prior_tuple)
|
|
|
|
{
|
|
|
|
/*
|
|
|
|
* Yes, remember it for later. (We'll deal with all such
|
|
|
|
* tuples at once right before leaving the index page.) The
|
|
|
|
* test for numKilled overrun is not just paranoia: if the
|
|
|
|
* caller reverses direction in the indexscan then the same
|
|
|
|
* item might get entered multiple times. It's not worth
|
|
|
|
* trying to optimize that, so we don't detect it, but instead
|
|
|
|
* just forget any excess entries.
|
|
|
|
*/
|
|
|
|
if (so->killedItems == NULL)
|
|
|
|
so->killedItems = (int *)
						palloc(MaxTIDsPerBTreePage * sizeof(int));
				if (so->numKilled < MaxTIDsPerBTreePage)
					so->killedItems[so->numKilled++] = so->currPos.itemIndex;
			}

			/*
			 * Now continue the scan.
			 */
			res = _bt_next(scan, dir);
		}

		/* If we have a tuple, return it ... */
		if (res)
			break;
		/* ... otherwise see if we have more array keys to deal with */
	} while (so->numArrayKeys && _bt_advance_array_keys(scan, dir));
	return res;
}
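
#ifdef NOT_USED
/*
 * Illustrative sketch only -- hypothetical code, excluded from the build.
 * It shows how a caller's kill_prior_tuple flag feeds so->killedItems[]
 * across successive btgettuple() calls.  Real callers go through the
 * indexam.c wrappers rather than calling btgettuple() directly, and the
 * all_dead flag below stands in for whatever visibility check the caller
 * performs on the previously-returned tuple.
 */
static void
example_kill_prior_tuple(IndexScanDesc scan)
{
	while (btgettuple(scan, ForwardScanDirection))
	{
		bool		all_dead = false;

		/* ... caller examines the heap tuple at scan->xs_heaptid ... */

		/*
		 * Setting this flag makes the next btgettuple() call append the
		 * just-returned item's itemIndex to so->killedItems[]; the recorded
		 * entries are then marked LP_DEAD in one _bt_killitems() pass before
		 * the scan leaves the page.
		 */
		scan->kill_prior_tuple = all_dead;
	}
}
#endif							/* NOT_USED */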

/*
 *	btgetbitmap() -- gets all matching tuples, and adds them to a bitmap
 */
int64
btgetbitmap(IndexScanDesc scan, TIDBitmap *tbm)
{
	BTScanOpaque so = (BTScanOpaque) scan->opaque;
	int64		ntids = 0;
	ItemPointer heapTid;

	/*
	 * If we have any array keys, initialize them.
	 */
	if (so->numArrayKeys)
	{
		/* punt if we have any unsatisfiable array keys */
		if (so->numArrayKeys < 0)
			return ntids;

		_bt_start_array_keys(scan, ForwardScanDirection);
	}

	/* This loop handles advancing to the next array elements, if any */
	do
	{
		/* Fetch the first page & tuple */
		if (_bt_first(scan, ForwardScanDirection))
		{
			/* Save tuple ID, and continue scanning */
			heapTid = &scan->xs_heaptid;
			tbm_add_tuples(tbm, heapTid, 1, false);
			ntids++;

			for (;;)
			{
				/*
				 * Advance to next tuple within page.  This is the same as the
				 * easy case in _bt_next().
				 */
				if (++so->currPos.itemIndex > so->currPos.lastItem)
				{
					/* let _bt_next do the heavy lifting */
					if (!_bt_next(scan, ForwardScanDirection))
						break;
				}

				/* Save tuple ID, and continue scanning */
				heapTid = &so->currPos.items[so->currPos.itemIndex].heapTid;
				tbm_add_tuples(tbm, heapTid, 1, false);
				ntids++;
			}
		}
		/* Now see if we have more array keys to deal with */
	} while (so->numArrayKeys && _bt_advance_array_keys(scan, ForwardScanDirection));
	return ntids;
}
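
/*
 * Worked example (illustrative only): for a qual like
 * "WHERE x = ANY ('{1,7,42}')", _bt_preprocess_array_keys() sets up one
 * array key with three elements, and the do/while loop above behaves as if
 * three separate equality scans were run back to back:
 *
 *		_bt_start_array_keys()		current element becomes 1
 *		_bt_first()/_bt_next()...	TIDs with x = 1 added to the bitmap
 *		_bt_advance_array_keys()	true; current element becomes 7
 *		_bt_first()/_bt_next()...	TIDs with x = 7 added to the bitmap
 *		_bt_advance_array_keys()	true; current element becomes 42
 *		_bt_first()/_bt_next()...	TIDs with x = 42 added to the bitmap
 *		_bt_advance_array_keys()	false; loop exits, ntids is the total
 */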

/*
 *	btbeginscan() -- start a scan on a btree index
 */
IndexScanDesc
btbeginscan(Relation rel, int nkeys, int norderbys)
{
	IndexScanDesc scan;
	BTScanOpaque so;

	/* no order by operators allowed */
	Assert(norderbys == 0);

	/* get the scan */
	scan = RelationGetIndexScan(rel, nkeys, norderbys);

	/* allocate private workspace */
	so = (BTScanOpaque) palloc(sizeof(BTScanOpaqueData));
	BTScanPosInvalidate(so->currPos);
	BTScanPosInvalidate(so->markPos);
	if (scan->numberOfKeys > 0)
		so->keyData = (ScanKey) palloc(scan->numberOfKeys * sizeof(ScanKeyData));
	else
		so->keyData = NULL;

	so->arrayKeyData = NULL;	/* assume no array keys for now */
	so->numArrayKeys = 0;
	so->arrayKeys = NULL;
	so->arrayContext = NULL;

	so->killedItems = NULL;		/* until needed */
	so->numKilled = 0;

	/*
	 * We don't know yet whether the scan will be index-only, so we do not
	 * allocate the tuple workspace arrays until btrescan.  However, we set up
	 * scan->xs_itupdesc whether we'll need it or not, since that's so cheap.
	 */
	so->currTuples = so->markTuples = NULL;

	scan->xs_itupdesc = RelationGetDescr(rel);

	scan->opaque = so;
	return scan;
}
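
/*
 * Usage sketch (illustrative, simplified): btbeginscan() is never called
 * directly.  bthandler() publishes it as the IndexAmRoutine's ambeginscan
 * callback, and core code reaches it through the indexam.c wrappers,
 * roughly:
 *
 *		scan = index_beginscan(heapRel, indexRel, snapshot, nkeys, 0);
 *		index_rescan(scan, scankeys, nkeys, NULL, 0);
 *		while (index_getnext_slot(scan, ForwardScanDirection, slot))
 *			... process one matching tuple ...
 *		index_endscan(scan);
 *
 * Note the guaranteed amrescan call before the first amgettuple call; that
 * is why btbeginscan() can leave the scan-key setup to btrescan() below.
 */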

/*
 *	btrescan() -- rescan an index relation
 */
void
btrescan(IndexScanDesc scan, ScanKey scankey, int nscankeys,
		 ScanKey orderbys, int norderbys)
{
	BTScanOpaque so = (BTScanOpaque) scan->opaque;

	/* we aren't holding any read locks, but gotta drop the pins */
	if (BTScanPosIsValid(so->currPos))
	{
		/* Before leaving current page, deal with any killed items */
		if (so->numKilled > 0)
			_bt_killitems(scan);
		BTScanPosUnpinIfPinned(so->currPos);
		BTScanPosInvalidate(so->currPos);
	}

	so->markItemIndex = -1;
	so->arrayKeyCount = 0;
	BTScanPosUnpinIfPinned(so->markPos);
	BTScanPosInvalidate(so->markPos);

	/*
	 * Allocate tuple workspace arrays, if needed for an index-only scan and
	 * not already done in a previous rescan call.  To save on palloc
	 * overhead, both workspaces are allocated as one palloc block; only this
	 * function and btendscan know that.
	 *
	 * NOTE: this data structure also makes it safe to return data from a
	 * "name" column, even though btree name_ops uses an underlying storage
	 * datatype of cstring.  The risk there is that "name" is supposed to be
	 * padded to NAMEDATALEN, but the actual index tuple is probably shorter.
	 * However, since we only return data out of tuples sitting in the
	 * currTuples array, a fetch of NAMEDATALEN bytes can at worst pull some
	 * data out of the markTuples array --- running off the end of memory for
	 * a SIGSEGV is not possible.  Yeah, this is ugly as sin, but it beats
	 * adding special-case treatment for name_ops elsewhere.
	 */
	if (scan->xs_want_itup && so->currTuples == NULL)
	{
		so->currTuples = (char *) palloc(BLCKSZ * 2);
		so->markTuples = so->currTuples + BLCKSZ;
	}

	/*
	 * Reset the scan keys
	 */
	if (scankey && scan->numberOfKeys > 0)
		memmove(scan->keyData,
				scankey,
				scan->numberOfKeys * sizeof(ScanKeyData));
	so->numberOfKeys = 0;		/* until _bt_preprocess_keys sets it */

	/* If any keys are SK_SEARCHARRAY type, set up array-key info */
	_bt_preprocess_array_keys(scan);
}
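
/*
 * Illustrative layout of the combined allocation above: a single palloc
 * block of 2 * BLCKSZ bytes serves both tuple workspaces,
 *
 *		so->currTuples                so->markTuples
 *		|<---------- BLCKSZ -------->|<---------- BLCKSZ -------->|
 *
 * This is why btendscan() pfree's only so->currTuples (freeing the block
 * releases markTuples too), and it is what makes the NAMEDATALEN over-read
 * described in the NOTE above land harmlessly in the markTuples half at
 * worst.
 */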

/*
 *	btendscan() -- close down a scan
 */
void
btendscan(IndexScanDesc scan)
{
	BTScanOpaque so = (BTScanOpaque) scan->opaque;

	/* we aren't holding any read locks, but gotta drop the pins */
	if (BTScanPosIsValid(so->currPos))
	{
		/* Before leaving current page, deal with any killed items */
		if (so->numKilled > 0)
			_bt_killitems(scan);
		BTScanPosUnpinIfPinned(so->currPos);
	}

	so->markItemIndex = -1;
	BTScanPosUnpinIfPinned(so->markPos);

	/* No need to invalidate positions, the RAM is about to be freed. */

	/* Release storage */
	if (so->keyData != NULL)
		pfree(so->keyData);
	/* so->arrayKeyData and so->arrayKeys are in arrayContext */
	if (so->arrayContext != NULL)
		MemoryContextDelete(so->arrayContext);
	if (so->killedItems != NULL)
		pfree(so->killedItems);
	if (so->currTuples != NULL)
		pfree(so->currTuples);
	/* so->markTuples should not be pfree'd, see btrescan */
	pfree(so);
}

/*
 *	btmarkpos() -- save current scan position
 */
void
btmarkpos(IndexScanDesc scan)
{
	BTScanOpaque so = (BTScanOpaque) scan->opaque;

	/* There may be an old mark with a pin (but no lock). */
	BTScanPosUnpinIfPinned(so->markPos);

	/*
	 * Just record the current itemIndex.  If we later step to next page
	 * before releasing the marked position, _bt_steppage makes a full copy of
	 * the currPos struct in markPos.  If (as often happens) the mark is moved
	 * before we leave the page, we don't have to do that work.
	 */
	if (BTScanPosIsValid(so->currPos))
		so->markItemIndex = so->currPos.itemIndex;
	else
	{
		BTScanPosInvalidate(so->markPos);
		so->markItemIndex = -1;
	}

	/* Also record the current positions of any array keys */
	if (so->numArrayKeys)
		_bt_mark_array_keys(scan);
}
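
/*
 * Usage sketch (illustrative): mark/restore is driven through the
 * indexam.c wrappers, most notably by merge join when it must re-read a
 * run of duplicate inner tuples:
 *
 *		... scan positioned on the first tuple of the duplicate run ...
 *		index_markpos(scan);		calls btmarkpos()
 *		... advance through the run fetching tuples ...
 *		index_restrpos(scan);		calls btrestrpos()
 *		... re-read the same run for the next matching outer tuple ...
 *
 * The cheap path above (recording just markItemIndex) wins whenever the
 * scan stays on one leaf page between mark and restore, the common case.
 */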

/*
 *	btrestrpos() -- restore scan to last saved position
 */
void
btrestrpos(IndexScanDesc scan)
{
	BTScanOpaque so = (BTScanOpaque) scan->opaque;

	/* Restore the marked positions of any array keys */
	if (so->numArrayKeys)
		_bt_restore_array_keys(scan);

	if (so->markItemIndex >= 0)
	{
		/*
		 * The scan has never moved to a new page since the last mark.  Just
		 * restore the itemIndex.
		 *
		 * NB: In this case we can't count on anything in so->markPos to be
		 * accurate.
		 */
		so->currPos.itemIndex = so->markItemIndex;
	}
	else
	{
		/*
		 * The scan moved to a new page after last mark or restore, and we are
		 * now restoring to the marked page.  We aren't holding any read
		 * locks, but if we're still holding the pin for the current position,
		 * we must drop it.
		 */
		if (BTScanPosIsValid(so->currPos))
		{
			/* Before leaving current page, deal with any killed items */
			if (so->numKilled > 0)
				_bt_killitems(scan);
			BTScanPosUnpinIfPinned(so->currPos);
		}

		if (BTScanPosIsValid(so->markPos))
		{
			/* bump pin on mark buffer for assignment to current buffer */
			if (BTScanPosIsPinned(so->markPos))
				IncrBufferRefCount(so->markPos.buf);
			memcpy(&so->currPos, &so->markPos,
				   offsetof(BTScanPosData, items[1]) +
				   so->markPos.lastItem * sizeof(BTScanPosItem));
			if (so->currTuples)
				memcpy(so->currTuples, so->markTuples,
					   so->markPos.nextTupleOffset);
		}
		else
			BTScanPosInvalidate(so->currPos);
	}
}
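
/*
 * Worked example (illustrative): the memcpy in btrestrpos() copies only
 * the live prefix of markPos instead of the whole BTScanPosData.  With
 * markPos.lastItem == 2 (valid entries items[0..2]), the size expression
 *
 *		offsetof(BTScanPosData, items[1]) + 2 * sizeof(BTScanPosItem)
 *
 * covers the fixed-size header plus items[0] (both inside the offsetof
 * term) and then items[1] and items[2] --- everything up to and including
 * items[lastItem], while skipping the unused tail of the items[] array.
 */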

/*
 * btestimateparallelscan -- estimate storage for BTParallelScanDescData
 */
Size
btestimateparallelscan(void)
{
	return sizeof(BTParallelScanDescData);
}

/*
 * btinitparallelscan -- initialize BTParallelScanDesc for parallel btree scan
 */
void
btinitparallelscan(void *target)
{
	BTParallelScanDesc bt_target = (BTParallelScanDesc) target;

	SpinLockInit(&bt_target->btps_mutex);
	bt_target->btps_scanPage = InvalidBlockNumber;
	bt_target->btps_pageStatus = BTPARALLEL_NOT_INITIALIZED;
	bt_target->btps_arrayKeyCount = 0;
	ConditionVariableInit(&bt_target->btps_cv);
}
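
/*
 * Illustrative layout (assumed from the ps_offset arithmetic used below):
 * the btree-specific state initialized above lives in the same shared
 * memory chunk as the generic parallel scan descriptor,
 *
 *		ParallelIndexScanDescData           BTParallelScanDescData
 *		|<--------- ps_offset ------->|<- btestimateparallelscan() bytes ->|
 *
 * btestimateparallelscan() tells the core code how much room to reserve
 * for the btree part, and the OffsetToPointer() calls in the functions
 * below navigate from the generic descriptor to it.
 */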

/*
 * btparallelrescan() -- reset parallel scan
 */
void
btparallelrescan(IndexScanDesc scan)
{
	BTParallelScanDesc btscan;
	ParallelIndexScanDesc parallel_scan = scan->parallel_scan;

	Assert(parallel_scan);

	btscan = (BTParallelScanDesc) OffsetToPointer((void *) parallel_scan,
												  parallel_scan->ps_offset);

	/*
	 * In theory, we don't need to acquire the spinlock here, because there
	 * shouldn't be any other workers running at this point, but we do so for
	 * consistency.
	 */
	SpinLockAcquire(&btscan->btps_mutex);
	btscan->btps_scanPage = InvalidBlockNumber;
	btscan->btps_pageStatus = BTPARALLEL_NOT_INITIALIZED;
	btscan->btps_arrayKeyCount = 0;
	SpinLockRelease(&btscan->btps_mutex);
}

/*
 * _bt_parallel_seize() -- Begin the process of advancing the scan to a new
 *		page.  Other scans must wait until we call _bt_parallel_release()
 *		or _bt_parallel_done().
 *
 * The return value is true if we successfully seized the scan and false
 * if we did not.  The latter case occurs if no pages remain for the current
 * set of scankeys.
 *
 * If the return value is true, *pageno returns the next or current page
 * of the scan (depending on the scan direction).  An invalid block number
 * means the scan hasn't yet started, and P_NONE means we've reached the end.
 * The first time a participating process reaches the last page, it will return
 * true and set *pageno to P_NONE; after that, further attempts to seize the
 * scan will return false.
 *
 * Callers should ignore the value of pageno if the return value is false.
 */
bool
_bt_parallel_seize(IndexScanDesc scan, BlockNumber *pageno)
{
	BTScanOpaque so = (BTScanOpaque) scan->opaque;
	BTPS_State	pageStatus;
	bool		exit_loop = false;
	bool		status = true;
	ParallelIndexScanDesc parallel_scan = scan->parallel_scan;
	BTParallelScanDesc btscan;

	*pageno = P_NONE;

	btscan = (BTParallelScanDesc) OffsetToPointer((void *) parallel_scan,
												  parallel_scan->ps_offset);

	while (1)
	{
		SpinLockAcquire(&btscan->btps_mutex);
		pageStatus = btscan->btps_pageStatus;

		if (so->arrayKeyCount < btscan->btps_arrayKeyCount)
		{
			/* Parallel scan has already advanced to a new set of scankeys. */
			status = false;
		}
		else if (pageStatus == BTPARALLEL_DONE)
		{
			/*
			 * We're done with this set of scankeys.  This may be the end, or
			 * there could be more sets to try.
			 */
			status = false;
		}
		else if (pageStatus != BTPARALLEL_ADVANCING)
		{
			/*
			 * We have successfully seized control of the scan for the purpose
			 * of advancing it to a new page!
			 */
			btscan->btps_pageStatus = BTPARALLEL_ADVANCING;
			*pageno = btscan->btps_scanPage;
			exit_loop = true;
		}
		SpinLockRelease(&btscan->btps_mutex);
		if (exit_loop || !status)
			break;
		ConditionVariableSleep(&btscan->btps_cv, WAIT_EVENT_BTREE_PAGE);
	}
	ConditionVariableCancelSleep();

	return status;
}

/*
 * _bt_parallel_release() -- Complete the process of advancing the scan to a
 *		new page.  We now have the new value btps_scanPage; some other backend
 *		can now begin advancing the scan.
 */
void
_bt_parallel_release(IndexScanDesc scan, BlockNumber scan_page)
{
	ParallelIndexScanDesc parallel_scan = scan->parallel_scan;
	BTParallelScanDesc btscan;

	btscan = (BTParallelScanDesc) OffsetToPointer((void *) parallel_scan,
												  parallel_scan->ps_offset);

	SpinLockAcquire(&btscan->btps_mutex);
	btscan->btps_scanPage = scan_page;
	btscan->btps_pageStatus = BTPARALLEL_IDLE;
	SpinLockRelease(&btscan->btps_mutex);
	ConditionVariableSignal(&btscan->btps_cv);
}
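
/*
 * Protocol sketch (illustrative, hypothetical worker loop; the real ones
 * are in _bt_first() and _bt_readnextpage()): each participant advances
 * the shared scan with a seize/release pair, so btps_pageStatus cycles
 * NOT_INITIALIZED/IDLE -> ADVANCING -> IDLE until a worker reaches the
 * end and calls _bt_parallel_done():
 *
 *		BlockNumber blkno;
 *
 *		while (_bt_parallel_seize(scan, &blkno))
 *		{
 *			if (blkno == P_NONE)
 *				break;			the scan already hit the rightmost page
 *			... read page blkno, note its right sibling next_blkno ...
 *			_bt_parallel_release(scan, next_blkno);
 *			... return tuples from blkno at leisure ...
 *		}
 *		_bt_parallel_done(scan);
 *
 * Releasing before processing the page's tuples is what lets other workers
 * advance the scan while this one is still returning results.
 */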

/*
 * _bt_parallel_done() -- Mark the parallel scan as complete.
 *
 * When there are no pages left to scan, this function should be called to
 * notify other workers.  Otherwise, they might wait forever for the scan to
 * advance to the next page.
 */
void
_bt_parallel_done(IndexScanDesc scan)
{
	BTScanOpaque so = (BTScanOpaque) scan->opaque;
	ParallelIndexScanDesc parallel_scan = scan->parallel_scan;
	BTParallelScanDesc btscan;
	bool		status_changed = false;

	/* Do nothing, for non-parallel scans */
	if (parallel_scan == NULL)
		return;

	btscan = (BTParallelScanDesc) OffsetToPointer((void *) parallel_scan,
												  parallel_scan->ps_offset);

	/*
	 * Mark the parallel scan as done for this combination of scan keys,
	 * unless some other process already did so.  See also
	 * _bt_advance_array_keys.
	 */
	SpinLockAcquire(&btscan->btps_mutex);
	if (so->arrayKeyCount >= btscan->btps_arrayKeyCount &&
		btscan->btps_pageStatus != BTPARALLEL_DONE)
	{
		btscan->btps_pageStatus = BTPARALLEL_DONE;
		status_changed = true;
	}
	SpinLockRelease(&btscan->btps_mutex);

	/* wake up all the workers associated with this parallel scan */
	if (status_changed)
		ConditionVariableBroadcast(&btscan->btps_cv);
}

/*
 * _bt_parallel_advance_array_keys() -- Advances the parallel scan for array
 *		keys.
 *
 * Updates the count of array keys processed for both local and parallel
 * scans.
 */
void
_bt_parallel_advance_array_keys(IndexScanDesc scan)
{
	BTScanOpaque so = (BTScanOpaque) scan->opaque;
	ParallelIndexScanDesc parallel_scan = scan->parallel_scan;
	BTParallelScanDesc btscan;

	btscan = (BTParallelScanDesc) OffsetToPointer((void *) parallel_scan,
												  parallel_scan->ps_offset);

	so->arrayKeyCount++;
	SpinLockAcquire(&btscan->btps_mutex);
	if (btscan->btps_pageStatus == BTPARALLEL_DONE)
	{
		btscan->btps_scanPage = InvalidBlockNumber;
		btscan->btps_pageStatus = BTPARALLEL_NOT_INITIALIZED;
		btscan->btps_arrayKeyCount++;
	}
	SpinLockRelease(&btscan->btps_mutex);
}
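
/*
 * Illustrative timeline (two workers W1 and W2, one array key with two
 * elements; all counts start at zero):
 *
 *		W1 exhausts element 0, sees btps_pageStatus == BTPARALLEL_DONE,
 *		   and calls _bt_parallel_advance_array_keys(): its local
 *		   arrayKeyCount and the shared btps_arrayKeyCount both become 1,
 *		   and the page state is reset for element 1.
 *		W2, whose local count is still 0, next calls _bt_parallel_seize(),
 *		   sees local < shared, and gets back "false"; it must advance its
 *		   own array keys to catch up before rejoining the scan.
 *
 * The local-versus-shared comparison keeps a straggler from re-running an
 * element set the rest of the scan has already finished.
 */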

/*
 * Bulk deletion of all index entries pointing to a set of heap tuples.
 * The set of target tuples is specified via a callback routine that tells
 * whether any given heap tuple (identified by ItemPointer) is being deleted.
 *
 * Result: a palloc'd struct containing statistical info for VACUUM displays.
 */
IndexBulkDeleteResult *
btbulkdelete(IndexVacuumInfo *info, IndexBulkDeleteResult *stats,
			 IndexBulkDeleteCallback callback, void *callback_state)
{
	Relation	rel = info->index;
	BTCycleId	cycleid;

	/* allocate stats if first time through, else re-use existing struct */
	if (stats == NULL)
		stats = (IndexBulkDeleteResult *) palloc0(sizeof(IndexBulkDeleteResult));

	/* Establish the vacuum cycle ID to use for this scan */
	/* The ENSURE stuff ensures we clean up shared memory on failure */
	PG_ENSURE_ERROR_CLEANUP(_bt_end_vacuum_callback, PointerGetDatum(rel));
	{
		cycleid = _bt_start_vacuum(rel);

		btvacuumscan(info, stats, callback, callback_state, cycleid);
	}
	PG_END_ENSURE_ERROR_CLEANUP(_bt_end_vacuum_callback, PointerGetDatum(rel));
	_bt_end_vacuum(rel);

	return stats;
}
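
#ifdef NOT_USED
/*
 * Illustrative sketch only -- hypothetical code, excluded from the build.
 * It shows the shape of the callback btbulkdelete() hands down to
 * btvacuumscan().  The real callback supplied by VACUUM lives in
 * vacuumlazy.c and answers by searching the array of dead TIDs it collected
 * from the heap; this stand-in simply deletes every entry.
 */
static bool
example_delete_all_callback(ItemPointer itemptr, void *state)
{
	/* return true if the index entry pointing at itemptr should be removed */
	return true;
}
#endif							/* NOT_USED */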

/*
 * Post-VACUUM cleanup.
 *
 * Result: a palloc'd struct containing statistical info for VACUUM displays.
 */
IndexBulkDeleteResult *
btvacuumcleanup(IndexVacuumInfo *info, IndexBulkDeleteResult *stats)
{
	BlockNumber num_delpages;

	/* No-op in ANALYZE ONLY mode */
	if (info->analyze_only)
		return stats;

	/*
	 * If btbulkdelete was called, we need not do anything (we just maintain
	 * the information used within _bt_vacuum_needs_cleanup() by calling
	 * _bt_set_cleanup_info() below).
	 *
	 * If btbulkdelete was _not_ called, then we have a choice to make: we
	 * must decide whether or not a btvacuumscan() call is needed now (i.e.
	 * whether the ongoing VACUUM operation can entirely avoid a physical scan
	 * of the index).  A call to _bt_vacuum_needs_cleanup() decides it for us
	 * now.
	 */
	if (stats == NULL)
	{
		/* Check if VACUUM operation can entirely avoid btvacuumscan() call */
		if (!_bt_vacuum_needs_cleanup(info->index))
|
|
|
return NULL;
|
|
|
|
|
2021-02-25 03:41:34 +01:00
|
|
|
/*
|
|
|
|
* Since we aren't going to actually delete any leaf items, there's no
|
2021-03-11 01:27:01 +01:00
|
|
|
* need to go through all the vacuum-cycle-ID pushups here.
|
|
|
|
*
|
|
|
|
* Posting list tuples are a source of inaccuracy for cleanup-only
|
|
|
|
* scans. btvacuumscan() will assume that the number of index tuples
|
|
|
|
* from each page can be used as num_index_tuples, even though
|
|
|
|
* num_index_tuples is supposed to represent the number of TIDs in the
|
|
|
|
* index. This naive approach can underestimate the number of tuples
|
|
|
|
* in the index significantly.
|
|
|
|
*
|
|
|
|
* We handle the problem by making num_index_tuples an estimate in
|
|
|
|
* cleanup-only case.
|
2021-02-25 03:41:34 +01:00
|
|
|
*/
|
2006-05-03 00:25:10 +02:00
|
|
|
stats = (IndexBulkDeleteResult *) palloc0(sizeof(IndexBulkDeleteResult));
|
Fix bug in nbtree VACUUM "skip full scan" feature.
Commit 857f9c36cda (which taught nbtree VACUUM to skip a scan of the
index from btcleanup in situations where it doesn't seem worth it) made
VACUUM maintain the oldest btpo.xact among all deleted pages for the
index as a whole. It failed to handle all the details surrounding pages
that are deleted by the current VACUUM operation correctly (though pages
deleted by some previous VACUUM operation were processed correctly).
The most immediate problem was that the special area of the page was
examined without a buffer pin at one point. More fundamentally, the
handling failed to account for the full range of _bt_pagedel()
behaviors. For example, _bt_pagedel() sometimes deletes internal pages
in passing, as part of deleting an entire subtree with btvacuumpage()
caller's page as the leaf level page. The original leaf page passed to
_bt_pagedel() might not be the page that it deletes first in cases where
deletion can take place.
It's unclear how disruptive this bug may have been, or what symptoms
users might want to look out for. The issue was spotted during
unrelated code review.
To fix, push down the logic for maintaining the oldest btpo.xact to
_bt_pagedel(). btvacuumpage() is now responsible for pages that were
fully deleted by a previous VACUUM operation, while _bt_pagedel() is now
responsible for pages that were deleted by the current VACUUM operation
(this includes half-dead pages from a previous interrupted VACUUM
operation that become fully deleted in _bt_pagedel()). Note that
_bt_pagedel() should never encounter an existing deleted page.
This commit theoretically breaks the ABI of a stable release by changing
the signature of _bt_pagedel(). However, if any third party extension
is actually affected by this, then it must already be completely broken
(since there are numerous assumptions made in _bt_pagedel() that cannot
be met outside of VACUUM). It seems highly unlikely that such an
extension actually exists, in any case.
Author: Peter Geoghegan
Reviewed-By: Masahiko Sawada
Discussion: https://postgr.es/m/CAH2-WzkrXBcMQWAYUJMFTTvzx_r4q=pYSjDe07JnUXhe+OZnJA@mail.gmail.com
Backpatch: 11-, where the "skip full scan" feature was introduced.
2020-05-01 17:39:52 +02:00
|
|
|
btvacuumscan(info, stats, NULL, NULL, 0);
|
2021-03-11 01:27:01 +01:00
|
|
|
stats->estimated_count = true;
|
2006-05-08 02:00:17 +02:00
|
|
|
}
|
2003-02-22 01:45:05 +01:00
|
|
|
|
2021-02-25 03:41:34 +01:00
|
|
|
/*
|
2021-03-11 23:18:23 +01:00
|
|
|
* Maintain num_delpages value in metapage for _bt_vacuum_needs_cleanup().
|
2021-02-25 03:41:34 +01:00
|
|
|
*
|
|
|
|
* num_delpages is the number of deleted pages now in the index that were
|
2021-03-21 23:25:39 +01:00
|
|
|
* not safe to place in the FSM to be recycled just yet. num_delpages is
|
|
|
|
* greater than 0 only when _bt_pagedel() actually deleted pages during
|
|
|
|
* our call to btvacuumscan(). Even then, _bt_pendingfsm_finalize() must
|
|
|
|
* have failed to place any newly deleted pages in the FSM just moments
|
|
|
|
* ago. (Actually, there are edge cases where recycling of the current
|
|
|
|
* VACUUM's newly deleted pages does not even become safe by the time the
|
|
|
|
* next VACUUM comes around. See nbtree/README.)
|
2021-02-25 03:41:34 +01:00
|
|
|
*/
|
|
|
|
Assert(stats->pages_deleted >= stats->pages_free);
|
|
|
|
num_delpages = stats->pages_deleted - stats->pages_free;
|
2021-03-11 01:27:01 +01:00
|
|
|
_bt_set_cleanup_info(info->index, num_delpages);
|
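A hedged sketch of how the stored num_delpages drives the next VACUUM's skip
decision in _bt_vacuum_needs_cleanup() (the real function also reads the
metapage and handles version upgrades; the 1-in-20 threshold below is my
recollection, so treat it as an assumption):

static bool
vacuum_needs_cleanup_sketch(BlockNumber prev_num_delpages,
							BlockNumber num_index_pages)
{
	/* rescan once >5% of the index is deleted-but-unrecycled pages */
	return prev_num_delpages > 0 &&
		   prev_num_delpages > num_index_pages / 20;
}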
2021-02-25 03:41:34 +01:00
|
|
|
|
2005-05-07 23:32:24 +02:00
|
|
|
/*
|
2010-02-08 05:33:55 +01:00
|
|
|
* It's quite possible for us to be fooled by concurrent page splits into
|
|
|
|
* double-counting some index tuples, so disbelieve any total that exceeds
|
|
|
|
* the underlying heap's count ... if we know that accurately. Otherwise
|
|
|
|
* this might just make matters worse.
|
2005-05-07 23:32:24 +02:00
|
|
|
*/
|
2010-02-08 05:33:55 +01:00
|
|
|
if (!info->estimated_count)
|
2006-05-08 02:00:17 +02:00
|
|
|
{
|
|
|
|
if (stats->num_index_tuples > info->num_heap_tuples)
|
|
|
|
stats->num_index_tuples = info->num_heap_tuples;
|
|
|
|
}
|
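/*
 * Worked example of the clamp above: if concurrent splits inflate the scan's
 * count to 1,030,000 tuples while the heap is accurately known to hold
 * 1,000,000, we report 1,000,000 rather than the double-counted figure.
 */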
2005-05-07 23:32:24 +02:00
|
|
|
|
2016-01-18 01:36:59 +01:00
|
|
|
return stats;
|
2006-05-08 02:00:17 +02:00
|
|
|
}
|
2003-02-22 01:45:05 +01:00
|
|
|
|
2006-05-08 02:00:17 +02:00
|
|
|
/*
|
|
|
|
* btvacuumscan --- scan the index for VACUUMing purposes
|
|
|
|
*
|
|
|
|
* This combines the functions of looking for leaf tuples that are deletable
|
|
|
|
* according to the vacuum callback, looking for empty pages that can be
|
|
|
|
* deleted, and looking for old deleted pages that can be recycled. Both
|
|
|
|
* btbulkdelete and btvacuumcleanup invoke this (the latter only if no
|
2020-05-01 17:39:52 +02:00
|
|
|
* btbulkdelete call occurred and _bt_vacuum_needs_cleanup returned true).
|
2006-05-08 02:00:17 +02:00
|
|
|
*
|
|
|
|
* The caller is responsible for initially allocating/zeroing a stats struct
|
|
|
|
* and for obtaining a vacuum cycle ID if necessary.
|
|
|
|
*/
|
|
|
|
static void
|
|
|
|
btvacuumscan(IndexVacuumInfo *info, IndexBulkDeleteResult *stats,
|
|
|
|
IndexBulkDeleteCallback callback, void *callback_state,
|
2020-05-01 17:39:52 +02:00
|
|
|
BTCycleId cycleid)
|
2006-05-08 02:00:17 +02:00
|
|
|
{
|
|
|
|
Relation rel = info->index;
|
|
|
|
BTVacState vstate;
|
|
|
|
BlockNumber num_pages;
|
2020-05-02 23:04:33 +02:00
|
|
|
BlockNumber scanblkno;
|
2006-05-08 02:00:17 +02:00
|
|
|
bool needLock;
|
2005-05-07 23:32:24 +02:00
|
|
|
|
2006-05-08 02:00:17 +02:00
|
|
|
/*
|
2021-02-25 03:41:34 +01:00
|
|
|
* Reset fields that track information about the entire index now. This
|
|
|
|
* avoids double-counting in the case where a single VACUUM command
|
|
|
|
* requires multiple scans of the index.
|
|
|
|
*
|
2021-02-25 23:32:18 +01:00
|
|
|
* Avoid resetting the tuples_removed and pages_newly_deleted fields here,
|
|
|
|
* since they track information about the VACUUM command, and so must last
|
|
|
|
* across each call to btvacuumscan().
|
2021-02-25 03:41:34 +01:00
|
|
|
*
|
|
|
|
* (Note that pages_free is treated as state about the whole index, not
|
|
|
|
* the current VACUUM. This is appropriate because RecordFreeIndexPage()
|
|
|
|
* calls are idempotent, and get repeated for the same deleted pages in
|
|
|
|
* some scenarios. The point for us is to track the number of recyclable
|
|
|
|
* pages in the index at the end of the VACUUM command.)
|
2006-05-08 02:00:17 +02:00
|
|
|
*/
|
2021-02-25 03:41:34 +01:00
|
|
|
stats->num_pages = 0;
|
2006-05-08 02:00:17 +02:00
|
|
|
stats->num_index_tuples = 0;
|
|
|
|
stats->pages_deleted = 0;
|
2021-02-25 03:41:34 +01:00
|
|
|
stats->pages_free = 0;
|
2006-05-08 02:00:17 +02:00
|
|
|
|
|
|
|
/* Set up info to pass down to btvacuumpage */
|
|
|
|
vstate.info = info;
|
|
|
|
vstate.stats = stats;
|
|
|
|
vstate.callback = callback;
|
|
|
|
vstate.callback_state = callback_state;
|
|
|
|
vstate.cycleid = cycleid;
|
2003-02-22 01:45:05 +01:00
|
|
|
|
2003-02-23 07:17:13 +01:00
|
|
|
/* Create a temporary memory context to run _bt_pagedel in */
|
2006-05-08 02:00:17 +02:00
|
|
|
vstate.pagedelcontext = AllocSetContextCreate(CurrentMemoryContext,
|
|
|
|
"_bt_pagedel",
|
Add macros to make AllocSetContextCreate() calls simpler and safer.
I found that half a dozen (nearly 5%) of our AllocSetContextCreate calls
had typos in the context-sizing parameters. While none of these led to
especially significant problems, they did create minor inefficiencies,
and it's now clear that expecting people to copy-and-paste those calls
accurately is not a great idea. Let's reduce the risk of future errors
by introducing single macros that encapsulate the common use-cases.
Three such macros are enough to cover all but two special-purpose contexts;
those two calls can be left as-is, I think.
While this patch doesn't in itself improve matters for third-party
extensions, it doesn't break anything for them either, and they can
gradually adopt the simplified notation over time.
In passing, change TopMemoryContext to use the default allocation
parameters. Formerly it could only be extended 8K at a time. That was
probably reasonable when this code was written; but nowadays we create
many more contexts than we did then, so that it's not unusual to have a
couple hundred K in TopMemoryContext, even without considering various
dubious code that sticks other things there. There seems no good reason
not to let it use growing blocks like most other contexts.
Back-patch to 9.6, mostly because that's still close enough to HEAD that
it's easy to do so, and keeping the branches in sync can be expected to
avoid some future back-patching pain. The bugs fixed by these changes
don't seem to be significant enough to justify fixing them further back.
Discussion: <21072.1472321324@sss.pgh.pa.us>
2016-08-27 23:50:38 +02:00
|
|
|
ALLOCSET_DEFAULT_SIZES);
|
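To make the macro concrete: as defined in memutils.h by the commit above,
ALLOCSET_DEFAULT_SIZES merely bundles the three sizing parameters that call
sites used to spell out by hand, so the call is equivalent to:

	vstate.pagedelcontext = AllocSetContextCreate(CurrentMemoryContext,
												  "_bt_pagedel",
												  ALLOCSET_DEFAULT_MINSIZE,
												  ALLOCSET_DEFAULT_INITSIZE,
												  ALLOCSET_DEFAULT_MAXSIZE);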
2003-02-23 07:17:13 +01:00
|
|
|
|
2021-03-21 23:25:39 +01:00
|
|
|
/* Initialize vstate fields used by _bt_pendingfsm_finalize */
|
|
|
|
vstate.bufsize = 0;
|
|
|
|
vstate.maxbufsize = 0;
|
|
|
|
vstate.pendingpages = NULL;
|
|
|
|
vstate.npendingpages = 0;
|
|
|
|
/* Consider applying _bt_pendingfsm_finalize optimization */
|
|
|
|
_bt_pendingfsm_init(rel, &vstate, (callback == NULL));
|
|
|
|
|
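The fields just initialized feed the pending-FSM bookkeeping. A hedged sketch
of the idea (type and field names here are hypothetical, not the actual nbtree
declarations): each page deleted by this VACUUM is queued together with the
XID horizon that must pass before its reuse is safe, and
_bt_pendingfsm_finalize() later hands only the already-safe entries to the
FSM.

typedef struct PendingRecycleSketch
{
	BlockNumber			blkno;		/* page deleted by this VACUUM */
	FullTransactionId	safexid;	/* reusable once this XID is gone from
									 * every snapshot's view */
} PendingRecycleSketch;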
2003-02-22 01:45:05 +01:00
|
|
|
/*
|
2006-05-08 02:00:17 +02:00
|
|
|
* The outer loop iterates over all index pages except the metapage, in
|
|
|
|
* physical order (we hope the kernel will cooperate in providing
|
|
|
|
* read-ahead for speed). It is critical that we visit all leaf pages,
|
|
|
|
* including ones added after we start the scan, else we might fail to
|
|
|
|
* delete some deletable tuples. Hence, we must repeatedly check the
|
|
|
|
* relation length. We must acquire the relation-extension lock while
|
|
|
|
* doing so to avoid a race condition: if someone else is extending the
|
|
|
|
* relation, there is a window where bufmgr/smgr have created a new
|
|
|
|
* all-zero page but it hasn't yet been write-locked by _bt_getbuf(). If
|
|
|
|
* we manage to scan such a page here, we'll improperly assume it can be
|
|
|
|
* recycled. Taking the lock synchronizes things enough to prevent a
|
|
|
|
* problem: either num_pages won't include the new page, or _bt_getbuf
|
|
|
|
* already has write lock on the buffer and it will be fully initialized
|
2021-09-01 01:55:39 +02:00
|
|
|
* before we can examine it. Also, we need not worry if a page is added
|
|
|
|
* immediately after we look; the page splitting code already has
|
|
|
|
* write-lock on the left page before it adds a right page, so we must
|
|
|
|
* already have processed any tuples due to be moved into such a page.
|
2006-05-08 02:00:17 +02:00
|
|
|
*
|
|
|
|
* We can skip locking for new or temp relations, however, since no one
|
|
|
|
* else could be accessing them.
|
2003-02-22 01:45:05 +01:00
|
|
|
*/
|
2006-05-08 02:00:17 +02:00
|
|
|
needLock = !RELATION_IS_LOCAL(rel);
|
2006-02-14 18:20:01 +01:00
|
|
|
|
2020-05-02 23:04:33 +02:00
|
|
|
scanblkno = BTREE_METAPAGE + 1;
|
2006-05-08 02:00:17 +02:00
|
|
|
for (;;)
|
|
|
|
{
|
|
|
|
/* Get the current relation length */
|
|
|
|
if (needLock)
|
|
|
|
LockRelationForExtension(rel, ExclusiveLock);
|
|
|
|
num_pages = RelationGetNumberOfBlocks(rel);
|
|
|
|
if (needLock)
|
|
|
|
UnlockRelationForExtension(rel, ExclusiveLock);
|
|
|
|
|
Report progress of CREATE INDEX operations
This uses the progress reporting infrastructure added by c16dc1aca5e0,
adding support for CREATE INDEX and CREATE INDEX CONCURRENTLY.
There are two pieces to this: one is index-AM-agnostic, and the other is
AM-specific. The latter is fairly elaborate for btrees, including
reportage for parallel index builds and the separate phases that btree
index creation uses; other index AMs, which are much simpler in their
building procedures, have simplistic reporting only, but that seems
sufficient, at least for non-concurrent builds.
The index-AM-agnostic part is fairly complete, providing insight into
the CONCURRENTLY wait phases as well as block-based progress during the
index validation table scan. (The index validation index scan requires
patching each AM, which has not been included here.)
Reviewers: Rahila Syed, Pavan Deolasee, Tatsuro Yamada
Discussion: https://postgr.es/m/20181220220022.mg63bhk26zdpvmcj@alvherre.pgsql
2019-04-02 20:18:08 +02:00
|
|
|
if (info->report_progress)
|
|
|
|
pgstat_progress_update_param(PROGRESS_SCAN_BLOCKS_TOTAL,
|
|
|
|
num_pages);
|
|
|
|
|
2006-05-08 02:00:17 +02:00
|
|
|
/* Quit if we've scanned the whole relation */
|
2020-05-02 23:04:33 +02:00
|
|
|
if (scanblkno >= num_pages)
|
2006-05-08 02:00:17 +02:00
|
|
|
break;
|
|
|
|
/* Iterate over pages, then loop back to recheck length */
|
2020-05-02 23:04:33 +02:00
|
|
|
for (; scanblkno < num_pages; scanblkno++)
|
2006-02-12 01:18:17 +01:00
|
|
|
{
|
2020-05-02 23:04:33 +02:00
|
|
|
btvacuumpage(&vstate, scanblkno);
|
2019-04-02 20:18:08 +02:00
|
|
|
if (info->report_progress)
|
|
|
|
pgstat_progress_update_param(PROGRESS_SCAN_BLOCKS_DONE,
|
2020-05-02 23:04:33 +02:00
|
|
|
scanblkno);
|
2006-02-12 01:18:17 +01:00
|
|
|
}
|
2003-02-22 01:45:05 +01:00
|
|
|
}
|
|
|
|
|
2021-02-25 03:41:34 +01:00
|
|
|
/* Set statistics num_pages field to final size of index */
|
|
|
|
stats->num_pages = num_pages;
|
|
|
|
|
2006-05-08 02:00:17 +02:00
|
|
|
MemoryContextDelete(vstate.pagedelcontext);
|
2003-02-23 07:17:13 +01:00
|
|
|
|
Do index FSM vacuuming sooner.
In btree and SP-GiST indexes, move the responsibility for calling
IndexFreeSpaceMapVacuum from the vacuumcleanup phase to the bulkdelete
phase, and do it if and only if we found some pages that could be put into
FSM. As in commit 851a26e26, the idea is to make free pages visible to FSM
searchers sooner when vacuuming very large tables (large enough to need
multiple bulkdelete scans). This adds more redundant work than that commit
did, since we have to scan the entire index FSM each time rather than being
able to localize what needs to be updated; but it still seems worthwhile.
However, we can buy something back by not touching the FSM at all when
there are no pages that can be put in it. That will result in slower
recovery from corrupt upper FSM pages in such a scenario, but it doesn't
seem like that's a case we need to optimize for.
Hash indexes don't use FSM at all. GIN, GiST, and bloom indexes update
FSM during the vacuumcleanup phase not bulkdelete, so that doing something
comparable to this would be a much more invasive change, and it's not clear
it's worth it. BRIN indexes do things sufficiently differently that this
change doesn't apply to them, either.
Claudio Freire, reviewed by Masahiko Sawada and Jing Wang, some additional
tweaks by me
Discussion: https://postgr.es/m/CAGTBQpYR0uJCNTt3M5GOzBRHo+-GccNO1nCaQ8yEJmZKSW5q1A@mail.gmail.com
2018-03-30 17:48:17 +02:00
|
|
|
/*
|
2021-03-21 23:25:39 +01:00
|
|
|
* If there were any calls to _bt_pagedel() during scan of the index then
|
|
|
|
* see if any of the resulting pages can be placed in the FSM now. When
|
|
|
|
* it's not safe we'll have to leave it up to a future VACUUM operation.
|
2018-03-30 17:48:17 +02:00
|
|
|
*
|
2021-03-21 23:25:39 +01:00
|
|
|
* Finally, if we placed any pages in the FSM (either just now or during
|
|
|
|
* the scan), forcibly update the upper-level FSM pages to ensure that
|
|
|
|
* searchers can find them.
|
2018-03-30 17:48:17 +02:00
|
|
|
*/
|
2021-03-21 23:25:39 +01:00
|
|
|
_bt_pendingfsm_finalize(rel, &vstate);
|
2021-02-25 03:41:34 +01:00
|
|
|
if (stats->pages_free > 0)
|
2018-03-30 17:48:17 +02:00
|
|
|
IndexFreeSpaceMapVacuum(rel);
|
2006-05-08 02:00:17 +02:00
|
|
|
}
|
2003-02-22 01:45:05 +01:00
|
|
|
|
2006-05-08 02:00:17 +02:00
|
|
|
/*
|
|
|
|
* btvacuumpage --- VACUUM one page
|
|
|
|
*
|
2020-05-02 23:04:33 +02:00
|
|
|
* This processes a single page for btvacuumscan(). In some cases we must
|
|
|
|
* backtrack to re-examine and VACUUM pages that were the scanblkno during
|
|
|
|
* a previous call here. This is how we handle page splits (that happened
|
|
|
|
* after our cycleid was acquired) whose right half page happened to reuse
|
|
|
|
* a block that we might have processed at some point before it was
|
|
|
|
* recycled (i.e. before the page split).
|
2006-05-08 02:00:17 +02:00
|
|
|
*/
|
|
|
|
static void
|
2020-05-02 23:04:33 +02:00
|
|
|
btvacuumpage(BTVacState *vstate, BlockNumber scanblkno)
|
2006-05-08 02:00:17 +02:00
|
|
|
{
|
|
|
|
IndexVacuumInfo *info = vstate->info;
|
|
|
|
IndexBulkDeleteResult *stats = vstate->stats;
|
|
|
|
IndexBulkDeleteCallback callback = vstate->callback;
|
|
|
|
void *callback_state = vstate->callback_state;
|
|
|
|
Relation rel = info->index;
|
2020-05-02 23:04:33 +02:00
|
|
|
bool attempt_pagedel;
|
|
|
|
BlockNumber blkno,
|
|
|
|
backtrack_to;
|
2006-05-08 02:00:17 +02:00
|
|
|
Buffer buf;
|
|
|
|
Page page;
|
2020-05-02 23:04:33 +02:00
|
|
|
BTPageOpaque opaque;
|
|
|
|
|
|
|
|
blkno = scanblkno;
|
|
|
|
|
|
|
|
backtrack:
|
2006-05-08 02:00:17 +02:00
|
|
|
|
2020-05-02 23:04:33 +02:00
|
|
|
attempt_pagedel = false;
|
|
|
|
backtrack_to = P_NONE;
|
2006-05-08 02:00:17 +02:00
|
|
|
|
|
|
|
/* call vacuum_delay_point while not holding any buffer lock */
|
|
|
|
vacuum_delay_point();
|
|
|
|
|
|
|
|
/*
|
|
|
|
* We can't use _bt_getbuf() here because it always applies
|
|
|
|
* _bt_checkpage(), which will barf on an all-zero page. We want to
|
2007-05-30 22:12:03 +02:00
|
|
|
* recycle all-zero pages, not fail. Also, we want to use a nondefault
|
|
|
|
* buffer access strategy.
|
2006-05-08 02:00:17 +02:00
|
|
|
*/
|
Unite ReadBufferWithFork, ReadBufferWithStrategy, and ZeroOrReadBuffer
functions into one ReadBufferExtended function, that takes the strategy
and mode as argument. There's three modes, RBM_NORMAL which is the default
used by plain ReadBuffer(), RBM_ZERO, which replaces ZeroOrReadBuffer, and
a new mode RBM_ZERO_ON_ERROR, which allows callers to read corrupt pages
without throwing an error. The FSM needs the new mode to recover from
corrupt pages, which could happen if we crash after extending an FSM file,
and the new page is "torn".
Add fork number to some error messages in bufmgr.c, that still lacked it.
2008-10-31 16:05:00 +01:00
|
|
|
buf = ReadBufferExtended(rel, MAIN_FORKNUM, blkno, RBM_NORMAL,
|
|
|
|
info->strategy);
|
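/*
 * Recap of the modes named in the commit message above (hedged): RBM_NORMAL
 * reads and verifies the page, RBM_ZERO returns a zeroed buffer without
 * reading, and RBM_ZERO_ON_ERROR zeroes a torn or corrupt page instead of
 * erroring.  RBM_NORMAL plus the manual !PageIsNew()/_bt_checkpage() dance
 * below is what lets VACUUM tolerate all-zero pages here.
 */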
2020-07-22 00:50:58 +02:00
|
|
|
_bt_lockbuf(rel, buf, BT_READ);
|
2016-04-20 15:31:19 +02:00
|
|
|
page = BufferGetPage(buf);
|
2020-05-02 23:04:33 +02:00
|
|
|
opaque = NULL;
|
2006-05-08 02:00:17 +02:00
|
|
|
if (!PageIsNew(page))
|
2015-07-27 11:24:27 +02:00
|
|
|
{
|
2006-05-08 02:00:17 +02:00
|
|
|
_bt_checkpage(rel, buf);
|
2015-07-27 11:24:27 +02:00
|
|
|
opaque = (BTPageOpaque) PageGetSpecialPointer(page);
|
|
|
|
}
|
2006-05-08 02:00:17 +02:00
|
|
|
|
2020-05-02 23:04:33 +02:00
|
|
|
Assert(blkno <= scanblkno);
|
|
|
|
if (blkno != scanblkno)
|
2006-05-08 02:00:17 +02:00
|
|
|
{
|
2020-05-02 23:04:33 +02:00
|
|
|
/*
|
|
|
|
* We're backtracking.
|
|
|
|
*
|
|
|
|
* We followed a right link to a sibling leaf page (a page that
|
|
|
|
* happens to be from a block located before scanblkno). The only
|
|
|
|
* case we want to do anything with is a live leaf page having the
|
|
|
|
* current vacuum cycle ID.
|
|
|
|
*
|
|
|
|
* The page had better be in a state that's consistent with what we
|
|
|
|
* expect. Check for conditions that imply corruption in passing. It
|
|
|
|
* can't be half-dead because only an interrupted VACUUM process can
|
|
|
|
* leave pages in that state, so we'd definitely have dealt with it
|
|
|
|
* back when the page was the scanblkno page (half-dead pages are
|
|
|
|
* always marked fully deleted by _bt_pagedel()). This assumes that
|
|
|
|
* there can be only one vacuum process running at a time.
|
|
|
|
*/
|
|
|
|
if (!opaque || !P_ISLEAF(opaque) || P_ISHALFDEAD(opaque))
|
2006-05-08 02:00:17 +02:00
|
|
|
{
|
2020-05-02 23:04:33 +02:00
|
|
|
Assert(false);
|
|
|
|
ereport(LOG,
|
|
|
|
(errcode(ERRCODE_INDEX_CORRUPTED),
|
|
|
|
errmsg_internal("right sibling %u of scanblkno %u unexpectedly in an inconsistent state in index \"%s\"",
|
|
|
|
blkno, scanblkno, RelationGetRelationName(rel))));
|
|
|
|
_bt_relbuf(rel, buf);
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* We may have already processed the page in an earlier call, when the
|
|
|
|
* page was scanblkno. This happens when the leaf page split occurred
|
|
|
|
* after the scan began, but before the right sibling page became the
|
|
|
|
* scanblkno.
|
|
|
|
*
|
|
|
|
* Page may also have been deleted by current btvacuumpage() call,
|
|
|
|
* since _bt_pagedel() sometimes deletes the right sibling page of
|
|
|
|
* scanblkno in passing (it does so after we decided where to
|
|
|
|
* backtrack to). We don't need to process this page as a deleted
|
|
|
|
* page a second time now (in fact, it would be wrong to count it as a
|
|
|
|
* deleted page in the bulk delete statistics a second time).
|
|
|
|
*/
|
|
|
|
if (opaque->btpo_cycleid != vstate->cycleid || P_ISDELETED(opaque))
|
|
|
|
{
|
|
|
|
/* Done with current scanblkno (and all lower split pages) */
|
2006-05-08 02:00:17 +02:00
|
|
|
_bt_relbuf(rel, buf);
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
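A hedged approximation of the recyclability test used below (the real
BTPageIsRecyclable lives in nbtree.h; helper names are as best I recall, so
treat the body as a sketch rather than the canonical implementation):

static bool
page_is_recyclable_sketch(Page page)
{
	BTPageOpaque o = (BTPageOpaque) PageGetSpecialPointer(page);

	/* deleted, and the deleting XID is older than every live snapshot? */
	if (P_ISDELETED(o))
		return GlobalVisCheckRemovableFullXid(NULL, BTPageGetDeleteXid(page));
	return false;
}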
2021-02-25 03:41:34 +01:00
|
|
|
if (!opaque || BTPageIsRecyclable(page))
|
2006-05-08 02:00:17 +02:00
|
|
|
{
|
2020-05-01 17:39:52 +02:00
|
|
|
/* Okay to recycle this page (which could be leaf or internal) */
|
2008-09-30 12:52:14 +02:00
|
|
|
RecordFreeIndexPage(rel, blkno);
|
2006-05-08 02:00:17 +02:00
|
|
|
stats->pages_deleted++;
|
2021-02-25 03:41:34 +01:00
|
|
|
stats->pages_free++;
|
2006-05-08 02:00:17 +02:00
|
|
|
}
|
|
|
|
else if (P_ISDELETED(opaque))
|
|
|
|
{
|
2020-05-01 17:39:52 +02:00
|
|
|
/*
|
|
|
|
* Already deleted page (which could be leaf or internal). Can't
|
|
|
|
* recycle yet.
|
|
|
|
*/
|
2006-05-08 02:00:17 +02:00
|
|
|
stats->pages_deleted++;
|
|
|
|
}
|
2006-11-01 20:43:17 +01:00
|
|
|
else if (P_ISHALFDEAD(opaque))
|
2006-05-08 02:00:17 +02:00
|
|
|
{
|
2021-02-25 23:32:18 +01:00
|
|
|
/* Half-dead leaf page (from interrupted VACUUM) -- finish deleting */
|
|
|
|
attempt_pagedel = true;
|
|
|
|
|
2020-05-01 17:39:52 +02:00
|
|
|
/*
|
2021-02-25 23:32:18 +01:00
|
|
|
* _bt_pagedel() will increment both pages_newly_deleted and
|
|
|
|
* pages_deleted stats in all cases (barring corruption)
|
2020-05-01 17:39:52 +02:00
|
|
|
*/
|
2006-05-08 02:00:17 +02:00
|
|
|
}
|
|
|
|
else if (P_ISLEAF(opaque))
|
|
|
|
{
|
Add deduplication to nbtree.
Deduplication reduces the storage overhead of duplicates in indexes that
use the standard nbtree index access method. The deduplication process
is applied lazily, after the point where opportunistic deletion of
LP_DEAD-marked index tuples occurs. Deduplication is only applied at
the point where a leaf page split would otherwise be required. New
posting list tuples are formed by merging together existing duplicate
tuples. The physical representation of the items on an nbtree leaf page
is made more space efficient by deduplication, but the logical contents
of the page are not changed. Even unique indexes make use of
deduplication as a way of controlling bloat from duplicates whose TIDs
point to different versions of the same logical table row.
The lazy approach taken by nbtree has significant advantages over a GIN
style eager approach. Most individual inserts of index tuples have
exactly the same overhead as before. The extra overhead of
deduplication is amortized across insertions, just like the overhead of
page splits. The key space of indexes works in the same way as it has
since commit dd299df8 (the commit that made heap TID a tiebreaker
column).
Testing has shown that nbtree deduplication can generally make indexes
with about 10 or 15 tuples for each distinct key value about 2.5X - 4X
smaller, even with single column integer indexes (e.g., an index on a
referencing column that accompanies a foreign key). The final size of
single column nbtree indexes comes close to the final size of a similar
contrib/btree_gin index, at least in cases where GIN's posting list
compression isn't very effective. This can significantly improve
transaction throughput, and significantly reduce the cost of vacuuming
indexes.
A new index storage parameter (deduplicate_items) controls the use of
deduplication. The default setting is 'on', so all new B-Tree indexes
automatically use deduplication where possible. This decision will be
reviewed at the end of the Postgres 13 beta period.
There is a regression of approximately 2% in transaction throughput with
synthetic workloads that consist of append-only inserts into a table
with several non-unique indexes, where all indexes have few or no
repeated values. The underlying issue is that cycles are wasted on
unsuccessful attempts at deduplicating items in non-unique indexes.
There doesn't seem to be a way around it short of disabling
deduplication entirely. Note that deduplication of items in unique
indexes is fairly well targeted in general, which avoids the problem
there (we can use a special heuristic to trigger deduplication passes in
unique indexes, since we're specifically targeting "version bloat").
Bump XLOG_PAGE_MAGIC because xl_btree_vacuum changed.
No bump in BTREE_VERSION, since the representation of posting list
tuples works in a way that's backwards compatible with version 4 indexes
(i.e. indexes built on PostgreSQL 12). However, users must still
REINDEX a pg_upgrade'd index to use deduplication, regardless of the
Postgres version they've upgraded from. This is the only way to set the
new nbtree metapage flag indicating that deduplication is generally
safe.
Author: Anastasia Lubennikova, Peter Geoghegan
Reviewed-By: Peter Geoghegan, Heikki Linnakangas
Discussion:
https://postgr.es/m/55E4051B.7020209@postgrespro.ru
https://postgr.es/m/4ab6e2db-bcee-f4cf-0916-3a06e6ccbb55@postgrespro.ru
2020-02-26 22:05:30 +01:00
|
|
|
OffsetNumber deletable[MaxIndexTuplesPerPage];
|
2006-05-08 02:00:17 +02:00
|
|
|
int ndeletable;
|
2020-02-26 22:05:30 +01:00
|
|
|
BTVacuumPosting updatable[MaxIndexTuplesPerPage];
|
|
|
|
int nupdatable;
|
2006-05-08 02:00:17 +02:00
|
|
|
OffsetNumber offnum,
|
|
|
|
minoff,
|
|
|
|
maxoff;
|
2020-02-26 22:05:30 +01:00
|
|
|
int nhtidsdead,
|
|
|
|
nhtidslive;
|
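The updatable[] array above exists because of posting list tuples (see the
deduplication commit message earlier): when only some of a posting list's
TIDs are dead, VACUUM rewrites the tuple instead of deleting it outright.
A simplified, hedged model of the concept (not the actual on-disk IndexTuple
layout):

typedef struct PostingListSketch
{
	/* ... key column values, stored exactly once ... */
	int				nhtids;		/* number of heap TIDs merged together */
	ItemPointerData	htids[FLEXIBLE_ARRAY_MEMBER];	/* ascending TID order */
} PostingListSketch;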
2006-05-08 02:00:17 +02:00
|
|
|
|
|
|
|
/*
|
2021-12-09 02:24:45 +01:00
|
|
|
* Trade in the initial read lock for a full cleanup lock on this
|
|
|
|
* page. We must get such a lock on every leaf page over the course
|
|
|
|
* of the vacuum scan, whether or not it actually contains any
|
2006-05-08 02:00:17 +02:00
|
|
|
* deletable tuples --- see nbtree/README.
|
|
|
|
*/
|
2020-07-22 00:50:58 +02:00
|
|
|
_bt_upgradelockbufcleanup(rel, buf);
|
2006-05-08 02:00:17 +02:00
|
|
|
|
|
|
|
/*
|
2020-05-02 23:04:33 +02:00
|
|
|
* Check whether we need to backtrack to earlier pages. What we are
|
|
|
|
* concerned about is a page split that happened since we started the
|
|
|
|
* vacuum scan. If the split moved tuples on the right half of the
|
|
|
|
* split (i.e. the tuples that sort high) to a block that we already
|
|
|
|
* passed over, then we might have missed the tuples. We need to
|
|
|
|
* backtrack now. (Must do this before possibly clearing btpo_cycleid
|
|
|
|
* or deleting scanblkno page below!)
|
2006-05-08 02:00:17 +02:00
|
|
|
*/
|
|
|
|
if (vstate->cycleid != 0 &&
|
|
|
|
opaque->btpo_cycleid == vstate->cycleid &&
|
|
|
|
!(opaque->btpo_flags & BTP_SPLIT_END) &&
|
|
|
|
!P_RIGHTMOST(opaque) &&
|
2020-05-02 23:04:33 +02:00
|
|
|
opaque->btpo_next < scanblkno)
|
|
|
|
backtrack_to = opaque->btpo_next;
|
2006-05-08 02:00:17 +02:00
|
|
|
|
|
|
|
ndeletable = 0;
|
2020-02-26 22:05:30 +01:00
|
|
|
nupdatable = 0;
|
2006-05-08 02:00:17 +02:00
|
|
|
minoff = P_FIRSTDATAKEY(opaque);
|
|
|
|
maxoff = PageGetMaxOffsetNumber(page);
|
Add deduplication to nbtree.
2020-02-26 22:05:30 +01:00
|
|
|
nhtidsdead = 0;
|
|
|
|
nhtidslive = 0;
|
2006-05-08 02:00:17 +02:00
|
|
|
if (callback)
|
|
|
|
{
|
2021-09-27 05:25:14 +02:00
|
|
|
/* btbulkdelete callback tells us what to delete (or update) */
|
2006-05-08 02:00:17 +02:00
|
|
|
for (offnum = minoff;
|
|
|
|
offnum <= maxoff;
|
|
|
|
offnum = OffsetNumberNext(offnum))
|
|
|
|
{
|
|
|
|
IndexTuple itup;
|
|
|
|
|
|
|
|
itup = (IndexTuple) PageGetItem(page,
|
|
|
|
PageGetItemId(page, offnum));
|
Allow read only connections during recovery, known as Hot Standby.
Enabled by recovery_connections = on (default) and forcing archive
recovery using a recovery.conf.  Recovery processing now emulates the
original transactions as they are replayed, providing full locking and
MVCC behaviour for read only queries.  Recovery must enter consistent
state before connections are allowed, so there is a delay, typically
short, before connections succeed.  Replay of recovering transactions
can conflict and in some cases deadlock with queries during recovery;
these result in query cancellation after max_standby_delay seconds have
expired.  Infrastructure changes have minor effects on normal running,
though introduce four new types of WAL record.
New test mode "make standbycheck" allows regression tests of static
command behaviour on a standby server while in recovery.  Typical and
extreme dynamic behaviours have been checked via code inspection and
manual testing.  Few port specific behaviours have been utilised,
though primary testing has been on Linux only so far.
This commit is the basic patch.  Additional changes will follow in this
release to enhance some aspects of behaviour, notably improved handling
of conflicts, deadlock detection and query cancellation.  Changes to
VACUUM FULL are also required.
Simon Riggs, with significant and lengthy review by Heikki Linnakangas,
including streamlined redesign of snapshot creation and two-phase
commit.
Important contributions from Florian Pflug, Mark Kirkwood, Merlin
Moncure, Greg Stark, Gianni Ciolli, Gabriele Bartolini, Hannu Krosing,
Robert Haas, Tatsuo Ishii, Hiroyuki Yamada plus support and feedback
from many other community members.
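For illustration, the knobs and test mode named above (parameter names
are from this commit message; the values shown are examples only):

    recovery_connections = on    # default; allows read only connections
    max_standby_delay = 30       # seconds before conflicting queries cancel
    make standbycheck            # regression tests against a recovering standby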
2009-12-19 02:32:45 +01:00
|
|
|
|
Add deduplication to nbtree.
2020-02-26 22:05:30 +01:00
|
|
|
Assert(!BTreeTupleIsPivot(itup));
|
|
|
|
if (!BTreeTupleIsPosting(itup))
|
|
|
|
{
|
|
|
|
/* Regular tuple, standard table TID representation */
|
|
|
|
if (callback(&itup->t_tid, callback_state))
|
|
|
|
{
|
|
|
|
deletable[ndeletable++] = offnum;
|
|
|
|
nhtidsdead++;
|
|
|
|
}
|
|
|
|
else
|
|
|
|
nhtidslive++;
|
|
|
|
}
|
|
|
|
else
|
|
|
|
{
|
|
|
|
BTVacuumPosting vacposting;
|
|
|
|
int nremaining;
|
|
|
|
|
|
|
|
/* Posting list tuple */
|
|
|
|
vacposting = btreevacuumposting(vstate, itup, offnum,
|
|
|
|
&nremaining);
|
|
|
|
if (vacposting == NULL)
|
|
|
|
{
|
|
|
|
/*
|
|
|
|
* All table TIDs from the posting tuple remain, so no
|
|
|
|
* delete or update required
|
|
|
|
*/
|
|
|
|
Assert(nremaining == BTreeTupleGetNPosting(itup));
|
|
|
|
}
|
|
|
|
else if (nremaining > 0)
|
|
|
|
{
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Store metadata about posting list tuple in
|
|
|
|
* updatable array for entire page. Existing tuple
|
|
|
|
* will be updated during the later call to
|
|
|
|
* _bt_delitems_vacuum().
|
|
|
|
*/
|
|
|
|
Assert(nremaining < BTreeTupleGetNPosting(itup));
|
|
|
|
updatable[nupdatable++] = vacposting;
|
|
|
|
nhtidsdead += BTreeTupleGetNPosting(itup) - nremaining;
|
|
|
|
}
|
|
|
|
else
|
|
|
|
{
|
|
|
|
/*
|
|
|
|
* All table TIDs from the posting list must be
|
|
|
|
* deleted. We'll delete the index tuple completely
|
|
|
|
* (no update required).
|
|
|
|
*/
|
|
|
|
Assert(nremaining == 0);
|
|
|
|
deletable[ndeletable++] = offnum;
|
|
|
|
nhtidsdead += BTreeTupleGetNPosting(itup);
|
|
|
|
pfree(vacposting);
|
|
|
|
}
|
|
|
|
|
|
|
|
nhtidslive += nremaining;
|
|
|
|
}
|
2006-05-08 02:00:17 +02:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
Add deduplication to nbtree.
2020-02-26 22:05:30 +01:00
|
|
|
* Apply any needed deletes or updates. We issue just one
|
|
|
|
* _bt_delitems_vacuum() call per page, so as to minimize WAL traffic.
|
2006-05-08 02:00:17 +02:00
|
|
|
*/
|
Add deduplication to nbtree.
2020-02-26 22:05:30 +01:00
|
|
|
if (ndeletable > 0 || nupdatable > 0)
|
2006-05-08 02:00:17 +02:00
|
|
|
{
|
2020-05-02 23:04:33 +02:00
|
|
|
Assert(nhtidsdead >= ndeletable + nupdatable);
|
Add deduplication to nbtree.
2020-02-26 22:05:30 +01:00
|
|
|
_bt_delitems_vacuum(rel, buf, deletable, ndeletable, updatable,
|
|
|
|
nupdatable);
|
Allow read only connections during recovery, known as Hot Standby.
2009-12-19 02:32:45 +01:00
|
|
|
|
Add deduplication to nbtree.
2020-02-26 22:05:30 +01:00
|
|
|
stats->tuples_removed += nhtidsdead;
|
2006-05-08 02:00:17 +02:00
|
|
|
/* must recompute maxoff */
|
|
|
|
maxoff = PageGetMaxOffsetNumber(page);
|
Add deduplication to nbtree.
2020-02-26 22:05:30 +01:00
|
|
|
|
|
|
|
/* can't leak memory here */
|
|
|
|
for (int i = 0; i < nupdatable; i++)
|
|
|
|
pfree(updatable[i]);
|
2006-05-08 02:00:17 +02:00
|
|
|
}
|
|
|
|
else
|
|
|
|
{
|
|
|
|
/*
|
Fix bug in nbtree VACUUM "skip full scan" feature.
Commit 857f9c36cda (which taught nbtree VACUUM to skip a scan of the
index from btvacuumcleanup in situations where it doesn't seem worth it) made
VACUUM maintain the oldest btpo.xact among all deleted pages for the
index as a whole. It failed to handle all the details surrounding pages
that are deleted by the current VACUUM operation correctly (though pages
deleted by some previous VACUUM operation were processed correctly).
The most immediate problem was that the special area of the page was
examined without a buffer pin at one point. More fundamentally, the
handling failed to account for the full range of _bt_pagedel()
behaviors. For example, _bt_pagedel() sometimes deletes internal pages
in passing, as part of deleting an entire subtree with btvacuumpage()
caller's page as the leaf level page. The original leaf page passed to
_bt_pagedel() might not be the page that it deletes first in cases where
deletion can take place.
It's unclear how disruptive this bug may have been, or what symptoms
users might want to look out for. The issue was spotted during
unrelated code review.
To fix, push down the logic for maintaining the oldest btpo.xact to
_bt_pagedel(). btvacuumpage() is now responsible for pages that were
fully deleted by a previous VACUUM operation, while _bt_pagedel() is now
responsible for pages that were deleted by the current VACUUM operation
(this includes half-dead pages from a previous interrupted VACUUM
operation that become fully deleted in _bt_pagedel()). Note that
_bt_pagedel() should never encounter an existing deleted page.
This commit theoretically breaks the ABI of a stable release by changing
the signature of _bt_pagedel(). However, if any third party extension
is actually affected by this, then it must already be completely broken
(since there are numerous assumptions made in _bt_pagedel() that cannot
be met outside of VACUUM). It seems highly unlikely that such an
extension actually exists, in any case.
Author: Peter Geoghegan
Reviewed-By: Masahiko Sawada
Discussion: https://postgr.es/m/CAH2-WzkrXBcMQWAYUJMFTTvzx_r4q=pYSjDe07JnUXhe+OZnJA@mail.gmail.com
Backpatch: 11-, where the "skip full scan" feature was introduced.
2020-05-01 17:39:52 +02:00
|
|
|
* If the leaf page has been split during this vacuum cycle, it
|
|
|
|
* seems worth expending a write to clear btpo_cycleid even if we
|
|
|
|
* don't have any deletions to do. (If we do, _bt_delitems_vacuum
|
|
|
|
* takes care of this.) This ensures we won't process the page
|
|
|
|
* again.
|
2006-05-08 02:00:17 +02:00
|
|
|
*
|
|
|
|
* We treat this like a hint-bit update because there's no need to
|
|
|
|
* WAL-log it.
|
|
|
|
*/
|
Add deduplication to nbtree.
2020-02-26 22:05:30 +01:00
|
|
|
Assert(nhtidsdead == 0);
|
2006-05-08 02:00:17 +02:00
|
|
|
if (vstate->cycleid != 0 &&
|
|
|
|
opaque->btpo_cycleid == vstate->cycleid)
|
|
|
|
{
|
|
|
|
opaque->btpo_cycleid = 0;
|
2013-06-17 17:02:12 +02:00
|
|
|
MarkBufferDirtyHint(buf, true);
|
2006-05-08 02:00:17 +02:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
Fix bug in nbtree VACUUM "skip full scan" feature.
2020-05-01 17:39:52 +02:00
|
|
|
* If the leaf page is now empty, try to delete it; else count the
|
|
|
|
* live tuples (live table TIDs in posting lists are counted as
|
2020-05-02 23:04:33 +02:00
|
|
|
* separate live tuples). We don't delete when backtracking, though,
|
|
|
|
* since that would require teaching _bt_pagedel() about backtracking
|
|
|
|
* (doesn't seem worth adding more complexity to deal with that).
|
2020-11-05 03:42:27 +01:00
|
|
|
*
|
|
|
|
* We don't count the number of live TIDs during cleanup-only calls to
|
|
|
|
* btvacuumscan (i.e. when callback is not set). We count the number
|
|
|
|
* of index tuples directly instead. This avoids the expense of
|
2021-03-11 01:27:01 +01:00
|
|
|
* directly examining all of the tuples on each page. VACUUM will
|
|
|
|
* treat num_index_tuples as an estimate in cleanup-only case, so it
|
|
|
|
* doesn't matter that this underestimates num_index_tuples
|
|
|
|
* significantly in some cases.
|
2006-05-08 02:00:17 +02:00
|
|
|
*/
|
|
|
|
if (minoff > maxoff)
|
2020-05-02 23:04:33 +02:00
|
|
|
attempt_pagedel = (blkno == scanblkno);
|
2020-11-05 03:42:27 +01:00
|
|
|
else if (callback)
|
Add deduplication to nbtree.
2020-02-26 22:05:30 +01:00
|
|
|
stats->num_index_tuples += nhtidslive;
|
2020-11-05 03:42:27 +01:00
|
|
|
else
|
|
|
|
stats->num_index_tuples += maxoff - minoff + 1;
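/*
 * Note (illustrative): this branch counts each posting list tuple as a
 * single index tuple, so pages containing posting lists underestimate
 * the number of live table TIDs, as discussed in the comment above.
 */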
|
Add deduplication to nbtree.
2020-02-26 22:05:30 +01:00
|
|
|
|
2020-05-02 23:04:33 +02:00
|
|
|
Assert(!attempt_pagedel || nhtidslive == 0);
|
2006-05-08 02:00:17 +02:00
|
|
|
}
|
|
|
|
|
2020-05-02 23:04:33 +02:00
|
|
|
if (attempt_pagedel)
|
2006-05-08 02:00:17 +02:00
|
|
|
{
|
|
|
|
MemoryContext oldcontext;
|
|
|
|
|
|
|
|
/* Run pagedel in a temp context to avoid memory leakage */
|
|
|
|
MemoryContextReset(vstate->pagedelcontext);
|
|
|
|
oldcontext = MemoryContextSwitchTo(vstate->pagedelcontext);
|
|
|
|
|
2020-05-01 18:51:09 +02:00
|
|
|
/*
|
2021-02-25 23:32:18 +01:00
|
|
|
* _bt_pagedel maintains the bulk delete stats on our behalf;
|
|
|
|
* pages_newly_deleted and pages_deleted are likely to be incremented
|
|
|
|
* during call
|
2020-05-01 18:51:09 +02:00
|
|
|
*/
|
2020-05-02 23:04:33 +02:00
|
|
|
Assert(blkno == scanblkno);
|
2021-02-25 23:32:18 +01:00
|
|
|
_bt_pagedel(rel, buf, vstate);
|
2006-05-08 02:00:17 +02:00
|
|
|
|
|
|
|
MemoryContextSwitchTo(oldcontext);
|
|
|
|
/* pagedel released buffer, so we shouldn't */
|
|
|
|
}
|
|
|
|
else
|
|
|
|
_bt_relbuf(rel, buf);
|
|
|
|
|
2020-05-02 23:04:33 +02:00
|
|
|
if (backtrack_to != P_NONE)
|
2006-05-08 02:00:17 +02:00
|
|
|
{
|
2020-05-02 23:04:33 +02:00
|
|
|
blkno = backtrack_to;
|
|
|
|
goto backtrack;
|
2006-05-08 02:00:17 +02:00
|
|
|
}
|
2003-02-22 01:45:05 +01:00
|
|
|
}
|
2011-12-18 21:49:00 +01:00
|
|
|
|
Add deduplication to nbtree.
2020-02-26 22:05:30 +01:00
|
|
|
/*
|
|
|
|
* btreevacuumposting --- determine TIDs still needed in posting list
|
|
|
|
*
|
|
|
|
* Returns metadata describing how to build replacement tuple without the TIDs
|
|
|
|
* that VACUUM needs to delete. Returned value is NULL in the common case
|
|
|
|
* where no changes are needed to caller's posting list tuple (we avoid
|
|
|
|
* allocating memory here as an optimization).
|
|
|
|
*
|
|
|
|
* The number of TIDs that should remain in the posting list tuple is set for
|
|
|
|
* caller in *nremaining.
|
|
|
|
*/
|
|
|
|
static BTVacuumPosting
|
|
|
|
btreevacuumposting(BTVacState *vstate, IndexTuple posting,
|
|
|
|
OffsetNumber updatedoffset, int *nremaining)
|
|
|
|
{
|
|
|
|
int live = 0;
|
|
|
|
int nitem = BTreeTupleGetNPosting(posting);
|
|
|
|
ItemPointer items = BTreeTupleGetPosting(posting);
|
|
|
|
BTVacuumPosting vacposting = NULL;
|
|
|
|
|
|
|
|
for (int i = 0; i < nitem; i++)
|
|
|
|
{
|
|
|
|
if (!vstate->callback(items + i, vstate->callback_state))
|
|
|
|
{
|
|
|
|
/* Live table TID */
|
|
|
|
live++;
|
|
|
|
}
|
|
|
|
else if (vacposting == NULL)
|
|
|
|
{
|
|
|
|
/*
|
|
|
|
* First dead table TID encountered.
|
|
|
|
*
|
|
|
|
* It's now clear that we need to delete one or more dead table
|
|
|
|
* TIDs, so start maintaining metadata describing how to update
|
|
|
|
* existing posting list tuple.
|
|
|
|
*/
|
|
|
|
vacposting = palloc(offsetof(BTVacuumPostingData, deletetids) +
|
|
|
|
nitem * sizeof(uint16));
|
|
|
|
|
|
|
|
vacposting->itup = posting;
|
|
|
|
vacposting->updatedoffset = updatedoffset;
|
|
|
|
vacposting->ndeletedtids = 0;
|
|
|
|
vacposting->deletetids[vacposting->ndeletedtids++] = i;
|
|
|
|
}
|
|
|
|
else
|
|
|
|
{
|
|
|
|
/* Second or subsequent dead table TID */
|
|
|
|
vacposting->deletetids[vacposting->ndeletedtids++] = i;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
*nremaining = live;
|
|
|
|
return vacposting;
|
|
|
|
}
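/*
 * Worked example (illustrative, not from the original source): for a
 * posting list tuple with TIDs {A, B, C} where only B is reported dead
 * by the callback, the result has ndeletedtids = 1 and
 * deletetids[0] = 1 (B's position in the list), and *nremaining is set
 * to 2.  If no TID is dead, NULL is returned and no update is needed;
 * if every TID is dead, the caller deletes the whole index tuple
 * instead of updating it (see btvacuumpage).
 */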
|
|
|
|
|
2011-12-18 21:49:00 +01:00
|
|
|
/*
|
|
|
|
* btcanreturn() -- Check whether btree indexes support index-only scans.
|
|
|
|
*
|
|
|
|
* btrees always do, so this is trivial.
|
|
|
|
*/
|
Restructure index access method API to hide most of it at the C level.
This patch reduces pg_am to just two columns, a name and a handler
function. All the data formerly obtained from pg_am is now provided
in a C struct returned by the handler function. This is similar to
the designs we've adopted for FDWs and tablesample methods. There
are multiple advantages. For one, the index AM's support functions
are now simple C functions, making them faster to call and much less
error-prone, since the C compiler can now check function signatures.
For another, this will make it far more practical to define index access
methods in installable extensions.
A disadvantage is that SQL-level code can no longer see attributes
of index AMs; in particular, some of the crosschecks in the opr_sanity
regression test are no longer possible from SQL. We've addressed that
by adding a facility for the index AM to perform such checks instead.
(Much more could be done in that line, but for now we're content if the
amvalidate functions more or less replace what opr_sanity used to do.)
We might also want to expose some sort of reporting functionality, but
this patch doesn't do that.
Alexander Korotkov, reviewed by Petr Jelínek, and rather heavily
editorialized on by me.
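For illustration (a sketch, not part of this commit message): under the
restructured API, nbtree exposes a single handler function that returns
its support functions in an IndexAmRoutine struct, roughly:

    Datum
    bthandler(PG_FUNCTION_ARGS)
    {
        IndexAmRoutine *amroutine = makeNode(IndexAmRoutine);

        /* ...capability flags and other am* fields... */
        amroutine->amcanreturn = btcanreturn;   /* see below */

        PG_RETURN_POINTER(amroutine);
    }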
2016-01-18 01:36:59 +01:00
|
|
|
bool
|
|
|
|
btcanreturn(Relation index, int attno)
|
2011-12-18 21:49:00 +01:00
|
|
|
{
|
Restructure index access method API to hide most of it at the C level.
2016-01-18 01:36:59 +01:00
|
|
|
return true;
|
2011-12-18 21:49:00 +01:00
|
|
|
}
|