/*-------------------------------------------------------------------------
 *
 * cluster.c
 *    CLUSTER a table on an index. This is now also used for VACUUM FULL.
 *
 * There is hardly anything left of Paul Brown's original implementation...
 *
 *
 * Portions Copyright (c) 1996-2022, PostgreSQL Global Development Group
 * Portions Copyright (c) 1994-5, Regents of the University of California
 *
 *
 * IDENTIFICATION
 *    src/backend/commands/cluster.c
 *
 *-------------------------------------------------------------------------
 */
#include "postgres.h"

#include "access/amapi.h"
#include "access/heapam.h"
#include "access/multixact.h"
#include "access/relscan.h"
#include "access/tableam.h"
#include "access/toast_internals.h"
#include "access/transam.h"
#include "access/xact.h"
#include "access/xlog.h"
#include "catalog/catalog.h"
#include "catalog/dependency.h"
#include "catalog/heap.h"
#include "catalog/index.h"
#include "catalog/namespace.h"
#include "catalog/objectaccess.h"
#include "catalog/pg_am.h"
#include "catalog/toasting.h"
#include "commands/cluster.h"
#include "commands/defrem.h"
#include "commands/progress.h"
#include "commands/tablecmds.h"
#include "commands/vacuum.h"
#include "miscadmin.h"
#include "optimizer/optimizer.h"
#include "pgstat.h"
#include "storage/bufmgr.h"
#include "storage/lmgr.h"
#include "storage/predicate.h"
#include "utils/acl.h"
#include "utils/fmgroids.h"
#include "utils/inval.h"
#include "utils/lsyscache.h"
#include "utils/memutils.h"
#include "utils/pg_rusage.h"
#include "utils/relmapper.h"
#include "utils/snapmgr.h"
#include "utils/syscache.h"
#include "utils/tuplesort.h"

/*
 * This struct is used to pass around the information on tables to be
 * clustered. We need this so we can make a list of them when invoked without
 * a specific table/index pair.
 */
typedef struct
{
    Oid         tableOid;
    Oid         indexOid;
} RelToCluster;


static void rebuild_relation(Relation OldHeap, Oid indexOid, bool verbose);
static void copy_table_data(Oid OIDNewHeap, Oid OIDOldHeap, Oid OIDOldIndex,
                            bool verbose, bool *pSwapToastByContent,
                            TransactionId *pFreezeXid, MultiXactId *pCutoffMulti);
static List *get_tables_to_cluster(MemoryContext cluster_context);

/*---------------------------------------------------------------------------
 * This cluster code allows for clustering multiple tables at once. Because
 * of this, we cannot just run everything on a single transaction, or we
 * would be forced to acquire exclusive locks on all the tables being
 * clustered, simultaneously --- very likely leading to deadlock.
 *
 * To solve this we follow a similar strategy to the VACUUM code,
 * clustering each relation in a separate transaction. For this to work,
 * we need to:
 *  - provide a separate memory context so that we can pass information in
 *    a way that survives across transactions
 *  - start a new transaction every time a new relation is clustered
 *  - check for validity of the information on to-be-clustered relations,
 *    as someone might have deleted a relation behind our back, or
 *    clustered one on a different index
 *  - end the transaction
 *
 * The single-relation case does not have any such overhead.
 *
 * We also allow a relation to be specified without an index. In that case,
 * the indisclustered bit will be looked up, and an ERROR will be thrown
 * if there is no index with the bit set.
 *---------------------------------------------------------------------------
 */
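
/*
 * For illustration (hypothetical object names): the two branches of
 * cluster() below correspond to these SQL forms:
 *
 *      CLUSTER some_table USING some_index;    -- single-relation case
 *      CLUSTER some_table;                     -- reuse indisclustered index
 *      CLUSTER;                                -- multi-relation case
 */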
void
cluster(ParseState *pstate, ClusterStmt *stmt, bool isTopLevel)
{
    ListCell   *lc;
    ClusterParams params = {0};
    bool        verbose = false;

    /* Parse option list */
    foreach(lc, stmt->params)
    {
        DefElem    *opt = (DefElem *) lfirst(lc);

        if (strcmp(opt->defname, "verbose") == 0)
            verbose = defGetBoolean(opt);
        else
            ereport(ERROR,
                    (errcode(ERRCODE_SYNTAX_ERROR),
                     errmsg("unrecognized CLUSTER option \"%s\"",
                            opt->defname),
                     parser_errposition(pstate, opt->location)));
    }

    params.options = (verbose ? CLUOPT_VERBOSE : 0);

    if (stmt->relation != NULL)
    {
        /* This is the single-relation case. */
        Oid         tableOid,
                    indexOid = InvalidOid;
        Relation    rel;

        /* Find, lock, and check permissions on the table */
        tableOid = RangeVarGetRelidExtended(stmt->relation,
                                            AccessExclusiveLock,
                                            0,
                                            RangeVarCallbackOwnsTable, NULL);
        rel = table_open(tableOid, NoLock);

        /*
         * Reject clustering a remote temp table ... their local buffer
         * manager is not going to cope.
         */
        if (RELATION_IS_OTHER_TEMP(rel))
            ereport(ERROR,
                    (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
                     errmsg("cannot cluster temporary tables of other sessions")));

        /*
         * Reject clustering a partitioned table.
         */
        if (rel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE)
            ereport(ERROR,
                    (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
                     errmsg("cannot cluster a partitioned table")));

        if (stmt->indexname == NULL)
        {
            ListCell   *index;

            /* We need to find the index that has indisclustered set. */
            foreach(index, RelationGetIndexList(rel))
            {
                indexOid = lfirst_oid(index);
                if (get_index_isclustered(indexOid))
                    break;
                indexOid = InvalidOid;
            }

            if (!OidIsValid(indexOid))
                ereport(ERROR,
                        (errcode(ERRCODE_UNDEFINED_OBJECT),
                         errmsg("there is no previously clustered index for table \"%s\"",
                                stmt->relation->relname)));
        }
        else
        {
            /*
             * The index is expected to be in the same namespace as the
             * relation.
             */
            indexOid = get_relname_relid(stmt->indexname,
                                         rel->rd_rel->relnamespace);
            if (!OidIsValid(indexOid))
                ereport(ERROR,
                        (errcode(ERRCODE_UNDEFINED_OBJECT),
                         errmsg("index \"%s\" for table \"%s\" does not exist",
                                stmt->indexname, stmt->relation->relname)));
        }

        /* close relation, keep lock till commit */
        table_close(rel, NoLock);

        /* Do the job. */
        cluster_rel(tableOid, indexOid, &params);
    }
    else
    {
        /*
         * This is the "multi relation" case. We need to cluster all tables
         * that have some index with indisclustered set.
         */
        MemoryContext cluster_context;
        List       *rvs;
        ListCell   *rv;

        /*
         * We cannot run this form of CLUSTER inside a user transaction block;
         * we'd be holding locks way too long.
         */
        PreventInTransactionBlock(isTopLevel, "CLUSTER");

        /*
         * Create special memory context for cross-transaction storage.
         *
         * Since it is a child of PortalContext, it will go away even in case
         * of error.
         */
        cluster_context = AllocSetContextCreate(PortalContext,
                                                "Cluster",
                                                ALLOCSET_DEFAULT_SIZES);

        /*
         * Build the list of relations to cluster. Note that this lives in
         * cluster_context.
         */
        rvs = get_tables_to_cluster(cluster_context);

        /* Commit to get out of starting transaction */
        PopActiveSnapshot();
        CommitTransactionCommand();

        /* Ok, now that we've got them all, cluster them one by one */
        foreach(rv, rvs)
        {
            RelToCluster *rvtc = (RelToCluster *) lfirst(rv);
            ClusterParams cluster_params = params;

            /* Start a new transaction for each relation. */
            StartTransactionCommand();
            /* functions in indexes may want a snapshot set */
            PushActiveSnapshot(GetTransactionSnapshot());
            /* Do the job. */
            cluster_params.options |= CLUOPT_RECHECK;
            cluster_rel(rvtc->tableOid, rvtc->indexOid,
                        &cluster_params);
            PopActiveSnapshot();
            CommitTransactionCommand();
        }

        /* Start a new transaction for the cleanup work. */
        StartTransactionCommand();

        /* Clean up working storage */
        MemoryContextDelete(cluster_context);
    }
}
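
/*
 * A minimal sketch (not taken from core code) of driving the same machinery
 * directly, e.g. from an extension, assuming tableOid and indexOid identify
 * a valid, already-clusterable table/index pair:
 *
 *      ClusterParams p = {0};
 *
 *      p.options = CLUOPT_VERBOSE;
 *      cluster_rel(tableOid, indexOid, &p);
 */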

/*
 * cluster_rel
 *
 * This clusters the table by creating a new, clustered table and
 * swapping the relfilenodes of the new table and the old table, so
 * the OID of the original table is preserved. Thus we do not lose
 * GRANT, inheritance nor references to this table (this was a bug
 * in releases through 7.3).
 *
 * Indexes are rebuilt too, via REINDEX. Since we are effectively bulk-loading
 * the new table, it's better to create the indexes afterwards than to fill
 * them incrementally while we load the table.
 *
 * If indexOid is InvalidOid, the table will be rewritten in physical order
 * instead of index order. This is the new implementation of VACUUM FULL,
 * and error messages should refer to the operation as VACUUM not CLUSTER.
 */
void
cluster_rel(Oid tableOid, Oid indexOid, ClusterParams *params)
{
    Relation    OldHeap;
    bool        verbose = ((params->options & CLUOPT_VERBOSE) != 0);
    bool        recheck = ((params->options & CLUOPT_RECHECK) != 0);

    /* Check for user-requested abort. */
    CHECK_FOR_INTERRUPTS();

    pgstat_progress_start_command(PROGRESS_COMMAND_CLUSTER, tableOid);
    if (OidIsValid(indexOid))
        pgstat_progress_update_param(PROGRESS_CLUSTER_COMMAND,
                                     PROGRESS_CLUSTER_COMMAND_CLUSTER);
    else
        pgstat_progress_update_param(PROGRESS_CLUSTER_COMMAND,
                                     PROGRESS_CLUSTER_COMMAND_VACUUM_FULL);

    /*
     * We grab exclusive access to the target rel and index for the duration
     * of the transaction. (This is redundant for the single-transaction
     * case, since cluster() already did it.) The index lock is taken inside
     * check_index_is_clusterable.
     */
    OldHeap = try_relation_open(tableOid, AccessExclusiveLock);

    /* If the table has gone away, we can skip processing it */
    if (!OldHeap)
    {
        pgstat_progress_end_command();
        return;
    }

    /*
     * Since we may open a new transaction for each relation, we have to check
     * that the relation still is what we think it is.
     *
     * If this is a single-transaction CLUSTER, we can skip these tests. We
     * *must* skip the one on indisclustered since it would reject an attempt
     * to cluster a not-previously-clustered index.
     */
    if (recheck)
    {
        /* Check that the user still owns the relation */
        if (!pg_class_ownercheck(tableOid, GetUserId()))
        {
            relation_close(OldHeap, AccessExclusiveLock);
            pgstat_progress_end_command();
            return;
        }

        /*
         * Silently skip a temp table for a remote session. Only doing this
         * check in the "recheck" case is appropriate (which currently means
         * somebody is executing a database-wide CLUSTER), because there is
         * another check in cluster() which will stop any attempt to cluster
         * remote temp tables by name. There is another check in cluster_rel
         * which is redundant, but we leave it for extra safety.
         */
        if (RELATION_IS_OTHER_TEMP(OldHeap))
        {
            relation_close(OldHeap, AccessExclusiveLock);
            pgstat_progress_end_command();
            return;
        }

        if (OidIsValid(indexOid))
        {
            /*
             * Check that the index still exists
             */
            if (!SearchSysCacheExists1(RELOID, ObjectIdGetDatum(indexOid)))
            {
                relation_close(OldHeap, AccessExclusiveLock);
                pgstat_progress_end_command();
                return;
            }

            /*
             * Check that the index is still the one with indisclustered set.
             */
            if (!get_index_isclustered(indexOid))
            {
                relation_close(OldHeap, AccessExclusiveLock);
                pgstat_progress_end_command();
                return;
            }
        }
    }

    /*
     * We allow VACUUM FULL, but not CLUSTER, on shared catalogs. CLUSTER
     * would work in most respects, but the index would only get marked as
     * indisclustered in the current database, leading to unexpected behavior
     * if CLUSTER were later invoked in another database.
     */
    if (OidIsValid(indexOid) && OldHeap->rd_rel->relisshared)
        ereport(ERROR,
                (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
                 errmsg("cannot cluster a shared catalog")));

    /*
     * Don't process temp tables of other backends ... their local buffer
     * manager is not going to cope.
     */
    if (RELATION_IS_OTHER_TEMP(OldHeap))
    {
        if (OidIsValid(indexOid))
            ereport(ERROR,
                    (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
                     errmsg("cannot cluster temporary tables of other sessions")));
        else
            ereport(ERROR,
                    (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
                     errmsg("cannot vacuum temporary tables of other sessions")));
    }

    /*
     * Also check for active uses of the relation in the current transaction,
     * including open scans and pending AFTER trigger events.
     */
    CheckTableNotInUse(OldHeap, OidIsValid(indexOid) ? "CLUSTER" : "VACUUM");

    /* Check heap and index are valid to cluster on */
    if (OidIsValid(indexOid))
        check_index_is_clusterable(OldHeap, indexOid, recheck, AccessExclusiveLock);

    /*
     * Quietly ignore the request if this is a materialized view which has not
     * been populated from its query. No harm is done because there is no data
     * to deal with, and we don't want to throw an error if this is part of a
     * multi-relation request -- for example, CLUSTER was run on the entire
     * database.
     */
    if (OldHeap->rd_rel->relkind == RELKIND_MATVIEW &&
        !RelationIsPopulated(OldHeap))
    {
        relation_close(OldHeap, AccessExclusiveLock);
        pgstat_progress_end_command();
        return;
    }

    /*
     * All predicate locks on the tuples or pages are about to be made
     * invalid, because we move tuples around. Promote them to relation
     * locks. Predicate locks on indexes will be promoted when they are
     * reindexed.
     */
    TransferPredicateLocksToHeapRelation(OldHeap);

    /* rebuild_relation does all the dirty work */
    rebuild_relation(OldHeap, indexOid, verbose);

    /* NB: rebuild_relation does table_close() on OldHeap */

    pgstat_progress_end_command();
}
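
/*
 * Illustration: while cluster_rel() runs, the progress reported above can
 * be observed from another session via the pg_stat_progress_cluster view,
 * for example:
 *
 *      SELECT relid::regclass, command, phase
 *      FROM pg_stat_progress_cluster;
 */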

/*
 * Verify that the specified heap and index are valid to cluster on
 *
 * Side effect: obtains lock on the index. The caller may
 * in some cases already have AccessExclusiveLock on the table, but
 * not in all cases so we can't rely on the table-level lock for
 * protection here.
 */
void
check_index_is_clusterable(Relation OldHeap, Oid indexOid, bool recheck, LOCKMODE lockmode)
{
    Relation    OldIndex;

    OldIndex = index_open(indexOid, lockmode);

    /*
     * Check that index is in fact an index on the given relation
     */
    if (OldIndex->rd_index == NULL ||
        OldIndex->rd_index->indrelid != RelationGetRelid(OldHeap))
        ereport(ERROR,
                (errcode(ERRCODE_WRONG_OBJECT_TYPE),
                 errmsg("\"%s\" is not an index for table \"%s\"",
                        RelationGetRelationName(OldIndex),
                        RelationGetRelationName(OldHeap))));

    /* Index AM must allow clustering */
    if (!OldIndex->rd_indam->amclusterable)
        ereport(ERROR,
                (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
                 errmsg("cannot cluster on index \"%s\" because access method does not support clustering",
                        RelationGetRelationName(OldIndex))));

    /*
     * Disallow clustering on incomplete indexes (those that might not index
     * every row of the relation). We could relax this by making a separate
     * seqscan pass over the table to copy the missing rows, but that seems
     * expensive and tedious.
     */
    if (!heap_attisnull(OldIndex->rd_indextuple, Anum_pg_index_indpred, NULL))
        ereport(ERROR,
                (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
                 errmsg("cannot cluster on partial index \"%s\"",
                        RelationGetRelationName(OldIndex))));

    /*
     * Disallow if index is left over from a failed CREATE INDEX CONCURRENTLY;
     * it might well not contain entries for every heap row, or might not even
     * be internally consistent. (But note that we don't check indcheckxmin;
     * the worst consequence of following broken HOT chains would be that we
     * might put recently-dead tuples out-of-order in the new table, and there
     * is little harm in that.)
     */
    if (!OldIndex->rd_index->indisvalid)
        ereport(ERROR,
                (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
                 errmsg("cannot cluster on invalid index \"%s\"",
                        RelationGetRelationName(OldIndex))));

    /* Drop relcache refcnt on OldIndex, but keep lock */
    index_close(OldIndex, NoLock);
}
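
/*
 * Example of an index the checks above reject (hypothetical names): a
 * partial index does not cover every row, so it cannot be clustered on:
 *
 *      CREATE INDEX tab_partial_idx ON tab (col) WHERE col > 0;
 *      CLUSTER tab USING tab_partial_idx;  -- fails: partial index
 */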

/*
 * mark_index_clustered: mark the specified index as the one clustered on
 *
 * With indexOid == InvalidOid, will mark all indexes of rel not-clustered.
 */
void
mark_index_clustered(Relation rel, Oid indexOid, bool is_internal)
{
    HeapTuple   indexTuple;
    Form_pg_index indexForm;
    Relation    pg_index;
    ListCell   *index;

    /* Disallow applying to a partitioned table */
    if (rel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE)
        ereport(ERROR,
                (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
                 errmsg("cannot mark index clustered in partitioned table")));

    /*
     * If the index is already marked clustered, no need to do anything.
     */
    if (OidIsValid(indexOid))
    {
        if (get_index_isclustered(indexOid))
            return;
    }

    /*
     * Check each index of the relation and set/clear the bit as needed.
     */
    pg_index = table_open(IndexRelationId, RowExclusiveLock);

    foreach(index, RelationGetIndexList(rel))
    {
        Oid         thisIndexOid = lfirst_oid(index);

        indexTuple = SearchSysCacheCopy1(INDEXRELID,
                                         ObjectIdGetDatum(thisIndexOid));
        if (!HeapTupleIsValid(indexTuple))
            elog(ERROR, "cache lookup failed for index %u", thisIndexOid);
        indexForm = (Form_pg_index) GETSTRUCT(indexTuple);

        /*
         * Unset the bit if set. We know it's wrong because we checked this
         * earlier.
         */
        if (indexForm->indisclustered)
        {
            indexForm->indisclustered = false;
            CatalogTupleUpdate(pg_index, &indexTuple->t_self, indexTuple);
        }
        else if (thisIndexOid == indexOid)
        {
            /* this was checked earlier, but let's be real sure */
            if (!indexForm->indisvalid)
                elog(ERROR, "cannot cluster on invalid index %u", indexOid);
            indexForm->indisclustered = true;
            CatalogTupleUpdate(pg_index, &indexTuple->t_self, indexTuple);
        }

        InvokeObjectPostAlterHookArg(IndexRelationId, thisIndexOid, 0,
                                     InvalidOid, is_internal);

        heap_freetuple(indexTuple);
    }

    table_close(pg_index, RowExclusiveLock);
}
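
/*
 * Illustration: the catalog effect of mark_index_clustered() can be seen
 * in pg_index, e.g. (hypothetical table name):
 *
 *      SELECT indexrelid::regclass
 *      FROM pg_index
 *      WHERE indrelid = 'tab'::regclass AND indisclustered;
 */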

/*
 * rebuild_relation: rebuild an existing relation in index or physical order
 *
 * OldHeap: table to rebuild --- must be opened and exclusive-locked!
 * indexOid: index to cluster by, or InvalidOid to rewrite in physical order.
 *
 * NB: this routine closes OldHeap at the right time; caller should not.
 */
static void
rebuild_relation(Relation OldHeap, Oid indexOid, bool verbose)
{
    Oid         tableOid = RelationGetRelid(OldHeap);
    Oid         accessMethod = OldHeap->rd_rel->relam;
    Oid         tableSpace = OldHeap->rd_rel->reltablespace;
    Oid         OIDNewHeap;
    char        relpersistence;
    bool        is_system_catalog;
    bool        swap_toast_by_content;
    TransactionId frozenXid;
    MultiXactId cutoffMulti;

    /* Mark the correct index as clustered */
    if (OidIsValid(indexOid))
        mark_index_clustered(OldHeap, indexOid, true);

    /* Remember info about rel before closing OldHeap */
    relpersistence = OldHeap->rd_rel->relpersistence;
    is_system_catalog = IsSystemRelation(OldHeap);

    /* Close relcache entry, but keep lock until transaction commit */
    table_close(OldHeap, NoLock);

    /* Create the transient table that will receive the re-ordered data */
    OIDNewHeap = make_new_heap(tableOid, tableSpace,
                               accessMethod,
                               relpersistence,
                               AccessExclusiveLock);

    /* Copy the heap data into the new table in the desired order */
    copy_table_data(OIDNewHeap, tableOid, indexOid, verbose,
                    &swap_toast_by_content, &frozenXid, &cutoffMulti);

    /*
     * Swap the physical files of the target and transient tables, then
     * rebuild the target's indexes and throw away the transient table.
     */
    finish_heap_swap(tableOid, OIDNewHeap, is_system_catalog,
                     swap_toast_by_content, false, true,
                     frozenXid, cutoffMulti,
                     relpersistence);
}
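
/*
 * In outline, rebuild_relation() above is the three-step rewrite pipeline
 * (argument lists abbreviated here for readability):
 *
 *      OIDNewHeap = make_new_heap(tableOid, ...);
 *      copy_table_data(OIDNewHeap, tableOid, indexOid, ...);
 *      finish_heap_swap(tableOid, OIDNewHeap, ...);
 */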

/*
 * Create the transient table that will be filled with new data during
 * CLUSTER, ALTER TABLE, and similar operations. The transient table
 * duplicates the logical structure of the OldHeap, but will have the
 * specified physical storage properties NewTableSpace, NewAccessMethod, and
 * relpersistence.
 *
 * After this, the caller should load the new heap with transferred/modified
 * data, then call finish_heap_swap to complete the operation.
 */
Oid
make_new_heap(Oid OIDOldHeap, Oid NewTableSpace, Oid NewAccessMethod,
              char relpersistence, LOCKMODE lockmode)
{
    TupleDesc   OldHeapDesc;
    char        NewHeapName[NAMEDATALEN];
    Oid         OIDNewHeap;
    Oid         toastid;
    Relation    OldHeap;
    HeapTuple   tuple;
    Datum       reloptions;
    bool        isNull;
    Oid         namespaceid;

    OldHeap = table_open(OIDOldHeap, lockmode);
    OldHeapDesc = RelationGetDescr(OldHeap);

    /*
     * Note that the NewHeap will not receive any of the defaults or
     * constraints associated with the OldHeap; we don't need 'em, and there's
     * no reason to spend cycles inserting them into the catalogs only to
     * delete them.
     */

    /*
     * But we do want to use reloptions of the old heap for new heap.
     */
    tuple = SearchSysCache1(RELOID, ObjectIdGetDatum(OIDOldHeap));
    if (!HeapTupleIsValid(tuple))
        elog(ERROR, "cache lookup failed for relation %u", OIDOldHeap);
    reloptions = SysCacheGetAttr(RELOID, tuple, Anum_pg_class_reloptions,
                                 &isNull);
    if (isNull)
        reloptions = (Datum) 0;

    if (relpersistence == RELPERSISTENCE_TEMP)
        namespaceid = LookupCreationNamespace("pg_temp");
    else
        namespaceid = RelationGetNamespace(OldHeap);

    /*
     * Create the new heap, using a temporary name in the same namespace as
     * the existing table. NOTE: there is some risk of collision with user
     * relnames. Working around this seems more trouble than it's worth; in
     * particular, we can't create the new heap in a different namespace from
     * the old, or we will have problems with the TEMP status of temp tables.
     *
     * Note: the new heap is not a shared relation, even if we are rebuilding
     * a shared rel. However, we do make the new heap mapped if the source is
     * mapped. This simplifies swap_relation_files, and is absolutely
     * necessary for rebuilding pg_class, for reasons explained there.
     */
    snprintf(NewHeapName, sizeof(NewHeapName), "pg_temp_%u", OIDOldHeap);

    OIDNewHeap = heap_create_with_catalog(NewHeapName,
                                          namespaceid,
                                          NewTableSpace,
                                          InvalidOid,
                                          InvalidOid,
                                          InvalidOid,
                                          OldHeap->rd_rel->relowner,
                                          NewAccessMethod,
                                          OldHeapDesc,
                                          NIL,
                                          RELKIND_RELATION,
                                          relpersistence,
                                          false,
                                          RelationIsMapped(OldHeap),
                                          ONCOMMIT_NOOP,
                                          reloptions,
                                          false,
                                          true,
                                          true,
                                          OIDOldHeap,
                                          NULL);
    Assert(OIDNewHeap != InvalidOid);

    ReleaseSysCache(tuple);

    /*
     * Advance command counter so that the newly-created relation's catalog
     * tuples will be visible to table_open.
     */
    CommandCounterIncrement();

    /*
     * If necessary, create a TOAST table for the new relation.
     *
     * If the relation doesn't have a TOAST table already, we can't need one
     * for the new relation. The other way around is possible though: if some
     * wide columns have been dropped, NewHeapCreateToastTable can decide that
     * no TOAST table is needed for the new table.
     *
     * Note that NewHeapCreateToastTable ends with CommandCounterIncrement, so
     * that the TOAST table will be visible for insertion.
     */
    toastid = OldHeap->rd_rel->reltoastrelid;
    if (OidIsValid(toastid))
    {
        /* keep the existing toast table's reloptions, if any */
        tuple = SearchSysCache1(RELOID, ObjectIdGetDatum(toastid));
        if (!HeapTupleIsValid(tuple))
            elog(ERROR, "cache lookup failed for relation %u", toastid);
        reloptions = SysCacheGetAttr(RELOID, tuple, Anum_pg_class_reloptions,
                                     &isNull);
        if (isNull)
            reloptions = (Datum) 0;

        NewHeapCreateToastTable(OIDNewHeap, reloptions, lockmode, toastid);

        ReleaseSysCache(tuple);
    }

    table_close(OldHeap, NoLock);

    return OIDNewHeap;
}
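
/*
 * Example: for an old heap with OID 16384, the transient heap created by
 * make_new_heap() above is named "pg_temp_16384" (from the snprintf
 * format "pg_temp_%u").
 */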

/*
 * Do the physical copying of table data.
 *
 * There are three output parameters:
 * *pSwapToastByContent is set true if toast tables must be swapped by content.
 * *pFreezeXid receives the TransactionId used as freeze cutoff point.
 * *pCutoffMulti receives the MultiXactId used as a cutoff point.
 */
static void
copy_table_data(Oid OIDNewHeap, Oid OIDOldHeap, Oid OIDOldIndex, bool verbose,
                bool *pSwapToastByContent, TransactionId *pFreezeXid,
                MultiXactId *pCutoffMulti)
{
    Relation    NewHeap,
                OldHeap,
                OldIndex;
    Relation    relRelation;
    HeapTuple   reltup;
    Form_pg_class relform;
    TupleDesc   oldTupDesc PG_USED_FOR_ASSERTS_ONLY;
    TupleDesc   newTupDesc PG_USED_FOR_ASSERTS_ONLY;
    TransactionId OldestXmin;
    TransactionId FreezeXid;
    MultiXactId MultiXactCutoff;
    bool        use_sort;
    double      num_tuples = 0,
                tups_vacuumed = 0,
                tups_recently_dead = 0;
    BlockNumber num_pages;
    int         elevel = verbose ? INFO : DEBUG2;
    PGRUsage    ru0;
    char       *nspname;

    pg_rusage_init(&ru0);

    /*
     * Open the relations we need.
     */
    NewHeap = table_open(OIDNewHeap, AccessExclusiveLock);
    OldHeap = table_open(OIDOldHeap, AccessExclusiveLock);
    if (OidIsValid(OIDOldIndex))
        OldIndex = index_open(OIDOldIndex, AccessExclusiveLock);
    else
        OldIndex = NULL;

    /* Store a copy of the namespace name for logging purposes */
    nspname = get_namespace_name(RelationGetNamespace(OldHeap));

    /*
     * Their tuple descriptors should be exactly alike, but here we only need
     * to assume that they have the same number of columns.
     */
    oldTupDesc = RelationGetDescr(OldHeap);
    newTupDesc = RelationGetDescr(NewHeap);
    Assert(newTupDesc->natts == oldTupDesc->natts);

    /*
     * If the OldHeap has a toast table, get lock on the toast table to keep
     * it from being vacuumed. This is needed because autovacuum processes
     * toast tables independently of their main tables, with no lock on the
     * latter. If an autovacuum were to start on the toast table after we
     * compute our OldestXmin below, it would use a later OldestXmin, and then
     * possibly remove as DEAD toast tuples belonging to main tuples we think
     * are only RECENTLY_DEAD. Then we'd fail while trying to copy those
     * tuples.
     *
     * We don't need to open the toast relation here, just lock it. The lock
     * will be held till end of transaction.
     */
    if (OldHeap->rd_rel->reltoastrelid)
        LockRelationOid(OldHeap->rd_rel->reltoastrelid, AccessExclusiveLock);

    /*
     * If both tables have TOAST tables, perform toast swap by content. It is
     * possible that the old table has a toast table but the new one doesn't,
     * if toastable columns have been dropped. In that case we have to do
     * swap by links. This is okay because swap by content is only essential
|
|
|
|
* for system catalogs, and we don't support schema changes for them.
|
|
|
|
*/
|
|
|
|
if (OldHeap->rd_rel->reltoastrelid && NewHeap->rd_rel->reltoastrelid)
|
|
|
|
{
|
|
|
|
*pSwapToastByContent = true;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* When doing swap by content, any toast pointers written into NewHeap
|
|
|
|
* must use the old toast table's OID, because that's where the toast
|
|
|
|
* data will eventually be found. Set this up by setting rd_toastoid.
|
Fix CLUSTER/VACUUM FULL for toast values owned by recently-updated rows.
In commit 7b0d0e9356963d5c3e4d329a917f5fbb82a2ef05, I made CLUSTER and
VACUUM FULL try to preserve toast value OIDs from the original toast table
to the new one. However, if we have to copy both live and recently-dead
versions of a row that has a toasted column, those versions may well
reference the same toast value with the same OID. The patch then led to
duplicate-key failures as we tried to insert the toast value twice with the
same OID. (The previous behavior was not very desirable either, since it
would have silently inserted the same value twice with different OIDs.
That wastes space, but what's worse is that the toast values inserted for
already-dead heap rows would not be reclaimed by subsequent ordinary
VACUUMs, since they go into the new toast table marked live not deleted.)
To fix, check if the copied OID already exists in the new toast table, and
if so, assume that it stores the desired value. This is reasonably safe
since the only case where we will copy an OID from a previous toast pointer
is when toast_insert_or_update was given that toast pointer and so we just
pulled the data from the old table; if we got two different values that way
then we have big problems anyway. We do have to assume that no other
backend is inserting items into the new toast table concurrently, but
that's surely safe for CLUSTER and VACUUM FULL.
Per bug #6393 from Maxim Boguk. Back-patch to 9.0, same as the previous
patch.
2012-01-12 22:40:14 +01:00
|
|
|
* This also tells toast_save_datum() to preserve the toast value
|
|
|
|
* OIDs, which we want so as not to invalidate toast pointers in
|
|
|
|
* system catalog caches, and to avoid making multiple copies of a
|
|
|
|
* single toast value.
|
2011-08-16 19:48:04 +02:00
|
|
|
*
|
2010-02-04 01:09:14 +01:00
|
|
|
* Note that we must hold NewHeap open until we are done writing data,
|
|
|
|
* since the relcache will not guarantee to remember this setting once
|
|
|
|
* the relation is closed. Also, this technique depends on the fact
|
|
|
|
* that no one will try to read from the NewHeap until after we've
|
|
|
|
* finished writing it and swapping the rels --- otherwise they could
|
2012-01-12 22:40:14 +01:00
|
|
|
* follow the toast pointers to the wrong place. (It would actually
|
|
|
|
* work for values copied over from the old toast table, but not for
|
|
|
|
* any values that we toast which were previously not toasted.)
|
2010-02-04 01:09:14 +01:00
|
|
|
*/
|
|
|
|
NewHeap->rd_toastoid = OldHeap->rd_rel->reltoastrelid;
|
|
|
|
}
|
|
|
|
else
|
|
|
|
*pSwapToastByContent = false;
|
|
|
|
|
2007-05-17 17:28:29 +02:00
|
|
|
/*
|
2014-01-02 21:09:21 +01:00
|
|
|
* Compute xids used to freeze and weed out dead tuples and multixacts.
|
|
|
|
* Since we're going to rewrite the whole table anyway, there's no reason
|
|
|
|
* not to be aggressive about this.
|
2007-05-17 17:28:29 +02:00
|
|
|
*/
|
2020-10-29 01:53:41 +01:00
|
|
|
vacuum_set_xid_limits(OldHeap, 0, 0, 0, 0,
|
2013-11-28 20:52:54 +01:00
|
|
|
&OldestXmin, &FreezeXid, NULL, &MultiXactCutoff,
|
|
|
|
NULL);
|
2007-04-08 03:26:33 +02:00
|
|
|
|
2008-01-15 22:20:28 +01:00
|
|
|
/*
|
|
|
|
* FreezeXid will become the table's new relfrozenxid, and that mustn't go
|
|
|
|
* backwards, so take the max.
|
|
|
|
*/
|
2019-04-24 06:42:12 +02:00
|
|
|
if (TransactionIdIsValid(OldHeap->rd_rel->relfrozenxid) &&
|
|
|
|
TransactionIdPrecedes(FreezeXid, OldHeap->rd_rel->relfrozenxid))
|
2008-01-15 22:20:28 +01:00
|
|
|
FreezeXid = OldHeap->rd_rel->relfrozenxid;
|
|
|
|
|
2014-06-27 20:43:46 +02:00
|
|
|
/*
|
|
|
|
* MultiXactCutoff, similarly, shouldn't go backwards either.
|
|
|
|
*/
|
2019-04-24 06:42:12 +02:00
|
|
|
if (MultiXactIdIsValid(OldHeap->rd_rel->relminmxid) &&
|
|
|
|
MultiXactIdPrecedes(MultiXactCutoff, OldHeap->rd_rel->relminmxid))
|
2014-06-27 20:43:46 +02:00
|
|
|
MultiXactCutoff = OldHeap->rd_rel->relminmxid;
|
|
|
|
|
2005-02-06 21:19:08 +01:00
|
|
|
/*
|
2010-10-08 02:00:28 +02:00
|
|
|
* Decide whether to use an indexscan or seqscan-and-optional-sort to scan
|
|
|
|
* the OldHeap. We know how to use a sort to duplicate the ordering of a
|
|
|
|
* btree index, and will use seqscan-and-sort for that case if the planner
|
|
|
|
* tells us it's cheaper. Otherwise, always indexscan if an index is
|
|
|
|
* provided, else plain seqscan.
|
|
|
|
*/
|
|
|
|
if (OldIndex != NULL && OldIndex->rd_rel->relam == BTREE_AM_OID)
|
|
|
|
use_sort = plan_cluster_use_sort(OIDOldHeap, OIDOldIndex);
|
|
|
|
else
|
|
|
|
use_sort = false;
|
|
|
|
|
2010-10-08 03:46:46 +02:00
|
|
|
/* Log what we're doing */
|
2019-03-29 04:01:14 +01:00
|
|
|
if (OldIndex != NULL && !use_sort)
|
2010-10-08 03:46:46 +02:00
|
|
|
ereport(elevel,
|
|
|
|
(errmsg("clustering \"%s.%s\" using index scan on \"%s\"",
|
2021-08-16 20:06:54 +02:00
|
|
|
nspname,
|
2010-10-08 03:46:46 +02:00
|
|
|
RelationGetRelationName(OldHeap),
|
|
|
|
RelationGetRelationName(OldIndex))));
|
2019-03-29 04:01:14 +01:00
|
|
|
else if (use_sort)
|
2010-10-08 03:46:46 +02:00
|
|
|
ereport(elevel,
|
|
|
|
(errmsg("clustering \"%s.%s\" using sequential scan and sort",
|
2021-08-16 20:06:54 +02:00
|
|
|
nspname,
|
2010-10-08 03:46:46 +02:00
|
|
|
RelationGetRelationName(OldHeap))));
|
|
|
|
else
|
|
|
|
ereport(elevel,
|
|
|
|
(errmsg("vacuuming \"%s.%s\"",
|
2021-08-16 20:06:54 +02:00
|
|
|
nspname,
|
2010-10-08 03:46:46 +02:00
|
|
|
RelationGetRelationName(OldHeap))));
|
|
|
|
|
2010-10-08 02:00:28 +02:00
|
|
|
/*
|
2022-01-25 01:40:04 +01:00
|
|
|
* Hand off the actual copying to the AM-specific function; the generic code
|
2019-04-24 06:42:12 +02:00
|
|
|
* cannot know how to deal with visibility across AMs. Note that this
|
|
|
|
* routine is allowed to set FreezeXid / MultiXactCutoff to different
|
|
|
|
* values (e.g. because the AM doesn't use freezing).
|
2010-10-08 02:00:28 +02:00
|
|
|
*/
|
2019-03-29 04:01:14 +01:00
|
|
|
table_relation_copy_for_cluster(OldHeap, NewHeap, OldIndex, use_sort,
|
2019-04-24 06:42:12 +02:00
|
|
|
OldestXmin, &FreezeXid, &MultiXactCutoff,
|
2019-03-29 04:01:14 +01:00
|
|
|
&num_tuples, &tups_vacuumed,
|
|
|
|
&tups_recently_dead);
|
2005-02-06 21:19:08 +01:00
|
|
|
|
2019-04-24 06:42:12 +02:00
|
|
|
/* Return selected values to the caller; they get set as relfrozenxid/minmxid. */
|
|
|
|
*pFreezeXid = FreezeXid;
|
|
|
|
*pCutoffMulti = MultiXactCutoff;
|
|
|
|
|
2010-02-04 01:09:14 +01:00
|
|
|
/* Reset rd_toastoid just to be tidy --- it shouldn't be looked at again */
|
|
|
|
NewHeap->rd_toastoid = InvalidOid;
|
|
|
|
|
2017-12-27 16:25:37 +01:00
|
|
|
num_pages = RelationGetNumberOfBlocks(NewHeap);
|
|
|
|
|
2010-10-08 03:46:46 +02:00
|
|
|
/* Log what we did */
|
|
|
|
ereport(elevel,
|
2021-08-16 20:06:54 +02:00
|
|
|
(errmsg("\"%s.%s\": found %.0f removable, %.0f nonremovable row versions in %u pages",
|
|
|
|
nspname,
|
2010-10-08 03:46:46 +02:00
|
|
|
RelationGetRelationName(OldHeap),
|
|
|
|
tups_vacuumed, num_tuples,
|
|
|
|
RelationGetNumberOfBlocks(OldHeap)),
|
|
|
|
errdetail("%.0f dead row versions cannot be removed yet.\n"
|
|
|
|
"%s.",
|
|
|
|
tups_recently_dead,
|
|
|
|
pg_rusage_show(&ru0))));
|
|
|
|
|
2010-01-06 06:31:14 +01:00
|
|
|
if (OldIndex != NULL)
|
|
|
|
index_close(OldIndex, NoLock);
|
2019-01-21 19:32:19 +01:00
|
|
|
table_close(OldHeap, NoLock);
|
|
|
|
table_close(NewHeap, NoLock);
|
2017-12-27 16:25:37 +01:00
|
|
|
|
|
|
|
/* Update pg_class to reflect the correct values of pages and tuples. */
|
2019-01-21 19:32:19 +01:00
|
|
|
relRelation = table_open(RelationRelationId, RowExclusiveLock);
|
2017-12-27 16:25:37 +01:00
|
|
|
|
|
|
|
reltup = SearchSysCacheCopy1(RELOID, ObjectIdGetDatum(OIDNewHeap));
|
|
|
|
if (!HeapTupleIsValid(reltup))
|
|
|
|
elog(ERROR, "cache lookup failed for relation %u", OIDNewHeap);
|
|
|
|
relform = (Form_pg_class) GETSTRUCT(reltup);
|
|
|
|
|
|
|
|
relform->relpages = num_pages;
|
|
|
|
relform->reltuples = num_tuples;
|
|
|
|
|
|
|
|
/* Don't update the stats for pg_class. See swap_relation_files. */
|
|
|
|
if (OIDOldHeap != RelationRelationId)
|
|
|
|
CatalogTupleUpdate(relRelation, &reltup->t_self, reltup);
|
|
|
|
else
|
|
|
|
CacheInvalidateRelcacheByTuple(reltup);
|
|
|
|
|
|
|
|
/* Clean up. */
|
|
|
|
heap_freetuple(reltup);
|
2019-01-21 19:32:19 +01:00
|
|
|
table_close(relRelation, RowExclusiveLock);
|
2017-12-27 16:25:37 +01:00
|
|
|
|
|
|
|
/* Make the update visible */
|
|
|
|
CommandCounterIncrement();
|
1996-07-09 08:22:35 +02:00
|
|
|
}
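For orientation, copy_table_data() is the middle step of a three-step rebuild. A minimal sketch of the driving sequence, close to what rebuild_relation() does elsewhere in this file (locking details, progress reporting, and the index-marking step omitted; make_new_heap()'s five-argument form of this vintage is assumed):

/* Sketch: the rebuild sequence that CLUSTER / VACUUM FULL drives. */
static void
rebuild_relation_sketch(Relation OldHeap, Oid indexOid, bool verbose)
{
	Oid			tableOid = RelationGetRelid(OldHeap);
	Oid			tableSpace = OldHeap->rd_rel->reltablespace;
	Oid			accessMethod = OldHeap->rd_rel->relam;
	char		relpersistence = OldHeap->rd_rel->relpersistence;
	bool		is_system_catalog = IsSystemRelation(OldHeap);
	Oid			OIDNewHeap;
	bool		swap_toast_by_content;
	TransactionId frozenXid;
	MultiXactId cutoffMulti;

	/* Close the relcache entry, but keep the lock until commit. */
	table_close(OldHeap, NoLock);

	/* 1. Create the transient table that will receive the data. */
	OIDNewHeap = make_new_heap(tableOid, tableSpace, accessMethod,
							   relpersistence, AccessExclusiveLock);

	/* 2. Copy and freeze the data (the function above). */
	copy_table_data(OIDNewHeap, tableOid, indexOid, verbose,
					&swap_toast_by_content, &frozenXid, &cutoffMulti);

	/* 3. Swap the files, rebuild indexes, drop the transient table. */
	finish_heap_swap(tableOid, OIDNewHeap, is_system_catalog,
					 swap_toast_by_content,
					 false,		/* existing constraints still hold */
					 true,		/* internal operation */
					 frozenXid, cutoffMulti, relpersistence);
}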
|
2002-08-10 22:43:46 +02:00
|
|
|
|
2002-08-11 23:17:35 +02:00
|
|
|
/*
|
2004-07-12 01:13:58 +02:00
|
|
|
* Swap the physical files of two given relations.
|
|
|
|
*
|
2014-08-22 20:27:00 +02:00
|
|
|
* We swap the physical identity (reltablespace, relfilenode) while keeping the
|
|
|
|
* same logical identities of the two relations. relpersistence is also
|
|
|
|
* swapped, which is critical since it determines where buffers live for each
|
|
|
|
* relation.
|
2002-08-11 23:17:35 +02:00
|
|
|
*
|
2010-02-04 01:09:14 +01:00
|
|
|
* We can swap associated TOAST data in either of two ways: recursively swap
|
|
|
|
* the physical content of the toast tables (and their indexes), or swap the
|
|
|
|
* TOAST links in the given relations' pg_class entries. The former is needed
|
|
|
|
* to manage rewrites of shared catalogs (where we cannot change the pg_class
|
|
|
|
* links) while the latter is the only way to handle cases in which a toast
|
|
|
|
* table is added or removed altogether.
|
2007-05-19 01:19:42 +02:00
|
|
|
*
|
|
|
|
* Additionally, the first relation is marked with relfrozenxid set to
|
2010-02-07 21:48:13 +01:00
|
|
|
* frozenXid. It seems a bit ugly to have this here, but the caller would
|
2010-02-04 01:09:14 +01:00
|
|
|
* have to do it anyway, so having it here saves a heap_update. Note: in
|
|
|
|
* the swap-toast-links case, we assume we don't need to change the toast
|
|
|
|
* table's relfrozenxid: the new version of the toast table should already
|
|
|
|
* have relfrozenxid set to RecentXmin, which is good enough.
|
2010-02-07 21:48:13 +01:00
|
|
|
*
|
|
|
|
* Lastly, if r2 and its toast table and toast index (if any) are mapped,
|
|
|
|
* their OIDs are emitted into mapped_tables[]. This is hacky but beats
|
|
|
|
* having to look the information up again later in finish_heap_swap.
|
2002-08-10 22:43:46 +02:00
|
|
|
*/
|
2010-02-07 21:48:13 +01:00
|
|
|
static void
|
|
|
|
swap_relation_files(Oid r1, Oid r2, bool target_is_pg_class,
|
|
|
|
bool swap_toast_by_content,
|
2013-03-18 03:55:14 +01:00
|
|
|
bool is_internal,
|
2010-02-07 21:48:13 +01:00
|
|
|
TransactionId frozenXid,
|
2013-09-16 20:45:00 +02:00
|
|
|
MultiXactId cutoffMulti,
|
2010-02-07 21:48:13 +01:00
|
|
|
Oid *mapped_tables)
|
2002-08-10 22:43:46 +02:00
|
|
|
{
|
2005-03-20 23:00:54 +01:00
|
|
|
Relation relRelation;
|
2002-08-11 23:17:35 +02:00
|
|
|
HeapTuple reltup1,
|
|
|
|
reltup2;
|
|
|
|
Form_pg_class relform1,
|
|
|
|
relform2;
|
2010-02-07 21:48:13 +01:00
|
|
|
Oid relfilenode1,
|
|
|
|
relfilenode2;
|
2002-08-11 23:17:35 +02:00
|
|
|
Oid swaptemp;
|
2014-08-22 20:27:00 +02:00
|
|
|
char swptmpchr;
|
2002-08-10 22:43:46 +02:00
|
|
|
|
2002-08-11 23:17:35 +02:00
|
|
|
/* We need writable copies of both pg_class tuples. */
|
2019-01-21 19:32:19 +01:00
|
|
|
relRelation = table_open(RelationRelationId, RowExclusiveLock);
|
2002-08-10 22:43:46 +02:00
|
|
|
|
2010-02-14 19:42:19 +01:00
|
|
|
reltup1 = SearchSysCacheCopy1(RELOID, ObjectIdGetDatum(r1));
|
2002-08-11 23:17:35 +02:00
|
|
|
if (!HeapTupleIsValid(reltup1))
|
2003-07-20 23:56:35 +02:00
|
|
|
elog(ERROR, "cache lookup failed for relation %u", r1);
|
2002-08-11 23:17:35 +02:00
|
|
|
relform1 = (Form_pg_class) GETSTRUCT(reltup1);
|
|
|
|
|
2010-02-14 19:42:19 +01:00
|
|
|
reltup2 = SearchSysCacheCopy1(RELOID, ObjectIdGetDatum(r2));
|
2002-08-11 23:17:35 +02:00
|
|
|
if (!HeapTupleIsValid(reltup2))
|
2003-07-20 23:56:35 +02:00
|
|
|
elog(ERROR, "cache lookup failed for relation %u", r2);
|
2002-08-11 23:17:35 +02:00
|
|
|
relform2 = (Form_pg_class) GETSTRUCT(reltup2);
|
2002-08-10 22:43:46 +02:00
|
|
|
|
2010-02-07 21:48:13 +01:00
|
|
|
relfilenode1 = relform1->relfilenode;
|
|
|
|
relfilenode2 = relform2->relfilenode;
|
2002-08-10 22:43:46 +02:00
|
|
|
|
2010-02-07 21:48:13 +01:00
|
|
|
if (OidIsValid(relfilenode1) && OidIsValid(relfilenode2))
|
|
|
|
{
|
2014-08-22 20:27:00 +02:00
|
|
|
/*
|
|
|
|
* Normal non-mapped relations: swap relfilenodes, reltablespaces,
|
|
|
|
* relpersistence
|
|
|
|
*/
|
2010-02-07 21:48:13 +01:00
|
|
|
Assert(!target_is_pg_class);
|
2004-07-12 01:13:58 +02:00
|
|
|
|
2010-02-07 21:48:13 +01:00
|
|
|
swaptemp = relform1->relfilenode;
|
|
|
|
relform1->relfilenode = relform2->relfilenode;
|
|
|
|
relform2->relfilenode = swaptemp;
|
|
|
|
|
|
|
|
swaptemp = relform1->reltablespace;
|
|
|
|
relform1->reltablespace = relform2->reltablespace;
|
|
|
|
relform2->reltablespace = swaptemp;
|
|
|
|
|
2021-07-28 03:10:44 +02:00
|
|
|
swaptemp = relform1->relam;
|
|
|
|
relform1->relam = relform2->relam;
|
|
|
|
relform2->relam = swaptemp;
|
|
|
|
|
2014-08-22 20:27:00 +02:00
|
|
|
swptmpchr = relform1->relpersistence;
|
|
|
|
relform1->relpersistence = relform2->relpersistence;
|
|
|
|
relform2->relpersistence = swptmpchr;
|
|
|
|
|
2010-02-07 21:48:13 +01:00
|
|
|
/* Also swap toast links, if we're swapping by links */
|
|
|
|
if (!swap_toast_by_content)
|
|
|
|
{
|
|
|
|
swaptemp = relform1->reltoastrelid;
|
|
|
|
relform1->reltoastrelid = relform2->reltoastrelid;
|
|
|
|
relform2->reltoastrelid = swaptemp;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
else
|
2010-02-04 01:09:14 +01:00
|
|
|
{
|
2010-02-07 21:48:13 +01:00
|
|
|
/*
|
|
|
|
* Mapped-relation case. Here we have to swap the relation mappings
|
|
|
|
* instead of modifying the pg_class columns. Both must be mapped.
|
|
|
|
*/
|
|
|
|
if (OidIsValid(relfilenode1) || OidIsValid(relfilenode2))
|
|
|
|
elog(ERROR, "cannot swap mapped relation \"%s\" with non-mapped relation",
|
|
|
|
NameStr(relform1->relname));
|
|
|
|
|
|
|
|
/*
|
2014-08-22 20:27:00 +02:00
|
|
|
* We can't change the tablespace or persistence of a mapped rel, and
|
|
|
|
* we can't handle toast link swapping for one either, because we must
|
|
|
|
* not apply any critical changes to its pg_class row. These cases
|
|
|
|
* should be prevented by upstream permissions tests, so these checks
|
|
|
|
* are just a non-user-facing emergency backstop.
|
2010-02-07 21:48:13 +01:00
|
|
|
*/
|
|
|
|
if (relform1->reltablespace != relform2->reltablespace)
|
|
|
|
elog(ERROR, "cannot change tablespace of mapped relation \"%s\"",
|
|
|
|
NameStr(relform1->relname));
|
2014-08-22 20:27:00 +02:00
|
|
|
if (relform1->relpersistence != relform2->relpersistence)
|
|
|
|
elog(ERROR, "cannot change persistence of mapped relation \"%s\"",
|
|
|
|
NameStr(relform1->relname));
|
2021-07-28 03:10:44 +02:00
|
|
|
if (relform1->relam != relform2->relam)
|
|
|
|
elog(ERROR, "cannot change access method of mapped relation \"%s\"",
|
|
|
|
NameStr(relform1->relname));
|
2010-02-07 21:48:13 +01:00
|
|
|
if (!swap_toast_by_content &&
|
|
|
|
(relform1->reltoastrelid || relform2->reltoastrelid))
|
|
|
|
elog(ERROR, "cannot swap toast by links for mapped relation \"%s\"",
|
|
|
|
NameStr(relform1->relname));
|
2002-08-10 22:43:46 +02:00
|
|
|
|
2010-02-07 21:48:13 +01:00
|
|
|
/*
|
|
|
|
* Fetch the mappings --- shouldn't fail, but be paranoid
|
|
|
|
*/
|
|
|
|
relfilenode1 = RelationMapOidToFilenode(r1, relform1->relisshared);
|
|
|
|
if (!OidIsValid(relfilenode1))
|
|
|
|
elog(ERROR, "could not find relation mapping for relation \"%s\", OID %u",
|
|
|
|
NameStr(relform1->relname), r1);
|
|
|
|
relfilenode2 = RelationMapOidToFilenode(r2, relform2->relisshared);
|
|
|
|
if (!OidIsValid(relfilenode2))
|
|
|
|
elog(ERROR, "could not find relation mapping for relation \"%s\", OID %u",
|
|
|
|
NameStr(relform2->relname), r2);
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Send replacement mappings to relmapper. Note these won't actually
|
|
|
|
* take effect until CommandCounterIncrement.
|
|
|
|
*/
|
|
|
|
RelationMapUpdateMap(r1, relfilenode2, relform1->relisshared, false);
|
|
|
|
RelationMapUpdateMap(r2, relfilenode1, relform2->relisshared, false);
|
|
|
|
|
|
|
|
/* Pass OIDs of mapped r2 tables back to caller */
|
|
|
|
*mapped_tables++ = r2;
|
2010-02-04 01:09:14 +01:00
|
|
|
}
|
|
|
|
|
Skip WAL for new relfilenodes, under wal_level=minimal.
Until now, only selected bulk operations (e.g. COPY) did this. If a
given relfilenode received both a WAL-skipping COPY and a WAL-logged
operation (e.g. INSERT), recovery could lose tuples from the COPY. See
src/backend/access/transam/README section "Skipping WAL for New
RelFileNode" for the new coding rules. Maintainers of table access
methods should examine that section.
To maintain data durability, just before commit, we choose between an
fsync of the relfilenode and copying its contents to WAL. A new GUC,
wal_skip_threshold, guides that choice. If this change slows a workload
that creates small, permanent relfilenodes under wal_level=minimal, try
adjusting wal_skip_threshold. Users setting a timeout on COMMIT may
need to adjust that timeout, and log_min_duration_statement analysis
will reflect time consumption moving to COMMIT from commands like COPY.
Internally, this requires a reliable determination of whether
RollbackAndReleaseCurrentSubTransaction() would unlink a relation's
current relfilenode. Introduce rd_firstRelfilenodeSubid. Amend the
specification of rd_createSubid such that the field is zero when a new
rel has an old rd_node. Make relcache.c retain entries for certain
dropped relations until end of transaction.
Bump XLOG_PAGE_MAGIC, since this introduces XLOG_GIST_ASSIGN_LSN.
Future servers accept older WAL, so this bump is discretionary.
Kyotaro Horiguchi, reviewed (in earlier, similar versions) by Robert
Haas. Heikki Linnakangas and Michael Paquier implemented earlier
designs that materially clarified the problem. Reviewed, in earlier
designs, by Andrew Dunstan, Andres Freund, Alvaro Herrera, Tom Lane,
Fujii Masao, and Simon Riggs. Reported by Martijn van Oosterhout.
Discussion: https://postgr.es/m/20150702220524.GA9392@svana.org
2020-04-04 21:25:34 +02:00
|
|
|
/*
|
|
|
|
* Recognize that rel1's relfilenode (swapped from rel2) is new in this
|
|
|
|
* subtransaction. The rel2 storage (swapped from rel1) may or may not be
|
|
|
|
* new.
|
|
|
|
*/
|
|
|
|
{
|
|
|
|
Relation rel1,
|
|
|
|
rel2;
|
|
|
|
|
|
|
|
rel1 = relation_open(r1, NoLock);
|
|
|
|
rel2 = relation_open(r2, NoLock);
|
|
|
|
rel2->rd_createSubid = rel1->rd_createSubid;
|
|
|
|
rel2->rd_newRelfilenodeSubid = rel1->rd_newRelfilenodeSubid;
|
|
|
|
rel2->rd_firstRelfilenodeSubid = rel1->rd_firstRelfilenodeSubid;
|
|
|
|
RelationAssumeNewRelfilenode(rel1);
|
|
|
|
relation_close(rel1, NoLock);
|
|
|
|
relation_close(rel2, NoLock);
|
|
|
|
}
|
|
|
|
|
2010-02-04 01:09:14 +01:00
|
|
|
/*
|
2010-02-07 21:48:13 +01:00
|
|
|
* In the case of a shared catalog, these next few steps will only affect
|
|
|
|
* our own database's pg_class row; but that's okay, because they are all
|
|
|
|
* noncritical updates. That's also an important fact for the case of a
|
|
|
|
* mapped catalog, because it's possible that we'll commit the map change
|
|
|
|
* and then fail to commit the pg_class update.
|
2010-02-04 01:09:14 +01:00
|
|
|
*/
|
2002-08-10 22:43:46 +02:00
|
|
|
|
Improve concurrency of foreign key locking
This patch introduces two additional lock modes for tuples: "SELECT FOR
KEY SHARE" and "SELECT FOR NO KEY UPDATE". These don't block each
other, in contrast with already existing "SELECT FOR SHARE" and "SELECT
FOR UPDATE". UPDATE commands that do not modify the values stored in
the columns that are part of the key of the tuple now grab a SELECT FOR
NO KEY UPDATE lock on the tuple, allowing them to proceed concurrently
with tuple locks of the FOR KEY SHARE variety.
Foreign key triggers now use FOR KEY SHARE instead of FOR SHARE; this
means the concurrency improvement applies to them, which is the whole
point of this patch.
The added tuple lock semantics require some rejiggering of the multixact
module, so that the locking level that each transaction is holding can
be stored alongside its Xid. Also, multixacts now need to persist
across server restarts and crashes, because they can now represent not
only tuple locks, but also tuple updates. This means we need more
careful tracking of lifetime of pg_multixact SLRU files; since they now
persist longer, we require more infrastructure to figure out when they
can be removed. pg_upgrade also needs to be careful to copy
pg_multixact files over from the old server to the new, or at least part
of multixact.c state, depending on the versions of the old and new
servers.
Tuple time qualification rules (HeapTupleSatisfies routines) need to be
careful not to consider tuples with the "is multi" infomask bit set as
being only locked; they might need to look up MultiXact values (i.e.
possibly do pg_multixact I/O) to find out the Xid that updated a tuple,
whereas they previously were assured to only use information readily
available from the tuple header. This is considered acceptable, because
the extra I/O would involve cases that would previously cause some
commands to block waiting for concurrent transactions to finish.
Another important change is the fact that locking tuples that have
previously been updated causes the future versions to be marked as
locked, too; this is essential for correctness of foreign key checks.
This causes additional WAL-logging, also (there was previously a single
WAL record for a locked tuple; now there are as many WAL records as
there exist updated copies of the tuple.)
With all this in place, contention related to tuples being checked by
foreign key rules should be much reduced.
As a bonus, the old behavior that a subtransaction grabbing a stronger
tuple lock than the parent (sub)transaction held on a given tuple and
later aborting caused the weaker lock to be lost, has been fixed.
Many new spec files were added for isolation tester framework, to ensure
overall behavior is sane. There's probably room for several more tests.
There were several reviewers of this patch; in particular, Noah Misch
and Andres Freund spent considerable time in it. Original idea for the
patch came from Simon Riggs, after a problem report by Joel Jacobson.
Most code is from me, with contributions from Marti Raudsepp, Alexander
Shulgin, Noah Misch and Andres Freund.
This patch was discussed in several pgsql-hackers threads; the most
important start at the following message-ids:
AANLkTimo9XVcEzfiBR-ut3KVNDkjm2Vxh+t8kAmWjPuv@mail.gmail.com
1290721684-sup-3951@alvh.no-ip.org
1294953201-sup-2099@alvh.no-ip.org
1320343602-sup-2290@alvh.no-ip.org
1339690386-sup-8927@alvh.no-ip.org
4FE5FF020200002500048A3D@gw.wicourts.gov
4FEAB90A0200002500048B7D@gw.wicourts.gov
2013-01-23 16:04:59 +01:00
|
|
|
/* set rel1's frozen Xid and minimum MultiXid */
|
2010-02-04 01:09:14 +01:00
|
|
|
if (relform1->relkind != RELKIND_INDEX)
|
|
|
|
{
|
2019-04-24 06:42:12 +02:00
|
|
|
Assert(!TransactionIdIsValid(frozenXid) ||
|
|
|
|
TransactionIdIsNormal(frozenXid));
|
2010-02-04 01:09:14 +01:00
|
|
|
relform1->relfrozenxid = frozenXid;
|
2013-09-16 20:45:00 +02:00
|
|
|
relform1->relminmxid = cutoffMulti;
|
2010-02-04 01:09:14 +01:00
|
|
|
}
|
2007-05-19 01:19:42 +02:00
|
|
|
|
2002-11-02 22:20:40 +01:00
|
|
|
/* swap size statistics too, since new rel has freshly-updated stats */
|
|
|
|
{
|
2012-06-25 00:51:46 +02:00
|
|
|
int32 swap_pages;
|
2002-11-02 22:20:40 +01:00
|
|
|
float4 swap_tuples;
|
2012-06-25 00:51:46 +02:00
|
|
|
int32 swap_allvisible;
|
2002-11-02 22:20:40 +01:00
|
|
|
|
|
|
|
swap_pages = relform1->relpages;
|
|
|
|
relform1->relpages = relform2->relpages;
|
|
|
|
relform2->relpages = swap_pages;
|
|
|
|
|
|
|
|
swap_tuples = relform1->reltuples;
|
|
|
|
relform1->reltuples = relform2->reltuples;
|
|
|
|
relform2->reltuples = swap_tuples;
|
2011-10-14 23:23:01 +02:00
|
|
|
|
|
|
|
swap_allvisible = relform1->relallvisible;
|
|
|
|
relform1->relallvisible = relform2->relallvisible;
|
|
|
|
relform2->relallvisible = swap_allvisible;
|
2002-11-02 22:20:40 +01:00
|
|
|
}
|
|
|
|
|
2010-02-07 21:48:13 +01:00
|
|
|
/*
|
|
|
|
* Update the tuples in pg_class --- unless the target relation of the
|
|
|
|
* swap is pg_class itself. In that case, there is zero point in making
|
|
|
|
* changes because we'd be updating the old data that we're about to throw
|
|
|
|
* away. Because the real work being done here for a mapped relation is
|
|
|
|
* just to change the relation map settings, it's all right to not update
|
2014-03-04 17:08:18 +01:00
|
|
|
* the pg_class rows in this case. The most important changes will instead be
|
|
|
|
* performed later, in finish_heap_swap() itself.
|
2010-02-07 21:48:13 +01:00
|
|
|
*/
|
|
|
|
if (!target_is_pg_class)
|
|
|
|
{
|
2017-02-01 23:18:36 +01:00
|
|
|
CatalogIndexState indstate;
|
2010-02-07 21:48:13 +01:00
|
|
|
|
|
|
|
indstate = CatalogOpenIndexes(relRelation);
|
2017-02-01 23:18:36 +01:00
|
|
|
CatalogTupleUpdateWithInfo(relRelation, &reltup1->t_self, reltup1,
|
|
|
|
indstate);
|
|
|
|
CatalogTupleUpdateWithInfo(relRelation, &reltup2->t_self, reltup2,
|
|
|
|
indstate);
|
2010-02-07 21:48:13 +01:00
|
|
|
CatalogCloseIndexes(indstate);
|
|
|
|
}
|
|
|
|
else
|
|
|
|
{
|
|
|
|
/* no update ... but we do still need relcache inval */
|
|
|
|
CacheInvalidateRelcacheByTuple(reltup1);
|
|
|
|
CacheInvalidateRelcacheByTuple(reltup2);
|
|
|
|
}
|
2002-08-10 22:43:46 +02:00
|
|
|
|
2013-03-18 03:55:14 +01:00
|
|
|
/*
|
|
|
|
* Post alter hook for modified relations. The change to r2 is always
|
|
|
|
* internal, but r1 depends on the invocation context.
|
|
|
|
*/
|
|
|
|
InvokeObjectPostAlterHookArg(RelationRelationId, r1, 0,
|
|
|
|
InvalidOid, is_internal);
|
|
|
|
InvokeObjectPostAlterHookArg(RelationRelationId, r2, 0,
|
|
|
|
InvalidOid, true);
|
|
|
|
|
2002-08-11 23:17:35 +02:00
|
|
|
/*
|
|
|
|
* If we have toast tables associated with the relations being swapped,
|
2010-02-04 01:09:14 +01:00
|
|
|
* deal with them too.
|
2002-08-11 23:17:35 +02:00
|
|
|
*/
|
|
|
|
if (relform1->reltoastrelid || relform2->reltoastrelid)
|
|
|
|
{
|
2010-02-04 01:09:14 +01:00
|
|
|
if (swap_toast_by_content)
|
2004-05-05 06:48:48 +02:00
|
|
|
{
|
2010-02-04 01:09:14 +01:00
|
|
|
if (relform1->reltoastrelid && relform2->reltoastrelid)
|
|
|
|
{
|
|
|
|
/* Recursively swap the contents of the toast tables */
|
|
|
|
swap_relation_files(relform1->reltoastrelid,
|
|
|
|
relform2->reltoastrelid,
|
2010-02-07 21:48:13 +01:00
|
|
|
target_is_pg_class,
|
|
|
|
swap_toast_by_content,
|
2013-03-18 03:55:14 +01:00
|
|
|
is_internal,
|
2010-02-07 21:48:13 +01:00
|
|
|
frozenXid,
|
2013-09-16 20:45:00 +02:00
|
|
|
cutoffMulti,
|
2010-02-07 21:48:13 +01:00
|
|
|
mapped_tables);
|
2010-02-04 01:09:14 +01:00
|
|
|
}
|
|
|
|
else
|
|
|
|
{
|
|
|
|
/* caller messed up */
|
|
|
|
elog(ERROR, "cannot swap toast files by content when there's only one");
|
|
|
|
}
|
2004-05-05 06:48:48 +02:00
|
|
|
}
|
2010-02-04 01:09:14 +01:00
|
|
|
else
|
2004-05-05 06:48:48 +02:00
|
|
|
{
|
2010-02-04 01:09:14 +01:00
|
|
|
/*
|
|
|
|
* We swapped the ownership links, so we need to change dependency
|
|
|
|
* data to match.
|
|
|
|
*
|
|
|
|
* NOTE: it is possible that only one table has a toast table.
|
|
|
|
*
|
|
|
|
* NOTE: at present, a TOAST table's only dependency is the one on
|
|
|
|
* its owning table. If more are ever created, we'd need to use
|
|
|
|
* something more selective than deleteDependencyRecordsFor() to
|
|
|
|
* get rid of just the link we want.
|
|
|
|
*/
|
|
|
|
ObjectAddress baseobject,
|
|
|
|
toastobject;
|
|
|
|
long count;
|
|
|
|
|
2010-02-07 21:48:13 +01:00
|
|
|
/*
|
|
|
|
* We disallow this case for system catalogs, to avoid the
|
|
|
|
* possibility that the catalog we're rebuilding is one of the
|
|
|
|
* ones the dependency changes would change. It's too late to be
|
|
|
|
* making any data changes to the target catalog.
|
|
|
|
*/
|
Refine our definition of what constitutes a system relation.
Although user-defined relations can't be directly created in
pg_catalog, it's possible for them to end up there, because you can
create them in some other schema and then use ALTER TABLE .. SET SCHEMA
to move them there. Previously, such relations couldn't afterwards
be manipulated, because IsSystemRelation()/IsSystemClass() rejected
all attempts to modify objects in the pg_catalog schema, regardless
of their origin. With this patch, they now reject only those
objects in pg_catalog which were created at initdb-time, allowing
most operations on user-created tables in pg_catalog to proceed
normally.
This patch also adds new functions IsCatalogRelation() and
IsCatalogClass(), which are similar to IsSystemRelation() and
IsSystemClass() but with a slightly narrower definition: only TOAST
tables of system catalogs are included, rather than *all* TOAST tables.
This is currently used only for making decisions about when
invalidation messages need to be sent, but upcoming logical decoding
patches will find other uses for this information.
Andres Freund, with some modifications by me.
2013-11-29 02:57:20 +01:00
|
|
|
if (IsSystemClass(r1, relform1))
|
2010-02-07 21:48:13 +01:00
|
|
|
elog(ERROR, "cannot swap toast files by links for system catalogs");
|
|
|
|
|
2010-02-04 01:09:14 +01:00
|
|
|
/* Delete old dependencies */
|
|
|
|
if (relform1->reltoastrelid)
|
|
|
|
{
|
|
|
|
count = deleteDependencyRecordsFor(RelationRelationId,
|
2011-02-08 22:08:41 +01:00
|
|
|
relform1->reltoastrelid,
|
|
|
|
false);
|
2010-02-04 01:09:14 +01:00
|
|
|
if (count != 1)
|
|
|
|
elog(ERROR, "expected one dependency record for TOAST table, found %ld",
|
|
|
|
count);
|
|
|
|
}
|
|
|
|
if (relform2->reltoastrelid)
|
|
|
|
{
|
|
|
|
count = deleteDependencyRecordsFor(RelationRelationId,
|
2011-02-08 22:08:41 +01:00
|
|
|
relform2->reltoastrelid,
|
|
|
|
false);
|
2010-02-04 01:09:14 +01:00
|
|
|
if (count != 1)
|
|
|
|
elog(ERROR, "expected one dependency record for TOAST table, found %ld",
|
|
|
|
count);
|
|
|
|
}
|
2002-08-11 23:17:35 +02:00
|
|
|
|
2010-02-04 01:09:14 +01:00
|
|
|
/* Register new dependencies */
|
|
|
|
baseobject.classId = RelationRelationId;
|
|
|
|
baseobject.objectSubId = 0;
|
|
|
|
toastobject.classId = RelationRelationId;
|
|
|
|
toastobject.objectSubId = 0;
|
2002-08-11 23:17:35 +02:00
|
|
|
|
2010-02-04 01:09:14 +01:00
|
|
|
if (relform1->reltoastrelid)
|
|
|
|
{
|
|
|
|
baseobject.objectId = r1;
|
|
|
|
toastobject.objectId = relform1->reltoastrelid;
|
|
|
|
recordDependencyOn(&toastobject, &baseobject,
|
|
|
|
DEPENDENCY_INTERNAL);
|
|
|
|
}
|
2002-08-11 23:17:35 +02:00
|
|
|
|
2010-02-04 01:09:14 +01:00
|
|
|
if (relform2->reltoastrelid)
|
|
|
|
{
|
|
|
|
baseobject.objectId = r2;
|
|
|
|
toastobject.objectId = relform2->reltoastrelid;
|
|
|
|
recordDependencyOn(&toastobject, &baseobject,
|
|
|
|
DEPENDENCY_INTERNAL);
|
|
|
|
}
|
2004-05-05 06:48:48 +02:00
|
|
|
}
|
2002-08-11 23:17:35 +02:00
|
|
|
}
|
|
|
|
|
2010-02-04 01:09:14 +01:00
|
|
|
/*
|
|
|
|
* If we're swapping two toast tables by content, do the same for their
|
2013-07-03 20:24:09 +02:00
|
|
|
* valid index. The swap can actually be safely done only if the relations
|
|
|
|
* have indexes.
|
2010-02-04 01:09:14 +01:00
|
|
|
*/
|
|
|
|
if (swap_toast_by_content &&
|
2013-07-03 20:24:09 +02:00
|
|
|
relform1->relkind == RELKIND_TOASTVALUE &&
|
|
|
|
relform2->relkind == RELKIND_TOASTVALUE)
|
|
|
|
{
|
|
|
|
Oid toastIndex1,
|
|
|
|
toastIndex2;
|
|
|
|
|
|
|
|
/* Get valid index for each relation */
|
|
|
|
toastIndex1 = toast_get_valid_index(r1,
|
|
|
|
AccessExclusiveLock);
|
|
|
|
toastIndex2 = toast_get_valid_index(r2,
|
|
|
|
AccessExclusiveLock);
|
|
|
|
|
|
|
|
swap_relation_files(toastIndex1,
|
|
|
|
toastIndex2,
|
2010-02-07 21:48:13 +01:00
|
|
|
target_is_pg_class,
|
|
|
|
swap_toast_by_content,
|
2013-03-18 03:55:14 +01:00
|
|
|
is_internal,
|
2010-02-07 21:48:13 +01:00
|
|
|
InvalidTransactionId,
|
2013-01-23 16:04:59 +01:00
|
|
|
InvalidMultiXactId,
|
2010-02-07 21:48:13 +01:00
|
|
|
mapped_tables);
|
2013-07-03 20:24:09 +02:00
|
|
|
}
|
2002-08-11 23:17:35 +02:00
|
|
|
|
|
|
|
/* Clean up. */
|
|
|
|
heap_freetuple(reltup1);
|
|
|
|
heap_freetuple(reltup2);
|
|
|
|
|
2019-01-21 19:32:19 +01:00
|
|
|
table_close(relRelation, RowExclusiveLock);
|
2010-02-07 21:48:13 +01:00
|
|
|
|
|
|
|
/*
|
|
|
|
* Close both relcache entries' smgr links. We need this kluge because
|
|
|
|
* both links will be invalidated during upcoming CommandCounterIncrement.
|
|
|
|
* Whichever of the rels is the second to be cleared will have a dangling
|
|
|
|
* reference to the other's smgr entry. Rather than trying to avoid this
|
|
|
|
* by ordering operations just so, it's easiest to close the links first.
|
|
|
|
* (Fortunately, since one of the entries is local in our transaction,
|
|
|
|
* it's sufficient to clear out our own relcache this way; the problem
|
|
|
|
* cannot arise for other backends when they see our update on the
|
|
|
|
* non-transient relation.)
|
|
|
|
*
|
|
|
|
* Caution: the placement of this step interacts with the decision to
|
|
|
|
* handle toast rels by recursion. When we are trying to rebuild pg_class
|
|
|
|
* itself, the smgr close on pg_class must happen after all accesses in
|
|
|
|
* this function.
|
|
|
|
*/
|
|
|
|
RelationCloseSmgrByOid(r1);
|
|
|
|
RelationCloseSmgrByOid(r2);
|
2002-08-10 22:43:46 +02:00
|
|
|
}
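The field exchanges above (relfilenode, reltablespace, relam, relpersistence, the toast links, and the size statistics) all repeat one three-assignment idiom; purely as a sketch, a hypothetical macro capturing it (not part of cluster.c):

/* Hypothetical helper macro for the swap idiom used in swap_relation_files(). */
#define SWAP_FIELD(form1, form2, fld, tmp) \
	do { \
		(tmp) = (form1)->fld; \
		(form1)->fld = (form2)->fld; \
		(form2)->fld = (tmp); \
	} while (0)

/* e.g.: SWAP_FIELD(relform1, relform2, relfilenode, swaptemp); */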
|
2002-11-15 04:09:39 +01:00
|
|
|
|
2010-02-04 01:09:14 +01:00
|
|
|
/*
|
|
|
|
* Remove the transient table that was built by make_new_heap, and finish
|
|
|
|
* cleaning up (including rebuilding all indexes on the old heap).
|
|
|
|
*/
|
|
|
|
void
|
2010-02-07 21:48:13 +01:00
|
|
|
finish_heap_swap(Oid OIDOldHeap, Oid OIDNewHeap,
|
|
|
|
bool is_system_catalog,
|
|
|
|
bool swap_toast_by_content,
|
2011-01-21 04:44:10 +01:00
|
|
|
bool check_constraints,
|
2013-03-18 03:55:14 +01:00
|
|
|
bool is_internal,
|
2013-01-23 16:04:59 +01:00
|
|
|
TransactionId frozenXid,
|
2014-11-15 05:19:49 +01:00
|
|
|
MultiXactId cutoffMulti,
|
|
|
|
char newrelpersistence)
|
2010-02-04 01:09:14 +01:00
|
|
|
{
|
|
|
|
ObjectAddress object;
|
2010-02-07 21:48:13 +01:00
|
|
|
Oid mapped_tables[4];
|
2011-01-21 04:44:10 +01:00
|
|
|
int reindex_flags;
|
2021-01-18 06:03:10 +01:00
|
|
|
ReindexParams reindex_params = {0};
|
2010-02-07 21:48:13 +01:00
|
|
|
int i;
|
2010-02-04 01:09:14 +01:00
|
|
|
|
2019-03-25 15:59:04 +01:00
|
|
|
/* Report that we are now swapping relation files */
|
|
|
|
pgstat_progress_update_param(PROGRESS_CLUSTER_PHASE,
|
|
|
|
PROGRESS_CLUSTER_PHASE_SWAP_REL_FILES);
|
|
|
|
|
2010-02-07 21:48:13 +01:00
|
|
|
/* Zero out possible results from swap_relation_files */
|
|
|
|
memset(mapped_tables, 0, sizeof(mapped_tables));
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Swap the contents of the heap relations (including any toast tables).
|
|
|
|
* Also set old heap's relfrozenxid to frozenXid.
|
|
|
|
*/
|
|
|
|
swap_relation_files(OIDOldHeap, OIDNewHeap,
|
|
|
|
(OIDOldHeap == RelationRelationId),
|
2013-03-18 03:55:14 +01:00
|
|
|
swap_toast_by_content, is_internal,
|
2013-09-16 20:45:00 +02:00
|
|
|
frozenXid, cutoffMulti, mapped_tables);
|
2010-02-07 21:48:13 +01:00
|
|
|
|
|
|
|
/*
|
2018-03-29 21:18:53 +02:00
|
|
|
* If it's a system catalog, queue a sinval message to flush all catcaches
|
2010-02-07 21:48:13 +01:00
|
|
|
* on the catalog when we reach CommandCounterIncrement.
|
|
|
|
*/
|
|
|
|
if (is_system_catalog)
|
|
|
|
CacheInvalidateCatalog(OIDOldHeap);
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Rebuild each index on the relation (but not the toast table, which is
|
|
|
|
* all-new at this point). It is important to do this before the DROP
|
|
|
|
* step because if we are processing a system catalog that will be used
|
|
|
|
* during DROP, we want to have its indexes available. There is no
|
|
|
|
* advantage to the other order anyway because this is all transactional,
|
|
|
|
* so no chance to reclaim disk space before commit. We do not need a
|
|
|
|
* final CommandCounterIncrement() because reindex_relation does it.
|
2011-04-20 00:50:56 +02:00
|
|
|
*
|
|
|
|
* Note: because index_build is called via reindex_relation, it will never
|
|
|
|
* set indcheckxmin true for the indexes. This is OK even though in some
|
|
|
|
* sense we are building new indexes rather than rebuilding existing ones,
|
|
|
|
* because the new heap won't contain any HOT chains at all, let alone
|
|
|
|
* broken ones, so it can't be necessary to set indcheckxmin.
|
2010-02-07 21:48:13 +01:00
|
|
|
*/
|
2011-04-16 23:26:41 +02:00
|
|
|
reindex_flags = REINDEX_REL_SUPPRESS_INDEX_USE;
|
2011-01-21 04:44:10 +01:00
|
|
|
if (check_constraints)
|
2011-04-16 23:26:41 +02:00
|
|
|
reindex_flags |= REINDEX_REL_CHECK_CONSTRAINTS;
|
2014-11-15 05:19:49 +01:00
|
|
|
|
|
|
|
/*
|
|
|
|
* Ensure that the indexes have the same persistence as the parent
|
|
|
|
* relation.
|
|
|
|
*/
|
|
|
|
if (newrelpersistence == RELPERSISTENCE_UNLOGGED)
|
|
|
|
reindex_flags |= REINDEX_REL_FORCE_INDEXES_UNLOGGED;
|
|
|
|
else if (newrelpersistence == RELPERSISTENCE_PERMANENT)
|
|
|
|
reindex_flags |= REINDEX_REL_FORCE_INDEXES_PERMANENT;
|
|
|
|
|
2019-03-25 15:59:04 +01:00
|
|
|
/* Report that we are now reindexing relations */
|
|
|
|
pgstat_progress_update_param(PROGRESS_CLUSTER_PHASE,
|
|
|
|
PROGRESS_CLUSTER_PHASE_REBUILD_INDEX);
|
|
|
|
|
2021-01-18 06:03:10 +01:00
|
|
|
reindex_relation(OIDOldHeap, reindex_flags, &reindex_params);
|
2010-02-04 01:09:14 +01:00
|
|
|
|
2019-03-25 15:59:04 +01:00
|
|
|
/* Report that we are now doing clean up */
|
|
|
|
pgstat_progress_update_param(PROGRESS_CLUSTER_PHASE,
|
|
|
|
PROGRESS_CLUSTER_PHASE_FINAL_CLEANUP);
|
|
|
|
|
2014-03-04 17:08:18 +01:00
|
|
|
/*
|
2021-04-09 06:53:07 +02:00
|
|
|
* If the relation being rebuilt is pg_class, swap_relation_files()
|
2014-03-04 17:08:18 +01:00
|
|
|
* couldn't update pg_class's own pg_class entry (check comments in
|
|
|
|
* swap_relation_files()), thus relfrozenxid was not updated. That's
|
|
|
|
* annoying because a potential reason for doing a VACUUM FULL is an
|
|
|
|
* imminent or actual anti-wraparound shutdown. So, now that we can
|
2018-07-09 15:10:44 +02:00
|
|
|
* access the new relation using its indices, update relfrozenxid.
|
2014-03-04 17:08:18 +01:00
|
|
|
* pg_class doesn't have a toast relation, so we don't need to update the
|
|
|
|
* corresponding toast relation. Note that there's little point moving all
|
|
|
|
* relfrozenxid updates here since swap_relation_files() needs to write to
|
|
|
|
* pg_class for non-mapped relations anyway.
|
|
|
|
*/
|
|
|
|
if (OIDOldHeap == RelationRelationId)
|
|
|
|
{
|
|
|
|
Relation relRelation;
|
|
|
|
HeapTuple reltup;
|
|
|
|
Form_pg_class relform;
|
|
|
|
|
2019-01-21 19:32:19 +01:00
|
|
|
relRelation = table_open(RelationRelationId, RowExclusiveLock);
|
2014-03-04 17:08:18 +01:00
|
|
|
|
|
|
|
reltup = SearchSysCacheCopy1(RELOID, ObjectIdGetDatum(OIDOldHeap));
|
|
|
|
if (!HeapTupleIsValid(reltup))
|
|
|
|
elog(ERROR, "cache lookup failed for relation %u", OIDOldHeap);
|
|
|
|
relform = (Form_pg_class) GETSTRUCT(reltup);
|
|
|
|
|
|
|
|
relform->relfrozenxid = frozenXid;
|
|
|
|
relform->relminmxid = cutoffMulti;
|
|
|
|
|
2017-01-31 22:42:24 +01:00
|
|
|
CatalogTupleUpdate(relRelation, &reltup->t_self, reltup);
|
2014-03-04 17:08:18 +01:00
|
|
|
|
2019-01-21 19:32:19 +01:00
|
|
|
table_close(relRelation, RowExclusiveLock);
|
2014-03-04 17:08:18 +01:00
|
|
|
}
|
|
|
|
|
2010-02-04 01:09:14 +01:00
|
|
|
/* Destroy new heap with old filenode */
|
|
|
|
object.classId = RelationRelationId;
|
|
|
|
object.objectId = OIDNewHeap;
|
|
|
|
object.objectSubId = 0;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* The new relation is local to our transaction and we know nothing
|
|
|
|
* depends on it, so DROP_RESTRICT should be OK.
|
|
|
|
*/
|
2012-01-26 15:24:54 +01:00
|
|
|
performDeletion(&object, DROP_RESTRICT, PERFORM_DELETION_INTERNAL);
|
2010-02-04 01:09:14 +01:00
|
|
|
|
|
|
|
/* performDeletion does CommandCounterIncrement at end */
|
|
|
|
|
|
|
|
/*
|
2010-02-07 21:48:13 +01:00
|
|
|
* Now we must remove any relation mapping entries that we set up for the
|
|
|
|
* transient table, as well as its toast table and toast index if any. If
|
|
|
|
* we fail to do this before commit, the relmapper will complain about new
|
|
|
|
* permanent map entries being added post-bootstrap.
|
2010-02-04 01:09:14 +01:00
|
|
|
*/
|
2010-02-07 21:48:13 +01:00
|
|
|
for (i = 0; OidIsValid(mapped_tables[i]); i++)
|
|
|
|
RelationMapRemoveMapping(mapped_tables[i]);
|
2010-02-04 01:09:14 +01:00
|
|
|
|
|
|
|
/*
|
|
|
|
* At this point, everything is kosher except that, if we did toast swap
|
|
|
|
* by links, the toast table's name corresponds to the transient table.
|
|
|
|
* The name is irrelevant to the backend because it's referenced by OID,
|
|
|
|
* but users looking at the catalogs could be confused. Rename it to
|
|
|
|
* prevent this problem.
|
|
|
|
*
|
|
|
|
* Note no lock required on the relation, because we already hold an
|
|
|
|
* exclusive lock on it.
|
|
|
|
*/
|
|
|
|
if (!swap_toast_by_content)
|
|
|
|
{
|
|
|
|
Relation newrel;
|
|
|
|
|
2019-01-21 19:32:19 +01:00
|
|
|
newrel = table_open(OIDOldHeap, NoLock);
|
2010-02-04 01:09:14 +01:00
|
|
|
if (OidIsValid(newrel->rd_rel->reltoastrelid))
|
|
|
|
{
|
|
|
|
Oid toastidx;
|
|
|
|
char NewToastName[NAMEDATALEN];
|
|
|
|
|
2013-07-03 20:24:09 +02:00
|
|
|
/* Get the associated valid index to be renamed */
|
|
|
|
toastidx = toast_get_valid_index(newrel->rd_rel->reltoastrelid,
|
2020-03-21 17:38:26 +01:00
|
|
|
NoLock);
|
2010-02-04 01:09:14 +01:00
|
|
|
|
|
|
|
/* rename the toast table ... */
|
|
|
|
snprintf(NewToastName, NAMEDATALEN, "pg_toast_%u",
|
|
|
|
OIDOldHeap);
|
|
|
|
RenameRelationInternal(newrel->rd_rel->reltoastrelid,
|
2018-10-25 09:33:17 +02:00
|
|
|
NewToastName, true, false);
|
2010-02-04 01:09:14 +01:00
|
|
|
|
2013-07-03 20:24:09 +02:00
|
|
|
/* ... and its valid index too. */
|
2010-02-04 01:09:14 +01:00
|
|
|
snprintf(NewToastName, NAMEDATALEN, "pg_toast_%u_index",
|
|
|
|
OIDOldHeap);
|
2013-07-03 20:24:09 +02:00
|
|
|
|
2010-02-04 01:09:14 +01:00
|
|
|
RenameRelationInternal(toastidx,
|
2018-10-25 09:33:17 +02:00
|
|
|
NewToastName, true, true);
|
2021-08-25 06:23:07 +02:00
|
|
|
|
|
|
|
/*
|
|
|
|
* Reset the relrewrite for the toast. The command-counter
|
|
|
|
* increment is required here as we are about to update
|
|
|
|
* the same tuple that RenameRelationInternal has just updated.
|
|
|
|
*/
|
|
|
|
CommandCounterIncrement();
|
|
|
|
ResetRelRewrite(newrel->rd_rel->reltoastrelid);
|
2010-02-04 01:09:14 +01:00
|
|
|
}
|
|
|
|
relation_close(newrel, NoLock);
|
|
|
|
}
|
2018-03-28 02:13:52 +02:00
|
|
|
|
|
|
|
/* if it's not a catalog table, clear any missing attribute settings */
|
|
|
|
if (!is_system_catalog)
|
|
|
|
{
|
|
|
|
Relation newrel;
|
|
|
|
|
2019-01-21 19:32:19 +01:00
|
|
|
newrel = table_open(OIDOldHeap, NoLock);
|
2018-03-28 02:13:52 +02:00
|
|
|
RelationClearMissing(newrel);
|
|
|
|
relation_close(newrel, NoLock);
|
|
|
|
}
|
2010-02-04 01:09:14 +01:00
|
|
|
}
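Just below, get_tables_to_cluster() returns a list of RelToCluster entries; as a sketch of the shape each entry presumably takes (the actual typedef lives earlier in cluster.c, outside this excerpt):

/* Sketch: one entry per table to be re-clustered. */
typedef struct
{
	Oid			tableOid;	/* table with indisclustered set */
	Oid			indexOid;	/* index it was last clustered on */
} RelToCluster;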
|
|
|
|
|
|
|
|
|
2002-12-30 19:42:17 +01:00
|
|
|
/*
|
|
|
|
* Get a list of tables that the current user owns and that
|
2019-08-13 06:53:41 +02:00
|
|
|
* have indisclustered set. Return the list in a List * of RelToCluster
|
2020-02-15 21:22:40 +01:00
|
|
|
* (stored in the specified memory context), each one giving the tableOid
|
|
|
|
* and the indexOid on which the table is already clustered.
|
2002-11-15 04:09:39 +01:00
|
|
|
*/
|
2002-12-30 19:42:17 +01:00
|
|
|
static List *
|
|
|
|
get_tables_to_cluster(MemoryContext cluster_context)
|
2002-11-15 04:09:39 +01:00
|
|
|
{
|
|
|
|
Relation indRelation;
|
tableam: Add and use scan APIs.
To allow table accesses not to be directly dependent on heap, several
new abstractions are needed. Specifically:
1) Heap scans need to be generalized into table scans. Do this by
introducing TableScanDesc, which will be the "base class" for
individual AMs. This contains the AM independent fields from
HeapScanDesc.
The previous heap_{beginscan,rescan,endscan} et al. have been
replaced with a table_ version.
There's no direct replacement for heap_getnext(), as that returned
a HeapTuple, which is undesirable for a other AMs. Instead there's
table_scan_getnextslot(). But note that heap_getnext() lives on,
it's still used widely to access catalog tables.
This is achieved by new scan_begin, scan_end, scan_rescan,
scan_getnextslot callbacks.
2) The portion of parallel scans that's shared between backends need
to be able to do so without the user doing per-AM work. To achieve
that new parallelscan_{estimate, initialize, reinitialize}
callbacks are introduced, which operate on a new
ParallelTableScanDesc, which again can be subclassed by AMs.
As it is likely that several AMs are going to be block oriented,
block oriented callbacks that can be shared between such AMs are
provided and used by heap. table_block_parallelscan_{estimate,
intiialize, reinitialize} as callbacks, and
table_block_parallelscan_{nextpage, init} for use in AMs. These
operate on a ParallelBlockTableScanDesc.
3) Index scans need to be able to access tables to return a tuple, and
there needs to be state across individual accesses to the heap to
store state like buffers. That's now handled by introducing a
sort-of-scan IndexFetchTable, which again is intended to be
subclassed by individual AMs (for heap IndexFetchHeap).
The relevant callbacks for an AM are index_fetch_{end, begin,
reset} to create the necessary state, and index_fetch_tuple to
retrieve an indexed tuple. Note that index_fetch_tuple
implementations need to be smarter than just blindly fetching the
tuples for AMs that have optimizations similar to heap's HOT - the
currently alive tuple in the update chain needs to be fetched if
appropriate.
Similar to table_scan_getnextslot(), it's undesirable to continue
to return HeapTuples. Thus index_fetch_heap (might want to rename
that later) now accepts a slot as an argument. Core code doesn't
have a lot of call sites performing index scans without going
through the systable_* API (in contrast to loads of heap_getnext
calls and working directly with HeapTuples).
Index scans now store the result of a search in
IndexScanDesc->xs_heaptid, rather than xs_ctup->t_self. As the
target is not generally a HeapTuple anymore that seems cleaner.
To be able to sensible adapt code to use the above, two further
callbacks have been introduced:
a) slot_callbacks returns a TupleTableSlotOps* suitable for creating
slots capable of holding a tuple of the AMs
type. table_slot_callbacks() and table_slot_create() are based
upon that, but have additional logic to deal with views, foreign
tables, etc.
While this change could have been done separately, nearly all the
call sites that needed to be adapted for the rest of this commit
also would have been needed to be adapted for
table_slot_callbacks(), making separation not worthwhile.
b) tuple_satisfies_snapshot checks whether the tuple in a slot is
currently visible according to a snapshot. That's required as a few
places now don't have a buffer + HeapTuple around, but a
slot (which in heap's case internally has that information).
Additionally a few infrastructure changes were needed:
I) SysScanDesc, as used by systable_{beginscan, getnext} et al. now
internally uses a slot to keep track of tuples. While
systable_getnext() still returns HeapTuples, and will so for the
foreseeable future, the index API (see 1) above) now only deals with
slots.
The remainder, and largest part, of this commit is then adjusting all
scans in postgres to use the new APIs.
Author: Andres Freund, Haribabu Kommi, Alvaro Herrera
Discussion:
https://postgr.es/m/20180703070645.wchpu5muyto5n647@alap3.anarazel.de
https://postgr.es/m/20160812231527.GA690404@alvherre.pgsql
2019-03-11 20:46:41 +01:00
|
|
|
TableScanDesc scan;
|
2002-11-15 04:09:39 +01:00
|
|
|
ScanKeyData entry;
|
|
|
|
HeapTuple indexTuple;
|
|
|
|
Form_pg_index index;
|
2002-12-30 19:42:17 +01:00
|
|
|
MemoryContext old_context;
|
|
|
|
RelToCluster *rvtc;
|
2002-11-15 04:09:39 +01:00
|
|
|
List *rvs = NIL;
|
|
|
|
|
|
|
|
	/*
	 * Get all indexes that have indisclustered set and are owned by the
	 * appropriate user.
	 */
	indRelation = table_open(IndexRelationId, AccessShareLock);
	ScanKeyInit(&entry,
				Anum_pg_index_indisclustered,
				BTEqualStrategyNumber, F_BOOLEQ,
				BoolGetDatum(true));
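
	/*
	 * Scan pg_index sequentially; the scan key above returns only tuples
	 * with indisclustered set.
	 */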
	scan = table_beginscan_catalog(indRelation, 1, &entry);
	while ((indexTuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
	{
		index = (Form_pg_index) GETSTRUCT(indexTuple);

		if (!pg_class_ownercheck(index->indrelid, GetUserId()))
			continue;

		/*
		 * We have to build the list in a different memory context so that it
		 * survives the cross-transaction processing.
		 */
		old_context = MemoryContextSwitchTo(cluster_context);

		rvtc = (RelToCluster *) palloc(sizeof(RelToCluster));
		rvtc->tableOid = index->indrelid;
		rvtc->indexOid = index->indexrelid;
		rvs = lappend(rvs, rvtc);

		MemoryContextSwitchTo(old_context);
	}
	table_endscan(scan);

	relation_close(indRelation, AccessShareLock);

	return rvs;
}
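
/*
 * Illustrative sketch only (compiled out): one way a caller could consume
 * the list built by get_tables_to_cluster().  The actual consumer is the
 * multi-relation loop in cluster(); this hypothetical walker merely logs
 * each table/index pair.
 */
#ifdef NOT_USED
static void
log_tables_to_cluster(MemoryContext cluster_context)
{
	List	   *rvs = get_tables_to_cluster(cluster_context);
	ListCell   *lc;

	foreach(lc, rvs)
	{
		RelToCluster *rvtc = (RelToCluster *) lfirst(lc);

		elog(DEBUG1, "table %u is clustered on index %u",
			 rvtc->tableOid, rvtc->indexOid);
	}
}
#endif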