Fix assorted bugs in CREATE/DROP INDEX CONCURRENTLY.

Commit 8cb53654db, which introduced DROP
INDEX CONCURRENTLY, managed to break CREATE INDEX CONCURRENTLY via a poor
choice of catalog state representation.  The pg_index state for an index
that's reached the final pre-drop stage was the same as the state for an
index just created by CREATE INDEX CONCURRENTLY.  This meant that the
(necessary) change to make RelationGetIndexList ignore about-to-die indexes
also made it ignore freshly-created indexes; which is catastrophic because
the latter do need to be considered in HOT-safety decisions.  Failure to
do so leads to incorrect index entries and subsequently wrong results from
queries depending on the concurrently-created index.

To fix, add an additional boolean column "indislive" to pg_index, so that
the freshly-created and about-to-die states can be distinguished.  (This
change obviously is only possible in HEAD.  This patch will need to be
back-patched, but in 9.2 we'll use a kluge consisting of overloading the
formerly-impossible state of indisvalid = true and indisready = false.)

In addition, change CREATE/DROP INDEX CONCURRENTLY so that the pg_index
flag changes they make without exclusive lock on the index are made via
heap_inplace_update() rather than a normal transactional update.  The
latter is not very safe because moving the pg_index tuple could result in
concurrent SnapshotNow scans finding it twice or not at all, thus possibly
resulting in index corruption.  This is a pre-existing bug in CREATE INDEX
CONCURRENTLY, which was copied into the DROP code.

In addition, fix various places in the code that ought to check to make
sure that the indexes they are manipulating are valid and/or ready as
appropriate.  These represent bugs that have existed since 8.2, since
a failed CREATE INDEX CONCURRENTLY could leave a corrupt or invalid
index behind, and we ought not try to do anything that might fail with
such an index.

Also fix RelationReloadIndexInfo to ensure it copies all the pg_index
columns that are allowed to change after initial creation.  Previously we
could have been left with stale values of some fields in an index relcache
entry.  It's not clear whether this actually had any user-visible
consequences, but it's at least a bug waiting to happen.

In addition, do some code and docs review for DROP INDEX CONCURRENTLY;
some cosmetic code cleanup but mostly addition and revision of comments.

This will need to be back-patched, but in a noticeably different form,
so I'm committing it to HEAD before working on the back-patch.

Problem reported by Amit Kapila, diagnosis by Pavan Deolasee,
fix by Tom Lane and Andres Freund.
Author: Tom Lane  2012-11-28 21:25:27 -05:00
Commit: 3c84046490  (parent 1577b46b7c)
17 changed files with 463 additions and 264 deletions


@@ -141,8 +141,8 @@ triggered_change_notification(PG_FUNCTION_ARGS)
 		if (!HeapTupleIsValid(indexTuple))	/* should not happen */
 			elog(ERROR, "cache lookup failed for index %u", indexoid);
 		index = (Form_pg_index) GETSTRUCT(indexTuple);
-		/* we're only interested if it is the primary key */
-		if (index->indisprimary)
+		/* we're only interested if it is the primary key and valid */
+		if (index->indisprimary && IndexIsValid(index))
 		{
 			int			numatts = index->indnatts;


@@ -3480,7 +3480,7 @@
        index is possibly incomplete: it must still be modified by
        <command>INSERT</>/<command>UPDATE</> operations, but it cannot safely
        be used for queries. If it is unique, the uniqueness property is not
-       true either.
+       guaranteed true either.
       </entry>
      </row>

@@ -3507,6 +3507,16 @@
       </entry>
      </row>

+     <row>
+      <entry><structfield>indislive</structfield></entry>
+      <entry><type>bool</type></entry>
+      <entry></entry>
+      <entry>
+       If false, the index is in process of being dropped, and should be
+       ignored for all purposes (including HOT-safety decisions)
+      </entry>
+     </row>
+
      <row>
       <entry><structfield>indkey</structfield></entry>
       <entry><type>int2vector</type></entry>


@@ -40,34 +40,33 @@ DROP INDEX [ CONCURRENTLY ] [ IF EXISTS ] <replaceable class="PARAMETER">name</r
  <variablelist>
   <varlistentry>
-   <term><literal>IF EXISTS</literal></term>
+   <term><literal>CONCURRENTLY</literal></term>
    <listitem>
     <para>
-     Do not throw an error if the index does not exist. A notice is issued
-     in this case.
+     Drop the index without locking out concurrent selects, inserts, updates,
+     and deletes on the index's table.  A normal <command>DROP INDEX</>
+     acquires exclusive lock on the table, blocking other accesses until the
+     index drop can be completed.  With this option, the command instead
+     waits until conflicting transactions have completed.
+    </para>
+    <para>
+     There are several caveats to be aware of when using this option.
+     Only one index name can be specified, and the <literal>CASCADE</> option
+     is not supported.  (Thus, an index that supports a <literal>UNIQUE</> or
+     <literal>PRIMARY KEY</> constraint cannot be dropped this way.)
+     Also, regular <command>DROP INDEX</> commands can be
+     performed within a transaction block, but
+     <command>DROP INDEX CONCURRENTLY</> cannot.
     </para>
    </listitem>
   </varlistentry>

   <varlistentry>
-   <term><literal>CONCURRENTLY</literal></term>
+   <term><literal>IF EXISTS</literal></term>
    <listitem>
     <para>
-     When this option is used, <productname>PostgreSQL</> will drop the
-     index without taking any locks that prevent concurrent selects, inserts,
-     updates, or deletes on the table; whereas a standard index drop
-     waits for a lock that locks out everything on the table until it's done.
-     Concurrent drop index is a two stage process. First, we mark the index
-     both invalid and not ready then commit the change. Next we wait until
-     there are no users locking the table who can see the index.
-    </para>
-    <para>
-     There are several caveats to be aware of when using this option.
-     Only one index name can be specified if the <literal>CONCURRENTLY</literal>
-     parameter is specified. Regular <command>DROP INDEX</> command can be
-     performed within a transaction block, but
-     <command>DROP INDEX CONCURRENTLY</> cannot.
-     The CASCADE option is not supported when dropping an index concurrently.
+     Do not throw an error if the index does not exist. A notice is issued
+     in this case.
    </para>
    </listitem>
   </varlistentry>


@@ -386,6 +386,34 @@
 from the index, as well as ensuring that no one can see any inconsistent
 rows in a broken HOT chain (the first condition is stronger than the
 second).  Finally, we can mark the index valid for searches.

+Note that we do not need to set pg_index.indcheckxmin in this code path,
+because we have outwaited any transactions that would need to avoid using
+the index.  (indcheckxmin is only needed because non-concurrent CREATE
+INDEX doesn't want to wait; its stronger lock would create too much risk of
+deadlock if it did.)
+
+
+DROP INDEX CONCURRENTLY
+-----------------------
+
+DROP INDEX CONCURRENTLY is sort of the reverse sequence of CREATE INDEX
+CONCURRENTLY.  We first mark the index as not indisvalid, and then wait for
+any transactions that could be using it in queries to end.  (During this
+time, index updates must still be performed as normal, since such
+transactions might expect freshly inserted tuples to be findable.)
+Then, we clear indisready and indislive, and again wait for transactions
+that could be updating the index to end.  Finally we can drop the index
+normally (though taking only ShareUpdateExclusiveLock on its parent table).
+
+The reason we need the pg_index.indislive flag is that after the second
+wait step begins, we don't want transactions to be touching the index at
+all; otherwise they might suffer errors if the DROP finally commits while
+they are reading catalog entries for the index.  If we had only indisvalid
+and indisready, this state would be indistinguishable from the first stage
+of CREATE INDEX CONCURRENTLY --- but in that state, we *do* want
+transactions to examine the index, since they must consider it in
+HOT-safety checks.
+
 Limitations and Restrictions
 ----------------------------


@@ -995,7 +995,6 @@ deleteOneObject(const ObjectAddress *object, Relation depRel, int flags)
 	int			nkeys;
 	SysScanDesc scan;
 	HeapTuple	tup;
-	Oid			depRelOid = depRel->rd_id;

 	/* DROP hook of the objects being removed */
 	if (object_access_hook)
@@ -1008,9 +1007,9 @@ deleteOneObject(const ObjectAddress *object, Relation depRel, int flags)
 	}

 	/*
-	 * Close depRel if we are doing a drop concurrently. The individual
-	 * deletion has to commit the transaction and we don't want dangling
-	 * references.
+	 * Close depRel if we are doing a drop concurrently.  The object deletion
+	 * subroutine will commit the current transaction, so we can't keep the
+	 * relation open across doDeletion().
 	 */
 	if (flags & PERFORM_DELETION_CONCURRENTLY)
 		heap_close(depRel, RowExclusiveLock);
@@ -1018,24 +1017,23 @@ deleteOneObject(const ObjectAddress *object, Relation depRel, int flags)
 	/*
 	 * Delete the object itself, in an object-type-dependent way.
 	 *
-	 * Do this before removing outgoing dependencies as deletions can be
-	 * happening in concurrent mode. That will only happen for a single object
-	 * at once and if so the object will be invalidated inside a separate
-	 * transaction and only dropped inside a transaction thats in-progress when
-	 * doDeletion returns. This way no observer can see dangling dependency
-	 * entries.
+	 * We used to do this after removing the outgoing dependency links, but it
+	 * seems just as reasonable to do it beforehand.  In the concurrent case
+	 * we *must* do it in this order, because we can't make any transactional
+	 * updates before calling doDeletion() --- they'd get committed right
+	 * away, which is not cool if the deletion then fails.
 	 */
 	doDeletion(object, flags);

 	/*
-	 * Reopen depRel if we closed it before
+	 * Reopen depRel if we closed it above
 	 */
 	if (flags & PERFORM_DELETION_CONCURRENTLY)
-		depRel = heap_open(depRelOid, RowExclusiveLock);
+		depRel = heap_open(DependRelationId, RowExclusiveLock);

 	/*
-	 * Then remove any pg_depend records that link from this object to
-	 * others. (Any records linking to this object should be gone already.)
+	 * Now remove any pg_depend records that link from this object to others.
+	 * (Any records linking to this object should be gone already.)
 	 *
 	 * When dropping a whole object (subId = 0), remove all pg_depend records
 	 * for its sub-objects too.
@@ -1258,15 +1256,23 @@ AcquireDeletionLock(const ObjectAddress *object, int flags)
 {
 	if (object->classId == RelationRelationId)
 	{
+		/*
+		 * In DROP INDEX CONCURRENTLY, take only ShareUpdateExclusiveLock on
+		 * the index for the moment.  index_drop() will promote the lock once
+		 * it's safe to do so.  In all other cases we need full exclusive
+		 * lock.
+		 */
 		if (flags & PERFORM_DELETION_CONCURRENTLY)
 			LockRelationOid(object->objectId, ShareUpdateExclusiveLock);
 		else
 			LockRelationOid(object->objectId, AccessExclusiveLock);
 	}
 	else
+	{
 		/* assume we should lock the whole object not a sub-object */
 		LockDatabaseObject(object->classId, object->objectId, 0,
 						   AccessExclusiveLock);
+	}
 }

 /*
/* /*


@@ -125,6 +125,10 @@ static void ResetReindexPending(void);
  * See whether an existing relation has a primary key.
  *
  * Caller must have suitable lock on the relation.
+ *
+ * Note: we intentionally do not check IndexIsValid here; that's because this
+ * is used to enforce the rule that there can be only one indisprimary index,
+ * and we want that to be true even if said index is invalid.
  */
 static bool
 relationHasPrimaryKey(Relation rel)
@@ -608,6 +612,7 @@ UpdateIndexRelation(Oid indexoid,
 	values[Anum_pg_index_indcheckxmin - 1] = BoolGetDatum(false);
 	/* we set isvalid and isready the same way */
 	values[Anum_pg_index_indisready - 1] = BoolGetDatum(isvalid);
+	values[Anum_pg_index_indislive - 1] = BoolGetDatum(true);
 	values[Anum_pg_index_indkey - 1] = PointerGetDatum(indkey);
 	values[Anum_pg_index_indcollation - 1] = PointerGetDatum(indcollation);
 	values[Anum_pg_index_indclass - 1] = PointerGetDatum(indclass);
@@ -1258,8 +1263,9 @@ index_constraint_create(Relation heapRelation,
 	 * Note: since this is a transactional update, it's unsafe against
 	 * concurrent SnapshotNow scans of pg_index.  When making an existing
 	 * index into a constraint, caller must have a table lock that prevents
-	 * concurrent table updates, and there is a risk that concurrent readers
-	 * of the table will miss seeing this index at all.
+	 * concurrent table updates; if it's less than a full exclusive lock,
+	 * there is a risk that concurrent readers of the table will miss seeing
+	 * this index at all.
 	 */
 	if (update_pgindex && (mark_as_primary || deferrable))
 	{
@@ -1317,8 +1323,8 @@ index_drop(Oid indexId, bool concurrent)
 	LockRelId	heaprelid,
 				indexrelid;
 	LOCKTAG		heaplocktag;
+	LOCKMODE	lockmode;
 	VirtualTransactionId *old_lockholders;
-	Form_pg_index indexForm;

 	/*
 	 * To drop an index safely, we must grab exclusive lock on its parent
@@ -1330,105 +1336,101 @@ index_drop(Oid indexId, bool concurrent)
 	 * proceeding until we commit and send out a shared-cache-inval notice
 	 * that will make them update their index lists.
 	 *
-	 * In the concurrent case we make sure that nobody can be looking at the
-	 * indexes by dropping the index in multiple steps, so we don't need a full
-	 * AccessExclusiveLock yet.
-	 *
-	 * All predicate locks on the index are about to be made invalid. Promote
-	 * them to relation locks on the heap. For correctness the index must not
-	 * be seen with indisvalid = true during query planning after the move
-	 * starts, so that the index will not be used for a scan after the
-	 * predicate lock move, as this could create new predicate locks on the
-	 * index which would not ensure a heap relation lock. Also, the index must
-	 * not be seen during execution of a heap tuple insert with indisready =
-	 * false before the move is complete, since the conflict with the
-	 * predicate lock on the index gap could be missed before the lock on the
-	 * heap relation is in place to detect a conflict based on the heap tuple
-	 * insert.
+	 * In the concurrent case we avoid this requirement by disabling index use
+	 * in multiple steps and waiting out any transactions that might be using
+	 * the index, so we don't need exclusive lock on the parent table. Instead
+	 * we take ShareUpdateExclusiveLock, to ensure that two sessions aren't
+	 * doing CREATE/DROP INDEX CONCURRENTLY on the same index.  (We will get
+	 * AccessExclusiveLock on the index below, once we're sure nobody else is
+	 * using it.)
 	 */
 	heapId = IndexGetRelation(indexId, false);
-	if (concurrent)
-	{
-		userHeapRelation = heap_open(heapId, ShareUpdateExclusiveLock);
-		userIndexRelation = index_open(indexId, ShareUpdateExclusiveLock);
-	}
-	else
-	{
-		userHeapRelation = heap_open(heapId, AccessExclusiveLock);
-		userIndexRelation = index_open(indexId, AccessExclusiveLock);
-	}
+	lockmode = concurrent ? ShareUpdateExclusiveLock : AccessExclusiveLock;
+	userHeapRelation = heap_open(heapId, lockmode);
+	userIndexRelation = index_open(indexId, lockmode);

 	/*
-	 * We might still have open queries using it in our own session.
+	 * We might still have open queries using it in our own session, which the
+	 * above locking won't prevent, so test explicitly.
 	 */
 	CheckTableNotInUse(userIndexRelation, "DROP INDEX");

 	/*
-	 * Drop Index concurrently is similar in many ways to creating an index
-	 * concurrently, so some actions are similar to DefineIndex() just in the
-	 * reverse order.
+	 * Drop Index Concurrently is more or less the reverse process of Create
+	 * Index Concurrently.
 	 *
 	 * First we unset indisvalid so queries starting afterwards don't use the
-	 * index to answer queries anymore. We have to keep indisready = true
-	 * so transactions that are still scanning the index can continue to
-	 * see valid index contents. E.g. when they are using READ COMMITTED mode,
-	 * and another transactions that started later commits makes changes and
-	 * commits, they need to see those new tuples in the index.
+	 * index to answer queries anymore.  We have to keep indisready = true so
+	 * transactions that are still scanning the index can continue to see
+	 * valid index contents.  For instance, if they are using READ COMMITTED
+	 * mode, and another transaction makes changes and commits, they need to
+	 * see those new tuples in the index.
 	 *
-	 * After all transactions that could possibly have used it for queries
-	 * ended we can unset indisready and wait till nobody could be updating it
-	 * anymore.
+	 * After all transactions that could possibly have used the index for
+	 * queries end, we can unset indisready and indislive, then wait till
+	 * nobody could be touching it anymore.  (Note: we need indislive because
+	 * this state must be distinct from the initial state during CREATE INDEX
+	 * CONCURRENTLY, which has indislive true while indisready and indisvalid
+	 * are false.  That's because in that state, transactions must examine the
+	 * index for HOT-safety decisions, while in this state we don't want them
+	 * to open it at all.)
+	 *
+	 * Since all predicate locks on the index are about to be made invalid, we
+	 * must promote them to predicate locks on the heap.  In the
+	 * non-concurrent case we can just do that now.  In the concurrent case
+	 * it's a bit trickier.  The predicate locks must be moved when there are
+	 * no index scans in progress on the index and no more can subsequently
+	 * start, so that no new predicate locks can be made on the index.  Also,
+	 * they must be moved before heap inserts stop maintaining the index, else
+	 * the conflict with the predicate lock on the index gap could be missed
+	 * before the lock on the heap relation is in place to detect a conflict
+	 * based on the heap tuple insert.
 	 */
 	if (concurrent)
 	{
 		/*
-		 * Mark index invalid by updating its pg_index entry
-		 *
-		 * Don't Assert(indexForm->indisvalid) because we may be trying to
-		 * clear up after an error when trying to create an index which left
-		 * the index invalid
+		 * We must commit our transaction in order to make the first pg_index
+		 * state update visible to other sessions.  If the DROP machinery has
+		 * already performed any other actions (removal of other objects,
+		 * pg_depend entries, etc), the commit would make those actions
+		 * permanent, which would leave us with inconsistent catalog state if
+		 * we fail partway through the following sequence.  Since DROP INDEX
+		 * CONCURRENTLY is restricted to dropping just one index that has no
+		 * dependencies, we should get here before anything's been done ---
+		 * but let's check that to be sure.  We can verify that the current
+		 * transaction has not executed any transactional updates by checking
+		 * that no XID has been assigned.
 		 */
-		indexRelation = heap_open(IndexRelationId, RowExclusiveLock);
-
-		tuple = SearchSysCacheCopy1(INDEXRELID,
-									ObjectIdGetDatum(indexId));
-		if (!HeapTupleIsValid(tuple))
-			elog(ERROR, "cache lookup failed for index %u", indexId);
-		indexForm = (Form_pg_index) GETSTRUCT(tuple);
+		if (GetTopTransactionIdIfAny() != InvalidTransactionId)
+			ereport(ERROR,
+					(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+					 errmsg("DROP INDEX CONCURRENTLY must be first action in transaction")));

 		/*
-		 * If indisready == true we leave it set so the index still gets
-		 * maintained by pre-existing transactions. We only need to ensure
-		 * that indisvalid is false.
+		 * Mark index invalid by updating its pg_index entry
 		 */
-		if (indexForm->indisvalid)
-		{
-			indexForm->indisvalid = false;	/* make unusable for new queries */
-
-			simple_heap_update(indexRelation, &tuple->t_self, tuple);
-			CatalogUpdateIndexes(indexRelation, tuple);
-		}
-
-		heap_close(indexRelation, RowExclusiveLock);
+		index_set_state_flags(indexId, INDEX_DROP_CLEAR_VALID);
+
+		/*
+		 * Invalidate the relcache for the table, so that after this commit
+		 * all sessions will refresh any cached plans that might reference the
+		 * index.
+		 */
+		CacheInvalidateRelcache(userHeapRelation);

 		/* save lockrelid and locktag for below, then close but keep locks */
 		heaprelid = userHeapRelation->rd_lockInfo.lockRelId;
 		SET_LOCKTAG_RELATION(heaplocktag, heaprelid.dbId, heaprelid.relId);
-		heap_close(userHeapRelation, NoLock);
-
 		indexrelid = userIndexRelation->rd_lockInfo.lockRelId;
+
+		heap_close(userHeapRelation, NoLock);
 		index_close(userIndexRelation, NoLock);

 		/*
-		 * For a concurrent drop, it's important to make the catalog entries
-		 * visible to other transactions before we drop the index. The index
-		 * will be marked not indisvalid, so that no one else tries to use it
-		 * for queries.
-		 *
-		 * We must commit our current transaction so that the index update
-		 * becomes visible; then start another. Note that all the data
-		 * structures we just built are lost in the commit. The only data we
-		 * keep past here are the relation IDs.
+		 * We must commit our current transaction so that the indisvalid
+		 * update becomes visible to other transactions; then start another.
+		 * Note that any previously-built data structures are lost in the
+		 * commit.  The only data we keep past here are the relation IDs.
 		 *
 		 * Before committing, get a session-level lock on the table, to ensure
 		 * that neither it nor the index can be dropped before we finish. This
@@ -1443,13 +1445,13 @@ index_drop(Oid indexId, bool concurrent)
 		StartTransactionCommand();

 		/*
-		 * Now we must wait until no running transaction could have the table
-		 * open with the old list of indexes. To do this, inquire which xacts
-		 * currently would conflict with AccessExclusiveLock on the table --
-		 * ie, which ones have a lock of any kind on the table. Then wait for
-		 * each of these xacts to commit or abort. Note we do not need to
-		 * worry about xacts that open the table for writing after this point;
-		 * they will see the index as invalid when they open the relation.
+		 * Now we must wait until no running transaction could be using the
+		 * index for a query.  To do this, inquire which xacts currently would
+		 * conflict with AccessExclusiveLock on the table -- ie, which ones
+		 * have a lock of any kind on the table.  Then wait for each of these
+		 * xacts to commit or abort.  Note we do not need to worry about xacts
+		 * that open the table for reading after this point; they will see the
+		 * index as invalid when they open the relation.
 		 *
 		 * Note: the reason we use actual lock acquisition here, rather than
 		 * just checking the ProcArray and sleeping, is that deadlock is
@@ -1480,28 +1482,19 @@ index_drop(Oid indexId, bool concurrent)
 		TransferPredicateLocksToHeapRelation(userIndexRelation);

 		/*
-		 * Now we are sure that nobody uses the index for queries, they just
-		 * might have it opened for updating it. So now we can unset
-		 * indisready and wait till nobody could update the index anymore.
+		 * Now we are sure that nobody uses the index for queries; they just
+		 * might have it open for updating it.  So now we can unset indisready
+		 * and indislive, then wait till nobody could be using it at all
+		 * anymore.
 		 */
-		indexRelation = heap_open(IndexRelationId, RowExclusiveLock);
-
-		tuple = SearchSysCacheCopy1(INDEXRELID,
-									ObjectIdGetDatum(indexId));
-		if (!HeapTupleIsValid(tuple))
-			elog(ERROR, "cache lookup failed for index %u", indexId);
-		indexForm = (Form_pg_index) GETSTRUCT(tuple);
-
-		Assert(indexForm->indisvalid == false);
-		if (indexForm->indisready)
-		{
-			indexForm->indisready = false;	/* don't update index anymore */
-			simple_heap_update(indexRelation, &tuple->t_self, tuple);
-			CatalogUpdateIndexes(indexRelation, tuple);
-		}
-
-		heap_close(indexRelation, RowExclusiveLock);
+		index_set_state_flags(indexId, INDEX_DROP_SET_DEAD);
+
+		/*
+		 * Invalidate the relcache for the table, so that after this commit
+		 * all sessions will refresh the table's index list.  Forgetting just
+		 * the index's relcache entry is not enough.
+		 */
+		CacheInvalidateRelcache(userHeapRelation);

 		/*
 		 * Close the relations again, though still holding session lock.
@@ -1510,23 +1503,15 @@ index_drop(Oid indexId, bool concurrent)
 		index_close(userIndexRelation, NoLock);

 		/*
-		 * Invalidate the relcache for the table, so that after this
-		 * transaction we will refresh the index list. Forgetting just the
-		 * index is not enough.
-		 */
-		CacheInvalidateRelcache(userHeapRelation);
-
-		/*
-		 * Just as with indisvalid = false we need to make sure indisready
-		 * is false is visible for everyone.
+		 * Again, commit the transaction to make the pg_index update visible
+		 * to other sessions.
 		 */
 		CommitTransactionCommand();
 		StartTransactionCommand();

 		/*
-		 * Wait till everyone that saw indisready = true finished so we can
-		 * finally really remove the index. The logic here is the same as
-		 * above.
+		 * Wait till every transaction that saw the old index state has
+		 * finished.  The logic here is the same as above.
 		 */
 		old_lockholders = GetLockConflicts(&heaplocktag, AccessExclusiveLock);
@@ -1547,7 +1532,10 @@ index_drop(Oid indexId, bool concurrent)
 		userIndexRelation = index_open(indexId, AccessExclusiveLock);
 	}
 	else
+	{
+		/* Not concurrent, so just transfer predicate locks and we're good */
 		TransferPredicateLocksToHeapRelation(userIndexRelation);
+	}

 	/*
 	 * Schedule physical removal of the files
@@ -1601,7 +1589,8 @@ index_drop(Oid indexId, bool concurrent)
 	 * of relhasindex (the next VACUUM will fix it if necessary). So there is
 	 * no need to update the pg_class tuple for the owning relation. But we
 	 * must send out a shared-cache-inval notice on the owning relation to
-	 * ensure other backends update their relcache lists of indexes.
+	 * ensure other backends update their relcache lists of indexes.  (In the
+	 * concurrent case, this is redundant but harmless.)
 	 */
 	CacheInvalidateRelcache(userHeapRelation);
@@ -1677,7 +1666,7 @@ BuildIndexInfo(Relation index)
 	/* other info */
 	ii->ii_Unique = indexStruct->indisunique;
-	ii->ii_ReadyForInserts = indexStruct->indisready;
+	ii->ii_ReadyForInserts = IndexIsReady(indexStruct);

 	/* initialize index-build state to default */
 	ii->ii_Concurrent = false;
@@ -2035,8 +2024,20 @@ index_build(Relation heapRelation,
 	 * index's usability horizon.  Moreover, we *must not* try to change the
 	 * index's pg_index entry while reindexing pg_index itself, and this
 	 * optimization nicely prevents that.
+	 *
+	 * We also need not set indcheckxmin during a concurrent index build,
+	 * because we won't set indisvalid true until all transactions that care
+	 * about the broken HOT chains are gone.
+	 *
+	 * Therefore, this code path can only be taken during non-concurrent
+	 * CREATE INDEX.  Thus the fact that heap_update will set the pg_index
+	 * tuple's xmin doesn't matter, because that tuple was created in the
+	 * current transaction anyway.  That also means we don't need to worry
+	 * about any concurrent readers of the tuple; no other transaction can see
+	 * it yet.
 	 */
-	if (indexInfo->ii_BrokenHotChain && !isreindex)
+	if (indexInfo->ii_BrokenHotChain && !isreindex &&
+		!indexInfo->ii_Concurrent)
 	{
 		Oid			indexId = RelationGetRelid(indexRelation);
 		Relation	pg_index;
@ -2999,6 +3000,97 @@ validate_index_heapscan(Relation heapRelation,
} }
/*
* index_set_state_flags - adjust pg_index state flags
*
* This is used during CREATE/DROP INDEX CONCURRENTLY to adjust the pg_index
* flags that denote the index's state. We must use an in-place update of
* the pg_index tuple, because we do not have exclusive lock on the parent
* table and so other sessions might concurrently be doing SnapshotNow scans
* of pg_index to identify the table's indexes. A transactional update would
* risk somebody not seeing the index at all. Because the update is not
* transactional and will not roll back on error, this must only be used as
* the last step in a transaction that has not made any transactional catalog
* updates!
*
* Note that heap_inplace_update does send a cache inval message for the
* tuple, so other sessions will hear about the update as soon as we commit.
*/
void
index_set_state_flags(Oid indexId, IndexStateFlagsAction action)
{
Relation pg_index;
HeapTuple indexTuple;
Form_pg_index indexForm;
/* Assert that current xact hasn't done any transactional updates */
Assert(GetTopTransactionIdIfAny() == InvalidTransactionId);
/* Open pg_index and fetch a writable copy of the index's tuple */
pg_index = heap_open(IndexRelationId, RowExclusiveLock);
indexTuple = SearchSysCacheCopy1(INDEXRELID,
ObjectIdGetDatum(indexId));
if (!HeapTupleIsValid(indexTuple))
elog(ERROR, "cache lookup failed for index %u", indexId);
indexForm = (Form_pg_index) GETSTRUCT(indexTuple);
/* Perform the requested state change on the copy */
switch (action)
{
case INDEX_CREATE_SET_READY:
/* Set indisready during a CREATE INDEX CONCURRENTLY sequence */
Assert(indexForm->indislive);
Assert(!indexForm->indisready);
Assert(!indexForm->indisvalid);
indexForm->indisready = true;
break;
case INDEX_CREATE_SET_VALID:
/* Set indisvalid during a CREATE INDEX CONCURRENTLY sequence */
Assert(indexForm->indislive);
Assert(indexForm->indisready);
Assert(!indexForm->indisvalid);
indexForm->indisvalid = true;
break;
case INDEX_DROP_CLEAR_VALID:
/*
* Clear indisvalid during a DROP INDEX CONCURRENTLY sequence
*
* If indisready == true we leave it set so the index still gets
* maintained by active transactions. We only need to ensure that
* indisvalid is false. (We don't assert that either is initially
* true, though, since we want to be able to retry a DROP INDEX
* CONCURRENTLY that failed partway through.)
*
* Note: the CLUSTER logic assumes that indisclustered cannot be
* set on any invalid index, so clear that flag too.
*/
indexForm->indisvalid = false;
indexForm->indisclustered = false;
break;
case INDEX_DROP_SET_DEAD:
/*
* Clear indisready/indislive during DROP INDEX CONCURRENTLY
*
* We clear both indisready and indislive, because we not only
* want to stop updates, we want to prevent sessions from touching
* the index at all.
*/
Assert(!indexForm->indisvalid);
indexForm->indisready = false;
indexForm->indislive = false;
break;
}
/* ... and write it back in-place */
heap_inplace_update(pg_index, indexTuple);
heap_close(pg_index, RowExclusiveLock);
}
/*
 * IndexGetRelation: given an index's relation OID, get the OID of the
 * relation it is an index on.  Uses the system cache.
@@ -3032,12 +3124,9 @@ void
reindex_index(Oid indexId, bool skip_constraint_checks)
{
	Relation	iRel,
				heapRelation;
	Oid			heapId;
	IndexInfo  *indexInfo;
	volatile bool skipped_constraint = false;

	/*
@@ -3110,32 +3199,48 @@ reindex_index(Oid indexId, bool skip_constraint_checks)
	ResetReindexProcessing();

	/*
	 * If the index is marked invalid/not-ready/dead (ie, it's from a failed
	 * CREATE INDEX CONCURRENTLY, or a DROP INDEX CONCURRENTLY failed midway),
	 * and we didn't skip a uniqueness check, we can now mark it valid.  This
	 * allows REINDEX to be used to clean up in such cases.
	 *
	 * We can also reset indcheckxmin, because we have now done a
	 * non-concurrent index build, *except* in the case where index_build
	 * found some still-broken HOT chains.  If it did, and we don't have to
	 * change any of the other flags, we just leave indcheckxmin alone (note
	 * that index_build won't have changed it, because this is a reindex).
	 * This is okay and desirable because not updating the tuple leaves the
	 * index's usability horizon (recorded as the tuple's xmin value) the same
	 * as it was.
	 *
	 * But, if the index was invalid/not-ready/dead and there were broken HOT
	 * chains, we had better force indcheckxmin true, because the normal
	 * argument that the HOT chains couldn't conflict with the index is
	 * suspect for an invalid index.  (A conflict is definitely possible if
	 * the index was dead.  It probably shouldn't happen otherwise, but let's
	 * be conservative.)  In this case advancing the usability horizon is
	 * appropriate.
	 *
	 * Note that if we have to update the tuple, there is a risk of concurrent
	 * transactions not seeing it during their SnapshotNow scans of pg_index.
	 * While not especially desirable, this is safe because no such
	 * transaction could be trying to update the table (since we have
	 * ShareLock on it).  The worst case is that someone might transiently
	 * fail to use the index for a query --- but it was probably unusable
	 * before anyway, if we are updating the tuple.
	 *
	 * Another reason for avoiding unnecessary updates here is that while
	 * reindexing pg_index itself, we must not try to update tuples in it.
	 * pg_index's indexes should always have these flags in their clean state,
	 * so that won't happen.
	 */
	if (!skipped_constraint)
	{
		Relation	pg_index;
		HeapTuple	indexTuple;
		Form_pg_index indexForm;
		bool		index_bad;

		pg_index = heap_open(IndexRelationId, RowExclusiveLock);

		indexTuple = SearchSysCacheCopy1(INDEXRELID,
@@ -3144,17 +3249,30 @@ reindex_index(Oid indexId, bool skip_constraint_checks)
			elog(ERROR, "cache lookup failed for index %u", indexId);
		indexForm = (Form_pg_index) GETSTRUCT(indexTuple);

		index_bad = (!indexForm->indisvalid ||
					 !indexForm->indisready ||
					 !indexForm->indislive);
		if (index_bad ||
			(indexForm->indcheckxmin && !indexInfo->ii_BrokenHotChain))
		{
			if (!indexInfo->ii_BrokenHotChain)
				indexForm->indcheckxmin = false;
			else if (index_bad)
				indexForm->indcheckxmin = true;
			indexForm->indisvalid = true;
			indexForm->indisready = true;
			indexForm->indislive = true;
			simple_heap_update(pg_index, &indexTuple->t_self, indexTuple);
			CatalogUpdateIndexes(pg_index, indexTuple);

			/*
			 * Invalidate the relcache for the table, so that after we commit
			 * all sessions will refresh the table's index list.  This ensures
			 * that if anyone misses seeing the pg_index row during this
			 * update, they'll refresh their list before attempting any update
			 * on the table.
			 */
			CacheInvalidateRelcache(heapRelation);
		}

		heap_close(pg_index, RowExclusiveLock);


@@ -444,7 +444,7 @@ check_index_is_clusterable(Relation OldHeap, Oid indexOid, bool recheck, LOCKMOD
	 * might put recently-dead tuples out-of-order in the new table, and there
	 * is little harm in that.)
	 */
	if (!IndexIsValid(OldIndex->rd_index))
		ereport(ERROR,
				(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
				 errmsg("cannot cluster on invalid index \"%s\"",
@@ -458,6 +458,11 @@ check_index_is_clusterable(Relation OldHeap, Oid indexOid, bool recheck, LOCKMOD
 * mark_index_clustered: mark the specified index as the one clustered on
 *
 * With indexOid == InvalidOid, will mark all indexes of rel not-clustered.
*
* Note: we do transactional updates of the pg_index rows, which are unsafe
* against concurrent SnapshotNow scans of pg_index. Therefore this is unsafe
* to execute with less than full exclusive lock on the parent table;
* otherwise concurrent executions of RelationGetIndexList could miss indexes.
 */
void
mark_index_clustered(Relation rel, Oid indexOid)
@@ -513,6 +518,9 @@ mark_index_clustered(Relation rel, Oid indexOid)
		}
		else if (thisIndexOid == indexOid)
		{
/* this was checked earlier, but let's be real sure */
if (!IndexIsValid(indexForm))
elog(ERROR, "cannot cluster on invalid index %u", indexOid);
			indexForm->indisclustered = true;
			simple_heap_update(pg_index, &indexTuple->t_self, indexTuple);
			CatalogUpdateIndexes(pg_index, indexTuple);


@@ -124,6 +124,7 @@ CheckIndexCompatible(Oid oldId,
	Oid			accessMethodId;
	Oid			relationId;
	HeapTuple	tuple;
	Form_pg_index indexForm;
	Form_pg_am	accessMethodForm;
	bool		amcanorder;
	int16	   *coloptions;
@@ -193,17 +194,22 @@ CheckIndexCompatible(Oid oldId,
	tuple = SearchSysCache1(INDEXRELID, ObjectIdGetDatum(oldId));
	if (!HeapTupleIsValid(tuple))
		elog(ERROR, "cache lookup failed for index %u", oldId);
indexForm = (Form_pg_index) GETSTRUCT(tuple);
	/*
* We don't assess expressions or predicates; assume incompatibility.
* Also, if the index is invalid for any reason, treat it as incompatible.
*/
	if (!(heap_attisnull(tuple, Anum_pg_index_indpred) &&
		  heap_attisnull(tuple, Anum_pg_index_indexprs) &&
		  IndexIsValid(indexForm)))
	{
		ReleaseSysCache(tuple);
		return false;
	}

	/* Any change in operator class or collation breaks compatibility. */
	old_natts = indexForm->indnatts;
	Assert(old_natts == numberOfAttributes);

	d = SysCacheGetAttr(INDEXRELID, tuple, Anum_pg_index_indcollation, &isnull);
@@ -320,9 +326,6 @@ DefineIndex(IndexStmt *stmt,
	LockRelId	heaprelid;
	LOCKTAG		heaplocktag;
	Snapshot	snapshot;
	int			i;

	/*
@@ -717,23 +720,7 @@ DefineIndex(IndexStmt *stmt,
	 * commit this transaction, any new transactions that open the table must
	 * insert new entries into the index for insertions and non-HOT updates.
	 */
	index_set_state_flags(indexRelationId, INDEX_CREATE_SET_READY);

	/* we can do away with our snapshot */
	PopActiveSnapshot();
@@ -857,23 +844,7 @@ DefineIndex(IndexStmt *stmt,
	/*
	 * Index can now be marked valid -- update its pg_index entry
	 */
	index_set_state_flags(indexRelationId, INDEX_CREATE_SET_VALID);

	/*
	 * The pg_index update will cause backends (including this one) to update
@@ -881,7 +852,7 @@ DefineIndex(IndexStmt *stmt,
	 * relcache inval on the parent table to force replanning of cached plans.
	 * Otherwise existing sessions might fail to use the new index where it
	 * would be useful.  (Note that our earlier commits did not create reasons
	 * to replan; so relcache flush on the index itself was sufficient.)
	 */
	CacheInvalidateRelcacheByRelid(heaprelid.relId);


@@ -744,10 +744,13 @@ RemoveRelations(DropStmt *drop)
	int			flags = 0;
	LOCKMODE	lockmode = AccessExclusiveLock;

	/* DROP CONCURRENTLY uses a weaker lock, and has some restrictions */
	if (drop->concurrent)
	{
		flags |= PERFORM_DELETION_CONCURRENTLY;
		lockmode = ShareUpdateExclusiveLock;
		Assert(drop->removeType == OBJECT_INDEX);
		if (list_length(drop->objects) != 1)
			ereport(ERROR,
					(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
					 errmsg("DROP INDEX CONCURRENTLY does not support dropping multiple objects")));
@@ -839,19 +842,6 @@ RemoveRelations(DropStmt *drop)
		add_exact_object_address(&obj, objects);
	}
	performMultipleDeletions(objects, drop->behavior, flags);

	free_object_addresses(objects);
@@ -918,7 +908,7 @@ RangeVarCallbackForDropRelation(const RangeVar *rel, Oid relOid, Oid oldRelOid,
	 * locking the index.  index_drop() will need this anyway, and since
	 * regular queries lock tables before their indexes, we risk deadlock if
	 * we do it the other way around.  No error if we don't find a pg_index
	 * entry, though --- the relation may have been dropped.
	 */
	if (relkind == RELKIND_INDEX && relOid != oldRelOid)
	{
@@ -4784,6 +4774,8 @@ ATExecDropNotNull(Relation rel, const char *colName, LOCKMODE lockmode)
	/*
	 * Check that the attribute is not in a primary key
	 *
	 * Note: we'll throw error even if the pkey index is not valid.
	 */

	/* Loop over all indexes on the relation */
@@ -6318,7 +6310,7 @@ transformFkeyGetPrimaryKey(Relation pkrel, Oid *indexOid,
	/*
	 * Get the list of index OIDs for the table from the relcache, and look up
	 * each one in the pg_index syscache until we find one marked primary key
	 * (hopefully there isn't more than one such).  Insist it's valid, too.
	 */
	*indexOid = InvalidOid;
@@ -6332,7 +6324,7 @@ transformFkeyGetPrimaryKey(Relation pkrel, Oid *indexOid,
		if (!HeapTupleIsValid(indexTuple))
			elog(ERROR, "cache lookup failed for index %u", indexoid);
		indexStruct = (Form_pg_index) GETSTRUCT(indexTuple);
		if (indexStruct->indisprimary && IndexIsValid(indexStruct))
		{
			/*
			 * Refuse to use a deferrable primary key.  This is per SQL spec,
@@ -6430,10 +6422,12 @@ transformFkeyCheckAttrs(Relation pkrel,
		/*
		 * Must have the right number of columns; must be unique and not a
		 * partial index; forget it if there are any expressions, too.  Invalid
		 * indexes are out as well.
		 */
		if (indexStruct->indnatts == numattrs &&
			indexStruct->indisunique &&
			IndexIsValid(indexStruct) &&
			heap_attisnull(indexTuple, Anum_pg_index_indpred) &&
			heap_attisnull(indexTuple, Anum_pg_index_indexprs))
		{


@@ -1097,9 +1097,16 @@ vacuum_rel(Oid relid, VacuumStmt *vacstmt, bool do_toast, bool for_wraparound)

/*
 * Open all the vacuumable indexes of the given relation, obtaining the
 * specified kind of lock on each.  Return an array of Relation pointers for
 * the indexes into *Irel, and the number of indexes into *nindexes.
*
* We consider an index vacuumable if it is marked insertable (IndexIsReady).
* If it isn't, probably a CREATE INDEX CONCURRENTLY command failed early in
* execution, and what we have is too corrupt to be processable. We will
* vacuum even if the index isn't indisvalid; this is important because in a
* unique index, uniqueness checks will be performed anyway and had better not
* hit dangling index pointers.
 */
void
vac_open_indexes(Relation relation, LOCKMODE lockmode,
@@ -1113,21 +1120,30 @@ vac_open_indexes(Relation relation, LOCKMODE lockmode,
	indexoidlist = RelationGetIndexList(relation);

	/* allocate enough memory for all indexes */
	i = list_length(indexoidlist);

	if (i > 0)
		*Irel = (Relation *) palloc(i * sizeof(Relation));
	else
		*Irel = NULL;

	/* collect just the ready indexes */
	i = 0;
	foreach(indexoidscan, indexoidlist)
	{
		Oid			indexoid = lfirst_oid(indexoidscan);
		Relation	indrel;

		indrel = index_open(indexoid, lockmode);
		if (IndexIsReady(indrel->rd_index))
			(*Irel)[i++] = indrel;
		else
			index_close(indrel, lockmode);
	}
	*nindexes = i;

	list_free(indexoidlist);
}


@@ -906,6 +906,9 @@ ExecOpenIndices(ResultRelInfo *resultRelInfo)
	/*
	 * For each index, open the index relation and save pg_index info. We
	 * acquire RowExclusiveLock, signifying we will update the index.
	 *
	 * Note: we do this even if the index is not IndexIsReady; it's not worth
	 * the trouble to optimize for the case where it isn't.
	 */
	i = 0;
	foreach(l, indexoidlist)


@@ -170,9 +170,10 @@ get_relation_info(PlannerInfo *root, Oid relationObjectId, bool inhparent,
			 * Ignore invalid indexes, since they can't safely be used for
			 * queries.  Note that this is OK because the data structure we
			 * are constructing is only used by the planner --- the executor
			 * still needs to insert into "invalid" indexes, if they're marked
			 * IndexIsReady.
			 */
			if (!IndexIsValid(index))
			{
				index_close(indexRelation, NoLock);
				continue;


@@ -1533,18 +1533,12 @@ transformIndexConstraint(Constraint *constraint, CreateStmtContext *cxt)
							index_name, RelationGetRelationName(heap_rel)),
					 parser_errposition(cxt->pstate, constraint->location)));

		if (!IndexIsValid(index_form))
			ereport(ERROR,
					(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
					 errmsg("index \"%s\" is not valid", index_name),
					 parser_errposition(cxt->pstate, constraint->location)));

		if (!index_form->indisunique)
			ereport(ERROR,
					(errcode(ERRCODE_WRONG_OBJECT_TYPE),


@@ -1731,9 +1731,23 @@ RelationReloadIndexInfo(Relation relation)
			 RelationGetRelid(relation));
		index = (Form_pg_index) GETSTRUCT(tuple);
/*
* Basically, let's just copy all the bool fields. There are one or
* two of these that can't actually change in the current code, but
* it's not worth it to track exactly which ones they are. None of
* the array fields are allowed to change, though.
*/
relation->rd_index->indisunique = index->indisunique;
relation->rd_index->indisprimary = index->indisprimary;
relation->rd_index->indisexclusion = index->indisexclusion;
relation->rd_index->indimmediate = index->indimmediate;
relation->rd_index->indisclustered = index->indisclustered;
		relation->rd_index->indisvalid = index->indisvalid;
		relation->rd_index->indcheckxmin = index->indcheckxmin;
		relation->rd_index->indisready = index->indisready;
relation->rd_index->indislive = index->indislive;
/* Copy xmin too, as that is needed to make sense of indcheckxmin */
		HeapTupleHeaderSetXmin(relation->rd_indextuple->t_data,
							   HeapTupleHeaderGetXmin(tuple->t_data));
@@ -3299,6 +3313,10 @@ CheckConstraintFetch(Relation relation)
 * so that we must recompute the index list on next request.  This handles
 * creation or deletion of an index.
 *
* Indexes that are marked not IndexIsLive are omitted from the returned list.
* Such indexes are expected to be dropped momentarily, and should not be
* touched at all by any caller of this function.
*
 * The returned list is guaranteed to be sorted in order by OID.  This is
 * needed by the executor, since for index types that we obtain exclusive
 * locks on when updating the index, all backends must lock the indexes in
@@ -3358,9 +3376,12 @@ RelationGetIndexList(Relation relation)
		bool		isnull;

		/*
		 * Ignore any indexes that are currently being dropped.  This will
		 * prevent them from being searched, inserted into, or considered in
		 * HOT-safety decisions.  It's unsafe to touch such an index at all
		 * since its catalog entries could disappear at any instant.
		 */
		if (!IndexIsLive(index))
			continue;

		/* Add index's OID to result list in the proper order */
@@ -3379,7 +3400,8 @@ RelationGetIndexList(Relation relation)
		indclass = (oidvector *) DatumGetPointer(indclassDatum);

		/* Check to see if it is a unique, non-partial btree index on OID */
		if (IndexIsValid(index) &&
			index->indnatts == 1 &&
			index->indisunique && index->indimmediate &&
			index->indkey.values[0] == ObjectIdAttributeNumber &&
			indclass->values[0] == OID_BTREE_OPS_OID &&
@@ -3674,6 +3696,13 @@ RelationGetIndexAttrBitmap(Relation relation)
	/*
	 * For each index, add referenced attributes to indexattrs.
*
* Note: we consider all indexes returned by RelationGetIndexList, even if
* they are not indisready or indisvalid. This is important because an
* index for which CREATE INDEX CONCURRENTLY has just started must be
* included in HOT-safety decisions (see README.HOT). If a DROP INDEX
* CONCURRENTLY is far enough along that we should ignore the index, it
* won't be returned at all by RelationGetIndexList.
	 */
	indexattrs = NULL;
	foreach(l, indexoidlist)


@@ -53,6 +53,6 @@
 */

/*							yyyymmddN */
#define CATALOG_VERSION_NO	201211281

#endif


@@ -27,6 +27,15 @@ typedef void (*IndexBuildCallback) (Relation index,
												bool tupleIsAlive,
												void *state);
/* Action code for index_set_state_flags */
typedef enum
{
INDEX_CREATE_SET_READY,
INDEX_CREATE_SET_VALID,
INDEX_DROP_CLEAR_VALID,
INDEX_DROP_SET_DEAD
} IndexStateFlagsAction;
extern void index_check_primary_key(Relation heapRel,
						IndexInfo *indexInfo,
@@ -90,6 +99,8 @@ extern double IndexBuildHeapScan(Relation heapRelation,
extern void validate_index(Oid heapId, Oid indexId, Snapshot snapshot);

extern void index_set_state_flags(Oid indexId, IndexStateFlagsAction action);

extern void reindex_index(Oid indexId, bool skip_constraint_checks);

/* Flag bits for reindex_relation(): */


@@ -41,6 +41,7 @@ CATALOG(pg_index,2610) BKI_WITHOUT_OIDS BKI_SCHEMA_MACRO
	bool		indisvalid;		/* is this index valid for use by queries? */
	bool		indcheckxmin;	/* must we wait for xmin to be old? */
	bool		indisready;		/* is this index ready for inserts? */
bool indislive; /* is this index alive at all? */
	/* variable-length fields start here, but we allow direct access to indkey */
	int2vector	indkey;			/* column numbers of indexed cols, or 0 */
@@ -68,7 +69,7 @@ typedef FormData_pg_index *Form_pg_index;
 * compiler constants for pg_index
 * ----------------
 */
#define Natts_pg_index					18
#define Anum_pg_index_indexrelid		1
#define Anum_pg_index_indrelid			2
#define Anum_pg_index_indnatts			3
@@ -80,12 +81,13 @@ typedef FormData_pg_index *Form_pg_index;
#define Anum_pg_index_indisvalid		9
#define Anum_pg_index_indcheckxmin		10
#define Anum_pg_index_indisready		11
#define Anum_pg_index_indislive			12
#define Anum_pg_index_indkey			13
#define Anum_pg_index_indcollation		14
#define Anum_pg_index_indclass			15
#define Anum_pg_index_indoption			16
#define Anum_pg_index_indexprs			17
#define Anum_pg_index_indpred			18

/*
 * Index AMs that support ordered scans must support these two indoption
@@ -95,4 +97,13 @@ typedef FormData_pg_index *Form_pg_index;
#define INDOPTION_DESC			0x0001	/* values are in reverse order */
#define INDOPTION_NULLS_FIRST	0x0002	/* NULLs are first instead of last */
/*
* Use of these macros is recommended over direct examination of the state
* flag columns where possible; this allows source code compatibility with
* the hacky representation used in 9.2.
*/
#define IndexIsValid(indexForm) ((indexForm)->indisvalid)
#define IndexIsReady(indexForm) ((indexForm)->indisready)
#define IndexIsLive(indexForm) ((indexForm)->indislive)
#endif   /* PG_INDEX_H */