Revert the commits that allowed page locks to conflict among parallel group members.

This commit reverts the work done by commits 3ba59ccc89 and 72e78d831a.
Those commits were incorrect in asserting that we never acquire any other
heavy-weight lock after acquiring a page lock, other than a relation
extension lock. We can acquire a lock on catalogs while doing a catalog
lookup after acquiring a page lock.

This won't impact any existing feature, but we need to think of some other
way to achieve this before parallelizing other write operations or even
improving the parallelism in vacuum (e.g., allowing multiple workers
for an index).

Reported-by: Jaime Casanova
Author: Amit Kapila
Backpatch-through: 13
Discussion: https://postgr.es/m/CAJKUy5jffnRKNvRHKQ0LynRb0RJC-o4P8Ku3x9vGAVLwDBWumQ@mail.gmail.com
Commit cc32ec24fd (parent ae6d06f096)
Amit Kapila, 2023-07-06 08:52:10 +05:30
5 changed files with 34 additions and 65 deletions

View File

@@ -330,11 +330,13 @@ standard_planner(Query *parse, const char *query_string, int cursorOptions,
 	 * functions are present in the query tree.
 	 *
 	 * (Note that we do allow CREATE TABLE AS, SELECT INTO, and CREATE
-	 * MATERIALIZED VIEW to use parallel plans, but as of now, only the leader
-	 * backend writes into a completely new table. In the future, we can
-	 * extend it to allow workers to write into the table. However, to allow
-	 * parallel updates and deletes, we have to solve other problems,
-	 * especially around combo CIDs.)
+	 * MATERIALIZED VIEW to use parallel plans, but this is safe only because
+	 * the command is writing into a completely new table which workers won't
+	 * be able to see. If the workers could see the table, the fact that
+	 * group locking would cause them to ignore the leader's heavyweight
+	 * GIN page locks would make this unsafe. We'll have to fix that somehow
+	 * if we want to allow parallel inserts in general; updates and deletes
+	 * have additional problems especially around combo CIDs.)
 	 *
 	 * For now, we don't try to use parallel mode if we're running inside a
 	 * parallel worker. We might eventually be able to relax this

View File

@@ -597,10 +597,10 @@ deadlock detection algorithm very much, but it makes the bookkeeping more
 complicated.
 
 We choose to regard locks held by processes in the same parallel group as
-non-conflicting with the exception of relation extension and page locks. This
-means that two processes in a parallel group can hold a self-exclusive lock on
-the same relation at the same time, or one process can acquire an AccessShareLock
-while the other already holds AccessExclusiveLock. This might seem dangerous and
+non-conflicting with the exception of relation extension lock. This means that
+two processes in a parallel group can hold a self-exclusive lock on the same
+relation at the same time, or one process can acquire an AccessShareLock while
+the other already holds AccessExclusiveLock. This might seem dangerous and
 could be in some cases (more on that below), but if we didn't do this then
 parallel query would be extremely prone to self-deadlock. For example, a
 parallel query against a relation on which the leader already had
@@ -638,23 +638,15 @@ the other is safe enough. Problems would occur if the leader initiated
 parallelism from a point in the code at which it had some backend-private
 state that made table access from another process unsafe, for example after
 calling SetReindexProcessing and before calling ResetReindexProcessing,
-catastrophe could ensue, because the worker won't have that state.
-
-To allow parallel inserts and parallel copy, we have ensured that relation
-extension and page locks don't participate in group locking which means such
-locks can conflict among the same group members. This is required as it is no
-safer for two related processes to extend the same relation or perform clean up
-in gin indexes at a time than for unrelated processes to do the same. We don't
-acquire a heavyweight lock on any other object after relation extension lock
-which means such a lock can never participate in the deadlock cycle. After
-acquiring page locks, we can acquire relation extension lock but reverse never
-happens, so those will also not participate in deadlock. To allow for other
-parallel writes like parallel update or parallel delete, we'll either need to
-(1) further enhance the deadlock detector to handle those tuple locks in a
-different way than other types; or (2) have parallel workers use some other
-mutual exclusion method for such cases. Currently, the parallel mode is
-strictly read-only, but now we have the infrastructure to allow parallel
-inserts and parallel copy.
+catastrophe could ensue, because the worker won't have that state. Similarly,
+problems could occur with certain kinds of non-relation locks, such as
+GIN page locks. It's no safer for two related processes to perform GIN clean
+up at the same time than for unrelated processes to do the same.
+However, since parallel mode is strictly read-only at present, neither this
+nor most of the similar cases can arise at present. To allow parallel writes,
+we'll either need to (1) further enhance the deadlock detector to handle those
+types of locks in a different way than other types; or (2) have parallel
+workers use some other mutual exclusion method for such cases.
 
 Group locking adds three new members to each PGPROC: lockGroupLeader,
 lockGroupMembers, and lockGroupLink. A PGPROC's lockGroupLeader is NULL for

View File

@@ -546,12 +546,11 @@ FindLockCycleRecurseMember(PGPROC *checkProc,
 				lm;
 
 	/*
-	 * The relation extension or page lock can never participate in actual
-	 * deadlock cycle. See Asserts in LockAcquireExtended. So, there is no
-	 * advantage in checking wait edges from them.
+	 * The relation extension lock can never participate in actual deadlock
+	 * cycle. See Assert in LockAcquireExtended. So, there is no advantage
+	 * in checking wait edges from it.
 	 */
-	if (LOCK_LOCKTAG(*lock) == LOCKTAG_RELATION_EXTEND ||
-		(LOCK_LOCKTAG(*lock) == LOCKTAG_PAGE))
+	if (LOCK_LOCKTAG(*lock) == LOCKTAG_RELATION_EXTEND)
 		return false;
 
 	lockMethodTable = GetLocksMethodTable(lock);

View File

@@ -186,18 +186,6 @@ static int FastPathLocalUseCount = 0;
  */
 static bool IsRelationExtensionLockHeld PG_USED_FOR_ASSERTS_ONLY = false;
 
-/*
- * Flag to indicate if the page lock is held by this backend. We don't
- * acquire any other heavyweight lock while holding the page lock except for
- * relation extension. However, these locks are never taken in reverse order
- * which implies that page locks will also never participate in the deadlock
- * cycle.
- *
- * Similar to relation extension, page locks are also held for a short
- * duration, so imposing such a restriction won't hurt.
- */
-static bool IsPageLockHeld PG_USED_FOR_ASSERTS_ONLY = false;
-
 /* Macros for manipulating proc->fpLockBits */
 #define FAST_PATH_BITS_PER_SLOT 3
 #define FAST_PATH_LOCKNUMBER_OFFSET 1
@@ -886,13 +874,6 @@ LockAcquireExtended(const LOCKTAG *locktag,
 	 */
 	Assert(!IsRelationExtensionLockHeld);
 
-	/*
-	 * We don't acquire any other heavyweight lock while holding the page lock
-	 * except for relation extension.
-	 */
-	Assert(!IsPageLockHeld ||
-		   (locktag->locktag_type == LOCKTAG_RELATION_EXTEND));
-
 	/*
 	 * Prepare to emit a WAL record if acquisition of this lock needs to be
 	 * replayed in a standby server.
@@ -1340,10 +1321,10 @@ SetupLockInTable(LockMethod lockMethodTable, PGPROC *proc,
 }
 
 /*
- * Check and set/reset the flag that we hold the relation extension/page lock.
+ * Check and set/reset the flag that we hold the relation extension lock.
  *
  * It is callers responsibility that this function is called after
- * acquiring/releasing the relation extension/page lock.
+ * acquiring/releasing the relation extension lock.
  *
  * Pass acquired as true if lock is acquired, false otherwise.
  */
@@ -1353,9 +1334,6 @@ CheckAndSetLockHeld(LOCALLOCK *locallock, bool acquired)
 #ifdef USE_ASSERT_CHECKING
 	if (LOCALLOCK_LOCKTAG(*locallock) == LOCKTAG_RELATION_EXTEND)
 		IsRelationExtensionLockHeld = acquired;
-	else if (LOCALLOCK_LOCKTAG(*locallock) == LOCKTAG_PAGE)
-		IsPageLockHeld = acquired;
 #endif
 }
@@ -1480,11 +1458,9 @@ LockCheckConflicts(LockMethod lockMethodTable,
 	}
 
 	/*
-	 * The relation extension or page lock conflict even between the group
-	 * members.
+	 * The relation extension lock conflict even between the group members.
 	 */
-	if (LOCK_LOCKTAG(*lock) == LOCKTAG_RELATION_EXTEND ||
-		(LOCK_LOCKTAG(*lock) == LOCKTAG_PAGE))
+	if (LOCK_LOCKTAG(*lock) == LOCKTAG_RELATION_EXTEND)
 	{
 		PROCLOCK_PRINT("LockCheckConflicts: conflicting (group)",
 					   proclock);

View File

@@ -1021,12 +1021,12 @@ ProcSleep(LOCALLOCK *locallock, LockMethod lockMethodTable)
 	/*
 	 * If group locking is in use, locks held by members of my locking group
 	 * need to be included in myHeldLocks. This is not required for relation
-	 * extension or page locks which conflict among group members. However,
-	 * including them in myHeldLocks will give group members the priority to
-	 * get those locks as compared to other backends which are also trying to
-	 * acquire those locks. OTOH, we can avoid giving priority to group
-	 * members for that kind of locks, but there doesn't appear to be a clear
-	 * advantage of the same.
+	 * extension lock which conflict among group members. However, including
+	 * them in myHeldLocks will give group members the priority to get those
+	 * locks as compared to other backends which are also trying to acquire
+	 * those locks. OTOH, we can avoid giving priority to group members for
+	 * that kind of locks, but there doesn't appear to be a clear advantage of
+	 * the same.
 	 */
 	if (leader != NULL)
 	{