/*-------------------------------------------------------------------------
 *
 * proc.c
 *	  routines to manage per-process shared memory data structure
 *
 * Portions Copyright (c) 1996-2015, PostgreSQL Global Development Group
 * Portions Copyright (c) 1994, Regents of the University of California
 *
 *
 * IDENTIFICATION
 *	  src/backend/storage/lmgr/proc.c
 *
 *-------------------------------------------------------------------------
 */
/*
 * Interface (a):
 *		ProcSleep(), ProcWakeup(),
 *		ProcQueueAlloc() -- create a shm queue for sleeping processes
 *		ProcQueueInit() -- create a queue without allocing memory
 *
 * Waiting for a lock causes the backend to be put to sleep.  Whoever releases
 * the lock wakes the process up again (and gives it an error code so it knows
 * whether it was awoken on an error condition).
 *
 * Interface (b):
 *
 * ProcReleaseLocks -- frees the locks associated with current transaction
 *
 * ProcKill -- destroys the shared memory state (and locks)
 *		associated with the process.
 */
#include "postgres.h"

#include <signal.h>
#include <unistd.h>
#include <sys/time.h>

#include "access/transam.h"
#include "access/twophase.h"
#include "access/xact.h"
#include "miscadmin.h"
#include "postmaster/autovacuum.h"
#include "replication/slot.h"
#include "replication/syncrep.h"
#include "storage/ipc.h"
#include "storage/lmgr.h"
#include "storage/pmsignal.h"
#include "storage/proc.h"
#include "storage/procarray.h"
#include "storage/procsignal.h"
#include "storage/spin.h"
#include "utils/timeout.h"
#include "utils/timestamp.h"

/* GUC variables */
int			DeadlockTimeout = 1000;
int			StatementTimeout = 0;
int			LockTimeout = 0;
bool		log_lock_waits = false;

/* Pointer to this process's PGPROC and PGXACT structs, if any */
PGPROC	   *MyProc = NULL;
PGXACT	   *MyPgXact = NULL;

/*
 * This spinlock protects the freelist of recycled PGPROC structures.
 * We cannot use an LWLock because the LWLock manager depends on already
 * having a PGPROC and a wait semaphore!  But these structures are touched
 * relatively infrequently (only at backend startup or shutdown) and not for
 * very long, so a spinlock is okay.
 */
NON_EXEC_STATIC slock_t *ProcStructLock = NULL;

/* Pointers to shared-memory structures */
PROC_HDR   *ProcGlobal = NULL;
NON_EXEC_STATIC PGPROC *AuxiliaryProcs = NULL;
PGPROC	   *PreparedXactProcs = NULL;

/* If we are waiting for a lock, this points to the associated LOCALLOCK */
static LOCALLOCK *lockAwaited = NULL;

/* Mark this volatile because it can be changed by signal handler */
static volatile DeadLockState deadlock_state = DS_NOT_YET_CHECKED;


static void RemoveProcFromArray(int code, Datum arg);
static void ProcKill(int code, Datum arg);
static void AuxiliaryProcKill(int code, Datum arg);


/*
 * Report shared-memory space needed by InitProcGlobal.
 */
Size
ProcGlobalShmemSize(void)
{
	Size		size = 0;

	/* ProcGlobal */
	size = add_size(size, sizeof(PROC_HDR));
	/* MyProcs, including autovacuum workers and launcher */
	size = add_size(size, mul_size(MaxBackends, sizeof(PGPROC)));
	/* AuxiliaryProcs */
	size = add_size(size, mul_size(NUM_AUXILIARY_PROCS, sizeof(PGPROC)));
	/* Prepared xacts */
	size = add_size(size, mul_size(max_prepared_xacts, sizeof(PGPROC)));
	/* ProcStructLock */
	size = add_size(size, sizeof(slock_t));

	size = add_size(size, mul_size(MaxBackends, sizeof(PGXACT)));
	size = add_size(size, mul_size(NUM_AUXILIARY_PROCS, sizeof(PGXACT)));
	size = add_size(size, mul_size(max_prepared_xacts, sizeof(PGXACT)));

	return size;
}

/*
 * Report number of semaphores needed by InitProcGlobal.
 */
int
ProcGlobalSemas(void)
{
	/*
	 * We need a sema per backend (including autovacuum), plus one for each
	 * auxiliary process.
	 */
	return MaxBackends + NUM_AUXILIARY_PROCS;
}

/*
 * InitProcGlobal -
 *	  Initialize the global process table during postmaster or standalone
 *	  backend startup.
 *
 * We also create all the per-process semaphores we will need to support
 * the requested number of backends.  We used to allocate semaphores
 * only when backends were actually started up, but that is bad because
 * it lets Postgres fail under load --- a lot of Unix systems are
 * (mis)configured with small limits on the number of semaphores, and
 * running out when trying to start another backend is a common failure.
 * So, now we grab enough semaphores to support the desired max number
 * of backends immediately at initialization --- if the sysadmin has set
 * MaxConnections, max_worker_processes, or autovacuum_max_workers higher
 * than his kernel will support, he'll find out sooner rather than later.
 *
 * Another reason for creating semaphores here is that the semaphore
 * implementation typically requires us to create semaphores in the
 * postmaster, not in backends.
 *
 * Note: this is NOT called by individual backends under a postmaster,
 * not even in the EXEC_BACKEND case.  The ProcGlobal and AuxiliaryProcs
 * pointers must be propagated specially for EXEC_BACKEND operation.
 */
void
InitProcGlobal(void)
{
|
2006-01-04 22:06:32 +01:00
|
|
|
PGPROC *procs;
|
2011-11-25 14:02:10 +01:00
|
|
|
PGXACT *pgxacts;
|
2011-11-02 03:44:54 +01:00
|
|
|
int i,
|
|
|
|
j;
|
2006-01-04 22:06:32 +01:00
|
|
|
bool found;
|
2011-11-25 14:02:10 +01:00
|
|
|
uint32 TotalProcs = MaxBackends + NUM_AUXILIARY_PROCS + max_prepared_xacts;
|
1996-07-09 08:22:35 +02:00
|
|
|
|
2006-01-04 22:06:32 +01:00
|
|
|
/* Create the ProcGlobal shared structure */
|
1997-09-07 07:04:48 +02:00
|
|
|
ProcGlobal = (PROC_HDR *)
|
2006-01-04 22:06:32 +01:00
|
|
|
ShmemInitStruct("Proc Header", sizeof(PROC_HDR), &found);
|
|
|
|
Assert(!found);
|
1996-07-09 08:22:35 +02:00
|
|
|
|
2006-01-04 22:06:32 +01:00
|
|
|
/*
|
|
|
|
* Initialize the data structures.
|
|
|
|
*/
|
2011-06-12 06:07:04 +02:00
|
|
|
ProcGlobal->spins_per_delay = DEFAULT_SPINS_PER_DELAY;
|
2008-11-02 22:24:52 +01:00
|
|
|
ProcGlobal->freeProcs = NULL;
|
|
|
|
ProcGlobal->autovacFreeProcs = NULL;
|
Background worker processes
Background workers are postmaster subprocesses that run arbitrary
user-specified code. They can request shared memory access as well as
backend database connections; or they can just use plain libpq frontend
database connections.
Modules listed in shared_preload_libraries can register background
workers in their _PG_init() function; this is early enough that it's not
necessary to provide an extra GUC option, because the necessary extra
resources can be allocated early on. Modules can install more than one
bgworker, if necessary.
Care is taken that these extra processes do not interfere with other
postmaster tasks: only one such process is started on each ServerLoop
iteration. This means a large number of them could be waiting to be
started up and postmaster is still able to quickly service external
connection requests. Also, shutdown sequence should not be impacted by
a worker process that's reasonably well behaved (i.e. promptly responds
to termination signals.)
The current implementation lets worker processes specify their start
time, i.e. at what point in the server startup process they are to be
started: right after postmaster start (in which case they mustn't ask
for shared memory access), when consistent state has been reached
(useful during recovery in a HOT standby server), or when recovery has
terminated (i.e. when normal backends are allowed).
In case of a bgworker crash, actions to take depend on registration
data: if shared memory was requested, then all other connections are
taken down (as well as other bgworkers), just like it were a regular
backend crashing. The bgworker itself is restarted, too, within a
configurable timeframe (which can be configured to be never).
More features to add to this framework can be imagined without much
effort, and have been discussed, but this seems good enough as a useful
unit already.
An elementary sample module is supplied.
Author: Álvaro Herrera
This patch is loosely based on prior patches submitted by KaiGai Kohei,
and unsubmitted code by Simon Riggs.
Reviewed by: KaiGai Kohei, Markus Wanner, Andres Freund,
Heikki Linnakangas, Simon Riggs, Amit Kapila
2012-12-06 18:57:52 +01:00
|
|
|
ProcGlobal->bgworkerFreeProcs = NULL;
|
2011-08-02 19:23:52 +02:00
|
|
|
ProcGlobal->startupProc = NULL;
|
|
|
|
ProcGlobal->startupProcPid = 0;
|
|
|
|
ProcGlobal->startupBufferPinWaitBufId = -1;
|
Reduce idle power consumption of walwriter and checkpointer processes.
This patch modifies the walwriter process so that, when it has not found
anything useful to do for many consecutive wakeup cycles, it extends its
sleep time to reduce the server's idle power consumption. It reverts to
normal as soon as it's done any successful flushes. It's still true that
during any async commit, backends check for completed, unflushed pages of
WAL and signal the walwriter if there are any; so that in practice the
walwriter can get awakened and returned to normal operation sooner than the
sleep time might suggest.
Also, improve the checkpointer so that it uses a latch and a computed delay
time to not wake up at all except when it has something to do, replacing a
previous hardcoded 0.5 sec wakeup cycle. This also is primarily useful for
reducing the server's power consumption when idle.
In passing, get rid of the dedicated latch for signaling the walwriter in
favor of using its procLatch, since that comports better with possible
generic signal handlers using that latch. Also, fix a pre-existing bug
with failure to save/restore errno in walwriter's signal handlers.
Peter Geoghegan, somewhat simplified by Tom
2012-05-09 02:03:26 +02:00
|
|
|
ProcGlobal->walwriterLatch = NULL;
|
|
|
|
ProcGlobal->checkpointerLatch = NULL;
|
1996-07-09 08:22:35 +02:00
|
|
|
|
2009-08-31 21:41:00 +02:00
|
|
|
/*
|
2012-05-14 09:22:44 +02:00
|
|
|
* Create and initialize all the PGPROC structures we'll need. There are
|
Background worker processes
Background workers are postmaster subprocesses that run arbitrary
user-specified code. They can request shared memory access as well as
backend database connections; or they can just use plain libpq frontend
database connections.
Modules listed in shared_preload_libraries can register background
workers in their _PG_init() function; this is early enough that it's not
necessary to provide an extra GUC option, because the necessary extra
resources can be allocated early on. Modules can install more than one
bgworker, if necessary.
Care is taken that these extra processes do not interfere with other
postmaster tasks: only one such process is started on each ServerLoop
iteration. This means a large number of them could be waiting to be
started up and postmaster is still able to quickly service external
connection requests. Also, shutdown sequence should not be impacted by
a worker process that's reasonably well behaved (i.e. promptly responds
to termination signals.)
The current implementation lets worker processes specify their start
time, i.e. at what point in the server startup process they are to be
started: right after postmaster start (in which case they mustn't ask
for shared memory access), when consistent state has been reached
(useful during recovery in a HOT standby server), or when recovery has
terminated (i.e. when normal backends are allowed).
In case of a bgworker crash, actions to take depend on registration
data: if shared memory was requested, then all other connections are
taken down (as well as other bgworkers), just like it were a regular
backend crashing. The bgworker itself is restarted, too, within a
configurable timeframe (which can be configured to be never).
More features to add to this framework can be imagined without much
effort, and have been discussed, but this seems good enough as a useful
unit already.
An elementary sample module is supplied.
Author: Álvaro Herrera
This patch is loosely based on prior patches submitted by KaiGai Kohei,
and unsubmitted code by Simon Riggs.
Reviewed by: KaiGai Kohei, Markus Wanner, Andres Freund,
Heikki Linnakangas, Simon Riggs, Amit Kapila
2012-12-06 18:57:52 +01:00
|
|
|
* five separate consumers: (1) normal backends, (2) autovacuum workers
|
|
|
|
* and the autovacuum launcher, (3) background workers, (4) auxiliary
|
|
|
|
* processes, and (5) prepared transactions. Each PGPROC structure is
|
2013-05-29 22:58:43 +02:00
|
|
|
* dedicated to exactly one of these purposes, and they do not move
|
|
|
|
* between groups.
|
2009-08-31 21:41:00 +02:00
|
|
|
*/
|
2011-06-12 06:07:04 +02:00
|
|
|
procs = (PGPROC *) ShmemAlloc(TotalProcs * sizeof(PGPROC));
|
2011-05-29 01:52:00 +02:00
|
|
|
ProcGlobal->allProcs = procs;
|
2014-01-27 17:07:44 +01:00
|
|
|
/* XXX allProcCount isn't really all of them; it excludes prepared xacts */
|
|
|
|
ProcGlobal->allProcCount = MaxBackends + NUM_AUXILIARY_PROCS;
|
2007-04-16 20:30:04 +02:00
|
|
|
if (!procs)
|
|
|
|
ereport(FATAL,
|
|
|
|
(errcode(ERRCODE_OUT_OF_MEMORY),
|
|
|
|
errmsg("out of shared memory")));
|
2011-06-12 06:07:04 +02:00
|
|
|
MemSet(procs, 0, TotalProcs * sizeof(PGPROC));
|
2011-11-25 14:02:10 +01:00
|
|
|
|
|
|
|
/*
|
|
|
|
* Also allocate a separate array of PGXACT structures. This is separate
|
|
|
|
* from the main PGPROC array so that the most heavily accessed data is
|
|
|
|
* stored contiguously in memory in as few cache lines as possible. This
|
|
|
|
* provides significant performance benefits, especially on a
|
2012-05-14 09:22:44 +02:00
|
|
|
* multiprocessor system. There is one PGXACT structure for every PGPROC
|
2011-11-25 14:02:10 +01:00
|
|
|
* structure.
|
|
|
|
*/
|
|
|
|
pgxacts = (PGXACT *) ShmemAlloc(TotalProcs * sizeof(PGXACT));
|
|
|
|
MemSet(pgxacts, 0, TotalProcs * sizeof(PGXACT));
|
|
|
|
ProcGlobal->allPgXact = pgxacts;
|
|
|
|
|
2011-06-12 06:07:04 +02:00
|
|
|
for (i = 0; i < TotalProcs; i++)
|
2007-04-16 20:30:04 +02:00
|
|
|
{
|
2011-06-12 06:07:04 +02:00
|
|
|
/* Common initialization for all PGPROCs, regardless of type. */
|
2011-08-10 18:20:30 +02:00
|
|
|
|
2011-11-25 14:02:10 +01:00
|
|
|
/*
|
2012-06-10 21:20:04 +02:00
|
|
|
* Set up per-PGPROC semaphore, latch, and backendLock. Prepared xact
|
|
|
|
* dummy PGPROCs don't need these though - they're never associated
|
|
|
|
* with a real process
|
2011-11-25 14:02:10 +01:00
|
|
|
*/
|
|
|
|
if (i < MaxBackends + NUM_AUXILIARY_PROCS)
|
|
|
|
{
|
|
|
|
PGSemaphoreCreate(&(procs[i].sem));
|
|
|
|
InitSharedLatch(&(procs[i].procLatch));
|
|
|
|
procs[i].backendLock = LWLockAssign();
|
|
|
|
}
|
|
|
|
procs[i].pgprocno = i;
|
2011-06-12 06:07:04 +02:00
|
|
|
|
|
|
|
/*
|
Background worker processes
Background workers are postmaster subprocesses that run arbitrary
user-specified code. They can request shared memory access as well as
backend database connections; or they can just use plain libpq frontend
database connections.
Modules listed in shared_preload_libraries can register background
workers in their _PG_init() function; this is early enough that it's not
necessary to provide an extra GUC option, because the necessary extra
resources can be allocated early on. Modules can install more than one
bgworker, if necessary.
Care is taken that these extra processes do not interfere with other
postmaster tasks: only one such process is started on each ServerLoop
iteration. This means a large number of them could be waiting to be
started up and postmaster is still able to quickly service external
connection requests. Also, shutdown sequence should not be impacted by
a worker process that's reasonably well behaved (i.e. promptly responds
to termination signals.)
The current implementation lets worker processes specify their start
time, i.e. at what point in the server startup process they are to be
started: right after postmaster start (in which case they mustn't ask
for shared memory access), when consistent state has been reached
(useful during recovery in a HOT standby server), or when recovery has
terminated (i.e. when normal backends are allowed).
In case of a bgworker crash, actions to take depend on registration
data: if shared memory was requested, then all other connections are
taken down (as well as other bgworkers), just like it were a regular
backend crashing. The bgworker itself is restarted, too, within a
configurable timeframe (which can be configured to be never).
More features to add to this framework can be imagined without much
effort, and have been discussed, but this seems good enough as a useful
unit already.
An elementary sample module is supplied.
Author: Álvaro Herrera
This patch is loosely based on prior patches submitted by KaiGai Kohei,
and unsubmitted code by Simon Riggs.
Reviewed by: KaiGai Kohei, Markus Wanner, Andres Freund,
Heikki Linnakangas, Simon Riggs, Amit Kapila
2012-12-06 18:57:52 +01:00
|
|
|
* Newly created PGPROCs for normal backends, autovacuum and bgworkers
|
2014-05-06 18:12:18 +02:00
|
|
|
* must be queued up on the appropriate free list. Because there can
|
Background worker processes
Background workers are postmaster subprocesses that run arbitrary
user-specified code. They can request shared memory access as well as
backend database connections; or they can just use plain libpq frontend
database connections.
Modules listed in shared_preload_libraries can register background
workers in their _PG_init() function; this is early enough that it's not
necessary to provide an extra GUC option, because the necessary extra
resources can be allocated early on. Modules can install more than one
bgworker, if necessary.
Care is taken that these extra processes do not interfere with other
postmaster tasks: only one such process is started on each ServerLoop
iteration. This means a large number of them could be waiting to be
started up and postmaster is still able to quickly service external
connection requests. Also, shutdown sequence should not be impacted by
a worker process that's reasonably well behaved (i.e. promptly responds
to termination signals.)
The current implementation lets worker processes specify their start
time, i.e. at what point in the server startup process they are to be
started: right after postmaster start (in which case they mustn't ask
for shared memory access), when consistent state has been reached
(useful during recovery in a HOT standby server), or when recovery has
terminated (i.e. when normal backends are allowed).
In case of a bgworker crash, actions to take depend on registration
data: if shared memory was requested, then all other connections are
taken down (as well as other bgworkers), just like it were a regular
backend crashing. The bgworker itself is restarted, too, within a
configurable timeframe (which can be configured to be never).
More features to add to this framework can be imagined without much
effort, and have been discussed, but this seems good enough as a useful
unit already.
An elementary sample module is supplied.
Author: Álvaro Herrera
This patch is loosely based on prior patches submitted by KaiGai Kohei,
and unsubmitted code by Simon Riggs.
Reviewed by: KaiGai Kohei, Markus Wanner, Andres Freund,
Heikki Linnakangas, Simon Riggs, Amit Kapila
2012-12-06 18:57:52 +01:00
|
|
|
* only ever be a small, fixed number of auxiliary processes, no free
|
|
|
|
* list is used in that case; InitAuxiliaryProcess() instead uses a
|
2014-05-06 18:12:18 +02:00
|
|
|
* linear search. PGPROCs for prepared transactions are added to a
|
Background worker processes
Background workers are postmaster subprocesses that run arbitrary
user-specified code. They can request shared memory access as well as
backend database connections; or they can just use plain libpq frontend
database connections.
Modules listed in shared_preload_libraries can register background
workers in their _PG_init() function; this is early enough that it's not
necessary to provide an extra GUC option, because the necessary extra
resources can be allocated early on. Modules can install more than one
bgworker, if necessary.
Care is taken that these extra processes do not interfere with other
postmaster tasks: only one such process is started on each ServerLoop
iteration. This means a large number of them could be waiting to be
started up and postmaster is still able to quickly service external
connection requests. Also, shutdown sequence should not be impacted by
a worker process that's reasonably well behaved (i.e. promptly responds
to termination signals.)
The current implementation lets worker processes specify their start
time, i.e. at what point in the server startup process they are to be
started: right after postmaster start (in which case they mustn't ask
for shared memory access), when consistent state has been reached
(useful during recovery in a HOT standby server), or when recovery has
terminated (i.e. when normal backends are allowed).
In case of a bgworker crash, actions to take depend on registration
data: if shared memory was requested, then all other connections are
taken down (as well as other bgworkers), just like it were a regular
backend crashing. The bgworker itself is restarted, too, within a
configurable timeframe (which can be configured to be never).
More features to add to this framework can be imagined without much
effort, and have been discussed, but this seems good enough as a useful
unit already.
An elementary sample module is supplied.
Author: Álvaro Herrera
This patch is loosely based on prior patches submitted by KaiGai Kohei,
and unsubmitted code by Simon Riggs.
Reviewed by: KaiGai Kohei, Markus Wanner, Andres Freund,
Heikki Linnakangas, Simon Riggs, Amit Kapila
2012-12-06 18:57:52 +01:00
|
|
|
* free list by TwoPhaseShmemInit().
|
2011-06-12 06:07:04 +02:00
|
|
|
*/
|
|
|
|
if (i < MaxConnections)
|
|
|
|
{
|
|
|
|
/* PGPROC for normal backend, add to freeProcs list */
|
|
|
|
procs[i].links.next = (SHM_QUEUE *) ProcGlobal->freeProcs;
|
|
|
|
ProcGlobal->freeProcs = &procs[i];
|
|
|
|
}
|
2012-12-06 18:57:52 +01:00
|
|
|
else if (i < MaxConnections + autovacuum_max_workers + 1)
|
2011-06-12 06:07:04 +02:00
|
|
|
{
|
|
|
|
/* PGPROC for AV launcher/worker, add to autovacFreeProcs list */
|
|
|
|
procs[i].links.next = (SHM_QUEUE *) ProcGlobal->autovacFreeProcs;
|
|
|
|
ProcGlobal->autovacFreeProcs = &procs[i];
|
|
|
|
}
|
2012-12-06 18:57:52 +01:00
|
|
|
else if (i < MaxBackends)
|
|
|
|
{
|
|
|
|
/* PGPROC for bgworker, add to bgworkerFreeProcs list */
|
|
|
|
procs[i].links.next = (SHM_QUEUE *) ProcGlobal->bgworkerFreeProcs;
|
|
|
|
ProcGlobal->bgworkerFreeProcs = &procs[i];
|
|
|
|
}
|
2011-11-02 03:44:54 +01:00
|
|
|
|
|
|
|
/* Initialize myProcLocks[] shared memory queues. */
|
|
|
|
for (j = 0; j < NUM_LOCK_PARTITIONS; j++)
|
|
|
|
SHMQueueInit(&(procs[i].myProcLocks[j]));
|
2007-04-16 20:30:04 +02:00
|
|
|
}
|
|
|
|
|
2009-08-31 21:41:00 +02:00
|
|
|
/*
|
2012-06-10 21:20:04 +02:00
|
|
|
* Save pointers to the blocks of PGPROC structures reserved for auxiliary
|
|
|
|
* processes and prepared transactions.
|
2009-08-31 21:41:00 +02:00
|
|
|
*/
|
2011-06-12 06:07:04 +02:00
|
|
|
AuxiliaryProcs = &procs[MaxBackends];
|
2011-11-25 14:02:10 +01:00
|
|
|
PreparedXactProcs = &procs[MaxBackends + NUM_AUXILIARY_PROCS];
|
2006-01-04 22:06:32 +01:00
|
|
|
|
|
|
|
/* Create ProcStructLock spinlock, too */
|
|
|
|
ProcStructLock = (slock_t *) ShmemAlloc(sizeof(slock_t));
|
|
|
|
SpinLockInit(ProcStructLock);
|
1996-07-09 08:22:35 +02:00
|
|
|
}
|
|
|
|
|
2001-09-29 06:02:27 +02:00
|
|
|
/*
|
2002-05-05 02:03:29 +02:00
|
|
|
* InitProcess -- initialize a per-process data structure for this backend
|
1996-07-09 08:22:35 +02:00
|
|
|
*/
|
|
|
|
void
|
2000-11-29 00:27:57 +01:00
|
|
|
InitProcess(void)
|
1996-07-09 08:22:35 +02:00
|
|
|
{
|
2001-12-28 19:16:43 +01:00
|
|
|
/* use volatile pointer to prevent code rearrangement */
|
|
|
|
volatile PROC_HDR *procglobal = ProcGlobal;
|
2001-09-07 02:27:30 +02:00
|
|
|
|
|
|
|
/*
|
2006-01-04 22:06:32 +01:00
|
|
|
* ProcGlobal should be set up already (if we are a backend, we inherit
|
|
|
|
* this by fork() or EXEC_BACKEND mechanism from the postmaster).
|
2001-09-07 02:27:30 +02:00
|
|
|
*/
|
2001-12-28 19:16:43 +01:00
|
|
|
if (procglobal == NULL)
|
2003-07-25 00:04:15 +02:00
|
|
|
elog(PANIC, "proc header uninitialized");
|
2001-09-07 02:27:30 +02:00
|
|
|
|
|
|
|
if (MyProc != NULL)
|
2003-07-25 00:04:15 +02:00
|
|
|
elog(ERROR, "you already exist");
|
1997-09-07 07:04:48 +02:00
|
|
|
|
2001-09-04 23:42:17 +02:00
|
|
|
/*
|
2005-10-15 04:49:52 +02:00
|
|
|
* Try to get a proc struct from the free list. If this fails, we must be
|
|
|
|
* out of PGPROC structures (not to mention semaphores).
|
2005-10-11 22:41:32 +02:00
|
|
|
*
|
2005-10-15 04:49:52 +02:00
|
|
|
* While we are holding the ProcStructLock, also copy the current shared
|
|
|
|
* estimate of spins_per_delay to local storage.
|
2001-09-04 23:42:17 +02:00
|
|
|
*/
|
2001-09-29 06:02:27 +02:00
|
|
|
SpinLockAcquire(ProcStructLock);
|
1997-09-07 07:04:48 +02:00
|
|
|
|
2005-10-11 22:41:32 +02:00
|
|
|
set_spins_per_delay(procglobal->spins_per_delay);
|
|
|
|
|
2009-08-31 21:41:00 +02:00
|
|
|
if (IsAnyAutoVacuumProcess())
|
2008-11-02 22:24:52 +01:00
|
|
|
MyProc = procglobal->autovacFreeProcs;
|
2012-12-06 18:57:52 +01:00
|
|
|
else if (IsBackgroundWorker)
|
|
|
|
MyProc = procglobal->bgworkerFreeProcs;
|
2007-04-16 20:30:04 +02:00
|
|
|
else
|
2008-11-02 22:24:52 +01:00
|
|
|
MyProc = procglobal->freeProcs;
|
1997-09-07 07:04:48 +02:00
|
|
|
|
2008-11-02 22:24:52 +01:00
|
|
|
if (MyProc != NULL)
|
1996-07-09 08:22:35 +02:00
|
|
|
{
|
2009-08-31 21:41:00 +02:00
|
|
|
if (IsAnyAutoVacuumProcess())
|
2008-11-02 22:24:52 +01:00
|
|
|
procglobal->autovacFreeProcs = (PGPROC *) MyProc->links.next;
|
2012-12-06 18:57:52 +01:00
|
|
|
else if (IsBackgroundWorker)
|
|
|
|
procglobal->bgworkerFreeProcs = (PGPROC *) MyProc->links.next;
|
2007-04-16 20:30:04 +02:00
|
|
|
else
|
2008-11-02 22:24:52 +01:00
|
|
|
procglobal->freeProcs = (PGPROC *) MyProc->links.next;
|
2001-09-29 06:02:27 +02:00
|
|
|
SpinLockRelease(ProcStructLock);
|
1997-09-07 07:04:48 +02:00
|
|
|
}
|
|
|
|
else
|
|
|
|
{
|
|
|
|
/*
|
2005-10-15 04:49:52 +02:00
|
|
|
* If we reach here, all the PGPROCs are in use. This is one of the
|
|
|
|
* possible places to detect "too many backends", so give the standard
|
2007-04-16 20:30:04 +02:00
|
|
|
* error message. XXX do we need to give a different failure message
|
|
|
|
* in the autovacuum case?
|
1997-09-07 07:04:48 +02:00
|
|
|
*/
|
2001-09-29 06:02:27 +02:00
|
|
|
SpinLockRelease(ProcStructLock);
|
2003-07-25 00:04:15 +02:00
|
|
|
ereport(FATAL,
|
|
|
|
(errcode(ERRCODE_TOO_MANY_CONNECTIONS),
|
|
|
|
errmsg("sorry, too many clients already")));
|
1996-07-09 08:22:35 +02:00
|
|
|
}
|
2011-11-25 14:02:10 +01:00
|
|
|
MyPgXact = &ProcGlobal->allPgXact[MyProc->pgprocno];
|
1997-09-07 07:04:48 +02:00
|
|
|
|
Install a "dead man switch" to allow the postmaster to detect cases where
a backend has done exit(0) or exit(1) without having disengaged itself
from shared memory. We are at risk for this whenever third-party code is
loaded into a backend, since such code might not know it's supposed to go
through proc_exit() instead. Also, it is reported that under Windows
there are ways to externally kill a process that cause the status code
returned to the postmaster to be indistinguishable from a voluntary exit
(thank you, Microsoft). If this does happen then the system is probably
hosed --- for instance, the dead session might still be holding locks.
So the best recovery method is to treat this like a backend crash.
The dead man switch is armed for a particular child process when it
acquires a regular PGPROC, and disarmed when the PGPROC is released;
these should be the first and last touches of shared memory resources
in a backend, or close enough anyway. This choice means there is no
coverage for auxiliary processes, but I doubt we need that, since they
shouldn't be executing any user-provided code anyway.
This patch also improves the management of the EXEC_BACKEND
ShmemBackendArray array a bit, by reducing search costs.
Although this problem is of long standing, the lack of field complaints
seems to mean it's not critical enough to risk back-patching; at least
not till we get some more testing of this mechanism.
2009-05-05 21:59:00 +02:00
|
|
|
/*
|
|
|
|
* Now that we have a PGPROC, mark ourselves as an active postmaster
|
2009-06-11 16:49:15 +02:00
|
|
|
* child; this is so that the postmaster can detect it if we exit without
|
2009-08-31 21:41:00 +02:00
|
|
|
* cleaning up. (XXX autovac launcher currently doesn't participate in
|
|
|
|
* this; it probably should.)
|
2009-05-05 21:59:00 +02:00
|
|
|
*/
|
2009-08-31 21:41:00 +02:00
|
|
|
if (IsUnderPostmaster && !IsAutoVacuumLauncherProcess())
|
2010-08-23 19:20:01 +02:00
|
|
|
MarkPostmasterChildActive();
|
2009-05-05 21:59:00 +02:00
|
|
|
|
1996-07-09 08:22:35 +02:00
|
|
|
/*
|
2012-06-10 21:20:04 +02:00
|
|
|
* Initialize all fields of MyProc, except for those previously
|
|
|
|
* initialized by InitProcGlobal.
|
2001-01-14 06:08:17 +01:00
|
|
|
*/
|
2001-01-22 23:30:06 +01:00
|
|
|
SHMQueueElemInit(&(MyProc->links));
|
2004-07-17 05:32:14 +02:00
|
|
|
MyProc->waitStatus = STATUS_OK;
|
2007-09-05 20:10:48 +02:00
|
|
|
MyProc->lxid = InvalidLocalTransactionId;
|
2012-11-29 23:15:52 +01:00
|
|
|
MyProc->fpVXIDLock = false;
|
|
|
|
MyProc->fpLocalTransactionId = InvalidLocalTransactionId;
|
2011-11-25 14:02:10 +01:00
|
|
|
MyPgXact->xid = InvalidTransactionId;
|
|
|
|
MyPgXact->xmin = InvalidTransactionId;
|
2001-09-29 06:02:27 +02:00
|
|
|
MyProc->pid = MyProcPid;
|
2007-09-05 20:10:48 +02:00
|
|
|
/* backendId, databaseId and roleId will be filled in later */
|
|
|
|
MyProc->backendId = InvalidBackendId;
|
2006-01-04 22:06:32 +01:00
|
|
|
MyProc->databaseId = InvalidOid;
|
2005-07-31 19:19:22 +02:00
|
|
|
MyProc->roleId = InvalidOid;
|
2012-12-03 14:13:53 +01:00
|
|
|
MyPgXact->delayChkpt = false;
|
2011-11-25 14:02:10 +01:00
|
|
|
MyPgXact->vacuumFlags = 0;
|
2009-08-31 21:41:00 +02:00
|
|
|
/* NB -- autovac launcher intentionally does not set IS_AUTOVACUUM */
|
2007-10-24 22:55:36 +02:00
|
|
|
if (IsAutoVacuumWorkerProcess())
|
2011-11-25 14:02:10 +01:00
|
|
|
MyPgXact->vacuumFlags |= PROC_IS_AUTOVACUUM;
|
2001-09-29 06:02:27 +02:00
|
|
|
MyProc->lwWaiting = false;
|
Make group commit more effective.
When a backend needs to flush the WAL, and someone else is already flushing
the WAL, wait until it releases the WALInsertLock and check if we still need
to do the flush or if the other backend already did the work for us, before
acquiring WALInsertLock. This helps group commit, because when the WAL flush
finishes, all the backends that were waiting for it can be woken up in one
go, and they can all concurrently observe that they're done, rather than
waking them up one by one in a cascading fashion.
This is based on a new LWLock function, LWLockWaitUntilFree(), which has
peculiar semantics. If the lock is immediately free, it grabs the lock and
returns true. If it's not free, it waits until it is released, but then
returns false without grabbing the lock. This is used in XLogFlush(), so
that when the lock is acquired, the backend flushes the WAL, but if it's
not, the backend first checks the current flush location before retrying.
Original patch and benchmarking by Peter Geoghegan and Simon Riggs, although
this patch as committed ended up being very different from that.
2012-01-30 15:40:58 +01:00
|
|
|
MyProc->lwWaitMode = 0;
|
2001-01-22 23:30:06 +01:00
|
|
|
MyProc->waitLock = NULL;
|
2004-08-27 19:07:42 +02:00
|
|
|
MyProc->waitProcLock = NULL;
|
2011-11-02 03:44:54 +01:00
|
|
|
#ifdef USE_ASSERT_CHECKING
|
|
|
|
{
|
2012-06-10 21:20:04 +02:00
|
|
|
int i;
|
2011-11-02 03:44:54 +01:00
|
|
|
|
|
|
|
/* Last process should have released all locks. */
|
|
|
|
for (i = 0; i < NUM_LOCK_PARTITIONS; i++)
|
|
|
|
Assert(SHMQueueEmpty(&(MyProc->myProcLocks[i])));
|
|
|
|
}
|
|
|
|
#endif
|
2010-01-16 11:05:59 +01:00
|
|
|
MyProc->recoveryConflictPending = false;
|
1997-09-07 07:04:48 +02:00
|
|
|
|
2011-08-10 18:20:30 +02:00
|
|
|
/* Initialize fields for sync rep */
|
2012-06-24 17:51:37 +02:00
|
|
|
MyProc->waitLSN = 0;
|
2011-03-06 23:49:16 +01:00
|
|
|
MyProc->syncRepState = SYNC_REP_NOT_WAITING;
|
|
|
|
SHMQueueElemInit(&(MyProc->syncRepLinks));
|
2011-08-10 18:20:30 +02:00
|
|
|
|
|
|
|
/*
|
2015-01-14 18:45:22 +01:00
|
|
|
* Acquire ownership of the PGPROC's latch, so that we can use WaitLatch
|
|
|
|
* on it. That allows us to repoint the process latch, which so far
|
|
|
|
	 * points to a process-local one, to the shared one.
|
2011-08-10 18:20:30 +02:00
|
|
|
*/
|
|
|
|
OwnLatch(&MyProc->procLatch);
|
2015-01-14 18:45:22 +01:00
|
|
|
SwitchToSharedLatch();
|
2011-03-06 23:49:16 +01:00
|
|
|
|
2005-05-19 23:35:48 +02:00
|
|
|
/*
|
2006-01-04 22:06:32 +01:00
|
|
|
* We might be reusing a semaphore that belonged to a failed process. So
|
2014-05-06 18:12:18 +02:00
|
|
|
* be careful and reinitialize its value here. (This is not strictly
|
2006-04-14 05:38:56 +02:00
|
|
|
* necessary anymore, but seems like a good idea for cleanliness.)
|
2005-05-19 23:35:48 +02:00
|
|
|
*/
|
2006-01-04 22:06:32 +01:00
|
|
|
PGSemaphoreReset(&MyProc->sem);
|
2005-05-19 23:35:48 +02:00
|
|
|
|
2001-01-25 04:31:16 +01:00
|
|
|
/*
|
2001-09-29 06:02:27 +02:00
|
|
|
* Arrange to clean up at backend exit.
|
2001-01-25 04:31:16 +01:00
|
|
|
*/
|
2001-01-14 06:08:17 +01:00
|
|
|
on_shmem_exit(ProcKill, 0);
|
2001-01-25 04:31:16 +01:00
|
|
|
|
|
|
|
/*
|
2005-10-15 04:49:52 +02:00
|
|
|
* Now that we have a PGPROC, we could try to acquire locks, so initialize
|
2014-06-30 09:13:48 +02:00
|
|
|
* local state needed for LWLocks, and the deadlock checker.
|
2001-01-25 04:31:16 +01:00
|
|
|
*/
|
2014-06-30 09:13:48 +02:00
|
|
|
InitLWLockAccess();
|
2001-01-25 04:31:16 +01:00
|
|
|
InitDeadLockChecking();
|
2001-01-14 06:08:17 +01:00
|
|
|
}
|
|
|
|
|
2006-01-04 22:06:32 +01:00
|
|
|
/*
|
|
|
|
* InitProcessPhase2 -- make MyProc visible in the shared ProcArray.
|
|
|
|
*
|
|
|
|
* This is separate from InitProcess because we can't acquire LWLocks until
|
2009-08-12 22:53:31 +02:00
|
|
|
* we've created a PGPROC, but in the EXEC_BACKEND case ProcArrayAdd won't
|
|
|
|
* work until after we've done CreateSharedMemoryAndSemaphores.
|
2006-01-04 22:06:32 +01:00
|
|
|
*/
|
|
|
|
void
|
|
|
|
InitProcessPhase2(void)
|
|
|
|
{
|
|
|
|
Assert(MyProc != NULL);
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Add our PGPROC to the PGPROC array in shared memory.
|
|
|
|
*/
|
|
|
|
ProcArrayAdd(MyProc);
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Arrange to clean that up at backend exit.
|
|
|
|
*/
|
|
|
|
on_shmem_exit(RemoveProcFromArray, 0);
|
|
|
|
}
|
|
|
|
|
2001-09-29 06:02:27 +02:00
|
|
|
/*
|
2007-03-07 14:35:03 +01:00
|
|
|
* InitAuxiliaryProcess -- create a per-auxiliary-process data structure
|
2001-09-29 06:02:27 +02:00
|
|
|
*
|
2005-08-08 05:12:16 +02:00
|
|
|
* This is called by bgwriter and similar processes so that they will have a
|
|
|
|
* MyProc value that's real enough to let them wait for LWLocks. The PGPROC
|
2006-01-04 22:06:32 +01:00
|
|
|
* and sema that are assigned are one of the extra ones created during
|
2005-08-08 05:12:16 +02:00
|
|
|
* InitProcGlobal.
|
2004-05-30 00:48:23 +02:00
|
|
|
*
|
2007-03-07 14:35:03 +01:00
|
|
|
* Auxiliary processes are presently not expected to wait for real (lockmgr)
|
2006-01-04 22:06:32 +01:00
|
|
|
* locks, so we need not set up the deadlock checker. They are never added
|
2014-05-06 18:12:18 +02:00
|
|
|
* to the ProcArray or the sinval messaging mechanism, either. They also
|
2007-09-05 20:10:48 +02:00
|
|
|
* don't get a VXID assigned, since this is only useful when we actually
|
|
|
|
* hold lockmgr locks.
|
Allow read only connections during recovery, known as Hot Standby.
Enabled by recovery_connections = on (default) and forcing archive recovery
using a recovery.conf. Recovery processing now emulates the original
transactions as they are replayed, providing full locking and MVCC behaviour
for read only queries. Recovery must enter consistent state before
connections are allowed, so there is a delay, typically short, before
connections succeed. Replay of recovering transactions can conflict and in
some cases deadlock with queries during recovery; these result in query
cancellation after max_standby_delay seconds have expired. Infrastructure
changes have minor effects on normal running, though introduce four new
types of WAL record.
New test mode "make standbycheck" allows regression tests of static command
behaviour on a standby server while in recovery. Typical and extreme dynamic
behaviours have been checked via code inspection and manual testing. Few
port specific behaviours have been utilised, though primary testing has been
on Linux only so far.
This commit is the basic patch. Additional changes will follow in this
release to enhance some aspects of behaviour, notably improved handling of
conflicts, deadlock detection and query cancellation. Changes to VACUUM FULL
are also required.
Simon Riggs, with significant and lengthy review by Heikki Linnakangas,
including streamlined redesign of snapshot creation and two-phase commit.
Important contributions from Florian Pflug, Mark Kirkwood, Merlin Moncure,
Greg Stark, Gianni Ciolli, Gabriele Bartolini, Hannu Krosing, Robert Haas,
Tatsuo Ishii, Hiroyuki Yamada plus support and feedback from many other
community members.
2009-12-19 02:32:45 +01:00
|
|
|
*
|
|
|
|
* Startup process however uses locks but never waits for them in the
|
|
|
|
* normal backend sense. Startup process also takes part in sinval messaging
|
|
|
|
* as a sendOnly process, so never reads messages from sinval queue. So
|
|
|
|
* Startup process does have a VXID and does show up in pg_locks.
|
2001-09-29 06:02:27 +02:00
|
|
|
*/
|
|
|
|
void
|
2007-03-07 14:35:03 +01:00
|
|
|
InitAuxiliaryProcess(void)
|
2001-09-29 06:02:27 +02:00
|
|
|
{
|
2007-03-07 14:35:03 +01:00
|
|
|
PGPROC *auxproc;
|
2006-01-04 22:06:32 +01:00
|
|
|
int proctype;
|
2003-11-19 16:55:08 +01:00
|
|
|
|
2001-09-29 06:02:27 +02:00
|
|
|
/*
|
2006-01-04 22:06:32 +01:00
|
|
|
* ProcGlobal should be set up already (if we are a backend, we inherit
|
|
|
|
* this by fork() or EXEC_BACKEND mechanism from the postmaster).
|
2001-09-29 06:02:27 +02:00
|
|
|
*/
|
2007-03-07 14:35:03 +01:00
|
|
|
if (ProcGlobal == NULL || AuxiliaryProcs == NULL)
|
2003-07-25 00:04:15 +02:00
|
|
|
elog(PANIC, "proc header uninitialized");
|
2001-09-29 06:02:27 +02:00
|
|
|
|
|
|
|
if (MyProc != NULL)
|
2003-07-25 00:04:15 +02:00
|
|
|
elog(ERROR, "you already exist");
|
2001-09-29 06:02:27 +02:00
|
|
|
|
2005-10-11 22:41:32 +02:00
|
|
|
/*
|
2006-01-04 22:06:32 +01:00
|
|
|
* We use the ProcStructLock to protect assignment and releasing of
|
2007-03-07 14:35:03 +01:00
|
|
|
* AuxiliaryProcs entries.
|
2005-10-11 22:41:32 +02:00
|
|
|
*
|
2005-10-15 04:49:52 +02:00
|
|
|
* While we are holding the ProcStructLock, also copy the current shared
|
|
|
|
* estimate of spins_per_delay to local storage.
|
2005-10-11 22:41:32 +02:00
|
|
|
*/
|
|
|
|
SpinLockAcquire(ProcStructLock);
|
|
|
|
|
|
|
|
set_spins_per_delay(ProcGlobal->spins_per_delay);
|
|
|
|
|
2001-09-29 06:02:27 +02:00
|
|
|
/*
|
2007-03-07 14:35:03 +01:00
|
|
|
* Find a free auxproc ... *big* trouble if there isn't one ...
|
2001-09-29 06:02:27 +02:00
|
|
|
*/
|
2007-03-07 14:35:03 +01:00
|
|
|
for (proctype = 0; proctype < NUM_AUXILIARY_PROCS; proctype++)
|
2006-01-04 22:06:32 +01:00
|
|
|
{
|
2007-03-07 14:35:03 +01:00
|
|
|
auxproc = &AuxiliaryProcs[proctype];
|
|
|
|
if (auxproc->pid == 0)
|
2006-01-04 22:06:32 +01:00
|
|
|
break;
|
|
|
|
}
|
2007-03-07 14:35:03 +01:00
|
|
|
if (proctype >= NUM_AUXILIARY_PROCS)
|
2005-10-11 22:41:32 +02:00
|
|
|
{
|
|
|
|
SpinLockRelease(ProcStructLock);
|
2007-03-07 14:35:03 +01:00
|
|
|
elog(FATAL, "all AuxiliaryProcs are in use");
|
2005-10-11 22:41:32 +02:00
|
|
|
}
|
2001-09-29 06:02:27 +02:00
|
|
|
|
2007-03-07 14:35:03 +01:00
|
|
|
/* Mark auxiliary proc as in use by me */
|
2006-01-04 22:06:32 +01:00
|
|
|
/* use volatile pointer to prevent code rearrangement */
|
2007-03-07 14:35:03 +01:00
|
|
|
((volatile PGPROC *) auxproc)->pid = MyProcPid;
|
2006-01-04 22:06:32 +01:00
|
|
|
|
2007-03-07 14:35:03 +01:00
|
|
|
MyProc = auxproc;
|
2011-11-25 14:02:10 +01:00
|
|
|
MyPgXact = &ProcGlobal->allPgXact[auxproc->pgprocno];
|
2005-10-11 22:41:32 +02:00
|
|
|
|
|
|
|
SpinLockRelease(ProcStructLock);
|
|
|
|
|
2001-09-29 06:02:27 +02:00
|
|
|
/*
|
2012-06-10 21:20:04 +02:00
|
|
|
* Initialize all fields of MyProc, except for those previously
|
|
|
|
* initialized by InitProcGlobal.
|
2001-09-29 06:02:27 +02:00
|
|
|
*/
|
|
|
|
SHMQueueElemInit(&(MyProc->links));
|
2004-07-17 05:32:14 +02:00
|
|
|
MyProc->waitStatus = STATUS_OK;
|
2007-09-05 20:10:48 +02:00
|
|
|
MyProc->lxid = InvalidLocalTransactionId;
|
2012-11-29 23:15:52 +01:00
|
|
|
MyProc->fpVXIDLock = false;
|
|
|
|
MyProc->fpLocalTransactionId = InvalidLocalTransactionId;
|
2011-11-25 14:02:10 +01:00
|
|
|
MyPgXact->xid = InvalidTransactionId;
|
|
|
|
MyPgXact->xmin = InvalidTransactionId;
|
2007-09-05 20:10:48 +02:00
|
|
|
MyProc->backendId = InvalidBackendId;
|
2006-01-04 22:06:32 +01:00
|
|
|
MyProc->databaseId = InvalidOid;
|
2005-07-31 19:19:22 +02:00
|
|
|
MyProc->roleId = InvalidOid;
|
2012-12-03 14:13:53 +01:00
|
|
|
MyPgXact->delayChkpt = false;
|
2011-11-25 14:02:10 +01:00
|
|
|
MyPgXact->vacuumFlags = 0;
|
2001-09-29 06:02:27 +02:00
|
|
|
MyProc->lwWaiting = false;
|
2012-01-30 15:40:58 +01:00
|
|
|
MyProc->lwWaitMode = 0;
|
2001-09-29 06:02:27 +02:00
|
|
|
MyProc->waitLock = NULL;
|
2004-08-27 19:07:42 +02:00
|
|
|
MyProc->waitProcLock = NULL;
|
2011-11-02 03:44:54 +01:00
|
|
|
#ifdef USE_ASSERT_CHECKING
|
|
|
|
{
|
2012-06-10 21:20:04 +02:00
|
|
|
int i;
|
2011-11-02 03:44:54 +01:00
|
|
|
|
|
|
|
/* Last process should have released all locks. */
|
|
|
|
for (i = 0; i < NUM_LOCK_PARTITIONS; i++)
|
|
|
|
Assert(SHMQueueEmpty(&(MyProc->myProcLocks[i])));
|
|
|
|
}
|
|
|
|
#endif
|
2001-09-29 06:02:27 +02:00
|
|
|
|
2011-08-10 18:20:30 +02:00
|
|
|
/*
|
2015-01-14 18:45:22 +01:00
|
|
|
* Acquire ownership of the PGPROC's latch, so that we can use WaitLatch
|
|
|
|
* on it. That allows us to repoint the process latch, which so far
|
|
|
|
 * points to the process-local one, to the shared one.
|
2011-08-10 18:20:30 +02:00
|
|
|
*/
|
|
|
|
OwnLatch(&MyProc->procLatch);
|
2015-01-14 18:45:22 +01:00
|
|
|
SwitchToSharedLatch();
|
2011-08-10 18:20:30 +02:00
|
|
|
|
2001-09-29 06:02:27 +02:00
|
|
|
/*
|
2005-10-15 04:49:52 +02:00
|
|
|
* We might be reusing a semaphore that belonged to a failed process. So
|
2014-05-06 18:12:18 +02:00
|
|
|
* be careful and reinitialize its value here. (This is not strictly
|
2006-04-14 05:38:56 +02:00
|
|
|
* necessary anymore, but seems like a good idea for cleanliness.)
|
2001-09-29 06:02:27 +02:00
|
|
|
*/
|
2002-05-05 02:03:29 +02:00
|
|
|
PGSemaphoreReset(&MyProc->sem);
|
2006-01-04 22:06:32 +01:00
|
|
|
|
|
|
|
/*
|
|
|
|
* Arrange to clean up at process exit.
|
|
|
|
*/
|
2007-03-07 14:35:03 +01:00
|
|
|
on_shmem_exit(AuxiliaryProcKill, Int32GetDatum(proctype));
|
1996-07-09 08:22:35 +02:00
|
|
|
}
|
|
|
|
|
Allow read only connections during recovery, known as Hot Standby.
Enabled by recovery_connections = on (default) and forcing archive recovery using a recovery.conf. Recovery processing now emulates the original transactions as they are replayed, providing full locking and MVCC behaviour for read only queries. Recovery must enter consistent state before connections are allowed, so there is a delay, typically short, before connections succeed. Replay of recovering transactions can conflict and in some cases deadlock with queries during recovery; these result in query cancellation after max_standby_delay seconds have expired. Infrastructure changes have minor effects on normal running, though they introduce four new types of WAL record.
New test mode "make standbycheck" allows regression tests of static command behaviour on a standby server while in recovery. Typical and extreme dynamic behaviours have been checked via code inspection and manual testing. Few port specific behaviours have been utilised, though primary testing has been on Linux only so far.
This commit is the basic patch. Additional changes will follow in this release to enhance some aspects of behaviour, notably improved handling of conflicts, deadlock detection and query cancellation. Changes to VACUUM FULL are also required.
Simon Riggs, with significant and lengthy review by Heikki Linnakangas, including streamlined redesign of snapshot creation and two-phase commit.
Important contributions from Florian Pflug, Mark Kirkwood, Merlin Moncure, Greg Stark, Gianni Ciolli, Gabriele Bartolini, Hannu Krosing, Robert Haas, Tatsuo Ishii, Hiroyuki Yamada plus support and feedback from many other community members.
2009-12-19 02:32:45 +01:00
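The cancellation rule the Hot Standby message above describes can be reduced to a one-line predicate: a query that conflicts with WAL replay is cancelled once it has held replay up for longer than max_standby_delay. This is a toy sketch; the names are stand-ins, not the server's real variables.

```c
#include <stdbool.h>

/* Toy model of the standby query-cancellation rule: cancel once the
 * conflict has outlasted max_standby_delay seconds.  Illustrative only. */
static bool standby_should_cancel(double conflict_wait_secs,
                                  double max_standby_delay_secs)
{
    return conflict_wait_secs > max_standby_delay_secs;
}
```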
|
|
|
/*
|
|
|
|
* Record the PID and PGPROC structures for the Startup process, for use in
|
|
|
|
* ProcSendSignal(). See comments there for further explanation.
|
|
|
|
*/
|
|
|
|
void
|
|
|
|
PublishStartupProcessInformation(void)
|
|
|
|
{
|
|
|
|
/* use volatile pointer to prevent code rearrangement */
|
|
|
|
volatile PROC_HDR *procglobal = ProcGlobal;
|
|
|
|
|
|
|
|
SpinLockAcquire(ProcStructLock);
|
|
|
|
|
|
|
|
procglobal->startupProc = MyProc;
|
|
|
|
procglobal->startupProcPid = MyProcPid;
|
|
|
|
|
|
|
|
SpinLockRelease(ProcStructLock);
|
|
|
|
}
|
|
|
|
|
2010-01-23 17:37:12 +01:00
|
|
|
/*
|
|
|
|
 * Used from bufmgr to share the value of the buffer that Startup waits on,
|
|
|
|
* or to reset the value to "not waiting" (-1). This allows processing
|
|
|
|
* of recovery conflicts for buffer pins. Set is made before backends look
|
|
|
|
 * at this value, so locking is not required, especially since the set is
|
|
|
|
* an atomic integer set operation.
|
|
|
|
*/
|
|
|
|
void
|
|
|
|
SetStartupBufferPinWaitBufId(int bufid)
|
|
|
|
{
|
|
|
|
/* use volatile pointer to prevent code rearrangement */
|
|
|
|
volatile PROC_HDR *procglobal = ProcGlobal;
|
|
|
|
|
|
|
|
procglobal->startupBufferPinWaitBufId = bufid;
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Used by backends when they receive a request to check for buffer pin waits.
|
|
|
|
*/
|
|
|
|
int
|
|
|
|
GetStartupBufferPinWaitBufId(void)
|
|
|
|
{
|
|
|
|
/* use volatile pointer to prevent code rearrangement */
|
|
|
|
volatile PROC_HDR *procglobal = ProcGlobal;
|
|
|
|
|
2011-08-02 19:23:52 +02:00
|
|
|
return procglobal->startupBufferPinWaitBufId;
|
2010-01-23 17:37:12 +01:00
|
|
|
}
|
|
|
|
|
2005-06-18 00:32:51 +02:00
|
|
|
/*
|
|
|
|
* Check whether there are at least N free PGPROC objects.
|
|
|
|
*
|
|
|
|
* Note: this is designed on the assumption that N will generally be small.
|
|
|
|
*/
|
|
|
|
bool
|
|
|
|
HaveNFreeProcs(int n)
|
|
|
|
{
|
|
|
|
PGPROC *proc;
|
2005-10-15 04:49:52 +02:00
|
|
|
|
2005-06-18 00:32:51 +02:00
|
|
|
/* use volatile pointer to prevent code rearrangement */
|
|
|
|
volatile PROC_HDR *procglobal = ProcGlobal;
|
|
|
|
|
|
|
|
SpinLockAcquire(ProcStructLock);
|
|
|
|
|
2008-11-02 22:24:52 +01:00
|
|
|
proc = procglobal->freeProcs;
|
2005-06-18 00:32:51 +02:00
|
|
|
|
2008-11-02 22:24:52 +01:00
|
|
|
while (n > 0 && proc != NULL)
|
2005-06-18 00:32:51 +02:00
|
|
|
{
|
2008-11-02 22:24:52 +01:00
|
|
|
proc = (PGPROC *) proc->links.next;
|
2005-06-18 00:32:51 +02:00
|
|
|
n--;
|
|
|
|
}
|
|
|
|
|
|
|
|
SpinLockRelease(ProcStructLock);
|
|
|
|
|
|
|
|
return (n <= 0);
|
|
|
|
}
|
|
|
|
|
Improve control logic for bgwriter hibernation mode.
Commit 6d90eaaa89a007e0d365f49d6436f35d2392cfeb added a hibernation mode
to the bgwriter to reduce the server's idle-power consumption. However,
its interaction with the detailed behavior of BgBufferSync's feedback
control loop wasn't very well thought out. That control loop depends
primarily on the rate of buffer allocation, not the rate of buffer
dirtying, so the hibernation mode has to be designed to operate only when
no new buffer allocations are happening. Also, the check for whether the
system is effectively idle was not quite right and would fail to detect
a constant low level of activity, thus allowing the bgwriter to go into
hibernation mode in a way that would let the cycle time vary quite a bit,
possibly further confusing the feedback loop. To fix, move the wakeup
support from MarkBufferDirty and SetBufferCommitInfoNeedsSave into
StrategyGetBuffer, and prevent the bgwriter from entering hibernation mode
unless no buffer allocations have happened recently.
In addition, fix the delaying logic to remove the problem of possibly not
responding to signals promptly, which was basically caused by trying to use
the process latch's is_set flag for multiple purposes. I can't prove it
but I'm suspicious that that hack was responsible for the intermittent
"postmaster does not shut down" failures we've been seeing in the buildfarm
lately. In any case it did nothing to improve the readability or
robustness of the code.
In passing, express the hibernation sleep time as a multiplier on
BgWriterDelay, not a constant. I'm not sure whether there's any value in
exposing the longer sleep time as an independently configurable setting,
but we can at least make it act like this for little extra code.
2012-05-10 05:36:01 +02:00
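The hibernation rule the commit message above settles on can be sketched in a few lines: the bgwriter may only stretch its sleep when the buffer-allocation counter has not advanced since the previous cycle, and the long sleep is a multiplier on BgWriterDelay rather than an independent constant. The factor and names below are assumptions for illustration, not the real GUC values.

```c
/* Sketch of the bgwriter hibernation decision: hibernate only when no
 * buffer allocations happened since the last cycle.  The delay and
 * factor values are illustrative assumptions. */

#define MODEL_BGWRITER_DELAY_MS  200
#define MODEL_HIBERNATE_FACTOR    50   /* long sleep = factor * delay */

static int choose_sleep_ms(unsigned prev_alloc_count,
                           unsigned cur_alloc_count)
{
    /* Allocation counter advanced: the system is not idle. */
    if (cur_alloc_count != prev_alloc_count)
        return MODEL_BGWRITER_DELAY_MS;
    return MODEL_BGWRITER_DELAY_MS * MODEL_HIBERNATE_FACTOR;
}
```

Keying the decision to allocations rather than dirtying matches BgBufferSync's feedback loop, which is driven primarily by the buffer-allocation rate.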
|
|
|
/*
|
|
|
|
* Check if the current process is awaiting a lock.
|
|
|
|
*/
|
2010-02-13 02:32:20 +01:00
|
|
|
bool
|
|
|
|
IsWaitingForLock(void)
|
|
|
|
{
|
|
|
|
if (lockAwaited == NULL)
|
|
|
|
return false;
|
|
|
|
|
|
|
|
return true;
|
|
|
|
}
|
|
|
|
|
2001-01-14 06:08:17 +01:00
|
|
|
/*
|
2012-04-18 17:17:30 +02:00
|
|
|
 * Cancel any pending wait for a lock when aborting a transaction, and revert
|
|
|
|
* any strong lock count acquisition for a lock being acquired.
|
2001-01-14 06:08:17 +01:00
|
|
|
*
|
|
|
|
* (Normally, this would only happen if we accept a cancel/die
|
2012-04-18 17:17:30 +02:00
|
|
|
* interrupt while waiting; but an ereport(ERROR) before or during the lock
|
|
|
|
* wait is within the realm of possibility, too.)
|
2001-01-14 06:08:17 +01:00
|
|
|
*/
|
2008-01-26 20:55:08 +01:00
|
|
|
void
|
2012-04-18 17:17:30 +02:00
|
|
|
LockErrorCleanup(void)
|
2001-01-14 06:08:17 +01:00
|
|
|
{
|
2014-01-27 17:07:44 +01:00
|
|
|
LWLock *partitionLock;
|
2013-03-17 04:22:17 +01:00
|
|
|
DisableTimeoutParams timeouts[2];
|
2005-12-11 22:02:18 +01:00
|
|
|
|
2012-04-18 17:17:30 +02:00
|
|
|
AbortStrongLockAcquire();
|
|
|
|
|
2001-01-14 06:08:17 +01:00
|
|
|
/* Nothing to do if we weren't waiting for a lock */
|
2005-12-11 22:02:18 +01:00
|
|
|
if (lockAwaited == NULL)
|
2008-01-26 20:55:08 +01:00
|
|
|
return;
|
2001-01-16 21:59:34 +01:00
|
|
|
|
2013-03-17 04:22:17 +01:00
|
|
|
/*
|
|
|
|
* Turn off the deadlock and lock timeout timers, if they are still
|
|
|
|
* running (see ProcSleep). Note we must preserve the LOCK_TIMEOUT
|
|
|
|
* indicator flag, since this function is executed before
|
|
|
|
* ProcessInterrupts when responding to SIGINT; else we'd lose the
|
|
|
|
* knowledge that the SIGINT came from a lock timeout and not an external
|
|
|
|
* source.
|
|
|
|
*/
|
|
|
|
timeouts[0].id = DEADLOCK_TIMEOUT;
|
|
|
|
timeouts[0].keep_indicator = false;
|
|
|
|
timeouts[1].id = LOCK_TIMEOUT;
|
|
|
|
timeouts[1].keep_indicator = true;
|
|
|
|
disable_timeouts(timeouts, 2);
|
2001-01-14 06:08:17 +01:00
|
|
|
|
|
|
|
/* Unlink myself from the wait queue, if on it (might not be anymore!) */
|
2006-07-24 01:08:46 +02:00
|
|
|
partitionLock = LockHashPartitionLock(lockAwaited->hashcode);
|
2005-12-11 22:02:18 +01:00
|
|
|
LWLockAcquire(partitionLock, LW_EXCLUSIVE);
|
2004-07-17 05:32:14 +02:00
|
|
|
|
2008-11-02 22:24:52 +01:00
|
|
|
if (MyProc->links.next != NULL)
|
2004-07-17 05:32:14 +02:00
|
|
|
{
|
|
|
|
/* We could not have been granted the lock yet */
|
2006-07-24 01:08:46 +02:00
|
|
|
RemoveFromWaitQueue(MyProc, lockAwaited->hashcode);
|
2004-07-17 05:32:14 +02:00
|
|
|
}
|
|
|
|
else
|
|
|
|
{
|
|
|
|
/*
|
|
|
|
* Somebody kicked us off the lock queue already. Perhaps they
|
2005-10-15 04:49:52 +02:00
|
|
|
* granted us the lock, or perhaps they detected a deadlock. If they
|
|
|
|
* did grant us the lock, we'd better remember it in our local lock
|
|
|
|
* table.
|
2004-07-17 05:32:14 +02:00
|
|
|
*/
|
2004-08-27 19:07:42 +02:00
|
|
|
if (MyProc->waitStatus == STATUS_OK)
|
|
|
|
GrantAwaitedLock();
|
2004-07-17 05:32:14 +02:00
|
|
|
}
|
|
|
|
|
2005-12-11 22:02:18 +01:00
|
|
|
lockAwaited = NULL;
|
2004-07-17 05:32:14 +02:00
|
|
|
|
2005-12-11 22:02:18 +01:00
|
|
|
LWLockRelease(partitionLock);
|
2000-02-21 03:42:37 +01:00
|
|
|
|
2001-01-14 06:08:17 +01:00
|
|
|
/*
|
2006-04-14 05:38:56 +02:00
|
|
|
* We used to do PGSemaphoreReset() here to ensure that our proc's wait
|
2014-05-06 18:12:18 +02:00
|
|
|
* semaphore is reset to zero. This prevented a leftover wakeup signal
|
2006-10-04 02:30:14 +02:00
|
|
|
* from remaining in the semaphore if someone else had granted us the lock
|
|
|
|
* we wanted before we were able to remove ourselves from the wait-list.
|
|
|
|
* However, now that ProcSleep loops until waitStatus changes, a leftover
|
|
|
|
* wakeup signal isn't harmful, and it seems not worth expending cycles to
|
|
|
|
* get rid of a signal that most likely isn't there.
|
2001-01-14 06:08:17 +01:00
|
|
|
*/
|
2000-02-21 03:42:37 +01:00
|
|
|
}
|
2000-04-12 19:17:23 +02:00
|
|
|
|
2001-01-14 06:08:17 +01:00
|
|
|
|
1996-07-09 08:22:35 +02:00
|
|
|
/*
|
2000-12-22 01:51:54 +01:00
|
|
|
* ProcReleaseLocks() -- release locks associated with current transaction
|
2004-07-17 05:32:14 +02:00
|
|
|
* at main transaction commit or abort
|
2004-07-01 02:52:04 +02:00
|
|
|
*
|
Overdue code review for transaction-level advisory locks patch.
Commit 62c7bd31c8878dd45c9b9b2429ab7a12103f3590 had assorted problems, most
visibly that it broke PREPARE TRANSACTION in the presence of session-level
advisory locks (which should be ignored by PREPARE), as per a recent
complaint from Stephen Rees. More abstractly, the patch made the
LockMethodData.transactional flag not merely useless but outright
dangerous, because in point of fact that flag no longer tells you anything
at all about whether a lock is held transactionally. This fix therefore
removes that flag altogether. We now rely entirely on the convention
already in use in lock.c that transactional lock holds must be owned by
some ResourceOwner, while session holds are never so owned. Setting the
locallock struct's owner link to NULL thus denotes a session hold, and
there is no redundant marker for that.
PREPARE TRANSACTION now works again when there are session-level advisory
locks, and it is also able to transfer transactional advisory locks to the
prepared transaction, but for implementation reasons it throws an error if
we hold both types of lock on a single lockable object. Perhaps it will be
worth improving that someday.
Assorted other minor cleanup and documentation editing, as well.
Back-patch to 9.1, except that in the 9.1 branch I did not remove the
LockMethodData.transactional flag for fear of causing an ABI break for
any external code that might be examining those structs.
2012-05-04 23:43:27 +02:00
|
|
|
* At main transaction commit, we release standard locks except session locks.
|
2010-02-08 05:33:55 +01:00
|
|
|
* At main transaction abort, we release all locks including session locks.
|
2004-07-01 02:52:04 +02:00
|
|
|
*
|
2012-05-04 23:43:27 +02:00
|
|
|
* Advisory locks are released only if they are transaction-level;
|
|
|
|
* session-level holds remain, whether this is a commit or not.
|
|
|
|
*
|
2004-07-01 02:52:04 +02:00
|
|
|
* At subtransaction commit, we don't release any locks (so this func is not
|
2004-07-17 05:32:14 +02:00
|
|
|
* needed at all); we will defer the releasing to the parent transaction.
|
2004-07-01 02:52:04 +02:00
|
|
|
* At subtransaction abort, we release all locks held by the subtransaction;
|
2004-07-17 05:32:14 +02:00
|
|
|
* this is implemented by retail releasing of the locks under control of
|
|
|
|
* the ResourceOwner mechanism.
|
1996-07-09 08:22:35 +02:00
|
|
|
*/
|
|
|
|
void
|
2004-07-17 05:32:14 +02:00
|
|
|
ProcReleaseLocks(bool isCommit)
|
1996-07-09 08:22:35 +02:00
|
|
|
{
|
1997-09-07 07:04:48 +02:00
|
|
|
if (!MyProc)
|
|
|
|
return;
|
2001-01-14 06:08:17 +01:00
|
|
|
/* If waiting, get off wait queue (should only be needed after error) */
|
2012-04-18 17:17:30 +02:00
|
|
|
LockErrorCleanup();
|
2012-05-04 23:43:27 +02:00
|
|
|
/* Release standard locks, including session-level if aborting */
|
2004-08-27 19:07:42 +02:00
|
|
|
LockReleaseAll(DEFAULT_LOCKMETHOD, !isCommit);
|
Overdue code review for transaction-level advisory locks patch.
Commit 62c7bd31c8878dd45c9b9b2429ab7a12103f3590 had assorted problems, most
visibly that it broke PREPARE TRANSACTION in the presence of session-level
advisory locks (which should be ignored by PREPARE), as per a recent
complaint from Stephen Rees. More abstractly, the patch made the
LockMethodData.transactional flag not merely useless but outright
dangerous, because in point of fact that flag no longer tells you anything
at all about whether a lock is held transactionally. This fix therefore
removes that flag altogether. We now rely entirely on the convention
already in use in lock.c that transactional lock holds must be owned by
some ResourceOwner, while session holds are never so owned. Setting the
locallock struct's owner link to NULL thus denotes a session hold, and
there is no redundant marker for that.
PREPARE TRANSACTION now works again when there are session-level advisory
locks, and it is also able to transfer transactional advisory locks to the
prepared transaction, but for implementation reasons it throws an error if
we hold both types of lock on a single lockable object. Perhaps it will be
worth improving that someday.
Assorted other minor cleanup and documentation editing, as well.
Back-patch to 9.1, except that in the 9.1 branch I did not remove the
LockMethodData.transactional flag for fear of causing an ABI break for
any external code that might be examining those structs.
2012-05-04 23:43:27 +02:00
|
|
|
/* Release transaction-level advisory locks */
|
2011-02-18 06:04:34 +01:00
|
|
|
LockReleaseAll(USER_LOCKMETHOD, false);
|
1996-07-09 08:22:35 +02:00
|
|
|
}
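The session-vs-transactional distinction ProcReleaseLocks relies on rests on a lock.c convention: a transactional lock hold is owned by some ResourceOwner, while a session-level hold never is, so a NULL owner link is the sole session marker. A minimal sketch, with stand-in types rather than the real structures:

```c
#include <stddef.h>
#include <stdbool.h>

/* Stand-ins for ResourceOwner and the locallock struct; illustrative
 * only, not PostgreSQL's real definitions. */
typedef struct ModelResourceOwner { int unused; } ModelResourceOwner;

typedef struct ModelLocalLock
{
    ModelResourceOwner *owner;  /* NULL => session-level hold */
} ModelLocalLock;

static bool hold_is_session_level(const ModelLocalLock *ll)
{
    /* No owning ResourceOwner means the hold outlives any transaction. */
    return ll->owner == NULL;
}
```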
|
|
|
|
|
|
|
|
|
2006-01-04 22:06:32 +01:00
|
|
|
/*
|
|
|
|
* RemoveProcFromArray() -- Remove this process from the shared ProcArray.
|
|
|
|
*/
|
|
|
|
static void
|
|
|
|
RemoveProcFromArray(int code, Datum arg)
|
|
|
|
{
|
|
|
|
Assert(MyProc != NULL);
|
2007-09-08 22:31:15 +02:00
|
|
|
ProcArrayRemove(MyProc, InvalidTransactionId);
|
2006-01-04 22:06:32 +01:00
|
|
|
}
|
|
|
|
|
1996-07-09 08:22:35 +02:00
|
|
|
/*
|
|
|
|
* ProcKill() -- Destroy the per-proc data structure for
|
2001-09-29 06:02:27 +02:00
|
|
|
* this process. Release any of its held LW locks.
|
1996-07-09 08:22:35 +02:00
|
|
|
*/
|
|
|
|
static void
|
2003-12-12 19:45:10 +01:00
|
|
|
ProcKill(int code, Datum arg)
|
1996-07-09 08:22:35 +02:00
|
|
|
{
|
2001-12-28 19:16:43 +01:00
|
|
|
/* use volatile pointer to prevent code rearrangement */
|
|
|
|
volatile PROC_HDR *procglobal = ProcGlobal;
|
2014-02-01 03:31:08 +01:00
|
|
|
PGPROC *proc;
|
2001-12-28 19:16:43 +01:00
|
|
|
|
2001-09-07 02:27:30 +02:00
|
|
|
Assert(MyProc != NULL);
|
1997-09-07 07:04:48 +02:00
|
|
|
|
2011-08-10 18:20:30 +02:00
|
|
|
/* Make sure we're out of the sync rep lists */
|
|
|
|
SyncRepCleanupAtProcExit();
|
|
|
|
|
2011-11-02 03:44:54 +01:00
|
|
|
#ifdef USE_ASSERT_CHECKING
|
|
|
|
{
|
2012-06-10 21:20:04 +02:00
|
|
|
int i;
|
2011-11-02 03:44:54 +01:00
|
|
|
|
|
|
|
/* Last process should have released all locks. */
|
|
|
|
for (i = 0; i < NUM_LOCK_PARTITIONS; i++)
|
|
|
|
Assert(SHMQueueEmpty(&(MyProc->myProcLocks[i])));
|
|
|
|
}
|
|
|
|
#endif
|
|
|
|
|
2002-09-25 22:31:40 +02:00
|
|
|
/*
|
2005-10-15 04:49:52 +02:00
|
|
|
* Release any LW locks I am holding. There really shouldn't be any, but
|
|
|
|
* it's cheap to check again before we cut the knees off the LWLock
|
2005-08-08 05:12:16 +02:00
|
|
|
* facility by releasing our PGPROC ...
|
2002-09-25 22:31:40 +02:00
|
|
|
*/
|
2005-08-08 05:12:16 +02:00
|
|
|
LWLockReleaseAll();
|
2001-01-14 06:08:17 +01:00
|
|
|
|
Introduce logical decoding.
This feature, building on previous commits, allows the write-ahead log
stream to be decoded into a series of logical changes; that is,
inserts, updates, and deletes and the transactions which contain them.
It is capable of handling decoding even across changes to the schema
of the affected tables. The output format is controlled by a
so-called "output plugin"; an example is included. To make use of
this in a real replication system, the output plugin will need to be
modified to produce output in the format appropriate to that system,
and to perform filtering.
Currently, information can be extracted from the logical decoding
system only via SQL; future commits will add the ability to stream
changes via walsender.
Andres Freund, with review and other contributions from many other
people, including Álvaro Herrera, Abhijit Menon-Sen, Peter Geoghegan,
Kevin Grittner, Robert Haas, Heikki Linnakangas, Fujii Masao,
Michael Paquier, Simon Riggs, Craig Ringer, and Steve
Singer.
2014-03-03 22:32:18 +01:00
|
|
|
/* Make sure active replication slots are released */
|
|
|
|
if (MyReplicationSlot != NULL)
|
|
|
|
ReplicationSlotRelease();
|
|
|
|
|
2014-02-01 03:31:08 +01:00
|
|
|
/*
|
2015-01-14 18:45:22 +01:00
|
|
|
* Reset MyLatch to the process local one. This is so that signal
|
|
|
|
* handlers et al can continue using the latch after the shared latch
|
|
|
|
* isn't ours anymore. After that clear MyProc and disown the shared
|
|
|
|
* latch.
|
2014-02-01 03:31:08 +01:00
|
|
|
*/
|
2015-01-14 18:45:22 +01:00
|
|
|
SwitchBackToLocalLatch();
|
2014-02-01 03:31:08 +01:00
|
|
|
proc = MyProc;
|
|
|
|
MyProc = NULL;
|
|
|
|
DisownLatch(&proc->procLatch);
|
2011-08-10 18:20:30 +02:00
|
|
|
|
2001-09-29 06:02:27 +02:00
|
|
|
SpinLockAcquire(ProcStructLock);
|
2001-06-17 00:58:17 +02:00
|
|
|
|
2009-08-31 21:41:00 +02:00
|
|
|
/* Return PGPROC structure (and semaphore) to appropriate freelist */
|
|
|
|
if (IsAnyAutoVacuumProcess())
|
2007-04-16 20:30:04 +02:00
|
|
|
{
|
2014-02-01 03:31:08 +01:00
|
|
|
proc->links.next = (SHM_QUEUE *) procglobal->autovacFreeProcs;
|
|
|
|
procglobal->autovacFreeProcs = proc;
|
2007-04-16 20:30:04 +02:00
|
|
|
}
|
Background worker processes
Background workers are postmaster subprocesses that run arbitrary
user-specified code. They can request shared memory access as well as
backend database connections; or they can just use plain libpq frontend
database connections.
Modules listed in shared_preload_libraries can register background
workers in their _PG_init() function; this is early enough that it's not
necessary to provide an extra GUC option, because the necessary extra
resources can be allocated early on. Modules can install more than one
bgworker, if necessary.
Care is taken that these extra processes do not interfere with other
postmaster tasks: only one such process is started on each ServerLoop
iteration. This means a large number of them could be waiting to be
started up and postmaster is still able to quickly service external
connection requests. Also, shutdown sequence should not be impacted by
a worker process that's reasonably well behaved (i.e. promptly responds
to termination signals).
The current implementation lets worker processes specify their start
time, i.e. at what point in the server startup process they are to be
started: right after postmaster start (in which case they mustn't ask
for shared memory access), when consistent state has been reached
(useful during recovery in a HOT standby server), or when recovery has
terminated (i.e. when normal backends are allowed).
In case of a bgworker crash, actions to take depend on registration
data: if shared memory was requested, then all other connections are
taken down (as well as other bgworkers), just as if it were a regular
backend crashing. The bgworker itself is restarted, too, within a
configurable timeframe (which can be configured to be never).
More features to add to this framework can be imagined without much
effort, and have been discussed, but this seems good enough as a useful
unit already.
An elementary sample module is supplied.
Author: Álvaro Herrera
This patch is loosely based on prior patches submitted by KaiGai Kohei,
and unsubmitted code by Simon Riggs.
Reviewed by: KaiGai Kohei, Markus Wanner, Andres Freund,
Heikki Linnakangas, Simon Riggs, Amit Kapila
2012-12-06 18:57:52 +01:00
|
|
|
else if (IsBackgroundWorker)
|
|
|
|
{
|
2014-02-01 03:31:08 +01:00
|
|
|
proc->links.next = (SHM_QUEUE *) procglobal->bgworkerFreeProcs;
|
|
|
|
procglobal->bgworkerFreeProcs = proc;
|
2012-12-06 18:57:52 +01:00
|
|
|
}
|
2007-04-16 20:30:04 +02:00
|
|
|
else
|
|
|
|
{
|
2014-02-01 03:31:08 +01:00
|
|
|
proc->links.next = (SHM_QUEUE *) procglobal->freeProcs;
|
|
|
|
procglobal->freeProcs = proc;
|
2007-04-16 20:30:04 +02:00
|
|
|
}
|
2001-06-17 00:58:17 +02:00
|
|
|
|
2005-10-11 22:41:32 +02:00
|
|
|
/* Update shared estimate of spins_per_delay */
|
|
|
|
procglobal->spins_per_delay = update_spins_per_delay(procglobal->spins_per_delay);
|
|
|
|
|
2001-09-29 06:02:27 +02:00
|
|
|
SpinLockRelease(ProcStructLock);
|
2007-04-16 20:30:04 +02:00
|
|
|
|
Install a "dead man switch" to allow the postmaster to detect cases where
a backend has done exit(0) or exit(1) without having disengaged itself
from shared memory. We are at risk for this whenever third-party code is
loaded into a backend, since such code might not know it's supposed to go
through proc_exit() instead. Also, it is reported that under Windows
there are ways to externally kill a process that cause the status code
returned to the postmaster to be indistinguishable from a voluntary exit
(thank you, Microsoft). If this does happen then the system is probably
hosed --- for instance, the dead session might still be holding locks.
So the best recovery method is to treat this like a backend crash.
The dead man switch is armed for a particular child process when it
acquires a regular PGPROC, and disarmed when the PGPROC is released;
these should be the first and last touches of shared memory resources
in a backend, or close enough anyway. This choice means there is no
coverage for auxiliary processes, but I doubt we need that, since they
shouldn't be executing any user-provided code anyway.
This patch also improves the management of the EXEC_BACKEND
ShmemBackendArray array a bit, by reducing search costs.
Although this problem is of long standing, the lack of field complaints
seems to mean it's not critical enough to risk back-patching; at least
not till we get some more testing of this mechanism.
2009-05-05 21:59:00 +02:00
|
|
|
/*
|
|
|
|
* This process is no longer present in shared memory in any meaningful
|
2010-02-26 03:01:40 +01:00
|
|
|
* way, so tell the postmaster we've cleaned up acceptably well. (XXX
|
|
|
|
     * autovac launcher should be included here someday)
     */
    if (IsUnderPostmaster && !IsAutoVacuumLauncherProcess())
        MarkPostmasterChildInactive();

    /* wake autovac launcher if needed -- see comments in FreeWorkerInfo */
    if (AutovacuumLauncherPid != 0)
        kill(AutovacuumLauncherPid, SIGUSR2);
}

/*
 * AuxiliaryProcKill() -- Cut-down version of ProcKill for auxiliary
 *      processes (bgwriter, etc).  The PGPROC and sema are not released, only
 *      marked as not-in-use.
 */
static void
AuxiliaryProcKill(int code, Datum arg)
{
    int         proctype = DatumGetInt32(arg);
    PGPROC     *auxproc PG_USED_FOR_ASSERTS_ONLY;
    PGPROC     *proc;

    Assert(proctype >= 0 && proctype < NUM_AUXILIARY_PROCS);

    auxproc = &AuxiliaryProcs[proctype];

    Assert(MyProc == auxproc);

    /* Release any LW locks I am holding (see notes above) */
    LWLockReleaseAll();

    /*
     * Reset MyLatch to the process local one.  This is so that signal
     * handlers et al can continue using the latch after the shared latch
     * isn't ours anymore.  After that clear MyProc and disown the shared
     * latch.
     */
    SwitchBackToLocalLatch();
    proc = MyProc;
    MyProc = NULL;
    DisownLatch(&proc->procLatch);

    SpinLockAcquire(ProcStructLock);

    /* Mark auxiliary proc no longer in use */
    proc->pid = 0;

    /* Update shared estimate of spins_per_delay */
    ProcGlobal->spins_per_delay = update_spins_per_delay(ProcGlobal->spins_per_delay);

    SpinLockRelease(ProcStructLock);
}
/*
 * ProcQueue package: routines for putting processes to sleep
 *      and waking them up
 */

/*
 * ProcQueueAlloc -- alloc/attach to a shared memory process queue
 *
 * Returns: a pointer to the queue
 * Side Effects: Initializes the queue if it wasn't there before
 */
#ifdef NOT_USED
PROC_QUEUE *
ProcQueueAlloc(const char *name)
{
    PROC_QUEUE *queue;
    bool        found;

    queue = (PROC_QUEUE *)
        ShmemInitStruct(name, sizeof(PROC_QUEUE), &found);

    if (!found)
        ProcQueueInit(queue);

    return queue;
}
#endif

/*
 * ProcQueueInit -- initialize a shared memory process queue
 */
void
ProcQueueInit(PROC_QUEUE *queue)
{
    SHMQueueInit(&(queue->links));
    queue->size = 0;
}
/*
 * ProcSleep -- put a process to sleep on the specified lock
 *
 * Caller must have set MyProc->heldLocks to reflect locks already held
 * on the lockable object by this process (under all XIDs).
 *
 * The lock table's partition lock must be held at entry, and will be held
 * at exit.
 *
 * Result: STATUS_OK if we acquired the lock, STATUS_ERROR if not (deadlock).
 *
 * ASSUME: that no one will fiddle with the queue until after
 *      we release the partition lock.
 *
 * NOTES: The process queue is now a priority queue for locking.
 *
 * P() on the semaphore should put us to sleep.  The process
 * semaphore is normally zero, so when we try to acquire it, we sleep.
 */
int
ProcSleep(LOCALLOCK *locallock, LockMethod lockMethodTable)
{
    LOCKMODE    lockmode = locallock->tag.mode;
    LOCK       *lock = locallock->lock;
    PROCLOCK   *proclock = locallock->proclock;
    uint32      hashcode = locallock->hashcode;
    LWLock     *partitionLock = LockHashPartitionLock(hashcode);
    PROC_QUEUE *waitQueue = &(lock->waitProcs);
    LOCKMASK    myHeldLocks = MyProc->heldLocks;
    bool        early_deadlock = false;
    bool        allow_autovacuum_cancel = true;
    int         myWaitStatus;
    PGPROC     *proc;
    int         i;
    /*
     * Determine where to add myself in the wait queue.
     *
     * Normally I should go at the end of the queue.  However, if I already
     * hold locks that conflict with the request of any previous waiter, put
     * myself in the queue just in front of the first such waiter.  This is not
     * a necessary step, since deadlock detection would move me to before that
     * waiter anyway; but it's relatively cheap to detect such a conflict
     * immediately, and avoid delaying till deadlock timeout.
     *
     * Special case: if I find I should go in front of some waiter, check to
     * see if I conflict with already-held locks or the requests before that
     * waiter.  If not, then just grant myself the requested lock immediately.
     * This is the same as the test for immediate grant in LockAcquire, except
     * we are only considering the part of the wait queue before my insertion
     * point.
     */
    if (myHeldLocks != 0)
    {
        LOCKMASK    aheadRequests = 0;

        proc = (PGPROC *) waitQueue->links.next;
        for (i = 0; i < waitQueue->size; i++)
        {
            /* Must he wait for me? */
            if (lockMethodTable->conflictTab[proc->waitLockMode] & myHeldLocks)
            {
                /* Must I wait for him ? */
                if (lockMethodTable->conflictTab[lockmode] & proc->heldLocks)
                {
                    /*
                     * Yes, so we have a deadlock.  Easiest way to clean up
                     * correctly is to call RemoveFromWaitQueue(), but we
                     * can't do that until we are *on* the wait queue. So, set
                     * a flag to check below, and break out of loop.  Also,
                     * record deadlock info for later message.
                     */
                    RememberSimpleDeadLock(MyProc, lockmode, lock, proc);
                    early_deadlock = true;
                    break;
                }
                /* I must go before this waiter.  Check special case. */
                if ((lockMethodTable->conflictTab[lockmode] & aheadRequests) == 0 &&
                    LockCheckConflicts(lockMethodTable,
                                       lockmode,
                                       lock,
                                       proclock) == STATUS_OK)
                {
                    /* Skip the wait and just grant myself the lock. */
                    GrantLock(lock, proclock, lockmode);
                    GrantAwaitedLock();
                    return STATUS_OK;
                }
                /* Break out of loop to put myself before him */
                break;
            }
            /* Nope, so advance to next waiter */
            aheadRequests |= LOCKBIT_ON(proc->waitLockMode);
            proc = (PGPROC *) proc->links.next;
        }

        /*
         * If we fall out of loop normally, proc points to waitQueue head, so
         * we will insert at tail of queue as desired.
         */
    }
    else
    {
        /* I hold no locks, so I can't push in front of anyone. */
        proc = (PGPROC *) &(waitQueue->links);
    }

    /*
     * Insert self into queue, ahead of the given proc (or at tail of queue).
     */
    SHMQueueInsertBefore(&(proc->links), &(MyProc->links));
    waitQueue->size++;

    lock->waitMask |= LOCKBIT_ON(lockmode);

    /* Set up wait information in PGPROC object, too */
    MyProc->waitLock = lock;
    MyProc->waitProcLock = proclock;
    MyProc->waitLockMode = lockmode;

    MyProc->waitStatus = STATUS_WAITING;
    /*
     * If we detected deadlock, give up without waiting.  This must agree with
     * CheckDeadLock's recovery code, except that we shouldn't release the
     * semaphore since we haven't tried to lock it yet.
     */
    if (early_deadlock)
    {
        RemoveFromWaitQueue(MyProc, hashcode);
        return STATUS_ERROR;
    }

    /* mark that we are waiting for a lock */
    lockAwaited = locallock;

    /*
     * Release the lock table's partition lock.
     *
     * NOTE: this may also cause us to exit critical-section state, possibly
     * allowing a cancel/die interrupt to be accepted.  This is OK because we
     * have recorded the fact that we are waiting for a lock, and so
     * LockErrorCleanup will clean up if cancel/die happens.
     */
    LWLockRelease(partitionLock);

    /*
     * Also, now that we will successfully clean up after an ereport, it's
     * safe to check to see if there's a buffer pin deadlock against the
     * Startup process.  Of course, that's only necessary if we're doing Hot
     * Standby and are not the Startup process ourselves.
     */
    if (RecoveryInProgress() && !InRecovery)
        CheckRecoveryConflictDeadlock();
    /* Reset deadlock_state before enabling the timeout handler */
    deadlock_state = DS_NOT_YET_CHECKED;

    /*
     * Set timer so we can wake up after awhile and check for a deadlock. If a
     * deadlock is detected, the handler releases the process's semaphore and
     * sets MyProc->waitStatus = STATUS_ERROR, allowing us to know that we
     * must report failure rather than success.
     *
     * By delaying the check until we've waited for a bit, we can avoid
     * running the rather expensive deadlock-check code in most cases.
     *
     * If LockTimeout is set, also enable the timeout for that.  We can save a
     * few cycles by enabling both timeout sources in one call.
     */
    if (LockTimeout > 0)
    {
        EnableTimeoutParams timeouts[2];

        timeouts[0].id = DEADLOCK_TIMEOUT;
        timeouts[0].type = TMPARAM_AFTER;
        timeouts[0].delay_ms = DeadlockTimeout;
        timeouts[1].id = LOCK_TIMEOUT;
        timeouts[1].type = TMPARAM_AFTER;
        timeouts[1].delay_ms = LockTimeout;
        enable_timeouts(timeouts, 2);
    }
    else
        enable_timeout_after(DEADLOCK_TIMEOUT, DeadlockTimeout);
    /*
     * If someone wakes us between LWLockRelease and PGSemaphoreLock,
     * PGSemaphoreLock will not block.  The wakeup is "saved" by the semaphore
     * implementation.  While this is normally good, there are cases where a
     * saved wakeup might be leftover from a previous operation (for example,
     * we aborted ProcWaitForSignal just before someone did ProcSendSignal).
     * So, loop to wait again if the waitStatus shows we haven't been granted
     * nor denied the lock yet.
     *
     * We pass interruptOK = true, which eliminates a window in which
     * cancel/die interrupts would be held off undesirably.  This is a promise
     * that we don't mind losing control to a cancel/die interrupt here.  We
     * don't, because we have no shared-state-change work to do after being
     * granted the lock (the grantor did it all).  We do have to worry about
     * canceling the deadlock timeout and updating the locallock table, but if
     * we lose control to an error, LockErrorCleanup will fix that up.
     */
    do
    {
        PGSemaphoreLock(&MyProc->sem, true);
        /*
         * waitStatus could change from STATUS_WAITING to something else
         * asynchronously.  Read it just once per loop to prevent surprising
         * behavior (such as missing log messages).
         */
        myWaitStatus = MyProc->waitStatus;

        /*
         * If we are not deadlocked, but are waiting on an autovacuum-induced
         * task, send a signal to interrupt it.
         */
        if (deadlock_state == DS_BLOCKED_BY_AUTOVACUUM && allow_autovacuum_cancel)
        {
            PGPROC     *autovac = GetBlockingAutoVacuumPgproc();
            PGXACT     *autovac_pgxact = &ProcGlobal->allPgXact[autovac->pgprocno];

            LWLockAcquire(ProcArrayLock, LW_EXCLUSIVE);

            /*
             * Only do it if the worker is not working to protect against Xid
             * wraparound.
             */
            if ((autovac_pgxact->vacuumFlags & PROC_IS_AUTOVACUUM) &&
                !(autovac_pgxact->vacuumFlags & PROC_VACUUM_FOR_WRAPAROUND))
            {
                int         pid = autovac->pid;
                StringInfoData locktagbuf;
                StringInfoData logbuf;  /* errdetail for server log */

                initStringInfo(&locktagbuf);
                initStringInfo(&logbuf);
                DescribeLockTag(&locktagbuf, &lock->tag);
                appendStringInfo(&logbuf,
                                 _("Process %d waits for %s on %s."),
                                 MyProcPid,
                                 GetLockmodeName(lock->tag.locktag_lockmethodid,
                                                 lockmode),
                                 locktagbuf.data);

                /* release lock as quickly as possible */
                LWLockRelease(ProcArrayLock);

                ereport(LOG,
                        (errmsg("sending cancel to blocking autovacuum PID %d",
                                pid),
                         errdetail_log("%s", logbuf.data)));

                pfree(logbuf.data);
                pfree(locktagbuf.data);

                /* send the autovacuum worker Back to Old Kent Road */
                if (kill(pid, SIGINT) < 0)
                {
                    /* Just a warning to allow multiple callers */
                    ereport(WARNING,
                            (errmsg("could not send signal to process %d: %m",
                                    pid)));
                }
            }
            else
                LWLockRelease(ProcArrayLock);

            /* prevent signal from being resent more than once */
            allow_autovacuum_cancel = false;
        }
        /*
         * If awoken after the deadlock check interrupt has run, and
         * log_lock_waits is on, then report about the wait.
         */
        if (log_lock_waits && deadlock_state != DS_NOT_YET_CHECKED)
        {
            StringInfoData buf,
                        lock_waiters_sbuf,
                        lock_holders_sbuf;
            const char *modename;
            long        secs;
            int         usecs;
            long        msecs;
            SHM_QUEUE  *procLocks;
            PROCLOCK   *proclock;
            bool        first_holder = true,
                        first_waiter = true;
            int         lockHoldersNum = 0;

            initStringInfo(&buf);
            initStringInfo(&lock_waiters_sbuf);
            initStringInfo(&lock_holders_sbuf);

            DescribeLockTag(&buf, &locallock->tag.lock);
            modename = GetLockmodeName(locallock->tag.lock.locktag_lockmethodid,
                                       lockmode);
|
Introduce timeout handling framework
Management of timeouts was getting a little cumbersome; what we
originally had was more than enough back when we were only concerned
about deadlocks and query cancel; however, when we added timeouts for
standby processes, the code got considerably messier. Since there are
plans to add more complex timeouts, this seems a good time to introduce
a central timeout handling module.
External modules register their timeout handlers during process
initialization, and later enable and disable them as they see fit using
a simple API; timeout.c is in charge of keeping track of which timeouts
are in effect at any time, installing a common SIGALRM signal handler,
and calling setitimer() as appropriate to ensure timely firing of
external handlers.
timeout.c additionally supports pluggable modules to add their own
timeouts, though this capability isn't exercised anywhere yet.
Additionally, as of this commit, walsender processes are aware of
timeouts; we had a preexisting bug there that made those ignore SIGALRM,
thus being subject to unhandled deadlocks, particularly during the
authentication phase. This has already been fixed in back branches in
commit 0bf8eb2a, which see for more details.
Main author: Zoltán Böszörményi
Some review and cleanup by Álvaro Herrera
Extensive reworking by Tom Lane
2012-07-17 00:43:21 +02:00
|
|
|
            TimestampDifference(get_timeout_start_time(DEADLOCK_TIMEOUT),
                                GetCurrentTimestamp(),
                                &secs, &usecs);
            msecs = secs * 1000 + usecs / 1000;
            usecs = usecs % 1000;

            /*
             * we loop over the lock's procLocks to gather a list of all
             * holders and waiters.  Thus we will be able to provide more
             * detailed information for lock debugging purposes.
             *
             * lock->procLocks contains all processes which hold or wait for
             * this lock.
             */

            LWLockAcquire(partitionLock, LW_SHARED);

            procLocks = &(lock->procLocks);
            proclock = (PROCLOCK *) SHMQueueNext(procLocks, procLocks,
                                                 offsetof(PROCLOCK, lockLink));

            while (proclock)
            {
                /*
                 * we are a waiter if myProc->waitProcLock == proclock; we are
                 * a holder if it is NULL or something different
                 */
                if (proclock->tag.myProc->waitProcLock == proclock)
                {
                    if (first_waiter)
                    {
                        appendStringInfo(&lock_waiters_sbuf, "%d",
                                         proclock->tag.myProc->pid);
                        first_waiter = false;
                    }
                    else
                        appendStringInfo(&lock_waiters_sbuf, ", %d",
                                         proclock->tag.myProc->pid);
                }
                else
                {
                    if (first_holder)
                    {
                        appendStringInfo(&lock_holders_sbuf, "%d",
                                         proclock->tag.myProc->pid);
                        first_holder = false;
                    }
                    else
                        appendStringInfo(&lock_holders_sbuf, ", %d",
                                         proclock->tag.myProc->pid);

                    lockHoldersNum++;
                }

                proclock = (PROCLOCK *) SHMQueueNext(procLocks, &proclock->lockLink,
                                                     offsetof(PROCLOCK, lockLink));
            }

            LWLockRelease(partitionLock);

            if (deadlock_state == DS_SOFT_DEADLOCK)
                ereport(LOG,
                        (errmsg("process %d avoided deadlock for %s on %s by rearranging queue order after %ld.%03d ms",
                                MyProcPid, modename, buf.data, msecs, usecs),
                         (errdetail_log_plural("Process holding the lock: %s. Wait queue: %s.",
                                               "Processes holding the lock: %s. Wait queue: %s.",
                                               lockHoldersNum, lock_holders_sbuf.data, lock_waiters_sbuf.data))));
|
2007-08-28 05:23:44 +02:00
|
|
|
else if (deadlock_state == DS_HARD_DEADLOCK)
|
2007-06-19 22:13:22 +02:00
|
|
|
{
|
2007-08-28 05:23:44 +02:00
|
|
|
/*
|
2007-11-15 22:14:46 +01:00
|
|
|
* This message is a bit redundant with the error that will be
|
|
|
|
* reported subsequently, but in some cases the error report
|
|
|
|
* might not make it to the log (eg, if it's caught by an
|
|
|
|
* exception handler), and we want to ensure all long-wait
|
2007-08-28 05:23:44 +02:00
|
|
|
* events get logged.
|
|
|
|
*/
|
|
|
|
ereport(LOG,
|
|
|
|
(errmsg("process %d detected deadlock while waiting for %s on %s after %ld.%03d ms",
|
2014-03-12 19:26:47 +01:00
|
|
|
MyProcPid, modename, buf.data, msecs, usecs),
|
|
|
|
(errdetail_log_plural("Process holding the lock: %s. Wait queue: %s.",
|
|
|
|
"Processes holding the lock: %s. Wait queue: %s.",
|
|
|
|
lockHoldersNum, lock_holders_sbuf.data, lock_waiters_sbuf.data))));
|
2007-06-19 22:13:22 +02:00
|
|
|
}
|
2007-08-28 05:23:44 +02:00
|
|
|
|
|
|
|
if (myWaitStatus == STATUS_WAITING)
|
|
|
|
ereport(LOG,
|
|
|
|
(errmsg("process %d still waiting for %s on %s after %ld.%03d ms",
|
2014-03-12 19:26:47 +01:00
|
|
|
MyProcPid, modename, buf.data, msecs, usecs),
|
|
|
|
(errdetail_log_plural("Process holding the lock: %s. Wait queue: %s.",
|
|
|
|
"Processes holding the lock: %s. Wait queue: %s.",
|
|
|
|
lockHoldersNum, lock_holders_sbuf.data, lock_waiters_sbuf.data))));
|
2007-08-28 05:23:44 +02:00
|
|
|
else if (myWaitStatus == STATUS_OK)
|
|
|
|
ereport(LOG,
|
2007-11-15 22:14:46 +01:00
|
|
|
(errmsg("process %d acquired %s on %s after %ld.%03d ms",
|
|
|
|
MyProcPid, modename, buf.data, msecs, usecs)));
|
2007-08-28 05:23:44 +02:00
|
|
|
else
|
|
|
|
{
|
|
|
|
Assert(myWaitStatus == STATUS_ERROR);
|
2007-11-15 22:14:46 +01:00
|
|
|
|
2007-08-28 05:23:44 +02:00
|
|
|
/*
|
|
|
|
* Currently, the deadlock checker always kicks its own
|
2007-11-15 22:14:46 +01:00
|
|
|
* process, which means that we'll only see STATUS_ERROR when
|
|
|
|
* deadlock_state == DS_HARD_DEADLOCK, and there's no need to
|
|
|
|
* print redundant messages. But for completeness and
|
|
|
|
* future-proofing, print a message if it looks like someone
|
|
|
|
* else kicked us off the lock.
|
2007-08-28 05:23:44 +02:00
|
|
|
*/
|
|
|
|
if (deadlock_state != DS_HARD_DEADLOCK)
|
|
|
|
ereport(LOG,
|
|
|
|
(errmsg("process %d failed to acquire %s on %s after %ld.%03d ms",
|
2014-03-12 19:26:47 +01:00
|
|
|
MyProcPid, modename, buf.data, msecs, usecs),
|
|
|
|
(errdetail_log_plural("Process holding the lock: %s. Wait queue: %s.",
|
|
|
|
"Processes holding the lock: %s. Wait queue: %s.",
|
|
|
|
lockHoldersNum, lock_holders_sbuf.data, lock_waiters_sbuf.data))));
|
2007-08-28 05:23:44 +02:00
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
2007-11-15 22:14:46 +01:00
|
|
|
* At this point we might still need to wait for the lock. Reset
|
|
|
|
* state so we don't print the above messages again.
|
2007-08-28 05:23:44 +02:00
|
|
|
*/
|
|
|
|
deadlock_state = DS_NO_DEADLOCK;
|
|
|
|
|
|
|
|
pfree(buf.data);
|
2014-03-12 19:26:47 +01:00
|
|
|
pfree(lock_holders_sbuf.data);
|
|
|
|
pfree(lock_waiters_sbuf.data);
|
2007-06-19 22:13:22 +02:00
|
|
|
}
|
2007-08-28 05:23:44 +02:00
|
|
|
} while (myWaitStatus == STATUS_WAITING);
|
1998-02-26 05:46:47 +01:00
|
|
|
|
2001-03-22 07:16:21 +01:00
|
|
|
/*
Fix assorted race conditions in the new timeout infrastructure.
Prevent handle_sig_alarm from losing control partway through due to a query
cancel (either an asynchronous SIGINT, or a cancel triggered by one of the
timeout handler functions). That would at least result in failure to
schedule any required future interrupt, and might result in actual
corruption of timeout.c's data structures, if the interrupt happened while
we were updating those.
We could still lose control if an asynchronous SIGINT arrives just as the
function is entered. This wouldn't break any data structures, but it would
have the same effect as if the SIGALRM interrupt had been silently lost:
we'd not fire any currently-due handlers, nor schedule any new interrupt.
To forestall that scenario, forcibly reschedule any pending timer interrupt
during AbortTransaction and AbortSubTransaction. We can avoid any extra
kernel call in most cases by not doing that until we've allowed
LockErrorCleanup to kill the DEADLOCK_TIMEOUT and LOCK_TIMEOUT events.
Another hazard is that some platforms (at least Linux and *BSD) block a
signal before calling its handler and then unblock it on return. When we
longjmp out of the handler, the unblock doesn't happen, and the signal is
left blocked indefinitely. Again, we can fix that by forcibly unblocking
signals during AbortTransaction and AbortSubTransaction.
These latter two problems do not manifest when the longjmp reaches
postgres.c, because the error recovery code there kills all pending timeout
events anyway, and it uses sigsetjmp(..., 1) so that the appropriate signal
mask is restored. So errors thrown outside any transaction should be OK
already, and cleaning up in AbortTransaction and AbortSubTransaction should
be enough to fix these issues. (We're assuming that any code that catches
a query cancel error and doesn't re-throw it will do at least a
subtransaction abort to clean up; but that was pretty much required already
by other subsystems.)
Lastly, ProcSleep should not clear the LOCK_TIMEOUT indicator flag when
disabling that event: if a lock timeout interrupt happened after the lock
was granted, the ensuing query cancel is still going to happen at the next
CHECK_FOR_INTERRUPTS, and we want to report it as a lock timeout not a user
cancel.
Per reports from Dan Wood.
Back-patch to 9.3 where the new timeout handling infrastructure was
introduced. We may at some point decide to back-patch the signal
unblocking changes further, but I'll desist from that until we hear
actual field complaints about it.
2013-11-29 22:41:00 +01:00
     * Disable the timers, if they are still running.  As in LockErrorCleanup,
     * we must preserve the LOCK_TIMEOUT indicator flag: if a lock timeout has
     * already caused QueryCancelPending to become set, we want the cancel to
     * be reported as a lock timeout, not a user cancel.
     */
    if (LockTimeout > 0)
    {
        DisableTimeoutParams timeouts[2];

        timeouts[0].id = DEADLOCK_TIMEOUT;
        timeouts[0].keep_indicator = false;
        timeouts[1].id = LOCK_TIMEOUT;
        timeouts[1].keep_indicator = true;
        disable_timeouts(timeouts, 2);
    }
    else
        disable_timeout(DEADLOCK_TIMEOUT, false);

    /*
     * Re-acquire the lock table's partition lock.  We have to do this to hold
     * off cancel/die interrupts before we can mess with lockAwaited (else we
     * might have a missed or duplicated locallock update).
     */
    LWLockAcquire(partitionLock, LW_EXCLUSIVE);

    /*
     * We no longer want LockErrorCleanup to do anything.
     */
    lockAwaited = NULL;

    /*
     * If we got the lock, be sure to remember it in the locallock table.
     */
    if (MyProc->waitStatus == STATUS_OK)
        GrantAwaitedLock();

    /*
     * We don't have to do anything else, because the awaker did all the
     * necessary update of the lock table and MyProc.
     */
    return MyProc->waitStatus;
}

/*
 * ProcWakeup -- wake up a process by releasing its private semaphore.
 *
 * Also remove the process from the wait queue and set its links invalid.
 * RETURN: the next process in the wait queue.
 *
 * The appropriate lock partition lock must be held by caller.
 *
 * XXX: presently, this code is only used for the "success" case, and only
 * works correctly for that case.  To clean up in failure case, would need
 * to twiddle the lock's request counts too --- see RemoveFromWaitQueue.
 * Hence, in practice the waitStatus parameter must be STATUS_OK.
 */
PGPROC *
ProcWakeup(PGPROC *proc, int waitStatus)
{
    PGPROC     *retProc;

    /* Proc should be sleeping ... */
    if (proc->links.prev == NULL ||
        proc->links.next == NULL)
        return NULL;
    Assert(proc->waitStatus == STATUS_WAITING);

    /* Save next process before we zap the list link */
    retProc = (PGPROC *) proc->links.next;

    /* Remove process from wait queue */
    SHMQueueDelete(&(proc->links));
    (proc->waitLock->waitProcs.size)--;

    /* Clean up process' state and pass it the ok/fail signal */
    proc->waitLock = NULL;
    proc->waitProcLock = NULL;
    proc->waitStatus = waitStatus;

    /* And awaken it */
    PGSemaphoreUnlock(&proc->sem);

    return retProc;
}

/*
 * ProcLockWakeup -- routine for waking up processes when a lock is
 *      released (or a prior waiter is aborted).  Scan all waiters
 *      for lock, waken any that are no longer blocked.
 *
 * The appropriate lock partition lock must be held by caller.
 */
void
Try to reduce confusion about what is a lock method identifier, a lock
method control structure, or a table of control structures.
. Use type LOCKMASK where an int is not a counter.
. Get rid of INVALID_TABLEID, use INVALID_LOCKMETHOD instead.
. Use INVALID_LOCKMETHOD instead of (LOCKMETHOD) NULL, because
LOCKMETHOD is not a pointer.
. Define and use macro LockMethodIsValid.
. Rename LOCKMETHOD to LOCKMETHODID.
. Remove global variable LongTermTableId in lmgr.c, because it is
never used.
. Make LockTableId static in lmgr.c, because it is used nowhere else.
Why not remove it and use DEFAULT_LOCKMETHOD?
. Rename the lock method control structure from LOCKMETHODTABLE to
LockMethodData. Introduce a pointer type named LockMethod.
. Remove elog(FATAL) after InitLockTable() call in
CreateSharedMemoryAndSemaphores(), because if something goes wrong,
there is elog(FATAL) in LockMethodTableInit(), and if this doesn't
help, an elog(ERROR) in InitLockTable() is promoted to FATAL.
. Make InitLockTable() void, because its only caller does not use its
return value any more.
. Rename variables in lock.c to avoid statements like
LockMethodTable[NumLockMethods] = lockMethodTable;
lockMethodTable = LockMethodTable[lockmethod];
. Change LOCKMETHODID type to uint16 to fit into struct LOCKTAG.
. Remove static variables BITS_OFF and BITS_ON from lock.c, because
I agree to this doubt:
* XXX is a fetch from a static array really faster than a shift?
. Define and use macros LOCKBIT_ON/OFF.
Manfred Koizar
2003-12-01 22:59:25 +01:00
ProcLockWakeup(LockMethod lockMethodTable, LOCK *lock)
{
    PROC_QUEUE *waitQueue = &(lock->waitProcs);
    int         queue_size = waitQueue->size;
    PGPROC     *proc;
    LOCKMASK    aheadRequests = 0;

    Assert(queue_size >= 0);

    if (queue_size == 0)
        return;

    proc = (PGPROC *) waitQueue->links.next;

    while (queue_size-- > 0)
    {
        LOCKMODE    lockmode = proc->waitLockMode;

        /*
         * Waken if (a) doesn't conflict with requests of earlier waiters, and
         * (b) doesn't conflict with already-held locks.
         */
        if ((lockMethodTable->conflictTab[lockmode] & aheadRequests) == 0 &&
            LockCheckConflicts(lockMethodTable,
                               lockmode,
                               lock,
                               proc->waitProcLock) == STATUS_OK)
        {
            /* OK to waken */
            GrantLock(lock, proc->waitProcLock, lockmode);
            proc = ProcWakeup(proc, STATUS_OK);

            /*
             * ProcWakeup removes proc from the lock's waiting process queue
             * and returns the next proc in chain; don't use proc's next-link,
             * because it's been cleared.
             */
        }
        else
        {
            /*
             * Cannot wake this guy.  Remember his request for later checks.
             */
            aheadRequests |= LOCKBIT_ON(lockmode);
            proc = (PGPROC *) proc->links.next;
        }
    }

    Assert(waitQueue->size >= 0);
}

/*
 * CheckDeadLock
 *
 * We only get to this routine if the DEADLOCK_TIMEOUT fired
 * while waiting for a lock to be released by some other process.  Look
 * to see if there's a deadlock; if not, just return and continue waiting.
 * (But signal ProcSleep to log a message, if log_lock_waits is true.)
 * If we have a real deadlock, remove ourselves from the lock's wait queue
 * and signal an error to ProcSleep.
 *
 * NB: this is run inside a signal handler, so be very wary about what is done
 * here or in called routines.
 */
void
CheckDeadLock(void)
{
    int         i;

    /*
     * Acquire exclusive lock on the entire shared lock data structures. Must
     * grab LWLocks in partition-number order to avoid LWLock deadlock.
     *
     * Note that the deadlock check interrupt had better not be enabled
     * anywhere that this process itself holds lock partition locks, else this
     * will wait forever.  Also note that LWLockAcquire creates a critical
     * section, so that this routine cannot be interrupted by cancel/die
     * interrupts.
     */
    for (i = 0; i < NUM_LOCK_PARTITIONS; i++)
        LWLockAcquire(LockHashPartitionLockByIndex(i), LW_EXCLUSIVE);

    /*
     * Check to see if we've been awoken by anyone in the interim.
     *
     * If we have, we can return and resume our transaction -- happy day.
     * Before we are awoken the process releasing the lock grants it to us so
     * we know that we don't have to wait anymore.
     *
     * We check by looking to see if we've been unlinked from the wait queue.
     * This is quicker than checking our semaphore's state, since no kernel
     * call is needed, and it is safe because we hold the lock partition lock.
     */
    if (MyProc->links.prev == NULL ||
        MyProc->links.next == NULL)
        goto check_done;

#ifdef LOCK_DEBUG
    if (Debug_deadlocks)
        DumpAllLocks();
#endif

    /* Run the deadlock check, and set deadlock_state for use by ProcSleep */
    deadlock_state = DeadLockCheck(MyProc);

    if (deadlock_state == DS_HARD_DEADLOCK)
    {
        /*
         * Oops.  We have a deadlock.
         *
         * Get this process out of wait state. (Note: we could do this more
         * efficiently by relying on lockAwaited, but use this coding to
         * preserve the flexibility to kill some other transaction than the
         * one detecting the deadlock.)
         *
         * RemoveFromWaitQueue sets MyProc->waitStatus to STATUS_ERROR, so
         * ProcSleep will report an error after we return from the signal
         * handler.
         */
        Assert(MyProc->waitLock != NULL);
        RemoveFromWaitQueue(MyProc, LockTagHashCode(&(MyProc->waitLock->tag)));

        /*
         * Unlock my semaphore so that the interrupted ProcSleep() call can
         * finish.
         */
        PGSemaphoreUnlock(&MyProc->sem);

        /*
         * We're done here.  Transaction abort caused by the error that
         * ProcSleep will raise will cause any other locks we hold to be
         * released, thus allowing other processes to wake up; we don't need
         * to do that here.  NOTE: an exception is that releasing locks we
         * hold doesn't consider the possibility of waiters that were blocked
         * behind us on the lock we just failed to get, and might now be
         * wakable because we're not in front of them anymore.  However,
         * RemoveFromWaitQueue took care of waking up any such processes.
         */
    }
    else if (log_lock_waits || deadlock_state == DS_BLOCKED_BY_AUTOVACUUM)
    {
        /*
         * Unlock my semaphore so that the interrupted ProcSleep() call can
         * print the log message (we daren't do it here because we are inside
         * a signal handler).  It will then sleep again until someone releases
         * the lock.
         *
         * If blocked by autovacuum, this wakeup will enable ProcSleep to send
         * the canceling signal to the autovacuum worker.
         */
        PGSemaphoreUnlock(&MyProc->sem);
    }

    /*
     * And release locks.  We do this in reverse order for two reasons: (1)
     * Anyone else who needs more than one of the locks will be trying to lock
     * them in increasing order; we don't want to release the other process
     * until it can get all the locks it needs. (2) This avoids O(N^2)
     * behavior inside LWLockRelease.
     */
check_done:
    for (i = NUM_LOCK_PARTITIONS; --i >= 0;)
        LWLockRelease(LockHashPartitionLockByIndex(i));
}

/*
 * ProcWaitForSignal - wait for a signal from another backend.
 *
 * This can share the semaphore normally used for waiting for locks,
 * since a backend could never be waiting for a lock and a signal at
 * the same time.  As with locks, it's OK if the signal arrives just
 * before we actually reach the waiting state.  Also as with locks,
 * it's necessary that the caller be robust against bogus wakeups:
 * always check that the desired state has occurred, and wait again
 * if not.  This copes with possible "leftover" wakeups.
 */
void
ProcWaitForSignal(void)
{
    PGSemaphoreLock(&MyProc->sem, true);
}

/*
 * ProcSendSignal - send a signal to a backend identified by PID
 */
void
ProcSendSignal(int pid)
{
Allow read only connections during recovery, known as Hot Standby.
Enabled by recovery_connections = on (default) and forcing archive recovery using a recovery.conf. Recovery processing now emulates the original transactions as they are replayed, providing full locking and MVCC behaviour for read only queries. Recovery must enter consistent state before connections are allowed, so there is a delay, typically short, before connections succeed. Replay of recovering transactions can conflict and in some cases deadlock with queries during recovery; these result in query cancellation after max_standby_delay seconds have expired. Infrastructure changes have minor effects on normal running, though introduce four new types of WAL record.
New test mode "make standbycheck" allows regression tests of static command behaviour on a standby server while in recovery. Typical and extreme dynamic behaviours have been checked via code inspection and manual testing. Few port specific behaviours have been utilised, though primary testing has been on Linux only so far.
This commit is the basic patch. Additional changes will follow in this release to enhance some aspects of behaviour, notably improved handling of conflicts, deadlock detection and query cancellation. Changes to VACUUM FULL are also required.
Simon Riggs, with significant and lengthy review by Heikki Linnakangas, including streamlined redesign of snapshot creation and two-phase commit.
Important contributions from Florian Pflug, Mark Kirkwood, Merlin Moncure, Greg Stark, Gianni Ciolli, Gabriele Bartolini, Hannu Krosing, Robert Haas, Tatsuo Ishii, Hiroyuki Yamada plus support and feedback from many other community members.
2009-12-19 02:32:45 +01:00
|
|
|
PGPROC *proc = NULL;
|
|
|
|
|
|
|
|
if (RecoveryInProgress())
|
|
|
|
{
|
|
|
|
/* use volatile pointer to prevent code rearrangement */
|
|
|
|
volatile PROC_HDR *procglobal = ProcGlobal;
|
|
|
|
|
|
|
|
SpinLockAcquire(ProcStructLock);
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Check to see whether it is the Startup process we wish to signal.
|
2010-02-26 03:01:40 +01:00
|
|
|
* This call is made by the buffer manager when it wishes to wake up a
|
|
|
|
 * process that has been waiting for a pin so it can obtain a
|
2009-12-19 02:32:45 +01:00
|
|
|
* cleanup lock using LockBufferForCleanup(). Startup is not a normal
|
2010-02-26 03:01:40 +01:00
|
|
|
* backend, so BackendPidGetProc() will not return any pid at all. So
|
|
|
|
* we remember the information for this special case.
|
2009-12-19 02:32:45 +01:00
|
|
|
*/
|
|
|
|
if (pid == procglobal->startupProcPid)
|
|
|
|
proc = procglobal->startupProc;
|
|
|
|
|
|
|
|
SpinLockRelease(ProcStructLock);
|
|
|
|
}
|
|
|
|
|
|
|
|
if (proc == NULL)
|
|
|
|
proc = BackendPidGetProc(pid);
|
2001-07-06 23:04:26 +02:00
|
|
|
|
|
|
|
if (proc != NULL)
|
2002-05-05 02:03:29 +02:00
|
|
|
PGSemaphoreUnlock(&proc->sem);
|
2001-07-06 23:04:26 +02:00
|
|
|
}
|