/*-------------------------------------------------------------------------
 *
 * checkpointer.c
 *
 * The checkpointer is new as of Postgres 9.2.  It handles all checkpoints.
 * Checkpoints are automatically dispatched after a certain amount of time has
 * elapsed since the last one, and it can be signaled to perform requested
 * checkpoints as well.  (The GUC parameter that mandates a checkpoint every
 * so many WAL segments is implemented by having backends signal when they
 * fill WAL segments; the checkpointer itself doesn't watch for the
 * condition.)
 *
 * The checkpointer is started by the postmaster as soon as the startup
 * subprocess finishes, or as soon as recovery begins if we are doing archive
 * recovery.  It remains alive until the postmaster commands it to terminate.
 * Normal termination is by SIGUSR2, which instructs the checkpointer to
 * execute a shutdown checkpoint and then exit(0).  (All backends must be
 * stopped before SIGUSR2 is issued!)  Emergency termination is by SIGQUIT;
 * like any backend, the checkpointer will simply abort and exit on SIGQUIT.
 *
 * If the checkpointer exits unexpectedly, the postmaster treats that the same
 * as a backend crash: shared memory may be corrupted, so remaining backends
 * should be killed by SIGQUIT and then a recovery cycle started.  (Even if
 * shared memory isn't corrupted, we have lost information about which
 * files need to be fsync'd for the next checkpoint, and so a system
 * restart needs to be forced.)
 *
 *
 * Portions Copyright (c) 1996-2016, PostgreSQL Global Development Group
 *
 *
 * IDENTIFICATION
 *	  src/backend/postmaster/checkpointer.c
 *
 *-------------------------------------------------------------------------
 */
#include "postgres.h"

#include <signal.h>
#include <sys/time.h>
#include <time.h>
#include <unistd.h>

#include "access/xlog.h"
#include "access/xlog_internal.h"
#include "libpq/pqsignal.h"
#include "miscadmin.h"
#include "pgstat.h"
#include "postmaster/bgwriter.h"
#include "replication/syncrep.h"
#include "storage/bufmgr.h"
#include "storage/fd.h"
#include "storage/ipc.h"
#include "storage/lwlock.h"
#include "storage/proc.h"
#include "storage/shmem.h"
#include "storage/smgr.h"
#include "storage/spin.h"
#include "utils/guc.h"
#include "utils/memutils.h"
#include "utils/resowner.h"

/*----------
 * Shared memory area for communication between checkpointer and backends
 *
 * The ckpt counters allow backends to watch for completion of a checkpoint
 * request they send.  Here's how it works:
 *	* At start of a checkpoint, checkpointer reads (and clears) the request
 *	  flags and increments ckpt_started, while holding ckpt_lck.
 *	* On completion of a checkpoint, checkpointer sets ckpt_done to
 *	  equal ckpt_started.
 *	* On failure of a checkpoint, checkpointer increments ckpt_failed
 *	  and sets ckpt_done to equal ckpt_started.
 *
 * The algorithm for backends is:
 *	1. Record current values of ckpt_failed and ckpt_started, and
 *	   set request flags, while holding ckpt_lck.
 *	2. Send signal to request checkpoint.
 *	3. Sleep until ckpt_started changes.  Now you know a checkpoint has
 *	   begun since you started this algorithm (although *not* that it was
 *	   specifically initiated by your signal), and that it is using your flags.
 *	4. Record new value of ckpt_started.
 *	5. Sleep until ckpt_done >= saved value of ckpt_started.  (Use modulo
 *	   arithmetic here in case counters wrap around.)  Now you know a
 *	   checkpoint has started and completed, but not whether it was
 *	   successful.
 *	6. If ckpt_failed is different from the originally saved value,
 *	   assume request failed; otherwise it was definitely successful.
 *
 * ckpt_flags holds the OR of the checkpoint request flags sent by all
 * requesting backends since the last checkpoint start.  The flags are
 * chosen so that OR'ing is the correct way to combine multiple requests.
 *
 * num_backend_writes is used to count the number of buffer writes performed
 * by user backend processes.  This counter should be wide enough that it
 * can't overflow during a single processing cycle.  num_backend_fsync
 * counts the subset of those writes that also had to do their own fsync,
 * because the checkpointer failed to absorb their request.
 *
 * The requests array holds fsync requests sent by backends and not yet
 * absorbed by the checkpointer.
 *
 * Unlike the checkpoint fields, num_backend_writes, num_backend_fsync, and
 * the requests fields are protected by CheckpointerCommLock.
 *----------
 */
typedef struct
{
	RelFileNode rnode;
	ForkNumber	forknum;
	BlockNumber segno;			/* see md.c for special values */
	/* might add a real request-type field later; not needed yet */
} CheckpointerRequest;

typedef struct
{
	pid_t		checkpointer_pid;	/* PID (0 if not started) */

	slock_t		ckpt_lck;		/* protects all the ckpt_* fields */

	int			ckpt_started;	/* advances when checkpoint starts */
	int			ckpt_done;		/* advances when checkpoint done */
	int			ckpt_failed;	/* advances when checkpoint fails */

	int			ckpt_flags;		/* checkpoint flags, as defined in xlog.h */

	uint32		num_backend_writes; /* counts user backend buffer writes */
	uint32		num_backend_fsync;	/* counts user backend fsync calls */

	int			num_requests;	/* current # of requests */
	int			max_requests;	/* allocated array size */
	CheckpointerRequest requests[FLEXIBLE_ARRAY_MEMBER];
} CheckpointerShmemStruct;

static CheckpointerShmemStruct *CheckpointerShmem;

/* interval for calling AbsorbFsyncRequests in CheckpointWriteDelay */
#define WRITES_PER_ABSORB		1000

/*
 * GUC parameters
 */
int			CheckPointTimeout = 300;
int			CheckPointWarning = 30;
double		CheckPointCompletionTarget = 0.5;

/*
 * Flags set by interrupt handlers for later service in the main loop.
 */
static volatile sig_atomic_t got_SIGHUP = false;
static volatile sig_atomic_t checkpoint_requested = false;
static volatile sig_atomic_t shutdown_requested = false;

/*
 * Private state
 */
static bool ckpt_active = false;

/* these values are valid when ckpt_active is true: */
static pg_time_t ckpt_start_time;
static XLogRecPtr ckpt_start_recptr;
static double ckpt_cached_elapsed;

static pg_time_t last_checkpoint_time;
static pg_time_t last_xlog_switch_time;

/* Prototypes for private functions */

static void CheckArchiveTimeout(void);
static bool IsCheckpointOnSchedule(double progress);
static bool ImmediateCheckpointRequested(void);
static bool CompactCheckpointerRequestQueue(void);
static void UpdateSharedMemoryConfig(void);

/* Signal handlers */

static void chkpt_quickdie(SIGNAL_ARGS);
static void ChkptSigHupHandler(SIGNAL_ARGS);
static void ReqCheckpointHandler(SIGNAL_ARGS);
static void chkpt_sigusr1_handler(SIGNAL_ARGS);
static void ReqShutdownHandler(SIGNAL_ARGS);

/*
 * Main entry point for checkpointer process
 *
 * This is invoked from AuxiliaryProcessMain, which has already created the
 * basic execution environment, but not enabled signals yet.
 */
void
CheckpointerMain(void)
{
	sigjmp_buf	local_sigjmp_buf;
	MemoryContext checkpointer_context;

	CheckpointerShmem->checkpointer_pid = MyProcPid;

	/*
	 * Properly accept or ignore signals the postmaster might send us
	 *
	 * Note: we deliberately ignore SIGTERM, because during a standard Unix
	 * system shutdown cycle, init will SIGTERM all processes at once.  We
	 * want to wait for the backends to exit, whereupon the postmaster will
	 * tell us it's okay to shut down (via SIGUSR2).
	 */
	pqsignal(SIGHUP, ChkptSigHupHandler);	/* set flag to read config file */
	pqsignal(SIGINT, ReqCheckpointHandler); /* request checkpoint */
	pqsignal(SIGTERM, SIG_IGN); /* ignore SIGTERM */
	pqsignal(SIGQUIT, chkpt_quickdie);	/* hard crash time */
	pqsignal(SIGALRM, SIG_IGN);
	pqsignal(SIGPIPE, SIG_IGN);
	pqsignal(SIGUSR1, chkpt_sigusr1_handler);
	pqsignal(SIGUSR2, ReqShutdownHandler);	/* request shutdown */

	/*
	 * Reset some signals that are accepted by postmaster but not here
	 */
	pqsignal(SIGCHLD, SIG_DFL);
	pqsignal(SIGTTIN, SIG_DFL);
	pqsignal(SIGTTOU, SIG_DFL);
	pqsignal(SIGCONT, SIG_DFL);
	pqsignal(SIGWINCH, SIG_DFL);

	/* We allow SIGQUIT (quickdie) at all times */
	sigdelset(&BlockSig, SIGQUIT);

	/*
	 * Initialize so that first time-driven event happens at the correct time.
	 */
	last_checkpoint_time = last_xlog_switch_time = (pg_time_t) time(NULL);

	/*
	 * Create a resource owner to keep track of our resources (currently only
	 * buffer pins).
	 */
	CurrentResourceOwner = ResourceOwnerCreate(NULL, "Checkpointer");

	/*
	 * Create a memory context that we will do all our work in.  We do this so
	 * that we can reset the context during error recovery and thereby avoid
	 * possible memory leaks.  Formerly this code just ran in
	 * TopMemoryContext, but resetting that would be a really bad idea.
	 */
	checkpointer_context = AllocSetContextCreate(TopMemoryContext,
												 "Checkpointer",
												 ALLOCSET_DEFAULT_MINSIZE,
												 ALLOCSET_DEFAULT_INITSIZE,
												 ALLOCSET_DEFAULT_MAXSIZE);
	MemoryContextSwitchTo(checkpointer_context);

	/*
	 * If an exception is encountered, processing resumes here.
	 *
	 * See notes in postgres.c about the design of this coding.
	 */
	if (sigsetjmp(local_sigjmp_buf, 1) != 0)
	{
		/* Since not using PG_TRY, must reset error stack by hand */
		error_context_stack = NULL;

		/* Prevent interrupts while cleaning up */
		HOLD_INTERRUPTS();

		/* Report the error to the server log */
		EmitErrorReport();

		/*
		 * These operations are really just a minimal subset of
		 * AbortTransaction().  We don't have very many resources to worry
		 * about in checkpointer, but we do have LWLocks, buffers, and temp
		 * files.
		 */
		LWLockReleaseAll();
		pgstat_report_wait_end();
		AbortBufferIO();
		UnlockBuffers();
		/* buffer pins are released here: */
		ResourceOwnerRelease(CurrentResourceOwner,
							 RESOURCE_RELEASE_BEFORE_LOCKS,
							 false, true);
		/* we needn't bother with the other ResourceOwnerRelease phases */
		AtEOXact_Buffers(false);
		AtEOXact_SMgr();
		AtEOXact_Files();
		AtEOXact_HashTables(false);

		/* Warn any waiting backends that the checkpoint failed. */
		if (ckpt_active)
		{
			SpinLockAcquire(&CheckpointerShmem->ckpt_lck);
			CheckpointerShmem->ckpt_failed++;
			CheckpointerShmem->ckpt_done = CheckpointerShmem->ckpt_started;
			SpinLockRelease(&CheckpointerShmem->ckpt_lck);

			ckpt_active = false;
		}

		/*
		 * Now return to normal top-level context and clear ErrorContext for
		 * next time.
		 */
		MemoryContextSwitchTo(checkpointer_context);
		FlushErrorState();

		/* Flush any leaked data in the top-level context */
		MemoryContextResetAndDeleteChildren(checkpointer_context);

		/* Now we can allow interrupts again */
		RESUME_INTERRUPTS();

		/*
		 * Sleep at least 1 second after any error.  A write error is likely
		 * to be repeated, and we don't want to be filling the error logs as
		 * fast as we can.
		 */
		pg_usleep(1000000L);

		/*
		 * Close all open files after any error.  This is helpful on Windows,
		 * where holding deleted files open causes various strange errors.
		 * It's not clear we need it elsewhere, but shouldn't hurt.
		 */
		smgrcloseall();
	}

	/* We can now handle ereport(ERROR) */
	PG_exception_stack = &local_sigjmp_buf;

	/*
	 * Unblock signals (they were blocked when the postmaster forked us)
	 */
	PG_SETMASK(&UnBlockSig);

	/*
	 * Ensure all shared memory values are set correctly for the config. Doing
	 * this here ensures no race conditions from other concurrent updaters.
	 */
	UpdateSharedMemoryConfig();

	/*
	 * Advertise our latch that backends can use to wake us up while we're
	 * sleeping.
	 */
	ProcGlobal->checkpointerLatch = &MyProc->procLatch;

	/*
	 * Loop forever
	 */
	for (;;)
	{
		bool		do_checkpoint = false;
		int			flags = 0;
		pg_time_t	now;
		int			elapsed_secs;
		int			cur_timeout;
		int			rc;

		/* Clear any already-pending wakeups */
		ResetLatch(MyLatch);

		/*
		 * Process any requests or signals received recently.
		 */
		AbsorbFsyncRequests();

		if (got_SIGHUP)
		{
			got_SIGHUP = false;
			ProcessConfigFile(PGC_SIGHUP);

			/*
			 * Checkpointer is the last process to shut down, so we ask it to
			 * hold the keys for a range of other tasks, most of which have
			 * nothing to do with checkpointing at all.
			 *
			 * For various reasons, some config values can change dynamically
			 * so the primary copy of them is held in shared memory to make
			 * sure all backends see the same value.  We make Checkpointer
			 * responsible for updating the shared memory copy if the
			 * parameter setting changes because of SIGHUP.
			 */
			UpdateSharedMemoryConfig();
		}

		if (checkpoint_requested)
		{
			checkpoint_requested = false;
			do_checkpoint = true;
			BgWriterStats.m_requested_checkpoints++;
		}
		if (shutdown_requested)
		{
			/*
			 * From here on, elog(ERROR) should end with exit(1), not send
			 * control back to the sigsetjmp block above
			 */
			ExitOnAnyError = true;
			/* Close down the database */
			ShutdownXLOG(0, 0);
			/* Normal exit from the checkpointer is here */
			proc_exit(0);		/* done */
		}

		/*
		 * Force a checkpoint if too much time has elapsed since the last one.
		 * Note that we count a timed checkpoint in stats only when this
		 * occurs without an external request, but we set the CAUSE_TIME flag
		 * bit even if there is also an external request.
		 */
		now = (pg_time_t) time(NULL);
		elapsed_secs = now - last_checkpoint_time;
		if (elapsed_secs >= CheckPointTimeout)
		{
			if (!do_checkpoint)
				BgWriterStats.m_timed_checkpoints++;
			do_checkpoint = true;
			flags |= CHECKPOINT_CAUSE_TIME;
		}

		/*
		 * Do a checkpoint if requested.
		 */
		if (do_checkpoint)
		{
			bool		ckpt_performed = false;
			bool		do_restartpoint;

			/*
			 * Check if we should perform a checkpoint or a restartpoint. As a
			 * side-effect, RecoveryInProgress() initializes TimeLineID if
			 * it's not set yet.
			 */
			do_restartpoint = RecoveryInProgress();

			/*
			 * Atomically fetch the request flags to figure out what kind of a
			 * checkpoint we should perform, and increase the started-counter
			 * to acknowledge that we've started a new checkpoint.
			 */
			SpinLockAcquire(&CheckpointerShmem->ckpt_lck);
			flags |= CheckpointerShmem->ckpt_flags;
			CheckpointerShmem->ckpt_flags = 0;
			CheckpointerShmem->ckpt_started++;
			SpinLockRelease(&CheckpointerShmem->ckpt_lck);

			/*
			 * The end-of-recovery checkpoint is a real checkpoint that's
			 * performed while we're still in recovery.
			 */
			if (flags & CHECKPOINT_END_OF_RECOVERY)
				do_restartpoint = false;

			/*
			 * We will warn if (a) too soon since last checkpoint (whatever
			 * caused it) and (b) somebody set the CHECKPOINT_CAUSE_XLOG flag
			 * since the last checkpoint start.  Note in particular that this
			 * implementation will not generate warnings caused by
			 * CheckPointTimeout < CheckPointWarning.
			 */
			if (!do_restartpoint &&
				(flags & CHECKPOINT_CAUSE_XLOG) &&
				elapsed_secs < CheckPointWarning)
				ereport(LOG,
						(errmsg_plural("checkpoints are occurring too frequently (%d second apart)",
									   "checkpoints are occurring too frequently (%d seconds apart)",
									   elapsed_secs,
									   elapsed_secs),
						 errhint("Consider increasing the configuration parameter \"max_wal_size\".")));

			/*
			 * Initialize checkpointer-private variables used during
			 * checkpoint.
			 */
			ckpt_active = true;
weasel-words there to mention that max_wal_size may well be exceeded by
some amount in recovery.
2015-06-28 23:09:10 +02:00
|
|
|
if (do_restartpoint)
|
|
|
|
ckpt_start_recptr = GetXLogReplayRecPtr(NULL);
|
|
|
|
else
|
2011-11-01 19:07:29 +01:00
|
|
|
ckpt_start_recptr = GetInsertRecPtr();
|
|
|
|
ckpt_start_time = now;
|
|
|
|
ckpt_cached_elapsed = 0;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Do the checkpoint.
|
|
|
|
*/
|
|
|
|
if (!do_restartpoint)
|
|
|
|
{
|
|
|
|
CreateCheckPoint(flags);
|
|
|
|
ckpt_performed = true;
|
|
|
|
}
|
|
|
|
else
|
|
|
|
ckpt_performed = CreateRestartPoint(flags);
|
|
|
|
|
|
|
|
/*
|
2014-05-06 18:12:18 +02:00
|
|
|
* After any checkpoint, close all smgr files. This is so we
|
2011-11-01 19:07:29 +01:00
|
|
|
* won't hang onto smgr references to deleted files indefinitely.
|
|
|
|
*/
|
|
|
|
smgrcloseall();
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Indicate checkpoint completion to any waiting backends.
|
|
|
|
*/
|
2015-10-06 21:45:02 +02:00
|
|
|
SpinLockAcquire(&CheckpointerShmem->ckpt_lck);
|
|
|
|
CheckpointerShmem->ckpt_done = CheckpointerShmem->ckpt_started;
|
|
|
|
SpinLockRelease(&CheckpointerShmem->ckpt_lck);
|
2011-11-01 19:07:29 +01:00
|
|
|
|
|
|
|
if (ckpt_performed)
|
|
|
|
{
|
|
|
|
/*
|
|
|
|
* Note we record the checkpoint start time not end time as
|
|
|
|
* last_checkpoint_time. This is so that time-driven
|
|
|
|
* checkpoints happen at a predictable spacing.
|
|
|
|
*/
|
|
|
|
last_checkpoint_time = now;
|
|
|
|
}
|
|
|
|
else
|
|
|
|
{
|
|
|
|
/*
|
|
|
|
* We were not able to perform the restartpoint (checkpoints
|
|
|
|
* throw an ERROR in case of error). Most likely because we
|
|
|
|
* have not received any new checkpoint WAL records since the
|
|
|
|
* last restartpoint. Try again in 15 s.
|
|
|
|
*/
|
|
|
|
last_checkpoint_time = now - CheckPointTimeout + 15;
|
|
|
|
}
|
|
|
|
|
|
|
|
ckpt_active = false;
|
|
|
|
}

		/* Check for archive_timeout and switch xlog files if necessary. */
		CheckArchiveTimeout();

		/*
		 * Send off activity statistics to the stats collector.  (The reason
		 * why we re-use bgwriter-related code for this is that the bgwriter
		 * and checkpointer used to be just one process.  It's probably not
		 * worth the trouble to split the stats support into two independent
		 * stats message types.)
		 */
		pgstat_send_bgwriter();

		/*
		 * Sleep until we are signaled or it's time for another checkpoint or
		 * xlog file switch.
		 */
		now = (pg_time_t) time(NULL);
		elapsed_secs = now - last_checkpoint_time;
		if (elapsed_secs >= CheckPointTimeout)
			continue;			/* no sleep for us ... */
		cur_timeout = CheckPointTimeout - elapsed_secs;
		if (XLogArchiveTimeout > 0 && !RecoveryInProgress())
		{
			elapsed_secs = now - last_xlog_switch_time;
			if (elapsed_secs >= XLogArchiveTimeout)
				continue;		/* no sleep for us ... */
			cur_timeout = Min(cur_timeout, XLogArchiveTimeout - elapsed_secs);
		}

		rc = WaitLatch(MyLatch,
					   WL_LATCH_SET | WL_TIMEOUT | WL_POSTMASTER_DEATH,
					   cur_timeout * 1000L /* convert to ms */ );

		/*
		 * Emergency bailout if postmaster has died.  This is to avoid the
		 * necessity for manual cleanup of all postmaster children.
		 */
		if (rc & WL_POSTMASTER_DEATH)
			exit(1);
	}
}
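
/*
 * Illustrative note (editorial, not part of the original source): a worked
 * example of the sleep computation in the loop above.  With
 * checkpoint_timeout = 300s and 120s elapsed since the last checkpoint,
 * cur_timeout starts at 180s.  If archive_timeout = 60s and 45s have passed
 * since the last xlog switch, cur_timeout is clamped to Min(180, 15) = 15s,
 * so WaitLatch() sleeps at most 15 seconds before the next archive check.
 */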

/*
 * CheckArchiveTimeout -- check for archive_timeout and switch xlog files
 *
 * This will switch to a new WAL file and force an archive file write
 * if any activity is recorded in the current WAL file, including just
 * a single checkpoint record.
 */
static void
CheckArchiveTimeout(void)
{
	pg_time_t	now;
	pg_time_t	last_time;

	if (XLogArchiveTimeout <= 0 || RecoveryInProgress())
		return;

	now = (pg_time_t) time(NULL);

	/* First we do a quick check using possibly-stale local state. */
	if ((int) (now - last_xlog_switch_time) < XLogArchiveTimeout)
		return;

	/*
	 * Update local state ... note that last_xlog_switch_time is the last time
	 * a switch was performed *or requested*.
	 */
	last_time = GetLastSegSwitchTime();

	last_xlog_switch_time = Max(last_xlog_switch_time, last_time);

	/* Now we can do the real check */
	if ((int) (now - last_xlog_switch_time) >= XLogArchiveTimeout)
	{
		XLogRecPtr	switchpoint;

		/* OK, it's time to switch */
		switchpoint = RequestXLogSwitch();

		/*
		 * If the returned pointer points exactly to a segment boundary,
		 * assume nothing happened.
		 */
		if ((switchpoint % XLogSegSize) != 0)
			ereport(DEBUG1,
					(errmsg("transaction log switch forced (archive_timeout=%d)",
							XLogArchiveTimeout)));

		/*
		 * Update state in any case, so we don't retry constantly when the
		 * system is idle.
		 */
		last_xlog_switch_time = now;
	}
}

/*
 * Returns true if an immediate checkpoint request is pending.  (Note that
 * this does not check the *current* checkpoint's IMMEDIATE flag, but whether
 * there is one pending behind it.)
 */
static bool
ImmediateCheckpointRequested(void)
{
	if (checkpoint_requested)
	{
		volatile CheckpointerShmemStruct *cps = CheckpointerShmem;

		/*
		 * We don't need to acquire the ckpt_lck in this case because we're
		 * only looking at a single flag bit.
		 */
		if (cps->ckpt_flags & CHECKPOINT_IMMEDIATE)
			return true;
	}
	return false;
}

/*
 * CheckpointWriteDelay -- control rate of checkpoint
 *
 * This function is called after each page write performed by BufferSync().
 * It is responsible for throttling BufferSync()'s write rate to hit
 * checkpoint_completion_target.
 *
 * The checkpoint request flags should be passed in; currently the only one
 * examined is CHECKPOINT_IMMEDIATE, which disables delays between writes.
 *
 * 'progress' is an estimate of how much of the work has been done, as a
 * fraction between 0.0 meaning none, and 1.0 meaning all done.
 */
void
CheckpointWriteDelay(int flags, double progress)
{
	static int	absorb_counter = WRITES_PER_ABSORB;

	/* Do nothing if checkpoint is being executed by non-checkpointer process */
	if (!AmCheckpointerProcess())
		return;

	/*
	 * Perform the usual duties and take a nap, unless we're behind schedule,
	 * in which case we just try to catch up as quickly as possible.
	 */
	if (!(flags & CHECKPOINT_IMMEDIATE) &&
		!shutdown_requested &&
		!ImmediateCheckpointRequested() &&
		IsCheckpointOnSchedule(progress))
	{
		if (got_SIGHUP)
		{
			got_SIGHUP = false;
			ProcessConfigFile(PGC_SIGHUP);
			/* update shmem copies of config variables */
			UpdateSharedMemoryConfig();
		}

		AbsorbFsyncRequests();
		absorb_counter = WRITES_PER_ABSORB;

		CheckArchiveTimeout();

		/*
		 * Report interim activity statistics to the stats collector.
		 */
		pgstat_send_bgwriter();

		/*
		 * This sleep used to be connected to bgwriter_delay, typically 200ms.
		 * That resulted in more frequent wakeups if not much work to do.
		 * Checkpointer and bgwriter are no longer related so take the Big
		 * Sleep.
		 */
		pg_usleep(100000L);
	}
	else if (--absorb_counter <= 0)
	{
		/*
		 * Absorb pending fsync requests after each WRITES_PER_ABSORB write
		 * operations even when we don't sleep, to prevent overflow of the
		 * fsync request queue.
		 */
		AbsorbFsyncRequests();
		absorb_counter = WRITES_PER_ABSORB;
	}
}

/*
 * IsCheckpointOnSchedule -- are we on schedule to finish this checkpoint
 * (or restartpoint) in time?
 *
 * Compares the current progress against the time/segments elapsed since last
 * checkpoint, and returns true if the progress we've made this far is greater
 * than the elapsed time/segments.
 */
static bool
IsCheckpointOnSchedule(double progress)
{
	XLogRecPtr	recptr;
	struct timeval now;
	double		elapsed_xlogs,
				elapsed_time;

	Assert(ckpt_active);

	/* Scale progress according to checkpoint_completion_target. */
	progress *= CheckPointCompletionTarget;

	/*
	 * Check against the cached value first. Only do the more expensive
	 * calculations once we reach the target previously calculated. Since
	 * neither time or WAL insert pointer moves backwards, a freshly
	 * calculated value can only be greater than or equal to the cached value.
	 */
	if (progress < ckpt_cached_elapsed)
		return false;

	/*
	 * Check progress against WAL segments written and CheckPointSegments.
	 *
	 * We compare the current WAL insert location against the location
	 * computed before calling CreateCheckPoint. The code in XLogInsert that
	 * actually triggers a checkpoint when CheckPointSegments is exceeded
	 * compares against RedoRecptr, so this is not completely accurate.
	 * However, it's good enough for our purposes, we're only calculating an
	 * estimate anyway.
	 *
	 * During recovery, we compare last replayed WAL record's location with
	 * the location computed before calling CreateRestartPoint. That maintains
	 * the same pacing as we have during checkpoints in normal operation, but
	 * we might exceed max_wal_size by a fair amount. That's because there can
	 * be a large gap between a checkpoint's redo-pointer and the checkpoint
	 * record itself, and we only start the restartpoint after we've seen the
	 * checkpoint record. (The gap is typically up to CheckPointSegments *
	 * checkpoint_completion_target where checkpoint_completion_target is the
	 * value that was in effect when the WAL was generated).
	 */
	if (RecoveryInProgress())
		recptr = GetXLogReplayRecPtr(NULL);
	else
		recptr = GetInsertRecPtr();
	elapsed_xlogs = (((double) (recptr - ckpt_start_recptr)) / XLogSegSize) / CheckPointSegments;

	if (progress < elapsed_xlogs)
	{
		ckpt_cached_elapsed = elapsed_xlogs;
		return false;
	}

	/*
	 * Check progress against time elapsed and checkpoint_timeout.
	 */
	gettimeofday(&now, NULL);
	elapsed_time = ((double) ((pg_time_t) now.tv_sec - ckpt_start_time) +
					now.tv_usec / 1000000.0) / CheckPointTimeout;

	if (progress < elapsed_time)
	{
		ckpt_cached_elapsed = elapsed_time;
		return false;
	}

	/* It looks like we're on schedule. */
	return true;
}
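
/*
 * Illustrative note (editorial, not part of the original source): a worked
 * example of the pacing math above.  With checkpoint_completion_target = 0.5
 * and 40% of the buffers written, scaled progress is 0.4 * 0.5 = 0.2.  If
 * 30% of checkpoint_timeout has already elapsed (elapsed_time = 0.3), then
 * progress < elapsed_time, so the function returns false and
 * CheckpointWriteDelay() skips its nap to catch up.
 */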


/* --------------------------------
 *		signal handler routines
 * --------------------------------
 */

/*
 * chkpt_quickdie() occurs when signalled SIGQUIT by the postmaster.
 *
 * Some backend has bought the farm,
 * so we need to stop what we're doing and exit.
 */
static void
chkpt_quickdie(SIGNAL_ARGS)
{
	PG_SETMASK(&BlockSig);

	/*
	 * We DO NOT want to run proc_exit() callbacks -- we're here because
	 * shared memory may be corrupted, so we don't want to try to clean up our
	 * transaction.  Just nail the windows shut and get out of town.  Now that
	 * there's an atexit callback to prevent third-party code from breaking
	 * things by calling exit() directly, we have to reset the callbacks
	 * explicitly to make this work as intended.
	 */
	on_exit_reset();

	/*
	 * Note we do exit(2) not exit(0).  This is to force the postmaster into a
	 * system reset cycle if some idiot DBA sends a manual SIGQUIT to a random
	 * backend.  This is necessary precisely because we don't clean up our
	 * shared memory state.  (The "dead man switch" mechanism in pmsignal.c
	 * should ensure the postmaster sees this as a crash, too, but no harm in
	 * being doubly sure.)
	 */
	exit(2);
}
|
|
|
|
|
|
|
|
/* SIGHUP: set flag to re-read config file at next convenient time */
|
|
|
|
static void
|
|
|
|
ChkptSigHupHandler(SIGNAL_ARGS)
|
|
|
|
{
|
Reduce idle power consumption of walwriter and checkpointer processes.
This patch modifies the walwriter process so that, when it has not found
anything useful to do for many consecutive wakeup cycles, it extends its
sleep time to reduce the server's idle power consumption. It reverts to
normal as soon as it's done any successful flushes. It's still true that
during any async commit, backends check for completed, unflushed pages of
WAL and signal the walwriter if there are any; so that in practice the
walwriter can get awakened and returned to normal operation sooner than the
sleep time might suggest.
Also, improve the checkpointer so that it uses a latch and a computed delay
time to not wake up at all except when it has something to do, replacing a
previous hardcoded 0.5 sec wakeup cycle. This also is primarily useful for
reducing the server's power consumption when idle.
In passing, get rid of the dedicated latch for signaling the walwriter in
favor of using its procLatch, since that comports better with possible
generic signal handlers using that latch. Also, fix a pre-existing bug
with failure to save/restore errno in walwriter's signal handlers.
Peter Geoghegan, somewhat simplified by Tom
2012-05-09 02:03:26 +02:00
|
|
|
int save_errno = errno;
|
|
|
|
|
2011-11-01 19:07:29 +01:00
|
|
|
got_SIGHUP = true;
|
2015-01-14 18:45:22 +01:00
|
|
|
SetLatch(MyLatch);
|
Reduce idle power consumption of walwriter and checkpointer processes.
This patch modifies the walwriter process so that, when it has not found
anything useful to do for many consecutive wakeup cycles, it extends its
sleep time to reduce the server's idle power consumption. It reverts to
normal as soon as it's done any successful flushes. It's still true that
during any async commit, backends check for completed, unflushed pages of
WAL and signal the walwriter if there are any; so that in practice the
walwriter can get awakened and returned to normal operation sooner than the
sleep time might suggest.
Also, improve the checkpointer so that it uses a latch and a computed delay
time to not wake up at all except when it has something to do, replacing a
previous hardcoded 0.5 sec wakeup cycle. This also is primarily useful for
reducing the server's power consumption when idle.
In passing, get rid of the dedicated latch for signaling the walwriter in
favor of using its procLatch, since that comports better with possible
generic signal handlers using that latch. Also, fix a pre-existing bug
with failure to save/restore errno in walwriter's signal handlers.
Peter Geoghegan, somewhat simplified by Tom
2012-05-09 02:03:26 +02:00
|
|
|
|
|
|
|
errno = save_errno;
|
2011-11-01 19:07:29 +01:00
|
|
|
}

/* SIGINT: set flag to run a normal checkpoint right away */
static void
ReqCheckpointHandler(SIGNAL_ARGS)
{
	int			save_errno = errno;

	checkpoint_requested = true;
	SetLatch(MyLatch);

	errno = save_errno;
}

/* SIGUSR1: used for latch wakeups */
static void
chkpt_sigusr1_handler(SIGNAL_ARGS)
{
	int			save_errno = errno;

	latch_sigusr1_handler();

	errno = save_errno;
}

/* SIGUSR2: set flag to run a shutdown checkpoint and exit */
static void
ReqShutdownHandler(SIGNAL_ARGS)
{
	int			save_errno = errno;

	shutdown_requested = true;
	SetLatch(MyLatch);

	errno = save_errno;
}

/* --------------------------------
 *		communication with backends
 * --------------------------------
 */

/*
 * CheckpointerShmemSize
 *		Compute space needed for checkpointer-related shared memory
 */
Size
CheckpointerShmemSize(void)
{
	Size		size;

	/*
	 * Currently, the size of the requests[] array is arbitrarily set equal to
	 * NBuffers.  This may prove too large or small ...
	 */
	size = offsetof(CheckpointerShmemStruct, requests);
	size = add_size(size, mul_size(NBuffers, sizeof(CheckpointerRequest)));

	return size;
}

/*
 * CheckpointerShmemInit
 *		Allocate and initialize checkpointer-related shared memory
 */
void
CheckpointerShmemInit(void)
{
Improve coding around the fsync request queue.
In all branches back to 8.3, this patch fixes a questionable assumption in
CompactCheckpointerRequestQueue/CompactBgwriterRequestQueue that there are
no uninitialized pad bytes in the request queue structs. This would only
cause trouble if (a) there were such pad bytes, which could happen in 8.4
and up if the compiler makes enum ForkNumber narrower than 32 bits, but
otherwise would require not-currently-planned changes in the widths of
other typedefs; and (b) the kernel has not uniformly initialized the
contents of shared memory to zeroes. Still, it seems a tad risky, and we
can easily remove any risk by pre-zeroing the request array for ourselves.
In addition to that, we need to establish a coding rule that struct
RelFileNode can't contain any padding bytes, since such structs are copied
into the request array verbatim. (There are other places that are assuming
this anyway, it turns out.)
In 9.1 and up, the risk was a bit larger because we were also effectively
assuming that struct RelFileNodeBackend contained no pad bytes, and with
fields of different types in there, that would be much easier to break.
However, there is no good reason to ever transmit fsync or delete requests
for temp files to the bgwriter/checkpointer, so we can revert the request
structs to plain RelFileNode, getting rid of the padding risk and saving
some marginal number of bytes and cycles in fsync queue manipulation while
we are at it. The savings might be more than marginal during deletion of
a temp relation, because the old code transmitted an entirely useless but
nonetheless expensive-to-process ForgetRelationFsync request to the
background process, and also had the background process perform the file
deletion even though that can safely be done immediately.
In addition, make some cleanup of nearby comments and small improvements to
the code in CompactCheckpointerRequestQueue/CompactBgwriterRequestQueue.
2012-07-17 22:55:39 +02:00
	Size		size = CheckpointerShmemSize();
	bool		found;

	CheckpointerShmem = (CheckpointerShmemStruct *)
		ShmemInitStruct("Checkpointer Data",
						size,
						&found);

	if (!found)
	{
		/*
		 * First time through, so initialize.  Note that we zero the whole
		 * requests array; this is so that CompactCheckpointerRequestQueue can
		 * assume that any pad bytes in the request structs are zeroes.
		 */
		MemSet(CheckpointerShmem, 0, size);
		SpinLockInit(&CheckpointerShmem->ckpt_lck);
		CheckpointerShmem->max_requests = NBuffers;
	}
}

/*
 * RequestCheckpoint
 *		Called in backend processes to request a checkpoint
 *
 * flags is a bitwise OR of the following:
 *	CHECKPOINT_IS_SHUTDOWN: checkpoint is for database shutdown.
 *	CHECKPOINT_END_OF_RECOVERY: checkpoint is for end of WAL recovery.
 *	CHECKPOINT_IMMEDIATE: finish the checkpoint ASAP,
 *		ignoring checkpoint_completion_target parameter.
 *	CHECKPOINT_FORCE: force a checkpoint even if no XLOG activity has occurred
 *		since the last one (implied by CHECKPOINT_IS_SHUTDOWN or
 *		CHECKPOINT_END_OF_RECOVERY).
 *	CHECKPOINT_WAIT: wait for completion before returning (otherwise,
 *		just signal checkpointer to do it, and return).
 *	CHECKPOINT_CAUSE_XLOG: checkpoint is requested due to xlog filling.
 *		(This affects logging, and in particular enables CheckPointWarning.)
 */
void
RequestCheckpoint(int flags)
{
	int			ntries;
	int			old_failed,
				old_started;

	/*
	 * If in a standalone backend, just do it ourselves.
	 */
	if (!IsPostmasterEnvironment)
	{
		/*
		 * There's no point in doing slow checkpoints in a standalone backend,
		 * because there's no other backends the checkpoint could disrupt.
		 */
		CreateCheckPoint(flags | CHECKPOINT_IMMEDIATE);

		/*
		 * After any checkpoint, close all smgr files.  This is so we won't
		 * hang onto smgr references to deleted files indefinitely.
		 */
		smgrcloseall();

		return;
	}

	/*
	 * Atomically set the request flags, and take a snapshot of the counters.
	 * When we see ckpt_started > old_started, we know the flags we set here
	 * have been seen by checkpointer.
	 *
	 * Note that we OR the flags with any existing flags, to avoid overriding
	 * a "stronger" request by another backend.  The flag senses must be
	 * chosen to make this work!
	 */
	SpinLockAcquire(&CheckpointerShmem->ckpt_lck);

	old_failed = CheckpointerShmem->ckpt_failed;
	old_started = CheckpointerShmem->ckpt_started;
	CheckpointerShmem->ckpt_flags |= flags;

	SpinLockRelease(&CheckpointerShmem->ckpt_lck);

	/*
	 * Send signal to request checkpoint.  It's possible that the checkpointer
	 * hasn't started yet, or is in process of restarting, so we will retry a
	 * few times if needed.  Also, if not told to wait for the checkpoint to
	 * occur, we consider failure to send the signal to be nonfatal and merely
	 * LOG it.
	 */
	for (ntries = 0;; ntries++)
	{
		if (CheckpointerShmem->checkpointer_pid == 0)
		{
			if (ntries >= 20)	/* max wait 2.0 sec */
			{
				elog((flags & CHECKPOINT_WAIT) ? ERROR : LOG,
					 "could not request checkpoint because checkpointer not running");
				break;
			}
		}
		else if (kill(CheckpointerShmem->checkpointer_pid, SIGINT) != 0)
		{
			if (ntries >= 20)	/* max wait 2.0 sec */
			{
				elog((flags & CHECKPOINT_WAIT) ? ERROR : LOG,
					 "could not signal for checkpoint: %m");
				break;
			}
		}
		else
			break;				/* signal sent successfully */

		CHECK_FOR_INTERRUPTS();
		pg_usleep(100000L);		/* wait 0.1 sec, then retry */
	}

	/*
	 * If requested, wait for completion.  We detect completion according to
	 * the algorithm given above.
	 */
	if (flags & CHECKPOINT_WAIT)
	{
		int			new_started,
					new_failed;

		/* Wait for a new checkpoint to start. */
		for (;;)
		{
			SpinLockAcquire(&CheckpointerShmem->ckpt_lck);
			new_started = CheckpointerShmem->ckpt_started;
			SpinLockRelease(&CheckpointerShmem->ckpt_lck);

			if (new_started != old_started)
				break;

			CHECK_FOR_INTERRUPTS();
			pg_usleep(100000L);
		}

		/*
		 * We are waiting for ckpt_done >= new_started, in a modulo sense.
		 */
		for (;;)
		{
			int			new_done;

			SpinLockAcquire(&CheckpointerShmem->ckpt_lck);
			new_done = CheckpointerShmem->ckpt_done;
			new_failed = CheckpointerShmem->ckpt_failed;
			SpinLockRelease(&CheckpointerShmem->ckpt_lck);

			if (new_done - new_started >= 0)
				break;

			CHECK_FOR_INTERRUPTS();
			pg_usleep(100000L);
		}

		if (new_failed != old_failed)
			ereport(ERROR,
					(errmsg("checkpoint request failed"),
					 errhint("Consult recent messages in the server log for details.")));
	}
}

/*
 * ForwardFsyncRequest
 *		Forward a file-fsync request from a backend to the checkpointer
 *
 * Whenever a backend is compelled to write directly to a relation
 * (which should be seldom, if the background writer is getting its job done),
 * the backend calls this routine to pass over knowledge that the relation
 * is dirty and must be fsync'd before next checkpoint.  We also use this
 * opportunity to count such writes for statistical purposes.
 *
 * This functionality is only supported for regular (not backend-local)
 * relations, so the rnode argument is intentionally RelFileNode not
 * RelFileNodeBackend.
 *
 * segno specifies which segment (not block!) of the relation needs to be
 * fsync'd.  (Since the valid range is much less than BlockNumber, we can
 * use high values for special flags; that's all internal to md.c, which
 * see for details.)
 *
 * To avoid holding the lock for longer than necessary, we normally write
 * to the requests[] queue without checking for duplicates.  The checkpointer
 * will have to eliminate dups internally anyway.  However, if we discover
 * that the queue is full, we make a pass over the entire queue to compact
 * it.  This is somewhat expensive, but the alternative is for the backend
 * to perform its own fsync, which is far more expensive in practice.  It
 * is theoretically possible a backend fsync might still be necessary, if
 * the queue is full and contains no duplicate entries.  In that case, we
 * let the backend know by returning false.
 */
bool
ForwardFsyncRequest(RelFileNode rnode, ForkNumber forknum, BlockNumber segno)
{
	CheckpointerRequest *request;
	bool		too_full;

	if (!IsUnderPostmaster)
		return false;			/* probably shouldn't even get here */

Fix management of pendingOpsTable in auxiliary processes.
mdinit() was misusing IsBootstrapProcessingMode() to decide whether to
create an fsync pending-operations table in the current process. This led
to creating a table not only in the startup and checkpointer processes as
intended, but also in the bgwriter process, not to mention other auxiliary
processes such as walwriter and walreceiver. Creation of the table in the
bgwriter is fatal, because it absorbs fsync requests that should have gone
to the checkpointer; instead they just sit in bgwriter local memory and are
never acted on. So writes performed by the bgwriter were not being fsync'd
which could result in data loss after an OS crash. I think there is no
live bug with respect to walwriter and walreceiver because those never
perform any writes of shared buffers; but the potential is there for
future breakage in those processes too.
To fix, make AuxiliaryProcessMain() export the current process's
AuxProcType as a global variable, and then make mdinit() test directly for
the types of aux process that should have a pendingOpsTable. Having done
that, we might as well also get rid of the random bool flags such as
am_walreceiver that some of the aux processes had grown. (Note that we
could not have fixed the bug by examining those variables in mdinit(),
because it's called from BaseInit() which is run by AuxiliaryProcessMain()
before entering any of the process-type-specific code.)
Back-patch to 9.2, where the problem was introduced by the split-up of
bgwriter and checkpointer processes. The bogus pendingOpsTable exists
in walwriter and walreceiver processes in earlier branches, but absent
any evidence that it causes actual problems there, I'll leave the older
branches alone.
2012-07-18 21:28:10 +02:00
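The fix described above boils down to exporting the process type as a single global and testing it directly, instead of scattering per-process bool flags. A minimal sketch, with hypothetical names (`AuxProcKind`, `CurrentAuxProc`, `needs_pending_ops_table`) standing in for the real `AuxProcType` machinery:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical mirror of the idea: one exported process-type enum replaces
 * ad-hoc flags like am_walreceiver. */
typedef enum
{
    NotAnAuxProcess,
    StartupProcess,
    BgWriterProcess,
    CheckpointerProcess,
    WalWriterProcess,
    WalReceiverProcess
} AuxProcKind;

static AuxProcKind CurrentAuxProc = NotAnAuxProcess;

/* Only the startup and checkpointer processes should own a pendingOpsTable;
 * if the bgwriter grows one, fsync requests silently die in its local memory. */
static bool
needs_pending_ops_table(void)
{
    return CurrentAuxProc == StartupProcess ||
           CurrentAuxProc == CheckpointerProcess;
}
```

The point of the global is that `mdinit()` runs from `BaseInit()` before any process-type-specific code, so only a value set by `AuxiliaryProcessMain()` itself is available that early.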
|
|
|
if (AmCheckpointerProcess())
|
2012-01-30 15:34:25 +01:00
|
|
|
elog(ERROR, "ForwardFsyncRequest must not be called in checkpointer");
|
2011-11-01 19:07:29 +01:00
|
|
|
|
2012-05-09 15:11:48 +02:00
|
|
|
LWLockAcquire(CheckpointerCommLock, LW_EXCLUSIVE);
|
2011-11-01 19:07:29 +01:00
|
|
|
|
|
|
|
/* Count all backend writes regardless of whether they fit in the queue */
|
2012-07-18 21:40:31 +02:00
|
|
|
if (!AmBackgroundWriterProcess())
|
|
|
|
CheckpointerShmem->num_backend_writes++;
|
2011-11-01 19:07:29 +01:00
|
|
|
|
|
|
|
/*
|
2012-06-10 21:20:04 +02:00
|
|
|
* If the checkpointer isn't running or the request queue is full, the
|
2014-05-06 18:12:18 +02:00
|
|
|
* backend will have to perform its own fsync request. But before forcing
|
2012-06-10 21:20:04 +02:00
|
|
|
* that to happen, we can try to compact the request queue.
|
2011-11-01 19:07:29 +01:00
|
|
|
*/
|
2012-05-09 15:23:45 +02:00
|
|
|
if (CheckpointerShmem->checkpointer_pid == 0 ||
|
|
|
|
(CheckpointerShmem->num_requests >= CheckpointerShmem->max_requests &&
|
2012-05-09 02:03:26 +02:00
|
|
|
!CompactCheckpointerRequestQueue()))
|
2011-11-01 19:07:29 +01:00
|
|
|
{
|
|
|
|
/*
|
|
|
|
* Count the subset of writes where backends have to do their own
|
|
|
|
* fsync
|
|
|
|
*/
|
2012-07-18 21:40:31 +02:00
|
|
|
if (!AmBackgroundWriterProcess())
|
|
|
|
CheckpointerShmem->num_backend_fsync++;
|
2012-05-09 15:11:48 +02:00
|
|
|
LWLockRelease(CheckpointerCommLock);
|
2011-11-01 19:07:29 +01:00
|
|
|
return false;
|
|
|
|
}
|
2012-05-09 02:03:26 +02:00
|
|
|
|
|
|
|
/* OK, insert request */
|
2012-05-09 15:23:45 +02:00
|
|
|
request = &CheckpointerShmem->requests[CheckpointerShmem->num_requests++];
|
2011-11-01 19:07:29 +01:00
|
|
|
request->rnode = rnode;
|
|
|
|
request->forknum = forknum;
|
|
|
|
request->segno = segno;
|
2012-05-09 02:03:26 +02:00
|
|
|
|
|
|
|
/* If queue is more than half full, nudge the checkpointer to empty it */
|
2012-05-09 15:23:45 +02:00
|
|
|
too_full = (CheckpointerShmem->num_requests >=
|
|
|
|
CheckpointerShmem->max_requests / 2);
|
2012-05-09 02:03:26 +02:00
|
|
|
|
2012-05-09 15:11:48 +02:00
|
|
|
LWLockRelease(CheckpointerCommLock);
|
2012-05-09 02:03:26 +02:00
|
|
|
|
|
|
|
/* ... but not till after we release the lock */
|
|
|
|
if (too_full && ProcGlobal->checkpointerLatch)
|
|
|
|
SetLatch(ProcGlobal->checkpointerLatch);
|
|
|
|
|
2011-11-01 19:07:29 +01:00
|
|
|
return true;
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* CompactCheckpointerRequestQueue
|
|
|
|
* Remove duplicates from the request queue to avoid backend fsyncs.
|
Improve coding around the fsync request queue.
In all branches back to 8.3, this patch fixes a questionable assumption in
CompactCheckpointerRequestQueue/CompactBgwriterRequestQueue that there are
no uninitialized pad bytes in the request queue structs. This would only
cause trouble if (a) there were such pad bytes, which could happen in 8.4
and up if the compiler makes enum ForkNumber narrower than 32 bits, but
otherwise would require not-currently-planned changes in the widths of
other typedefs; and (b) the kernel has not uniformly initialized the
contents of shared memory to zeroes. Still, it seems a tad risky, and we
can easily remove any risk by pre-zeroing the request array for ourselves.
In addition to that, we need to establish a coding rule that struct
RelFileNode can't contain any padding bytes, since such structs are copied
into the request array verbatim. (There are other places that are assuming
this anyway, it turns out.)
In 9.1 and up, the risk was a bit larger because we were also effectively
assuming that struct RelFileNodeBackend contained no pad bytes, and with
fields of different types in there, that would be much easier to break.
However, there is no good reason to ever transmit fsync or delete requests
for temp files to the bgwriter/checkpointer, so we can revert the request
structs to plain RelFileNode, getting rid of the padding risk and saving
some marginal number of bytes and cycles in fsync queue manipulation while
we are at it. The savings might be more than marginal during deletion of
a temp relation, because the old code transmitted an entirely useless but
nonetheless expensive-to-process ForgetRelationFsync request to the
background process, and also had the background process perform the file
deletion even though that can safely be done immediately.
In addition, make some cleanup of nearby comments and small improvements to
the code in CompactCheckpointerRequestQueue/CompactBgwriterRequestQueue.
2012-07-17 22:55:39 +02:00
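The padding hazard this commit closes is easy to demonstrate. The sketch below uses a hypothetical `DemoRequest` struct (the real code uses `CheckpointerRequest` embedding `RelFileNode`): zeroing the whole struct before assigning fields guarantees any compiler-inserted pad bytes are zero, so the struct can safely be compared or hashed as raw bytes.

```c
#include <assert.h>
#include <string.h>

/* Hypothetical request struct; a short field invites trailing/interior
 * padding on most ABIs, which is exactly the risk being illustrated. */
typedef struct
{
    unsigned    spcnode;
    unsigned    dbnode;
    unsigned    relnode;
    short       forknum;    /* pad bytes may follow */
    unsigned    segno;
} DemoRequest;

/* memset() the whole struct before assigning fields, so two logically equal
 * requests are also bytewise equal despite padding. */
static void
fill_request(DemoRequest *req, unsigned rel, short fork, unsigned seg)
{
    memset(req, 0, sizeof(*req));
    req->spcnode = 1663;
    req->dbnode = 1;
    req->relnode = rel;
    req->forknum = fork;
    req->segno = seg;
}
```

Without the `memset`, `memcmp`-based hashing of two stack-allocated requests with identical fields could legitimately disagree, which is why the commit pre-zeroes the shared request array itself.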
|
|
|
* Returns "true" if any entries were removed.
|
2011-11-01 19:07:29 +01:00
|
|
|
*
|
|
|
|
* Although a full fsync request queue is not common, it can lead to severe
|
|
|
|
* performance problems when it does happen. So far, this situation has
|
|
|
|
* only been observed to occur when the system is under heavy write load,
|
2014-05-06 18:12:18 +02:00
|
|
|
* and especially during the "sync" phase of a checkpoint. Without this
|
2011-11-01 19:07:29 +01:00
|
|
|
* logic, each backend begins doing an fsync for every block written, which
|
|
|
|
* gets very expensive and can slow down the whole system.
|
|
|
|
*
|
|
|
|
* Trying to do this every time the queue is full could be a net loss if there
|
2012-07-17 22:55:39 +02:00
|
|
|
* aren't any removable entries. But that should be vanishingly rare in
|
2011-11-01 19:07:29 +01:00
|
|
|
* practice: there's one queue entry per shared buffer.
|
|
|
|
*/
|
|
|
|
static bool
|
2012-05-09 02:03:26 +02:00
|
|
|
CompactCheckpointerRequestQueue(void)
|
2011-11-01 19:07:29 +01:00
|
|
|
{
|
2012-05-10 06:01:10 +02:00
|
|
|
struct CheckpointerSlotMapping
|
2011-11-01 19:07:29 +01:00
|
|
|
{
|
2012-05-09 15:23:45 +02:00
|
|
|
CheckpointerRequest request;
|
2011-11-01 19:07:29 +01:00
|
|
|
int slot;
|
|
|
|
};
|
|
|
|
|
|
|
|
int n,
|
|
|
|
preserve_count;
|
|
|
|
int num_skipped = 0;
|
|
|
|
HASHCTL ctl;
|
|
|
|
HTAB *htab;
|
|
|
|
bool *skip_slot;
|
|
|
|
|
2012-05-09 15:11:48 +02:00
|
|
|
/* must hold CheckpointerCommLock in exclusive mode */
|
|
|
|
Assert(LWLockHeldByMe(CheckpointerCommLock));
|
2011-11-01 19:07:29 +01:00
|
|
|
|
2012-07-17 22:55:39 +02:00
|
|
|
/* Initialize skip_slot array */
|
|
|
|
skip_slot = palloc0(sizeof(bool) * CheckpointerShmem->num_requests);
|
|
|
|
|
2011-11-01 19:07:29 +01:00
|
|
|
/* Initialize temporary hash table */
|
|
|
|
MemSet(&ctl, 0, sizeof(ctl));
|
2012-05-09 15:23:45 +02:00
|
|
|
ctl.keysize = sizeof(CheckpointerRequest);
|
2012-05-10 06:01:10 +02:00
|
|
|
ctl.entrysize = sizeof(struct CheckpointerSlotMapping);
|
2012-07-17 22:55:39 +02:00
|
|
|
ctl.hcxt = CurrentMemoryContext;
|
|
|
|
|
2012-01-30 15:34:25 +01:00
|
|
|
htab = hash_create("CompactCheckpointerRequestQueue",
|
2012-05-09 15:23:45 +02:00
|
|
|
CheckpointerShmem->num_requests,
|
2011-11-01 19:07:29 +01:00
|
|
|
&ctl,
|
Improve hash_create's API for selecting simple-binary-key hash functions.
Previously, if you wanted anything besides C-string hash keys, you had to
specify a custom hashing function to hash_create(). Nearly all such
callers were specifying tag_hash or oid_hash; which is tedious, and rather
error-prone, since a caller could easily miss the opportunity to optimize
by using hash_uint32 when appropriate. Replace this with a design whereby
callers using simple binary-data keys just specify HASH_BLOBS and don't
need to mess with specific support functions. hash_create() itself will
take care of optimizing when the key size is four bytes.
This nets out saving a few hundred bytes of code space, and offers
a measurable performance improvement in tidbitmap.c (which was not
exploiting the opportunity to use hash_uint32 for its 4-byte keys).
There might be some wins elsewhere too, I didn't analyze closely.
In future we could look into offering a similar optimized hashing function
for 8-byte keys. Under this design that could be done in a centralized
and machine-independent fashion, whereas getting it right for keys of
platform-dependent sizes would've been notationally painful before.
For the moment, the old way still works fine, so as not to break source
code compatibility for loadable modules. Eventually we might want to
remove tag_hash and friends from the exported API altogether, since there's
no real need for them to be explicitly referenced from outside dynahash.c.
Teodor Sigaev and Tom Lane
2014-12-18 19:36:29 +01:00
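The `HASH_BLOBS` design above amounts to letting the table pick an optimized hasher by key width instead of making every caller choose. A hedged standalone sketch — `choose_blob_hasher`, `hash_bytes`, and `hash_uint32_key` are illustrative names, not dynahash's internals:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

typedef uint32_t (*hash_fn) (const void *key, size_t keysize);

/* General hasher for arbitrary binary keys (FNV-1a). */
static uint32_t
hash_bytes(const void *key, size_t keysize)
{
    const unsigned char *p = key;
    uint32_t    h = 2166136261u;

    while (keysize-- > 0)
        h = (h ^ *p++) * 16777619u;
    return h;
}

/* Cheaper hasher usable only when the key is exactly 4 bytes. */
static uint32_t
hash_uint32_key(const void *key, size_t keysize)
{
    uint32_t    v;

    (void) keysize;
    memcpy(&v, key, sizeof(v)); /* avoid alignment assumptions */
    return v * 2654435761u;     /* Knuth multiplicative hash */
}

/* What a HASH_BLOBS-style flag lets the create function do internally:
 * select the optimized path by key width, centrally and portably. */
static hash_fn
choose_blob_hasher(size_t keysize)
{
    return (keysize == sizeof(uint32_t)) ? hash_uint32_key : hash_bytes;
}
```

Centralizing the choice is what removes the caller-side footgun: a caller who specifies a generic byte hasher for a 4-byte key merely misses an optimization, but can no longer pick a hasher that is wrong for the key width.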
|
|
|
HASH_ELEM | HASH_BLOBS | HASH_CONTEXT);
|
2011-11-01 19:07:29 +01:00
|
|
|
|
|
|
|
/*
|
|
|
|
* The basic idea here is that a request can be skipped if it's followed
|
|
|
|
* by a later, identical request. It might seem more sensible to work
|
|
|
|
* backwards from the end of the queue and check whether a request is
|
|
|
|
* *preceded* by an earlier, identical request, in the hopes of doing less
|
|
|
|
* copying. But that might change the semantics, if there's an
|
|
|
|
* intervening FORGET_RELATION_FSYNC or FORGET_DATABASE_FSYNC request, so
|
|
|
|
* we do it this way. It would be possible to be even smarter if we made
|
|
|
|
* the code below understand the specific semantics of such requests (it
|
|
|
|
* could blow away preceding entries that would end up being canceled
|
|
|
|
* anyhow), but it's not clear that the extra complexity would buy us
|
|
|
|
* anything.
|
|
|
|
*/
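The forward-scan strategy in the comment above can be modeled on plain integers. This toy sketch (hypothetical names; the real code hashes whole request structs rather than small ints) keeps only the last occurrence of each value while preserving order:

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of the compaction pass: an entry is skippable if an identical
 * entry appears LATER, so the last occurrence of each value survives.
 * Values must be < 256 and n <= 64 in this demo. */
static int
compact_keep_last(int *reqs, int n)
{
    bool        skip[64] = {false};
    int         last_slot[256];     /* value -> most recent slot */
    int         i,
                out = 0;

    for (i = 0; i < 256; i++)
        last_slot[i] = -1;

    /* Forward scan: seeing a value again marks its previous slot skippable. */
    for (i = 0; i < n; i++)
    {
        if (last_slot[reqs[i]] >= 0)
            skip[last_slot[reqs[i]]] = true;
        last_slot[reqs[i]] = i;
    }

    /* Sliding compaction of the surviving slots, preserving order. */
    for (i = 0; i < n; i++)
        if (!skip[i])
            reqs[out++] = reqs[i];
    return out;
}
```

Keeping the last occurrence (rather than the first) is what makes the transformation safe in the presence of order-sensitive entries such as FORGET_RELATION_FSYNC requests.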
|
2012-07-17 22:55:39 +02:00
|
|
|
for (n = 0; n < CheckpointerShmem->num_requests; n++)
|
2011-11-01 19:07:29 +01:00
|
|
|
{
|
2012-05-09 15:23:45 +02:00
|
|
|
CheckpointerRequest *request;
|
2012-05-10 06:01:10 +02:00
|
|
|
struct CheckpointerSlotMapping *slotmap;
|
2011-11-01 19:07:29 +01:00
|
|
|
bool found;
|
|
|
|
|
2012-07-17 22:55:39 +02:00
|
|
|
/*
|
|
|
|
* We use the request struct directly as a hashtable key. This
|
|
|
|
* assumes that any padding bytes in the structs are consistently the
|
|
|
|
* same, which should be okay because we zeroed them in
|
|
|
|
* CheckpointerShmemInit. Note also that RelFileNode had better
|
|
|
|
* contain no pad bytes.
|
|
|
|
*/
|
2012-05-09 15:23:45 +02:00
|
|
|
request = &CheckpointerShmem->requests[n];
|
2011-11-01 19:07:29 +01:00
|
|
|
slotmap = hash_search(htab, request, HASH_ENTER, &found);
|
|
|
|
if (found)
|
|
|
|
{
|
2012-07-17 22:55:39 +02:00
|
|
|
/* Duplicate, so mark the previous occurrence as skippable */
|
2011-11-01 19:07:29 +01:00
|
|
|
skip_slot[slotmap->slot] = true;
|
2012-07-17 22:55:39 +02:00
			num_skipped++;
		}
		/* Remember slot containing latest occurrence of this request value */
		slotmap->slot = n;
	}

	/* Done with the hash table. */
	hash_destroy(htab);

	/* If no duplicates, we're out of luck. */
	if (!num_skipped)
	{
		pfree(skip_slot);
		return false;
	}

	/* We found some duplicates; remove them. */
	preserve_count = 0;
	for (n = 0; n < CheckpointerShmem->num_requests; n++)
	{
		if (skip_slot[n])
			continue;
		CheckpointerShmem->requests[preserve_count++] = CheckpointerShmem->requests[n];
	}
	ereport(DEBUG1,
			(errmsg("compacted fsync request queue from %d entries to %d entries",
					CheckpointerShmem->num_requests, preserve_count)));
	CheckpointerShmem->num_requests = preserve_count;

	/* Cleanup. */
	pfree(skip_slot);
	return true;
}
/*
 * AbsorbFsyncRequests
 *		Retrieve queued fsync requests and pass them to local smgr.
 *
 * This is exported because it must be called during CreateCheckPoint;
 * we have to be sure we have accepted all pending requests just before
 * we start fsync'ing. Since CreateCheckPoint sometimes runs in
 * non-checkpointer processes, do nothing if not checkpointer.
 */
void
AbsorbFsyncRequests(void)
{
	CheckpointerRequest *requests = NULL;
	CheckpointerRequest *request;
	int			n;
Fix management of pendingOpsTable in auxiliary processes.

mdinit() was misusing IsBootstrapProcessingMode() to decide whether to
create an fsync pending-operations table in the current process. This led
to creating a table not only in the startup and checkpointer processes as
intended, but also in the bgwriter process, not to mention other auxiliary
processes such as walwriter and walreceiver. Creation of the table in the
bgwriter is fatal, because it absorbs fsync requests that should have gone
to the checkpointer; instead they just sit in bgwriter local memory and are
never acted on. So writes performed by the bgwriter were not being fsync'd,
which could result in data loss after an OS crash. I think there is no
live bug with respect to walwriter and walreceiver because those never
perform any writes of shared buffers; but the potential is there for
future breakage in those processes too.

To fix, make AuxiliaryProcessMain() export the current process's
AuxProcType as a global variable, and then make mdinit() test directly for
the types of aux process that should have a pendingOpsTable. Having done
that, we might as well also get rid of the random bool flags such as
am_walreceiver that some of the aux processes had grown. (Note that we
could not have fixed the bug by examining those variables in mdinit(),
because it's called from BaseInit() which is run by AuxiliaryProcessMain()
before entering any of the process-type-specific code.)

Back-patch to 9.2, where the problem was introduced by the split-up of
bgwriter and checkpointer processes. The bogus pendingOpsTable exists
in walwriter and walreceiver processes in earlier branches, but absent
any evidence that it causes actual problems there, I'll leave the older
branches alone.
2012-07-18 21:28:10 +02:00
	if (!AmCheckpointerProcess())
		return;

	LWLockAcquire(CheckpointerCommLock, LW_EXCLUSIVE);
Reduce idle power consumption of walwriter and checkpointer processes.

This patch modifies the walwriter process so that, when it has not found
anything useful to do for many consecutive wakeup cycles, it extends its
sleep time to reduce the server's idle power consumption. It reverts to
normal as soon as it's done any successful flushes. It's still true that
during any async commit, backends check for completed, unflushed pages of
WAL and signal the walwriter if there are any; so that in practice the
walwriter can get awakened and returned to normal operation sooner than the
sleep time might suggest.

Also, improve the checkpointer so that it uses a latch and a computed delay
time to not wake up at all except when it has something to do, replacing a
previous hardcoded 0.5 sec wakeup cycle. This also is primarily useful for
reducing the server's power consumption when idle.

In passing, get rid of the dedicated latch for signaling the walwriter in
favor of using its procLatch, since that comports better with possible
generic signal handlers using that latch. Also, fix a pre-existing bug
with failure to save/restore errno in walwriter's signal handlers.

Peter Geoghegan, somewhat simplified by Tom
2012-05-09 02:03:26 +02:00
	/* Transfer stats counts into pending pgstats message */
	BgWriterStats.m_buf_written_backend += CheckpointerShmem->num_backend_writes;
	BgWriterStats.m_buf_fsync_backend += CheckpointerShmem->num_backend_fsync;

	CheckpointerShmem->num_backend_writes = 0;
	CheckpointerShmem->num_backend_fsync = 0;
	/*
	 * We try to avoid holding the lock for a long time by copying the request
	 * array, and processing the requests after releasing the lock.
	 *
	 * Once we have cleared the requests from shared memory, we have to PANIC
	 * if we then fail to absorb them (eg, because our hashtable runs out of
	 * memory). This is because the system cannot run safely if we are unable
	 * to fsync what we have been told to fsync. Fortunately, the hashtable
	 * is so small that the problem is quite unlikely to arise in practice.
	 */
	n = CheckpointerShmem->num_requests;
	if (n > 0)
	{
		requests = (CheckpointerRequest *) palloc(n * sizeof(CheckpointerRequest));
		memcpy(requests, CheckpointerShmem->requests, n * sizeof(CheckpointerRequest));
	}

	START_CRIT_SECTION();

	CheckpointerShmem->num_requests = 0;

	LWLockRelease(CheckpointerCommLock);

	for (request = requests; n > 0; request++, n--)
		RememberFsyncRequest(request->rnode, request->forknum, request->segno);

	END_CRIT_SECTION();

	if (requests)
		pfree(requests);
}
/*
 * Update any shared memory configurations based on config parameters
 */
static void
UpdateSharedMemoryConfig(void)
{
	/* update global shmem state for sync rep */
	SyncRepUpdateSyncStandbysDefined();

	/*
	 * If full_page_writes has been changed by SIGHUP, we update it in shared
	 * memory and write an XLOG_FPW_CHANGE record.
	 */
	UpdateFullPageWrites();

	elog(DEBUG2, "checkpointer updated shared memory configuration values");
}
/*
 * FirstCallSinceLastCheckpoint allows a process to take an action once
 * per checkpoint cycle by asynchronously checking for checkpoint completion.
 */
bool
FirstCallSinceLastCheckpoint(void)
{
	static int	ckpt_done = 0;
	int			new_done;
	bool		FirstCall = false;

	SpinLockAcquire(&CheckpointerShmem->ckpt_lck);
	new_done = CheckpointerShmem->ckpt_done;
	SpinLockRelease(&CheckpointerShmem->ckpt_lck);

	if (new_done != ckpt_done)
		FirstCall = true;

	ckpt_done = new_done;

	return FirstCall;
}