/*-------------------------------------------------------------------------
 *
 * walsender.h
 *	  Exports from replication/walsender.c.
 *
 * Portions Copyright (c) 2010-2020, PostgreSQL Global Development Group
 *
 * src/include/replication/walsender.h
 *
 *-------------------------------------------------------------------------
 */
#ifndef _WALSENDER_H
#define _WALSENDER_H

#include <signal.h>

/*
 * What to do with the snapshot in a CREATE_REPLICATION_SLOT command.
 */
typedef enum
{
	CRS_EXPORT_SNAPSHOT,		/* export the snapshot for use by other sessions */
	CRS_NOEXPORT_SNAPSHOT,		/* discard the snapshot */
	CRS_USE_SNAPSHOT			/* use the snapshot in the current transaction */
} CRSSnapshotAction;

/* global state */
extern bool am_walsender;
extern bool am_cascading_walsender;
extern bool am_db_walsender;
extern bool wake_wal_senders;

/* user-settable parameters */
extern int	max_wal_senders;
extern int	wal_sender_timeout;
extern bool log_replication_commands;

extern void InitWalSender(void);
extern bool exec_replication_command(const char *query_string);
extern void WalSndErrorCleanup(void);
extern void WalSndSignals(void);
extern Size WalSndShmemSize(void);
extern void WalSndShmemInit(void);
extern void WalSndWakeup(void);
extern void WalSndInitStopping(void);
extern void WalSndWaitStopping(void);
extern void HandleWalSndInitStopping(void);
extern void WalSndRqstFileReload(void);

/*
 * Remember that we want to wake up walsenders later
 *
 * This is separated from doing the actual wakeup because the writeout is
 * done while holding contended locks.
 */
#define WalSndWakeupRequest() \
	do { wake_wal_senders = true; } while (0)

/*
 * Wake up walsenders if there is work to be done.
 */
#define WalSndWakeupProcessRequests() \
	do \
	{ \
		if (wake_wal_senders) \
		{ \
			wake_wal_senders = false; \
			if (max_wal_senders > 0) \
				WalSndWakeup(); \
		} \
	} while (0)
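
/*
 * Illustrative call pattern (a sketch for readers of this header, not code
 * that belongs in it): a caller that writes out WAL while holding contended
 * locks only records the need for a wakeup, and performs the actual wakeup
 * after the locks have been released:
 *
 *		LWLockAcquire(WALWriteLock, LW_EXCLUSIVE);
 *		... write out WAL ...
 *		WalSndWakeupRequest();
 *		LWLockRelease(WALWriteLock);
 *		WalSndWakeupProcessRequests();
 */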

#endif							/* _WALSENDER_H */