/*-------------------------------------------------------------------------
 *
 * walreceiver.c
 *
 * The WAL receiver process (walreceiver) is new as of Postgres 9.0. It
 * is the process in the standby server that takes charge of receiving
 * XLOG records from a primary server during streaming replication.
 *
 * When the startup process determines that it's time to start streaming,
 * it instructs postmaster to start walreceiver. Walreceiver first connects
 * to the primary server (it will be served by a walsender process
 * in the primary server), and then keeps receiving XLOG records and
 * writing them to the disk as long as the connection is alive. As XLOG
 * records are received and flushed to disk, it updates the
 * WalRcv->receivedUpto variable in shared memory, to inform the startup
 * process of how far it can proceed with XLOG replay.
 *
 * If the primary server ends streaming, but doesn't disconnect, walreceiver
 * goes into "waiting" mode, and waits for the startup process to give new
 * instructions. The startup process will treat that the same as
 * disconnection, and will rescan the archive/pg_wal directory. But when the
 * startup process wants to try streaming replication again, it will just
 * nudge the existing walreceiver process that's waiting, instead of launching
 * a new one.
 *
 * Normal termination is by SIGTERM, which instructs the walreceiver to
 * exit(0). Emergency termination is by SIGQUIT; like any postmaster child
 * process, the walreceiver will simply abort and exit on SIGQUIT. A close
 * of the connection and a FATAL error are treated not as a crash but as
 * normal operation.
 *
 * This file contains the server-facing parts of walreceiver. The libpq-
 * specific parts are in the libpqwalreceiver module. It's loaded
 * dynamically to avoid linking the server with libpq.
 *
 * Portions Copyright (c) 2010-2019, PostgreSQL Global Development Group
 *
 *
 * IDENTIFICATION
 *	  src/backend/replication/walreceiver.c
 *
 *-------------------------------------------------------------------------
 */
#include "postgres.h"

#include <signal.h>
#include <unistd.h>

#include "access/htup_details.h"
#include "access/timeline.h"
#include "access/transam.h"
#include "access/xlog_internal.h"
#include "catalog/pg_authid.h"
#include "catalog/pg_type.h"
#include "common/ip.h"
#include "funcapi.h"
#include "libpq/pqformat.h"
#include "libpq/pqsignal.h"
#include "miscadmin.h"
#include "pgstat.h"
#include "replication/walreceiver.h"
#include "replication/walsender.h"
#include "storage/ipc.h"
#include "storage/pmsignal.h"
#include "storage/procarray.h"
#include "utils/builtins.h"
#include "utils/guc.h"
#include "utils/pg_lsn.h"
#include "utils/ps_status.h"
#include "utils/resowner.h"
#include "utils/timestamp.h"

/* GUC variables */
int			wal_receiver_status_interval;
int			wal_receiver_timeout;
bool		hot_standby_feedback;

/* libpqwalreceiver connection */
static WalReceiverConn *wrconn = NULL;
WalReceiverFunctionsType *WalReceiverFunctions = NULL;

#define NAPTIME_PER_CYCLE 100	/* max sleep time between cycles (100ms) */

/*
 * These variables are used similarly to openLogFile/SegNo/Off,
 * but for walreceiver to write the XLOG. recvFileTLI is the TimeLineID
 * corresponding to the filename of recvFile.
 */
static int	recvFile = -1;
static TimeLineID recvFileTLI = 0;
static XLogSegNo recvSegNo = 0;
static uint32 recvOff = 0;

/*
 * Flags set by interrupt handlers of walreceiver for later service in the
 * main loop.
 */
static volatile sig_atomic_t got_SIGHUP = false;
static volatile sig_atomic_t got_SIGTERM = false;

/*
 * LogstreamResult indicates the byte positions that we have already
 * written/fsynced.
 */
static struct
{
	XLogRecPtr	Write;			/* last byte + 1 written out in the standby */
	XLogRecPtr	Flush;			/* last byte + 1 flushed in the standby */
} LogstreamResult;

static StringInfoData reply_message;
static StringInfoData incoming_message;

/*
 * About SIGTERM handling:
 *
 * We can't just exit(1) within SIGTERM signal handler, because the signal
 * might arrive in the middle of some critical operation, like while we're
 * holding a spinlock. We also can't just set a flag in signal handler and
 * check it in the main loop, because we perform some blocking operations
 * like libpqrcv_PQexec(), which can take a long time to finish.
 *
 * We use a combined approach: When WalRcvImmediateInterruptOK is true, it's
 * safe for the signal handler to elog(FATAL) immediately. Otherwise it just
 * sets got_SIGTERM flag, which is checked in the main loop when convenient.
 *
 * This is very much like what regular backends do with ImmediateInterruptOK,
 * ProcessInterrupts() etc.
 */
static volatile bool WalRcvImmediateInterruptOK = false;

/* Prototypes for private functions */
static void ProcessWalRcvInterrupts(void);
static void EnableWalRcvImmediateExit(void);
static void DisableWalRcvImmediateExit(void);
static void WalRcvFetchTimeLineHistoryFiles(TimeLineID first, TimeLineID last);
static void WalRcvWaitForStartPosition(XLogRecPtr *startpoint, TimeLineID *startpointTLI);
static void WalRcvDie(int code, Datum arg);
static void XLogWalRcvProcessMsg(unsigned char type, char *buf, Size len);
static void XLogWalRcvWrite(char *buf, Size nbytes, XLogRecPtr recptr);
static void XLogWalRcvFlush(bool dying);
static void XLogWalRcvSendReply(bool force, bool requestReply);
static void XLogWalRcvSendHSFeedback(bool immed);
static void ProcessWalSndrMessage(XLogRecPtr walEnd, TimestampTz sendTime);

/* Signal handlers */
static void WalRcvSigHupHandler(SIGNAL_ARGS);
static void WalRcvSigUsr1Handler(SIGNAL_ARGS);
static void WalRcvShutdownHandler(SIGNAL_ARGS);
static void WalRcvQuickDieHandler(SIGNAL_ARGS);


static void
ProcessWalRcvInterrupts(void)
{
	/*
	 * Although walreceiver interrupt handling doesn't use the same scheme as
	 * regular backends, call CHECK_FOR_INTERRUPTS() to make sure we receive
	 * any incoming signals on Win32.
	 */
	CHECK_FOR_INTERRUPTS();

	if (got_SIGTERM)
	{
		WalRcvImmediateInterruptOK = false;
		ereport(FATAL,
				(errcode(ERRCODE_ADMIN_SHUTDOWN),
				 errmsg("terminating walreceiver process due to administrator command")));
	}
}

static void
EnableWalRcvImmediateExit(void)
{
	WalRcvImmediateInterruptOK = true;
	ProcessWalRcvInterrupts();
}

static void
DisableWalRcvImmediateExit(void)
{
	WalRcvImmediateInterruptOK = false;
	ProcessWalRcvInterrupts();
}

/* Main entry point for walreceiver process */
void
WalReceiverMain(void)
{
	char		conninfo[MAXCONNINFO];
	char	   *tmp_conninfo;
	char		slotname[NAMEDATALEN];
	XLogRecPtr	startpoint;
	TimeLineID	startpointTLI;
	TimeLineID	primaryTLI;
	bool		first_stream;
	WalRcvData *walrcv = WalRcv;
	TimestampTz last_recv_timestamp;
	TimestampTz now;
	bool		ping_sent;
	char	   *err;
	char	   *sender_host = NULL;
	int			sender_port = 0;

	/*
	 * WalRcv should be set up already (if we are a backend, we inherit this
	 * by fork() or EXEC_BACKEND mechanism from the postmaster).
	 */
	Assert(walrcv != NULL);

	now = GetCurrentTimestamp();

	/*
	 * Mark walreceiver as running in shared memory.
	 *
	 * Do this as early as possible, so that if we fail later on, we'll set
	 * state to STOPPED. If we die before this, the startup process will keep
	 * waiting for us to start up, until it times out.
	 */
	SpinLockAcquire(&walrcv->mutex);
	Assert(walrcv->pid == 0);
	switch (walrcv->walRcvState)
|
Make standby server continuously retry restoring the next WAL segment with
restore_command, if the connection to the primary server is lost. This
ensures that the standby can recover automatically, if the connection is
lost for a long time and standby falls behind so much that the required
WAL segments have been archived and deleted in the master.
This also makes standby_mode useful without streaming replication; the
server will keep retrying restore_command every few seconds until the
trigger file is found. That's the same basic functionality pg_standby
offers, but without the bells and whistles.
To implement that, refactor the ReadRecord/FetchRecord functions. The
FetchRecord() function introduced in the original streaming replication
patch is removed, and all the retry logic is now in a new function called
XLogReadPage(). XLogReadPage() is now responsible for executing
restore_command, launching walreceiver, and waiting for new WAL to arrive
from primary, as required.
This also changes the life cycle of walreceiver. When launched, it now only
tries to connect to the master once, and exits if the connection fails, or
is lost during streaming for any reason. The startup process detects the
death, and re-launches walreceiver if necessary.
2010-01-27 16:27:51 +01:00
|
|
|
{
|
|
|
|
case WALRCV_STOPPING:
|
|
|
|
/* If we've already been requested to stop, don't start up. */
|
|
|
|
walrcv->walRcvState = WALRCV_STOPPED;
|
|
|
|
/* fall through */
|
|
|
|
|
|
|
|
case WALRCV_STOPPED:
|
|
|
|
SpinLockRelease(&walrcv->mutex);
|
|
|
|
proc_exit(1);
|
|
|
|
break;
|
|
|
|
|
|
|
|
case WALRCV_STARTING:
|
|
|
|
/* The usual case */
|
|
|
|
break;
|
|
|
|
|
Allow a streaming replication standby to follow a timeline switch.
Before this patch, streaming replication would refuse to start replicating
if the timeline in the primary doesn't exactly match the standby. The
situation where it doesn't match is when you have a master, and two
standbys, and you promote one of the standbys to become new master.
Promoting bumps up the timeline ID, and after that bump, the other standby
would refuse to continue.
There's significantly more timeline related logic in streaming replication
now. First of all, when a standby connects to primary, it will ask the
primary for any timeline history files that are missing from the standby.
The missing files are sent using a new replication command TIMELINE_HISTORY,
and stored in standby's pg_xlog directory. Using the timeline history files,
the standby can follow the latest timeline present in the primary
(recovery_target_timeline='latest'), just as it can follow new timelines
appearing in an archive directory.
START_REPLICATION now takes a TIMELINE parameter, to specify exactly which
timeline to stream WAL from. This allows the standby to request the primary
to send over WAL that precedes the promotion. The replication protocol is
changed slightly (in a backwards-compatible way although there's little hope
of streaming replication working across major versions anyway), to allow
replication to stop when the end of timeline reached, putting the walsender
back into accepting a replication command.
Many thanks to Amit Kapila for testing and reviewing various versions of
this patch.
2012-12-13 18:00:00 +01:00
|
|
|
case WALRCV_WAITING:
|
|
|
|
case WALRCV_STREAMING:
|
|
|
|
case WALRCV_RESTARTING:
|
|
|
|
default:
|
Make the standby server continuously retry restoring the next WAL segment with
restore_command if the connection to the primary server is lost. This
ensures that the standby can recover automatically if the connection is
lost for a long time and the standby falls behind so much that the required
WAL segments have been archived and deleted in the master.
This also makes standby_mode useful without streaming replication; the
server will keep retrying restore_command every few seconds until the
trigger file is found. That's the same basic functionality pg_standby
offers, but without the bells and whistles.
To implement that, refactor the ReadRecord/FetchRecord functions. The
FetchRecord() function introduced in the original streaming replication
patch is removed, and all the retry logic is now in a new function called
XLogReadPage(). XLogReadPage() is now responsible for executing
restore_command, launching walreceiver, and waiting for new WAL to arrive
from the primary, as required.
This also changes the life cycle of walreceiver. When launched, it now only
tries to connect to the master once, and exits if the connection fails or
is lost during streaming for any reason. The startup process detects the
death and re-launches walreceiver if necessary.
2010-01-27 16:27:51 +01:00
|
|
|
/* Shouldn't happen */
|
2017-10-03 14:58:25 +02:00
|
|
|
SpinLockRelease(&walrcv->mutex);
|
2010-01-27 16:27:51 +01:00
|
|
|
elog(PANIC, "walreceiver still running according to shared memory state");
|
|
|
|
}
|
|
|
|
/* Advertise our PID so that the startup process can kill us */
|
|
|
|
walrcv->pid = MyProcPid;
|
2012-12-13 18:00:00 +01:00
|
|
|
walrcv->walRcvState = WALRCV_STREAMING;
|
2010-01-27 16:27:51 +01:00
|
|
|
|
|
|
|
/* Fetch information required to start streaming */
|
2016-07-01 19:53:46 +02:00
|
|
|
walrcv->ready_to_display = false;
|
2010-01-27 16:27:51 +01:00
|
|
|
strlcpy(conninfo, (char *) walrcv->conninfo, MAXCONNINFO);
|
2014-02-01 04:45:17 +01:00
|
|
|
strlcpy(slotname, (char *) walrcv->slotname, NAMEDATALEN);
|
2011-03-01 19:46:57 +01:00
|
|
|
startpoint = walrcv->receiveStart;
|
2012-12-13 18:00:00 +01:00
|
|
|
startpointTLI = walrcv->receiveStartTLI;
|
2011-12-31 14:30:26 +01:00
|
|
|
|
|
|
|
/* Initialise to a sane value */
|
2017-10-03 14:58:25 +02:00
|
|
|
walrcv->lastMsgSendTime =
|
|
|
|
walrcv->lastMsgReceiptTime = walrcv->latestWalEndTime = now;
|
2011-12-31 14:30:26 +01:00
|
|
|
|
2017-10-03 20:00:56 +02:00
|
|
|
/* Report the latch to use to awaken this process */
|
|
|
|
walrcv->latch = &MyProc->procLatch;
|
|
|
|
|
2010-01-27 16:27:51 +01:00
|
|
|
SpinLockRelease(&walrcv->mutex);
|
2010-01-15 10:19:10 +01:00
|
|
|
|
2010-01-27 16:27:51 +01:00
|
|
|
/* Arrange to clean up at walreceiver exit */
|
|
|
|
on_shmem_exit(WalRcvDie, 0);
|
2010-01-15 10:19:10 +01:00
|
|
|
|
|
|
|
/* Properly accept or ignore signals the postmaster might send us */
|
Phase 2 of pgindent updates.
Change pg_bsd_indent to follow upstream rules for placement of comments
to the right of code, and remove pgindent hack that caused comments
following #endif to not obey the general rule.
Commit e3860ffa4dd0dad0dd9eea4be9cc1412373a8c89 wasn't actually using
the published version of pg_bsd_indent, but a hacked-up version that
tried to minimize the amount of movement of comments to the right of
code. The situation of interest is where such a comment has to be
moved to the right of its default placement at column 33 because there's
code there. BSD indent has always moved right in units of tab stops
in such cases --- but in the previous incarnation, indent was working
in 8-space tab stops, while now it knows we use 4-space tabs. So the
net result is that in about half the cases, such comments are placed
one tab stop left of before. This is better all around: it leaves
more room on the line for comment text, and it means that in such
cases the comment uniformly starts at the next 4-space tab stop after
the code, rather than sometimes one and sometimes two tabs after.
Also, ensure that comments following #endif are indented the same
as comments following other preprocessor commands such as #else.
That inconsistency turns out to have been self-inflicted damage
from a poorly-thought-through post-indent "fixup" in pgindent.
This patch is much less interesting than the first round of indent
changes, but also bulkier, so I thought it best to separate the effects.
Discussion: https://postgr.es/m/E1dAmxK-0006EE-1r@gemulon.postgresql.org
Discussion: https://postgr.es/m/30527.1495162840@sss.pgh.pa.us
2017-06-21 21:18:54 +02:00
|
|
|
pqsignal(SIGHUP, WalRcvSigHupHandler); /* set flag to read config file */
|
2010-01-15 10:19:10 +01:00
|
|
|
pqsignal(SIGINT, SIG_IGN);
|
|
|
|
pqsignal(SIGTERM, WalRcvShutdownHandler); /* request shutdown */
|
|
|
|
pqsignal(SIGQUIT, WalRcvQuickDieHandler); /* hard crash time */
|
|
|
|
pqsignal(SIGALRM, SIG_IGN);
|
|
|
|
pqsignal(SIGPIPE, SIG_IGN);
|
2012-12-13 18:00:00 +01:00
|
|
|
pqsignal(SIGUSR1, WalRcvSigUsr1Handler);
|
2010-01-15 10:19:10 +01:00
|
|
|
pqsignal(SIGUSR2, SIG_IGN);
|
|
|
|
|
|
|
|
/* Reset some signals that are accepted by postmaster but not here */
|
|
|
|
pqsignal(SIGCHLD, SIG_DFL);
|
|
|
|
|
|
|
|
/* We allow SIGQUIT (quickdie) at all times */
|
|
|
|
sigdelset(&BlockSig, SIGQUIT);
|
|
|
|
|
2010-01-27 16:27:51 +01:00
|
|
|
/* Load the libpq-specific functions */
|
|
|
|
load_file("libpqwalreceiver", false);
|
2016-11-30 18:00:00 +01:00
|
|
|
if (WalReceiverFunctions == NULL)
|
2010-01-27 16:27:51 +01:00
|
|
|
elog(ERROR, "libpqwalreceiver didn't initialize correctly");
|
|
|
|
|
2010-01-15 10:19:10 +01:00
|
|
|
/* Unblock signals (they were blocked when the postmaster forked us) */
|
|
|
|
PG_SETMASK(&UnBlockSig);
|
|
|
|
|
2010-01-20 10:16:24 +01:00
|
|
|
/* Establish the connection to the primary for XLOG streaming */
|
|
|
|
EnableWalRcvImmediateExit();
|
2017-01-19 18:00:00 +01:00
|
|
|
wrconn = walrcv_connect(conninfo, false, "walreceiver", &err);
|
|
|
|
if (!wrconn)
|
|
|
|
ereport(ERROR,
|
|
|
|
(errmsg("could not connect to the primary server: %s", err)));
|
2010-01-20 10:16:24 +01:00
|
|
|
DisableWalRcvImmediateExit();
|
2010-01-15 10:19:10 +01:00
|
|
|
|
2016-06-29 22:57:17 +02:00
|
|
|
/*
|
|
|
|
* Save user-visible connection string. This clobbers the original
|
2018-03-31 00:51:22 +02:00
|
|
|
* conninfo, for security. Also save host and port of the sender server
|
|
|
|
* this walreceiver is connected to.
|
2016-06-29 22:57:17 +02:00
|
|
|
*/
|
2016-11-30 18:00:00 +01:00
|
|
|
tmp_conninfo = walrcv_get_conninfo(wrconn);
|
2018-03-31 00:51:22 +02:00
|
|
|
walrcv_get_senderinfo(wrconn, &sender_host, &sender_port);
|
2016-06-29 22:57:17 +02:00
|
|
|
SpinLockAcquire(&walrcv->mutex);
|
|
|
|
memset(walrcv->conninfo, 0, MAXCONNINFO);
|
|
|
|
if (tmp_conninfo)
|
|
|
|
strlcpy((char *) walrcv->conninfo, tmp_conninfo, MAXCONNINFO);
|
2018-03-31 00:51:22 +02:00
|
|
|
|
|
|
|
memset(walrcv->sender_host, 0, NI_MAXHOST);
|
|
|
|
if (sender_host)
|
|
|
|
strlcpy((char *) walrcv->sender_host, sender_host, NI_MAXHOST);
|
|
|
|
|
|
|
|
walrcv->sender_port = sender_port;
|
2016-06-29 22:57:17 +02:00
|
|
|
walrcv->ready_to_display = true;
|
|
|
|
SpinLockRelease(&walrcv->mutex);
|
|
|
|
|
2017-10-03 14:58:25 +02:00
|
|
|
if (tmp_conninfo)
|
|
|
|
pfree(tmp_conninfo);
|
|
|
|
|
2018-03-31 00:51:22 +02:00
|
|
|
if (sender_host)
|
|
|
|
pfree(sender_host);
|
|
|
|
|
2012-12-13 18:00:00 +01:00
|
|
|
first_stream = true;
|
2010-01-15 10:19:10 +01:00
|
|
|
for (;;)
|
|
|
|
{
|
2016-11-30 18:00:00 +01:00
|
|
|
char *primary_sysid;
|
|
|
|
char standby_sysid[32];
|
2017-01-19 18:00:00 +01:00
|
|
|
int server_version;
|
|
|
|
WalRcvStreamOptions options;
|
2016-11-30 18:00:00 +01:00
|
|
|
|
2010-01-15 10:19:10 +01:00
|
|
|
/*
|
2012-12-13 18:00:00 +01:00
|
|
|
* Check that we're connected to a valid server using the
|
2017-01-19 18:00:00 +01:00
|
|
|
* IDENTIFY_SYSTEM replication command.
|
2010-01-15 10:19:10 +01:00
|
|
|
*/
|
2012-12-13 18:00:00 +01:00
|
|
|
EnableWalRcvImmediateExit();
|
2017-01-19 18:00:00 +01:00
|
|
|
primary_sysid = walrcv_identify_system(wrconn, &primaryTLI,
|
|
|
|
&server_version);
|
2016-11-30 18:00:00 +01:00
|
|
|
|
|
|
|
snprintf(standby_sysid, sizeof(standby_sysid), UINT64_FORMAT,
|
|
|
|
GetSystemIdentifier());
|
|
|
|
if (strcmp(primary_sysid, standby_sysid) != 0)
|
|
|
|
{
|
|
|
|
ereport(ERROR,
|
|
|
|
(errmsg("database system identifier differs between the primary and standby"),
|
|
|
|
errdetail("The primary's identifier is %s, the standby's identifier is %s.",
|
|
|
|
primary_sysid, standby_sysid)));
|
|
|
|
}
|
2012-12-13 18:00:00 +01:00
|
|
|
DisableWalRcvImmediateExit();
|
2010-01-15 10:19:10 +01:00
|
|
|
|
|
|
|
/*
|
2012-12-13 18:00:00 +01:00
|
|
|
* Confirm that the current timeline of the primary is the same or
|
|
|
|
* ahead of ours.
|
2010-01-15 10:19:10 +01:00
|
|
|
*/
|
2012-12-13 18:00:00 +01:00
|
|
|
if (primaryTLI < startpointTLI)
|
|
|
|
ereport(ERROR,
|
|
|
|
(errmsg("highest timeline %u of the primary is behind recovery timeline %u",
|
|
|
|
primaryTLI, startpointTLI)));
|
2010-01-15 10:19:10 +01:00
|
|
|
|
2012-12-13 18:00:00 +01:00
|
|
|
    /*
     * Get any missing history files. We do this always, even when we're
     * not interested in that timeline, so that if we're promoted to
     * become the master later on, we don't select the same timeline that
     * was already used in the current master. This isn't bullet-proof -
     * you'll need some external software to manage your cluster if you
     * need to ensure that a unique timeline id is chosen in every case,
     * but let's avoid the confusion of timeline id collisions where we
     * can.
     */
    WalRcvFetchTimeLineHistoryFiles(startpointTLI, primaryTLI);
    /*
     * Start streaming.
     *
     * We'll try to start at the requested starting point and timeline,
     * even if it's different from the server's latest timeline. In case
     * we've already reached the end of the old timeline, the server will
     * finish the streaming immediately, and we will go back to await
     * orders from the startup process. If recovery_target_timeline is
     * 'latest', the startup process will scan pg_wal and find the new
     * history file, bump recovery target timeline, and ask us to restart
     * on the new timeline.
     */
    options.logical = false;
    options.startpoint = startpoint;
    options.slotname = slotname[0] != '\0' ? slotname : NULL;
    options.proto.physical.startpointTLI = startpointTLI;
    ThisTimeLineID = startpointTLI;
    if (walrcv_startstreaming(wrconn, &options))
    {
        if (first_stream)
            ereport(LOG,
                    (errmsg("started streaming WAL from primary at %X/%X on timeline %u",
Phase 3 of pgindent updates.
Don't move parenthesized lines to the left, even if that means they
flow past the right margin.
By default, BSD indent lines up statement continuation lines that are
within parentheses so that they start just to the right of the preceding
left parenthesis. However, traditionally, if that resulted in the
continuation line extending to the right of the desired right margin,
then indent would push it left just far enough to not overrun the margin,
if it could do so without making the continuation line start to the left of
the current statement indent. That makes for a weird mix of indentations
unless one has been completely rigid about never violating the 80-column
limit.
This behavior has been pretty universally panned by Postgres developers.
Hence, disable it with indent's new -lpl switch, so that parenthesized
lines are always lined up with the preceding left paren.
This patch is much less interesting than the first round of indent
changes, but also bulkier, so I thought it best to separate the effects.
Discussion: https://postgr.es/m/E1dAmxK-0006EE-1r@gemulon.postgresql.org
Discussion: https://postgr.es/m/30527.1495162840@sss.pgh.pa.us
2017-06-21 21:35:54 +02:00
                            (uint32) (startpoint >> 32), (uint32) startpoint,
                            startpointTLI)));
        else
            ereport(LOG,
                    (errmsg("restarted WAL streaming at %X/%X on timeline %u",
                            (uint32) (startpoint >> 32), (uint32) startpoint,
                            startpointTLI)));
        first_stream = false;

        /* Initialize LogstreamResult and buffers for processing messages */
Follow TLI of last replayed record, not recovery target TLI, in walsenders.
Most of the time, the last replayed record comes from the recovery target
timeline, but there is a corner case where it makes a difference. When
the startup process scans for a new timeline, and decides to change recovery
target timeline, there is a window where the recovery target TLI has already
been bumped, but there are no WAL segments from the new timeline in pg_xlog
yet. For example, if we have just replayed up to point 0/30002D8, on
timeline 1, there is a WAL file called 000000010000000000000003 in pg_xlog
that contains the WAL up to that point. When recovery switches recovery
target timeline to 2, a walsender can immediately try to read WAL from
0/30002D8, from timeline 2, so it will try to open WAL file
000000020000000000000003. However, that doesn't exist yet - the startup
process hasn't copied that file from the archive yet nor has the walreceiver
streamed it yet, so walsender fails with error "requested WAL segment
000000020000000000000003 has already been removed". That's harmless, in that
the standby will try to reconnect later and by that time the segment is
already created, but error messages that should be ignored are not good.
To fix that, have walsender track the TLI of the last replayed record,
instead of the recovery target timeline. That way walsender will not try to
read anything from timeline 2, until the WAL segment has been created and at
least one record has been replayed from it. The recovery target timeline is
now xlog.c's internal affair, it doesn't need to be exposed in shared memory
anymore.
This fixes the error reported by Thom Brown. depesz also saw the same error
message, but I'm not sure if this fixes his scenario.
2012-12-20 13:23:31 +01:00
        LogstreamResult.Write = LogstreamResult.Flush = GetXLogReplayRecPtr(NULL);
        initStringInfo(&reply_message);
        initStringInfo(&incoming_message);

        /* Initialize the last recv timestamp */
        last_recv_timestamp = GetCurrentTimestamp();
        ping_sent = false;
        /* Loop until end-of-streaming or error */
        for (;;)
        {
            char       *buf;
            int         len;
            bool        endofwal = false;
            pgsocket    wait_fd = PGINVALID_SOCKET;
            int         rc;
            /*
             * Exit walreceiver if we're not in recovery. This should not
             * happen, but cross-check the status here.
             */
            if (!RecoveryInProgress())
                ereport(FATAL,
                        (errmsg("cannot continue WAL streaming, recovery has already ended")));

            /* Process any requests or signals received recently */
            ProcessWalRcvInterrupts();

            if (got_SIGHUP)
            {
                got_SIGHUP = false;
                ProcessConfigFile(PGC_SIGHUP);
                XLogWalRcvSendHSFeedback(true);
            }

            /* See if we can read data immediately */
            len = walrcv_receive(wrconn, &buf, &wait_fd);
            if (len != 0)
            {
                /*
                 * Process the received data, and any subsequent data we
                 * can read without blocking.
                 */
                for (;;)
                {
                    if (len > 0)
                    {
                        /*
                         * Something was received from master, so reset
                         * timeout
                         */
                        last_recv_timestamp = GetCurrentTimestamp();
                        ping_sent = false;
                        XLogWalRcvProcessMsg(buf[0], &buf[1], len - 1);
                    }
                    else if (len == 0)
                        break;
                    else if (len < 0)
                    {
                        ereport(LOG,
                                (errmsg("replication terminated by primary server"),
                                 errdetail("End of WAL reached on timeline %u at %X/%X.",
                                           startpointTLI,
                                           (uint32) (LogstreamResult.Write >> 32), (uint32) LogstreamResult.Write)));
Allow a streaming replication standby to follow a timeline switch.
Before this patch, streaming replication would refuse to start replicating
if the timeline in the primary doesn't exactly match the standby. The
situation where it doesn't match is when you have a master, and two
standbys, and you promote one of the standbys to become new master.
Promoting bumps up the timeline ID, and after that bump, the other standby
would refuse to continue.
There's significantly more timeline related logic in streaming replication
now. First of all, when a standby connects to primary, it will ask the
primary for any timeline history files that are missing from the standby.
The missing files are sent using a new replication command TIMELINE_HISTORY,
and stored in standby's pg_xlog directory. Using the timeline history files,
the standby can follow the latest timeline present in the primary
(recovery_target_timeline='latest'), just as it can follow new timelines
appearing in an archive directory.
START_REPLICATION now takes a TIMELINE parameter, to specify exactly which
timeline to stream WAL from. This allows the standby to request the primary
to send over WAL that precedes the promotion. The replication protocol is
changed slightly (in a backwards-compatible way although there's little hope
of streaming replication working across major versions anyway), to allow
replication to stop when the end of timeline reached, putting the walsender
back into accepting a replication command.
Many thanks to Amit Kapila for testing and reviewing various versions of
this patch.
2012-12-13 18:00:00 +01:00
|
|
|
endofwal = true;
|
|
|
|
break;
|
|
|
|
}
|
2016-11-30 18:00:00 +01:00
|
|
|
len = walrcv_receive(wrconn, &buf, &wait_fd);
|
Allow a streaming replication standby to follow a timeline switch.
Before this patch, streaming replication would refuse to start replicating
if the timeline in the primary doesn't exactly match the standby. The
situation where it doesn't match is when you have a master, and two
standbys, and you promote one of the standbys to become new master.
Promoting bumps up the timeline ID, and after that bump, the other standby
would refuse to continue.
There's significantly more timeline related logic in streaming replication
now. First of all, when a standby connects to primary, it will ask the
primary for any timeline history files that are missing from the standby.
The missing files are sent using a new replication command TIMELINE_HISTORY,
and stored in standby's pg_xlog directory. Using the timeline history files,
the standby can follow the latest timeline present in the primary
(recovery_target_timeline='latest'), just as it can follow new timelines
appearing in an archive directory.
START_REPLICATION now takes a TIMELINE parameter, to specify exactly which
timeline to stream WAL from. This allows the standby to request the primary
to send over WAL that precedes the promotion. The replication protocol is
changed slightly (in a backwards-compatible way although there's little hope
of streaming replication working across major versions anyway), to allow
replication to stop when the end of timeline reached, putting the walsender
back into accepting a replication command.
Many thanks to Amit Kapila for testing and reviewing various versions of
this patch.
2012-12-13 18:00:00 +01:00
|
|
|
}

					/* Let the master know that we received some data. */
					XLogWalRcvSendReply(false, false);

					/*
					 * If we've written some records, flush them to disk and
					 * let the startup process and primary server know about
					 * them.
					 */
					XLogWalRcvFlush(false);
				}

				/* Check if we need to exit the streaming loop. */
				if (endofwal)
					break;

				/*
				 * Ideally we would reuse a WaitEventSet object repeatedly
				 * here to avoid the overheads of WaitLatchOrSocket on epoll
				 * systems, but we can't be sure that libpq (or any other
				 * walreceiver implementation) has the same socket (even if
				 * the fd is the same number, it may have been closed and
				 * reopened since the last time).  In future, if there is a
				 * function for removing sockets from WaitEventSet, then we
				 * could add and remove just the socket each time, potentially
				 * avoiding some system calls.
				 */
				Assert(wait_fd != PGINVALID_SOCKET);
				rc = WaitLatchOrSocket(walrcv->latch,
									   WL_EXIT_ON_PM_DEATH | WL_SOCKET_READABLE |
									   WL_TIMEOUT | WL_LATCH_SET,
									   wait_fd,
									   NAPTIME_PER_CYCLE,
									   WAIT_EVENT_WAL_RECEIVER_MAIN);
				if (rc & WL_LATCH_SET)
				{
					ResetLatch(walrcv->latch);
					if (walrcv->force_reply)
					{
						/*
						 * The recovery process has asked us to send apply
						 * feedback now.  Make sure the flag is really set to
						 * false in shared memory before sending the reply, so
						 * we don't miss a new request for a reply.
						 */
						walrcv->force_reply = false;
						pg_memory_barrier();
						XLogWalRcvSendReply(true, false);
					}
				}
				if (rc & WL_TIMEOUT)
				{
					/*
					 * We didn't receive anything new. If we haven't heard
					 * anything from the server for more than
					 * wal_receiver_timeout / 2, ping the server. Also, if
					 * it's been longer than wal_receiver_status_interval
					 * since the last update we sent, send a status update to
					 * the master anyway, to report any progress in applying
					 * WAL.
					 */
					bool		requestReply = false;

					/*
					 * Check if time since last receive from the primary has
					 * reached the configured limit.
					 */
					if (wal_receiver_timeout > 0)
					{
						TimestampTz now = GetCurrentTimestamp();
						TimestampTz timeout;

						timeout =
							TimestampTzPlusMilliseconds(last_recv_timestamp,
														wal_receiver_timeout);

						if (now >= timeout)
							ereport(ERROR,
									(errmsg("terminating walreceiver due to timeout")));

						/*
						 * We didn't receive anything new for half of the
						 * receiver replication timeout. Ping the server.
						 */
						if (!ping_sent)
						{
							timeout = TimestampTzPlusMilliseconds(last_recv_timestamp,
																  (wal_receiver_timeout / 2));
							if (now >= timeout)
							{
								requestReply = true;
								ping_sent = true;
							}
						}
					}

					XLogWalRcvSendReply(requestReply, requestReply);
					XLogWalRcvSendHSFeedback(false);
				}
			}

			/*
			 * The backend finished streaming. Exit streaming COPY-mode from
			 * our side, too.
			 */
			EnableWalRcvImmediateExit();
			walrcv_endstreaming(wrconn, &primaryTLI);
			DisableWalRcvImmediateExit();

			/*
			 * If the server had switched to a new timeline that we didn't
			 * know about when we began streaming, fetch its timeline history
			 * file now.
			 */
			WalRcvFetchTimeLineHistoryFiles(startpointTLI, primaryTLI);
		}
		else
			ereport(LOG,
					(errmsg("primary server contains no more WAL on requested timeline %u",
							startpointTLI)));

		/*
		 * End of WAL reached on the requested timeline. Close the last
		 * segment, and wait for new orders from the startup process.
		 */
		if (recvFile >= 0)
		{
			char		xlogfname[MAXFNAMELEN];

			XLogWalRcvFlush(false);
			if (close(recvFile) != 0)
				ereport(PANIC,
						(errcode_for_file_access(),
						 errmsg("could not close log segment %s: %m",
								XLogFileNameP(recvFileTLI, recvSegNo))));

			/*
			 * Create .done file forcibly to prevent the streamed segment from
			 * being archived later.
			 */
			XLogFileName(xlogfname, recvFileTLI, recvSegNo, wal_segment_size);
			if (XLogArchiveMode != ARCHIVE_MODE_ALWAYS)
				XLogArchiveForceDone(xlogfname);
			else
				XLogArchiveNotify(xlogfname);
		}
|
Allow a streaming replication standby to follow a timeline switch.
Before this patch, streaming replication would refuse to start replicating
if the timeline in the primary doesn't exactly match the standby. The
situation where it doesn't match is when you have a master, and two
standbys, and you promote one of the standbys to become new master.
Promoting bumps up the timeline ID, and after that bump, the other standby
would refuse to continue.
There's significantly more timeline related logic in streaming replication
now. First of all, when a standby connects to primary, it will ask the
primary for any timeline history files that are missing from the standby.
The missing files are sent using a new replication command TIMELINE_HISTORY,
and stored in standby's pg_xlog directory. Using the timeline history files,
the standby can follow the latest timeline present in the primary
(recovery_target_timeline='latest'), just as it can follow new timelines
appearing in an archive directory.
START_REPLICATION now takes a TIMELINE parameter, to specify exactly which
timeline to stream WAL from. This allows the standby to request the primary
to send over WAL that precedes the promotion. The replication protocol is
changed slightly (in a backwards-compatible way although there's little hope
of streaming replication working across major versions anyway), to allow
replication to stop when the end of a timeline is reached, putting the walsender
back into a state where it accepts the next replication command.
Many thanks to Amit Kapila for testing and reviewing various versions of
this patch.
2012-12-13 18:00:00 +01:00
		recvFile = -1;

		elog(DEBUG1, "walreceiver ended streaming and awaits new instructions");
		WalRcvWaitForStartPosition(&startpoint, &startpointTLI);
	}
	/* not reached */
}

/*
 * Wait for startup process to set receiveStart and receiveStartTLI.
 */
static void
WalRcvWaitForStartPosition(XLogRecPtr *startpoint, TimeLineID *startpointTLI)
{
	WalRcvData *walrcv = WalRcv;
	int			state;

	SpinLockAcquire(&walrcv->mutex);
	state = walrcv->walRcvState;
	if (state != WALRCV_STREAMING)
	{
		SpinLockRelease(&walrcv->mutex);
		if (state == WALRCV_STOPPING)
			proc_exit(0);
		else
			elog(FATAL, "unexpected walreceiver state");
	}
	walrcv->walRcvState = WALRCV_WAITING;
	walrcv->receiveStart = InvalidXLogRecPtr;
	walrcv->receiveStartTLI = 0;
	SpinLockRelease(&walrcv->mutex);

	if (update_process_title)
		set_ps_display("idle", false);

	/*
	 * nudge startup process to notice that we've stopped streaming and are
	 * now waiting for instructions.
	 */
	WakeupRecovery();
	for (;;)
	{
		ResetLatch(walrcv->latch);
		ProcessWalRcvInterrupts();

		SpinLockAcquire(&walrcv->mutex);
		Assert(walrcv->walRcvState == WALRCV_RESTARTING ||
			   walrcv->walRcvState == WALRCV_WAITING ||
			   walrcv->walRcvState == WALRCV_STOPPING);
		if (walrcv->walRcvState == WALRCV_RESTARTING)
		{
			/* we don't expect primary_conninfo to change */
			*startpoint = walrcv->receiveStart;
			*startpointTLI = walrcv->receiveStartTLI;
			walrcv->walRcvState = WALRCV_STREAMING;
			SpinLockRelease(&walrcv->mutex);
			break;
		}
		if (walrcv->walRcvState == WALRCV_STOPPING)
		{
			/*
			 * We should've received SIGTERM if the startup process wants us
			 * to die, but might as well check it here too.
			 */
			SpinLockRelease(&walrcv->mutex);
			exit(1);
		}
		SpinLockRelease(&walrcv->mutex);
Add WL_EXIT_ON_PM_DEATH pseudo-event.
Users of the WaitEventSet and WaitLatch() APIs can now choose between
asking for WL_POSTMASTER_DEATH and then handling it explicitly, or asking
for WL_EXIT_ON_PM_DEATH to trigger immediate exit on postmaster death.
This reduces code duplication, since almost all callers want the latter.
Repair all code that was previously ignoring postmaster death completely,
or requesting the event but ignoring it, or requesting the event but then
doing an unconditional PostmasterIsAlive() call every time through its
event loop (which is an expensive syscall on platforms for which we don't
have USE_POSTMASTER_DEATH_SIGNAL support).
Assert that callers of WaitLatchXXX() under the postmaster remember to
ask for either WL_POSTMASTER_DEATH or WL_EXIT_ON_PM_DEATH, to prevent
future bugs.
The only process that doesn't handle postmaster death is syslogger. It
waits until all backends holding the write end of the syslog pipe
(including the postmaster) have closed it by exiting, to be sure to
capture any parting messages. By using the WaitEventSet API directly
it avoids the new assertion, and as a by-product it may be slightly
more efficient on platforms that have epoll().
Author: Thomas Munro
Reviewed-by: Kyotaro Horiguchi, Heikki Linnakangas, Tom Lane
Discussion: https://postgr.es/m/CAEepm%3D1TCviRykkUb69ppWLr_V697rzd1j3eZsRMmbXvETfqbQ%40mail.gmail.com,
https://postgr.es/m/CAEepm=2LqHzizbe7muD7-2yHUbTOoF7Q+qkSD5Q41kuhttRTwA@mail.gmail.com
2018-11-23 08:16:41 +01:00
		(void) WaitLatch(walrcv->latch, WL_LATCH_SET | WL_EXIT_ON_PM_DEATH, 0,
						 WAIT_EVENT_WAL_RECEIVER_WAIT_START);
	}

	if (update_process_title)
	{
		char		activitymsg[50];
		snprintf(activitymsg, sizeof(activitymsg), "restarting at %X/%X",
				 (uint32) (*startpoint >> 32),
				 (uint32) *startpoint);
		set_ps_display(activitymsg, false);
	}
}
/*
 * Fetch any missing timeline history files between 'first' and 'last'
 * (inclusive) from the server.
 */
static void
WalRcvFetchTimeLineHistoryFiles(TimeLineID first, TimeLineID last)
{
	TimeLineID	tli;
	for (tli = first; tli <= last; tli++)
	{
		/* there's no history file for timeline 1 */
		if (tli != 1 && !existsTimeLineHistory(tli))
		{
			char	   *fname;
			char	   *content;
			int			len;
			char		expectedfname[MAXFNAMELEN];
			ereport(LOG,
					(errmsg("fetching timeline history file for timeline %u from primary server",
							tli)));

			EnableWalRcvImmediateExit();
			walrcv_readtimelinehistoryfile(wrconn, tli, &fname, &content, &len);
			DisableWalRcvImmediateExit();

			/*
			 * Check that the filename on the master matches what we
			 * calculated ourselves. This is just a sanity check, it should
			 * always match.
			 */
			TLHistoryFileName(expectedfname, tli);
			if (strcmp(fname, expectedfname) != 0)
				ereport(ERROR,
						(errcode(ERRCODE_PROTOCOL_VIOLATION),
						 errmsg_internal("primary reported unexpected file name for timeline history file of timeline %u",
										 tli)));

			/*
			 * Write the file to pg_wal.
			 */
			writeTimeLineHistoryFile(tli, content, len);

			pfree(fname);
			pfree(content);
		}
	}
}

/*
Make the standby server continuously retry restoring the next WAL segment with
restore_command if the connection to the primary server is lost. This
ensures that the standby can recover automatically if the connection is
lost for a long time and the standby falls so far behind that the required
WAL segments have been archived and deleted on the master.
This also makes standby_mode useful without streaming replication; the
server will keep retrying restore_command every few seconds until the
trigger file is found. That's the same basic functionality pg_standby
offers, but without the bells and whistles.
To implement that, refactor the ReadRecord/FetchRecord functions. The
FetchRecord() function introduced in the original streaming replication
patch is removed, and all the retry logic is now in a new function called
XLogReadPage(). XLogReadPage() is now responsible for executing
restore_command, launching walreceiver, and waiting for new WAL to arrive
from primary, as required.
This also changes the life cycle of walreceiver. When launched, it now only
tries to connect to the master once, and exits if the connection fails, or
is lost during streaming for any reason. The startup process detects the
death, and re-launches walreceiver if necessary.
2010-01-27 16:27:51 +01:00
 * Mark us as STOPPED in shared memory at exit.
 */
static void
WalRcvDie(int code, Datum arg)
{
	WalRcvData *walrcv = WalRcv;

	/* Ensure that all WAL records received are flushed to disk */
	XLogWalRcvFlush(true);

	/* Mark ourselves inactive in shared memory */
	SpinLockAcquire(&walrcv->mutex);
Allow a streaming replication standby to follow a timeline switch.
Before this patch, streaming replication would refuse to start replicating
if the timeline in the primary doesn't exactly match the standby. The
situation where it doesn't match is when you have a master, and two
standbys, and you promote one of the standbys to become new master.
Promoting bumps up the timeline ID, and after that bump, the other standby
would refuse to continue.
There's significantly more timeline related logic in streaming replication
now. First of all, when a standby connects to primary, it will ask the
primary for any timeline history files that are missing from the standby.
The missing files are sent using a new replication command TIMELINE_HISTORY,
and stored in standby's pg_xlog directory. Using the timeline history files,
the standby can follow the latest timeline present in the primary
(recovery_target_timeline='latest'), just as it can follow new timelines
appearing in an archive directory.
START_REPLICATION now takes a TIMELINE parameter, to specify exactly which
timeline to stream WAL from. This allows the standby to request the primary
to send over WAL that precedes the promotion. The replication protocol is
changed slightly (in a backwards-compatible way although there's little hope
of streaming replication working across major versions anyway), to allow
replication to stop when the end of timeline reached, putting the walsender
back into accepting a replication command.
Many thanks to Amit Kapila for testing and reviewing various versions of
this patch.
2012-12-13 18:00:00 +01:00
|
|
|
Assert(walrcv->walRcvState == WALRCV_STREAMING ||
|
|
|
|
walrcv->walRcvState == WALRCV_RESTARTING ||
|
|
|
|
walrcv->walRcvState == WALRCV_STARTING ||
|
|
|
|
walrcv->walRcvState == WALRCV_WAITING ||
|
Make standby server continuously retry restoring the next WAL segment with
restore_command, if the connection to the primary server is lost. This
ensures that the standby can recover automatically, if the connection is
lost for a long time and standby falls behind so much that the required
WAL segments have been archived and deleted in the master.
This also makes standby_mode useful without streaming replication; the
server will keep retrying restore_command every few seconds until the
trigger file is found. That's the same basic functionality pg_standby
offers, but without the bells and whistles.
To implement that, refactor the ReadRecord/FetchRecord functions. The
FetchRecord() function introduced in the original streaming replication
patch is removed, and all the retry logic is now in a new function called
XLogReadPage(). XLogReadPage() is now responsible for executing
restore_command, launching walreceiver, and waiting for new WAL to arrive
from primary, as required.
This also changes the life cycle of walreceiver. When launched, it now only
tries to connect to the master once, and exits if the connection fails, or
is lost during streaming for any reason. The startup process detects the
death, and re-launches walreceiver if necessary.
2010-01-27 16:27:51 +01:00
|
|
|
walrcv->walRcvState == WALRCV_STOPPING);
|
Allow a streaming replication standby to follow a timeline switch.
Before this patch, streaming replication would refuse to start replicating
if the timeline in the primary doesn't exactly match the standby. The
situation where it doesn't match is when you have a master, and two
standbys, and you promote one of the standbys to become new master.
Promoting bumps up the timeline ID, and after that bump, the other standby
would refuse to continue.
There's significantly more timeline related logic in streaming replication
now. First of all, when a standby connects to primary, it will ask the
primary for any timeline history files that are missing from the standby.
The missing files are sent using a new replication command TIMELINE_HISTORY,
and stored in standby's pg_xlog directory. Using the timeline history files,
the standby can follow the latest timeline present in the primary
(recovery_target_timeline='latest'), just as it can follow new timelines
appearing in an archive directory.
START_REPLICATION now takes a TIMELINE parameter, to specify exactly which
timeline to stream WAL from. This allows the standby to request the primary
to send over WAL that precedes the promotion. The replication protocol is
changed slightly (in a backwards-compatible way although there's little hope
of streaming replication working across major versions anyway), to allow
replication to stop when the end of timeline reached, putting the walsender
back into accepting a replication command.
Many thanks to Amit Kapila for testing and reviewing various versions of
this patch.
2012-12-13 18:00:00 +01:00
|
|
|
Assert(walrcv->pid == MyProcPid);
|
Make standby server continuously retry restoring the next WAL segment with
restore_command, if the connection to the primary server is lost. This
ensures that the standby can recover automatically, if the connection is
lost for a long time and standby falls behind so much that the required
WAL segments have been archived and deleted in the master.
This also makes standby_mode useful without streaming replication; the
server will keep retrying restore_command every few seconds until the
trigger file is found. That's the same basic functionality pg_standby
offers, but without the bells and whistles.
To implement that, refactor the ReadRecord/FetchRecord functions. The
FetchRecord() function introduced in the original streaming replication
patch is removed, and all the retry logic is now in a new function called
XLogReadPage(). XLogReadPage() is now responsible for executing
restore_command, launching walreceiver, and waiting for new WAL to arrive
from primary, as required.
This also changes the life cycle of walreceiver. When launched, it now only
tries to connect to the master once, and exits if the connection fails, or
is lost during streaming for any reason. The startup process detects the
death, and re-launches walreceiver if necessary.
2010-01-27 16:27:51 +01:00
|
|
|
walrcv->walRcvState = WALRCV_STOPPED;
|
2010-01-15 10:19:10 +01:00
|
|
|
walrcv->pid = 0;
|
2016-07-01 19:53:46 +02:00
|
|
|
walrcv->ready_to_display = false;
|
2017-10-03 20:00:56 +02:00
|
|
|
walrcv->latch = NULL;
|
2010-01-15 10:19:10 +01:00
|
|
|
SpinLockRelease(&walrcv->mutex);
|
|
|
|
|
Make standby server continuously retry restoring the next WAL segment with
restore_command, if the connection to the primary server is lost. This
ensures that the standby can recover automatically, if the connection is
lost for a long time and standby falls behind so much that the required
WAL segments have been archived and deleted in the master.
This also makes standby_mode useful without streaming replication; the
server will keep retrying restore_command every few seconds until the
trigger file is found. That's the same basic functionality pg_standby
offers, but without the bells and whistles.
To implement that, refactor the ReadRecord/FetchRecord functions. The
FetchRecord() function introduced in the original streaming replication
patch is removed, and all the retry logic is now in a new function called
XLogReadPage(). XLogReadPage() is now responsible for executing
restore_command, launching walreceiver, and waiting for new WAL to arrive
from primary, as required.
This also changes the life cycle of walreceiver. When launched, it now only
tries to connect to the master once, and exits if the connection fails, or
is lost during streaming for any reason. The startup process detects the
death, and re-launches walreceiver if necessary.
2010-01-27 16:27:51 +01:00
|
|
|
/* Terminate the connection gracefully. */
|
2016-11-30 18:00:00 +01:00
|
|
|
if (wrconn != NULL)
|
|
|
|
walrcv_disconnect(wrconn);
|
Allow a streaming replication standby to follow a timeline switch.
Before this patch, streaming replication would refuse to start replicating
if the timeline in the primary doesn't exactly match the standby. The
situation where it doesn't match is when you have a master, and two
standbys, and you promote one of the standbys to become new master.
Promoting bumps up the timeline ID, and after that bump, the other standby
would refuse to continue.
There's significantly more timeline related logic in streaming replication
now. First of all, when a standby connects to primary, it will ask the
primary for any timeline history files that are missing from the standby.
The missing files are sent using a new replication command TIMELINE_HISTORY,
and stored in standby's pg_xlog directory. Using the timeline history files,
the standby can follow the latest timeline present in the primary
(recovery_target_timeline='latest'), just as it can follow new timelines
appearing in an archive directory.
START_REPLICATION now takes a TIMELINE parameter, to specify exactly which
timeline to stream WAL from. This allows the standby to request the primary
to send over WAL that precedes the promotion. The replication protocol is
changed slightly (in a backwards-compatible way although there's little hope
of streaming replication working across major versions anyway), to allow
replication to stop when the end of timeline reached, putting the walsender
back into accepting a replication command.
Many thanks to Amit Kapila for testing and reviewing various versions of
this patch.
2012-12-13 18:00:00 +01:00
|
|
|
|
|
|
|
/* Wake up the startup process to notice promptly that we're gone */
|
|
|
|
WakeupRecovery();
|
2010-01-15 10:19:10 +01:00
|
|
|
}

/* SIGHUP: set flag to re-read config file at next convenient time */
static void
WalRcvSigHupHandler(SIGNAL_ARGS)
{
	got_SIGHUP = true;
}

/* SIGUSR1: used by latch mechanism */
static void
WalRcvSigUsr1Handler(SIGNAL_ARGS)
{
	int			save_errno = errno;

	latch_sigusr1_handler();

	errno = save_errno;
}
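The save/restore of errno in the handler above is not decorative: a signal can interrupt a thread between a failing system call and the caller's inspection of errno, and anything the handler does (here, the latch machinery) may clobber it. A minimal standalone sketch of the pattern, using a plain `signal`/`raise` pair rather than PostgreSQL's SIGNAL_ARGS plumbing:

```c
#include <assert.h>
#include <errno.h>
#include <signal.h>

static volatile sig_atomic_t got_signal = 0;

/*
 * Handler that does work which may clobber errno, then restores it --
 * the same save/restore pattern as WalRcvSigUsr1Handler above.
 */
static void
handler(int signo)
{
	int			save_errno = errno;

	(void) signo;
	errno = EINVAL;				/* stand-in for work that clobbers errno */
	got_signal = 1;

	errno = save_errno;			/* interrupted code sees errno unchanged */
}
```

Without the final restore, code like `if (write(...) < 0 && errno == EINTR)` in the interrupted context could misdiagnose its own failure.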

/* SIGTERM: set flag for main loop, or shutdown immediately if safe */
static void
WalRcvShutdownHandler(SIGNAL_ARGS)
{
	int			save_errno = errno;

	got_SIGTERM = true;

	if (WalRcv->latch)
		SetLatch(WalRcv->latch);

	/* Don't joggle the elbow of proc_exit */
	if (!proc_exit_inprogress && WalRcvImmediateInterruptOK)
		ProcessWalRcvInterrupts();

	errno = save_errno;
}

/*
 * WalRcvQuickDieHandler() occurs when signalled SIGQUIT by the postmaster.
 *
 * Some backend has bought the farm, so we need to stop what we're doing and
 * exit.
 */
static void
WalRcvQuickDieHandler(SIGNAL_ARGS)
{
	/*
	 * We DO NOT want to run proc_exit() or atexit() callbacks -- we're here
	 * because shared memory may be corrupted, so we don't want to try to
	 * clean up our transaction.  Just nail the windows shut and get out of
	 * town.  The callbacks wouldn't be safe to run from a signal handler,
	 * anyway.
	 *
	 * Note we use _exit(2) not _exit(0).  This is to force the postmaster
	 * into a system reset cycle if someone sends a manual SIGQUIT to a
	 * random backend.  This is necessary precisely because we don't clean up
	 * our shared memory state.  (The "dead man switch" mechanism in
	 * pmsignal.c should ensure the postmaster sees this as a crash, too, but
	 * no harm in being doubly sure.)
	 */
	_exit(2);
}

/*
 * Accept the message from XLOG stream, and process it.
 */
static void
XLogWalRcvProcessMsg(unsigned char type, char *buf, Size len)
{
	int			hdrlen;
	XLogRecPtr	dataStart;
	XLogRecPtr	walEnd;
	TimestampTz sendTime;
	bool		replyRequested;

	resetStringInfo(&incoming_message);

	switch (type)
	{
		case 'w':				/* WAL records */
			{
				/* copy message to StringInfo */
				hdrlen = sizeof(int64) + sizeof(int64) + sizeof(int64);
				if (len < hdrlen)
					ereport(ERROR,
							(errcode(ERRCODE_PROTOCOL_VIOLATION),
							 errmsg_internal("invalid WAL message received from primary")));
				appendBinaryStringInfo(&incoming_message, buf, hdrlen);

				/* read the fields */
				dataStart = pq_getmsgint64(&incoming_message);
				walEnd = pq_getmsgint64(&incoming_message);
				sendTime = pq_getmsgint64(&incoming_message);
				ProcessWalSndrMessage(walEnd, sendTime);

				buf += hdrlen;
				len -= hdrlen;
				XLogWalRcvWrite(buf, len, dataStart);
				break;
			}
		case 'k':				/* Keepalive */
			{
				/* copy message to StringInfo */
				hdrlen = sizeof(int64) + sizeof(int64) + sizeof(char);
				if (len != hdrlen)
					ereport(ERROR,
							(errcode(ERRCODE_PROTOCOL_VIOLATION),
							 errmsg_internal("invalid keepalive message received from primary")));
				appendBinaryStringInfo(&incoming_message, buf, hdrlen);

				/* read the fields */
				walEnd = pq_getmsgint64(&incoming_message);
				sendTime = pq_getmsgint64(&incoming_message);
				replyRequested = pq_getmsgbyte(&incoming_message);

				ProcessWalSndrMessage(walEnd, sendTime);

				/* If the primary requested a reply, send one immediately */
				if (replyRequested)
					XLogWalRcvSendReply(true, false);
				break;
			}
		default:
			ereport(ERROR,
					(errcode(ERRCODE_PROTOCOL_VIOLATION),
					 errmsg_internal("invalid replication message type %d",
									 type)));
	}
}
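The 'w' message handled above begins with three network-order int64 fields (dataStart, walEnd, sendTime) followed by the WAL payload. A minimal standalone sketch of that header decoding, using a hand-rolled big-endian reader instead of PostgreSQL's pq_getmsgint64 (the struct and function names here are ours, for illustration only):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Read a big-endian (network-order) int64 starting at buf[off]. */
static int64_t
read_be64(const unsigned char *buf, size_t off)
{
	uint64_t	v = 0;

	for (int i = 0; i < 8; i++)
		v = (v << 8) | buf[off + i];
	return (int64_t) v;
}

/* Decoded form of a 'w' message header: 24 bytes, then WAL data. */
struct WalMsgHdr
{
	int64_t		dataStart;		/* WAL position of the first byte of data */
	int64_t		walEnd;			/* sender's current end of WAL */
	int64_t		sendTime;		/* sender's timestamp */
};

/*
 * Parse the header; returns the header length consumed, or -1 when the
 * message is too short (the protocol-violation case ereport'ed above).
 */
static int
parse_wal_msg_hdr(const unsigned char *buf, size_t len, struct WalMsgHdr *hdr)
{
	const size_t hdrlen = 8 + 8 + 8;

	if (len < hdrlen)
		return -1;
	hdr->dataStart = read_be64(buf, 0);
	hdr->walEnd = read_be64(buf, 8);
	hdr->sendTime = read_be64(buf, 16);
	return (int) hdrlen;
}
```

The 'k' keepalive differs only in layout: two int64s followed by a single reply-requested byte, which is why the code above checks `len != hdrlen` exactly rather than `len < hdrlen`.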

/*
 * Write XLOG data to disk.
 */
static void
XLogWalRcvWrite(char *buf, Size nbytes, XLogRecPtr recptr)
{
	int			startoff;
	int			byteswritten;

	while (nbytes > 0)
	{
		int			segbytes;

		if (recvFile < 0 || !XLByteInSeg(recptr, recvSegNo, wal_segment_size))
		{
			bool		use_existent;

			/*
			 * fsync() and close current file before we switch to next one. We
			 * would otherwise have to reopen this file to fsync it later
			 */
			if (recvFile >= 0)
			{
				char		xlogfname[MAXFNAMELEN];

				XLogWalRcvFlush(false);

				/*
				 * XLOG segment files will be re-read by recovery in startup
				 * process soon, so we don't advise the OS to release cache
				 * pages associated with the file like XLogFileClose() does.
				 */
				if (close(recvFile) != 0)
					ereport(PANIC,
							(errcode_for_file_access(),
							 errmsg("could not close log segment %s: %m",
									XLogFileNameP(recvFileTLI, recvSegNo))));

				/*
				 * Create .done file forcibly to prevent the streamed segment
				 * from being archived later.
				 */
				XLogFileName(xlogfname, recvFileTLI, recvSegNo, wal_segment_size);
				if (XLogArchiveMode != ARCHIVE_MODE_ALWAYS)
					XLogArchiveForceDone(xlogfname);
				else
					XLogArchiveNotify(xlogfname);
			}
			recvFile = -1;

			/* Create/use new log file */
			XLByteToSeg(recptr, recvSegNo, wal_segment_size);
			use_existent = true;
			recvFile = XLogFileInit(recvSegNo, &use_existent, true);
			recvFileTLI = ThisTimeLineID;
			recvOff = 0;
		}

		/* Calculate the start offset of the received logs */
		startoff = XLogSegmentOffset(recptr, wal_segment_size);

		if (startoff + nbytes > wal_segment_size)
			segbytes = wal_segment_size - startoff;
		else
			segbytes = nbytes;

		/* Need to seek in the file? */
		if (recvOff != startoff)
		{
			if (lseek(recvFile, (off_t) startoff, SEEK_SET) < 0)
				ereport(PANIC,
						(errcode_for_file_access(),
						 errmsg("could not seek in log segment %s to offset %u: %m",
								XLogFileNameP(recvFileTLI, recvSegNo),
								startoff)));
			recvOff = startoff;
		}

		/* OK to write the logs */
		errno = 0;

		byteswritten = write(recvFile, buf, segbytes);
		if (byteswritten <= 0)
		{
			/* if write didn't set errno, assume no disk space */
			if (errno == 0)
				errno = ENOSPC;
			ereport(PANIC,
					(errcode_for_file_access(),
					 errmsg("could not write to log segment %s "
							"at offset %u, length %lu: %m",
							XLogFileNameP(recvFileTLI, recvSegNo),
							recvOff, (unsigned long) segbytes)));
		}

		/* Update state for write */
		recptr += byteswritten;

		recvOff += byteswritten;
		nbytes -= byteswritten;
		buf += byteswritten;

		LogstreamResult.Write = recptr;
	}
}

/*
 * Flush the log to disk.
 *
 * If we're in the midst of dying, it's unwise to do anything that might throw
 * an error, so we skip sending a reply in that case.
 */
static void
XLogWalRcvFlush(bool dying)
{
	if (LogstreamResult.Flush < LogstreamResult.Write)
	{
		WalRcvData *walrcv = WalRcv;

		issue_xlog_fsync(recvFile, recvSegNo);

		LogstreamResult.Flush = LogstreamResult.Write;

		/* Update shared-memory status */
		SpinLockAcquire(&walrcv->mutex);
		if (walrcv->receivedUpto < LogstreamResult.Flush)
		{
			walrcv->latestChunkStart = walrcv->receivedUpto;
			walrcv->receivedUpto = LogstreamResult.Flush;
			walrcv->receivedTLI = ThisTimeLineID;
		}
		SpinLockRelease(&walrcv->mutex);

		/* Signal the startup process and walsender that new WAL has arrived */
		WakeupRecovery();
		if (AllowCascadeReplication())
			WalSndWakeup();

		/* Report XLOG streaming progress in PS display */
		if (update_process_title)
		{
			char		activitymsg[50];

			snprintf(activitymsg, sizeof(activitymsg), "streaming %X/%X",
					 (uint32) (LogstreamResult.Write >> 32),
					 (uint32) LogstreamResult.Write);
			set_ps_display(activitymsg, false);
		}

		/* Also let the master know that we made some progress */
		if (!dying)
		{
			XLogWalRcvSendReply(false, false);
			XLogWalRcvSendHSFeedback(false);
		}
	}
}

/*
 * Send reply message to primary, indicating our current WAL locations, oldest
 * xmin and the current time.
 *
 * If 'force' is not set, the message is only sent if enough time has
 * passed since last status update to reach wal_receiver_status_interval.
 * If wal_receiver_status_interval is disabled altogether and 'force' is
 * false, this is a no-op.
 *
 * If 'requestReply' is true, requests the server to reply immediately upon
 * receiving this message. This is used for heartbeats, when approaching
 * wal_receiver_timeout.
 */
static void
XLogWalRcvSendReply(bool force, bool requestReply)
{
	static XLogRecPtr writePtr = 0;
	static XLogRecPtr flushPtr = 0;
	XLogRecPtr	applyPtr;
	static TimestampTz sendTime = 0;
	TimestampTz now;

	/*
	 * If the user doesn't want status to be reported to the master, be sure
	 * to exit before doing anything at all.
	 */
	if (!force && wal_receiver_status_interval <= 0)
		return;

	/* Get current timestamp. */
	now = GetCurrentTimestamp();

	/*
	 * We can compare the write and flush positions to the last message we
	 * sent without taking any lock, but the apply position requires a spin
	 * lock, so we don't check that unless something else has changed or 10
	 * seconds have passed.  This means that the apply WAL location will
	 * appear, from the master's point of view, to lag slightly, but since
	 * this is only for reporting purposes and only on idle systems, that's
	 * probably OK.
	 */
	if (!force
		&& writePtr == LogstreamResult.Write
		&& flushPtr == LogstreamResult.Flush
		&& !TimestampDifferenceExceeds(sendTime, now,
									   wal_receiver_status_interval * 1000))
		return;
	sendTime = now;

	/* Construct a new message */
	writePtr = LogstreamResult.Write;
	flushPtr = LogstreamResult.Flush;
	applyPtr = GetXLogReplayRecPtr(NULL);

	resetStringInfo(&reply_message);
	pq_sendbyte(&reply_message, 'r');
	pq_sendint64(&reply_message, writePtr);
	pq_sendint64(&reply_message, flushPtr);
	pq_sendint64(&reply_message, applyPtr);
	pq_sendint64(&reply_message, GetCurrentTimestamp());
	pq_sendbyte(&reply_message, requestReply ? 1 : 0);

	/* Send it */
	elog(DEBUG2, "sending write %X/%X flush %X/%X apply %X/%X%s",
		 (uint32) (writePtr >> 32), (uint32) writePtr,
		 (uint32) (flushPtr >> 32), (uint32) flushPtr,
		 (uint32) (applyPtr >> 32), (uint32) applyPtr,
		 requestReply ? " (reply requested)" : "");

	walrcv_send(wrconn, reply_message.data, reply_message.len);
}

/*
 * Send hot standby feedback message to primary, plus the current time,
 * in case they don't have a watch.
 *
 * If the user disables feedback, send one final message to tell sender
 * to forget about the xmin on this standby. We also send this message
 * on first connect because a previous connection might have set xmin
 * on a replication slot. (If we're not using a slot it's harmless to
 * send a feedback message explicitly setting InvalidTransactionId).
 */
static void
XLogWalRcvSendHSFeedback(bool immed)
{
	TimestampTz now;
	TransactionId nextXid;
	uint32		xmin_epoch,
				catalog_xmin_epoch;
	TransactionId xmin,
				catalog_xmin;
	static TimestampTz sendTime = 0;

	/* initially true so we always send at least one feedback message */
	static bool master_has_standby_xmin = true;

	/*
	 * If the user doesn't want status to be reported to the master, be sure
	 * to exit before doing anything at all.
	 */
	if ((wal_receiver_status_interval <= 0 || !hot_standby_feedback) &&
		!master_has_standby_xmin)
		return;

	/* Get current timestamp. */
	now = GetCurrentTimestamp();

	if (!immed)
	{
		/*
		 * Send feedback at most once per wal_receiver_status_interval.
		 */
		if (!TimestampDifferenceExceeds(sendTime, now,
										wal_receiver_status_interval * 1000))
			return;
		sendTime = now;
	}

	/*
	 * If Hot Standby is not yet accepting connections there is nothing to
	 * send. Check this after the interval has expired to reduce number of
	 * calls.
	 *
	 * Bailing out here also ensures that we don't send feedback until we've
	 * read our own replication slot state, so we don't tell the master to
	 * discard needed xmin or catalog_xmin from any slots that may exist on
	 * this replica.
	 */
	if (!HotStandbyActive())
		return;

	/*
	 * Make the expensive call to get the oldest xmin once we are certain
	 * everything else has been checked.
	 */
	if (hot_standby_feedback)
	{
		TransactionId slot_xmin;

		/*
		 * Usually GetOldestXmin() would include both global replication slot
		 * xmin and catalog_xmin in its calculations, but we want to derive
		 * separate values for each of those. So we ask for an xmin that
		 * excludes the catalog_xmin.
		 */
		xmin = GetOldestXmin(NULL,
							 PROCARRAY_FLAGS_DEFAULT | PROCARRAY_SLOTS_XMIN);

		ProcArrayGetReplicationSlotXmin(&slot_xmin, &catalog_xmin);

		if (TransactionIdIsValid(slot_xmin) &&
			TransactionIdPrecedes(slot_xmin, xmin))
			xmin = slot_xmin;
	}
	else
	{
		xmin = InvalidTransactionId;
		catalog_xmin = InvalidTransactionId;
	}

	/*
	 * Get epoch and adjust if nextXid and oldestXmin are different sides of
	 * the epoch boundary.
	 */
	GetNextXidAndEpoch(&nextXid, &xmin_epoch);
	catalog_xmin_epoch = xmin_epoch;
	if (nextXid < xmin)
		xmin_epoch--;
	if (nextXid < catalog_xmin)
		catalog_xmin_epoch--;

	elog(DEBUG2, "sending hot standby feedback xmin %u epoch %u catalog_xmin %u catalog_xmin_epoch %u",
		 xmin, xmin_epoch, catalog_xmin, catalog_xmin_epoch);

	/* Construct the message and send it. */
	resetStringInfo(&reply_message);
	pq_sendbyte(&reply_message, 'h');
	pq_sendint64(&reply_message, GetCurrentTimestamp());
	pq_sendint32(&reply_message, xmin);
	pq_sendint32(&reply_message, xmin_epoch);
	pq_sendint32(&reply_message, catalog_xmin);
	pq_sendint32(&reply_message, catalog_xmin_epoch);
	walrcv_send(wrconn, reply_message.data, reply_message.len);
	if (TransactionIdIsValid(xmin) || TransactionIdIsValid(catalog_xmin))
		master_has_standby_xmin = true;
	else
		master_has_standby_xmin = false;
}

/*
 * Update shared memory status upon receiving a message from primary.
 *
 * 'walEnd' and 'sendTime' are the end-of-WAL and timestamp of the latest
 * message, reported by primary.
 */
static void
ProcessWalSndrMessage(XLogRecPtr walEnd, TimestampTz sendTime)
{
	WalRcvData *walrcv = WalRcv;

	TimestampTz lastMsgReceiptTime = GetCurrentTimestamp();

	/* Update shared-memory status */
	SpinLockAcquire(&walrcv->mutex);
	if (walrcv->latestWalEnd < walEnd)
		walrcv->latestWalEndTime = sendTime;
	walrcv->latestWalEnd = walEnd;
	walrcv->lastMsgSendTime = sendTime;
	walrcv->lastMsgReceiptTime = lastMsgReceiptTime;
	SpinLockRelease(&walrcv->mutex);

	if (log_min_messages <= DEBUG2)
	{
		char	   *sendtime;
		char	   *receipttime;
		int			applyDelay;

		/* Copy because timestamptz_to_str returns a static buffer */
		sendtime = pstrdup(timestamptz_to_str(sendTime));
		receipttime = pstrdup(timestamptz_to_str(lastMsgReceiptTime));
		applyDelay = GetReplicationApplyDelay();

		/* apply delay is not available */
		if (applyDelay == -1)
			elog(DEBUG2, "sendtime %s receipttime %s replication apply delay (N/A) transfer latency %d ms",
				 sendtime,
				 receipttime,
				 GetReplicationTransferLatency());
		else
			elog(DEBUG2, "sendtime %s receipttime %s replication apply delay %d ms transfer latency %d ms",
				 sendtime,
				 receipttime,
				 applyDelay,
				 GetReplicationTransferLatency());

		pfree(sendtime);
		pfree(receipttime);
	}
}

/*
 * Wake up the walreceiver main loop.
 *
 * This is called by the startup process whenever interesting xlog records
 * are applied, so that walreceiver can check if it needs to send an apply
 * notification back to the master which may be waiting in a COMMIT with
 * synchronous_commit = remote_apply.
 */
void
WalRcvForceReply(void)
{
	Latch	   *latch;

	WalRcv->force_reply = true;
	/* fetching the latch pointer might not be atomic, so use spinlock */
	SpinLockAcquire(&WalRcv->mutex);
	latch = WalRcv->latch;
	SpinLockRelease(&WalRcv->mutex);
	if (latch)
		SetLatch(latch);
}

/*
 * Return a string constant representing the state. This is used
 * in system functions and views, and should *not* be translated.
 */
static const char *
WalRcvGetStateString(WalRcvState state)
{
	switch (state)
	{
		case WALRCV_STOPPED:
			return "stopped";
		case WALRCV_STARTING:
			return "starting";
		case WALRCV_STREAMING:
			return "streaming";
		case WALRCV_WAITING:
			return "waiting";
		case WALRCV_RESTARTING:
			return "restarting";
		case WALRCV_STOPPING:
			return "stopping";
	}
	return "UNKNOWN";
}

/*
 * Returns activity of WAL receiver, including pid, state and xlog locations
 * received from the WAL sender of another server.
 */
Datum
pg_stat_get_wal_receiver(PG_FUNCTION_ARGS)
{
	TupleDesc	tupdesc;
	Datum	   *values;
	bool	   *nulls;
	int			pid;
	bool		ready_to_display;
	WalRcvState state;
	XLogRecPtr	receive_start_lsn;
	TimeLineID	receive_start_tli;
	XLogRecPtr	received_lsn;
	TimeLineID	received_tli;
	TimestampTz last_send_time;
	TimestampTz last_receipt_time;
	XLogRecPtr	latest_end_lsn;
	TimestampTz latest_end_time;
	char		sender_host[NI_MAXHOST];
	int			sender_port = 0;
	char		slotname[NAMEDATALEN];
	char		conninfo[MAXCONNINFO];

	/* Take a lock to ensure value consistency */
	SpinLockAcquire(&WalRcv->mutex);
	pid = (int) WalRcv->pid;
	ready_to_display = WalRcv->ready_to_display;
	state = WalRcv->walRcvState;
	receive_start_lsn = WalRcv->receiveStart;
	receive_start_tli = WalRcv->receiveStartTLI;
	received_lsn = WalRcv->receivedUpto;
	received_tli = WalRcv->receivedTLI;
	last_send_time = WalRcv->lastMsgSendTime;
	last_receipt_time = WalRcv->lastMsgReceiptTime;
	latest_end_lsn = WalRcv->latestWalEnd;
	latest_end_time = WalRcv->latestWalEndTime;
	strlcpy(slotname, (char *) WalRcv->slotname, sizeof(slotname));
	strlcpy(sender_host, (char *) WalRcv->sender_host, sizeof(sender_host));
	sender_port = WalRcv->sender_port;
	strlcpy(conninfo, (char *) WalRcv->conninfo, sizeof(conninfo));
	SpinLockRelease(&WalRcv->mutex);

	/*
	 * No WAL receiver (or not ready yet), just return a tuple with NULL
	 * values
	 */
	if (pid == 0 || !ready_to_display)
		PG_RETURN_NULL();

	/* determine result type */
	if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
		elog(ERROR, "return type must be a row type");

	values = palloc0(sizeof(Datum) * tupdesc->natts);
	nulls = palloc0(sizeof(bool) * tupdesc->natts);

	/* Fetch values */
	values[0] = Int32GetDatum(pid);

	if (!is_member_of_role(GetUserId(), DEFAULT_ROLE_READ_ALL_STATS))
	{
		/*
		 * Only superusers and members of pg_read_all_stats can see details.
		 * Other users only get the pid value to know whether it is a WAL
		 * receiver, but no details.
		 */
		MemSet(&nulls[1], true, sizeof(bool) * (tupdesc->natts - 1));
	}
	else
	{
		values[1] = CStringGetTextDatum(WalRcvGetStateString(state));

		if (XLogRecPtrIsInvalid(receive_start_lsn))
			nulls[2] = true;
		else
			values[2] = LSNGetDatum(receive_start_lsn);
		values[3] = Int32GetDatum(receive_start_tli);
		if (XLogRecPtrIsInvalid(received_lsn))
			nulls[4] = true;
		else
			values[4] = LSNGetDatum(received_lsn);
		values[5] = Int32GetDatum(received_tli);
		if (last_send_time == 0)
			nulls[6] = true;
		else
			values[6] = TimestampTzGetDatum(last_send_time);
		if (last_receipt_time == 0)
			nulls[7] = true;
		else
			values[7] = TimestampTzGetDatum(last_receipt_time);
		if (XLogRecPtrIsInvalid(latest_end_lsn))
			nulls[8] = true;
		else
			values[8] = LSNGetDatum(latest_end_lsn);
		if (latest_end_time == 0)
			nulls[9] = true;
		else
			values[9] = TimestampTzGetDatum(latest_end_time);
		if (*slotname == '\0')
			nulls[10] = true;
		else
			values[10] = CStringGetTextDatum(slotname);
		if (*sender_host == '\0')
			nulls[11] = true;
		else
			values[11] = CStringGetTextDatum(sender_host);
		if (sender_port == 0)
			nulls[12] = true;
		else
			values[12] = Int32GetDatum(sender_port);
		if (*conninfo == '\0')
			nulls[13] = true;
		else
			values[13] = CStringGetTextDatum(conninfo);
	}

	/* Returns the record as Datum */
	PG_RETURN_DATUM(HeapTupleGetDatum(heap_form_tuple(tupdesc, values, nulls)));
}