postgresql/src/backend/replication
Alvaro Herrera f49a80c481 Fix "base" snapshot handling in logical decoding
Two closely related bugs are fixed.  First, xmin of logical slots was
advanced too early.  During xl_running_xacts processing, xmin of the
slot was set to the oldest running xid in the record, but that's wrong:
snapshots that will be used for not-yet-replayed transactions might
consider older txns as running too, so we need to hold xmin back for
them.  The problem wasn't noticed earlier because DDL that allows a
tuple to be deleted (its xmax set) while another not-yet-committed
transaction is still looking at it is pretty rare, if not unique:
e.g. all forms of ALTER TABLE that change the schema acquire an ACCESS
EXCLUSIVE lock, which conflicts with any inserts.  The included test
case (test_decoding's oldest_xmin) uses ALTER of a composite type,
which doesn't have such interlocking.

To deal with this, we must be able to quickly retrieve oldest xmin
(oldest running xid among all assigned snapshots) from ReorderBuffer. To
fix, add another list of ReorderBufferTXNs to the reorderbuffer, where
transactions are sorted by base-snapshot-LSN.  This is slightly
different from the existing (sorted by first-LSN) list, because a
transaction can have an earlier LSN but a later Xmin, if its first
record does not obtain an xmin (e.g. xl_xact_assignment).  Note this new
list doesn't fully replace the existing txn list: we still need that one
to prevent WAL recycling.
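
To illustrate the idea (a hedged sketch with made-up names, not the
code in reorderbuffer.c): keeping the second list sorted by
base-snapshot LSN makes the oldest xmin an O(1) lookup at the list
head, because base snapshots are assigned in increasing LSN order and
their xmins never move backwards.

#include <stddef.h>
#include <stdint.h>

typedef uint32_t TransactionId;
typedef uint64_t XLogRecPtr;

/* Hypothetical, heavily simplified stand-in for ReorderBufferTXN. */
typedef struct TxnNode
{
    TransactionId   base_snapshot_xmin; /* xmin of the base snapshot */
    XLogRecPtr      first_lsn;          /* LSN of the txn's first record */
    XLogRecPtr      base_snapshot_lsn;  /* LSN where base snapshot was set */
    struct TxnNode *next_by_first_lsn;      /* existing list: WAL retention */
    struct TxnNode *next_by_base_snap_lsn;  /* new list added by this fix */
} TxnNode;

typedef struct ReorderBufSketch
{
    TxnNode *txns_by_first_lsn;         /* sorted by first_lsn */
    TxnNode *txns_by_base_snap_lsn;     /* sorted by base_snapshot_lsn */
} ReorderBufSketch;

/*
 * Oldest running xid among all assigned base snapshots.  The head of
 * the base-snapshot-ordered list carries the oldest xmin, so no full
 * scan is needed.
 */
TransactionId
reorderbuf_oldest_xmin(ReorderBufSketch *rb)
{
    if (rb->txns_by_base_snap_lsn == NULL)
        return 0;   /* InvalidTransactionId: no snapshot assigned yet */
    return rb->txns_by_base_snap_lsn->base_snapshot_xmin;
}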

The second issue concerns SnapBuilder snapshots and subtransactions.
SnapBuildDistributeNewCatalogSnapshot never assigned a snapshot to a
transaction that is known to be a subtxn, which is fine in the common
case that the top-level transaction already has one (there is no point
in assigning another), but a bug otherwise.  To fix, arrange to
transfer the snapshot from the subtxn to its top-level txn as soon as
the kinship becomes known.  test_decoding's snapshot_transfer verifies
this.

Also, fix a minor memory leak: the refcount of the top-level txn's old
base snapshot was not decremented when a snapshot was transferred from
a child.
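
In outline, the transfer might look like the following hedged sketch
(hypothetical names; the real logic lives in reorderbuffer.c and must
also re-link the top-level txn in the base-snapshot-ordered list):

#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical, simplified stand-ins for the real structures. */
typedef struct SnapSketch
{
    int refcount;
} SnapSketch;

typedef struct TxnSketch
{
    SnapSketch *base_snapshot;
    uint64_t    base_snapshot_lsn;
} TxnSketch;

static void
snapshot_release(SnapSketch *snap)
{
    if (--snap->refcount == 0)
        free(snap);             /* last reference: free the snapshot */
}

/*
 * Once a transaction is discovered to be a subtransaction, hand its
 * base snapshot to the top-level transaction, keeping whichever
 * snapshot is older (lower LSN) and releasing the other one -- the
 * release of the top-level txn's old snapshot is the leak fix.
 */
void
transfer_base_snapshot(TxnSketch *subtxn, TxnSketch *top)
{
    if (subtxn->base_snapshot == NULL)
        return;                             /* nothing to transfer */

    if (top->base_snapshot == NULL ||
        top->base_snapshot_lsn > subtxn->base_snapshot_lsn)
    {
        if (top->base_snapshot != NULL)
            snapshot_release(top->base_snapshot);   /* the leak fix */
        top->base_snapshot = subtxn->base_snapshot;
        top->base_snapshot_lsn = subtxn->base_snapshot_lsn;
    }
    else
        snapshot_release(subtxn->base_snapshot);    /* top's is older */

    subtxn->base_snapshot = NULL;
    subtxn->base_snapshot_lsn = 0;
}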

Liberally sprinkle code comments, and rewrite a few existing ones.  This
part is my (Álvaro's) contribution to this commit, as I had to write all
those comments in order to understand the existing code and Arseny's
patch.

Reported-by: Arseny Sher <a.sher@postgrespro.ru>
Diagnosed-by: Arseny Sher <a.sher@postgrespro.ru>
Co-authored-by: Arseny Sher <a.sher@postgrespro.ru>
Co-authored-by: Álvaro Herrera <alvherre@alvh.no-ip.org>
Reviewed-by: Antonin Houska <ah@cybertec.at>
Discussion: https://postgr.es/m/87lgdyz1wj.fsf@ars-thinkpad
2018-06-26 16:48:10 -04:00
libpqwalreceiver Post-feature-freeze pgindent run. 2018-04-26 14:47:16 -04:00
logical Fix "base" snapshot handling in logical decoding 2018-06-26 16:48:10 -04:00
pgoutput Don't do logical replication of TRUNCATE of zero tables 2018-04-30 13:49:20 -04:00
.gitignore Support multiple synchronous standby servers. 2016-04-06 17:18:25 +09:00
basebackup.c Address set of issues with errno handling 2018-06-25 11:19:05 +09:00
Makefile Rethink flex flags for syncrep_scanner.l. 2017-05-19 18:05:20 -04:00
README Rename "pg_xlog" directory to "pg_wal". 2016-10-20 11:32:18 -04:00
repl_gram.y Validate page level checksums in base backups 2018-04-03 13:47:16 +02:00
repl_scanner.l Validate page level checksums in base backups 2018-04-03 13:47:16 +02:00
slot.c Address set of issues with errno handling 2018-06-25 11:19:05 +09:00
slotfuncs.c Fix a couple of bugs with replication slot advancing feature 2018-06-11 09:26:13 +09:00
syncrep_gram.y Update copyright for 2018 2018-01-02 23:30:12 -05:00
syncrep_scanner.l Update copyright for 2018 2018-01-02 23:30:12 -05:00
syncrep.c Update copyright for 2018 2018-01-02 23:30:12 -05:00
walreceiver.c Post-feature-freeze pgindent run. 2018-04-26 14:47:16 -04:00
walreceiverfuncs.c Update copyright for 2018 2018-01-02 23:30:12 -05:00
walsender.c Post-feature-freeze pgindent run. 2018-04-26 14:47:16 -04:00

src/backend/replication/README

Walreceiver - libpqwalreceiver API
----------------------------------

The transport-specific part of walreceiver, responsible for connecting to
the primary server, receiving WAL files and sending messages, is loaded
dynamically to avoid having to link the main server binary with libpq.
The dynamically loaded module is in the libpqwalreceiver subdirectory.

The dynamically loaded module implements four functions:


bool walrcv_connect(char *conninfo, XLogRecPtr startpoint)

Establishes a connection to the primary, and starts streaming from
'startpoint'. Returns true on success.

int walrcv_receive(char **buffer, pgsocket *wait_fd)

Retrieves any message available on the connection without blocking.
If a message was successfully read, returns its length. If the
connection is closed, returns -1. Otherwise returns 0
to indicate that no data is available, and sets *wait_fd to a socket
descriptor which can be waited on before trying again.  On success, a
pointer to the message payload is stored in *buffer. The returned
buffer is valid until the next call to walrcv_* functions, and the
caller should not attempt to free it.

void walrcv_send(const char *buffer, int nbytes)

Sends a message to the XLOG stream.

void walrcv_disconnect(void);

Disconnects.
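
Tying the four functions together, a caller's receive loop might look
roughly like this hedged sketch; the declarations of pgsocket,
XLogRecPtr, and the walrcv_* functions are assumed to come from the
module's header, and select() is a simplification of the latch-based
waiting the server really performs.

#include <stddef.h>
#include <sys/select.h>

static void
stream_from_primary(char *conninfo, XLogRecPtr startpoint)
{
    char     *buffer;
    pgsocket  wait_fd;
    int       len;

    if (!walrcv_connect(conninfo, startpoint))
        return;                             /* connection failed */

    for (;;)
    {
        len = walrcv_receive(&buffer, &wait_fd);

        if (len > 0)
        {
            /* Process 'len' bytes at 'buffer'.  The buffer is only
             * valid until the next walrcv_* call, so copy anything
             * that must be kept. */
        }
        else if (len == 0)
        {
            /* No data available: wait for the socket to become
             * readable, then retry. */
            fd_set readfds;

            FD_ZERO(&readfds);
            FD_SET(wait_fd, &readfds);
            (void) select(wait_fd + 1, &readfds, NULL, NULL, NULL);
        }
        else
            break;                          /* -1: connection closed */
    }

    walrcv_disconnect();
}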


This API should be considered internal at the moment, but we could open it
up for third-party replacements of libpqwalreceiver in the future, allowing
pluggable methods for receiving WAL.

Walreceiver IPC
---------------

When WAL replay in the startup process has reached the end of the
archived WAL restorable using restore_command, it starts up the
walreceiver process to fetch more WAL (if streaming replication is
configured).

Walreceiver is a postmaster subprocess, so the startup process can't fork it
directly. Instead, it signals the postmaster, asking it to launch the
walreceiver. Before that, however, the startup process fills in
WalRcvData->conninfo and WalRcvData->slotname, and initializes the starting
point in WalRcvData->receiveStart.

As walreceiver receives WAL from the master server and writes and flushes
it to disk (in pg_wal), it updates WalRcvData->receivedUpto and signals
the startup process, so that it knows how far WAL replay can advance.

Walreceiver sends information about replication progress to the master server
whenever it writes or flushes new WAL, or when the specified interval elapses.
This is used for reporting purposes.
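
A hedged sketch of the startup-process side of that handoff follows.
The struct fields match the WalRcvData names above, but the sizes,
types, and signaling call are stand-ins for what walreceiverfuncs.c
and the postmaster-signal machinery actually do.

#include <signal.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/types.h>

#define MAXCONNINFO_SKETCH  1024    /* assumed sizes for this sketch */
#define SLOTNAME_SKETCH     64
typedef uint64_t XLogRecPtr;

/* Simplified shared-memory request block. */
typedef struct
{
    char        conninfo[MAXCONNINFO_SKETCH];
    char        slotname[SLOTNAME_SKETCH];
    XLogRecPtr  receiveStart;   /* where streaming should begin */
    XLogRecPtr  receivedUpto;   /* advanced later by walreceiver */
} WalRcvSketch;

extern WalRcvSketch *WalRcvData;    /* lives in shared memory */
extern pid_t PostmasterPid;

/* Startup-process side of the handoff: fill in the request, then ask
 * the postmaster (which alone may fork children) to launch the
 * walreceiver.  The real code also sets a shared-memory flag telling
 * the postmaster *why* it is being signaled. */
static void
request_walreceiver_start(const char *conninfo, const char *slotname,
                          XLogRecPtr startpoint)
{
    snprintf(WalRcvData->conninfo, sizeof(WalRcvData->conninfo),
             "%s", conninfo);
    snprintf(WalRcvData->slotname, sizeof(WalRcvData->slotname),
             "%s", slotname);
    WalRcvData->receiveStart = startpoint;

    kill(PostmasterPid, SIGUSR1);   /* "please start walreceiver" */
}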

Walsender IPC
-------------

At shutdown, the postmaster handles walsender processes differently from
regular backends. It waits for regular backends to die before writing the
shutdown checkpoint and terminating pgarch and other auxiliary processes, but
that's not desirable for walsenders, because we want the standby servers to
receive all the WAL, including the shutdown checkpoint, before the master
is shut down. Therefore the postmaster treats walsenders like the pgarch
process, and instructs them to terminate in the PM_SHUTDOWN_2 phase, after
all regular backends have died and the checkpointer has issued the shutdown
checkpoint.

When the postmaster accepts a connection, it immediately forks a new process
to handle the handshake and authentication, and the process initializes to
become a backend. The postmaster doesn't know at that time whether the
process will become a regular backend or a walsender - that's indicated in
the connection handshake - so we need some extra signaling to let the
postmaster identify walsender processes.

When a walsender process starts up, it marks itself as a walsender in the
PMSignal array. That way the postmaster can tell it apart from regular
backends.

Note that no big harm is done if the postmaster thinks that a walsender is a
regular backend; it will just terminate the walsender earlier in the shutdown
sequence. A walsender will look like a regular backend until it has finished
initialization and marked itself in the PMSignal array, and again at process
termination, after it has unmarked its PMSignal slot.
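
A minimal sketch of that marking, with a hypothetical flag array
standing in for the real PMSignal machinery (which lives in
src/backend/storage/ipc/pmsignal.c):

#include <stdbool.h>

typedef enum
{
    CHILD_UNUSED,
    CHILD_BACKEND,      /* ordinary backend (the default) */
    CHILD_WALSENDER     /* set once the handshake reveals a walsender */
} ChildKind;

extern volatile ChildKind *child_kinds; /* in shared memory */
extern int MyChildSlot;                 /* this process's slot number */

/* Called by a walsender once it knows what it is. */
void
mark_self_as_walsender(void)
{
    child_kinds[MyChildSlot] = CHILD_WALSENDER;
}

/* Postmaster side: used at shutdown to keep walsenders alive until
 * the shutdown checkpoint has been streamed. */
bool
child_is_walsender(int slot)
{
    return child_kinds[slot] == CHILD_WALSENDER;
}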

Each walsender allocates an entry from the WalSndCtl array, and tracks
information about replication progress. Users can monitor this information
via statistics views.


Walsender - walreceiver protocol
--------------------------------

See manual.