/*-------------------------------------------------------------------------
 *
 * slot.c
 *	   Replication slot management.
 *
 *
 * Copyright (c) 2012-2024, PostgreSQL Global Development Group
 *
 *
 * IDENTIFICATION
 *	  src/backend/replication/slot.c
 *
 * NOTES
 *
 * Replication slots are used to keep state about replication streams
 * originating from this cluster.  Their primary purpose is to prevent the
 * premature removal of WAL or of old tuple versions in a manner that would
 * interfere with replication; they are also useful for monitoring purposes.
 * Slots need to be permanent (to allow restarts), crash-safe, and allocatable
 * on standbys (to support cascading setups).  The requirement that slots be
 * usable on standbys precludes storing them in the system catalogs.
 *
 * Each replication slot gets its own directory inside the $PGDATA/pg_replslot
 * directory.  Inside that directory the state file will contain the slot's
 * own data.  Additional data can be stored alongside that file if required.
 * While the server is running, the state data is also cached in memory for
 * efficiency.
 *
 * ReplicationSlotAllocationLock must be taken in exclusive mode to allocate
 * or free a slot.  ReplicationSlotControlLock must be taken in shared mode
 * to iterate over the slots, and in exclusive mode to change the in_use flag
 * of a slot.  The remaining data in each slot is protected by its mutex.
 *
 *-------------------------------------------------------------------------
 */

#include "postgres.h"

#include <unistd.h>
#include <sys/stat.h>

#include "access/transam.h"
#include "access/xlog_internal.h"
#include "access/xlogrecovery.h"
#include "common/file_utils.h"
#include "common/string.h"
#include "miscadmin.h"
#include "pgstat.h"
#include "postmaster/interrupt.h"
#include "replication/slotsync.h"
#include "replication/slot.h"
#include "storage/fd.h"
#include "storage/ipc.h"
#include "storage/proc.h"
#include "storage/procarray.h"
#include "utils/builtins.h"
#include "utils/guc_hooks.h"
#include "utils/varlena.h"

/*
 * Replication slot on-disk data structure.
 */
typedef struct ReplicationSlotOnDisk
{
	/* first part of this struct needs to be version independent */

	/* data not covered by checksum */
	uint32		magic;
	pg_crc32c	checksum;

	/* data covered by checksum */
	uint32		version;
	uint32		length;

	/*
	 * The actual data in the slot that follows can differ based on the above
	 * 'version'.
	 */
	ReplicationSlotPersistentData slotdata;
} ReplicationSlotOnDisk;

/*
 * Struct for the configuration of standby_slot_names.
 *
 * Note: this must be a flat representation that can be held in a single chunk
 * of guc_malloc'd memory, so that it can be stored as the "extra" data for the
 * standby_slot_names GUC.
 */
typedef struct
{
	/* Number of slot names in the slot_names[] */
	int			nslotnames;

	/*
	 * slot_names contains 'nslotnames' consecutive null-terminated C strings.
	 */
	char		slot_names[FLEXIBLE_ARRAY_MEMBER];
} StandbySlotNamesConfigData;

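The flat "single chunk" layout described in the note above can be sketched standalone: N null-terminated strings packed after a count, walkable with strlen. `FlatNames`, `pack_names()`, and `nth_name()` are hypothetical helpers for illustration, not part of this file.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical miniature of StandbySlotNamesConfigData. */
typedef struct
{
	int			nslotnames;
	char		slot_names[];	/* nslotnames consecutive NUL-terminated strings */
} FlatNames;

/* Pack an array of names into one contiguous allocation. */
static FlatNames *
pack_names(const char **names, int n)
{
	size_t		datalen = 0;

	for (int i = 0; i < n; i++)
		datalen += strlen(names[i]) + 1;

	FlatNames  *cfg = malloc(sizeof(FlatNames) + datalen);
	char	   *p = cfg->slot_names;

	cfg->nslotnames = n;
	for (int i = 0; i < n; i++)
	{
		size_t		len = strlen(names[i]) + 1;

		memcpy(p, names[i], len);	/* copy including the NUL terminator */
		p += len;
	}
	return cfg;
}

/* Walk the packed strings the way a consumer of the "extra" data would. */
static const char *
nth_name(const FlatNames *cfg, int idx)
{
	const char *p = cfg->slot_names;

	for (int i = 0; i < idx; i++)
		p += strlen(p) + 1;		/* skip past one string and its NUL */
	return p;
}
```

Because the whole configuration lives in one allocation, it can be freed (or swapped) atomically, which is what the GUC "extra" mechanism requires.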
/*
 * Lookup table for slot invalidation causes.
 */
const char *const SlotInvalidationCauses[] = {
	[RS_INVAL_NONE] = "none",
	[RS_INVAL_WAL_REMOVED] = "wal_removed",
	[RS_INVAL_HORIZON] = "rows_removed",
	[RS_INVAL_WAL_LEVEL] = "wal_level_insufficient",
};

/* Maximum number of invalidation causes */
#define	RS_INVAL_MAX_CAUSES RS_INVAL_WAL_LEVEL

StaticAssertDecl(lengthof(SlotInvalidationCauses) == (RS_INVAL_MAX_CAUSES + 1),
				 "array length mismatch");

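The pattern above, a designated-initializer lookup table guarded by a compile-time length assertion, can be reproduced in a standalone sketch. All names below are illustrative; C11's `_Static_assert` stands in for PostgreSQL's `StaticAssertDecl`.

```c
#include <assert.h>
#include <string.h>

/* Illustrative mirror of the invalidation-cause enum. */
typedef enum
{
	INVAL_NONE,
	INVAL_WAL_REMOVED,
	INVAL_HORIZON,
	INVAL_WAL_LEVEL,
} InvalCause;

/* Designated initializers keep each string next to its enum value. */
static const char *const cause_names[] = {
	[INVAL_NONE] = "none",
	[INVAL_WAL_REMOVED] = "wal_removed",
	[INVAL_HORIZON] = "rows_removed",
	[INVAL_WAL_LEVEL] = "wal_level_insufficient",
};

#define MAX_CAUSE INVAL_WAL_LEVEL
#define lengthof(array) (sizeof(array) / sizeof((array)[0]))

/* Fails compilation if someone adds an enum value but forgets the string. */
_Static_assert(lengthof(cause_names) == MAX_CAUSE + 1,
			   "array length mismatch");
```

The assertion turns a silent NULL-entry bug into a compile error whenever the enum and the table drift apart.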
/* size of version independent data */
#define ReplicationSlotOnDiskConstantSize \
	offsetof(ReplicationSlotOnDisk, slotdata)
/* size of the part of the slot not covered by the checksum */
#define ReplicationSlotOnDiskNotChecksummedSize  \
	offsetof(ReplicationSlotOnDisk, version)
/* size of the part covered by the checksum */
#define ReplicationSlotOnDiskChecksummedSize \
	sizeof(ReplicationSlotOnDisk) - ReplicationSlotOnDiskNotChecksummedSize
/* size of the slot data that is version dependent */
#define ReplicationSlotOnDiskV2Size \
	sizeof(ReplicationSlotOnDisk) - ReplicationSlotOnDiskConstantSize

#define SLOT_MAGIC		0x1051CA1	/* format identifier */
#define SLOT_VERSION	5		/* version for new files */

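The offsetof-based size macros can be exercised on a miniature stand-in layout. `OnDisk` below mimics the shape of ReplicationSlotOnDisk (magic and checksum excluded from the checksummed region, everything from `version` onward included) but is not the real struct.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical miniature of the on-disk layout. */
typedef struct
{
	uint32_t	magic;			/* not covered by checksum */
	uint32_t	checksum;		/* not covered by checksum */
	uint32_t	version;		/* checksummed from here onward */
	uint32_t	length;
	char		slotdata[16];	/* stand-in for the version-dependent part */
} OnDisk;

/* Same carving-up as the macros above, on the stand-in struct. */
#define ConstantSize		offsetof(OnDisk, slotdata)
#define NotChecksummedSize	offsetof(OnDisk, version)
#define ChecksummedSize		(sizeof(OnDisk) - NotChecksummedSize)
#define V2Size				(sizeof(OnDisk) - ConstantSize)
```

Using offsetof rather than hand-summed field sizes keeps the region boundaries correct even if padding or field types change.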
/* Control array for replication slot management */
ReplicationSlotCtlData *ReplicationSlotCtl = NULL;

/* My backend's replication slot in the shared memory array */
ReplicationSlot *MyReplicationSlot = NULL;

/* GUC variables */
int			max_replication_slots = 10; /* the maximum number of replication
										 * slots */

/*
 * This GUC lists streaming replication standby server slot names that
 * logical WAL sender processes will wait for.
 */
char	   *standby_slot_names;

/* This is the parsed and cached configuration for standby_slot_names */
static StandbySlotNamesConfigData *standby_slot_names_config;

/*
 * Oldest LSN that has been confirmed to be flushed to the standbys
 * corresponding to the physical slots specified in the standby_slot_names
 * GUC.
 */
static XLogRecPtr ss_oldest_flush_lsn = InvalidXLogRecPtr;

static void ReplicationSlotShmemExit(int code, Datum arg);
static void ReplicationSlotDropPtr(ReplicationSlot *slot);

/* internal persistency functions */
static void RestoreSlotFromDisk(const char *name);
static void CreateSlotOnDisk(ReplicationSlot *slot);
static void SaveSlotToPath(ReplicationSlot *slot, const char *dir, int elevel);

/*
 * Report shared-memory space needed by ReplicationSlotsShmemInit.
 */
Size
ReplicationSlotsShmemSize(void)
{
	Size		size = 0;

	if (max_replication_slots == 0)
		return size;

	size = offsetof(ReplicationSlotCtlData, replication_slots);
	size = add_size(size,
					mul_size(max_replication_slots, sizeof(ReplicationSlot)));

	return size;
}

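ReplicationSlotsShmemSize relies on add_size() and mul_size(), PostgreSQL's overflow-checked Size arithmetic. A rough standalone approximation of that idea follows; the error handling is simplified to exit(1), whereas the real functions report an error through ereport.

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

typedef size_t Size;

/* Overflow-checked addition: unsigned wraparound means a + b overflowed. */
static Size
add_size(Size a, Size b)
{
	Size		result = a + b;

	if (result < a)
	{
		fprintf(stderr, "requested size overflows Size\n");
		exit(1);
	}
	return result;
}

/* Overflow-checked multiplication, tested via division to avoid wrapping. */
static Size
mul_size(Size a, Size b)
{
	if (a != 0 && b > SIZE_MAX / a)
	{
		fprintf(stderr, "requested size overflows Size\n");
		exit(1);
	}
	return a * b;
}
```

Computing `offsetof(header) + n * sizeof(element)` through these helpers means a pathological max_replication_slots setting fails loudly instead of silently allocating a wrapped-around, too-small region.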
/*
 * Allocate and initialize shared memory for replication slots.
 */
void
ReplicationSlotsShmemInit(void)
{
	bool		found;

	if (max_replication_slots == 0)
		return;

	ReplicationSlotCtl = (ReplicationSlotCtlData *)
		ShmemInitStruct("ReplicationSlot Ctl", ReplicationSlotsShmemSize(),
						&found);

	if (!found)
	{
		int			i;

		/* First time through, so initialize */
		MemSet(ReplicationSlotCtl, 0, ReplicationSlotsShmemSize());

		for (i = 0; i < max_replication_slots; i++)
		{
			ReplicationSlot *slot = &ReplicationSlotCtl->replication_slots[i];

			/* everything else is zeroed by the memset above */
			SpinLockInit(&slot->mutex);
			LWLockInitialize(&slot->io_in_progress_lock,
							 LWTRANCHE_REPLICATION_SLOT_IO);
			ConditionVariableInit(&slot->active_cv);
		}
	}
}

/*
 * Register the callback for replication slot cleanup and releasing.
 */
void
ReplicationSlotInitialize(void)
{
	before_shmem_exit(ReplicationSlotShmemExit, 0);
}

/*
 * Release and cleanup replication slots.
 */
static void
ReplicationSlotShmemExit(int code, Datum arg)
{
	/* Make sure active replication slots are released */
	if (MyReplicationSlot != NULL)
		ReplicationSlotRelease();

	/* Also cleanup all the temporary slots. */
	ReplicationSlotCleanup();
}

/*
 * Check whether the passed slot name is valid and report errors at elevel.
 *
 * Slot names may consist out of [a-z0-9_]{1,NAMEDATALEN-1} which should allow
 * the name to be used as a directory name on every supported OS.
 *
 * Returns whether the directory name is valid or not if elevel < ERROR.
 */
bool
ReplicationSlotValidateName(const char *name, int elevel)
{
	const char *cp;

	if (strlen(name) == 0)
	{
		ereport(elevel,
				(errcode(ERRCODE_INVALID_NAME),
				 errmsg("replication slot name \"%s\" is too short",
						name)));
		return false;
	}

	if (strlen(name) >= NAMEDATALEN)
	{
		ereport(elevel,
				(errcode(ERRCODE_NAME_TOO_LONG),
				 errmsg("replication slot name \"%s\" is too long",
						name)));
		return false;
	}

	for (cp = name; *cp; cp++)
	{
		if (!((*cp >= 'a' && *cp <= 'z')
			  || (*cp >= '0' && *cp <= '9')
			  || (*cp == '_')))
		{
			ereport(elevel,
					(errcode(ERRCODE_INVALID_NAME),
					 errmsg("replication slot name \"%s\" contains invalid character",
							name),
					 errhint("Replication slot names may only contain lower case letters, numbers, and the underscore character.")));
			return false;
		}
	}
	return true;
}

/*
 * Create a new replication slot and mark it as used by this backend.
 *
 * name: Name of the slot
 * db_specific: logical decoding is db specific; if the slot is going to
 *	   be used for that pass true, otherwise false.
 * two_phase: Allows decoding of prepared transactions. We allow this option
 *	   to be enabled only at the slot creation time. If we allow this option
 *	   to be changed during decoding then it is quite possible that we skip
 *	   prepare first time because this option was not enabled. Now next time
 *	   during getting changes, if the two_phase option is enabled it can skip
 *	   prepare because by that time start decoding point has been moved. So the
 *	   user will only get commit prepared.
 * failover: If enabled, allows the slot to be synced to standbys so
 *	   that logical replication can be resumed after failover.
 * synced: True if the slot is synchronized from the primary server.
 */
void
|
Introduce logical decoding.
This feature, building on previous commits, allows the write-ahead log
stream to be decoded into a series of logical changes; that is,
inserts, updates, and deletes and the transactions which contain them.
It is capable of handling decoding even across changes to the schema
of the effected tables. The output format is controlled by a
so-called "output plugin"; an example is included. To make use of
this in a real replication system, the output plugin will need to be
modified to produce output in the format appropriate to that system,
and to perform filtering.
Currently, information can be extracted from the logical decoding
system only via SQL; future commits will add the ability to stream
changes via walsender.
Andres Freund, with review and other contributions from many other
people, including Álvaro Herrera, Abhijit Menon-Sen, Peter Gheogegan,
Kevin Grittner, Robert Haas, Heikki Linnakangas, Fujii Masao, Abhijit
Menon-Sen, Michael Paquier, Simon Riggs, Craig Ringer, and Steve
Singer.
2014-03-03 22:32:18 +01:00
|
|
|
ReplicationSlotCreate(const char *name, bool db_specific,
|
Allow to enable failover property for replication slots via SQL API.
This commit adds the failover property to the replication slot. The
failover property indicates whether the slot will be synced to the standby
servers, enabling the resumption of corresponding logical replication
after failover. But note that this commit does not yet include the
capability to sync the replication slot; the subsequent commits will add
that capability.
A new optional parameter 'failover' is added to the
pg_create_logical_replication_slot() function. We will also enable to set
'failover' option for slots via the subscription commands in the
subsequent commits.
The value of the 'failover' flag is displayed as part of
pg_replication_slots view.
Author: Hou Zhijie, Shveta Malik, Ajin Cherian
Reviewed-by: Peter Smith, Bertrand Drouvot, Dilip Kumar, Masahiko Sawada, Nisha Moond, Kuroda, Hayato, Amit Kapila
Discussion: https://postgr.es/m/514f6f2f-6833-4539-39f1-96cd1e011f23@enterprisedb.com
2024-01-25 07:45:46 +01:00
|
|
|
ReplicationSlotPersistency persistency,
|
Add a slot synchronization function.
This commit introduces a new SQL function pg_sync_replication_slots()
which is used to synchronize the logical replication slots from the
primary server to the physical standby so that logical replication can be
resumed after a failover or planned switchover.
A new 'synced' flag is introduced in pg_replication_slots view, indicating
whether the slot has been synchronized from the primary server. On a
standby, synced slots cannot be dropped or consumed, and any attempt to
perform logical decoding on them will result in an error.
The logical replication slots on the primary can be synchronized to the
hot standby by using the 'failover' parameter of
pg-create-logical-replication-slot(), or by using the 'failover' option of
CREATE SUBSCRIPTION during slot creation, and then calling
pg_sync_replication_slots() on standby. For the synchronization to work,
it is mandatory to have a physical replication slot between the primary
and the standby aka 'primary_slot_name' should be configured on the
standby, and 'hot_standby_feedback' must be enabled on the standby. It is
also necessary to specify a valid 'dbname' in the 'primary_conninfo'.
If a logical slot is invalidated on the primary, then that slot on the
standby is also invalidated.
If a logical slot on the primary is valid but is invalidated on the
standby, then that slot is dropped but will be recreated on the standby in
the next pg_sync_replication_slots() call provided the slot still exists
on the primary server. It is okay to recreate such slots as long as these
are not consumable on standby (which is the case currently). This
situation may occur due to the following reasons:
- The 'max_slot_wal_keep_size' on the standby is insufficient to retain
WAL records from the restart_lsn of the slot.
- 'primary_slot_name' is temporarily reset to null and the physical slot
is removed.
The slot synchronization status on the standby can be monitored using the
'synced' column of pg_replication_slots view.
A functionality to automatically synchronize slots by a background worker
and allow logical walsenders to wait for the physical will be done in
subsequent commits.
Author: Hou Zhijie, Shveta Malik, Ajin Cherian based on an earlier version by Peter Eisentraut
Reviewed-by: Masahiko Sawada, Bertrand Drouvot, Peter Smith, Dilip Kumar, Nisha Moond, Kuroda Hayato, Amit Kapila
Discussion: https://postgr.es/m/514f6f2f-6833-4539-39f1-96cd1e011f23@enterprisedb.com
2024-02-14 05:15:36 +01:00
|
|
|
bool two_phase, bool failover, bool synced)
|
2014-02-01 04:45:17 +01:00
|
|
|
{
|
|
|
|
ReplicationSlot *slot = NULL;
|
|
|
|
int i;
|
|
|
|
|
|
|
|
Assert(MyReplicationSlot == NULL);
|
|
|
|
|
|
|
|
ReplicationSlotValidateName(name, ERROR);
|
|
|
|
|
	if (failover)
	{
		/*
		 * Do not allow users to create the failover enabled slots on the
		 * standby as we do not support sync to the cascading standby.
		 *
		 * However, failover enabled slots can be created during slot
		 * synchronization because we need to retain the same values as the
		 * remote slot.
		 */
		if (RecoveryInProgress() && !IsSyncingReplicationSlots())
			ereport(ERROR,
					errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
					errmsg("cannot enable failover for a replication slot created on the standby"));

		/*
		 * Do not allow users to create failover enabled temporary slots,
		 * because temporary slots will not be synced to the standby.
		 *
		 * However, failover enabled temporary slots can be created during
		 * slot synchronization. See the comments atop slotsync.c for details.
		 */
		if (persistency == RS_TEMPORARY && !IsSyncingReplicationSlots())
			ereport(ERROR,
					errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
					errmsg("cannot enable failover for a temporary replication slot"));
	}
	/*
	 * If some other backend ran this code concurrently with us, we'd likely
	 * both allocate the same slot, and that would be bad.  We'd also be at
	 * risk of missing a name collision.  Also, we don't want to try to create
	 * a new slot while somebody's busy cleaning up an old one, because we
	 * might both be monkeying with the same directory.
	 */
	LWLockAcquire(ReplicationSlotAllocationLock, LW_EXCLUSIVE);

	/*
	 * Check for name collision, and identify an allocatable slot.  We need to
	 * hold ReplicationSlotControlLock in shared mode for this, so that nobody
	 * else can change the in_use flags while we're looking at them.
	 */
	LWLockAcquire(ReplicationSlotControlLock, LW_SHARED);
	for (i = 0; i < max_replication_slots; i++)
	{
		ReplicationSlot *s = &ReplicationSlotCtl->replication_slots[i];

		if (s->in_use && strcmp(name, NameStr(s->data.name)) == 0)
			ereport(ERROR,
					(errcode(ERRCODE_DUPLICATE_OBJECT),
					 errmsg("replication slot \"%s\" already exists", name)));
		if (!s->in_use && slot == NULL)
			slot = s;
	}
	LWLockRelease(ReplicationSlotControlLock);

	/* If all slots are in use, we're out of luck. */
	if (slot == NULL)
		ereport(ERROR,
				(errcode(ERRCODE_CONFIGURATION_LIMIT_EXCEEDED),
				 errmsg("all replication slots are in use"),
				 errhint("Free one or increase max_replication_slots.")));

	/*
	 * Since this slot is not in use, nobody should be looking at any part of
	 * it other than the in_use field unless they're trying to allocate it.
	 * And since we hold ReplicationSlotAllocationLock, nobody except us can
	 * be doing that.  So it's safe to initialize the slot.
	 */
	Assert(!slot->in_use);
	Assert(slot->active_pid == 0);

	/* first initialize persistent data */
	memset(&slot->data, 0, sizeof(ReplicationSlotPersistentData));
	namestrcpy(&slot->data.name, name);
	slot->data.database = db_specific ? MyDatabaseId : InvalidOid;
	slot->data.persistency = persistency;
	slot->data.two_phase = two_phase;
Add support for prepared transactions to built-in logical replication.
To add support for streaming transactions at prepare time into the
built-in logical replication, we need to do the following things:
* Modify the output plugin (pgoutput) to implement the new two-phase API
callbacks, by leveraging the extended replication protocol.
* Modify the replication apply worker, to properly handle two-phase
transactions by replaying them on prepare.
* Add a new SUBSCRIPTION option "two_phase" to allow users to enable
two-phase transactions. We enable the two_phase once the initial data sync
is over.
We however must explicitly disable replication of two-phase transactions
during replication slot creation, even if the plugin supports it. We
don't need to replicate the changes accumulated during this phase,
and moreover, we don't have a replication connection open so we don't know
where to send the data anyway.
The streaming option is not allowed with this new two_phase option. This
can be done as a separate patch.
We don't allow toggling the two_phase option of a subscription because it can
lead to an inconsistent replica. For the same reason, we don't allow
refreshing the publication once two_phase is enabled for a subscription,
unless the copy_data option is false.
Author: Peter Smith, Ajin Cherian and Amit Kapila based on previous work by Nikhil Sontakke and Stas Kelvich
Reviewed-by: Amit Kapila, Sawada Masahiko, Vignesh C, Dilip Kumar, Takamichi Osumi, Greg Nancarrow
Tested-By: Haiying Tang
Discussion: https://postgr.es/m/02DA5F5E-CECE-4D9C-8B4B-418077E2C010@postgrespro.ru
Discussion: https://postgr.es/m/CAA4eK1+opiV4aFTmWWUF9h_32=HfPOW9vZASHarT0UA5oBrtGw@mail.gmail.com
2021-07-14 04:03:50 +02:00
	slot->data.two_phase_at = InvalidXLogRecPtr;
Allow to enable failover property for replication slots via SQL API.
This commit adds the failover property to the replication slot. The
failover property indicates whether the slot will be synced to the standby
servers, enabling the resumption of corresponding logical replication
after failover. But note that this commit does not yet include the
capability to sync the replication slot; the subsequent commits will add
that capability.
A new optional parameter 'failover' is added to the
pg_create_logical_replication_slot() function. We will also enable to set
'failover' option for slots via the subscription commands in the
subsequent commits.
The value of the 'failover' flag is displayed as part of
pg_replication_slots view.
Author: Hou Zhijie, Shveta Malik, Ajin Cherian
Reviewed-by: Peter Smith, Bertrand Drouvot, Dilip Kumar, Masahiko Sawada, Nisha Moond, Kuroda, Hayato, Amit Kapila
Discussion: https://postgr.es/m/514f6f2f-6833-4539-39f1-96cd1e011f23@enterprisedb.com
2024-01-25 07:45:46 +01:00
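As a usage sketch of the SQL API described above (the slot name is an illustrative assumption; this requires a running server, so it is shown as a fragment only):

```sql
-- Illustrative: create a logical slot with the failover property enabled
SELECT pg_create_logical_replication_slot('myslot', 'pgoutput',
                                          temporary := false,
                                          twophase := false,
                                          failover := true);

-- The flag is then visible in the view
SELECT slot_name, failover FROM pg_replication_slots;
```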
	slot->data.failover = failover;
	slot->data.synced = synced;

	/* and then data only present in shared memory */
	slot->just_dirtied = false;
	slot->dirty = false;
	slot->effective_xmin = InvalidTransactionId;
	slot->effective_catalog_xmin = InvalidTransactionId;
	slot->candidate_catalog_xmin = InvalidTransactionId;
	slot->candidate_xmin_lsn = InvalidXLogRecPtr;
	slot->candidate_restart_valid = InvalidXLogRecPtr;
	slot->candidate_restart_lsn = InvalidXLogRecPtr;
Flush logical slots to disk during a shutdown checkpoint if required.
It's entirely possible for a logical slot to have a confirmed_flush LSN
higher than the last value saved on disk while not being marked as dirty.
Currently, it is not a major problem but a later patch adding support for
the upgrade of slots relies on that value being properly flushed to disk.
It can also help avoid processing the same transactions again in some
boundary cases after the clean shutdown and restart. Say, we process
some transactions for which we didn't send anything downstream (the
changes got filtered) but the confirm_flush LSN is updated due to
keepalives. As we don't flush the latest value of confirm_flush LSN, it
may lead to processing the same changes again without this patch.
The approach taken by this patch has been suggested by Ashutosh Bapat.
Author: Vignesh C, Julien Rouhaud, Kuroda Hayato
Reviewed-by: Amit Kapila, Dilip Kumar, Michael Paquier, Ashutosh Bapat, Peter Smith, Hou Zhijie
Discussion: http://postgr.es/m/CAA4eK1JzJagMmb_E8D4au=GYQkxox0AfNBm1FbP7sy7t4YWXPQ@mail.gmail.com
Discussion: http://postgr.es/m/TYAPR01MB58664C81887B3AF2EB6B16E3F5939@TYAPR01MB5866.jpnprd01.prod.outlook.com
2023-09-14 05:26:13 +02:00
	slot->last_saved_confirmed_flush = InvalidXLogRecPtr;
	slot->last_inactive_time = 0;

	/*
	 * Create the slot on disk.  We haven't actually marked the slot allocated
	 * yet, so no special cleanup is required if this errors out.
	 */
	CreateSlotOnDisk(slot);

	/*
	 * We need to briefly prevent any other backend from iterating over the
	 * slots while we flip the in_use flag.  We also need to set the active
	 * flag while holding the ControlLock as otherwise a concurrent
	 * ReplicationSlotAcquire() could acquire the slot as well.
	 */
	LWLockAcquire(ReplicationSlotControlLock, LW_EXCLUSIVE);

	slot->in_use = true;

	/* We can now mark the slot active, and that makes it our slot. */
	SpinLockAcquire(&slot->mutex);
	Assert(slot->active_pid == 0);
	slot->active_pid = MyProcPid;
	SpinLockRelease(&slot->mutex);
	MyReplicationSlot = slot;

	LWLockRelease(ReplicationSlotControlLock);

	/*
	 * Create statistics entry for the new logical slot. We don't collect any
	 * stats for physical slots, so no need to create an entry for the same.
	 * See ReplicationSlotDropPtr for why we need to do this before releasing
	 * ReplicationSlotAllocationLock.
	 */
	if (SlotIsLogical(slot))
		pgstat_create_replslot(slot);

	/*
	 * Now that the slot has been marked as in_use and active, it's safe to
	 * let somebody else try to allocate a slot.
	 */
	LWLockRelease(ReplicationSlotAllocationLock);

	/* Let everybody know we've modified this slot */
	ConditionVariableBroadcast(&slot->active_cv);
}

/*
 * Search for the named replication slot.
 *
 * Return the replication slot if found, otherwise NULL.
 */
ReplicationSlot *
SearchNamedReplicationSlot(const char *name, bool need_lock)
{
	int			i;
	ReplicationSlot *slot = NULL;

	if (need_lock)
		LWLockAcquire(ReplicationSlotControlLock, LW_SHARED);

	for (i = 0; i < max_replication_slots; i++)
	{
		ReplicationSlot *s = &ReplicationSlotCtl->replication_slots[i];

		if (s->in_use && strcmp(name, NameStr(s->data.name)) == 0)
		{
			slot = s;
			break;
		}
	}

	if (need_lock)
		LWLockRelease(ReplicationSlotControlLock);

	return slot;
}

/*
 * Return the index of the replication slot in
 * ReplicationSlotCtl->replication_slots.
 *
 * This is mainly useful to have an efficient key for storing replication slot
 * stats.
 */
int
ReplicationSlotIndex(ReplicationSlot *slot)
{
	Assert(slot >= ReplicationSlotCtl->replication_slots &&
		   slot < ReplicationSlotCtl->replication_slots + max_replication_slots);

	return slot - ReplicationSlotCtl->replication_slots;
}

pgstat: Prevent stats reset from corrupting slotname by removing slotname
Previously PgStat_StatReplSlotEntry contained the slotname, which was mainly
used when writing out the stats during shutdown, to identify the slot in the
serialized data (at runtime the index in ReplicationSlotCtl->replication_slots
is used, but that can change during a restart). Unfortunately the slotname was
overwritten when the slot's stats were reset.
That turned out to only cause "real" problems if the slot was active during
the reset, triggering an assertion failure at the next
pgstat_report_replslot(). In other paths the stats were re-initialized during
pgstat_acquire_replslot().
Fix this by removing slotname from PgStat_StatReplSlotEntry. Instead we can
get the slot's name from the slot itself. Besides fixing a bug, this also is
architecturally cleaner (a name is not really statistics). This is safe
because stats, for a slot removed while shut down, will not be restored at
startup.
In 15 the slotname is not removed, but renamed, to avoid changing the stats
format. In master, bump PGSTAT_FILE_FORMAT_ID.
This commit does not contain a test for the fix. I think this can only be
tested by a tap test starting pg_recvlogical in the background and checking
pg_recvlogical's output. That type of test is notoriously hard to be reliable,
so committing it shortly before the release is wrapped seems like a bad idea.
Reported-by: Jaime Casanova <jcasanov@systemguards.com.ec>
Author: Andres Freund <andres@anarazel.de>
Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
Reviewed-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Discussion: https://postgr.es/m/YxfagaTXUNa9ggLb@ahch-to
Backpatch: 15-, where the bug was introduced in 5891c7a8ed8f
2022-10-08 18:33:23 +02:00
/*
 * If the slot at 'index' is unused, return false. Otherwise 'name' is set to
 * the slot's name and true is returned.
 *
 * This likely is only useful for pgstat_replslot.c during shutdown, in other
 * cases there are obvious TOCTOU issues.
 */
bool
ReplicationSlotName(int index, Name name)
{
	ReplicationSlot *slot;
	bool		found;

	slot = &ReplicationSlotCtl->replication_slots[index];

	/*
	 * Ensure that the slot cannot be dropped while we copy the name.  Don't
	 * need the spinlock as the name of an existing slot cannot change.
	 */
	LWLockAcquire(ReplicationSlotControlLock, LW_SHARED);
	found = slot->in_use;
	if (slot->in_use)
		namestrcpy(name, NameStr(slot->data.name));
	LWLockRelease(ReplicationSlotControlLock);

	return found;
}

/*
 * Find a previously created slot and mark it as used by this process.
 *
 * An error is raised if nowait is true and the slot is currently in use. If
 * nowait is false, we sleep until the slot is released by the owning process.
 */
void
ReplicationSlotAcquire(const char *name, bool nowait)
{
	ReplicationSlot *s;
	int			active_pid;

	Assert(name != NULL);

retry:
	Assert(MyReplicationSlot == NULL);

	LWLockAcquire(ReplicationSlotControlLock, LW_SHARED);

	/* Check if the slot exists with the given name. */
	s = SearchNamedReplicationSlot(name, false);
	if (s == NULL || !s->in_use)
	{
		LWLockRelease(ReplicationSlotControlLock);

		ereport(ERROR,
				(errcode(ERRCODE_UNDEFINED_OBJECT),
				 errmsg("replication slot \"%s\" does not exist",
						name)));
	}

	/*
	 * This is the slot we want; check if it's active under some other
	 * process.  In single user mode, we don't need this check.
	 */
	if (IsUnderPostmaster)
	{
		/*
		 * Get ready to sleep on the slot in case it is active.  (We may end
		 * up not sleeping, but we don't want to do this while holding the
		 * spinlock.)
		 */
		if (!nowait)
			ConditionVariablePrepareToSleep(&s->active_cv);

		SpinLockAcquire(&s->mutex);
		if (s->active_pid == 0)
			s->active_pid = MyProcPid;
		active_pid = s->active_pid;
		SpinLockRelease(&s->mutex);
	}
	else
		active_pid = MyProcPid;
	LWLockRelease(ReplicationSlotControlLock);

	/*
	 * If we found the slot but it's already active in another process, we
	 * wait until the owning process signals us that it's been released, or
	 * error out.
	 */
	if (active_pid != MyProcPid)
	{
		if (!nowait)
		{
			/* Wait here until we get signaled, and then restart */
			ConditionVariableSleep(&s->active_cv,
								   WAIT_EVENT_REPLICATION_SLOT_DROP);
			ConditionVariableCancelSleep();
			goto retry;
		}

		ereport(ERROR,
				(errcode(ERRCODE_OBJECT_IN_USE),
				 errmsg("replication slot \"%s\" is active for PID %d",
						NameStr(s->data.name), active_pid)));
	}
	else if (!nowait)
		ConditionVariableCancelSleep(); /* no sleep needed after all */

	/* Let everybody know we've modified this slot */
	ConditionVariableBroadcast(&s->active_cv);

	/* We made this slot active, so it's ours now. */
	MyReplicationSlot = s;

	/*
	 * The call to pgstat_acquire_replslot() protects against stats for a
	 * different slot, from before a restart or such, being present during
	 * pgstat_report_replslot().
	 */
	if (SlotIsLogical(s))
		pgstat_acquire_replslot(s);

	/* Reset the last inactive time as the slot is active now. */
	SpinLockAcquire(&s->mutex);
	s->last_inactive_time = 0;
	SpinLockRelease(&s->mutex);

	if (am_walsender)
	{
		ereport(log_replication_commands ? LOG : DEBUG1,
				SlotIsLogical(s)
				? errmsg("acquired logical replication slot \"%s\"",
						 NameStr(s->data.name))
				: errmsg("acquired physical replication slot \"%s\"",
						 NameStr(s->data.name)));
	}
}

/*
 * Release the replication slot that this backend considers to own.
 *
 * This or another backend can re-acquire the slot later.
 * Resources this slot requires will be preserved.
 */
void
ReplicationSlotRelease(void)
{
	ReplicationSlot *slot = MyReplicationSlot;
	char	   *slotname = NULL;	/* keep compiler quiet */
	bool		is_logical = false; /* keep compiler quiet */
	TimestampTz now = 0;

	Assert(slot != NULL && slot->active_pid != 0);

	if (am_walsender)
	{
		slotname = pstrdup(NameStr(slot->data.name));
		is_logical = SlotIsLogical(slot);
	}

Introduce logical decoding.
This feature, building on previous commits, allows the write-ahead log
stream to be decoded into a series of logical changes; that is,
inserts, updates, and deletes and the transactions which contain them.
It is capable of handling decoding even across changes to the schema
of the affected tables. The output format is controlled by a
so-called "output plugin"; an example is included. To make use of
this in a real replication system, the output plugin will need to be
modified to produce output in the format appropriate to that system,
and to perform filtering.
Currently, information can be extracted from the logical decoding
system only via SQL; future commits will add the ability to stream
changes via walsender.
Andres Freund, with review and other contributions from many other
people, including Álvaro Herrera, Abhijit Menon-Sen, Peter Gheogegan,
Kevin Grittner, Robert Haas, Heikki Linnakangas, Fujii Masao, Abhijit
Menon-Sen, Michael Paquier, Simon Riggs, Craig Ringer, and Steve
Singer.
2014-03-03 22:32:18 +01:00
	if (slot->data.persistency == RS_EPHEMERAL)
	{
		/*
		 * Delete the slot. There is no !PANIC case where this is allowed to
		 * fail, all that may happen is an incomplete cleanup of the on-disk
		 * data.
		 */
		ReplicationSlotDropAcquired();
	}

Preserve required !catalog tuples while computing initial decoding snapshot.
The logical decoding machinery already preserved all the required
catalog tuples, which is sufficient in the course of normal logical
decoding, but did not guarantee that non-catalog tuples were preserved
during computation of the initial snapshot when creating a slot over
the replication protocol.
This could cause a corrupted initial snapshot being exported. The
time window for issues is usually not terribly large, but on a busy
server it's perfectly possible to hit it. Ongoing decoding is not
affected by this bug.
To avoid increased overhead for the SQL API, only retain additional
tuples when a logical slot is being created over the replication
protocol. To do so this commit changes the signature of
CreateInitDecodingContext(), but it seems unlikely that it's being
used in an extension, so that's probably ok.
In a drive-by fix, fix handling of
ReplicationSlotsComputeRequiredXmin's already_locked argument, which
should only apply to ProcArrayLock, not ReplicationSlotControlLock.
Reported-By: Erik Rijkers
Analyzed-By: Petr Jelinek
Author: Petr Jelinek, heavily editorialized by Andres Freund
Reviewed-By: Andres Freund
Discussion: https://postgr.es/m/9a897b86-46e1-9915-ee4c-da02e4ff6a95@2ndquadrant.com
Backport: 9.4, where logical decoding was introduced.
2017-04-24 05:41:29 +02:00
	/*
	 * If slot needed to temporarily restrain both data and catalog xmin to
	 * create the catalog snapshot, remove that temporary constraint.
	 * Snapshots can only be exported while the initial snapshot is still
	 * acquired.
	 */
	if (!TransactionIdIsValid(slot->data.xmin) &&
		TransactionIdIsValid(slot->effective_xmin))
	{
		SpinLockAcquire(&slot->mutex);
		slot->effective_xmin = InvalidTransactionId;
		SpinLockRelease(&slot->mutex);
		ReplicationSlotsComputeRequiredXmin(false);
	}

	/*
	 * Set the last inactive time after marking the slot inactive. We don't
	 * set it for the slots currently being synced from the primary to the
	 * standby because such slots are typically inactive as decoding is not
	 * allowed on those.
	 */
	if (!(RecoveryInProgress() && slot->data.synced))
		now = GetCurrentTimestamp();

	if (slot->data.persistency == RS_PERSISTENT)
	{
		/*
		 * Mark persistent slot inactive.  We're not freeing it, just
		 * disconnecting, but wake up others that may be waiting for it.
		 */
		SpinLockAcquire(&slot->mutex);
		slot->active_pid = 0;
		slot->last_inactive_time = now;
		SpinLockRelease(&slot->mutex);
		ConditionVariableBroadcast(&slot->active_cv);
	}
	else
	{
		SpinLockAcquire(&slot->mutex);
		slot->last_inactive_time = now;
		SpinLockRelease(&slot->mutex);
	}

	MyReplicationSlot = NULL;

	/* might not have been set when we've been a plain slot */
	LWLockAcquire(ProcArrayLock, LW_EXCLUSIVE);
	MyProc->statusFlags &= ~PROC_IN_LOGICAL_DECODING;
	ProcGlobal->statusFlags[MyProc->pgxactoff] = MyProc->statusFlags;
	LWLockRelease(ProcArrayLock);

	if (am_walsender)
	{
		ereport(log_replication_commands ? LOG : DEBUG1,
				is_logical
				? errmsg("released logical replication slot \"%s\"",
						 slotname)
				: errmsg("released physical replication slot \"%s\"",
						 slotname));

		pfree(slotname);
	}
}


/*
 * Cleanup all temporary slots created in current session.
 */
void
ReplicationSlotCleanup(void)
{
	int			i;

	Assert(MyReplicationSlot == NULL);

restart:
	LWLockAcquire(ReplicationSlotControlLock, LW_SHARED);
	for (i = 0; i < max_replication_slots; i++)
	{
		ReplicationSlot *s = &ReplicationSlotCtl->replication_slots[i];

		if (!s->in_use)
			continue;

		SpinLockAcquire(&s->mutex);
		if (s->active_pid == MyProcPid)
		{
			Assert(s->data.persistency == RS_TEMPORARY);
			SpinLockRelease(&s->mutex);
			LWLockRelease(ReplicationSlotControlLock);	/* avoid deadlock */

			ReplicationSlotDropPtr(s);

			ConditionVariableBroadcast(&s->active_cv);
			goto restart;
		}
		else
			SpinLockRelease(&s->mutex);
	}

	LWLockRelease(ReplicationSlotControlLock);
}

/*
 * Permanently drop replication slot identified by the passed in name.
 */
void
ReplicationSlotDrop(const char *name, bool nowait)
{
	Assert(MyReplicationSlot == NULL);

	ReplicationSlotAcquire(name, nowait);

	/*
	 * Do not allow users to drop the slots which are currently being synced
	 * from the primary to the standby.
	 */
	if (RecoveryInProgress() && MyReplicationSlot->data.synced)
		ereport(ERROR,
				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
				errmsg("cannot drop replication slot \"%s\"", name),
				errdetail("This slot is being synced from the primary server."));

	ReplicationSlotDropAcquired();
}

/*
 * Change the definition of the slot identified by the specified name.
 */
void
ReplicationSlotAlter(const char *name, bool failover)
{
	Assert(MyReplicationSlot == NULL);

	ReplicationSlotAcquire(name, false);

	if (SlotIsPhysical(MyReplicationSlot))
		ereport(ERROR,
				errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
				errmsg("cannot use %s with a physical replication slot",
					   "ALTER_REPLICATION_SLOT"));

	if (RecoveryInProgress())
	{
		/*
		 * Do not allow users to alter the slots which are currently being
		 * synced from the primary to the standby.
		 */
		if (MyReplicationSlot->data.synced)
			ereport(ERROR,
					errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
					errmsg("cannot alter replication slot \"%s\"", name),
					errdetail("This slot is being synced from the primary server."));

		/*
		 * Do not allow users to enable failover on the standby as we do not
		 * support sync to the cascading standby.
		 */
		if (failover)
			ereport(ERROR,
					errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
					errmsg("cannot enable failover for a replication slot"
						   " on the standby"));
	}

	/*
	 * Do not allow users to enable failover for temporary slots as we do not
	 * support syncing temporary slots to the standby.
	 */
	if (failover && MyReplicationSlot->data.persistency == RS_TEMPORARY)
		ereport(ERROR,
				errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
				errmsg("cannot enable failover for a temporary replication slot"));

	if (MyReplicationSlot->data.failover != failover)
	{
		SpinLockAcquire(&MyReplicationSlot->mutex);
		MyReplicationSlot->data.failover = failover;
		SpinLockRelease(&MyReplicationSlot->mutex);

		ReplicationSlotMarkDirty();
		ReplicationSlotSave();
	}

	ReplicationSlotRelease();
}

/*
 * Permanently drop the currently acquired replication slot.
 */
void
ReplicationSlotDropAcquired(void)
{
	ReplicationSlot *slot = MyReplicationSlot;

	Assert(MyReplicationSlot != NULL);

	/* slot isn't acquired anymore */
	MyReplicationSlot = NULL;

	ReplicationSlotDropPtr(slot);
}

/*
 * Permanently drop the replication slot which will be released by the point
 * this function returns.
 */
static void
ReplicationSlotDropPtr(ReplicationSlot *slot)
{
	char		path[MAXPGPATH];
	char		tmppath[MAXPGPATH];

	/*
	 * If some other backend ran this code concurrently with us, we might try
	 * to delete a slot with a certain name while someone else was trying to
	 * create a slot with the same name.
	 */
	LWLockAcquire(ReplicationSlotAllocationLock, LW_EXCLUSIVE);

	/* Generate pathnames. */
	sprintf(path, "pg_replslot/%s", NameStr(slot->data.name));
	sprintf(tmppath, "pg_replslot/%s.tmp", NameStr(slot->data.name));

	/*
	 * Rename the slot directory on disk, so that we'll no longer recognize
	 * this as a valid slot.  Note that if this fails, we've got to mark the
	 * slot inactive before bailing out.  If we're dropping an ephemeral or a
	 * temporary slot, we better never fail hard as the caller won't expect
	 * the slot to survive and this might get called during error handling.
	 */
	if (rename(path, tmppath) == 0)
	{
		/*
		 * We need to fsync() the directory we just renamed and its parent to
		 * make sure that our changes are on disk in a crash-safe fashion.  If
		 * fsync() fails, we can't be sure whether the changes are on disk or
		 * not.  For now, we handle that by panicking;
		 * StartupReplicationSlots() will try to straighten it out after
		 * restart.
		 */
		START_CRIT_SECTION();
		fsync_fname(tmppath, true);
		fsync_fname("pg_replslot", true);
		END_CRIT_SECTION();
	}
	else
	{
		bool		fail_softly = slot->data.persistency != RS_PERSISTENT;

		SpinLockAcquire(&slot->mutex);
		slot->active_pid = 0;
		SpinLockRelease(&slot->mutex);

		/* wake up anyone waiting on this slot */
		ConditionVariableBroadcast(&slot->active_cv);

		ereport(fail_softly ? WARNING : ERROR,
				(errcode_for_file_access(),
				 errmsg("could not rename file \"%s\" to \"%s\": %m",
						path, tmppath)));
	}

	/*
	 * The slot is definitely gone.  Lock out concurrent scans of the array
	 * long enough to kill it.  It's OK to clear the active PID here without
	 * grabbing the mutex because nobody else can be scanning the array here,
	 * and nobody can be attached to this slot and thus access it without
	 * scanning the array.
	 *
	 * Also wake up processes waiting for it.
	 */
	LWLockAcquire(ReplicationSlotControlLock, LW_EXCLUSIVE);
	slot->active_pid = 0;
	slot->in_use = false;
	LWLockRelease(ReplicationSlotControlLock);
	ConditionVariableBroadcast(&slot->active_cv);

	/*
	 * Slot is dead and doesn't prevent resource removal anymore, recompute
	 * limits.
	 */
	ReplicationSlotsComputeRequiredXmin(false);
	ReplicationSlotsComputeRequiredLSN();

	/*
	 * If removing the directory fails, the worst thing that will happen is
	 * that the user won't be able to create a new slot with the same name
	 * until the next server restart.  We warn about it, but that's all.
	 */
	if (!rmtree(tmppath, true))
		ereport(WARNING,
				(errmsg("could not remove directory \"%s\"", tmppath)));

	/*
	 * Drop the statistics entry for the replication slot.  Do this while
	 * holding ReplicationSlotAllocationLock so that we don't drop a
	 * statistics entry for another slot with the same name just created in
	 * another session.
	 */
	if (SlotIsLogical(slot))
		pgstat_drop_replslot(slot);

	/*
	 * We release this at the very end, so that nobody starts trying to create
	 * a slot while we're still cleaning up the detritus of the old one.
	 */
	LWLockRelease(ReplicationSlotAllocationLock);
}

/*
 * Serialize the currently acquired slot's state from memory to disk, thereby
 * guaranteeing the current state will survive a crash.
 */
void
ReplicationSlotSave(void)
{
	char		path[MAXPGPATH];

	Assert(MyReplicationSlot != NULL);

	sprintf(path, "pg_replslot/%s", NameStr(MyReplicationSlot->data.name));
	SaveSlotToPath(MyReplicationSlot, path, ERROR);
}

/*
 * Signal that it would be useful if the currently acquired slot would be
 * flushed out to disk.
 *
 * Note that the actual flush to disk can be delayed for a long time; if
 * required for correctness, explicitly do a ReplicationSlotSave().
 */
void
ReplicationSlotMarkDirty(void)
{
	ReplicationSlot *slot = MyReplicationSlot;

	Assert(MyReplicationSlot != NULL);

	SpinLockAcquire(&slot->mutex);
	MyReplicationSlot->just_dirtied = true;
	MyReplicationSlot->dirty = true;
	SpinLockRelease(&slot->mutex);
}

/*
 * Convert a slot that's marked as RS_EPHEMERAL or RS_TEMPORARY to a
|
|
|
|
* RS_PERSISTENT slot, guaranteeing it will be there after an eventual crash.
|
Introduce logical decoding.
This feature, building on previous commits, allows the write-ahead log
stream to be decoded into a series of logical changes; that is,
inserts, updates, and deletes and the transactions which contain them.
It is capable of handling decoding even across changes to the schema
of the effected tables. The output format is controlled by a
so-called "output plugin"; an example is included. To make use of
this in a real replication system, the output plugin will need to be
modified to produce output in the format appropriate to that system,
and to perform filtering.
Currently, information can be extracted from the logical decoding
system only via SQL; future commits will add the ability to stream
changes via walsender.
Andres Freund, with review and other contributions from many other
people, including Álvaro Herrera, Abhijit Menon-Sen, Peter Gheogegan,
Kevin Grittner, Robert Haas, Heikki Linnakangas, Fujii Masao, Abhijit
Menon-Sen, Michael Paquier, Simon Riggs, Craig Ringer, and Steve
Singer.
2014-03-03 22:32:18 +01:00
|
|
|
*/
void
ReplicationSlotPersist(void)
{
	ReplicationSlot *slot = MyReplicationSlot;

	Assert(slot != NULL);
	Assert(slot->data.persistency != RS_PERSISTENT);

	SpinLockAcquire(&slot->mutex);
	slot->data.persistency = RS_PERSISTENT;
	SpinLockRelease(&slot->mutex);

	ReplicationSlotMarkDirty();
	ReplicationSlotSave();
}

/*
 * Compute the oldest xmin across all slots and store it in the ProcArray.
 *
 * If already_locked is true, ProcArrayLock has already been acquired
 * exclusively.
 */
void
ReplicationSlotsComputeRequiredXmin(bool already_locked)
{
	int			i;
	TransactionId agg_xmin = InvalidTransactionId;
	TransactionId agg_catalog_xmin = InvalidTransactionId;

	Assert(ReplicationSlotCtl != NULL);

	LWLockAcquire(ReplicationSlotControlLock, LW_SHARED);

	for (i = 0; i < max_replication_slots; i++)
	{
		ReplicationSlot *s = &ReplicationSlotCtl->replication_slots[i];
		TransactionId effective_xmin;
		TransactionId effective_catalog_xmin;
		bool		invalidated;

		if (!s->in_use)
			continue;

		SpinLockAcquire(&s->mutex);
		effective_xmin = s->effective_xmin;
		effective_catalog_xmin = s->effective_catalog_xmin;
		invalidated = s->data.invalidated != RS_INVAL_NONE;
		SpinLockRelease(&s->mutex);

		/* invalidated slots need not apply */
		if (invalidated)
			continue;

		/* check the data xmin */
		if (TransactionIdIsValid(effective_xmin) &&
			(!TransactionIdIsValid(agg_xmin) ||
			 TransactionIdPrecedes(effective_xmin, agg_xmin)))
			agg_xmin = effective_xmin;

		/* check the catalog xmin */
		if (TransactionIdIsValid(effective_catalog_xmin) &&
			(!TransactionIdIsValid(agg_catalog_xmin) ||
			 TransactionIdPrecedes(effective_catalog_xmin, agg_catalog_xmin)))
			agg_catalog_xmin = effective_catalog_xmin;
	}

	LWLockRelease(ReplicationSlotControlLock);

	ProcArraySetReplicationSlotXmin(agg_xmin, agg_catalog_xmin, already_locked);
}

/*
 * Compute the oldest restart LSN across all slots and inform xlog module.
 *
 * Note: while max_slot_wal_keep_size is theoretically relevant for this
 * purpose, we don't try to account for that, because this module doesn't
 * know what to compare against.
 */
void
ReplicationSlotsComputeRequiredLSN(void)
{
	int			i;
	XLogRecPtr	min_required = InvalidXLogRecPtr;

	Assert(ReplicationSlotCtl != NULL);

	LWLockAcquire(ReplicationSlotControlLock, LW_SHARED);
	for (i = 0; i < max_replication_slots; i++)
	{
		ReplicationSlot *s = &ReplicationSlotCtl->replication_slots[i];
		XLogRecPtr	restart_lsn;
		bool		invalidated;

		if (!s->in_use)
			continue;

		SpinLockAcquire(&s->mutex);
		restart_lsn = s->data.restart_lsn;
		invalidated = s->data.invalidated != RS_INVAL_NONE;
		SpinLockRelease(&s->mutex);

		/* invalidated slots need not apply */
		if (invalidated)
			continue;

		if (restart_lsn != InvalidXLogRecPtr &&
			(min_required == InvalidXLogRecPtr ||
			 restart_lsn < min_required))
			min_required = restart_lsn;
	}
	LWLockRelease(ReplicationSlotControlLock);

	XLogSetReplicationSlotMinimumLSN(min_required);
}
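
/*
 * Illustrative call pattern (an editorial sketch, not code from this file):
 * code paths that can lower or clear a slot's xmin or restart_lsn, for
 * example when a slot is released or dropped, are expected to refresh both
 * global horizons afterwards:
 *
 *		ReplicationSlotsComputeRequiredXmin(false);
 *		ReplicationSlotsComputeRequiredLSN();
 */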

/*
 * Compute the oldest WAL LSN required by *logical* decoding slots.
 *
 * Returns InvalidXLogRecPtr if logical decoding is disabled or no logical
 * slots exist.
 *
 * NB: this returns a value >= ReplicationSlotsComputeRequiredLSN(), since it
 * ignores physical replication slots.
 *
 * The results aren't required frequently, so we don't maintain a precomputed
 * value like we do for ComputeRequiredLSN() and ComputeRequiredXmin().
 */
XLogRecPtr
ReplicationSlotsComputeLogicalRestartLSN(void)
{
	XLogRecPtr	result = InvalidXLogRecPtr;
	int			i;

	if (max_replication_slots <= 0)
		return InvalidXLogRecPtr;

	LWLockAcquire(ReplicationSlotControlLock, LW_SHARED);

	for (i = 0; i < max_replication_slots; i++)
	{
		ReplicationSlot *s;
		XLogRecPtr	restart_lsn;
		bool		invalidated;

		s = &ReplicationSlotCtl->replication_slots[i];

		/* cannot change while ReplicationSlotControlLock is held */
		if (!s->in_use)
			continue;

		/* we're only interested in logical slots */
		if (!SlotIsLogical(s))
			continue;

		/* read once, it's ok if it increases while we're checking */
		SpinLockAcquire(&s->mutex);
		restart_lsn = s->data.restart_lsn;
		invalidated = s->data.invalidated != RS_INVAL_NONE;
		SpinLockRelease(&s->mutex);

		/* invalidated slots need not apply */
		if (invalidated)
			continue;

		if (restart_lsn == InvalidXLogRecPtr)
			continue;

		if (result == InvalidXLogRecPtr ||
			restart_lsn < result)
			result = restart_lsn;
	}

	LWLockRelease(ReplicationSlotControlLock);

	return result;
}

/*
 * ReplicationSlotsCountDBSlots -- count the number of slots that refer to the
 * passed database oid.
 *
 * Returns true if there are any slots referencing the database. *nslots will
 * be set to the absolute number of slots in the database, *nactive to ones
 * currently active.
 */
bool
ReplicationSlotsCountDBSlots(Oid dboid, int *nslots, int *nactive)
{
	int			i;

	*nslots = *nactive = 0;

	if (max_replication_slots <= 0)
		return false;

	LWLockAcquire(ReplicationSlotControlLock, LW_SHARED);
	for (i = 0; i < max_replication_slots; i++)
	{
		ReplicationSlot *s;

		s = &ReplicationSlotCtl->replication_slots[i];

		/* cannot change while ReplicationSlotControlLock is held */
		if (!s->in_use)
			continue;

		/* only logical slots are database specific, skip */
		if (!SlotIsLogical(s))
			continue;

		/* not our database, skip */
		if (s->data.database != dboid)
			continue;

		/* NB: intentionally counting invalidated slots */

		/* count slots with spinlock held */
		SpinLockAcquire(&s->mutex);
		(*nslots)++;
		if (s->active_pid != 0)
changes via walsender.
Andres Freund, with review and other contributions from many other
people, including Álvaro Herrera, Abhijit Menon-Sen, Peter Gheogegan,
Kevin Grittner, Robert Haas, Heikki Linnakangas, Fujii Masao, Abhijit
Menon-Sen, Michael Paquier, Simon Riggs, Craig Ringer, and Steve
Singer.
2014-03-03 22:32:18 +01:00
|
|
|
(*nactive)++;
|
|
|
|
SpinLockRelease(&s->mutex);
|
|
|
|
}
|
|
|
|
LWLockRelease(ReplicationSlotControlLock);
|
|
|
|
|
|
|
|
if (*nslots > 0)
|
|
|
|
return true;
|
|
|
|
return false;
|
|
|
|
}

/*
 * ReplicationSlotsDropDBSlots -- Drop all db-specific slots relating to the
 * passed database oid. The caller should hold an exclusive lock on the
 * pg_database oid for the database to prevent creation of new slots on the db
 * or replay from existing slots.
 *
 * Another session that concurrently acquires an existing slot on the target DB
 * (most likely to drop it) may cause this function to ERROR. If that happens
 * it may have dropped some but not all slots.
 *
 * This routine isn't as efficient as it could be - but we don't drop
 * databases often, especially databases with lots of slots.
 */
void
ReplicationSlotsDropDBSlots(Oid dboid)
{
	int			i;

	if (max_replication_slots <= 0)
		return;

restart:
	LWLockAcquire(ReplicationSlotControlLock, LW_SHARED);
	for (i = 0; i < max_replication_slots; i++)
	{
		ReplicationSlot *s;
		char	   *slotname;
		int			active_pid;

		s = &ReplicationSlotCtl->replication_slots[i];

		/* cannot change while ReplicationSlotControlLock is held */
		if (!s->in_use)
			continue;

		/* only logical slots are database specific, skip */
		if (!SlotIsLogical(s))
			continue;

		/* not our database, skip */
		if (s->data.database != dboid)
			continue;

		/* NB: intentionally including invalidated slots */

		/* acquire slot, so ReplicationSlotDropAcquired can be reused */
		SpinLockAcquire(&s->mutex);
		/* can't change while ReplicationSlotControlLock is held */
		slotname = NameStr(s->data.name);
		active_pid = s->active_pid;
		if (active_pid == 0)
		{
			MyReplicationSlot = s;
			s->active_pid = MyProcPid;
		}
		SpinLockRelease(&s->mutex);

		/*
		 * Even though we hold an exclusive lock on the database object a
		 * logical slot for that DB can still be active, e.g. if it's
		 * concurrently being dropped by a backend connected to another DB.
		 *
		 * That's fairly unlikely in practice, so we'll just bail out.
		 *
		 * The slot sync worker holds a shared lock on the database before
		 * operating on synced logical slots to avoid conflict with the drop
		 * happening here. The persistent synced slots are thus safe but there
		 * is a possibility that the slot sync worker has created a temporary
		 * slot (which stays active even on release) and we are trying to drop
		 * that here. In practice, the chances of hitting this scenario are
		 * less as during slot synchronization, the temporary slot is
		 * immediately converted to persistent and thus is safe due to the
		 * shared lock taken on the database. So, we'll just bail out in such
		 * a case.
		 *
		 * XXX: We can consider shutting down the slot sync worker before
		 * trying to drop synced temporary slots here.
		 */
		if (active_pid)
			ereport(ERROR,
					(errcode(ERRCODE_OBJECT_IN_USE),
					 errmsg("replication slot \"%s\" is active for PID %d",
							slotname, active_pid)));

		/*
		 * To avoid duplicating ReplicationSlotDropAcquired() and to avoid
		 * holding ReplicationSlotControlLock over filesystem operations,
		 * release ReplicationSlotControlLock and use
		 * ReplicationSlotDropAcquired.
		 *
		 * As that means the set of slots could change, restart scan from the
		 * beginning each time we release the lock.
		 */
		LWLockRelease(ReplicationSlotControlLock);
		ReplicationSlotDropAcquired();
		goto restart;
	}
	LWLockRelease(ReplicationSlotControlLock);
}

/*
 * Check whether the server's configuration supports using replication
 * slots.
 */
void
CheckSlotRequirements(void)
{
	/*
	 * NB: Adding a new requirement likely means that RestoreSlotFromDisk()
	 * needs the same check.
	 */

	if (max_replication_slots == 0)
		ereport(ERROR,
				(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
				 errmsg("replication slots can only be used if max_replication_slots > 0")));

	if (wal_level < WAL_LEVEL_REPLICA)
		ereport(ERROR,
				(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
				 errmsg("replication slots can only be used if wal_level >= replica")));
}

/*
 * Check whether the user has privilege to use replication slots.
 */
void
CheckSlotPermissions(void)
{
	if (!has_rolreplication(GetUserId()))
		ereport(ERROR,
				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
				 errmsg("permission denied to use replication slots"),
				 errdetail("Only roles with the %s attribute may use replication slots.",
						   "REPLICATION")));
}

/*
 * Reserve WAL for the currently active slot.
 *
 * Compute and set restart_lsn in a manner that's appropriate for the type of
 * the slot and concurrency safe.
 */
void
ReplicationSlotReserveWal(void)
{
	ReplicationSlot *slot = MyReplicationSlot;

	Assert(slot != NULL);
	Assert(slot->data.restart_lsn == InvalidXLogRecPtr);

	/*
	 * The replication slot mechanism is used to prevent removal of required
	 * WAL. As there is no interlock between this routine and checkpoints, WAL
	 * segments could concurrently be removed when a now stale return value of
	 * ReplicationSlotsComputeRequiredLSN() is used. In the unlikely case that
	 * this happens we'll just retry.
	 */
	while (true)
	{
		XLogSegNo	segno;
		XLogRecPtr	restart_lsn;

		/*
		 * For logical slots log a standby snapshot and start logical decoding
		 * at exactly that position. That allows the slot to start up more
		 * quickly. But on a standby we cannot do WAL writes, so just use the
		 * replay pointer; effectively, an attempt to create a logical slot on
		 * standby will cause it to wait for an xl_running_xact record to be
		 * logged independently on the primary, so that a snapshot can be
		 * built using the record.
		 *
		 * None of this is needed (or indeed helpful) for physical slots as
		 * they'll start replay at the last logged checkpoint anyway. Instead
		 * return the location of the last redo LSN. While that slightly
		 * increases the chance that we have to retry, it's where a base
		 * backup has to start replay at.
		 */
		if (SlotIsPhysical(slot))
			restart_lsn = GetRedoRecPtr();
		else if (RecoveryInProgress())
			restart_lsn = GetXLogReplayRecPtr(NULL);
		else
			restart_lsn = GetXLogInsertRecPtr();

		SpinLockAcquire(&slot->mutex);
		slot->data.restart_lsn = restart_lsn;
		SpinLockRelease(&slot->mutex);

		/* prevent WAL removal as fast as possible */
		ReplicationSlotsComputeRequiredLSN();

		/*
		 * If all required WAL is still there, great, otherwise retry. The
		 * slot should prevent further removal of WAL, unless there's a
		 * concurrent ReplicationSlotsComputeRequiredLSN() after we've written
		 * the new restart_lsn above, so normally we should never need to loop
		 * more than twice.
		 */
		XLByteToSeg(slot->data.restart_lsn, segno, wal_segment_size);
		if (XLogGetLastRemovedSegno() < segno)
			break;
	}

	if (!RecoveryInProgress() && SlotIsLogical(slot))
	{
		XLogRecPtr	flushptr;

		/* make sure we have enough information to start */
		flushptr = LogStandbySnapshot();

		/* and make sure it's fsynced to disk */
		XLogFlush(flushptr);
	}
}

/*
 * Report that replication slot needs to be invalidated
 */
static void
ReportSlotInvalidation(ReplicationSlotInvalidationCause cause,
					   bool terminating,
					   int pid,
					   NameData slotname,
					   XLogRecPtr restart_lsn,
					   XLogRecPtr oldestLSN,
					   TransactionId snapshotConflictHorizon)
{
	StringInfoData err_detail;
	bool		hint = false;

	initStringInfo(&err_detail);

	switch (cause)
	{
		case RS_INVAL_WAL_REMOVED:
			{
				unsigned long long ex = oldestLSN - restart_lsn;

				hint = true;
				appendStringInfo(&err_detail,
								 ngettext("The slot's restart_lsn %X/%X exceeds the limit by %llu byte.",
										  "The slot's restart_lsn %X/%X exceeds the limit by %llu bytes.",
										  ex),
								 LSN_FORMAT_ARGS(restart_lsn),
								 ex);
				break;
			}
		case RS_INVAL_HORIZON:
			appendStringInfo(&err_detail, _("The slot conflicted with xid horizon %u."),
							 snapshotConflictHorizon);
			break;

		case RS_INVAL_WAL_LEVEL:
			appendStringInfoString(&err_detail, _("Logical decoding on standby requires wal_level >= logical on the primary server."));
			break;

		case RS_INVAL_NONE:
			pg_unreachable();
	}

	ereport(LOG,
			terminating ?
			errmsg("terminating process %d to release replication slot \"%s\"",
				   pid, NameStr(slotname)) :
			errmsg("invalidating obsolete replication slot \"%s\"",
				   NameStr(slotname)),
			errdetail_internal("%s", err_detail.data),
			hint ? errhint("You might need to increase %s.", "max_slot_wal_keep_size") : 0);

	pfree(err_detail.data);
}

/*
 * Helper for InvalidateObsoleteReplicationSlots
 *
 * Acquires the given slot and marks it invalid, if necessary and possible.
 *
 * Returns whether ReplicationSlotControlLock was released in the interim (and
 * in that case we're not holding the lock at return, otherwise we are).
 *
 * Sets *invalidated true if the slot was invalidated. (Untouched otherwise.)
 *
 * This is inherently racy, because we release the LWLock
 * for syscalls, so caller must restart if we return true.
 */
static bool
InvalidatePossiblyObsoleteSlot(ReplicationSlotInvalidationCause cause,
							   ReplicationSlot *s,
							   XLogRecPtr oldestLSN,
							   Oid dboid, TransactionId snapshotConflictHorizon,
							   bool *invalidated)
{
	int			last_signaled_pid = 0;
	bool		released_lock = false;
	bool		terminated = false;
	TransactionId initial_effective_xmin = InvalidTransactionId;
	TransactionId initial_catalog_effective_xmin = InvalidTransactionId;
	XLogRecPtr	initial_restart_lsn = InvalidXLogRecPtr;
	ReplicationSlotInvalidationCause invalidation_cause_prev PG_USED_FOR_ASSERTS_ONLY = RS_INVAL_NONE;

	for (;;)
	{
		XLogRecPtr	restart_lsn;
		NameData	slotname;
		int			active_pid = 0;
		ReplicationSlotInvalidationCause invalidation_cause = RS_INVAL_NONE;

		Assert(LWLockHeldByMeInMode(ReplicationSlotControlLock, LW_SHARED));

		if (!s->in_use)
		{
			if (released_lock)
				LWLockRelease(ReplicationSlotControlLock);
			break;
		}

		/*
		 * Check if the slot needs to be invalidated. If it needs to be
		 * invalidated, and is not currently acquired, acquire it and mark it
		 * as having been invalidated.  We do this with the spinlock held to
		 * avoid race conditions -- for example the restart_lsn could move
		 * forward, or the slot could be dropped.
		 */
		SpinLockAcquire(&s->mutex);

		restart_lsn = s->data.restart_lsn;

		/* we do nothing if the slot is already invalid */
		if (s->data.invalidated == RS_INVAL_NONE)
		{
			/*
			 * The slot's mutex will be released soon, and it is possible that
			 * those values change since the process holding the slot has been
			 * terminated (if any), so record them here to ensure that we
			 * would report the correct invalidation cause.
			 */
			if (!terminated)
			{
				initial_restart_lsn = s->data.restart_lsn;
				initial_effective_xmin = s->effective_xmin;
				initial_catalog_effective_xmin = s->effective_catalog_xmin;
			}

			switch (cause)
			{
				case RS_INVAL_WAL_REMOVED:
					if (initial_restart_lsn != InvalidXLogRecPtr &&
						initial_restart_lsn < oldestLSN)
						invalidation_cause = cause;
					break;
				case RS_INVAL_HORIZON:
					if (!SlotIsLogical(s))
						break;
					/* invalid DB oid signals a shared relation */
					if (dboid != InvalidOid && dboid != s->data.database)
						break;
					if (TransactionIdIsValid(initial_effective_xmin) &&
						TransactionIdPrecedesOrEquals(initial_effective_xmin,
													  snapshotConflictHorizon))
						invalidation_cause = cause;
					else if (TransactionIdIsValid(initial_catalog_effective_xmin) &&
							 TransactionIdPrecedesOrEquals(initial_catalog_effective_xmin,
														   snapshotConflictHorizon))
						invalidation_cause = cause;
					break;
				case RS_INVAL_WAL_LEVEL:
					if (SlotIsLogical(s))
						invalidation_cause = cause;
					break;
				case RS_INVAL_NONE:
					pg_unreachable();
			}
		}

		/*
		 * The invalidation cause recorded previously should not change while
		 * the process owning the slot (if any) has been terminated.
		 */
		Assert(!(invalidation_cause_prev != RS_INVAL_NONE && terminated &&
				 invalidation_cause_prev != invalidation_cause));

		/* if there's no invalidation, we're done */
		if (invalidation_cause == RS_INVAL_NONE)
		{
			SpinLockRelease(&s->mutex);
			if (released_lock)
				LWLockRelease(ReplicationSlotControlLock);
			break;
		}

		slotname = s->data.name;
		active_pid = s->active_pid;

		/*
		 * If the slot can be acquired, do so and mark it invalidated
		 * immediately.  Otherwise we'll signal the owning process, below, and
		 * retry.
		 */
		if (active_pid == 0)
		{
			MyReplicationSlot = s;
			s->active_pid = MyProcPid;
			s->data.invalidated = invalidation_cause;

			/*
			 * XXX: We should consider not overwriting restart_lsn and instead
			 * just rely on .invalidated.
			 */
			if (invalidation_cause == RS_INVAL_WAL_REMOVED)
				s->data.restart_lsn = InvalidXLogRecPtr;

			/* Let caller know */
			*invalidated = true;
		}

		SpinLockRelease(&s->mutex);

		/*
		 * The logical replication slots shouldn't be invalidated as GUC
		 * max_slot_wal_keep_size is set to -1 during the binary upgrade. See
		 * check_old_cluster_for_valid_slots() where we ensure that no slots
		 * are invalidated before the upgrade.
		 */
		Assert(!(*invalidated && SlotIsLogical(s) && IsBinaryUpgrade));

		if (active_pid != 0)
		{
			/*
			 * Prepare the sleep on the slot's condition variable before
			 * releasing the lock, to close a possible race condition if the
			 * slot is released before the sleep below.
			 */
			ConditionVariablePrepareToSleep(&s->active_cv);

			LWLockRelease(ReplicationSlotControlLock);
			released_lock = true;

			/*
			 * Signal to terminate the process that owns the slot, if we
			 * haven't already signalled it.  (Avoidance of repeated
			 * signalling is the only reason for there to be a loop in this
			 * routine; otherwise we could rely on caller's restart loop.)
			 *
			 * There is the race condition that other process may own the slot
			 * after its current owner process is terminated and before this
			 * process owns it. To handle that, we signal only if the PID of
			 * the owning process has changed from the previous time. (This
			 * logic assumes that the same PID is not reused very quickly.)
			 */
			if (last_signaled_pid != active_pid)
			{
				ReportSlotInvalidation(invalidation_cause, true, active_pid,
									   slotname, restart_lsn,
									   oldestLSN, snapshotConflictHorizon);

				if (MyBackendType == B_STARTUP)
					(void) SendProcSignal(active_pid,
										  PROCSIG_RECOVERY_CONFLICT_LOGICALSLOT,
										  INVALID_PROC_NUMBER);
				else
					(void) kill(active_pid, SIGTERM);

				last_signaled_pid = active_pid;
				terminated = true;
				invalidation_cause_prev = invalidation_cause;
			}

			/* Wait until the slot is released. */
			ConditionVariableSleep(&s->active_cv,
								   WAIT_EVENT_REPLICATION_SLOT_DROP);

			/*
			 * Re-acquire lock and start over; we expect to invalidate the
			 * slot next time (unless another process acquires the slot in the
			 * meantime).
			 */
			LWLockAcquire(ReplicationSlotControlLock, LW_SHARED);
			continue;
		}
		else
		{
			/*
			 * We hold the slot now and have already invalidated it; flush it
			 * to ensure that state persists.
			 *
			 * Don't want to hold ReplicationSlotControlLock across file
			 * system operations, so release it now but be sure to tell caller
			 * to restart from scratch.
			 */
			LWLockRelease(ReplicationSlotControlLock);
			released_lock = true;
|
2020-04-08 00:35:00 +02:00
|
|
|
|
2021-06-11 18:16:14 +02:00
|
|
|
/* Make sure the invalidated state persists across server restart */
|
|
|
|
ReplicationSlotMarkDirty();
|
|
|
|
ReplicationSlotSave();
|
|
|
|
ReplicationSlotRelease();
|
2020-06-19 10:15:52 +02:00
|
|
|
|
2024-03-22 09:22:05 +01:00
|
|
|
ReportSlotInvalidation(invalidation_cause, false, active_pid,
|
2023-04-08 07:40:27 +02:00
|
|
|
slotname, restart_lsn,
|
|
|
|
oldestLSN, snapshotConflictHorizon);
|
2020-04-08 00:35:00 +02:00
|
|
|
|
2021-06-11 18:16:14 +02:00
|
|
|
/* done with this slot for now */
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
}
|
2020-06-27 02:41:29 +02:00
|
|
|
|
2021-06-11 18:16:14 +02:00
|
|
|
Assert(released_lock == !LWLockHeldByMe(ReplicationSlotControlLock));
|
2020-04-08 00:35:00 +02:00
|
|
|
|
2021-06-11 18:16:14 +02:00
|
|
|
return released_lock;
|
|
|
|
}
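The comment above ("Prepare the sleep on the slot's condition variable before releasing the lock") exists to close a lost-wakeup window: if the slot were released between dropping the LWLock and going to sleep, the wakeup would be missed. A minimal sketch of the same discipline using plain POSIX threads rather than PostgreSQL's ConditionVariable API (all names here are illustrative, not part of the server code); `pthread_cond_wait` atomically releasing the mutex plays the role that `ConditionVariablePrepareToSleep()` plays relative to the LWLock:

```c
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>

/* Toy analogue of waiting for a slot to be released. */
static pthread_mutex_t demo_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t demo_cv = PTHREAD_COND_INITIALIZER;
static bool demo_released = false;

static void *
demo_releaser(void *arg)
{
	(void) arg;
	pthread_mutex_lock(&demo_lock);
	demo_released = true;			/* "slot released" */
	pthread_cond_broadcast(&demo_cv);	/* wake any waiter */
	pthread_mutex_unlock(&demo_lock);
	return NULL;
}

/* Returns true once the "slot" has been observed as released. */
static bool
demo_wait_for_release(void)
{
	pthread_t	t;

	pthread_mutex_lock(&demo_lock);
	pthread_create(&t, NULL, demo_releaser, NULL);
	/* predicate is re-checked under the lock; the wait releases it atomically */
	while (!demo_released)
		pthread_cond_wait(&demo_cv, &demo_lock);
	pthread_mutex_unlock(&demo_lock);
	pthread_join(t, NULL);
	return demo_released;
}
```

The key property mirrored here is that there is no instant at which the lock is dropped but the sleeper is not yet registered as a waiter.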
|
|
|
|
|
|
|
|
/*
|
2023-04-08 07:40:27 +02:00
|
|
|
* Invalidate slots that require resources about to be removed.
|
2021-06-11 18:16:14 +02:00
|
|
|
*
|
2021-07-16 18:07:30 +02:00
|
|
|
* Returns true when any slot has been invalidated.
|
|
|
|
*
|
2023-04-08 07:40:27 +02:00
|
|
|
* Whether a slot needs to be invalidated depends on the cause. A slot is
|
|
|
|
* removed if it:
|
|
|
|
* - RS_INVAL_WAL_REMOVED: requires an LSN older than the given segment
|
|
|
|
* - RS_INVAL_HORIZON: requires a snapshot <= the given horizon in the given
|
|
|
|
* db; dboid may be InvalidOid for shared relations
|
|
|
|
* - RS_INVAL_WAL_LEVEL: is logical
|
|
|
|
*
|
2021-06-11 18:16:14 +02:00
|
|
|
* NB - this runs as part of checkpoint, so avoid raising errors if possible.
|
|
|
|
*/
|
2021-07-16 18:07:30 +02:00
|
|
|
bool
|
2023-04-08 07:40:27 +02:00
|
|
|
InvalidateObsoleteReplicationSlots(ReplicationSlotInvalidationCause cause,
|
|
|
|
XLogSegNo oldestSegno, Oid dboid,
|
|
|
|
TransactionId snapshotConflictHorizon)
|
2021-06-11 18:16:14 +02:00
|
|
|
{
|
|
|
|
XLogRecPtr oldestLSN;
|
2021-07-16 18:07:30 +02:00
|
|
|
bool invalidated = false;
|
2021-06-11 18:16:14 +02:00
|
|
|
|
2023-04-08 07:40:27 +02:00
|
|
|
Assert(cause != RS_INVAL_HORIZON || TransactionIdIsValid(snapshotConflictHorizon));
|
|
|
|
Assert(cause != RS_INVAL_WAL_REMOVED || oldestSegno > 0);
|
|
|
|
Assert(cause != RS_INVAL_NONE);
|
|
|
|
|
|
|
|
if (max_replication_slots == 0)
|
|
|
|
return invalidated;
|
|
|
|
|
2021-06-11 18:16:14 +02:00
|
|
|
XLogSegNoOffsetToRecPtr(oldestSegno, 0, wal_segment_size, oldestLSN);
|
|
|
|
|
|
|
|
restart:
|
|
|
|
LWLockAcquire(ReplicationSlotControlLock, LW_SHARED);
|
|
|
|
for (int i = 0; i < max_replication_slots; i++)
|
|
|
|
{
|
|
|
|
ReplicationSlot *s = &ReplicationSlotCtl->replication_slots[i];
|
|
|
|
|
|
|
|
if (!s->in_use)
|
|
|
|
continue;
|
|
|
|
|
2023-04-08 07:40:27 +02:00
|
|
|
if (InvalidatePossiblyObsoleteSlot(cause, s, oldestLSN, dboid,
|
|
|
|
snapshotConflictHorizon,
|
|
|
|
&invalidated))
|
2021-06-11 18:16:14 +02:00
|
|
|
{
|
|
|
|
/* if the lock was released, start from scratch */
|
|
|
|
goto restart;
|
|
|
|
}
|
2020-04-08 00:35:00 +02:00
|
|
|
}
|
|
|
|
LWLockRelease(ReplicationSlotControlLock);
|
2021-07-16 18:07:30 +02:00
|
|
|
|
|
|
|
/*
|
|
|
|
* If any slots have been invalidated, recalculate the resource limits.
|
|
|
|
*/
|
|
|
|
if (invalidated)
|
|
|
|
{
|
|
|
|
ReplicationSlotsComputeRequiredXmin(false);
|
|
|
|
ReplicationSlotsComputeRequiredLSN();
|
|
|
|
}
|
|
|
|
|
|
|
|
return invalidated;
|
2020-04-08 00:35:00 +02:00
|
|
|
}
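InvalidateObsoleteReplicationSlots() converts the oldest surviving segment number into the cutoff LSN via XLogSegNoOffsetToRecPtr() before scanning the slots. The underlying arithmetic is simply `segno * wal_segment_size + offset`; a standalone sketch of that mapping (the function name here is illustrative, not the real macro):

```c
#include <assert.h>
#include <stdint.h>

typedef uint64_t XLogRecPtr;
typedef uint64_t XLogSegNo;

/*
 * Mirror of XLogSegNoOffsetToRecPtr's arithmetic: WAL is a linear address
 * space, and the first byte of segment N sits at N * wal_segment_size.
 */
static XLogRecPtr
segno_to_recptr(XLogSegNo segno, uint32_t offset, int wal_segment_size)
{
	return segno * (XLogRecPtr) wal_segment_size + offset;
}
```

With the default 16 MB segment size, segment 3 at offset 0 maps to LSN 0/3000000, which is the boundary any surviving slot's restart_lsn must reach.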
|
|
|
|
|
2014-02-01 04:45:17 +01:00
|
|
|
/*
|
|
|
|
* Flush all replication slots to disk.
|
|
|
|
*
|
Flush logical slots to disk during a shutdown checkpoint if required.
It's entirely possible for a logical slot to have a confirmed_flush LSN
higher than the last value saved on disk while not being marked as dirty.
Currently, it is not a major problem but a later patch adding support for
the upgrade of slots relies on that value being properly flushed to disk.
It can also help avoid processing the same transactions again in some
boundary cases after the clean shutdown and restart. Say, we process
some transactions for which we didn't send anything downstream (the
changes got filtered) but the confirm_flush LSN is updated due to
keepalives. As we don't flush the latest value of confirm_flush LSN, it
may lead to processing the same changes again without this patch.
The approach taken by this patch has been suggested by Ashutosh Bapat.
Author: Vignesh C, Julien Rouhaud, Kuroda Hayato
Reviewed-by: Amit Kapila, Dilip Kumar, Michael Paquier, Ashutosh Bapat, Peter Smith, Hou Zhijie
Discussion: http://postgr.es/m/CAA4eK1JzJagMmb_E8D4au=GYQkxox0AfNBm1FbP7sy7t4YWXPQ@mail.gmail.com
Discussion: http://postgr.es/m/TYAPR01MB58664C81887B3AF2EB6B16E3F5939@TYAPR01MB5866.jpnprd01.prod.outlook.com
2023-09-14 05:26:13 +02:00
|
|
|
* It is convenient to flush dirty replication slots at the time of checkpoint.
|
|
|
|
* Additionally, in case of a shutdown checkpoint, we also identify the slots
|
|
|
|
* for which the confirmed_flush LSN has been updated since the last time it
|
|
|
|
* was saved and flush them.
|
2014-02-01 04:45:17 +01:00
|
|
|
*/
|
|
|
|
void
|
2023-09-14 05:26:13 +02:00
|
|
|
CheckPointReplicationSlots(bool is_shutdown)
|
2014-02-01 04:45:17 +01:00
|
|
|
{
|
|
|
|
int i;
|
|
|
|
|
2014-11-12 02:00:58 +01:00
|
|
|
elog(DEBUG1, "performing replication slot checkpoint");
|
2014-02-01 04:45:17 +01:00
|
|
|
|
|
|
|
/*
|
|
|
|
* Prevent any slot from being created/dropped while we're active. As we
|
|
|
|
* explicitly do *not* want to block iterating over replication_slots or
|
|
|
|
* acquiring a slot we cannot take the control lock - but that's OK,
|
|
|
|
* because holding ReplicationSlotAllocationLock is strictly stronger, and
|
|
|
|
* enough to guarantee that nobody can change the in_use bits on us.
|
|
|
|
*/
|
|
|
|
LWLockAcquire(ReplicationSlotAllocationLock, LW_SHARED);
|
|
|
|
|
|
|
|
for (i = 0; i < max_replication_slots; i++)
|
|
|
|
{
|
|
|
|
ReplicationSlot *s = &ReplicationSlotCtl->replication_slots[i];
|
|
|
|
char path[MAXPGPATH];
|
|
|
|
|
|
|
|
if (!s->in_use)
|
|
|
|
continue;
|
|
|
|
|
|
|
|
/* save the slot to disk, locking is handled in SaveSlotToPath() */
|
|
|
|
sprintf(path, "pg_replslot/%s", NameStr(s->data.name));
|
2023-09-14 05:26:13 +02:00
|
|
|
|
|
|
|
/*
|
|
|
|
* Slot's data is not flushed each time the confirmed_flush LSN is
|
|
|
|
* updated as that could lead to frequent writes. However, we decide
|
|
|
|
* to force a flush of all logical slot's data at the time of shutdown
|
|
|
|
* if the confirmed_flush LSN is changed since we last flushed it to
|
|
|
|
* disk. This helps in avoiding an unnecessary retreat of the
|
|
|
|
* confirmed_flush LSN after restart.
|
|
|
|
*/
|
|
|
|
if (is_shutdown && SlotIsLogical(s))
|
|
|
|
{
|
|
|
|
SpinLockAcquire(&s->mutex);
|
|
|
|
|
|
|
|
Assert(s->data.confirmed_flush >= s->last_saved_confirmed_flush);
|
|
|
|
|
|
|
|
if (s->data.invalidated == RS_INVAL_NONE &&
|
|
|
|
s->data.confirmed_flush != s->last_saved_confirmed_flush)
|
|
|
|
{
|
|
|
|
s->just_dirtied = true;
|
|
|
|
s->dirty = true;
|
|
|
|
}
|
|
|
|
SpinLockRelease(&s->mutex);
|
|
|
|
}
|
|
|
|
|
2014-02-01 04:45:17 +01:00
|
|
|
SaveSlotToPath(s, path, LOG);
|
|
|
|
}
|
|
|
|
LWLockRelease(ReplicationSlotAllocationLock);
|
|
|
|
}
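The shutdown branch above marks a logical slot dirty only when confirmed_flush has moved past last_saved_confirmed_flush, so a clean shutdown persists keepalive-driven advances without forcing writes on every update. A single-threaded toy model of that decision (field names mirror the real struct; spinlocking and the actual disk write are elided, and this simplification clears `dirty` unconditionally where the real save path re-checks `just_dirtied`):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef struct
{
	bool		dirty;
	bool		just_dirtied;
	uint64_t	confirmed_flush;
	uint64_t	last_saved_confirmed_flush;
} ToySlot;

/* Returns true iff the checkpoint would write this slot to disk. */
static bool
toy_checkpoint_slot(ToySlot *s, bool is_shutdown, bool is_logical)
{
	if (is_shutdown && is_logical &&
		s->confirmed_flush != s->last_saved_confirmed_flush)
	{
		s->just_dirtied = true;
		s->dirty = true;
	}
	if (!s->dirty)
		return false;			/* nothing to write */
	/* stand-in for SaveSlotToPath(): persist and remember what we wrote */
	s->last_saved_confirmed_flush = s->confirmed_flush;
	s->just_dirtied = false;
	s->dirty = false;
	return true;
}

/* Scenario: a keepalive advances confirmed_flush without dirtying the slot. */
static bool
toy_demo(void)
{
	ToySlot		s = {false, false, 100, 100};

	if (toy_checkpoint_slot(&s, true, true))
		return false;			/* nothing changed, nothing written */
	s.confirmed_flush = 200;	/* keepalive advanced confirmed_flush */
	if (!toy_checkpoint_slot(&s, true, true))
		return false;			/* shutdown must flush the new value */
	return s.last_saved_confirmed_flush == 200 && !s.dirty;
}
```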
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Load all replication slots from disk into memory at server startup. This
|
|
|
|
* needs to be run before we start crash recovery.
|
|
|
|
*/
|
|
|
|
void
|
2014-06-12 13:23:46 +02:00
|
|
|
StartupReplicationSlots(void)
|
2014-02-01 04:45:17 +01:00
|
|
|
{
|
|
|
|
DIR *replication_dir;
|
|
|
|
struct dirent *replication_de;
|
|
|
|
|
2014-11-12 02:00:58 +01:00
|
|
|
elog(DEBUG1, "starting up replication slots");
|
2014-02-01 04:45:17 +01:00
|
|
|
|
|
|
|
/* restore all slots by iterating over all on-disk entries */
|
|
|
|
replication_dir = AllocateDir("pg_replslot");
|
|
|
|
while ((replication_de = ReadDir(replication_dir, "pg_replslot")) != NULL)
|
|
|
|
{
|
2017-04-11 20:13:31 +02:00
|
|
|
char path[MAXPGPATH + 12];
|
Expand the use of get_dirent_type(), shaving a few calls to stat()/lstat()
Several backend-side loops scanning one or more directories with
ReadDir() (WAL segment recycle/removal in xlog.c, backend-side directory
copy, temporary file removal, configuration file parsing, some logical
decoding logic and some pgtz stuff) already know the type of the entry
being scanned thanks to the dirent structure associated to the entry, on
platforms where we know about DT_REG, DT_DIR and DT_LNK to make the
difference between a regular file, a directory and a symbolic link.
Relying on the direct structure of an entry saves a few system calls to
stat() and lstat() in the loops updated here, shaving some code while on
it. The logic of the code remains the same, calling stat() or lstat()
depending on if it is necessary to look through symlinks.
Authors: Nathan Bossart, Bharath Rupireddy
Reviewed-by: Andres Freund, Thomas Munro, Michael Paquier
Discussion: https://postgr.es/m/CALj2ACV8n-J-f=yiLUOx2=HrQGPSOZM3nWzyQQvLPcccPXxEdg@mail.gmail.com
2022-09-02 09:58:06 +02:00
|
|
|
PGFileType de_type;
|
2014-02-01 04:45:17 +01:00
|
|
|
|
|
|
|
if (strcmp(replication_de->d_name, ".") == 0 ||
|
|
|
|
strcmp(replication_de->d_name, "..") == 0)
|
|
|
|
continue;
|
|
|
|
|
2017-04-11 20:13:31 +02:00
|
|
|
snprintf(path, sizeof(path), "pg_replslot/%s", replication_de->d_name);
|
Discussion: https://postgr.es/m/CALj2ACV8n-J-f=yiLUOx2=HrQGPSOZM3nWzyQQvLPcccPXxEdg@mail.gmail.com
2022-09-02 09:58:06 +02:00
|
|
|
de_type = get_dirent_type(path, replication_de, false, DEBUG1);
|
2014-02-01 04:45:17 +01:00
|
|
|
|
|
|
|
/* we're only creating directories here, skip if it's not ours */
|
Discussion: https://postgr.es/m/CALj2ACV8n-J-f=yiLUOx2=HrQGPSOZM3nWzyQQvLPcccPXxEdg@mail.gmail.com
2022-09-02 09:58:06 +02:00
|
|
|
if (de_type != PGFILETYPE_ERROR && de_type != PGFILETYPE_DIR)
|
2014-02-01 04:45:17 +01:00
|
|
|
continue;
|
|
|
|
|
|
|
|
/* we crashed while a slot was being set up or deleted, clean up */
|
2015-01-03 20:51:52 +01:00
|
|
|
if (pg_str_endswith(replication_de->d_name, ".tmp"))
|
2014-02-01 04:45:17 +01:00
|
|
|
{
|
|
|
|
if (!rmtree(path, true))
|
|
|
|
{
|
|
|
|
ereport(WARNING,
|
2018-09-04 20:06:04 +02:00
|
|
|
(errmsg("could not remove directory \"%s\"",
|
|
|
|
path)));
|
2014-02-01 04:45:17 +01:00
|
|
|
continue;
|
|
|
|
}
|
|
|
|
fsync_fname("pg_replslot", true);
|
|
|
|
continue;
|
|
|
|
}
|
|
|
|
|
|
|
|
/* looks like a slot in a normal state, restore */
|
|
|
|
RestoreSlotFromDisk(replication_de->d_name);
|
|
|
|
}
|
|
|
|
FreeDir(replication_dir);
|
|
|
|
|
|
|
|
/* currently no slots exist, we're done. */
|
|
|
|
if (max_replication_slots <= 0)
|
|
|
|
return;
|
|
|
|
|
|
|
|
/* Now that we have recovered all the data, compute replication xmin */
|
Introduce logical decoding.
This feature, building on previous commits, allows the write-ahead log
stream to be decoded into a series of logical changes; that is,
inserts, updates, and deletes and the transactions which contain them.
It is capable of handling decoding even across changes to the schema
of the effected tables. The output format is controlled by a
so-called "output plugin"; an example is included. To make use of
this in a real replication system, the output plugin will need to be
modified to produce output in the format appropriate to that system,
and to perform filtering.
Currently, information can be extracted from the logical decoding
system only via SQL; future commits will add the ability to stream
changes via walsender.
Andres Freund, with review and other contributions from many other
people, including Álvaro Herrera, Abhijit Menon-Sen, Peter Geoghegan,
Kevin Grittner, Robert Haas, Heikki Linnakangas, Fujii Masao, Abhijit
Menon-Sen, Michael Paquier, Simon Riggs, Craig Ringer, and Steve
Singer.
2014-03-03 22:32:18 +01:00
|
|
|
ReplicationSlotsComputeRequiredXmin(false);
|
2014-02-01 04:45:17 +01:00
|
|
|
ReplicationSlotsComputeRequiredLSN();
|
|
|
|
}
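StartupReplicationSlots() treats a leftover "<name>.tmp" directory as debris from a create or drop that was interrupted by a crash, and skips the "." and ".." entries every directory scan must ignore. A self-contained sketch of those two filters (`str_endswith` is a plain-C stand-in for pg_str_endswith(), and `is_restorable_slot_entry` is an illustrative helper, not a server function):

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* Plain-C equivalent of pg_str_endswith(). */
static bool
str_endswith(const char *str, const char *end)
{
	size_t		slen = strlen(str);
	size_t		elen = strlen(end);

	return slen >= elen && strcmp(str + slen - elen, end) == 0;
}

/* Would the startup scan try to restore this pg_replslot entry? */
static bool
is_restorable_slot_entry(const char *d_name)
{
	if (strcmp(d_name, ".") == 0 || strcmp(d_name, "..") == 0)
		return false;			/* directory self-references */
	if (str_endswith(d_name, ".tmp"))
		return false;			/* half-created/dropped slot: removed instead */
	return true;
}
```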
|
|
|
|
|
|
|
|
/* ----
|
2015-08-11 12:32:48 +02:00
|
|
|
* Manipulation of on-disk state of replication slots
|
2014-02-01 04:45:17 +01:00
|
|
|
*
|
|
|
|
* NB: none of the routines below should take any notice whether a slot is the
|
|
|
|
* current one or not, that's all handled a layer above.
|
|
|
|
* ----
|
|
|
|
*/
|
|
|
|
static void
|
|
|
|
CreateSlotOnDisk(ReplicationSlot *slot)
|
|
|
|
{
|
|
|
|
char tmppath[MAXPGPATH];
|
|
|
|
char path[MAXPGPATH];
|
|
|
|
struct stat st;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* No need to take out the io_in_progress_lock, nobody else can see this
|
2014-02-02 02:28:18 +01:00
|
|
|
* slot yet, so nobody else will write. We're reusing SaveSlotToPath which
|
2014-02-01 04:45:17 +01:00
|
|
|
* takes out the lock, if we'd take the lock here, we'd deadlock.
|
|
|
|
*/
|
|
|
|
|
|
|
|
sprintf(path, "pg_replslot/%s", NameStr(slot->data.name));
|
|
|
|
sprintf(tmppath, "pg_replslot/%s.tmp", NameStr(slot->data.name));
|
|
|
|
|
|
|
|
/*
|
|
|
|
* It's just barely possible that some previous effort to create or drop a
|
|
|
|
* slot with this name left a temp directory lying around. If that seems
|
|
|
|
* to be the case, try to remove it. If the rmtree() fails, we'll error
|
2018-04-07 23:45:39 +02:00
|
|
|
* out at the MakePGDirectory() below, so we don't bother checking
|
|
|
|
* success.
|
2014-02-01 04:45:17 +01:00
|
|
|
*/
|
|
|
|
if (stat(tmppath, &st) == 0 && S_ISDIR(st.st_mode))
|
|
|
|
rmtree(tmppath, true);
|
|
|
|
|
|
|
|
/* Create and fsync the temporary slot directory. */
|
2018-04-07 23:45:39 +02:00
|
|
|
if (MakePGDirectory(tmppath) < 0)
|
2014-02-01 04:45:17 +01:00
|
|
|
ereport(ERROR,
|
|
|
|
(errcode_for_file_access(),
|
|
|
|
errmsg("could not create directory \"%s\": %m",
|
|
|
|
tmppath)));
|
|
|
|
fsync_fname(tmppath, true);
|
|
|
|
|
|
|
|
/* Write the actual state file. */
|
|
|
|
slot->dirty = true; /* signal that we really need to write */
|
|
|
|
SaveSlotToPath(slot, tmppath, ERROR);
|
|
|
|
|
|
|
|
/* Rename the directory into place. */
|
|
|
|
if (rename(tmppath, path) != 0)
|
|
|
|
ereport(ERROR,
|
|
|
|
(errcode_for_file_access(),
|
|
|
|
errmsg("could not rename file \"%s\" to \"%s\": %m",
|
|
|
|
tmppath, path)));
|
|
|
|
|
|
|
|
/*
|
2014-02-02 02:28:18 +01:00
|
|
|
* If we'd now fail - really unlikely - we wouldn't know whether this slot
|
2014-02-01 04:45:17 +01:00
|
|
|
* would persist after an OS crash or not - so, force a restart. The
|
2016-02-05 14:11:00 +01:00
|
|
|
* restart will try to fsync this again until it works.
|
2014-02-01 04:45:17 +01:00
|
|
|
*/
|
|
|
|
START_CRIT_SECTION();
|
|
|
|
|
|
|
|
fsync_fname(path, true);
|
|
|
|
fsync_fname("pg_replslot", true);
|
|
|
|
|
|
|
|
END_CRIT_SECTION();
|
|
|
|
}
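CreateSlotOnDisk() follows the classic durable-create sequence: build the content under a ".tmp" name, fsync it, rename() it into place, then fsync again so the rename itself survives an OS crash. A file-level sketch of the same sequence (simplified error handling, illustrative function name; the real code operates on a whole slot directory rather than a single file):

```c
#include <assert.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/*
 * Durably replace dir/state with buf, going through dir/state.tmp.
 * Returns 0 on success, -1 on failure (leaked fds on error are tolerated
 * here for brevity; real code must close them).
 */
static int
durable_write_state(const char *dir, const void *buf, size_t len)
{
	char		tmppath[1024];
	char		path[1024];
	int			fd;
	int			dirfd;

	snprintf(tmppath, sizeof(tmppath), "%s/state.tmp", dir);
	snprintf(path, sizeof(path), "%s/state", dir);

	fd = open(tmppath, O_CREAT | O_EXCL | O_WRONLY, 0600);
	if (fd < 0)
		return -1;
	if (write(fd, buf, len) != (ssize_t) len ||
		fsync(fd) != 0 || close(fd) != 0)
		return -1;
	if (rename(tmppath, path) != 0)		/* atomic swap into place */
		return -1;

	/* fsync the containing directory so the rename itself is durable */
	dirfd = open(dir, O_RDONLY);
	if (dirfd < 0 || fsync(dirfd) != 0)
		return -1;
	return close(dirfd);
}
```

The rename guarantees readers see either the old state or the complete new state, never a torn file, which is why SaveSlotToPath() writes "state.tmp" first as well.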
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Shared functionality between saving and creating a replication slot.
|
|
|
|
*/
|
|
|
|
static void
|
|
|
|
SaveSlotToPath(ReplicationSlot *slot, const char *dir, int elevel)
|
|
|
|
{
|
|
|
|
char tmppath[MAXPGPATH];
|
|
|
|
char path[MAXPGPATH];
|
|
|
|
int fd;
|
|
|
|
ReplicationSlotOnDisk cp;
|
|
|
|
bool was_dirty;
|
|
|
|
|
|
|
|
/* first check whether there's something to write out */
|
2015-10-06 21:45:02 +02:00
|
|
|
SpinLockAcquire(&slot->mutex);
|
|
|
|
was_dirty = slot->dirty;
|
|
|
|
slot->just_dirtied = false;
|
|
|
|
SpinLockRelease(&slot->mutex);
|
2014-02-01 04:45:17 +01:00
|
|
|
|
|
|
|
/* and don't do anything if there's nothing to write */
|
|
|
|
if (!was_dirty)
|
|
|
|
return;
|
|
|
|
|
2016-01-29 15:44:29 +01:00
|
|
|
LWLockAcquire(&slot->io_in_progress_lock, LW_EXCLUSIVE);
|
2014-02-01 04:45:17 +01:00
|
|
|
|
|
|
|
/* silence valgrind :( */
|
|
|
|
memset(&cp, 0, sizeof(ReplicationSlotOnDisk));
|
|
|
|
|
|
|
|
sprintf(tmppath, "%s/state.tmp", dir);
|
|
|
|
sprintf(path, "%s/state", dir);
|
|
|
|
|
2017-09-23 15:49:22 +02:00
|
|
|
fd = OpenTransientFile(tmppath, O_CREAT | O_EXCL | O_WRONLY | PG_BINARY);
|
2014-02-01 04:45:17 +01:00
|
|
|
if (fd < 0)
|
|
|
|
{
|
2020-03-26 11:51:39 +01:00
|
|
|
/*
|
|
|
|
* If not an ERROR, then release the lock before returning. In case
|
|
|
|
* of an ERROR, the error recovery path automatically releases the
|
2020-04-05 10:02:00 +02:00
|
|
|
* lock, but no harm in explicitly releasing even in that case. Note
|
|
|
|
* that LWLockRelease() could affect errno.
|
2020-03-26 11:51:39 +01:00
|
|
|
*/
|
2020-04-05 10:02:00 +02:00
|
|
|
int save_errno = errno;
|
|
|
|
|
2020-03-26 11:51:39 +01:00
|
|
|
LWLockRelease(&slot->io_in_progress_lock);
|
2020-04-05 10:02:00 +02:00
|
|
|
errno = save_errno;
|
2014-02-01 04:45:17 +01:00
|
|
|
ereport(elevel,
|
|
|
|
(errcode_for_file_access(),
|
|
|
|
errmsg("could not create file \"%s\": %m",
|
|
|
|
tmppath)));
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
|
|
|
cp.magic = SLOT_MAGIC;
|
Switch to CRC-32C in WAL and other places.
The old algorithm was found to not be the usual CRC-32 algorithm, used by
Ethernet et al. We were using a non-reflected lookup table with code meant
for a reflected lookup table. That's a strange combination that AFAICS does
not correspond to any bit-wise CRC calculation, which makes it difficult to
reason about its properties. Although it has worked well in practice, seems
safer to use a well-known algorithm.
Since we're changing the algorithm anyway, we might as well choose a
different polynomial. The Castagnoli polynomial has better error-correcting
properties than the traditional CRC-32 polynomial, even if we had
implemented it correctly. Another reason for picking that is that some new
CPUs have hardware support for calculating CRC-32C, but not CRC-32, let
alone our strange variant of it. This patch doesn't add any support for such
hardware, but a future patch could now do that.
The old algorithm is kept around for tsquery and pg_trgm, which use the
values in indexes that need to remain compatible so that pg_upgrade works.
While we're at it, share the old lookup table for CRC-32 calculation
between hstore, ltree and core. They all use the same table, so might as
well.
2014-11-04 10:35:15 +01:00
|
|
|
INIT_CRC32C(cp.checksum);
|
2014-11-12 18:52:49 +01:00
|
|
|
cp.version = SLOT_VERSION;
|
|
|
|
cp.length = ReplicationSlotOnDiskV2Size;
|
2014-02-01 04:45:17 +01:00
|
|
|
|
|
|
|
SpinLockAcquire(&slot->mutex);
|
|
|
|
|
|
|
|
memcpy(&cp.slotdata, &slot->data, sizeof(ReplicationSlotPersistentData));
|
|
|
|
|
|
|
|
SpinLockRelease(&slot->mutex);
|
|
|
|
|
2014-11-04 10:35:15 +01:00
|
|
|
COMP_CRC32C(cp.checksum,
|
2021-11-24 03:36:55 +01:00
|
|
|
(char *) (&cp) + ReplicationSlotOnDiskNotChecksummedSize,
|
|
|
|
ReplicationSlotOnDiskChecksummedSize);
|
2014-11-12 18:52:49 +01:00
|
|
|
FIN_CRC32C(cp.checksum);
|
2014-02-01 04:45:17 +01:00
|
|
|
|
2018-08-04 22:31:18 +02:00
|
|
|
errno = 0;
|
Create and use wait events for read, write, and fsync operations.
Previous commits, notably 53be0b1add7064ca5db3cd884302dfc3268d884e and
6f3bd98ebfc008cbd676da777bb0b2376c4c4bfa, made it possible to see from
pg_stat_activity when a backend was stuck waiting for another backend,
but it's also fairly common for a backend to be stuck waiting for an
I/O. Add wait events for those operations, too.
Rushabh Lathia, with further hacking by me. Reviewed and tested by
Michael Paquier, Amit Kapila, Rajkumar Raghuwanshi, and Rahila Syed.
Discussion: http://postgr.es/m/CAGPqQf0LsYHXREPAZqYGVkDqHSyjf=KsD=k0GTVPAuzyThh-VQ@mail.gmail.com
2017-03-18 12:43:01 +01:00
|
|
|
pgstat_report_wait_start(WAIT_EVENT_REPLICATION_SLOT_WRITE);
|
2014-02-01 04:45:17 +01:00
|
|
|
if ((write(fd, &cp, sizeof(cp))) != sizeof(cp))
|
|
|
|
{
|
|
|
|
int save_errno = errno;
|
2014-05-06 18:12:18 +02:00
|
|
|
|
2017-03-18 12:43:01 +01:00
|
|
|
pgstat_report_wait_end();
|
2014-02-01 04:45:17 +01:00
|
|
|
CloseTransientFile(fd);
|
2020-03-26 11:51:39 +01:00
|
|
|
LWLockRelease(&slot->io_in_progress_lock);
|
2018-06-25 04:19:05 +02:00
|
|
|
|
|
|
|
/* if write didn't set errno, assume problem is no disk space */
|
|
|
|
errno = save_errno ? save_errno : ENOSPC;
|
2014-02-01 04:45:17 +01:00
|
|
|
ereport(elevel,
|
|
|
|
(errcode_for_file_access(),
|
|
|
|
errmsg("could not write to file \"%s\": %m",
|
|
|
|
tmppath)));
|
|
|
|
return;
|
|
|
|
}
|
2017-03-18 12:43:01 +01:00
|
|
|
pgstat_report_wait_end();
|
2014-02-01 04:45:17 +01:00
|
|
|
|
|
|
|
/* fsync the temporary file */
|
2017-03-18 12:43:01 +01:00
|
|
|
pgstat_report_wait_start(WAIT_EVENT_REPLICATION_SLOT_SYNC);
|
2014-02-01 04:45:17 +01:00
|
|
|
if (pg_fsync(fd) != 0)
|
|
|
|
{
|
|
|
|
int save_errno = errno;
|
2014-05-06 18:12:18 +02:00
|
|
|
|
2017-03-18 12:43:01 +01:00
|
|
|
pgstat_report_wait_end();
|
2014-02-01 04:45:17 +01:00
|
|
|
CloseTransientFile(fd);
|
2020-03-26 11:51:39 +01:00
|
|
|
LWLockRelease(&slot->io_in_progress_lock);
|
2014-02-01 04:45:17 +01:00
|
|
|
errno = save_errno;
|
|
|
|
ereport(elevel,
|
|
|
|
(errcode_for_file_access(),
|
|
|
|
errmsg("could not fsync file \"%s\": %m",
|
|
|
|
tmppath)));
|
|
|
|
return;
|
|
|
|
}
|
2017-03-18 12:43:01 +01:00
|
|
|
pgstat_report_wait_end();
|
2014-02-01 04:45:17 +01:00
|
|
|
|
2019-07-06 23:18:46 +02:00
|
|
|
if (CloseTransientFile(fd) != 0)
|
2019-04-17 03:01:22 +02:00
|
|
|
{
|
2020-04-05 10:02:00 +02:00
|
|
|
int save_errno = errno;
|
|
|
|
|
2020-03-26 11:51:39 +01:00
|
|
|
LWLockRelease(&slot->io_in_progress_lock);
|
2020-04-05 10:02:00 +02:00
|
|
|
errno = save_errno;
|
Tighten use of OpenTransientFile and CloseTransientFile
This fixes two sets of issues related to the use of transient files in
the backend:
1) OpenTransientFile() has been used in some code paths with read-write
flags while read-only is sufficient, so switch those calls to be
read-only where necessary. These have been reported by Joe Conway.
2) When opening transient files, it is up to the caller to close the
file descriptors opened. In error code paths, CloseTransientFile() gets
called to clean up things before issuing an error. However in normal
exit paths, a lot of callers of CloseTransientFile() never actually
reported errors, which could leave a file descriptor open without
knowing about it. This is an issue I complained about a couple of
times, but never had the courage to write and submit a patch, so here we
go.
Note that one frontend code path is impacted by this commit so as an
error is issued when fetching control file data, making backend and
frontend to be treated consistently.
Reported-by: Joe Conway, Michael Paquier
Author: Michael Paquier
Reviewed-by: Álvaro Herrera, Georgios Kokolatos, Joe Conway
Discussion: https://postgr.es/m/20190301023338.GD1348@paquier.xyz
Discussion: https://postgr.es/m/c49b69ec-e2f7-ff33-4f17-0eaa4f2cef27@joeconway.com
2019-03-09 00:50:55 +01:00
|
|
|
ereport(elevel,
|
|
|
|
(errcode_for_file_access(),
|
|
|
|
errmsg("could not close file \"%s\": %m",
|
|
|
|
tmppath)));
|
2019-04-17 03:01:22 +02:00
|
|
|
return;
|
|
|
|
}

	/* rename to permanent file, fsync file and directory */
	if (rename(tmppath, path) != 0)
	{
		int			save_errno = errno;

		LWLockRelease(&slot->io_in_progress_lock);
		errno = save_errno;
		ereport(elevel,
				(errcode_for_file_access(),
				 errmsg("could not rename file \"%s\" to \"%s\": %m",
						tmppath, path)));
		return;
	}

	/*
	 * Check CreateSlotOnDisk() for the reasoning of using a critical section.
	 */
	START_CRIT_SECTION();

	fsync_fname(path, false);
	fsync_fname(dir, true);
	fsync_fname("pg_replslot", true);

	END_CRIT_SECTION();

	/*
	 * Successfully wrote, unset dirty bit, unless somebody dirtied again
	 * already and remember the confirmed_flush LSN value.
	 */
	SpinLockAcquire(&slot->mutex);
	if (!slot->just_dirtied)
		slot->dirty = false;
	slot->last_saved_confirmed_flush = cp.slotdata.confirmed_flush;
	SpinLockRelease(&slot->mutex);

	LWLockRelease(&slot->io_in_progress_lock);
}

/*
 * Load a single slot from disk into memory.
 */
static void
RestoreSlotFromDisk(const char *name)
{
	ReplicationSlotOnDisk cp;
	int			i;
	char		slotdir[MAXPGPATH + 12];
	char		path[MAXPGPATH + 22];
	int			fd;
	bool		restored = false;
	int			readBytes;
	pg_crc32c	checksum;

	/* no need to lock here, no concurrent access allowed yet */

	/* delete temp file if it exists */
	sprintf(slotdir, "pg_replslot/%s", name);
	sprintf(path, "%s/state.tmp", slotdir);
	if (unlink(path) < 0 && errno != ENOENT)
		ereport(PANIC,
				(errcode_for_file_access(),
				 errmsg("could not remove file \"%s\": %m", path)));

	sprintf(path, "%s/state", slotdir);

	elog(DEBUG1, "restoring replication slot from \"%s\"", path);

	/* on some operating systems fsyncing a file requires O_RDWR */
	fd = OpenTransientFile(path, O_RDWR | PG_BINARY);

	/*
	 * We do not need to handle this as we are rename()ing the directory into
	 * place only after we fsync()ed the state file.
	 */
	if (fd < 0)
		ereport(PANIC,
				(errcode_for_file_access(),
				 errmsg("could not open file \"%s\": %m", path)));

	/*
	 * Sync state file before we're reading from it. We might have crashed
	 * while it wasn't synced yet and we shouldn't continue on that basis.
	 */
	pgstat_report_wait_start(WAIT_EVENT_REPLICATION_SLOT_RESTORE_SYNC);
	if (pg_fsync(fd) != 0)
		ereport(PANIC,
				(errcode_for_file_access(),
				 errmsg("could not fsync file \"%s\": %m",
						path)));
	pgstat_report_wait_end();

	/* Also sync the parent directory */
	START_CRIT_SECTION();
	fsync_fname(slotdir, true);
	END_CRIT_SECTION();

	/* read part of statefile that's guaranteed to be version independent */
	pgstat_report_wait_start(WAIT_EVENT_REPLICATION_SLOT_READ);
	readBytes = read(fd, &cp, ReplicationSlotOnDiskConstantSize);
	pgstat_report_wait_end();
	if (readBytes != ReplicationSlotOnDiskConstantSize)
	{
		if (readBytes < 0)
			ereport(PANIC,
					(errcode_for_file_access(),
					 errmsg("could not read file \"%s\": %m", path)));
		else
			ereport(PANIC,
					(errcode(ERRCODE_DATA_CORRUPTED),
					 errmsg("could not read file \"%s\": read %d of %zu",
							path, readBytes,
							(Size) ReplicationSlotOnDiskConstantSize)));
	}

	/* verify magic */
	if (cp.magic != SLOT_MAGIC)
		ereport(PANIC,
				(errcode(ERRCODE_DATA_CORRUPTED),
				 errmsg("replication slot file \"%s\" has wrong magic number: %u instead of %u",
						path, cp.magic, SLOT_MAGIC)));

	/* verify version */
	if (cp.version != SLOT_VERSION)
		ereport(PANIC,
				(errcode(ERRCODE_DATA_CORRUPTED),
				 errmsg("replication slot file \"%s\" has unsupported version %u",
						path, cp.version)));

	/* boundary check on length */
	if (cp.length != ReplicationSlotOnDiskV2Size)
		ereport(PANIC,
				(errcode(ERRCODE_DATA_CORRUPTED),
				 errmsg("replication slot file \"%s\" has corrupted length %u",
						path, cp.length)));

	/* Now that we know the size, read the entire file */
	pgstat_report_wait_start(WAIT_EVENT_REPLICATION_SLOT_READ);
	readBytes = read(fd,
					 (char *) &cp + ReplicationSlotOnDiskConstantSize,
					 cp.length);
	pgstat_report_wait_end();
	if (readBytes != cp.length)
	{
		if (readBytes < 0)
			ereport(PANIC,
					(errcode_for_file_access(),
					 errmsg("could not read file \"%s\": %m", path)));
		else
			ereport(PANIC,
					(errcode(ERRCODE_DATA_CORRUPTED),
					 errmsg("could not read file \"%s\": read %d of %zu",
							path, readBytes, (Size) cp.length)));
	}

	if (CloseTransientFile(fd) != 0)
		ereport(PANIC,
				(errcode_for_file_access(),
				 errmsg("could not close file \"%s\": %m", path)));

	/* now verify the CRC */
	INIT_CRC32C(checksum);
	COMP_CRC32C(checksum,
				(char *) &cp + ReplicationSlotOnDiskNotChecksummedSize,
				ReplicationSlotOnDiskChecksummedSize);
	FIN_CRC32C(checksum);

	if (!EQ_CRC32C(checksum, cp.checksum))
		ereport(PANIC,
				(errmsg("checksum mismatch for replication slot file \"%s\": is %u, should be %u",
						path, checksum, cp.checksum)));

	/*
	 * If we crashed with an ephemeral slot active, don't restore but delete
	 * it.
	 */
	if (cp.slotdata.persistency != RS_PERSISTENT)
	{
		if (!rmtree(slotdir, true))
		{
			ereport(WARNING,
					(errmsg("could not remove directory \"%s\"",
							slotdir)));
		}
		fsync_fname("pg_replslot", true);
		return;
	}

	/*
	 * Verify that requirements for the specific slot type are met. That's
	 * important because if these aren't met we're not guaranteed to retain
	 * all the necessary resources for the slot.
	 *
	 * NB: We have to do so *after* the above checks for ephemeral slots,
	 * because otherwise a slot that shouldn't exist anymore could prevent
	 * restarts.
	 *
	 * NB: Changing the requirements here also requires adapting
	 * CheckSlotRequirements() and CheckLogicalDecodingRequirements().
	 */
	if (cp.slotdata.database != InvalidOid && wal_level < WAL_LEVEL_LOGICAL)
		ereport(FATAL,
				(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
				 errmsg("logical replication slot \"%s\" exists, but wal_level < logical",
						NameStr(cp.slotdata.name)),
				 errhint("Change wal_level to be logical or higher.")));
	else if (wal_level < WAL_LEVEL_REPLICA)
		ereport(FATAL,
				(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
				 errmsg("physical replication slot \"%s\" exists, but wal_level < replica",
						NameStr(cp.slotdata.name)),
				 errhint("Change wal_level to be replica or higher.")));

	/* nothing can be active yet, don't lock anything */
	for (i = 0; i < max_replication_slots; i++)
	{
		ReplicationSlot *slot;

		slot = &ReplicationSlotCtl->replication_slots[i];

		if (slot->in_use)
			continue;

		/* restore the entire set of persistent data */
		memcpy(&slot->data, &cp.slotdata,
			   sizeof(ReplicationSlotPersistentData));

		/* initialize in memory state */
		slot->effective_xmin = cp.slotdata.xmin;
		slot->effective_catalog_xmin = cp.slotdata.catalog_xmin;
		slot->last_saved_confirmed_flush = cp.slotdata.confirmed_flush;

		slot->candidate_catalog_xmin = InvalidTransactionId;
		slot->candidate_xmin_lsn = InvalidXLogRecPtr;
		slot->candidate_restart_lsn = InvalidXLogRecPtr;
		slot->candidate_restart_valid = InvalidXLogRecPtr;

		slot->in_use = true;
		slot->active_pid = 0;

		/*
		 * We set the last inactive time after loading the slot from the disk
		 * into memory. Whoever acquires the slot i.e. makes the slot active
		 * will reset it. We don't set it for the slots currently being synced
		 * from the primary to the standby because such slots are typically
		 * inactive as decoding is not allowed on those.
		 */
		if (!(RecoveryInProgress() && slot->data.synced))
			slot->last_inactive_time = GetCurrentTimestamp();
		else
			slot->last_inactive_time = 0;

		restored = true;
		break;
	}

	if (!restored)
		ereport(FATAL,
				(errmsg("too many replication slots active before shutdown"),
				 errhint("Increase max_replication_slots and try again.")));
}
|
Add a slot synchronization function.
This commit introduces a new SQL function pg_sync_replication_slots()
which is used to synchronize the logical replication slots from the
primary server to the physical standby so that logical replication can be
resumed after a failover or planned switchover.
A new 'synced' flag is introduced in pg_replication_slots view, indicating
whether the slot has been synchronized from the primary server. On a
standby, synced slots cannot be dropped or consumed, and any attempt to
perform logical decoding on them will result in an error.
The logical replication slots on the primary can be synchronized to the
hot standby by using the 'failover' parameter of
pg_create_logical_replication_slot(), or by using the 'failover' option of
CREATE SUBSCRIPTION during slot creation, and then calling
pg_sync_replication_slots() on standby. For the synchronization to work,
it is mandatory to have a physical replication slot between the primary
and the standby, i.e., 'primary_slot_name' should be configured on the
standby, and 'hot_standby_feedback' must be enabled on the standby. It is
also necessary to specify a valid 'dbname' in the 'primary_conninfo'.
If a logical slot is invalidated on the primary, then that slot on the
standby is also invalidated.
If a logical slot on the primary is valid but is invalidated on the
standby, then that slot is dropped but will be recreated on the standby in
the next pg_sync_replication_slots() call provided the slot still exists
on the primary server. It is okay to recreate such slots as long as these
are not consumable on standby (which is the case currently). This
situation may occur due to the following reasons:
- The 'max_slot_wal_keep_size' on the standby is insufficient to retain
WAL records from the restart_lsn of the slot.
- 'primary_slot_name' is temporarily reset to null and the physical slot
is removed.
The slot synchronization status on the standby can be monitored using the
'synced' column of pg_replication_slots view.
A functionality to automatically synchronize slots by a background worker
and allow logical walsenders to wait for the physical standbys will be done
subsequent commits.
Author: Hou Zhijie, Shveta Malik, Ajin Cherian based on an earlier version by Peter Eisentraut
Reviewed-by: Masahiko Sawada, Bertrand Drouvot, Peter Smith, Dilip Kumar, Nisha Moond, Kuroda Hayato, Amit Kapila
Discussion: https://postgr.es/m/514f6f2f-6833-4539-39f1-96cd1e011f23@enterprisedb.com
2024-02-14 05:15:36 +01:00
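The prerequisites listed in the commit message above can be summarized as a configuration sketch for the standby. This is illustrative only: the host, database, and slot names below are placeholders, not values from the commit.

```
# postgresql.conf on the standby (placeholder values):
primary_conninfo = 'host=primary.example.com dbname=postgres'  # valid dbname required
primary_slot_name = 'physical_slot_1'   # physical slot on the primary
hot_standby_feedback = on               # must be enabled for slot sync
```

With this in place, pg_sync_replication_slots() can be called on the standby to synchronize failover-enabled logical slots from the primary.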
|
|
|
|
|
|
|
/*
|
2024-03-22 09:22:05 +01:00
|
|
|
* Maps an invalidation reason for a replication slot to
|
2024-02-22 00:40:40 +01:00
|
|
|
* ReplicationSlotInvalidationCause.
|
2024-02-14 05:15:36 +01:00
|
|
|
*/
|
|
|
|
ReplicationSlotInvalidationCause
|
2024-03-22 09:22:05 +01:00
|
|
|
GetSlotInvalidationCause(const char *invalidation_reason)
|
2024-02-14 05:15:36 +01:00
|
|
|
{
|
2024-02-22 00:40:40 +01:00
|
|
|
ReplicationSlotInvalidationCause cause;
|
2024-02-22 11:59:58 +01:00
|
|
|
ReplicationSlotInvalidationCause result = RS_INVAL_NONE;
|
2024-02-22 00:40:40 +01:00
|
|
|
bool found PG_USED_FOR_ASSERTS_ONLY = false;
|
|
|
|
|
2024-03-22 09:22:05 +01:00
|
|
|
Assert(invalidation_reason);
|
2024-02-14 05:15:36 +01:00
|
|
|
|
2024-02-22 00:40:40 +01:00
|
|
|
for (cause = RS_INVAL_NONE; cause <= RS_INVAL_MAX_CAUSES; cause++)
|
|
|
|
{
|
2024-03-22 09:22:05 +01:00
|
|
|
if (strcmp(SlotInvalidationCauses[cause], invalidation_reason) == 0)
|
2024-02-22 00:40:40 +01:00
|
|
|
{
|
|
|
|
found = true;
|
2024-02-22 11:59:58 +01:00
|
|
|
result = cause;
|
2024-02-22 00:40:40 +01:00
|
|
|
break;
|
|
|
|
}
|
|
|
|
}
|
2024-02-14 05:15:36 +01:00
|
|
|
|
2024-02-22 00:40:40 +01:00
|
|
|
Assert(found);
|
2024-02-22 11:59:58 +01:00
|
|
|
return result;
|
2024-02-14 05:15:36 +01:00
|
|
|
}
|
2024-03-08 03:40:45 +01:00
|
|
|
|
|
|
|
/*
|
|
|
|
* A helper function to validate slots specified in GUC standby_slot_names.
|
|
|
|
*
|
|
|
|
* The rawname will be parsed, and the result will be saved into *elemlist.
|
|
|
|
*/
|
|
|
|
static bool
|
|
|
|
validate_standby_slots(char *rawname, List **elemlist)
|
|
|
|
{
|
|
|
|
bool ok;
|
|
|
|
|
|
|
|
/* Verify syntax and parse string into a list of identifiers */
|
|
|
|
ok = SplitIdentifierString(rawname, ',', elemlist);
|
|
|
|
|
|
|
|
if (!ok)
|
|
|
|
{
|
|
|
|
GUC_check_errdetail("List syntax is invalid.");
|
|
|
|
}
|
|
|
|
else if (!ReplicationSlotCtl)
|
|
|
|
{
|
|
|
|
/*
|
|
|
|
* We cannot validate the replication slot if the replication slots'
|
|
|
|
* data has not been initialized. This is ok as we will anyway
|
|
|
|
* validate the specified slot when waiting for them to catch up. See
|
|
|
|
* StandbySlotsHaveCaughtup() for details.
|
|
|
|
*/
|
|
|
|
}
|
|
|
|
else
|
|
|
|
{
|
|
|
|
/* Check that the specified slots exist and are logical slots */
|
|
|
|
LWLockAcquire(ReplicationSlotControlLock, LW_SHARED);
|
|
|
|
|
|
|
|
foreach_ptr(char, name, *elemlist)
|
|
|
|
{
|
|
|
|
ReplicationSlot *slot;
|
|
|
|
|
|
|
|
slot = SearchNamedReplicationSlot(name, false);
|
|
|
|
|
|
|
|
if (!slot)
|
|
|
|
{
|
|
|
|
GUC_check_errdetail("replication slot \"%s\" does not exist",
|
|
|
|
name);
|
|
|
|
ok = false;
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (!SlotIsPhysical(slot))
|
|
|
|
{
|
|
|
|
GUC_check_errdetail("\"%s\" is not a physical replication slot",
|
|
|
|
name);
|
|
|
|
ok = false;
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
LWLockRelease(ReplicationSlotControlLock);
|
|
|
|
}
|
|
|
|
|
|
|
|
return ok;
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* GUC check_hook for standby_slot_names
|
|
|
|
*/
|
|
|
|
bool
|
|
|
|
check_standby_slot_names(char **newval, void **extra, GucSource source)
|
|
|
|
{
|
|
|
|
char *rawname;
|
|
|
|
char *ptr;
|
|
|
|
List *elemlist;
|
|
|
|
int size;
|
|
|
|
bool ok;
|
|
|
|
StandbySlotNamesConfigData *config;
|
|
|
|
|
|
|
|
if ((*newval)[0] == '\0')
|
|
|
|
return true;
|
|
|
|
|
|
|
|
/* Need a modifiable copy of the GUC string */
|
|
|
|
rawname = pstrdup(*newval);
|
|
|
|
|
|
|
|
/* Now verify if the specified slots exist and have correct type */
|
|
|
|
ok = validate_standby_slots(rawname, &elemlist);
|
|
|
|
|
|
|
|
if (!ok || elemlist == NIL)
|
|
|
|
{
|
|
|
|
pfree(rawname);
|
|
|
|
list_free(elemlist);
|
|
|
|
return ok;
|
|
|
|
}
|
|
|
|
|
|
|
|
/* Compute the size required for the StandbySlotNamesConfigData struct */
|
|
|
|
size = offsetof(StandbySlotNamesConfigData, slot_names);
|
|
|
|
foreach_ptr(char, slot_name, elemlist)
|
|
|
|
size += strlen(slot_name) + 1;
|
|
|
|
|
|
|
|
/* GUC extra value must be guc_malloc'd, not palloc'd */
|
|
|
|
config = (StandbySlotNamesConfigData *) guc_malloc(LOG, size);
|
|
|
|
|
|
|
|
/* Transform the data into StandbySlotNamesConfigData */
|
|
|
|
config->nslotnames = list_length(elemlist);
|
|
|
|
|
|
|
|
ptr = config->slot_names;
|
|
|
|
foreach_ptr(char, slot_name, elemlist)
|
|
|
|
{
|
|
|
|
strcpy(ptr, slot_name);
|
|
|
|
ptr += strlen(slot_name) + 1;
|
|
|
|
}
|
|
|
|
|
|
|
|
*extra = (void *) config;
|
|
|
|
|
|
|
|
pfree(rawname);
|
|
|
|
list_free(elemlist);
|
|
|
|
return true;
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* GUC assign_hook for standby_slot_names
|
|
|
|
*/
|
|
|
|
void
|
|
|
|
assign_standby_slot_names(const char *newval, void *extra)
|
|
|
|
{
|
|
|
|
/*
|
|
|
|
* The standby slots may have changed, so we must recompute the oldest
|
|
|
|
* LSN.
|
|
|
|
*/
|
|
|
|
ss_oldest_flush_lsn = InvalidXLogRecPtr;
|
|
|
|
|
|
|
|
standby_slot_names_config = (StandbySlotNamesConfigData *) extra;
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Check if the passed slot_name is specified in the standby_slot_names GUC.
|
|
|
|
*/
|
|
|
|
bool
|
|
|
|
SlotExistsInStandbySlotNames(const char *slot_name)
|
|
|
|
{
|
|
|
|
const char *standby_slot_name;
|
|
|
|
|
|
|
|
/* Return false if there is no value in standby_slot_names */
|
|
|
|
if (standby_slot_names_config == NULL)
|
|
|
|
return false;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* XXX: We are not expecting this list to be long so a linear search
|
|
|
|
* shouldn't hurt but if that turns out not to be true then we can cache
|
|
|
|
* this information for each WalSender as well.
|
|
|
|
*/
|
|
|
|
standby_slot_name = standby_slot_names_config->slot_names;
|
|
|
|
for (int i = 0; i < standby_slot_names_config->nslotnames; i++)
|
|
|
|
{
|
|
|
|
if (strcmp(standby_slot_name, slot_name) == 0)
|
|
|
|
return true;
|
|
|
|
|
|
|
|
standby_slot_name += strlen(standby_slot_name) + 1;
|
|
|
|
}
|
|
|
|
|
|
|
|
return false;
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Return true if the slots specified in standby_slot_names have caught up to
|
|
|
|
* the given WAL location, false otherwise.
|
|
|
|
*
|
|
|
|
* The elevel parameter specifies the error level used for logging messages
|
|
|
|
* related to slots that do not exist, are invalidated, or are inactive.
|
|
|
|
*/
|
|
|
|
bool
|
|
|
|
StandbySlotsHaveCaughtup(XLogRecPtr wait_for_lsn, int elevel)
|
|
|
|
{
|
|
|
|
const char *name;
|
|
|
|
int caught_up_slot_num = 0;
|
|
|
|
XLogRecPtr min_restart_lsn = InvalidXLogRecPtr;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Don't need to wait for the standbys to catch up if there is no value in
|
|
|
|
* standby_slot_names.
|
|
|
|
*/
|
|
|
|
if (standby_slot_names_config == NULL)
|
|
|
|
return true;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Don't need to wait for the standbys to catch up if we are on a standby
|
|
|
|
* server, since we do not support syncing slots to cascading standbys.
|
|
|
|
*/
|
|
|
|
if (RecoveryInProgress())
|
|
|
|
return true;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Don't need to wait for the standbys to catch up if they are already
|
|
|
|
* beyond the specified WAL location.
|
|
|
|
*/
|
|
|
|
if (!XLogRecPtrIsInvalid(ss_oldest_flush_lsn) &&
|
|
|
|
ss_oldest_flush_lsn >= wait_for_lsn)
|
|
|
|
return true;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* To prevent concurrent slot dropping and creation while filtering the
|
|
|
|
* slots, take the ReplicationSlotControlLock outside of the loop.
|
|
|
|
*/
|
|
|
|
LWLockAcquire(ReplicationSlotControlLock, LW_SHARED);
|
|
|
|
|
|
|
|
name = standby_slot_names_config->slot_names;
|
|
|
|
for (int i = 0; i < standby_slot_names_config->nslotnames; i++)
|
|
|
|
{
|
|
|
|
XLogRecPtr restart_lsn;
|
|
|
|
bool invalidated;
|
|
|
|
bool inactive;
|
|
|
|
ReplicationSlot *slot;
|
|
|
|
|
|
|
|
slot = SearchNamedReplicationSlot(name, false);
|
|
|
|
|
|
|
|
if (!slot)
|
|
|
|
{
|
|
|
|
/*
|
|
|
|
* If a slot name provided in standby_slot_names does not exist,
|
|
|
|
* report a message and exit the loop. A user can specify a slot
|
|
|
|
* name that does not exist just before the server startup. The
|
|
|
|
* GUC check_hook(validate_standby_slots) cannot validate such a
|
|
|
|
* slot during startup as the ReplicationSlotCtl shared memory is
|
|
|
|
* not initialized at that time. It is also possible for a user to
|
|
|
|
* drop the slot in standby_slot_names afterwards.
|
|
|
|
*/
|
|
|
|
ereport(elevel,
|
|
|
|
errcode(ERRCODE_INVALID_PARAMETER_VALUE),
|
|
|
|
errmsg("replication slot \"%s\" specified in parameter %s does not exist",
|
|
|
|
name, "standby_slot_names"),
|
|
|
|
errdetail("Logical replication is waiting on the standby associated with \"%s\".",
|
|
|
|
name),
|
|
|
|
errhint("Consider creating the slot \"%s\" or amend parameter %s.",
|
|
|
|
name, "standby_slot_names"));
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (SlotIsLogical(slot))
|
|
|
|
{
|
|
|
|
/*
|
|
|
|
* If a logical slot name is provided in standby_slot_names,
|
|
|
|
* report a message and exit the loop. Similar to the non-existent
|
|
|
|
* case, a user can specify a logical slot name in
|
|
|
|
* standby_slot_names before the server startup, or drop an
|
|
|
|
* existing physical slot and recreate a logical slot with the
|
|
|
|
* same name.
|
|
|
|
*/
|
|
|
|
ereport(elevel,
|
|
|
|
errcode(ERRCODE_INVALID_PARAMETER_VALUE),
|
|
|
|
errmsg("cannot have logical replication slot \"%s\" in parameter %s",
|
|
|
|
name, "standby_slot_names"),
|
|
|
|
errdetail("Logical replication is waiting for correction on \"%s\".",
|
|
|
|
name),
|
|
|
|
errhint("Consider removing logical slot \"%s\" from parameter %s.",
|
|
|
|
name, "standby_slot_names"));
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
|
|
|
|
SpinLockAcquire(&slot->mutex);
|
|
|
|
restart_lsn = slot->data.restart_lsn;
|
|
|
|
invalidated = slot->data.invalidated != RS_INVAL_NONE;
|
|
|
|
inactive = slot->active_pid == 0;
|
|
|
|
SpinLockRelease(&slot->mutex);
|
|
|
|
|
|
|
|
if (invalidated)
|
|
|
|
{
|
|
|
|
/* Specified physical slot has been invalidated */
|
|
|
|
ereport(elevel,
|
|
|
|
errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
|
|
|
|
errmsg("physical slot \"%s\" specified in parameter %s has been invalidated",
|
|
|
|
name, "standby_slot_names"),
|
|
|
|
errdetail("Logical replication is waiting on the standby associated with \"%s\".",
|
|
|
|
name),
|
|
|
|
errhint("Consider dropping and recreating the slot \"%s\" or amend parameter %s.",
|
|
|
|
name, "standby_slot_names"));
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (XLogRecPtrIsInvalid(restart_lsn) || restart_lsn < wait_for_lsn)
|
|
|
|
{
|
|
|
|
/* Log a message if no active_pid for this physical slot */
|
|
|
|
if (inactive)
|
|
|
|
ereport(elevel,
|
|
|
|
errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
|
|
|
|
errmsg("replication slot \"%s\" specified in parameter %s does not have active_pid",
|
|
|
|
name, "standby_slot_names"),
|
|
|
|
errdetail("Logical replication is waiting on the standby associated with \"%s\".",
|
|
|
|
name),
|
|
|
|
errhint("Consider starting standby associated with \"%s\" or amend parameter %s.",
|
|
|
|
name, "standby_slot_names"));
|
|
|
|
|
|
|
|
			/* Exit the loop as the current slot hasn't caught up. */
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
|
|
|
|
Assert(restart_lsn >= wait_for_lsn);
|
|
|
|
|
|
|
|
if (XLogRecPtrIsInvalid(min_restart_lsn) ||
|
|
|
|
min_restart_lsn > restart_lsn)
|
|
|
|
min_restart_lsn = restart_lsn;
|
|
|
|
|
|
|
|
caught_up_slot_num++;
|
|
|
|
|
|
|
|
name += strlen(name) + 1;
|
|
|
|
}
|
|
|
|
|
|
|
|
LWLockRelease(ReplicationSlotControlLock);
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Return false if not all the standbys have caught up to the specified
|
|
|
|
* WAL location.
|
|
|
|
*/
|
|
|
|
if (caught_up_slot_num != standby_slot_names_config->nslotnames)
|
|
|
|
return false;
|
|
|
|
|
|
|
|
/* The ss_oldest_flush_lsn must not retreat. */
|
|
|
|
Assert(XLogRecPtrIsInvalid(ss_oldest_flush_lsn) ||
|
|
|
|
min_restart_lsn >= ss_oldest_flush_lsn);
|
|
|
|
|
|
|
|
ss_oldest_flush_lsn = min_restart_lsn;
|
|
|
|
|
|
|
|
return true;
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Wait for physical standbys to confirm receiving the given lsn.
|
|
|
|
*
|
|
|
|
* Used by logical decoding SQL functions. It waits for physical standbys
|
|
|
|
* corresponding to the physical slots specified in the standby_slot_names GUC.
|
|
|
|
*/
|
|
|
|
void
|
|
|
|
WaitForStandbyConfirmation(XLogRecPtr wait_for_lsn)
|
|
|
|
{
|
|
|
|
/*
|
|
|
|
* Don't need to wait for the standby to catch up if the current acquired
|
|
|
|
* slot is not a logical failover slot, or there is no value in
|
|
|
|
* standby_slot_names.
|
|
|
|
*/
|
|
|
|
if (!MyReplicationSlot->data.failover || !standby_slot_names_config)
|
|
|
|
return;
|
|
|
|
|
|
|
|
ConditionVariablePrepareToSleep(&WalSndCtl->wal_confirm_rcv_cv);
|
|
|
|
|
|
|
|
for (;;)
|
|
|
|
{
|
|
|
|
CHECK_FOR_INTERRUPTS();
|
|
|
|
|
|
|
|
if (ConfigReloadPending)
|
|
|
|
{
|
|
|
|
ConfigReloadPending = false;
|
|
|
|
ProcessConfigFile(PGC_SIGHUP);
|
|
|
|
}
|
|
|
|
|
|
|
|
/* Exit if done waiting for every slot. */
|
|
|
|
if (StandbySlotsHaveCaughtup(wait_for_lsn, WARNING))
|
|
|
|
break;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Wait for the slots in the standby_slot_names to catch up, but use a
|
|
|
|
* timeout (1s) so we can also check if the standby_slot_names has
|
|
|
|
* been changed.
|
|
|
|
*/
|
|
|
|
ConditionVariableTimedSleep(&WalSndCtl->wal_confirm_rcv_cv, 1000,
|
|
|
|
WAIT_EVENT_WAIT_FOR_STANDBY_CONFIRMATION);
|
|
|
|
}
|
|
|
|
|
|
|
|
ConditionVariableCancelSleep();
|
|
|
|
}
|