postgresql/src/backend/replication
Alvaro Herrera a61b1f7482
Rework query relation permission checking
Currently, information about the permissions to be checked on relations
mentioned in a query is stored in their range table entries.  So the
executor must scan the entire range table looking for relations that
need to have permissions checked.  This can make the permission checking
part of the executor initialization needlessly expensive when many
inheritance children are present in the range table.  While the
permissions need not be checked on the individual child relations, the
executor still must visit every range table entry to filter them out.

This commit moves the permission checking information out of the range
table entries into a new node type called RTEPermissionInfo.  Every
top-level (inheritance "root") RTE_RELATION entry in the range table
gets one, and a list of those is maintained alongside the range table.
This new list is initialized by the parser when initializing the range
table.  The rewriter can add more entries to it as rules/views are
expanded.  Finally, the planner combines the lists of the individual
subqueries into one flat list that is passed to the executor for
checking.

To make it quick to find the RTEPermissionInfo entry belonging to a
given relation, RangeTblEntry gets a new Index field 'perminfoindex'
that stores the index of the corresponding RTEPermissionInfo in the
query's list of such nodes.
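
As a sketch of the resulting lookup (the helper name below is made up
for illustration; the 1-based index convention, with 0 meaning "no
permissions to check for this RTE", follows the in-core code):

/* Resolve an RTE's RTEPermissionInfo via its perminfoindex; compare
 * the in-core helper getRTEPermissionInfo() in parse_relation.c. */
static RTEPermissionInfo *
lookup_perminfo(List *rteperminfos, RangeTblEntry *rte)
{
	Assert(rte->perminfoindex > 0);
	return list_nth_node(RTEPermissionInfo, rteperminfos,
						 rte->perminfoindex - 1);
}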

ExecutorCheckPerms_hook has gained another List * argument; the
signature is now:

typedef bool (*ExecutorCheckPerms_hook_type) (List *rangeTable,
                                              List *rtePermInfos,
                                              bool ereport_on_violation);

The first argument is no longer used by any in-core caller of the hook,
but we leave it in place because out-of-core implementations may still
rely on it.  Implementations should now scan the rtePermInfos list to
determine which operations to allow or deny.
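
For illustration, here is a minimal sketch of an out-of-core hook
implementation that scans the new list.  The module and its
deny-all-writes policy are hypothetical; the RTEPermissionInfo fields
used (relid, requiredPerms) are the ones introduced by this commit.

/* deny_writes.c -- hypothetical example extension, a sketch only */
#include "postgres.h"

#include "executor/executor.h"
#include "fmgr.h"
#include "nodes/parsenodes.h"
#include "utils/acl.h"
#include "utils/lsyscache.h"

PG_MODULE_MAGIC;

static ExecutorCheckPerms_hook_type prev_hook = NULL;

static bool
deny_writes_check_perms(List *rangeTable, List *rtePermInfos,
						bool ereport_on_violation)
{
	ListCell   *l;

	foreach(l, rtePermInfos)
	{
		RTEPermissionInfo *perminfo = lfirst_node(RTEPermissionInfo, l);

		/* requiredPerms is the bitmask of ACL_* rights to be checked */
		if (perminfo->requiredPerms & (ACL_INSERT | ACL_UPDATE | ACL_DELETE))
		{
			if (ereport_on_violation)
				aclcheck_error(ACLCHECK_NO_PRIV, OBJECT_TABLE,
							   get_rel_name(perminfo->relid));
			return false;
		}
	}

	/* chain to any previously installed hook, else allow */
	return prev_hook == NULL ||
		prev_hook(rangeTable, rtePermInfos, ereport_on_violation);
}

void
_PG_init(void)
{
	prev_hook = ExecutorCheckPerms_hook;
	ExecutorCheckPerms_hook = deny_writes_check_perms;
}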

Author: Amit Langote <amitlangote09@gmail.com>
Discussion: https://postgr.es/m/CA+HiwqGjJDmUhDSfv-U2qhKJjt9ST7Xh9JXC_irsAQ1TAUsJYg@mail.gmail.com
2022-12-06 16:09:24 +01:00
libpqwalreceiver meson: Add windows resource files 2022-10-05 09:56:05 -07:00
logical Rework query relation permission checking 2022-12-06 16:09:24 +01:00
pgoutput Fix incorrect output from pgoutput when using column lists. 2022-12-02 10:52:58 +05:30
.gitignore Build all Flex files standalone 2022-09-04 12:09:01 +07:00
Makefile Build all Flex files standalone 2022-09-04 12:09:01 +07:00
README code: replace 'master' with 'primary' where appropriate. 2020-07-08 12:57:23 -07:00
meson.build meson: Add initial version of meson based build system 2022-09-21 22:37:17 -07:00
repl_gram.y doc: Fix description of replication command CREATE_REPLICATION_SLOT 2022-10-13 08:53:42 +09:00
repl_scanner.l Split up guc.c for better build speed and ease of maintenance. 2022-09-13 11:11:45 -04:00
slot.c Ignore invalidated slots while computing oldest catalog Xmin 2022-11-22 10:56:07 +01:00
slotfuncs.c Improve comments atop pg_get_replication_slots. 2022-11-22 15:22:00 +05:30
syncrep.c Store GUC data in a memory context, instead of using malloc(). 2022-10-14 12:10:48 -04:00
syncrep_gram.y Split up guc.c for better build speed and ease of maintenance. 2022-09-13 11:11:45 -04:00
syncrep_scanner.l Split up guc.c for better build speed and ease of maintenance. 2022-09-13 11:11:45 -04:00
walreceiver.c Fix slowdown in TAP tests due to recent walreceiver change. 2022-11-17 11:30:14 +13:00
walreceiverfuncs.c Split xlog.c into xlog.c and xlogrecovery.c. 2022-02-16 09:30:38 +02:00
walsender.c Add additional checks while creating the initial decoding snapshot. 2022-11-21 08:54:43 +05:30

src/backend/replication/README

Walreceiver - libpqwalreceiver API
----------------------------------

The transport-specific part of walreceiver, responsible for connecting to
the primary server, receiving WAL from it and sending messages back, is
loaded dynamically to avoid having to link the main server binary with
libpq.  The dynamically loaded module lives in the libpqwalreceiver
subdirectory.

The dynamically loaded module implements a set of callback functions;
each of them is described in src/include/replication/walreceiver.h.

This API should be considered internal at the moment, but we could open it
up for 3rd party replacements of libpqwalreceiver in the future, allowing
pluggable methods for receiving WAL.
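
For orientation, a condensed sketch of the arrangement follows; the
callback subset and signatures shown here are illustrative only, and
walreceiver.h remains the authoritative reference.

/* Condensed sketch; see src/include/replication/walreceiver.h for the
 * authoritative declarations. */
typedef struct WalReceiverConn WalReceiverConn;	/* opaque to the core */

typedef struct WalReceiverFunctionsType
{
	WalReceiverConn *(*walrcv_connect) (const char *conninfo, bool logical,
										const char *appname, char **err);
	void		(*walrcv_send) (WalReceiverConn *conn,
								const char *buffer, int nbytes);
	void		(*walrcv_disconnect) (WalReceiverConn *conn);
	/* ... the real struct carries a dozen or so more callbacks ... */
} WalReceiverFunctionsType;

/* Filled in by libpqwalreceiver's _PG_init() when the core loads the
 * module with load_file("libpqwalreceiver", false); core code calls
 * through wrapper macros that dereference this pointer. */
extern WalReceiverFunctionsType *WalReceiverFunctions;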

Walreceiver IPC
---------------

When WAL replay in the startup process has reached the end of the archived
WAL restorable using restore_command, it starts the walreceiver process
to fetch more WAL (if streaming replication is configured).

Walreceiver is a postmaster subprocess, so the startup process can't fork it
directly.  Instead, it sends a signal to postmaster, asking it to launch
walreceiver.  Before that, however, the startup process fills in
WalRcvData->conninfo and WalRcvData->slotname, and initializes the starting
point in WalRcvData->receiveStart, as sketched below.
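
A condensed sketch of that handoff, following RequestXLogStreaming() in
walreceiverfuncs.c (state checks, timeline bookkeeping and error
handling omitted; the function name here is made up):

#include "postgres.h"
#include "replication/walreceiver.h"
#include "storage/pmsignal.h"
#include "storage/spin.h"

static void
request_streaming_sketch(XLogRecPtr recptr, const char *conninfo,
						 const char *slotname)
{
	WalRcvData *walrcv = WalRcv;

	/* hand the connection parameters and start point to walreceiver */
	SpinLockAcquire(&walrcv->mutex);
	strlcpy(walrcv->conninfo, conninfo, MAXCONNINFO);
	strlcpy(walrcv->slotname, slotname ? slotname : "", NAMEDATALEN);
	walrcv->receiveStart = recptr;
	SpinLockRelease(&walrcv->mutex);

	/* the startup process cannot fork; ask postmaster to launch walreceiver */
	SendPostmasterSignal(PMSIGNAL_START_WALRECEIVER);
}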

As walreceiver receives WAL from the primary server, and writes and flushes
it to disk (in pg_wal), it updates WalRcvData->flushedUpto and signals the
startup process, letting it know how far WAL replay can advance.

Walreceiver sends information about replication progress to the primary server
whenever it either writes or flushes new WAL, or the specified interval elapses.
This is used for reporting purpose.
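
On the flush side, the shared-memory update looks roughly like this
(condensed from XLogWalRcvFlush() in walreceiver.c; error handling and
statistics reporting omitted, and the function name is made up):

#include "postgres.h"
#include "access/xlogrecovery.h"
#include "replication/walreceiver.h"
#include "storage/spin.h"

static void
advertise_flush_position(XLogRecPtr flushPtr, TimeLineID tli)
{
	WalRcvData *walrcv = WalRcv;

	SpinLockAcquire(&walrcv->mutex);
	if (walrcv->flushedUpto < flushPtr)
	{
		walrcv->flushedUpto = flushPtr;
		walrcv->receivedTLI = tli;
	}
	SpinLockRelease(&walrcv->mutex);

	/* wake the startup process so WAL replay can advance */
	WakeupRecovery();
}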

Walsender IPC
-------------

At shutdown, postmaster handles walsender processes differently from regular
backends. It waits for regular backends to die before writing the
shutdown checkpoint and terminating pgarch and other auxiliary processes, but
that's not desirable for walsenders, because we want the standby servers to
receive all the WAL, including the shutdown checkpoint, before the primary
is shut down. Therefore postmaster treats walsenders like the pgarch process,
and instructs them to terminate at PM_SHUTDOWN_2 phase, after all regular
backends have died and checkpointer has issued the shutdown checkpoint.

When postmaster accepts a connection, it immediately forks a new process
to handle the handshake and authentication, and the process initializes to
become a backend. Postmaster doesn't know if the process becomes a regular
backend or a walsender process at that time - that's indicated in the
connection handshake - so we need some extra signaling to let postmaster
identify walsender processes.

When walsender process starts up, it marks itself as a walsender process in
the PMSignal array. That way postmaster can tell it apart from regular
backends.

Note that no big harm is done if postmaster thinks that a walsender is a
regular backend; it will just terminate the walsender earlier in the
shutdown phase.  A walsender looks like a regular backend both until it has
finished initialization and marked itself in the PMSignal array, and again
at process termination, after it has unmarked its PMSignal slot.
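
In code terms, the two sides look roughly as follows (fragments
condensed from walsender.c and postmaster.c; the surrounding logic is
omitted):

/* Walsender side, early in its initialization (compare InitWalSender()):
 * flip this child's PMSignal slot to the "am walsender" state. */
MarkPostmasterChildWalSender();

/* Postmaster side, when signalling a subset of its children (compare
 * SignalSomeChildren() in postmaster.c): reclassify any backend that
 * has registered itself as a walsender before applying the filter. */
if (bp->bkend_type == BACKEND_TYPE_NORMAL &&
	IsPostmasterChildWalSender(bp->child_slot))
	bp->bkend_type = BACKEND_TYPE_WALSND;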

Each walsender allocates an entry from the WalSndCtl array, and tracks
information about replication progress.  Users can monitor walsenders via
statistics views such as pg_stat_replication.


Walsender - walreceiver protocol
--------------------------------

See manual.