docs: replace 'master' with 'primary' where appropriate.

Also changed "in the primary" to "on the primary", and added a few
"the" before "primary".

Author: Andres Freund
Reviewed-By: David Steele
Discussion: https://postgr.es/m/20200615182235.x7lch5n6kcjq4aue@alap3.anarazel.de
Andres Freund 2020-06-15 10:12:58 -07:00
parent e07633646a
commit 9e101cf606
15 changed files with 86 additions and 87 deletions


@@ -253,7 +253,7 @@ SET client_min_messages = DEBUG1;
 implies that operating system collation rules must never change.
 Though rare, updates to operating system collation rules can
 cause these issues. More commonly, an inconsistency in the
-collation order between a master server and a standby server is
+collation order between a primary server and a standby server is
 implicated, possibly because the <emphasis>major</emphasis> operating
 system version in use is inconsistent. Such inconsistencies will
 generally only arise on standby servers, and so can generally


@@ -964,7 +964,7 @@ SELECT * FROM pg_stop_backup(false, true);
 non-exclusive one, but it differs in a few key steps. This type of
 backup can only be taken on a primary and does not allow concurrent
 backups. Moreover, because it creates a backup label file, as
-described below, it can block automatic restart of the master server
+described below, it can block automatic restart of the primary server
 after a crash. On the other hand, the erroneous removal of this
 file from a backup or standby is a common mistake, which can result
 in serious data corruption. If it is necessary to use this method,
@@ -1033,9 +1033,9 @@ SELECT pg_start_backup('label', true);
 this will result in corruption. Confusion about when it is appropriate
 to remove this file is a common cause of data corruption when using this
 method; be very certain that you remove the file only on an existing
-master and never when building a standby or restoring a backup, even if
+primary and never when building a standby or restoring a backup, even if
 you are building a standby that will subsequently be promoted to a new
-master.
+primary.
 </para>
 </listitem>
 <listitem>
@@ -1128,16 +1128,16 @@ SELECT pg_stop_backup();
 <para>
 It is often a good idea to also omit from the backup the files
 within the cluster's <filename>pg_replslot/</filename> directory, so that
-replication slots that exist on the master do not become part of the
+replication slots that exist on the primary do not become part of the
 backup. Otherwise, the subsequent use of the backup to create a standby
 may result in indefinite retention of WAL files on the standby, and
-possibly bloat on the master if hot standby feedback is enabled, because
+possibly bloat on the primary if hot standby feedback is enabled, because
 the clients that are using those replication slots will still be connecting
-to and updating the slots on the master, not the standby. Even if the
-backup is only intended for use in creating a new master, copying the
+to and updating the slots on the primary, not the standby. Even if the
+backup is only intended for use in creating a new primary, copying the
 replication slots isn't expected to be particularly useful, since the
 contents of those slots will likely be badly out of date by the time
-the new master comes on line.
+the new primary comes on line.
 </para>
 <para>
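For orientation, a minimal sketch of a file-system copy that skips the slot data as recommended above; the tool choice (rsync), paths, and hostname are illustrative and not part of the patch:

    # run from the primary; paths and host are placeholders
    rsync -a \
          --exclude='pg_replslot/*' \
          /var/lib/postgresql/data/ standby.example.com:/var/lib/postgresql/data/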


@@ -697,7 +697,7 @@ include_dir 'conf.d'
 <para>
 When running a standby server, you must set this parameter to the
-same or higher value than on the master server. Otherwise, queries
+same or higher value than on the primary server. Otherwise, queries
 will not be allowed in the standby server.
 </para>
 </listitem>
@@ -1643,7 +1643,7 @@ include_dir 'conf.d'
 <para>
 When running a standby server, you must set this parameter to the
-same or higher value than on the master server. Otherwise, queries
+same or higher value than on the primary server. Otherwise, queries
 will not be allowed in the standby server.
 </para>
 </listitem>
@@ -2259,7 +2259,7 @@ include_dir 'conf.d'
 <para>
 When running a standby server, you must set this parameter to the
-same or higher value than on the master server. Otherwise, queries
+same or higher value than on the primary server. Otherwise, queries
 will not be allowed in the standby server.
 </para>
@@ -3253,7 +3253,7 @@ include_dir 'conf.d'
 <varname>archive_timeout</varname> &mdash; it will bloat your archive
 storage. <varname>archive_timeout</varname> settings of a minute or so are
 usually reasonable. You should consider using streaming replication,
-instead of archiving, if you want data to be copied off the master
+instead of archiving, if you want data to be copied off the primary
 server more quickly than that.
 If this value is specified without units, it is taken as seconds.
 This parameter can only be set in the
@@ -3678,12 +3678,12 @@ restore_command = 'copy "C:\\server\\archivedir\\%f" "%p"' # Windows
 These settings control the behavior of the built-in
 <firstterm>streaming replication</firstterm> feature (see
 <xref linkend="streaming-replication"/>). Servers will be either a
-master or a standby server. Masters can send data, while standbys
+primary or a standby server. Primaries can send data, while standbys
 are always receivers of replicated data. When cascading replication
 (see <xref linkend="cascading-replication"/>) is used, standby servers
 can also be senders, as well as receivers.
 Parameters are mainly for sending and standby servers, though some
-parameters have meaning only on the master server. Settings may vary
+parameters have meaning only on the primary server. Settings may vary
 across the cluster without problems if that is required.
 </para>
@@ -3693,10 +3693,10 @@ restore_command = 'copy "C:\\server\\archivedir\\%f" "%p"' # Windows
 <para>
 These parameters can be set on any server that is
 to send replication data to one or more standby servers.
-The master is always a sending server, so these parameters must
-always be set on the master.
+The primary is always a sending server, so these parameters must
+always be set on the primary.
 The role and meaning of these parameters does not change after a
-standby becomes the master.
+standby becomes the primary.
 </para>
 <variablelist>
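As a hedged illustration of the sending-server settings this section describes, a primary's postgresql.conf might contain something like the following; the values are placeholders, not recommendations:

    # postgresql.conf on the primary (illustrative values)
    wal_level = replica
    max_wal_senders = 10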
@@ -3724,7 +3724,7 @@ restore_command = 'copy "C:\\server\\archivedir\\%f" "%p"' # Windows
 <para>
 When running a standby server, you must set this parameter to the
-same or higher value than on the master server. Otherwise, queries
+same or higher value than on the primary server. Otherwise, queries
 will not be allowed in the standby server.
 </para>
 </listitem>
@@ -3855,19 +3855,19 @@ restore_command = 'copy "C:\\server\\archivedir\\%f" "%p"' # Windows
 </variablelist>
 </sect2>
-<sect2 id="runtime-config-replication-master">
-<title>Master Server</title>
+<sect2 id="runtime-config-replication-primary">
+<title>Primary Server</title>
 <para>
-These parameters can be set on the master/primary server that is
+These parameters can be set on the primary server that is
 to send replication data to one or more standby servers.
 Note that in addition to these parameters,
-<xref linkend="guc-wal-level"/> must be set appropriately on the master
+<xref linkend="guc-wal-level"/> must be set appropriately on the primary
 server, and optionally WAL archiving can be enabled as
 well (see <xref linkend="runtime-config-wal-archiving"/>).
 The values of these parameters on standby servers are irrelevant,
 although you may wish to set them there in preparation for the
-possibility of a standby becoming the master.
+possibility of a standby becoming the primary.
 </para>
 <variablelist>
@@ -4042,7 +4042,7 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
 <para>
 These settings control the behavior of a standby server that is
-to receive replication data. Their values on the master server
+to receive replication data. Their values on the primary server
 are irrelevant.
 </para>
@@ -4369,7 +4369,7 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
 of time. For example, if
 you set this parameter to <literal>5min</literal>, the standby will
 replay each transaction commit only when the system time on the standby
-is at least five minutes past the commit time reported by the master.
+is at least five minutes past the commit time reported by the primary.
 If this value is specified without units, it is taken as milliseconds.
 The default is zero, adding no delay.
 </para>
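Taking the documentation's own five-minute example, the delayed-replay setting would be configured on the standby roughly like this:

    # postgresql.conf on the standby
    recovery_min_apply_delay = '5min'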
@@ -4377,10 +4377,10 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
 It is possible that the replication delay between servers exceeds the
 value of this parameter, in which case no delay is added.
 Note that the delay is calculated between the WAL time stamp as written
-on master and the current time on the standby. Delays in transfer
+on primary and the current time on the standby. Delays in transfer
 because of network lag or cascading replication configurations
 may reduce the actual wait time significantly. If the system
-clocks on master and standby are not synchronized, this may lead to
+clocks on primary and standby are not synchronized, this may lead to
 recovery applying records earlier than expected; but that is not a
 major issue because useful settings of this parameter are much larger
 than typical time deviations between servers.
@@ -4402,7 +4402,7 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
 except crash recovery.
 <varname>hot_standby_feedback</varname> will be delayed by use of this feature
-which could lead to bloat on the master; use both together with care.
+which could lead to bloat on the primary; use both together with care.
 <warning>
 <para>
@@ -8998,7 +8998,7 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir'
 <para>
 When running a standby server, you must set this parameter to the
-same or higher value than on the master server. Otherwise, queries
+same or higher value than on the primary server. Otherwise, queries
 will not be allowed in the standby server.
 </para>
 </listitem>


@@ -244,7 +244,7 @@
 <productname>PostgreSQL</productname> replication solutions can be developed
 externally. For example, <application> <ulink
 url="http://www.slony.info">Slony-I</ulink></application> is a popular
-master/standby replication solution that is developed independently
+primary/standby replication solution that is developed independently
 from the core project.
 </para>
 </sect1>


@@ -120,7 +120,7 @@
 system residing on another computer. The only restriction is that
 the mirroring must be done in a way that ensures the standby server
 has a consistent copy of the file system &mdash; specifically, writes
-to the standby must be done in the same order as those on the master.
+to the standby must be done in the same order as those on the primary.
 <productname>DRBD</productname> is a popular file system replication solution
 for Linux.
 </para>
@@ -146,7 +146,7 @@ protocol to make nodes agree on a serializable transactional order.
 stream of write-ahead log (<acronym>WAL</acronym>)
 records. If the main server fails, the standby contains
 almost all of the data of the main server, and can be quickly
-made the new master database server. This can be synchronous or
+made the new primary database server. This can be synchronous or
 asynchronous and can only be done for the entire database server.
 </para>
 <para>
@@ -167,7 +167,7 @@ protocol to make nodes agree on a serializable transactional order.
 logical replication constructs a stream of logical data modifications
 from the WAL. Logical replication allows the data changes from
 individual tables to be replicated. Logical replication doesn't require
-a particular server to be designated as a master or a replica but allows
+a particular server to be designated as a primary or a replica but allows
 data to flow in multiple directions. For more information on logical
 replication, see <xref linkend="logical-replication"/>. Through the
 logical decoding interface (<xref linkend="logicaldecoding"/>),
@@ -219,9 +219,9 @@ protocol to make nodes agree on a serializable transactional order.
 this is unacceptable, either the middleware or the application
 must query such values from a single server and then use those
 values in write queries. Another option is to use this replication
-option with a traditional master-standby setup, i.e. data modification
-queries are sent only to the master and are propagated to the
-standby servers via master-standby replication, not by the replication
+option with a traditional primary-standby setup, i.e. data modification
+queries are sent only to the primary and are propagated to the
+standby servers via primary-standby replication, not by the replication
 middleware. Care must also be taken that all
 transactions either commit or abort on all servers, perhaps
 using two-phase commit (<xref linkend="sql-prepare-transaction"/>
@@ -263,7 +263,7 @@ protocol to make nodes agree on a serializable transactional order.
 to reduce the communication overhead. Synchronous multimaster
 replication is best for mostly read workloads, though its big
 advantage is that any server can accept write requests &mdash;
-there is no need to partition workloads between master and
+there is no need to partition workloads between primary and
 standby servers, and because the data changes are sent from one
 server to another, there is no problem with non-deterministic
 functions like <function>random()</function>.
@@ -363,7 +363,7 @@ protocol to make nodes agree on a serializable transactional order.
 </row>
 <row>
-<entry>No master server overhead</entry>
+<entry>No overhead on primary</entry>
 <entry align="center">&bull;</entry>
 <entry align="center"></entry>
 <entry align="center">&bull;</entry>
@@ -387,7 +387,7 @@ protocol to make nodes agree on a serializable transactional order.
 </row>
 <row>
-<entry>Master failure will never lose data</entry>
+<entry>Primary failure will never lose data</entry>
 <entry align="center">&bull;</entry>
 <entry align="center">&bull;</entry>
 <entry align="center">with sync on</entry>
@@ -454,7 +454,7 @@ protocol to make nodes agree on a serializable transactional order.
 partitioned by offices, e.g., London and Paris, with a server
 in each office. If queries combining London and Paris data
 are necessary, an application can query both servers, or
-master/standby replication can be used to keep a read-only copy
+primary/standby replication can be used to keep a read-only copy
 of the other office's data on each server.
 </para>
 </listitem>
@@ -621,13 +621,13 @@ protocol to make nodes agree on a serializable transactional order.
 <para>
 In standby mode, the server continuously applies WAL received from the
-master server. The standby server can read WAL from a WAL archive
-(see <xref linkend="guc-restore-command"/>) or directly from the master
+primary server. The standby server can read WAL from a WAL archive
+(see <xref linkend="guc-restore-command"/>) or directly from the primary
 over a TCP connection (streaming replication). The standby server will
 also attempt to restore any WAL found in the standby cluster's
 <filename>pg_wal</filename> directory. That typically happens after a server
 restart, when the standby replays again WAL that was streamed from the
-master before the restart, but you can also manually copy files to
+primary before the restart, but you can also manually copy files to
 <filename>pg_wal</filename> at any time to have them replayed.
 </para>
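For context, a hedged sketch of a standby configured as described here, able both to stream from the primary and to fall back to the archive; the connection string reuses the example that appears later in this file, and the archive path is a placeholder. On PostgreSQL 12 and later the standby's data directory also needs an empty standby.signal file:

    # postgresql.conf on the standby
    primary_conninfo = 'host=192.168.1.50 port=5432 user=foo password=foopass'
    restore_command = 'cp /mnt/server/archivedir/%f %p'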
@@ -652,20 +652,20 @@ protocol to make nodes agree on a serializable transactional order.
 <function>pg_promote()</function> is called, or a trigger file is found
 (<varname>promote_trigger_file</varname>). Before failover,
 any WAL immediately available in the archive or in <filename>pg_wal</filename> will be
-restored, but no attempt is made to connect to the master.
+restored, but no attempt is made to connect to the primary.
 </para>
 </sect2>
-<sect2 id="preparing-master-for-standby">
-<title>Preparing the Master for Standby Servers</title>
+<sect2 id="preparing-primary-for-standby">
+<title>Preparing the Primary for Standby Servers</title>
 <para>
 Set up continuous archiving on the primary to an archive directory
 accessible from the standby, as described
 in <xref linkend="continuous-archiving"/>. The archive location should be
-accessible from the standby even when the master is down, i.e. it should
+accessible from the standby even when the primary is down, i.e. it should
 reside on the standby server itself or another trusted server, not on
-the master server.
+the primary server.
 </para>
 <para>
@@ -898,7 +898,7 @@ primary_conninfo = 'host=192.168.1.50 port=5432 user=foo password=foopass'
 <link linkend="monitoring-pg-stat-replication-view"><structname>
 pg_stat_replication</structname></link> view. Large differences between
 <function>pg_current_wal_lsn</function> and the view's <literal>sent_lsn</literal> field
-might indicate that the master server is under heavy load, while
+might indicate that the primary server is under heavy load, while
 differences between <literal>sent_lsn</literal> and
 <function>pg_last_wal_receive_lsn</function> on the standby might indicate
 network delay, or that the standby is under heavy load.
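The comparison described above can be made on the primary with a query along these lines; a sketch using the view and functions named in the text:

    SELECT application_name, sent_lsn,
           pg_wal_lsn_diff(pg_current_wal_lsn(), sent_lsn) AS send_lag_bytes
      FROM pg_stat_replication;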
@@ -921,9 +921,9 @@ primary_conninfo = 'host=192.168.1.50 port=5432 user=foo password=foopass'
 <secondary>streaming replication</secondary>
 </indexterm>
 <para>
-Replication slots provide an automated way to ensure that the master does
+Replication slots provide an automated way to ensure that the primary does
 not remove WAL segments until they have been received by all standbys,
-and that the master does not remove rows which could cause a
+and that the primary does not remove rows which could cause a
 <link linkend="hot-standby-conflict">recovery conflict</link> even when the
 standby is disconnected.
 </para>
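A minimal sketch of the slot setup this passage describes, using the node_a_slot name that appears in the surrounding examples:

    -- on the primary
    SELECT pg_create_physical_replication_slot('node_a_slot');

    # in the standby's postgresql.conf
    primary_slot_name = 'node_a_slot'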
@@ -1001,23 +1001,22 @@ primary_slot_name = 'node_a_slot'
 <para>
 The cascading replication feature allows a standby server to accept replication
 connections and stream WAL records to other standbys, acting as a relay.
-This can be used to reduce the number of direct connections to the master
+This can be used to reduce the number of direct connections to the primary
 and also to minimize inter-site bandwidth overheads.
 </para>
 <para>
 A standby acting as both a receiver and a sender is known as a cascading
-standby. Standbys that are more directly connected to the master are known
+standby. Standbys that are more directly connected to the primary are known
 as upstream servers, while those standby servers further away are downstream
 servers. Cascading replication does not place limits on the number or
 arrangement of downstream servers, though each standby connects to only
-one upstream server which eventually links to a single master/primary
-server.
+one upstream server which eventually links to a single primary server.
 </para>
 <para>
 A cascading standby sends not only WAL records received from the
-master but also those restored from the archive. So even if the replication
+primary but also those restored from the archive. So even if the replication
 connection in some upstream connection is terminated, streaming replication
 continues downstream for as long as new WAL records are available.
 </para>
@@ -1033,8 +1032,8 @@ primary_slot_name = 'node_a_slot'
 </para>
 <para>
-If an upstream standby server is promoted to become new master, downstream
-servers will continue to stream from the new master if
+If an upstream standby server is promoted to become the new primary, downstream
+servers will continue to stream from the new primary if
 <varname>recovery_target_timeline</varname> is set to <literal>'latest'</literal> (the default).
 </para>
@@ -1120,7 +1119,7 @@ primary_slot_name = 'node_a_slot'
 a non-empty value. <varname>synchronous_commit</varname> must also be set to
 <literal>on</literal>, but since this is the default value, typically no change is
 required. (See <xref linkend="runtime-config-wal-settings"/> and
-<xref linkend="runtime-config-replication-master"/>.)
+<xref linkend="runtime-config-replication-primary"/>.)
 This configuration will cause each commit to wait for
 confirmation that the standby has written the commit record to durable
 storage.
@@ -1145,8 +1144,8 @@ primary_slot_name = 'node_a_slot'
 confirmation that the commit record has been received. These parameters
 allow the administrator to specify which standby servers should be
 synchronous standbys. Note that the configuration of synchronous
-replication is mainly on the master. Named standbys must be directly
-connected to the master; the master knows nothing about downstream
+replication is mainly on the primary. Named standbys must be directly
+connected to the primary; the primary knows nothing about downstream
 standby servers using cascaded replication.
 </para>
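As a hedged illustration of the primary-side synchronous replication settings this passage refers to, reusing the standby names from the example in the next hunk:

    # postgresql.conf on the primary
    synchronous_commit = on
    synchronous_standby_names = 'ANY 2 (s1, s2, s3)'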
@@ -1504,7 +1503,7 @@ synchronous_standby_names = 'ANY 2 (s1, s2, s3)'
 <para>
 Note that in this mode, the server will apply WAL one file at a
 time, so if you use the standby server for queries (see Hot Standby),
-there is a delay between an action in the master and when the
+there is a delay between an action in the primary and when the
 action becomes visible in the standby, corresponding the time it takes
 to fill up the WAL file. <varname>archive_timeout</varname> can be used to make that delay
 shorter. Also note that you can't combine streaming replication with
@@ -2049,7 +2048,7 @@ if (!triggered)
 cleanup of old row versions when there are no transactions that need to
 see them to ensure correct visibility of data according to MVCC rules.
 However, this rule can only be applied for transactions executing on the
-master. So it is possible that cleanup on the master will remove row
+primary. So it is possible that cleanup on the primary will remove row
 versions that are still visible to a transaction on the standby.
 </para>
@@ -2438,7 +2437,7 @@ LOG: database system is ready to accept read only connections
 <listitem>
 <para>
 Valid starting points for standby queries are generated at each
-checkpoint on the master. If the standby is shut down while the master
+checkpoint on the primary. If the standby is shut down while the primary
 is in a shutdown state, it might not be possible to re-enter Hot Standby
 until the primary is started up, so that it generates further starting
 points in the WAL logs. This situation isn't a problem in the most


@@ -7362,7 +7362,7 @@ myEventProc(PGEventId evtId, void *evtInfo, void *passThrough)
 the <literal>host</literal> parameter
 matches <application>libpq</application>'s default socket directory path.
 In a standby server, a database field of <literal>replication</literal>
-matches streaming replication connections made to the master server.
+matches streaming replication connections made to the primary server.
 The database field is of limited usefulness otherwise, because users have
 the same password for all databases in the same cluster.
 </para>


@@ -99,7 +99,7 @@
 <para>
 A <firstterm>publication</firstterm> can be defined on any physical
-replication master. The node where a publication is defined is referred to
+replication primary. The node where a publication is defined is referred to
 as <firstterm>publisher</firstterm>. A publication is a set of changes
 generated from a table or a group of tables, and might also be described as
 a change set or replication set. Each publication exists in only one database.
@@ -489,7 +489,7 @@
 Because logical replication is based on a similar architecture as
 <link linkend="streaming-replication">physical streaming replication</link>,
 the monitoring on a publication node is similar to monitoring of a
-physical replication master
+physical replication primary
 (see <xref linkend="streaming-replication-monitoring"/>).
 </para>


@@ -62,10 +62,10 @@ postgres 15610 0.0 0.0 58772 3056 ? Ss 18:07 0:00 postgres: tgl
 (The appropriate invocation of <command>ps</command> varies across different
 platforms, as do the details of what is shown. This example is from a
 recent Linux system.) The first process listed here is the
-master server process. The command arguments
+primary server process. The command arguments
 shown for it are the same ones used when it was launched. The next five
 processes are background worker processes automatically launched by the
-master process. (The <quote>stats collector</quote> process will not be present
+primary process. (The <quote>stats collector</quote> process will not be present
 if you have set the system not to start the statistics collector; likewise
 the <quote>autovacuum launcher</quote> process can be disabled.)
 Each of the remaining
@@ -3545,7 +3545,7 @@ SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event i
 one row per database, showing database-wide statistics about
 query cancels occurring due to conflicts with recovery on standby servers.
 This view will only contain information on standby servers, since
-conflicts do not occur on master servers.
+conflicts do not occur on primary servers.
 </para>
 <table id="pg-stat-database-conflicts-view" xreflabel="pg_stat_database_conflicts">


@@ -1642,7 +1642,7 @@ SELECT pg_advisory_lock(q.id) FROM
 This level of integrity protection using Serializable transactions
 does not yet extend to hot standby mode (<xref linkend="hot-standby"/>).
 Because of that, those using hot standby may want to use Repeatable
-Read and explicit locking on the master.
+Read and explicit locking on the primary.
 </para>
 </warning>
 </sect2>
@@ -1744,10 +1744,10 @@ SELECT pg_advisory_lock(q.id) FROM
 <xref linkend="hot-standby"/>). The strictest isolation level currently
 supported in hot standby mode is Repeatable Read. While performing all
 permanent database writes within Serializable transactions on the
-master will ensure that all standbys will eventually reach a consistent
+primary will ensure that all standbys will eventually reach a consistent
 state, a Repeatable Read transaction run on the standby can sometimes
 see a transient state that is inconsistent with any serial execution
-of the transactions on the master.
+of the transactions on the primary.
 </para>
 <para>


@@ -73,7 +73,7 @@ restore_command = 'pg_standby <replaceable>archiveDir</replaceable> %f %p %r'
 </para>
 <para>
 There are two ways to fail over to a <quote>warm standby</quote> database server
-when the master server fails:
+when the primary server fails:
 <variablelist>
 <varlistentry>


@@ -1793,7 +1793,7 @@ The commands accepted in replication mode are:
 <listitem>
 <para>
 Current timeline ID. Also useful to check that the standby is
-consistent with the master.
+consistent with the primary.
 </para>
 </listitem>
 </varlistentry>


@@ -65,11 +65,11 @@ PostgreSQL documentation
 <para>
 <application>pg_basebackup</application> can make a base backup from
-not only the master but also the standby. To take a backup from the standby,
+not only the primary but also the standby. To take a backup from the standby,
 set up the standby so that it can accept replication connections (that is, set
 <varname>max_wal_senders</varname> and <xref linkend="guc-hot-standby"/>,
 and configure <link linkend="auth-pg-hba-conf">host-based authentication</link>).
-You will also need to enable <xref linkend="guc-full-page-writes"/> on the master.
+You will also need to enable <xref linkend="guc-full-page-writes"/> on the primary.
 </para>
 <para>
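A hedged example invocation of the kind described above; the host, user, and target directory are placeholders. The same command works against either the primary or a standby prepared as just described:

    pg_basebackup -h standby.example.com -U replicator -D /var/lib/postgresql/backup -X stream -P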
@@ -89,13 +89,13 @@ PostgreSQL documentation
 </listitem>
 <listitem>
 <para>
-If the standby is promoted to the master during online backup, the backup fails.
+If the standby is promoted to the primary during online backup, the backup fails.
 </para>
 </listitem>
 <listitem>
 <para>
 All WAL records required for the backup must contain sufficient full-page writes,
-which requires you to enable <varname>full_page_writes</varname> on the master and
+which requires you to enable <varname>full_page_writes</varname> on the primary and
 not to use a tool like <application>pg_compresslog</application> as
 <varname>archive_command</varname> to remove full-page writes from WAL files.
 </para>
@@ -328,7 +328,7 @@ PostgreSQL documentation
 it will use up two connections configured by the
 <xref linkend="guc-max-wal-senders"/> parameter. As long as the
 client can keep up with write-ahead log received, using this mode
-requires no extra write-ahead logs to be saved on the master.
+requires no extra write-ahead logs to be saved on the primary.
 </para>
 <para>
 When tar format mode is used, the write-ahead log files will be


@@ -43,8 +43,8 @@ PostgreSQL documentation
 <para>
 <application>pg_rewind</application> is a tool for synchronizing a PostgreSQL cluster
 with another copy of the same cluster, after the clusters' timelines have
-diverged. A typical scenario is to bring an old master server back online
-after failover as a standby that follows the new master.
+diverged. A typical scenario is to bring an old primary server back online
+after failover as a standby that follows the new primary.
 </para>
 <para>
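A hedged sketch of the failover scenario just described; the data directory path and connection string are placeholders, and the old primary is assumed to be stopped before running it:

    pg_rewind --target-pgdata=/var/lib/postgresql/data \
              --source-server='host=new-primary.example.com port=5432 user=postgres dbname=postgres'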


@@ -1864,9 +1864,9 @@ pg_dumpall -p 5432 | psql -d postgres -p 5433
 This is possible because logical replication supports
 replication between different major versions of
 <productname>PostgreSQL</productname>. The standby can be on the same computer or
-a different computer. Once it has synced up with the master server
+a different computer. Once it has synced up with the primary server
 (running the older version of <productname>PostgreSQL</productname>), you can
-switch masters and make the standby the master and shut down the older
+switch primaries and make the standby the primary and shut down the older
 database instance. Such a switch-over results in only several seconds
 of downtime for an upgrade.
 </para>


@@ -596,8 +596,8 @@
 indicate that the already-processed WAL data need not be scanned again,
 and then recycles any old log segment files in the <filename>pg_wal</filename>
 directory.
-Restartpoints can't be performed more frequently than checkpoints in the
-master because restartpoints can only be performed at checkpoint records.
+Restartpoints can't be performed more frequently than checkpoints on the
+primary because restartpoints can only be performed at checkpoint records.
 A restartpoint is triggered when a checkpoint record is reached if at
 least <varname>checkpoint_timeout</varname> seconds have passed since the last
 restartpoint, or if WAL size is about to exceed