From 9e101cf60612f4be4f855d7393531900c2986a55 Mon Sep 17 00:00:00 2001
From: Andres Freund
Date: Mon, 15 Jun 2020 10:12:58 -0700
Subject: [PATCH] docs: replace 'master' with 'primary' where appropriate.

Also changed "in the primary" to "on the primary", and added a few "the"
before "primary".

Author: Andres Freund
Reviewed-By: David Steele
Discussion: https://postgr.es/m/20200615182235.x7lch5n6kcjq4aue@alap3.anarazel.de
---
 doc/src/sgml/amcheck.sgml             |  2 +-
 doc/src/sgml/backup.sgml              | 16 +++----
 doc/src/sgml/config.sgml              | 42 ++++++++---------
 doc/src/sgml/external-projects.sgml   |  2 +-
 doc/src/sgml/high-availability.sgml   | 67 +++++++++++++--------------
 doc/src/sgml/libpq.sgml               |  2 +-
 doc/src/sgml/logical-replication.sgml |  4 +-
 doc/src/sgml/monitoring.sgml          |  6 +--
 doc/src/sgml/mvcc.sgml                |  6 +--
 doc/src/sgml/pgstandby.sgml           |  2 +-
 doc/src/sgml/protocol.sgml            |  2 +-
 doc/src/sgml/ref/pg_basebackup.sgml   | 10 ++--
 doc/src/sgml/ref/pg_rewind.sgml       |  4 +-
 doc/src/sgml/runtime.sgml             |  4 +-
 doc/src/sgml/wal.sgml                 |  4 +-
 15 files changed, 86 insertions(+), 87 deletions(-)

diff --git a/doc/src/sgml/amcheck.sgml b/doc/src/sgml/amcheck.sgml
index 75518a7820..a9df2c1a9d 100644
--- a/doc/src/sgml/amcheck.sgml
+++ b/doc/src/sgml/amcheck.sgml
@@ -253,7 +253,7 @@ SET client_min_messages = DEBUG1;
     implies that operating system collation rules must never change.
     Though rare, updates to operating system collation rules can cause
     these issues.  More commonly, an inconsistency in the
-    collation order between a master server and a standby server is
+    collation order between a primary server and a standby server is
     implicated, possibly because the major operating system
     version in use is inconsistent.  Such inconsistencies will
     generally only arise on standby servers, and so can generally
diff --git a/doc/src/sgml/backup.sgml b/doc/src/sgml/backup.sgml
index bdc9026c62..b9331830f7 100644
--- a/doc/src/sgml/backup.sgml
+++ b/doc/src/sgml/backup.sgml
@@ -964,7 +964,7 @@ SELECT * FROM pg_stop_backup(false, true);
     non-exclusive one, but it differs in a few key steps.  This type of
     backup can only be taken on a primary and does not allow concurrent
     backups.  Moreover, because it creates a backup label file, as
-    described below, it can block automatic restart of the master server
+    described below, it can block automatic restart of the primary server
     after a crash.  On the other hand, the erroneous removal of this
     file from a backup or standby is a common mistake, which can result
     in serious data corruption.  If it is necessary to use this method,
@@ -1033,9 +1033,9 @@ SELECT pg_start_backup('label', true);
     this will result in corruption.  Confusion about when it is
     appropriate to remove this file is a common cause of data corruption
     when using this method; be very certain that you remove the file only on an existing
-    master and never when building a standby or restoring a backup, even if
+    primary and never when building a standby or restoring a backup, even if
     you are building a standby that will subsequently be promoted to a new
-    master.
+    primary.
@@ -1128,16 +1128,16 @@ SELECT pg_stop_backup();
     It is often a good idea to also omit from the backup the files
     within the cluster's pg_replslot/ directory, so that
-    replication slots that exist on the master do not become part of the
+    replication slots that exist on the primary do not become part of the
     backup.
     Otherwise, the subsequent use of the backup to create a standby may
     result in indefinite retention of WAL files on the standby, and
-    possibly bloat on the master if hot standby feedback is enabled, because
+    possibly bloat on the primary if hot standby feedback is enabled, because
     the clients that are using those replication slots will still be connecting
-    to and updating the slots on the master, not the standby. Even if the
-    backup is only intended for use in creating a new master, copying the
+    to and updating the slots on the primary, not the standby. Even if the
+    backup is only intended for use in creating a new primary, copying the
     replication slots isn't expected to be particularly useful, since the
     contents of those slots will likely be badly out of date by the time
-    the new master comes on line.
+    the new primary comes on line.
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 02909b1e66..b353c61683 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -697,7 +697,7 @@ include_dir 'conf.d'
     When running a standby server, you must set this parameter to the
-    same or higher value than on the master server. Otherwise, queries
+    same value as on the primary server or higher. Otherwise, queries
     will not be allowed in the standby server.
@@ -1643,7 +1643,7 @@ include_dir 'conf.d'
     When running a standby server, you must set this parameter to the
-    same or higher value than on the master server. Otherwise, queries
+    same value as on the primary server or higher. Otherwise, queries
     will not be allowed in the standby server.
@@ -2259,7 +2259,7 @@ include_dir 'conf.d'
     When running a standby server, you must set this parameter to the
-    same or higher value than on the master server. Otherwise, queries
+    same value as on the primary server or higher. Otherwise, queries
     will not be allowed in the standby server.
@@ -3253,7 +3253,7 @@ include_dir 'conf.d'
     archive_timeout — it will bloat your archive
     storage.  archive_timeout settings of a minute or so are
     usually reasonable.  You should consider using streaming replication,
-    instead of archiving, if you want data to be copied off the master
+    instead of archiving, if you want data to be copied off the primary
     server more quickly than that.
     If this value is specified without units, it is taken as seconds.
     This parameter can only be set in the
@@ -3678,12 +3678,12 @@ restore_command = 'copy "C:\\server\\archivedir\\%f" "%p"'  # Windows
     These settings control the behavior of the built-in
     streaming replication feature (see ).  Servers will be either a
-    master or a standby server.  Masters can send data, while standbys
+    primary or a standby server.  Primaries can send data, while standbys
     are always receivers of replicated data.  When cascading replication
     (see ) is used, standby servers can also be senders, as well as receivers.
     Parameters are mainly for sending and standby servers, though some
-    parameters have meaning only on the master server.  Settings may vary
+    parameters have meaning only on the primary server.  Settings may vary
     across the cluster without problems if that is required.
@@ -3693,10 +3693,10 @@ restore_command = 'copy "C:\\server\\archivedir\\%f" "%p"'  # Windows
     These parameters can be set on any server that is
     to send replication data to one or more standby servers.
-    The master is always a sending server, so these parameters must
-    always be set on the master.
+    The primary is always a sending server, so these parameters must
+    always be set on the primary.
     The role and meaning of these parameters does not change after a
-    standby becomes the master.
+    standby becomes the primary.
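As a concrete sketch of the sending-server settings discussed above, a primary feeding several standbys might carry something like the following in postgresql.conf; the values are illustrative, not recommendations:

wal_level = replica          # minimum level needed for physical replication
max_wal_senders = 10         # walsender slots for standbys and base backups
max_replication_slots = 10   # only needed if replication slots are used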
@@ -3724,7 +3724,7 @@ restore_command = 'copy "C:\\server\\archivedir\\%f" "%p"'  # Windows
     When running a standby server, you must set this parameter to the
-    same or higher value than on the master server. Otherwise, queries
+    same value as on the primary server or higher. Otherwise, queries
     will not be allowed in the standby server.
@@ -3855,19 +3855,19 @@ restore_command = 'copy "C:\\server\\archivedir\\%f" "%p"'  # Windows
-
-   Master Server
+
+   Primary Server
-    These parameters can be set on the master/primary server that is
+    These parameters can be set on the primary server that is
     to send replication data to one or more standby servers.
     Note that in addition to these parameters, wal_level
-    must be set appropriately on the master
+    must be set appropriately on the primary
     server, and optionally WAL archiving can be enabled as
     well (see ).  The values of these parameters on standby servers
     are irrelevant, although you may wish to set them there in
     preparation for the
-    possibility of a standby becoming the master.
+    possibility of a standby becoming the primary.
@@ -4042,7 +4042,7 @@ ANY num_sync (
diff --git a/doc/src/sgml/external-projects.sgml b/doc/src/sgml/external-projects.sgml
--- a/doc/src/sgml/external-projects.sgml
+++ b/doc/src/sgml/external-projects.sgml
     Slony-I is a popular
-    master/standby replication solution that is developed independently
+    primary/standby replication solution that is developed independently
     from the core project.
diff --git a/doc/src/sgml/high-availability.sgml b/doc/src/sgml/high-availability.sgml
index 65c3fc62a9..6a9184f314 100644
--- a/doc/src/sgml/high-availability.sgml
+++ b/doc/src/sgml/high-availability.sgml
@@ -120,7 +120,7 @@
     system residing on another computer.  The only restriction is that
     the mirroring must be done in a way that ensures the standby server
     has a consistent copy of the file system — specifically, writes
-    to the standby must be done in the same order as those on the master.
+    to the standby must be done in the same order as those on the primary.
     DRBD is a popular file system replication solution for Linux.
@@ -146,7 +146,7 @@ protocol to make nodes agree on a serializable transactional order.
     stream of write-ahead log (WAL) records.  If the main server fails,
     the standby contains almost all of the data of the main server,
     and can be quickly
-    made the new master database server.  This can be synchronous or
+    made the new primary database server.  This can be synchronous or
     asynchronous and can only be done for the entire database server.
@@ -167,7 +167,7 @@ protocol to make nodes agree on a serializable transactional order.
     logical replication constructs a stream of logical data modifications
     from the WAL.  Logical replication allows the data changes from
     individual tables to be replicated.  Logical replication doesn't require
-    a particular server to be designated as a master or a replica but allows
+    a particular server to be designated as a primary or a replica but allows
     data to flow in multiple directions.  For more information on logical
     replication, see .  Through the logical decoding interface (),
@@ -219,9 +219,9 @@ protocol to make nodes agree on a serializable transactional order.
     this is unacceptable, either the middleware or the application
     must query such values from a single server and then use those values
     in write queries.  Another option is to use this replication
-    option with a traditional master-standby setup, i.e. data modification
-    queries are sent only to the master and are propagated to the
-    standby servers via master-standby replication, not by the replication
+    option with a traditional primary-standby setup, i.e. data modification
+    queries are sent only to the primary and are propagated to the
+    standby servers via primary-standby replication, not by the replication
     middleware.  Care must also be taken that all transactions either
     commit or abort on all servers, perhaps using two-phase commit (
@@ -263,7 +263,7 @@ protocol to make nodes agree on a serializable transactional order.
     to reduce the communication overhead.  Synchronous multimaster
     replication is best for mostly read workloads, though its big
     advantage is that any server can accept write requests —
-    there is no need to partition workloads between master and
+    there is no need to partition workloads between primary and
     standby servers, and because the data changes are sent from one
     server to another, there is no problem with non-deterministic
     functions like random().
@@ -363,7 +363,7 @@ protocol to make nodes agree on a serializable transactional order.
-    No master server overhead
+    No overhead on primary
@@ -387,7 +387,7 @@ protocol to make nodes agree on a serializable transactional order.
-    Master failure will never lose data
+    Primary failure will never lose data
     with sync on
@@ -454,7 +454,7 @@ protocol to make nodes agree on a serializable transactional order.
     partitioned by offices, e.g., London and Paris, with a server
     in each office.  If queries combining London and Paris data
     are necessary, an application can query both servers, or
-    master/standby replication can be used to keep a read-only copy
+    primary/standby replication can be used to keep a read-only copy
     of the other office's data on each server.
@@ -621,13 +621,13 @@ protocol to make nodes agree on a serializable transactional order.
     In standby mode, the server continuously applies WAL received from the
-    master server. The standby server can read WAL from a WAL archive
-    (see ) or directly from the master
+    primary server. The standby server can read WAL from a WAL archive
+    (see ) or directly from the primary
     over a TCP connection (streaming replication). The standby server will
     also attempt to restore any WAL found in the standby cluster's
     pg_wal directory. That typically happens after a server
     restart, when the standby replays again WAL that was streamed from the
-    master before the restart, but you can also manually copy files to
+    primary before the restart, but you can also manually copy files to
     pg_wal at any time to have them replayed.
@@ -652,20 +652,20 @@ protocol to make nodes agree on a serializable transactional order.
     pg_promote() is called, or a trigger file is found
     (promote_trigger_file).  Before failover, any WAL
     immediately available in the archive or in pg_wal will be
-    restored, but no attempt is made to connect to the master.
+    restored, but no attempt is made to connect to the primary.
-
-   Preparing the Master for Standby Servers
+
+   Preparing the Primary for Standby Servers
     Set up continuous archiving on the primary to an archive directory
     accessible from the standby, as described in .  The archive location
     should be
-    accessible from the standby even when the master is down, i.e. it should
+    accessible from the standby even when the primary is down, i.e. it should
     reside on the standby server itself or another trusted server, not on
-    the master server.
+    the primary server.
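A minimal sketch of such an archiving setup on the primary, assuming an archive directory mounted at /mnt/server/archivedir (the path is an example only):

# postgresql.conf on the primary
wal_level = replica
archive_mode = on
# refuse to overwrite an existing archive file, then copy the WAL segment
archive_command = 'test ! -f /mnt/server/archivedir/%f && cp %p /mnt/server/archivedir/%f'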
@@ -898,7 +898,7 @@ primary_conninfo = 'host=192.168.1.50 port=5432 user=foo password=foopass'
     pg_stat_replication view.  Large differences between
     pg_current_wal_lsn and the view's sent_lsn field
-    might indicate that the master server is under heavy load, while
+    might indicate that the primary server is under heavy load, while
     differences between sent_lsn and
     pg_last_wal_receive_lsn on the standby might indicate
     network delay, or that the standby is under heavy load.
@@ -921,9 +921,9 @@ primary_conninfo = 'host=192.168.1.50 port=5432 user=foo password=foopass'
     streaming replication
-    Replication slots provide an automated way to ensure that the master does
+    Replication slots provide an automated way to ensure that the primary does
     not remove WAL segments until they have been received by all standbys,
-    and that the master does not remove rows which could cause a
+    and that the primary does not remove rows which could cause a
     recovery conflict even when the standby is disconnected.
@@ -1001,23 +1001,22 @@ primary_slot_name = 'node_a_slot'
     The cascading replication feature allows a standby server to accept
     replication connections and stream WAL records to other standbys,
     acting as a relay.
-    This can be used to reduce the number of direct connections to the master
+    This can be used to reduce the number of direct connections to the primary
     and also to minimize inter-site bandwidth overheads.
     A standby acting as both a receiver and a sender is known as a cascading
-    standby.  Standbys that are more directly connected to the master are known
+    standby.  Standbys that are more directly connected to the primary are known
     as upstream servers, while those standby servers further away are
     downstream servers.  Cascading replication does not place limits on the
     number or arrangement of downstream servers, though each standby
     connects to only
-    one upstream server which eventually links to a single master/primary
-    server.
+    one upstream server which eventually links to a single primary server.
     A cascading standby sends not only WAL records received from the
-    master but also those restored from the archive. So even if the replication
+    primary but also those restored from the archive. So even if the replication
     connection to some upstream server is terminated, streaming replication
     continues downstream for as long as new WAL records are available.
@@ -1033,8 +1032,8 @@ primary_slot_name = 'node_a_slot'
-    If an upstream standby server is promoted to become new master, downstream
-    servers will continue to stream from the new master if
+    If an upstream standby server is promoted to become the new primary, downstream
+    servers will continue to stream from the new primary if
     recovery_target_timeline is set to 'latest' (the default).
@@ -1120,7 +1119,7 @@ primary_slot_name = 'node_a_slot'
     a non-empty value.
     synchronous_commit must also be set to
     on, but since this is the default value, typically no change is
     required.  (See and
-    .)
+    .)
     This configuration will cause each commit to wait for
     confirmation that the standby has written the commit record to
     durable storage.
@@ -1145,8 +1144,8 @@ primary_slot_name = 'node_a_slot'
     confirmation that the commit record has been received.  These
     parameters allow the administrator to specify which standby servers
     should be synchronous standbys.  Note that the configuration of synchronous
-    replication is mainly on the master. Named standbys must be directly
-    connected to the master; the master knows nothing about downstream
+    replication is mainly on the primary. Named standbys must be directly
+    connected to the primary; the primary knows nothing about downstream
     standby servers using cascaded replication.
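As a sketch of the synchronous settings described above, a primary that waits for confirmation from any two of three named standbys might use the following; the standby names s1, s2, s3 are examples and must match each standby's application_name:

# postgresql.conf on the primary
synchronous_commit = on
synchronous_standby_names = 'ANY 2 (s1, s2, s3)'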
@@ -1504,7 +1503,7 @@ synchronous_standby_names = 'ANY 2 (s1, s2, s3)'
     Note that in this mode, the server will apply WAL one file at a
     time, so if you use the standby server for queries (see Hot Standby),
-    there is a delay between an action in the master and when the
+    there is a delay between an action in the primary and when the
     action becomes visible in the standby, corresponding to the time it takes
     to fill up the WAL file.  archive_timeout can be used
     to make that delay shorter.  Also note that you can't combine
     streaming replication with
@@ -2049,7 +2048,7 @@ if (!triggered)
     cleanup of old row versions when there are no transactions that need to
     see them to ensure correct visibility of data according to MVCC rules.
     However, this rule can only be applied for transactions executing on the
-    master. So it is possible that cleanup on the master will remove row
+    primary. So it is possible that cleanup on the primary will remove row
     versions that are still visible to a transaction on the standby.
@@ -2438,7 +2437,7 @@ LOG:  database system is ready to accept read only connections
     Valid starting points for standby queries are generated at each
-    checkpoint on the master. If the standby is shut down while the master
+    checkpoint on the primary. If the standby is shut down while the primary
     is in a shutdown state, it might not be possible to re-enter Hot Standby
     until the primary is started up, so that it generates further starting
     points in the WAL logs.  This situation isn't a problem in the most
diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
index ea1909c08d..d1ccaa775a 100644
--- a/doc/src/sgml/libpq.sgml
+++ b/doc/src/sgml/libpq.sgml
@@ -7362,7 +7362,7 @@ myEventProc(PGEventId evtId, void *evtInfo, void *passThrough)
     the host parameter matches libpq's default socket directory
     path.  In a standby server, a database field of replication
-    matches streaming replication connections made to the master server.
+    matches streaming replication connections made to the primary server.
     The database field is of limited usefulness otherwise,
     because users have the same password for all databases in the same
     cluster.
diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml
index e19bb3fd65..7c8629d74e 100644
--- a/doc/src/sgml/logical-replication.sgml
+++ b/doc/src/sgml/logical-replication.sgml
@@ -99,7 +99,7 @@
     A publication can be defined on any physical
-    replication master.  The node where a publication is defined is referred to
+    replication primary.  The node where a publication is defined is referred to
     as publisher.  A publication is a set of changes
     generated from a table or a group of tables, and might also be
     described as a change set or replication set.  Each publication exists
     in only one database.
@@ -489,7 +489,7 @@
     Because logical replication is based on a similar architecture as
     physical streaming replication, the monitoring on a publication node is
     similar to monitoring of a
-    physical replication master
+    physical replication primary
     (see ).
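For illustration, a publication and a matching subscription might be created as follows; the table names, the subscription name, and the connection string are examples only:

-- on the publisher
CREATE PUBLICATION mypub FOR TABLE users, orders;

-- on the subscriber
CREATE SUBSCRIPTION mysub
    CONNECTION 'host=publisher.example.com dbname=appdb user=repuser'
    PUBLICATION mypub;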
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index 211d279094..f7ef4ba0f7 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -62,10 +62,10 @@ postgres  15610  0.0  0.0  58772  3056 ?      Ss   18:07   0:00 postgres: tgl
     (The appropriate invocation of ps varies across different
     platforms, as do the details of what is shown.  This example is from a
     recent Linux system.)  The first process listed here is the
-    master server process.  The command arguments
+    primary server process.  The command arguments
     shown for it are the same ones used when it was launched.  The next five
     processes are background worker processes automatically launched by the
-    master process.  (The stats collector process will not be present
+    primary process.  (The stats collector process will not be present
     if you have set the system not to start the statistics collector;
     likewise the autovacuum launcher process can be disabled.)
     Each of the remaining
@@ -3545,7 +3545,7 @@ SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event i
     one row per database, showing database-wide statistics about
     query cancels occurring due to conflicts with recovery on standby servers.
     This view will only contain information on standby servers, since
-    conflicts do not occur on master servers.
+    conflicts do not occur on primary servers.
diff --git a/doc/src/sgml/mvcc.sgml b/doc/src/sgml/mvcc.sgml
index dda6f1f2ad..d127c0b9ad 100644
--- a/doc/src/sgml/mvcc.sgml
+++ b/doc/src/sgml/mvcc.sgml
@@ -1642,7 +1642,7 @@ SELECT pg_advisory_lock(q.id) FROM
     This level of integrity protection using Serializable transactions
     does not yet extend to hot standby mode ().
     Because of that, those using hot standby may want to use Repeatable
-    Read and explicit locking on the master.
+    Read and explicit locking on the primary.
@@ -1744,10 +1744,10 @@ SELECT pg_advisory_lock(q.id) FROM
     ).  The strictest isolation level currently
     supported in hot standby mode is Repeatable Read.  While performing all
     permanent database writes within Serializable transactions on the
-    master will ensure that all standbys will eventually reach a consistent
+    primary will ensure that all standbys will eventually reach a consistent
     state, a Repeatable Read transaction run on the standby can sometimes
     see a transient state that is inconsistent with any serial execution
-    of the transactions on the master.
+    of the transactions on the primary.
diff --git a/doc/src/sgml/pgstandby.sgml b/doc/src/sgml/pgstandby.sgml
index d8aded4384..66a6255930 100644
--- a/doc/src/sgml/pgstandby.sgml
+++ b/doc/src/sgml/pgstandby.sgml
@@ -73,7 +73,7 @@ restore_command = 'pg_standby archiveDir %f %p %r'
     There are two ways to fail over to a warm standby database server
-    when the master server fails:
+    when the primary server fails:
diff --git a/doc/src/sgml/protocol.sgml b/doc/src/sgml/protocol.sgml
index 20d1fe0ad8..8b00235a51 100644
--- a/doc/src/sgml/protocol.sgml
+++ b/doc/src/sgml/protocol.sgml
@@ -1793,7 +1793,7 @@ The commands accepted in replication mode are:
     Current timeline ID.  Also useful to check that the standby is
-    consistent with the master.
+    consistent with the primary.
diff --git a/doc/src/sgml/ref/pg_basebackup.sgml b/doc/src/sgml/ref/pg_basebackup.sgml
index db480be674..e2a01be895 100644
--- a/doc/src/sgml/ref/pg_basebackup.sgml
+++ b/doc/src/sgml/ref/pg_basebackup.sgml
@@ -65,11 +65,11 @@ PostgreSQL documentation
     pg_basebackup can make a base backup from
-    not only the master but also the standby.  To take a backup from the standby,
+    not only the primary but also the standby.  To take a backup from the standby,
     set up the standby so that it can accept replication connections (that is, set
     max_wal_senders and hot_standby, and configure
     host-based authentication).
-    You will also need to enable full_page_writes on the master.
+    You will also need to enable full_page_writes on the primary.
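A sketch of taking a streaming base backup from such a standby; the host name, user, and target directory are examples only:

pg_basebackup -h standby.example.com -U replicator -D /var/lib/postgresql/standby1 -X stream -P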
@@ -89,13 +89,13 @@
-    If the standby is promoted to the master during online backup, the backup fails.
+    If the standby is promoted to the primary during online backup, the backup fails.
     All WAL records required for the backup must contain sufficient full-page writes,
-    which requires you to enable full_page_writes on the master and
+    which requires you to enable full_page_writes on the primary and
     not to use a tool like pg_compresslog as
     archive_command to remove full-page writes from WAL files.
@@ -328,7 +328,7 @@
     it will use up two connections configured by the
     parameter.  As long as the client can keep up with write-ahead log
     received, using this mode
-    requires no extra write-ahead logs to be saved on the master.
+    requires no extra write-ahead logs to be saved on the primary.
     When tar format mode is used, the write-ahead log files will be
diff --git a/doc/src/sgml/ref/pg_rewind.sgml b/doc/src/sgml/ref/pg_rewind.sgml
index 9ae1bf3ab6..440eed7d4b 100644
--- a/doc/src/sgml/ref/pg_rewind.sgml
+++ b/doc/src/sgml/ref/pg_rewind.sgml
@@ -43,8 +43,8 @@ PostgreSQL documentation
     pg_rewind is a tool for synchronizing a PostgreSQL cluster
     with another copy of the same cluster, after the clusters' timelines have
-    diverged.  A typical scenario is to bring an old master server back online
-    after failover as a standby that follows the new master.
+    diverged.  A typical scenario is to bring an old primary server back online
+    after failover as a standby that follows the new primary.
diff --git a/doc/src/sgml/runtime.sgml b/doc/src/sgml/runtime.sgml
index 88210c4a5d..1fd4ab723c 100644
--- a/doc/src/sgml/runtime.sgml
+++ b/doc/src/sgml/runtime.sgml
@@ -1864,9 +1864,9 @@ pg_dumpall -p 5432 | psql -d postgres -p 5433
     This is possible because logical replication supports
     replication between different major versions of
     PostgreSQL.  The standby can be on the same computer or
-    a different computer.  Once it has synced up with the master server
+    a different computer.  Once it has synced up with the primary server
     (running the older version of PostgreSQL), you can
-    switch masters and make the standby the master and shut down the older
+    switch primaries and make the standby the primary and shut down the older
     database instance.  Such a switch-over results in only several seconds
     of downtime for an upgrade.
diff --git a/doc/src/sgml/wal.sgml b/doc/src/sgml/wal.sgml
index bd9fae544c..1902f36291 100644
--- a/doc/src/sgml/wal.sgml
+++ b/doc/src/sgml/wal.sgml
@@ -596,8 +596,8 @@
     indicate that the already-processed WAL data need not be scanned again,
     and then recycles any old log segment files in the pg_wal
     directory.
-    Restartpoints can't be performed more frequently than checkpoints in the
-    master because restartpoints can only be performed at checkpoint records.
+    Restartpoints can't be performed more frequently than checkpoints on the
+    primary because restartpoints can only be performed at checkpoint records.
     A restartpoint is triggered when a checkpoint record is reached if at
     least checkpoint_timeout seconds have passed since the last
     restartpoint, or if WAL size is about to exceed