diff --git a/doc/src/sgml/ddl.sgml b/doc/src/sgml/ddl.sgml
index b01e3ad544..3717d13fff 100644
--- a/doc/src/sgml/ddl.sgml
+++ b/doc/src/sgml/ddl.sgml
@@ -557,7 +557,7 @@ CREATE TABLE products (
     tests, it cannot guarantee that the database will not reach a state
     in which the constraint condition is false (due to subsequent changes
     of the other row(s) involved). This would cause a database dump and
-    reload to fail. The reload could fail even when the complete
+    restore to fail. The restore could fail even when the complete
     database state is consistent with the constraint, due to rows not
     being loaded in an order that will satisfy the constraint. If
     possible, use UNIQUE, EXCLUDE,
@@ -569,10 +569,10 @@ CREATE TABLE products (
     If what you desire is a one-time check against other rows at row
     insertion, rather than a continuously-maintained consistency
     guarantee, a custom trigger can be used
-    to implement that. (This approach avoids the dump/reload problem because
+    to implement that. (This approach avoids the dump/restore problem because
     pg_dump does not reinstall triggers until after
-    reloading data, so that the check will not be enforced during a
-    dump/reload.)
+    restoring data, so that the check will not be enforced during a
+    dump/restore.)
@@ -594,7 +594,7 @@ CREATE TABLE products (
     function. PostgreSQL does not disallow that, but it will not notice
     if there are rows in the table that now violate the CHECK
     constraint. That would cause a
-    subsequent database dump and reload to fail.
+    subsequent database dump and restore to fail.
     The recommended way to handle such a change is to drop the constraint
     (using ALTER TABLE), adjust the function definition, and re-add the
     constraint, thereby rechecking it against all table rows.
diff --git a/doc/src/sgml/extend.sgml b/doc/src/sgml/extend.sgml
index b3857015b1..bf475e7aa0 100644
--- a/doc/src/sgml/extend.sgml
+++ b/doc/src/sgml/extend.sgml
@@ -982,7 +982,7 @@ SET LOCAL search_path TO @extschema@, pg_temp;
     pg_dump. But that behavior is undesirable for a configuration table;
     any data changes made by the user need to be included in dumps, or
     the extension will behave differently after a dump
-    and reload.
+    and restore.
diff --git a/doc/src/sgml/perform.sgml b/doc/src/sgml/perform.sgml
index 8fd8e25c51..c3ee47b3d6 100644
--- a/doc/src/sgml/perform.sgml
+++ b/doc/src/sgml/perform.sgml
@@ -1785,7 +1785,7 @@ SELECT * FROM x, y, a, b, c WHERE something AND somethingelse;
    Dump scripts generated by pg_dump automatically apply
-   several, but not all, of the above guidelines. To reload a
+   several, but not all, of the above guidelines. To restore a
    pg_dump dump as quickly as possible, you need to do a few extra
    things manually. (Note that these points apply while restoring
    a dump, not while creating it.
diff --git a/doc/src/sgml/plhandler.sgml b/doc/src/sgml/plhandler.sgml
index 40ee59de9f..980c95ecf3 100644
--- a/doc/src/sgml/plhandler.sgml
+++ b/doc/src/sgml/plhandler.sgml
@@ -156,7 +156,7 @@
    attached to a function when check_function_bodies is on.
    Therefore, checks whose results might be affected by GUC parameters
    definitely should be skipped when check_function_bodies is
-   off, to avoid false failures when reloading a dump.
+   off, to avoid false failures when restoring a dump.
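For readers unfamiliar with the trigger-based workaround mentioned in the ddl.sgml hunk above, a minimal SQL sketch follows. The table, column, and function names are hypothetical and not part of the patch; the point is only that the check lives in a trigger, and pg_dump does not reinstall triggers until after the table data has been restored, so the check is not enforced while a dump is being restored:

    -- Hypothetical schema: reject a new booking whose room and time range
    -- overlap an existing row. The check is applied only at INSERT time.
    CREATE TABLE bookings (
        id     serial PRIMARY KEY,
        room   text NOT NULL,
        during tsrange NOT NULL
    );

    CREATE FUNCTION bookings_check_overlap() RETURNS trigger AS $$
    BEGIN
        IF EXISTS (SELECT 1 FROM bookings b
                   WHERE b.room = NEW.room AND b.during && NEW.during) THEN
            RAISE EXCEPTION 'booking overlaps an existing booking for room %', NEW.room;
        END IF;
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER bookings_no_overlap
        BEFORE INSERT ON bookings
        FOR EACH ROW EXECUTE FUNCTION bookings_check_overlap();

Unlike a cross-row CHECK constraint, this check only affects rows inserted while the trigger exists, so it cannot make a subsequent dump and restore fail.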
diff --git a/doc/src/sgml/ref/alter_type.sgml b/doc/src/sgml/ref/alter_type.sgml
index 21887e88a0..146065144f 100644
--- a/doc/src/sgml/ref/alter_type.sgml
+++ b/doc/src/sgml/ref/alter_type.sgml
@@ -411,7 +411,7 @@ ALTER TYPE name SET (
    since the original creation of the enum type). The slowdown is
    usually insignificant; but if it matters, optimal performance can be
    regained by dropping and recreating the enum type, or by dumping and
-   reloading the database.
+   restoring the database.
diff --git a/doc/src/sgml/ref/create_domain.sgml b/doc/src/sgml/ref/create_domain.sgml
index e4b856d630..82a0b87492 100644
--- a/doc/src/sgml/ref/create_domain.sgml
+++ b/doc/src/sgml/ref/create_domain.sgml
@@ -234,7 +234,7 @@ INSERT INTO tab (domcol) VALUES ((SELECT domcol FROM tab WHERE false));
    function. PostgreSQL does not disallow that, but it will not notice
    if there are stored values of the domain type that now violate the
    CHECK constraint. That would cause a
-   subsequent database dump and reload to fail. The recommended way to
+   subsequent database dump and restore to fail. The recommended way to
    handle such a change is to drop the constraint (using ALTER DOMAIN),
    adjust the function definition, and re-add the
    constraint, thereby rechecking it against stored data.
diff --git a/doc/src/sgml/ref/pg_dump.sgml b/doc/src/sgml/ref/pg_dump.sgml
index 5efb442b44..c08276bc0a 100644
--- a/doc/src/sgml/ref/pg_dump.sgml
+++ b/doc/src/sgml/ref/pg_dump.sgml
@@ -684,7 +684,7 @@ PostgreSQL documentation
        ...). This will make restoration very slow; it is mainly
        useful for making dumps that can be loaded into
        non-PostgreSQL databases.
-       Any error during reloading will cause only rows that are part of the
+       Any error during restoring will cause only rows that are part of the
        problematic INSERT to be lost, rather than the
        entire table contents.
@@ -708,9 +708,9 @@ PostgreSQL documentation
        This option is relevant only when creating a data-only dump.
        It instructs pg_dump to include commands to temporarily disable
        triggers on the target tables while
-       the data is reloaded. Use this if you have referential
+       the data is restored. Use this if you have referential
        integrity checks or other triggers on the tables that you
-       do not want to invoke during data reload.
+       do not want to invoke during data restore.
@@ -828,7 +828,7 @@ PostgreSQL documentation
        than COPY). This will make restoration very slow; it is mainly
        useful for making dumps that can be loaded into
        non-PostgreSQL databases.
-       Any error during reloading will cause only rows that are part of the
+       Any error during restoring will cause only rows that are part of the
        problematic INSERT to be lost, rather than the
        entire table contents. Note that the restore might fail altogether
        if you have rearranged column order. The
@@ -847,7 +847,7 @@ PostgreSQL documentation
        target the root of the partitioning hierarchy that contains it, rather
        than the partition itself. This causes the appropriate partition to
        be re-determined for each row when the data is loaded. This may be
-       useful when reloading data on a server where rows do not always fall
+       useful when restoring data on a server where rows do not always fall
        into the same partitions as they did on the original server.
        That could happen, for example, if the partitioning column is of type
        text and the two systems have different definitions of the collation used
@@ -859,7 +859,7 @@ PostgreSQL documentation
        with this option, because pg_restore will not know exactly which
        partition(s) a given archive data item will load data into. This
        could result in inefficiency due to lock
-       conflicts between parallel jobs, or perhaps even reload failures due
+       conflicts between parallel jobs, or perhaps even restore failures due
        to foreign key constraints being set up before all the relevant data
        is loaded.
@@ -1028,7 +1028,7 @@ PostgreSQL documentation
        Dump data as INSERT commands (rather than COPY).
        Controls the maximum number of rows per INSERT command.
        The value specified must be a
-       number greater than zero. Any error during reloading will cause only
+       number greater than zero. Any error during restoring will cause only
        rows that are part of the problematic INSERT to be lost,
        rather than the entire table contents.
diff --git a/doc/src/sgml/ref/pg_dumpall.sgml b/doc/src/sgml/ref/pg_dumpall.sgml
index 8a081f0080..5d54074e01 100644
--- a/doc/src/sgml/ref/pg_dumpall.sgml
+++ b/doc/src/sgml/ref/pg_dumpall.sgml
@@ -276,9 +276,9 @@ PostgreSQL documentation
        This option is relevant only when creating a data-only dump.
        It instructs pg_dumpall to include commands to temporarily disable
        triggers on the target tables while
-       the data is reloaded. Use this if you have referential
+       the data is restored. Use this if you have referential
        integrity checks or other triggers on the tables that you
-       do not want to invoke during data reload.
+       do not want to invoke during data restore.
@@ -355,7 +355,7 @@ PostgreSQL documentation
        target the root of the partitioning hierarchy that contains it, rather
        than the partition itself. This causes the appropriate partition to
        be re-determined for each row when the data is loaded. This may be
-       useful when reloading data on a server where rows do not always fall
+       useful when restoring data on a server where rows do not always fall
        into the same partitions as they did on the original server. That
        could happen, for example, if the partitioning column is of type
        text and the two systems have different definitions of the collation used
@@ -530,7 +530,7 @@ PostgreSQL documentation
        Dump data as INSERT commands (rather than COPY).
        Controls the maximum number of rows per INSERT command.
        The value specified must be a
-       number greater than zero. Any error during reloading will cause only
+       number greater than zero. Any error during restoring will cause only
        rows that are part of the problematic INSERT to be lost,
        rather than the entire table contents.
@@ -799,7 +799,7 @@ PostgreSQL documentation
-   To reload database(s) from this file, you can use:
+   To restore database(s) from this file, you can use:
 $ psql -f db.out postgres
diff --git a/doc/src/sgml/ref/pg_resetwal.sgml b/doc/src/sgml/ref/pg_resetwal.sgml
index 3e4882cdc6..fd539f5604 100644
--- a/doc/src/sgml/ref/pg_resetwal.sgml
+++ b/doc/src/sgml/ref/pg_resetwal.sgml
@@ -55,7 +55,7 @@ PostgreSQL documentation
   After running this command, it should be possible to start the server,
   but bear in mind that the database might contain inconsistent data due to
   partially-committed transactions. You should immediately dump your data,
-  run initdb, and reload. After reload, check for
+  run initdb, and restore. After restore, check for
   inconsistencies and repair as needed.
@@ -78,7 +78,7 @@ PostgreSQL documentation
   discussed below.
   If you are not able to determine correct values for all these fields,
   can still be used, but the recovered database must be treated with
   even more suspicion than
-  usual: an immediate dump and reload is imperative. Do not
+  usual: an immediate dump and restore is imperative. Do not
   execute any data-modifying operations in the database before you dump,
   as any such action is likely to make the corruption worse.
diff --git a/doc/src/sgml/ref/pg_restore.sgml b/doc/src/sgml/ref/pg_restore.sgml
index 526986eadb..47bd7dbda0 100644
--- a/doc/src/sgml/ref/pg_restore.sgml
+++ b/doc/src/sgml/ref/pg_restore.sgml
@@ -538,9 +538,9 @@ PostgreSQL documentation
        This option is relevant only when performing a data-only restore.
        It instructs pg_restore to execute commands to temporarily disable
        triggers on the target tables while
-       the data is reloaded. Use this if you have referential
+       the data is restored. Use this if you have referential
        integrity checks or other triggers on the tables that you
-       do not want to invoke during data reload.
+       do not want to invoke during data restore.
@@ -969,7 +969,7 @@ CREATE DATABASE foo WITH TEMPLATE template0;
-   To reload the dump into a new database called newdb:
+   To restore the dump into a new database called newdb:
 $ createdb -T template0 newdb
diff --git a/doc/src/sgml/ref/pgupgrade.sgml b/doc/src/sgml/ref/pgupgrade.sgml
index d065227656..26459f7e91 100644
--- a/doc/src/sgml/ref/pgupgrade.sgml
+++ b/doc/src/sgml/ref/pgupgrade.sgml
@@ -39,7 +39,7 @@ PostgreSQL documentation
   pg_upgrade (formerly called pg_migrator) allows data stored in
   PostgreSQL data files to be upgraded to a later PostgreSQL
-  major version without the data dump/reload typically required for
+  major version without the data dump/restore typically required for
   major version upgrades, e.g., from 9.5.8 to 9.6.4 or from 10.7 to 11.2.
   It is not required for minor version upgrades, e.g., from 9.6.2 to 9.6.3
   or from 10.1 to 10.2.
@@ -420,7 +420,7 @@ NET STOP postgresql-&majorversion;
   The option allows multiple CPU cores to be used
-  for copying/linking of files and to dump and reload database schemas
+  for copying/linking of files and to dump and restore database schemas
   in parallel; a good place to start is the maximum of the number of
   CPU cores and tablespaces. This option can dramatically reduce the
   time to upgrade a multi-database server running on a multiprocessor
diff --git a/doc/src/sgml/runtime.sgml b/doc/src/sgml/runtime.sgml
index 2352389ab6..c27c703aa1 100644
--- a/doc/src/sgml/runtime.sgml
+++ b/doc/src/sgml/runtime.sgml
@@ -1669,7 +1669,7 @@ $ kill -INT `head -1 /usr/local/pgsql/data/postmaster.pid`
   major releases of PostgreSQL, the internal data storage format is
   subject to change, thus complicating upgrades. The traditional method
   for moving data to a new major version
-  is to dump and reload the database, though this can be slow. A
+  is to dump and restore the database, though this can be slow. A
   faster method is .
   Replication methods are also available, as discussed below.
   (If you are using a pre-packaged version
@@ -1755,7 +1755,7 @@ $ kill -INT `head -1 /usr/local/pgsql/data/postmaster.pid`
   One upgrade method is to dump data from one major version of
-  PostgreSQL and reload it in another — to do
+  PostgreSQL and restore it in another — to do
   this, you must use a logical backup tool like pg_dumpall; file system
   level backup methods will not work.
   (There are checks in place that prevent
diff --git a/doc/src/sgml/textsearch.sgml b/doc/src/sgml/textsearch.sgml
index 2cbf7e7d42..c51b0e4d77 100644
--- a/doc/src/sgml/textsearch.sgml
+++ b/doc/src/sgml/textsearch.sgml
@@ -1974,7 +1974,7 @@ CREATE TRIGGER tsvectorupdate BEFORE INSERT OR UPDATE
   explicitly when creating tsvector values inside triggers,
   so that the column's contents will not be affected by changes to
   default_text_search_config. Failure to do this is likely to
-  lead to problems such as search results changing after a dump and reload.
+  lead to problems such as search results changing after a dump and restore.
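The textsearch.sgml hunk above refers to writing the text search configuration explicitly inside the trigger. A short sketch of that pattern, using the built-in tsvector_update_trigger helper and a hypothetical messages table (table and column names are illustrative, not part of the patch):

    -- The configuration 'pg_catalog.english' is spelled out explicitly, so the
    -- tsv column's contents do not depend on the current value of
    -- default_text_search_config.
    CREATE TABLE messages (
        title text,
        body  text,
        tsv   tsvector
    );

    CREATE TRIGGER tsvectorupdate BEFORE INSERT OR UPDATE
        ON messages FOR EACH ROW EXECUTE FUNCTION
        tsvector_update_trigger(tsv, 'pg_catalog.english', title, body);

If the configuration were instead taken from default_text_search_config, rows indexed under a different setting would be inconsistent with the rest of the column, which is the kind of change in search results after a dump and restore that the hunk warns about.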