Docs: Fix various mistakes and typos

Author: Justin Pryzby
Discussion: https://postgr.es/m/20220411020336.GB26620@telsasoft.com
Committed by David Rowley, 2022-04-11 20:48:48 +12:00
parent 7597cc3083
commit bba3c35b29
6 changed files with 41 additions and 41 deletions

View File

@@ -5047,7 +5047,7 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
 </term>
 <listitem>
 <para>
-Enables or disables reodering of keys in <literal>GROUP BY</literal>
+Enables or disables reordering of keys in <literal>GROUP BY</literal>
 clause. The default is <literal>on</literal>.
 </para>
 </listitem>
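The hunk above documents a planner setting that controls reordering of <literal>GROUP BY</literal> keys. A minimal session sketch, assuming the setting is named `enable_group_by_reordering` (the excerpt does not show the name, so verify against `pg_settings`); the table `t` is made up for illustration:

```sql
-- Assumed GUC name (enable_group_by_reordering) and hypothetical table t.
SHOW enable_group_by_reordering;                      -- documented default: on
SET enable_group_by_reordering = off;                 -- disable for this session
EXPLAIN SELECT a, b, count(*) FROM t GROUP BY b, a;   -- keys kept in written order
RESET enable_group_by_reordering;
```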
@@ -7657,8 +7657,8 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv;
 </para>
 <para>
-Each log line is serialized as a JSON object as of the following
-set of keys with their values.
+Each log line is serialized as a JSON object with the following
+set of keys and their associated values.
 </para>
 <table>

View File

@@ -137,7 +137,7 @@
 <!-- NB: In the code, the autovac launcher doesn't use the auxiliary
 process scaffolding; however it does behave as one so we list it
 here anyway. In addition, logger isn't connected to shared memory so
-most code outside postmaster.c doesn't even consider them "procs" in
+most code outside postmaster.c doesn't even consider it a "proc" in
 the first place.
 -->
 the <glossterm linkend="glossary-autovacuum">autovacuum launcher</glossterm>

View File

@@ -200,7 +200,7 @@ postgres 27093 0.0 0.0 30096 2752 ? Ss 11:34 0:00 postgres: ser
 <para>
 Cumulative statistics are collected in shared memory. Every
-<productname>PostgreSQL</productname> process collects statistics locally
+<productname>PostgreSQL</productname> process collects statistics locally,
 then updates the shared data at appropriate intervals. When a server,
 including a physical replica, shuts down cleanly, a permanent copy of the
 statistics data is stored in the <filename>pg_stat</filename> subdirectory,

View File

@@ -491,7 +491,7 @@ OPTIONS (ADD password_required 'false');
 <para>
 For a foreign server with this option enabled, if many remote
 (sub)transactions are opened on the foreign server in a local
-(sub)transaction, this option might increase the remote servers load
+(sub)transaction, this option might increase the remote server's load
 when the local (sub)transaction commits, so be careful when using this
 option.
 </para>
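The option discussed in this hunk is not named in the excerpt; it appears to describe postgres_fdw's `parallel_commit` server option (an inference from the surrounding text, not confirmed by the diff). Under that assumption, enabling and disabling it looks like:

```sql
-- Assumed option name (parallel_commit) and made-up server name (loopback);
-- the excerpt itself shows neither.
ALTER SERVER loopback OPTIONS (ADD parallel_commit 'true');
-- Turn it back off if remote-server load at local commit time is a concern:
ALTER SERVER loopback OPTIONS (SET parallel_commit 'false');
```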
@@ -1040,7 +1040,7 @@ postgres=# SELECT postgres_fdw_disconnect_all();
 <row>
 <entry><literal>%C</literal></entry>
 <entry>
-Cluster name in local server
+Cluster name on local server
 (see <xref linkend="guc-cluster-name"/> for details)
 </entry>
 </row>

View File

@@ -132,7 +132,7 @@ CREATE DATABASE <replaceable class="parameter">name</replaceable>
 available. This strategy writes a small record to the write-ahead log
 for each tablespace used by the target database. Each such record
 represents copying an entire directory to a new location at the
-filesystem level. While this does reduce the write-ahed
+filesystem level. While this does reduce the write-ahead
 log volume substantially, especially if the template database is large,
 it also forces the system to perform a checkpoint both before and
 after the creation of the new database. In some situations, this may
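The strategy described in this hunk (a small WAL record per tablespace, plus checkpoints before and after) can be requested explicitly; a sketch assuming the keyword is <literal>FILE_COPY</literal> and that <literal>WAL_LOG</literal> is the default, with hypothetical database names:

```sql
-- Hypothetical database name; STRATEGY keyword assumed to be FILE_COPY,
-- matching the directory-copy behavior the paragraph above describes.
CREATE DATABASE copydb TEMPLATE template0 STRATEGY = FILE_COPY;
```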
@@ -278,11 +278,11 @@ CREATE DATABASE <replaceable class="parameter">name</replaceable>
 <listitem>
 <para>
 The object identifier to be used for the new database. If this
-parameter is not specified, the database will choose a suitable
-OID automatically. This parameter is primarily intended for internal
-use by <application>pg_upgrade</application>, and only
-<application>pg_upgrade</application> can specify a value less
-than 16384.
+parameter is not specified, <productname>PostgreSQL</productname>
+will choose a suitable OID automatically. This parameter is primarily
+intended for internal use by <application>pg_upgrade</application>,
+and only <application>pg_upgrade</application> can specify a value
+less than 16384.
 </para>
 </listitem>
 </varlistentry>
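The corrected paragraph can be exercised directly; a sketch with a made-up database name, respecting the reserved range the text mentions:

```sql
-- "mydb" is a hypothetical name.  Values below 16384 are reserved for
-- pg_upgrade, per the paragraph above, so a user-chosen OID must be larger.
CREATE DATABASE mydb OID = 50000;
SELECT oid, datname FROM pg_database WHERE datname = 'mydb';
```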

View File

@@ -541,15 +541,15 @@ pgbench <optional> <replaceable>options</replaceable> </optional> <replaceable>d
 <firstterm>skipped</firstterm>.
 </para>
 <para>
-When the <option>--max-tries</option> option is used, the transaction with
-serialization or deadlock error cannot be retried if the total time of
-all its tries is greater than <replaceable>limit</replaceable> ms. To
-limit only the time of tries and not their number, use
-<literal>--max-tries=0</literal>. By default option
-<option>--max-tries</option> is set to 1 and transactions with
-serialization/deadlock errors are not retried. See <xref
-linkend="failures-and-retries"/> for more information about retrying
-such transactions.
+When the <option>--max-tries</option> option is used, a transaction
+which fails due to a serialization anomaly or from a deadlock will not
+be retried if the total time of all its tries is greater than
+<replaceable>limit</replaceable> ms. To limit only the time of tries
+and not their number, use <literal>--max-tries=0</literal>. By
+default, the option <option>--max-tries</option> is set to 1 and
+transactions with serialization/deadlock errors are not retried. See
+<xref linkend="failures-and-retries"/> for more information about
+retrying such transactions.
 </para>
 </listitem>
 </varlistentry>
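A hedged invocation matching the reworded paragraph: unlimited tries bounded by total retry time, plus a bounded benchmark duration (the database name is made up, and flag behavior is as the text above describes):

```shell
# Retry serialization/deadlock failures without a count limit
# (--max-tries=0), but give up on a transaction once its tries exceed
# 100 ms in total (--latency-limit=100); run the benchmark for 60 s.
# "bench" is a hypothetical database name.
pgbench --max-tries=0 --latency-limit=100 --time=60 bench
```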
@@ -622,7 +622,7 @@ pgbench <optional> <replaceable>options</replaceable> </optional> <replaceable>d
 throttling (<option>-R</option>), the latency is computed with respect
 to the transaction scheduled start time, not the actual transaction
 beginning time, thus it also includes the average schedule lag time.
-When <option>--max-tries</option> is used to enable transactions retries
+When <option>--max-tries</option> is used to enable transaction retries
 after serialization/deadlock errors, the report includes the number of
 retried transactions and the sum of all retries.
 </para>
@@ -818,7 +818,7 @@ pgbench <optional> <replaceable>options</replaceable> </optional> <replaceable>d
 <listitem>
 <para>
 Print messages about all errors and failures (errors without retrying)
-including which limit for retries was violated and how far it was
+including which limit for retries was exceeded and how far it was
 exceeded for the serialization/deadlock failures. (Note that in this
 case the output can be significantly increased.).
 See <xref linkend="failures-and-retries"/> for more information.
@@ -2433,7 +2433,7 @@ END;
 </para>
 <para>
-If <option>--failures-detailed</option> option is used, the type of
+If the <option>--failures-detailed</option> option is used, the type of
 failure is reported in the <replaceable>time</replaceable> like this:
 <screen>
 3 0 47423 0 1499414498 34501 3
@@ -2773,12 +2773,12 @@ statement latencies in milliseconds, failures and retries:
 <listitem>
 <para>
 Errors of the main program. They are the most serious and always result
-in an immediate exit from the <application>pgbench</application> with
-the corresponding error message. They include:
+in an immediate exit from <application>pgbench</application> with the
+corresponding error message. They include:
 <itemizedlist>
 <listitem>
 <para>
-errors at the beginning of the <application>pgbench</application>
+errors at the beginning of <application>pgbench</application>
 (e.g. an invalid option value);
 </para>
 </listitem>
@@ -2790,8 +2790,8 @@ statement latencies in milliseconds, failures and retries:
 </listitem>
 <listitem>
 <para>
-errors before starting threads (e.g. we could not connect to the
-database server / the syntax error in the meta command / thread
+errors before starting threads (e.g. could not connect to the
+database server, syntax error in the meta command, thread
 creation failure);
 </para>
 </listitem>
@@ -2813,7 +2813,7 @@ statement latencies in milliseconds, failures and retries:
 </listitem>
 <listitem>
 <para>
-Direct client errors. They lead to immediate exit from the
+Direct client errors. They lead to immediate exit from
 <application>pgbench</application> with the corresponding error message
 only in the case of an internal <application>pgbench</application>
 error (which are supposed to never occur...). Otherwise in the worst
@@ -2829,11 +2829,11 @@ statement latencies in milliseconds, failures and retries:
 </para>
 <para>
-Client's run is aborted in case of a serious error, for example, the
-connection with the database server was lost or the end of script reached
-without completing the last transaction. In addition, if an execution of SQL
+A client's run is aborted in case of a serious error; for example, the
+connection with the database server was lost or the end of script was reached
+without completing the last transaction. In addition, if execution of an SQL
 or meta command fails for reasons other than serialization or deadlock errors,
-the client is aborted. Otherwise, if an SQL fails with serialization or
+the client is aborted. Otherwise, if an SQL command fails with serialization or
 deadlock errors, the client is not aborted. In such cases, the current
 transaction is rolled back, which also includes setting the client variables
 as they were before the run of this transaction (it is assumed that one
@@ -2845,21 +2845,21 @@ statement latencies in milliseconds, failures and retries:
 time of retries (specified by the <option>--latency-limit</option> option) / the end
 of benchmark (specified by the <option>--time</option> option). If
 the last trial run fails, this transaction will be reported as failed but
-the client is not aborted and continue to work.
+the client is not aborted and continues to work.
 </para>
 <note>
 <para>
-Without specifying the <option>--max-tries</option> option a transaction will
+Without specifying the <option>--max-tries</option> option, a transaction will
 never be retried after a serialization or deadlock error because its default
-values is 1. Use an unlimited number of tries (<literal>--max-tries=0</literal>)
+value is 1. Use an unlimited number of tries (<literal>--max-tries=0</literal>)
 and the <option>--latency-limit</option> option to limit only the maximum time
 of tries. You can also use the <option>--time</option> option to limit the
 benchmark duration under an unlimited number of tries.
 </para>
 <para>
 Be careful when repeating scripts that contain multiple transactions: the
-script is always retried completely, so the successful transactions can be
+script is always retried completely, so successful transactions can be
 performed several times.
 </para>
 <para>
@@ -2879,7 +2879,7 @@ statement latencies in milliseconds, failures and retries:
 <para>
 The main report contains the number of failed transactions. If the
 <option>--max-tries</option> option is not equal to 1, the main report also
-contains the statistics related to retries: the total number of retried
+contains statistics related to retries: the total number of retried
 transactions and total number of retries. The per-script report inherits all
 these fields from the main report. The per-statement report displays retry
 statistics only if the <option>--max-tries</option> option is not equal to 1.
@@ -2890,7 +2890,7 @@ statement latencies in milliseconds, failures and retries:
 aggregation logs, as well as in the main and per-script reports, use the
 <option>--failures-detailed</option> option. If you also want to distinguish
 all errors and failures (errors without retrying) by type including which
-limit for retries was violated and how far it was exceeded for the
+limit for retries was exceeded and how much it was exceeded by for the
 serialization/deadlock failures, use the <option>--verbose-errors</option>
 option.
 </para>
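The two reporting options described in this final hunk can be combined with retries in one run; a sketch (hypothetical database name, flag behavior as stated above):

```shell
# --failures-detailed breaks failures down by type in logs and reports;
# --verbose-errors additionally prints, for serialization/deadlock
# failures, which retry limit was exceeded and by how much.
# "bench" is a hypothetical database name.
pgbench --max-tries=10 --failures-detailed --verbose-errors --time=30 bench
```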