Fix typos and grammar in docs and comments

This fixes several areas of the documentation and some comments in
matters of style, grammar, or even format.

Author: Justin Pryzby
Discussion: https://postgr.es/m/20201222041153.GK30237@telsasoft.com
Michael Paquier 2020-12-24 17:05:49 +09:00
parent 6ecf488d91
commit 90fbf7c57d
23 changed files with 45 additions and 43 deletions

View File

@@ -191,7 +191,7 @@ typedef struct Counters
double usage; /* usage factor */
int64 wal_records; /* # of WAL records generated */
int64 wal_fpi; /* # of WAL full page images generated */
uint64 wal_bytes; /* total amount of WAL bytes generated */
uint64 wal_bytes; /* total amount of WAL generated in bytes */
} Counters;
/*

View File

@@ -525,7 +525,7 @@ SET client_min_messages = DEBUG1;
designed to diagnose corruption without undue risk. It cannot guard
against all causes of backend crashes, as even executing the calling
query could be unsafe on a badly corrupted system. Access to <link
linkend="catalogs-overview">catalog tables</link> are performed and could
linkend="catalogs-overview">catalog tables</link> is performed and could
be problematic if the catalogs themselves are corrupted.
</para>

View File

@@ -4478,7 +4478,7 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
inherited columns are to be arranged. The count starts at 1.
</para>
<para>
Indexes can not have multiple inheritance, since they can only inherit
Indexes cannot have multiple inheritance, since they can only inherit
when using declarative partitioning.
</para></entry>
</row>
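As a hedged aside on the corrected sentence: with declarative partitioning, both the partition and its index show up in pg_inherits with a single parent each. A small SQL sketch (object names below are invented for illustration):

    CREATE TABLE pt (a int) PARTITION BY LIST (a);
    CREATE TABLE pt1 PARTITION OF pt FOR VALUES IN (1);
    CREATE INDEX ON pt (a);   -- the partition's index inherits from the parent's index
    -- Each child (table or index) has exactly one parent, so inhseqno is always 1 here.
    SELECT inhrelid::regclass, inhparent::regclass, inhseqno FROM pg_inherits;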

View File

@@ -321,7 +321,7 @@ SELECT c FROM test ORDER BY c ~&gt; 3 DESC LIMIT 5;
Makes a one dimensional cube.
</para>
<para>
<literal>cube(1,2)</literal>
<literal>cube(1, 2)</literal>
<returnvalue>(1),(2)</returnvalue>
</para></entry>
</row>

View File

@@ -1274,7 +1274,7 @@ repeat('Pg', 4) <returnvalue>PgPgPgPg</returnvalue>
(truncates towards zero)
</para>
<para>
<literal>div(9,4)</literal>
<literal>div(9, 4)</literal>
<returnvalue>2</returnvalue>
</para></entry>
</row>
@@ -1493,7 +1493,7 @@ repeat('Pg', 4) <returnvalue>PgPgPgPg</returnvalue>
<type>bigint</type>, and <type>numeric</type>
</para>
<para>
<literal>mod(9,4)</literal>
<literal>mod(9, 4)</literal>
<returnvalue>1</returnvalue>
</para></entry>
</row>
@@ -1975,7 +1975,7 @@ repeat('Pg', 4) <returnvalue>PgPgPgPg</returnvalue>
result in radians
</para>
<para>
<literal>atan2(1,0)</literal>
<literal>atan2(1, 0)</literal>
<returnvalue>1.5707963267948966</returnvalue>
</para></entry>
</row>
@@ -1995,7 +1995,7 @@ repeat('Pg', 4) <returnvalue>PgPgPgPg</returnvalue>
result in degrees
</para>
<para>
<literal>atan2d(1,0)</literal>
<literal>atan2d(1, 0)</literal>
<returnvalue>90</returnvalue>
</para></entry>
</row>

View File

@@ -953,11 +953,11 @@ stream_commit_cb(...); &lt;-- commit of the streamed transaction
<para>
Similar to spill-to-disk behavior, streaming is triggered when the total
amount of changes decoded from the WAL (for all in-progress transactions)
exceeds limit defined by <varname>logical_decoding_work_mem</varname> setting.
At that point the largest toplevel transaction (measured by amount of memory
exceeds the limit defined by <varname>logical_decoding_work_mem</varname> setting.
At that point, the largest toplevel transaction (measured by the amount of memory
currently used for decoded changes) is selected and streamed. However, in
some cases we still have to spill to the disk even if streaming is enabled
because if we cross the memory limit but we still have not decoded the
some cases we still have to spill to disk even if streaming is enabled
because we exceed the memory threshold but still have not decoded the
complete tuple e.g., only decoded toast table insert but not the main table
insert.
</para>
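For context, a minimal sketch of the setting this paragraph refers to (the 256MB figure is purely illustrative):

    SHOW logical_decoding_work_mem;                        -- 64MB by default
    ALTER SYSTEM SET logical_decoding_work_mem = '256MB';  -- per-decoder memory budget
    SELECT pg_reload_conf();                               -- apply without a restart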

View File

@@ -3470,7 +3470,7 @@ SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event i
<structfield>wal_bytes</structfield> <type>numeric</type>
</para>
<para>
Total amount of WAL bytes generated
Total amount of WAL generated in bytes
</para></entry>
</row>
@@ -3479,7 +3479,7 @@ SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event i
<structfield>wal_buffers_full</structfield> <type>bigint</type>
</para>
<para>
Number of times WAL data was written to the disk because WAL buffers got full
Number of times WAL data was written to disk because WAL buffers became full
</para></entry>
</row>
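As a quick illustration (not part of the patch), the two reworded counters can be read directly from the view:

    SELECT wal_bytes, wal_buffers_full FROM pg_stat_wal;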

View File

@@ -360,7 +360,7 @@
<structfield>wal_bytes</structfield> <type>numeric</type>
</para>
<para>
Total amount of WAL bytes generated by the statement
Total amount of WAL generated by the statement in bytes
</para></entry>
</row>
</tbody>
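A hedged usage sketch for the reworded column, assuming the pg_stat_statements extension is installed:

    -- Which statements generated the most WAL, in bytes?
    SELECT query, calls, wal_bytes
    FROM pg_stat_statements
    ORDER BY wal_bytes DESC
    LIMIT 5;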

View File

@@ -29,7 +29,7 @@
<para>
Every range type has a corresponding multirange type. A multirange is
an ordered list of non-continguous, non-empty, non-null ranges. Most
an ordered list of non-contiguous, non-empty, non-null ranges. Most
range operators also work on multiranges, and they have a few functions
of their own.
</para>

View File

@@ -180,10 +180,10 @@ CREATE TYPE <replaceable class="parameter">name</replaceable>
The optional <replaceable class="parameter">multirange_type_name</replaceable>
parameter specifies the name of the corresponding multirange type. If not
specified, this name is chosen automatically as follows.
If range type name contains <literal>range</literal> substring, then
multirange type name is formed by replacement of the <literal>range</literal>
substring with <literal>multirange</literal> substring in the range
type name. Otherwise, multirange type name is formed by appending
If the range type name contains the substring <literal>range</literal>, then
the multirange type name is formed by replacement of the <literal>range</literal>
substring with <literal>multirange</literal> in the range
type name. Otherwise, the multirange type name is formed by appending a
<literal>_multirange</literal> suffix to the range type name.
</para>
</refsect2>
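To make the naming rule concrete, a sketch with invented type names:

    -- "floatrange" contains the substring "range", so substitution applies and
    -- the multirange type is named "floatmultirange".
    CREATE TYPE floatrange AS RANGE (subtype = float8, subtype_diff = float8mi);

    -- "textspan" contains no "range" substring, so the suffix rule applies and
    -- the multirange type is named "textspan_multirange".
    CREATE TYPE textspan AS RANGE (subtype = text);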

View File

@@ -198,9 +198,9 @@ ROLLBACK;
<listitem>
<para>
Include information on WAL record generation. Specifically, include the
number of records, number of full page images (fpi) and amount of WAL
bytes generated. In text format, only non-zero values are printed. This
parameter may only be used when <literal>ANALYZE</literal> is also
number of records, number of full page images (fpi) and the amount of WAL
generated in bytes. In text format, only non-zero values are printed.
This parameter may only be used when <literal>ANALYZE</literal> is also
enabled. It defaults to <literal>FALSE</literal>.
</para>
</listitem>
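For illustration, a sketch of the option in use; the table name and the counter values in the comment are hypothetical, and text format omits any counters that are zero:

    CREATE TABLE wal_demo (id int);
    EXPLAIN (ANALYZE, WAL, COSTS OFF) INSERT INTO wal_demo VALUES (1);
    --   Insert on wal_demo (actual time=... rows=0 loops=1)
    --     WAL: records=1 bytes=59    (illustrative figures)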

View File

@@ -621,7 +621,7 @@ PostgreSQL documentation
<listitem>
<para>
Specify the compression level to use. Zero means no compression.
For the custom archive format, this specifies compression of
For the custom and directory archive formats, this specifies compression of
individual table-data segments, and the default is to compress
at a moderate level.
For plain text output, setting a nonzero compression level causes

View File

@@ -40,7 +40,7 @@ PostgreSQL documentation
<para>
It is important to note that the validation which is performed by
<application>pg_verifybackup</application> does not and can not include
<application>pg_verifybackup</application> does not and cannot include
every check which will be performed by a running server when attempting
to make use of the backup. Even if you use this tool, you should still
perform test restores and verify that the resulting databases work as

View File

@@ -184,7 +184,7 @@ EXPLAIN EXECUTE <replaceable>name</replaceable>(<replaceable>parameter_values</r
analysis and planning of the statement, <productname>PostgreSQL</productname> will
force re-analysis and re-planning of the statement before using it
whenever database objects used in the statement have undergone
definitional (DDL) changes or the planner statistics of them have
definitional (DDL) changes or their planner statistics have
been updated since the previous use of the prepared
statement. Also, if the value of <xref linkend="guc-search-path"/> changes
from one use to the next, the statement will be re-parsed using the new
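A hedged sketch of the re-planning rule just described (table and statement names are made up):

    PREPARE get_row (int) AS SELECT * FROM some_table WHERE id = $1;
    EXPLAIN EXECUTE get_row(42);   -- planned against the current statistics
    ANALYZE some_table;            -- updates planner statistics for the table ...
    EXPLAIN EXECUTE get_row(42);   -- ... so the next use forces a re-plan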

View File

@@ -103,7 +103,7 @@ less -x4
message text. In addition there are optional elements, the most
common of which is an error identifier code that follows the SQL spec's
SQLSTATE conventions.
<function>ereport</function> itself is just a shell macro, that exists
<function>ereport</function> itself is just a shell macro that exists
mainly for the syntactic convenience of making message generation
look like a single function call in the C source code. The only parameter
accepted directly by <function>ereport</function> is the severity level.

View File

@@ -580,7 +580,7 @@
Independently of <varname>max_wal_size</varname>,
the most recent <xref linkend="guc-wal-keep-size"/> megabytes of
WAL files plus one additional WAL file are
kept at all times. Also, if WAL archiving is used, old segments can not be
kept at all times. Also, if WAL archiving is used, old segments cannot be
removed or recycled until they are archived. If WAL archiving cannot keep up
with the pace that WAL is generated, or if <varname>archive_command</varname>
fails repeatedly, old WAL files will accumulate in <filename>pg_wal</filename>
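A minimal sketch of the retention setting mentioned here (the 512MB value is only an example):

    ALTER SYSTEM SET wal_keep_size = '512MB';
    SELECT pg_reload_conf();   -- wal_keep_size can be changed without a restart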

View File

@@ -10418,7 +10418,7 @@ get_sync_bit(int method)
*
* Never use O_DIRECT in walreceiver process for similar reasons; the WAL
* written by walreceiver is normally read by the startup process soon
* after its written. Also, walreceiver performs unaligned writes, which
* after it's written. Also, walreceiver performs unaligned writes, which
* don't work with O_DIRECT, so it is required for correctness too.
*/
if (!XLogIsNeeded() && !AmWalReceiverProcess())

View File

@@ -3119,7 +3119,7 @@ get_matching_range_bounds(PartitionPruneContext *context,
/*
* If the smallest partition to return has MINVALUE (negative infinity) as
* its lower bound, increment it to point to the next finite bound
* (supposedly its upper bound), so that we don't advertently end up
* (supposedly its upper bound), so that we don't inadvertently end up
* scanning the default partition.
*/
if (minoff < boundinfo->ndatums && partindices[minoff] < 0)
@@ -3138,7 +3138,7 @@ get_matching_range_bounds(PartitionPruneContext *context,
* If the previous greatest partition has MAXVALUE (positive infinity) as
* its upper bound (something only possible to do with multi-column range
* partitioning), we scan switch to it as the greatest partition to
* return. Again, so that we don't advertently end up scanning the
* return. Again, so that we don't inadvertently end up scanning the
* default partition.
*/
if (maxoff >= 1 && partindices[maxoff] < 0)
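A hedged SQL sketch of the case these comments describe (object names invented): a multi-column range partition whose upper bound ends in MAXVALUE, alongside a default partition that pruning should leave out of the plan.

    CREATE TABLE mtab (a int, b int) PARTITION BY RANGE (a, b);
    CREATE TABLE mtab_p1 PARTITION OF mtab
        FOR VALUES FROM (0, 0) TO (10, MAXVALUE);   -- MAXVALUE as the upper bound
    CREATE TABLE mtab_def PARTITION OF mtab DEFAULT;
    -- A lookup that falls inside mtab_p1 should be pruned to that partition only,
    -- not the default partition.
    EXPLAIN (COSTS OFF) SELECT * FROM mtab WHERE a = 5 AND b = 7;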

View File

@@ -329,10 +329,15 @@ struct _archiveHandle
DumpId *tableDataId; /* TABLE DATA ids, indexed by table dumpId */
struct _tocEntry *currToc; /* Used when dumping data */
int compression; /* Compression requested on open Possible
* values for compression: -1
* Z_DEFAULT_COMPRESSION 0 COMPRESSION_NONE
* 1-9 levels for gzip compression */
int compression; /*---------
* Compression requested on open().
* Possible values for compression:
* -2 ZSTD_COMPRESSION
* -1 Z_DEFAULT_COMPRESSION
* 0 COMPRESSION_NONE
* 1-9 levels for gzip compression
*---------
*/
bool dosync; /* data requested to be synced on sight */
ArchiveMode mode; /* File mode - r or w */
void *formatData; /* Header data specific to file format */

View File

@@ -4,7 +4,7 @@
*
* A directory format dump is a directory, which contains a "toc.dat" file
* for the TOC, and a separate file for each data entry, named "<oid>.dat".
* Large objects (BLOBs) are stored in separate files named "blob_<uid>.dat",
* Large objects (BLOBs) are stored in separate files named "blob_<oid>.dat",
* and there's a plain-text TOC file for them called "blobs.toc". If
* compression is used, each data file is individually compressed and the
* ".gz" suffix is added to the filenames. The TOC files are never

View File

@@ -7018,10 +7018,7 @@ getInherits(Archive *fout, int *numInherits)
int i_inhrelid;
int i_inhparent;
/*
* Find all the inheritance information, excluding implicit inheritance
* via partitioning.
*/
/* find all the inheritance information */
appendPQExpBufferStr(query, "SELECT inhrelid, inhparent FROM pg_inherits");
res = ExecuteSqlQuery(fout, query->data, PGRES_TUPLES_OK);

View File

@@ -247,8 +247,8 @@ output_completion_banner(char *deletion_script_file_name)
}
pg_log(PG_REPORT,
"Optimizer statistics are not transferred by pg_upgrade so,\n"
"once you start the new server, consider running:\n"
"Optimizer statistics are not transferred by pg_upgrade.\n"
"Once you start the new server, consider running:\n"
" %s/vacuumdb %s--all --analyze-in-stages\n\n", new_cluster.bindir, user_specification.data);
if (deletion_script_file_name)

View File

@@ -81,7 +81,7 @@ VSObjectFactory.pm factory module providing the code to create the
Description of the internals of the Visual Studio build process
---------------------------------------------------------------
By typing 'build' the user starts the build.bat wrapper which simply passes
it's arguments to build.pl.
its arguments to build.pl.
In build.pl the user's buildenv.pl is used to set up the build environment
(i. e. path to bison and flex). In addition his config.pl file is merged into
config_default.pl to create the configuration arguments.