Documentation spell checking and markup improvements

Peter Eisentraut 2016-07-28 22:46:15 -04:00
parent ed0b228d7a
commit 5676da2d01
15 changed files with 86 additions and 86 deletions

View File

@ -212,8 +212,8 @@ CREATE INDEX
<para>
An operator class for bloom indexes requires only a hash function for the
indexed datatype and an equality operator for searching. This example
shows the opclass definition for the <type>text</> data type:
indexed data type and an equality operator for searching. This example
shows the operator class definition for the <type>text</> data type:
</para>
<programlisting>
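-- Editorial sketch: the hunk cuts off before the original listing, so this is
-- only an illustration of the kind of definition the paragraph describes, an
-- operator class pairing the equality operator with the hashtext() function.
CREATE OPERATOR CLASS text_ops
DEFAULT FOR TYPE text USING bloom AS
    OPERATOR    1   =(text, text),
    FUNCTION    1   hashtext(text);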

View File

@ -625,7 +625,7 @@ typedef struct BrinOpcInfo
</table>
<para>
To write an operator class for a complex datatype which has values
To write an operator class for a complex data type which has values
included within another type, it's possible to use the inclusion support
procedures alongside the corresponding operators, as shown
in <xref linkend="brin-extensibility-inclusion-table">. It requires
@ -774,16 +774,16 @@ typedef struct BrinOpcInfo
Support procedure numbers 1-10 are reserved for the BRIN internal
functions, so the SQL level functions start with number 11. Support
function number 11 is the main function required to build the index.
It should accept two arguments with the same datatype as the opclass,
and return the union of them. The inclusion opclass can store union
values with different datatypes if it is defined with the
It should accept two arguments with the same data type as the operator class,
and return the union of them. The inclusion operator class can store union
values with different data types if it is defined with the
<literal>STORAGE</> parameter. The return value of the union
function should match the <literal>STORAGE</> datatype.
function should match the <literal>STORAGE</> data type.
</para>
<para>
Support procedure numbers 12 and 14 are provided to support
irregularities of built-in datatypes. Procedure number 12
irregularities of built-in data types. Procedure number 12
is used to support network addresses from different families which
are not mergeable. Procedure number 14 is used to support
empty ranges. Procedure number 13 is an optional but
@ -794,17 +794,17 @@ typedef struct BrinOpcInfo
</para>
<para>
Both minmax and inclusion opclasses support cross-datatype
Both minmax and inclusion operator classes support cross-data-type
operators, though with these the dependencies become more complicated.
The minmax opclass requires a full set of operators to be
defined with both arguments having the same datatype. It allows
additional datatypes to be supported by defining extra sets
of operators. Inclusion opclass operator strategies are dependent
The minmax operator class requires a full set of operators to be
defined with both arguments having the same data type. It allows
additional data types to be supported by defining extra sets
of operators. Inclusion operator class operator strategies are dependent
on another operator strategy as shown in
<xref linkend="brin-extensibility-inclusion-table">, or the same
operator strategy as themselves. They require the dependency
operator to be defined with the <literal>STORAGE</> datatype as the
left-hand-side argument and the other supported datatype to be the
operator to be defined with the <literal>STORAGE</> data type as the
left-hand-side argument and the other supported data type to be the
right-hand-side argument of the supported operator. See
<literal>float4_minmax_ops</> as an example of minmax, and
<literal>box_inclusion_ops</> as an example of inclusion.
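As a concrete illustration of the two kinds of operator class named here, both indexes below use opclasses mentioned in this section; the table and column names are invented for the example, and it assumes <literal>temperature</> is of type <type>real</> and <literal>bounds</> of type <type>box</>:
<programlisting>
-- minmax: each block range is summarized by its minimum and maximum value
CREATE INDEX measurements_temp_idx ON measurements
    USING brin (temperature float4_minmax_ops);

-- inclusion: each block range is summarized by a containing (bounding) value
CREATE INDEX regions_bounds_idx ON regions
    USING brin (bounds box_inclusion_ops);
</programlisting>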

View File

@ -2006,7 +2006,7 @@ include_dir 'conf.d'
<literal>Gather</literal> node. Parallel workers are taken from the
pool of processes established by
<xref linkend="guc-max-worker-processes">. Note that the requested
number of workers may not actually be available at runtime. If this
number of workers may not actually be available at run time. If this
occurs, the plan will run with fewer workers than expected, which may
be inefficient. The default value is 2. Setting this value to 0
disables parallel query execution.
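A short illustration, assuming the parameter described in this hunk is <varname>max_parallel_workers_per_gather</varname> (the hunk does not show the parameter's name, so treat that as an assumption):
<programlisting>
-- request up to 4 workers per Gather node for the current session
SET max_parallel_workers_per_gather = 4;

-- setting it to 0 disables parallel query execution
SET max_parallel_workers_per_gather = 0;
</programlisting>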

View File

@ -3782,7 +3782,7 @@ EXEC SQL GET DESCRIPTOR <replaceable>name</replaceable> VALUE <replaceable>num</
keywords are different. A Descriptor Area can also be manually built to
provide the input parameters for a query or a cursor and
<literal>USING SQL DESCRIPTOR <replaceable>name</replaceable></literal>
is the way to pass the input parameters into a parametrized query. The statement
is the way to pass the input parameters into a parameterized query. The statement
to build a named SQL Descriptor Area is below:
<programlisting>
EXEC SQL SET DESCRIPTOR <replaceable>name</replaceable> VALUE <replaceable>num</replaceable> <replaceable>field</replaceable> = :<replaceable>hostvar</replaceable>;
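/*
 * Editorial sketch, not part of the original document: assuming :stmt_text is
 * a host variable holding a statement with one parameter and :val holds the
 * value to bind, a descriptor built as above can be passed as query input:
 */
EXEC SQL ALLOCATE DESCRIPTOR indesc;
EXEC SQL SET DESCRIPTOR indesc COUNT = 1;
EXEC SQL SET DESCRIPTOR indesc VALUE 1 DATA = :val;
EXEC SQL PREPARE mystmt FROM :stmt_text;
EXEC SQL EXECUTE mystmt USING SQL DESCRIPTOR indesc;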
@ -3917,7 +3917,7 @@ typedef struct sqlda_struct sqlda_t;
<term><literal>sqln</></term>
<listitem>
<para>
It contains the number of input parameters for a parametrized query in
It contains the number of input parameters for a parameterized query in
case it's passed into <command>OPEN</command>, <command>DECLARE</command> or
<command>EXECUTE</command> statements using the <literal>USING</literal>
keyword. In case it's used as output of <command>SELECT</command>,
@ -8121,7 +8121,7 @@ if (*(int2 *)sqldata->sqlvar[i].sqlind != 0)
<listitem>
<para>
Type of the NULL indicator data. It's always SQLSMINT when returning data from the server.
When the <literal>SQLDA</literal> is used for a parametrized query, the data is treated
When the <literal>SQLDA</literal> is used for a parameterized query, the data is treated
according to the set type.
</para>
</listitem>

View File

@ -17100,7 +17100,7 @@ SELECT collation for ('foo' COLLATE "de_DE");
</entry>
<entry><type>record</type></entry>
<entry>
Returns information about current controldata file state.
Returns information about current control file state.
</entry>
</row>
@ -17148,97 +17148,97 @@ SELECT collation for ('foo' COLLATE "de_DE");
<tbody>
<row>
<entry>checkpoint_location</entry>
<entry><literal>checkpoint_location</literal></entry>
<entry><type>pg_lsn</type></entry>
</row>
<row>
<entry>prior_location</entry>
<entry><literal>prior_location</literal></entry>
<entry><type>pg_lsn</type></entry>
</row>
<row>
<entry>redo_location</entry>
<entry><literal>redo_location</literal></entry>
<entry><type>pg_lsn</type></entry>
</row>
<row>
<entry>redo_wal_file</entry>
<entry><literal>redo_wal_file</literal></entry>
<entry><type>text</type></entry>
</row>
<row>
<entry>timeline_id</entry>
<entry><literal>timeline_id</literal></entry>
<entry><type>integer</type></entry>
</row>
<row>
<entry>prev_timeline_id</entry>
<entry><literal>prev_timeline_id</literal></entry>
<entry><type>integer</type></entry>
</row>
<row>
<entry>full_page_writes</entry>
<entry><literal>full_page_writes</literal></entry>
<entry><type>boolean</type></entry>
</row>
<row>
<entry>next_xid</entry>
<entry><literal>next_xid</literal></entry>
<entry><type>text</type></entry>
</row>
<row>
<entry>next_oid</entry>
<entry><literal>next_oid</literal></entry>
<entry><type>oid</type></entry>
</row>
<row>
<entry>next_multixact_id</entry>
<entry><literal>next_multixact_id</literal></entry>
<entry><type>xid</type></entry>
</row>
<row>
<entry>next_multi_offset</entry>
<entry><literal>next_multi_offset</literal></entry>
<entry><type>xid</type></entry>
</row>
<row>
<entry>oldest_xid</entry>
<entry><literal>oldest_xid</literal></entry>
<entry><type>xid</type></entry>
</row>
<row>
<entry>oldest_xid_dbid</entry>
<entry><literal>oldest_xid_dbid</literal></entry>
<entry><type>oid</type></entry>
</row>
<row>
<entry>oldest_active_xid</entry>
<entry><literal>oldest_active_xid</literal></entry>
<entry><type>xid</type></entry>
</row>
<row>
<entry>oldest_multi_xid</entry>
<entry><literal>oldest_multi_xid</literal></entry>
<entry><type>xid</type></entry>
</row>
<row>
<entry>oldest_multi_dbid</entry>
<entry><literal>oldest_multi_dbid</literal></entry>
<entry><type>oid</type></entry>
</row>
<row>
<entry>oldest_commit_ts_xid</entry>
<entry><literal>oldest_commit_ts_xid</literal></entry>
<entry><type>xid</type></entry>
</row>
<row>
<entry>newest_commit_ts_xid</entry>
<entry><literal>newest_commit_ts_xid</literal></entry>
<entry><type>xid</type></entry>
</row>
<row>
<entry>checkpoint_time</entry>
<entry><literal>checkpoint_time</literal></entry>
<entry><type>timestamp with time zone</type></entry>
</row>
@ -17264,22 +17264,22 @@ SELECT collation for ('foo' COLLATE "de_DE");
<tbody>
<row>
<entry>pg_control_version</entry>
<entry><literal>pg_control_version</literal></entry>
<entry><type>integer</type></entry>
</row>
<row>
<entry>catalog_version_no</entry>
<entry><literal>catalog_version_no</literal></entry>
<entry><type>integer</type></entry>
</row>
<row>
<entry>system_identifier</entry>
<entry><literal>system_identifier</literal></entry>
<entry><type>bigint</type></entry>
</row>
<row>
<entry>pg_control_last_modified</entry>
<entry><literal>pg_control_last_modified</literal></entry>
<entry><type>timestamp with time zone</type></entry>
</row>
@ -17305,67 +17305,67 @@ SELECT collation for ('foo' COLLATE "de_DE");
<tbody>
<row>
<entry>max_data_alignment</entry>
<entry><literal>max_data_alignment</literal></entry>
<entry><type>integer</type></entry>
</row>
<row>
<entry>database_block_size</entry>
<entry><literal>database_block_size</literal></entry>
<entry><type>integer</type></entry>
</row>
<row>
<entry>blocks_per_segment</entry>
<entry><literal>blocks_per_segment</literal></entry>
<entry><type>integer</type></entry>
</row>
<row>
<entry>wal_block_size</entry>
<entry><literal>wal_block_size</literal></entry>
<entry><type>integer</type></entry>
</row>
<row>
<entry>bytes_per_wal_segment</entry>
<entry><literal>bytes_per_wal_segment</literal></entry>
<entry><type>integer</type></entry>
</row>
<row>
<entry>max_identifier_length</entry>
<entry><literal>max_identifier_length</literal></entry>
<entry><type>integer</type></entry>
</row>
<row>
<entry>max_index_columns</entry>
<entry><literal>max_index_columns</literal></entry>
<entry><type>integer</type></entry>
</row>
<row>
<entry>max_toast_chunk_size</entry>
<entry><literal>max_toast_chunk_size</literal></entry>
<entry><type>integer</type></entry>
</row>
<row>
<entry>large_object_chunk_size</entry>
<entry><literal>large_object_chunk_size</literal></entry>
<entry><type>integer</type></entry>
</row>
<row>
<entry>bigint_timestamps</entry>
<entry><literal>bigint_timestamps</literal></entry>
<entry><type>boolean</type></entry>
</row>
<row>
<entry>float4_pass_by_value</entry>
<entry><literal>float4_pass_by_value</literal></entry>
<entry><type>boolean</type></entry>
</row>
<row>
<entry>float8_pass_by_value</entry>
<entry><literal>float8_pass_by_value</literal></entry>
<entry><type>boolean</type></entry>
</row>
<row>
<entry>data_page_checksum_version</entry>
<entry><literal>data_page_checksum_version</literal></entry>
<entry><type>integer</type></entry>
</row>
@ -17391,27 +17391,27 @@ SELECT collation for ('foo' COLLATE "de_DE");
<tbody>
<row>
<entry>min_recovery_end_location</entry>
<entry><literal>min_recovery_end_location</literal></entry>
<entry><type>pg_lsn</type></entry>
</row>
<row>
<entry>min_recovery_end_timeline</entry>
<entry><literal>min_recovery_end_timeline</literal></entry>
<entry><type>integer</type></entry>
</row>
<row>
<entry>backup_start_location</entry>
<entry><literal>backup_start_location</literal></entry>
<entry><type>pg_lsn</type></entry>
</row>
<row>
<entry>backup_end_location</entry>
<entry><literal>backup_end_location</literal></entry>
<entry><type>pg_lsn</type></entry>
</row>
<row>
<entry>end_of_backup_record_required</entry>
<entry><literal>end_of_backup_record_required</literal></entry>
<entry><type>boolean</type></entry>
</row>
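The column lists above belong to the control-data inspection functions added in 9.6; their names are not visible in these hunks, so the function names below are an assumption (<function>pg_control_checkpoint()</>, <function>pg_control_system()</>, <function>pg_control_init()</> and <function>pg_control_recovery()</>). A minimal usage sketch:
<programlisting>
-- each function returns one record whose columns match the tables above
SELECT checkpoint_location, redo_location, timeline_id
FROM pg_control_checkpoint();

SELECT * FROM pg_control_system();
SELECT * FROM pg_control_init();
SELECT * FROM pg_control_recovery();
</programlisting>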

View File

@ -1046,7 +1046,7 @@ primary_slot_name = 'node_a_slot'
servers. Waiting for confirmation increases the user's confidence that the
changes will not be lost in the event of server crashes but it also
necessarily increases the response time for the requesting transaction.
The minimum wait time is the roundtrip time between primary to standby.
The minimum wait time is the round-trip time between primary to standby.
</para>
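A hedged sketch of the settings this trade-off involves; the parameter names come from the surrounding chapter rather than from this hunk, so treat them as assumptions:
<programlisting>
-- on the primary, in postgresql.conf: commits wait for the named standby
synchronous_standby_names = 'node_a'

-- a client that can tolerate losing a transaction may skip the round trip
SET synchronous_commit = local;
</programlisting>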
<para>

View File

@ -1315,7 +1315,7 @@ SELECT target FROM tests WHERE subject = 'some-subject' AND success;
But there's a problem: the <literal>WHERE</> clause refers
to <literal>success</> which is not available as a result column of the
index. Nonetheless, an index-only scan is possible because the plan does
not need to recheck that part of the <literal>WHERE</> clause at runtime:
not need to recheck that part of the <literal>WHERE</> clause at run time:
all entries found in the index necessarily have <literal>success = true</>
so this need not be explicitly checked in the
plan. <productname>PostgreSQL</> versions 9.6 and later will recognize
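The partial index this example presupposes lies outside the hunk; as a sketch using the same table and column names, it would look roughly like this:
<programlisting>
CREATE INDEX tests_success_idx ON tests (subject, target) WHERE success;
</programlisting>
Because the index predicate already guarantees <literal>success = true</> for every entry, the clause need not be rechecked during the scan.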

View File

@ -7345,7 +7345,7 @@ ldap://ldap.acme.com/cn=dbserver,cn=hosts?pgconnectinfo?base?(objectclass=*)
<para>
In <literal>verify-full</> mode, the host name is matched against the
certificate's Subject Alternative Name attribute(s), or against the
Common Name attribute if no Subject Alternative Name of type dNSName is
Common Name attribute if no Subject Alternative Name of type <literal>dNSName</literal> is
present. If the certificate's name attribute starts with an asterisk
(<literal>*</>), the asterisk will be treated as
a wildcard, which will match all characters <emphasis>except</> a dot
@ -7505,7 +7505,7 @@ ldap://ldap.acme.com/cn=dbserver,cn=hosts?pgconnectinfo?base?(objectclass=*)
<para>
All <acronym>SSL</> options carry overhead in the form of encryption and
key-exchange, so there is a tradeoff that has to be made between performance
key-exchange, so there is a trade-off that has to be made between performance
and security. <xref linkend="libpq-ssl-sslmode-statements">
illustrates the risks the different <literal>sslmode</> values
protect against, and what statement they make about security and overhead.

View File

@ -870,7 +870,7 @@ postgres 27093 0.0 0.0 30096 2752 ? Ss 11:34 0:00 postgres: ser
<row>
<entry><literal>BtreeVacuumLock</></entry>
<entry>Waiting to read or update vacuum-related information for a
Btree index.</entry>
B-tree index.</entry>
</row>
<row>
<entry><literal>AddinShmemInitLock</></entry>

View File

@ -266,13 +266,13 @@ pgbench <optional> <replaceable>options</> </optional> <replaceable>dbname</>
<term><option>--builtin</>=<replaceable>scriptname[@weight]</></term>
<listitem>
<para>
Add the specified builtin script to the list of executed scripts.
Add the specified built-in script to the list of executed scripts.
An optional integer weight after <literal>@</> allows to adjust the
probability of drawing the script. If not specified, it is set to 1.
Available builtin scripts are: <literal>tpcb-like</>,
Available built-in scripts are: <literal>tpcb-like</>,
<literal>simple-update</> and <literal>select-only</>.
Unambiguous prefixes of builtin names are accepted.
With special name <literal>list</>, show the list of builtin scripts
Unambiguous prefixes of built-in names are accepted.
With special name <literal>list</>, show the list of built-in scripts
and exit immediately.
</para>
</listitem>
@ -422,7 +422,7 @@ pgbench <optional> <replaceable>options</> </optional> <replaceable>dbname</>
<term><option>--skip-some-updates</option></term>
<listitem>
<para>
Run builtin simple-update script.
Run built-in simple-update script.
Shorthand for <option>-b simple-update</>.
</para>
</listitem>
@ -702,7 +702,7 @@ pgbench <optional> <replaceable>options</> </optional> <replaceable>dbname</>
</para>
<para>
The default builtin transaction script (also invoked with <option>-b tpcb-like</>)
The default built-in transaction script (also invoked with <option>-b tpcb-like</>)
issues seven commands per transaction over randomly chosen <literal>aid</>,
<literal>tid</>, <literal>bid</> and <literal>balance</>.
The scenario is inspired by the TPC-B benchmark, but is not actually TPC-B,
@ -720,14 +720,14 @@ pgbench <optional> <replaceable>options</> </optional> <replaceable>dbname</>
</orderedlist>
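The seven commands enumerated in the list above (elided by this hunk) correspond roughly to the following SQL; this is an editorial sketch in which <literal>:aid</>, <literal>:tid</>, <literal>:bid</> and <literal>:delta</> stand for the random values pgbench picks per transaction:
<programlisting>
BEGIN;
UPDATE pgbench_accounts SET abalance = abalance + :delta WHERE aid = :aid;
SELECT abalance FROM pgbench_accounts WHERE aid = :aid;
UPDATE pgbench_tellers SET tbalance = tbalance + :delta WHERE tid = :tid;
UPDATE pgbench_branches SET bbalance = bbalance + :delta WHERE bid = :bid;
INSERT INTO pgbench_history (tid, bid, aid, delta, mtime)
    VALUES (:tid, :bid, :aid, :delta, CURRENT_TIMESTAMP);
END;
</programlisting>
In this sketch, steps 4 and 5 are the <literal>pgbench_tellers</> and <literal>pgbench_branches</> updates; those small tables are where update contention concentrates, which is why the simple-update variant drops them.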
<para>
If you select the <literal>simple-update</> builtin (also <option>-N</>),
If you select the <literal>simple-update</> built-in (also <option>-N</>),
steps 4 and 5 aren't included in the transaction.
This will avoid update contention on these tables, but
it makes the test case even less like TPC-B.
</para>
<para>
If you select the <literal>select-only</> builtin (also <option>-S</>),
If you select the <literal>select-only</> built-in (also <option>-S</>),
only the <command>SELECT</> is issued.
</para>
</refsect2>
@ -1000,7 +1000,7 @@ pgbench <optional> <replaceable>options</> </optional> <replaceable>dbname</>
<row>
<entry><literal><function>random_gaussian(<replaceable>lb</>, <replaceable>ub</>, <replaceable>parameter</>)</></></>
<entry>integer</>
<entry>gaussian-distributed random integer in <literal>[lb, ub]</>,
<entry>Gaussian-distributed random integer in <literal>[lb, ub]</>,
see below</>
<entry><literal>random_gaussian(1, 10, 2.5)</></>
<entry>an integer between <literal>1</> and <literal>10</></>

View File

@ -4164,7 +4164,7 @@ second | four
<para>
When suitable, query results can be shown in a crosstab representation
with the \crosstabview command:
with the <command>\crosstabview</command> command:
<programlisting>
testdb=&gt; <userinput>SELECT first, second, first &gt; 2 AS gt2 FROM my_table;</userinput>
first | second | ge2
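The rest of the example is cut off by this hunk; as a hedged sketch of how the crosstab step continues, the stored result is then pivoted with
<programlisting>
testdb=&gt; \crosstabview first second
</programlisting>
so that <literal>first</> values become rows and <literal>second</> values become columns.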

View File

@ -460,7 +460,7 @@ TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ]
with <literal>ROWS FROM( ... )</>. The output of such an item is the
concatenation of the first row from each function, then the second
row from each function, etc. If some of the functions produce fewer
rows than others, NULLs are substituted for the missing data, so
rows than others, null values are substituted for the missing data, so
that the total number of rows returned is always the same as for the
function that produced the most rows.
</para>

View File

@ -347,7 +347,7 @@
output list are executed in the order suggested by <literal>ORDER
BY</>, and that they are not evaluated more times than required when
there's a <literal>LIMIT</>. Previously, these properties held if
the ordering was performed by an indexscan or pre-mergejoin sort,
the ordering was performed by an index scan or pre-merge-join sort,
but not if it was performed by a top-level sort step.
</para>
</listitem>
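For illustration, an editorial sketch with a hypothetical function <literal>expensive_fn()</>: in a query such as
<programlisting>
SELECT x, expensive_fn(x) FROM tab ORDER BY x LIMIT 10;
</programlisting>
the guarantee described above means <literal>expensive_fn()</> is evaluated in <literal>ORDER BY</> order and only for the ten rows actually returned, even when the ordering is done by a top-level sort rather than an index scan.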
@ -543,7 +543,7 @@
2016-01-26 [cc988fbb0] Improve ResourceOwners' behavior for large numbers of ow
-->
<para>
Improve performance of ResourceOwners with many tracked objects
Improve performance of resource owners with many tracked objects
(Aleksander Alekseev)
</para>
</listitem>

View File

@ -935,10 +935,10 @@ MemoryContextSwitchTo(MemoryContext context)
</para>
<para>
Barring special arrangements code in signal handlers may only
call async-signal safe functions (as defined in posix) and access
call async-signal safe functions (as defined in POSIX) and access
variables of type <literal>volatile sig_atomic_t</literal>. A few
functions in postgres are also deemed signal safe, importantly
<literal>SetLatch()</literal>.
functions in <command>postgres</command> are also deemed signal safe, importantly
<function>SetLatch()</function>.
</para>
<para>
In most cases signal handlers should do nothing more than note
@ -957,9 +957,9 @@ handle_sighup(SIGNAL_ARGS)
errno = save_errno;
}
</programlisting>
<literal>errno</> is safed and restored because
<literal>SetLatch()</> might change it. If that were not done
interrupted code that's currently inspecting errno might see the wrong
<varname>errno</> is saved and restored because
<function>SetLatch()</> might change it. If that were not done
interrupted code that's currently inspecting <varname>errno</varname> might see the wrong
value.
</para>
</simplesect>

View File

@ -2798,7 +2798,7 @@ SELECT plainto_tsquery('supernova star');
to extract <filename>.aff</> and <filename>.dic</> files, change
extensions to <filename>.affix</> and <filename>.dict</>. For some
dictionary files it is also needed to convert characters to the UTF-8
encoding with commands (for example, for norwegian language dictionary):
encoding with commands (for example, for a Norwegian language dictionary):
<programlisting>
iconv -f ISO_8859-1 -t UTF-8 -o nn_no.affix nn_NO.aff
iconv -f ISO_8859-1 -t UTF-8 -o nn_no.dict nn_NO.dic
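Once converted, the files can be referenced from a dictionary definition. A hedged sketch, assuming the converted <filename>nn_no.dict</> and <filename>nn_no.affix</> files have been installed in the <filename>tsearch_data</> directory and using illustrative names:
<programlisting>
CREATE TEXT SEARCH DICTIONARY norwegian_ispell (
    TEMPLATE = ispell,
    DictFile = nn_no,
    AffFile = nn_no,
    StopWords = norwegian
);
</programlisting>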