doc: Move parallel_leader_participation to its correct category

parallel_leader_participation was introduced in e5253fd, where it was
listed under RESOURCES_ASYNCHRONOUS in guc.c, but the documentation
did not reflect that and listed it with the other planner-related
options.  This commit fixes the inconsistency, as the parameter is
intended to be an asynchronous one.

While at it, reorganize the section dedicated to asynchronous
parameters a bit, moving backend_flush_after first so that the options
listed stay in alphabetical order.

Reported-by: Yanliang Lei
Author: Bharath Rupireddy
Discussion: https://postgr.es/m/16972-42d4b0c15aa1d5f5@postgresql.org
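
For illustration only (not part of the commit itself): the category the
message refers to can be checked on a running server with a query like the
one below.  The category string in the comment is what RESOURCES_ASYNCHRONOUS
is expected to map to, not text quoted from this patch.

-- Inspect how the server itself categorizes the parameter.
SELECT name, category, context
FROM pg_settings
WHERE name = 'parallel_leader_participation';
-- category is expected to read "Resource Usage / Asynchronous Behavior",
-- matching the documentation section this commit moves the entry into.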
Michael Paquier 2021-04-22 09:47:43 +09:00
parent 7c298c6573
commit 1599e7b375
1 changed file with 44 additions and 44 deletions

@@ -2383,6 +2383,36 @@ include_dir 'conf.d'
<title>Asynchronous Behavior</title>
<variablelist>
<varlistentry id="guc-backend-flush-after" xreflabel="backend_flush_after">
<term><varname>backend_flush_after</varname> (<type>integer</type>)
<indexterm>
<primary><varname>backend_flush_after</varname> configuration parameter</primary>
</indexterm>
</term>
<listitem>
<para>
Whenever more than this amount of data has
been written by a single backend, attempt to force the OS to issue
these writes to the underlying storage. Doing so will limit the
amount of dirty data in the kernel's page cache, reducing the
likelihood of stalls when an <function>fsync</function> is issued at the end of a
checkpoint, or when the OS writes data back in larger batches in the
background. Often that will result in greatly reduced transaction
latency, but there also are some cases, especially with workloads
that are bigger than <xref linkend="guc-shared-buffers"/>, but smaller
than the OS's page cache, where performance might degrade. This
setting may have no effect on some platforms.
If this value is specified without units, it is taken as blocks,
that is <symbol>BLCKSZ</symbol> bytes, typically 8kB.
The valid range is
between <literal>0</literal>, which disables forced writeback,
and <literal>2MB</literal>. The default is <literal>0</literal>, i.e., no
forced writeback. (If <symbol>BLCKSZ</symbol> is not 8kB,
the maximum value scales proportionally to it.)
</para>
</listitem>
</varlistentry>
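
(Illustrative sketch, not part of the patch: one way to try the setting
described above from SQL.  The 256kB value and the idea of testing it around
a bulk write are arbitrary assumptions, not recommendations from the docs.)

-- backend_flush_after takes a block count (BLCKSZ units) or a size with units;
-- 0, the default, disables forced writeback.
SET backend_flush_after = '256kB';   -- 32 blocks at the typical 8kB BLCKSZ
SHOW backend_flush_after;            -- confirm the per-session value
RESET backend_flush_after;           -- return to the server default
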
<varlistentry id="guc-effective-io-concurrency" xreflabel="effective_io_concurrency">
<term><varname>effective_io_concurrency</varname> (<type>integer</type>)
<indexterm>
@@ -2579,32 +2609,25 @@ include_dir 'conf.d'
</listitem>
</varlistentry>
<varlistentry id="guc-backend-flush-after" xreflabel="backend_flush_after">
<term><varname>backend_flush_after</varname> (<type>integer</type>)
<varlistentry id="guc-parallel-leader-participation" xreflabel="parallel_leader_participation">
<term>
<varname>parallel_leader_participation</varname> (<type>boolean</type>)
<indexterm>
<primary><varname>backend_flush_after</varname> configuration parameter</primary>
<primary><varname>parallel_leader_participation</varname> configuration parameter</primary>
</indexterm>
</term>
<listitem>
<para>
Whenever more than this amount of data has
been written by a single backend, attempt to force the OS to issue
these writes to the underlying storage. Doing so will limit the
amount of dirty data in the kernel's page cache, reducing the
likelihood of stalls when an <function>fsync</function> is issued at the end of a
checkpoint, or when the OS writes data back in larger batches in the
background. Often that will result in greatly reduced transaction
latency, but there also are some cases, especially with workloads
that are bigger than <xref linkend="guc-shared-buffers"/>, but smaller
than the OS's page cache, where performance might degrade. This
setting may have no effect on some platforms.
If this value is specified without units, it is taken as blocks,
that is <symbol>BLCKSZ</symbol> bytes, typically 8kB.
The valid range is
between <literal>0</literal>, which disables forced writeback,
and <literal>2MB</literal>. The default is <literal>0</literal>, i.e., no
forced writeback. (If <symbol>BLCKSZ</symbol> is not 8kB,
the maximum value scales proportionally to it.)
Allows the leader process to execute the query plan under
<literal>Gather</literal> and <literal>Gather Merge</literal> nodes
instead of waiting for worker processes. The default is
<literal>on</literal>. Setting this value to <literal>off</literal>
reduces the likelihood that workers will become blocked because the
leader is not reading tuples fast enough, but requires the leader
process to wait for worker processes to start up before the first
tuples can be produced. The degree to which the leader can help or
hinder performance depends on the plan type, number of workers and
query duration.
</para>
</listitem>
</varlistentry>
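
(Illustrative sketch, not part of the patch: a minimal experiment with the
parameter documented above.  The table name t is a placeholder.)

-- The default is on; with it off, the leader under Gather / Gather Merge only
-- collects tuples from workers instead of also executing the partial plan.
SET parallel_leader_participation = off;
EXPLAIN (ANALYZE, COSTS OFF) SELECT count(*) FROM t;  -- t is a placeholder table
RESET parallel_leader_participation;
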
@@ -5889,29 +5912,6 @@ SELECT * FROM parent WHERE key = 2400;
</listitem>
</varlistentry>
<varlistentry id="guc-parallel-leader-participation" xreflabel="parallel_leader_participation">
<term>
<varname>parallel_leader_participation</varname> (<type>boolean</type>)
<indexterm>
<primary><varname>parallel_leader_participation</varname> configuration parameter</primary>
</indexterm>
</term>
<listitem>
<para>
Allows the leader process to execute the query plan under
<literal>Gather</literal> and <literal>Gather Merge</literal> nodes
instead of waiting for worker processes. The default is
<literal>on</literal>. Setting this value to <literal>off</literal>
reduces the likelihood that workers will become blocked because the
leader is not reading tuples fast enough, but requires the leader
process to wait for worker processes to start up before the first
tuples can be produced. The degree to which the leader can help or
hinder performance depends on the plan type, number of workers and
query duration.
</para>
</listitem>
</varlistentry>
<varlistentry id="guc-plan-cache_mode" xreflabel="plan_cache_mode">
<term><varname>plan_cache_mode</varname> (<type>enum</type>)
<indexterm>