doc: Move parallel_leader_participation to its correct category
parallel_leader_participation was introduced in e5253fd, where it was
listed under RESOURCES_ASYNCHRONOUS in guc.c, but the documentation
did not reflect that and listed it with the other planner-related
options. This commit fixes this inconsistency, as the parameter is
intended to be an asynchronous one.

While at it, reorganize the section dedicated to asynchronous
parameters a bit, moving backend_flush_after first to improve the
alphabetical ordering of the options listed.
Reported-by: Yanliang Lei
Author: Bharath Rupireddy
Discussion: https://postgr.es/m/16972-42d4b0c15aa1d5f5@postgresql.org
parent 7c298c6573
commit 1599e7b375
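As a quick cross-check of the inconsistency this commit fixes, the category a
parameter is registered under in guc.c can be read back from a running server.
This is a sketch, assuming an open PostgreSQL session; the column names come
from the pg_settings system view:

```sql
-- Show which category the GUC is registered under in guc.c; with this
-- change, the documentation matches the "Resource Usage / Asynchronous
-- Behavior" grouping reported here.
SELECT name, category, short_desc
FROM pg_settings
WHERE name = 'parallel_leader_participation';
```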
@@ -2383,6 +2383,36 @@ include_dir 'conf.d'
     <title>Asynchronous Behavior</title>

     <variablelist>
+
+     <varlistentry id="guc-backend-flush-after" xreflabel="backend_flush_after">
+      <term><varname>backend_flush_after</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>backend_flush_after</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+       <para>
+        Whenever more than this amount of data has
+        been written by a single backend, attempt to force the OS to issue
+        these writes to the underlying storage.  Doing so will limit the
+        amount of dirty data in the kernel's page cache, reducing the
+        likelihood of stalls when an <function>fsync</function> is issued at the end of a
+        checkpoint, or when the OS writes data back in larger batches in the
+        background.  Often that will result in greatly reduced transaction
+        latency, but there also are some cases, especially with workloads
+        that are bigger than <xref linkend="guc-shared-buffers"/>, but smaller
+        than the OS's page cache, where performance might degrade.  This
+        setting may have no effect on some platforms.
+        If this value is specified without units, it is taken as blocks,
+        that is <symbol>BLCKSZ</symbol> bytes, typically 8kB.
+        The valid range is
+        between <literal>0</literal>, which disables forced writeback,
+        and <literal>2MB</literal>.  The default is <literal>0</literal>, i.e., no
+        forced writeback.  (If <symbol>BLCKSZ</symbol> is not 8kB,
+        the maximum value scales proportionally to it.)
+       </para>
+      </listitem>
+     </varlistentry>

     <varlistentry id="guc-effective-io-concurrency" xreflabel="effective_io_concurrency">
      <term><varname>effective_io_concurrency</varname> (<type>integer</type>)
      <indexterm>
@@ -2579,32 +2609,25 @@ include_dir 'conf.d'
      </listitem>
     </varlistentry>

-     <varlistentry id="guc-backend-flush-after" xreflabel="backend_flush_after">
-      <term><varname>backend_flush_after</varname> (<type>integer</type>)
+     <varlistentry id="guc-parallel-leader-participation" xreflabel="parallel_leader_participation">
+      <term>
+       <varname>parallel_leader_participation</varname> (<type>boolean</type>)
       <indexterm>
-       <primary><varname>backend_flush_after</varname> configuration parameter</primary>
+       <primary><varname>parallel_leader_participation</varname> configuration parameter</primary>
       </indexterm>
      </term>
      <listitem>
       <para>
-        Whenever more than this amount of data has
-        been written by a single backend, attempt to force the OS to issue
-        these writes to the underlying storage.  Doing so will limit the
-        amount of dirty data in the kernel's page cache, reducing the
-        likelihood of stalls when an <function>fsync</function> is issued at the end of a
-        checkpoint, or when the OS writes data back in larger batches in the
-        background.  Often that will result in greatly reduced transaction
-        latency, but there also are some cases, especially with workloads
-        that are bigger than <xref linkend="guc-shared-buffers"/>, but smaller
-        than the OS's page cache, where performance might degrade.  This
-        setting may have no effect on some platforms.
-        If this value is specified without units, it is taken as blocks,
-        that is <symbol>BLCKSZ</symbol> bytes, typically 8kB.
-        The valid range is
-        between <literal>0</literal>, which disables forced writeback,
-        and <literal>2MB</literal>.  The default is <literal>0</literal>, i.e., no
-        forced writeback.  (If <symbol>BLCKSZ</symbol> is not 8kB,
-        the maximum value scales proportionally to it.)
+        Allows the leader process to execute the query plan under
+        <literal>Gather</literal> and <literal>Gather Merge</literal> nodes
+        instead of waiting for worker processes.  The default is
+        <literal>on</literal>.  Setting this value to <literal>off</literal>
+        reduces the likelihood that workers will become blocked because the
+        leader is not reading tuples fast enough, but requires the leader
+        process to wait for worker processes to start up before the first
+        tuples can be produced.  The degree to which the leader can help or
+        hinder performance depends on the plan type, number of workers and
+        query duration.
       </para>
      </listitem>
     </varlistentry>
@@ -5889,29 +5912,6 @@ SELECT * FROM parent WHERE key = 2400;
      </listitem>
     </varlistentry>

-     <varlistentry id="guc-parallel-leader-participation" xreflabel="parallel_leader_participation">
-      <term>
-       <varname>parallel_leader_participation</varname> (<type>boolean</type>)
-      <indexterm>
-       <primary><varname>parallel_leader_participation</varname> configuration parameter</primary>
-      </indexterm>
-      </term>
-      <listitem>
-       <para>
-        Allows the leader process to execute the query plan under
-        <literal>Gather</literal> and <literal>Gather Merge</literal> nodes
-        instead of waiting for worker processes.  The default is
-        <literal>on</literal>.  Setting this value to <literal>off</literal>
-        reduces the likelihood that workers will become blocked because the
-        leader is not reading tuples fast enough, but requires the leader
-        process to wait for worker processes to start up before the first
-        tuples can be produced.  The degree to which the leader can help or
-        hinder performance depends on the plan type, number of workers and
-        query duration.
-       </para>
-      </listitem>
-     </varlistentry>
-
     <varlistentry id="guc-plan-cache_mode" xreflabel="plan_cache_mode">
      <term><varname>plan_cache_mode</varname> (<type>enum</type>)
      <indexterm>
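Both parameters touched by this diff are user-settable, so their documented
behavior can be exercised in a session. A sketch, assuming a session where
parallel query is otherwise enabled; whether a Gather node actually appears
in the plan depends on table size and the other parallel-query settings:

```sql
-- Disable leader participation so Gather / Gather Merge nodes rely
-- solely on worker processes; the default is 'on'.
SET parallel_leader_participation = off;
EXPLAIN SELECT count(*) FROM some_large_table;  -- hypothetical table name
RESET parallel_leader_participation;

-- backend_flush_after takes a size in BLCKSZ units when no unit is
-- given; 0 (the default) disables forced writeback.
SET backend_flush_after = '512kB';
```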