From 1599e7b375127cac81b539d2c69d3faf7598509b Mon Sep 17 00:00:00 2001
From: Michael Paquier
Date: Thu, 22 Apr 2021 09:47:43 +0900
Subject: [PATCH] doc: Move parallel_leader_participation to its correct category

parallel_leader_participation was introduced in e5253fd, where it was
listed under RESOURCES_ASYNCHRONOUS in guc.c, but the documentation
did not reflect that and listed it with the other planner-related
options. This commit fixes the inconsistency, as the parameter is
intended to be an asynchronous one.

While on it, reorder the section dedicated to asynchronous parameters,
moving backend_flush_after first to improve the alphabetical ordering
of the options listed.

Reported-by: Yanliang Lei
Author: Bharath Rupireddy
Discussion: https://postgr.es/m/16972-42d4b0c15aa1d5f5@postgresql.org
---
 doc/src/sgml/config.sgml | 88 ++++++++++++++++++++--------------------
 1 file changed, 44 insertions(+), 44 deletions(-)

diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index dd7ebe7a9d..cf75d913ce 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -2383,6 +2383,36 @@ include_dir 'conf.d'
     Asynchronous Behavior
+
+     backend_flush_after (integer)
+
+      backend_flush_after configuration parameter
+
+
+
+
+        Whenever more than this amount of data has
+        been written by a single backend, attempt to force the OS to issue
+        these writes to the underlying storage. Doing so will limit the
+        amount of dirty data in the kernel's page cache, reducing the
+        likelihood of stalls when an fsync is issued at the end of a
+        checkpoint, or when the OS writes data back in larger batches in the
+        background. Often that will result in greatly reduced transaction
+        latency, but there also are some cases, especially with workloads
+        that are bigger than shared_buffers, but smaller
+        than the OS's page cache, where performance might degrade. This
+        setting may have no effect on some platforms.
+        If this value is specified without units, it is taken as blocks,
+        that is BLCKSZ bytes, typically 8kB.
+        The valid range is
+        between 0, which disables forced writeback,
+        and 2MB. The default is 0, i.e., no
+        forced writeback. (If BLCKSZ is not 8kB,
+        the maximum value scales proportionally to it.)
+
+
+
      effective_io_concurrency (integer)
@@ -2579,32 +2609,25 @@ include_dir 'conf.d'
-
-     backend_flush_after (integer)
+
+
+     parallel_leader_participation (boolean)
-      backend_flush_after configuration parameter
+      parallel_leader_participation configuration parameter
-
-
-        Whenever more than this amount of data has
-        been written by a single backend, attempt to force the OS to issue
-        these writes to the underlying storage. Doing so will limit the
-        amount of dirty data in the kernel's page cache, reducing the
-        likelihood of stalls when an fsync is issued at the end of a
-        checkpoint, or when the OS writes data back in larger batches in the
-        background. Often that will result in greatly reduced transaction
-        latency, but there also are some cases, especially with workloads
-        that are bigger than shared_buffers, but smaller
-        than the OS's page cache, where performance might degrade. This
-        setting may have no effect on some platforms.
-        If this value is specified without units, it is taken as blocks,
-        that is BLCKSZ bytes, typically 8kB.
-        The valid range is
-        between 0, which disables forced writeback,
-        and 2MB. The default is 0, i.e., no
-        forced writeback. (If BLCKSZ is not 8kB,
-        the maximum value scales proportionally to it.)
+        Allows the leader process to execute the query plan under
+        Gather and Gather Merge nodes
+        instead of waiting for worker processes. The default is
+        on. Setting this value to off
+        reduces the likelihood that workers will become blocked because the
+        leader is not reading tuples fast enough, but requires the leader
+        process to wait for worker processes to start up before the first
+        tuples can be produced. The degree to which the leader can help or
+        hinder performance depends on the plan type, number of workers and
+        query duration.
@@ -5889,29 +5912,6 @@ SELECT * FROM parent WHERE key = 2400;
-
-
-     parallel_leader_participation (boolean)
-
-      parallel_leader_participation configuration parameter
-
-
-
-
-        Allows the leader process to execute the query plan under
-        Gather and Gather Merge nodes
-        instead of waiting for worker processes. The default is
-        on. Setting this value to off
-        reduces the likelihood that workers will become blocked because the
-        leader is not reading tuples fast enough, but requires the leader
-        process to wait for worker processes to start up before the first
-        tuples can be produced. The degree to which the leader can help or
-        hinder performance depends on the plan type, number of workers and
-        query duration.
-
-
-
-
      plan_cache_mode (enum)
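
As a quick way to try the backend_flush_after paragraph that the patch
moves to the top of the asynchronous section, here is a minimal SQL
sketch. It is not part of the patch; the 512kB value is only an
illustration, and as the moved text notes the setting may have no
effect on some platforms:

    -- backend_flush_after is an integer GUC measured in blocks (BLCKSZ,
    -- typically 8kB), so a value can also be given with memory units.
    SET backend_flush_after = '512kB';  -- request writeback after ~512kB of writes
    SHOW backend_flush_after;           -- shows the value in effect (default is 0)
    RESET backend_flush_after;

Whether forcing earlier kernel writeback helps or hurts depends on the
workload, exactly as the relocated paragraph explains.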
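
Similarly, the effect of parallel_leader_participation, now documented
next to the other asynchronous parameters, can be seen by comparing
plans with the setting on and off. A minimal sketch, assuming a scratch
database; the table name parallel_demo and the row count are made up,
and whether the planner actually chooses a parallel plan depends on
costing and the other parallel settings:

    -- Build a table large enough that a parallel sequential scan is plausible.
    CREATE TABLE parallel_demo AS
        SELECT g AS id, md5(g::text) AS payload
        FROM generate_series(1, 1000000) AS g;
    ANALYZE parallel_demo;

    SET max_parallel_workers_per_gather = 2;

    SET parallel_leader_participation = on;   -- default: the leader also scans under Gather
    EXPLAIN (COSTS OFF) SELECT count(*) FROM parallel_demo;

    SET parallel_leader_participation = off;  -- the leader only collects tuples from workers
    EXPLAIN (COSTS OFF) SELECT count(*) FROM parallel_demo;

With the setting off, the leader waits for the workers and only gathers
their tuples, which is the trade-off the relocated paragraph describes.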