doc: clarify the effect of concurrent work_mem allocations

Reported-by: Sami Imseih

Discussion: https://postgr.es/m/66590882-F48C-4A25-83E3-73792CF8C51F@amazon.com

Backpatch-through: 11
Bruce Momjian 2023-09-26 19:44:22 -04:00
parent eec2190b8c
commit 5f567b3c35
1 changed file with 5 additions and 4 deletions

@@ -1834,9 +1834,10 @@ include_dir 'conf.d'
 (such as a sort or hash table) before writing to temporary disk files.
 If this value is specified without units, it is taken as kilobytes.
 The default value is four megabytes (<literal>4MB</literal>).
-Note that for a complex query, several sort or hash operations might be
-running in parallel; each operation will generally be allowed
-to use as much memory as this value specifies before it starts
+Note that a complex query might perform several sort and hash
+operations at the same time, with each operation generally being
+allowed to use as much memory as this value specifies before
+it starts
 to write data into temporary files. Also, several running
 sessions could be doing such operations concurrently.
 Therefore, the total memory used could be many times the value
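[Editorial note, not part of the patch: a rough sizing sketch of the behavior the new wording describes. The figures of five sort/hash nodes per plan and 100 concurrent sessions are assumptions chosen purely for illustration.]

    -- Hypothetical back-of-the-envelope estimate using the 4MB default.
    SET work_mem = '4MB';
    -- One plan with 5 concurrent sort/hash nodes:    5 * 4MB   =  20MB
    -- 100 sessions running such plans at once:       100 * 20MB = 2000MB (~2GB)
    SHOW work_mem;  -- a per-operation limit, not a per-query or per-session cap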
@@ -1850,7 +1851,7 @@ include_dir 'conf.d'
 <para>
 Hash-based operations are generally more sensitive to memory
 availability than equivalent sort-based operations. The
-memory available for hash tables is computed by multiplying
+memory limit for a hash table is computed by multiplying
 <varname>work_mem</varname> by
 <varname>hash_mem_multiplier</varname>. This makes it
 possible for hash-based operations to use an amount of memory
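[Editorial note, not part of the patch: a small worked example of the multiplication the second hunk refers to. The value 2.0 for hash_mem_multiplier is assumed here for illustration, not asserted as the server default.]

    -- Hypothetical settings; each hash table may then grow to
    -- work_mem * hash_mem_multiplier = 4MB * 2.0 = 8MB before spilling to disk.
    SET work_mem = '4MB';
    SET hash_mem_multiplier = 2.0;
    SHOW hash_mem_multiplier;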