doc: clarify the effect of concurrent work_mem allocations

Reported-by: Sami Imseih

Discussion: https://postgr.es/m/66590882-F48C-4A25-83E3-73792CF8C51F@amazon.com

Backpatch-through: 11
Author: Bruce Momjian  2023-09-26 19:44:21 -04:00
Parent: cc06762141
Commit: 9a59ff483e

@@ -1697,9 +1697,10 @@ include_dir 'conf.d'
         (such as a sort or hash table) before writing to temporary disk files.
         If this value is specified without units, it is taken as kilobytes.
         The default value is four megabytes (<literal>4MB</literal>).
-        Note that for a complex query, several sort or hash operations might be
-        running in parallel; each operation will be allowed to use as much memory
-        as this value specifies before it starts to write data into temporary
+        Note that a complex query might perform several sort and hash
+        operations at the same time, with each operation generally being
+        allowed to use as much memory as this value specifies before it
+        starts to write data into temporary
         files.  Also, several running sessions could be doing such operations
         concurrently.  Therefore, the total memory used could be many
         times the value of <varname>work_mem</varname>; it is necessary to
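
A rough, hypothetical illustration of the behavior the reworded paragraph
describes (the table names, plan shape, and numbers below are invented for
the example and are not part of the patch): with work_mem = 4MB, a query
whose plan contains a hash join, a hash aggregate, and a sort may use up to
roughly 3 x 4MB = 12MB in one session before spilling, and ten such sessions
running concurrently could approach 120MB in total.

    -- Hypothetical sketch; object names are made up for illustration.
    SET work_mem = '4MB';

    -- A plan for this query can include a hash join, a hash aggregate,
    -- and a sort, each of which may use up to work_mem at the same time
    -- before it starts writing to temporary files.
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT o.customer_id, sum(o.amount) AS total
    FROM orders o
    JOIN customers c ON c.id = o.customer_id
    GROUP BY o.customer_id
    ORDER BY total DESC;

In EXPLAIN ANALYZE output, indications such as "Sort Method: external merge"
or a batch count above one for a hash node show that an operation exceeded
work_mem and spilled to temporary files.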