Doc: Review for "Optionally prefetch referenced data in recovery."

Typos, corrections and language improvements in the docs, and a few in
code comments too.

Reported-by: Justin Pryzby <pryzby@telsasoft.com>
Discussion: https://postgr.es/m/20210409033703.GP6592%40telsasoft.com
Thomas Munro 2021-04-10 08:09:30 +12:00
parent 49fb4e6b24
commit dc88460c24
4 changed files with 10 additions and 10 deletions


@@ -3621,7 +3621,7 @@ include_dir 'conf.d'
 pool after that. However, on file systems with a block size larger
 than
 <productname>PostgreSQL</productname>'s, prefetching can avoid a
-costly read-before-write when a blocks are later written.
+costly read-before-write when blocks are later written.
 The default is off.
 </para>
 </listitem>


@@ -816,9 +816,7 @@
 prefetching mechanism is most likely to be effective on systems
 with <varname>full_page_writes</varname> set to
 <varname>off</varname> (where that is safe), and where the working
-set is larger than RAM. By default, prefetching in recovery is enabled
-on operating systems that have <function>posix_fadvise</function>
-support.
+set is larger than RAM. By default, prefetching in recovery is disabled.
 </para>
 </sect1>


@@ -31,12 +31,14 @@
  * stall; this is counted with "skip_fpw".
  *
  * The only way we currently have to know that an I/O initiated with
- * PrefetchSharedBuffer() has that recovery will eventually call ReadBuffer(),
- * and perform a synchronous read. Therefore, we track the number of
+ * PrefetchSharedBuffer() has completed is to wait for the corresponding call
+ * to XLogReadBufferInRedo() to return. Therefore, we track the number of
  * potentially in-flight I/Os by using a circular buffer of LSNs. When it's
- * full, we have to wait for recovery to replay records so that the queue
- * depth can be reduced, before we can do any more prefetching. Ideally, this
- * keeps us the right distance ahead to respect maintenance_io_concurrency.
+ * full, we have to wait for recovery to replay enough records to remove some
+ * LSNs, and only then can we initiate more prefetching. Ideally, this keeps
+ * us just the right distance ahead to respect maintenance_io_concurrency,
+ * though in practice it errs on the side of being too conservative because
+ * many I/Os complete sooner than we know.
  *
  *-------------------------------------------------------------------------
  */


@@ -2774,7 +2774,7 @@ static struct config_int ConfigureNamesInt[] =
 {
 {"wal_decode_buffer_size", PGC_POSTMASTER, WAL_ARCHIVE_RECOVERY,
 gettext_noop("Maximum buffer size for reading ahead in the WAL during recovery."),
-gettext_noop("This controls the maximum distance we can read ahead n the WAL to prefetch referenced blocks."),
+gettext_noop("This controls the maximum distance we can read ahead in the WAL to prefetch referenced blocks."),
 GUC_UNIT_BYTE
 },
 &wal_decode_buffer_size,
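For reference, the GUC whose description is corrected above is byte-valued (GUC_UNIT_BYTE) and, per PGC_POSTMASTER, can only be set at server start. A hedged postgresql.conf sketch; the value shown is purely illustrative, not a recommendation from this commit:

```ini
# postgresql.conf (sketch; illustrative value only)
# Upper bound on how far ahead of replay the server may decode WAL
# during recovery to find blocks worth prefetching.
wal_decode_buffer_size = 512kB
```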