Revert recovery prefetching feature.

This set of commits has some bugs with known fixes, but at this late
stage in the release cycle it seems best to revert and resubmit next
time, along with some new automated test coverage for this whole area.

Commits reverted:

dc88460c: Doc: Review for "Optionally prefetch referenced data in recovery."
1d257577: Optionally prefetch referenced data in recovery.
f003d9f8: Add circular WAL decoding buffer.
323cbe7c: Remove read_page callback from XLogReader.

Remove the new GUC group WAL_RECOVERY recently added by a55a9847, as the
corresponding section of config.sgml is now reverted.

Discussion: https://postgr.es/m/CAOuzzgrn7iKnFRsB4MHp3UisEQAGgZMbk_ViTN4HV4-Ksq8zCg%40mail.gmail.com
Author: Thomas Munro
Date:   2021-05-10 16:00:53 +12:00
Commit: c2dc19342e
Parent: 63db0ac3f9
35 changed files with 859 additions and 3124 deletions

View File

@@ -3588,89 +3588,6 @@ include_dir 'conf.d'
</variablelist>
</sect2>
<sect2 id="runtime-config-wal-recovery">
<title>Recovery</title>
<indexterm>
<primary>configuration</primary>
<secondary>of recovery</secondary>
<tertiary>general settings</tertiary>
</indexterm>
<para>
This section describes the settings that apply to recovery in general,
affecting crash recovery, streaming replication and archive-based
replication.
</para>
<variablelist>
<varlistentry id="guc-recovery-prefetch" xreflabel="recovery_prefetch">
<term><varname>recovery_prefetch</varname> (<type>boolean</type>)
<indexterm>
<primary><varname>recovery_prefetch</varname> configuration parameter</primary>
</indexterm>
</term>
<listitem>
<para>
Whether to try to prefetch blocks that are referenced in the WAL that
are not yet in the buffer pool, during recovery. Prefetching blocks
that will soon be needed can reduce I/O wait times in some workloads.
See also the <xref linkend="guc-wal-decode-buffer-size"/> and
<xref linkend="guc-maintenance-io-concurrency"/> settings, which limit
prefetching activity.
This setting is disabled by default.
</para>
<para>
This feature currently depends on an effective
<function>posix_fadvise</function> function, which some
operating systems lack.
</para>
</listitem>
</varlistentry>
<varlistentry id="guc-recovery-prefetch-fpw" xreflabel="recovery_prefetch_fpw">
<term><varname>recovery_prefetch_fpw</varname> (<type>boolean</type>)
<indexterm>
<primary><varname>recovery_prefetch_fpw</varname> configuration parameter</primary>
</indexterm>
</term>
<listitem>
<para>
Whether to prefetch blocks that were logged with full page images,
during recovery. Often this doesn't help, since such blocks will not
be read the first time they are needed and might remain in the buffer
pool after that. However, on file systems with a block size larger
than
<productname>PostgreSQL</productname>'s, prefetching can avoid a
costly read-before-write when blocks are later written.
The default is off.
</para>
</listitem>
</varlistentry>
<varlistentry id="guc-wal-decode-buffer-size" xreflabel="wal_decode_buffer_size">
<term><varname>wal_decode_buffer_size</varname> (<type>integer</type>)
<indexterm>
<primary><varname>wal_decode_buffer_size</varname> configuration parameter</primary>
</indexterm>
</term>
<listitem>
<para>
A limit on how far ahead the server can look in the WAL, to find
blocks to prefetch. Setting it too high might be counterproductive,
if it means that data falls out of the
kernel cache before it is needed. If this value is specified without
units, it is taken as bytes.
The default is 512kB.
</para>
</listitem>
</varlistentry>
</variablelist>
</sect2>
<sect2 id="runtime-config-wal-archive-recovery">
<title>Archive Recovery</title>

View File

@@ -337,13 +337,6 @@ postgres 27093 0.0 0.0 30096 2752 ? Ss 11:34 0:00 postgres: ser
</entry>
</row>
<row>
<entry><structname>pg_stat_prefetch_recovery</structname><indexterm><primary>pg_stat_prefetch_recovery</primary></indexterm></entry>
<entry>Only one row, showing statistics about blocks prefetched during recovery.
See <xref linkend="pg-stat-prefetch-recovery-view"/> for details.
</entry>
</row>
<row>
<entry><structname>pg_stat_subscription</structname><indexterm><primary>pg_stat_subscription</primary></indexterm></entry>
<entry>At least one row per subscription, showing information about
@@ -2948,78 +2941,6 @@ SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event i
copy of the subscribed tables.
</para>
<table id="pg-stat-prefetch-recovery-view" xreflabel="pg_stat_prefetch_recovery">
<title><structname>pg_stat_prefetch_recovery</structname> View</title>
<tgroup cols="3">
<thead>
<row>
<entry>Column</entry>
<entry>Type</entry>
<entry>Description</entry>
</row>
</thead>
<tbody>
<row>
<entry><structfield>prefetch</structfield></entry>
<entry><type>bigint</type></entry>
<entry>Number of blocks prefetched because they were not in the buffer pool</entry>
</row>
<row>
<entry><structfield>skip_hit</structfield></entry>
<entry><type>bigint</type></entry>
<entry>Number of blocks not prefetched because they were already in the buffer pool</entry>
</row>
<row>
<entry><structfield>skip_new</structfield></entry>
<entry><type>bigint</type></entry>
<entry>Number of blocks not prefetched because they were new (usually relation extension)</entry>
</row>
<row>
<entry><structfield>skip_fpw</structfield></entry>
<entry><type>bigint</type></entry>
<entry>Number of blocks not prefetched because a full page image was included in the WAL and <xref linkend="guc-recovery-prefetch-fpw"/> was set to <literal>off</literal></entry>
</row>
<row>
<entry><structfield>skip_seq</structfield></entry>
<entry><type>bigint</type></entry>
<entry>Number of blocks not prefetched because of repeated access</entry>
</row>
<row>
<entry><structfield>distance</structfield></entry>
<entry><type>integer</type></entry>
<entry>How far ahead of recovery the prefetcher is currently reading, in bytes</entry>
</row>
<row>
<entry><structfield>queue_depth</structfield></entry>
<entry><type>integer</type></entry>
<entry>How many prefetches have been initiated but are not yet known to have completed</entry>
</row>
<row>
<entry><structfield>avg_distance</structfield></entry>
<entry><type>float4</type></entry>
<entry>How far ahead of recovery the prefetcher is on average, while recovery is not idle</entry>
</row>
<row>
<entry><structfield>avg_queue_depth</structfield></entry>
<entry><type>float4</type></entry>
<entry>Average number of prefetches in flight while recovery is not idle</entry>
</row>
</tbody>
</tgroup>
</table>
<para>
The <structname>pg_stat_prefetch_recovery</structname> view will contain only
one row. It is filled with nulls if recovery is not running or WAL
prefetching is not enabled. See <xref linkend="guc-recovery-prefetch"/>
for more information. The counters in this view are reset whenever the
<xref linkend="guc-recovery-prefetch"/>,
<xref linkend="guc-recovery-prefetch-fpw"/> or
<xref linkend="guc-maintenance-io-concurrency"/> setting is changed and
the server configuration is reloaded.
</para>
<table id="pg-stat-subscription" xreflabel="pg_stat_subscription">
<title><structname>pg_stat_subscription</structname> View</title>
<tgroup cols="1">
@@ -5152,11 +5073,8 @@ SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event i
all the counters shown in
the <structname>pg_stat_bgwriter</structname>
view, <literal>archiver</literal> to reset all the counters shown in
the <structname>pg_stat_archiver</structname> view,
<literal>wal</literal> to reset all the counters shown in the
<structname>pg_stat_wal</structname> view or
<literal>prefetch_recovery</literal> to reset all the counters shown
in the <structname>pg_stat_prefetch_recovery</structname> view.
the <structname>pg_stat_archiver</structname> view or <literal>wal</literal>
to reset all the counters shown in the <structname>pg_stat_wal</structname> view.
</para>
<para>
This function is restricted to superusers by default, but other users

View File

@@ -803,21 +803,6 @@
counted as <literal>wal_write</literal> and <literal>wal_sync</literal>
in <structname>pg_stat_wal</structname>, respectively.
</para>
<para>
The <xref linkend="guc-recovery-prefetch"/> parameter can
be used to improve I/O performance during recovery by instructing
<productname>PostgreSQL</productname> to initiate reads
of disk blocks that will soon be needed but are not currently in
<productname>PostgreSQL</productname>'s buffer pool.
The <xref linkend="guc-maintenance-io-concurrency"/> and
<xref linkend="guc-wal-decode-buffer-size"/> settings limit prefetching
concurrency and distance, respectively. The
prefetching mechanism is most likely to be effective on systems
with <varname>full_page_writes</varname> set to
<varname>off</varname> (where that is safe), and where the working
set is larger than RAM. By default, prefetching in recovery is disabled.
</para>
</sect1>
<sect1 id="wal-internals">

View File

@@ -31,7 +31,6 @@ OBJS = \
xlogarchive.o \
xlogfuncs.o \
xloginsert.o \
xlogprefetch.o \
xlogreader.o \
xlogutils.o

View File

@@ -482,10 +482,10 @@ generic_redo(XLogReaderState *record)
uint8 block_id;
/* Protect limited size of buffers[] array */
Assert(XLogRecMaxBlockId(record) < MAX_GENERIC_XLOG_PAGES);
Assert(record->max_block_id < MAX_GENERIC_XLOG_PAGES);
/* Iterate over blocks */
for (block_id = 0; block_id <= XLogRecMaxBlockId(record); block_id++)
for (block_id = 0; block_id <= record->max_block_id; block_id++)
{
XLogRedoAction action;
@@ -525,7 +525,7 @@ generic_redo(XLogReaderState *record)
}
/* Changes are done: unlock and release all buffers */
for (block_id = 0; block_id <= XLogRecMaxBlockId(record); block_id++)
for (block_id = 0; block_id <= record->max_block_id; block_id++)
{
if (BufferIsValid(buffers[block_id]))
UnlockReleaseBuffer(buffers[block_id]);

View File

@@ -1330,8 +1330,11 @@ XlogReadTwoPhaseData(XLogRecPtr lsn, char **buf, int *len)
char *errormsg;
TimeLineID save_currtli = ThisTimeLineID;
xlogreader = XLogReaderAllocate(wal_segment_size, NULL, wal_segment_close);
xlogreader = XLogReaderAllocate(wal_segment_size, NULL,
XL_ROUTINE(.page_read = &read_local_xlog_page,
.segment_open = &wal_segment_open,
.segment_close = &wal_segment_close),
NULL);
if (!xlogreader)
ereport(ERROR,
(errcode(ERRCODE_OUT_OF_MEMORY),
@@ -1339,12 +1342,7 @@ XlogReadTwoPhaseData(XLogRecPtr lsn, char **buf, int *len)
errdetail("Failed while allocating a WAL reading processor.")));
XLogBeginRead(xlogreader, lsn);
while (XLogReadRecord(xlogreader, &record, &errormsg) ==
XLREAD_NEED_DATA)
{
if (!read_local_xlog_page(xlogreader))
break;
}
record = XLogReadRecord(xlogreader, &errormsg);
/*
* Restore immediately the timeline where it was previously, as

View File

@@ -35,7 +35,6 @@
#include "access/xlog_internal.h"
#include "access/xlogarchive.h"
#include "access/xloginsert.h"
#include "access/xlogprefetch.h"
#include "access/xlogreader.h"
#include "access/xlogutils.h"
#include "catalog/catversion.h"
@@ -111,7 +110,6 @@ int CommitDelay = 0; /* precommit delay in microseconds */
int CommitSiblings = 5; /* # concurrent xacts needed to sleep */
int wal_retrieve_retry_interval = 5000;
int max_slot_wal_keep_size_mb = -1;
int wal_decode_buffer_size = 512 * 1024;
bool track_wal_io_timing = false;
#ifdef WAL_DEBUG
@@ -813,13 +811,17 @@ static XLogSegNo openLogSegNo = 0;
* These variables are used similarly to the ones above, but for reading
* the XLOG. Note, however, that readOff generally represents the offset
* of the page just read, not the seek position of the FD itself, which
* will be just past that page. readSource indicates where we got the
* currently open file from.
* will be just past that page. readLen indicates how much of the current
* page has been read into readBuf, and readSource indicates where we got
* the currently open file from.
* Note: we could use Reserve/ReleaseExternalFD to track consumption of
* this FD too; but it doesn't currently seem worthwhile, since the XLOG is
* not read by general-purpose sessions.
*/
static int readFile = -1;
static XLogSegNo readSegNo = 0;
static uint32 readOff = 0;
static uint32 readLen = 0;
static XLogSource readSource = XLOG_FROM_ANY;
/*
@@ -836,6 +838,13 @@ static XLogSource currentSource = XLOG_FROM_ANY;
static bool lastSourceFailed = false;
static bool pendingWalRcvRestart = false;
typedef struct XLogPageReadPrivate
{
int emode;
bool fetching_ckpt; /* are we fetching a checkpoint record? */
bool randAccess;
} XLogPageReadPrivate;
/*
* These variables track when we last obtained some WAL data to process,
* and where we got it from. (XLogReceiptSource is initially the same as
@@ -911,13 +920,10 @@ static bool InstallXLogFileSegment(XLogSegNo *segno, char *tmppath,
static int XLogFileRead(XLogSegNo segno, int emode, TimeLineID tli,
XLogSource source, bool notfoundOk);
static int XLogFileReadAnyTLI(XLogSegNo segno, int emode, XLogSource source);
static bool XLogPageRead(XLogReaderState *state,
bool fetching_ckpt, int emode, bool randAccess,
bool nowait);
static int XLogPageRead(XLogReaderState *xlogreader, XLogRecPtr targetPagePtr,
int reqLen, XLogRecPtr targetRecPtr, char *readBuf);
static bool WaitForWALToBecomeAvailable(XLogRecPtr RecPtr, bool randAccess,
bool fetching_ckpt,
XLogRecPtr tliRecPtr,
XLogSegNo readSegNo);
bool fetching_ckpt, XLogRecPtr tliRecPtr);
static int emode_for_corrupt_record(int emode, XLogRecPtr RecPtr);
static void XLogFileClose(void);
static void PreallocXlogFiles(XLogRecPtr endptr);
@@ -1212,7 +1218,6 @@ XLogInsertRecord(XLogRecData *rdata,
StringInfoData recordBuf;
char *errormsg = NULL;
MemoryContext oldCxt;
DecodedXLogRecord *decoded;
oldCxt = MemoryContextSwitchTo(walDebugCxt);
@@ -1228,19 +1233,15 @@ XLogInsertRecord(XLogRecData *rdata,
for (; rdata != NULL; rdata = rdata->next)
appendBinaryStringInfo(&recordBuf, rdata->data, rdata->len);
/* How much space would it take to decode this record? */
decoded = palloc(DecodeXLogRecordRequiredSpace(recordBuf.len));
if (!debug_reader)
debug_reader = XLogReaderAllocate(wal_segment_size, NULL, NULL);
debug_reader = XLogReaderAllocate(wal_segment_size, NULL,
XL_ROUTINE(), NULL);
if (!debug_reader)
{
appendStringInfoString(&buf, "error decoding record: out of memory");
}
else if (!DecodeXLogRecord(debug_reader, decoded,
(XLogRecord *) recordBuf.data,
EndPos,
else if (!DecodeXLogRecord(debug_reader, (XLogRecord *) recordBuf.data,
&errormsg))
{
appendStringInfo(&buf, "error decoding record: %s",
@@ -1249,17 +1250,10 @@ XLogInsertRecord(XLogRecData *rdata,
else
{
appendStringInfoString(&buf, " - ");
/*
* Temporarily make this decoded record the current record for
* XLogRecGetXXX() macros.
*/
debug_reader->record = decoded;
xlog_outdesc(&buf, debug_reader);
debug_reader->record = NULL;
}
elog(LOG, "%s", buf.data);
pfree(decoded);
pfree(buf.data);
pfree(recordBuf.data);
MemoryContextSwitchTo(oldCxt);
@@ -1433,7 +1427,7 @@ checkXLogConsistency(XLogReaderState *record)
Assert((XLogRecGetInfo(record) & XLR_CHECK_CONSISTENCY) != 0);
for (block_id = 0; block_id <= XLogRecMaxBlockId(record); block_id++)
for (block_id = 0; block_id <= record->max_block_id; block_id++)
{
Buffer buf;
Page page;
@@ -1464,7 +1458,7 @@ checkXLogConsistency(XLogReaderState *record)
* temporary page.
*/
buf = XLogReadBufferExtended(rnode, forknum, blkno,
RBM_NORMAL_NO_LOG, InvalidBuffer);
RBM_NORMAL_NO_LOG);
if (!BufferIsValid(buf))
continue;
@@ -3732,6 +3726,7 @@ XLogFileRead(XLogSegNo segno, int emode, TimeLineID tli,
snprintf(activitymsg, sizeof(activitymsg), "waiting for %s",
xlogfname);
set_ps_display(activitymsg);
restoredFromArchive = RestoreArchivedFile(path, xlogfname,
"RECOVERYXLOG",
wal_segment_size,
@@ -4378,7 +4373,12 @@ ReadRecord(XLogReaderState *xlogreader, int emode,
bool fetching_ckpt)
{
XLogRecord *record;
bool randAccess = (xlogreader->ReadRecPtr == InvalidXLogRecPtr);
XLogPageReadPrivate *private = (XLogPageReadPrivate *) xlogreader->private_data;
/* Pass through parameters to XLogPageRead */
private->fetching_ckpt = fetching_ckpt;
private->emode = emode;
private->randAccess = (xlogreader->ReadRecPtr == InvalidXLogRecPtr);
/* This is the first attempt to read this page. */
lastSourceFailed = false;
@@ -4386,19 +4386,10 @@
for (;;)
{
char *errormsg;
XLogReadRecordResult result;
while ((result = XLogReadRecord(xlogreader, &record, &errormsg))
== XLREAD_NEED_DATA)
{
if (!XLogPageRead(xlogreader, fetching_ckpt, emode, randAccess,
false /* wait for data if streaming */))
break;
}
record = XLogReadRecord(xlogreader, &errormsg);
ReadRecPtr = xlogreader->ReadRecPtr;
EndRecPtr = xlogreader->EndRecPtr;
if (record == NULL)
{
if (readFile >= 0)
@@ -6466,6 +6457,7 @@ StartupXLOG(void)
bool backupFromStandby = false;
DBState dbstate_at_startup;
XLogReaderState *xlogreader;
XLogPageReadPrivate private;
bool promoted = false;
struct stat st;
@@ -6624,9 +6616,13 @@ StartupXLOG(void)
OwnLatch(&XLogCtl->recoveryWakeupLatch);
/* Set up XLOG reader facility */
MemSet(&private, 0, sizeof(XLogPageReadPrivate));
xlogreader =
XLogReaderAllocate(wal_segment_size, NULL, wal_segment_close);
XLogReaderAllocate(wal_segment_size, NULL,
XL_ROUTINE(.page_read = &XLogPageRead,
.segment_open = NULL,
.segment_close = wal_segment_close),
&private);
if (!xlogreader)
ereport(ERROR,
(errcode(ERRCODE_OUT_OF_MEMORY),
@@ -6634,12 +6630,6 @@ StartupXLOG(void)
errdetail("Failed while allocating a WAL reading processor.")));
xlogreader->system_identifier = ControlFile->system_identifier;
/*
* Set the WAL decode buffer size. This limits how far ahead we can read
* in the WAL.
*/
XLogReaderSetDecodeBuffer(xlogreader, NULL, wal_decode_buffer_size);
/*
* Allocate two page buffers dedicated to WAL consistency checks. We do
* it this way, rather than just making static arrays, for two reasons:
@@ -7320,7 +7310,6 @@ StartupXLOG(void)
{
ErrorContextCallback errcallback;
TimestampTz xtime;
XLogPrefetchState prefetch;
PGRUsage ru0;
pg_rusage_init(&ru0);
@@ -7331,9 +7320,6 @@ StartupXLOG(void)
(errmsg("redo starts at %X/%X",
LSN_FORMAT_ARGS(ReadRecPtr))));
/* Prepare to prefetch, if configured. */
XLogPrefetchBegin(&prefetch, xlogreader);
/*
* main redo apply loop
*/
@@ -7363,14 +7349,6 @@ StartupXLOG(void)
/* Handle interrupt signals of startup process */
HandleStartupProcInterrupts();
/* Perform WAL prefetching, if enabled. */
while (XLogPrefetch(&prefetch, xlogreader->ReadRecPtr) == XLREAD_NEED_DATA)
{
if (!XLogPageRead(xlogreader, false, LOG, false,
true /* don't wait for streaming data */))
break;
}
/*
* Pause WAL replay, if requested by a hot-standby session via
* SetRecoveryPause().
@@ -7544,9 +7522,6 @@ StartupXLOG(void)
*/
if (AllowCascadeReplication())
WalSndWakeup();
/* Reset the prefetcher. */
XLogPrefetchReconfigure();
}
/* Exit loop if we reached inclusive recovery target */
@@ -7563,7 +7538,6 @@ StartupXLOG(void)
/*
* end of main redo apply loop
*/
XLogPrefetchEnd(&prefetch);
if (reachedRecoveryTarget)
{
@@ -7845,8 +7819,7 @@ StartupXLOG(void)
XLogRecPtr pageBeginPtr;
pageBeginPtr = EndOfLog - (EndOfLog % XLOG_BLCKSZ);
Assert(XLogSegmentOffset(xlogreader->readPagePtr, wal_segment_size) ==
XLogSegmentOffset(pageBeginPtr, wal_segment_size));
Assert(readOff == XLogSegmentOffset(pageBeginPtr, wal_segment_size));
firstIdx = XLogRecPtrToBufIdx(EndOfLog);
@@ -10338,7 +10311,7 @@ xlog_redo(XLogReaderState *record)
* XLOG_FPI and XLOG_FPI_FOR_HINT records, they use a different info
* code just to distinguish them for statistics purposes.
*/
for (uint8 block_id = 0; block_id <= XLogRecMaxBlockId(record); block_id++)
for (uint8 block_id = 0; block_id <= record->max_block_id; block_id++)
{
Buffer buffer;
@@ -10473,7 +10446,7 @@ xlog_block_info(StringInfo buf, XLogReaderState *record)
int block_id;
/* decode block references */
for (block_id = 0; block_id <= XLogRecMaxBlockId(record); block_id++)
for (block_id = 0; block_id <= record->max_block_id; block_id++)
{
RelFileNode rnode;
ForkNumber forknum;
@@ -12133,19 +12106,14 @@ CancelBackup(void)
* and call XLogPageRead() again with the same arguments. This lets
* XLogPageRead() try fetching the record from another source, or
* sleep and retry.
*
* If nowait is true, then return false immediately if the requested data isn't
* available yet.
*/
static bool
XLogPageRead(XLogReaderState *state,
bool fetching_ckpt, int emode, bool randAccess, bool nowait)
static int
XLogPageRead(XLogReaderState *xlogreader, XLogRecPtr targetPagePtr, int reqLen,
XLogRecPtr targetRecPtr, char *readBuf)
{
char *readBuf = state->readBuf;
XLogRecPtr targetPagePtr = state->readPagePtr;
int reqLen = state->reqLen;
int readLen = 0;
XLogRecPtr targetRecPtr = state->DecodeRecPtr;
XLogPageReadPrivate *private =
(XLogPageReadPrivate *) xlogreader->private_data;
int emode = private->emode;
uint32 targetPageOff;
XLogSegNo targetSegNo PG_USED_FOR_ASSERTS_ONLY;
int r;
@@ -12158,7 +12126,7 @@ XLogPageRead(XLogReaderState *state,
* is not in the currently open one.
*/
if (readFile >= 0 &&
!XLByteInSeg(targetPagePtr, state->seg.ws_segno, wal_segment_size))
!XLByteInSeg(targetPagePtr, readSegNo, wal_segment_size))
{
/*
* Request a restartpoint if we've replayed too much xlog since the
@@ -12166,10 +12134,10 @@ XLogPageRead(XLogReaderState *state,
*/
if (bgwriterLaunched)
{
if (XLogCheckpointNeeded(state->seg.ws_segno))
if (XLogCheckpointNeeded(readSegNo))
{
(void) GetRedoRecPtr();
if (XLogCheckpointNeeded(state->seg.ws_segno))
if (XLogCheckpointNeeded(readSegNo))
RequestCheckpoint(CHECKPOINT_CAUSE_XLOG);
}
}
@@ -12179,7 +12147,7 @@ XLogPageRead(XLogReaderState *state,
readSource = XLOG_FROM_ANY;
}
XLByteToSeg(targetPagePtr, state->seg.ws_segno, wal_segment_size);
XLByteToSeg(targetPagePtr, readSegNo, wal_segment_size);
retry:
/* See if we need to retrieve more data */
@@ -12187,22 +12155,18 @@ retry:
(readSource == XLOG_FROM_STREAM &&
flushedUpto < targetPagePtr + reqLen))
{
if (nowait)
{
XLogReaderSetInputData(state, -1);
return false;
}
if (!WaitForWALToBecomeAvailable(targetPagePtr + reqLen,
randAccess, fetching_ckpt,
targetRecPtr, state->seg.ws_segno))
private->randAccess,
private->fetching_ckpt,
targetRecPtr))
{
if (readFile >= 0)
close(readFile);
readFile = -1;
readLen = 0;
readSource = XLOG_FROM_ANY;
XLogReaderSetInputData(state, -1);
return false;
return -1;
}
}
@@ -12229,36 +12193,40 @@ retry:
else
readLen = XLOG_BLCKSZ;
/* Read the requested page */
readOff = targetPageOff;
pgstat_report_wait_start(WAIT_EVENT_WAL_READ);
r = pg_pread(readFile, readBuf, XLOG_BLCKSZ, (off_t) targetPageOff);
r = pg_pread(readFile, readBuf, XLOG_BLCKSZ, (off_t) readOff);
if (r != XLOG_BLCKSZ)
{
char fname[MAXFNAMELEN];
int save_errno = errno;
pgstat_report_wait_end();
XLogFileName(fname, curFileTLI, state->seg.ws_segno, wal_segment_size);
XLogFileName(fname, curFileTLI, readSegNo, wal_segment_size);
if (r < 0)
{
errno = save_errno;
ereport(emode_for_corrupt_record(emode, targetPagePtr + reqLen),
(errcode_for_file_access(),
errmsg("could not read from log segment %s, offset %u: %m",
fname, targetPageOff)));
fname, readOff)));
}
else
ereport(emode_for_corrupt_record(emode, targetPagePtr + reqLen),
(errcode(ERRCODE_DATA_CORRUPTED),
errmsg("could not read from log segment %s, offset %u: read %d of %zu",
fname, targetPageOff, r, (Size) XLOG_BLCKSZ)));
fname, readOff, r, (Size) XLOG_BLCKSZ)));
goto next_record_is_invalid;
}
pgstat_report_wait_end();
Assert(targetSegNo == state->seg.ws_segno);
Assert(readLen >= reqLen);
Assert(targetSegNo == readSegNo);
Assert(targetPageOff == readOff);
Assert(reqLen <= readLen);
state->seg.ws_tli = curFileTLI;
xlogreader->seg.ws_tli = curFileTLI;
/*
* Check the page header immediately, so that we can retry immediately if
@@ -12286,16 +12254,14 @@ retry:
* Validating the page header is cheap enough that doing it twice
* shouldn't be a big deal from a performance point of view.
*/
if (!XLogReaderValidatePageHeader(state, targetPagePtr, readBuf))
if (!XLogReaderValidatePageHeader(xlogreader, targetPagePtr, readBuf))
{
/* reset any error StateValidatePageHeader() might have set */
state->errormsg_buf[0] = '\0';
/* reset any error XLogReaderValidatePageHeader() might have set */
xlogreader->errormsg_buf[0] = '\0';
goto next_record_is_invalid;
}
Assert(state->readPagePtr == targetPagePtr);
XLogReaderSetInputData(state, readLen);
return true;
return readLen;
next_record_is_invalid:
lastSourceFailed = true;
@@ -12303,14 +12269,14 @@ next_record_is_invalid:
if (readFile >= 0)
close(readFile);
readFile = -1;
readLen = 0;
readSource = XLOG_FROM_ANY;
/* In standby-mode, keep trying */
if (StandbyMode)
goto retry;
XLogReaderSetInputData(state, -1);
return false;
else
return -1;
}
/*
@@ -12341,8 +12307,7 @@ next_record_is_invalid:
*/
static bool
WaitForWALToBecomeAvailable(XLogRecPtr RecPtr, bool randAccess,
bool fetching_ckpt, XLogRecPtr tliRecPtr,
XLogSegNo readSegNo)
bool fetching_ckpt, XLogRecPtr tliRecPtr)
{
static TimestampTz last_fail_time = 0;
TimestampTz now;
@@ -12426,7 +12391,6 @@ WaitForWALToBecomeAvailable(XLogRecPtr RecPtr, bool randAccess,
*/
currentSource = XLOG_FROM_STREAM;
startWalReceiver = true;
XLogPrefetchReconfigure();
break;
case XLOG_FROM_STREAM:
@@ -12661,7 +12625,6 @@ WaitForWALToBecomeAvailable(XLogRecPtr RecPtr, bool randAccess,
* be updated on each cycle. When we are behind,
* XLogReceiptTime will not advance, so the grace time
* allotted to conflicting queries will decrease.
*
*/
if (RecPtr < flushedUpto)
havedata = true;
@@ -12682,7 +12645,6 @@ WaitForWALToBecomeAvailable(XLogRecPtr RecPtr, bool randAccess,
else
havedata = false;
}
if (havedata)
{
/*

View File

@@ -1,923 +0,0 @@
/*-------------------------------------------------------------------------
*
* xlogprefetch.c
* Prefetching support for recovery.
*
* Portions Copyright (c) 2021, PostgreSQL Global Development Group
* Portions Copyright (c) 1994, Regents of the University of California
*
*
* IDENTIFICATION
* src/backend/access/transam/xlogprefetch.c
*
* The goal of this module is to read future WAL records and issue
* PrefetchSharedBuffer() calls for referenced blocks, so that we avoid I/O
* stalls in the main recovery loop.
*
* When examining a WAL record from the future, we need to consider that a
* referenced block or segment file might not exist on disk until this record
* or some earlier record has been replayed. After a crash, a file might also
* be missing because it was dropped by a later WAL record; in that case, it
* will be recreated when this record is replayed. These cases are handled by
* recognizing them and adding a "filter" that prevents all prefetching of a
* certain block range until the present WAL record has been replayed. Blocks
* skipped for these reasons are counted as "skip_new" (that is, cases where we
* didn't try to prefetch "new" blocks).
*
* Blocks found in the buffer pool already are counted as "skip_hit".
* Repeated access to the same buffer is detected and skipped, and this is
* counted with "skip_seq". Blocks that were logged with FPWs are skipped if
* recovery_prefetch_fpw is off, since on most systems there will be no I/O
* stall; this is counted with "skip_fpw".
*
* The only way we currently have to know that an I/O initiated with
* PrefetchSharedBuffer() has completed is to wait for the corresponding call
* to XLogReadBufferInRedo() to return. Therefore, we track the number of
* potentially in-flight I/Os by using a circular buffer of LSNs. When it's
* full, we have to wait for recovery to replay enough records to remove some
* LSNs, and only then can we initiate more prefetching. Ideally, this keeps
* us just the right distance ahead to respect maintenance_io_concurrency,
* though in practice it errs on the side of being too conservative because
* many I/Os complete sooner than we know.
*
*-------------------------------------------------------------------------
*/
#include "postgres.h"
#include "access/xlog.h"
#include "access/xlogprefetch.h"
#include "access/xlogreader.h"
#include "access/xlogutils.h"
#include "catalog/storage_xlog.h"
#include "utils/fmgrprotos.h"
#include "utils/timestamp.h"
#include "funcapi.h"
#include "pgstat.h"
#include "miscadmin.h"
#include "port/atomics.h"
#include "storage/bufmgr.h"
#include "storage/shmem.h"
#include "storage/smgr.h"
#include "utils/guc.h"
#include "utils/hsearch.h"
/*
* Sample the queue depth and distance every time we replay this much WAL.
* This is used to compute avg_queue_depth and avg_distance for the log
* message that appears at the end of crash recovery. It's also used to send
* messages periodically to the stats collector, to save the counters on disk.
*/
#define XLOGPREFETCHER_SAMPLE_DISTANCE 0x40000
/* GUCs */
bool recovery_prefetch = false;
bool recovery_prefetch_fpw = false;
int XLogPrefetchReconfigureCount;
/*
* A prefetcher object. There is at most one of these in existence at a time,
* recreated whenever there is a configuration change.
*/
struct XLogPrefetcher
{
/* Reader and current reading state. */
XLogReaderState *reader;
DecodedXLogRecord *record;
int next_block_id;
bool shutdown;
/* Details of last prefetch to skip repeats and seq scans. */
SMgrRelation last_reln;
RelFileNode last_rnode;
BlockNumber last_blkno;
/* Online averages. */
uint64 samples;
double avg_queue_depth;
double avg_distance;
XLogRecPtr next_sample_lsn;
/* Book-keeping required to avoid accessing non-existing blocks. */
HTAB *filter_table;
dlist_head filter_queue;
/* Book-keeping required to limit concurrent prefetches. */
int prefetch_head;
int prefetch_tail;
int prefetch_queue_size;
XLogRecPtr prefetch_queue[MAX_IO_CONCURRENCY + 1];
};
/*
* A temporary filter used to track block ranges that haven't been created
* yet, whole relations that haven't been created yet, and whole relations
* that we must assume have already been dropped.
*/
typedef struct XLogPrefetcherFilter
{
RelFileNode rnode;
XLogRecPtr filter_until_replayed;
BlockNumber filter_from_block;
dlist_node link;
} XLogPrefetcherFilter;
/*
* Counters exposed in shared memory for pg_stat_prefetch_recovery.
*/
typedef struct XLogPrefetchStats
{
pg_atomic_uint64 reset_time; /* Time of last reset. */
pg_atomic_uint64 prefetch; /* Prefetches initiated. */
pg_atomic_uint64 skip_hit; /* Blocks already buffered. */
pg_atomic_uint64 skip_new; /* New/missing blocks filtered. */
pg_atomic_uint64 skip_fpw; /* FPWs skipped. */
pg_atomic_uint64 skip_seq; /* Repeat blocks skipped. */
float avg_distance;
float avg_queue_depth;
/* Reset counters */
pg_atomic_uint32 reset_request;
uint32 reset_handled;
/* Dynamic values */
int distance; /* Number of bytes ahead in the WAL. */
int queue_depth; /* Number of I/Os possibly in progress. */
} XLogPrefetchStats;
static inline void XLogPrefetcherAddFilter(XLogPrefetcher *prefetcher,
RelFileNode rnode,
BlockNumber blockno,
XLogRecPtr lsn);
static inline bool XLogPrefetcherIsFiltered(XLogPrefetcher *prefetcher,
RelFileNode rnode,
BlockNumber blockno);
static inline void XLogPrefetcherCompleteFilters(XLogPrefetcher *prefetcher,
XLogRecPtr replaying_lsn);
static inline void XLogPrefetcherInitiatedIO(XLogPrefetcher *prefetcher,
XLogRecPtr prefetching_lsn);
static inline void XLogPrefetcherCompletedIO(XLogPrefetcher *prefetcher,
XLogRecPtr replaying_lsn);
static inline bool XLogPrefetcherSaturated(XLogPrefetcher *prefetcher);
static bool XLogPrefetcherScanRecords(XLogPrefetcher *prefetcher,
XLogRecPtr replaying_lsn);
static bool XLogPrefetcherScanBlocks(XLogPrefetcher *prefetcher);
static void XLogPrefetchSaveStats(void);
static void XLogPrefetchRestoreStats(void);
static XLogPrefetchStats *SharedStats;
size_t
XLogPrefetchShmemSize(void)
{
return sizeof(XLogPrefetchStats);
}
static void
XLogPrefetchResetStats(void)
{
pg_atomic_write_u64(&SharedStats->reset_time, GetCurrentTimestamp());
pg_atomic_write_u64(&SharedStats->prefetch, 0);
pg_atomic_write_u64(&SharedStats->skip_hit, 0);
pg_atomic_write_u64(&SharedStats->skip_new, 0);
pg_atomic_write_u64(&SharedStats->skip_fpw, 0);
pg_atomic_write_u64(&SharedStats->skip_seq, 0);
SharedStats->avg_distance = 0;
SharedStats->avg_queue_depth = 0;
}
void
XLogPrefetchShmemInit(void)
{
bool found;
SharedStats = (XLogPrefetchStats *)
ShmemInitStruct("XLogPrefetchStats",
sizeof(XLogPrefetchStats),
&found);
if (!found)
{
pg_atomic_init_u32(&SharedStats->reset_request, 0);
SharedStats->reset_handled = 0;
pg_atomic_init_u64(&SharedStats->reset_time, GetCurrentTimestamp());
pg_atomic_init_u64(&SharedStats->prefetch, 0);
pg_atomic_init_u64(&SharedStats->skip_hit, 0);
pg_atomic_init_u64(&SharedStats->skip_new, 0);
pg_atomic_init_u64(&SharedStats->skip_fpw, 0);
pg_atomic_init_u64(&SharedStats->skip_seq, 0);
SharedStats->avg_distance = 0;
SharedStats->avg_queue_depth = 0;
SharedStats->distance = 0;
SharedStats->queue_depth = 0;
}
}
/*
* Called when any GUC is changed that affects prefetching.
*/
void
XLogPrefetchReconfigure(void)
{
XLogPrefetchReconfigureCount++;
}
/*
* Called by any backend to request that the stats be reset.
*/
void
XLogPrefetchRequestResetStats(void)
{
pg_atomic_fetch_add_u32(&SharedStats->reset_request, 1);
}
/*
* Tell the stats collector to serialize the shared memory counters into the
* stats file.
*/
static void
XLogPrefetchSaveStats(void)
{
PgStat_RecoveryPrefetchStats serialized = {
.prefetch = pg_atomic_read_u64(&SharedStats->prefetch),
.skip_hit = pg_atomic_read_u64(&SharedStats->skip_hit),
.skip_new = pg_atomic_read_u64(&SharedStats->skip_new),
.skip_fpw = pg_atomic_read_u64(&SharedStats->skip_fpw),
.skip_seq = pg_atomic_read_u64(&SharedStats->skip_seq),
.stat_reset_timestamp = pg_atomic_read_u64(&SharedStats->reset_time)
};
pgstat_send_recoveryprefetch(&serialized);
}
/*
* Try to restore the shared memory counters from the stats file.
*/
static void
XLogPrefetchRestoreStats(void)
{
PgStat_RecoveryPrefetchStats *serialized = pgstat_fetch_recoveryprefetch();
if (serialized->stat_reset_timestamp != 0)
{
pg_atomic_write_u64(&SharedStats->prefetch, serialized->prefetch);
pg_atomic_write_u64(&SharedStats->skip_hit, serialized->skip_hit);
pg_atomic_write_u64(&SharedStats->skip_new, serialized->skip_new);
pg_atomic_write_u64(&SharedStats->skip_fpw, serialized->skip_fpw);
pg_atomic_write_u64(&SharedStats->skip_seq, serialized->skip_seq);
pg_atomic_write_u64(&SharedStats->reset_time, serialized->stat_reset_timestamp);
}
}
/*
* Increment a counter in shared memory. This is equivalent to *counter++ on a
* plain uint64 without any memory barrier or locking, except on platforms
* where readers can't read uint64 without possibly observing a torn value.
*/
static inline void
XLogPrefetchIncrement(pg_atomic_uint64 *counter)
{
Assert(AmStartupProcess() || !IsUnderPostmaster);
pg_atomic_write_u64(counter, pg_atomic_read_u64(counter) + 1);
}
/*
* Initialize an XLogPrefetchState object and restore the last saved
* statistics from disk.
*/
void
XLogPrefetchBegin(XLogPrefetchState *state, XLogReaderState *reader)
{
XLogPrefetchRestoreStats();
/* We'll reconfigure on the first call to XLogPrefetch(). */
state->reader = reader;
state->prefetcher = NULL;
state->reconfigure_count = XLogPrefetchReconfigureCount - 1;
}
/*
* Shut down the prefetching infrastructure, if configured.
*/
void
XLogPrefetchEnd(XLogPrefetchState *state)
{
XLogPrefetchSaveStats();
if (state->prefetcher)
XLogPrefetcherFree(state->prefetcher);
state->prefetcher = NULL;
SharedStats->queue_depth = 0;
SharedStats->distance = 0;
}
/*
* Create a prefetcher that is ready to begin prefetching blocks referenced by
* WAL records.
*/
XLogPrefetcher *
XLogPrefetcherAllocate(XLogReaderState *reader)
{
XLogPrefetcher *prefetcher;
static HASHCTL hash_table_ctl = {
.keysize = sizeof(RelFileNode),
.entrysize = sizeof(XLogPrefetcherFilter)
};
/*
* The size of the queue is based on the maintenance_io_concurrency
* setting. In theory we might have a separate queue for each tablespace,
* but it's not clear how that should work, so for now we'll just use the
	 * general GUC to rate-limit all prefetching.  The queue has space for up
	 * to the highest possible value of the GUC + 1, because our circular
	 * buffer has a gap between head and tail when full.
*/
prefetcher = palloc0(sizeof(XLogPrefetcher));
prefetcher->prefetch_queue_size = maintenance_io_concurrency + 1;
prefetcher->reader = reader;
prefetcher->filter_table = hash_create("XLogPrefetcherFilterTable", 1024,
&hash_table_ctl,
HASH_ELEM | HASH_BLOBS);
dlist_init(&prefetcher->filter_queue);
SharedStats->queue_depth = 0;
SharedStats->distance = 0;
return prefetcher;
}
/*
* Destroy a prefetcher and release all resources.
*/
void
XLogPrefetcherFree(XLogPrefetcher *prefetcher)
{
/* Log final statistics. */
ereport(LOG,
(errmsg("recovery finished prefetching at %X/%X; "
"prefetch = %llu, "
"skip_hit = %llu, "
"skip_new = %llu, "
"skip_fpw = %llu, "
"skip_seq = %llu, "
"avg_distance = %f, "
"avg_queue_depth = %f",
LSN_FORMAT_ARGS(prefetcher->reader->EndRecPtr),
(unsigned long long) pg_atomic_read_u64(&SharedStats->prefetch),
(unsigned long long) pg_atomic_read_u64(&SharedStats->skip_hit),
(unsigned long long) pg_atomic_read_u64(&SharedStats->skip_new),
(unsigned long long) pg_atomic_read_u64(&SharedStats->skip_fpw),
(unsigned long long) pg_atomic_read_u64(&SharedStats->skip_seq),
SharedStats->avg_distance,
SharedStats->avg_queue_depth)));
hash_destroy(prefetcher->filter_table);
pfree(prefetcher);
}
/*
* Called when recovery is replaying a new LSN, to check if we can read ahead.
*/
bool
XLogPrefetcherReadAhead(XLogPrefetcher *prefetcher, XLogRecPtr replaying_lsn)
{
uint32 reset_request;
/* If an error has occurred or we've hit the end of the WAL, do nothing. */
if (prefetcher->shutdown)
return false;
/*
* Have any in-flight prefetches definitely completed, judging by the LSN
* that is currently being replayed?
*/
XLogPrefetcherCompletedIO(prefetcher, replaying_lsn);
/*
* Do we already have the maximum permitted number of I/Os running
* (according to the information we have)? If so, we have to wait for at
* least one to complete, so give up early and let recovery catch up.
*/
if (XLogPrefetcherSaturated(prefetcher))
return false;
/*
* Can we drop any filters yet? This happens when the LSN that is
* currently being replayed has moved past a record that prevents
* prefetching of a block range, such as relation extension.
*/
XLogPrefetcherCompleteFilters(prefetcher, replaying_lsn);
/*
* Have we been asked to reset our stats counters? This is checked with
* an unsynchronized memory read, but we'll see it eventually and we'll be
* accessing that cache line anyway.
*/
reset_request = pg_atomic_read_u32(&SharedStats->reset_request);
if (reset_request != SharedStats->reset_handled)
{
XLogPrefetchResetStats();
SharedStats->reset_handled = reset_request;
prefetcher->avg_distance = 0;
prefetcher->avg_queue_depth = 0;
prefetcher->samples = 0;
}
/* OK, we can now try reading ahead. */
return XLogPrefetcherScanRecords(prefetcher, replaying_lsn);
}
/*
* Read ahead as far as we are allowed to, considering the LSN that recovery
* is currently replaying.
*
* Return true if the xlogreader would like more data.
*/
static bool
XLogPrefetcherScanRecords(XLogPrefetcher *prefetcher, XLogRecPtr replaying_lsn)
{
XLogReaderState *reader = prefetcher->reader;
DecodedXLogRecord *record;
Assert(!XLogPrefetcherSaturated(prefetcher));
for (;;)
{
char *error;
int64 distance;
/* If we don't already have a record, then try to read one. */
if (prefetcher->record == NULL)
{
switch (XLogReadAhead(reader, &record, &error))
{
case XLREAD_NEED_DATA:
return true;
case XLREAD_FAIL:
if (error)
ereport(LOG,
(errmsg("recovery no longer prefetching: %s",
error)));
else
ereport(LOG,
(errmsg("recovery no longer prefetching")));
prefetcher->shutdown = true;
SharedStats->queue_depth = 0;
SharedStats->distance = 0;
return false;
case XLREAD_FULL:
return false;
case XLREAD_SUCCESS:
prefetcher->record = record;
prefetcher->next_block_id = 0;
break;
}
}
else
{
/*
			 * We ran out of I/O queue while partway through a record.  We'll
* carry on where we left off, according to next_block_id.
*/
record = prefetcher->record;
}
/* How far ahead of replay are we now? */
distance = record->lsn - replaying_lsn;
/* Update distance shown in shm. */
SharedStats->distance = distance;
/* Periodically recompute some statistics. */
if (unlikely(replaying_lsn >= prefetcher->next_sample_lsn))
{
/* Compute online averages. */
prefetcher->samples++;
if (prefetcher->samples == 1)
{
prefetcher->avg_distance = SharedStats->distance;
prefetcher->avg_queue_depth = SharedStats->queue_depth;
}
else
{
prefetcher->avg_distance +=
(SharedStats->distance - prefetcher->avg_distance) /
prefetcher->samples;
prefetcher->avg_queue_depth +=
(SharedStats->queue_depth - prefetcher->avg_queue_depth) /
prefetcher->samples;
}
/* Expose it in shared memory. */
SharedStats->avg_distance = prefetcher->avg_distance;
SharedStats->avg_queue_depth = prefetcher->avg_queue_depth;
/* Also periodically save the simple counters. */
XLogPrefetchSaveStats();
prefetcher->next_sample_lsn =
replaying_lsn + XLOGPREFETCHER_SAMPLE_DISTANCE;
}
/* Are we not far enough ahead? */
if (distance <= 0)
{
/* XXX Is this still possible? */
prefetcher->record = NULL; /* skip this record */
continue;
}
/*
* If this is a record that creates a new SMGR relation, we'll avoid
* prefetching anything from that rnode until it has been replayed.
*/
if (replaying_lsn < record->lsn &&
record->header.xl_rmid == RM_SMGR_ID &&
(record->header.xl_info & ~XLR_INFO_MASK) == XLOG_SMGR_CREATE)
{
xl_smgr_create *xlrec = (xl_smgr_create *) record->main_data;
XLogPrefetcherAddFilter(prefetcher, xlrec->rnode, 0, record->lsn);
}
/* Scan the record's block references. */
if (!XLogPrefetcherScanBlocks(prefetcher))
return false;
/* Advance to the next record. */
prefetcher->record = NULL;
}
}
/*
* Scan the current record for block references, and consider prefetching.
*
* Return true if we processed the current record to completion and still have
* queue space to process a new record, and false if we saturated the I/O
* queue and need to wait for recovery to advance before we continue.
*/
static bool
XLogPrefetcherScanBlocks(XLogPrefetcher *prefetcher)
{
DecodedXLogRecord *record = prefetcher->record;
Assert(!XLogPrefetcherSaturated(prefetcher));
/*
* We might already have been partway through processing this record when
* our queue became saturated, so we need to start where we left off.
*/
for (int block_id = prefetcher->next_block_id;
block_id <= record->max_block_id;
++block_id)
{
DecodedBkpBlock *block = &record->blocks[block_id];
PrefetchBufferResult prefetch;
SMgrRelation reln;
/* Ignore everything but the main fork for now. */
if (block->forknum != MAIN_FORKNUM)
continue;
/*
* If there is a full page image attached, we won't be reading the
* page, so you might think we should skip it. However, if the
* underlying filesystem uses larger logical blocks than us, it might
* still need to perform a read-before-write some time later.
* Therefore, only prefetch if configured to do so.
*/
if (block->has_image && !recovery_prefetch_fpw)
{
XLogPrefetchIncrement(&SharedStats->skip_fpw);
continue;
}
/*
* If this block will initialize a new page then it's probably a
* relation extension. Since that might create a new segment, we
* can't try to prefetch this block until the record has been
* replayed, or we might try to open a file that doesn't exist yet.
*/
if (block->flags & BKPBLOCK_WILL_INIT)
{
XLogPrefetcherAddFilter(prefetcher, block->rnode, block->blkno,
record->lsn);
XLogPrefetchIncrement(&SharedStats->skip_new);
continue;
}
/* Should we skip this block due to a filter? */
if (XLogPrefetcherIsFiltered(prefetcher, block->rnode, block->blkno))
{
XLogPrefetchIncrement(&SharedStats->skip_new);
continue;
}
/* Fast path for repeated references to the same relation. */
if (RelFileNodeEquals(block->rnode, prefetcher->last_rnode))
{
/*
* If this is a repeat access to the same block, then skip it.
*
* XXX We could also check for last_blkno + 1 too, and also update
* last_blkno; it's not clear if the kernel would do a better job
* of sequential prefetching.
*/
if (block->blkno == prefetcher->last_blkno)
{
XLogPrefetchIncrement(&SharedStats->skip_seq);
continue;
}
/* We can avoid calling smgropen(). */
reln = prefetcher->last_reln;
}
else
{
/* Otherwise we have to open it. */
reln = smgropen(block->rnode, InvalidBackendId);
prefetcher->last_rnode = block->rnode;
prefetcher->last_reln = reln;
}
prefetcher->last_blkno = block->blkno;
/* Try to prefetch this block! */
prefetch = PrefetchSharedBuffer(reln, block->forknum, block->blkno);
if (BufferIsValid(prefetch.recent_buffer))
{
/*
* It was already cached, so do nothing. We'll remember the
* buffer, so that recovery can try to avoid looking it up again.
*/
block->recent_buffer = prefetch.recent_buffer;
XLogPrefetchIncrement(&SharedStats->skip_hit);
}
else if (prefetch.initiated_io)
{
/*
			 * I/O has possibly been initiated (we can't tell whether the
			 * kernel already had the page cached, so for lack of better
			 * information we assume it has).  Record this as an I/O in
			 * progress until we eventually replay this LSN.
*/
XLogPrefetchIncrement(&SharedStats->prefetch);
XLogPrefetcherInitiatedIO(prefetcher, record->lsn);
/*
* If the queue is now full, we'll have to wait before processing
* any more blocks from this record, or move to a new record if
* that was the last block.
*/
if (XLogPrefetcherSaturated(prefetcher))
{
prefetcher->next_block_id = block_id + 1;
return false;
}
}
else
{
/*
* Neither cached nor initiated. The underlying segment file
* doesn't exist. Presumably it will be unlinked by a later WAL
* record. When recovery reads this block, it will use the
* EXTENSION_CREATE_RECOVERY flag. We certainly don't want to do
* that sort of thing while merely prefetching, so let's just
* ignore references to this relation until this record is
* replayed, and let recovery create the dummy file or complain if
* something is wrong.
*/
XLogPrefetcherAddFilter(prefetcher, block->rnode, 0,
record->lsn);
XLogPrefetchIncrement(&SharedStats->skip_new);
}
}
return true;
}
/*
* Expose statistics about recovery prefetching.
*/
Datum
pg_stat_get_prefetch_recovery(PG_FUNCTION_ARGS)
{
#define PG_STAT_GET_PREFETCH_RECOVERY_COLS 10
ReturnSetInfo *rsinfo = (ReturnSetInfo *) fcinfo->resultinfo;
TupleDesc tupdesc;
Tuplestorestate *tupstore;
MemoryContext per_query_ctx;
MemoryContext oldcontext;
Datum values[PG_STAT_GET_PREFETCH_RECOVERY_COLS];
bool nulls[PG_STAT_GET_PREFETCH_RECOVERY_COLS];
if (rsinfo == NULL || !IsA(rsinfo, ReturnSetInfo))
ereport(ERROR,
(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
errmsg("set-valued function called in context that cannot accept a set")));
if (!(rsinfo->allowedModes & SFRM_Materialize))
ereport(ERROR,
(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
errmsg("materialize mode required, but it is not allowed in this context")));
if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
elog(ERROR, "return type must be a row type");
per_query_ctx = rsinfo->econtext->ecxt_per_query_memory;
oldcontext = MemoryContextSwitchTo(per_query_ctx);
tupstore = tuplestore_begin_heap(true, false, work_mem);
rsinfo->returnMode = SFRM_Materialize;
rsinfo->setResult = tupstore;
rsinfo->setDesc = tupdesc;
MemoryContextSwitchTo(oldcontext);
if (pg_atomic_read_u32(&SharedStats->reset_request) != SharedStats->reset_handled)
{
/* There's an unhandled reset request, so just show NULLs */
for (int i = 0; i < PG_STAT_GET_PREFETCH_RECOVERY_COLS; ++i)
nulls[i] = true;
}
else
{
for (int i = 0; i < PG_STAT_GET_PREFETCH_RECOVERY_COLS; ++i)
nulls[i] = false;
}
values[0] = TimestampTzGetDatum(pg_atomic_read_u64(&SharedStats->reset_time));
values[1] = Int64GetDatum(pg_atomic_read_u64(&SharedStats->prefetch));
values[2] = Int64GetDatum(pg_atomic_read_u64(&SharedStats->skip_hit));
values[3] = Int64GetDatum(pg_atomic_read_u64(&SharedStats->skip_new));
values[4] = Int64GetDatum(pg_atomic_read_u64(&SharedStats->skip_fpw));
values[5] = Int64GetDatum(pg_atomic_read_u64(&SharedStats->skip_seq));
values[6] = Int32GetDatum(SharedStats->distance);
values[7] = Int32GetDatum(SharedStats->queue_depth);
values[8] = Float4GetDatum(SharedStats->avg_distance);
values[9] = Float4GetDatum(SharedStats->avg_queue_depth);
tuplestore_putvalues(tupstore, tupdesc, values, nulls);
tuplestore_donestoring(tupstore);
return (Datum) 0;
}
/*
* Compute (n + 1) % prefetch_queue_size, assuming n < prefetch_queue_size,
* without using division.
*/
static inline int
XLogPrefetcherNext(XLogPrefetcher *prefetcher, int n)
{
int next = n + 1;
return next == prefetcher->prefetch_queue_size ? 0 : next;
}
/*
* Don't prefetch any blocks >= 'blockno' from a given 'rnode', until 'lsn'
* has been replayed.
*/
static inline void
XLogPrefetcherAddFilter(XLogPrefetcher *prefetcher, RelFileNode rnode,
BlockNumber blockno, XLogRecPtr lsn)
{
XLogPrefetcherFilter *filter;
bool found;
filter = hash_search(prefetcher->filter_table, &rnode, HASH_ENTER, &found);
if (!found)
{
/*
* Don't allow any prefetching of this block or higher until replayed.
*/
filter->filter_until_replayed = lsn;
filter->filter_from_block = blockno;
dlist_push_head(&prefetcher->filter_queue, &filter->link);
}
else
{
/*
* We were already filtering this rnode. Extend the filter's lifetime
* to cover this WAL record, but leave the (presumably lower) block
* number there because we don't want to have to track individual
* blocks.
*/
filter->filter_until_replayed = lsn;
dlist_delete(&filter->link);
dlist_push_head(&prefetcher->filter_queue, &filter->link);
}
}
/*
* Have we replayed the records that caused us to begin filtering a block
* range? That means that relations should have been created, extended or
* dropped as required, so we can drop relevant filters.
*/
static inline void
XLogPrefetcherCompleteFilters(XLogPrefetcher *prefetcher, XLogRecPtr replaying_lsn)
{
while (unlikely(!dlist_is_empty(&prefetcher->filter_queue)))
{
XLogPrefetcherFilter *filter = dlist_tail_element(XLogPrefetcherFilter,
link,
&prefetcher->filter_queue);
if (filter->filter_until_replayed >= replaying_lsn)
break;
dlist_delete(&filter->link);
hash_search(prefetcher->filter_table, filter, HASH_REMOVE, NULL);
}
}
/*
* Check if a given block should be skipped due to a filter.
*/
static inline bool
XLogPrefetcherIsFiltered(XLogPrefetcher *prefetcher, RelFileNode rnode,
BlockNumber blockno)
{
/*
* Test for empty queue first, because we expect it to be empty most of
* the time and we can avoid the hash table lookup in that case.
*/
if (unlikely(!dlist_is_empty(&prefetcher->filter_queue)))
{
XLogPrefetcherFilter *filter = hash_search(prefetcher->filter_table, &rnode,
HASH_FIND, NULL);
if (filter && filter->filter_from_block <= blockno)
return true;
}
return false;
}
/*
* Insert an LSN into the queue. The queue must not be full already. This
* tracks the fact that we have (to the best of our knowledge) initiated an
* I/O, so that we can impose a cap on concurrent prefetching.
*/
static inline void
XLogPrefetcherInitiatedIO(XLogPrefetcher *prefetcher,
XLogRecPtr prefetching_lsn)
{
Assert(!XLogPrefetcherSaturated(prefetcher));
prefetcher->prefetch_queue[prefetcher->prefetch_head] = prefetching_lsn;
prefetcher->prefetch_head =
XLogPrefetcherNext(prefetcher, prefetcher->prefetch_head);
SharedStats->queue_depth++;
Assert(SharedStats->queue_depth <= prefetcher->prefetch_queue_size);
}
/*
* Have we replayed the records that caused us to initiate the oldest
 * prefetches yet?  That means that they're definitely finished, so we can
* forget about them and allow ourselves to initiate more prefetches. For now
* we don't have any awareness of when I/O really completes.
*/
static inline void
XLogPrefetcherCompletedIO(XLogPrefetcher *prefetcher, XLogRecPtr replaying_lsn)
{
while (prefetcher->prefetch_head != prefetcher->prefetch_tail &&
prefetcher->prefetch_queue[prefetcher->prefetch_tail] < replaying_lsn)
{
prefetcher->prefetch_tail =
XLogPrefetcherNext(prefetcher, prefetcher->prefetch_tail);
SharedStats->queue_depth--;
Assert(SharedStats->queue_depth >= 0);
}
}
/*
* Check if the maximum allowed number of I/Os is already in flight.
*/
static inline bool
XLogPrefetcherSaturated(XLogPrefetcher *prefetcher)
{
int next = XLogPrefetcherNext(prefetcher, prefetcher->prefetch_head);
return next == prefetcher->prefetch_tail;
}
void
assign_recovery_prefetch(bool new_value, void *extra)
{
/* Reconfigure prefetching, because a setting it depends on changed. */
recovery_prefetch = new_value;
if (AmStartupProcess())
XLogPrefetchReconfigure();
}
void
assign_recovery_prefetch_fpw(bool new_value, void *extra)
{
/* Reconfigure prefetching, because a setting it depends on changed. */
recovery_prefetch_fpw = new_value;
if (AmStartupProcess())
XLogPrefetchReconfigure();
}

File diff suppressed because it is too large
@ -335,13 +335,11 @@ XLogReadBufferForRedoExtended(XLogReaderState *record,
RelFileNode rnode;
ForkNumber forknum;
BlockNumber blkno;
Buffer recent_buffer;
Page page;
bool zeromode;
bool willinit;
if (!XLogRecGetRecentBuffer(record, block_id, &rnode, &forknum, &blkno,
&recent_buffer))
if (!XLogRecGetBlockTag(record, block_id, &rnode, &forknum, &blkno))
{
/* Caller specified a bogus block_id */
elog(PANIC, "failed to locate backup block with ID %d", block_id);
@ -352,7 +350,7 @@ XLogReadBufferForRedoExtended(XLogReaderState *record,
* going to initialize it. And vice versa.
*/
zeromode = (mode == RBM_ZERO_AND_LOCK || mode == RBM_ZERO_AND_CLEANUP_LOCK);
willinit = (record->record->blocks[block_id].flags & BKPBLOCK_WILL_INIT) != 0;
willinit = (record->blocks[block_id].flags & BKPBLOCK_WILL_INIT) != 0;
if (willinit && !zeromode)
elog(PANIC, "block with WILL_INIT flag in WAL record must be zeroed by redo routine");
if (!willinit && zeromode)
@ -363,8 +361,7 @@ XLogReadBufferForRedoExtended(XLogReaderState *record,
{
Assert(XLogRecHasBlockImage(record, block_id));
*buf = XLogReadBufferExtended(rnode, forknum, blkno,
get_cleanup_lock ? RBM_ZERO_AND_CLEANUP_LOCK : RBM_ZERO_AND_LOCK,
recent_buffer);
get_cleanup_lock ? RBM_ZERO_AND_CLEANUP_LOCK : RBM_ZERO_AND_LOCK);
page = BufferGetPage(*buf);
if (!RestoreBlockImage(record, block_id, page))
elog(ERROR, "failed to restore block image");
@ -393,8 +390,7 @@ XLogReadBufferForRedoExtended(XLogReaderState *record,
}
else
{
*buf = XLogReadBufferExtended(rnode, forknum, blkno, mode,
recent_buffer);
*buf = XLogReadBufferExtended(rnode, forknum, blkno, mode);
if (BufferIsValid(*buf))
{
if (mode != RBM_ZERO_AND_LOCK && mode != RBM_ZERO_AND_CLEANUP_LOCK)
@ -441,8 +437,7 @@ XLogReadBufferForRedoExtended(XLogReaderState *record,
*/
Buffer
XLogReadBufferExtended(RelFileNode rnode, ForkNumber forknum,
BlockNumber blkno, ReadBufferMode mode,
Buffer recent_buffer)
BlockNumber blkno, ReadBufferMode mode)
{
BlockNumber lastblock;
Buffer buffer;
@ -450,15 +445,6 @@ XLogReadBufferExtended(RelFileNode rnode, ForkNumber forknum,
Assert(blkno != P_NEW);
/* Do we have a clue where the buffer might be already? */
if (BufferIsValid(recent_buffer) &&
mode == RBM_NORMAL &&
ReadRecentBuffer(rnode, forknum, blkno, recent_buffer))
{
buffer = recent_buffer;
goto recent_buffer_fast_path;
}
/* Open the relation at smgr level */
smgr = smgropen(rnode, InvalidBackendId);
@ -517,7 +503,6 @@ XLogReadBufferExtended(RelFileNode rnode, ForkNumber forknum,
}
}
recent_buffer_fast_path:
if (mode == RBM_NORMAL)
{
/* check that page has been initialized */
@ -701,7 +686,8 @@ XLogTruncateRelation(RelFileNode rnode, ForkNumber forkNum,
void
XLogReadDetermineTimeline(XLogReaderState *state, XLogRecPtr wantPage, uint32 wantLength)
{
const XLogRecPtr lastReadPage = state->readPagePtr;
const XLogRecPtr lastReadPage = (state->seg.ws_segno *
state->segcxt.ws_segsize + state->segoff);
Assert(wantPage != InvalidXLogRecPtr && wantPage % XLOG_BLCKSZ == 0);
Assert(wantLength <= XLOG_BLCKSZ);
@ -716,7 +702,7 @@ XLogReadDetermineTimeline(XLogReaderState *state, XLogRecPtr wantPage, uint32 wa
* current TLI has since become historical.
*/
if (lastReadPage == wantPage &&
state->page_verified &&
state->readLen != 0 &&
lastReadPage + state->readLen >= wantPage + Min(wantLength, XLOG_BLCKSZ - 1))
return;
@ -838,12 +824,10 @@ wal_segment_close(XLogReaderState *state)
* exists for normal backends, so we have to do a check/sleep/repeat style of
* loop for now.
*/
bool
read_local_xlog_page(XLogReaderState *state)
int
read_local_xlog_page(XLogReaderState *state, XLogRecPtr targetPagePtr,
int reqLen, XLogRecPtr targetRecPtr, char *cur_page)
{
XLogRecPtr targetPagePtr = state->readPagePtr;
int reqLen = state->reqLen;
char *cur_page = state->readBuf;
XLogRecPtr read_upto,
loc;
TimeLineID tli;
@ -942,8 +926,7 @@ read_local_xlog_page(XLogReaderState *state)
else if (targetPagePtr + reqLen > read_upto)
{
/* not enough data there */
XLogReaderSetInputData(state, -1);
return false;
return -1;
}
else
{
@ -956,14 +939,12 @@ read_local_xlog_page(XLogReaderState *state)
* as 'count', read the whole page anyway. It's guaranteed to be
* zero-padded up to the page boundary if it's incomplete.
*/
if (!WALRead(state, wal_segment_open, wal_segment_close,
cur_page, targetPagePtr, XLOG_BLCKSZ, tli, &errinfo))
if (!WALRead(state, cur_page, targetPagePtr, XLOG_BLCKSZ, tli,
&errinfo))
WALReadRaiseError(&errinfo);
/* number of valid bytes in the buffer */
state->readPagePtr = targetPagePtr;
XLogReaderSetInputData(state, count);
return true;
return count;
}
/*

@ -899,20 +899,6 @@ CREATE VIEW pg_stat_wal_receiver AS
FROM pg_stat_get_wal_receiver() s
WHERE s.pid IS NOT NULL;
CREATE VIEW pg_stat_prefetch_recovery AS
SELECT
s.stats_reset,
s.prefetch,
s.skip_hit,
s.skip_new,
s.skip_fpw,
s.skip_seq,
s.distance,
s.queue_depth,
s.avg_distance,
s.avg_queue_depth
FROM pg_stat_get_prefetch_recovery() s;
CREATE VIEW pg_stat_subscription AS
SELECT
su.oid AS subid,

@ -38,7 +38,6 @@
#include "access/transam.h"
#include "access/twophase_rmgr.h"
#include "access/xact.h"
#include "access/xlogprefetch.h"
#include "catalog/partition.h"
#include "catalog/pg_database.h"
#include "catalog/pg_proc.h"
@ -280,7 +279,6 @@ static PgStat_GlobalStats globalStats;
static PgStat_WalStats walStats;
static PgStat_SLRUStats slruStats[SLRU_NUM_ELEMENTS];
static HTAB *replSlotStatHash = NULL;
static PgStat_RecoveryPrefetchStats recoveryPrefetchStats;
/*
* List of OIDs of databases we need to write out. If an entry is InvalidOid,
@ -352,7 +350,6 @@ static void pgstat_recv_archiver(PgStat_MsgArchiver *msg, int len);
static void pgstat_recv_bgwriter(PgStat_MsgBgWriter *msg, int len);
static void pgstat_recv_wal(PgStat_MsgWal *msg, int len);
static void pgstat_recv_slru(PgStat_MsgSLRU *msg, int len);
static void pgstat_recv_recoveryprefetch(PgStat_MsgRecoveryPrefetch *msg, int len);
static void pgstat_recv_funcstat(PgStat_MsgFuncstat *msg, int len);
static void pgstat_recv_funcpurge(PgStat_MsgFuncpurge *msg, int len);
static void pgstat_recv_recoveryconflict(PgStat_MsgRecoveryConflict *msg, int len);
@ -1446,20 +1443,11 @@ pgstat_reset_shared_counters(const char *target)
msg.m_resettarget = RESET_BGWRITER;
else if (strcmp(target, "wal") == 0)
msg.m_resettarget = RESET_WAL;
else if (strcmp(target, "prefetch_recovery") == 0)
{
/*
* We can't ask the stats collector to do this for us as it is not
* attached to shared memory.
*/
XLogPrefetchRequestResetStats();
return;
}
else
ereport(ERROR,
(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
errmsg("unrecognized reset target: \"%s\"", target),
errhint("Target must be \"archiver\", \"bgwriter\", \"wal\" or \"prefetch_recovery\".")));
errhint("Target must be \"archiver\", \"bgwriter\" or \"wal\".")));
pgstat_setheader(&msg.m_hdr, PGSTAT_MTYPE_RESETSHAREDCOUNTER);
pgstat_send(&msg, sizeof(msg));
@ -2897,22 +2885,6 @@ pgstat_fetch_replslot(NameData slotname)
return pgstat_get_replslot_entry(slotname, false);
}
/*
* ---------
* pgstat_fetch_recoveryprefetch() -
*
* Support function for restoring the counters managed by xlogprefetch.c.
* ---------
*/
PgStat_RecoveryPrefetchStats *
pgstat_fetch_recoveryprefetch(void)
{
backend_read_statsfile();
return &recoveryPrefetchStats;
}
/*
* Shut down a single backend's statistics reporting at process exit.
*
@ -3188,23 +3160,6 @@ pgstat_send_slru(void)
}
/* ----------
* pgstat_send_recoveryprefetch() -
*
* Send recovery prefetch statistics to the collector
* ----------
*/
void
pgstat_send_recoveryprefetch(PgStat_RecoveryPrefetchStats *stats)
{
PgStat_MsgRecoveryPrefetch msg;
pgstat_setheader(&msg.m_hdr, PGSTAT_MTYPE_RECOVERYPREFETCH);
msg.m_stats = *stats;
pgstat_send(&msg, sizeof(msg));
}
/* ----------
* PgstatCollectorMain() -
*
@ -3422,10 +3377,6 @@ PgstatCollectorMain(int argc, char *argv[])
pgstat_recv_slru(&msg.msg_slru, len);
break;
case PGSTAT_MTYPE_RECOVERYPREFETCH:
pgstat_recv_recoveryprefetch(&msg.msg_recoveryprefetch, len);
break;
case PGSTAT_MTYPE_FUNCSTAT:
pgstat_recv_funcstat(&msg.msg_funcstat, len);
break;
@ -3718,13 +3669,6 @@ pgstat_write_statsfiles(bool permanent, bool allDbs)
rc = fwrite(slruStats, sizeof(slruStats), 1, fpout);
(void) rc; /* we'll check for error with ferror */
/*
* Write recovery prefetch stats struct
*/
rc = fwrite(&recoveryPrefetchStats, sizeof(recoveryPrefetchStats), 1,
fpout);
(void) rc; /* we'll check for error with ferror */
/*
* Walk through the database table.
*/
@ -4000,7 +3944,6 @@ pgstat_read_statsfiles(Oid onlydb, bool permanent, bool deep)
memset(&archiverStats, 0, sizeof(archiverStats));
memset(&walStats, 0, sizeof(walStats));
memset(&slruStats, 0, sizeof(slruStats));
memset(&recoveryPrefetchStats, 0, sizeof(recoveryPrefetchStats));
/*
* Set the current timestamp (will be kept only in case we can't load an
@ -4100,18 +4043,6 @@ pgstat_read_statsfiles(Oid onlydb, bool permanent, bool deep)
goto done;
}
/*
* Read recoveryPrefetchStats struct
*/
if (fread(&recoveryPrefetchStats, 1, sizeof(recoveryPrefetchStats),
fpin) != sizeof(recoveryPrefetchStats))
{
ereport(pgStatRunningInCollector ? LOG : WARNING,
(errmsg("corrupted statistics file \"%s\"", statfile)));
memset(&recoveryPrefetchStats, 0, sizeof(recoveryPrefetchStats));
goto done;
}
/*
* We found an existing collector stats file. Read it and put all the
* hashtable entries into place.
@ -4452,7 +4383,6 @@ pgstat_read_db_statsfile_timestamp(Oid databaseid, bool permanent,
PgStat_WalStats myWalStats;
PgStat_SLRUStats mySLRUStats[SLRU_NUM_ELEMENTS];
PgStat_StatReplSlotEntry myReplSlotStats;
PgStat_RecoveryPrefetchStats myRecoveryPrefetchStats;
FILE *fpin;
int32 format_id;
const char *statfile = permanent ? PGSTAT_STAT_PERMANENT_FILENAME : pgstat_stat_filename;
@ -4529,18 +4459,6 @@ pgstat_read_db_statsfile_timestamp(Oid databaseid, bool permanent,
return false;
}
/*
* Read recovery prefetch stats struct
*/
if (fread(&myRecoveryPrefetchStats, 1, sizeof(myRecoveryPrefetchStats),
fpin) != sizeof(myRecoveryPrefetchStats))
{
ereport(pgStatRunningInCollector ? LOG : WARNING,
(errmsg("corrupted statistics file \"%s\"", statfile)));
FreeFile(fpin);
return false;
}
/* By default, we're going to return the timestamp of the global file. */
*ts = myGlobalStats.stats_timestamp;
@ -4724,13 +4642,6 @@ backend_read_statsfile(void)
if (ok && file_ts >= min_ts)
break;
/*
* If we're in crash recovery, the collector may not even be running,
* so work with what we have.
*/
if (InRecovery)
break;
/* Not there or too old, so kick the collector and wait a bit */
if ((count % PGSTAT_INQ_LOOP_COUNT) == 0)
pgstat_send_inquiry(cur_ts, min_ts, inquiry_db);
@ -5470,18 +5381,6 @@ pgstat_recv_slru(PgStat_MsgSLRU *msg, int len)
slruStats[msg->m_index].truncate += msg->m_truncate;
}
/* ----------
* pgstat_recv_recoveryprefetch() -
*
* Process a recovery prefetch message.
* ----------
*/
static void
pgstat_recv_recoveryprefetch(PgStat_MsgRecoveryPrefetch *msg, int len)
{
recoveryPrefetchStats = msg->m_stats;
}
/* ----------
* pgstat_recv_recoveryconflict() -
*

@ -123,7 +123,7 @@ LogicalDecodingProcessRecord(LogicalDecodingContext *ctx, XLogReaderState *recor
{
ReorderBufferAssignChild(ctx->reorder,
txid,
XLogRecGetXid(record),
record->decoded_record->xl_xid,
buf.origptr);
}

@ -148,8 +148,7 @@ StartupDecodingContext(List *output_plugin_options,
TransactionId xmin_horizon,
bool need_full_snapshot,
bool fast_forward,
LogicalDecodingXLogPageReadCB page_read,
WALSegmentCleanupCB cleanup_cb,
XLogReaderRoutine *xl_routine,
LogicalOutputPluginWriterPrepareWrite prepare_write,
LogicalOutputPluginWriterWrite do_write,
LogicalOutputPluginWriterUpdateProgress update_progress)
@ -199,12 +198,11 @@ StartupDecodingContext(List *output_plugin_options,
ctx->slot = slot;
ctx->reader = XLogReaderAllocate(wal_segment_size, NULL, cleanup_cb);
ctx->reader = XLogReaderAllocate(wal_segment_size, NULL, xl_routine, ctx);
if (!ctx->reader)
ereport(ERROR,
(errcode(ERRCODE_OUT_OF_MEMORY),
errmsg("out of memory")));
ctx->page_read = page_read;
ctx->reorder = ReorderBufferAllocate();
ctx->snapshot_builder =
@ -321,8 +319,7 @@ CreateInitDecodingContext(const char *plugin,
List *output_plugin_options,
bool need_full_snapshot,
XLogRecPtr restart_lsn,
LogicalDecodingXLogPageReadCB page_read,
WALSegmentCleanupCB cleanup_cb,
XLogReaderRoutine *xl_routine,
LogicalOutputPluginWriterPrepareWrite prepare_write,
LogicalOutputPluginWriterWrite do_write,
LogicalOutputPluginWriterUpdateProgress update_progress)
@ -425,7 +422,7 @@ CreateInitDecodingContext(const char *plugin,
ctx = StartupDecodingContext(NIL, restart_lsn, xmin_horizon,
need_full_snapshot, false,
page_read, cleanup_cb, prepare_write, do_write,
xl_routine, prepare_write, do_write,
update_progress);
/* call output plugin initialization callback */
@ -479,8 +476,7 @@ LogicalDecodingContext *
CreateDecodingContext(XLogRecPtr start_lsn,
List *output_plugin_options,
bool fast_forward,
LogicalDecodingXLogPageReadCB page_read,
WALSegmentCleanupCB cleanup_cb,
XLogReaderRoutine *xl_routine,
LogicalOutputPluginWriterPrepareWrite prepare_write,
LogicalOutputPluginWriterWrite do_write,
LogicalOutputPluginWriterUpdateProgress update_progress)
@ -532,8 +528,8 @@ CreateDecodingContext(XLogRecPtr start_lsn,
ctx = StartupDecodingContext(output_plugin_options,
start_lsn, InvalidTransactionId, false,
fast_forward, page_read, cleanup_cb,
prepare_write, do_write, update_progress);
fast_forward, xl_routine, prepare_write,
do_write, update_progress);
/* call output plugin initialization callback */
old_context = MemoryContextSwitchTo(ctx->context);
@ -589,13 +585,7 @@ DecodingContextFindStartpoint(LogicalDecodingContext *ctx)
char *err = NULL;
/* the read_page callback waits for new WAL */
while (XLogReadRecord(ctx->reader, &record, &err) ==
XLREAD_NEED_DATA)
{
if (!ctx->page_read(ctx->reader))
break;
}
record = XLogReadRecord(ctx->reader, &err);
if (err)
elog(ERROR, "%s", err);
if (!record)

View File

@ -233,8 +233,9 @@ pg_logical_slot_get_changes_guts(FunctionCallInfo fcinfo, bool confirm, bool bin
ctx = CreateDecodingContext(InvalidXLogRecPtr,
options,
false,
read_local_xlog_page,
wal_segment_close,
XL_ROUTINE(.page_read = read_local_xlog_page,
.segment_open = wal_segment_open,
.segment_close = wal_segment_close),
LogicalOutputPrepareWrite,
LogicalOutputWrite, NULL);
@ -283,13 +284,7 @@ pg_logical_slot_get_changes_guts(FunctionCallInfo fcinfo, bool confirm, bool bin
XLogRecord *record;
char *errm = NULL;
while (XLogReadRecord(ctx->reader, &record, &errm) ==
XLREAD_NEED_DATA)
{
if (!ctx->page_read(ctx->reader))
break;
}
record = XLogReadRecord(ctx->reader, &errm);
if (errm)
elog(ERROR, "%s", errm);

View File

@ -153,8 +153,9 @@ create_logical_replication_slot(char *name, char *plugin,
ctx = CreateInitDecodingContext(plugin, NIL,
false, /* just catalogs is OK */
restart_lsn,
read_local_xlog_page,
wal_segment_close,
XL_ROUTINE(.page_read = read_local_xlog_page,
.segment_open = wal_segment_open,
.segment_close = wal_segment_close),
NULL, NULL, NULL);
/*
@ -511,8 +512,9 @@ pg_logical_replication_slot_advance(XLogRecPtr moveto)
ctx = CreateDecodingContext(InvalidXLogRecPtr,
NIL,
true, /* fast_forward */
read_local_xlog_page,
wal_segment_close,
XL_ROUTINE(.page_read = read_local_xlog_page,
.segment_open = wal_segment_open,
.segment_close = wal_segment_close),
NULL, NULL, NULL);
/*
@ -534,13 +536,7 @@ pg_logical_replication_slot_advance(XLogRecPtr moveto)
* Read records. No changes are generated in fast_forward mode,
* but snapbuilder/slot statuses are updated properly.
*/
while (XLogReadRecord(ctx->reader, &record, &errm) ==
XLREAD_NEED_DATA)
{
if (!ctx->page_read(ctx->reader))
break;
}
record = XLogReadRecord(ctx->reader, &errm);
if (errm)
elog(ERROR, "%s", errm);

View File

@ -580,7 +580,10 @@ StartReplication(StartReplicationCmd *cmd)
/* create xlogreader for physical replication */
xlogreader =
XLogReaderAllocate(wal_segment_size, NULL, wal_segment_close);
XLogReaderAllocate(wal_segment_size, NULL,
XL_ROUTINE(.segment_open = WalSndSegmentOpen,
.segment_close = wal_segment_close),
NULL);
if (!xlogreader)
ereport(ERROR,
@ -803,12 +806,10 @@ StartReplication(StartReplicationCmd *cmd)
* which has to do a plain sleep/busy loop, because the walsender's latch gets
* set every time WAL is flushed.
*/
static bool
logical_read_xlog_page(XLogReaderState *state)
static int
logical_read_xlog_page(XLogReaderState *state, XLogRecPtr targetPagePtr, int reqLen,
XLogRecPtr targetRecPtr, char *cur_page)
{
XLogRecPtr targetPagePtr = state->readPagePtr;
int reqLen = state->reqLen;
char *cur_page = state->readBuf;
XLogRecPtr flushptr;
int count;
WALReadError errinfo;
@ -825,10 +826,7 @@ logical_read_xlog_page(XLogReaderState *state)
/* fail if not (implies we are going to shut down) */
if (flushptr < targetPagePtr + reqLen)
{
XLogReaderSetInputData(state, -1);
return false;
}
return -1;
if (targetPagePtr + XLOG_BLCKSZ <= flushptr)
count = XLOG_BLCKSZ; /* more than one block available */
@ -836,7 +834,7 @@ logical_read_xlog_page(XLogReaderState *state)
count = flushptr - targetPagePtr; /* part of the page available */
/* now actually read the data, we know it's there */
if (!WALRead(state, WalSndSegmentOpen, wal_segment_close,
if (!WALRead(state,
cur_page,
targetPagePtr,
XLOG_BLCKSZ,
@ -856,8 +854,7 @@ logical_read_xlog_page(XLogReaderState *state)
XLByteToSeg(targetPagePtr, segno, state->segcxt.ws_segsize);
CheckXLogRemoved(segno, state->seg.ws_tli);
XLogReaderSetInputData(state, count);
return true;
return count;
}
/*
@ -1010,8 +1007,9 @@ CreateReplicationSlot(CreateReplicationSlotCmd *cmd)
ctx = CreateInitDecodingContext(cmd->plugin, NIL, need_full_snapshot,
InvalidXLogRecPtr,
logical_read_xlog_page,
wal_segment_close,
XL_ROUTINE(.page_read = logical_read_xlog_page,
.segment_open = WalSndSegmentOpen,
.segment_close = wal_segment_close),
WalSndPrepareWrite, WalSndWriteData,
WalSndUpdateProgress);
@ -1169,8 +1167,9 @@ StartLogicalReplication(StartReplicationCmd *cmd)
*/
logical_decoding_ctx =
CreateDecodingContext(cmd->startpoint, cmd->options, false,
logical_read_xlog_page,
wal_segment_close,
XL_ROUTINE(.page_read = logical_read_xlog_page,
.segment_open = WalSndSegmentOpen,
.segment_close = wal_segment_close),
WalSndPrepareWrite, WalSndWriteData,
WalSndUpdateProgress);
xlogreader = logical_decoding_ctx->reader;
@ -2763,7 +2762,7 @@ XLogSendPhysical(void)
enlargeStringInfo(&output_message, nbytes);
retry:
if (!WALRead(xlogreader, WalSndSegmentOpen, wal_segment_close,
if (!WALRead(xlogreader,
&output_message.data[output_message.len],
startptr,
nbytes,
@ -2861,12 +2860,7 @@ XLogSendLogical(void)
*/
WalSndCaughtUp = false;
while (XLogReadRecord(logical_decoding_ctx->reader, &record, &errm) ==
XLREAD_NEED_DATA)
{
if (!logical_decoding_ctx->page_read(logical_decoding_ctx->reader))
break;
}
record = XLogReadRecord(logical_decoding_ctx->reader, &errm);
/* xlog record was invalid */
if (errm != NULL)

View File

@ -210,8 +210,7 @@ XLogRecordPageWithFreeSpace(RelFileNode rnode, BlockNumber heapBlk,
blkno = fsm_logical_to_physical(addr);
/* If the page doesn't exist already, extend */
buf = XLogReadBufferExtended(rnode, FSM_FORKNUM, blkno, RBM_ZERO_ON_ERROR,
InvalidBuffer);
buf = XLogReadBufferExtended(rnode, FSM_FORKNUM, blkno, RBM_ZERO_ON_ERROR);
LockBuffer(buf, BUFFER_LOCK_EXCLUSIVE);
page = BufferGetPage(buf);

View File

@ -22,7 +22,6 @@
#include "access/subtrans.h"
#include "access/syncscan.h"
#include "access/twophase.h"
#include "access/xlogprefetch.h"
#include "commands/async.h"
#include "miscadmin.h"
#include "pgstat.h"
@ -127,7 +126,6 @@ CreateSharedMemoryAndSemaphores(void)
size = add_size(size, PredicateLockShmemSize());
size = add_size(size, ProcGlobalShmemSize());
size = add_size(size, XLOGShmemSize());
size = add_size(size, XLogPrefetchShmemSize());
size = add_size(size, CLOGShmemSize());
size = add_size(size, CommitTsShmemSize());
size = add_size(size, SUBTRANSShmemSize());
@ -219,7 +217,6 @@ CreateSharedMemoryAndSemaphores(void)
* Set up xlog, clog, and buffers
*/
XLOGShmemInit();
XLogPrefetchShmemInit();
CLOGShmemInit();
CommitTsShmemInit();
SUBTRANSShmemInit();

View File

@ -41,7 +41,6 @@
#include "access/twophase.h"
#include "access/xact.h"
#include "access/xlog_internal.h"
#include "access/xlogprefetch.h"
#include "catalog/namespace.h"
#include "catalog/pg_authid.h"
#include "catalog/storage.h"
@ -210,7 +209,6 @@ static bool check_effective_io_concurrency(int *newval, void **extra, GucSource
static bool check_maintenance_io_concurrency(int *newval, void **extra, GucSource source);
static bool check_huge_page_size(int *newval, void **extra, GucSource source);
static bool check_client_connection_check_interval(int *newval, void **extra, GucSource source);
static void assign_maintenance_io_concurrency(int newval, void *extra);
static void assign_pgstat_temp_directory(const char *newval, void *extra);
static bool check_application_name(char **newval, void **extra, GucSource source);
static void assign_application_name(const char *newval, void *extra);
@ -727,8 +725,6 @@ const char *const config_group_names[] =
gettext_noop("Write-Ahead Log / Checkpoints"),
/* WAL_ARCHIVING */
gettext_noop("Write-Ahead Log / Archiving"),
/* WAL_RECOVERY */
gettext_noop("Write-Ahead Log / Recovery"),
/* WAL_ARCHIVE_RECOVERY */
gettext_noop("Write-Ahead Log / Archive Recovery"),
/* WAL_RECOVERY_TARGET */
@ -1280,27 +1276,6 @@ static struct config_bool ConfigureNamesBool[] =
true,
NULL, NULL, NULL
},
{
{"recovery_prefetch", PGC_SIGHUP, WAL_RECOVERY,
gettext_noop("Prefetch referenced blocks during recovery."),
gettext_noop("Read ahead of the current replay position to find uncached blocks.")
},
&recovery_prefetch,
false,
NULL, assign_recovery_prefetch, NULL
},
{
{"recovery_prefetch_fpw", PGC_SIGHUP, WAL_RECOVERY,
gettext_noop("Prefetch blocks that have full page images in the WAL."),
gettext_noop("On some systems, there is no benefit to prefetching pages that will be "
"entirely overwritten, but if the logical page size of the filesystem is "
"larger than PostgreSQL's, this can be beneficial. This option has no "
"effect unless recovery_prefetch is enabled.")
},
&recovery_prefetch_fpw,
false,
NULL, assign_recovery_prefetch_fpw, NULL
},
{
{"wal_log_hints", PGC_POSTMASTER, WAL_SETTINGS,
@ -2755,17 +2730,6 @@ static struct config_int ConfigureNamesInt[] =
NULL, NULL, NULL
},
{
{"wal_decode_buffer_size", PGC_POSTMASTER, WAL_RECOVERY,
gettext_noop("Maximum buffer size for reading ahead in the WAL during recovery."),
gettext_noop("This controls the maximum distance we can read ahead in the WAL to prefetch referenced blocks."),
GUC_UNIT_BYTE
},
&wal_decode_buffer_size,
512 * 1024, 64 * 1024, INT_MAX,
NULL, NULL, NULL
},
{
{"wal_keep_size", PGC_SIGHUP, REPLICATION_SENDING,
gettext_noop("Sets the size of WAL files held for standby servers."),
@ -3086,8 +3050,7 @@ static struct config_int ConfigureNamesInt[] =
0,
#endif
0, MAX_IO_CONCURRENCY,
check_maintenance_io_concurrency, assign_maintenance_io_concurrency,
NULL
check_maintenance_io_concurrency, NULL, NULL
},
{
@ -12091,20 +12054,6 @@ check_client_connection_check_interval(int *newval, void **extra, GucSource sour
return true;
}
static void
assign_maintenance_io_concurrency(int newval, void *extra)
{
#ifdef USE_PREFETCH
/*
* Reconfigure recovery prefetching, because a setting it depends on
* changed.
*/
maintenance_io_concurrency = newval;
if (AmStartupProcess())
XLogPrefetchReconfigure();
#endif
}
static void
assign_pgstat_temp_directory(const char *newval, void *extra)
{

View File

@ -250,12 +250,6 @@
#archive_timeout = 0 # force a logfile segment switch after this
# number of seconds; 0 disables
# - Recovery -
#recovery_prefetch = off # prefetch pages referenced in the WAL?
#recovery_prefetch_fpw = off # even pages logged with full page?
#wal_decode_buffer_size = 512kB # lookahead window used for prefetching
# - Archive Recovery -
# These are only used in recovery mode.

View File

@ -41,9 +41,15 @@ static int xlogreadfd = -1;
static XLogSegNo xlogreadsegno = -1;
static char xlogfpath[MAXPGPATH];
static bool SimpleXLogPageRead(XLogReaderState *xlogreader,
const char *datadir, int *tliIndex,
const char *restoreCommand);
typedef struct XLogPageReadPrivate
{
const char *restoreCommand;
int tliIndex;
} XLogPageReadPrivate;
static int SimpleXLogPageRead(XLogReaderState *xlogreader,
XLogRecPtr targetPagePtr,
int reqLen, XLogRecPtr targetRecPtr, char *readBuf);
/*
* Read WAL from the datadir/pg_wal, starting from 'startpoint' on timeline
@ -60,22 +66,20 @@ extractPageMap(const char *datadir, XLogRecPtr startpoint, int tliIndex,
XLogRecord *record;
XLogReaderState *xlogreader;
char *errormsg;
XLogPageReadPrivate private;
xlogreader = XLogReaderAllocate(WalSegSz, datadir, NULL);
private.tliIndex = tliIndex;
private.restoreCommand = restoreCommand;
xlogreader = XLogReaderAllocate(WalSegSz, datadir,
XL_ROUTINE(.page_read = &SimpleXLogPageRead),
&private);
if (xlogreader == NULL)
pg_fatal("out of memory");
XLogBeginRead(xlogreader, startpoint);
do
{
while (XLogReadRecord(xlogreader, &record, &errormsg) ==
XLREAD_NEED_DATA)
{
if (!SimpleXLogPageRead(xlogreader, datadir,
&tliIndex, restoreCommand))
break;
}
record = XLogReadRecord(xlogreader, &errormsg);
if (record == NULL)
{
@ -119,19 +123,19 @@ readOneRecord(const char *datadir, XLogRecPtr ptr, int tliIndex,
XLogRecord *record;
XLogReaderState *xlogreader;
char *errormsg;
XLogPageReadPrivate private;
XLogRecPtr endptr;
xlogreader = XLogReaderAllocate(WalSegSz, datadir, NULL);
private.tliIndex = tliIndex;
private.restoreCommand = restoreCommand;
xlogreader = XLogReaderAllocate(WalSegSz, datadir,
XL_ROUTINE(.page_read = &SimpleXLogPageRead),
&private);
if (xlogreader == NULL)
pg_fatal("out of memory");
XLogBeginRead(xlogreader, ptr);
while (XLogReadRecord(xlogreader, &record, &errormsg) ==
XLREAD_NEED_DATA)
{
if (!SimpleXLogPageRead(xlogreader, datadir, &tliIndex, restoreCommand))
break;
}
record = XLogReadRecord(xlogreader, &errormsg);
if (record == NULL)
{
if (errormsg)
@ -166,6 +170,7 @@ findLastCheckpoint(const char *datadir, XLogRecPtr forkptr, int tliIndex,
XLogRecPtr searchptr;
XLogReaderState *xlogreader;
char *errormsg;
XLogPageReadPrivate private;
/*
* The given fork pointer points to the end of the last common record,
@ -181,7 +186,11 @@ findLastCheckpoint(const char *datadir, XLogRecPtr forkptr, int tliIndex,
forkptr += SizeOfXLogShortPHD;
}
xlogreader = XLogReaderAllocate(WalSegSz, datadir, NULL);
private.tliIndex = tliIndex;
private.restoreCommand = restoreCommand;
xlogreader = XLogReaderAllocate(WalSegSz, datadir,
XL_ROUTINE(.page_read = &SimpleXLogPageRead),
&private);
if (xlogreader == NULL)
pg_fatal("out of memory");
@ -191,13 +200,7 @@ findLastCheckpoint(const char *datadir, XLogRecPtr forkptr, int tliIndex,
uint8 info;
XLogBeginRead(xlogreader, searchptr);
while (XLogReadRecord(xlogreader, &record, &errormsg) ==
XLREAD_NEED_DATA)
{
if (!SimpleXLogPageRead(xlogreader, datadir,
&tliIndex, restoreCommand))
break;
}
record = XLogReadRecord(xlogreader, &errormsg);
if (record == NULL)
{
@ -243,19 +246,16 @@ findLastCheckpoint(const char *datadir, XLogRecPtr forkptr, int tliIndex,
}
/* XLogReader callback function, to read a WAL page */
static bool
SimpleXLogPageRead(XLogReaderState *xlogreader, const char *datadir,
int *tliIndex, const char *restoreCommand)
static int
SimpleXLogPageRead(XLogReaderState *xlogreader, XLogRecPtr targetPagePtr,
int reqLen, XLogRecPtr targetRecPtr, char *readBuf)
{
XLogRecPtr targetPagePtr = xlogreader->readPagePtr;
char *readBuf = xlogreader->readBuf;
XLogPageReadPrivate *private = (XLogPageReadPrivate *) xlogreader->private_data;
uint32 targetPageOff;
XLogRecPtr targetSegEnd;
XLogSegNo targetSegNo;
int r;
Assert(xlogreader->reqLen <= XLOG_BLCKSZ);
XLByteToSeg(targetPagePtr, targetSegNo, WalSegSz);
XLogSegNoOffsetToRecPtr(targetSegNo + 1, 0, WalSegSz, targetSegEnd);
targetPageOff = XLogSegmentOffset(targetPagePtr, WalSegSz);
@ -283,14 +283,14 @@ SimpleXLogPageRead(XLogReaderState *xlogreader, const char *datadir,
* be done both forward and backward, consider also switching timeline
* accordingly.
*/
while (*tliIndex < targetNentries - 1 &&
targetHistory[*tliIndex].end < targetSegEnd)
(*tliIndex)++;
while (*tliIndex > 0 &&
targetHistory[*tliIndex].begin >= targetSegEnd)
(*tliIndex)--;
while (private->tliIndex < targetNentries - 1 &&
targetHistory[private->tliIndex].end < targetSegEnd)
private->tliIndex++;
while (private->tliIndex > 0 &&
targetHistory[private->tliIndex].begin >= targetSegEnd)
private->tliIndex--;
XLogFileName(xlogfname, targetHistory[*tliIndex].tli,
XLogFileName(xlogfname, targetHistory[private->tliIndex].tli,
xlogreadsegno, WalSegSz);
snprintf(xlogfpath, MAXPGPATH, "%s/" XLOGDIR "/%s",
@ -303,11 +303,10 @@ SimpleXLogPageRead(XLogReaderState *xlogreader, const char *datadir,
/*
* If we have no restore_command to execute, then exit.
*/
if (restoreCommand == NULL)
if (private->restoreCommand == NULL)
{
pg_log_error("could not open file \"%s\": %m", xlogfpath);
XLogReaderSetInputData(xlogreader, -1);
return false;
return -1;
}
/*
@ -317,13 +316,10 @@ SimpleXLogPageRead(XLogReaderState *xlogreader, const char *datadir,
xlogreadfd = RestoreArchivedFile(xlogreader->segcxt.ws_dir,
xlogfname,
WalSegSz,
restoreCommand);
private->restoreCommand);
if (xlogreadfd < 0)
{
XLogReaderSetInputData(xlogreader, -1);
return false;
}
return -1;
else
pg_log_debug("using file \"%s\" restored from archive",
xlogfpath);
@ -339,8 +335,7 @@ SimpleXLogPageRead(XLogReaderState *xlogreader, const char *datadir,
if (lseek(xlogreadfd, (off_t) targetPageOff, SEEK_SET) < 0)
{
pg_log_error("could not seek in file \"%s\": %m", xlogfpath);
XLogReaderSetInputData(xlogreader, -1);
return false;
return -1;
}
@ -353,15 +348,13 @@ SimpleXLogPageRead(XLogReaderState *xlogreader, const char *datadir,
pg_log_error("could not read file \"%s\": read %d of %zu",
xlogfpath, r, (Size) XLOG_BLCKSZ);
XLogReaderSetInputData(xlogreader, -1);
return false;
return -1;
}
Assert(targetSegNo == xlogreadsegno);
xlogreader->seg.ws_tli = targetHistory[*tliIndex].tli;
XLogReaderSetInputData(xlogreader, XLOG_BLCKSZ);
return true;
xlogreader->seg.ws_tli = targetHistory[private->tliIndex].tli;
return XLOG_BLCKSZ;
}
/*
@ -439,7 +432,7 @@ extractPageInfo(XLogReaderState *record)
RmgrNames[rmid], info);
}
for (block_id = 0; block_id <= XLogRecMaxBlockId(record); block_id++)
for (block_id = 0; block_id <= record->max_block_id; block_id++)
{
RelFileNode rnode;
ForkNumber forknum;

View File

@ -29,6 +29,14 @@ static const char *progname;
static int WalSegSz;
typedef struct XLogDumpPrivate
{
TimeLineID timeline;
XLogRecPtr startptr;
XLogRecPtr endptr;
bool endptr_reached;
} XLogDumpPrivate;
typedef struct XLogDumpConfig
{
/* display options */
@ -322,41 +330,30 @@ WALDumpCloseSegment(XLogReaderState *state)
state->seg.ws_file = -1;
}
/*
* pg_waldump's WAL page reader
*
* timeline and startptr specifies the LSN, and reads up to endptr.
*/
static bool
WALDumpReadPage(XLogReaderState *state, TimeLineID timeline,
XLogRecPtr startptr, XLogRecPtr endptr)
/* pg_waldump's XLogReaderRoutine->page_read callback */
static int
WALDumpReadPage(XLogReaderState *state, XLogRecPtr targetPagePtr, int reqLen,
XLogRecPtr targetPtr, char *readBuff)
{
XLogRecPtr targetPagePtr = state->readPagePtr;
int reqLen = state->reqLen;
char *readBuff = state->readBuf;
XLogDumpPrivate *private = state->private_data;
int count = XLOG_BLCKSZ;
WALReadError errinfo;
/* determine the number of bytes to read on the page */
if (endptr != InvalidXLogRecPtr)
if (private->endptr != InvalidXLogRecPtr)
{
if (targetPagePtr + XLOG_BLCKSZ <= endptr)
if (targetPagePtr + XLOG_BLCKSZ <= private->endptr)
count = XLOG_BLCKSZ;
else if (targetPagePtr + reqLen <= endptr)
count = endptr - targetPagePtr;
else if (targetPagePtr + reqLen <= private->endptr)
count = private->endptr - targetPagePtr;
else
{
/* Notify xlogreader that we didn't read at all */
XLogReaderSetInputData(state, -1);
return false;
private->endptr_reached = true;
return -1;
}
}
/* We should read more than requested by xlogreader */
Assert(count >= state->readLen);
if (!WALRead(state, WALDumpOpenSegment, WALDumpCloseSegment,
readBuff, targetPagePtr, count, timeline, &errinfo))
if (!WALRead(state, readBuff, targetPagePtr, count, private->timeline,
&errinfo))
{
WALOpenSegment *seg = &errinfo.wre_seg;
char fname[MAXPGPATH];
@ -376,9 +373,7 @@ WALDumpReadPage(XLogReaderState *state, TimeLineID timeline,
(Size) errinfo.wre_req);
}
/* Notify xlogreader of how many bytes we have read */
XLogReaderSetInputData(state, count);
return true;
return count;
}
/*
@ -397,10 +392,10 @@ XLogDumpRecordLen(XLogReaderState *record, uint32 *rec_len, uint32 *fpi_len)
* add an accessor macro for this.
*/
*fpi_len = 0;
for (block_id = 0; block_id <= XLogRecMaxBlockId(record); block_id++)
for (block_id = 0; block_id <= record->max_block_id; block_id++)
{
if (XLogRecHasBlockImage(record, block_id))
*fpi_len += record->record->blocks[block_id].bimg_len;
*fpi_len += record->blocks[block_id].bimg_len;
}
/*
@ -498,7 +493,7 @@ XLogDumpDisplayRecord(XLogDumpConfig *config, XLogReaderState *record)
if (!config->bkp_details)
{
/* print block references (short format) */
for (block_id = 0; block_id <= XLogRecMaxBlockId(record); block_id++)
for (block_id = 0; block_id <= record->max_block_id; block_id++)
{
if (!XLogRecHasBlockRef(record, block_id))
continue;
@ -529,7 +524,7 @@ XLogDumpDisplayRecord(XLogDumpConfig *config, XLogReaderState *record)
{
/* print block references (detailed format) */
putchar('\n');
for (block_id = 0; block_id <= XLogRecMaxBlockId(record); block_id++)
for (block_id = 0; block_id <= record->max_block_id; block_id++)
{
if (!XLogRecHasBlockRef(record, block_id))
continue;
@ -542,26 +537,26 @@ XLogDumpDisplayRecord(XLogDumpConfig *config, XLogReaderState *record)
blk);
if (XLogRecHasBlockImage(record, block_id))
{
if (record->record->blocks[block_id].bimg_info &
if (record->blocks[block_id].bimg_info &
BKPIMAGE_IS_COMPRESSED)
{
printf(" (FPW%s); hole: offset: %u, length: %u, "
"compression saved: %u",
XLogRecBlockImageApply(record, block_id) ?
"" : " for WAL verification",
record->record->blocks[block_id].hole_offset,
record->record->blocks[block_id].hole_length,
record->blocks[block_id].hole_offset,
record->blocks[block_id].hole_length,
BLCKSZ -
record->record->blocks[block_id].hole_length -
record->record->blocks[block_id].bimg_len);
record->blocks[block_id].hole_length -
record->blocks[block_id].bimg_len);
}
else
{
printf(" (FPW%s); hole: offset: %u, length: %u",
XLogRecBlockImageApply(record, block_id) ?
"" : " for WAL verification",
record->record->blocks[block_id].hole_offset,
record->record->blocks[block_id].hole_length);
record->blocks[block_id].hole_offset,
record->blocks[block_id].hole_length);
}
}
putchar('\n');
@ -759,10 +754,7 @@ main(int argc, char **argv)
uint32 xlogid;
uint32 xrecoff;
XLogReaderState *xlogreader_state;
XLogFindNextRecordState *findnext_state;
TimeLineID timeline;
XLogRecPtr startptr;
XLogRecPtr endptr;
XLogDumpPrivate private;
XLogDumpConfig config;
XLogDumpStats stats;
XLogRecord *record;
@ -808,12 +800,14 @@ main(int argc, char **argv)
}
}
memset(&private, 0, sizeof(XLogDumpPrivate));
memset(&config, 0, sizeof(XLogDumpConfig));
memset(&stats, 0, sizeof(XLogDumpStats));
timeline = 1;
startptr = InvalidXLogRecPtr;
endptr = InvalidXLogRecPtr;
private.timeline = 1;
private.startptr = InvalidXLogRecPtr;
private.endptr = InvalidXLogRecPtr;
private.endptr_reached = false;
config.quiet = false;
config.bkp_details = false;
@ -847,7 +841,7 @@ main(int argc, char **argv)
optarg);
goto bad_argument;
}
endptr = (uint64) xlogid << 32 | xrecoff;
private.endptr = (uint64) xlogid << 32 | xrecoff;
break;
case 'f':
config.follow = true;
@ -900,10 +894,10 @@ main(int argc, char **argv)
goto bad_argument;
}
else
startptr = (uint64) xlogid << 32 | xrecoff;
private.startptr = (uint64) xlogid << 32 | xrecoff;
break;
case 't':
if (sscanf(optarg, "%d", &timeline) != 1)
if (sscanf(optarg, "%d", &private.timeline) != 1)
{
pg_log_error("could not parse timeline \"%s\"", optarg);
goto bad_argument;
@ -980,21 +974,21 @@ main(int argc, char **argv)
close(fd);
/* parse position from file */
XLogFromFileName(fname, &timeline, &segno, WalSegSz);
XLogFromFileName(fname, &private.timeline, &segno, WalSegSz);
if (XLogRecPtrIsInvalid(startptr))
XLogSegNoOffsetToRecPtr(segno, 0, WalSegSz, startptr);
else if (!XLByteInSeg(startptr, segno, WalSegSz))
if (XLogRecPtrIsInvalid(private.startptr))
XLogSegNoOffsetToRecPtr(segno, 0, WalSegSz, private.startptr);
else if (!XLByteInSeg(private.startptr, segno, WalSegSz))
{
pg_log_error("start WAL location %X/%X is not inside file \"%s\"",
LSN_FORMAT_ARGS(startptr),
LSN_FORMAT_ARGS(private.startptr),
fname);
goto bad_argument;
}
/* no second file specified, set end position */
if (!(optind + 1 < argc) && XLogRecPtrIsInvalid(endptr))
XLogSegNoOffsetToRecPtr(segno + 1, 0, WalSegSz, endptr);
if (!(optind + 1 < argc) && XLogRecPtrIsInvalid(private.endptr))
XLogSegNoOffsetToRecPtr(segno + 1, 0, WalSegSz, private.endptr);
/* parse ENDSEG if passed */
if (optind + 1 < argc)
@ -1010,26 +1004,26 @@ main(int argc, char **argv)
close(fd);
/* parse position from file */
XLogFromFileName(fname, &timeline, &endsegno, WalSegSz);
XLogFromFileName(fname, &private.timeline, &endsegno, WalSegSz);
if (endsegno < segno)
fatal_error("ENDSEG %s is before STARTSEG %s",
argv[optind + 1], argv[optind]);
if (XLogRecPtrIsInvalid(endptr))
if (XLogRecPtrIsInvalid(private.endptr))
XLogSegNoOffsetToRecPtr(endsegno + 1, 0, WalSegSz,
endptr);
private.endptr);
/* set segno to endsegno for check of --end */
segno = endsegno;
}
if (!XLByteInSeg(endptr, segno, WalSegSz) &&
endptr != (segno + 1) * WalSegSz)
if (!XLByteInSeg(private.endptr, segno, WalSegSz) &&
private.endptr != (segno + 1) * WalSegSz)
{
pg_log_error("end WAL location %X/%X is not inside file \"%s\"",
LSN_FORMAT_ARGS(endptr),
LSN_FORMAT_ARGS(private.endptr),
argv[argc - 1]);
goto bad_argument;
}
@ -1038,7 +1032,7 @@ main(int argc, char **argv)
waldir = identify_target_directory(waldir, NULL);
/* we don't know what to print */
if (XLogRecPtrIsInvalid(startptr))
if (XLogRecPtrIsInvalid(private.startptr))
{
pg_log_error("no start WAL location given");
goto bad_argument;
@ -1048,56 +1042,42 @@ main(int argc, char **argv)
/* we have everything we need, start reading */
xlogreader_state =
XLogReaderAllocate(WalSegSz, waldir, WALDumpCloseSegment);
XLogReaderAllocate(WalSegSz, waldir,
XL_ROUTINE(.page_read = WALDumpReadPage,
.segment_open = WALDumpOpenSegment,
.segment_close = WALDumpCloseSegment),
&private);
if (!xlogreader_state)
fatal_error("out of memory");
findnext_state =
InitXLogFindNextRecord(xlogreader_state, startptr);
if (!findnext_state)
fatal_error("out of memory");
/* first find a valid recptr to start from */
while (XLogFindNextRecord(findnext_state))
{
if (!WALDumpReadPage(xlogreader_state, timeline, startptr, endptr))
break;
}
first_record = XLogFindNextRecord(xlogreader_state, private.startptr);
first_record = findnext_state->currRecPtr;
if (first_record == InvalidXLogRecPtr)
fatal_error("could not find a valid record after %X/%X",
LSN_FORMAT_ARGS(startptr));
LSN_FORMAT_ARGS(private.startptr));
/*
* Display a message that we're skipping data if `from` wasn't a pointer
* to the start of a record and also wasn't a pointer to the beginning of
* a segment (e.g. we were used in file mode).
*/
if (first_record != startptr &&
XLogSegmentOffset(startptr, WalSegSz) != 0)
if (first_record != private.startptr &&
XLogSegmentOffset(private.startptr, WalSegSz) != 0)
printf(ngettext("first record is after %X/%X, at %X/%X, skipping over %u byte\n",
"first record is after %X/%X, at %X/%X, skipping over %u bytes\n",
(first_record - startptr)),
LSN_FORMAT_ARGS(startptr),
(first_record - private.startptr)),
LSN_FORMAT_ARGS(private.startptr),
LSN_FORMAT_ARGS(first_record),
(uint32) (first_record - startptr));
(uint32) (first_record - private.startptr));
for (;;)
{
/* try to read the next record */
while (XLogReadRecord(xlogreader_state, &record, &errormsg) ==
XLREAD_NEED_DATA)
{
if (!WALDumpReadPage(xlogreader_state, timeline, startptr, endptr))
break;
}
record = XLogReadRecord(xlogreader_state, &errormsg);
if (!record)
{
if (!config.follow)
if (!config.follow || private.endptr_reached)
break;
else
{

View File

@ -132,7 +132,6 @@ extern char *PrimaryConnInfo;
extern char *PrimarySlotName;
extern bool wal_receiver_create_temp_slot;
extern bool track_wal_io_timing;
extern int wal_decode_buffer_size;
/* indirectly set via GUC system */
extern TransactionId recoveryTargetXid;

View File

@ -1,82 +0,0 @@
/*-------------------------------------------------------------------------
*
* xlogprefetch.h
* Declarations for the recovery prefetching module.
*
* Portions Copyright (c) 2021, PostgreSQL Global Development Group
* Portions Copyright (c) 1994, Regents of the University of California
*
* IDENTIFICATION
* src/include/access/xlogprefetch.h
*-------------------------------------------------------------------------
*/
#ifndef XLOGPREFETCH_H
#define XLOGPREFETCH_H
#include "access/xlogreader.h"
/* GUCs */
extern bool recovery_prefetch;
extern bool recovery_prefetch_fpw;
struct XLogPrefetcher;
typedef struct XLogPrefetcher XLogPrefetcher;
extern int XLogPrefetchReconfigureCount;
typedef struct XLogPrefetchState
{
XLogReaderState *reader;
XLogPrefetcher *prefetcher;
int reconfigure_count;
} XLogPrefetchState;
extern size_t XLogPrefetchShmemSize(void);
extern void XLogPrefetchShmemInit(void);
extern void XLogPrefetchReconfigure(void);
extern void XLogPrefetchRequestResetStats(void);
extern void XLogPrefetchBegin(XLogPrefetchState *state, XLogReaderState *reader);
extern void XLogPrefetchEnd(XLogPrefetchState *state);
/* Functions exposed only for the use of XLogPrefetch(). */
extern XLogPrefetcher *XLogPrefetcherAllocate(XLogReaderState *reader);
extern void XLogPrefetcherFree(XLogPrefetcher *prefetcher);
extern bool XLogPrefetcherReadAhead(XLogPrefetcher *prefetch,
XLogRecPtr replaying_lsn);
/*
* Tell the prefetching module that we are now replaying a given LSN, so that
* it can decide how far ahead to read in the WAL, if configured. Return
* true if more data is needed by the reader.
*/
static inline bool
XLogPrefetch(XLogPrefetchState *state, XLogRecPtr replaying_lsn)
{
/*
* Handle any configuration changes. Rather than trying to deal with
* various parameter changes, we just tear down and set up a new
* prefetcher if anything we depend on changes.
*/
if (unlikely(state->reconfigure_count != XLogPrefetchReconfigureCount))
{
/* If we had a prefetcher, tear it down. */
if (state->prefetcher)
{
XLogPrefetcherFree(state->prefetcher);
state->prefetcher = NULL;
}
/* If we want a prefetcher, set it up. */
if (recovery_prefetch)
state->prefetcher = XLogPrefetcherAllocate(state->reader);
state->reconfigure_count = XLogPrefetchReconfigureCount;
}
if (state->prefetcher)
return XLogPrefetcherReadAhead(state->prefetcher, replaying_lsn);
return false;
}
#endif

View File

@ -39,7 +39,6 @@
#endif
#include "access/xlogrecord.h"
#include "storage/buf.h"
/* WALOpenSegment represents a WAL segment being read. */
typedef struct WALOpenSegment
@ -57,17 +56,65 @@ typedef struct WALSegmentContext
} WALSegmentContext;
typedef struct XLogReaderState XLogReaderState;
typedef struct XLogFindNextRecordState XLogFindNextRecordState;
/* Function type definition for the segment cleanup callback */
typedef void (*WALSegmentCleanupCB) (XLogReaderState *xlogreader);
/* Function type definition for the open/close callbacks for WALRead() */
/* Function type definitions for various xlogreader interactions */
typedef int (*XLogPageReadCB) (XLogReaderState *xlogreader,
XLogRecPtr targetPagePtr,
int reqLen,
XLogRecPtr targetRecPtr,
char *readBuf);
typedef void (*WALSegmentOpenCB) (XLogReaderState *xlogreader,
XLogSegNo nextSegNo,
TimeLineID *tli_p);
typedef void (*WALSegmentCloseCB) (XLogReaderState *xlogreader);
typedef struct XLogReaderRoutine
{
/*
* Data input callback
*
* This callback shall read at least reqLen valid bytes of the xlog page
* starting at targetPagePtr, and store them in readBuf. The callback
* shall return the number of bytes read (never more than XLOG_BLCKSZ), or
* -1 on failure. The callback shall sleep, if necessary, to wait for the
* requested bytes to become available. The callback will not be invoked
* again for the same page unless more than the returned number of bytes
* are needed.
*
* targetRecPtr is the position of the WAL record we're reading. Usually
* it is equal to targetPagePtr + reqLen, but sometimes xlogreader needs
* to read and verify the page or segment header, before it reads the
* actual WAL record it's interested in. In that case, targetRecPtr can
* be used to determine which timeline to read the page from.
*
* The callback shall set ->seg.ws_tli to the TLI of the file the page was
* read from.
*/
XLogPageReadCB page_read;
/*
* Callback to open the specified WAL segment for reading. ->seg.ws_file
* shall be set to the file descriptor of the opened segment. In case of
* failure, an error shall be raised by the callback and it shall not
* return.
*
* "nextSegNo" is the number of the segment to be opened.
*
* "tli_p" is an input/output argument. WALRead() uses it to pass the
* timeline in which the new segment should be found, but the callback can
* use it to return the TLI that it actually opened.
*/
WALSegmentOpenCB segment_open;
/*
* WAL segment close callback. ->seg.ws_file shall be set to a negative
* number.
*/
WALSegmentCloseCB segment_close;
} XLogReaderRoutine;
#define XL_ROUTINE(...) &(XLogReaderRoutine){__VA_ARGS__}
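The `XL_ROUTINE` macro takes the address of a C99 compound literal, so a caller can name only the callbacks it cares about via designated initializers and let the rest default to NULL. A simplified echo of the idiom, with illustrative names:

```c
#include <assert.h>
#include <stddef.h>

/* Simplified callback table in the style of XLogReaderRoutine. */
typedef int  (*page_read_fn) (int pageno);
typedef void (*segment_open_fn) (int segno);

typedef struct DemoRoutine
{
    page_read_fn    page_read;
    segment_open_fn segment_open;
} DemoRoutine;

/* Same trick as XL_ROUTINE: pointer to a compound literal; fields not
 * named by the caller are zero-initialized (i.e. NULL). */
#define DEMO_ROUTINE(...) &(DemoRoutine){__VA_ARGS__}

static int
read_double(int pageno)
{
    return pageno * 2;
}
```

Usage looks like `DemoRoutine *r = DEMO_ROUTINE(.page_read = read_double);` — mirroring how callers pass `XL_ROUTINE(.page_read = ..., .segment_open = ...)` to `XLogReaderAllocate()`. Note the compound literal has automatic storage duration, so the pointer must not outlive the enclosing block.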
typedef struct
{
/* Is this block ref in use? */
@ -78,9 +125,6 @@ typedef struct
ForkNumber forknum;
BlockNumber blkno;
/* Workspace for remembering last known buffer holding this block. */
Buffer recent_buffer;
/* copy of the fork_flags field from the XLogRecordBlockHeader */
uint8 flags;
@ -100,61 +144,12 @@ typedef struct
uint16 data_bufsz;
} DecodedBkpBlock;
/* Return code from XLogReadRecord */
typedef enum XLogReadRecordResult
{
XLREAD_SUCCESS, /* record is successfully read */
XLREAD_NEED_DATA, /* need more data. see XLogReadRecord. */
XLREAD_FULL, /* cannot hold more data while reading ahead */
XLREAD_FAIL /* failed during reading a record */
} XLogReadRecordResult;
/*
* internal state of XLogReadRecord
*
* XLogReadRecord runs a state machine while reading a record. These states
* are not seen outside the function. Each state may repeat several times,
* exiting to request new data from the caller. See the comment of
* XLogReadRecord for details.
*/
typedef enum XLogReadRecordState
{
XLREAD_NEXT_RECORD,
XLREAD_TOT_LEN,
XLREAD_FIRST_FRAGMENT,
XLREAD_CONTINUATION
} XLogReadRecordState;
/*
* The decoded contents of a record. This occupies a contiguous region of
* memory, with main_data and blocks[n].data pointing to memory after the
* members declared here.
*/
typedef struct DecodedXLogRecord
{
/* Private member used for resource management. */
size_t size; /* total size of decoded record */
bool oversized; /* outside the regular decode buffer? */
struct DecodedXLogRecord *next; /* decoded record queue link */
/* Public members. */
XLogRecPtr lsn; /* location */
XLogRecPtr next_lsn; /* location of next record */
XLogRecord header; /* header */
RepOriginId record_origin;
TransactionId toplevel_xid; /* XID of top-level transaction */
char *main_data; /* record's main data portion */
uint32 main_data_len; /* main data portion's length */
int max_block_id; /* highest block_id in use (-1 if none) */
DecodedBkpBlock blocks[FLEXIBLE_ARRAY_MEMBER];
} DecodedXLogRecord;
struct XLogReaderState
{
/*
* Operational callbacks
*/
WALSegmentCleanupCB cleanup_cb;
XLogReaderRoutine routine;
/* ----------------------------------------
* Public parameters
@ -167,33 +162,19 @@ struct XLogReaderState
*/
uint64 system_identifier;
/*
* Opaque data for callbacks to use. Not used by XLogReader.
*/
void *private_data;
/*
* Start and end point of last record read. EndRecPtr is also used as the
* position to read next. Calling XLogBeginRead() sets EndRecPtr to the
* starting position and ReadRecPtr to invalid.
*
* Start and end point of last record returned by XLogReadRecord(). These
* are also available as record->lsn and record->next_lsn.
*/
XLogRecPtr ReadRecPtr; /* start of last record read or being read */
XLogRecPtr ReadRecPtr; /* start of last record read */
XLogRecPtr EndRecPtr; /* end+1 of last record read */
/* ----------------------------------------
* Communication with page reader
* readBuf is XLOG_BLCKSZ bytes, valid up to at least reqLen bytes.
* ----------------------------------------
*/
/* variables the clients of xlogreader can examine */
XLogRecPtr readPagePtr; /* page pointer to read */
int32 reqLen; /* bytes requested to the caller */
char *readBuf; /* buffer to store data */
bool page_verified; /* is the page header on the buffer verified? */
bool record_verified;/* is the current record header verified? */
/* variables set by the client of xlogreader */
int32 readLen; /* actual bytes copied into readBuf by client,
* which should be >= reqLen. Client should
* use XLogReaderSetInputData() to set. */
/* ----------------------------------------
* Decoded representation of current record
@ -201,17 +182,21 @@ struct XLogReaderState
* Use XLogRecGet* functions to investigate the record; these fields
* should not be accessed directly.
* ----------------------------------------
* Start and end point of the last record read and decoded by
* XLogReadRecordInternal(). NextRecPtr is also used as the position to
* decode next. Calling XLogBeginRead() sets NextRecPtr and EndRecPtr to
* the requested starting position.
*/
XLogRecPtr DecodeRecPtr; /* start of last record decoded */
XLogRecPtr NextRecPtr; /* end+1 of last record decoded */
XLogRecPtr PrevRecPtr; /* start of previous record decoded */
XLogRecord *decoded_record; /* currently decoded record */
/* Last record returned by XLogReadRecord(). */
DecodedXLogRecord *record;
char *main_data; /* record's main data portion */
uint32 main_data_len; /* main data portion's length */
uint32 main_data_bufsz; /* allocated size of the buffer */
RepOriginId record_origin;
TransactionId toplevel_xid; /* XID of top-level transaction */
/* information about blocks referenced by the record. */
DecodedBkpBlock blocks[XLR_MAX_BLOCK_ID + 1];
int max_block_id; /* highest block_id in use (-1 if none) */
/* ----------------------------------------
* private/internal state
@ -219,24 +204,11 @@ struct XLogReaderState
*/
/*
* Buffer for decoded records. This is a circular buffer, though
* individual records can't be split in the middle, so some space is often
* wasted at the end. Oversized records that don't fit in this space are
* allocated separately.
* Buffer for currently read page (XLOG_BLCKSZ bytes, valid up to at least
* readLen bytes)
*/
char *decode_buffer;
size_t decode_buffer_size;
bool free_decode_buffer; /* need to free? */
char *decode_buffer_head; /* write head */
char *decode_buffer_tail; /* read head */
/*
* Queue of records that have been decoded. This is a linked list that
* usually consists of consecutive records in decode_buffer, but may also
* contain oversized records allocated with palloc().
*/
DecodedXLogRecord *decode_queue_head; /* newest decoded record */
DecodedXLogRecord *decode_queue_tail; /* oldest decoded record */
char *readBuf;
uint32 readLen;
/* last read XLOG position for data currently in readBuf */
WALSegmentContext segcxt;
@ -250,6 +222,8 @@ struct XLogReaderState
XLogRecPtr latestPagePtr;
TimeLineID latestPageTLI;
/* beginning of the WAL record being read. */
XLogRecPtr currRecPtr;
/* timeline to read it from, 0 if a lookup is required */
TimeLineID currTLI;
@ -276,70 +250,29 @@ struct XLogReaderState
char *readRecordBuf;
uint32 readRecordBufSize;
/*
* XLogReadRecordInternal() state
*/
XLogReadRecordState readRecordState; /* state machine state */
int recordGotLen; /* amount of current record that has already
* been read */
int recordRemainLen; /* length of current record that remains */
XLogRecPtr recordContRecPtr; /* where the current record continues */
DecodedXLogRecord *decoding; /* record currently being decoded */
/* Buffer to hold error message */
char *errormsg_buf;
bool errormsg_deferred;
};
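The decode buffer described above is circular, but individual records are never split across the wrap point, so space at the end of the buffer is wasted whenever the next record does not fit there. A sketch of just that allocation rule (it elides the read-head bookkeeping that makes the real reader report `XLREAD_FULL` instead of overwriting undelivered records):

```c
#include <assert.h>
#include <stddef.h>

/* "No record straddles the wrap point": allocate from head toward the
 * end of the buffer; if the remainder is too small, wrap to the start
 * and waste the tail. Overflow handling is intentionally omitted. */
typedef struct RingAlloc
{
    char   *buf;
    size_t  size;
    size_t  head;   /* next write offset */
} RingAlloc;

static char *
ring_alloc(RingAlloc *r, size_t len)
{
    char   *result;

    if (len > r->size)
        return NULL;            /* oversized: needs a separate allocation */
    if (r->head + len > r->size)
        r->head = 0;            /* remainder at the end is wasted */
    result = r->buf + r->head;
    r->head += len;
    return result;
}
```

Oversized records that could never fit, even in an empty buffer, are the `oversized` case in `DecodedXLogRecord`: they are allocated separately and freed individually.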
struct XLogFindNextRecordState
{
XLogReaderState *reader_state;
XLogRecPtr targetRecPtr;
XLogRecPtr currRecPtr;
};
/* Report that data is available for decoding. */
static inline void
XLogReaderSetInputData(XLogReaderState *state, int32 len)
{
state->readLen = len;
}
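`XLogReaderSetInputData()` is the caller's half of the `XLREAD_NEED_DATA` handshake: the reader returns that code when it wants more bytes, the caller supplies them and re-invokes. A toy model of the control flow (the "reader" here just counts bytes; it is not the real decoder):

```c
#include <assert.h>

typedef enum { XL_SUCCESS, XL_NEED_DATA } ToyResult;

typedef struct ToyReader
{
    int need;       /* bytes still required to finish the "record" */
    int value;      /* running total, stands in for a decoded record */
    int supplied;   /* set by the caller, cf. XLogReaderSetInputData() */
} ToyReader;

/* Consume whatever the caller supplied; ask for more if unsatisfied. */
static ToyResult
toy_read_record(ToyReader *r)
{
    if (r->supplied > 0)
    {
        r->need -= r->supplied;
        r->value += r->supplied;
        r->supplied = 0;
    }
    return (r->need <= 0) ? XL_SUCCESS : XL_NEED_DATA;
}

/* Caller-side loop: feed fixed-size chunks until the reader succeeds. */
static int
toy_drive(ToyReader *r, int chunk)
{
    while (toy_read_record(r) == XL_NEED_DATA)
        r->supplied = chunk;    /* cf. XLogReaderSetInputData(state, len) */
    return r->value;
}
```

This inversion (reader pulls, caller pushes) is what the revert undoes: the pre-revert reader exits to the caller for data, while the restored interface calls back into `page_read` itself.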
/* Get a new XLogReader */
extern XLogReaderState *XLogReaderAllocate(int wal_segment_size,
const char *waldir,
WALSegmentCleanupCB cleanup_cb);
XLogReaderRoutine *routine,
void *private_data);
extern XLogReaderRoutine *LocalXLogReaderRoutine(void);
/* Free an XLogReader */
extern void XLogReaderFree(XLogReaderState *state);
/* Optionally provide a circular decoding buffer to allow readahead. */
extern void XLogReaderSetDecodeBuffer(XLogReaderState *state,
void *buffer,
size_t size);
/* Position the XLogReader to given record */
extern void XLogBeginRead(XLogReaderState *state, XLogRecPtr RecPtr);
#ifdef FRONTEND
extern XLogFindNextRecordState *InitXLogFindNextRecord(XLogReaderState *reader_state, XLogRecPtr start_ptr);
extern bool XLogFindNextRecord(XLogFindNextRecordState *state);
extern XLogRecPtr XLogFindNextRecord(XLogReaderState *state, XLogRecPtr RecPtr);
#endif /* FRONTEND */
/* Read the next record's header. Returns NULL on end-of-WAL or failure. */
extern XLogReadRecordResult XLogReadRecord(XLogReaderState *state,
XLogRecord **record,
char **errormsg);
/* Read the next decoded record. Returns NULL on end-of-WAL or failure. */
extern XLogReadRecordResult XLogNextRecord(XLogReaderState *state,
DecodedXLogRecord **record,
char **errormsg);
/* Try to read ahead, if there is space in the decoding buffer. */
extern XLogReadRecordResult XLogReadAhead(XLogReaderState *state,
DecodedXLogRecord **record,
char **errormsg);
/* Read the next XLog record. Returns NULL on end-of-WAL or failure */
extern struct XLogRecord *XLogReadRecord(XLogReaderState *state,
char **errormsg);
/* Validate a page */
extern bool XLogReaderValidatePageHeader(XLogReaderState *state,
@ -359,38 +292,30 @@ typedef struct WALReadError
} WALReadError;
extern bool WALRead(XLogReaderState *state,
WALSegmentOpenCB segopenfn, WALSegmentCloseCB sgclosefn,
char *buf, XLogRecPtr startptr, Size count,
TimeLineID tli, WALReadError *errinfo);
/* Functions for decoding an XLogRecord */
extern size_t DecodeXLogRecordRequiredSpace(size_t xl_tot_len);
extern bool DecodeXLogRecord(XLogReaderState *state,
DecodedXLogRecord *decoded,
XLogRecord *record,
XLogRecPtr lsn,
extern bool DecodeXLogRecord(XLogReaderState *state, XLogRecord *record,
char **errmsg);
#define XLogRecGetTotalLen(decoder) ((decoder)->record->header.xl_tot_len)
#define XLogRecGetPrev(decoder) ((decoder)->record->header.xl_prev)
#define XLogRecGetInfo(decoder) ((decoder)->record->header.xl_info)
#define XLogRecGetRmid(decoder) ((decoder)->record->header.xl_rmid)
#define XLogRecGetXid(decoder) ((decoder)->record->header.xl_xid)
#define XLogRecGetOrigin(decoder) ((decoder)->record->record_origin)
#define XLogRecGetTopXid(decoder) ((decoder)->record->toplevel_xid)
#define XLogRecGetData(decoder) ((decoder)->record->main_data)
#define XLogRecGetDataLen(decoder) ((decoder)->record->main_data_len)
#define XLogRecHasAnyBlockRefs(decoder) ((decoder)->record->max_block_id >= 0)
#define XLogRecMaxBlockId(decoder) ((decoder)->record->max_block_id)
#define XLogRecGetBlock(decoder, i) (&(decoder)->record->blocks[(i)])
#define XLogRecGetTotalLen(decoder) ((decoder)->decoded_record->xl_tot_len)
#define XLogRecGetPrev(decoder) ((decoder)->decoded_record->xl_prev)
#define XLogRecGetInfo(decoder) ((decoder)->decoded_record->xl_info)
#define XLogRecGetRmid(decoder) ((decoder)->decoded_record->xl_rmid)
#define XLogRecGetXid(decoder) ((decoder)->decoded_record->xl_xid)
#define XLogRecGetOrigin(decoder) ((decoder)->record_origin)
#define XLogRecGetTopXid(decoder) ((decoder)->toplevel_xid)
#define XLogRecGetData(decoder) ((decoder)->main_data)
#define XLogRecGetDataLen(decoder) ((decoder)->main_data_len)
#define XLogRecHasAnyBlockRefs(decoder) ((decoder)->max_block_id >= 0)
#define XLogRecHasBlockRef(decoder, block_id) \
((decoder)->record->max_block_id >= (block_id) && \
(decoder)->record->blocks[block_id].in_use)
((decoder)->blocks[block_id].in_use)
#define XLogRecHasBlockImage(decoder, block_id) \
((decoder)->record->blocks[block_id].has_image)
((decoder)->blocks[block_id].has_image)
#define XLogRecBlockImageApply(decoder, block_id) \
((decoder)->record->blocks[block_id].apply_image)
((decoder)->blocks[block_id].apply_image)
#ifndef FRONTEND
extern FullTransactionId XLogRecGetFullXid(XLogReaderState *record);
@ -401,8 +326,5 @@ extern char *XLogRecGetBlockData(XLogReaderState *record, uint8 block_id, Size *
extern bool XLogRecGetBlockTag(XLogReaderState *record, uint8 block_id,
RelFileNode *rnode, ForkNumber *forknum,
BlockNumber *blknum);
extern bool XLogRecGetRecentBuffer(XLogReaderState *record, uint8 block_id,
RelFileNode *rnode, ForkNumber *forknum,
BlockNumber *blknum, Buffer *recent_buffer);
#endif /* XLOGREADER_H */
@ -42,13 +42,14 @@ extern XLogRedoAction XLogReadBufferForRedoExtended(XLogReaderState *record,
Buffer *buf);
extern Buffer XLogReadBufferExtended(RelFileNode rnode, ForkNumber forknum,
BlockNumber blkno, ReadBufferMode mode,
Buffer recent_buffer);
BlockNumber blkno, ReadBufferMode mode);
extern Relation CreateFakeRelcacheEntry(RelFileNode rnode);
extern void FreeFakeRelcacheEntry(Relation fakerel);
extern bool read_local_xlog_page(XLogReaderState *state);
extern int read_local_xlog_page(XLogReaderState *state,
XLogRecPtr targetPagePtr, int reqLen,
XLogRecPtr targetRecPtr, char *cur_page);
extern void wal_segment_open(XLogReaderState *state,
XLogSegNo nextSegNo,
TimeLineID *tli_p);
@ -53,6 +53,6 @@
*/
/* yyyymmddN */
#define CATALOG_VERSION_NO 202105051
#define CATALOG_VERSION_NO 202105091
#endif
@ -6287,14 +6287,6 @@
prorettype => 'text', proargtypes => '',
prosrc => 'pg_get_wal_replay_pause_state' },
{ oid => '9085', descr => 'statistics: information about WAL prefetching',
proname => 'pg_stat_get_prefetch_recovery', prorows => '1', provolatile => 'v',
proretset => 't', prorettype => 'record', proargtypes => '',
proallargtypes => '{timestamptz,int8,int8,int8,int8,int8,int4,int4,float4,float4}',
proargmodes => '{o,o,o,o,o,o,o,o,o,o}',
proargnames => '{stats_reset,prefetch,skip_hit,skip_new,skip_fpw,skip_seq,distance,queue_depth,avg_distance,avg_queue_depth}',
prosrc => 'pg_stat_get_prefetch_recovery' },
{ oid => '2621', descr => 'reload configuration files',
proname => 'pg_reload_conf', provolatile => 'v', prorettype => 'bool',
proargtypes => '', prosrc => 'pg_reload_conf' },
@ -74,7 +74,6 @@ typedef enum StatMsgType
PGSTAT_MTYPE_BGWRITER,
PGSTAT_MTYPE_WAL,
PGSTAT_MTYPE_SLRU,
PGSTAT_MTYPE_RECOVERYPREFETCH,
PGSTAT_MTYPE_FUNCSTAT,
PGSTAT_MTYPE_FUNCPURGE,
PGSTAT_MTYPE_RECOVERYCONFLICT,
@ -198,19 +197,6 @@ typedef struct PgStat_TableXactStatus
struct PgStat_TableXactStatus *next; /* next of same subxact */
} PgStat_TableXactStatus;
/*
* Recovery prefetching statistics persisted on disk by pgstat.c, but kept in
* shared memory by xlogprefetch.c.
*/
typedef struct PgStat_RecoveryPrefetchStats
{
PgStat_Counter prefetch;
PgStat_Counter skip_hit;
PgStat_Counter skip_new;
PgStat_Counter skip_fpw;
PgStat_Counter skip_seq;
TimestampTz stat_reset_timestamp;
} PgStat_RecoveryPrefetchStats;
/* ------------------------------------------------------------
* Message formats follow
@ -553,15 +539,6 @@ typedef struct PgStat_MsgReplSlot
PgStat_Counter m_total_bytes;
} PgStat_MsgReplSlot;
/* ----------
* PgStat_MsgRecoveryPrefetch Sent by XLogPrefetch to save statistics.
* ----------
*/
typedef struct PgStat_MsgRecoveryPrefetch
{
PgStat_MsgHdr m_hdr;
PgStat_RecoveryPrefetchStats m_stats;
} PgStat_MsgRecoveryPrefetch;
/* ----------
* PgStat_MsgRecoveryConflict Sent by the backend upon recovery conflict
@ -725,7 +702,6 @@ typedef union PgStat_Msg
PgStat_MsgBgWriter msg_bgwriter;
PgStat_MsgWal msg_wal;
PgStat_MsgSLRU msg_slru;
PgStat_MsgRecoveryPrefetch msg_recoveryprefetch;
PgStat_MsgFuncstat msg_funcstat;
PgStat_MsgFuncpurge msg_funcpurge;
PgStat_MsgRecoveryConflict msg_recoveryconflict;
@ -1115,7 +1091,6 @@ extern void pgstat_twophase_postabort(TransactionId xid, uint16 info,
extern void pgstat_send_archiver(const char *xlog, bool failed);
extern void pgstat_send_bgwriter(void);
extern void pgstat_send_recoveryprefetch(PgStat_RecoveryPrefetchStats *stats);
extern void pgstat_report_wal(void);
extern bool pgstat_send_wal(bool force);
@ -1132,7 +1107,6 @@ extern PgStat_GlobalStats *pgstat_fetch_global(void);
extern PgStat_WalStats *pgstat_fetch_stat_wal(void);
extern PgStat_SLRUStats *pgstat_fetch_slru(void);
extern PgStat_StatReplSlotEntry *pgstat_fetch_replslot(NameData slotname);
extern PgStat_RecoveryPrefetchStats *pgstat_fetch_recoveryprefetch(void);
extern void pgstat_count_slru_page_zeroed(int slru_idx);
extern void pgstat_count_slru_page_hit(int slru_idx);
@ -29,8 +29,6 @@ typedef void (*LogicalOutputPluginWriterUpdateProgress) (struct LogicalDecodingC
TransactionId xid
);
typedef bool (*LogicalDecodingXLogPageReadCB)(XLogReaderState *ctx);
typedef struct LogicalDecodingContext
{
/* memory context this is all allocated in */
@ -41,7 +39,6 @@ typedef struct LogicalDecodingContext
/* infrastructure pieces for decoding */
XLogReaderState *reader;
LogicalDecodingXLogPageReadCB page_read;
struct ReorderBuffer *reorder;
struct SnapBuild *snapshot_builder;
@ -108,16 +105,14 @@ extern LogicalDecodingContext *CreateInitDecodingContext(const char *plugin,
List *output_plugin_options,
bool need_full_snapshot,
XLogRecPtr restart_lsn,
LogicalDecodingXLogPageReadCB page_read,
WALSegmentCleanupCB cleanup_cb,
XLogReaderRoutine *xl_routine,
LogicalOutputPluginWriterPrepareWrite prepare_write,
LogicalOutputPluginWriterWrite do_write,
LogicalOutputPluginWriterUpdateProgress update_progress);
extern LogicalDecodingContext *CreateDecodingContext(XLogRecPtr start_lsn,
List *output_plugin_options,
bool fast_forward,
LogicalDecodingXLogPageReadCB page_read,
WALSegmentCleanupCB cleanup_cb,
XLogReaderRoutine *xl_routine,
LogicalOutputPluginWriterPrepareWrite prepare_write,
LogicalOutputPluginWriterWrite do_write,
LogicalOutputPluginWriterUpdateProgress update_progress);
@ -442,8 +442,4 @@ extern void assign_search_path(const char *newval, void *extra);
extern bool check_wal_buffers(int *newval, void **extra, GucSource source);
extern void assign_xlog_sync_method(int new_sync_method, void *extra);
/* in access/transam/xlogprefetch.c */
extern void assign_recovery_prefetch(bool new_value, void *extra);
extern void assign_recovery_prefetch_fpw(bool new_value, void *extra);
#endif /* GUC_H */
@ -67,7 +67,6 @@ enum config_group
WAL_SETTINGS,
WAL_CHECKPOINTS,
WAL_ARCHIVING,
WAL_RECOVERY,
WAL_ARCHIVE_RECOVERY,
WAL_RECOVERY_TARGET,
REPLICATION_SENDING,
@ -1879,17 +1879,6 @@ pg_stat_gssapi| SELECT s.pid,
s.gss_enc AS encrypted
FROM pg_stat_get_activity(NULL::integer) s(datid, pid, usesysid, application_name, state, query, wait_event_type, wait_event, xact_start, query_start, backend_start, state_change, client_addr, client_hostname, client_port, backend_xid, backend_xmin, backend_type, ssl, sslversion, sslcipher, sslbits, ssl_client_dn, ssl_client_serial, ssl_issuer_dn, gss_auth, gss_princ, gss_enc, leader_pid, query_id)
WHERE (s.client_port IS NOT NULL);
pg_stat_prefetch_recovery| SELECT s.stats_reset,
s.prefetch,
s.skip_hit,
s.skip_new,
s.skip_fpw,
s.skip_seq,
s.distance,
s.queue_depth,
s.avg_distance,
s.avg_queue_depth
FROM pg_stat_get_prefetch_recovery() s(stats_reset, prefetch, skip_hit, skip_new, skip_fpw, skip_seq, distance, queue_depth, avg_distance, avg_queue_depth);
pg_stat_progress_analyze| SELECT s.pid,
s.datid,
d.datname,
@ -2804,10 +2804,6 @@ XLogPageHeader
XLogPageHeaderData
XLogPageReadCB
XLogPageReadPrivate
XLogPrefetcher
XLogPrefetcherFilter
XLogPrefetchState
XLogPrefetchStats
XLogReaderRoutine
XLogReaderState
XLogRecData