From 5860b22c455ff197d568dabcea591d2afc769f34 Mon Sep 17 00:00:00 2001
From: Michael Paquier
Date: Mon, 18 Jun 2018 10:43:27 +0900
Subject: [PATCH] Prevent hard failures of standbys caused by recycled WAL
 segments

When a standby's WAL receiver stops reading WAL from a WAL stream, it
writes data to the current WAL segment without first zeroing the page
being written to, which can cause the WAL reader to read junk data
left over from a past recycled segment and then try to assemble a
record from it.  While the sanity checks in place provide most of the
protection needed, in some rare circumstances, with chances increasing
when a record header crosses a page boundary, the startup process can
fail hard on an allocation failure, as follows:
FATAL: invalid memory alloc request size XXX

This is confusing for the user and also unhelpful, as in the worst
case it requires a manual restart of the instance, potentially
impacting the availability of the cluster, and it makes the WAL data
look as if it were corrupted.

The chances of seeing such failures are higher if the connection
between the standby and its root node is unstable, causing WAL pages
to be only partially written.

A couple of approaches have been discussed, like zeroing new WAL pages
within the WAL receiver itself, but this has the disadvantage of
impacting the performance of existing instances, as it breaks the
sequential writes done by the WAL receiver.

This commit deals with the problem using a simpler approach, which has
no performance impact and does not reduce the detection of the
problem: if a record with a length higher than 1GB is found in a
backend, do not try any allocation and report a soft failure, which
forces the standby to retry reading WAL.  The allocation call may
still succeed and allocate an unnecessary amount of memory, but
follow-up checks on the record would fail anyway, making the
allocation short-lived.

This patch owes a great deal to Tsunakawa Takayuki for first reporting
the failure and then discussing a couple of potential approaches to
the problem.

Backpatch down to 9.5, which is where palloc_extended was introduced.

Reported-by: Tsunakawa Takayuki
Reviewed-by: Tsunakawa Takayuki
Author: Michael Paquier
Discussion: https://postgr.es/m/0A3221C70F24FB45833433255569204D1F8B57AD@G01JPEXMBYT05
---
 src/backend/access/transam/xlogreader.c | 23 +++++++++++++++++++++++
 1 file changed, 23 insertions(+)
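
As an aside for readers unfamiliar with the backend's allocation
limit, below is a minimal, self-contained sketch of what the new guard
buys us.  The MaxAllocSize value and the AllocSizeIsValid() macro
mirror src/include/utils/memutils.h; allocate_recordbuf_sketch() and
its use of malloc() are illustrative stand-ins only, not the backend's
actual allocator:

#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

/* Mirrors MaxAllocSize and AllocSizeIsValid() in utils/memutils.h. */
#define MaxAllocSize	((size_t) 0x3fffffff)	/* 1 gigabyte - 1 */
#define AllocSizeIsValid(size)	((size_t) (size) <= MaxAllocSize)

/*
 * Illustrative stand-in for allocate_recordbuf(): reject sizes beyond
 * the backend's 1GB palloc limit with a soft "false" instead of
 * letting the allocator throw a hard error, so the caller can retry
 * reading WAL.  malloc() here plays the role of palloc_extended(...,
 * MCXT_ALLOC_NO_OOM), which likewise returns NULL on out-of-memory
 * instead of failing hard.
 */
static bool
allocate_recordbuf_sketch(char **buf, size_t newSize)
{
	if (!AllocSizeIsValid(newSize))
		return false;			/* soft failure, caller retries */

	free(*buf);
	*buf = malloc(newSize);
	return *buf != NULL;
}

int
main(void)
{
	char	   *buf = NULL;

	/* Junk length read from a recycled segment: rejected softly. */
	printf("3GB: %s\n", allocate_recordbuf_sketch(&buf, (size_t) 3 << 30)
		   ? "ok" : "soft failure");
	/* Sane record length: allocation proceeds. */
	printf("8kB: %s\n", allocate_recordbuf_sketch(&buf, 8192)
		   ? "ok" : "soft failure");
	free(buf);
	return 0;
}

The real check is compiled only for the backend, which is why the
patch includes utils/memutils.h under #ifndef FRONTEND: frontend users
of xlogreader.c allocate with plain malloc()-based wrappers and do not
have the 1GB palloc limit.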

diff --git a/src/backend/access/transam/xlogreader.c b/src/backend/access/transam/xlogreader.c
index f950c5f46e..8e0d17d0f5 100644
--- a/src/backend/access/transam/xlogreader.c
+++ b/src/backend/access/transam/xlogreader.c
@@ -25,6 +25,10 @@
 #include "common/pg_lzcompress.h"
 #include "replication/origin.h"
 
+#ifndef FRONTEND
+#include "utils/memutils.h"
+#endif
+
 static bool allocate_recordbuf(XLogReaderState *state, uint32 reclength);
 
 static bool ValidXLogRecordHeader(XLogReaderState *state, XLogRecPtr RecPtr,
@@ -158,6 +162,25 @@ allocate_recordbuf(XLogReaderState *state, uint32 reclength)
 	newSize += XLOG_BLCKSZ - (newSize % XLOG_BLCKSZ);
 	newSize = Max(newSize, 5 * Max(BLCKSZ, XLOG_BLCKSZ));
 
+#ifndef FRONTEND
+
+	/*
+	 * In very unlucky circumstances, random data read from a recycled
+	 * segment can cause this routine to be called with a size causing a
+	 * hard failure at allocation.  For a standby, this would cause the
+	 * instance to stop suddenly, preventing it from retrying to fetch WAL
+	 * from one of its sources, which could allow it to move on with replay
+	 * without a manual restart.  If the data comes from a past recycled
+	 * segment and is still valid, the allocation may succeed, but record
+	 * checks are going to fail anyway, making this short-lived.  If the
+	 * allocation fails because of a memory shortage, this is not a hard
+	 * failure either, per the guarantee given by MCXT_ALLOC_NO_OOM.
+	 */
+	if (!AllocSizeIsValid(newSize))
+		return false;
+
+#endif
+
 	if (state->readRecordBuf)
 		pfree(state->readRecordBuf);
 	state->readRecordBuf =