/*-------------------------------------------------------------------------
 *
 * xlogutils.c
 *
 * PostgreSQL transaction log manager utility routines
 *
 * This file contains support routines that are used by XLOG replay functions.
 * None of this code is used during normal system operation.
 *
 *
 * Portions Copyright (c) 1996-2016, PostgreSQL Global Development Group
 * Portions Copyright (c) 1994, Regents of the University of California
 *
 * src/backend/access/transam/xlogutils.c
 *
 *-------------------------------------------------------------------------
 */
#include "postgres.h"

#include <unistd.h>

#include "access/xlog.h"
#include "access/xlog_internal.h"
#include "access/xlogutils.h"
#include "catalog/catalog.h"
#include "miscadmin.h"
#include "storage/smgr.h"
#include "utils/guc.h"
#include "utils/hsearch.h"
#include "utils/rel.h"


/*
 * During XLOG replay, we may see XLOG records for incremental updates of
 * pages that no longer exist, because their relation was later dropped or
 * truncated.  (Note: this is only possible when full_page_writes = OFF,
 * since when it's ON, the first reference we see to a page should always
 * be a full-page rewrite not an incremental update.)  Rather than simply
 * ignoring such records, we make a note of the referenced page, and then
 * complain if we don't actually see a drop or truncate covering the page
 * later in replay.
 */
typedef struct xl_invalid_page_key
{
	RelFileNode node;			/* the relation */
	ForkNumber	forkno;			/* the fork number */
	BlockNumber blkno;			/* the page */
} xl_invalid_page_key;

typedef struct xl_invalid_page
{
	xl_invalid_page_key key;	/* hash key ... must be first */
	bool		present;		/* page existed but contained zeroes */
} xl_invalid_page;

static HTAB *invalid_page_tab = NULL;


/* Report a reference to an invalid page */
static void
report_invalid_page(int elevel, RelFileNode node, ForkNumber forkno,
					BlockNumber blkno, bool present)
{
	char	   *path = relpathperm(node, forkno);

	if (present)
		elog(elevel, "page %u of relation %s is uninitialized",
			 blkno, path);
	else
		elog(elevel, "page %u of relation %s does not exist",
			 blkno, path);
	pfree(path);
}

/* Log a reference to an invalid page */
static void
log_invalid_page(RelFileNode node, ForkNumber forkno, BlockNumber blkno,
				 bool present)
{
	xl_invalid_page_key key;
	xl_invalid_page *hentry;
	bool		found;

	/*
	 * Once recovery has reached a consistent state, the invalid-page table
	 * should be empty and remain so.  If a reference to an invalid page is
	 * found after consistency is reached, PANIC immediately.  This might
	 * seem aggressive, but it's better than letting the invalid reference
	 * linger in the hash table until the end of recovery and PANIC there,
	 * which might come only much later if this is a standby server.
	 */
	if (reachedConsistency)
	{
		report_invalid_page(WARNING, node, forkno, blkno, present);
		elog(PANIC, "WAL contains references to invalid pages");
	}

	/*
	 * Log references to invalid pages at DEBUG1 level.  This allows some
	 * tracing of the cause (note the elog context mechanism will tell us
	 * something about the XLOG record that generated the reference).
	 */
	if (log_min_messages <= DEBUG1 || client_min_messages <= DEBUG1)
		report_invalid_page(DEBUG1, node, forkno, blkno, present);

	if (invalid_page_tab == NULL)
	{
		/* create hash table when first needed */
		HASHCTL		ctl;

		memset(&ctl, 0, sizeof(ctl));
		ctl.keysize = sizeof(xl_invalid_page_key);
		ctl.entrysize = sizeof(xl_invalid_page);

		invalid_page_tab = hash_create("XLOG invalid-page table",
									   100,
									   &ctl,
									   HASH_ELEM | HASH_BLOBS);
	}

	/* we currently assume xl_invalid_page_key contains no padding */
	key.node = node;
	key.forkno = forkno;
	key.blkno = blkno;
	hentry = (xl_invalid_page *)
		hash_search(invalid_page_tab, (void *) &key, HASH_ENTER, &found);

	if (!found)
	{
		/* hash_search already filled in the key */
		hentry->present = present;
	}
	else
	{
		/* repeat reference ... leave "present" as it was */
	}
}

/* Forget any invalid pages >= minblkno, because they've been dropped */
static void
forget_invalid_pages(RelFileNode node, ForkNumber forkno, BlockNumber minblkno)
{
	HASH_SEQ_STATUS status;
	xl_invalid_page *hentry;

	if (invalid_page_tab == NULL)
		return;					/* nothing to do */

	hash_seq_init(&status, invalid_page_tab);

	while ((hentry = (xl_invalid_page *) hash_seq_search(&status)) != NULL)
	{
		if (RelFileNodeEquals(hentry->key.node, node) &&
			hentry->key.forkno == forkno &&
			hentry->key.blkno >= minblkno)
		{
			if (log_min_messages <= DEBUG2 || client_min_messages <= DEBUG2)
			{
				char	   *path = relpathperm(hentry->key.node, forkno);

				elog(DEBUG2, "page %u of relation %s has been dropped",
					 hentry->key.blkno, path);
				pfree(path);
			}

			if (hash_search(invalid_page_tab,
							(void *) &hentry->key,
							HASH_REMOVE, NULL) == NULL)
				elog(ERROR, "hash table corrupted");
		}
	}
}

/* Forget any invalid pages in a whole database */
static void
forget_invalid_pages_db(Oid dbid)
{
	HASH_SEQ_STATUS status;
	xl_invalid_page *hentry;

	if (invalid_page_tab == NULL)
		return;					/* nothing to do */

	hash_seq_init(&status, invalid_page_tab);

	while ((hentry = (xl_invalid_page *) hash_seq_search(&status)) != NULL)
	{
		if (hentry->key.node.dbNode == dbid)
		{
			if (log_min_messages <= DEBUG2 || client_min_messages <= DEBUG2)
			{
				char	   *path = relpathperm(hentry->key.node,
											   hentry->key.forkno);

				elog(DEBUG2, "page %u of relation %s has been dropped",
					 hentry->key.blkno, path);
				pfree(path);
			}

			if (hash_search(invalid_page_tab,
							(void *) &hentry->key,
							HASH_REMOVE, NULL) == NULL)
				elog(ERROR, "hash table corrupted");
		}
	}
}

/* Are there any unresolved references to invalid pages? */
bool
XLogHaveInvalidPages(void)
{
	if (invalid_page_tab != NULL &&
		hash_get_num_entries(invalid_page_tab) > 0)
		return true;
	return false;
}

/* Complain about any remaining invalid-page entries */
void
XLogCheckInvalidPages(void)
{
	HASH_SEQ_STATUS status;
	xl_invalid_page *hentry;
	bool		foundone = false;

	if (invalid_page_tab == NULL)
		return;					/* nothing to do */

	hash_seq_init(&status, invalid_page_tab);

	/*
	 * Our strategy is to emit WARNING messages for all remaining entries and
	 * only PANIC after we've dumped all the available info.
	 */
	while ((hentry = (xl_invalid_page *) hash_seq_search(&status)) != NULL)
	{
		report_invalid_page(WARNING, hentry->key.node, hentry->key.forkno,
							hentry->key.blkno, hentry->present);
		foundone = true;
	}

	if (foundone)
		elog(PANIC, "WAL contains references to invalid pages");

	hash_destroy(invalid_page_tab);
	invalid_page_tab = NULL;
}

/*
 * XLogReadBufferForRedo
 *		Read a page during XLOG replay
 *
 * Reads a block referenced by a WAL record into shared buffer cache, and
 * determines what needs to be done to redo the changes to it.  If the WAL
 * record includes a full-page image of the page, it is restored.
 *
 * 'lsn' is the LSN of the record being replayed.  It is compared with the
 * page's LSN to determine if the record has already been replayed.
 * 'block_id' is the ID number the block was registered with, when the WAL
 * record was created.
 *
 * Returns one of the following:
 *
 *	BLK_NEEDS_REDO - changes from the WAL record need to be applied
 *	BLK_DONE - block doesn't need replaying
 *	BLK_RESTORED - block was restored from a full-page image included in
 *		the record
 *	BLK_NOTFOUND - block was not found (because it was truncated away by
 *		an operation later in the WAL stream)
 *
 * On return, the buffer is locked in exclusive-mode, and returned in *buf.
 * Note that the buffer is locked and returned even if it doesn't need
 * replaying.  (Getting the buffer lock is not really necessary during
 * single-process crash recovery, but some subroutines such as
 * MarkBufferDirty will complain if we don't have the lock.  In hot standby
 * mode it's definitely necessary.)
 *
 * Note: when a backup block is available in XLOG, we restore it
 * unconditionally, even if the page in the database appears newer.  This is
 * to protect ourselves against database pages that were partially or
 * incorrectly written during a crash.  We assume that the XLOG data must be
 * good because it has passed a CRC check, while the database page might not
 * be.  This will force us to replay all subsequent modifications of the page
 * that appear in XLOG, rather than possibly ignoring them as already
 * applied, but that's not a huge drawback.
 */
XLogRedoAction
XLogReadBufferForRedo(XLogReaderState *record, uint8 block_id,
					  Buffer *buf)
{
	return XLogReadBufferForRedoExtended(record, block_id, RBM_NORMAL,
										 false, buf);
}

/*
 * Pin and lock a buffer referenced by a WAL record, for the purpose of
 * re-initializing it.
 */
Buffer
XLogInitBufferForRedo(XLogReaderState *record, uint8 block_id)
{
	Buffer		buf;

	XLogReadBufferForRedoExtended(record, block_id, RBM_ZERO_AND_LOCK, false,
								  &buf);
	return buf;
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* XLogReadBufferForRedoExtended
|
|
|
|
* Like XLogReadBufferForRedo, but with extra options.
|
|
|
|
*
|
Fix race condition between hot standby and restoring a full-page image.
There was a window in RestoreBackupBlock where a page would be zeroed out,
but not yet locked. If a backend pinned and locked the page in that window,
it saw the zeroed page instead of the old page or new page contents, which
could lead to missing rows in a result set, or errors.
To fix, replace RBM_ZERO with RBM_ZERO_AND_LOCK, which atomically pins,
zeroes, and locks the page, if it's not in the buffer cache already.
In stable branches, the old RBM_ZERO constant is renamed to RBM_DO_NOT_USE,
to avoid breaking any 3rd party extensions that might use RBM_ZERO. More
importantly, this avoids renumbering the other enum values, which would
cause even bigger confusion in extensions that use ReadBufferExtended, but
haven't been recompiled.
Backpatch to all supported versions; this has been racy since hot standby
was introduced.
2014-11-13 18:47:44 +01:00
|
|
|
* In RBM_ZERO_* modes, if the page doesn't exist, the relation is extended
|
|
|
|
* with all-zeroes pages up to the referenced block number. In
|
|
|
|
* RBM_ZERO_AND_LOCK and RBM_ZERO_AND_CLEANUP_LOCK modes, the return value
|
|
|
|
* is always BLK_NEEDS_REDO.
|
|
|
|
*
|
|
|
|
* (The RBM_ZERO_AND_CLEANUP_LOCK mode is redundant with the get_cleanup_lock
|
|
|
|
* parameter. Do not use an inconsistent combination!)
|
2014-08-13 14:39:08 +02:00
|
|
|
*
|
|
|
|
* If 'get_cleanup_lock' is true, a "cleanup lock" is acquired on the buffer
|
|
|
|
* using LockBufferForCleanup(), instead of a regular exclusive lock.
|
|
|
|
*/
|
|
|
|
XLogRedoAction
|
Revamp the WAL record format.
Each WAL record now carries information about the modified relation and
block(s) in a standardized format. That makes it easier to write tools that
need that information, like pg_rewind, prefetching the blocks to speed up
recovery, etc.
There's a whole new API for building WAL records, replacing the XLogRecData
chains used previously. The new API consists of XLogRegister* functions,
which are called for each buffer and chunk of data that is added to the
record. The new API also gives more control over when a full-page image is
written, by passing flags to the XLogRegisterBuffer function.
This also simplifies the XLogReadBufferForRedo() calls. The function can dig
the relation and block number from the WAL record, so they no longer need to
be passed as arguments.
For the convenience of redo routines, XLogReader now disects each WAL record
after reading it, copying the main data part and the per-block data into
MAXALIGNed buffers. The data chunks are not aligned within the WAL record,
but the redo routines can assume that the pointers returned by XLogRecGet*
functions are. Redo routines are now passed the XLogReaderState, which
contains the record in the already-disected format, instead of the plain
XLogRecord.
The new record format also makes the fixed size XLogRecord header smaller,
by removing the xl_len field. The length of the "main data" portion is now
stored at the end of the WAL record, and there's a separate header after
XLogRecord for it. The alignment padding at the end of XLogRecord is also
removed. This compansates for the fact that the new format would otherwise
be more bulky than the old format.
Reviewed by Andres Freund, Amit Kapila, Michael Paquier, Alvaro Herrera,
Fujii Masao.
2014-11-20 16:56:26 +01:00
|
|
|
XLogReadBufferForRedoExtended(XLogReaderState *record,
|
|
|
|
uint8 block_id,
|
2014-08-13 14:39:08 +02:00
|
|
|
ReadBufferMode mode, bool get_cleanup_lock,
|
|
|
|
Buffer *buf)
|
|
|
|
{
|
Revamp the WAL record format.
Each WAL record now carries information about the modified relation and
block(s) in a standardized format. That makes it easier to write tools that
need that information, like pg_rewind, prefetching the blocks to speed up
recovery, etc.
There's a whole new API for building WAL records, replacing the XLogRecData
chains used previously. The new API consists of XLogRegister* functions,
which are called for each buffer and chunk of data that is added to the
record. The new API also gives more control over when a full-page image is
written, by passing flags to the XLogRegisterBuffer function.
This also simplifies the XLogReadBufferForRedo() calls. The function can dig
the relation and block number from the WAL record, so they no longer need to
be passed as arguments.
For the convenience of redo routines, XLogReader now disects each WAL record
after reading it, copying the main data part and the per-block data into
MAXALIGNed buffers. The data chunks are not aligned within the WAL record,
but the redo routines can assume that the pointers returned by XLogRecGet*
functions are. Redo routines are now passed the XLogReaderState, which
contains the record in the already-disected format, instead of the plain
XLogRecord.
The new record format also makes the fixed size XLogRecord header smaller,
by removing the xl_len field. The length of the "main data" portion is now
stored at the end of the WAL record, and there's a separate header after
XLogRecord for it. The alignment padding at the end of XLogRecord is also
removed. This compansates for the fact that the new format would otherwise
be more bulky than the old format.
Reviewed by Andres Freund, Amit Kapila, Michael Paquier, Alvaro Herrera,
Fujii Masao.
2014-11-20 16:56:26 +01:00
|
|
|
XLogRecPtr lsn = record->EndRecPtr;
|
|
|
|
RelFileNode rnode;
|
|
|
|
ForkNumber forknum;
|
|
|
|
BlockNumber blkno;
|
|
|
|
Page page;
|
2015-07-20 15:02:28 +02:00
|
|
|
bool zeromode;
|
|
|
|
bool willinit;
|
Revamp the WAL record format.
Each WAL record now carries information about the modified relation and
block(s) in a standardized format. That makes it easier to write tools that
need that information, like pg_rewind, prefetching the blocks to speed up
recovery, etc.
There's a whole new API for building WAL records, replacing the XLogRecData
chains used previously. The new API consists of XLogRegister* functions,
which are called for each buffer and chunk of data that is added to the
record. The new API also gives more control over when a full-page image is
written, by passing flags to the XLogRegisterBuffer function.
This also simplifies the XLogReadBufferForRedo() calls. The function can dig
the relation and block number from the WAL record, so they no longer need to
be passed as arguments.
For the convenience of redo routines, XLogReader now dissects each WAL record
after reading it, copying the main data part and the per-block data into
MAXALIGNed buffers. The data chunks are not aligned within the WAL record,
but the redo routines can assume that the pointers returned by XLogRecGet*
functions are. Redo routines are now passed the XLogReaderState, which
contains the record in the already-dissected format, instead of the plain
XLogRecord.
The new record format also makes the fixed-size XLogRecord header smaller,
by removing the xl_len field. The length of the "main data" portion is now
stored at the end of the WAL record, and there's a separate header after
XLogRecord for it. The alignment padding at the end of XLogRecord is also
removed. This compensates for the fact that the new format would otherwise
be more bulky than the old format.
Reviewed by Andres Freund, Amit Kapila, Michael Paquier, Alvaro Herrera,
Fujii Masao.
2014-11-20 16:56:26 +01:00
|
|
|
|
|
|
|
if (!XLogRecGetBlockTag(record, block_id, &rnode, &forknum, &blkno))
|
|
|
|
{
|
|
|
|
/* Caller specified a bogus block_id */
|
|
|
|
elog(PANIC, "failed to locate backup block with ID %d", block_id);
|
|
|
|
}
|
|
|
|
|
2015-07-20 15:02:28 +02:00
|
|
|
/*
|
|
|
|
* Make sure that if the block is marked with WILL_INIT, the caller is
|
|
|
|
* going to initialize it. And vice versa.
|
|
|
|
*/
|
|
|
|
zeromode = (mode == RBM_ZERO_AND_LOCK || mode == RBM_ZERO_AND_CLEANUP_LOCK);
|
|
|
|
willinit = (record->blocks[block_id].flags & BKPBLOCK_WILL_INIT) != 0;
|
|
|
|
if (willinit && !zeromode)
|
|
|
|
elog(PANIC, "block with WILL_INIT flag in WAL record must be zeroed by redo routine");
|
|
|
|
if (!willinit && zeromode)
|
|
|
|
elog(PANIC, "block to be initialized in redo routine must be marked with WILL_INIT flag in the WAL record");
|
|
|
|
|
|
|
|
/* If it's a full-page image, restore it. */
|
|
|
|
if (XLogRecHasBlockImage(record, block_id))
|
2014-08-13 14:39:08 +02:00
|
|
|
{
|
|
|
|
*buf = XLogReadBufferExtended(rnode, forknum, blkno,
|
|
|
|
get_cleanup_lock ? RBM_ZERO_AND_CLEANUP_LOCK : RBM_ZERO_AND_LOCK);
|
2016-04-20 15:31:19 +02:00
|
|
|
page = BufferGetPage(*buf);
|
|
|
|
if (!RestoreBlockImage(record, block_id, page))
|
|
|
|
elog(ERROR, "failed to restore block image");
|
|
|
|
|
|
|
|
/*
|
|
|
|
* The page may be uninitialized. If so, we can't set the LSN because
|
|
|
|
* that would corrupt the page.
|
|
|
|
*/
|
|
|
|
if (!PageIsNew(page))
|
|
|
|
{
|
|
|
|
PageSetLSN(page, lsn);
|
|
|
|
}
|
|
|
|
|
|
|
|
MarkBufferDirty(*buf);
|
|
|
|
|
2015-12-10 16:25:12 +01:00
|
|
|
/*
|
|
|
|
* At the end of crash recovery the init forks of unlogged relations
|
|
|
|
* are copied, without going through shared buffers. So we need to
|
|
|
|
* force the on-disk state of init forks to always be in sync with the
|
|
|
|
* state in shared buffers.
|
|
|
|
*/
|
|
|
|
if (forknum == INIT_FORKNUM)
|
|
|
|
FlushOneBuffer(*buf);
|
|
|
|
|
2014-08-13 14:39:08 +02:00
|
|
|
return BLK_RESTORED;
|
|
|
|
}
|
|
|
|
else
|
|
|
|
{
|
|
|
|
*buf = XLogReadBufferExtended(rnode, forknum, blkno, mode);
|
2014-08-13 14:39:08 +02:00
|
|
|
if (BufferIsValid(*buf))
|
|
|
|
{
|
Fix race condition between hot standby and restoring a full-page image.
There was a window in RestoreBackupBlock where a page would be zeroed out,
but not yet locked. If a backend pinned and locked the page in that window,
it saw the zeroed page instead of the old page or new page contents, which
could lead to missing rows in a result set, or errors.
To fix, replace RBM_ZERO with RBM_ZERO_AND_LOCK, which atomically pins,
zeroes, and locks the page, if it's not in the buffer cache already.
In stable branches, the old RBM_ZERO constant is renamed to RBM_DO_NOT_USE,
to avoid breaking any 3rd party extensions that might use RBM_ZERO. More
importantly, this avoids renumbering the other enum values, which would
cause even bigger confusion in extensions that use ReadBufferExtended, but
haven't been recompiled.
Backpatch to all supported versions; this has been racy since hot standby
was introduced.
2014-11-13 18:47:44 +01:00
|
|
|
if (mode != RBM_ZERO_AND_LOCK && mode != RBM_ZERO_AND_CLEANUP_LOCK)
|
|
|
|
{
|
|
|
|
if (get_cleanup_lock)
|
|
|
|
LockBufferForCleanup(*buf);
|
|
|
|
else
|
|
|
|
LockBuffer(*buf, BUFFER_LOCK_EXCLUSIVE);
|
|
|
|
}
|
2016-04-20 15:31:19 +02:00
|
|
|
if (lsn <= PageGetLSN(BufferGetPage(*buf)))
|
2014-08-13 14:39:08 +02:00
|
|
|
return BLK_DONE;
|
|
|
|
else
|
|
|
|
return BLK_NEEDS_REDO;
|
|
|
|
}
|
|
|
|
else
|
|
|
|
return BLK_NOTFOUND;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2000-10-20 13:01:21 +02:00
|
|
|
/*
|
2008-11-03 16:10:17 +01:00
|
|
|
* XLogReadBufferExtended
|
2006-03-29 23:17:39 +02:00
|
|
|
* Read a page during XLOG replay
|
2000-10-20 13:01:21 +02:00
|
|
|
*
|
2009-01-20 19:59:37 +01:00
|
|
|
 * This is functionally comparable to ReadBufferExtended. There are some
|
|
|
|
* differences in the behavior wrt. the "mode" argument:
|
2006-03-29 23:17:39 +02:00
|
|
|
*
|
Unite ReadBufferWithFork, ReadBufferWithStrategy, and ZeroOrReadBuffer
functions into one ReadBufferExtended function, that takes the strategy
and mode as argument. There are three modes: RBM_NORMAL, which is the default
used by plain ReadBuffer(), RBM_ZERO, which replaces ZeroOrReadBuffer, and
a new mode RBM_ZERO_ON_ERROR, which allows callers to read corrupt pages
without throwing an error. The FSM needs the new mode to recover from
corrupt pages, which could happen if we crash after extending an FSM file,
and the new page is "torn".
Add fork number to some error messages in bufmgr.c, that still lacked it.
2008-10-31 16:05:00 +01:00
|
|
|
* In RBM_NORMAL mode, if the page doesn't exist, or contains all-zeroes, we
|
|
|
|
* return InvalidBuffer. In this case the caller should silently skip the
|
|
|
|
* update on this page. (In this situation, we expect that the page was later
|
|
|
|
* dropped or truncated. If we don't see evidence of that later in the WAL
|
|
|
|
* sequence, we'll complain at the end of WAL replay.)
|
|
|
|
*
|
|
|
|
* In RBM_ZERO_* modes, if the page doesn't exist, the relation is extended
|
|
|
|
* with all-zeroes pages up to the given block number.
|
Fix multiple bugs in index page locking during hot-standby WAL replay.
In ordinary operation, VACUUM must be careful to take a cleanup lock on
each leaf page of a btree index; this ensures that no indexscans could
still be "in flight" to heap tuples due to be deleted. (Because of
possible index-tuple motion due to concurrent page splits, it's not enough
to lock only the pages we're deleting index tuples from.) In Hot Standby,
the WAL replay process must likewise lock every leaf page. There were
several bugs in the code for that:
* The replay scan might come across unused, all-zero pages in the index.
While btree_xlog_vacuum itself did the right thing (ie, nothing) with
such pages, xlogutils.c supposed that such pages must be corrupt and
would throw an error. This accounts for various reports of replication
failures with "PANIC: WAL contains references to invalid pages". To
fix, add a ReadBufferMode value that instructs XLogReadBufferExtended
not to complain when we're doing this.
* btree_xlog_vacuum performed the extra locking if standbyState ==
STANDBY_SNAPSHOT_READY, but that's not the correct test: we won't open up
for hot standby queries until the database has reached consistency, and
we don't want to do the extra locking till then either, for fear of reading
corrupted pages (which bufmgr.c would complain about). Fix by exporting a
new function from xlog.c that will report whether we're actually in hot
standby replay mode.
* To ensure full coverage of the index in the replay scan, btvacuumscan
would emit a dummy WAL record for the last page of the index, if no
vacuuming work had been done on that page. However, if the last page
of the index is all-zero, that would result in corruption of said page,
since the functions called on it weren't prepared to handle that case.
There's no need to lock any such pages, so change the logic to target
the last normal leaf page instead.
The first two of these bugs were diagnosed by Andres Freund, the other one
by me. Fixes based on ideas from Heikki Linnakangas and myself.
This has been wrong since Hot Standby was introduced, so back-patch to 9.0.
2014-01-14 23:34:47 +01:00
|
|
|
*
|
|
|
|
* In RBM_NORMAL_NO_LOG mode, we return InvalidBuffer if the page doesn't
|
|
|
|
* exist, and we don't check for all-zeroes. Thus, no log entry is made
|
|
|
|
* to imply that the page should be dropped or truncated later.
|
|
|
|
*
|
|
|
|
* NB: A redo function should normally not call this directly. To get a page
|
|
|
|
 * to modify, use XLogReadBufferForRedo instead. It is important that all pages
|
|
|
|
* modified by a WAL record are registered in the WAL records, or they will be
|
|
|
|
 * invisible to tools that need to know which pages are modified.
|
2008-08-11 13:05:11 +02:00
|
|
|
*/
|
|
|
|
Buffer
|
|
|
|
XLogReadBufferExtended(RelFileNode rnode, ForkNumber forknum,
|
|
|
|
BlockNumber blkno, ReadBufferMode mode)
|
2000-10-20 13:01:21 +02:00
|
|
|
{
|
2008-06-12 11:12:31 +02:00
|
|
|
BlockNumber lastblock;
|
2000-10-20 13:01:21 +02:00
|
|
|
Buffer buffer;
|
2008-06-12 11:12:31 +02:00
|
|
|
SMgrRelation smgr;
|
2000-10-20 13:01:21 +02:00
|
|
|
|
2006-03-29 23:17:39 +02:00
|
|
|
Assert(blkno != P_NEW);
|
|
|
|
|
2008-06-12 11:12:31 +02:00
|
|
|
/* Open the relation at smgr level */
|
2010-08-13 22:10:54 +02:00
|
|
|
smgr = smgropen(rnode, InvalidBackendId);
|
2008-06-12 11:12:31 +02:00
|
|
|
|
|
|
|
/*
|
|
|
|
* Create the target file if it doesn't already exist. This lets us cope
|
|
|
|
* if the replay sequence contains writes to a relation that is later
|
|
|
|
* deleted. (The original coding of this routine would instead suppress
|
|
|
|
* the writes, but that seems like it risks losing valuable data if the
|
|
|
|
* filesystem loses an inode during a crash. Better to write the data
|
|
|
|
* until we are actually told to delete the file.)
|
|
|
|
*/
|
2008-11-19 11:34:52 +01:00
|
|
|
smgrcreate(smgr, forknum, true);
|
2008-06-12 11:12:31 +02:00
|
|
|
|
2008-08-11 13:05:11 +02:00
|
|
|
lastblock = smgrnblocks(smgr, forknum);
|
2008-06-12 11:12:31 +02:00
|
|
|
|
2006-03-29 23:17:39 +02:00
|
|
|
if (blkno < lastblock)
|
2000-10-20 13:01:21 +02:00
|
|
|
{
|
2006-03-29 23:17:39 +02:00
|
|
|
/* page exists in file */
|
2010-08-13 22:10:54 +02:00
|
|
|
buffer = ReadBufferWithoutRelcache(rnode, forknum, blkno,
|
|
|
|
mode, NULL);
|
2006-03-29 23:17:39 +02:00
|
|
|
}
|
|
|
|
else
|
|
|
|
{
|
|
|
|
/* hm, page doesn't exist in file */
|
|
|
|
if (mode == RBM_NORMAL)
|
2006-04-14 22:27:24 +02:00
|
|
|
{
|
2008-08-11 13:05:11 +02:00
|
|
|
log_invalid_page(rnode, forknum, blkno, false);
|
2006-04-14 22:27:24 +02:00
|
|
|
return InvalidBuffer;
|
|
|
|
}
|
|
|
|
if (mode == RBM_NORMAL_NO_LOG)
|
|
|
|
return InvalidBuffer;
|
2006-03-29 23:17:39 +02:00
|
|
|
/* OK to extend the file */
|
|
|
|
/* we do this in recovery only - no rel-extension lock needed */
|
|
|
|
Assert(InRecovery);
|
2000-10-20 13:01:21 +02:00
|
|
|
buffer = InvalidBuffer;
|
In XLogReadBufferExtended, don't assume P_NEW yields consecutive pages.
In a database that's not yet reached consistency, it's possible that some
segments of a relation are not full-size but are not the last ones either.
Because of the way smgrnblocks() works, asking for a new page with P_NEW
will fill in the last not-full-size segment --- and if that makes it full
size, the apparent EOF of the relation will increase by more than one page,
so that the next P_NEW request will yield a page past the next consecutive
one. This breaks the relation-extension logic in XLogReadBufferExtended,
possibly allowing a page update to be applied to some page far past where
it was intended to go. This appears to be the explanation for reports of
table bloat on replication slaves compared to their masters, and probably
explains some corrupted-slave reports as well.
Fix the loop to check the page number it actually got, rather than merely
Assert()'ing that dead reckoning got it to the desired place. AFAICT,
there are no other places that make assumptions about exactly which page
they'll get from P_NEW.
Problem identified by Greg Stark, though this is not the same as his
proposed patch.
It's been like this for a long time, so back-patch to all supported
branches.
2014-02-12 20:52:16 +01:00
|
|
|
do
|
2000-10-20 13:01:21 +02:00
|
|
|
{
|
2006-03-29 23:17:39 +02:00
|
|
|
if (buffer != InvalidBuffer)
|
|
|
|
{
|
|
|
|
if (mode == RBM_ZERO_AND_LOCK || mode == RBM_ZERO_AND_CLEANUP_LOCK)
|
|
|
|
LockBuffer(buffer, BUFFER_LOCK_UNLOCK);
|
2006-04-01 01:32:07 +02:00
|
|
|
ReleaseBuffer(buffer);
|
|
|
|
}
|
2010-08-13 22:10:54 +02:00
|
|
|
buffer = ReadBufferWithoutRelcache(rnode, forknum,
|
|
|
|
P_NEW, mode, NULL);
|
2000-10-20 13:01:21 +02:00
|
|
|
}
|
|
|
|
while (BufferGetBlockNumber(buffer) < blkno);
|
|
|
|
/* Handle the corner case that P_NEW returns non-consecutive pages */
|
|
|
|
if (BufferGetBlockNumber(buffer) != blkno)
|
|
|
|
{
|
			if (mode == RBM_ZERO_AND_LOCK || mode == RBM_ZERO_AND_CLEANUP_LOCK)
				LockBuffer(buffer, BUFFER_LOCK_UNLOCK);
In XLogReadBufferExtended, don't assume P_NEW yields consecutive pages.
In a database that's not yet reached consistency, it's possible that some
segments of a relation are not full-size but are not the last ones either.
Because of the way smgrnblocks() works, asking for a new page with P_NEW
will fill in the last not-full-size segment --- and if that makes it full
size, the apparent EOF of the relation will increase by more than one page,
so that the next P_NEW request will yield a page past the next consecutive
one. This breaks the relation-extension logic in XLogReadBufferExtended,
possibly allowing a page update to be applied to some page far past where
it was intended to go. This appears to be the explanation for reports of
table bloat on replication slaves compared to their masters, and probably
explains some corrupted-slave reports as well.
Fix the loop to check the page number it actually got, rather than merely
Assert()'ing that dead reckoning got it to the desired place. AFAICT,
there are no other places that make assumptions about exactly which page
they'll get from P_NEW.
Problem identified by Greg Stark, though this is not the same as his
proposed patch.
It's been like this for a long time, so back-patch to all supported
branches.
2014-02-12 20:52:16 +01:00
			ReleaseBuffer(buffer);
			buffer = ReadBufferWithoutRelcache(rnode, forknum, blkno,
											   mode, NULL);
		}
	}

Unite ReadBufferWithFork, ReadBufferWithStrategy, and ZeroOrReadBuffer
functions into one ReadBufferExtended function, that takes the strategy
and mode as argument. There's three modes, RBM_NORMAL which is the default
used by plain ReadBuffer(), RBM_ZERO, which replaces ZeroOrReadBuffer, and
a new mode RBM_ZERO_ON_ERROR, which allows callers to read corrupt pages
without throwing an error. The FSM needs the new mode to recover from
corrupt pages, which could happen if we crash after extending an FSM file,
and the new page is "torn".
Add fork number to some error messages in bufmgr.c, that still lacked it.
2008-10-31 16:05:00 +01:00
	if (mode == RBM_NORMAL)
	{
		/* check that page has been initialized */
		Page		page = (Page) BufferGetPage(buffer);

		/*
		 * We assume that PageIsNew is safe without a lock. During recovery,
		 * there should be no other backends that could modify the buffer at
		 * the same time.
		 */
		if (PageIsNew(page))
		{
			ReleaseBuffer(buffer);
			log_invalid_page(rnode, forknum, blkno, true);
			return InvalidBuffer;
		}
	}

	return buffer;
}

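The corner case handled above (a `P_NEW` request extending the apparent EOF by more than one block) can be illustrated with a toy model. All names below are hypothetical stand-ins for illustration, not PostgreSQL APIs; only the loop shape mirrors the real fix.

```c
#include <assert.h>

/* Hypothetical stand-ins for the buffer-manager calls, for illustration. */
static int fake_eof = 0;        /* apparent relation EOF, in blocks */

/* Simulates a P_NEW read: may extend the EOF by MORE than one block, the
 * way smgrnblocks() can when a not-full-size middle segment fills up. */
static int
fake_extend(void)
{
    int got = fake_eof;

    fake_eof += (got == 2) ? 3 : 1;     /* skip ahead after block 2 */
    return got;
}

/* Simulates a direct read of an already-existing block. */
static int
fake_read(int blkno)
{
    return blkno;
}

/* Mirrors the fixed loop: check the block actually returned rather than
 * assuming P_NEW yields consecutive pages. */
static int
extend_to(int blkno)
{
    int buffer;

    do
        buffer = fake_extend();
    while (buffer < blkno);
    /* Handle the corner case that P_NEW returns non-consecutive pages */
    if (buffer != blkno)
        buffer = fake_read(blkno);
    return buffer;
}
```

With the skip at block 2, extending toward block 4 overshoots to block 5, and the post-loop check re-reads the intended block instead of applying the update past it.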
/*
 * Struct actually returned by XLogFakeRelcacheEntry, though the declared
 * return type is Relation.
 */
typedef struct
{
	RelationData reldata;		/* Note: this must be first */
	FormData_pg_class pgc;
} FakeRelCacheEntryData;

typedef FakeRelCacheEntryData *FakeRelCacheEntry;

/*
 * Create a fake relation cache entry for a physical relation
 *
 * It's often convenient to use the same functions in XLOG replay as in the
 * main codepath, but those functions typically work with a relcache entry.
 * We don't have a working relation cache during XLOG replay, but this
 * function can be used to create a fake relcache entry instead. Only the
 * fields related to physical storage, like rd_rel, are initialized, so the
 * fake entry is only usable in low-level operations like ReadBuffer().
 *
 * Caller must free the returned entry with FreeFakeRelcacheEntry().
 */
Relation
CreateFakeRelcacheEntry(RelFileNode rnode)
{
	FakeRelCacheEntry fakeentry;
	Relation	rel;

	Assert(InRecovery);

	/* Allocate the Relation struct and all related space in one block. */
	fakeentry = palloc0(sizeof(FakeRelCacheEntryData));
	rel = (Relation) fakeentry;

	rel->rd_rel = &fakeentry->pgc;
	rel->rd_node = rnode;
	/* We will never be working with temp rels during recovery */
	rel->rd_backend = InvalidBackendId;

	/* It must be a permanent table if we're in recovery. */
	rel->rd_rel->relpersistence = RELPERSISTENCE_PERMANENT;

	/* We don't know the name of the relation; use relfilenode instead */
	sprintf(RelationGetRelationName(rel), "%u", rnode.relNode);

	/*
	 * We set up the lockRelId in case anything tries to lock the dummy
	 * relation.  Note that this is fairly bogus since relNode may be
	 * different from the relation's OID.  It shouldn't really matter though,
	 * since we are presumably running by ourselves and can't have any lock
	 * conflicts ...
	 */
	rel->rd_lockInfo.lockRelId.dbId = rnode.dbNode;
	rel->rd_lockInfo.lockRelId.relId = rnode.relNode;

	rel->rd_smgr = NULL;

	return rel;
}

/*
 * Free a fake relation cache entry.
 */
void
FreeFakeRelcacheEntry(Relation fakerel)
{
	/* make sure the fakerel is not referenced by the SmgrRelation anymore */
	if (fakerel->rd_smgr != NULL)
		smgrclearowner(&fakerel->rd_smgr, fakerel->rd_smgr);
	pfree(fakerel);
}

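The fake relcache entry works because `reldata` is the first member of `FakeRelCacheEntryData`: a pointer to the outer struct is also a valid pointer to the embedded `RelationData`, and one allocation (and one `pfree`) covers both. A minimal self-contained sketch of that embed-and-cast trick, using toy types rather than the real `RelationData`/`FormData_pg_class`:

```c
#include <assert.h>
#include <stdlib.h>

/* Toy stand-ins; the real structs are far larger. */
typedef struct
{
    unsigned    relNode;        /* simplified stand-in for RelationData */
} ToyRelationData;

typedef ToyRelationData *ToyRelation;

typedef struct
{
    ToyRelationData reldata;    /* Note: this must be first */
    int         pgc;            /* stand-in for FormData_pg_class */
} ToyFakeEntry;

/* Allocate outer struct, hand back a pointer typed as the inner one. */
static ToyRelation
toy_create_fake(unsigned relNode)
{
    ToyFakeEntry *fake = calloc(1, sizeof(ToyFakeEntry));

    fake->reldata.relNode = relNode;
    return (ToyRelation) fake;  /* legal: reldata is the first member */
}

/* One free releases the Relation and its attached storage together. */
static void
toy_free_fake(ToyRelation rel)
{
    free(rel);
}
```

C guarantees a pointer to a struct points at its first member, which is why the `/* Note: this must be first */` comment on `reldata` matters.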
/*
 * Drop a relation during XLOG replay
 *
 * This is called when the relation is about to be deleted; we need to remove
 * any open "invalid-page" records for the relation.
 */
void
XLogDropRelation(RelFileNode rnode, ForkNumber forknum)
{
	forget_invalid_pages(rnode, forknum, 0);
}

/*
 * Drop a whole database during XLOG replay
 *
 * As above, but for DROP DATABASE instead of dropping a single rel
 */
void
XLogDropDatabase(Oid dbid)
{
	/*
	 * This is unnecessarily heavy-handed, as it will close SMgrRelation
	 * objects for other databases as well. DROP DATABASE occurs seldom enough
	 * that it's not worth introducing a variant of smgrclose for just this
	 * purpose. XXX: Or should we rather leave the smgr entries dangling?
	 */
	smgrcloseall();

	forget_invalid_pages_db(dbid);
}

/*
 * Truncate a relation during XLOG replay
 *
 * We need to clean up any open "invalid-page" records for the dropped pages.
 */
void
XLogTruncateRelation(RelFileNode rnode, ForkNumber forkNum,
					 BlockNumber nblocks)
{
	forget_invalid_pages(rnode, forkNum, nblocks);
}

/*
 * Read 'count' bytes from WAL into 'buf', starting at location 'startptr'
 * in timeline 'tli'.
 *
 * Will open, and keep open, one WAL segment stored in the static file
 * descriptor 'sendFile'. This means if XLogRead is used once, there will
 * always be one descriptor left open until the process ends, but never
 * more than one.
 *
 * XXX This is very similar to pg_xlogdump's XLogDumpXLogRead and to XLogRead
 * in walsender.c, but for small differences (such as lack of elog() in
 * frontend).  Probably these should be merged at some point.
 */
static void
XLogRead(char *buf, TimeLineID tli, XLogRecPtr startptr, Size count)
{
	char	   *p;
	XLogRecPtr	recptr;
	Size		nbytes;

	/* state maintained across calls */
	static int	sendFile = -1;
	static XLogSegNo sendSegNo = 0;
	static uint32 sendOff = 0;

	p = buf;
	recptr = startptr;
	nbytes = count;

	while (nbytes > 0)
	{
		uint32		startoff;
		int			segbytes;
		int			readbytes;

		startoff = recptr % XLogSegSize;

		/* Do we need to switch to a different xlog segment? */
		if (sendFile < 0 || !XLByteInSeg(recptr, sendSegNo))
		{
			char		path[MAXPGPATH];

			if (sendFile >= 0)
				close(sendFile);

			XLByteToSeg(recptr, sendSegNo);

			XLogFilePath(path, tli, sendSegNo);

			sendFile = BasicOpenFile(path, O_RDONLY | PG_BINARY, 0);

			if (sendFile < 0)
			{
				if (errno == ENOENT)
					ereport(ERROR,
							(errcode_for_file_access(),
							 errmsg("requested WAL segment %s has already been removed",
									path)));
				else
					ereport(ERROR,
							(errcode_for_file_access(),
							 errmsg("could not open file \"%s\": %m",
									path)));
			}
			sendOff = 0;
		}

		/* Need to seek in the file? */
		if (sendOff != startoff)
		{
			if (lseek(sendFile, (off_t) startoff, SEEK_SET) < 0)
			{
				char		path[MAXPGPATH];

				XLogFilePath(path, tli, sendSegNo);

				ereport(ERROR,
						(errcode_for_file_access(),
						 errmsg("could not seek in log segment %s to offset %u: %m",
								path, startoff)));
			}
			sendOff = startoff;
		}

		/* How many bytes are within this segment? */
		if (nbytes > (XLogSegSize - startoff))
			segbytes = XLogSegSize - startoff;
		else
			segbytes = nbytes;

		readbytes = read(sendFile, p, segbytes);
		if (readbytes <= 0)
		{
			char		path[MAXPGPATH];

			XLogFilePath(path, tli, sendSegNo);

			ereport(ERROR,
					(errcode_for_file_access(),
					 errmsg("could not read from log segment %s, offset %u, length %lu: %m",
							path, sendOff, (unsigned long) segbytes)));
		}

		/* Update state for read */
		recptr += readbytes;

		sendOff += readbytes;
		nbytes -= readbytes;
		p += readbytes;
	}
}

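The per-iteration arithmetic above (offset within a segment, segment number, and the clamp of a read to the segment boundary) can be sketched in isolation. The 16 MB `XLOG_SEG_SIZE` below is an assumption (the usual default); the helper names are illustrative, not the real `XLByteToSeg`/`XLByteInSeg` macros.

```c
#include <assert.h>
#include <stdint.h>

#define XLOG_SEG_SIZE   (16 * 1024 * 1024)  /* assumed default segment size */

/* Offset of a WAL location within its segment (recptr % XLogSegSize). */
static uint32_t
seg_offset(uint64_t recptr)
{
    return (uint32_t) (recptr % XLOG_SEG_SIZE);
}

/* Segment number holding a WAL location (what XLByteToSeg computes). */
static uint64_t
seg_number(uint64_t recptr)
{
    return recptr / XLOG_SEG_SIZE;
}

/* Bytes of a request that fit before the segment boundary: the read loop
 * never crosses a segment, so the next iteration reopens the next file. */
static int
seg_bytes(uint64_t recptr, uint64_t nbytes)
{
    uint32_t startoff = seg_offset(recptr);

    if (nbytes > (uint64_t) (XLOG_SEG_SIZE - startoff))
        return XLOG_SEG_SIZE - startoff;
    return (int) nbytes;
}
```

A read starting 10 bytes before a segment boundary is clamped to 10 bytes; the `while (nbytes > 0)` loop then switches `sendFile` to the following segment and continues.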
/*
 * read_page callback for reading local xlog files
 *
 * Public because it would likely be very helpful for someone writing another
 * output method outside walsender, e.g. in a bgworker.
 *
 * TODO: The walsender has its own version of this, but it relies on the
 * walsender's latch being set whenever WAL is flushed. No such infrastructure
 * exists for normal backends, so we have to do a check/sleep/repeat style of
 * loop for now.
 */
int
read_local_xlog_page(XLogReaderState *state, XLogRecPtr targetPagePtr,
					 int reqLen, XLogRecPtr targetRecPtr, char *cur_page,
					 TimeLineID *pageTLI)
{
	XLogRecPtr	read_upto,
				loc;
	int			count;

	loc = targetPagePtr + reqLen;
	while (1)
	{
		/*
		 * TODO: we're going to have to do something more intelligent about
		 * timelines on standbys. Use readTimeLineHistory() and
		 * tliOfPointInHistory() to get the proper LSN? For now we'll catch
		 * that case earlier, but the code and TODO is left in here for when
		 * that changes.
		 */
		if (!RecoveryInProgress())
		{
			*pageTLI = ThisTimeLineID;
			read_upto = GetFlushRecPtr();
		}
		else
			read_upto = GetXLogReplayRecPtr(pageTLI);

		if (loc <= read_upto)
			break;

		CHECK_FOR_INTERRUPTS();
		pg_usleep(1000L);
	}

	if (targetPagePtr + XLOG_BLCKSZ <= read_upto)
	{
		/*
		 * more than one block available; read only that block, have caller
		 * come back if they need more.
		 */
		count = XLOG_BLCKSZ;
	}
	else if (targetPagePtr + reqLen > read_upto)
	{
		/* not enough data there */
		return -1;
	}
	else
	{
		/* enough bytes available to satisfy the request */
		count = read_upto - targetPagePtr;
	}

	/*
	 * Even though we just determined how much of the page can be validly read
	 * as 'count', read the whole page anyway. It's guaranteed to be
	 * zero-padded up to the page boundary if it's incomplete.
	 */
	XLogRead(cur_page, *pageTLI, targetPagePtr, XLOG_BLCKSZ);

	/* number of valid bytes in the buffer */
	return count;
}
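The three-way `count` decision in read_local_xlog_page can be distilled into a pure function. The 8192-byte `XLOG_BLCKSZ` below is an assumption (the usual default), and `valid_page_bytes` is an illustrative name, not a PostgreSQL API.

```c
#include <assert.h>
#include <stdint.h>

#define XLOG_BLCKSZ 8192        /* assumed default WAL block size */

/* Given a page start, a request length, and the furthest safely readable
 * WAL position, return the number of valid bytes on the page, or -1 when
 * the request cannot be satisfied yet. */
static int
valid_page_bytes(uint64_t targetPagePtr, int reqLen, uint64_t read_upto)
{
    if (targetPagePtr + XLOG_BLCKSZ <= read_upto)
        return XLOG_BLCKSZ;     /* whole page available */
    if (targetPagePtr + reqLen > read_upto)
        return -1;              /* not enough data there */
    return (int) (read_upto - targetPagePtr);   /* partial last page */
}
```

Note the asymmetry: a full page is returned whenever the next page boundary has been reached, while a partial page is returned only once the caller's minimum `reqLen` is covered.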
|