/*-------------------------------------------------------------------------
 *
 * visibilitymap.c
 *	  bitmap for tracking visibility of heap tuples
 *
 * Portions Copyright (c) 1996-2011, PostgreSQL Global Development Group
 * Portions Copyright (c) 1994, Regents of the University of California
 *
 *
 * IDENTIFICATION
 *	  src/backend/access/heap/visibilitymap.c
 *
 * INTERFACE ROUTINES
 *		visibilitymap_clear	 - clear a bit in the visibility map
 *		visibilitymap_pin	 - pin a map page for setting a bit
 *		visibilitymap_pin_ok - check whether correct map page is already pinned
 *		visibilitymap_set	 - set a bit in a previously pinned page
 *		visibilitymap_test	 - test if a bit is set
 *
 * NOTES
 *
 * The visibility map is a bitmap with one bit per heap page. A set bit means
 * that all tuples on the page are known visible to all transactions, and
 * therefore the page doesn't need to be vacuumed. The map is conservative in
 * the sense that we make sure that whenever a bit is set, we know the
 * condition is true, but if a bit is not set, it might or might not be true.
 *
 * There's no explicit WAL logging in the functions in this file. The callers
 * must make sure that whenever a bit is cleared, the bit is cleared on WAL
 * replay of the updating operation as well. Setting bits during recovery
 * isn't necessary for correctness.
 *
 * Currently, the visibility map is only used as a hint, to speed up VACUUM.
 * A corrupted visibility map won't cause data corruption, although it can
 * make VACUUM skip pages that need vacuuming, until the next anti-wraparound
 * vacuum. The visibility map is not used for anti-wraparound vacuums, because
 * an anti-wraparound vacuum needs to freeze tuples and observe the latest xid
 * present in the table, even on pages that don't have any dead tuples.
 *
 * Although the visibility map is just a hint at the moment, the PD_ALL_VISIBLE
 * flag on heap pages *must* be correct, because it is used to skip visibility
 * checking.
 *
 * LOCKING
 *
 * In heapam.c, whenever a page is modified so that not all tuples on the
 * page are visible to everyone anymore, the corresponding bit in the
 * visibility map is cleared. The bit in the visibility map is cleared
 * after releasing the lock on the heap page, to avoid holding the lock
 * over possible I/O to read in the visibility map page.
 *
 * To set a bit, you need to hold a lock on the heap page. That prevents
 * the race condition where VACUUM sees that all tuples on the page are
 * visible to everyone, but another backend modifies the page before VACUUM
 * sets the bit in the visibility map.
 *
 * When a bit is set, the LSN of the visibility map page is updated to make
 * sure that the visibility map update doesn't get written to disk before the
 * WAL record of the changes that made it possible to set the bit is flushed.
 * But when a bit is cleared, we don't have to do that because it's always
 * safe to clear a bit in the map from a correctness point of view.
 *
 * TODO
 *
 * It would be nice to use the visibility map to skip visibility checks in
 * index scans.
 *
 *-------------------------------------------------------------------------
 */
#include "postgres.h"

#include "access/heapam.h"
#include "access/visibilitymap.h"
#include "miscadmin.h"
#include "storage/bufmgr.h"
#include "storage/lmgr.h"
#include "storage/smgr.h"


/*#define TRACE_VISIBILITYMAP */

/*
 * Size of the bitmap on each visibility map page, in bytes. There are no
 * extra headers, so the whole page minus the standard page header is
 * used for the bitmap.
 */
#define MAPSIZE (BLCKSZ - MAXALIGN(SizeOfPageHeaderData))

/* Number of bits allocated for each heap block. */
#define BITS_PER_HEAPBLOCK 1

/* Number of heap blocks we can represent in one byte. */
#define HEAPBLOCKS_PER_BYTE 8

/* Number of heap blocks we can represent in one visibility map page. */
#define HEAPBLOCKS_PER_PAGE (MAPSIZE * HEAPBLOCKS_PER_BYTE)

/* Mapping from heap block number to the right bit in the visibility map */
#define HEAPBLK_TO_MAPBLOCK(x) ((x) / HEAPBLOCKS_PER_PAGE)
#define HEAPBLK_TO_MAPBYTE(x) (((x) % HEAPBLOCKS_PER_PAGE) / HEAPBLOCKS_PER_BYTE)
#define HEAPBLK_TO_MAPBIT(x) ((x) % HEAPBLOCKS_PER_BYTE)

/* prototypes for internal routines */
static Buffer vm_readbuf(Relation rel, BlockNumber blkno, bool extend);
static void vm_extend(Relation rel, BlockNumber nvmblocks);


/*
 * visibilitymap_clear - clear a bit in the visibility map
 *
 * You must pass a buffer containing the correct map page to this function.
 * Call visibilitymap_pin first to pin the right one. This function doesn't do
 * any I/O.
 */
void
visibilitymap_clear(Relation rel, BlockNumber heapBlk, Buffer buf)
{
	BlockNumber mapBlock = HEAPBLK_TO_MAPBLOCK(heapBlk);
	int			mapByte = HEAPBLK_TO_MAPBYTE(heapBlk);
	int			mapBit = HEAPBLK_TO_MAPBIT(heapBlk);
	uint8		mask = 1 << mapBit;
	char	   *map;

#ifdef TRACE_VISIBILITYMAP
	elog(DEBUG1, "vm_clear %s %d", RelationGetRelationName(rel), heapBlk);
#endif

	if (!BufferIsValid(buf) || BufferGetBlockNumber(buf) != mapBlock)
		elog(ERROR, "wrong buffer passed to visibilitymap_clear");

	LockBuffer(buf, BUFFER_LOCK_EXCLUSIVE);
	map = PageGetContents(BufferGetPage(buf));

	if (map[mapByte] & mask)
	{
		map[mapByte] &= ~mask;

		MarkBufferDirty(buf);
	}

	LockBuffer(buf, BUFFER_LOCK_UNLOCK);
}

/*
 * visibilitymap_pin - pin a map page for setting a bit
 *
 * Setting a bit in the visibility map is a two-phase operation. First, call
 * visibilitymap_pin, to pin the visibility map page containing the bit for
 * the heap page. Because that can require I/O to read the map page, you
 * shouldn't hold a lock on the heap page while doing that. Then, call
 * visibilitymap_set to actually set the bit.
 *
 * On entry, *buf should be InvalidBuffer or a valid buffer returned by
 * an earlier call to visibilitymap_pin or visibilitymap_test on the same
 * relation. On return, *buf is a valid buffer with the map page containing
 * the bit for heapBlk.
 *
 * If the page doesn't exist in the map file yet, it is extended.
 */
void
visibilitymap_pin(Relation rel, BlockNumber heapBlk, Buffer *buf)
{
	BlockNumber mapBlock = HEAPBLK_TO_MAPBLOCK(heapBlk);

	/* Reuse the old pinned buffer if possible */
	if (BufferIsValid(*buf))
	{
		if (BufferGetBlockNumber(*buf) == mapBlock)
			return;

		ReleaseBuffer(*buf);
	}
	*buf = vm_readbuf(rel, mapBlock, true);
}
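/*
 * The two-phase protocol described above can be sketched as caller-side
 * pseudocode. The visibilitymap_* function names are from this file; the
 * heap buffer and the "establish all-visible" step are schematic, not a
 * compilable excerpt from any caller.
 */

```
	/* Phase 1: pin the right map page BEFORE locking the heap page,
	 * because vm_readbuf may need to do I/O. */
	visibilitymap_pin(rel, heapBlk, &vmbuffer);

	LockBuffer(heapBuf, BUFFER_LOCK_EXCLUSIVE);
	/* ... verify that all tuples on the heap page are visible to all ... */

	/* Phase 2: with the heap page lock held and the map page already
	 * pinned, setting the bit requires no further I/O. */
	visibilitymap_set(rel, heapBlk, InvalidXLogRecPtr, vmbuffer);

	LockBuffer(heapBuf, BUFFER_LOCK_UNLOCK);
	ReleaseBuffer(vmbuffer);
```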

/*
 * visibilitymap_pin_ok - do we already have the correct page pinned?
 *
 * On entry, buf should be InvalidBuffer or a valid buffer returned by
 * an earlier call to visibilitymap_pin or visibilitymap_test on the same
 * relation. The return value indicates whether the buffer covers the
 * given heapBlk.
 */
bool
visibilitymap_pin_ok(BlockNumber heapBlk, Buffer buf)
{
	BlockNumber mapBlock = HEAPBLK_TO_MAPBLOCK(heapBlk);

	return BufferIsValid(buf) && BufferGetBlockNumber(buf) == mapBlock;
}

/*
 * visibilitymap_set - set a bit on a previously pinned page
 *
 * recptr is the LSN of the XLOG record we're replaying, if we're in recovery,
 * or InvalidXLogRecPtr in normal running. The page LSN is advanced to the
 * one provided; in normal running, we generate a new XLOG record and set the
 * page LSN to that value.
 *
 * You must pass a buffer containing the correct map page to this function.
 * Call visibilitymap_pin first to pin the right one. This function doesn't do
 * any I/O.
 */
void
visibilitymap_set(Relation rel, BlockNumber heapBlk, XLogRecPtr recptr,
				  Buffer buf)
{
	BlockNumber mapBlock = HEAPBLK_TO_MAPBLOCK(heapBlk);
	uint32		mapByte = HEAPBLK_TO_MAPBYTE(heapBlk);
	uint8		mapBit = HEAPBLK_TO_MAPBIT(heapBlk);
	Page		page;
	char	   *map;

#ifdef TRACE_VISIBILITYMAP
	elog(DEBUG1, "vm_set %s %d", RelationGetRelationName(rel), heapBlk);
#endif

	Assert(InRecovery || XLogRecPtrIsInvalid(recptr));

	/* Check that we have the right page pinned */
	if (!BufferIsValid(buf) || BufferGetBlockNumber(buf) != mapBlock)
		elog(ERROR, "wrong buffer passed to visibilitymap_set");

	page = BufferGetPage(buf);
	map = PageGetContents(page);
	LockBuffer(buf, BUFFER_LOCK_EXCLUSIVE);

	if (!(map[mapByte] & (1 << mapBit)))
	{
		START_CRIT_SECTION();

		map[mapByte] |= (1 << mapBit);
		MarkBufferDirty(buf);

		if (RelationNeedsWAL(rel))
		{
			if (XLogRecPtrIsInvalid(recptr))
				recptr = log_heap_visible(rel->rd_node, heapBlk, buf);
			PageSetLSN(page, recptr);
			PageSetTLI(page, ThisTimeLineID);
		}

		END_CRIT_SECTION();
	}

	LockBuffer(buf, BUFFER_LOCK_UNLOCK);
}

/*
 *	visibilitymap_test - test if a bit is set
 *
 * Are all tuples on heapBlk visible to all, according to the visibility map?
 *
 * On entry, *buf should be InvalidBuffer or a valid buffer returned by an
 * earlier call to visibilitymap_pin or visibilitymap_test on the same
 * relation. On return, *buf is a valid buffer with the map page containing
 * the bit for heapBlk, or InvalidBuffer. The caller is responsible for
 * releasing *buf after it's done testing and setting bits.
 */
bool
visibilitymap_test(Relation rel, BlockNumber heapBlk, Buffer *buf)
{
	BlockNumber mapBlock = HEAPBLK_TO_MAPBLOCK(heapBlk);
	uint32		mapByte = HEAPBLK_TO_MAPBYTE(heapBlk);
	uint8		mapBit = HEAPBLK_TO_MAPBIT(heapBlk);
	bool		result;
	char	   *map;

#ifdef TRACE_VISIBILITYMAP
	elog(DEBUG1, "vm_test %s %d", RelationGetRelationName(rel), heapBlk);
#endif

	/* Reuse the old pinned buffer if possible */
	if (BufferIsValid(*buf))
	{
		if (BufferGetBlockNumber(*buf) != mapBlock)
		{
			ReleaseBuffer(*buf);
			*buf = InvalidBuffer;
		}
	}

	if (!BufferIsValid(*buf))
	{
		*buf = vm_readbuf(rel, mapBlock, false);
		if (!BufferIsValid(*buf))
			return false;
	}

	map = PageGetContents(BufferGetPage(*buf));

	/*
	 * We don't need to lock the page, as we're only looking at a single bit.
	 */
	result = (map[mapByte] & (1 << mapBit)) ? true : false;

	return result;
}

/*
 *	visibilitymap_truncate - truncate the visibility map
 *
 * The caller must hold AccessExclusiveLock on the relation, to ensure that
 * other backends receive the smgr invalidation event that this function sends
 * before they access the VM again.
 *
 * nheapblocks is the new size of the heap.
 */
void
visibilitymap_truncate(Relation rel, BlockNumber nheapblocks)
{
	BlockNumber newnblocks;

	/* last remaining block, byte, and bit */
	BlockNumber truncBlock = HEAPBLK_TO_MAPBLOCK(nheapblocks);
	uint32		truncByte = HEAPBLK_TO_MAPBYTE(nheapblocks);
	uint8		truncBit = HEAPBLK_TO_MAPBIT(nheapblocks);

#ifdef TRACE_VISIBILITYMAP
	elog(DEBUG1, "vm_truncate %s %d", RelationGetRelationName(rel), nheapblocks);
#endif

	RelationOpenSmgr(rel);

	/*
	 * If no visibility map has been created yet for this relation, there's
	 * nothing to truncate.
	 */
	if (!smgrexists(rel->rd_smgr, VISIBILITYMAP_FORKNUM))
		return;

	/*
	 * Unless the new size is exactly at a visibility map page boundary, the
	 * tail bits in the last remaining map page, representing truncated heap
	 * blocks, need to be cleared. This is not only tidy, but also necessary
	 * because we don't get a chance to clear the bits if the heap is extended
	 * again.
	 */
	if (truncByte != 0 || truncBit != 0)
	{
		Buffer		mapBuffer;
		Page		page;
		char	   *map;

		newnblocks = truncBlock + 1;

		mapBuffer = vm_readbuf(rel, truncBlock, false);
		if (!BufferIsValid(mapBuffer))
		{
			/* nothing to do, the file was already smaller */
			return;
		}

		page = BufferGetPage(mapBuffer);
		map = PageGetContents(page);

		LockBuffer(mapBuffer, BUFFER_LOCK_EXCLUSIVE);

		/* Clear out the unwanted bytes. */
		MemSet(&map[truncByte + 1], 0, MAPSIZE - (truncByte + 1));

		/*
		 * Mask out the unwanted bits of the last remaining byte.
		 *
		 * ((1 << 0) - 1) = 00000000 ((1 << 1) - 1) = 00000001 ... ((1 << 6) -
		 * 1) = 00111111 ((1 << 7) - 1) = 01111111
		 */
		map[truncByte] &= (1 << truncBit) - 1;

		MarkBufferDirty(mapBuffer);
		UnlockReleaseBuffer(mapBuffer);
	}
	else
		newnblocks = truncBlock;

	if (smgrnblocks(rel->rd_smgr, VISIBILITYMAP_FORKNUM) <= newnblocks)
	{
		/* nothing to do, the file was already smaller than requested size */
		return;
	}

	/* Truncate the unused VM pages, and send smgr inval message */
	smgrtruncate(rel->rd_smgr, VISIBILITYMAP_FORKNUM, newnblocks);

	/*
	 * We might as well update the local smgr_vm_nblocks setting. smgrtruncate
	 * sent an smgr cache inval message, which will cause other backends to
	 * invalidate their copy of smgr_vm_nblocks, and this one too at the next
	 * command boundary. But this ensures it isn't outright wrong until then.
	 */
	if (rel->rd_smgr)
		rel->rd_smgr->smgr_vm_nblocks = newnblocks;
}

/*
 * Read a visibility map page.
 *
 * If the page doesn't exist, InvalidBuffer is returned, or if 'extend' is
 * true, the visibility map file is extended.
 */
static Buffer
vm_readbuf(Relation rel, BlockNumber blkno, bool extend)
{
	Buffer		buf;

	RelationOpenSmgr(rel);

	/*
	 * If we haven't cached the size of the visibility map fork yet, check it
	 * first.  Also recheck if the requested block seems to be past end, since
	 * our cached value might be stale.  (We send smgr inval messages on
	 * truncation, but not on extension.)
	 */
	if (rel->rd_smgr->smgr_vm_nblocks == InvalidBlockNumber ||
		blkno >= rel->rd_smgr->smgr_vm_nblocks)
	{
		if (smgrexists(rel->rd_smgr, VISIBILITYMAP_FORKNUM))
			rel->rd_smgr->smgr_vm_nblocks = smgrnblocks(rel->rd_smgr,
													  VISIBILITYMAP_FORKNUM);
		else
			rel->rd_smgr->smgr_vm_nblocks = 0;
	}

	/* Handle requests beyond EOF */
	if (blkno >= rel->rd_smgr->smgr_vm_nblocks)
	{
		if (extend)
			vm_extend(rel, blkno + 1);
		else
			return InvalidBuffer;
	}

	/*
	 * Use ZERO_ON_ERROR mode, and initialize the page if necessary. It's
	 * always safe to clear bits, so it's better to clear corrupt pages than
	 * error out.
	 */
	buf = ReadBufferExtended(rel, VISIBILITYMAP_FORKNUM, blkno,
							 RBM_ZERO_ON_ERROR, NULL);
	if (PageIsNew(BufferGetPage(buf)))
		PageInit(BufferGetPage(buf), BLCKSZ, 0);
	return buf;
}

/*
 * Ensure that the visibility map fork is at least vm_nblocks long, extending
 * it if necessary with zeroed pages.
 */
static void
vm_extend(Relation rel, BlockNumber vm_nblocks)
{
	BlockNumber vm_nblocks_now;
	Page		pg;

	pg = (Page) palloc(BLCKSZ);
	PageInit(pg, BLCKSZ, 0);

	/*
	 * We use the relation extension lock to lock out other backends trying to
	 * extend the visibility map at the same time. It also locks out extension
	 * of the main fork, unnecessarily, but extending the visibility map
	 * happens seldom enough that it doesn't seem worthwhile to have a
	 * separate lock tag type for it.
	 *
	 * Note that another backend might have extended or created the relation
	 * by the time we get the lock.
	 */
	LockRelationForExtension(rel, ExclusiveLock);

	/* Might have to re-open if a cache flush happened */
	RelationOpenSmgr(rel);

	/*
	 * Create the file first if it doesn't exist.  If smgr_vm_nblocks is
	 * positive then it must exist, no need for an smgrexists call.
	 */
	if ((rel->rd_smgr->smgr_vm_nblocks == 0 ||
		 rel->rd_smgr->smgr_vm_nblocks == InvalidBlockNumber) &&
		!smgrexists(rel->rd_smgr, VISIBILITYMAP_FORKNUM))
		smgrcreate(rel->rd_smgr, VISIBILITYMAP_FORKNUM, false);

	vm_nblocks_now = smgrnblocks(rel->rd_smgr, VISIBILITYMAP_FORKNUM);

	while (vm_nblocks_now < vm_nblocks)
	{
		smgrextend(rel->rd_smgr, VISIBILITYMAP_FORKNUM, vm_nblocks_now,
				   (char *) pg, false);
		vm_nblocks_now++;
	}

	/* Update local cache with the up-to-date size */
	rel->rd_smgr->smgr_vm_nblocks = vm_nblocks_now;

	UnlockRelationForExtension(rel, ExclusiveLock);

	pfree(pg);
}