Remove old-style VACUUM FULL (which was known for a little while as
VACUUM FULL INPLACE), along with a boatload of subsidiary code and complexity.
Per discussion, the use case for this method of vacuuming is no longer large
enough to justify maintaining it; not to mention that we don't wish to invest
the work that would be needed to make it play nicely with Hot Standby.

Aside from the code directly related to old-style VACUUM FULL, this commit
removes support for certain WAL record types that could only be generated
within VACUUM FULL, redirect-pointer removal in heap_page_prune, and
nontransactional generation of cache invalidation sinval messages (the last
being the sticking point for Hot Standby).

We still have to retain all code that copes with finding HEAP_MOVED_OFF and
HEAP_MOVED_IN flag bits on existing tuples.  This can't be removed as long
as we want to support in-place update from pre-9.0 databases.
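The retained compatibility code keys on the HEAP_MOVED_OFF and HEAP_MOVED_IN infomask bits mentioned above. As a standalone sketch (the bit values mirror `src/include/access/htup.h` of that era and are shown here for illustration, not as an authoritative copy), the test the backend must keep supporting looks like:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative infomask bits, patterned on htup.h of the 9.0 era. */
#define HEAP_MOVED_OFF 0x4000   /* moved off by old-style VACUUM FULL */
#define HEAP_MOVED_IN  0x8000   /* moved in by old-style VACUUM FULL */
#define HEAP_MOVED     (HEAP_MOVED_OFF | HEAP_MOVED_IN)

/* Returns nonzero if a tuple's infomask says it was touched by
 * old-style VACUUM FULL, in either direction.  Testing the combined
 * HEAP_MOVED mask is equivalent to testing the two bits separately. */
int tuple_was_moved(uint16_t t_infomask)
{
    return (t_infomask & HEAP_MOVED) != 0;
}
```

This is why the commit can collapse `HEAP_MOVED_OFF || HEAP_MOVED_IN` checks into a single `HEAP_MOVED` test without changing behavior.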
Tom Lane 2010-02-08 04:33:55 +00:00
parent 1ddc2703a9
commit 0a469c8769
41 changed files with 247 additions and 3737 deletions

doc/src/sgml/indexam.sgml

@@ -1,4 +1,4 @@
-<!-- $PostgreSQL: pgsql/doc/src/sgml/indexam.sgml,v 2.32 2010/01/01 21:53:49 tgl Exp $ -->
+<!-- $PostgreSQL: pgsql/doc/src/sgml/indexam.sgml,v 2.33 2010/02/08 04:33:51 tgl Exp $ -->
 <chapter id="indexam">
  <title>Index Access Method Interface Definition</title>
@@ -577,11 +577,10 @@ amrestrpos (IndexScanDesc scan);
    The core <productname>PostgreSQL</productname> system obtains
    <literal>AccessShareLock</> on the index during an index scan, and
    <literal>RowExclusiveLock</> when updating the index (including plain
-   <command>VACUUM</>).  Since these lock
-   types do not conflict, the access method is responsible for handling any
-   fine-grained locking it might need.  An exclusive lock on the index as a whole
-   will be taken only during index creation, destruction,
-   <command>REINDEX</>, or <command>VACUUM FULL</>.
+   <command>VACUUM</>).  Since these lock types do not conflict, the access
+   method is responsible for handling any fine-grained locking it might need.
+   An exclusive lock on the index as a whole will be taken only during index
+   creation, destruction, or <command>REINDEX</>.
   </para>
   <para>

doc/src/sgml/maintenance.sgml

@@ -1,4 +1,4 @@
-<!-- $PostgreSQL: pgsql/doc/src/sgml/maintenance.sgml,v 1.98 2010/02/03 17:25:05 momjian Exp $ -->
+<!-- $PostgreSQL: pgsql/doc/src/sgml/maintenance.sgml,v 1.99 2010/02/08 04:33:51 tgl Exp $ -->
 <chapter id="maintenance">
  <title>Routine Database Maintenance Tasks</title>
@@ -123,9 +123,7 @@
    <command>ALTER TABLE</command> while it is being vacuumed.)
    <command>VACUUM FULL</> requires exclusive lock on the table it is
    working on, and therefore cannot be done in parallel with other use
-   of the table.  Another disadvantage of <command>VACUUM FULL</> is that
-   while it reduces table size, it does not reduce index size proportionally;
-   in fact it can make indexes <emphasis>larger</>.  Generally, therefore,
+   of the table.  Generally, therefore,
    administrators should strive to use standard <command>VACUUM</> and
    avoid <command>VACUUM FULL</>.
   </para>
@@ -166,13 +164,10 @@
    system, except in the special case where one or more pages at the
    end of a table become entirely free and an exclusive table lock can be
    easily obtained.  In contrast, <command>VACUUM FULL</> actively compacts
-   tables by moving row versions to earlier pages.  It is thus able to
-   force pages at the end of the table to become entirely free, whereupon
-   it will return them to the operating system.  However, if many rows
-   must be moved, this can take a long time.  Also, moving a row requires
-   transiently making duplicate index entries for it (the entry pointing
-   to its new location must be made before the old entry can be removed);
-   so moving a lot of rows this way causes severe index bloat.
+   tables by writing a complete new version of the table file with no dead
+   space.  This minimizes the size of the table, but can take a long time.
+   It also requires extra disk space for the new copy of the table, until
+   the operation completes.
   </para>
   <para>
@@ -220,20 +215,19 @@
   <tip>
    <para>
-    Neither form of <command>VACUUM</> is entirely satisfactory when
+    Plain <command>VACUUM</> may not be satisfactory when
     a table contains large numbers of dead row versions as a result of
     massive update or delete activity.  If you have such a table and
-    you need to reclaim the excess disk space it occupies, the best
-    way is to use <xref linkend="sql-cluster" endterm="sql-cluster-title">
+    you need to reclaim the excess disk space it occupies, you will need
+    to use <command>VACUUM FULL</>, or alternatively
+    <xref linkend="sql-cluster" endterm="sql-cluster-title">
     or one of the table-rewriting variants of
     <xref linkend="sql-altertable" endterm="sql-altertable-title">.
     These commands rewrite an entire new copy of the table and build
-    new indexes for it.  Like <command>VACUUM FULL</>, they require
-    exclusive lock.  Note that they also temporarily use extra disk
-    space, since the old copies of the table and indexes can't be
-    released until the new ones are complete.  In the worst case where
-    your disk is nearly full, <command>VACUUM FULL</> may be the only
-    workable alternative.
+    new indexes for it.  All these options require exclusive lock.  Note that
+    they also temporarily use extra disk space approximately equal to the size
+    of the table, since the old copies of the table and indexes can't be
+    released until the new ones are complete.
    </para>
   </tip>
@@ -579,22 +573,22 @@ HINT:  Stop the postmaster and use a standalone backend to VACUUM in "mydb".
    <firstterm>autovacuum launcher</firstterm>, which is in charge of starting
    <firstterm>autovacuum worker</firstterm> processes for all databases.  The
    launcher will distribute the work across time, attempting to start one
-   worker on each database every <xref linkend="guc-autovacuum-naptime">
-   seconds.  One worker will be launched for each database, with a maximum
-   of <xref linkend="guc-autovacuum-max-workers"> processes running at the
-   same time.  If there are more than
-   <xref linkend="guc-autovacuum-max-workers"> databases to be processed,
+   worker within each database every <xref linkend="guc-autovacuum-naptime">
+   seconds.  (Therefore, if the installation has <replaceable>N</> databases,
+   a new worker will be launched every
+   <varname>autovacuum_naptime</>/<replaceable>N</> seconds.)
+   A maximum of <xref linkend="guc-autovacuum-max-workers"> worker processes
+   are allowed to run at the same time.  If there are more than
+   <varname>autovacuum_max_workers</> databases to be processed,
    the next database will be processed as soon as the first worker finishes.
    Each worker process will check each table within its database and
    execute <command>VACUUM</> and/or <command>ANALYZE</> as needed.
   </para>
   <para>
-   The <xref linkend="guc-autovacuum-max-workers"> setting limits how many
-   workers may be running at any time.  If several large tables all become
-   eligible for vacuuming in a short amount of time, all autovacuum workers
-   might become occupied with vacuuming those tables for a long period.
-   This would result
+   If several large tables all become eligible for vacuuming in a short
+   amount of time, all autovacuum workers might become occupied with
+   vacuuming those tables for a long period.  This would result
    in other tables and databases not being vacuumed until a worker became
    available.  There is no limit on how many workers might be in a
    single database, but workers do try to avoid repeating work that has
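The naptime/N launch rule described in the new documentation text amounts to simple arithmetic. A minimal sketch (function and parameter names are illustrative; the real scheduling logic lives in the autovacuum launcher, not in any function of this name):

```c
#include <assert.h>

/* Hypothetical helper: milliseconds the launcher would wait between
 * starting workers, so that each of the N databases receives one
 * worker per autovacuum_naptime interval, per the rule above. */
int launch_interval_ms(int autovacuum_naptime_sec, int n_databases)
{
    if (n_databases < 1)
        n_databases = 1;        /* avoid division by zero */
    return autovacuum_naptime_sec * 1000 / n_databases;
}
```

For example, with the default naptime of 60 seconds and three databases, a new worker would be launched roughly every 20 seconds.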
@@ -700,8 +694,8 @@ analyze threshold = analyze base threshold + analyze scale factor * number of tu
   </para>
   <para>
-   Index pages that have become
-   completely empty are reclaimed for re-use.  However, here is still the possibility
+   B-tree index pages that have become completely empty are reclaimed for
+   re-use.  However, there is still a possibility
    of inefficient use of space: if all but a few index keys on a page have
    been deleted, the page remains allocated.  Therefore, a usage
    pattern in which most, but not all, keys in each range are eventually
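The hunk header above quotes the autovacuum trigger formula, "analyze threshold = analyze base threshold + analyze scale factor * number of tuples". A sketch of that arithmetic (parameter names are illustrative stand-ins for the `autovacuum_analyze_threshold` and `autovacuum_analyze_scale_factor` settings, not actual backend functions):

```c
#include <assert.h>

/* Threshold of changed tuples beyond which ANALYZE is triggered,
 * per the formula quoted in the hunk header above. */
double analyze_threshold(double base_threshold, double scale_factor,
                         double reltuples)
{
    return base_threshold + scale_factor * reltuples;
}

/* A table is due for ANALYZE once the tuples changed since the last
 * analyze exceed the threshold. */
int needs_analyze(double changed, double base, double scale, double reltuples)
{
    return changed > analyze_threshold(base, scale, reltuples);
}
```

With the defaults of that era (base threshold 50, scale factor 0.1), a 1000-tuple table would be analyzed after more than 150 tuples changed.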

doc/src/sgml/ref/vacuum.sgml

@@ -1,5 +1,5 @@
 <!--
-$PostgreSQL: pgsql/doc/src/sgml/ref/vacuum.sgml,v 1.57 2010/01/06 05:31:13 itagaki Exp $
+$PostgreSQL: pgsql/doc/src/sgml/ref/vacuum.sgml,v 1.58 2010/02/08 04:33:51 tgl Exp $
 PostgreSQL documentation
 -->
@@ -21,7 +21,7 @@ PostgreSQL documentation
  <refsynopsisdiv>
 <synopsis>
-VACUUM [ ( { FULL [ INPLACE ] | FREEZE | VERBOSE | ANALYZE } [, ...] ) ] [ <replaceable class="PARAMETER">table</replaceable> [ (<replaceable class="PARAMETER">column</replaceable> [, ...] ) ] ]
+VACUUM [ ( { FULL | FREEZE | VERBOSE | ANALYZE } [, ...] ) ] [ <replaceable class="PARAMETER">table</replaceable> [ (<replaceable class="PARAMETER">column</replaceable> [, ...] ) ] ]
 VACUUM [ FULL ] [ FREEZE ] [ VERBOSE ] [ <replaceable class="PARAMETER">table</replaceable> ]
 VACUUM [ FULL ] [ FREEZE ] [ VERBOSE ] ANALYZE [ <replaceable class="PARAMETER">table</replaceable> [ (<replaceable class="PARAMETER">column</replaceable> [, ...] ) ] ]
 </synopsis>
@@ -58,11 +58,12 @@ VACUUM [ FULL ] [ FREEZE ] [ VERBOSE ] ANALYZE [ <replaceable class="PARAMETER">
   space and makes it
   available for re-use.  This form of the command can operate in parallel
   with normal reading and writing of the table, as an exclusive lock
-  is not obtained.  <command>VACUUM
-  FULL</command> does more extensive processing, including moving of tuples
-  across blocks to try to compact the table to the minimum number of disk
-  blocks.  This form is much slower and requires an exclusive lock on each
-  table while it is being processed.
+  is not obtained.  However, extra space is not returned to the operating
+  system (in most cases); it's just kept available for re-use within the
+  same table.  <command>VACUUM FULL</command> rewrites the entire contents
+  of the table into a new disk file with no extra space, allowing unused
+  space to be returned to the operating system.  This form is much slower and
+  requires an exclusive lock on each table while it is being processed.
  </para>
  <para>
@@ -85,27 +86,10 @@ VACUUM [ FULL ] [ FREEZE ] [ VERBOSE ] ANALYZE [ <replaceable class="PARAMETER">
     <para>
      Selects <quote>full</quote> vacuum, which can reclaim more
      space, but takes much longer and exclusively locks the table.
-    </para>
-    <para>
-     For user tables, all table data and indexes are rewritten.  This
-     method requires extra disk space in which to write the new data,
-     and is generally useful when a significant amount of space needs
-     to be reclaimed from within the table.
-    </para>
-    <para>
-     For system tables, all table data and indexes are modified in
-     place to reclaim space.  This method may require less disk space
-     for the table data than <command>VACUUM FULL</command> on a
-     comparable user table, but the indexes will grow which may
-     counteract that benefit.  Additionally, the operation is often
-     slower than <command>VACUUM FULL</command> on a comparable user
-     table.
-    </para>
-    <para>
-     If <literal>FULL INPLACE</literal> is specified, the space is
-     reclaimed in the same manner as a system table, even if it is a
-     user table.  Specifying <literal>INPLACE</literal> explicitly is
-     rarely useful.
+     This method also requires extra disk space, since it writes a
+     new copy of the table and doesn't release the old copy until
+     the operation is complete.  Usually this should only be used when a
+     significant amount of space needs to be reclaimed from within the table.
     </para>
    </listitem>
   </varlistentry>
@@ -217,10 +201,7 @@ VACUUM [ FULL ] [ FREEZE ] [ VERBOSE ] ANALYZE [ <replaceable class="PARAMETER">
   or updated most of the rows in a table and would like the table to
   physically shrink to occupy less disk space and allow faster table
   scans.  <command>VACUUM FULL</command> will usually shrink the table
-  more than a plain <command>VACUUM</command> would.  The
-  <option>FULL</option> option does not shrink indexes; a periodic
-  <command>REINDEX</> is still recommended.  In fact, it is often faster
-  to drop all indexes, <command>VACUUM FULL</>, and recreate the indexes.
+  more than a plain <command>VACUUM</command> would.
  </para>
  <para>

doc/src/sgml/ref/vacuumdb.sgml

@@ -1,5 +1,5 @@
 <!--
-$PostgreSQL: pgsql/doc/src/sgml/ref/vacuumdb.sgml,v 1.49 2010/01/07 14:35:44 momjian Exp $
+$PostgreSQL: pgsql/doc/src/sgml/ref/vacuumdb.sgml,v 1.50 2010/02/08 04:33:51 tgl Exp $
 PostgreSQL documentation
 -->
@@ -24,7 +24,6 @@ PostgreSQL documentation
   <command>vacuumdb</command>
   <arg rep="repeat"><replaceable>connection-option</replaceable></arg>
   <group><arg>--full</arg><arg>-f</arg></group>
-  <group><arg>--inplace</arg><arg>-i</arg></group>
   <group><arg>--freeze</arg><arg>-F</arg></group>
   <group><arg>--verbose</arg><arg>-v</arg></group>
   <group><arg>--analyze</arg><arg>-z</arg></group>
@@ -38,7 +37,6 @@ PostgreSQL documentation
   <arg rep="repeat"><replaceable>connection-options</replaceable></arg>
   <group><arg>--all</arg><arg>-a</arg></group>
   <group><arg>--full</arg><arg>-f</arg></group>
-  <group><arg>--inplace</arg><arg>-i</arg></group>
   <group><arg>--freeze</arg><arg>-F</arg></group>
   <group><arg>--verbose</arg><arg>-v</arg></group>
   <group><arg>--analyze</arg><arg>-z</arg></group>
@@ -46,7 +44,6 @@ PostgreSQL documentation
 </cmdsynopsis>
 </refsynopsisdiv>
 <refsect1>
  <title>Description</title>
@@ -73,7 +70,6 @@ PostgreSQL documentation
  <para>
   <application>vacuumdb</application> accepts the following command-line arguments:
  <variablelist>
   <varlistentry>
    <term><option>-a</option></term>
@@ -131,16 +127,6 @@ PostgreSQL documentation
    </listitem>
   </varlistentry>
-  <varlistentry>
-   <term><option>-i</option></term>
-   <term><option>--inplace</option></term>
-   <listitem>
-    <para>
-     Perform <quote>full inplace</quote> vacuuming.
-    </para>
-   </listitem>
-  </varlistentry>
   <varlistentry>
    <term><option>-Z</option></term>
    <term><option>--analyze-only</option></term>
@@ -205,15 +191,13 @@ PostgreSQL documentation
  <para>
   <application>vacuumdb</application> also accepts
   the following command-line arguments for connection parameters:
  <variablelist>
   <varlistentry>
    <term><option>-h <replaceable class="parameter">host</replaceable></></term>
    <term><option>--host <replaceable class="parameter">host</replaceable></></term>
    <listitem>
     <para>
-     Specifies the host name of the machine on which the
-     server
+     Specifies the host name of the machine on which the server
      is running.  If the value begins with a slash, it is used
      as the directory for the Unix domain socket.
     </para>

src/backend/access/gin/README

@@ -1,4 +1,4 @@
-$PostgreSQL: pgsql/src/backend/access/gin/README,v 1.6 2008/07/08 03:25:42 neilc Exp $
+$PostgreSQL: pgsql/src/backend/access/gin/README,v 1.7 2010/02/08 04:33:52 tgl Exp $
 Gin for PostgreSQL
 ==================
@@ -98,13 +98,6 @@ We appreciate any comments, help and suggestions.
 * Teach optimizer/executor that GIN is intrinsically clustered. i.e., it
   always returns ItemPointer in ascending order.
 * Tweak gincostestimate.
-* GIN stores several ItemPointer to heap tuple, so VACUUM FULL produces
-  this warning message:
-  WARNING:  index "idx" contains 88395 row versions, but table contains
-  51812 row versions
-  HINT:  Rebuild the index with REINDEX.
-  **** Workaround added
 TODO
 ----

src/backend/access/gin/ginvacuum.c

@@ -8,7 +8,7 @@
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * IDENTIFICATION
- *   $PostgreSQL: pgsql/src/backend/access/gin/ginvacuum.c,v 1.32 2010/01/02 16:57:33 momjian Exp $
+ *   $PostgreSQL: pgsql/src/backend/access/gin/ginvacuum.c,v 1.33 2010/02/08 04:33:52 tgl Exp $
  *-------------------------------------------------------------------------
  */
@@ -745,12 +745,8 @@ ginvacuumcleanup(PG_FUNCTION_ARGS)
     stats->estimated_count = info->estimated_count;
     /*
-     * If vacuum full, we already have exclusive lock on the index. Otherwise,
-     * need lock unless it's local to this backend.
+     * Need lock unless it's local to this backend.
      */
-    if (info->vacuum_full)
-        needLock = false;
-    else
-        needLock = !RELATION_IS_LOCAL(index);
+    needLock = !RELATION_IS_LOCAL(index);
     if (needLock)
@@ -785,15 +781,6 @@ ginvacuumcleanup(PG_FUNCTION_ARGS)
     }
     lastBlock = npages - 1;
-    if (info->vacuum_full && lastBlock > lastFilledBlock)
-    {
-        /* try to truncate index */
-        RelationTruncate(index, lastFilledBlock + 1);
-        stats->pages_removed = lastBlock - lastFilledBlock;
-        totFreePages = totFreePages - stats->pages_removed;
-    }
     /* Finally, vacuum the FSM */
     IndexFreeSpaceMapVacuum(info->index);

src/backend/access/gist/gistvacuum.c

@@ -8,7 +8,7 @@
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * IDENTIFICATION
- *   $PostgreSQL: pgsql/src/backend/access/gist/gistvacuum.c,v 1.46 2010/01/02 16:57:34 momjian Exp $
+ *   $PostgreSQL: pgsql/src/backend/access/gist/gistvacuum.c,v 1.47 2010/02/08 04:33:52 tgl Exp $
  *
  *-------------------------------------------------------------------------
  */
@@ -29,7 +29,7 @@
 typedef struct GistBulkDeleteResult
 {
     IndexBulkDeleteResult std;      /* common state */
-    bool        needFullVacuum;
+    bool        needReindex;
 } GistBulkDeleteResult;
 typedef struct
@@ -496,12 +496,8 @@ gistVacuumUpdate(GistVacuum *gv, BlockNumber blkno, bool needunion)
 }
 /*
- * For usual vacuum just update FSM, for full vacuum
- * reforms parent tuples if some of childs was deleted or changed,
- * update invalid tuples (they can exist from last crash recovery only),
- * tries to get smaller index
+ * VACUUM cleanup: update FSM
  */
 Datum
 gistvacuumcleanup(PG_FUNCTION_ARGS)
 {
@@ -533,46 +529,14 @@ gistvacuumcleanup(PG_FUNCTION_ARGS)
      */
     }
-    /* gistVacuumUpdate may cause hard work */
-    if (info->vacuum_full)
-    {
-        GistVacuum  gv;
-        ArrayTuple  res;
-        /* note: vacuum.c already acquired AccessExclusiveLock on index */
-        gv.index = rel;
-        initGISTstate(&(gv.giststate), rel);
-        gv.opCtx = createTempGistContext();
-        gv.result = stats;
-        gv.strategy = info->strategy;
-        /* walk through the entire index for update tuples */
-        res = gistVacuumUpdate(&gv, GIST_ROOT_BLKNO, false);
-        /* cleanup */
-        if (res.itup)
-        {
-            int         i;
-            for (i = 0; i < res.ituplen; i++)
-                pfree(res.itup[i]);
-            pfree(res.itup);
-        }
-        freeGISTstate(&(gv.giststate));
-        MemoryContextDelete(gv.opCtx);
-    }
-    else if (stats->needFullVacuum)
+    if (stats->needReindex)
         ereport(NOTICE,
                 (errmsg("index \"%s\" needs VACUUM FULL or REINDEX to finish crash recovery",
                         RelationGetRelationName(rel))));
     /*
-     * If vacuum full, we already have exclusive lock on the index. Otherwise,
-     * need lock unless it's local to this backend.
+     * Need lock unless it's local to this backend.
      */
-    if (info->vacuum_full)
-        needLock = false;
-    else
-        needLock = !RELATION_IS_LOCAL(rel);
+    needLock = !RELATION_IS_LOCAL(rel);
     /* try to find deleted pages */
@@ -606,14 +570,6 @@ gistvacuumcleanup(PG_FUNCTION_ARGS)
     }
     lastBlock = npages - 1;
-    if (info->vacuum_full && lastFilledBlock < lastBlock)
-    {                           /* try to truncate index */
-        RelationTruncate(rel, lastFilledBlock + 1);
-        stats->std.pages_removed = lastBlock - lastFilledBlock;
-        totFreePages = totFreePages - stats->std.pages_removed;
-    }
     /* Finally, vacuum the FSM */
     IndexFreeSpaceMapVacuum(info->index);
@@ -799,7 +755,7 @@ gistbulkdelete(PG_FUNCTION_ARGS)
             stack->next = ptr;
             if (GistTupleIsInvalid(idxtuple))
-                stats->needFullVacuum = true;
+                stats->needReindex = true;
         }
     }

src/backend/access/heap/README.HOT

@@ -1,4 +1,4 @@
-$PostgreSQL: pgsql/src/backend/access/heap/README.HOT,v 1.4 2008/10/02 20:59:31 momjian Exp $
+$PostgreSQL: pgsql/src/backend/access/heap/README.HOT,v 1.5 2010/02/08 04:33:52 tgl Exp $
 Heap Only Tuples (HOT)
 ======================
@@ -255,27 +255,6 @@ dead heap-only tuples, and cleans up any dead line pointers as if they were
 regular dead tuples.
-VACUUM FULL
------------
-VACUUM FULL performs an extra operation of collapsing out redirecting line
-pointers, by moving the first non-DEAD tuple of each HOT chain to the root
-position and clearing its heap-only-tuple flag.  This effectively changes
-the user-visible CTID of that tuple.  This would be completely unsafe
-during normal concurrent operation, but since VACUUM FULL takes full
-exclusive lock on the table, it should be OK.  (Note that VACUUM FULL has
-always felt free to change tuples' CTIDs by moving them across pages.)
-Eliminating redirection links means that the main body of VACUUM FULL
-doesn't have to deal with them, which seems a good thing since VACUUM FULL
-is horrendously complex already.
-When VACUUM FULL tries to move tuple chains, it does not distinguish regular
-and heap-only tuples, but just moves both types the same.  This is OK because
-it will move the entire non-DEAD tail of an update chain and remove index
-entries for each item moved.  At worst, we'll uselessly search for index
-entries matching the heap-only tuples included in the move.
 Statistics
 ----------

src/backend/access/heap/heapam.c

@@ -8,7 +8,7 @@
  *
  *
  * IDENTIFICATION
- *   $PostgreSQL: pgsql/src/backend/access/heap/heapam.c,v 1.285 2010/02/03 10:01:29 heikki Exp $
+ *   $PostgreSQL: pgsql/src/backend/access/heap/heapam.c,v 1.286 2010/02/08 04:33:52 tgl Exp $
  *
  *
  * INTERFACE ROUTINES
@@ -79,7 +79,7 @@ static HeapScanDesc heap_beginscan_internal(Relation relation,
                 bool allow_strat, bool allow_sync,
                 bool is_bitmapscan);
 static XLogRecPtr log_heap_update(Relation reln, Buffer oldbuf,
-          ItemPointerData from, Buffer newbuf, HeapTuple newtup, bool move,
+          ItemPointerData from, Buffer newbuf, HeapTuple newtup,
           bool all_visible_cleared, bool new_all_visible_cleared);
 static bool HeapSatisfiesHOTUpdate(Relation relation, Bitmapset *hot_attrs,
                    HeapTuple oldtup, HeapTuple newtup);
@@ -2785,7 +2785,7 @@ l2:
     if (!relation->rd_istemp)
     {
         XLogRecPtr  recptr = log_heap_update(relation, buffer, oldtup.t_self,
-                                             newbuf, heaptup, false,
+                                             newbuf, heaptup,
                                              all_visible_cleared,
                                              all_visible_cleared_new);
@@ -3664,9 +3664,13 @@ recheck_xmax:
     }
     /*
-     * Although xvac per se could only be set by VACUUM, it shares physical
-     * storage space with cmax, and so could be wiped out by someone setting
-     * xmax.  Hence recheck after changing lock, same as for xmax itself.
+     * Although xvac per se could only be set by old-style VACUUM FULL, it
+     * shares physical storage space with cmax, and so could be wiped out by
+     * someone setting xmax.  Hence recheck after changing lock, same as for
+     * xmax itself.
+     *
+     * Old-style VACUUM FULL is gone, but we have to keep this code as long
+     * as we support having MOVED_OFF/MOVED_IN tuples in the database.
      */
 recheck_xvac:
     if (tuple->t_infomask & HEAP_MOVED)
@@ -3785,8 +3789,7 @@ HeapTupleHeaderAdvanceLatestRemovedXid(HeapTupleHeader tuple,
     TransactionId xmax = HeapTupleHeaderGetXmax(tuple);
     TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
-    if (tuple->t_infomask & HEAP_MOVED_OFF ||
-        tuple->t_infomask & HEAP_MOVED_IN)
+    if (tuple->t_infomask & HEAP_MOVED)
     {
         if (TransactionIdPrecedes(*latestRemovedXid, xvac))
             *latestRemovedXid = xvac;
@@ -3844,7 +3847,7 @@ log_heap_clean(Relation reln, Buffer buffer,
                OffsetNumber *redirected, int nredirected,
                OffsetNumber *nowdead, int ndead,
                OffsetNumber *nowunused, int nunused,
-               TransactionId latestRemovedXid, bool redirect_move)
+               TransactionId latestRemovedXid)
 {
     xl_heap_clean xlrec;
     uint8       info;
@@ -3915,7 +3918,7 @@ log_heap_clean(Relation reln, Buffer buffer,
     rdata[3].buffer_std = true;
     rdata[3].next = NULL;
-    info = redirect_move ? XLOG_HEAP2_CLEAN_MOVE : XLOG_HEAP2_CLEAN;
+    info = XLOG_HEAP2_CLEAN;
     recptr = XLogInsert(RM_HEAP2_ID, info, rdata);
     return recptr;
@@ -3970,23 +3973,11 @@ log_heap_freeze(Relation reln, Buffer buffer,
  */
 static XLogRecPtr
 log_heap_update(Relation reln, Buffer oldbuf, ItemPointerData from,
-                Buffer newbuf, HeapTuple newtup, bool move,
+                Buffer newbuf, HeapTuple newtup,
                 bool all_visible_cleared, bool new_all_visible_cleared)
 {
-    /*
-     * Note: xlhdr is declared to have adequate size and correct alignment for
-     * an xl_heap_header.  However the two tids, if present at all, will be
-     * packed in with no wasted space after the xl_heap_header; they aren't
-     * necessarily aligned as implied by this struct declaration.
-     */
-    struct
-    {
-        xl_heap_header hdr;
-        TransactionId tid1;
-        TransactionId tid2;
-    } xlhdr;
-    int         hsize = SizeOfHeapHeader;
     xl_heap_update xlrec;
+    xl_heap_header xlhdr;
     uint8       info;
     XLogRecPtr  recptr;
     XLogRecData rdata[4];
@@ -3995,12 +3986,7 @@ log_heap_update(Relation reln, Buffer oldbuf, ItemPointerData from,
     /* Caller should not call me on a temp relation */
     Assert(!reln->rd_istemp);
-    if (move)
-    {
-        Assert(!HeapTupleIsHeapOnly(newtup));
-        info = XLOG_HEAP_MOVE;
-    }
-    else if (HeapTupleIsHeapOnly(newtup))
+    if (HeapTupleIsHeapOnly(newtup))
         info = XLOG_HEAP_HOT_UPDATE;
     else
         info = XLOG_HEAP_UPDATE;
@@ -4022,30 +4008,16 @@ log_heap_update(Relation reln, Buffer oldbuf, ItemPointerData from,
     rdata[1].buffer_std = true;
     rdata[1].next = &(rdata[2]);
-    xlhdr.hdr.t_infomask2 = newtup->t_data->t_infomask2;
-    xlhdr.hdr.t_infomask = newtup->t_data->t_infomask;
-    xlhdr.hdr.t_hoff = newtup->t_data->t_hoff;
-    if (move)                   /* remember xmax & xmin */
-    {
-        TransactionId xid[2];   /* xmax, xmin */
-        if (newtup->t_data->t_infomask & (HEAP_XMAX_INVALID | HEAP_IS_LOCKED))
-            xid[0] = InvalidTransactionId;
-        else
-            xid[0] = HeapTupleHeaderGetXmax(newtup->t_data);
-        xid[1] = HeapTupleHeaderGetXmin(newtup->t_data);
-        memcpy((char *) &xlhdr + hsize,
-               (char *) xid,
-               2 * sizeof(TransactionId));
-        hsize += 2 * sizeof(TransactionId);
-    }
+    xlhdr.t_infomask2 = newtup->t_data->t_infomask2;
+    xlhdr.t_infomask = newtup->t_data->t_infomask;
+    xlhdr.t_hoff = newtup->t_data->t_hoff;
     /*
      * As with insert records, we need not store the rdata[2] segment if we
      * decide to store the whole buffer instead.
      */
     rdata[2].data = (char *) &xlhdr;
-    rdata[2].len = hsize;
+    rdata[2].len = SizeOfHeapHeader;
     rdata[2].buffer = newbuf;
     rdata[2].buffer_std = true;
     rdata[2].next = &(rdata[3]);
@@ -4070,19 +4042,6 @@ log_heap_update(Relation reln, Buffer oldbuf, ItemPointerData from,
return recptr; return recptr;
} }
/*
* Perform XLogInsert for a heap-move operation. Caller must already
* have modified the buffers and marked them dirty.
*/
XLogRecPtr
log_heap_move(Relation reln, Buffer oldbuf, ItemPointerData from,
Buffer newbuf, HeapTuple newtup,
bool all_visible_cleared, bool new_all_visible_cleared)
{
return log_heap_update(reln, oldbuf, from, newbuf, newtup, true,
all_visible_cleared, new_all_visible_cleared);
}
/* /*
* Perform XLogInsert of a HEAP_NEWPAGE record to WAL. Caller is responsible * Perform XLogInsert of a HEAP_NEWPAGE record to WAL. Caller is responsible
* for writing the page to disk after calling this routine. * for writing the page to disk after calling this routine.
@@ -4149,10 +4108,10 @@ heap_xlog_cleanup_info(XLogRecPtr lsn, XLogRecord *record)
} }
/* /*
* Handles CLEAN and CLEAN_MOVE record types * Handles HEAP_CLEAN record type
*/ */
static void static void
heap_xlog_clean(XLogRecPtr lsn, XLogRecord *record, bool clean_move) heap_xlog_clean(XLogRecPtr lsn, XLogRecord *record)
{ {
xl_heap_clean *xlrec = (xl_heap_clean *) XLogRecGetData(record); xl_heap_clean *xlrec = (xl_heap_clean *) XLogRecGetData(record);
Buffer buffer; Buffer buffer;
@@ -4171,7 +4130,8 @@ heap_xlog_clean(XLogRecPtr lsn, XLogRecord *record, bool clean_move)
* no queries running for which the removed tuples are still visible. * no queries running for which the removed tuples are still visible.
*/ */
if (InHotStandby) if (InHotStandby)
ResolveRecoveryConflictWithSnapshot(xlrec->latestRemovedXid, xlrec->node); ResolveRecoveryConflictWithSnapshot(xlrec->latestRemovedXid,
xlrec->node);
RestoreBkpBlocks(lsn, record, true); RestoreBkpBlocks(lsn, record, true);
@@ -4203,8 +4163,7 @@ heap_xlog_clean(XLogRecPtr lsn, XLogRecord *record, bool clean_move)
heap_page_prune_execute(buffer, heap_page_prune_execute(buffer,
redirected, nredirected, redirected, nredirected,
nowdead, ndead, nowdead, ndead,
nowunused, nunused, nowunused, nunused);
clean_move);
freespace = PageGetHeapFreeSpace(page); /* needed to update FSM below */ freespace = PageGetHeapFreeSpace(page); /* needed to update FSM below */
@@ -4489,10 +4448,10 @@ heap_xlog_insert(XLogRecPtr lsn, XLogRecord *record)
} }
/* /*
* Handles UPDATE, HOT_UPDATE & MOVE * Handles UPDATE and HOT_UPDATE
*/ */
static void static void
heap_xlog_update(XLogRecPtr lsn, XLogRecord *record, bool move, bool hot_update) heap_xlog_update(XLogRecPtr lsn, XLogRecord *record, bool hot_update)
{ {
xl_heap_update *xlrec = (xl_heap_update *) XLogRecGetData(record); xl_heap_update *xlrec = (xl_heap_update *) XLogRecGetData(record);
Buffer buffer; Buffer buffer;
@@ -4558,19 +4517,6 @@ heap_xlog_update(XLogRecPtr lsn, XLogRecord *record, bool move, bool hot_update)
htup = (HeapTupleHeader) PageGetItem(page, lp); htup = (HeapTupleHeader) PageGetItem(page, lp);
if (move)
{
htup->t_infomask &= ~(HEAP_XMIN_COMMITTED |
HEAP_XMIN_INVALID |
HEAP_MOVED_IN);
htup->t_infomask |= HEAP_MOVED_OFF;
HeapTupleHeaderClearHotUpdated(htup);
HeapTupleHeaderSetXvac(htup, record->xl_xid);
/* Make sure there is no forward chain link in t_ctid */
htup->t_ctid = xlrec->target.tid;
}
else
{
htup->t_infomask &= ~(HEAP_XMAX_COMMITTED | htup->t_infomask &= ~(HEAP_XMAX_COMMITTED |
HEAP_XMAX_INVALID | HEAP_XMAX_INVALID |
HEAP_XMAX_IS_MULTI | HEAP_XMAX_IS_MULTI |
@@ -4584,7 +4530,6 @@ heap_xlog_update(XLogRecPtr lsn, XLogRecord *record, bool move, bool hot_update)
HeapTupleHeaderSetCmax(htup, FirstCommandId, false); HeapTupleHeaderSetCmax(htup, FirstCommandId, false);
/* Set forward chain link in t_ctid */ /* Set forward chain link in t_ctid */
htup->t_ctid = xlrec->newtid; htup->t_ctid = xlrec->newtid;
}
/* Mark the page as a candidate for pruning */ /* Mark the page as a candidate for pruning */
PageSetPrunable(page, record->xl_xid); PageSetPrunable(page, record->xl_xid);
@@ -4655,8 +4600,6 @@ newsame:;
elog(PANIC, "heap_update_redo: invalid max offset number"); elog(PANIC, "heap_update_redo: invalid max offset number");
hsize = SizeOfHeapUpdate + SizeOfHeapHeader; hsize = SizeOfHeapUpdate + SizeOfHeapHeader;
if (move)
hsize += (2 * sizeof(TransactionId));
newlen = record->xl_len - hsize; newlen = record->xl_len - hsize;
Assert(newlen <= MaxHeapTupleSize); Assert(newlen <= MaxHeapTupleSize);
@@ -4674,22 +4617,8 @@ newsame:;
htup->t_infomask = xlhdr.t_infomask; htup->t_infomask = xlhdr.t_infomask;
htup->t_hoff = xlhdr.t_hoff; htup->t_hoff = xlhdr.t_hoff;
if (move)
{
TransactionId xid[2]; /* xmax, xmin */
memcpy((char *) xid,
(char *) xlrec + SizeOfHeapUpdate + SizeOfHeapHeader,
2 * sizeof(TransactionId));
HeapTupleHeaderSetXmin(htup, xid[1]);
HeapTupleHeaderSetXmax(htup, xid[0]);
HeapTupleHeaderSetXvac(htup, record->xl_xid);
}
else
{
HeapTupleHeaderSetXmin(htup, record->xl_xid); HeapTupleHeaderSetXmin(htup, record->xl_xid);
HeapTupleHeaderSetCmin(htup, FirstCommandId); HeapTupleHeaderSetCmin(htup, FirstCommandId);
}
/* Make sure there is no forward chain link in t_ctid */ /* Make sure there is no forward chain link in t_ctid */
htup->t_ctid = xlrec->newtid; htup->t_ctid = xlrec->newtid;
@@ -4857,13 +4786,10 @@ heap_redo(XLogRecPtr lsn, XLogRecord *record)
heap_xlog_delete(lsn, record); heap_xlog_delete(lsn, record);
break; break;
case XLOG_HEAP_UPDATE: case XLOG_HEAP_UPDATE:
heap_xlog_update(lsn, record, false, false); heap_xlog_update(lsn, record, false);
break;
case XLOG_HEAP_MOVE:
heap_xlog_update(lsn, record, true, false);
break; break;
case XLOG_HEAP_HOT_UPDATE: case XLOG_HEAP_HOT_UPDATE:
heap_xlog_update(lsn, record, false, true); heap_xlog_update(lsn, record, true);
break; break;
case XLOG_HEAP_NEWPAGE: case XLOG_HEAP_NEWPAGE:
heap_xlog_newpage(lsn, record); heap_xlog_newpage(lsn, record);
@@ -4895,10 +4821,7 @@ heap2_redo(XLogRecPtr lsn, XLogRecord *record)
heap_xlog_freeze(lsn, record); heap_xlog_freeze(lsn, record);
break; break;
case XLOG_HEAP2_CLEAN: case XLOG_HEAP2_CLEAN:
heap_xlog_clean(lsn, record, false); heap_xlog_clean(lsn, record);
break;
case XLOG_HEAP2_CLEAN_MOVE:
heap_xlog_clean(lsn, record, true);
break; break;
case XLOG_HEAP2_CLEANUP_INFO: case XLOG_HEAP2_CLEANUP_INFO:
heap_xlog_cleanup_info(lsn, record); heap_xlog_cleanup_info(lsn, record);
@@ -4953,19 +4876,6 @@ heap_desc(StringInfo buf, uint8 xl_info, char *rec)
ItemPointerGetBlockNumber(&(xlrec->newtid)), ItemPointerGetBlockNumber(&(xlrec->newtid)),
ItemPointerGetOffsetNumber(&(xlrec->newtid))); ItemPointerGetOffsetNumber(&(xlrec->newtid)));
} }
else if (info == XLOG_HEAP_MOVE)
{
xl_heap_update *xlrec = (xl_heap_update *) rec;
if (xl_info & XLOG_HEAP_INIT_PAGE)
appendStringInfo(buf, "move(init): ");
else
appendStringInfo(buf, "move: ");
out_target(buf, &(xlrec->target));
appendStringInfo(buf, "; new %u/%u",
ItemPointerGetBlockNumber(&(xlrec->newtid)),
ItemPointerGetOffsetNumber(&(xlrec->newtid)));
}
else if (info == XLOG_HEAP_HOT_UPDATE) else if (info == XLOG_HEAP_HOT_UPDATE)
{ {
xl_heap_update *xlrec = (xl_heap_update *) rec; xl_heap_update *xlrec = (xl_heap_update *) rec;
@@ -5037,15 +4947,6 @@ heap2_desc(StringInfo buf, uint8 xl_info, char *rec)
xlrec->node.relNode, xlrec->block, xlrec->node.relNode, xlrec->block,
xlrec->latestRemovedXid); xlrec->latestRemovedXid);
} }
else if (info == XLOG_HEAP2_CLEAN_MOVE)
{
xl_heap_clean *xlrec = (xl_heap_clean *) rec;
appendStringInfo(buf, "clean_move: rel %u/%u/%u; blk %u remxid %u",
xlrec->node.spcNode, xlrec->node.dbNode,
xlrec->node.relNode, xlrec->block,
xlrec->latestRemovedXid);
}
else if (info == XLOG_HEAP2_CLEANUP_INFO) else if (info == XLOG_HEAP2_CLEANUP_INFO)
{ {
xl_heap_cleanup_info *xlrec = (xl_heap_cleanup_info *) rec; xl_heap_cleanup_info *xlrec = (xl_heap_cleanup_info *) rec;

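The heapam.c hunks above collapse the old three-way UPDATE / HOT_UPDATE / MOVE record dispatch into a single HOT-vs-plain distinction. A minimal sketch of the surviving logic, using toy stand-in names (the `TOY_*` enum and both function names are illustrative, not real PostgreSQL symbols):

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative stand-ins for the XLOG_HEAP_UPDATE / XLOG_HEAP_HOT_UPDATE
 * info codes; the real values live in the heapam headers. */
typedef enum { TOY_HEAP_UPDATE, TOY_HEAP_HOT_UPDATE } ToyUpdateInfo;

/* What log_heap_update now decides: record type depends only on whether
 * the new tuple is heap-only (HOT).  The XLOG_HEAP_MOVE branch is gone. */
static ToyUpdateInfo choose_update_info(bool new_tuple_is_heap_only)
{
    return new_tuple_is_heap_only ? TOY_HEAP_HOT_UPDATE : TOY_HEAP_UPDATE;
}

/* What heap_xlog_update now needs from its caller: a single hot_update
 * flag, recovered trivially from the record type. */
static bool redo_update_is_hot(ToyUpdateInfo info)
{
    return info == TOY_HEAP_HOT_UPDATE;
}
```

The point of the sketch is that once MOVE is gone, the redo side no longer needs the `move` parameter or the xmax/xmin payload that rode along with move records.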
@@ -8,7 +8,7 @@
* *
* *
* IDENTIFICATION * IDENTIFICATION
* $PostgreSQL: pgsql/src/backend/access/heap/pruneheap.c,v 1.20 2010/01/02 16:57:34 momjian Exp $ * $PostgreSQL: pgsql/src/backend/access/heap/pruneheap.c,v 1.21 2010/02/08 04:33:53 tgl Exp $
* *
*------------------------------------------------------------------------- *-------------------------------------------------------------------------
*/ */
@@ -21,7 +21,6 @@
#include "pgstat.h" #include "pgstat.h"
#include "storage/bufmgr.h" #include "storage/bufmgr.h"
#include "storage/off.h" #include "storage/off.h"
#include "utils/inval.h"
#include "utils/rel.h" #include "utils/rel.h"
#include "utils/tqual.h" #include "utils/tqual.h"
@@ -46,8 +45,7 @@ typedef struct
static int heap_prune_chain(Relation relation, Buffer buffer, static int heap_prune_chain(Relation relation, Buffer buffer,
OffsetNumber rootoffnum, OffsetNumber rootoffnum,
TransactionId OldestXmin, TransactionId OldestXmin,
PruneState *prstate, PruneState *prstate);
bool redirect_move);
static void heap_prune_record_prunable(PruneState *prstate, TransactionId xid); static void heap_prune_record_prunable(PruneState *prstate, TransactionId xid);
static void heap_prune_record_redirect(PruneState *prstate, static void heap_prune_record_redirect(PruneState *prstate,
OffsetNumber offnum, OffsetNumber rdoffnum); OffsetNumber offnum, OffsetNumber rdoffnum);
@@ -123,8 +121,8 @@ heap_page_prune_opt(Relation relation, Buffer buffer, TransactionId OldestXmin)
*/ */
if (PageIsFull(page) || PageGetHeapFreeSpace(page) < minfree) if (PageIsFull(page) || PageGetHeapFreeSpace(page) < minfree)
{ {
/* OK to prune (though not to remove redirects) */ /* OK to prune */
(void) heap_page_prune(relation, buffer, OldestXmin, false, true); (void) heap_page_prune(relation, buffer, OldestXmin, true);
} }
/* And release buffer lock */ /* And release buffer lock */
@@ -141,14 +139,6 @@ heap_page_prune_opt(Relation relation, Buffer buffer, TransactionId OldestXmin)
* OldestXmin is the cutoff XID used to distinguish whether tuples are DEAD * OldestXmin is the cutoff XID used to distinguish whether tuples are DEAD
* or RECENTLY_DEAD (see HeapTupleSatisfiesVacuum). * or RECENTLY_DEAD (see HeapTupleSatisfiesVacuum).
* *
* If redirect_move is set, we remove redirecting line pointers by
* updating the root line pointer to point directly to the first non-dead
* tuple in the chain. NOTE: eliminating the redirect changes the first
* tuple's effective CTID, and is therefore unsafe except within VACUUM FULL.
* The only reason we support this capability at all is that by using it,
* VACUUM FULL need not cope with LP_REDIRECT items at all; which seems a
* good thing since VACUUM FULL is overly complicated already.
*
* If report_stats is true then we send the number of reclaimed heap-only * If report_stats is true then we send the number of reclaimed heap-only
* tuples to pgstats. (This must be FALSE during vacuum, since vacuum will * tuples to pgstats. (This must be FALSE during vacuum, since vacuum will
* send its own new total to pgstats, and we don't want this delta applied * send its own new total to pgstats, and we don't want this delta applied
@@ -158,7 +148,7 @@ heap_page_prune_opt(Relation relation, Buffer buffer, TransactionId OldestXmin)
*/ */
int int
heap_page_prune(Relation relation, Buffer buffer, TransactionId OldestXmin, heap_page_prune(Relation relation, Buffer buffer, TransactionId OldestXmin,
bool redirect_move, bool report_stats) bool report_stats)
{ {
int ndeleted = 0; int ndeleted = 0;
Page page = BufferGetPage(buffer); Page page = BufferGetPage(buffer);
@@ -172,17 +162,10 @@ heap_page_prune(Relation relation, Buffer buffer, TransactionId OldestXmin,
* logic as possible out of the critical section, and also ensures that * logic as possible out of the critical section, and also ensures that
* WAL replay will work the same as the normal case. * WAL replay will work the same as the normal case.
* *
* First, inform inval.c that upcoming CacheInvalidateHeapTuple calls are * First, initialize the new pd_prune_xid value to zero (indicating no
* nontransactional. * prunable tuples). If we find any tuples which may soon become
*/ * prunable, we will save the lowest relevant XID in new_prune_xid.
if (redirect_move) * Also initialize the rest of our working state.
BeginNonTransactionalInvalidation();
/*
* Initialize the new pd_prune_xid value to zero (indicating no prunable
* tuples). If we find any tuples which may soon become prunable, we will
* save the lowest relevant XID in new_prune_xid. Also initialize the rest
* of our working state.
*/ */
prstate.new_prune_xid = InvalidTransactionId; prstate.new_prune_xid = InvalidTransactionId;
prstate.latestRemovedXid = InvalidTransactionId; prstate.latestRemovedXid = InvalidTransactionId;
@@ -209,22 +192,9 @@ heap_page_prune(Relation relation, Buffer buffer, TransactionId OldestXmin,
/* Process this item or chain of items */ /* Process this item or chain of items */
ndeleted += heap_prune_chain(relation, buffer, offnum, ndeleted += heap_prune_chain(relation, buffer, offnum,
OldestXmin, OldestXmin,
&prstate, &prstate);
redirect_move);
} }
/*
* Send invalidation messages for any tuples we are about to move. It is
* safe to do this now, even though we could theoretically still fail
* before making the actual page update, because a useless cache
* invalidation doesn't hurt anything. Also, no one else can reload the
* tuples while we have exclusive buffer lock, so it's not too early to
* send the invals. This avoids sending the invals while inside the
* critical section, which is a good thing for robustness.
*/
if (redirect_move)
EndNonTransactionalInvalidation();
/* Any error while applying the changes is critical */ /* Any error while applying the changes is critical */
START_CRIT_SECTION(); START_CRIT_SECTION();
@@ -238,8 +208,7 @@ heap_page_prune(Relation relation, Buffer buffer, TransactionId OldestXmin,
heap_page_prune_execute(buffer, heap_page_prune_execute(buffer,
prstate.redirected, prstate.nredirected, prstate.redirected, prstate.nredirected,
prstate.nowdead, prstate.ndead, prstate.nowdead, prstate.ndead,
prstate.nowunused, prstate.nunused, prstate.nowunused, prstate.nunused);
redirect_move);
/* /*
* Update the page's pd_prune_xid field to either zero, or the lowest * Update the page's pd_prune_xid field to either zero, or the lowest
@@ -257,7 +226,7 @@ heap_page_prune(Relation relation, Buffer buffer, TransactionId OldestXmin,
MarkBufferDirty(buffer); MarkBufferDirty(buffer);
/* /*
* Emit a WAL HEAP_CLEAN or HEAP_CLEAN_MOVE record showing what we did * Emit a WAL HEAP_CLEAN record showing what we did
*/ */
if (!relation->rd_istemp) if (!relation->rd_istemp)
{ {
@@ -267,7 +236,7 @@ heap_page_prune(Relation relation, Buffer buffer, TransactionId OldestXmin,
prstate.redirected, prstate.nredirected, prstate.redirected, prstate.nredirected,
prstate.nowdead, prstate.ndead, prstate.nowdead, prstate.ndead,
prstate.nowunused, prstate.nunused, prstate.nowunused, prstate.nunused,
prstate.latestRemovedXid, redirect_move); prstate.latestRemovedXid);
PageSetLSN(BufferGetPage(buffer), recptr); PageSetLSN(BufferGetPage(buffer), recptr);
PageSetTLI(BufferGetPage(buffer), ThisTimeLineID); PageSetTLI(BufferGetPage(buffer), ThisTimeLineID);
@@ -349,16 +318,12 @@ heap_page_prune(Relation relation, Buffer buffer, TransactionId OldestXmin,
* LP_DEAD state are added to nowdead[]; and items to be set to LP_UNUSED * LP_DEAD state are added to nowdead[]; and items to be set to LP_UNUSED
* state are added to nowunused[]. * state are added to nowunused[].
* *
* If redirect_move is true, we intend to get rid of redirecting line pointers,
* not just make redirection entries.
*
* Returns the number of tuples (to be) deleted from the page. * Returns the number of tuples (to be) deleted from the page.
*/ */
static int static int
heap_prune_chain(Relation relation, Buffer buffer, OffsetNumber rootoffnum, heap_prune_chain(Relation relation, Buffer buffer, OffsetNumber rootoffnum,
TransactionId OldestXmin, TransactionId OldestXmin,
PruneState *prstate, PruneState *prstate)
bool redirect_move)
{ {
int ndeleted = 0; int ndeleted = 0;
Page dp = (Page) BufferGetPage(buffer); Page dp = (Page) BufferGetPage(buffer);
@@ -366,7 +331,6 @@ heap_prune_chain(Relation relation, Buffer buffer, OffsetNumber rootoffnum,
ItemId rootlp; ItemId rootlp;
HeapTupleHeader htup; HeapTupleHeader htup;
OffsetNumber latestdead = InvalidOffsetNumber, OffsetNumber latestdead = InvalidOffsetNumber,
redirect_target = InvalidOffsetNumber,
maxoff = PageGetMaxOffsetNumber(dp), maxoff = PageGetMaxOffsetNumber(dp),
offnum; offnum;
OffsetNumber chainitems[MaxHeapTuplesPerPage]; OffsetNumber chainitems[MaxHeapTuplesPerPage];
@@ -592,12 +556,7 @@ heap_prune_chain(Relation relation, Buffer buffer, OffsetNumber rootoffnum,
if (i >= nchain) if (i >= nchain)
heap_prune_record_dead(prstate, rootoffnum); heap_prune_record_dead(prstate, rootoffnum);
else else
{
heap_prune_record_redirect(prstate, rootoffnum, chainitems[i]); heap_prune_record_redirect(prstate, rootoffnum, chainitems[i]);
/* If the redirection will be a move, need more processing */
if (redirect_move)
redirect_target = chainitems[i];
}
} }
else if (nchain < 2 && ItemIdIsRedirected(rootlp)) else if (nchain < 2 && ItemIdIsRedirected(rootlp))
{ {
@@ -610,42 +569,6 @@ heap_prune_chain(Relation relation, Buffer buffer, OffsetNumber rootoffnum,
*/ */
heap_prune_record_dead(prstate, rootoffnum); heap_prune_record_dead(prstate, rootoffnum);
} }
else if (redirect_move && ItemIdIsRedirected(rootlp))
{
/*
* If we desire to eliminate LP_REDIRECT items by moving tuples, make
* a redirection entry for each redirected root item; this will cause
* heap_page_prune_execute to actually do the move. (We get here only
* when there are no DEAD tuples in the chain; otherwise the
* redirection entry was made above.)
*/
heap_prune_record_redirect(prstate, rootoffnum, chainitems[1]);
redirect_target = chainitems[1];
}
/*
* If we are going to implement a redirect by moving tuples, we have to
* issue a cache invalidation against the redirection target tuple,
* because its CTID will be effectively changed by the move. Note that
* CacheInvalidateHeapTuple only queues the request, it doesn't send it;
* if we fail before reaching EndNonTransactionalInvalidation, nothing
* happens and no harm is done.
*/
if (OffsetNumberIsValid(redirect_target))
{
ItemId firstlp = PageGetItemId(dp, redirect_target);
HeapTupleData firsttup;
Assert(ItemIdIsNormal(firstlp));
/* Set up firsttup to reference the tuple at its existing CTID */
firsttup.t_data = (HeapTupleHeader) PageGetItem(dp, firstlp);
firsttup.t_len = ItemIdGetLength(firstlp);
ItemPointerSet(&firsttup.t_self,
BufferGetBlockNumber(buffer),
redirect_target);
firsttup.t_tableOid = RelationGetRelid(relation);
CacheInvalidateHeapTuple(relation, &firsttup);
}
return ndeleted; return ndeleted;
} }
@@ -715,14 +638,13 @@ void
heap_page_prune_execute(Buffer buffer, heap_page_prune_execute(Buffer buffer,
OffsetNumber *redirected, int nredirected, OffsetNumber *redirected, int nredirected,
OffsetNumber *nowdead, int ndead, OffsetNumber *nowdead, int ndead,
OffsetNumber *nowunused, int nunused, OffsetNumber *nowunused, int nunused)
bool redirect_move)
{ {
Page page = (Page) BufferGetPage(buffer); Page page = (Page) BufferGetPage(buffer);
OffsetNumber *offnum; OffsetNumber *offnum;
int i; int i;
/* Update all redirected or moved line pointers */ /* Update all redirected line pointers */
offnum = redirected; offnum = redirected;
for (i = 0; i < nredirected; i++) for (i = 0; i < nredirected; i++)
{ {
@@ -730,31 +652,8 @@ heap_page_prune_execute(Buffer buffer,
OffsetNumber tooff = *offnum++; OffsetNumber tooff = *offnum++;
ItemId fromlp = PageGetItemId(page, fromoff); ItemId fromlp = PageGetItemId(page, fromoff);
if (redirect_move)
{
/* Physically move the "to" item to the "from" slot */
ItemId tolp = PageGetItemId(page, tooff);
HeapTupleHeader htup;
*fromlp = *tolp;
ItemIdSetUnused(tolp);
/*
* Change heap-only status of the tuple because after the line
* pointer manipulation, it's no longer a heap-only tuple, but is
* directly pointed to by index entries.
*/
Assert(ItemIdIsNormal(fromlp));
htup = (HeapTupleHeader) PageGetItem(page, fromlp);
Assert(HeapTupleHeaderIsHeapOnly(htup));
HeapTupleHeaderClearHeapOnly(htup);
}
else
{
/* Just insert a REDIRECT link at fromoff */
ItemIdSetRedirect(fromlp, tooff); ItemIdSetRedirect(fromlp, tooff);
} }
}
/* Update all now-dead line pointers */ /* Update all now-dead line pointers */
offnum = nowdead; offnum = nowdead;

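The pruneheap.c hunks above remove the `redirect_move` mode entirely: heap_page_prune_execute now only ever installs LP_REDIRECT links, never physically moves a tuple into the root slot. A toy model of the simplified execute step (the `Toy*` types and function name are illustrative; the real ItemIdData packs flag and offset into one 4-byte word):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Toy line-pointer states mirroring LP_UNUSED/LP_NORMAL/LP_REDIRECT/LP_DEAD. */
typedef enum { TOY_LP_UNUSED, TOY_LP_NORMAL, TOY_LP_REDIRECT, TOY_LP_DEAD } ToyLpFlag;
typedef struct { ToyLpFlag flag; uint16_t link; } ToyItemId;

/* Sketch of heap_page_prune_execute after this commit: each (from,to)
 * redirection pair just becomes a redirect link, so redirect targets keep
 * their CTIDs and no cache invalidation is ever required. */
static void toy_prune_execute(ToyItemId *lp,
                              const uint16_t redirected[][2], int nredirected,
                              const uint16_t *nowdead, int ndead,
                              const uint16_t *nowunused, int nunused)
{
    for (int i = 0; i < nredirected; i++)
    {
        lp[redirected[i][0]].flag = TOY_LP_REDIRECT;
        lp[redirected[i][0]].link = redirected[i][1];
    }
    for (int i = 0; i < ndead; i++)
        lp[nowdead[i]].flag = TOY_LP_DEAD;
    for (int i = 0; i < nunused; i++)
        lp[nowunused[i]].flag = TOY_LP_UNUSED;
}
```

Because the effective CTID of every surviving tuple is unchanged, the BeginNonTransactionalInvalidation / EndNonTransactionalInvalidation bracketing deleted above has nothing left to protect.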
@@ -1,4 +1,4 @@
$PostgreSQL: pgsql/src/backend/access/nbtree/README,v 1.21 2009/12/19 01:32:32 sriggs Exp $ $PostgreSQL: pgsql/src/backend/access/nbtree/README,v 1.22 2010/02/08 04:33:53 tgl Exp $
Btree Indexing Btree Indexing
============== ==============
@@ -171,9 +171,9 @@ We consider deleting an entire page from the btree only when it's become
completely empty of items. (Merging partly-full pages would allow better completely empty of items. (Merging partly-full pages would allow better
space reuse, but it seems impractical to move existing data items left or space reuse, but it seems impractical to move existing data items left or
right to make this happen --- a scan moving in the opposite direction right to make this happen --- a scan moving in the opposite direction
might miss the items if so. We could do it during VACUUM FULL, though.) might miss the items if so.) Also, we *never* delete the rightmost page
Also, we *never* delete the rightmost page on a tree level (this on a tree level (this restriction simplifies the traversal algorithms, as
restriction simplifies the traversal algorithms, as explained below). explained below).
To delete an empty page, we acquire write lock on its left sibling (if To delete an empty page, we acquire write lock on its left sibling (if
any), the target page itself, the right sibling (there must be one), and any), the target page itself, the right sibling (there must be one), and
@@ -266,8 +266,7 @@ transactions that were running at the time of deletion are dead; which is
overly strong, but is simple to implement within Postgres. When marked overly strong, but is simple to implement within Postgres. When marked
dead, a deleted page is labeled with the next-transaction counter value. dead, a deleted page is labeled with the next-transaction counter value.
VACUUM can reclaim the page for re-use when this transaction number is VACUUM can reclaim the page for re-use when this transaction number is
older than the oldest open transaction. (NOTE: VACUUM FULL can reclaim older than the oldest open transaction.
such pages immediately.)
Reclaiming a page doesn't actually change its state on disk --- we simply Reclaiming a page doesn't actually change its state on disk --- we simply
record it in the shared-memory free space map, from which it will be record it in the shared-memory free space map, from which it will be

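The nbtree README hunk above, and the nbtpage.c change to stamp deleted pages with `ReadNewTransactionId()` instead of `FrozenTransactionId`, both come down to one rule: a deleted page is recyclable only once its stamped XID is older than the oldest open transaction. A simplified sketch of that test (function names are illustrative; the real code uses TransactionIdPrecedes, which also special-cases the permanent XIDs below FirstNormalTransactionId):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Simplified TransactionIdPrecedes: 32-bit XIDs compare modulo 2^31 so the
 * test keeps working across XID wraparound. */
static bool xid_precedes(uint32_t a, uint32_t b)
{
    return (int32_t) (a - b) < 0;
}

/* With the VACUUM FULL fast path (FrozenTransactionId stamping) removed,
 * every deleted btree page waits until its btpo.xact is older than the
 * oldest transaction still running before it can be reused. */
static bool deleted_page_is_recyclable(uint32_t btpo_xact,
                                       uint32_t oldest_running_xid)
{
    return xid_precedes(btpo_xact, oldest_running_xid);
}
```

Stamping with `FrozenTransactionId` made the page recyclable immediately, which was only safe because VACUUM FULL held an exclusive lock; the concurrent-vacuum rule above is the one that survives.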
@@ -9,7 +9,7 @@
* *
* *
* IDENTIFICATION * IDENTIFICATION
* $PostgreSQL: pgsql/src/backend/access/nbtree/nbtpage.c,v 1.117 2010/02/01 13:40:28 sriggs Exp $ * $PostgreSQL: pgsql/src/backend/access/nbtree/nbtpage.c,v 1.118 2010/02/08 04:33:53 tgl Exp $
* *
* NOTES * NOTES
* Postgres btree pages look like ordinary relation pages. The opaque * Postgres btree pages look like ordinary relation pages. The opaque
@@ -877,7 +877,7 @@ _bt_parent_deletion_safe(Relation rel, BlockNumber target, BTStack stack)
* frequently. * frequently.
*/ */
int int
_bt_pagedel(Relation rel, Buffer buf, BTStack stack, bool vacuum_full) _bt_pagedel(Relation rel, Buffer buf, BTStack stack)
{ {
int result; int result;
BlockNumber target, BlockNumber target,
@@ -1207,14 +1207,13 @@ _bt_pagedel(Relation rel, Buffer buf, BTStack stack, bool vacuum_full)
/* /*
* Mark the page itself deleted. It can be recycled when all current * Mark the page itself deleted. It can be recycled when all current
* transactions are gone; or immediately if we're doing VACUUM FULL. * transactions are gone.
*/ */
page = BufferGetPage(buf); page = BufferGetPage(buf);
opaque = (BTPageOpaque) PageGetSpecialPointer(page); opaque = (BTPageOpaque) PageGetSpecialPointer(page);
opaque->btpo_flags &= ~BTP_HALF_DEAD; opaque->btpo_flags &= ~BTP_HALF_DEAD;
opaque->btpo_flags |= BTP_DELETED; opaque->btpo_flags |= BTP_DELETED;
opaque->btpo.xact = opaque->btpo.xact = ReadNewTransactionId();
vacuum_full ? FrozenTransactionId : ReadNewTransactionId();
/* And update the metapage, if needed */ /* And update the metapage, if needed */
if (BufferIsValid(metabuf)) if (BufferIsValid(metabuf))
@@ -1350,7 +1349,7 @@ _bt_pagedel(Relation rel, Buffer buf, BTStack stack, bool vacuum_full)
{ {
/* recursive call will release pbuf */ /* recursive call will release pbuf */
_bt_relbuf(rel, rbuf); _bt_relbuf(rel, rbuf);
result = _bt_pagedel(rel, pbuf, stack->bts_parent, vacuum_full) + 1; result = _bt_pagedel(rel, pbuf, stack->bts_parent) + 1;
_bt_relbuf(rel, buf); _bt_relbuf(rel, buf);
} }
else if (parent_one_child && rightsib_empty) else if (parent_one_child && rightsib_empty)
@@ -1358,7 +1357,7 @@ _bt_pagedel(Relation rel, Buffer buf, BTStack stack, bool vacuum_full)
_bt_relbuf(rel, pbuf); _bt_relbuf(rel, pbuf);
_bt_relbuf(rel, buf); _bt_relbuf(rel, buf);
/* recursive call will release rbuf */ /* recursive call will release rbuf */
result = _bt_pagedel(rel, rbuf, stack, vacuum_full) + 1; result = _bt_pagedel(rel, rbuf, stack) + 1;
} }
else else
{ {

@@ -12,7 +12,7 @@
* Portions Copyright (c) 1994, Regents of the University of California * Portions Copyright (c) 1994, Regents of the University of California
* *
* IDENTIFICATION * IDENTIFICATION
* $PostgreSQL: pgsql/src/backend/access/nbtree/nbtree.c,v 1.174 2010/01/02 16:57:35 momjian Exp $ * $PostgreSQL: pgsql/src/backend/access/nbtree/nbtree.c,v 1.175 2010/02/08 04:33:53 tgl Exp $
* *
*------------------------------------------------------------------------- *-------------------------------------------------------------------------
*/ */
@@ -579,12 +579,12 @@ btvacuumcleanup(PG_FUNCTION_ARGS)
IndexFreeSpaceMapVacuum(info->index); IndexFreeSpaceMapVacuum(info->index);
/* /*
* During a non-FULL vacuum it's quite possible for us to be fooled by * It's quite possible for us to be fooled by concurrent page splits into
* concurrent page splits into double-counting some index tuples, so * double-counting some index tuples, so disbelieve any total that exceeds
* disbelieve any total that exceeds the underlying heap's count ... if we * the underlying heap's count ... if we know that accurately. Otherwise
* know that accurately. Otherwise this might just make matters worse. * this might just make matters worse.
*/ */
if (!info->vacuum_full && !info->estimated_count) if (!info->estimated_count)
{ {
if (stats->num_index_tuples > info->num_heap_tuples) if (stats->num_index_tuples > info->num_heap_tuples)
stats->num_index_tuples = info->num_heap_tuples; stats->num_index_tuples = info->num_heap_tuples;
@@ -686,27 +686,6 @@ btvacuumscan(IndexVacuumInfo *info, IndexBulkDeleteResult *stats,
} }
} }
/*
* During VACUUM FULL, we truncate off any recyclable pages at the end of
* the index. In a normal vacuum it'd be unsafe to do this except by
* acquiring exclusive lock on the index and then rechecking all the
* pages; doesn't seem worth it.
*/
if (info->vacuum_full && vstate.lastUsedPage < num_pages - 1)
{
BlockNumber new_pages = vstate.lastUsedPage + 1;
/*
* Okay to truncate.
*/
RelationTruncate(rel, new_pages);
/* update statistics */
stats->pages_removed += num_pages - new_pages;
vstate.totFreePages -= (num_pages - new_pages);
num_pages = new_pages;
}
/* /*
* InHotStandby we need to scan right up to the end of the index for * InHotStandby we need to scan right up to the end of the index for
* correct locking, so we may need to write a WAL record for the final * correct locking, so we may need to write a WAL record for the final
@@ -963,26 +942,12 @@ restart:
MemoryContextReset(vstate->pagedelcontext); MemoryContextReset(vstate->pagedelcontext);
oldcontext = MemoryContextSwitchTo(vstate->pagedelcontext); oldcontext = MemoryContextSwitchTo(vstate->pagedelcontext);
ndel = _bt_pagedel(rel, buf, NULL, info->vacuum_full); ndel = _bt_pagedel(rel, buf, NULL);
/* count only this page, else may double-count parent */ /* count only this page, else may double-count parent */
if (ndel) if (ndel)
stats->pages_deleted++; stats->pages_deleted++;
/*
* During VACUUM FULL it's okay to recycle deleted pages immediately,
* since there can be no other transactions scanning the index. Note
* that we will only recycle the current page and not any parent pages
* that _bt_pagedel might have recursed to; this seems reasonable in
* the name of simplicity. (Trying to do otherwise would mean we'd
* have to sort the list of recyclable pages we're building.)
*/
if (ndel && info->vacuum_full)
{
RecordFreeIndexPage(rel, blkno);
vstate->totFreePages++;
}
MemoryContextSwitchTo(oldcontext); MemoryContextSwitchTo(oldcontext);
/* pagedel released buffer, so we shouldn't */ /* pagedel released buffer, so we shouldn't */
} }

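The btvacuumcleanup hunk above keeps only the concurrent-vacuum sanity clamp: page splits racing with the index scan can double-count tuples, so the heap's count serves as a ceiling whenever it is known accurately. A sketch of that adjustment (the function name is illustrative; the real code mutates `stats->num_index_tuples` in place):

```c
#include <assert.h>
#include <stdbool.h>

/* Every vacuum is now a concurrent ("lazy") vacuum, so the old
 * !info->vacuum_full guard is gone; only the estimated_count check
 * remains.  When the heap count is exact, never report more index
 * tuples than heap tuples. */
static double clamp_index_tuple_count(double num_index_tuples,
                                      double num_heap_tuples,
                                      bool heap_count_estimated)
{
    if (!heap_count_estimated && num_index_tuples > num_heap_tuples)
        return num_heap_tuples;
    return num_index_tuples;
}
```

When the heap count is itself an estimate, clamping could make matters worse, which is why the `estimated_count` test survives.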
@@ -8,7 +8,7 @@
* Portions Copyright (c) 1994, Regents of the University of California * Portions Copyright (c) 1994, Regents of the University of California
* *
* IDENTIFICATION * IDENTIFICATION
* $PostgreSQL: pgsql/src/backend/access/nbtree/nbtxlog.c,v 1.59 2010/01/29 17:10:05 sriggs Exp $ * $PostgreSQL: pgsql/src/backend/access/nbtree/nbtxlog.c,v 1.60 2010/02/08 04:33:53 tgl Exp $
* *
*------------------------------------------------------------------------- *-------------------------------------------------------------------------
*/ */
@@ -1079,8 +1079,8 @@ btree_xlog_cleanup(void)
Relation reln; Relation reln;
reln = CreateFakeRelcacheEntry(action->node); reln = CreateFakeRelcacheEntry(action->node);
if (_bt_pagedel(reln, buf, NULL, true) == 0) if (_bt_pagedel(reln, buf, NULL) == 0)
elog(PANIC, "btree_xlog_cleanup: _bt_pagdel failed"); elog(PANIC, "btree_xlog_cleanup: _bt_pagedel failed");
FreeFakeRelcacheEntry(reln); FreeFakeRelcacheEntry(reln);
} }
} }

@@ -10,7 +10,7 @@
* *
* *
* IDENTIFICATION * IDENTIFICATION
* $PostgreSQL: pgsql/src/backend/access/transam/xact.c,v 1.283 2010/02/07 20:48:09 tgl Exp $ * $PostgreSQL: pgsql/src/backend/access/transam/xact.c,v 1.284 2010/02/08 04:33:53 tgl Exp $
* *
*------------------------------------------------------------------------- *-------------------------------------------------------------------------
*/ */
@@ -881,11 +881,9 @@ AtSubStart_ResourceOwner(void)
* *
* Returns latest XID among xact and its children, or InvalidTransactionId * Returns latest XID among xact and its children, or InvalidTransactionId
* if the xact has no XID. (We compute that here just because it's easier.) * if the xact has no XID. (We compute that here just because it's easier.)
*
* This is exported only to support an ugly hack in VACUUM FULL.
*/ */
TransactionId static TransactionId
RecordTransactionCommit(bool isVacuumFull) RecordTransactionCommit(void)
{ {
TransactionId xid = GetTopTransactionIdIfAny(); TransactionId xid = GetTopTransactionIdIfAny();
bool markXidCommitted = TransactionIdIsValid(xid); bool markXidCommitted = TransactionIdIsValid(xid);
@@ -950,8 +948,6 @@ RecordTransactionCommit(bool isVacuumFull)
xlrec.xinfo = 0; xlrec.xinfo = 0;
if (RelcacheInitFileInval) if (RelcacheInitFileInval)
xlrec.xinfo |= XACT_COMPLETION_UPDATE_RELCACHE_FILE; xlrec.xinfo |= XACT_COMPLETION_UPDATE_RELCACHE_FILE;
if (isVacuumFull)
xlrec.xinfo |= XACT_COMPLETION_VACUUM_FULL;
if (forceSyncCommit) if (forceSyncCommit)
xlrec.xinfo |= XACT_COMPLETION_FORCE_SYNC_COMMIT; xlrec.xinfo |= XACT_COMPLETION_FORCE_SYNC_COMMIT;
@@ -1755,7 +1751,7 @@ CommitTransaction(void)
/* /*
* Here is where we really truly commit. * Here is where we really truly commit.
*/ */
latestXid = RecordTransactionCommit(false); latestXid = RecordTransactionCommit();
TRACE_POSTGRESQL_TRANSACTION_COMMIT(MyProc->lxid); TRACE_POSTGRESQL_TRANSACTION_COMMIT(MyProc->lxid);
@ -4374,28 +4370,23 @@ xact_redo_commit(xl_xact_commit *xlrec, TransactionId xid, XLogRecPtr lsn)
LWLockRelease(XidGenLock); LWLockRelease(XidGenLock);
} }
if (!InHotStandby || XactCompletionVacuumFull(xlrec)) if (!InHotStandby)
{ {
/* /*
* Mark the transaction committed in pg_clog. * Mark the transaction committed in pg_clog.
*
* If InHotStandby and this is the first commit of a VACUUM FULL INPLACE
* we perform only the actual commit to clog. Strangely, there are two
* commits that share the same xid for every VFI, so we need to skip
* some steps for the first commit. It's OK to repeat the clog update
* when we see the second commit on a VFI.
*/ */
TransactionIdCommitTree(xid, xlrec->nsubxacts, sub_xids); TransactionIdCommitTree(xid, xlrec->nsubxacts, sub_xids);
} }
else else
{ {
/* /*
* If a transaction completion record arrives that has as-yet unobserved * If a transaction completion record arrives that has as-yet
* subtransactions then this will not have been fully handled by the call * unobserved subtransactions then this will not have been fully
* to RecordKnownAssignedTransactionIds() in the main recovery loop in * handled by the call to RecordKnownAssignedTransactionIds() in the
* xlog.c. So we need to do bookkeeping again to cover that case. This is * main recovery loop in xlog.c. So we need to do bookkeeping again to
* confusing and it is easy to think this call is irrelevant, which has * cover that case. This is confusing and it is easy to think this
* happened three times in development already. Leave it in. * call is irrelevant, which has happened three times in development
* already. Leave it in.
*/ */
RecordKnownAssignedTransactionIds(max_xid); RecordKnownAssignedTransactionIds(max_xid);
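The xact.c hunks above drop the `isVacuumFull` argument and its `XACT_COMPLETION_VACUUM_FULL` bit; what survives in `RecordTransactionCommit` is plain OR-together flag packing. A minimal sketch of that pattern, with placeholder flag values rather than PostgreSQL's real `XACT_COMPLETION_*` definitions:

```c
#include <stdint.h>

/* Placeholder bit values -- illustrative only, not PostgreSQL's actual
 * XACT_COMPLETION_* definitions. */
#define DEMO_UPDATE_RELCACHE_FILE	(1U << 0)
#define DEMO_FORCE_SYNC_COMMIT		(1U << 1)

/* Build an xinfo-style bitmask the way RecordTransactionCommit does after
 * this commit: only the two surviving conditions are consulted. */
uint32_t
build_xinfo(int relcache_init_file_inval, int force_sync_commit)
{
	uint32_t	xinfo = 0;

	if (relcache_init_file_inval)
		xinfo |= DEMO_UPDATE_RELCACHE_FILE;
	if (force_sync_commit)
		xinfo |= DEMO_FORCE_SYNC_COMMIT;
	return xinfo;
}
```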

src/backend/access/transam/xlog.c

@@ -7,7 +7,7 @@
  * Portions Copyright (c) 1996-2010, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
- *	  $PostgreSQL: pgsql/src/backend/access/transam/xlog.c,v 1.367 2010/02/07 20:48:09 tgl Exp $
+ *	  $PostgreSQL: pgsql/src/backend/access/transam/xlog.c,v 1.368 2010/02/08 04:33:53 tgl Exp $
  *
  *-------------------------------------------------------------------------
  */
@@ -2106,32 +2106,6 @@ XLogBackgroundFlush(void)
 	END_CRIT_SECTION();
 }

-/*
- * Flush any previous asynchronously-committed transactions' commit records.
- *
- * NOTE: it is unwise to assume that this provides any strong guarantees.
- * In particular, because of the inexact LSN bookkeeping used by clog.c,
- * we cannot assume that hint bits will be settable for these transactions.
- */
-void
-XLogAsyncCommitFlush(void)
-{
-	XLogRecPtr	WriteRqstPtr;
-
-	/* use volatile pointer to prevent code rearrangement */
-	volatile XLogCtlData *xlogctl = XLogCtl;
-
-	/* There's no asynchronously committed transactions during recovery */
-	if (RecoveryInProgress())
-		return;
-
-	SpinLockAcquire(&xlogctl->info_lck);
-	WriteRqstPtr = xlogctl->asyncCommitLSN;
-	SpinLockRelease(&xlogctl->info_lck);
-
-	XLogFlush(WriteRqstPtr);
-}
-
 /*
  * Test whether XLOG data has been flushed up to (at least) the given position.
  *
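The deleted `XLogAsyncCommitFlush()` shows a common pattern worth noting: copy a shared LSN out under a short-lived lock, release the lock, then do the expensive flush work using only the local copy. A toy re-creation under a pthread mutex (every name here is a stand-in, not PostgreSQL's):

```c
#include <pthread.h>
#include <stdint.h>

/* Toy stand-ins for XLogCtl->asyncCommitLSN and its info_lck spinlock. */
typedef uint64_t DemoLSN;

typedef struct DemoXLogCtl
{
	pthread_mutex_t info_lck;	/* stands in for the spinlock */
	DemoLSN		asyncCommitLSN;
} DemoXLogCtl;

DemoLSN		demo_flushed_upto = 0;	/* how far a "flush" has gotten */

/* Mimic the shape of the removed function: read the shared LSN under the
 * lock, drop the lock, then flush up to the copied value. */
void
demo_async_commit_flush(DemoXLogCtl *ctl)
{
	DemoLSN		write_rqst;

	pthread_mutex_lock(&ctl->info_lck);
	write_rqst = ctl->asyncCommitLSN;
	pthread_mutex_unlock(&ctl->info_lck);

	if (write_rqst > demo_flushed_upto)
		demo_flushed_upto = write_rqst;	/* stands in for XLogFlush() */
}
```

Holding the lock only for the copy, never across the flush, is the point: the real spinlock must be held for a few instructions at most.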

src/backend/catalog/index.c

@@ -8,7 +8,7 @@
  *
  *
  * IDENTIFICATION
- *	  $PostgreSQL: pgsql/src/backend/catalog/index.c,v 1.334 2010/02/07 22:40:33 tgl Exp $
+ *	  $PostgreSQL: pgsql/src/backend/catalog/index.c,v 1.335 2010/02/08 04:33:53 tgl Exp $
  *
  *
  * INTERFACE ROUTINES
@@ -2107,7 +2107,6 @@ validate_index(Oid heapId, Oid indexId, Snapshot snapshot)
 	 * Scan the index and gather up all the TIDs into a tuplesort object.
 	 */
 	ivinfo.index = indexRelation;
-	ivinfo.vacuum_full = false;
 	ivinfo.analyze_only = false;
 	ivinfo.estimated_count = true;
 	ivinfo.message_level = DEBUG2;

src/backend/commands/analyze.c

@@ -8,7 +8,7 @@
  *
  *
  * IDENTIFICATION
- *	  $PostgreSQL: pgsql/src/backend/commands/analyze.c,v 1.149 2010/02/01 19:28:56 rhaas Exp $
+ *	  $PostgreSQL: pgsql/src/backend/commands/analyze.c,v 1.150 2010/02/08 04:33:53 tgl Exp $
  *
  *-------------------------------------------------------------------------
  */
@@ -595,7 +595,6 @@ cleanup:
 			IndexVacuumInfo ivinfo;

 			ivinfo.index = Irel[ind];
-			ivinfo.vacuum_full = false;
 			ivinfo.analyze_only = true;
 			ivinfo.estimated_count = true;
 			ivinfo.message_level = elevel;

src/backend/commands/copy.c

@@ -8,7 +8,7 @@
  *
  *
  * IDENTIFICATION
- *	  $PostgreSQL: pgsql/src/backend/commands/copy.c,v 1.323 2010/02/03 10:01:29 heikki Exp $
+ *	  $PostgreSQL: pgsql/src/backend/commands/copy.c,v 1.324 2010/02/08 04:33:53 tgl Exp $
  *
  *-------------------------------------------------------------------------
  */
@@ -2166,7 +2166,7 @@ CopyFrom(CopyState cstate)
 			if (resultRelInfo->ri_NumIndices > 0)
 				recheckIndexes = ExecInsertIndexTuples(slot, &(tuple->t_self),
-													   estate, false);
+													   estate);

 			/* AFTER ROW INSERT Triggers */
 			ExecARInsertTriggers(estate, resultRelInfo, tuple,

(File diff suppressed because it is too large)

src/backend/commands/vacuumlazy.c

@@ -29,7 +29,7 @@
  *
  *
  * IDENTIFICATION
- *	  $PostgreSQL: pgsql/src/backend/commands/vacuumlazy.c,v 1.128 2010/01/28 07:31:42 heikki Exp $
+ *	  $PostgreSQL: pgsql/src/backend/commands/vacuumlazy.c,v 1.129 2010/02/08 04:33:54 tgl Exp $
  *
  *-------------------------------------------------------------------------
  */
@@ -501,8 +501,7 @@ lazy_scan_heap(Relation onerel, LVRelStats *vacrelstats,
 		 *
 		 * We count tuples removed by the pruning step as removed by VACUUM.
 		 */
-		tups_vacuumed += heap_page_prune(onerel, buf, OldestXmin,
-										 false, false);
+		tups_vacuumed += heap_page_prune(onerel, buf, OldestXmin, false);

 		/*
 		 * Now scan the page to collect vacuumable items and check for tuples
@@ -907,7 +906,7 @@ lazy_vacuum_page(Relation onerel, BlockNumber blkno, Buffer buffer,
 		recptr = log_heap_clean(onerel, buffer,
 								NULL, 0, NULL, 0,
 								unused, uncnt,
-								vacrelstats->latestRemovedXid, false);
+								vacrelstats->latestRemovedXid);
 		PageSetLSN(page, recptr);
 		PageSetTLI(page, ThisTimeLineID);
 	}
@@ -934,7 +933,6 @@ lazy_vacuum_index(Relation indrel,
 	pg_rusage_init(&ru0);

 	ivinfo.index = indrel;
-	ivinfo.vacuum_full = false;
 	ivinfo.analyze_only = false;
 	ivinfo.estimated_count = true;
 	ivinfo.message_level = elevel;
@@ -966,7 +964,6 @@ lazy_cleanup_index(Relation indrel,
 	pg_rusage_init(&ru0);

 	ivinfo.index = indrel;
-	ivinfo.vacuum_full = false;
 	ivinfo.analyze_only = false;
 	ivinfo.estimated_count = !vacrelstats->scanned_all;
 	ivinfo.message_level = elevel;

src/backend/executor/execUtils.c

@@ -8,7 +8,7 @@
  *
  *
  * IDENTIFICATION
- *	  $PostgreSQL: pgsql/src/backend/executor/execUtils.c,v 1.169 2010/01/02 17:53:56 tgl Exp $
+ *	  $PostgreSQL: pgsql/src/backend/executor/execUtils.c,v 1.170 2010/02/08 04:33:54 tgl Exp $
  *
  *-------------------------------------------------------------------------
  */
@@ -977,8 +977,7 @@ ExecCloseIndices(ResultRelInfo *resultRelInfo)
 List *
 ExecInsertIndexTuples(TupleTableSlot *slot,
 					  ItemPointer tupleid,
-					  EState *estate,
-					  bool is_vacuum_full)
+					  EState *estate)
 {
 	List	   *result = NIL;
 	ResultRelInfo *resultRelInfo;
@@ -1070,12 +1069,8 @@ ExecInsertIndexTuples(TupleTableSlot *slot,
 		 * For a deferrable unique index, we tell the index AM to just detect
 		 * possible non-uniqueness, and we add the index OID to the result
 		 * list if further checking is needed.
-		 *
-		 * Special hack: we suppress unique-index checks if we are being
-		 * called from VACUUM FULL, since VACUUM FULL may need to move dead
-		 * tuples that have the same keys as live ones.
 		 */
-		if (is_vacuum_full || !indexRelation->rd_index->indisunique)
+		if (!indexRelation->rd_index->indisunique)
 			checkUnique = UNIQUE_CHECK_NO;
 		else if (indexRelation->rd_index->indimmediate)
 			checkUnique = UNIQUE_CHECK_YES;

src/backend/executor/nodeModifyTable.c

@@ -8,7 +8,7 @@
  *
  *
  * IDENTIFICATION
- *	  $PostgreSQL: pgsql/src/backend/executor/nodeModifyTable.c,v 1.5 2010/01/31 18:15:39 tgl Exp $
+ *	  $PostgreSQL: pgsql/src/backend/executor/nodeModifyTable.c,v 1.6 2010/02/08 04:33:54 tgl Exp $
  *
  *-------------------------------------------------------------------------
  */
@@ -249,7 +249,7 @@ ExecInsert(TupleTableSlot *slot,
 	 */
 	if (resultRelInfo->ri_NumIndices > 0)
 		recheckIndexes = ExecInsertIndexTuples(slot, &(tuple->t_self),
-											   estate, false);
+											   estate);

 	/* AFTER ROW INSERT Triggers */
 	ExecARInsertTriggers(estate, resultRelInfo, tuple, recheckIndexes);
@@ -566,7 +566,7 @@ lreplace:;
 	 */
 	if (resultRelInfo->ri_NumIndices > 0 && !HeapTupleIsHeapOnly(tuple))
 		recheckIndexes = ExecInsertIndexTuples(slot, &(tuple->t_self),
-											   estate, false);
+											   estate);

 	/* AFTER ROW UPDATE Triggers */
 	ExecARUpdateTriggers(estate, resultRelInfo, tupleid, tuple,

src/backend/parser/gram.y

@@ -11,7 +11,7 @@
  *
  *
  * IDENTIFICATION
- *	  $PostgreSQL: pgsql/src/backend/parser/gram.y,v 2.706 2010/01/28 23:21:12 petere Exp $
+ *	  $PostgreSQL: pgsql/src/backend/parser/gram.y,v 2.707 2010/02/08 04:33:54 tgl Exp $
  *
  * HISTORY
  *	  AUTHOR			DATE			MAJOR EVENT
@@ -491,7 +491,7 @@ static TypeName *TableFuncTypeName(List *columns);
 	IDENTITY_P IF_P ILIKE IMMEDIATE IMMUTABLE IMPLICIT_P IN_P
 	INCLUDING INCREMENT INDEX INDEXES INHERIT INHERITS INITIALLY INLINE_P
-	INNER_P INOUT INPLACE INPUT_P INSENSITIVE INSERT INSTEAD INT_P INTEGER
+	INNER_P INOUT INPUT_P INSENSITIVE INSERT INSTEAD INT_P INTEGER
 	INTERSECT INTERVAL INTO INVOKER IS ISNULL ISOLATION
 	JOIN
@@ -6830,7 +6830,6 @@ vacuum_option_elem:
 			| VERBOSE			{ $$ = VACOPT_VERBOSE; }
 			| FREEZE			{ $$ = VACOPT_FREEZE; }
 			| FULL				{ $$ = VACOPT_FULL; }
-			| FULL INPLACE		{ $$ = VACOPT_FULL | VACOPT_INPLACE; }
 		;

 AnalyzeStmt:
@@ -10822,7 +10821,6 @@ unreserved_keyword:
 			| INHERIT
 			| INHERITS
 			| INLINE_P
-			| INPLACE
 			| INPUT_P
 			| INSENSITIVE
 			| INSERT

src/backend/storage/lmgr/proc.c

@@ -8,7 +8,7 @@
  *
  *
  * IDENTIFICATION
- *	  $PostgreSQL: pgsql/src/backend/storage/lmgr/proc.c,v 1.214 2010/01/23 16:37:12 sriggs Exp $
+ *	  $PostgreSQL: pgsql/src/backend/storage/lmgr/proc.c,v 1.215 2010/02/08 04:33:54 tgl Exp $
  *
  *-------------------------------------------------------------------------
  */
@@ -617,8 +617,7 @@ LockWaitCancel(void)
  *		at main transaction commit or abort
  *
  * At main transaction commit, we release all locks except session locks.
- * At main transaction abort, we release all locks including session locks;
- * this lets us clean up after a VACUUM FULL failure.
+ * At main transaction abort, we release all locks including session locks.
  *
  * At subtransaction commit, we don't release any locks (so this func is not
  * needed at all); we will defer the releasing to the parent transaction.

src/backend/utils/cache/inval.c

@@ -80,7 +80,7 @@
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * IDENTIFICATION
- *	  $PostgreSQL: pgsql/src/backend/utils/cache/inval.c,v 1.94 2010/02/07 20:48:10 tgl Exp $
+ *	  $PostgreSQL: pgsql/src/backend/utils/cache/inval.c,v 1.95 2010/02/08 04:33:54 tgl Exp $
  *
  *-------------------------------------------------------------------------
  */
@@ -1112,103 +1112,6 @@ CommandEndInvalidationMessages(void)
 }

-/*
- * BeginNonTransactionalInvalidation
- *		Prepare for invalidation messages for nontransactional updates.
- *
- * A nontransactional invalidation is one that must be sent whether or not
- * the current transaction eventually commits.  We arrange for all invals
- * queued between this call and EndNonTransactionalInvalidation() to be sent
- * immediately when the latter is called.
- *
- * Currently, this is only used by heap_page_prune(), and only when it is
- * invoked during VACUUM FULL's first pass over a table.  We expect therefore
- * that we are not inside a subtransaction and there are no already-pending
- * invalidations.  This could be relaxed by setting up a new nesting level of
- * invalidation data, but for now there's no need.  Note that heap_page_prune
- * knows that this function does not change any state, and therefore there's
- * no need to worry about cleaning up if there's an elog(ERROR) before
- * reaching EndNonTransactionalInvalidation (the invals will just be thrown
- * away if that happens).
- *
- * Note that these are not replayed in standby mode.
- */
-void
-BeginNonTransactionalInvalidation(void)
-{
-	/* Must be at top of stack */
-	Assert(transInvalInfo != NULL && transInvalInfo->parent == NULL);
-
-	/* Must not have any previously-queued activity */
-	Assert(transInvalInfo->PriorCmdInvalidMsgs.cclist == NULL);
-	Assert(transInvalInfo->PriorCmdInvalidMsgs.rclist == NULL);
-	Assert(transInvalInfo->CurrentCmdInvalidMsgs.cclist == NULL);
-	Assert(transInvalInfo->CurrentCmdInvalidMsgs.rclist == NULL);
-	Assert(transInvalInfo->RelcacheInitFileInval == false);
-
-	SharedInvalidMessagesArray = NULL;
-	numSharedInvalidMessagesArray = 0;
-}
-
-/*
- * EndNonTransactionalInvalidation
- *		Process queued-up invalidation messages for nontransactional updates.
- *
- * We expect to find messages in CurrentCmdInvalidMsgs only (else there
- * was a CommandCounterIncrement within the "nontransactional" update).
- * We must process them locally and send them out to the shared invalidation
- * message queue.
- *
- * We must also reset the lists to empty and explicitly free memory (we can't
- * rely on end-of-transaction cleanup for that).
- */
-void
-EndNonTransactionalInvalidation(void)
-{
-	InvalidationChunk *chunk;
-	InvalidationChunk *next;
-
-	/* Must be at top of stack */
-	Assert(transInvalInfo != NULL && transInvalInfo->parent == NULL);
-
-	/* Must not have any prior-command messages */
-	Assert(transInvalInfo->PriorCmdInvalidMsgs.cclist == NULL);
-	Assert(transInvalInfo->PriorCmdInvalidMsgs.rclist == NULL);
-
-	/*
-	 * At present, this function is only used for CTID-changing updates; since
-	 * the relcache init file doesn't store any tuple CTIDs, we don't have to
-	 * invalidate it.  That might not be true forever though, in which case
-	 * we'd need code similar to AtEOXact_Inval.
-	 */
-
-	/* Send out the invals */
-	ProcessInvalidationMessages(&transInvalInfo->CurrentCmdInvalidMsgs,
-								LocalExecuteInvalidationMessage);
-	ProcessInvalidationMessagesMulti(&transInvalInfo->CurrentCmdInvalidMsgs,
-									 SendSharedInvalidMessages);
-
-	/* Clean up and release memory */
-	for (chunk = transInvalInfo->CurrentCmdInvalidMsgs.cclist;
-		 chunk != NULL;
-		 chunk = next)
-	{
-		next = chunk->next;
-		pfree(chunk);
-	}
-	for (chunk = transInvalInfo->CurrentCmdInvalidMsgs.rclist;
-		 chunk != NULL;
-		 chunk = next)
-	{
-		next = chunk->next;
-		pfree(chunk);
-	}
-	transInvalInfo->CurrentCmdInvalidMsgs.cclist = NULL;
-	transInvalInfo->CurrentCmdInvalidMsgs.rclist = NULL;
-	transInvalInfo->RelcacheInitFileInval = false;
-}
-
 /*
  * CacheInvalidateHeapTuple
  *		Register the given tuple for invalidation at end of command
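The removed `EndNonTransactionalInvalidation()` walked its message-chunk lists with the classic save-next-before-free idiom, which is the only safe way to free a singly linked list in one pass. A generic sketch of that idiom, with a hypothetical `DemoChunk` standing in for `InvalidationChunk`:

```c
#include <stdlib.h>

/* Hypothetical stand-in for inval.c's InvalidationChunk. */
typedef struct DemoChunk
{
	struct DemoChunk *next;
	int			nmsgs;
} DemoChunk;

/* Free every chunk in the list, returning how many were released.
 * Note that 'next' is saved BEFORE free(): reading chunk->next after
 * freeing the chunk would be use-after-free. */
int
demo_free_chunks(DemoChunk *list)
{
	DemoChunk  *chunk;
	DemoChunk  *next;
	int			freed = 0;

	for (chunk = list; chunk != NULL; chunk = next)
	{
		next = chunk->next;
		free(chunk);
		freed++;
	}
	return freed;
}
```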

src/backend/utils/time/tqual.c

@@ -50,7 +50,7 @@
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * IDENTIFICATION
- *	  $PostgreSQL: pgsql/src/backend/utils/time/tqual.c,v 1.115 2010/01/02 16:57:58 momjian Exp $
+ *	  $PostgreSQL: pgsql/src/backend/utils/time/tqual.c,v 1.116 2010/02/08 04:33:54 tgl Exp $
  *
  *-------------------------------------------------------------------------
  */
@@ -91,9 +91,12 @@ static bool XidInMVCCSnapshot(TransactionId xid, Snapshot snapshot);
  * code in heapam.c relies on that!)
  *
  * Also, if we are cleaning up HEAP_MOVED_IN or HEAP_MOVED_OFF entries, then
- * we can always set the hint bits, since VACUUM FULL always uses synchronous
- * commits and doesn't move tuples that weren't previously hinted.  (This is
- * not known by this subroutine, but is applied by its callers.)
+ * we can always set the hint bits, since old-style VACUUM FULL always used
+ * synchronous commits and didn't move tuples that weren't previously
+ * hinted.  (This is not known by this subroutine, but is applied by its
+ * callers.)  Note: old-style VACUUM FULL is gone, but we have to keep this
+ * module's support for MOVED_OFF/MOVED_IN flag bits for as long as we
+ * support in-place update from pre-9.0 databases.
  *
  * Normal commits may be asynchronous, so for those we need to get the LSN
  * of the transaction and then check whether this is flushed.
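The tqual.c comment above concerns tuples still carrying `HEAP_MOVED_OFF` or `HEAP_MOVED_IN` in their infomask. A minimal check of those bits (the 0x4000/0x8000 values match the `htup.h` constants in this same commit; the helper itself is only an illustration, not PostgreSQL code):

```c
#include <stdint.h>

/* Bit values copied from htup.h as shown in this commit. */
#define DEMO_HEAP_MOVED_OFF	0x4000
#define DEMO_HEAP_MOVED_IN	0x8000
#define DEMO_HEAP_MOVED		(DEMO_HEAP_MOVED_OFF | DEMO_HEAP_MOVED_IN)

/* True if the infomask says an old-style VACUUM FULL once moved this
 * tuple, i.e. t_field3 holds t_xvac rather than t_cid and the xvac
 * transaction must still be considered by visibility checks. */
int
demo_tuple_was_moved(uint16_t infomask)
{
	return (infomask & DEMO_HEAP_MOVED) != 0;
}
```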

src/bin/scripts/vacuumdb.c

@@ -5,7 +5,7 @@
  * Portions Copyright (c) 1996-2010, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
- * $PostgreSQL: pgsql/src/bin/scripts/vacuumdb.c,v 1.33 2010/01/07 14:35:44 momjian Exp $
+ * $PostgreSQL: pgsql/src/bin/scripts/vacuumdb.c,v 1.34 2010/02/08 04:33:54 tgl Exp $
  *
  *-------------------------------------------------------------------------
  */
@@ -14,12 +14,12 @@
 #include "common.h"

-static void vacuum_one_database(const char *dbname, bool full, bool inplace, bool verbose,
+static void vacuum_one_database(const char *dbname, bool full, bool verbose,
 					bool and_analyze, bool analyze_only, bool freeze,
 					const char *table, const char *host, const char *port,
 					const char *username, enum trivalue prompt_password,
 					const char *progname, bool echo);
-static void vacuum_all_databases(bool full, bool inplace, bool verbose, bool and_analyze,
+static void vacuum_all_databases(bool full, bool verbose, bool and_analyze,
 					 bool analyze_only, bool freeze,
 					 const char *host, const char *port,
 					 const char *username, enum trivalue prompt_password,
@@ -47,7 +47,6 @@ main(int argc, char *argv[])
 		{"table", required_argument, NULL, 't'},
 		{"full", no_argument, NULL, 'f'},
 		{"verbose", no_argument, NULL, 'v'},
-		{"inplace", no_argument, NULL, 'i'},
 		{NULL, 0, NULL, 0}
 	};
@@ -69,14 +68,13 @@ main(int argc, char *argv[])
 	char	   *table = NULL;
 	bool		full = false;
 	bool		verbose = false;
-	bool		inplace = false;

 	progname = get_progname(argv[0]);
 	set_pglocale_pgservice(argv[0], PG_TEXTDOMAIN("pgscripts"));

 	handle_help_version_opts(argc, argv, "vacuumdb", help);

-	while ((c = getopt_long(argc, argv, "h:p:U:wWeqd:zaFt:fiv", long_options, &optindex)) != -1)
+	while ((c = getopt_long(argc, argv, "h:p:U:wWeqd:zaFt:fv", long_options, &optindex)) != -1)
 	{
 		switch (c)
 		{
@@ -122,9 +120,6 @@ main(int argc, char *argv[])
 			case 'f':
 				full = true;
 				break;
-			case 'i':
-				inplace = true;
-				break;
 			case 'v':
 				verbose = true;
 				break;
@@ -148,13 +143,6 @@ main(int argc, char *argv[])
 		exit(1);
 	}

-	if (inplace && !full)
-	{
-		fprintf(stderr, _("%s: cannot use the \"inplace\" option when performing full vacuum\n"),
-				progname);
-		exit(1);
-	}
-
 	if (analyze_only)
 	{
 		if (full)
@@ -189,7 +177,7 @@ main(int argc, char *argv[])
 			exit(1);
 		}

-		vacuum_all_databases(full, inplace, verbose, and_analyze, analyze_only, freeze,
+		vacuum_all_databases(full, verbose, and_analyze, analyze_only, freeze,
 							 host, port, username, prompt_password,
 							 progname, echo, quiet);
 	}
@@ -205,7 +193,7 @@ main(int argc, char *argv[])
 				dbname = get_user_name(progname);
 		}

-		vacuum_one_database(dbname, full, inplace, verbose, and_analyze, analyze_only,
+		vacuum_one_database(dbname, full, verbose, and_analyze, analyze_only,
 							freeze, table,
 							host, port, username, prompt_password,
 							progname, echo);
@@ -216,7 +204,7 @@ main(int argc, char *argv[])
 static void
-vacuum_one_database(const char *dbname, bool full, bool inplace, bool verbose, bool and_analyze,
+vacuum_one_database(const char *dbname, bool full, bool verbose, bool and_analyze,
 					bool analyze_only, bool freeze, const char *table,
 					const char *host, const char *port,
 					const char *username, enum trivalue prompt_password,
@@ -247,8 +235,7 @@ vacuum_one_database(const char *dbname, bool full, bool inplace, bool verbose, b
 		if (full)
 		{
-			appendPQExpBuffer(&sql, "%sFULL%s", sep,
-							  inplace ? " INPLACE" : "");
+			appendPQExpBuffer(&sql, "%sFULL", sep);
 			sep = comma;
 		}
 		if (freeze)
@@ -271,10 +258,6 @@ vacuum_one_database(const char *dbname, bool full, bool inplace, bool verbose, b
 	}
 	else
 	{
-		/*
-		 * On older servers, VACUUM FULL is equivalent to VACUUM (FULL
-		 * INPLACE) on newer servers, so we can ignore 'inplace'.
-		 */
 		if (full)
 			appendPQExpBuffer(&sql, " FULL");
 		if (freeze)
@@ -306,7 +289,7 @@ vacuum_one_database(const char *dbname, bool full, bool inplace, bool verbose, b
 static void
-vacuum_all_databases(bool full, bool inplace, bool verbose, bool and_analyze, bool analyze_only,
+vacuum_all_databases(bool full, bool verbose, bool and_analyze, bool analyze_only,
 					 bool freeze, const char *host, const char *port,
 					 const char *username, enum trivalue prompt_password,
 					 const char *progname, bool echo, bool quiet)
@@ -329,7 +312,7 @@ vacuum_all_databases(bool full, bool inplace, bool verbose, bool and_analyze, bo
 			fflush(stdout);
 		}

-		vacuum_one_database(dbname, full, inplace, verbose, and_analyze, analyze_only,
+		vacuum_one_database(dbname, full, verbose, and_analyze, analyze_only,
 							freeze, NULL, host, port, username, prompt_password,
 							progname, echo);
 	}
@@ -350,7 +333,6 @@ help(const char *progname)
 	printf(_("  -e, --echo                      show the commands being sent to the server\n"));
 	printf(_("  -f, --full                      do full vacuuming\n"));
 	printf(_("  -F, --freeze                    freeze row transaction information\n"));
-	printf(_("  -i, --inplace                   do full inplace vacuuming\n"));
 	printf(_("  -q, --quiet                     don't write any messages\n"));
 	printf(_("  -t, --table='TABLE[(COLUMNS)]'  vacuum specific table only\n"));
 	printf(_("  -v, --verbose                   write a lot of output\n"));
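`vacuum_one_database()` assembles the `VACUUM (FULL, FREEZE, ...)` command by switching the separator from `" ("` to `", "` after the first option is appended, now without the `INPLACE` variant. A snprintf-based sketch of the same separator-swapping pattern (the real code uses `appendPQExpBuffer`; this helper is illustrative):

```c
#include <stdio.h>
#include <string.h>

/* Build a VACUUM command string for the given option flags, using the
 * separator-swap trick from vacuum_one_database(): the first option gets
 * " (", later ones get ", ", and the paren is closed only if any option
 * was emitted. */
void
demo_build_vacuum_sql(char *buf, size_t buflen,
					  int full, int freeze, int verbose)
{
	const char *sep = " (";

	snprintf(buf, buflen, "VACUUM");
	if (full)
	{
		snprintf(buf + strlen(buf), buflen - strlen(buf), "%sFULL", sep);
		sep = ", ";
	}
	if (freeze)
	{
		snprintf(buf + strlen(buf), buflen - strlen(buf), "%sFREEZE", sep);
		sep = ", ";
	}
	if (verbose)
	{
		snprintf(buf + strlen(buf), buflen - strlen(buf), "%sVERBOSE", sep);
		sep = ", ";
	}
	if (strcmp(sep, ", ") == 0)	/* at least one option was added */
		snprintf(buf + strlen(buf), buflen - strlen(buf), ")");
}
```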

src/include/access/genam.h

@@ -7,7 +7,7 @@
  * Portions Copyright (c) 1996-2010, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
- * $PostgreSQL: pgsql/src/include/access/genam.h,v 1.82 2010/01/02 16:58:00 momjian Exp $
+ * $PostgreSQL: pgsql/src/include/access/genam.h,v 1.83 2010/02/08 04:33:54 tgl Exp $
  *
  *-------------------------------------------------------------------------
  */
@@ -42,7 +42,6 @@ typedef struct IndexBuildResult
 typedef struct IndexVacuumInfo
 {
 	Relation	index;			/* the index being vacuumed */
-	bool		vacuum_full;	/* VACUUM FULL (we have exclusive lock) */
 	bool		analyze_only;	/* ANALYZE (without any actual vacuum) */
 	bool		estimated_count;	/* num_heap_tuples is an estimate */
 	int			message_level;	/* ereport level for progress messages */

src/include/access/heapam.h

@@ -7,7 +7,7 @@
  * Portions Copyright (c) 1996-2010, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
- * $PostgreSQL: pgsql/src/include/access/heapam.h,v 1.146 2010/01/02 16:58:00 momjian Exp $
+ * $PostgreSQL: pgsql/src/include/access/heapam.h,v 1.147 2010/02/08 04:33:54 tgl Exp $
  *
  *-------------------------------------------------------------------------
  */
@@ -126,17 +126,13 @@ extern void heap_desc(StringInfo buf, uint8 xl_info, char *rec);
 extern void heap2_redo(XLogRecPtr lsn, XLogRecord *rptr);
 extern void heap2_desc(StringInfo buf, uint8 xl_info, char *rec);

-extern XLogRecPtr log_heap_move(Relation reln, Buffer oldbuf,
-			  ItemPointerData from,
-			  Buffer newbuf, HeapTuple newtup,
-			  bool all_visible_cleared, bool new_all_visible_cleared);
 extern XLogRecPtr log_heap_cleanup_info(RelFileNode rnode,
						   TransactionId latestRemovedXid);
 extern XLogRecPtr log_heap_clean(Relation reln, Buffer buffer,
			   OffsetNumber *redirected, int nredirected,
			   OffsetNumber *nowdead, int ndead,
			   OffsetNumber *nowunused, int nunused,
-			   TransactionId latestRemovedXid, bool redirect_move);
+			   TransactionId latestRemovedXid);
 extern XLogRecPtr log_heap_freeze(Relation reln, Buffer buffer,
				TransactionId cutoff_xid,
				OffsetNumber *offsets, int offcnt);
@@ -148,12 +144,11 @@ extern void heap_page_prune_opt(Relation relation, Buffer buffer,
					TransactionId OldestXmin);
 extern int heap_page_prune(Relation relation, Buffer buffer,
				TransactionId OldestXmin,
-				bool redirect_move, bool report_stats);
+				bool report_stats);
 extern void heap_page_prune_execute(Buffer buffer,
						OffsetNumber *redirected, int nredirected,
						OffsetNumber *nowdead, int ndead,
-						OffsetNumber *nowunused, int nunused,
-						bool redirect_move);
+						OffsetNumber *nowunused, int nunused);
 extern void heap_get_root_tuples(Page page, OffsetNumber *root_offsets);

 /* in heap/syncscan.c */

src/include/access/htup.h

@@ -7,7 +7,7 @@
  * Portions Copyright (c) 1996-2010, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
- * $PostgreSQL: pgsql/src/include/access/htup.h,v 1.110 2010/01/10 04:26:36 rhaas Exp $
+ * $PostgreSQL: pgsql/src/include/access/htup.h,v 1.111 2010/02/08 04:33:54 tgl Exp $
  *
  *-------------------------------------------------------------------------
  */
@@ -74,11 +74,11 @@
  * transaction respectively.  If a tuple is inserted and deleted in the same
  * transaction, we store a "combo" command id that can be mapped to the real
  * cmin and cmax, but only by use of local state within the originating
- * backend.  See combocid.c for more details.  Meanwhile, Xvac is only set
- * by VACUUM FULL, which does not have any command sub-structure and so does
- * not need either Cmin or Cmax.  (This requires that VACUUM FULL never try
- * to move a tuple whose Cmin or Cmax is still interesting, ie, an insert-
- * in-progress or delete-in-progress tuple.)
+ * backend.  See combocid.c for more details.  Meanwhile, Xvac is only set by
+ * old-style VACUUM FULL, which does not have any command sub-structure and so
+ * does not need either Cmin or Cmax.  (This requires that old-style VACUUM
+ * FULL never try to move a tuple whose Cmin or Cmax is still interesting,
+ * ie, an insert-in-progress or delete-in-progress tuple.)
  *
  * A word about t_ctid: whenever a new tuple is stored on disk, its t_ctid
  * is initialized with its own TID (location).  If the tuple is ever updated,
@@ -111,7 +111,7 @@ typedef struct HeapTupleFields
	union
	{
		CommandId	t_cid;		/* inserting or deleting command ID, or both */
-		TransactionId t_xvac;	/* VACUUM FULL xact ID */
+		TransactionId t_xvac;	/* old-style VACUUM FULL xact ID */
	} t_field3;
 } HeapTupleFields;
@@ -175,10 +175,10 @@ typedef HeapTupleHeaderData *HeapTupleHeader;
 #define HEAP_XMAX_INVALID		0x0800	/* t_xmax invalid/aborted */
 #define HEAP_XMAX_IS_MULTI		0x1000	/* t_xmax is a MultiXactId */
 #define HEAP_UPDATED			0x2000	/* this is UPDATEd version of row */
-#define HEAP_MOVED_OFF			0x4000	/* moved to another place by VACUUM
-										 * FULL */
+#define HEAP_MOVED_OFF			0x4000	/* moved to another place by
+										 * old-style VACUUM FULL */
-#define HEAP_MOVED_IN			0x8000	/* moved from another place by VACUUM
+#define HEAP_MOVED_IN			0x8000	/* moved from another place by
* FULL */ * old-style VACUUM FULL */
#define HEAP_MOVED (HEAP_MOVED_OFF | HEAP_MOVED_IN) #define HEAP_MOVED (HEAP_MOVED_OFF | HEAP_MOVED_IN)
#define HEAP_XACT_MASK 0xFFE0 /* visibility-related bits */ #define HEAP_XACT_MASK 0xFFE0 /* visibility-related bits */
@ -559,7 +559,7 @@ typedef HeapTupleData *HeapTuple;
#define XLOG_HEAP_INSERT 0x00 #define XLOG_HEAP_INSERT 0x00
#define XLOG_HEAP_DELETE 0x10 #define XLOG_HEAP_DELETE 0x10
#define XLOG_HEAP_UPDATE 0x20 #define XLOG_HEAP_UPDATE 0x20
#define XLOG_HEAP_MOVE 0x30 /* 0x030 is free, was XLOG_HEAP_MOVE */
#define XLOG_HEAP_HOT_UPDATE 0x40 #define XLOG_HEAP_HOT_UPDATE 0x40
#define XLOG_HEAP_NEWPAGE 0x50 #define XLOG_HEAP_NEWPAGE 0x50
#define XLOG_HEAP_LOCK 0x60 #define XLOG_HEAP_LOCK 0x60
@ -579,7 +579,7 @@ typedef HeapTupleData *HeapTuple;
*/ */
#define XLOG_HEAP2_FREEZE 0x00 #define XLOG_HEAP2_FREEZE 0x00
#define XLOG_HEAP2_CLEAN 0x10 #define XLOG_HEAP2_CLEAN 0x10
#define XLOG_HEAP2_CLEAN_MOVE 0x20 /* 0x20 is free, was XLOG_HEAP2_CLEAN_MOVE */
#define XLOG_HEAP2_CLEANUP_INFO 0x30 #define XLOG_HEAP2_CLEANUP_INFO 0x30
/* /*
@ -634,15 +634,14 @@ typedef struct xl_heap_insert
#define SizeOfHeapInsert (offsetof(xl_heap_insert, all_visible_cleared) + sizeof(bool)) #define SizeOfHeapInsert (offsetof(xl_heap_insert, all_visible_cleared) + sizeof(bool))
/* This is what we need to know about update|move|hot_update */ /* This is what we need to know about update|hot_update */
typedef struct xl_heap_update typedef struct xl_heap_update
{ {
xl_heaptid target; /* deleted tuple id */ xl_heaptid target; /* deleted tuple id */
ItemPointerData newtid; /* new inserted tuple id */ ItemPointerData newtid; /* new inserted tuple id */
bool all_visible_cleared; /* PD_ALL_VISIBLE was cleared */ bool all_visible_cleared; /* PD_ALL_VISIBLE was cleared */
bool new_all_visible_cleared; /* same for the page of newtid */ bool new_all_visible_cleared; /* same for the page of newtid */
/* NEW TUPLE xl_heap_header (PLUS xmax & xmin IF MOVE OP) */ /* NEW TUPLE xl_heap_header AND TUPLE DATA FOLLOWS AT END OF STRUCT */
/* and TUPLE DATA FOLLOWS AT END OF STRUCT */
} xl_heap_update; } xl_heap_update;
#define SizeOfHeapUpdate (offsetof(xl_heap_update, new_all_visible_cleared) + sizeof(bool)) #define SizeOfHeapUpdate (offsetof(xl_heap_update, new_all_visible_cleared) + sizeof(bool))
@ -657,13 +656,6 @@ typedef struct xl_heap_update
* The total number of OffsetNumbers is therefore 2*nredirected+ndead+nunused. * The total number of OffsetNumbers is therefore 2*nredirected+ndead+nunused.
* Note that nunused is not explicitly stored, but may be found by reference * Note that nunused is not explicitly stored, but may be found by reference
* to the total record length. * to the total record length.
*
* If the opcode is CLEAN_MOVE instead of CLEAN, then each redirection pair
* should be interpreted as physically moving the "to" item pointer to the
* "from" slot, rather than placing a redirection item in the "from" slot.
* The moved pointers should be replaced by LP_UNUSED items (there will not
* be explicit entries in the "now-unused" list for this). Also, the
* HEAP_ONLY bit in the moved tuples must be turned off.
*/ */
typedef struct xl_heap_clean typedef struct xl_heap_clean
{ {


@ -7,7 +7,7 @@
* Portions Copyright (c) 1996-2010, PostgreSQL Global Development Group * Portions Copyright (c) 1996-2010, PostgreSQL Global Development Group
* Portions Copyright (c) 1994, Regents of the University of California * Portions Copyright (c) 1994, Regents of the University of California
* *
* $PostgreSQL: pgsql/src/include/access/nbtree.h,v 1.127 2010/01/02 16:58:00 momjian Exp $ * $PostgreSQL: pgsql/src/include/access/nbtree.h,v 1.128 2010/02/08 04:33:54 tgl Exp $
* *
*------------------------------------------------------------------------- *-------------------------------------------------------------------------
*/ */
@ -577,8 +577,7 @@ extern bool _bt_page_recyclable(Page page);
extern void _bt_delitems(Relation rel, Buffer buf, extern void _bt_delitems(Relation rel, Buffer buf,
OffsetNumber *itemnos, int nitems, bool isVacuum, OffsetNumber *itemnos, int nitems, bool isVacuum,
BlockNumber lastBlockVacuumed); BlockNumber lastBlockVacuumed);
extern int _bt_pagedel(Relation rel, Buffer buf, extern int _bt_pagedel(Relation rel, Buffer buf, BTStack stack);
BTStack stack, bool vacuum_full);
/* /*
* prototypes for functions in nbtsearch.c * prototypes for functions in nbtsearch.c


@ -7,7 +7,7 @@
* Portions Copyright (c) 1996-2010, PostgreSQL Global Development Group * Portions Copyright (c) 1996-2010, PostgreSQL Global Development Group
* Portions Copyright (c) 1994, Regents of the University of California * Portions Copyright (c) 1994, Regents of the University of California
* *
* $PostgreSQL: pgsql/src/include/access/xact.h,v 1.100 2010/01/02 16:58:00 momjian Exp $ * $PostgreSQL: pgsql/src/include/access/xact.h,v 1.101 2010/02/08 04:33:54 tgl Exp $
* *
*------------------------------------------------------------------------- *-------------------------------------------------------------------------
*/ */
@ -119,12 +119,10 @@ typedef struct xl_xact_commit
* transaction completion. * transaction completion.
*/ */
#define XACT_COMPLETION_UPDATE_RELCACHE_FILE 0x01 #define XACT_COMPLETION_UPDATE_RELCACHE_FILE 0x01
#define XACT_COMPLETION_VACUUM_FULL 0x02
#define XACT_COMPLETION_FORCE_SYNC_COMMIT 0x04 #define XACT_COMPLETION_FORCE_SYNC_COMMIT 0x04
/* Access macros for above flags */ /* Access macros for above flags */
#define XactCompletionRelcacheInitFileInval(xlrec) ((xlrec)->xinfo & XACT_COMPLETION_UPDATE_RELCACHE_FILE) #define XactCompletionRelcacheInitFileInval(xlrec) ((xlrec)->xinfo & XACT_COMPLETION_UPDATE_RELCACHE_FILE)
#define XactCompletionVacuumFull(xlrec) ((xlrec)->xinfo & XACT_COMPLETION_VACUUM_FULL)
#define XactCompletionForceSyncCommit(xlrec) ((xlrec)->xinfo & XACT_COMPLETION_FORCE_SYNC_COMMIT) #define XactCompletionForceSyncCommit(xlrec) ((xlrec)->xinfo & XACT_COMPLETION_FORCE_SYNC_COMMIT)
typedef struct xl_xact_abort typedef struct xl_xact_abort
@ -212,8 +210,6 @@ extern void UnregisterXactCallback(XactCallback callback, void *arg);
extern void RegisterSubXactCallback(SubXactCallback callback, void *arg); extern void RegisterSubXactCallback(SubXactCallback callback, void *arg);
extern void UnregisterSubXactCallback(SubXactCallback callback, void *arg); extern void UnregisterSubXactCallback(SubXactCallback callback, void *arg);
extern TransactionId RecordTransactionCommit(bool isVacuumFull);
extern int xactGetCommittedChildren(TransactionId **ptr); extern int xactGetCommittedChildren(TransactionId **ptr);
extern void xact_redo(XLogRecPtr lsn, XLogRecord *record); extern void xact_redo(XLogRecPtr lsn, XLogRecord *record);


@ -6,7 +6,7 @@
* Portions Copyright (c) 1996-2010, PostgreSQL Global Development Group * Portions Copyright (c) 1996-2010, PostgreSQL Global Development Group
* Portions Copyright (c) 1994, Regents of the University of California * Portions Copyright (c) 1994, Regents of the University of California
* *
* $PostgreSQL: pgsql/src/include/access/xlog.h,v 1.101 2010/02/01 13:40:28 sriggs Exp $ * $PostgreSQL: pgsql/src/include/access/xlog.h,v 1.102 2010/02/08 04:33:54 tgl Exp $
*/ */
#ifndef XLOG_H #ifndef XLOG_H
#define XLOG_H #define XLOG_H
@ -249,7 +249,6 @@ extern char *TriggerFile;
extern XLogRecPtr XLogInsert(RmgrId rmid, uint8 info, XLogRecData *rdata); extern XLogRecPtr XLogInsert(RmgrId rmid, uint8 info, XLogRecData *rdata);
extern void XLogFlush(XLogRecPtr RecPtr); extern void XLogFlush(XLogRecPtr RecPtr);
extern void XLogBackgroundFlush(void); extern void XLogBackgroundFlush(void);
extern void XLogAsyncCommitFlush(void);
extern bool XLogNeedsFlush(XLogRecPtr RecPtr); extern bool XLogNeedsFlush(XLogRecPtr RecPtr);
extern int XLogFileInit(uint32 log, uint32 seg, extern int XLogFileInit(uint32 log, uint32 seg,
bool *use_existent, bool use_lock); bool *use_existent, bool use_lock);


@ -7,7 +7,7 @@
* Portions Copyright (c) 1996-2010, PostgreSQL Global Development Group * Portions Copyright (c) 1996-2010, PostgreSQL Global Development Group
* Portions Copyright (c) 1994, Regents of the University of California * Portions Copyright (c) 1994, Regents of the University of California
* *
* $PostgreSQL: pgsql/src/include/commands/vacuum.h,v 1.87 2010/01/02 16:58:03 momjian Exp $ * $PostgreSQL: pgsql/src/include/commands/vacuum.h,v 1.88 2010/02/08 04:33:54 tgl Exp $
* *
*------------------------------------------------------------------------- *-------------------------------------------------------------------------
*/ */
@ -142,7 +142,6 @@ extern void vacuum_set_xid_limits(int freeze_min_age, int freeze_table_age,
TransactionId *freezeLimit, TransactionId *freezeLimit,
TransactionId *freezeTableLimit); TransactionId *freezeTableLimit);
extern void vac_update_datfrozenxid(void); extern void vac_update_datfrozenxid(void);
extern bool vac_is_partial_index(Relation indrel);
extern void vacuum_delay_point(void); extern void vacuum_delay_point(void);
/* in commands/vacuumlazy.c */ /* in commands/vacuumlazy.c */


@ -7,7 +7,7 @@
* Portions Copyright (c) 1996-2010, PostgreSQL Global Development Group * Portions Copyright (c) 1996-2010, PostgreSQL Global Development Group
* Portions Copyright (c) 1994, Regents of the University of California * Portions Copyright (c) 1994, Regents of the University of California
* *
* $PostgreSQL: pgsql/src/include/executor/executor.h,v 1.166 2010/01/02 16:58:03 momjian Exp $ * $PostgreSQL: pgsql/src/include/executor/executor.h,v 1.167 2010/02/08 04:33:54 tgl Exp $
* *
*------------------------------------------------------------------------- *-------------------------------------------------------------------------
*/ */
@ -320,7 +320,7 @@ extern void ExecCloseScanRelation(Relation scanrel);
extern void ExecOpenIndices(ResultRelInfo *resultRelInfo); extern void ExecOpenIndices(ResultRelInfo *resultRelInfo);
extern void ExecCloseIndices(ResultRelInfo *resultRelInfo); extern void ExecCloseIndices(ResultRelInfo *resultRelInfo);
extern List *ExecInsertIndexTuples(TupleTableSlot *slot, ItemPointer tupleid, extern List *ExecInsertIndexTuples(TupleTableSlot *slot, ItemPointer tupleid,
EState *estate, bool is_vacuum_full); EState *estate);
extern bool check_exclusion_constraint(Relation heap, Relation index, extern bool check_exclusion_constraint(Relation heap, Relation index,
IndexInfo *indexInfo, IndexInfo *indexInfo,
ItemPointer tupleid, ItemPointer tupleid,


@ -13,7 +13,7 @@
* Portions Copyright (c) 1996-2010, PostgreSQL Global Development Group * Portions Copyright (c) 1996-2010, PostgreSQL Global Development Group
* Portions Copyright (c) 1994, Regents of the University of California * Portions Copyright (c) 1994, Regents of the University of California
* *
* $PostgreSQL: pgsql/src/include/nodes/parsenodes.h,v 1.427 2010/01/28 23:21:13 petere Exp $ * $PostgreSQL: pgsql/src/include/nodes/parsenodes.h,v 1.428 2010/02/08 04:33:54 tgl Exp $
* *
*------------------------------------------------------------------------- *-------------------------------------------------------------------------
*/ */
@ -2246,8 +2246,7 @@ typedef enum VacuumOption
VACOPT_ANALYZE = 1 << 1, /* do ANALYZE */ VACOPT_ANALYZE = 1 << 1, /* do ANALYZE */
VACOPT_VERBOSE = 1 << 2, /* print progress info */ VACOPT_VERBOSE = 1 << 2, /* print progress info */
VACOPT_FREEZE = 1 << 3, /* FREEZE option */ VACOPT_FREEZE = 1 << 3, /* FREEZE option */
VACOPT_FULL = 1 << 4, /* FULL (non-concurrent) vacuum */ VACOPT_FULL = 1 << 4 /* FULL (non-concurrent) vacuum */
VACOPT_INPLACE = 1 << 5 /* traditional FULL INPLACE vacuum */
} VacuumOption; } VacuumOption;
typedef struct VacuumStmt typedef struct VacuumStmt


@ -11,7 +11,7 @@
* Portions Copyright (c) 1994, Regents of the University of California * Portions Copyright (c) 1994, Regents of the University of California
* *
* IDENTIFICATION * IDENTIFICATION
* $PostgreSQL: pgsql/src/include/parser/kwlist.h,v 1.10 2010/01/06 05:31:14 itagaki Exp $ * $PostgreSQL: pgsql/src/include/parser/kwlist.h,v 1.11 2010/02/08 04:33:55 tgl Exp $
* *
*------------------------------------------------------------------------- *-------------------------------------------------------------------------
*/ */
@ -193,7 +193,6 @@ PG_KEYWORD("initially", INITIALLY, RESERVED_KEYWORD)
PG_KEYWORD("inline", INLINE_P, UNRESERVED_KEYWORD) PG_KEYWORD("inline", INLINE_P, UNRESERVED_KEYWORD)
PG_KEYWORD("inner", INNER_P, TYPE_FUNC_NAME_KEYWORD) PG_KEYWORD("inner", INNER_P, TYPE_FUNC_NAME_KEYWORD)
PG_KEYWORD("inout", INOUT, COL_NAME_KEYWORD) PG_KEYWORD("inout", INOUT, COL_NAME_KEYWORD)
PG_KEYWORD("inplace", INPLACE, UNRESERVED_KEYWORD)
PG_KEYWORD("input", INPUT_P, UNRESERVED_KEYWORD) PG_KEYWORD("input", INPUT_P, UNRESERVED_KEYWORD)
PG_KEYWORD("insensitive", INSENSITIVE, UNRESERVED_KEYWORD) PG_KEYWORD("insensitive", INSENSITIVE, UNRESERVED_KEYWORD)
PG_KEYWORD("insert", INSERT, UNRESERVED_KEYWORD) PG_KEYWORD("insert", INSERT, UNRESERVED_KEYWORD)


@ -7,7 +7,7 @@
* Portions Copyright (c) 1996-2010, PostgreSQL Global Development Group * Portions Copyright (c) 1996-2010, PostgreSQL Global Development Group
* Portions Copyright (c) 1994, Regents of the University of California * Portions Copyright (c) 1994, Regents of the University of California
* *
* $PostgreSQL: pgsql/src/include/utils/inval.h,v 1.48 2010/02/07 20:48:13 tgl Exp $ * $PostgreSQL: pgsql/src/include/utils/inval.h,v 1.49 2010/02/08 04:33:55 tgl Exp $
* *
*------------------------------------------------------------------------- *-------------------------------------------------------------------------
*/ */
@ -39,10 +39,6 @@ extern void PostPrepare_Inval(void);
extern void CommandEndInvalidationMessages(void); extern void CommandEndInvalidationMessages(void);
extern void BeginNonTransactionalInvalidation(void);
extern void EndNonTransactionalInvalidation(void);
extern void CacheInvalidateHeapTuple(Relation relation, HeapTuple tuple); extern void CacheInvalidateHeapTuple(Relation relation, HeapTuple tuple);
extern void CacheInvalidateCatalog(Oid catalogId); extern void CacheInvalidateCatalog(Oid catalogId);


@ -57,65 +57,16 @@ SELECT * FROM vactst;
(0 rows) (0 rows)
VACUUM (FULL, FREEZE) vactst; VACUUM (FULL, FREEZE) vactst;
VACUUM (ANALYZE, FULL INPLACE) vactst; VACUUM (ANALYZE, FULL) vactst;
CREATE TABLE vaccluster (i INT PRIMARY KEY); CREATE TABLE vaccluster (i INT PRIMARY KEY);
NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "vaccluster_pkey" for table "vaccluster" NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "vaccluster_pkey" for table "vaccluster"
ALTER TABLE vaccluster CLUSTER ON vaccluster_pkey; ALTER TABLE vaccluster CLUSTER ON vaccluster_pkey;
INSERT INTO vaccluster SELECT * FROM vactst; INSERT INTO vaccluster SELECT * FROM vactst;
CREATE TEMP TABLE vacid (
relid regclass,
filenode_0 oid,
filenode_1 oid,
filenode_2 oid,
filenode_3 oid
);
INSERT INTO vacid (relid, filenode_0)
SELECT oid, relfilenode FROM pg_class WHERE oid::regclass IN (
'pg_am', -- normal catalog
'pg_class', -- fundamental catalog
'pg_database', -- shared catalog
'vaccluster' , -- clustered table
'vacid', -- temp table
'vactst' -- normal table
);
-- only clustered table should be changed
CLUSTER vaccluster; CLUSTER vaccluster;
UPDATE vacid SET filenode_1 = relfilenode
FROM pg_class WHERE oid = relid;
-- all tables should not be changed
VACUUM (FULL INPLACE) pg_am;
VACUUM (FULL INPLACE) pg_class;
VACUUM (FULL INPLACE) pg_database;
VACUUM (FULL INPLACE) vaccluster;
VACUUM (FULL INPLACE) vacid;
VACUUM (FULL INPLACE) vactst;
UPDATE vacid SET filenode_2 = relfilenode
FROM pg_class WHERE oid = relid;
-- only non-system tables should be changed
VACUUM FULL pg_am; VACUUM FULL pg_am;
VACUUM FULL pg_class; VACUUM FULL pg_class;
VACUUM FULL pg_database; VACUUM FULL pg_database;
VACUUM FULL vaccluster; VACUUM FULL vaccluster;
VACUUM FULL vacid;
VACUUM FULL vactst; VACUUM FULL vactst;
UPDATE vacid SET filenode_3 = relfilenode
FROM pg_class WHERE oid = relid;
SELECT relid,
filenode_0 = filenode_1 AS cluster,
filenode_1 = filenode_2 AS full_inplace,
filenode_2 = filenode_3 AS full
FROM vacid
ORDER BY relid::text;
relid | cluster | full_inplace | full
-------------+---------+--------------+------
pg_am | t | t | f
pg_class | t | t | t
pg_database | t | t | t
vaccluster | f | t | f
vacid | t | t | f
vactst | t | t | f
(6 rows)
DROP TABLE vaccluster; DROP TABLE vaccluster;
DROP TABLE vacid;
DROP TABLE vactst; DROP TABLE vactst;


@ -40,62 +40,18 @@ DELETE FROM vactst;
SELECT * FROM vactst; SELECT * FROM vactst;
VACUUM (FULL, FREEZE) vactst; VACUUM (FULL, FREEZE) vactst;
VACUUM (ANALYZE, FULL INPLACE) vactst; VACUUM (ANALYZE, FULL) vactst;
CREATE TABLE vaccluster (i INT PRIMARY KEY); CREATE TABLE vaccluster (i INT PRIMARY KEY);
ALTER TABLE vaccluster CLUSTER ON vaccluster_pkey; ALTER TABLE vaccluster CLUSTER ON vaccluster_pkey;
INSERT INTO vaccluster SELECT * FROM vactst; INSERT INTO vaccluster SELECT * FROM vactst;
CREATE TEMP TABLE vacid (
relid regclass,
filenode_0 oid,
filenode_1 oid,
filenode_2 oid,
filenode_3 oid
);
INSERT INTO vacid (relid, filenode_0)
SELECT oid, relfilenode FROM pg_class WHERE oid::regclass IN (
'pg_am', -- normal catalog
'pg_class', -- fundamental catalog
'pg_database', -- shared catalog
'vaccluster' , -- clustered table
'vacid', -- temp table
'vactst' -- normal table
);
-- only clustered table should be changed
CLUSTER vaccluster; CLUSTER vaccluster;
UPDATE vacid SET filenode_1 = relfilenode
FROM pg_class WHERE oid = relid;
-- all tables should not be changed
VACUUM (FULL INPLACE) pg_am;
VACUUM (FULL INPLACE) pg_class;
VACUUM (FULL INPLACE) pg_database;
VACUUM (FULL INPLACE) vaccluster;
VACUUM (FULL INPLACE) vacid;
VACUUM (FULL INPLACE) vactst;
UPDATE vacid SET filenode_2 = relfilenode
FROM pg_class WHERE oid = relid;
-- only non-system tables should be changed
VACUUM FULL pg_am; VACUUM FULL pg_am;
VACUUM FULL pg_class; VACUUM FULL pg_class;
VACUUM FULL pg_database; VACUUM FULL pg_database;
VACUUM FULL vaccluster; VACUUM FULL vaccluster;
VACUUM FULL vacid;
VACUUM FULL vactst; VACUUM FULL vactst;
UPDATE vacid SET filenode_3 = relfilenode
FROM pg_class WHERE oid = relid;
SELECT relid,
filenode_0 = filenode_1 AS cluster,
filenode_1 = filenode_2 AS full_inplace,
filenode_2 = filenode_3 AS full
FROM vacid
ORDER BY relid::text;
DROP TABLE vaccluster; DROP TABLE vaccluster;
DROP TABLE vacid;
DROP TABLE vactst; DROP TABLE vactst;