Adjust VACUUM's removable cutoff log message.

The age of OldestXmin (a.k.a. "removable cutoff") when VACUUM ends often
indicates the approximate number of XIDs consumed while VACUUM ran.
However, there is at least one important exception: the cutoff could be
held back by a snapshot that was acquired before our VACUUM even began.
Successive VACUUM operations may even use exactly the same old cutoff in
extreme cases involving long-held snapshots.
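
To make the arithmetic concrete, here is a minimal standalone sketch (plain C, not
PostgreSQL backend code; xid_age() and the counters below are illustrative stand-ins
for ReadNextTransactionId() and OldestXmin) of how the reported age is computed, and
why a cutoff pinned by an old snapshot keeps growing older across successive VACUUMs:

#include <stdint.h>
#include <stdio.h>

/*
 * Illustrative stand-in for the patch's age calculation:
 *   diff = (int32) (ReadNextTransactionId() - OldestXmin);
 * XIDs are unsigned 32-bit counters, so the subtraction is modular and the
 * cast to signed 32 bits yields the cutoff's age in XIDs.
 */
static int32_t
xid_age(uint32_t next_xid, uint32_t oldest_xmin)
{
	return (int32_t) (next_xid - oldest_xmin);
}

int
main(void)
{
	uint32_t	oldest_xmin = 1000; /* pinned by a long-held snapshot */
	uint32_t	next_xid = 1000;

	/* two successive "VACUUMs", each after 500 more XIDs were consumed */
	for (int run = 1; run <= 2; run++)
	{
		next_xid += 500;
		printf("run %d: removable cutoff %u, %d XIDs old when operation ended\n",
			   run, (unsigned) oldest_xmin, (int) xid_age(next_xid, oldest_xmin));
	}

	/*
	 * Both runs used exactly the same cutoff, so the reported age simply
	 * tracks the XIDs consumed since the snapshot was taken: 500, then 1000.
	 */
	return 0;
}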

The log messages that described how the removable cutoff aged (which were
added by commit 872770fd) created the impression that we were reporting
on how VACUUM's usable cutoff advanced while VACUUM ran, which was
misleading in these extreme cases.  Fix by using a more general wording.

Per gripe from Tom Lane.

In passing, relocate related instrumentation code for clarity.

Author: Peter Geoghegan <pg@bowt.ie>
Discussion: https://postgr.es/m/1643035.1650035653@sss.pgh.pa.us
Peter Geoghegan 2022-04-15 13:21:43 -07:00
parent 91998539b2
commit 357c8455e6
1 changed file with 15 additions and 19 deletions

src/backend/access/heap/vacuumlazy.c

@@ -633,30 +633,19 @@ heap_vacuum_rel(Relation rel, VacuumParams *params,
 		TimestampDifferenceExceeds(starttime, endtime,
 								   params->log_min_duration))
 	{
-		long		secs;
-		int			usecs;
+		long		secs_dur;
+		int			usecs_dur;
 		WalUsage	walusage;
-		double		read_rate,
-					write_rate;
 		StringInfoData buf;
 		char	   *msgfmt;
 		int32		diff;
+		double		read_rate = 0,
+					write_rate = 0;
 
-		TimestampDifference(starttime, endtime, &secs, &usecs);
+		TimestampDifference(starttime, endtime, &secs_dur, &usecs_dur);
 		memset(&walusage, 0, sizeof(WalUsage));
 		WalUsageAccumDiff(&walusage, &pgWalUsage, &walusage_start);
 
-		read_rate = 0;
-		write_rate = 0;
-		if ((secs > 0) || (usecs > 0))
-		{
-			read_rate = (double) BLCKSZ * VacuumPageMiss / (1024 * 1024) /
-				(secs + usecs / 1000000.0);
-			write_rate = (double) BLCKSZ * VacuumPageDirty / (1024 * 1024) /
-				(secs + usecs / 1000000.0);
-		}
-
 		initStringInfo(&buf);
 		if (verbose)
 		{
@@ -710,20 +699,20 @@ heap_vacuum_rel(Relation rel, VacuumParams *params,
 							 vacrel->missed_dead_pages);
 		diff = (int32) (ReadNextTransactionId() - OldestXmin);
 		appendStringInfo(&buf,
-						 _("removable cutoff: %u, older by %d xids when operation ended\n"),
+						 _("removable cutoff: %u, which was %d XIDs old when operation ended\n"),
 						 OldestXmin, diff);
 		if (frozenxid_updated)
 		{
 			diff = (int32) (vacrel->NewRelfrozenXid - vacrel->relfrozenxid);
 			appendStringInfo(&buf,
-							 _("new relfrozenxid: %u, which is %d xids ahead of previous value\n"),
+							 _("new relfrozenxid: %u, which is %d XIDs ahead of previous value\n"),
 							 vacrel->NewRelfrozenXid, diff);
 		}
 		if (minmulti_updated)
 		{
 			diff = (int32) (vacrel->NewRelminMxid - vacrel->relminmxid);
 			appendStringInfo(&buf,
-							 _("new relminmxid: %u, which is %d mxids ahead of previous value\n"),
+							 _("new relminmxid: %u, which is %d MXIDs ahead of previous value\n"),
 							 vacrel->NewRelminMxid, diff);
 		}
 		if (orig_rel_pages > 0)
@@ -774,6 +763,13 @@ heap_vacuum_rel(Relation rel, VacuumParams *params,
 			appendStringInfo(&buf, _("I/O timings: read: %.3f ms, write: %.3f ms\n"),
 							 read_ms, write_ms);
 		}
+		if (secs_dur > 0 || usecs_dur > 0)
+		{
+			read_rate = (double) BLCKSZ * VacuumPageMiss / (1024 * 1024) /
+				(secs_dur + usecs_dur / 1000000.0);
+			write_rate = (double) BLCKSZ * VacuumPageDirty / (1024 * 1024) /
+				(secs_dur + usecs_dur / 1000000.0);
+		}
 		appendStringInfo(&buf, _("avg read rate: %.3f MB/s, avg write rate: %.3f MB/s\n"),
 						 read_rate, write_rate);
 		appendStringInfo(&buf,
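
The block relocated in the final hunk computes average throughput as megabytes per
second over the vacuum's elapsed time, immediately before the rates are reported.
A rough standalone sketch of that arithmetic, with made-up page counts standing in
for VacuumPageMiss and VacuumPageDirty and BLCKSZ assumed to be the default 8 kB:

#include <stdio.h>

#define BLCKSZ 8192				/* assumed 8 kB block size */

int
main(void)
{
	/* hypothetical counters standing in for VacuumPageMiss/VacuumPageDirty */
	long		pages_missed = 2048;	/* blocks read in from disk */
	long		pages_dirtied = 512;	/* blocks dirtied */
	long		secs_dur = 3;
	int			usecs_dur = 250000;
	double		read_rate = 0,
				write_rate = 0;

	/* same guard and formula as the relocated block: MB transferred per second */
	if (secs_dur > 0 || usecs_dur > 0)
	{
		double		elapsed = secs_dur + usecs_dur / 1000000.0;

		read_rate = (double) BLCKSZ * pages_missed / (1024 * 1024) / elapsed;
		write_rate = (double) BLCKSZ * pages_dirtied / (1024 * 1024) / elapsed;
	}

	printf("avg read rate: %.3f MB/s, avg write rate: %.3f MB/s\n",
		   read_rate, write_rate);
	return 0;
}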