From fc8adddd9af24737e46006dcdf12b98c3e195fd7 Mon Sep 17 00:00:00 2001
From: Tom Lane
Date: Thu, 3 Jan 2019 17:00:08 -0500
Subject: [PATCH] Improve ANALYZE's handling of concurrent-update scenarios.

This patch changes the rule for whether or not a tuple seen by ANALYZE
should be included in its sample.

When we last touched this logic, in commit 51e1445f1, we weren't
thinking very hard about tuples being UPDATEd by a long-running
concurrent transaction.  In such a case, we might see the pre-image as
either LIVE or DELETE_IN_PROGRESS depending on timing; and we might see
the post-image not at all, or as INSERT_IN_PROGRESS.  Since the existing
code will not sample either DELETE_IN_PROGRESS or INSERT_IN_PROGRESS
tuples, this leads to concurrently-updated rows being omitted from the
sample entirely.  That's not very helpful, and it's especially the wrong
thing if the concurrent transaction ends up rolling back.

The right thing seems to be to sample DELETE_IN_PROGRESS rows just as if
they were live.  This makes the "sample it" and "count it" decisions the
same, which seems good for consistency.  It's clearly the right thing if
the concurrent transaction ends up rolling back; in effect, we are
sampling as though IN_PROGRESS transactions haven't happened yet.  Also,
this combination of choices ensures maximum robustness against the
different combinations of whether and in which state we might see the
pre- and post-images of an update.

It's slightly annoying that we end up recording immediately-out-of-date
stats in the case where the transaction does commit, but on the other
hand the stats are fine for columns that didn't change in the update.
And the alternative of sampling INSERT_IN_PROGRESS rows instead seems
like a bad idea, because then the sampling would be inconsistent with
the way rows are counted for the stats report.

Per report from Mark Chambers; thanks to Jeff Janes for diagnosing what
was happening.  Back-patch to all supported versions.

Discussion: https://postgr.es/m/CAFh58O_Myr6G3tcH3gcGrF-=OExB08PJdWZcSBcEcovaiPsrHA@mail.gmail.com
---
 src/backend/commands/analyze.c | 21 ++++++++++++++++++---
 1 file changed, 18 insertions(+), 3 deletions(-)

diff --git a/src/backend/commands/analyze.c b/src/backend/commands/analyze.c
index 25194e871c..d5d8cefc2f 100644
--- a/src/backend/commands/analyze.c
+++ b/src/backend/commands/analyze.c
@@ -1165,20 +1165,35 @@ acquire_sample_rows(Relation onerel, int elevel,
             case HEAPTUPLE_DELETE_IN_PROGRESS:
 
                 /*
-                 * We count delete-in-progress rows as still live, using
-                 * the same reasoning given above; but we don't bother to
-                 * include them in the sample.
+                 * We count and sample delete-in-progress rows the same as
+                 * live ones, so that the stats counters come out right if
+                 * the deleting transaction commits after us, per the same
+                 * reasoning given above.
                  *
                  * If the delete was done by our own transaction, however,
                  * we must count the row as dead to make
                  * pgstat_report_analyze's stats adjustments come out
                  * right.  (Note: this works out properly when the row was
                  * both inserted and deleted in our xact.)
+                 *
+                 * The net effect of these choices is that we act as
+                 * though an IN_PROGRESS transaction hasn't happened yet,
+                 * except if it is our own transaction, which we assume
+                 * has happened.
+                 *
+                 * This approach ensures that we behave sanely if we see
+                 * both the pre-image and post-image rows for a row being
+                 * updated by a concurrent transaction: we will sample the
+                 * pre-image but not the post-image.  We also get sane
+                 * results if the concurrent transaction never commits.
                  */
                 if (TransactionIdIsCurrentTransactionId(HeapTupleHeaderGetUpdateXid(targtuple.t_data)))
                     deadrows += 1;
                 else
+                {
+                    sample_it = true;
                     liverows += 1;
+                }
                 break;
 
             default:
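
Note (not part of the patch): the following standalone C sketch models the
counting/sampling rules this commit establishes.  The TupleState enum,
Decision struct, and decide() function are illustrative names, not
PostgreSQL APIs; the real logic lives in acquire_sample_rows() and depends
on HeapTupleSatisfiesVacuum() and transaction state, which are stubbed
here by a single by_our_own_xact flag.  The INSERT_IN_PROGRESS arm reflects
my reading of the pre-existing handling elsewhere in the same switch, not
anything this patch touches.

#include <stdbool.h>
#include <stdio.h>

/* Simplified stand-in for the HTSV_Result visibility states. */
typedef enum
{
    TUPLE_DEAD,
    TUPLE_LIVE,
    TUPLE_RECENTLY_DEAD,
    TUPLE_INSERT_IN_PROGRESS,
    TUPLE_DELETE_IN_PROGRESS
} TupleState;

typedef struct
{
    bool sample_it;             /* include the tuple in the sample? */
    int  liverows;              /* increment to the live-row count */
    int  deadrows;              /* increment to the dead-row count */
} Decision;

/*
 * Decide whether to sample one tuple and how to count it, per the rules
 * after this patch: DELETE_IN_PROGRESS by some other transaction is
 * treated exactly like LIVE (sampled and counted live), so concurrently
 * updated rows are no longer dropped from the sample, and the "sample it"
 * and "count it" decisions agree.
 */
static Decision
decide(TupleState state, bool by_our_own_xact)
{
    Decision d = {false, 0, 0};

    switch (state)
    {
        case TUPLE_LIVE:
            d.sample_it = true;
            d.liverows = 1;
            break;
        case TUPLE_INSERT_IN_PROGRESS:
            /* sampled and counted live only if we inserted it ourselves */
            if (by_our_own_xact)
            {
                d.sample_it = true;
                d.liverows = 1;
            }
            break;
        case TUPLE_DELETE_IN_PROGRESS:
            if (by_our_own_xact)
                d.deadrows = 1; /* our own delete: count as dead */
            else
            {
                /* the behavioral change of this patch */
                d.sample_it = true;
                d.liverows = 1;
            }
            break;
        case TUPLE_DEAD:
        case TUPLE_RECENTLY_DEAD:
            d.deadrows = 1;
            break;
    }
    return d;
}

int
main(void)
{
    /* pre-image of a row being updated by a concurrent transaction */
    Decision d = decide(TUPLE_DELETE_IN_PROGRESS, false);

    printf("sample=%d live=%d dead=%d\n",
           d.sample_it, d.liverows, d.deadrows);
    return 0;
}

Compiled standalone, this prints "sample=1 live=1 dead=0" for a
concurrent delete-in-progress tuple; before the patch, the same tuple
would have been counted live but excluded from the sample.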