pgstattuple: Don't take heavyweight locks when examining a hash index.

It's currently necessary to take a heavyweight lock when scanning a
hash bucket, but pgstattuple only examines individual pages, so it
doesn't need to do this.  If, for some hypothetical reason, it did
need to do any heavyweight locking here, this logic would probably
still be incorrect, because most of the locks that it is taking are
meaningless.  Only a heavyweight lock on a primary bucket page has any
meaning, but this takes heavyweight locks on all pages regardless of
function - and in particular overflow pages, where you might imagine
that we'd want to lock the primary bucket page if we needed to lock
anything at all.
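
To illustrate that last point, here is a hypothetical sketch (not code
from this patch) of what meaningful heavyweight locking would look like
under the old hash locking protocol: the lock names the bucket's primary
page even while an overflow page is being read.  The function name
read_overflow_page_locked and the bucket_blkno/ovfl_blkno parameters are
made up for illustration.

    #include "postgres.h"
    #include "access/hash.h"
    #include "storage/bufmgr.h"

    /*
     * Hypothetical only: a reader that really needed heavyweight
     * protection would lock the bucket's primary page, then read any
     * page in the bucket chain - it would not take a lock named after
     * each overflow page, as the deleted code did.
     */
    static void
    read_overflow_page_locked(Relation rel, BlockNumber bucket_blkno,
                              BlockNumber ovfl_blkno,
                              BufferAccessStrategy bstrategy)
    {
        Buffer      buf;

        /* Heavyweight lock on the primary bucket page, not the overflow page. */
        _hash_getlock(rel, bucket_blkno, HASH_SHARE);

        buf = _hash_getbuf_with_strategy(rel, ovfl_blkno, HASH_READ,
                                         LH_OVERFLOW_PAGE, bstrategy);
        /* ... examine the overflow page here ... */
        _hash_relbuf(rel, buf);

        _hash_droplock(rel, bucket_blkno, HASH_SHARE);
    }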

This is arguably a bug that has existed since this code was added in
commit dab42382f4, but I'm not going to
bother back-patching it because in most cases the only consequence is
that running pgstattuple() on a hash index is a little slower than it
otherwise might be, which is no big deal.

Extracted from a vastly larger patch by Amit Kapila which removes
heavyweight locking for hash indexes entirely; analysis of why this can
be done independently of the rest by me.
Robert Haas 2016-10-28 12:21:15 -04:00
parent 33839b5ffb
commit d4b5d4cadd
1 changed file with 0 additions and 2 deletions


--- a/contrib/pgstattuple/pgstattuple.c
+++ b/contrib/pgstattuple/pgstattuple.c
@@ -441,7 +441,6 @@ pgstat_hash_page(pgstattuple_type *stat, Relation rel, BlockNumber blkno,
 	Buffer		buf;
 	Page		page;
 
-	_hash_getlock(rel, blkno, HASH_SHARE);
 	buf = _hash_getbuf_with_strategy(rel, blkno, HASH_READ, 0, bstrategy);
 	page = BufferGetPage(buf);
 
@@ -472,7 +471,6 @@ pgstat_hash_page(pgstattuple_type *stat, Relation rel, BlockNumber blkno,
 	}
 
 	_hash_relbuf(rel, buf);
-	_hash_droplock(rel, blkno, HASH_SHARE);
 }
 
 /*
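
For reference, a condensed sketch (not the verbatim function) of the
per-page pattern pgstat_hash_page is left with after these two
deletions: the shared buffer content lock taken inside
_hash_getbuf_with_strategy() is the only locking needed to examine a
single page.  The name examine_one_hash_page is hypothetical; the calls
mirror what the diff leaves in place.

    #include "postgres.h"
    #include "access/hash.h"
    #include "storage/bufmgr.h"

    static void
    examine_one_hash_page(Relation rel, BlockNumber blkno,
                          BufferAccessStrategy bstrategy)
    {
        Buffer      buf;
        Page        page;

        /* Pin the buffer and take a shared content lock; no heavyweight lock. */
        buf = _hash_getbuf_with_strategy(rel, blkno, HASH_READ, 0, bstrategy);
        page = BufferGetPage(buf);

        /* ... count live/dead tuples and free space on the page ... */
        (void) page;

        /* Drop the content lock and the pin. */
        _hash_relbuf(rel, buf);
    }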