Reduce overhead of cache-clobber testing in LookupOpclassInfo().

Commit 03ffc4d6d added logic to bypass all caching behavior in
LookupOpclassInfo when CLOBBER_CACHE_ALWAYS is enabled.  It doesn't
look like I stopped to think much about what that would cost, but
recent investigation shows that the cost is enormous: it roughly
doubles the time needed for cache-clobber test runs.

There does seem to be value in this behavior when trying to test
the opclass-cache loading logic itself, but for other purposes the
cost is excessive.  Hence, let's back off to doing this only when
debug_invalidate_system_caches_always is at least 3; or in older
branches, when CLOBBER_CACHE_RECURSIVELY is defined.
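
For illustration, a minimal sketch of the two forms of the gate follows; the
GUC form matches the hunk further down, while the CLOBBER_CACHE_RECURSIVELY
spelling for the older branches is an assumption here, not quoted from them:

    /* v14 and later: GUC-driven, fires only at aggressive clobber levels */
    #ifdef CLOBBER_CACHE_ENABLED
        if (debug_invalidate_system_caches_always > 2)  /* i.e., at least 3 */
            opcentry->valid = false;    /* force a reload of this entry */
    #endif

    /* older branches (sketch): compile-time macro instead of a GUC */
    #ifdef CLOBBER_CACHE_RECURSIVELY
        opcentry->valid = false;
    #endif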

While here, clean up some other minor issues in LookupOpclassInfo.
Re-order the code so we aren't left with broken cache entries (leading
to later core dumps) in the unlikely case that we suffer OOM while
trying to allocate space for a new entry.  (That seems to be my
oversight in 03ffc4d6d.)  Also, in >= v13, stop allocating one array
entry too many.  That's evidently left over from sloppy reversion in
851b14b0c.
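
In outline, the reordering amounts to the pattern below; this is a condensed
sketch of the change shown in the diff, not the verbatim patch:

    if (!found)
    {
        /* make the new hash entry self-consistent before anything can fail */
        opcentry->valid = false;        /* until known OK */
        opcentry->numSupport = numSupport;
        opcentry->supportProcs = NULL;  /* filled below */
    }

    /* ... later, once the entry can safely survive a partial failure ... */
    if (opcentry->supportProcs == NULL && numSupport > 0)
        opcentry->supportProcs = (RegProcedure *)
            MemoryContextAllocZero(CacheMemoryContext,
                                   numSupport * sizeof(RegProcedure));

If the allocation fails on OOM, the entry is still marked invalid and its
supportProcs pointer is NULL, so a later call simply retries the allocation
rather than stumbling over an uninitialized pointer; and the array is now
sized at exactly numSupport entries instead of numSupport + 1.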

Back-patch to all supported branches, mainly to reduce the runtime
of cache-clobbering buildfarm animals.

Discussion: https://postgr.es/m/1370856.1625428625@sss.pgh.pa.us
Tom Lane 2021-07-05 16:51:57 -04:00
parent c04c767059
commit 9753324b7d
1 changed file with 24 additions and 20 deletions

src/backend/utils/cache/relcache.c

@@ -1594,14 +1594,14 @@ LookupOpclassInfo(Oid operatorClassOid,
 		/* First time through: initialize the opclass cache */
 		HASHCTL		ctl;
 
+		/* Also make sure CacheMemoryContext exists */
+		if (!CacheMemoryContext)
+			CreateCacheMemoryContext();
+
 		ctl.keysize = sizeof(Oid);
 		ctl.entrysize = sizeof(OpClassCacheEnt);
 		OpClassCache = hash_create("Operator class cache", 64,
 								   &ctl, HASH_ELEM | HASH_BLOBS);
-
-		/* Also make sure CacheMemoryContext exists */
-		if (!CacheMemoryContext)
-			CreateCacheMemoryContext();
 	}
 
 	opcentry = (OpClassCacheEnt *) hash_search(OpClassCache,
@@ -1610,16 +1610,10 @@ LookupOpclassInfo(Oid operatorClassOid,
 	if (!found)
 	{
-		/* Need to allocate memory for new entry */
+		/* Initialize new entry */
 		opcentry->valid = false;	/* until known OK */
 		opcentry->numSupport = numSupport;
-		if (numSupport > 0)
-			opcentry->supportProcs = (RegProcedure *)
-				MemoryContextAllocZero(CacheMemoryContext,
-									   (numSupport + 1) * sizeof(RegProcedure));
-		else
-			opcentry->supportProcs = NULL;
+		opcentry->supportProcs = NULL;	/* filled below */
 	}
 	else
 	{
@@ -1627,14 +1621,17 @@ LookupOpclassInfo(Oid operatorClassOid,
 	}
 
 	/*
-	 * When testing for cache-flush hazards, we intentionally disable the
-	 * operator class cache and force reloading of the info on each call. This
-	 * is helpful because we want to test the case where a cache flush occurs
-	 * while we are loading the info, and it's very hard to provoke that if
-	 * this happens only once per opclass per backend.
+	 * When aggressively testing cache-flush hazards, we disable the operator
+	 * class cache and force reloading of the info on each call.  This models
+	 * no real-world behavior, since the cache entries are never invalidated
+	 * otherwise.  However it can be helpful for detecting bugs in the cache
+	 * loading logic itself, such as reliance on a non-nailed index.  Given
+	 * the limited use-case and the fact that this adds a great deal of
+	 * expense, we enable it only for high values of
+	 * debug_invalidate_system_caches_always.
 	 */
 #ifdef CLOBBER_CACHE_ENABLED
-	if (debug_invalidate_system_caches_always > 0)
+	if (debug_invalidate_system_caches_always > 2)
 		opcentry->valid = false;
 #endif
@@ -1642,8 +1639,15 @@ LookupOpclassInfo(Oid operatorClassOid,
 		return opcentry;
 
 	/*
-	 * Need to fill in new entry.
-	 *
+	 * Need to fill in new entry.  First allocate space, unless we already did
+	 * so in some previous attempt.
+	 */
+	if (opcentry->supportProcs == NULL && numSupport > 0)
+		opcentry->supportProcs = (RegProcedure *)
+			MemoryContextAllocZero(CacheMemoryContext,
+								   numSupport * sizeof(RegProcedure));
+
+	/*
 	 * To avoid infinite recursion during startup, force heap scans if we're
 	 * looking up info for the opclasses used by the indexes we would like to
 	 * reference here.