Harden Memoization code against broken data types

Bug #17512 highlighted that a suitably broken data type could cause the
backend to crash if either the hash function or equality function were in
some way non-deterministic based on their input values.  Such a data type
could crash the backend because of code which assumes that we'll always
find a hash table entry corresponding to an item in the Memoize LRU list.
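
As an aside, purely for illustration (a standalone sketch, not PostgreSQL
code and not the actual type behind bug #17512): one way a hash function
can become non-deterministic is by folding uninitialized padding bytes
into the hash, so two copies of the same logical value can hash
differently and a later lookup misses an entry that really is in the
table.  Under such a type, memoize_lookup() could fail to find the entry
for a key that is still on the LRU list, which is the situation the check
below reports as an ERROR rather than crashing.

    /*
     * Hypothetical broken data type: the struct has padding bytes that are
     * never initialized, and broken_hash() hashes the raw bytes, padding
     * included, so hashing two equal values need not give equal results.
     */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    typedef struct BrokenKey
    {
        char        flag;       /* 1 byte, typically followed by padding */
        int32_t     value;      /* the logical payload */
    } BrokenKey;

    /* Hashes every raw byte of the struct, padding included (FNV-1a). */
    static uint32_t
    broken_hash(const BrokenKey *key)
    {
        const unsigned char *p = (const unsigned char *) key;
        uint32_t    h = 2166136261u;

        for (size_t i = 0; i < sizeof(BrokenKey); i++)
            h = (h ^ p[i]) * 16777619u;
        return h;
    }

    int
    main(void)
    {
        BrokenKey   a;
        BrokenKey   b;

        /* Give the two structs different garbage before setting the fields. */
        memset(&a, 0xAA, sizeof(a));
        memset(&b, 0x55, sizeof(b));

        a.flag = 1;
        a.value = 42;
        b.flag = 1;
        b.value = 42;           /* logically equal to a */

        /* Same logical value, but (very likely) different hash codes. */
        printf("hash(a) = %u\nhash(b) = %u\n",
               (unsigned int) broken_hash(&a),
               (unsigned int) broken_hash(&b));
        return 0;
    }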

Here we remove the assumption that we'll always find the entry
corresponding to the given LRU list item and add run-time checks to verify
we have found the given item in the cache.

This is not a fix for bug #17512, but it will turn the crash reported
there into an internal ERROR.

Reported-by: Ales Zeleny
Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/CAApHDvpxFSTwvoYWT7kmFVSZ9zLAeHb=S9vrz=RExMgSkQNWqw@mail.gmail.com
Backpatch-through: 14, where Memoize was added.
David Rowley 2022-06-08 12:39:09 +12:00
parent bf4717b091
commit fa5185b26c
1 changed file with 7 additions and 3 deletions


@@ -446,9 +446,13 @@ cache_reduce_memory(MemoizeState *mstate, MemoizeKey *specialkey)
 		 */
 		entry = memoize_lookup(mstate->hashtable, NULL);
 
-		/* A good spot to check for corruption of the table and LRU list. */
-		Assert(entry != NULL);
-		Assert(entry->key == key);
+		/*
+		 * Sanity check that we found the entry belonging to the LRU list
+		 * item.  A misbehaving hash or equality function could cause the
+		 * entry not to be found or the wrong entry to be found.
+		 */
+		if (unlikely(entry == NULL || entry->key != key))
+			elog(ERROR, "could not find memoization table entry");
 
 		/*
 		 * If we're being called to free memory while the cache is being