Commit Graph

437 Commits

Author SHA1 Message Date
Peter Eisentraut
dbbca2cf29 Remove unused #include's from backend .c files
as determined by include-what-you-use (IWYU)

While IWYU also suggests to *add* a bunch of #include's (which is its
main purpose), this patch does not do that.  In some cases, a more
specific #include replaces another less specific one.

Some manual adjustments of the automatic result:

- IWYU currently doesn't know about includes that provide global
  variable declarations (like -Wmissing-variable-declarations), so
  those includes are being kept manually.

- All includes for port(ability) headers are being kept for now, to
  play it safe.

- No changes of catalog/pg_foo.h to catalog/pg_foo_d.h, to keep the
  patch from exploding in size.

Note that this patch touches just *.c files, so nothing declared in
header files changes in hidden ways.

As a small example, in src/backend/access/transam/rmgr.c, some IWYU
pragma annotations are added to handle a special case there.

Discussion: https://www.postgresql.org/message-id/flat/af837490-6b2f-46df-ba05-37ea6a6653fc%40eisentraut.org
2024-03-04 12:02:20 +01:00
David Rowley
a0cd954480 Optimize GenerationAlloc() and SlabAlloc()
In a similar effort to 413c18401, separate out the hot and cold paths in
GenerationAlloc() and SlabAlloc() to avoid having to set up the stack frame
for the hot path.

This additionally adjusts how we use the GenerationContext's freeblock.
The freeblock, when set, is now always empty, and we only switch to using
it when the current allocation request finds the current block does not
have enough space and the freeblock is large enough to accommodate the
allocation.
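
A minimal self-contained sketch of the fallback order described above
(hypothetical names and bookkeeping, not the committed code):

    #include <stddef.h>

    /* Illustrative stand-in for a generation block's bookkeeping. */
    typedef struct Block
    {
        size_t size;   /* total usable bytes in the block */
        size_t used;   /* bytes already handed out */
    } Block;

    static size_t
    block_free_bytes(const Block *b)
    {
        return b->size - b->used;
    }

    /* Prefer the current block; fall back to the (always-empty) freeblock
     * only when it can accommodate the request and the current block can't. */
    static Block *
    choose_block(Block *current, Block *freeblock, size_t required)
    {
        if (block_free_bytes(current) >= required)
            return current;
        if (freeblock != NULL && block_free_bytes(freeblock) >= required)
            return freeblock;
        return NULL;   /* caller must malloc a new block */
    }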

This commit also adjusts GenerationFree() so that if we pfree the final
allocation in the current generation block, we now mark that block as
empty and keep it as the current block.  Previously we free'd that block
and set the current block to NULL.  Doing that meant we needed a special
case in GenerationAlloc to check if GenerationContext.block was NULL.
So this both reduces free/malloc calls and reduces the work done in
GenerationAlloc().

In passing, improve some comments in aset.c.

Discussion: https://postgr.es/m/CAApHDvpHVSJqqb4B4OZLixr=CotKq-eKkbwZqvZVo_biYvUvQA@mail.gmail.com
2024-03-04 17:42:10 +13:00
Daniel Gustafsson
6fd144e3a9 Fix integer underflow in shared memory debugging
dsa_dump would print a large negative number instead of zero for
segment bin 0.  Fix by explicitly checking for underflow and adding a
special case for bin 0.  Backpatch to all supported versions.

Author: Ian Ilyasov <ianilyasov@outlook.com>
Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Discussion: https://postgr.es/m/GV1P251MB1004E0D09D117D3CECF9256ECD502@GV1P251MB1004.EURP251.PROD.OUTLOOK.COM
Backpatch-through: v12
2024-02-29 12:19:52 +01:00
David Rowley
413c18401d Refactor AllocSetAlloc(), separating hot and cold paths
We deem allocating from a free list, or from a block which already
contains enough space, to be the common code paths and want to optimize
for those.  Having to allocate a new block, either a normal block or a
dedicated one for a large allocation, we deem to be less common, so we
class that as "cold".  Both cold paths require a malloc and are going to
be slower as a result of that regardless.

The main motivation here is to remove the calls to malloc() in the hot
path and because of this, the compiler is now free to not bother setting
up the stack frame in AllocSetAlloc(), thus making the hot path much
cheaper.

Author: Andres Freund
Reviewed-by: David Rowley
Discussion: https://postgr.es/m/20210719195950.gavgs6ujzmjfaiig@alap3.anarazel.de
2024-02-28 14:20:43 +13:00
David Rowley
743112a2e9 Adjust memory allocation functions to allow sibling calls
Many modern compilers are able to optimize function calls to functions
where the parameters of the called function match a leading subset of
the calling function's parameters.  If there are no instructions in the
calling function after the function is called, then the compiler is free
to avoid any stack frame setup and implement the function call as a
"jmp" rather than a "call".  This is called sibling call optimization.

Here we adjust the memory allocation functions in mcxt.c to allow this
optimization.  This requires moving some responsibility into the memory
context implementations themselves.  It's now the responsibility of the
MemoryContext to check for malloc failures.  This is good both because it
allows the sibling call optimization and because most small and medium
allocations won't call malloc and instead just allocate memory from an
existing block.  That can't fail, so checking for NULLs in that case
isn't required.

Also, traditionally it's been the responsibility of palloc and the other
allocation functions in mcxt.c to check for invalid allocation size
requests.  Here we also move the responsibility of checking that into the
MemoryContext.  This isn't to allow the sibling call optimization, but
more because most of our allocators handle large allocations separately
and we can just add the size check when doing large allocations.  We no
longer check this for non-large allocations at all.

To make checking the allocation request sizes and ERROR handling easier,
add some helper functions to mcxt.c for the allocators to use.

Author: Andres Freund
Reviewed-by: David Rowley
Discussion: https://postgr.es/m/20210719195950.gavgs6ujzmjfaiig@alap3.anarazel.de
2024-02-27 16:39:42 +13:00
Alvaro Herrera
5de890e361 Add EXPLAIN (MEMORY) to report planner memory consumption
This adds a new "Memory:" line under the "Planning:" group (which
currently only has "Buffers:") when the MEMORY option is specified.

In order to make the reporting reasonably accurate, we create a separate
memory context for planner activities, to be used only when this option
is given.  The total amount of memory allocated by that context is
reported as "allocated"; we subtract memory in the context's freelists
from that and report that result as "used".  We use
MemoryContextStatsInternal() to obtain the quantities.

The code structure to show buffer usage during planning was not in
amazing shape, so I (Álvaro) modified the patch a bit to clean that up
in passing.

Author: Ashutosh Bapat
Reviewed-by: David Rowley, Andrey Lepikhov, Jian He, Andy Fan
Discussion: https://www.postgresql.org/message-id/CAExHW5sZA=5LJ_ZPpRO-w09ck8z9p7eaYAqq3Ks9GDfhrxeWBw@mail.gmail.com
2024-01-29 17:53:03 +01:00
Heikki Linnakangas
376c216138 Update comment, generation mem contexts have a "keeper" block
The keeper block was introduced in commit 1b0d9aa4f7, but that commit
forgot to update this comment.
2024-01-26 01:04:58 +02:00
Bruce Momjian
29275b1d17 Update copyright for 2024
Reported-by: Michael Paquier

Discussion: https://postgr.es/m/ZZKTDPxBBMt3C0J9@paquier.xyz

Backpatch-through: 12
2024-01-03 20:49:05 -05:00
Heikki Linnakangas
3c080fb4fa Simplify newNode() by removing special cases
- Remove MemoryContextAllocZeroAligned(). It was supposed to be a
  faster version of MemoryContextAllocZero(), but modern compilers turn
  the MemSetLoop() into a call to memset() anyway, making it more or
  less identical to MemoryContextAllocZero(). It was the only user of
  MemSetTest and MemSetLoop, so remove those too, as well as palloc0fast().

- Convert newNode() to a static inline function. When this was
  originally written, it was written as a macro because testing showed
  that gcc didn't inline the size check as we intended. Modern compiler
  versions do, and now that it just calls palloc0() there is no
  size-check to inline anyway.

One nice effect is that palloc0() takes one fewer argument than
MemoryContextAllocZeroAligned(), which saves a few instructions in the
callers of newNode().

Reviewed-by: Peter Eisentraut, Tom Lane, John Naylor, Thomas Munro
Discussion: https://www.postgresql.org/message-id/b51f1fa7-7e6a-4ecc-936d-90a8a1659e7c@iki.fi
2023-12-19 12:11:47 +02:00
Heikki Linnakangas
a8b330ffb6 Fix dsa.c with different resource owners.
The comments in dsa.c suggested that areas were owned by resource
owners, but it was not in fact tracked explicitly. The DSM attachments
held by the dsa were owned by resource owners, but not the area
itself.  That led to confusion if you used one resource owner to
attach or create the area, but then switched to a different resource
owner before allocating or even just accessing the allocations in the
area with dsa_get_address(). The additional DSM segments associated
with the area would get owned by a different resource owner than the
initial segment.  To fix, add an explicit 'resowner' field to
dsa_area.  It replaces the 'mapping_pinned' flag; resowner == NULL now
indicates that the mapping is pinned.

This is arguably a bug fix, but I'm not backpatching because it
doesn't seem to be a live bug in the back branches. In 'master', it is
a bug because commit b8bff07daa made ResourceOwners more strict so
that you are no longer allowed to remember new resources in a
ResourceOwner after you have started to release it. Merely accessing a
dsa pointer might need to attach a new DSM segment, and before this
commit it was temporarily remembered in the current owner for a very
brief period even if the DSA was pinned. And that could happen in
AtEOXact_PgStat(), which is called after the owner is already released.

Reported-by: Alexander Lakhin
Reviewed-by: Alexander Lakhin, Thomas Munro, Andres Freund
Discussion: https://www.postgresql.org/message-id/11b70743-c5f3-3910-8e5b-dd6c115ff829%40gmail.com
2023-11-15 10:34:28 +01:00
David Rowley
2c2eb0d6b2 Shrink memory contexts struct sizes
Here we reduce the block size fields in AllocSetContext, GenerationContext
and SlabContext from Size down to uint32.  Ever since c6e0fe1f2, blocks
for non-dedicated palloc chunks can no longer be larger than 1GB, so
there's no need to store the various block size fields as 64-bit values.
32 bits are enough to store 2^30.

Here we also further reduce the memory context struct sizes by getting rid
of the 'keeper' field which stores a pointer to the context's keeper
block.  All the context types which have this field always allocate the
keeper block in the same allocation as the memory context itself, so the
keeper block always comes right at the end of the context struct.  Add
some macros to calculate that address rather than storing it in the
context.
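
An illustrative sketch of the address computation (stand-in types; the
real macros also account for MAXALIGN):

    typedef struct ExampleContext { int header; } ExampleContext;
    typedef struct KeeperBlock    { int header; } KeeperBlock;

    /* The keeper block is allocated in the same malloc as the context
     * struct, so it always sits immediately after it in memory. */
    #define KeeperBlockOf(cxt) \
        ((KeeperBlock *) ((char *) (cxt) + sizeof(ExampleContext)))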

Overall, in AllocSetContext and GenerationContext, this saves 20 bytes on
64-bit builds which for ALLOCSET_SMALL_SIZES can sometimes mean the
difference between having to allocate a 2nd block and storing all the
required allocations on the keeper block alone.  Such contexts are used
in relcache to store cache entries for indexes, of which there can be
a large number in a single backend.

Author: Melih Mutlu
Reviewed-by: David Rowley
Discussion: https://postgr.es/m/CAGPVpCSOW3uJ1QJmsMR9_oE3X7fG_z4q0AoU4R_w+2RzvroPFg@mail.gmail.com
2023-07-17 11:16:56 +12:00
Thomas Munro
03f80daac8 Re-bin segment when memory pages are freed.
It's OK to be lazy about re-binning memory segments when allocating,
because that can only leave segments in a bin that's too high.  We'll
search higher bins if necessary while allocating next time, and
also eventually re-bin, so no memory can become unreachable that way.

However, when freeing memory, the largest contiguous range of free pages
might go up, so we should re-bin eagerly to make sure we don't leave the
segment in a bin that is too low for get_best_segment() to find.

The re-binning code is moved into a function of its own, so it can be
called whenever free pages are returned to the segment's free page map.
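
A hedged sketch of the bin rule (hypothetical helper; dsa.c's own code
differs in detail):

    /* Bin 0 holds segments with no contiguous free pages; otherwise the
     * bin is floor(log2(pages)) + 1, so freeing pages can raise it. */
    static unsigned
    segment_bin(unsigned long contiguous_free_pages)
    {
        unsigned bin = 0;

        while (contiguous_free_pages != 0)
        {
            bin++;
            contiguous_free_pages >>= 1;
        }
        return bin;
    }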

Back-patch to all supported releases.

Author: Dongming Liu <ldming101@gmail.com>
Reviewed-by: Robert Haas <robertmhaas@gmail.com> (earlier version)
Reviewed-by: Thomas Munro <thomas.munro@gmail.com>
Discussion: https://postgr.es/m/CAL1p7e8LzB2LSeAXo2pXCW4%2BRya9s0sJ3G_ReKOU%3DAjSUWjHWQ%40mail.gmail.com
2023-07-04 15:16:47 +12:00
Tom Lane
0245f8db36 Pre-beta mechanical code beautification.
Run pgindent, pgperltidy, and reformat-dat-files.

This set of diffs is a bit larger than typical.  We've updated to
pg_bsd_indent 2.1.2, which properly indents variable declarations that
have multi-line initialization expressions (the continuation lines are
now indented one tab stop).  We've also updated to perltidy version
20230309 and changed some of its settings, which reduces its desire to
add whitespace to lines to make assignments etc. line up.  Going
forward, that should make for fewer random-seeming changes to existing
code.

Discussion: https://postgr.es/m/20230428092545.qfb3y5wcu4cm75ur@alvherre.pgsql
2023-05-19 17:24:48 -04:00
David Rowley
3f58a4e296 Fix various typos and incorrect/outdated name references
Author: Alexander Lakhin
Discussion: https://postgr.es/m/699beab4-a6ca-92c9-f152-f559caf6dc25@gmail.com
2023-04-19 13:50:33 +12:00
David Rowley
b4dbf3e924 Fix various typos
This fixes many spelling mistakes in comments, as well as a few references
to invalid parameter names, function names and option names in comments,
and some in string constants.

Also, fix an #undef that was undefining the incorrect definition.

Author: Alexander Lakhin
Reviewed-by: Justin Pryzby
Discussion: https://postgr.es/m/d5f68d19-c0fc-91a9-118d-7c6a5a3f5fad@gmail.com
2023-04-18 13:23:23 +12:00
David Rowley
414d66220a Adjust Valgrind macro usage to protect chunk headers
Prior to this commit we only ever protected MemoryChunk's requested_size
field with Valgrind NOACCESS.  This means that if the hdrmask field is
ever accessed accidentally then we're not going to get any warnings from
Valgrind about it.  Valgrind would have warned us about the problem fixed
in 92957ed98 had we already been doing this.

Per suggestion from Tom Lane

Reviewed-by: Richard Guo
Discussion: https://postgr.es/m/1650235.1672694719@sss.pgh.pa.us
Discussion: https://postgr.es/m/CAApHDvr=FZNGbj252Z6M9BSFKoq6BMxgkQ2yEAGUYoo7RquqZg@mail.gmail.com
2023-04-15 11:59:52 +12:00
Peter Eisentraut
c209d317e9 Fix minor signed/unsigned mixup
The chunk header is unsigned, and the output format takes unsigned, so
casting it to signed in between is incorrect.
2023-04-05 07:34:52 +02:00
Tom Lane
b3e184a5d4 Fix erroneous Valgrind markings in AllocSetRealloc.
If asked to decrease the size of a large (>8K) palloc chunk,
AllocSetRealloc could improperly change the Valgrind state of memory
beyond the new end of the chunk: it would mark data UNDEFINED as far
as the old end of the chunk after having done the realloc(3) call,
thus tromping on the state of memory that no longer belongs to it.
One would normally expect that memory to now be marked NOACCESS,
so that this mislabeling might prevent detection of later errors.
If realloc() had chosen to move the chunk someplace else (unlikely,
but well within its rights) we could also mismark perfectly-valid
DEFINED data as UNDEFINED, causing false-positive valgrind reports
later.  Also, any malloc bookkeeping data placed within this area
might now be wrongly marked, causing additional problems.

Fix by replacing relevant uses of "oldsize" with "Min(size, oldsize)".
It's sufficient to mark as far as "size" when that's smaller, because
whatever remains in the new chunk size will be marked NOACCESS below,
and we expect realloc() to have taken care of marking the memory
beyond the new official end of the chunk.

While we're here, also rename the function's "oldsize" variable
to "oldchksize" to more clearly explain what it actually holds,
namely the distance to the end of the chunk (that is, requested size
plus trailing padding).  This is more consistent with the use of
"size" and "chksize" to hold the new requested size and chunk size.
Add a new variable "oldsize" in the one stanza where we're actually
talking about the old requested size.

Oversight in commit c477f3e44.  Back-patch to all supported branches,
as that was, just in case anybody wants to do valgrind testing on back
branches.

Karina Litskevich

Discussion: https://postgr.es/m/CACiT8iaAET-fmzjjZLjaJC4zwSJmrFyL7LAdHwaYyjjQOQ4hcg@mail.gmail.com
2023-02-21 18:48:15 -05:00
John Naylor
6773197464 Add MSVC support for pg_leftmost_one_pos32() and friends
To allow testing for general support for fast bitscan intrinsics,
add symbols HAVE_BITSCAN_REVERSE and HAVE_BITSCAN_FORWARD.

Also do related cleanup in AllocSetFreeIndex(): Previously, we
tested for HAVE__BUILTIN_CLZ and copied the relevant internals of
pg_leftmost_one_pos32(), with a special fallback that does less
work than the general fallback for that function. Now that we have
a more general test, we just call pg_leftmost_one_pos32() directly
for platforms with intrinsic support. On gcc at least, there is no
difference in the binary for non-assert builds.
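
A self-contained sketch of the computation (simplified; the minimum chunk
size here is illustrative, and real builds use the bitscan intrinsic
directly):

    #include <stdint.h>

    /* Portable stand-in for pg_leftmost_one_pos32(): 0-based index of
     * the most significant set bit. */
    static int
    leftmost_one_pos32(uint32_t word)
    {
        int pos = 0;

        while (word >>= 1)
            pos++;
        return pos;
    }

    /* Freelist index: smallest power of 2 (at least 2^3 bytes in this
     * sketch) that can hold the requested size. */
    static int
    freelist_index(uint32_t size)
    {
        const int minbits = 3;

        if (size <= (1u << minbits))
            return 0;
        return leftmost_one_pos32(size - 1) - minbits + 1;
    }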

Discussion: https://www.postgresql.org/message-id/CAFBsxsEPc%2BFnX_0vmmQ5DHv60sk4rL_RZJ%2BMD6ei%3D76L0kFMvA%40mail.gmail.com
2023-02-20 14:55:32 +07:00
Tom Lane
3b4ac33254 Avoid type cheats for invalid dsa_handles and dshash_table_handles.
Invent separate macros for "invalid" values of these types, so that
we needn't embed knowledge of their representations into calling code.
These are all zeroes anyway ATM, so this is not fixing any live bug,
but it makes the code cleaner and more future-proof.

I (tgl) also chose to move DSM_HANDLE_INVALID into dsm_impl.h,
since it seems like it should live beside the typedef for dsm_handle.
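
A sketch of the kind of macros invented (stand-in typedefs; the values
are all zeroes for now, as noted above):

    #include <stdint.h>

    typedef uint32_t dsa_handle;            /* stand-in for the sketch */
    typedef uint32_t dshash_table_handle;   /* stand-in for the sketch */

    /* Callers compare against these rather than hard-coding zero. */
    #define DSA_HANDLE_INVALID    ((dsa_handle) 0)
    #define DSHASH_HANDLE_INVALID ((dshash_table_handle) 0)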

Hou Zhijie, Nathan Bossart, Kyotaro Horiguchi, Tom Lane

Discussion: https://postgr.es/m/OS0PR01MB5716860B1454C34E5B179B6694C99@OS0PR01MB5716.jpnprd01.prod.outlook.com
2023-01-25 11:48:38 -05:00
Michael Paquier
5f6401f81c Fix typos in code and comments
Author: Justin Pryzby
Discussion: https://postgr.es/m/20230110045722.GD9837@telsasoft.com
2023-01-11 15:16:38 +09:00
David Rowley
b82557ecc2 Fix some compiler warnings in aset.c and generation.c
This fixes a couple of unused variable warnings that could be seen when
compiling with MEMORY_CONTEXT_CHECKING but not USE_ASSERT_CHECKING.
Defining MEMORY_CONTEXT_CHECKING without asserts is a little unusual,
however, we shouldn't be producing any warnings from such a build.

Author: Richard Guo
Discussion: https://postgr.es/m/CAMbWs4_D-vgLEh7eO47p=73u1jWO78NWf6Qfv1FndY1kG-Q-jA@mail.gmail.com
2023-01-05 12:56:17 +13:00
Bruce Momjian
c8e1ba736b Update copyright for 2023
Backpatch-through: 11
2023-01-02 15:00:37 -05:00
David Rowley
439f61757f Add palloc_aligned() to allow aligned memory allocations
This introduces palloc_aligned() and MemoryContextAllocAligned() which
allow callers to obtain memory which is allocated to the given size and
also aligned to the specified alignment boundary.  The alignment
boundaries may be any power-of-2 value.  Currently, the alignment is
capped at 2^26; however, we don't expect values anywhere near that large.
The primary expected use case is to align allocations to perhaps CPU
cache line size or to maybe I/O page size.  Certain use cases can benefit
from having aligned memory by either having better performance or more
predictable performance.

The alignment is achieved by requesting 'alignto' additional bytes from
the underlying allocator function and then aligning the address that is
returned to the requested alignment.  This obviously does waste some
memory, so alignments should be kept as small as what is required.

It's also important to note that these alignment bytes eat into the
maximum allocation size.  So something like:

palloc_aligned(MaxAllocSize, 64, 0);

will not work as we cannot request MaxAllocSize + 64 bytes.

Additionally, because we're just requesting the requested size plus the
alignment requirements from the given MemoryContext, this is not going to
work if that context is the Slab allocator, since slab can only provide
chunks of the size specified when the slab context is created.  Slab will
generate an error to indicate that the requested size is not supported.

The alignment that is requested in palloc_aligned() is stored along with
the allocated memory.  This allows the alignment to remain intact through
repalloc() calls.
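
A usage sketch in backend code (the 64 assumes a 64-byte cache line;
signature per the example above):

    #include "postgres.h"

    static char *
    make_cacheline_aligned_buffer(void)
    {
        /* 8kB buffer aligned to a presumed 64-byte CPU cache line;
         * the alignment survives later repalloc() calls. */
        return palloc_aligned(8192, 64, 0);
    }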

Author: Andres Freund, David Rowley
Reviewed-by: Maxim Orlov, Andres Freund, John Naylor
Discussion: https://postgr.es/m/CAApHDvpxLPUMV1mhxs6g7GNwCP6Cs6hfnYQL5ffJQTuFAuxt8A%40mail.gmail.com
2022-12-22 13:32:05 +13:00
David Rowley
eb706fde83 Fix newly introduced bug in slab.c
d21ded75f changed the way slab.c works but introduced a bug that meant we
could end up with the slab's curBlocklistIndex pointing to the wrong list.
The condition which was checking for this was failing to account for two
things:

1. The curBlocklistIndex could be 0 as we've currently got no non-full
blocks to put chunks on.  In this case, the dlist_is_empty() check cannot
be performed as there can be any number of completely full blocks at that
index.

2. The curBlocklistIndex may be greater than the index we just moved the
block onto.  Since we need to ensure we fill up fuller blocks first, we
must reset curBlocklistIndex when changing any blocklist element that's
less than the curBlocklistIndex too.

Reported-by: Takamichi Osumi
Discussion: https://postgr.es/m/TYCPR01MB8373329C6329768D7E093D68EDEB9@TYCPR01MB8373.jpnprd01.prod.outlook.com
2022-12-22 09:57:49 +13:00
Andrew Dunstan
8284cf5f74 Add copyright notices to meson files
Discussion: https://postgr.es/m/222b43a5-2fb3-2c1b-9cd0-375d376c8246@dunslane.net
2022-12-20 07:54:39 -05:00
David Rowley
d21ded75fd Improve the performance of the slab memory allocator
Slab has traditionally been fairly slow when compared with the AllocSet or
Generation memory allocators.  Part of this slowness came from having to
write out an entire block when we allocate a new block in order to
populate the free list indexes within the block's memory.  Additional
slowness came from having to move a block onto another dlist each time we
palloc or pfree a chunk from it.

Here we optimize both of those cases and do a little bit extra to improve
the performance of the slab allocator.

Here, instead of writing out the free list indexes when allocating a new
block, we introduce the concept of "unused" chunks.  When a block is first
allocated all chunks are unused.  These chunks only make it onto the
free list when they are pfree'd.  When allocating new chunks on an
existing block, we have the choice of consuming a chunk from the free list
or an unused chunk.  When both exist, we opt to use one from the free
list, as these have been used already and their memory is more likely
to be cached by the CPU.

Here we also reduce the number of block lists from there being one for
every possible value of free chunks on a block to just having a small
fixed number of block lists.  We keep the 0th block list for completely
full blocks and anything else stores blocks for some range of free chunks
with fuller blocks appearing on lower block list array elements.  This
reduces how often we must move a block to another list when we allocate or
free chunks, but still allows us to prefer to put new chunks on fuller
blocks and perhaps allow blocks with fewer chunks to be free'd later
once all their remaining chunks have been pfree'd.

Additionally, we now store a list of "emptyblocks", which are blocks that
no longer contain any allocated chunks.  We now keep up to 10 of these
around to avoid having to thrash malloc/free when allocation patterns
continually cause blocks to become free of any allocated chunks only to
allocate more chunks again.  Only once we have 10 of these do we free
the block.  This does raise the high water mark for the total memory that
a slab context can consume.  It does not seem entirely unreasonable that
we might one day want to make this a property of SlabContext rather than a
compile-time constant.  Let's wait and see if there is any evidence to
support that this is required before doing it.

Author: Andres Freund, David Rowley
Tested-by: Tomas Vondra, John Naylor
Discussion: https://postgr.es/m/20210717194333.mr5io3zup3kxahfm@alap3.anarazel.de
2022-12-20 21:48:51 +13:00
Peter Eisentraut
75f49221c2 Static assertions cleanup
Because we added StaticAssertStmt() first before StaticAssertDecl(),
some uses as well as the instructions in c.h are now a bit backwards
from the "native" way static assertions are meant to be used in C.
This updates the guidance and moves some static assertions to better
places.

Specifically, since the addition of StaticAssertDecl(), we can put
static assertions at the file level.  This moves a number of static
assertions out of function bodies, where they might have been stuck
out of necessity, to perhaps better places at the file level or in
header files.

Also, when the static assertion appears in a position where a
declaration is allowed, then using StaticAssertDecl() is more native
than StaticAssertStmt().

Reviewed-by: John Naylor <john.naylor@enterprisedb.com>
Discussion: https://www.postgresql.org/message-id/flat/941a04e7-dd6f-c0e4-8cdf-a33b3338cbda%40enterprisedb.com
2022-12-15 10:10:32 +01:00
Peter Eisentraut
b4b7ce8061 Add repalloc0 and repalloc0_array
These zero out the space added by repalloc.  This is a common pattern
that is quite hairy to code by hand.
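
A hedged usage sketch (assuming the repalloc0_array(ptr, type, oldcount,
count) form added here):

    #include "postgres.h"

    static int *
    grow_zeroed(int *arr, int oldcount, int newcount)
    {
        /* Grow the array; the newly added tail comes back zeroed. */
        return repalloc0_array(arr, int, oldcount, newcount);
    }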

Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://www.postgresql.org/message-id/b66dfc89-9365-cb57-4e1f-b7d31813eeec@enterprisedb.com
2022-11-12 20:34:44 +01:00
Peter Eisentraut
b1099eca8f Remove AssertArg and AssertState
These don't offer anything over plain Assert, and their usage had
already been declared obsolescent.

Author: Nathan Bossart <nathandbossart@gmail.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://www.postgresql.org/message-id/20221009210148.GA900071@nathanxps13
2022-10-28 09:19:06 +02:00
Michael Paquier
a19e5cee63 Rename SetSingleFuncCall() to InitMaterializedSRF()
Per discussion, the name of the existing routine used to initialize a SRF
function in materialize mode is unpopular, so rename it.  Equally, the
flags of this function are renamed, as follows:
- SRF_SINGLE_USE_EXPECTED -> MAT_SRF_USE_EXPECTED_DESC
- SRF_SINGLE_BLESS -> MAT_SRF_BLESS
The previous function and flags introduced in 9e98583 are kept around
for compatibility purposes, so that any extension code already compiled
against v15 continues to work as-is.  The declarations introduced here for
compatibility will be removed from HEAD in a follow-up commit.

The new names have been suggested by Andres Freund and Melanie
Plageman.

Discussion: https://postgr.es/m/20221013194820.ciktb2sbbpw7cljm@awork3.anarazel.de
Backpatch-through: 15
2022-10-18 10:22:35 +09:00
Tom Lane
9c911ec065 Make some minor improvements in memory-context infrastructure.
We lack a version of repalloc() that supports MCXT_ALLOC_NO_OOM
semantics, so invent repalloc_extended() with the usual set of
flags.  repalloc_huge() becomes a legacy wrapper for that.
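
A hedged usage sketch of the no-OOM form:

    #include "postgres.h"

    static void *
    try_grow(void *ptr, Size newsize)
    {
        /* With MCXT_ALLOC_NO_OOM, failure returns NULL instead of ERROR. */
        void *newptr = repalloc_extended(ptr, newsize, MCXT_ALLOC_NO_OOM);

        if (newptr == NULL)
            elog(LOG, "could not grow allocation to %zu bytes", newsize);
        return newptr;
    }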

Also, fix dynahash.c so that it can support HASH_ENTER_NULL
requests when using the default palloc-based allocator.
The only reason it didn't do that already was the lack of the
MCXT_ALLOC_NO_OOM option when that code was written, ages ago.

While here, simplify a few overcomplicated tests in mcxt.c.

Discussion: https://postgr.es/m/2982579.1662416866@sss.pgh.pa.us
2022-10-14 11:55:56 -04:00
Tom Lane
0e87dfe464 Harden memory context allocators against bogus chunk pointers.
Before commit c6e0fe1f2, functions such as AllocSetFree could pretty
safely presume that they were given a valid chunk pointer for their
own type of context, because the indirect call through a memory
context object and method struct would be very unlikely to work
otherwise.  But now, if pfree() is mistakenly invoked on a pointer
to garbage, we have three chances in eight of ending up at one of
these functions.  That means we need to take extra measures to
verify that we are looking at what we're supposed to be looking at,
especially in debug builds.

Hence, add code to verify that the chunk's back-link to a block header
leads to a memory context object that satisfies the right sort of
IsA() check.  This is still a bit weaker than what we did before,
but for the moment assume that an IsA() check is sufficient.
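
A self-contained schematic of the check's shape (hypothetical names and
tag value; the real code uses Assert() or elog() per the compromise
described below, not abort()):

    #include <stdint.h>
    #include <stdlib.h>

    typedef struct Context { int node_tag; } Context;
    typedef struct Block   { Context *owner; } Block;

    #define T_ALLOCSET_CONTEXT 1   /* illustrative IsA() tag */

    static void
    check_chunk(char *chunk, uint64_t block_offset)
    {
        /* Follow the chunk's back-link to its block header... */
        Block *block = (Block *) (chunk - block_offset);

        /* ...then insist the owning context passes the IsA()-style test. */
        if (block->owner == NULL ||
            block->owner->node_tag != T_ALLOCSET_CONTEXT)
            abort();   /* real code: Assert() or elog(ERROR, ...) */
    }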

As a compromise between speed and safety, implement these checks
as Asserts when dealing with small chunks but plain test-and-elogs
when dealing with large (external) chunks.  The latter case should
not be too performance-critical, but the former case probably is.
In slab.c, all chunks are small; but nonetheless use a plain test
in SlabRealloc, because that is certainly not performance-critical,
indeed we should be suspicious that it's being called in error.

In aset.c, additionally add some assertions that the "value" field
of the chunk header is within the small range allowed for freelist
indexes.  Without that, we might find ourselves trying to wipe
most of memory when CLOBBER_FREED_MEMORY is enabled, or scribbling
on a "freelist header" that's far away from the context object.

Eventually, field experience might show us that it's smarter for
these tests to be active always, but for now we'll try to get
away with just having them as assertions.

While at it, also be more uniform about asserting that context
objects passed as parameters are of the type we expect.  Some
places missed that altogether, and slab.c was for no very good
reason doing it differently from the other allocators.

Discussion: https://postgr.es/m/3578387.1665244345@sss.pgh.pa.us
2022-10-10 18:45:34 -04:00
Tom Lane
80ef926758 Improve our ability to detect bogus pointers passed to pfree et al.
Commit c6e0fe1f2 was a shade too trusting that any pointer passed
to pfree, repalloc, etc will point at a valid chunk.  Notably,
passing a pointer that was actually obtained from malloc tended
to result in obscure assertion failures, if not worse.  (On FreeBSD
I've seen such mistakes take down the entire cluster, seemingly as
a result of clobbering shared memory.)

To improve matters, extend the mcxt_methods[] array so that it
has entries for every possible MemoryContextMethodID bit-pattern,
with the currently unassigned ID codes pointing to error-reporting
functions.  Then, fiddle with the ID assignments so that patterns
likely to be associated with bad pointers aren't valid ID codes.
In particular, we should avoid assigning bit patterns 000 (zeroed
memory) and 111 (wipe_mem'd memory).

It turns out that on glibc (Linux), malloc uses chunk headers that
have flag bits in the same place we keep MemoryContextMethodID,
and that the bit patterns 000, 001, 010 are the only ones we'll
see as long as the backend isn't threaded.  So we can have very
robust detection of pfree'ing a malloc-assigned block on that
platform, at least so long as we can refrain from using up those
ID codes.  On other platforms, we don't have such a good guarantee,
but keeping 000 reserved will be enough to catch many such cases.

While here, make GetMemoryChunkMethodID() local to mcxt.c, as there
seems no need for it to be exposed even in memutils_internal.h.

Patch by me, with suggestions from Andres Freund and David Rowley.

Discussion: https://postgr.es/m/2910981.1665080361@sss.pgh.pa.us
2022-10-06 21:24:00 -04:00
Tom Lane
9543eff5e0 Remove MemoryContextContains().
MemoryContextContains is no longer reliable in the wake of c6e0fe1f2,
because there's no longer very much redundancy in chunk headers.
(It wasn't *completely* reliable even before that, as there was a
chance of a false positive if you passed it something that didn't
point to an mcxt chunk at all.  But it was generally good enough.)

Hence, remove it.  There is no remaining core code that requires it.
Extensions that have been using it might be able to substitute a
test like "GetMemoryChunkContext(ptr) == context", recognizing that
this explicitly requires that the pointer point to some chunk.
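
A sketch of the suggested substitute test (per above, ptr must be NULL or
point at a real chunk):

    #include "postgres.h"
    #include "utils/memutils.h"

    static bool
    allocated_in(void *ptr, MemoryContext context)
    {
        return ptr != NULL && GetMemoryChunkContext(ptr) == context;
    }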

Tom Lane and David Rowley

Discussion: https://postgr.es/m/1913788.1664898906@sss.pgh.pa.us
2022-10-06 13:35:31 -04:00
Andres Freund
e6927270cd meson: Add initial version of meson based build system
Autoconf is showing its age; fewer and fewer contributors know how to
wrangle it. Recursive make has a lot of hard-to-resolve dependency issues
and slow incremental rebuilds. Our home-grown MSVC build system is hard to
maintain for
developers not using Windows and runs tests serially. While these and other
issues could individually be addressed with incremental improvements, together
they seem best addressed by moving to a more modern build system.

After evaluating different build system choices, we chose to use meson, to a
good degree based on the adoption by other open source projects.

We decided that it's more realistic to commit a relatively early version of
the new build system and mature it in tree.

This commit adds an initial version of a meson based build system. It supports
building postgres on at least AIX, FreeBSD, Linux, macOS, NetBSD, OpenBSD,
Solaris and Windows (though only gcc is supported on AIX and Solaris).  For
Windows/MSVC postgres can now be built with ninja (faster, particularly for
incremental builds) and msbuild (supporting the visual studio GUI, but
building slower).

Several aspects (e.g. Windows rc file generation, PGXS compatibility, LLVM
bitcode generation, documentation adjustments) are done in subsequent commits
requiring further review. Other aspects (e.g. not installing test-only
extensions) are not yet addressed.

When building on Windows with msbuild, builds are slower when using a visual
studio version older than 2019, because those versions do not support
MultiToolTask, required by meson for intra-target parallelism.

The plan is to remove the MSVC specific build system in src/tools/msvc soon
after reaching feature parity. However, we're not planning to remove the
autoconf/make build system in the near future. Likely we're going to keep at
least the parts required for PGXS to keep working around until all supported
versions build with meson.

Some initial help for postgres developers is at
https://wiki.postgresql.org/wiki/Meson

With contributions from Thomas Munro, John Naylor, Stone Tickle and others.

Author: Andres Freund <andres@anarazel.de>
Author: Nazir Bilal Yavuz <byavuz81@gmail.com>
Author: Peter Eisentraut <peter@eisentraut.org>
Reviewed-By: Peter Eisentraut <peter.eisentraut@enterprisedb.com>
Discussion: https://postgr.es/m/20211012083721.hvixq4pnh2pixr3j@alap3.anarazel.de
2022-09-21 22:37:17 -07:00
David Rowley
b76fb6c2a9 Temporarily make MemoryContextContains return false
5265e91fd updated MemoryContextContains so that it works correctly
with the new MemoryChunk code added in c6e0fe1f2.  However,
5265e91fd was done with the assumption that MemoryContextContains would
only ever be given pointers to memory that had been returned by one of our
MemoryContext allocators.  It seems that's not true and many of our 32-bit
buildfarm animals are clearly showing that.

There are some code paths that call MemoryContextContains with a pointer
pointing part way into an allocated chunk.  The example of this found by
the 32-bit buildfarm animals is the int2int4_sum() function.  This
function returns transdata->sum, which is not a pointer to memory that was
allocated directly.  This return value is then subsequently passed to
MemoryContextContains which causes it to crash due to it thinking the
memory directly prior to that pointer is a MemoryChunk.  What's actually
in that memory is the field in the struct that comes prior to the "sum"
field.  This problem didn't occur in 64-bit world because BIGINT is a
byval type and the code which was calling MemoryContextContains with the
bad pointer only does so with non-byval types.

Here, instead of reverting 5265e91fd and making MemoryContextContains
completely broken again, let's just make it always return false for now.
Effectively prior to 5265e91fd it was doing that anyway, this at least
makes that more explicit.  The only repercussions of this with the current
MemoryContextContains calls are that we perform a datumCopy() when we
might not need to.  This should make the 32-bit buildfarm animals happy
again and give us more time to consider a long-term fix.

Discussion: https://postgr.es/m/20220907130552.sfjri7jublfxyyi4%40jrouhaud
2022-09-09 00:28:38 +12:00
David Rowley
5265e91fd1 Make MemoryContextContains work correctly again
c6e0fe1f2 recently changed the way we store headers for allocated chunks
of memory.  Prior to that commit, we stored a pointer to the owning
MemoryContext directly prior to the pointer to the allocated memory.
That's no longer true and c6e0fe1f2 neglected to update
MemoryContextContains() so that it correctly obtains the owning context
with the new method.

A side effect of this change and c6e0fe1f2, in general, is that it's even
less safe than it was previously to pass MemoryContextContains() an
arbitrary pointer which was not allocated by one of our MemoryContexts.
Previously some comments in MemoryContextContains() seemed to indicate
that the worst that could happen by passing an arbitrary pointer would be
a false positive return value.  It seems to me that this was a rather
wishful outlook as we subsequently proceeded to subtract sizeof(void *)
from the given pointer and then dereferenced that memory.  So it seems
quite likely that we could have segfaulted instead of returning a false
positive.  However, it's not impossible that the memory sizeof(void *)
bytes before the pointer could have been owned by the process, but it's
far less likely to work now as obtaining a pointer to the owning
MemoryContext is less direct than before c6e0fe1f2 and will access memory
that's possibly much further away to obtain the owning MemoryContext.
Because of this, I took the liberty of updating the comment to warn
against any future usages of the function and checked the existing core
usages to ensure that we only ever pass in a pointer to memory allocated
by a MemoryContext.

Extension authors updating their code for PG16 who are using
MemoryContextContains should check to ensure that only NULL pointers and
pointers to chunks allocated with a MemoryContext will ever be passed to
MemoryContextContains.

Reported-by: Andres Freund
Discussion: https://postgr.es/m/20220905230949.kb3x2fkpfwtngz43@awork3.anarazel.de
2022-09-08 00:20:20 +12:00
David Rowley
0e480385ec Make more effort to put a sentinel at the end of allocated memory
Traditionally, in MEMORY_CONTEXT_CHECKING builds, we only ever marked a
sentinel byte just beyond the requested size if there happened to be
enough space on the chunk to do so.  For Slab and Generation context
types, we only rounded the size of the chunk up to the next maxalign
boundary, so it was often not that likely that those would ever have space
for the sentinel given that the majority of allocation requests are going
to be for sizes which are maxaligned.  For AllocSet, it was a little
different as smaller allocations are rounded up to the next power-of-2
value rather than the next maxalign boundary, so we're a bit more likely
to have space for the sentinel byte, especially when we get away from tiny
sized allocations such as 8 or 16 bytes.

Here we make more of an effort to allow space so that there is enough room
for the sentinel byte in more cases.  This makes it more likely that we'll
detect when buggy code accidentally writes beyond the end of any of its
memory allocations.

Each of the 3 MemoryContext types has been changed as follows:

The Slab allocator will now always set a sentinel byte.  Both the current
usages of this MemoryContext type happen to use chunk sizes which were on
the maxalign boundary, so these never used sentinel bytes previously.

For the Generation allocator, we now always ensure there's enough space in
the allocation for a sentinel byte.

For AllocSet, this commit makes an adjustment for allocation sizes which
are greater than allocChunkLimit.  We now ensure there is always space for
a sentinel byte.  We don't alter the sentinel behavior for request sizes
<= allocChunkLimit.  Making way for the sentinel byte for power-of-2
request sizes would require doubling up to the next power of 2.  Some
analysis done on the request sizes made during installcheck shows that a
fairly large portion of allocation requests are for power-of-2 sizes.  The
amount of additional memory for the sentinel there seems prohibitive, so
we do nothing for those here.
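
A self-contained illustration of the sentinel idea (byte value and
helpers illustrative, not memdebug.h's exact code):

    #include <stddef.h>

    #define SENTINEL_BYTE 0x7E   /* illustrative value */

    /* Plant one byte just past the requested size... */
    static void
    set_sentinel(char *base, size_t requested)
    {
        base[requested] = (char) SENTINEL_BYTE;
    }

    /* ...and verify it later: a clobbered byte means someone wrote past
     * the end of their allocation. */
    static int
    sentinel_ok(const char *base, size_t requested)
    {
        return base[requested] == (char) SENTINEL_BYTE;
    }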

Author: David Rowley
Discussion: https://postgr.es/m/3478405.1661824539@sss.pgh.pa.us
2022-09-07 15:46:57 +12:00
David Rowley
d5ee4db0ea Use MAXALIGN() in calculations using sizeof(SlabBlock)
c6e0fe1f2 added a new pointer field to SlabBlock, making it 4 bytes larger
on 32-bit machines.  Prior to that commit, the size of that struct was a
multiple of 8, which meant that MAXALIGN(sizeof(SlabBlock)) was the same
as sizeof(SlabBlock), however, after c6e0fe1f2, due to the addition of the
new pointer field to store a pointer to the owning context, that was no
longer true on builds with sizeof(void *) == 4.

This problem was highlighted by an Assert failure which was checking that
the pointer given to pfree() was MAXALIGNED.  Various 32-bit ARM buildfarm
animals were failing.  These have MAXIMUM_ALIGNOF of 8.  The only 32-bit
testing I'd managed to do on c6e0fe1f2 had been on x86, which has a
MAXIMUM_ALIGNOF of 4, therefore did not exhibit this issue.

Here we define Slab_BLOCKHDRSZ and copy what is being done in aset.c and
generation.c for doing calculations based on the size of the context's
block type.  This means that SlabAlloc() will now always return a
MAXALIGNed pointer.

This also fixes an incorrect sentinel_ok() check in SlabCheck() which was
incorrectly checking the wrong sentinel byte.  This must have previously
not caused any issues due to the fullChunkSize never being large enough to
store the sentinel byte.

Diagnosed-by: Tomas Vondra, Tom Lane
Author: Tomas Vondra, David Rowley
Discussion: https://postgr.es/m/CAA4eK1%2B1JyW5TiL%3DyV-3Uq1CrfnTyn0Xrk5uArt31Z%3D8rgPhXQ%40mail.gmail.com
2022-08-30 14:36:04 +12:00
David Rowley
c6e0fe1f2a Improve performance of and reduce overheads of memory management
Whenever we palloc a chunk of memory, traditionally, we prefix the
returned pointer with a pointer to the memory context to which the chunk
belongs.  This is required so that we're able to easily determine the
owning context when performing operations such as pfree() and repalloc().

For the AllocSet context, prior to this commit we additionally prefixed
the pointer to the owning context with the size of the chunk.  This made
the header 16 bytes in size.  This 16-byte overhead was required for all
AllocSet allocations regardless of the allocation size.

For the generation context, the problem was worse; in addition to the
pointer to the owning context and chunk size, we also stored a pointer to
the owning block so that we could track the number of freed chunks on a
block.

The slab allocator had a 16-byte chunk header.

The changes being made here reduce the chunk header size down to just 8
bytes for all 3 of our memory context types.  For small to medium sized
allocations, this significantly increases the number of chunks that we can
fit on a given block which results in much more efficient use of memory.

Additionally, this commit completely changes the rule that pointers to
palloc'd memory must be directly prefixed by a pointer to the owning
memory context and instead, we now insist that they're directly prefixed
by an 8-byte value where the least significant 3-bits are set to a value
to indicate which type of memory context the pointer belongs to.  Using
those 3 bits as an index (known as MemoryContextMethodID) to a new array
which stores the methods for each memory context type, we're now able to
pass the pointer given to functions such as pfree() and repalloc() to the
function specific to that context implementation to allow them to devise
their own methods of finding the memory context which owns the given
allocated chunk of memory.
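
A minimal decode sketch (the low 3 bits are per the description above;
everything else here is illustrative layout, not the committed bit
assignments):

    #include <stdint.h>

    #define METHODID_MASK ((uint64_t) 0x7)   /* low 3 bits of the header */

    /* MemoryContextMethodID: index into the per-type methods array. */
    static unsigned
    chunk_method_id(uint64_t hdrmask)
    {
        return (unsigned) (hdrmask & METHODID_MASK);
    }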

The reason we're able to reduce the chunk header down to just 8 bytes is
because of the way we make use of the remaining 61 bits of the required
8-byte chunk header.  Here we also implement a general-purpose MemoryChunk
struct which makes use of those 61 remaining bits to allow the storage of
a 30-bit value which the MemoryContext is free to use as it pleases, and
also the number of bytes which must be subtracted from the chunk to get a
reference to the block that the chunk is stored on (also 30 bits).  The 1
additional remaining bit is to denote if the chunk is an "external" chunk
or not.  External here means that the chunk header does not store the
30-bit value or the block offset.  The MemoryContext can use these
external chunks at any time, but must use them if either of the two 30-bit
fields is not large enough for the value(s) that need to be stored in
them.  When the chunk is marked as external, it is up to the MemoryContext
to devise its own means to determine the block offset.

Using 3-bits for the MemoryContextMethodID does mean we're limiting
ourselves to only having a maximum of 8 different memory context types.
We could reduce the bit space for the 30-bit value a little to make way
for more than 3 bits, but it seems like it might be better to do that only
if we ever need more than 8 context types.  This would only be a problem
if some future memory context type which does not use MemoryChunk really
couldn't give up any of the 61 remaining bits in the chunk header.

With this MemoryChunk, each of our 3 memory context types can quickly
obtain a reference to the block any given chunk is located on.  AllocSet
is able to find the context to which the chunk is owned, by first
obtaining a reference to the block by subtracting the block offset as is
stored in the 'hdrmask' field and then referencing the block's 'aset'
field.  The Generation context uses the same method, but GenerationBlock
did not have a field pointing back to the owning context, so one is added
by this commit.

In aset.c and generation.c, all allocations larger than allocChunkLimit
are stored on dedicated blocks.  When there's just a single chunk on a
block like this, it's easy to find the block from the chunk, we just
subtract the size of the block header from the chunk pointer.  The size of
these chunks is also known as we store the endptr on the block, so we can
just subtract the pointer to the allocated memory from that.  Because we
can easily find the owning block and the size of the chunk for these
dedicated blocks, we just always use external chunks for allocation sizes
larger than allocChunkLimit.  For generation.c, this sidesteps the problem
of non-external MemoryChunks being unable to represent chunk sizes >= 1GB.
This is less of a problem for aset.c as we store the free list index in
the MemoryChunk's spare 30-bit field (the value of which will never be
close to using all 30-bits).  We can easily reverse engineer the chunk size
from this when needed.  Storing this saves AllocSetFree() from having to
make a call to AllocSetFreeIndex() to determine which free list to put the
newly freed chunk on.

For the slab allocator, this commit adds a new restriction that slab
chunks cannot be >= 1GB in size.  If there happened to be any users of
slab.c which used chunk sizes this large, they really should be using
AllocSet instead.

Here we also add a restriction that normal non-dedicated blocks cannot be
1GB or larger.  It's now not possible to pass a 'maxBlockSize' >= 1GB
during the creation of an AllocSet or Generation context.  Allocations can
still be larger than 1GB, it's just these will always be on dedicated
blocks (which do not have the 1GB restriction).

Author: Andres Freund, David Rowley
Discussion: https://postgr.es/m/CAApHDvpjauCRXcgcaL6+e3eqecEHoeRm9D-kcbuvBitgPnW=vw@mail.gmail.com
2022-08-29 17:15:00 +12:00
Peter Eisentraut
805a397db4 Add more detail why repalloc and pfree do not accept NULL pointers
Per discussion, we choose not to change this.  This just gives a
little bit more information.

Discussion: https://www.postgresql.org/message-id/flat/cf26e970-8e92-59f1-247a-aa265235075b%40enterprisedb.com
2022-08-28 09:55:04 +02:00
Thomas Munro
4f1f5a7f85 Remove fls(), use pg_leftmost_one_pos32() instead.
Commit 4f658dc8 provided the traditional BSD fls() function in
src/port/fls.c so it could be used in several places.  Later we added a
bunch of similar facilities in pg_bitutils.h, based on compiler
builtins that map to hardware instructions.  It's a bit confusing to
have both 1-based and 0-based variants of this operation in use in
different parts of the tree, and neither is blessed by a standard.
Let's drop fls.c and the configure probe, and reuse the newer code.

Reviewed-by: David Rowley <dgrowleyml@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/CA%2BhUKG%2B7dSX1XF8yFGmYk-%3D48dbjH2kmzZj16XvhbrWP-9BzRg%40mail.gmail.com
2022-07-22 10:41:50 +12:00
Peter Eisentraut
9fd45870c1 Replace many MemSet calls with struct initialization
This replaces all MemSet() calls with struct initialization where that
is easily and obviously possible.  (For example, some cases have to
worry about padding bits, so I left those.)

(The same could be done with appropriate memset() calls, but this
patch is part of an effort to phase out MemSet(), so it doesn't touch
memset() calls.)
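
A before/after sketch with a generic struct (memset stands in for
PostgreSQL's MemSet macro):

    #include <string.h>

    typedef struct Example { int a; char *b; } Example;

    void
    demo(void)
    {
        Example e1;
        Example e2 = {0};             /* new pattern: struct initialization */

        memset(&e1, 0, sizeof(e1));   /* old pattern being phased out */
        (void) e1;
        (void) e2;
    }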

Reviewed-by: Ranier Vilela <ranier.vf@gmail.com>
Reviewed-by: Alvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://www.postgresql.org/message-id/9847b13c-b785-f4e2-75c3-12ec77a3b05c@enterprisedb.com
2022-07-16 08:50:49 +02:00
Thomas Munro
7201cd1862 Fix relptr's encoding of the base address.
Previously, we encoded both NULL and the first byte at the base address
as 0.  That confusion led to the assertion in commit e07d4ddc, which
failed when min_dynamic_shared_memory was used.  Give them distinct
encodings, by switching to 1-based offsets for non-NULL pointers.  Also
improve macro hygiene in passing (missing/misplaced parentheses), and
remove open-coded access to the raw offset value from freepage.c/h.
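
A simplified sketch of the new encoding (relptr.h's macros are more
general than this):

    #include <stddef.h>

    /* 1-based offsets: 0 now unambiguously means NULL, and the first
     * byte at the base address encodes as 1. */
    static size_t
    relptr_encode(const char *base, const char *ptr)
    {
        return ptr == NULL ? 0 : (size_t) (ptr - base) + 1;
    }

    static char *
    relptr_decode(char *base, size_t off)
    {
        return off == 0 ? NULL : base + off - 1;
    }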

Although e07d4ddc was back-patched to 10, the only code that actually
makes use of relptr at the base address arrived in 84b1c63a, so no need
to back-patch further than 14 for now.

Reported-by: Justin Pryzby <pryzby@telsasoft.com>
Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Discussion: https://postgr.es/m/20220519193839.GT19626%40telsasoft.com
2022-06-27 11:34:26 +12:00
Alvaro Herrera
24d2b2680a Remove extraneous blank lines before block-closing braces
These are useless and distracting.  We wouldn't have written the code
with them to begin with, so there's no reason to keep them.

Author: Justin Pryzby <pryzby@telsasoft.com>
Discussion: https://postgr.es/m/20220411020336.GB26620@telsasoft.com
Discussion: https://postgr.es/m/attachment/133167/0016-Extraneous-blank-lines.patch
2022-04-13 19:16:02 +02:00
David Rowley
b0e5f02ddc Fix various typos and spelling mistakes in code comments
Author: Justin Pryzby
Discussion: https://postgr.es/m/20220411020336.GB26620@telsasoft.com
2022-04-11 20:49:41 +12:00
David Rowley
1b0d9aa4f7 Improve the generation memory allocator
Here we make a series of improvements to the generation memory
allocator, namely:

1. Allow generation contexts to have minimum, initial and maximum block
sizes (see the sketch after this list). The standard allocator allows this
already, but when the generation context was added, it only allowed
fixed-sized blocks.  The problem with fixed-sized blocks is that it's
difficult to choose how large to make the blocks.  If the chosen size is
too small then we'd end up with a large number of blocks and a large
number of malloc calls. If the block size is made too large, then memory
is wasted.

2. Add support for "keeper" blocks.  This is a special block that is
allocated along with the context itself but is never freed.  Instead,
when the last chunk in the keeper block is freed, we simply mark the block
as empty to allow new allocations to make use of it.

3. Add facility to "recycle" newly empty blocks instead of freeing them
and having to later malloc an entire new block again.  We do this by
recording a single GenerationBlock which has become empty of any chunks.
When we run out of space in the current block, we check to see if there is
a "freeblock" and use that if it contains enough space for the allocation.

Author: David Rowley, Tomas Vondra
Reviewed-by: Andy Fan
Discussion: https://postgr.es/m/d987fd54-01f8-0f73-af6c-519f799a0ab8@enterprisedb.com
2022-04-04 20:53:13 +12:00
Michael Paquier
9e98583898 Create routine able to set single-call SRFs for Materialize mode
Set-returning functions that use the Materialize mode, creating a
tuplestore to include all the tuples returned in a set rather than doing
so in multiple calls, use roughly the same set of steps to prepare
ReturnSetInfo for this job:
- Check if ReturnSetInfo supports returning a tuplestore and if the
materialize mode is enabled.
- Create a tuplestore for all the tuples part of the returned set in the
per-query memory context, stored in ReturnSetInfo->setResult.
- Build a tuple descriptor mostly from get_call_result_type(), then
stored in ReturnSetInfo->setDesc.  Note that there are some cases where
the SRF's tuple descriptor has to be the one specified by the function
caller.

This refactoring is done so that there are (well, should be) no behavior
changes in any of the in-core functions refactored, and the centralized
function that checks and sets up the function's ReturnSetInfo can be
controlled with a set of bits32 options.  Two of them prove to be
necessary now:
- SRF_SINGLE_USE_EXPECTED to use expectedDesc as tuple descriptor, as
expected by the function's caller.
- SRF_SINGLE_BLESS to validate the tuple descriptor for the SRF.

The same initialization pattern is simplified in 28 places per my
count as of src/backend/, shaving up to ~900 lines of code.  These
mostly come from the removal of the per-query initializations and the
sanity checks now grouped in a single location.  There are more
locations that could be simplified in contrib/, that are left for a
follow-up cleanup.

fcc2817, 07daca5 and d61a361 have prepared the areas of the code related
to this change, to ease this refactoring.

Author: Melanie Plageman, Michael Paquier
Reviewed-by: Álvaro Herrera, Justin Pryzby
Discussion: https://postgr.es/m/CAAKRu_azyd1Z3W_r7Ou4sorTjRCs+PxeHw1CWJeXKofkE6TuZg@mail.gmail.com
2022-03-07 10:26:29 +09:00
Tom Lane
2e517818f4 Fix SPI's handling of errors during transaction commit.
SPI_commit previously left it up to the caller to recover from any error
occurring during commit.  Since that's complicated and requires use of
low-level xact.c facilities, it's not too surprising that no caller got
it right.  Let's move the responsibility for cleanup into spi.c.  Doing
that requires redefining SPI_commit as starting a new transaction, so
that it becomes equivalent to SPI_commit_and_chain except that you get
default transaction characteristics instead of preserving the prior
transaction's characteristics.  We can make this pretty transparent
API-wise by redefining SPI_start_transaction() as a no-op.  Callers
that expect to do something in between might be surprised, but
available evidence is that no callers do so.

Having made that API redefinition, we can fix this mess by having
SPI_commit[_and_chain] trap errors and start a new, clean transaction
before re-throwing the error.  Likewise for SPI_rollback[_and_chain].
Some cleanup is also needed in AtEOXact_SPI, which was nowhere near
smart enough to deal with SPI contexts nested inside a committing
context.

While plperl and pltcl need no changes beyond removing their now-useless
SPI_start_transaction() calls, plpython needs some more work because it
hadn't gotten the memo about catching commit/rollback errors in the
first place.  Such an error resulted in longjmp'ing out of the Python
interpreter, which leaks Python stack entries at present and is reported
to crash Python 3.11 altogether.  Add the missing logic to catch such
errors and convert them into Python exceptions.

We are probably going to have to back-patch this once Python 3.11 ships,
but it's a sufficiently basic change that I'm a bit nervous about doing
so immediately.  Let's let it bake awhile in HEAD first.

Peter Eisentraut and Tom Lane

Discussion: https://postgr.es/m/3375ffd8-d71c-2565-e348-a597d6e739e3@enterprisedb.com
Discussion: https://postgr.es/m/17416-ed8fe5d7213d6c25@postgresql.org
2022-02-28 12:45:36 -05:00