Commit Graph

40539 Commits

Peter Geoghegan 07eef53955 Use scanned_pages to decide when to failsafe check.
Perform a failsafe check every time VACUUM's first heap scan scans a
further FAILSAFE_EVERY_PAGES pages, rather than using an approach based
on the number of physical blocks that our current blkno is from the
blkno at the time of the previous failsafe check.  That way VACUUM will
perform a failsafe check every time it has scanned a uniform number of
pages, without it mattering when or how VACUUM skipped pages using the
visibility map.

Sami Imseih, with changes to FAILSAFE_EVERY_PAGES comments added by me.

Author: Sami Imseih <simseih@amazon.com>
Reviewed-By: Peter Geoghegan <pg@bowt.ie>
Discussion: https://postgr.es/m/401CE010-4049-4B94-9961-0B610A5D254D%40amazon.com
2022-12-22 10:41:40 -08:00
Peter Geoghegan 4ce3afb82e Refactor how VACUUM passes around its XID cutoffs.
Use a dedicated struct for the XID/MXID cutoffs used by VACUUM, such as
FreezeLimit and OldestXmin.  This state is initialized in vacuum.c, and
then passed around by code from vacuumlazy.c to heapam.c freezing
related routines.  The new convention is that everybody works off of the
same cutoff state, which is passed around via pointers to const.

Also simplify some of the logic for dealing with frozen xmin in
heap_prepare_freeze_tuple: add dedicated "xmin_already_frozen" state to
clearly distinguish xmin XIDs that we're going to freeze from those that
were already frozen from before.  That way the routine's xmin handling
code is symmetrical with the existing xmax handling code.  This is
preparation for an upcoming commit that will add page level freezing.

Also refactor the control flow within FreezeMultiXactId(), while adding
stricter sanity checks.  We now test OldestXmin directly, instead of
using FreezeLimit as an inexact proxy for OldestXmin.  This is further
preparation for the page level freezing work, which will make the
function's caller cede control of page level freezing to the function
where appropriate (where heap_prepare_freeze_tuple sees a tuple that
happens to contain a MultiXactId in its xmax).

Author: Peter Geoghegan <pg@bowt.ie>
Reviewed-By: Jeff Davis <pgsql@j-davis.com>
Discussion: https://postgr.es/m/CAH2-WznS9TxXmz2_=SY+SyJyDFbiOftKofM9=aDo68BbXNBUMA@mail.gmail.com
2022-12-22 09:37:59 -08:00
Tom Lane e42e312430 Avoid O(N^2) cost when pulling up lots of UNION ALL subqueries.
perform_pullup_replace_vars() knows how to scan the whole parent
query tree when we are replacing Vars during a subquery flattening
operation.  However, for the specific case of flattening a
UNION ALL leaf query, that's mostly wasted work: the only place
where relevant Vars could exist is in the AppendRelInfo that we
just made for this leaf.  Teaching perform_pullup_replace_vars()
to just deal with that and exit is worthwhile because, if we have
N such subqueries to pull up, we were spending O(N^2) work uselessly
mutating the AppendRelInfos for all the other subqueries.

While we're at it, avoid calling substitute_phv_relids if there are no
PlaceHolderVars, and remove an obsolete check of parse->hasSubLinks.

Andrey Lepikhov and Tom Lane

Discussion: https://postgr.es/m/703c09a2-08f3-d2ec-b33d-dbecd62428b8@postgrespro.ru
2022-12-22 11:02:03 -05:00
Tom Lane 5beb7881fb Add some recursion and looping defenses in prepjointree.c.
Andrey Lepikhov demonstrated a case where we spend an unreasonable
amount of time in pull_up_subqueries().  Not only is that recursing
with no explicit check for stack overrun, but the code seems not
interruptible by control-C.  Let's stick a CHECK_FOR_INTERRUPTS
there, along with sprinkling some stack depth checks.

An actual fix for the excessive time consumption seems a bit
risky to back-patch; but this isn't, so let's do so.

Discussion: https://postgr.es/m/703c09a2-08f3-d2ec-b33d-dbecd62428b8@postgrespro.ru
2022-12-22 10:35:02 -05:00
Peter Eisentraut 73bb72f0ca Remove dead code
The second appearance of NamespaceRelationId in this if-else chain is
in error and can be removed.
2022-12-22 08:12:41 +01:00
Thomas Munro 3f28bd7337 Add work-around for VA_ARGS_NARGS() on MSVC.
The previous coding of VA_ARGS_NARGS() always returned 1 on Visual
Studio, because it treats __VA_ARGS__ as a single token unless you jump
through extra hoops.  Newer compilers have an option to fix that.  Add a
comment about that so that we can remember to clean this up in the
future when our minimum MSVC version advances.

Author: Victor Spirin <v.spirin@postgrespro.ru>
Reviewed-by: Thomas Munro <thomas.munro@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/f450fc57-a147-19d0-e50c-33571c52cc13%40postgrespro.ru
2022-12-22 18:32:10 +13:00
Michael Paquier 01be9d498f Fix operator typo in tablecmds.c
A bitwise operator was getting used on two bools in
ATAddCheckConstraint() to track if constraints should be merged or not
with the existing ones of a relation, though obviously this should use
a boolean OR operator.  This led to the same result, but let's be
clean.

Oversight in 074c5cf.

Author: Ranier Vilela
Reviewed-by: Justin Pryzby
Discussion: https://postgr.es/m/CAEudQAp2R2fbbi0OHHhv_n4=Ch0t1VtjObR9YMqtGKHJ+faUFQ@mail.gmail.com
2022-12-22 12:08:45 +09:00
David Rowley 439f61757f Add palloc_aligned() to allow aligned memory allocations
This introduces palloc_aligned() and MemoryContextAllocAligned() which
allow callers to obtain memory which is allocated to the given size and
also aligned to the specified alignment boundary.  The alignment
boundaries may be any power-of-2 value.  Currently, the alignment is
capped at 2^26; however, we don't expect values anywhere near that large.
The primary expected use case is to align allocations to perhaps CPU
cache line size or to maybe I/O page size.  Certain use cases can benefit
from having aligned memory by either having better performance or more
predictable performance.

The alignment is achieved by requesting 'alignto' additional bytes from
the underlying allocator function and then aligning the address that is
returned to the requested alignment.  This obviously does waste some
memory, so alignments should be kept as small as what is required.

It's also important to note that these alignment bytes eat into the
maximum allocation size.  So something like:

palloc_aligned(MaxAllocSize, 64, 0);

will not work as we cannot request MaxAllocSize + 64 bytes.

Additionally, because we're just requesting the requested size plus the
alignment requirements from the given MemoryContext, this is not going to
work if that context is the Slab allocator, since slab can only provide
chunks of the size that was specified when the slab context was created.
Slab will generate an error to indicate that the requested size is not
supported.

The alignment that is requested in palloc_aligned() is stored along with
the allocated memory.  This allows the alignment to remain intact through
repalloc() calls.

Author: Andres Freund, David Rowley
Reviewed-by: Maxim Orlov, Andres Freund, John Naylor
Discussion: https://postgr.es/m/CAApHDvpxLPUMV1mhxs6g7GNwCP6Cs6hfnYQL5ffJQTuFAuxt8A%40mail.gmail.com
2022-12-22 13:32:05 +13:00
Andrew Dunstan 33dd895ef3 Introduce float4in_internal
This is the guts of float4in, callable as a routine to input floats,
which will be useful in an upcoming patch for allowing soft errors in
the seg module's input function.

A similar operation was performed some years ago for float8in in
commit 50861cd683.

Reviewed by Tom Lane

Discussion: https://postgr.es/m/cee4e426-d014-c0b7-aa22-a659f2cd9130@dunslane.net
2022-12-21 16:55:52 -05:00
David Rowley eb706fde83 Fix newly introduced bug in slab.c
d21ded75f changed the way slab.c works but introduced a bug that meant we
could end up with the slab's curBlocklistIndex pointing to the wrong list.
The condition which was checking for this was failing to account for two
things:

1. The curBlocklistIndex could be 0 as we've currently got no non-full
blocks to put chunks on.  In this case, the dlist_is_empty() check cannot
be performed as there can be any number of completely full blocks at that
index.

2. The curBlocklistIndex may be greater than the index we just moved the
block onto.  Since we need to ensure we fill up fuller blocks first, we
must reset curBlocklistIndex when changing any blocklist element that's
less than the curBlocklistIndex too.

Reported-by: Takamichi Osumi
Discussion: https://postgr.es/m/TYCPR01MB8373329C6329768D7E093D68EDEB9@TYCPR01MB8373.jpnprd01.prod.outlook.com
2022-12-22 09:57:49 +13:00
Michael Paquier f450695e88 Make more consistent some translated strings related to compression
This commit changes some of the bbstreamer files and pg_dump to use the
same style as a few other places (like common/compression.c), where the
name of the compression method is not part of the string, but an
argument of it.  This reduces the translation work a bit, since there are
fewer string patterns.

Discussion: https://postgr.es/m/Y5/5tdK+4n3clvtU@paquier.xyz
2022-12-21 10:39:06 +09:00
Michael Paquier 22e3b55805 Switch some system functions to use get_call_result_type()
This shaves some code by replacing the combinations of
CreateTemplateTupleDesc()/TupleDescInitEntry(), which hardcoded a mapping
of the attributes listed in pg_proc.dat, with get_call_result_type() to
build the TupleDesc needed for the rows generated.

get_call_result_type() is more expensive than the former style, but this
removes some duplication with the lists of OUT parameters (pg_proc.dat
and the attributes hardcoded in these code paths).  This is applied to
functions that are not considered as critical (aka that could be called
repeatedly for monitoring purposes).

Author: Bharath Rupireddy
Reviewed-by: Robert Haas, Álvaro Herrera, Tom Lane, Michael Paquier
Discussion: https://postgr.es/m/CALj2ACV23HW5HP5hFjd89FNS-z5X8r2jNXdMXcpN2BgTtKd87w@mail.gmail.com
2022-12-21 10:11:22 +09:00
Andrew Dunstan f03bd5717e Use existing SSL certs in LDAP tests instead of generating them
The SSL test suite has a bunch of pre-existing certificates, so it's
better simply to use what we already have than generate new certificates
each time the LDAP tests are run.

Discussion: https://postgr.es/m/bc305c7a-f390-44f2-2e82-9bcaec6108da@dunslane.net
2022-12-20 10:02:49 -05:00
Andrew Dunstan 8284cf5f74 Add copyright notices to meson files
Discussion: https://postgr.es/m/222b43a5-2fb3-2c1b-9cd0-375d376c8246@dunslane.net
2022-12-20 07:54:39 -05:00
Etsuro Fujita 594f8d3776 Allow batching of inserts during cross-partition updates.
Commit 927f453a9 disallowed the batching added by commit b663a4136 from
being used for the inserts performed as part of cross-partition updates of
partitioned tables, mainly because the previous code in
nodeModifyTable.c couldn't handle pending inserts into foreign-table
partitions that are also UPDATE target partitions.  But we don't have
such a limitation anymore (cf. commit ffbb7e65a), so let's allow for
this by removing from execPartition.c the restriction added by commit
927f453a9 that batching is only allowed if the query command type is
CMD_INSERT.

In postgres_fdw, since commit 86dc90056 changed it to effectively
disable cross-partition updates in the case where a foreign-table
partition chosen to insert rows into is also an UPDATE target partition,
allow batching in the case where a foreign-table partition chosen to
do so is *not* also an UPDATE target partition.  This is enabled by the
"batch_size" option added by commit b663a4136, which is disabled by
default.

This patch also adjusts the test case added by commit 927f453a9 to
confirm that the inserts performed as part of a cross-partition update
of a partitioned table indeed use batching.

Amit Langote, reviewed and/or tested by Georgios Kokolatos, Zhihong Yu,
Bharath Rupireddy, Hou Zhijie, Vignesh C, and me.

Discussion: http://postgr.es/m/CA%2BHiwqH1Lz1yJmPs%3DaD-pzd_HLLynLHvq5iYeT9mB0bBV7oJ6w%40mail.gmail.com
2022-12-20 19:05:00 +09:00
David Rowley 3226f47282 Add enable_presorted_aggregate GUC
1349d279 added query planner support to allow more efficient execution of
aggregate functions which have an ORDER BY or a DISTINCT clause.  Prior to
that commit, the planner would only request that the lower planner produce
a plan with the order required for the GROUP BY clause and it would be
left up to nodeAgg.c to perform the final sort of records within each
group so that the aggregate transition functions were called in the
correct order.  Now that the planner requests the lower planner produce a
plan with the GROUP BY and the ORDER BY / DISTINCT aggregates in mind,
there is the possibility that the planner chooses a plan which could be
less efficient than what would have been produced before 1349d279.

While developing 1349d279, I had in mind that Incremental Sort would help
us in cases where an index exists only on the GROUP BY column(s).
Incremental Sort would just replace the implicit tuplesorts which are
being performed in nodeAgg.c.  However, because the planner has the
flexibility to instead choose a plan which just performs a full sort on
both the GROUP BY and ORDER BY / DISTINCT aggregate columns, there is
potential for the planner to make a bad choice.  The costing for
Incremental Sort is not perfect as it assumes an even distribution of rows
to sort within each sort group.

Here we add an escape hatch in the form of the enable_presorted_aggregate
GUC.  This will allow users to get the pre-PG16 behavior in cases where
they have no other means to convince the query planner to produce a plan
which only sorts on the GROUP BY column(s).
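
For example, a minimal sketch of the escape hatch (the aggregate query
shape in the comment is made up):

    -- Revert to the pre-PG16 behavior for this session
    SET enable_presorted_aggregate = off;
    -- which affects plans for ORDER BY / DISTINCT aggregates such as
    --   SELECT a, array_agg(b ORDER BY b) FROM some_table GROUP BY a;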

Discussion: https://postgr.es/m/CAApHDvr1Sm+g9hbv4REOVuvQKeDWXcKUAhmbK5K+dfun0s9CvA@mail.gmail.com
2022-12-20 22:28:58 +13:00
David Rowley d21ded75fd Improve the performance of the slab memory allocator
Slab has traditionally been fairly slow when compared with the AllocSet or
Generation memory allocators.  Part of this slowness came from having to
write out an entire block when we allocate a new block in order to
populate the free list indexes within the block's memory.  Additional
slowness came from having to move a block onto another dlist each time we
palloc or pfree a chunk from it.

Here we optimize both of those cases and do a little bit extra to improve
the performance of the slab allocator.

Here, instead of writing out the free list indexes when allocating a new
block, we introduce the concept of "unused" chunks.  When a block is first
allocated all chunks are unused.  These chunks only make it onto the
free list when they are pfree'd.  When allocating new chunks on an
existing block, we have the choice of consuming a chunk from the free list
or an unused chunk.  When both exist, we opt to use one from the free
list, as these have been used already and the memory of them is more
likely to be cached by the CPU.

Here we also reduce the number of block lists from there being one for
every possible value of free chunks on a block to just having a small
fixed number of block lists.  We keep the 0th block list for completely
full blocks and anything else stores blocks for some range of free chunks
with fuller blocks appearing on lower block list array elements.  This
reduces how often we must move a block to another list when we allocate or
free chunks, but still allows us to prefer to put new chunks on fuller
blocks and perhaps allow blocks with fewer chunks to be free'd later
once all their remaining chunks have been pfree'd.

Additionally, we now store a list of "emptyblocks", which are blocks that
no longer contain any allocated chunks.  We now keep up to 10 of these
around to avoid having to thrash malloc/free when allocation patterns
continually cause blocks to become free of any allocated chunks only to
allocate more chunks again.  Only once we already have 10 of these do we
free the block.  This does raise the high water mark for the total memory that
a slab context can consume.  It does not seem entirely unreasonable that
we might one day want to make this a property of SlabContext rather than a
compile-time constant.  Let's wait and see if there is any evidence to
support that this is required before doing it.

Author: Andres Freund, David Rowley
Tested-by: Tomas Vondra, John Naylor
Discussion: https://postgr.es/m/20210717194333.mr5io3zup3kxahfm@alap3.anarazel.de
2022-12-20 21:48:51 +13:00
John Naylor 995a9fb14f Move variable increment to the end of the loop
This is less error prone and matches the placement of other code
in the file.

Justin Pryzby

Reviewed by Tom Lane
Discussion: https://www.postgresql.org/message-id/20221123172436.GJ11463@telsasoft.com
2022-12-20 14:13:14 +07:00
Michael Paquier cca1863489 Add pg_dissect_walfile_name()
This function takes a WAL segment name as input and returns a tuple made
of the segment sequence number (dependent on the WAL segment size of the
cluster) and its timeline, as a thin SQL wrapper around the existing
XLogFromFileName().

This function has multiple uses, like being able to compute an LSN from
a file name and an offset, or finding the timeline of a segment without
having to do some maths based on the first eight characters of the
segment.
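
For example, a minimal sketch (the WAL file name is arbitrary, and the
output column names are not spelled out here):

    SELECT * FROM pg_dissect_walfile_name('000000010000000100000020');
    -- one row: the segment sequence number and the timeline of that file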

Bump catalog version.

Author: Bharath Rupireddy
Reviewed-by: Nathan Bossart, Kyotaro Horiguchi, Maxim Orlov, Michael
Paquier
Discussion: https://postgr.es/m/CALj2ACWV=FCddsxcGbVOA=cvPyMr75YCFbSQT6g4KDj=gcJK4g@mail.gmail.com
2022-12-20 13:36:27 +09:00
Michael Paquier b3bb7d12af Remove hardcoded dependency on cryptohash type in the internals of SCRAM
SCRAM_KEY_LEN was a variable used in the internal routines of SCRAM to
size a set of fixed-sized arrays used in the SHA and HMAC computations
during the SASL exchange or when building a SCRAM password.  This had a
hard dependency on SHA-256, reducing the flexibility of SCRAM when it
comes to the addition of more hash methods.  A second issue was that
SHA-256 is assumed as the cryptohash method to use all the time.

This commit renames SCRAM_KEY_LEN to a more generic SCRAM_KEY_MAX_LEN,
which is used as the size of the buffers used by the internal routines
of SCRAM.  This is aimed at tracking centrally the maximum size
necessary for all the hash methods supported by SCRAM.  A global
variable has the advantage of keeping the code in its simplest form,
reducing the need of more alloc/free logic for all the buffers used in
the hash calculations.

A second change is that the key length (SHA digest length) and hash
types are now tracked by the state data in the backend and the frontend,
the common portions being extended to handle these as arguments by the
internal routines of SCRAM.  There are a few RFC proposals floating
around to extend the SCRAM protocol, including some to use stronger
cryptohash algorithms, so this lifts some of the existing restrictions
in the code.

The code in charge of parsing and building SCRAM secrets is extended to
rely on the key length and on the cryptohash type used for the exchange,
assuming currently that only SHA-256 is supported for the moment.  Note
that the mock authentication simply enforces SHA-256.

Author: Michael Paquier
Reviewed-by: Peter Eisentraut, Jonathan Katz
Discussion: https://postgr.es/m/Y5k3Qiweo/1g9CG6@paquier.xyz
2022-12-20 08:53:22 +09:00
Robert Haas eb60eb08a9 Fix comment that was missing a word.
Ted Yu

Discussion: http://postgr.es/m/CALte62wkFB05=RTWf7BL_6MfWs2=DY=ai-K7LWn_+0TJUuPJ2w@mail.gmail.com
2022-12-19 15:59:24 -05:00
Peter Eisentraut af6284a666 Fix typo in comment
Author: Ted Yu <yuzhihong@gmail.com>
2022-12-19 21:08:28 +01:00
Robert Haas 10ea0f924a Expose some information about backend subxact status.
A new function pg_stat_get_backend_subxact() can be used to get
information about the number of subtransactions in the cache of
a particular backend and whether that cache has overflowed. This
can be useful for tracking down performance problems that can
result from overflowed snapshots.
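
For example, a minimal sketch (backend ID 1 is arbitrary; the columns of
the returned record give the subtransaction count and the overflow flag):

    SELECT * FROM pg_stat_get_backend_subxact(1);
    -- see pg_stat_get_backend_idset() for the set of live backend IDs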

Dilip Kumar, reviewed by Zhihong Yu, Nikolay Samokhvalov,
Justin Pryzby, Nathan Bossart, Ashutosh Sharma, Julien
Rouhaud. Additional design comments from Andres Freund,
Tom Lane, Bruce Momjian, and David G. Johnston.

Discussion: http://postgr.es/m/CAFiTN-ut0uwkRJDQJeDPXpVyTWD46m3gt3JDToE02hTfONEN=Q@mail.gmail.com
2022-12-19 14:43:09 -05:00
Tom Lane 7122f9d543 Fix bit-rotted planner test case.
While fooling with my pet outer-join-variables patch, I discovered
that the test case I added in commit 11086f2f2 no longer demonstrates
what it's supposed to.  The idea is to tempt the planner to reverse
the order of the two outer joins, which would leave noplace to
correctly evaluate the WHERE clause that's inserted between them.
Before the addition of the delay_upper_joins mechanism, it would
have taken the bait.

However, subsequent improvements broke the test in two different ways.
First, we now recognize the IS NULL coding pattern as an antijoin, and
we won't re-order antijoins; even if we did, the IS NULL test clauses
get removed so there would be no opportunity for them to misbehave.
Second, the planner now discovers that nested parameterized indexscans
are a lot cheaper than the double hash join it used back in the day,
and that approach doesn't want to re-order the joins anyway.  Thus,
in HEAD the test passes even if one dikes out delay_upper_joins.

To fix, change the IS NULL tests to COALESCE clauses, which produce
the same results but the planner isn't smart enough to convert them
to antijoins.  It'll still go for parameterized indexscans though,
so drop the index enabling that (don't know why I added that in the
first place), and disable nestloop joining just to be sure.

This time around, add an EXPLAIN to make the choice of plan visible.
2022-12-17 18:51:24 -05:00
Tom Lane 0efecb5518 Doc: update pg_list.h header comments to include XidLists.
I realize that the XidList infrastructure is rather incomplete,
but failing to mention it in adjacent comments takes that a bit
too far.
2022-12-17 10:31:25 -05:00
Tom Lane 935277b241 Fix inability to reference CYCLE column from inside its CTE.
Such references failed with "cache lookup failed for type 0"
because we didn't resolve the type of the CYCLE column until after
analyzing the CTE's query.  We can just move that processing
to before the recursive parse_sub_analyze call, though.
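
For example, a self-contained sketch of the kind of reference that used
to fail and should now work (it returns five rows, n = 1 through 5):

    WITH RECURSIVE t(n) AS (
        VALUES (1)
      UNION ALL
        SELECT n + 1 FROM t WHERE NOT is_cycle AND n < 5
    ) CYCLE n SET is_cycle USING path
    SELECT * FROM t;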

While here, invent a couple of local variables to make this
code less egregiously wider-than-80-columns.

Per bug #17723 from Vik Fearing.  Back-patch to v14 where
the CYCLE feature was added.

Discussion: https://postgr.es/m/17723-2c4985ff111e7bba@postgresql.org
2022-12-16 13:07:42 -05:00
Peter Eisentraut b059a2409f pg_upgrade: Make testing different transfer modes easier
The environment variable PG_TEST_PG_UPGRADE_MODE can be set to
override the default transfer mode for the pg_upgrade tests.
(Automatically running the pg_upgrade tests for all supported modes
would be too slow.)

Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Reviewed-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/50a97009-8ff9-ca4d-a0f6-6086a6775a5b%40enterprisedb.com
2022-12-16 18:32:02 +01:00
Peter Eisentraut 746915c686 pg_upgrade: Add --copy option
This option selects the default transfer mode.  Having an explicit
option is handy to make scripts and tests more explicit.  It also
makes it easier to talk about a "copy" mode rather than "the default
mode" or something like that, since until now the default mode didn't
have an externally visible name.

Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Reviewed-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/50a97009-8ff9-ca4d-a0f6-6086a6775a5b%40enterprisedb.com
2022-12-16 18:32:02 +01:00
Tom Lane c4939f1215 Clean up dubious error handling in wellformed_xml().
This ancient bit of code was summarily trapping any ereport longjmp
whatsoever and assuming that it must represent an invalid-XML report.
It's not really appropriate to handle OOM-like situations that way:
maybe the input is valid or maybe not, but we couldn't find out.
And it'd be a seriously bad idea to ignore, say, a query cancel
error that way.  (Perhaps that can't happen because there is no
CHECK_FOR_INTERRUPTS anywhere within xml_parse, but even if that's
true today it's obviously a very fragile assumption.)

But in the wake of the previous commit, we can drop the PG_TRY
here altogether, and use the soft error mechanism to catch only
the kinds of errors that are legitimate to treat as invalid-XML.

(This is our first use of the soft error mechanism for something
not directly related to a datatype input function.  It won't be
the last.)

xml_is_document can be converted in the same way.  That one is
not actively broken, because it was checking specifically for
ERRCODE_INVALID_XML_DOCUMENT rather than trapping everything;
but the code is still shorter and probably faster this way.

Discussion: https://postgr.es/m/3564577.1671142683@sss.pgh.pa.us
2022-12-16 11:10:40 -05:00
Tom Lane 37bef842f5 Convert xml_in to report errors softly.
The key idea here is that xml_parse must distinguish hard errors
from soft errors.  We want to throw a hard error for libxml
initialization failures: those might be out-of-memory, or something
else, but in any case they are not the fault of the input string.
If we get to the point of parsing the input, and something goes
wrong, we can fairly consider that to mean bad input.

One thing that arguably does mean bad input, but I didn't trouble
to handle softly, is encoding conversion failure while converting
the server encoding to UTF8.  This might be something to improve
later, but it seems like a pretty low-probability scenario.

Discussion: https://postgr.es/m/3564577.1671142683@sss.pgh.pa.us
2022-12-16 11:10:40 -05:00
Thomas Munro e52f8b301e Fix typo in reference to __FreeBSD__.
Commit a2a8acd152 introduced a platform-dependent mechanism to prevent
developers from referencing errno in the argument list of
elog()/ereport(), but didn't use the right macro to detect FreeBSD, so
it didn't actually work there.

Reported-by: Japin Li <japinli@hotmail.com>
Discussion: https://postgr.es/m/MEYP282MB16693AAEEF84F47D8F7CA007B6E69%40MEYP282MB1669.AUSP282.PROD.OUTLOOK.COM
2022-12-16 17:36:22 +13:00
David Rowley 4a29eabd1d Remove pessimistic cost penalization from Incremental Sort
When incremental sorts were added in v13 a 1.5x pessimism factor was added
to the cost model.  Seemingly this was done because the cost model only
has an estimate of the total number of input rows and the number of
presorted groups.  It assumes that the input rows will be evenly
distributed throughout the presorted groups.  The 1.5x pessimism factor
was added to slightly reduce the likelihood of incremental sorts being
used in the hope to avoid performance regressions where an incremental
sort plan was picked and turned out slower due to a large skew in the
number of rows in the presorted groups.

An additional quirk with the path generation code meant that we could
consider both a sort and an incremental sort on paths with presorted keys.
This meant that with the pessimism factor, it was possible that we opted
to perform a sort rather than an incremental sort when the given path had
presorted keys.

Here we remove the 1.5x pessimism factor to allow incremental sorts to
have a fairer chance at being chosen against a full sort.

Previously we would generally create a sort path on the cheapest input
path (if that wasn't sorted already) and incremental sort paths on any
path which had presorted keys.  This meant that if the cheapest input path
wasn't completely sorted but happened to have presorted keys, we would
create a full sort path *and* an incremental sort path on that input path.
Here we change this logic so that if there are presorted keys, we only
create an incremental sort path, and create sort paths only when a full
sort is required.

Both the removal of the cost pessimism factor and the changes made to the
path generation make it more likely that incremental sorts will now be
chosen.  That, of course, as with teaching the planner any new tricks,
means an increased likelihood that the planner will perform an incremental
sort when it's not the best method.  Our standard escape hatch for these
cases is an enable_* GUC.  enable_incremental_sort already exists for
this.
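
For example, a minimal sketch of that escape hatch:

    -- Fall back to full sorts if an incremental sort plan misbehaves
    SET enable_incremental_sort = off;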

This came out of a report by Pavel Luzanov where he mentioned that the
master branch was choosing to perform a Seq Scan -> Sort -> Group
Aggregate for his query with an ORDER BY aggregate function.  The v15 plan
for his query performed an Index Scan -> Group Aggregate, of course, the
aggregate performed the final sort internally in nodeAgg.c for the
aggregate's ORDER BY.  The ideal plan would have been to use the index,
which provided partially sorted input then use an incremental sort to
provide the aggregate with the sorted input.  This was not being chosen
due to the pessimism in the incremental sort cost modal, so here we remove
that and rationalize the path generation so that sort and incremental sort
plans don't have to needlessly compete.  We assume that it's senseless
to ever use a full sort on a given input path where an incremental sort
can be performed.

Reported-by: Pavel Luzanov
Reviewed-by: Richard Guo
Discussion: https://postgr.es/m/9f61ddbf-2989-1536-b31e-6459370a6baa%40postgrespro.ru
2022-12-16 15:22:23 +13:00
David Rowley 8b6b043cee Re-adjust drop-index-concurrently-1 isolation test
It seems that drop-index-concurrently-1 has started to forget what it was
originally meant to be testing.  d2d8a229b, which added incremental sorts
changed the expected plan to be an Index Scan plan instead of a Seq Scan
plan.  This occurred as the primary key index of the table in question
provided presorted input and, because that index happened to be the
cheapest input path due to enable_seqscan being disabled, the incremental
sort changes just added a Sort on top of that.  It seems based on the name
of the PREPAREd statement that the intention here is that the query
produces a seqscan plan.

The reason this test has become broken seems to be due to how the test was
originally coded.  The test was trying to force a seqscan plan by
performing some casting to make it so the test_dc index couldn't be used
to perform the required filtering.  Trying to coax the planner into using
a plan which has costed in a disable_cost seems like it's always going to
be flakey as small changes in costs are drowned out by the large
disable_cost combined with add_path's STD_FUZZ_FACTOR.  Here we get rid of
the casts that we're using to try to trick the planner into a seqscan and
instead toggle enable_seqscan as and when required to get the desired
plan.

Additionally, rename a few things in the test and add some additional
wording to the comments to try and make it more clear in the future what
we expect this test to be doing.

Discussion: https://postgr.es/m/CAApHDvrbDhObhLV+=U_K_-t+2Av2av1aL9d+2j_3AO-XndaviA@mail.gmail.com
Backpatch-through: 13, where d2d8a229b changed the expected test output
2022-12-16 11:39:40 +13:00
David Rowley ac99802080 Speed up creation of command completion tags
The building of command completion tags could often be seen showing up in
profiles when running high tps workloads.

The query completion tags were being built with snprintf, which is slow at
the best of times when compared with more manual ways of formatting
strings.  Here we introduce BuildQueryCompletionString() to do this job
for us.  We also now store the completion tag's strlen in the
CommandTagBehavior struct so that we can quickly memcpy this number of
bytes into the completion tag string.  Appending the rows affected is done
via pg_ulltoa_n.  BuildQueryCompletionString returns the length of the
built string.  This saves us having to call strlen to figure out how many
bytes to pass to pq_putmessage().

Author: David Rowley, Andres Freund
Reviewed-by: Andres Freund
Discussion: https://postgr.es/m/CAHoyFK-Xwqc-iY52shj0G+8K9FJpse+FuZ36XBKy78wDVnd=Qg@mail.gmail.com
2022-12-16 10:31:25 +13:00
Tom Lane d35a1af468 Convert range_in and multirange_in to report errors softly.
This is mostly straightforward, except that if the range type
has a canonical function, that might throw an error during range
input.  (Such errors probably only occur for edge cases: in the
in-core canonical functions, it happens only if a bound has the
maximum valid value for the underlying type.)  Hence, this patch
extends the soft-error regime to allow canonical functions to
return errors softly as well.  Extensions implementing range
canonical functions will need modification anyway because of the
API change for range_serialize(); while at it, they might want
to do something similar to what's been done here in the in-core
canonical functions.

Discussion: https://postgr.es/m/3284599.1671075185@sss.pgh.pa.us
2022-12-15 12:18:36 -05:00
Peter Eisentraut 75f49221c2 Static assertions cleanup
Because we added StaticAssertStmt() first before StaticAssertDecl(),
some uses as well as the instructions in c.h are now a bit backwards
from the "native" way static assertions are meant to be used in C.
This updates the guidance and moves some static assertions to better
places.

Specifically, since the addition of StaticAssertDecl(), we can put
static assertions at the file level.  This moves a number of static
assertions out of function bodies, where they might have been stuck
out of necessity, to perhaps better places at the file level or in
header files.

Also, when the static assertion appears in a position where a
declaration is allowed, then using StaticAssertDecl() is more native
than StaticAssertStmt().

Reviewed-by: John Naylor <john.naylor@enterprisedb.com>
Discussion: https://www.postgresql.org/message-id/flat/941a04e7-dd6f-c0e4-8cdf-a33b3338cbda%40enterprisedb.com
2022-12-15 10:10:32 +01:00
Peter Eisentraut 2613dec4ed Move provariadic sanity check to a more appropriate place
35f059e9bd put the provariadic sanity
check into type_sanity.sql, even though it's not about types, and
moreover in the middle of some connected test group, which makes it
all very confusing.  Move it to opr_sanity.sql, where it is in better
company.
2022-12-15 07:54:48 +01:00
Tom Lane 3b9d2deb67 Convert a few more datatype input functions to report errors softly.
Convert the remaining string-category input functions
(bpcharin, varcharin, byteain) to the new style.

Discussion: https://postgr.es/m/3038346.1671060258@sss.pgh.pa.us
2022-12-14 19:42:05 -05:00
Tom Lane 90161dad4d Convert a few more datatype input functions to report errors softly.
Convert cash_in and uuid_in to the new style.

Amul Sul, minor mods by me

Discussion: https://postgr.es/m/CAAJ_b97KeDWUdpTKGOaFYPv0OicjOu6EW+QYWj-Ywrgj_aEy1g@mail.gmail.com
2022-12-14 18:03:11 -05:00
Tom Lane 47f3f97fcd Convert a few more datatype input functions to report errors softly.
Convert assorted internal-ish datatypes, namely aclitemin,
int2vectorin, oidin, oidvectorin, pg_lsn_in, pg_snapshot_in,
and tidin to the new style.

(Some others you might expect to find in this group, such as
cidin and xidin, need no changes because they never throw
errors at all.  That seems a little cheesy ... but it is not in
the charter of this patch series to add new error conditions.)

Amul Sul, minor mods by me

Discussion: https://postgr.es/m/CAAJ_b97KeDWUdpTKGOaFYPv0OicjOu6EW+QYWj-Ywrgj_aEy1g@mail.gmail.com
2022-12-14 17:50:24 -05:00
Tom Lane 332741e739 Convert the geometric input functions to report errors softly.
Convert box_in, circle_in, line_in, lseg_in, path_in, point_in,
and poly_in to the new style.

line_in still throws hard errors for overflows/underflows that can occur
when the input is specified as two points rather than in the canonical
"Ax + By + C = 0" style.  I'm not too concerned about that: it won't be
reached in normal dump/restore cases, and it's fairly debatable that
such conversion should ever have been made part of a type input function
in the first place.  But in any case, I don't want to extend the soft
error conventions into float.h without more discussion than this patch
has had.

Amul Sul, minor mods by me

Discussion: https://postgr.es/m/CAAJ_b97KeDWUdpTKGOaFYPv0OicjOu6EW+QYWj-Ywrgj_aEy1g@mail.gmail.com
2022-12-14 16:10:20 -05:00
Tom Lane 17407a8eaa Convert a few more datatype input functions to report errors softly.
Convert bit_in, varbit_in, inet_in, cidr_in, macaddr_in, and
macaddr8_in to the new style.

Amul Sul, minor mods by me

Discussion: https://postgr.es/m/CAAJ_b97KeDWUdpTKGOaFYPv0OicjOu6EW+QYWj-Ywrgj_aEy1g@mail.gmail.com
2022-12-14 13:22:08 -05:00
Peter Eisentraut b18c2decd7 Rearrange some static assertions for consistency
Put lengthof first.

Reported-by: Peter Smith <smithpb2250@gmail.com>
Discussion: https://www.postgresql.org/message-id/CAHut+PsUDMySVRuRc=h+P5N3+=TGvj4W_mi32XXg9dt4o-BXbA@mail.gmail.com
2022-12-14 16:08:13 +01:00
Peter Eisentraut 6fcda9aba8 Non-decimal integer literals
Add support for hexadecimal, octal, and binary integer literals:

    0x42F
    0o273
    0b100101

per SQL:202x draft.

This adds support in the lexer as well as in the integer type input
functions.
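
For example, each of these should now work both as a literal and as
input to the integer types (decimal equivalents in the comments):

    SELECT 0x42F;            -- 1071
    SELECT 0o273;            -- 187
    SELECT 0b100101;         -- 37
    SELECT int4 '0x42F';     -- also 1071, via the integer input function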

Reviewed-by: John Naylor <john.naylor@enterprisedb.com>
Reviewed-by: Zhihong Yu <zyu@yugabyte.com>
Reviewed-by: David Rowley <dgrowleyml@gmail.com>
Reviewed-by: Dean Rasheed <dean.a.rasheed@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/b239564c-cad0-b23e-c57e-166d883cb97d@enterprisedb.com
2022-12-14 06:17:07 +01:00
Jeff Davis 60684dd834 Add grantable MAINTAIN privilege and pg_maintain role.
Allows VACUUM, ANALYZE, REINDEX, REFRESH MATERIALIZED VIEW, CLUSTER,
and LOCK TABLE.

Effectively reverts 4441fc704d. Instead of creating separate
privileges for VACUUM, ANALYZE, and other maintenance commands, group
them together under a single MAINTAIN privilege.
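
For example, a minimal sketch (role and table names are made up):

    CREATE ROLE maint_helper;
    CREATE TABLE maint_demo (id int);
    -- per-table grant of the new privilege
    GRANT MAINTAIN ON maint_demo TO maint_helper;
    -- or grant it across the board via the predefined role
    GRANT pg_maintain TO maint_helper;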

Author: Nathan Bossart
Discussion: https://postgr.es/m/20221212210136.GA449764@nathanxps13
Discussion: https://postgr.es/m/45224.1670476523@sss.pgh.pa.us
2022-12-13 17:33:28 -08:00
Michael Paquier c6f6646bb0 Remove SHA256_HMAC_B from scram-common.h
This referred to the size of the buffers for k_ipad and k_opad in HMAC
computations.  This is unused since e6bdfd9, where SCRAM has switched to
the cryptohash routines for its HMAC calculations rather than its own
maths.

Reviewed-by: Jacob Champion
Discussion: https://postgr.es/m/Y5gGMjXhyp0oK0mH@paquier.xyz
2022-12-14 09:51:19 +09:00
Tom Lane 20432f8731 Rethink handling of [Prevent|Is]InTransactionBlock in pipeline mode.
Commits f92944137 et al. made IsInTransactionBlock() set the
XACT_FLAGS_NEEDIMMEDIATECOMMIT flag before returning "false",
on the grounds that that kept its API promises equivalent to those of
PreventInTransactionBlock().  This turns out to be a bad idea though,
because it allows an ANALYZE in a pipelined series of commands to
cause an immediate commit, which is unexpected.

Furthermore, if we return "false" then we have another issue,
which is that ANALYZE will decide it's allowed to do internal
commit-and-start-transaction sequences, thus possibly unexpectedly
committing the effects of previous commands in the pipeline.

To fix the latter situation, invent another transaction state flag
XACT_FLAGS_PIPELINING, which explicitly records the fact that we
have executed some extended-protocol command and not yet seen a
commit for it.  Then, require that flag to not be set before allowing
IsInTransactionBlock() to return "false".

Having done that, we can remove its setting of NEEDIMMEDIATECOMMIT
without fear of causing problems.  This means that the API guarantees
of IsInTransactionBlock now diverge from PreventInTransactionBlock,
which is mildly annoying, but it seems OK given the very limited usage
of IsInTransactionBlock.  (In any case, a caller preferring the old
behavior could always set NEEDIMMEDIATECOMMIT for itself.)

For consistency also require XACT_FLAGS_PIPELINING to not be set
in PreventInTransactionBlock.  This too is meant to prevent commands
such as CREATE DATABASE from silently committing previous commands
in a pipeline.

Per report from Peter Eisentraut.  As before, back-patch to all
supported branches (which sadly no longer includes v10).

Discussion: https://postgr.es/m/65a899dd-aebc-f667-1d0a-abb89ff3abf8@enterprisedb.com
2022-12-13 14:23:58 -05:00
Peter Eisentraut 369f09e420 Refactor ExecGrant_*() functions
Instead of half a dozen of mostly-duplicate ExecGrant_Foo() functions,
write one common function ExecGrant_generic() that can handle most of
them.  We already have all the information we need, such as which
system catalog corresponds to which catalog table and which column is
the ACL column.

Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Antonin Houska <ah@cybertec.at>
Discussion: https://www.postgresql.org/message-id/flat/22c7e802-4e7d-8d87-8b71-cba95e6f4bcf%40enterprisedb.com
2022-12-13 07:52:04 +01:00
Tom Lane b0feda79fd Fix jsonb subscripting to cope with toasted subscript values.
jsonb_get_element() was incautious enough to use VARDATA() and
VARSIZE() directly on an arbitrary text Datum.  That of course
fails if the Datum is short-header, compressed, or out-of-line.
The typical result would be failing to match any element of a
jsonb object, though matching the wrong one seems possible as well.

setPathObject() was slightly brighter, in that it used VARDATA_ANY
and VARSIZE_ANY_EXHDR, but that only kept it out of trouble for
short-header Datums.  push_path() had the same issue.  This could
result in faulty subscripted insertions, though keys long enough to
cause a problem are likely rare in the wild.

Having seen these, I looked around for unsafe usages in the rest
of the adt/json* files.  There are a couple of places where it's not
immediately obvious that the Datum can't be compressed or out-of-line,
so I added pg_detoast_datum_packed() to cope if it is.  Also, remove
some other usages of VARDATA/VARSIZE on Datums we just extracted from
a text array.  Those aren't actively broken, but they will become so
if we ever start allowing short-header array elements, which does not
seem like a terribly unreasonable thing to do.  In any case they are
not great coding examples, and they could also do with comments
pointing out that we're assuming we don't need pg_detoast_datum_packed.

Per report from exe-dealer@yandex.ru.  Patch by me, but thanks to
David Johnston for initial investigation.  Back-patch to v14 where
jsonb subscripting was introduced.

Discussion: https://postgr.es/m/205321670615953@mail.yandex.ru
2022-12-12 16:17:54 -05:00
Jeff Davis 2af33369e7 Remove extra space from dumped ALTER DEFAULT PRIVILEGES.
Author: Nathan Bossart
Discussion: https://postgr.es/m/20221206232744.GA3560301@nathanxps13
2022-12-12 09:49:24 -08:00
Robert Haas 45f5c81ad2 Fix failure to advance content pointer in sendFileWithContent.
If sendFileWithContent were used to send a file larger than the
bbsink buffer size, this would result in corruption. The only
files that are sent via sendFileWithContent are the backup label
file, the tablespace map file, and .done files for WAL segments
included in the backup. Of these, it seems that only the
tablespace_map file can become large enough to cause a problem,
and then only if you have a lot of tablespaces. If you do have
that situation, you might end up with a corrupted
tablespace_map file, which would be bad.

My commit bef47ff85d introduced
this problem.

Report and patch by Antonin Houska.

Discussion: http://postgr.es/m/15764.1670528645@antos
2022-12-12 10:26:48 -05:00
Peter Eisentraut df8b8968d4 Order getopt arguments
Order the letters in the arguments of getopt() and getopt_long(), as
well as in the subsequent switch statements.  In most cases, I used
alphabetical with lower case first.  In a few cases, existing
different orders (e.g., upper case first) was kept to reduce the diff
size.

Discussion: https://www.postgresql.org/message-id/flat/3efd0fe8-351b-f836-9122-886002602357%40enterprisedb.com
2022-12-12 15:20:00 +01:00
Alvaro Herrera 840ff5f451 Get rid of recursion-marker values in enum AlterTableType
During ALTER TABLE execution, when prep-time handling of subcommands of
certain types determines that execution-time handling requires recursion,
it signals this by changing the subcommand type to a special value.
This can be done in a simpler way by using a separate flag introduced by
commit ec0925c22a, so do that.

Catversion bumped.  It's not clear to me that ALTER TABLE subcommands
are stored anywhere in catalogs (CREATE FUNCTION rejects it in BEGIN
ATOMIC function bodies), but we do have both write and read support for
them, so be safe.

Discussion: https://postgr.es/m/20220929090033.zxuaezcdwh2fgfjb@alvherre.pgsql
2022-12-12 11:13:26 +01:00
Michael Paquier 9d0cf57492 Add support for GRANT SET in psql tab completion
3d14e17 has added support for this query but psql was not able to
complete it.  Spotted while working on a different patch in the same
area.
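
For reference, the kind of statement that can now be tab-completed (role
and parameter names are arbitrary):

    CREATE ROLE some_role;
    GRANT SET ON PARAMETER work_mem TO some_role;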

Reviewed-by: Robert Haas
Discussion: https://postgr.es/m/Y3hw7yvG0VwpC1jq@paquier.xyz
2022-12-12 16:47:24 +09:00
Michael Paquier eae7fe4859 Remove direct call to GetNewObjectId() for pg_auth_members.oid
This routine should not be called directly as mentioned at its top, so
replace it by GetNewOidWithIndex().  Issue introduced by 6566133 when
pg_auth_members.oid got added, so no backpatch is needed.

Author: Maciek Sakrejda
Discussion: https://postgr.es/m/CAOtHd0Ckbih7Ur7XeVyLAJ26VZOfTNcq9qV403bNF4uTGtAN+Q@mail.gmail.com
2022-12-12 09:01:39 +09:00
Tom Lane b8c0ffbd2c Convert domain_in to report errors softly.
This is straightforward as far as it goes.  However, it does not
attempt to trap errors occurring during the execution of domain
CHECK constraints.  Since those are general user-defined
expressions, the only way to do that would involve starting up a
subtransaction for each check.  Of course the entire point of
the soft-errors feature is to not need subtransactions, so that
would be self-defeating.  For now, we'll rely on the assumption
that domain checks are written to avoid throwing errors.

Discussion: https://postgr.es/m/1181028.1670635727@sss.pgh.pa.us
2022-12-11 12:56:54 -05:00
Tom Lane c60c9badba Convert json_in and jsonb_in to report errors softly.
This requires a bit of further infrastructure-extension to allow
trapping errors reported by numeric_in and pg_unicode_to_server,
but otherwise it's pretty straightforward.

In the case of jsonb_in, we are only capturing errors reported
during the initial "parse" phase.  The value-construction phase
(JsonbValueToJsonb) can also throw errors if assorted implementation
limits are exceeded.  We should improve that, but it seems like a
separable project.

Andrew Dunstan and Tom Lane

Discussion: https://postgr.es/m/3bac9841-fe07-713d-fa42-606c225567d6@dunslane.net
2022-12-11 11:28:15 -05:00
Tom Lane 50428a301d Change JsonSemAction to allow non-throw error reporting.
Formerly, semantic action functions for the JSON parser returned void,
so that there was no way for them to affect the parser's behavior.
That means in particular that they can't force an error exit except by
longjmp'ing.  That won't do in the context of our project to make input
functions return errors softly.  Hence, change them to return the same
JsonParseErrorType enum value as the parser itself uses.  If an action
function returns anything besides JSON_SUCCESS, the parse is abandoned
and that error code is returned.

Action functions can thus easily return the same error conditions that
the parser already knows about.  As an escape hatch for expansion, also
invent a code JSON_SEM_ACTION_FAILED that the core parser does not know
the exact meaning of.  When returning this code, an action function
must use some out-of-band mechanism for reporting the error details.

This commit simply makes the API change and causes all the existing
action functions to return JSON_SUCCESS, so that there is no actual
change in behavior here.  This is long enough and boring enough that
it seemed best to commit it separately from the changes that make
real use of the new mechanism.

In passing, remove a duplicate assignment of
transform_string_values_scalar.

Discussion: https://postgr.es/m/1436686.1670701118@sss.pgh.pa.us
2022-12-11 10:39:05 -05:00
Tom Lane d02ef65bce Standardize error reports in unimplemented I/O functions.
We chose a specific wording of the not-implemented errors for
pseudotype I/O functions and other cases where there's little
value in implementing input and/or output.  gtsvectorin never
got that memo though, nor did most of contrib.  Make these all
fall in line, mostly because I'm a neatnik but also to remove
unnecessary translatable strings.

gbtreekey_in needs a bit of extra love since it supports
multiple SQL types.  Sadly, gbtreekey_out doesn't have the
ability to do that, but I think it's unreachable anyway.

Noted while surveying datatype input functions to see what we
have left to fix.
2022-12-10 18:26:43 -05:00
Tom Lane e730718072 Use the macro, not handwritten code, to construct anymultirange_in().
Apparently anymultirange_in was written before we converted all
these pseudotype input functions to use a common macro, and it didn't
get fixed before committing.  Sloppy merging probably explains its
unintuitive ordering, too, so rearrange.

Noted while surveying datatype input functions to see what we
have left to fix.  I'm inclined to leave the pseudotypes as
throwing hard errors, because it's difficult to see a reason why
anyone would need something else.  But in any case, if we want
to change that, we shouldn't have to change multiple copies of
the code.
2022-12-10 17:22:16 -05:00
David Rowley 94985c2102 Add subquery pullup handling for WindowClause runCondition
9d9c02ccd added code to allow WindowAgg to take some shortcuts when a
monotonic WindowFunc reached some value that it could never come back
from due to the function's monotonic nature.  That commit added a
runCondition field to WindowClause to store the condition which, when it
becomes false we can start taking shortcuts in nodeWindowAgg.c.

Here we fix an issue where subquery pullups didn't properly update the
Vars in the runCondition to reference the new query level.

Here we also add a missing call to preprocess_expression() for the
WindowClause's runCondition.  The WindowFuncs in the targetlist will have
had this process done, so we must also do it for the WindowFuncs in the
runCondition so that they can be correctly found in the targetlist
during setrefs.c processing.

Bug: #17709
Reported-by: Alexey Makhmutov
Author: Richard Guo, David Rowley
Discussion: https://postgr.es/m/17709-4f557160e3e8ee9a@postgresql.org
Backpatch-through: 15, where 9d9c02ccd was introduced
2022-12-10 19:27:20 +13:00
Michael Paquier 66dcb09246 Fix macro definitions in pgstatfuncs.c
Buildfarm member wrasse has been complaining about empty declarations
as an effect of 8018ffb and 83a1a1b due to extra semicolons.

While on it, also remove the last backslash of the macro definitions,
which caused more lines to be eaten than necessary, per comment from
Tom Lane.

Reported-by: Tom Lane, and buildfarm member wrasse
Author: Nathan Bossart, Michael Paquier
Discussion: https://postgr.es/m/1188769.1670640236@sss.pgh.pa.us
2022-12-10 13:28:02 +09:00
Tom Lane 4dd687502d Restructure soft-error handling in formatting.c.
Replace the error trapping scheme introduced in 5bc450629 with our
shiny new errsave/ereturn mechanism.  This doesn't have any real
functional impact (although I think that the new coding is able
to report a few more errors softly than v15 did).  And I doubt
there's any measurable performance difference either.  But this
gets rid of an ad-hoc, one-of-a-kind design in favor of a mechanism
that will be widely used going forward, so it should be a net win
for code readability.

Discussion: https://postgr.es/m/3bbbb0df-7382-bf87-9737-340ba096e034@postgrespro.ru
2022-12-09 20:15:56 -05:00
Tom Lane c60488b474 Convert datetime input functions to use "soft" error reporting.
This patch converts the input functions for date, time, timetz,
timestamp, timestamptz, and interval to the new soft-error style.
There's some related stuff in formatting.c that remains to be
cleaned up, but that seems like a separable project.
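
For example, with the test functions added earlier in this series, bad
datetime input can now be probed without aborting the transaction:

    SELECT pg_input_is_valid('2022-02-28', 'date');   -- true
    SELECT pg_input_is_valid('2022-02-30', 'date');   -- false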

Discussion: https://postgr.es/m/3bbbb0df-7382-bf87-9737-340ba096e034@postgrespro.ru
2022-12-09 16:07:49 -05:00
Tom Lane 2661469d86 Allow DateTimeParseError to handle bad-timezone error messages.
Pay down some ancient technical debt (dating to commit 022fd9966):
fix a couple of places in datetime parsing that were throwing
ereport's immediately instead of returning a DTERR code that could be
interpreted by DateTimeParseError.  The reason for that was that there
was no mechanism for passing any auxiliary data (such as a zone name)
to DateTimeParseError, and these errors seemed to really need it.
Up to now it didn't matter that much just where the error got thrown,
but now we'd like to have a hard policy that datetime parse errors
get thrown from just the one place.

Hence, invent a "DateTimeErrorExtra" struct that can be used to
carry any extra values needed for specific DTERR codes.  Perhaps
in the future somebody will be motivated to use this to improve
the specificity of other DateTimeParseError messages, but for now
just deal with the timezone-error cases.

This is on the way to making the datetime input functions report
parse errors softly; but it's really an independent change, so
commit separately.

Discussion: https://postgr.es/m/3bbbb0df-7382-bf87-9737-340ba096e034@postgrespro.ru
2022-12-09 13:30:47 -05:00
Tom Lane bad5116957 Const-ify a couple of datetime parsing subroutines.
More could be done in this line, but I just grabbed some low-hanging
fruit.  Principal objective was to remove the need for several ugly
unconstify() usages in formatting.c.
2022-12-09 10:43:45 -05:00
Tom Lane ccff2d20ed Convert a few datatype input functions to use "soft" error reporting.
This patch converts the input functions for bool, int2, int4, int8,
float4, float8, numeric, and contrib/cube to the new soft-error style.
array_in and record_in are also converted.  There's lots more to do,
but this is enough to provide proof-of-concept that the soft-error
API is usable, as well as reference examples for how to convert
input functions.

This patch is mostly by me, but it owes very substantial debt to
earlier work by Nikita Glukhov, Andrew Dunstan, and Amul Sul.
Thanks to Andres Freund for review.

Discussion: https://postgr.es/m/3bbbb0df-7382-bf87-9737-340ba096e034@postgrespro.ru
2022-12-09 10:14:53 -05:00
Tom Lane 1939d26282 Add test scaffolding for soft error reporting from input functions.
pg_input_is_valid() returns boolean, while pg_input_error_message()
returns the primary error message if the input is bad, or NULL
if the input is OK.  The main reason for having two functions is
so that we can test both the details-wanted and the no-details-wanted
code paths.

Although these are primarily designed with testing in mind,
it could well be that they'll be useful to end users as well.
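
For example, a minimal sketch (the exact error text shown is only
illustrative):

    SELECT pg_input_is_valid('42', 'integer');              -- true
    SELECT pg_input_is_valid('forty-two', 'integer');       -- false
    SELECT pg_input_error_message('forty-two', 'integer');
    -- e.g. invalid input syntax for type integer: "forty-two"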

This patch is mostly by me, but it owes very substantial debt to
earlier work by Nikita Glukhov, Andrew Dunstan, and Amul Sul.
Thanks to Andres Freund for review.

Discussion: https://postgr.es/m/3bbbb0df-7382-bf87-9737-340ba096e034@postgrespro.ru
2022-12-09 10:08:44 -05:00
Tom Lane d9f7f5d32f Create infrastructure for "soft" error reporting.
Postgres' standard mechanism for reporting errors (ereport() or elog())
is used for all sorts of error conditions.  This means that throwing
an exception via ereport(ERROR) requires an expensive transaction or
subtransaction abort and cleanup, since the exception catcher dare not
make many assumptions about what has gone wrong.  There are situations
where we would rather have a lighter-weight mechanism for dealing
with errors that are known to be safe to recover from without a full
transaction cleanup.  This commit creates infrastructure to let us
adapt existing error-reporting code for that purpose.  See the
included documentation changes for details.  Follow-on commits will
provide test code and usage examples.

The near-term plan is to convert most if not all datatype input
functions to report invalid input "softly".  This will enable
implementing some SQL/JSON features cleanly and without the cost
of subtransactions, and it will also allow creating COPY options
to deal with bad input without cancelling the whole COPY.

This patch is mostly by me, but it owes very substantial debt to
earlier work by Nikita Glukhov, Andrew Dunstan, and Amul Sul.
Thanks also to Andres Freund for review.

Discussion: https://postgr.es/m/3bbbb0df-7382-bf87-9737-340ba096e034@postgrespro.ru
2022-12-09 09:58:38 -05:00
Alexander Korotkov beecbe8e50 Fix invalid role names introduced in 096dd80f3c
096dd80f3c added new regression tests dealing with roles.  By oversight, the role
names didn't start with the regress_ prefix.  This commit fixes that.
2022-12-09 13:53:32 +03:00
Alexander Korotkov 096dd80f3c Add USER SET parameter values for pg_db_role_setting
The USER SET flag specifies that the variable should be set on behalf of an
ordinary role.  That lets ordinary roles set placeholder variables whose
permission requirements are not yet known.  Such a value wouldn't be used if
the variable finally appears to require superuser privileges.

The new flags are stored in the pg_db_role_setting.setuser array.  Catversion
is bumped.

This commit is inspired by the previous work by Steve Chavez.

Discussion: https://postgr.es/m/CAPpHfdsLd6E--epnGqXENqLP6dLwuNZrPMcNYb3wJ87WR7UBOQ%40mail.gmail.com
Author: Alexander Korotkov, Steve Chavez
Reviewed-by: Pavel Borisov, Steve Chavez
2022-12-09 13:12:20 +03:00
Dean Rasheed 5defdef8aa Update MERGE docs to mention that ONLY is supported.
Commit 7103ebb7aa added support for MERGE, which included support for
inheritance hierarchies, but didn't document the fact that ONLY could
be specified before the source and/or target tables to exclude tables
inheriting from the tables specified.

Update merge.sgml to mention this, and while at it, add some
regression tests to cover it.

Dean Rasheed, reviewed by Nathan Bossart.

Backpatch to 15, where MERGE was added.

Discussion: https://postgr.es/m/CAEZATCU0XM-bJCvpJuVRU3UYNRqEBS6g4-zH%3Dj9Ye0caX8F6uQ%40mail.gmail.com
2022-12-09 10:00:01 +00:00
Peter Eisentraut 07c29ca7fe Remove unnecessary casts
Some code carefully cast all data buffer arguments for BufFileWrite()
and BufFileRead() to void *, even though the arguments are already
void * (and AFAICT were never anything else).  Remove this unnecessary
clutter.

Discussion: https://www.postgresql.org/message-id/flat/11dda853-bb5b-59ba-a746-e168b1ce4bdb%40enterprisedb.com
2022-12-08 08:58:15 +01:00
Peter Eisentraut 2d4f1ba6cf Update types in File API
Make the argument types of the File API match stdio better:

- Change the data buffer to void *, from char *.
- Change FileWrite() data buffer to const on top of that.
- Change amounts to size_t, from int.

In passing, change the FilePrefetch() amount argument from int to
off_t, to match the underlying posix_fadvise().

Discussion: https://www.postgresql.org/message-id/flat/11dda853-bb5b-59ba-a746-e168b1ce4bdb%40enterprisedb.com
2022-12-08 08:58:15 +01:00
Etsuro Fujita 4b3e379932 Remove new structure member from ResultRelInfo.
In commit ffbb7e65a, I added a ModifyTableState member to ResultRelInfo
to save the owning ModifyTableState for use by nodeModifyTable.c when
performing batch inserts, but as pointed out by Tom Lane, that changed
the array stride of es_result_relations, and that would break any
previously-compiled extension code that accesses that array.  Fix by
removing that member from ResultRelInfo and instead adding a List member
at the end of EState to save such ModifyTableStates.

Per report from Tom Lane.  Back-patch to v14, like the previous commit;
I chose to apply the patch to HEAD as well, to make back-patching easy.

Discussion: http://postgr.es/m/4065383.1669395453%40sss.pgh.pa.us
2022-12-08 16:15:00 +09:00
Amit Kapila bf07ab492c Avoid unnecessary streaming of transactions during logical replication.
After restart, we don't perform streaming of an in-progress transaction if
it was previously decoded and confirmed by the client. To achieve that we
were comparing the END location of the WAL record being decoded with the
WAL location we have already decoded and confirmed by the client. While
decoding the commit record, to decide whether to process and send the
complete transaction, we compare its START location with the WAL location
we have already decoded and confirmed by the client. Now, if we need to
queue some change in the transaction while decoding the commit record
(e.g. snapshot), it is possible that we decide to stream the transaction
but later commit processing decides to skip it. In such a case, we would
needlessly send the changes and later when we decide to skip it, we will
send stream abort.

We also sometimes decide to stream the changes when we actually just need
to process them locally like a change for invalidations. This will lead us
to send empty streams. To avoid this, while queuing each change for
decoding, we remember whether the transaction has any change that actually
needs to be sent downstream and use that information later to decide
whether to stream the transaction or not.

Note, we can't avoid all cases where we have to send empty streams like
the case where the plugin later decides that the change is not
publishable. However, we will no longer need to send stream_abort when we
skip sending a particular transaction.

Author: Dilip Kumar
Reviewed-by: Hou Zhijie, Ashutosh Bapat, Shi yu, Amit Kapila
Discussion: https://postgr.es/m/CAFiTN-tHK=7LzfrPs8fbT2ksrOJGQbzywcgXst2bM9-rJJAAUg@mail.gmail.com
2022-12-08 06:05:09 +05:30
Andres Freund 3f0e786ccb meson: Add 'running' test setup, as a replacement for installcheck
To run all tests that support running against an existing server:
$ meson test --setup running

To run just the main pg_regress tests against an existing server:
$ meson test --setup running regress-running/regress

To ensure the 'running' setup continues to work, test it as part of the
freebsd CI task.

Discussion: https://postgr.es/m/CAH2-Wz=XDQcmLoo7RR_i6FKQdDmcyb9q5gStnfuuQXrOGhB2sQ@mail.gmail.com
2022-12-07 12:13:35 -08:00
Tom Lane 8305629afe Minor code refactoring in elog.c (no functional change).
Combine some duplicated code stanzas by creating small functions.
Most of these duplications arose at a time when I wouldn't have
trusted C compilers to auto-inline small functions intelligently,
but they're probably poor practice now.  Similarly split out some
bits that aren't actually duplicative as the code stands, but would
become so after an upcoming patch to add another error-handling
code path.

Take the opportunity to add some lengthier comments about what
we're doing here, too.  Re-order one function that seemed not
very well-placed.

Patch by me, per suggestions from Andres Freund.

Discussion: https://postgr.es/m/3bbbb0df-7382-bf87-9737-340ba096e034@postgrespro.ru
2022-12-07 14:39:25 -05:00
Peter Eisentraut 5e9b122059 Fix FK comment think-o
from commit d6f96ed94e

Author: Paul Jungwirth <pj@illuminatedcomputing.com>
Reviewed-by: Ian Lawrence Barwick <barwick@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/6a7c7338-1aa2-4689-d171-0b0b294fdd84%40illuminatedcomputing.com
2022-12-07 17:06:50 +01:00
Alvaro Herrera 29861e228a
Update outdated comment in ApplyRetrieveRule
After a61b1f7482.

Author: Amit Langote <amitlangote09@gmail.com>
Discussion: https://postgr.es/m/CA+HiwqGZm7hb2VAy8HGM22-fTDaQzqE6T=5GbAk=GkT9H0hJEg@mail.gmail.com
2022-12-07 12:35:59 +01:00
Andres Freund 5bdd0cfb91 meson: Add basic PGXS compatibility
Generate a Makefile.global that's complete enough for PGXS to work for some
extensions. It is likely that this compatibility layer will not suffice for
every extension or on every platform - we can expand it over time.

This allows extensions to use a single buildsystem across all the supported
postgres versions. Once all supported PG versions support meson, we can remove
the compatibility layer.

Reviewed-by: Peter Eisentraut <peter.eisentraut@enterprisedb.com>
Discussion: https://postgr.es/m/20221005200710.luvw5evhwf6clig6@awork3.anarazel.de
2022-12-06 18:56:46 -08:00
Andres Freund 9db49fc5bf autoconf: Move export_dynamic determination to configure
Previously export_dynamic was set in src/makefiles/Makefile.$port. For solaris
this required exporting with_gnu_ld. The determination of with_gnu_ld would be
nontrivial to copy for meson PGXS compatibility.  It's also nice to delete
libtool.m4.

This uses -Wl,--export-dynamic on all platforms, previously all platforms but
FreeBSD used -Wl,-E. The likelihood of a name conflict seems lower with the
longer spelling.

Discussion: https://postgr.es/m/20221005200710.luvw5evhwf6clig6@awork3.anarazel.de
2022-12-06 18:55:28 -08:00
Michael Paquier 8018ffbf58 Generate pg_stat_get*() functions for databases using macros
The same code pattern is repeated 21 times for int64 counters (0 for
missing entry) and 5 times for doubles (0 for missing entry) on database
entries.  This code is switched to use macros for the basic code
instead, shaving a few hundred lines of originally-duplicated code
patterns.  The function names remain the same, but some fields of
PgStat_StatDBEntry have to be renamed to cope with the new style.

This is in the same spirit as 83a1a1b.

Author: Michael Paquier
Reviewed-by: Nathan Bossart, Bertrand Drouvot
Discussion: https://postgr.es/m/Y46stlxQ2LQE20Na@paquier.xyz
2022-12-07 09:11:48 +09:00
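For illustration only, the shape of such a macro might look like the sketch
below; the macro name, the use of CppConcat(), and the exact field handling are
assumptions based on the description above rather than a copy of the committed
code:

/*
 * Define one SQL-callable accessor returning an int64 counter from the
 * database's stats entry, or 0 when no entry exists.
 */
#define PG_STAT_GET_DBENTRY_INT64(stat) \
Datum \
CppConcat(pg_stat_get_db_, stat)(PG_FUNCTION_ARGS) \
{ \
    Oid         dbid = PG_GETARG_OID(0); \
    PgStat_StatDBEntry *dbentry = pgstat_fetch_stat_dbentry(dbid); \
    int64       result = (dbentry == NULL) ? 0 : (int64) dbentry->stat; \
    PG_RETURN_INT64(result); \
}

With that in place, something like PG_STAT_GET_DBENTRY_INT64(xact_commit) would
expand to the body of pg_stat_get_db_xact_commit() (field name assumed here).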
Andres Freund 79f7c482f6 meson: Basic cygwin support
There likely are further issues, but as evidenced by the CI task proposed by
Justin in the referenced thread, this suffices to build and run basic tests in
cygwin (some fixes for the test infrastructure are needed, but that's
independent of the meson aspect).

Author: Justin Pryzby <pryzby@telsasoft.com>
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/20221021034040.GT16921@telsasoft.com
2022-12-06 11:25:54 -08:00
Alvaro Herrera a61b1f7482
Rework query relation permission checking
Currently, information about the permissions to be checked on relations
mentioned in a query is stored in their range table entries.  So the
executor must scan the entire range table looking for relations that
need to have permissions checked.  This can make the permission checking
part of the executor initialization needlessly expensive when many
inheritance children are present in the range table.  While the
permissions need not be checked on the individual child relations, the
executor still must visit every range table entry to filter them out.

This commit moves the permission checking information out of the range
table entries into a new plan node called RTEPermissionInfo.  Every
top-level (inheritance "root") RTE_RELATION entry in the range table
gets one and a list of those is maintained alongside the range table.
This new list is initialized by the parser when initializing the range
table.  The rewriter can add more entries to it as rules/views are
expanded.  Finally, the planner combines the lists of the individual
subqueries into one flat list that is passed to the executor for
checking.

To make it quick to find the RTEPermissionInfo entry belonging to a
given relation, RangeTblEntry gets a new Index field 'perminfoindex'
that stores the corresponding RTEPermissionInfo's index in the query's
list of the latter.

ExecutorCheckPerms_hook has gained another List * argument; the
signature is now:
typedef bool (*ExecutorCheckPerms_hook_type) (List *rangeTable,
					      List *rtePermInfos,
					      bool ereport_on_violation);
The first argument is no longer used by any in-core uses of the hook,
but we leave it in place because there may be other implementations that
do.  Implementations should likely scan the rtePermInfos list to
determine which operations to allow or deny.

Author: Amit Langote <amitlangote09@gmail.com>
Discussion: https://postgr.es/m/CA+HiwqGjJDmUhDSfv-U2qhKJjt9ST7Xh9JXC_irsAQ1TAUsJYg@mail.gmail.com
2022-12-06 16:09:24 +01:00
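A hedged sketch of a hook implementation built on the signature above; the
RTEPermissionInfo field names (relid, requiredPerms) are assumptions based on
this description, and my_policy_allows() is a hypothetical callback:

static bool
my_check_perms(List *rangeTable, List *rtePermInfos,
               bool ereport_on_violation)
{
    ListCell   *lc;

    foreach(lc, rtePermInfos)
    {
        RTEPermissionInfo *perminfo = lfirst_node(RTEPermissionInfo, lc);

        /* Let a custom policy veto the operations requested on this rel. */
        if (!my_policy_allows(perminfo->relid, perminfo->requiredPerms))
        {
            if (ereport_on_violation)
                aclcheck_error(ACLCHECK_NO_PRIV, OBJECT_TABLE,
                               get_rel_name(perminfo->relid));
            return false;
        }
    }
    return true;                /* no objection from this hook */
}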
David Rowley a858327221 Fix 32-bit build dangling pointer issue in WindowAgg
9d9c02ccd added window "run conditions", which allows the evaluation of
monotonic window functions to be skipped when the run condition is no
longer true.  Prior to this commit, once the run condition was no longer
true and we stopped evaluating the window functions, we simply left
the ecxt_aggvalues[] and ecxt_aggnulls[] arrays alone to store whatever
value was stored there the last time the window function was evaluated.
Leaving a stale value in there isn't really a problem on 64-bit builds as
all of the window functions which we recognize as monotonic all return
int8, which is passed by value on 64-bit builds.  However, on 32-bit
builds, this was a problem as the value stored in the ecxt_aggvalues[]
element would be a by-ref value and it would be pointing to some memory
which would get reset once the tuple context is destroyed.  Since the
WindowAgg node will output these values in the resulting tupleslot, this
could be problematic for the top-level WindowAgg node which must look at
these values to filter out the rows that don't meet its filter condition.

Here we fix this by just zeroing the ecxt_aggvalues[] and setting the
ecxt_aggnulls[] array to true when the run condition first becomes false.
This results in the WindowAgg's output having NULLs for the WindowFunc's
columns rather than a stale value or a pointer to possibly freed
memory.  These tuples with the NULLs can only make it as far as the
top-level WindowAgg node before they're filtered out.  To ensure that
these tuples *are* always filtered out, we now insist that OpExprs making
up the run condition are strict OpExprs.  Currently, all the window
functions which the planner recognizes as monotonic return INT8 and the
operator which is used for the run condition must be a member of a btree
opclass.  In reality, these restrictions exclude nothing that's built-in
to Postgres and are unlikely to exclude anyone's custom operators due to
the requirement that the operator is part of a btree opclass.  It would be
unusual if those were not strict.

Reported-by: Sergey Shinderuk, using valgrind
Reviewed-by: Richard Guo, Sergey Shinderuk
Discussion: https://postgr.es/m/29184c50-429a-ebd7-f1fb-0589c6723a35@postgrespro.ru
Backpatch-through: 15, where 9d9c02ccd was added
2022-12-07 00:09:36 +13:00
Michael Paquier 83a1a1b566 Generate pg_stat_get*() functions for tables using macros
The same code pattern is repeated 17 times for int64 counters (0 for
missing entry) and 5 times for timestamps (NULL for missing entry) on
table entries.  This code is switched to use a macro for the basic code
instead, shaving a few hundred lines of originally-duplicated code.  The
function names remain the same, but some fields of PgStat_StatTabEntry
have to be renamed to cope with the new style.

Author: Bertrand Drouvot
Reviewed-by: Nathan Bossart
Discussion: https://postgr.es/m/20221204173207.GA2669116@nathanxps13
2022-12-06 10:46:35 +09:00
Alexander Korotkov 941aa6a626 Check the snapshot argument of index_beginscan and family
Passing a NULL snapshot (InvalidSnapshot) is going to work but only as long
as the index can't find any matching rows.  This can be confusing for
extension authors, so add an explicit check for this argument.  The check
is implemented with Assert() in order to avoid overhead in release builds.

Reported-by: Sven Klemm
Discussion: https://postgr.es/m/CAJ7c6TPxitD4vbKyP-mpmC1XwyHdPPqvjLzm%2BVpB88h8LGgneQ%40mail.gmail.com
Author: Aleksander Alekseev
Reviewed-by: Pavel Borisov
2022-12-06 03:29:18 +03:00
Michael Paquier a7885c9bb2 Provide test coverage in pg_dump for default behaviors with compression
By default, the contents generated by the custom and directory dump
formats are compressed.  However, with the existing test facility, the
restore program will succeed whether or not the dumped output was
compressed, without any check that compression was actually applied.

This commit implements a portable way to check the contents of the
custom and directory dump formats:
- glob_patterns, that can be defined for each test as an array of
glob()-compilable strings, tracking the contents that should or should
not be compressed.  While this is useful to make sure that the table
data is compressed, this also checks that blobs.toc and toc.dat are
never compressed.
- command_like, to execute a command on a dump and check its generated
output.  This is used here in correlation with pg_restore -l to check if
the dumps have been compressed or not, depending on if the build
supports gzip, or not.

This hole in the tests has come up when working on 5e73a60, where
compression has to be applied by default, if available, for both dump
formats.

The idea of glob_patterns comes from me, and Georgios has come up with
the design for command_like.

Author: Georgios Kokolatos, Michael Paquier
Discussion: https://postgr.es/m/DQn4czCWR1rcbGPLL7p3LfEr5-kGmlySm-H05VgroINdikvhtS5r9EdI6b8D8sjnbKdJ09k-cxs2AqijBeHAWk9Q8gvEAxPRHuLRhwONcGc=@pm.me
2022-12-06 09:20:13 +09:00
Peter Eisentraut 1bd47d0dca initdb: Refactor PG_CMD_PUTS loops
Keeping the SQL commands that initdb runs in string arrays before
feeding them to PG_CMD_PUTS() seems unnecessarily verbose and
inflexible.  In some cases, the array only has one member.  In other
cases, one might want to use PG_CMD_PRINTF() instead, to parametrize a
command, but that would require breaking up the loop or using
workarounds like replace_token().  Unwind all that; it's much simpler
that way.

Reviewed-by: John Naylor <john.naylor@enterprisedb.com>
Reviewed-by: Andrew Dunstan <andrew@dunslane.net>
Discussion: https://www.postgresql.org/message-id/flat/2c50823b-f453-bb97-e38b-34751c51dcdf%40enterprisedb.com
2022-12-05 23:33:31 +01:00
Tom Lane d69d01ba9d Fix Memoize to work with partitionwise joining.
A couple of places weren't up to speed for this.  By sheer good
luck, we didn't fail but just selected a non-memoized join plan,
at least in the test case we have.  Nonetheless, it's a bug,
and I'm not quite sure that it couldn't have worse consequences
in other examples.  So back-patch to v14 where Memoize came in.

Richard Guo

Discussion: https://postgr.es/m/CAMbWs48GkNom272sfp0-WeD6_0HSR19BJ4H1c9ZKSfbVnJsvRg@mail.gmail.com
2022-12-05 12:36:40 -05:00
Peter Eisentraut 35ce24c333 pg_dump: Remove "blob" terminology
For historical reasons, pg_dump refers to large objects as "BLOBs".
This term is not used anywhere else in PostgreSQL, and it also means
something different in the SQL standard and other SQL systems.

This patch renames internal functions, code comments, documentation,
etc. to use the "large object" or "LO" terminology instead.  There is
no functionality change, so the archive format still uses the name
"BLOB" for the archive entry.  Additional long command-line options
are added with the new naming.

Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Discussion: https://www.postgresql.org/message-id/flat/868a381f-4650-9460-1726-1ffd39a270b4%40enterprisedb.com
2022-12-05 08:52:55 +01:00
Michael Paquier 71cb84ec69 Add LSN location in some error messages related to WAL pages
The error messages reported during any failures while reading or
validating the header of a WAL page currently include only the offset of
the page but not the LSN referring to it, requiring an extra step to
compute the LSN when looking at the surroundings with pg_waldump or
similar.  Adding this information costs a bit in translation, but also
eases debugging.

Author: Bharath Rupireddy
Reviewed-by:  Álvaro Herrera, Kyotaro Horiguchi, Maxim Orlov, Michael
Paquier
Discussion: https://postgr.es/m/CALj2ACWV=FCddsxcGbVOA=cvPyMr75YCFbSQT6g4KDj=gcJK4g@mail.gmail.com
2022-12-05 09:28:29 +09:00
David Rowley 8692f6644e Fix thinko introduced in 6b423ec67
As pointed out by Dean Rasheed, we really should be using tmp >
-(PG_INTNN_MIN / 10) rather than tmp > (PG_INTNN_MAX / 10) for checking
for overflows in the accumulation in the pg_strtointNN functions.  This
does happen to be the same number when dividing by 10, but there is a
pending patch which adds other bases and this is not the same number if we
were to divide by 2 rather than 10, for example.  If the base 2 parsing
was to follow this example then we could accidentally think a string
containing the value of PG_INT32_MIN was an overflow in pg_strtoint32.
Clearly that shouldn't overflow.

This does not fix any actual live bugs, only some bad examples of overflow
checks for future bases.

Reported-by: Dean Rasheed
Discussion: https://postgr.es/m/CAEZATCVEtwfhdm-K-etZYFB0=qsR0nT6qXta_W+GQx4RYph1dg@mail.gmail.com
2022-12-05 11:55:05 +13:00
Tom Lane 92c4dafe1e Re-pgindent a few files.
Just because I'm a neatnik, and I'm currently working on
code in this area.  It annoys me to not be able to pgindent
my patches without working around unrelated changes.
2022-12-04 14:25:53 -05:00
Tom Lane e76913802c Fix broken MemoizePath support in reparameterize_path().
It neglected to recurse to the subpath, meaning you'd get back
a path identical to the input.  This could produce wrong query
results if the omission meant that the subpath fails to enforce
some join clause it should be enforcing.  We don't have a test
case for this at the moment, but the code is obviously broken
and the fix is equally obvious.  Back-patch to v14 where
Memoize was introduced.

Richard Guo

Discussion: https://postgr.es/m/CAMbWs4_R=ORpz=Lkn2q3ebPC5EuWyfZF+tmfCPVLBVK5W39mHA@mail.gmail.com
2022-12-04 13:48:12 -05:00
Tom Lane 6eb2f0ed4c Add missing MaterialPath support in reparameterize_path[_by_child].
These two functions failed to cover MaterialPath.  That's not a
fatal problem, but we can generate better plans in some cases
if we support it.

Tom Lane and Richard Guo

Discussion: https://postgr.es/m/1854233.1669949723@sss.pgh.pa.us
2022-12-04 13:35:42 -05:00
Tom Lane fe12f2f8fa Fix generate_partitionwise_join_paths() to tolerate failure.
We might fail to generate a partitionwise join, because
reparameterize_path_by_child() does not support all path types.
This should not be a hard failure condition: we should just fall back
to a non-partitioned join.  However, generate_partitionwise_join_paths
did not consider this possibility and would emit the (misleading)
error "could not devise a query plan for the given query" if we'd
failed to make any paths for a child join.  Fix it to give up on
partitionwise joining if so.  (The accepted technique for giving up
appears to be to set rel->nparts = 0, which I find pretty bizarre,
but there you have it.)

I have not added a test case because there'd be little point:
any omissions of this sort that we identify would soon get fixed
by extending reparameterize_path_by_child(), so the test would stop
proving anything.  However, right now there is a known test case based
on failure to cover MaterialPath, and with that I've found that this
is broken in all supported versions.  Hence, patch all the way back.

Original report and patch by me; thanks to Richard Guo for
identifying a test case that works against committed versions.

Discussion: https://postgr.es/m/1854233.1669949723@sss.pgh.pa.us
2022-12-04 13:17:18 -05:00
David Rowley 6b423ec677 Improve performance of pg_strtointNN functions
Experiments have shown that modern versions of both gcc and clang are
unable to fully optimize the multiplication by 10 that we're doing in the
pg_strtointNN functions.  Both compilers seem to be making use of "imul",
which is not the most efficient way to multiply by 10.  This seems to be
due to the overflow checking that we're doing.  Without the overflow
checks, both those compilers switch to a more efficient method of
multiplying by 10.  In absence of overflow concern, integer multiplication
by 10 can be done by bit-shifting left 3 places to multiply by 8 and then
adding the original value twice.

To allow compilers this flexibility, here we adjust the code so that we
accumulate the number as an unsigned version of the type and remove the
use of pg_mul_sNN_overflow() and pg_sub_sNN_overflow().  The overflow
checking can be done simply by checking if the accumulated value has gone
beyond a 10th of the maximum *signed* value for the given type.  If it has
then the accumulation of the next digit will cause an overflow.  After
this is done, we do a final overflow check before converting the unsigned
version of the number back to its signed counterpart.

Testing has shown about an 8% speedup of a COPY into a table containing 2
INT columns.

Author: David Rowley, Dean Rasheed
Discussion: https://postgr.es/m/CAApHDvrL6_+wKgPqRHr7gH_6xy3hXM6a3QCsZ5ForurjDFfenA@mail.gmail.com
Discussion: https://postgr.es/m/CAApHDvrdYByjfj-=WbmVNFgmVZg88-dE7heukw8p55aJ+W=qxQ@mail.gmail.com
2022-12-04 16:18:18 +13:00
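A standalone sketch of the unsigned-accumulation scheme described above,
simplified to base 10 and using standard <stdint.h> names instead of
PostgreSQL's PG_INT32_* macros (no whitespace or error-position handling):

#include <stdbool.h>
#include <stdint.h>

static bool
parse_int32(const char *s, int32_t *result)
{
    uint32_t    tmp = 0;
    bool        neg = false;

    if (*s == '-')
    {
        neg = true;
        s++;
    }
    else if (*s == '+')
        s++;

    if (*s < '0' || *s > '9')
        return false;           /* need at least one digit */

    for (; *s >= '0' && *s <= '9'; s++)
    {
        /* A further digit could push us past the largest accepted magnitude,
         * -INT32_MIN; this is the tmp > -(PG_INT32_MIN / 10) check. */
        if (tmp > (uint32_t) -(INT32_MIN / 10))
            return false;
        tmp = tmp * 10 + (uint32_t) (*s - '0');
    }

    if (*s != '\0')
        return false;           /* trailing junk */

    /* Final overflow check before converting back to the signed type. */
    if (neg)
    {
        if (tmp > (uint32_t) INT32_MAX + 1)     /* magnitude of INT32_MIN */
            return false;
        *result = (int32_t) -(int64_t) tmp;     /* widen before negating */
    }
    else
    {
        if (tmp > (uint32_t) INT32_MAX)
            return false;
        *result = (int32_t) tmp;
    }
    return true;
}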
Tom Lane 29452de734 Doc: flesh out fmgr/README's description of context-node usage.
I wrote this to provide a home for a planned discussion of error
return conventions for non-error-throwing functions.  But it seems
useful as documentation of existing code no matter what becomes of
that proposal, so commit separately.
2022-12-03 10:50:39 -05:00
Dean Rasheed 2605643a3a Fix DEFAULT handling for multi-row INSERT rules.
When updating a relation with a rule whose action performed an INSERT
from a multi-row VALUES list, the rewriter might skip processing the
VALUES list, and therefore fail to replace any DEFAULTs in it. This
would lead to an "unrecognized node type" error.

The reason was that RewriteQuery() assumed that a query doing an
INSERT from a multi-row VALUES list would necessarily only have one
item in its fromlist, pointing to the VALUES RTE to read from. That
assumption is correct for the original query, but not for product
queries produced for rule actions. In such cases, there may be
multiple items in the fromlist, possibly including multiple VALUES
RTEs.

What is required instead is for RewriteQuery() to skip any RTEs from
the product query's originating query, which might include one or more
already-processed VALUES RTEs. What's left should then include at most
one VALUES RTE (from the rule action) to be processed.

Patch by me. Thanks to Tom Lane for reviewing.

Back-patch to all supported branches.

Discussion: https://postgr.es/m/CAEZATCV39OOW7LAR_Xq4i%2BLc1Byux%3DeK3Q%3DHD_pF1o9LBt%3DphA%40mail.gmail.com
2022-12-03 12:11:33 +00:00
David Rowley f73bd5fd08 Add missing const qualifier
This is present in the declaration for ReadDataFromArchive, so we'd better
have it in the definition too in order to keep compilers from complaining
about the mismatch of function signatures.
2022-12-03 20:33:22 +13:00
Andres Freund cb2e7ddfe5 Prevent pgstats from getting confused when relkind of a relation changes
When the relkind of a relcache entry changes, because a table is converted into
a view, pgstats can get confused in 15+, leading to crashes or assertion
failures.

For HEAD, Tom fixed this in b23cd185fd, by removing support for converting a
table to a view, removing the source of the inconsistency. This commit just
adds an assertion that a relcache entry's relkind does not change, just in
case we end up with another case of that in the future. As there are no cases
of changing relkind anymore, we can't add a test that this is handled correctly.

For 15, fix the problem by not maintaining the association with the old pgstat
entry when the relkind changes during a relcache invalidation processing. In
that case the pgstat entry needs to be unlinked first, to avoid
PgStat_TableStatus->relation getting out of sync. Also add a test reproducing
the issues.

No known problem exists in 11-14, so just add the test there.

Reported-by: vignesh C <vignesh21@gmail.com>
Author: Andres Freund <andres@anarazel.de>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/CALDaNm2yXz+zOtv7y5zBd5WKT8O0Ld3YxikuU3dcyCvxF7gypA@mail.gmail.com
Discussion: https://postgr.es/m/CALDaNm3oZA-8Wbps2Jd1g5_Gjrr-x3YWrJPek-mF5Asrrvz2Dg@mail.gmail.com
Backpatch: 15-
2022-12-02 18:10:30 -08:00
Jeff Davis 7ac0f8d384 Fix broken hash function hashbpcharextended().
Ignore trailing spaces for non-deterministic collations when
hashing.

The previous behavior could lead to tuples falling into the wrong
partitions when hash partitioning is combined with the BPCHAR type and
a non-deterministic collation. Fortunately, it did not affect hash
indexes, because hash indexes do not use extended hash functions.

Decline to backpatch, per discussion.

Discussion: https://postgr.es/m/eb83d0ac7b299eb08f9b900dd08a5a0c5d90e517.camel@j-davis.com
Reviewed-by: Richard Guo, Tom Lane
2022-12-02 14:06:31 -08:00
Tom Lane 4c689a69ee Remove gen_node_support.pl's special treatment of EquivalenceClasses.
It seems better to deal with this by explicit annotations on the
fields in question, instead of magic knowledge embedded in the
script.  While that creates a risk-of-omission from failing to
annotate fields, the preceding commit should catch any such
oversights.

Discussion: https://postgr.es/m/263413.1669513145@sss.pgh.pa.us
2022-12-02 15:20:30 -05:00
Tom Lane b6bd5def3a Add some error cross-checks to gen_node_support.pl.
Check that if we generate a call to copy, compare, write, or read
a specific node type, that node type does have the appropriate
support function.  (This doesn't protect against trying to invoke
nonexistent code when considering generic field types such as
"Node *", but it seems like a useful check anyway.)

Check that array_size() refers to a field appearing earlier in
the struct.  Aside from catching obvious errors like a misspelled
field name, this protects against a more subtle mistake: if the
size field appears later in the struct than the array field, then
compare and read functions would misbehave.  There is actually
exactly that situation in PlannerInfo, but it's okay since we
do not need compare or read functionality for that (today anyway).

Discussion: https://postgr.es/m/263413.1669513145@sss.pgh.pa.us
2022-12-02 15:09:51 -05:00
Tom Lane cabfb8241d Fix psql's \sf and \ef for new-style SQL functions.
Some options of these commands need to be able to identify the start
of the function body within the output of pg_get_functiondef().
It used to be that that always began with "AS", but since the
introduction of new-style SQL functions, it might also start with
"BEGIN" or "RETURN".  Fix that on the psql side, and add some
regression tests.

Noted by me awhile ago, but I didn't do anything about it.
Thanks to David Johnston for a nag.

Discussion: https://postgr.es/m/AM9PR01MB8268D5CDABDF044EE9F42173FE8C9@AM9PR01MB8268.eurprd01.prod.exchangelabs.com
2022-12-02 14:24:44 -05:00
Tom Lane b23cd185fd Remove logic for converting a table to a view.
Up to now we have allowed manual creation of an ON SELECT rule on
a table to convert it into a view.  That was never anything but a
horrid, error-prone hack though.  pg_dump used to rely on that
behavior to deal with cases involving circular dependencies,
where a dependency loop could be broken by separating the creation
of a view from installation of its ON SELECT rule.  However, we
changed pg_dump to use CREATE OR REPLACE VIEW for that in commit
d8c05aff5 (which was later back-patched as far as 9.4), so there's
not a good argument anymore for continuing to support the behavior.

The proximate reason for axing it now is that we found that the
new statistics code has failure modes associated with the relkind
change caused by this behavior.  We'll patch around that in v15,
but going forward it seems like a better idea to get rid of the
need to support relkind changes.

Discussion: https://postgr.es/m/CALDaNm2yXz+zOtv7y5zBd5WKT8O0Ld3YxikuU3dcyCvxF7gypA@mail.gmail.com
2022-12-02 12:14:32 -05:00
Daniel Gustafsson c76da690ba Report incompatible roles in pg_upgrade checking
When checking for roles with a pg_ prefix in <= 9.5 servers, report
all found roles in a text file, as is done for other checks, instead
of just reporting that they exist.  Role names are printed with their
OIDs to match other reports.

Author: Daniel Gustafsson <daniel@yesql.se>
Reviewed-by: Bruce Momjian <bruce@momjian.us>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/4BB2735C-C347-4CF7-AFB1-8573BE930066@yesql.se
2022-12-02 13:10:21 +01:00
Alvaro Herrera fb958b5da8
Generalize ri_RootToPartitionMap to use for non-partition children
ri_RootToPartitionMap is currently only initialized for tuple routing
target partitions, though a future commit will need the ability to use
it even for the non-partition child tables, so make adjustments to
decouple it from the partitioning code.

Also, make it lazily initialized via ExecGetRootToChildMap(), making
that function its preferred access path.  Existing third-party code
accessing it directly should no longer do so; consequently, it's been
renamed to ri_RootToChildMap, which also makes it consistent with
ri_ChildToRootMap.

ExecGetRootToChildMap() houses the logic of setting the map appropriately
depending on whether a given child relation is a partition or not.

To support this, also add a separate entry point for TupleConversionMap
creation that receives an AttrMap.  No new code here, just split an
existing function in two.

Author: Amit Langote <amitlangote09@gmail.com>
Discussion: https://postgr.es/m/CA+HiwqEYUhDXSK5BTvG_xk=eaAEJCD4GS3C6uH7ybBvv+Z_Tmg@mail.gmail.com
2022-12-02 10:35:55 +01:00
Amit Kapila 40b1491357 Fix incorrect output from pgoutput when using column lists.
For Updates and Deletes, we were not honoring the column list for old
tuple values while sending tuple data via pgoutput. This results in
pgoutput emitting more columns than expected.

This is not a problem for built-in logical replication as we simply ignore
additional columns based on the relation information sent previously which
didn't have those columns. However, some other users of pgoutput plugin
may expect the columns as per the column list. Also, sending extra columns
unnecessarily consumes network bandwidth defeating the purpose of the
column list feature.

Reported-by: Gunnar Morling
Author: Hou Zhijie
Reviewed-by: Amit Kapila
Backpatch-through: 15
Discussion: https://postgr.es/m/CADGJaX9kiRZ-OH0EpWF5Fkyh1ZZYofoNRCrhapBfdk02tj5EKg@mail.gmail.com
2022-12-02 10:52:58 +05:30
Andres Freund 069de07eae autoconf: Don't AC_SUBST() LD in configure
The only use of $(LD) in Makefiles is for AIX, to generate the export file for
the backend. We only support the system linker on AIX and we already hardcode
the path to a number of other binaries. Removing LD substitution will simplify
the upcoming meson PGXS compatibility.

While at it, add a comment why -r is used.

A subsequent commit will remove the determination of LD from configure as
well.

Discussion: https://postgr.es/m/20221005200710.luvw5evhwf6clig6@awork3.anarazel.de
2022-12-01 19:03:26 -08:00
Andres Freund e0f0e08e17 autoconf: Unify CFLAGS_SSE42 and CFLAGS_ARMV8_CRC32C
Until now we emitted the cflags to build the CRC objects into architecture
specific variables. That doesn't make a whole lot of sense to me - we're never
going to target x86 and arm at the same time, so they don't need to be
separate variables.

It might be better to instead continue to have CFLAGS_SSE42 /
CFLAGS_ARMV8_CRC32C be computed by PGAC_ARMV8_CRC32C_INTRINSICS /
PGAC_SSE42_CRC32_INTRINSICS and then set CFLAGS_CRC based on those. But it
seems unlikely that we'd need other sets of CRC specific flags for those two
architectures at the same time.

This simplifies the upcoming meson PGXS compatibility.

Reviewed-by: Peter Eisentraut <peter.eisentraut@enterprisedb.com>
Discussion: https://postgr.es/m/20221005200710.luvw5evhwf6clig6@awork3.anarazel.de
2022-12-01 18:46:55 -08:00
Michael Paquier 5e73a60488 Switch pg_dump to use compression specifications
Compression specifications are currently used by pg_basebackup and
pg_receivewal, and are able to let the user control in an extended way
the method and level of compression used.  As an effect of this commit,
pg_dump's -Z/--compress is now able to use more than just an integer, as
of the grammar "method[:detail]".

The method can be either "none" or "gzip", and can optionally take a
detail string.  If the detail string is only an integer, it defines the
compression level.  A comma-separated list of keywords can also be used
method allows for more options, the only keyword supported now is
"level".

The change is backward-compatible, hence specifying only an integer
leads to no compression for a level of 0 and gzip compression when the
level is greater than 0.

Most of the code changes are straight-forward, as pg_dump was relying on
an integer tracking the compression level to check for gzip or no
compression.  These are changed to use a compression specification and
the algorithm stored in it.

As of this change, note that the dump format is not bumped because there
is no need yet to track the compression algorithm in the TOC entries.
Hence, we still rely on the compression level to make the difference
when reading them.  This will be mandatory once a new compression method
is added, though.

In order to keep the code simpler when parsing the compression
specification, the code is changed so that pg_dump now fails hard when
using gzip on -Z/--compress without gzip support compiled in, rather than
enforcing no compression without the user knowing about it except
through a warning.  Like before this commit, archive and custom formats
are compressed by default when the code is compiled with gzip, and left
uncompressed without gzip.

Author: Georgios Kokolatos
Reviewed-by: Michael Paquier
Discussion: https://postgr.es/m/O4mutIrCES8ZhlXJiMvzsivT7ztAMja2lkdL1LJx6O5f22I2W8PBIeLKz7mDLwxHoibcnRAYJXm1pH4tyUNC4a8eDzLn22a6Pb1S74Niexg=@pm.me
2022-12-02 10:45:02 +09:00
Jeff Davis edf12e7bbd Fix memory leak for hashing with nondeterministic collations.
Backpatch through 12, where nondeterministic collations were
introduced (5e1963fb76).

Backpatch-through: 12
2022-12-01 11:49:15 -08:00
Tom Lane 1dd6700f44 Fix under-parenthesized display of AT TIME ZONE constructs.
In commit 40c24bfef, I forgot to use get_rule_expr_paren() for the
arguments of AT TIME ZONE, resulting in possibly not printing parens
for expressions that need it.  But get_rule_expr_paren() wouldn't have
gotten it right anyway, because isSimpleNode() hadn't been taught that
COERCE_SQL_SYNTAX parent nodes don't guarantee sufficient parentheses.
Improve all that.  Also use this methodology for F_IS_NORMALIZED, so
that we don't print useless parens for that.

In passing, remove a comment that was obsoleted later.

Per report from Duncan Sands.  Back-patch to v14 where this code
came in.  (Before that, we didn't try to print AT TIME ZONE that way,
so there was no bug just ugliness.)

Discussion: https://postgr.es/m/f41566aa-a057-6628-4b7c-b48770ecb84a@deepbluecap.com
2022-12-01 11:38:14 -05:00
Alvaro Herrera ec38694894
Move PartitionPruneInfo out of plan nodes into PlannedStmt
The planner will now add a given PartitionPruneInfo to
PlannedStmt.partPruneInfos instead of directly to the
Append/MergeAppend plan node.  What gets set instead in the
latter is an index field which points to the list element
of PlannedStmt.partPruneInfos containing the PartitionPruneInfo
belonging to the plan node.

A later commit will make AcquireExecutorLocks() do the initial
partition pruning to determine a minimal set of partitions to be
locked when validating a plan tree and it will need to consult the
PartitionPruneInfos referenced therein to do so.  It would be better
for the PartitionPruneInfos to be accessible directly rather than requiring
a walk of the plan tree to find them, which is easier when it can be
done by simply iterating over PlannedStmt.partPruneInfos.

Author: Amit Langote <amitlangote09@gmail.com>
Discussion: https://postgr.es/m/CA+HiwqFGkMSge6TgC9KQzde0ohpAycLQuV7ooitEEpbKB0O_mg@mail.gmail.com
2022-12-01 12:56:21 +01:00
Michael Paquier 43351557d0 Make materialized views participate in predicate locking
Matviews have been excluded from predicate locking since their
introduction in 3bf3ab8.  At that point, there was no concurrent flavor
of REFRESH yet, hence there was no point in having materialized views
track read/write conflicts with concurrent transactions that themselves
use the serializable isolation level, because a matview could only be
refreshed under an access exclusive lock.  CONCURRENTLY, on the contrary,
allows reads and writes during a refresh as it holds a share update
exclusive lock.

Some isolation tests are added to show the effect of the change, with a
combination of one table and a matview based on it, using a mix of
REFRESH CONCURRENTLY and read/write queries.

This could arguably be considered a bug, but as it is a subtle
behavior change potentially impacting applications, no backpatch is
done.

Author: Yugo Nagata
Reviewed-by: Richard Guo, Dilip Kumar, Michael Paquier
Discussion: https://postgr.es/m/20220726164434.42d4e33911b4b4fcf751c4e7@sraoss.co.jp
2022-12-01 15:41:13 +09:00
Tom Lane d5515ca7cf Reject missing database name in pg_regress and cohorts.
Writing "pg_regress --dbname= ..." led to a crash, because
we weren't expecting there to be no database name supplied.
It doesn't seem like a great idea to run regression tests
in whatever is the user's default database; so rather than
supporting this case let's explicitly reject it.

Per report from Xing Guo.  Back-patch to all supported
branches.

Discussion: https://postgr.es/m/CACpMh+A8cRvtvtOWVAZsCM1DU81GK4DL26R83y6ugZ1osV=ifA@mail.gmail.com
2022-11-30 13:01:41 -05:00
Alvaro Herrera 8f2e74bf87
Bump catalog version for previous commit 2022-11-30 12:09:13 +01:00
Alvaro Herrera 599b33b949
Stop accessing checkAsUser via RTE in some cases
A future commit will move the checkAsUser field from RangeTblEntry
to a new node that, unlike RTEs, will only be created for tables
mentioned in the query but not for the inheritance child relations
added to the query by the planner.  So, checkAsUser value for a
given child relation will have to be obtained by referring to that
for its ancestor mentioned in the query.

In preparation, it seems better to expand the use of RelOptInfo.userid
during planning in place of rte->checkAsUser so that there will be
fewer places to adjust for the above change.

Given that the child-to-ancestor mapping is not available during the
execution of a given "child" ForeignScan node, add a checkAsUser
field to ForeignScan to carry the child relation's RelOptInfo.userid.

Author: Amit Langote <amitlangote09@gmail.com>
Discussion: https://postgr.es/m/CA+HiwqGFCs2uq7VRKi7g+FFKbP6Ea_2_HkgZb2HPhUfaAKT3ng@mail.gmail.com
2022-11-30 12:07:03 +01:00
Michael Paquier d2a4490401 Add regression tests for psql's \o and \g on files
This adds coverage for a few scenarios not checked yet in psql, with
multiple queries combined across files defined by \o, \g or both at the
same time.  The results are saved in the output files defined, then
reloaded back to check what has been written to them.

Author: Daniel Verite
Discussion: https://postgr.es/m/25c2bb5b-9012-40f8-8088-774cb764046d@manitou-mail.org
2022-11-30 14:46:43 +09:00
Michael Paquier d18655cc03 Refactor code parsing compression option values (-Z/--compress)
This commit moves the code in charge of deparsing the method and detail
strings fed later to parse_compress_specification() to a common routine,
where the backward-compatible case of only an integer being found (N
= 0 => "none", N > 1 => gzip at level N) is handled.

Note that this has a side-effect for pg_basebackup, as we now attempt to
detect "server-" and "client-" before checking for the integer-only
pre-14 grammar, where values like server-N and client-N (without the
follow-up detail string) are now valid rather than failing because of an
unsupported method name.  Past grammars are still handled the same way,
but these flavors are now authorized, and would now switch to consider N
= 0 as no compression and N >= 1 as gzip with the compression level used
as N, with the caller still controlling whether the compression should
be done server-side, client-side, or left unspecified.  The documentation
of pg_basebackup is updated to reflect that.

This benefits other code paths that would like to rely on the same logic
as pg_basebackup and pg_receivewal with option values used for
compression specifications, one area discussed lately being pg_dump.

Author: Georgios Kokolatos, Michael Paquier
Discussion: https://postgr.es/m/O4mutIrCES8ZhlXJiMvzsivT7ztAMja2lkdL1LJx6O5f22I2W8PBIeLKz7mDLwxHoibcnRAYJXm1pH4tyUNC4a8eDzLn22a6Pb1S74Niexg=@pm.me
2022-11-30 09:34:32 +09:00
Michael Paquier d74a366aa2 Fix comment in fe-auth-scram.c
The frontend-side routine in charge of building a SCRAM verifier
mentioned that the restrictions applying to SASLprep on the password
with the encoding are described at the top of fe-auth-scram.c, but this
information is in auth-scram.c.

This is wrong since 8f8b9be, so backpatch all the way down as this is an
important documentation bit.

Spotted while reviewing a different patch.

Backpatch-through: 11
2022-11-30 08:37:59 +09:00
Tom Lane 8242752f9c Improve heuristics for compressing the KnownAssignedXids array.
Previously, we'd compress only when the active range of array entries
reached Max(4 * PROCARRAY_MAXPROCS, 2 * pArray->numKnownAssignedXids).
If max_connections is large, the first term could result in not
compressing for a long time, resulting in much wastage of cycles in
hot-standby backends scanning the array to take snapshots.  Get rid
of that term, and just bound it to 2 * pArray->numKnownAssignedXids.

That however creates the opposite risk, that we might spend too much
effort compressing.  Hence, consider compressing only once every 128
commit records.  (This frequency was chosen by benchmarking.  While
we only tried one benchmark scenario, the results seem stable over
a fairly wide range of frequencies.)

Also, force compression when processing RecoveryInfo WAL records
(which should be infrequent); the old code could perform compression
then, but would do so only after the same array-range check as for
the transaction-commit path.

Also, opportunistically run compression if the startup process is about
to wait for WAL, though not oftener than once a second.  This should
prevent cases where we waste lots of time by leaving the array
not-compressed for long intervals due to low WAL traffic.

Lastly, add a simple check to keep us from uselessly compressing
when the array storage is already compact.

Back-patch, as the performance problem is worse in pre-v14 branches
than in HEAD.

Simon Riggs and Michail Nikolaev, with help from Tom Lane and
Andres Freund.

Discussion: https://postgr.es/m/CALdSSPgahNUD_=pB_j=1zSnDBaiOtqVfzo8Ejt5J_k7qZiU1Tw@mail.gmail.com
2022-11-29 15:43:17 -05:00
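The resulting policy can be summarized by a small standalone sketch; the names,
types and the exact plumbing are illustrative only, the real logic lives in
procarray.c's KnownAssignedXids machinery:

#include <stdbool.h>
#include <time.h>

typedef enum
{
    COMPRESS_AT_COMMIT,         /* processing a transaction-commit record */
    COMPRESS_AT_RECOVERY_INFO,  /* processing a RecoveryInfo WAL record */
    COMPRESS_BEFORE_WAIT        /* startup process about to wait for WAL */
} CompressReason;

static bool
should_compress(CompressReason reason, int active_range, int nxids,
                int *commits_since_compress, time_t *last_compress_time)
{
    /* Nothing to gain if the array storage is already compact. */
    if (active_range <= nxids)
        return false;

    switch (reason)
    {
        case COMPRESS_AT_RECOVERY_INFO:
            break;              /* always compress; these are infrequent */
        case COMPRESS_AT_COMMIT:
            if (++(*commits_since_compress) < 128)
                return false;
            if (active_range < 2 * nxids)
                return false;   /* not enough wasted slots yet */
            break;
        case COMPRESS_BEFORE_WAIT:
            if (time(NULL) - *last_compress_time < 1)
                return false;   /* at most once per second */
            break;
    }
    *commits_since_compress = 0;
    *last_compress_time = time(NULL);
    return true;
}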
Tom Lane 8b47ccb624 Prevent clobbering of utility statements in SQL function caches.
This is an oversight in commit 7c337b6b5: I apparently didn't think
about the possibility of a SQL function being executed multiple
times within a query.  In that case, functions.c's primitive caching
mechanism allows the same utility parse tree to be presented for
execution more than once.  We have to tell ProcessUtility to make
a working copy of the parse tree, or bad things happen.

Normally I'd add a regression test, but I think the reported crasher
is dependent on some rather random implementation choices that are
nowhere near functions.c, so its usefulness as a long-lived test
feels questionable.  In any case, this fix is clearly correct given
the design choices of 7c337b6b5.

Per bug #17702 from Xin Wen.  Thanks to Daniel Gustafsson for
analysis.  Back-patch to v14 where the faulty commit came in
(before that, the responsibility for copying scribble-able
utility parse trees lay elsewhere).

Discussion: https://postgr.es/m/17702-ad24fdcdd1e9047a@postgresql.org
2022-11-29 11:46:33 -05:00
Tom Lane 51dfaa0b01 Remove bogus Assert and dead code in remove_useless_results_recurse().
The JOIN_SEMI case Assert'ed that there are no PlaceHolderVars that
need to be evaluated at the semijoin's RHS, which is wrong because
there could be some in the semijoin's qual condition.  However, there
could not be any references further up than that, and within the qual
there is not any way that such a PHV could have gone to null yet, so
we don't really need the PHV and there is no need to avoid making the
RHS-removal optimization.  The upshot is that there's no actual bug
in production code, and we ought to just remove this misguided Assert.

While we're here, also drop the JOIN_RIGHT case, which is dead code
because reduce_outer_joins() already got rid of JOIN_RIGHT.

Per bug #17700 from Xin Wen.  Uselessness of the JOIN_RIGHT case
pointed out by Richard Guo.  Back-patch to v12 where this code
was added.

Discussion: https://postgr.es/m/17700-2b5c10d917c30687@postgresql.org
2022-11-29 10:52:44 -05:00
Alvaro Herrera ad86d159b6
Add 'missing_ok' argument to build_attrmap_by_name
When it's given as true, return a 0 in the position of the missing
column rather than raising an error.

This is currently unused, but it allows us to reimplement column
permission checking in a subsequent commit.  It seems worth breaking
into a separate commit because it affects unrelated code.

Author: Amit Langote <amitlangote09@gmail.com>
Discussion: https://postgr.es/m/CA+HiwqFfiai=qBxPDTjaio_ZcaqUKh+FC=prESrB8ogZgFNNNQ@mail.gmail.com
2022-11-29 09:39:36 +01:00
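A hedged usage sketch; the argument order follows the pre-existing two-argument
declaration with the new flag appended, and the relation variables are
hypothetical:

/* Build a name-based attribute map between two tuple descriptors; with
 * missing_ok = true, positions with no matching column are left as 0
 * instead of raising an error. */
AttrMap    *map = build_attrmap_by_name(RelationGetDescr(childrel),
                                        RelationGetDescr(rootrel),
                                        true);  /* missing_ok */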
Michael Paquier 00ae5d6f58 meson: Add some missing env settings for tests of pg_dump and pg_verifybackup
The commands used for the compression tests were missing in a few
places, causing the tests related to these to never run.

Georgios has spotted GZIP_PROGRAM missing in pg_dump, while I have
noticed the ones missing in pg_verifybackup while looking at the rest of
the tree.

Author: Georgios Kokolatos, Michael Paquier
Discussion: https://postgr.es/m/O4mutIrCES8ZhlXJiMvzsivT7ztAMja2lkdL1LJx6O5f22I2W8PBIeLKz7mDLwxHoibcnRAYJXm1pH4tyUNC4a8eDzLn22a6Pb1S74Niexg=@pm.me
2022-11-29 13:33:31 +09:00
Michael Paquier 8aa03f3caa Fix comment in snapbuild.c
Author: Masahiko Sawada
Discussion: https://postgr.es/m/CAD21AoAmf-PkSnMGAJg2DtGhp7O7vpHoexCxfQLKZg8xrbRwsg@mail.gmail.com
2022-11-29 08:53:01 +09:00
Thomas Munro cd4329d939 Remove promote_trigger_file.
Previously, an idle startup (recovery) process would wake up every 5
seconds to have a chance to poll for promote_trigger_file, even if that
GUC was not configured.  That promotion triggering mechanism was
effectively superseded by pg_ctl promote and pg_promote() a long time
ago.  There probably aren't many users left and it's very easy to change
to the modern mechanisms, so we agreed to remove the feature.

This is part of a campaign to reduce wakeups on idle systems.

Author: Simon Riggs <simon.riggs@enterprisedb.com>
Reviewed-by: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>
Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Reviewed-by: Thomas Munro <thomas.munro@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Ian Lawrence Barwick <barwick@gmail.com>
Discussion: https://postgr.es/m/CANbhV-FsjnzVOQGBpQ589%3DnWuL1Ex0Ykn74Nh1hEjp2usZSR5g%40mail.gmail.com
2022-11-29 12:08:38 +13:00
Andrew Dunstan 4441fc704d Provide non-superuser predefined roles for vacuum and analyze
This provides two new predefined roles: pg_vacuum_all_tables and
pg_analyze_all_tables. Roles which have been granted these roles can
perform vacuum or analyze, respectively, on any or all tables as if they
were a superuser. This removes the need to grant superuser privilege to
roles just so they can perform vacuum and/or analyze.

Nathan Bossart

Reviewed by: Bharath Rupireddy, Kyotaro Horiguchi, Stephen Frost, Robert
Haas, Mark Dilger, Tom Lane, Corey Huinker, David G. Johnston, Michael
Paquier.

Discussion: https://postgr.es/m/20220722203735.GB3996698@nathanxps13
2022-11-28 12:08:14 -05:00
Andrew Dunstan b5d6382496 Provide per-table permissions for vacuum and analyze.
Currently a table can only be vacuumed or analyzed by its owner or
a superuser. This can now be extended to any user by means of an
appropriate GRANT.

Nathan Bossart

Reviewed by: Bharath Rupireddy, Kyotaro Horiguchi, Stephen Frost, Robert
Haas, Mark Dilger, Tom Lane, Corey Huinker, David G. Johnston, Michael
Paquier.

Discussion: https://postgr.es/m/20220722203735.GB3996698@nathanxps13
2022-11-28 12:08:14 -05:00
Michael Paquier cbe6e482d7 Add TAP tests for include directives in HBA and ident files
This commit adds a basic set of authentication tests to check after the
new keywords added by a54b658 for the HBA and ident files, aka
"include", "include_if_exists" and "include_dir".

This includes checks for all the positive cases originally proposed,
where valid contents are generated for the HBA and ident files without
any errors happening in the server, checking as well the contents of
their respective system views.  The error handling will be evaluated
separately (-DEXEC_BACKEND makes that trickier), and what we have here
covers most of the ground I would like to see covered if one manipulates
the tokenization logic of hba.c in the future.

While on it, some coverage is added for files included with '@' for
database or user name lists.

Author: Julien Rouhaud
Reviewed-by: Michael Paquier
Discussion: https://postgr.es/m/20220223045959.35ipdsvbxcstrhya@jrouhaud
2022-11-28 15:19:06 +09:00
Andrew Dunstan ccc59a83cd Fix binary mismatch for MSVC plperl vs gcc built perl libs
When loading plperl built against Strawberry perl or the msys2 ucrt perl
that have been built with gcc, a binary mismatch has been encountered
which looks like this:

loadable library and perl binaries are mismatched (got handshake key 0000000012800080, needed 0000000012900080)

To cure this we bring the handshake keys into sync by adding
NO_THREAD_SAFE_LOCALE to the defines used to build plperl.

Discussion: https://postgr.es/m/20211005004334.tgjmro4kuachwiuc@alap3.anarazel.de
Discussion: https://postgr.es/m/c2da86a0-2906-744c-923d-16da6047875e@dunslane.net

Backpatch to all live branches.
2022-11-27 09:03:22 -05:00
Michael Paquier 1e314847dd Mark two signal flags as sig_atomic_t in pgbench and pg_test_fsync
Two booleans used for timeout tracking were accessed within some SIGALRM
signal handlers, but they were not declared as sig_atomic_t, so mark
them as such.  This has no consequence on WIN32 for either tool.

Author: Ranier Vilela
Reviewed-by: Michael Paquier, Masahiko Sawada
Discussion: https://postgr.es/m/CAEudQArCDQQiPiFR16=yu9k5s2tp4tgEe1U1ZbkW4ofx81AWWQ@mail.gmail.com
2022-11-26 20:12:33 +09:00
Michael Paquier 02ac05b4c0 Fix typo in hba.c
Spotted while reading through the git history.
2022-11-26 10:14:18 +09:00
Andrew Dunstan 341f4e002d Allow building with MSVC and Strawberry perl
Strawberry uses __builtin_expect which Visual C doesn't have. For this
case define it as a noop. Solution taken from vim sources.

Backpatch to all live branches
2022-11-25 15:28:38 -05:00
Peter Eisentraut d94f29cb89 Correct some SQL feature names
Maybe these were initially typos or they have been changed upstream
since we first added them.  This brings it all in line with SQL:2016.
2022-11-25 16:56:05 +01:00
Andrew Dunstan 50617a9aa3 Fix gen_node_support.pl for changed AclMode size
omitted from 7b378237aa, mea culpa.

Complaint and fix from Amit Langote.
2022-11-25 08:55:56 -05:00
Dean Rasheed 7b2ccc5e03 Fix rule-detection code for MERGE.
Use the relation's rd_rules structure to test whether it has rules,
rather than the relhasrules flag, which might be out of date.

Reviewed by Tom Lane.

Backpatch to 15, where MERGE was added.

Discussion: https://postgr.es/m/CAEZATCVkBVZABfw71sYvkcPf6tarcOFST5Bc6AOi-LFT9YdccQ%40mail.gmail.com
2022-11-25 13:31:48 +00:00
Etsuro Fujita ffbb7e65a8 Fix handling of pending inserts in nodeModifyTable.c.
Commit b663a4136, which allowed FDWs to INSERT rows in bulk, added code
to nodeModifyTable.c to flush pending inserts to the foreign-table
result relation(s) before completing processing of the ModifyTable node,
but the code failed to take into account the case where the INSERT query
has modifying CTEs, leading to incorrect results.

Also, that commit failed to flush pending inserts before firing BEFORE
ROW triggers so that rows are visible to such triggers.

In that commit we scanned through EState's
es_tuple_routing_result_relations or es_opened_result_relations list to
find the foreign-table result relations to which pending inserts are
flushed, but that would be inefficient in some cases.  So to fix, 1) add
a List member to EState to record the insert-pending result relations,
and 2) modify nodeModifyTable.c so that it adds the foreign-table result
relation to the list in ExecInsert() if appropriate, and flushes pending
inserts properly using the list where needed.

While here, fix a copy-and-pasteo in a comment in ExecBatchInsert(),
which was added by that commit.

Back-patch to v14 where that commit appeared.

Discussion: https://postgr.es/m/CAPmGK16qutyCmyJJzgQOhfBq%3DNoGDqTB6O0QBZTihrbqre%2BoxA%40mail.gmail.com
2022-11-25 17:45:00 +09:00
Michael Paquier 9e492d6b69 Skip TAP test for peer authentication if there are no unix-domain sockets
Peer connections require support for local connections to work, but the
test missed the same check as the other ones in this suite.  The
buildfarm does not run the authentication tests on Windows, and, more
surprisingly, the CI with meson was already able to skip it.

Author: Anton A. Melnikov
Discussion: https://postgr.es/m/28b9d685-9590-45b1-fe87-358d61c6950a@inbox.ru
2022-11-25 16:37:49 +09:00
Michael Paquier d13b684117 Introduce variables for initial and max nesting depth on configuration files
The code has already been assuming in a few places that the initial
recursion nesting depth is 0, and the recent changes in hba.c (mainly
783e8c6) rely on this assumption in more places.  The maximum
recursion nesting level is assumed to be 10 for hba.c and GUCs.

Author: Julien Rouhaud
Discussion: https://postgr.es/m/20221124090724.n7amf5kpdhx6vb76@jrouhaud
2022-11-25 07:40:12 +09:00
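
An illustrative sketch of the change in the commit above (the constant
names here are assumptions, not necessarily the ones the commit
introduces): replace the bare 0 and 10 with named depth limits shared by
the GUC and hba.c tokenizers.

    #include <stdbool.h>

    /* constant names are assumed for illustration */
    #define CONF_FILE_START_DEPTH   0   /* depth of the initial, top-level file */
    #define CONF_FILE_MAX_DEPTH     10  /* stop including past this depth */

    /* recursive inclusion guard, sketched */
    static bool
    process_file(const char *path, int depth)
    {
        if (depth > CONF_FILE_MAX_DEPTH)
            return false;       /* report "too deeply nested" and bail out */

        /* ... for each include directive found in the file ...   */
        /*     process_file(included_path, depth + 1);            */
        return true;
    }
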
David Rowley 2d1f3bce97 Fix some 32-bit shift warnings in MSVC
7b378237a widened AclMode to 64 bits which resulted in 3 new additional
warnings on MSVC.  Here we make use of UINT64CONST to reassure the
compiler that we do intend the bit shift expression to yield a 64-bit
result.

Discussion: https://postgr.es/m/CAApHDvo=pn01Y_3zASZZqn+cotF1c4QFCwWgk6MiF0VscaE5ug@mail.gmail.com
2022-11-25 11:05:22 +13:00
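
A standalone sketch of the warning silenced by the commit above (macro
values are illustrative, and UINT64CONST is redefined here only so the
snippet compiles outside the tree): with AclMode now 64 bits wide, a
shift written on a plain int literal is computed in 32-bit arithmetic
before being widened, which MSVC flags.

    #include <stdint.h>

    typedef uint64_t AclMode;           /* widened to 64 bits by 7b378237a */

    #define UINT64CONST(x) (x##ULL)     /* stand-in for PostgreSQL's macro */

    /* Before: the shift happens in 32 bits, then the result is widened.   */
    /* #define ACL_SELECT       (1 << 1)                                   */

    /* After: the operand is a 64-bit constant before the shift is applied. */
    #define ACL_SELECT          (UINT64CONST(1) << 1)
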
David Rowley ec5affdbc2 Improve indenting in _hash_pgaddtup
The Assert added in d09dbeb9b came out rather ugly after having run
pgindent on that code.  Here we adjust things to use some local variables
so that the Assert remains within the 80-character margin.

Author: Ted Yu
Discussion: https://postgr.es/m/CALte62wLSir1=x93Jf0xZvHaO009FEJfhVMFwnaR8q=csPP8kQ@mail.gmail.com
2022-11-25 10:10:44 +13:00
Alvaro Herrera 2cf41cd309
Make multixact error message more explicit
There are recent reports involving a very old error message that we have
no history of hitting -- perhaps a recently introduced bug.  Improve the
error message in an attempt to improve our chances of investigating the
bug.

Per reports from Dimos Stamatakis and Bob Krier.

Backpatch to 11.

Discussion: https://postgr.es/m/CO2PR0801MB2310579F65529380A4E5EDC0E20A9@CO2PR0801MB2310.namprd08.prod.outlook.com
Discussion: https://postgr.es/m/17518-04e368df5ad7f2ee@postgresql.org
2022-11-24 10:45:10 +01:00
Michael Paquier af205152ef Add the database name to the ps display of logical WAL senders
Logical WAL senders now display as follows, gaining a database name:
postgres: walsender USER DATABASE HOST(PORT) STATE

Physical WAL senders show up the same as before:
postgres: walsender USER HOST(PORT) STATE

This information was missing, hence it was not possible to know from ps
whether a WAL sender was a logical or a physical one, nor, for a logical
one, which database it is connected to.

Author: Tatsuhiro Nakamori
Reviewed-by: Fujii Masao, Bharath Rupireddy
Discussion: https://postgr.es/m/36a3b137e82e0ea9fe7e4234f03b64a1@oss.nttdata.com
2022-11-24 16:07:59 +09:00
Michael Paquier a54b658ce7 Add support for file inclusions in HBA and ident configuration files
pg_hba.conf and pg_ident.conf gain support for three record keywords:
- "include", to include a file.
- "include_if_exists", to include a file, ignoring it if missing.
- "include_dir", to include a directory of files.  These are classified
by name (C locale, mostly) and need to be suffixed with ".conf", hence
following the same rules as GUCs.

This commit relies on the refactoring pieces done in efc9816, ad6c528,
783e8c6 and 1b73d0b, adding a small wrapper to build a list of
TokenizedAuthLines (tokenize_include_file), and the code is shaped to
offer some symmetry with what is done for GUCs with the same options.

pg_hba_file_rules and pg_ident_file_mappings gain a new field called
file_name, to track from which file a record is located, taking
advantage of the addition of rule_number in c591300 to offer an
organized view of the HBA or ident records loaded.

Bump catalog version.

Author: Julien Rouhaud
Reviewed-by: Michael Paquier
Discussion: https://postgr.es/m/20220223045959.35ipdsvbxcstrhya@jrouhaud
2022-11-24 13:51:34 +09:00
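
A hedged pg_hba.conf fragment showing how the three keywords described in
the commit above are meant to be used; the file and directory names are
made up for illustration.

    # illustrative file names, not defaults shipped with PostgreSQL
    # pulled in verbatim; an error is raised if the file is missing
    include           extra_hba.conf

    # same, but a missing file is silently ignored
    include_if_exists optional_hba.conf

    # every file ending in .conf inside the directory is included
    include_dir       hba.d
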
David Rowley d09dbeb9bd Speedup hash index builds by skipping needless binary searches
When building hash indexes using the spool method, tuples are added to the
index page in hashkey order.  Because of this, we can safely skip
performing the binary search on the existing tuples on the page to find
the location to insert the tuple based on its hashkey value.  For this
case, we can just always put the tuple at the end of the item array as the
tuples will always arrive in hashkey order.

Testing has shown that this can improve hash index build speeds by 5-15%
with a unique set of integer values.

Author: Simon Riggs
Reviewed-by: David Rowley
Tested-by: David Zhang, Tomas Vondra
Discussion: https://postgr.es/m/CANbhV-GBc5JoG0AneUGPZZW3o4OK5LjBGeKe_icpC3R1McrZWQ@mail.gmail.com
2022-11-24 17:21:44 +13:00
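
A hedged fragment of the shortcut described in the commit above, assuming
PostgreSQL's internal hash access method headers; the helper name and
signature are invented for illustration.  When tuples arrive in hashkey
order, the insert position is simply one past the current last item, so
the per-tuple binary search can be skipped.

    #include "postgres.h"
    #include "access/hash.h"

    /* illustrative helper, not the actual _hash_pgaddtup() code */
    static OffsetNumber
    choose_insert_offset(Page page, IndexTuple itup, bool sorted_build)
    {
        if (sorted_build)
            return OffsetNumberNext(PageGetMaxOffsetNumber(page));

        /* unsorted case: locate the slot for this hashkey as before */
        return _hash_binsearch(page, _hash_get_indextuple_hashkey(itup));
    }
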
Michael Paquier d46ad72f46 Create memory context for tokenization after opening top-level file in hba.c
The memory context was created before attempting to open the first HBA
or ident file, which would cause it to leak.  This had no consequences
for the system views for HBA and ident files, but this would cause
memory leaks in the postmaster on reload if the initial HBA and/or ident
files are missing, which is a valid behavior while the backend is
running.

Oversight in efc9816.

Author: Ted Yu
Discussion: https://postgr.es/m/CALte62xH6ivgiKKzPRJgfekPZC6FKLB3xbnf3=tZmc_gKj78dw@mail.gmail.com
2022-11-24 10:27:38 +09:00
Michael Paquier d5566fbfeb Add missing initialization in tokenize_expand_file() for output list
This should have been added in efc9816, but it looks like I have found a
way to mess up a patch split a bit.  This should have no consequence in
practice, but let's be clean.

Discussion: https://postgr.es/m/Y324HvGKiWxW2yxe@paquier.xyz
2022-11-24 10:03:11 +09:00
Michael Paquier efc981627a Rework memory contexts in charge of HBA/ident tokenization
The list of TokenizedAuthLines generated at parsing for the HBA and
ident files is now stored in a static context called tokenize_context,
where all the parsed tokens are stored.  This context is created
when opening the first authentication file of a HBA/ident set (hba_file
or ident_file), and is cleaned up once we are done all the work around
it through a new routine called free_auth_file().  One call of
open_auth_file() should have one matching call of free_auth_file(), the
creation and deletion of the tokenization context is controlled by the
recursion depth of the tokenization.

Rather than having tokenize_auth_file() return a memory context that
includes all the records, the tokenization logic now creates and deletes
one memory context each time this function is called.  This will
simplify recursive calls to this routine for the upcoming inclusion
record logic.

While on it, rename tokenize_inc_file() to tokenize_expand_file() as
this would conflict with the upcoming patch that will add inclusion
records for HBA/ident files.  An '@' file has its tokens added to an
existing list.

Reloading the HBA/ident configuration in a tight loop shows no leaks in
the type of test done (with and without -DEXEC_BACKEND).

Author: Michael Paquier
Reviewed-by: Julien Rouhaud
Discussion: https://postgr.es/m/Y324HvGKiWxW2yxe@paquier.xyz
2022-11-24 08:21:55 +09:00
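
A hedged sketch of the lifetime pattern described in the commit above;
the function names here are illustrative, not the hba.c ones.  One
short-lived context is created when the top-level authentication file is
opened and deleted in the matching cleanup call.

    #include "postgres.h"
    #include "utils/memutils.h"

    static MemoryContext tokenize_context = NULL;

    /* illustrative names; paired with the cleanup function below */
    static void
    tokenization_start(void)
    {
        if (tokenize_context == NULL)
            tokenize_context = AllocSetContextCreate(CurrentMemoryContext,
                                                     "tokenize_context",
                                                     ALLOCSET_DEFAULT_SIZES);
    }

    static void
    tokenization_end(void)
    {
        if (tokenize_context != NULL)
        {
            MemoryContextDelete(tokenize_context);
            tokenize_context = NULL;
        }
    }
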
Alexander Korotkov cee1209514 Support for custom slots in the custom executor nodes
Some custom table access methods may have their own tuple format and use custom
executor nodes for their custom scan types. The ability to set a custom slot
would save them from tuple format conversion. Other users of custom executor
nodes may also benefit.

Discussion: https://postgr.es/m/CAPpHfduJUU6ToecvTyRE_yjxTS80FyPpct4OHaLFk3OEheMTNA@mail.gmail.com
Author: Alexander Korotkov
Reviewed-by: Pavel Borisov
2022-11-24 00:36:11 +03:00
Andrew Dunstan b7a5ef17cf Simplify WARNING messages from skipped vacuum/analyze on a table
This will more easily accommodate adding new permissions for vacuum and
analyze.

Nathan Bossart following a suggestion from Kyotaro Horiguchi

Discussion: https://postgr.es/m/20220726.104712.912995710251150228.horikyota.ntt@gmail.com
2022-11-23 14:43:16 -05:00
Andrew Dunstan 7b378237aa Expand AclMode to 64 bits
We're running out of bits for new permissions. This change doubles the
number of permissions we can accommodate from 16 to 32, so the
forthcoming new ones for vacuum/analyze don't exhaust the pool.

Nathan Bossart

Reviewed by: Bharath Rupireddy, Kyotaro Horiguchi, Stephen Frost, Robert
Haas, Mark Dilger, Tom Lane, Corey Huinker, David G. Johnston, Michael
Paquier.

Discussion: https://postgr.es/m/20220722203735.GB3996698@nathanxps13
2022-11-23 14:43:16 -05:00
Peter Geoghegan b6074846ce Simplify vacuum_set_xid_limits() signature.
Pass VACUUM parameters (VacuumParams state) to vacuum_set_xid_limits()
directly, rather than passing most individual VacuumParams fields as
separate arguments.

Also make vacuum_set_xid_limits() output parameter symbol names match
those used by its vacuumlazy.c caller.

Author: Peter Geoghegan <pg@bowt.ie>
Discussion: https://postgr.es/m/CAH2-Wz=TE7gW5DgSahDkf0UEZigFGAoHNNN6EvSrdzC=Kn+hrA@mail.gmail.com
2022-11-23 11:10:06 -08:00
Peter Geoghegan 02d647bbf0 Don't test HEAP_XMAX_INVALID when freezing xmax.
We shouldn't ever need to rely on whether HEAP_XMAX_INVALID is set in
t_infomask when considering whether or not an xmax should be deemed
already frozen, since that status flag is just a hint.  The only
acceptable representation for an "xmax_already_frozen" raw xmax field is
the transaction ID value zero (also known as InvalidTransactionId).

Adjust code that superficially appeared to rely on HEAP_XMAX_INVALID to
make the rule about xmax_already_frozen clear.  Also avoid needlessly
rereading the tuple's raw xmax.

Oversight in bugfix commit d2599ecf.  There is no evidence that this
ever led to incorrect behavior, so no backpatch.  The worst consequence
of this bug was that VACUUM could hypothetically fail to notice and
report on certain kinds of corruption, which seems fairly benign.

Author: Peter Geoghegan <pg@bowt.ie>
Discussion: https://postgr.es/m/CAH2-Wzkh3DMCDRPfhZxj9xCq9v3WmzvmbiCpf1dNKUBPadhCbQ@mail.gmail.com
2022-11-23 10:49:39 -08:00
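
A hedged, simplified fragment of the rule the commit above spells out:
whether an xmax is already frozen is decided from the raw xmax alone,
never from the HEAP_XMAX_INVALID hint bit.

    /* simplified fragment; tuple is a HeapTupleHeader, raw xmax read once */
    TransactionId xmax = HeapTupleHeaderGetRawXmax(tuple);

    if (!TransactionIdIsValid(xmax))
    {
        /* raw xmax is zero (InvalidTransactionId): already frozen */
        xmax_already_frozen = true;
    }
    else
    {
        /* a real XID or MultiXactId: decide against the freeze cutoffs */
    }
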
Andrew Dunstan b425bf0081
Fix perl warning from commit 9b4eafcaf4
per gripe from Andres Freund and Tom Lane

Backpatch to all live branches.
2022-11-23 07:03:06 -05:00
Tom Lane 56d0ed3b75 Give better hints for ambiguous or unreferenceable columns.
Examine ParseNamespaceItem flags to detect whether a column name
is unreferenceable for lack of LATERAL, or could be referenced if
a qualified name were used, and give better hints for such cases.
Also, don't phrase the message to imply that there's only one
matching column when there is really more than one.

Many of the regression test output changes are not very interesting,
but just reflect reclassifying the "There is a column ... but it
cannot be referenced from this part of the query" messages as DETAIL
rather than HINT.  They are details per our style guide, in the sense
of being factual rather than offering advice; and this change provides
room to offer actual HINTs about what to do.

While here, adjust the fuzzy-name-matching code to be a shade less
impenetrable.  It was overloading the meanings of FuzzyAttrMatchState
fields way too much IMO, so splitting them into multiple fields seems
to make it clearer.  It's not like we need to shave bytes in that
struct.

Per discussion of bug #17233 from Alexander Korolev.

Discussion: https://postgr.es/m/17233-afb9d806aaa64b17@postgresql.org
2022-11-22 18:46:31 -05:00
Tom Lane 9c6ad5eaa9 YA attempt at taming worst-case behavior of get_actual_variable_range.
We've made multiple attempts at preventing get_actual_variable_range
from taking an unreasonable amount of time (3ca930fc3, fccebe421).
But there's still an issue for the very first planning attempt after
deletion of a large number of extremal-valued tuples.  While that
planning attempt will set "killed" bits on the tuples it visits and
thereby reduce effort for next time, there's still a lot of work it
has to do to visit the heap and then set those bits.  It's (usually?)
not worth it to do that much work at plan time to have a slightly
better estimate, especially in a context like this where the table
contents are known to be mutating rapidly.

Therefore, let's bound the amount of work to be done by giving up
after we've visited 100 heap pages.  Giving up just means we'll
fall back on the extremal value recorded in pg_statistic, so it
shouldn't mean that planner estimates suddenly become worthless.

Note that this means we'll still gradually whittle down the problem
by setting a few more index "killed" bits in each planning attempt;
so eventually we'll reach a good state (barring further deletions),
even in the absence of VACUUM.

Simon Riggs, per a complaint from Jakub Wartak (with cosmetic
adjustments by me).  Back-patch to all supported branches.

Discussion: https://postgr.es/m/CAKZiRmznOwi0oaV=4PHOCM4ygcH4MgSvt8=5cu_vNCfc8FSUug@mail.gmail.com
2022-11-22 14:40:20 -05:00
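
A hedged sketch of the bound described in the commit above; the constant
and variable names are assumptions.  The endpoint probe counts distinct
heap pages it has had to visit and gives up once the budget is spent,
letting the caller fall back to the pg_statistic entry.

    /* names assumed; fragment lives inside the index scan loop, where tid
     * is the heap TID just returned */
    #define VISITED_PAGES_LIMIT 100

    BlockNumber block = ItemPointerGetBlockNumber(tid);

    if (block != last_heap_block)
    {
        last_heap_block = block;
        if (++n_visited_heap_pages > VISITED_PAGES_LIMIT)
        {
            have_data = false;  /* caller will use the stats-based endpoint */
            break;
        }
    }
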
Alvaro Herrera 0538d4c0c3
Remove useless MERGE test
This was trying to exercise an ERROR we don't actually have.

Backpatch to 15.

Reported by Teja Mupparti <Tejeswar.Mupparti@microsoft.com>
Discussion: https://postgr.es/m/SN6PR2101MB1040BDAF740EA4389484E92BF0079@SN6PR2101MB1040.namprd21.prod.outlook.com
2022-11-22 11:26:47 +01:00
Amit Kapila a1efcda7c3 Improve comments atop pg_get_replication_slots.
Update comments atop pg_get_replication_slots to make it clear that it
shows all replication slots that currently exist on the database cluster.

Author: sirisha chamarthi
Discussion: https://postgr.es/m/CAKrAKeXRuFpeiWS+STGFm-RFfW19sUDxju66JkyRi13kdQf94Q@mail.gmail.com
2022-11-22 15:22:00 +05:30
Alvaro Herrera 0557e17702
Ignore invalidated slots while computing oldest catalog Xmin
Once a logical slot has acquired a catalog_xmin, it doesn't let go of
it, even when invalidated by exceeding the max_slot_wal_keep_size, which
means that dead catalog tuples are not removed by vacuum anymore from
the point of invalidation until the slot is dropped.  This could be
catastrophic if catalog churn is high.

Change the computation of Xmin to ignore invalidated slots,
to prevent dead rows from accumulating.

Backpatch to 13, where slot invalidation appeared.

Author: Sirisha Chamarthi <sirichamarthi22@gmail.com>
Reviewed-by: Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>
Discussion: https://postgr.es/m/CAKrAKeUEDeqquN9vwzNeG-CN8wuVsfRYbeOUV9qKO_RHok=j+g@mail.gmail.com
2022-11-22 10:56:07 +01:00
Andres Freund 92daeca45d Add wait event for pg_usleep() in perform_spin_delay()
The lwlock wait queue scalability issue fixed in a4adc31f69 was quite hard to
find because of the exponential backoff and because we adjust spins_per_delay
over time within a backend.

To make it easier to find similar issues in the future, add a wait event for
the pg_usleep() in perform_spin_delay(). Showing a wait event while spinning
without sleeping would increase the overhead of spinlocks, which we do not
want.

We may at some later point want to have more granular wait events, but that'd
be a substantial amount of work. This provides at least some insights into
something currently hard to observe.

Reviewed-by: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Reviewed-by: Alexander Korotkov <aekorotkov@gmail.com>
https://postgr.es/m/20221120204310.xywrhyxyytsajuuq@awork3.anarazel.de
2022-11-21 20:34:17 -08:00
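
A hedged sketch of the instrumentation added by the commit above; the
wait-event name is an assumption.  The wait event brackets only the
pg_usleep() call, so the spinning path stays as cheap as before.

    /* inside perform_spin_delay(), once we have decided to sleep;
     * WAIT_EVENT_SPIN_DELAY is the assumed event name */
    pgstat_report_wait_start(WAIT_EVENT_SPIN_DELAY);
    pg_usleep(status->cur_delay);
    pgstat_report_wait_end();
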
Daniel Gustafsson f1d042b21d Replace link to Hunspell with the current homepage
The Hunspell project moved from Sourceforge to Github sometime
in 2016, so update our links to match the new URL.  Backpatch
the doc changes to all supported versions.

Discussion: https://postgr.es/m/DC9A662A-360D-4125-A453-5A6CB9C6C4B4@yesql.se
Backpatch-through: v11
2022-11-21 23:25:48 +01:00
Tom Lane 5644d6f909 Add comments and a missing CHECK_FOR_INTERRUPTS in ts_headline.
I just spent an annoying amount of time reverse-engineering the
100%-undocumented API between ts_headline and the text search
parser's prsheadline function.  Add some commentary about that
while it's fresh in mind.  Also remove some unused macros in
wparser_def.c.

While at it, I noticed that when commit 78e73e875 added a
CHECK_FOR_INTERRUPTS call in TS_execute_recurse, it missed
doing so in the parallel function TS_phrase_execute, which
surely needs one just as much.

Back-patch because of the missing CHECK_FOR_INTERRUPTS.
Might as well back-patch the rest of this too.
2022-11-21 17:07:29 -05:00
Andres Freund f686ae82f2 Add workaround to make ubsan and ps_status.c compatible
At least on linux, set_ps_display() breaks /proc/$pid/environ. The sanitizer's
helper library uses /proc/$pid/environ to implement getenv(), as it wants to
work independent of libc. When just using undefined and alignment sanitizers,
the sanitizer library is only initialized when the first error occurs, by
which time we've often already called set_ps_display(), preventing the
sanitizer libraries from seeing the options.

We can work around that by defining __ubsan_default_options, a weak symbol
libsanitizer uses to get defaults from the application, and returning
getenv("UBSAN_OPTIONS") from it.  But only if main() has already been
reached, so that we don't end up relying on a not-yet-working getenv().

As it's just a function that won't get called when not running a sanitizer, it
doesn't seem necessary to make compilation of the function conditional.

Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/20220323173537.ll7klrglnp4gn2um@alap3.anarazel.de
2022-11-21 13:56:23 -08:00
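
A hedged sketch of the workaround described in the commit above; the
reached-main flag is illustrative.  A weak symbol the sanitizer runtime
consults for its options answers from the environment only once main()
has run far enough for getenv() to be trustworthy.

    #include <stdbool.h>
    #include <stdlib.h>

    static bool reached_main = false;   /* illustrative; set early in main() */

    const char *__ubsan_default_options(void);

    const char *
    __ubsan_default_options(void)
    {
        /* don't touch getenv() before the runtime environment is usable */
        if (!reached_main)
            return "";
        return getenv("UBSAN_OPTIONS");
    }
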
Tom Lane 51b5834cd5 Provide options for postmaster to kill child processes with SIGABRT.
The postmaster normally sends SIGQUIT to force-terminate its
child processes after a child crash or immediate-stop request.
If that doesn't result in child exit within a few seconds,
we follow it up with SIGKILL.  This patch provides GUC flags
that allow either of these signals to be replaced with SIGABRT.
On typically-configured Unix systems, that will result in a
core dump being produced for each such child.  This can be
useful for debugging problems, although it's not something you'd
want to have on in production due to the risk of disk space
bloat from lots of core files.

The old postmaster -T switch, which sent SIGSTOP in place of
SIGQUIT, is changed to be the same as send_abort_for_crash.
As far as I can tell from the code comments, the intent of
that switch was just to block things for long enough to force
core dumps manually, which seems like an unnecessary extra step.
(Maybe at the time, there was no way to get most kernels to
produce core files with per-PID names, requiring manual core
file renaming after each one.  But now it's surely the hard way.)

I also took the opportunity to remove the old postmaster -n
(skip shmem reinit) switch, which hasn't actually done anything
in decades, though the documentation still claimed it did.

Discussion: https://postgr.es/m/2251016.1668797294@sss.pgh.pa.us
2022-11-21 11:59:29 -05:00
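
A hedged sketch of the resulting signal selection.  send_abort_for_crash
appears in the commit text; the name of the flag for the SIGKILL case is
an assumption made here for illustration.

    #include <signal.h>
    #include <stdbool.h>

    /* send_abort_for_crash is named in the commit; send_abort_for_kill is
     * an assumed name for the escalation-path flag */
    static bool send_abort_for_crash = false;
    static bool send_abort_for_kill = false;

    static int
    crash_signal(void)
    {
        return send_abort_for_crash ? SIGABRT : SIGQUIT;
    }

    static int
    escalation_signal(void)
    {
        return send_abort_for_kill ? SIGABRT : SIGKILL;
    }
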
Michael Paquier f193883fc9 Replace SQLValueFunction by COERCE_SQL_SYNTAX
This switch impacts 9 patterns related to a SQL-mandated special syntax
for function calls:
- LOCALTIME [ ( typmod ) ]
- LOCALTIMESTAMP [ ( typmod ) ]
- CURRENT_TIME [ ( typmod ) ]
- CURRENT_TIMESTAMP [ ( typmod ) ]
- CURRENT_DATE

Five new entries are added to pg_proc to compensate for the removal of
SQLValueFunction, providing backward compatibility and making this
change transparent for the end user (for example for the attribute
generated when a keyword is specified in a SELECT or in a FROM clause
without an alias, or when specifying something other than an Iconst to
the parser).

The parser included a set of checks coming from the files in charge of
holding the C functions used for the SQLValueFunction calls (as of
transformSQLValueFunction()), which are now moved within each function's
execution path, so this reduces the dependencies between the execution
and the parsing steps.  As of this change, all the SQL keywords use the
same paths for their work, relying only on COERCE_SQL_SYNTAX.  Like
fb32748, no performance difference has been noticed, while the perf
profiles get reduced with ExecEvalSQLValueFunction() gone.

Bump catalog version.

Reviewed-by: Corey Huinker, Ted Yu
Discussion: https://postgr.es/m/YzaG3MoryCguUOym@paquier.xyz
2022-11-21 18:31:59 +09:00
Amit Kapila 240e0dbacd Add additional checks while creating the initial decoding snapshot.
As per one of the CI reports, there is an assertion failure which
indicates that we were trying to use an unenforced xmin horizon for
decoding snapshots. Though we couldn't figure out the reason for the
assertion failure, these checks will help us find the reason if the
problem happens again in the future.

Author: Amit Kapila based on suggestions by Andres Freund
Reviewed by: Andres Freund
Discussion: https://postgr.es/m/CAA4eK1L8wYcyTPxNzPGkhuO52WBGoOZbT0A73Le=ZUWYAYmdfw@mail.gmail.com
2022-11-21 08:54:43 +05:30
Andres Freund a4adc31f69 lwlock: Fix quadratic behavior with very long wait lists
Until now LWLockDequeueSelf() sequentially searched the list of waiters to see
if the current proc is still on the list of waiters, or has already been
removed. In extreme workloads, where the wait lists are very long, this leads
to quadratic behavior: #backends iterating over a list #backends
long. Additionally, the likelihood of needing to call LWLockDequeueSelf() in
the first place also increases with the increased length of the wait queue, as
it becomes more likely that a lock is released while waiting for the wait list
lock, which is held for longer during lock release.

Due to the exponential back-off in perform_spin_delay() this is surprisingly
hard to detect. We should make that easier, e.g. by adding a wait event around
the pg_usleep() - but that's a separate patch.

The fix is simple - track whether a proc is currently waiting in the wait list
or already removed but waiting to be woken up in PGPROC->lwWaiting.

In some workloads with a lot of clients contending for a small number of
lwlocks (e.g. WALWriteLock), the fix can substantially increase throughput.

As the quadratic behavior arguably is a bug, we might want to decide to
backpatch this fix in the future.

Author: Andres Freund <andres@anarazel.de>
Reviewed-by: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>
Discussion: https://postgr.es/m/20221027165914.2hofzp4cvutj6gin@awork3.anarazel.de
Discussion: https://postgr.es/m/CALj2ACXktNbG=K8Xi7PSqbofTZozavhaxjatVc14iYaLu4Maag@mail.gmail.com
2022-11-20 11:56:32 -08:00
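
A hedged sketch of the bookkeeping change in the commit above; the state
names are assumptions.  PGPROC records whether the backend is still
linked into a wait list, so LWLockDequeueSelf() no longer needs to walk
the queue to find out.

    /* state names assumed for illustration */
    typedef enum LWLockWaitState
    {
        LW_WS_NOT_WAITING,      /* not on any lwlock wait list */
        LW_WS_WAITING,          /* linked into a lock's wait list */
        LW_WS_PENDING_WAKEUP    /* unlinked by the releaser, wakeup pending */
    } LWLockWaitState;

    /* in LWLockDequeueSelf(), the O(n) list search becomes an O(1) check:
     *
     * if (MyProc->lwWaiting == LW_WS_WAITING)
     *     ... unlink ourselves from the wait list ...
     * else
     *     ... the releaser already dequeued us; just wait for the wakeup ...
     */
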
Andres Freund 061bf98fb8 pgstat: replace double lookup with IsSharedRelation()
As the list of shared relations is fixed, we can just dispatch based
IsSharedRelation(), instead of first trying to look up stats for a non-shared
rel and falling back to shared stats.

Author: "Drouvot, Bertrand" <bertranddrouvot.pg@gmail.com>
Reviewed-by: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/8c1851a2-a98e-e1bc-7729-37b0b95f66ec@gmail.com
2022-11-20 10:56:32 -08:00
Tom Lane b62303794e Fix sloppy cleanup of roles in privileges.sql.
Commit 3d14e171e dropped regress_roleoption_donor twice and
regress_roleoption_protagonist not at all.  Leaving roles behind
after "make installcheck" is unfriendly in its own right, plus
it causes repeated runs of "make installcheck" to fail.
2022-11-20 11:30:50 -05:00
Tom Lane 75b8d3de98 Fix long-obsolete comment.
Commit c94959d41 fixed DROP OPERATOR to reset oprcom/oprnegate links
to the dropped operator; but it missed updating this old comment that
claimed we allow such links to dangle.
2022-11-20 11:22:22 -05:00
Andrew Dunstan 9b4eafcaf4 Prevent port collisions between concurrent TAP tests
Currently there is a race condition where, if two concurrent TAP tests
both test that they can open a port, they will assume that it is free and
use it, causing one of them to fail. To prevent this we record a reservation
using an exclusive lock, and any TAP test that discovers a reservation
checks to see if the reserving process is still alive, and looks for
another free port if it is.

Ports are reserved in a directory set by the environment setting
PG_TEST_PORT_DIR, or if that doesn't exist a subdirectory of the top
build directory as set by meson or Makefile.global, or its own
tmp_check directory.

The prove_check recipe in Makefile.global.in is extended to export
top_builddir to the TAP tests. This was already exported by the
prove_installcheck recipes.

Per complaint from Andres Freund

This will be backpatched in due course after some testing.

Discussion: https://postgr.es/m/20221002164931.d57hlutrcz4d2zi7@awork3.anarazel.de
2022-11-20 10:07:35 -05:00
Michael Paquier fb32748e32 Switch SQLValueFunction on "name" to use COERCE_SQL_SYNTAX
This commit changes six SQL keywords to use COERCE_SQL_SYNTAX rather
than relying on SQLValueFunction:
- CURRENT_ROLE
- CURRENT_USER
- USER
- SESSION_USER
- CURRENT_CATALOG
- CURRENT_SCHEMA

Among the six, "user", "current_role" and "current_catalog" require
specific SQL functions to allow ruleutils.c to map them to the SQL
keywords these require when using COERCE_SQL_SYNTAX.  Having
pg_proc.proname match with the keyword ensures that the compatibility
remains the same when projecting any of these keywords in a FROM clause
to an attribute name when an alias is not specified.  This is covered by
the tests added in 2e0d80c, making sure that a correct mapping happens
with each SQL keyword.  The three others (current_schema, session_user
and current_user) already have pg_proc entries for this job, so this
brings more consistency between the way such keywords are treated in the
parser, the executor and ruleutils.c.

SQLValueFunction is reduced to half its contents after this change,
simplifying its logic a bit as there is no need to enforce a C collation
anymore for the entries returning a name as a result.  I have run a few
performance tests, with a million-ish calls to these keywords, without
seeing a difference in run-time or in perf profiles
(ExecEvalSQLValueFunction() is removed from the profiles).  The
remaining SQLValueFunctions are now related to timestamps and dates.

Bump catalog version.

Reviewed-by: Corey Huinker
Discussion: https://postgr.es/m/YzaG3MoryCguUOym@paquier.xyz
2022-11-20 10:58:28 +09:00
Joe Conway ed1d3132d2 Fix catversion
Commit 2fb6154fc didn't quite get the catversion correct per usual
norms. Fix it. Reported by Rishu Bagga.
2022-11-19 17:55:52 -05:00
Andres Freund 8c954168cf Fix mislabeling of PROC_QUEUE->links as PGPROC, fixing UBSan on 32bit
ProcSleep() used a PGPROC* variable to point to PROC_QUEUE->links.next,
because that does "the right thing" with SHMQueueInsertBefore(). While that
largely works, it's certainly not correct, and it is unnecessary - we can just use
SHM_QUEUE* to point to the insertion point.

Noticed when testing a 32bit build of postgres with the undefined behavior
sanitizer. UBSan noticed that sometimes the supposed PGPROC wasn't
sufficiently aligned (required since 46d6e5f567, ensured indirectly, via
ShmemAllocRaw() guaranteeing cacheline alignment).

For now fix this by using a SHM_QUEUE* for the insertion point. Subsequently
we should replace all the use of PROC_QUEUE and SHM_QUEUE with ilist.h, but
that's a larger change that we don't want to backpatch.

Backpatch to all supported versions - it's useful to be able to run postgres
under UBSan.

Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/20221117014230.op5kmgypdv2dtqsf@awork3.anarazel.de
Backpatch: 11-
2022-11-19 12:22:04 -08:00
Tom Lane 3efc82e289 Disable debug_discard_caches in test_oat_hooks test.
The test output varies when debug_discard_caches is enabled,
because that causes extra executions of recomputeNamespacePath.
Maybe putting a hook in that was a bad idea, but as a stopgap,
just turn off debug_discard_caches in this test.

Per buildfarm (now that we have debug_discard_caches coverage
again).  Back-patch to v15 where this module was added.

Discussion: https://postgr.es/m/2267406.1668804934@sss.pgh.pa.us
2022-11-19 13:42:53 -05:00
Tom Lane 3b8ad00853 Doc: sync src/tutorial/basics.source with SGML documentation.
basics.source is supposed to be pretty closely in step with
the examples in chapter 2 of the tutorial, but I forgot to
update it in commit f05a5e000.  Fix that, and adjust a couple
of other discrepancies that had crept in over time.

(I notice that advanced.source is nowhere near being in sync
with chapter 3, but I lack the ambition to do something
about that right now.)
2022-11-19 13:09:14 -05:00
Robert Haas 2fb6154fcd Fix typos and bump catversion.
Typos reported by Álvaro Herrera and Erik Rijkers.

Catversion bump for 3d14e171e9 was
inadvertently omitted.
2022-11-18 16:16:21 -05:00
Robert Haas 3d14e171e9 Add a SET option to the GRANT command.
Similar to how the INHERIT option controls whether or not the
permissions of the granted role are automatically available to the
grantee, the new SET permission controls whether or not the grantee
may use the SET ROLE command to assume the privileges of the granted
role.

In addition, the new SET permission controls whether or not it
is possible to transfer ownership of objects to the target role
or to create new objects owned by the target role using commands
such as CREATE DATABASE .. OWNER. We could alternatively have made
this controlled by the INHERIT option, or allow it when either
option is given. An advantage of this approach is that if you
are granted a predefined role with INHERIT TRUE, SET FALSE, you
can't go and create objects owned by that role.

The underlying theory here is that the ability to create objects
as a target role is not a privilege per se, and thus does not
depend on whether you inherit the target role's privileges. However,
it's surely something you could do anyway if you could SET ROLE
to the target role, and thus making it contingent on whether you
have that ability is reasonable.

Design review by Nathan Bossart, Wolfgang Walther, Jeff Davis,
Peter Eisentraut, and Stephen Frost.

Discussion: http://postgr.es/m/CA+Tgmob+zDSRS6JXYrgq0NWdzCXuTNzT5eK54Dn2hhgt17nm8A@mail.gmail.com
2022-11-18 12:32:56 -05:00
Tom Lane f84ff0c6d4 Don't read MCV stats needlessly in eqjoinsel().
eqjoinsel() currently makes use of MCV stats only when we have such
stats for both sides of the clause.  As coded, though, it would
fetch those stats even when they're present for just one side.
This can be a bit expensive with high statistics targets, leading
to wasted effort in common cases such as joining a unique column
to a non-unique column.  So it seems worth the trouble to do a quick
pre-check to confirm that both sides have MCVs before fetching either.

Also, tweak the API spec for get_attstatsslot() to document the
method we're using here.

David Geier, Tomas Vondra, Tom Lane

Discussion: https://postgr.es/m/b9846ca0-5f1c-9b26-5881-aad3f42b07f0@gmail.com
2022-11-18 11:01:03 -05:00
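
A hedged sketch of the pre-check described in the commit above (not the
exact eqjoinsel() code): calling get_attstatsslot() with flags = 0 only
verifies that an MCV slot exists, per the API note the commit adds, so
the expensive extraction is paid only when both sides have MCVs.

    /* sketch only; variable layout differs from the real eqjoinsel() */
    AttStatsSlot sslot1,
                 sslot2;
    bool         have_mcvs =
        HeapTupleIsValid(vardata1->statsTuple) &&
        HeapTupleIsValid(vardata2->statsTuple) &&
        get_attstatsslot(&sslot1, vardata1->statsTuple,
                         STATISTIC_KIND_MCV, InvalidOid, 0) &&
        get_attstatsslot(&sslot2, vardata2->statsTuple,
                         STATISTIC_KIND_MCV, InvalidOid, 0);

    if (have_mcvs)
    {
        /* now fetch the values and numbers for real and compare MCV lists */
    }
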
Peter Eisentraut d8678aba2e Make object_address test output easier to update
The object_address test file turns to psql unaligned output for some
tests to avoid huge diffs for changes.  But this is useful also to the
other large test in that file, so apply it there as well.  This also
makes verifying the null and whitespace behavior easier.
2022-11-18 16:00:52 +01:00
Peter Eisentraut d0ca708540 Clean up SQL code indentation in test file
This makes the code layout more consistent inside the same file.
2022-11-18 15:31:55 +01:00
Andrew Dunstan 97ee956416 Fix version comparison in Version.pm
Version strings with unequal numbers of parts were being compared
incorrectly. We cure this by treating a missing part in the shorter
version as 0.

per complaint from Jehan-Guillaume de Rorthais, but the fix is mine, not
his.

Discussion: https://postgr.es/m/20220628225325.53d97b8d@karst

Backpatch to release 14 where this code was introduced.
2022-11-18 08:45:58 -05:00
Michael Paquier 07f7237c2a psql: Improve tab completion for GRANT/REVOKE
This commit improves the handling of the following clauses:
- Addition of "CREATE" for ALTER DEFAULT PRIVILEGES .. GRANT/REVOKE.
- Addition of GRANT|ADMIN|INHERIT OPTION FOR for REVOKE, with some
completion for roles, INHERIT being added recently by e3ce2de.
- Addition of GRANT WITH ADMIN|INHERIT.

The list of privilege options common to GRANT and REVOKE is refactored
to avoid its duplication.

Author: Shi Yu
Reviewed-by: Kyotaro Horiguchi, Michael Paquier, Peter Smith
Discussion: https://postgr.es/m/OSZPR01MB6310FCE8609185A56344EED2FD559@OSZPR01MB6310.jpnprd01.prod.outlook.com
2022-11-18 11:26:49 +09:00
Andres Freund 967db242c2 ci: Add task testing windows with mingw
For now the task has been set to be manually triggered, as we are already
limited by the amount of CI time available for windows, particularly on cfbot.

Author: Melih Mutlu <m.melihmutlu@gmail.com>
Reviewed-by: Andrew Dunstan <andrew@dunslane.net>
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Justin Pryzby <pryzby@telsasoft.com>
Reviewed-by: Nazir Bilal Yavuz <byavuz81@gmail.com>
Discussion: https://postgr.es/m/CAGPVpCSKS9E0An4=e7ZDnme+y=WOcQFJYJegKO8kE9=gh8NJKQ@mail.gmail.com
2022-11-17 16:22:25 -08:00
Peter Geoghegan 1489b1ce72 Standardize rmgrdesc recovery conflict XID output.
Standardize on the name snapshotConflictHorizon for all XID fields from
WAL records that generate recovery conflicts when in hot standby mode.
This supersedes the previous latestRemovedXid naming convention.

The new naming convention places emphasis on how the values are actually
used by REDO routines.  How the values are generated during original
execution (details of which vary by record type) is deemphasized.  Users
of tools like pg_waldump can now grep for snapshotConflictHorizon to see
all potential sources of recovery conflicts in a standardized way,
without necessarily having to consider which specific record types might
be involved.

Also bring a couple of WAL record types that didn't follow any kind of
naming convention into line.  These are heapam's VISIBLE record type and
SP-GiST's VACUUM_REDIRECT record type.  Now every WAL record whose REDO
routine calls ResolveRecoveryConflictWithSnapshot() passes through the
snapshotConflictHorizon field from its WAL record.  This is follow-up
work to the refactoring from commit 9e540599 that made FREEZE_PAGE WAL
records use a standard snapshotConflictHorizon style XID cutoff.

No bump in XLOG_PAGE_MAGIC, since the underlying format of affected WAL
records doesn't change.

Author: Peter Geoghegan <pg@bowt.ie>
Reviewed-By: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/CAH2-Wzm2CQUmViUq7Opgk=McVREHSOorYaAjR1ZpLYkRN7_dPw@mail.gmail.com
2022-11-17 14:55:08 -08:00
Alvaro Herrera 6ff5aa1299
Fix MERGE tuple count with DO NOTHING
Reporting tuples for which nothing is done is useless and goes against
the documented behavior, so don't do it.

Backpatch to 15.

Reported by: Luca Ferrari
Discussion: https://postgr.es/m/CAKoxK+42MmACUh6s8XzASQKizbzrtOGA6G1UjzCP75NcXHsiNw@mail.gmail.com
2022-11-17 18:56:11 +01:00
Peter Geoghegan 813492dacc Use correct type name in comments about freezing.
Oversight in commit 9e540599, which added freeze plan deduplication.
2022-11-17 09:34:12 -08:00
Noah Misch aca93b5f18 Account for IPC::Run::result() Windows behavior change.
This restores compatibility with the not-yet-released successor of
version 20220807.0.  Back-patch to 9.4, which introduced this code.

Reviewed by Andrew Dunstan.

Discussion: https://postgr.es/m/20221117061805.GA4020280@rfd.leadboat.com
2022-11-17 07:35:06 -08:00
Peter Eisentraut bbf9c282ce libpq: Handle NegotiateProtocolVersion message
Before, receiving a NegotiateProtocolVersion message would result in a
confusing error message like

    expected authentication request from server, but received v

This adds proper handling of this protocol message and produces an
on-topic error message from it.

Reviewed-by: Jacob Champion <jchampion@timescale.com>
Reviewed-by: Nathan Bossart <nathandbossart@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/f9c7862f-b864-8ef7-a861-c4638c83e209%40enterprisedb.com
2022-11-17 15:42:09 +01:00
Peter Eisentraut dce92e59b1 libpq: Correct processing of startup response messages
After sending a startup message, libpq expects either an error
response ('E') or an authentication request ('R').  Before processing
the message, it ensures it has read enough bytes to correspond to the
length specified in the message.  However, when processing the 'R'
message, if an EOF status is returned it loops back waiting for more
input, even though we already checked that we have enough input.  In
this particular case, this is probably not reachable anyway, because
other code ensures we have enough bytes for an authentication request
message, but the code is wrong and misleading.  In the more general
case, processing a faulty message could result in an EOF status, which
would then result in an infinite loop waiting for the end of a message
that will never come.  The correction is to make this an error.

Reported-by: Jacob Champion <jchampion@timescale.com>
Discussion: https://www.postgresql.org/message-id/flat/f9c7862f-b864-8ef7-a861-c4638c83e209@enterprisedb.com
2022-11-17 14:12:04 +01:00
Daniel Gustafsson 3d0c95bc89 Fix wording in comment
Author: vignesh C <vignesh21@gmail.com>
Discussion: https://postgr.es/m/CALDaNm0jKY__83tUsem79+YqfjTWTAkDfiPS0T_Z4y0AYGd_HQ@mail.gmail.com
2022-11-17 13:17:19 +01:00
Alvaro Herrera 01755490cf
Fix outdated comment in ExecDelete
This comment references a struct that disappeared before MERGE was
merged ... and ExecDelete is not called by the committed MERGE anyway.
Revert to the original wording.

Backpatch to 15
2022-11-17 12:52:20 +01:00
Peter Eisentraut af3abca029 Allow initdb to complete on systems without "locale" command
This partially reverts 2fe3bdbd69, which
added an error check on the "locale -a" execution.  This is removed
again, adding a comment explaining why.  We already had code that
shows a warning if no system locales could be found, which should be
sufficient for feedback to the user.

Discussion: https://www.postgresql.org/message-id/flat/b2b491d1-3b36-15b9-6910-5b5540b27f5c%40enterprisedb.com
2022-11-17 12:12:11 +01:00
Daniel Gustafsson 3b304547a9 doc: Fix wording of MERGE actions in README
UPDATE was listed twice and DELETE was omitted; replace one UPDATE
with DELETE instead.

Backpatch through v15 where MERGE was added.

Author: Myo Wai Thant <myo.waithant@fujitsu.com>
Reviewed-by: Richard Guo <guofenglinux@gmail.com>
Discussion: https://postgr.es/m/OSAPR01MB43247E46931E9E9CFC4AA0F29A079@OSAPR01MB4324.jpnprd01.prod.outlook.com
Backpatch-through: 15
2022-11-17 10:07:06 +01:00
Daniel Gustafsson aba2dbb3cf Fix typos in comments
Fix various misspellings of xl_running_xacts.

Author: Japin Li <japinli@hotmail.com>
Discussion: https://postgr.es/m/MEYP282MB1669CA2A39ACF0172774ED27B6069@MEYP282MB1669.AUSP282.PROD.OUTLOOK.COM
2022-11-17 09:12:51 +01:00