Commit Graph

22006 Commits

Amit Kapila 29abde637b Don't allow replication slot_name to be set to ''.
We don't allow creating a replication slot with an empty name ('') via the
SQL API pg_create_logical_replication_slot(), but slot_name could still be
set to '' via the ALTER SUBSCRIPTION command.  That left the apply worker
repeatedly trying to stream data via slot_name '', even though the user is
not allowed to create a slot with that name.
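
For illustration, a sketch of the rejected and the still-supported usage
(the subscription name is hypothetical):

  -- previously accepted, now rejected, since '' can never name a valid slot:
  ALTER SUBSCRIPTION mysub SET (slot_name = '');
  -- the supported way to detach a subscription from its slot remains:
  ALTER SUBSCRIPTION mysub DISABLE;
  ALTER SUBSCRIPTION mysub SET (slot_name = NONE);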

Author: Japin Li
Reviewed-By: Ranier Vilela, Amit Kapila
Backpatch-through: 10, where it was introduced
Discussion: https://postgr.es/m/MEYP282MB1669CBD98E721C77CA696499B61A9@MEYP282MB1669.AUSP282.PROD.OUTLOOK.COM
2021-07-19 10:36:15 +05:30
Thomas Munro 04cad8f7bc Adjust commit 2dbe8905 for ancient macOS.
A couple of open flags used in an assertion didn't exist in macOS 10.4.
Per build farm animal prairiedog.  Also add O_EXCL while here (there are
a few more standard flags but they're not relevant and likely to be
missing).
2021-07-19 16:50:21 +12:00
Amit Kapila dcecdfafbd Update comments for AlterSubscription.
Add an explanation of why the subscription needs to be disabled before
slot_name can be set to NONE.

Author: Japin Li and Amit Kapila
Discussion: https://postgr.es/m/MEYP282MB1669CBD98E721C77CA696499B61A9@MEYP282MB1669.AUSP282.PROD.OUTLOOK.COM
2021-07-19 08:32:37 +05:30
Thomas Munro 2dbe890571 Support direct I/O on macOS.
Macs don't understand O_DIRECT, but they can disable caching with a
separate fcntl() call.  Extend the file opening functions in fd.c to
handle this for us if the caller passes in PG_O_DIRECT.

For now, this affects only WAL data and even then only if you set:

  max_wal_senders=0
  wal_level=minimal

This is not expected to be very useful on its own, but later proposed
patches will make greater use of direct I/O, and it'll be useful for
testing if developers on Macs can see the effects.

Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/CA%2BhUKG%2BADiyyHe0cun2wfT%2BSVnFVqNYPxoO6J9zcZkVO7%2BNGig%40mail.gmail.com
2021-07-19 11:01:01 +12:00
Alexander Korotkov 9e3c217bd9 Support for unnest(multirange)
It has been spotted that multiranges lack the ability to be decomposed
into their individual ranges.  Subscripting and a proper expanded object
representation require substantial work, and it's too late for v14.  This
commit provides the implementation of unnest(multirange), which is quite
trivial.  unnest(multirange) is defined as a polymorphic procedure.
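
For example (output shown as an illustrative comment):

  SELECT unnest('{[1,3), [5,7)}'::int4multirange);
  --  unnest
  -- --------
  --  [1,3)
  --  [5,7)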

Catversion is bumped.

Reported-by: Jonathan S. Katz
Discussion: https://postgr.es/m/flat/60258efe-bd7e-4886-82e1-196e0cac5433%40postgresql.org
Author: Alexander Korotkov
Reviewed-by: Justin Pryzby, Jonathan S. Katz, Zhihong Yu, Tom Lane
Reviewed-by: Alvaro Herrera
2021-07-18 21:07:24 +03:00
Dean Rasheed ba620760c4 Improve error checking of CREATE COLLATION options.
Check for conflicting or redundant options, as we do for most other
commands. Specifying any option more than once is at best redundant,
and quite likely indicates a bug in the user's code.

While at it, improve the error for conflicting locale options by
adding detail text (the same as for CREATE DATABASE).
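
For example (error texts shown as illustrative comments):

  -- a repeated option is now rejected:
  CREATE COLLATION dup_coll (locale = 'C', locale = 'C');
  -- ERROR:  conflicting or redundant options

  -- conflicting locale options now carry a detail message:
  CREATE COLLATION mixed_coll (locale = 'C', lc_collate = 'C');
  -- ERROR:  conflicting or redundant options
  -- DETAIL:  LOCALE cannot be specified together with LC_COLLATE or LC_CTYPE.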

Bharath Rupireddy, reviewed by Vignesh C. Some additional hacking by
me.

Discussion: https://postgr.es/m/CALj2ACWtL6fTLdyF4R_YkPtf1YEDb6FUoD5DGAki3rpD+sWqiA@mail.gmail.com
2021-07-18 11:08:34 +01:00
Alvaro Herrera df80fa2ee5
Preserve firing-on state when cloning row triggers to partitions
When triggers are cloned from partitioned tables to their partitions,
the 'tgenabled' flag (origin/replica/always/disable) was not propagated.
Make it so that the flag on the partition's trigger is initially set to
the same value as on the partitioned table.
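
A sketch of the new behavior (names are hypothetical; the tgenabled values
mentioned in the comments are illustrative):

  CREATE TABLE parent (a int) PARTITION BY LIST (a);
  CREATE FUNCTION noop_trg() RETURNS trigger LANGUAGE plpgsql
    AS $$ BEGIN RETURN NEW; END $$;
  CREATE TRIGGER trg AFTER INSERT ON parent
    FOR EACH ROW EXECUTE FUNCTION noop_trg();
  ALTER TABLE parent DISABLE TRIGGER trg;
  CREATE TABLE parent_1 PARTITION OF parent FOR VALUES IN (1);
  -- the cloned trigger on parent_1 now starts out disabled ('D') like its
  -- parent, instead of coming back enabled ('O'):
  SELECT tgrelid::regclass, tgenabled FROM pg_trigger WHERE tgname = 'trg';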

Add a test case to verify the behavior.

Backpatch to 11, where this appeared in commit 86f575948c.

Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
Reported-by: Justin Pryzby <pryzby@telsasoft.com>
Discussion: https://postgr.es/m/20200930223450.GA14848@telsasoft.com
2021-07-16 13:01:43 -04:00
Alvaro Herrera ead9e51e82
Advance old-segment horizon properly after slot invalidation
When some slots are invalidated due to the max_slot_wal_keep_size limit,
the old segment horizon should move forward to stay within the limit.
However, in commit c655077639 we forgot to call KeepLogSeg again to
recompute the horizon after invalidating replication slots.  In cases
where other slots remained, the limits would be recomputed eventually
for other reasons, but if all slots were invalidated, the limits would
not move at all afterwards.  Repair.

Backpatch to 13 where the feature was introduced.

Author: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Reported-by: Marcin Krupowicz <mk@071.ovh>
Discussion: https://postgr.es/m/17103-004130e8f27782c9@postgresql.org
2021-07-16 12:07:30 -04:00
Tom Lane a49d081235 Replace explicit PIN entries in pg_depend with an OID range test.
As of v14, pg_depend contains almost 7000 "pin" entries recording
the OIDs of built-in objects.  This is a fair amount of bloat for
every database, and it adds time to pg_depend lookups as well as
initdb.  We can get rid of all of those entries in favor of an OID
range check, i.e. "OIDs below FirstUnpinnedObjectId are pinned".
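
For example (illustrative; deptype 'p' denotes a pin entry):

  -- explicit pin rows are gone; pinned-ness is implied by the OID range
  SELECT count(*) FROM pg_depend WHERE deptype = 'p';
  --  count
  -- -------
  --      0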

(template1 and the public schema are exceptions.  Those exceptions
are now wired into IsPinnedObject() instead of initdb's code for
filling pg_depend, but it's the same amount of cruft either way.)

The contents of pg_shdepend are modified likewise.

Discussion: https://postgr.es/m/3737988.1618451008@sss.pgh.pa.us
2021-07-15 11:41:47 -04:00
Dean Rasheed 2bfb50b3df Improve reporting of "conflicting or redundant options" errors.
When reporting "conflicting or redundant options" errors, try to
ensure that errposition() is used, to help the user identify the
offending option.
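
For example (hypothetical table; error text shown as illustrative
comments):

  COPY mytab FROM STDIN (FORMAT csv, FORMAT text);
  -- ERROR:  conflicting or redundant options
  -- LINE 1: COPY mytab FROM STDIN (FORMAT csv, FORMAT text);
  -- (with the error cursor pointing at the second FORMAT)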

Formerly, errposition() was invoked in less than 60% of cases. This
patch raises that to over 90%, but there remain a few places where the
ParseState is not readily available. Using errdetail() might improve
the error in such cases, but that is left as a task for the future.

Additionally, since this error is thrown from over 100 places in the
codebase, introduce a dedicated function to throw it, reducing code
duplication.

Extracted from a slightly larger patch by Vignesh C. Reviewed by
Bharath Rupireddy, Alvaro Herrera, Dilip Kumar, Hou Zhijie, Peter
Smith, Daniel Gustafsson, Julien Rouhaud and me.

Discussion: https://postgr.es/m/CALDaNm33FFSS5tVyvmkoK2cCMuDVxcui=gFrjti9ROfynqSAGA@mail.gmail.com
2021-07-15 08:49:45 +01:00
Michael Paquier dc2db1eac3 Remove unnecessary assertion in postmaster.c
A code path asserted that the archiver was dead, but a preceding check
made that state impossible.

Author: Bharath Rupireddy
Discussion: https://postgr.es/m/CALj2ACW=CYE1ars+2XyPTEPq0wQvru4c0dPZ=Nrn3EqNBkksvQ@mail.gmail.com
Backpatch-through: 14
2021-07-15 15:00:45 +09:00
Peter Eisentraut 9aa8268faa Fix some nonstandard C code indentation in grammar file 2021-07-15 00:11:00 +02:00
Tom Lane be850f1822 Copy a Param's location field when replacing it with a Const.
This allows Param substitution to produce just the same result
as writing a constant value literally would have done.  While
it hardly matters so far as the current core code is concerned,
extensions might take more interest in node location fields.

Julien Rouhaud

Discussion: https://postgr.es/m/20170311220932.GJ15188@nol.local
2021-07-14 14:15:12 -04:00
John Naylor c203dcddf9 Remove unused function parameter in get_qual_from_partbound
Commit 0563a3a8b changed how partition constraints were generated such
that this function no longer computes the mapping of parent attnos to
child attnos.

This is an external function that extensions could use, so this is
potentially a breaking change. No external callers are known, however,
and this will make it simpler to write such callers in the future.

Author: Hou Zhijie
Reviewed-by: David Rowley, Michael Paquier, Soumyadeep Chakraborty
Discussion: https://www.postgresql.org/message-id/flat/OS0PR01MB5716A75A45BE46101A1B489894379@OS0PR01MB5716.jpnprd01.prod.outlook.com
2021-07-14 09:52:04 -04:00
Peter Eisentraut 55b2a23407 Fix lack of message pluralization 2021-07-14 09:15:14 +02:00
Amit Kapila a8fd13cab0 Add support for prepared transactions to built-in logical replication.
To add support for streaming transactions at prepare time into the
built-in logical replication, we need to do the following things:

* Modify the output plugin (pgoutput) to implement the new two-phase API
callbacks, by leveraging the extended replication protocol.

* Modify the replication apply worker, to properly handle two-phase
transactions by replaying them on prepare.

* Add a new SUBSCRIPTION option "two_phase" to allow users to enable
two-phase transactions (see the sketch below).  two_phase is enabled only
once the initial data sync is over.
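
A sketch of the new option (connection string and names are hypothetical):

  CREATE SUBSCRIPTION mysub
    CONNECTION 'host=publisher dbname=postgres'
    PUBLICATION mypub
    WITH (two_phase = true);
  -- transactions prepared on the publisher are then decoded and applied
  -- on the subscriber at PREPARE TRANSACTION time, not only at COMMIT
  -- PREPARED time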

However, we must explicitly disable replication of two-phase transactions
during replication slot creation, even if the plugin supports it.  We
don't need to replicate the changes accumulated during this phase, and
moreover, we don't have a replication connection open, so we don't know
where to send the data anyway.

The streaming option is not yet allowed together with this new two_phase
option; that can be done in a separate patch.

Toggling the two_phase option of a subscription is not allowed because it
can lead to an inconsistent replica.  For the same reason, refreshing the
publication is not allowed once two_phase is enabled for a subscription,
unless the copy_data option is false.

Author: Peter Smith, Ajin Cherian and Amit Kapila based on previous work by Nikhil Sontakke and Stas Kelvich
Reviewed-by: Amit Kapila, Sawada Masahiko, Vignesh C, Dilip Kumar, Takamichi Osumi, Greg Nancarrow
Tested-By: Haiying Tang
Discussion: https://postgr.es/m/02DA5F5E-CECE-4D9C-8B4B-418077E2C010@postgrespro.ru
Discussion: https://postgr.es/m/CAA4eK1+opiV4aFTmWWUF9h_32=HfPOW9vZASHarT0UA5oBrtGw@mail.gmail.com
2021-07-14 07:33:50 +05:30
David Rowley 83f4fcc655 Change the name of the Result Cache node to Memoize
"Result Cache" was never a great name for this node, but nobody managed
to come up with another name that anyone liked enough.  That was until
David Johnston mentioned "Node Memoization", which Tom Lane revised to
just "Memoize".  People seem to like "Memoize", so let's do the rename.

Reviewed-by: Justin Pryzby
Discussion: https://postgr.es/m/20210708165145.GG1176@momjian.us
Backpatch-through: 14, where Result Cache was introduced
2021-07-14 12:43:58 +12:00
Tom Lane d68a003912 Rename debug_invalidate_system_caches_always to debug_discard_caches.
The name introduced by commit 4656e3d66 was agreed to be unreasonably
long.  To match this change, rename initdb's recently-added
--clobber-cache option to --discard-caches.

Discussion: https://postgr.es/m/1374320.1625430433@sss.pgh.pa.us
2021-07-13 15:01:01 -04:00
David Rowley e0271d5f1e Remove useless range checks on INT8 sequences
There's no point in checking whether an INT8 sequence has a seqmin or
seqmax value outside the range of the minimum and maximum values of an
int64 type.  Both use the same underlying type, so an INT8 sequence
certainly cannot be outside the minimum and maximum values supported by
int64.

This code was fairly harmless and most compilers would likely have
optimized it out anyway; nevertheless, let's remove it, replacing it with
a small comment to mention why the check is not needed.

Author: Greg Nancarrow, with the comment revised by David Rowley
Discussion: https://postgr.es/m/CAJcOf-c9KBUZ8ow_6e%3DWSfbbEyTKfqV%3DVwoFuODQVYMySHtusw%40mail.gmail.com
2021-07-13 13:56:59 +12:00
David Rowley 5bd38d2f28 Robustify tuplesort's free_sort_tuple function
41469253e went to the trouble of removing a theoretical bug from
free_sort_tuple by checking if the tuple was NULL before freeing it. Let's
make this a little more robust by also setting the tuple to NULL so that
should we be called again we won't end up doing a pfree on the already
pfree'd tuple. Per advice from Tom Lane.

Discussion: https://postgr.es/m/3188192.1626136953@sss.pgh.pa.us
Backpatch-through: 9.6, same as 41469253e
2021-07-13 13:27:05 +12:00
David Rowley 41469253e9 Fix theoretical bug in tuplesort
This fixes a theoretical bug in tuplesort.c: if a bounded sort was used
in combination with a byval Datum sort (tuplesort_begin_datum), then when
switching the sort to a bounded heap in make_bounded_heap() we'd call
free_sort_tuple().  The problem was that when sorting Datums of a byval
type, the tuple is NULL, and free_sort_tuple() would free its memory
regardless of that.  This would result in a crash.

Here we fix that simply by adding a check to see if the tuple is NULL
before trying to disassociate and free any memory belonging to it.

The reason this bug is only theoretical is that nowhere in the current
code base do we do tuplesort_set_bound() when performing a Datum sort.
However, let's backpatch a fix for this, since if any extension uses the
code in this way it's likely to cause problems.

Author: Ronan Dunklau
Discussion: https://postgr.es/m/CAApHDvpdoqNC5FjDb3KUTSMs5dg6f+XxH4Bg_dVcLi8UYAG3EQ@mail.gmail.com
Backpatch-through: 9.6, oldest supported version
2021-07-13 12:40:16 +12:00
Tom Lane f10f0ae420 Replace RelationOpenSmgr() with RelationGetSmgr().
The idea behind this patch is to design out bugs like the one fixed
by commit 9d523119f.  Previously, once one did RelationOpenSmgr(rel),
it was considered okay to access rel->rd_smgr directly for some
not-very-clear interval.  But since that pointer will be cleared by
relcache flushes, we had bugs arising from overreliance on a previous
RelationOpenSmgr call still being effective.

Now, very little code except that in rel.h and relcache.c should ever
touch the rd_smgr field directly.  The normal coding rule is to use
RelationGetSmgr(rel) and not expect the result to be valid for longer
than one smgr function call.  There are a couple of places where using
the function every single time seemed like overkill, but they are now
annotated with large warning comments.

Amul Sul, after an idea of mine.

Discussion: https://postgr.es/m/CANiYTQsU7yMFpQYnv=BrcRVqK_3U3mtAzAsJCaqtzsDHfsUbdQ@mail.gmail.com
2021-07-12 17:01:36 -04:00
Heikki Linnakangas 4c64b51dc5 Remove dead assignment to local variable.
This should have been removed in commit 7e30c186da, which split the loop
into two. Only the first loop uses the 'from' variable; updating it in
the second loop is bogus. It was never read after the first loop, so this
was harmless and surely optimized away by the compiler, but let's be tidy.

Backpatch to all supported versions.

Author: Ranier Vilela
Discussion: https://www.postgresql.org/message-id/CAEudQAoWq%2BAL3BnELHu7gms2GN07k-np6yLbukGaxJ1vY-zeiQ%40mail.gmail.com
2021-07-12 11:13:33 +03:00
Tom Lane 626731db26 Lock the extension during ALTER EXTENSION ADD/DROP.
Although we were careful to lock the object being added or dropped,
we failed to get any sort of lock on the extension itself.  This
allowed the ALTER to proceed in parallel with a DROP EXTENSION,
which is problematic for a couple of reasons.  If both commands
succeeded we'd be left with a dangling link in pg_depend, which
would cause problems later.  Also, if the ALTER failed for some
reason, it might try to print the extension's name, and that could
result in a crash or (in older branches) a silly error message
complaining about extension "(null)".

Per bug #17098 from Alexander Lakhin.  Back-patch to all
supported branches.

Discussion: https://postgr.es/m/17098-b960f3616c861f83@postgresql.org
2021-07-11 12:54:24 -04:00
Jeff Davis dd0e37cc15 Fix assign_record_type_typmod().
If an error occurred in the wrong place, it was possible to leave an
uninitialized entry in the hash table, leading to a crash.  Fixed.

Also, be more careful about the order of operations so that an
allocation error doesn't leak memory in CacheMemoryContext or
unnecessarily advance NextRecordTypmod.

Backpatch through version 11. Earlier versions (prior to 35ea75632a)
do not exhibit the problem, because an uninitialized hash entry
contains a valid empty list.

Author: Sait Talha Nisanci <Sait.Nisanci@microsoft.com>
Reviewed-by: Andres Freund
Discussion: https://postgr.es/m/HE1PR8303MB009069D476225B9A9E194B8891779@HE1PR8303MB0090.EURPRD83.prod.outlook.com
Backpatch-through: 11
2021-07-10 10:26:38 -07:00
Michael Paquier 44bd0126c7 Add more sanity checks in SASL exchanges
The following checks are added, to make the SASL infrastructure more
aware of defects when implementing new mechanisms:
- Ensure that a mechanism generates no output when an exchange fails in
the backend, failing if there is a message waiting to be sent.
- Handle zero-length messages in the frontend.  The backend handles that
already, and SCRAM would complain if sent empty messages as these are not
authorized for this mechanism, but other mechanisms may want this
capability (the SASL specification allows it).
- Make sure that a mechanism generates a message in the middle of the
exchange in the frontend.

SCRAM, as implemented, already respects all these requirements, and the
recent refactoring of SASL done in 9fd8557 helps document that more
cleanly.

Analyzed-by: Jacob Champion
Author: Michael Paquier
Reviewed-by: Jacob Champion
Discussion: https://postgr.es/m/3d2a6f5d50e741117d6baf83eb67ebf1a8a35a11.camel@vmware.com
2021-07-10 21:45:28 +09:00
Dean Rasheed e7fc488ad6 Fix numeric_mul() overflow due to too many digits after decimal point.
This fixes an overflow error that occurred when using the numeric *
operator if the result had more than 16383 digits after the decimal
point; the result is now rounded instead.  Overflow errors should only
occur if the result has too many digits *before* the decimal point.
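
For example (illustrative):

  -- the exact product has 20000 digits after the decimal point; this used
  -- to fail with an overflow error and is now rounded to the maximum
  -- supported scale instead:
  SELECT 1e-10000 * 1e-10000;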

Discussion: https://postgr.es/m/CAEZATCUmeFWCrq2dNzZpRj5+6LfN85jYiDoqm+ucSXhb9U2TbA@mail.gmail.com
2021-07-10 12:42:59 +01:00
Jeff Davis 8e7811e952 Eliminate replication protocol error related to IDENTIFY_SYSTEM.
The requirement that IDENTIFY_SYSTEM be run before START_REPLICATION
was both undocumented and unnecessary. Remove the error and ensure
that ThisTimeLineID is initialized in START_REPLICATION.
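
For example, over a streaming-replication connection (e.g. psql with
"replication=true" in the connection string; the start LSN shown is
hypothetical):

  -- previously this could fail unless IDENTIFY_SYSTEM had been run first:
  START_REPLICATION PHYSICAL 0/1000000;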

Elect not to backport because this requirement was expected behavior
(even if inconsistently enforced), and is not likely to cause any
major problem.

Author: Jeff Davis
Reviewed-by: Andres Freund
Discussion: https://postgr.es/m/de4bbf05b7cd94227841c433ea6ff71d2130c713.camel%40j-davis.com
2021-07-09 11:37:45 -07:00
Tom Lane d23ac62afa Avoid creating a RESULT RTE that's marked LATERAL.
Commit 7266d0997 added code to pull up simple constant function
results, converting the RTE_FUNCTION RTE to a dummy RTE_RESULT
RTE since it no longer need be scanned.  But I forgot to clear
the LATERAL flag if the RTE has it set.  If the function reduced
to a constant, it surely contains no lateral references so this
simplification is logically OK.  It's needed because various other
places will Assert that RESULT RTEs aren't LATERAL.

Per bug #17097 from Yaoguang Chen.  Back-patch to v13 where the
faulty code came in.

Discussion: https://postgr.es/m/17097-3372ef9f798fc94f@postgresql.org
2021-07-09 13:38:24 -04:00
Tom Lane a9da1934e9 Reject cases where a query in WITH rewrites to just NOTIFY.
Since the executor can't cope with a utility statement appearing
as a node of a plan tree, we can't support cases where a rewrite
rule inserts a NOTIFY into an INSERT/UPDATE/DELETE command appearing
in a WITH clause of a larger query.  (One can imagine ways around
that, but it'd be a new feature not a bug fix, and so far there's
been no demand for it.)  RewriteQuery checked for this, but it
missed the case where the DML command rewrites to *only* a NOTIFY.
That'd lead to crashes later on in planning.  Add the missed check,
and improve the level of testing of this area.
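
A sketch of the now-rejected case (names are hypothetical; the error text
is illustrative):

  CREATE TABLE t (a int);
  CREATE RULE r AS ON INSERT TO t DO INSTEAD NOTIFY t_changed;
  -- the INSERT in WITH now rewrites to just a NOTIFY, which is rejected
  -- cleanly instead of crashing later in planning:
  WITH w AS (INSERT INTO t VALUES (1)) SELECT 1;
  -- ERROR:  DO INSTEAD NOTIFY rules are not supported for data-modifying
  -- statements in WITH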

Per bug #17094 from Yaoguang Chen.  It's been busted since WITH
was introduced, so back-patch to all supported branches.

Discussion: https://postgr.es/m/17094-bf15dff55eaf2e28@postgresql.org
2021-07-09 11:02:26 -04:00
David Rowley ca2e4472ba Teach pg_size_pretty and pg_size_bytes about petabytes
There was talk about adding units all the way up to yottabytes, but it
seems quite far-fetched that anyone would need those.  Since such large
units are not exactly commonplace, it seems unlikely that having
pg_size_pretty output any unit larger than petabytes would actually be
helpful to anyone.

Since petabytes are on the horizon, let's just add those only.  Maybe one
day we'll get to add additional units, but it will likely be a while
before we'll need to think beyond petabytes with regard to the size of a
database.
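
For example (outputs shown as illustrative comments):

  SELECT pg_size_bytes('1 PB');
  --  1125899906842624
  SELECT pg_size_pretty(pg_size_bytes('100 PB'));
  --  100 PB   (previously rendered as 102400 TB)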

Author: David Christensen
Discussion: https://postgr.es/m/CAOxo6XKmHc_WZip-x5QwaOqFEiCq_SVD0B7sbTZQk+qqcn2qaw@mail.gmail.com
2021-07-09 18:56:00 +12:00
Michael Paquier 0f80b47d24 Add forgotten LSN_FORMAT_ARGS() in xlogreader.c
These should have been part of 4035cd5, which introduced LZ4 support for
wal_compression.
2021-07-09 15:27:36 +09:00
Thomas Munro 2f78338064 Remove more obsolete comments about semaphores.
Commit 6753333f stopped using semaphores as the sleep/wake mechanism for
heavyweight locks, but some obsolete references to that scheme remained
in comments.  As with similar commit 25b93a29, back-patch all the way.

Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Discussion: https://postgr.es/m/CA%2BhUKGLafjB1uzXcy%3D%3D2L3cy7rjHkqOVn7qRYGBjk%3D%3DtMJE7Yg%40mail.gmail.com
2021-07-09 18:04:24 +12:00
David Rowley 56ff8b2991 Use a lookup table for units in pg_size_pretty and pg_size_bytes
We've grown two versions of pg_size_pretty over the years, one for BIGINT
and one for NUMERIC.  Both should produce the same output, but keeping
them in sync is harder than necessary because the two functions do not
share a source of truth about which units to use and when to transition
to the next largest unit.

Here we add a static array which defines the units that we recognize and
have both pg_size_pretty and pg_size_pretty_numeric use it.  This will
make adding any units in the future a very simple task.

The table contains all information required to allow us to also modify
pg_size_bytes to use the lookup table, so adjust that too.

There are no behavioral changes here.

Author: David Rowley
Reviewed-by: Dean Rasheed, Tom Lane, David Christensen
Discussion: https://postgr.es/m/CAApHDvru1F7qsEVL-iOHeezJ+5WVxXnyD_Jo9nht+Eh85ekK-Q@mail.gmail.com
2021-07-09 16:29:02 +12:00
David Rowley 55fe609387 Fix incorrect return value in pg_size_pretty(bigint)
Due to how pg_size_pretty(bigint) was implemented, it was possible for a
negative number of bytes to produce output that did not mirror the output
for the equivalent positive number of bytes.  This was due to two
separate issues.

1. The function used bit shifting to convert the number of bytes into
larger units.  The rounding performed by bit shifting is not the same as
dividing.  For example -3 >> 1 = -2, but -3 / 2 = -1.  These two
operations are only equivalent with positive numbers.

2. The half_rounded() macro rounded towards positive infinity.  This meant
that negative numbers rounded towards zero and positive numbers rounded
away from zero.

Here we fix #1 by dividing the values instead of bit shifting.  We fix #2
by adjusting the half_rounded macro always to round away from zero.
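
For example (outputs shown as illustrative comments):

  SELECT pg_size_pretty(5368709120::bigint), pg_size_pretty(-5368709120::bigint);
  --  5120 MB | -5120 MB
  -- negative inputs now mirror their positive counterparts exactly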

Additionally, adjust the pg_size_pretty(numeric) function to be more
explicit that it's using division rather than bit shifting.  A casual
observer might have believed bit shifting was used due to a static
function being named numeric_shift_right.  However, that function was
calculating the divisor from the number of bits and performed division.
Here we make that more clear.  This change is just cosmetic and does not
affect the return value of the numeric version of the function.

Here we also add a set of regression tests for both versions of
pg_size_pretty() which test the values directly before and after the
points where the functions switch to the next unit.

This bug was introduced in 8a1fab36a. Prior to that negative values were
always displayed in bytes.

Author: Dean Rasheed, David Rowley
Discussion: https://postgr.es/m/CAEZATCXnNW4HsmZnxhfezR5FuiGgp+mkY4AzcL5eRGO4fuadWg@mail.gmail.com
Backpatch-through: 9.6, where the bug was introduced.
2021-07-09 14:04:30 +12:00
Daniel Gustafsson 387925893e Fix typos in pgstat.c, reorderbuffer.c and pathnodes.h
Reviewed-by: Thomas Munro <thomas.munro@gmail.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/50250765-5B87-4AD7-9770-7FCED42A6175@yesql.se
2021-07-08 12:45:09 +02:00
Peter Eisentraut 2ed532ee8c Improve error messages about mismatching relkind
Most error messages about a relkind that was not supported or
appropriate for the command were of the pattern

    "relation \"%s\" is not a table, foreign table, or materialized view"

This style can become verbose and tedious to maintain.  Moreover, it's
not very helpful: If I'm trying to create a comment on a TOAST table,
which is not supported, then the information that I could have created
a comment on a materialized view is pointless.

Instead, write the primary error message more briefly, saying directly
that what was attempted is not possible.  Then, in the detail message,
explain that the operation is not supported for the object's relkind.  To
simplify that, add a new function errdetail_relkind_not_supported() that
does this.

In passing, make use of RELKIND_HAS_STORAGE() where appropriate,
instead of listing out the relkinds individually.

Reviewed-by: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Alvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://www.postgresql.org/message-id/flat/dc35a398-37d0-75ce-07ea-1dd71d98f8ec@2ndquadrant.com
2021-07-08 09:44:51 +02:00
David Rowley 29f45e299e Use a hash table to speed up NOT IN(values)
Similar to 50e17ad28, which allowed hash tables to be used for IN clauses
with a set of constants, here we add the same feature for NOT IN clauses.

NOT IN evaluates the same as: WHERE a <> v1 AND a <> v2 AND a <> v3.
Obviously, if we're using a hash table we must be exactly equivalent to
that and return the same result, taking into account that either side of
the condition could contain a NULL.  This requires a little bit of
special handling to make it work with the hash table version.
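
For example (table and values are hypothetical):

  -- with a sufficiently large list of constants, the executor can now
  -- probe a hash table of the constants instead of comparing the value
  -- against each one per row:
  SELECT * FROM orders
  WHERE status_code NOT IN (10, 11, 12, 13, 14, 15, 16, 17, 18, 19);
  -- NULL handling is unchanged: a NULL status_code, or a NULL in the
  -- list, still never makes the condition true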

When processing NOT IN, the ScalarArrayOpExpr's operator will be the <>
operator.  To be able to build and lookup a hash table we must use the
<>'s negator operator.  The planner checks if that exists and is hashable
and sets the relevant fields in ScalarArrayOpExpr to instruct the executor
to use hashing.

Author: David Rowley, James Coleman
Reviewed-by: James Coleman, Zhihong Yu
Discussion: https://postgr.es/m/CAApHDvoF1mum_FRk6D621edcB6KSHBi2+GAgWmioj5AhOu2vwQ@mail.gmail.com
2021-07-07 16:29:17 +12:00
Michael Paquier 9fd85570d1 Refactor SASL code with a generic interface for its mechanisms
The SCRAM and SASL code have been tightly linked together ever since
SCRAM was added to the core code, making it hard to add new SASL
mechanisms, even though these are by design different facilities, with
SCRAM being one option within SASL.  This refactors the code related to
both so that the backend and the frontend use a set of callbacks for SASL
mechanisms, documenting along the way what is expected of anybody adding
a new SASL mechanism.

The separation between the two layers is clean, using two sets of
callbacks for the frontend and the backend to mark the boundary between
the two facilities.  The shape of the callbacks is directly inspired by
the routines used by SCRAM, so the code change is straightforward, and
the SASL code is moved into its own set of files.  These will likely
change depending on how and whether new SASL mechanisms get added in the
future.

Author: Jacob Champion
Reviewed-by: Michael Paquier
Discussion: https://postgr.es/m/3d2a6f5d50e741117d6baf83eb67ebf1a8a35a11.camel@vmware.com
2021-07-07 10:55:15 +09:00
Tom Lane 955b3e0f92 Allow CustomScan providers to say whether they support projections.
Previously, all CustomScan providers had to support projections,
but there may be cases where this is inconvenient.  Add a flag
bit to say if it's supported.

Important item for the release notes: this is non-backwards-compatible
since the default is now to assume that CustomScan providers can't
project, instead of assuming that they can.  It's fail-soft, but could
result in visible performance penalties due to adding unnecessary
Result nodes.

Sven Klemm, reviewed by Aleksander Alekseev; some cosmetic fiddling
by me.

Discussion: https://postgr.es/m/CAMCrgp1kyakOz6c8aKhNDJXjhQ1dEjEnp+6KNT3KxPrjNtsrDg@mail.gmail.com
2021-07-06 18:10:20 -04:00
Tom Lane 64919aaab4 Reduce the cost of planning deeply-nested views.
Joel Jacobson reported that deep nesting of trivial (flattenable)
views results in O(N^3) growth of planning time for N-deep nesting.
It turns out that a large chunk of this cost comes from copying around
the "subquery" sub-tree of each view's RTE_SUBQUERY RTE.  But once we
have successfully flattened the subquery, we don't need that anymore,
because the planner isn't going to do anything else interesting with
that RTE.  We already zap the subquery pointer during setrefs.c (cf.
add_rte_to_flat_rtable), but it's useless baggage earlier than that
too.  Clearing the pointer as soon as pull_up_simple_subquery is done
with the RTE reduces the cost from O(N^3) to O(N^2); which is still
not great, but it's quite a lot better.  Further improvements will
require rethinking of the RTE data structure, which is being considered
in another thread.

Patch by me; thanks to Dean Rasheed for review.

Discussion: https://postgr.es/m/797aff54-b49b-4914-9ff9-aa42564a4d7d@www.fastmail.com
2021-07-06 14:23:09 -04:00
Amit Kapila 8aafb02616 Refactor function parse_subscription_options.
Instead of using multiple parameters in the parse_subscription_options
function signature, use the struct SubOpts, which encapsulates all the
subscription options and their values.  It will be useful for future work
where we need to add other options to subscriptions.  Also, use bitmaps
to pass the supported options and to retrieve the specified options, much
like the way it is done in commit a3dc926009.

Author: Bharath Rupireddy
Reviewed-By: Peter Smith, Amit Kapila, Alvaro Herrera
Discussion: https://postgr.es/m/CALj2ACXtoQczfNsDQWobypVvHbX2DtgEHn8DawS0eGFwuo72kw@mail.gmail.com
2021-07-06 07:46:50 +05:30
David Rowley 9ee91cc583 Fix typo in comment
Author: James Coleman
Discussion: https://postgr.es/m/CAAaqYe8f8ENA0i1PdBtUNWDd2sxHSMgscNYbjhaXMuAdfBrZcg@mail.gmail.com
2021-07-06 12:38:50 +12:00
David Rowley 53d86957e9 Reduce the number of pallocs when building partition bounds
In each of the create_*_bound() functions for LIST, RANGE and HASH
partitioning, there were a large number of palloc calls which could be
reduced down to a much smaller number.

In each of these functions, an array was built so that we could qsort it
before making the PartitionBoundInfo. For LIST and HASH partitioning, an
array of pointers was allocated, and then the object for each element was
allocated separately.  Since the number of items in each dimension is
known beforehand, we can just allocate a single chunk of memory for this.

Similarly, with all partition strategies, we're able to reduce the number
of allocations to build the ->datums field.  This is an array of Datum
pointers, but there's no need for the Datums that each element points to
to be singly allocated.  One big chunk will do.  For RANGE partitioning,
the PartitionBoundInfo->kind field can get the same treatment.

We can apply the same optimizations to partition_bounds_copy().  Doing
this might have a small effect on cache performance when searching for the
correct partition during partition pruning or DML on a partitioned table.
However, that's likely to be small and this is mostly about reducing
palloc overhead.

Author: Nitin Jadhav, Justin Pryzby, David Rowley
Reviewed-by: Justin Pryzby, Zhihong Yu
Discussion: https://postgr.es/m/flat/CAMm1aWYFTqEio3bURzZh47jveiHRwgQTiSDvBORczNEz2duZ1Q@mail.gmail.com
2021-07-06 12:24:43 +12:00
Michael Paquier 2aca19f298 Use WaitLatch() instead of pg_usleep() at the end of backups
This concerns pg_stop_backup() and BASE_BACKUP, when waiting for the
WAL segments required for a backup to be archived.  This simplifies a
bit the handling of the wait event used in this code path.

Author: Bharath Rupireddy
Reviewed-by: Michael Paquier, Stephen Frost
Discussion: https://postgr.es/m/CALj2ACU4AdPCq6NLfcA-ZGwX7pPCK5FgEj-CAU0xCKzkASSy_A@mail.gmail.com
2021-07-06 08:10:59 +09:00
Tom Lane 9753324b7d Reduce overhead of cache-clobber testing in LookupOpclassInfo().
Commit 03ffc4d6d added logic to bypass all caching behavior in
LookupOpclassInfo when CLOBBER_CACHE_ALWAYS is enabled.  It doesn't
look like I stopped to think much about what that would cost, but
recent investigation shows that the cost is enormous: it roughly
doubles the time needed for cache-clobber test runs.

There does seem to be value in this behavior when trying to test
the opclass-cache loading logic itself, but for other purposes the
cost is excessive.  Hence, let's back off to doing this only when
debug_invalidate_system_caches_always is at least 3; or in older
branches, when CLOBBER_CACHE_RECURSIVELY is defined.

While here, clean up some other minor issues in LookupOpclassInfo.
Re-order the code so we aren't left with broken cache entries (leading
to later core dumps) in the unlikely case that we suffer OOM while
trying to allocate space for a new entry.  (That seems to be my
oversight in 03ffc4d6d.)  Also, in >= v13, stop allocating one array
entry too many.  That's evidently left over from sloppy reversion in
851b14b0c.

Back-patch to all supported branches, mainly to reduce the runtime
of cache-clobbering buildfarm animals.

Discussion: https://postgr.es/m/1370856.1625428625@sss.pgh.pa.us
2021-07-05 16:51:57 -04:00
Dean Rasheed f025f2390e Prevent numeric overflows in parallel numeric aggregates.
Formerly various numeric aggregate functions supported parallel
aggregation by having each worker convert partial aggregate values to
Numeric and use numeric_send() as part of serializing their state.
That's problematic, since the range of Numeric is smaller than that of
NumericVar, so it's possible for it to overflow (on either side of the
decimal point) in cases that would succeed in non-parallel mode.

Fix by serializing NumericVars instead, to avoid the overflow risk and
ensure that parallel and non-parallel modes work the same.

A side benefit is that this improves the efficiency of the
serialization/deserialization code, which can make a noticeable
difference to performance with large numbers of parallel workers.

No back-patch due to risk from changing the binary format of the
aggregate serialization states, as well as lack of prior field
complaints and low probability of such overflows in practice.

Patch by me. Thanks to David Rowley for review and performance
testing, and Ranier Vilela for an additional suggestion.

Discussion: https://postgr.es/m/CAEZATCUmeFWCrq2dNzZpRj5+6LfN85jYiDoqm+ucSXhb9U2TbA@mail.gmail.com
2021-07-05 10:16:42 +01:00
David Rowley 63b1af9437 Cleanup some aggregate code in the executor
Here we alter the code that calls build_pertrans_for_aggref() so that the
function no longer needs to special-case whether it's dealing with an
aggtransfn or an aggcombinefn.  This allows us to reuse the
build_aggregate_transfn_expr() function and just get rid of the
build_aggregate_combinefn_expr() completely.

All of the special case code that was in build_pertrans_for_aggref() has
been moved up to the calling functions.

This saves about a dozen lines of code in nodeAgg.c and a few dozen more
in parse_agg.c.

Also, rename a few variables in nodeAgg.c to try to make it clearer that
we're working with either an aggtransfn or an aggcombinefn.  Some of the
old names would have you believe that we were always working with an
aggtransfn.

Discussion: https://postgr.es/m/CAApHDvptMQ9FmF0D67zC_w88yVnoNVR2+kkOQGUrCmdxWxLULQ@mail.gmail.com
2021-07-04 18:47:31 +12:00
Tom Lane 50371df266 Don't try to print data type names in slot_store_error_callback().
The existing code tried to do syscache lookups in an already-failed
transaction, which is problematic to say the least.  After some
consideration of alternatives, the best fix seems to be to just drop
type names from the error message altogether.  The table and column
names seem like sufficient localization.  If the user is unsure what
types are involved, she can check the local and remote table
definitions.

Having done that, we can also discard the LogicalRepTypMap hash
table, which had no other use.  Arguably, LOGICAL_REP_MSG_TYPE
replication messages are now obsolete as well; but we should
probably keep them in case some other use emerges.  (The complexity
of removing something from the replication protocol would likely
outweigh any savings anyhow.)

Masahiko Sawada and Bharath Rupireddy, per complaint from Andres
Freund.  Back-patch to v10 where this code originated.

Discussion: https://postgr.es/m/20210106020229.ne5xnuu6wlondjpe@alap3.anarazel.de
2021-07-02 16:04:54 -04:00
Peter Eisentraut 6bd3ec62d9 Use InvalidBucket instead of -1 where appropriate
Reported-by: Ranier Vilela <ranier.vf@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/CAEudQAp%3DZwKjrP4L%2BCzqV7SmWiaQidPPRqj4tqdjDG4KBx5yrg%40mail.gmail.com
2021-07-02 11:59:55 +02:00