Commit Graph

Alvaro Herrera 72d0642172
Invalidate partitions of table being attached/detached
Without that, any direct inserts/updates of those partitions would
fail to enforce the correct constraint, that is, one that considers
the new partition constraint of their parent table.

Backpatch to 10.

Reported by: Hou Zhijie <houzj.fnst@fujitsu.com>
Author: Amit Langote <amitlangote09@gmail.com>
Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
Reviewed-by: Nitin Jadhav <nitinjadhavpostgres@gmail.com>
Reviewed-by: Pavel Borisov <pashkin.elfe@gmail.com>

Discussion: https://postgr.es/m/OS3PR01MB5718DA1C4609A25186D1FBF194089%40OS3PR01MB5718.jpnprd01.prod.outlook.com
2021-10-18 19:08:25 -03:00
Tom Lane fd059ac2e4 Fix planner error with pulling up subquery expressions into function RTEs.
If a function-in-FROM laterally references the output of some sub-SELECT
earlier in the FROM clause, and we are able to flatten that sub-SELECT
into the outer query, the expression(s) copied into the function RTE
missed being processed by eval_const_expressions.  This'd lead to trouble
and probable crashes at execution if such expressions contained
named-argument function call syntax or functions with defaulted arguments.
The bug is masked if the query contains any explicit JOIN syntax, which
may help explain why we'd not noticed.
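
For illustration (a sketch, not from the commit; f is a hypothetical
function taking a named, defaulted argument):

    SELECT *
    FROM (SELECT 1 AS x) sub,
         LATERAL f(val => sub.x);  -- after sub is flattened, the expression
                                   -- copied into the function RTE formerly
                                   -- missed eval_const_expressions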

Per bug #17227 from Bernd Dorn.  This is an oversight in commit 7266d0997,
so back-patch to v13 where that came in.

Discussion: https://postgr.es/m/17227-5a28ed1512189fa4@postgresql.org
2021-10-14 12:43:43 -04:00
Dean Rasheed 8e26b868d5 Fix corner-case loss of precision in numeric_power().
This fixes a loss of precision that occurs when the first input is
very close to 1, so that its logarithm is very small.

Formerly, during the initial low-precision calculation to estimate the
result weight, the logarithm was computed to a local rscale that was
capped to NUMERIC_MAX_DISPLAY_SCALE (1000). However, the base may be
as close as 1e-16383 to 1, hence its logarithm may be as small as
1e-16383, and so the local rscale needs to be allowed to exceed 16383,
otherwise all precision is lost, leading to a poor choice of rscale
for the full-precision calculation.

Fix this by removing the cap on the local rscale during the initial
low-precision calculation, as we already do in the full-precision
calculation. This doesn't change the fact that the initial calculation
is a low-precision approximation, computing the logarithm to around 8
significant digits, which is very fast, especially when the base is
very close to 1.
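
An illustrative case (a sketch; the exact digits are immaterial):

    -- base differs from 1 by 1e-16, so its logarithm is ~1e-16; the
    -- initial estimate formerly lost all precision here:
    SELECT 1.0000000000000001 ^ 1e16;  -- result is approximately e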

Patch by me, reviewed by Alvaro Herrera.

Discussion: https://postgr.es/m/CAEZATCV-Ceu%2BHpRMf416yUe4KKFv%3DtdgXQAe5-7S9tD%3D5E-T1g%40mail.gmail.com
2021-10-06 13:19:25 +01:00
Tom Lane e0eba586b1 Fix checking of query type in plpgsql's RETURN QUERY command.
Prior to v14, we insisted that the query in RETURN QUERY be of a type
that returns tuples.  (For instance, INSERT RETURNING was allowed,
but not plain INSERT.)  That happened indirectly because we opened a
cursor for the query, so spi.c checked SPI_is_cursor_plan().  As a
consequence, the error message wasn't terribly on-point, but at least
it was there.

Commit 2f48ede08 lost this detail.  Instead, plain RETURN QUERY
insisted that the query be a SELECT (by checking for SPI_OK_SELECT)
while RETURN QUERY EXECUTE failed to check the query type at all.
Neither of these changes was intended.
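
A sketch of the now-rejected case (t is a hypothetical table):

    CREATE FUNCTION f() RETURNS SETOF int LANGUAGE plpgsql AS $$
    BEGIN
      RETURN QUERY INSERT INTO t VALUES (1);  -- no RETURNING, so the query
                                              -- returns no tuples: once
                                              -- again an error
    END $$;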

The only convenient place to check this in the EXECUTE case is inside
_SPI_execute_plan, because we haven't done parse analysis until then.
So we need to pass down a flag saying whether to enforce that the
query returns tuples.  Fortunately, we can squeeze another boolean
into struct SPIExecuteOptions without an ABI break, since there's
padding space there.  (It's unlikely that any extensions would
already be using this new struct, but preserving ABI in v14 seems
like a smart idea anyway.)

Within spi.c, it seemed like _SPI_execute_plan's parameter list
was already ridiculously long, and I didn't want to make it longer.
So I thought of passing SPIExecuteOptions down as-is, allowing that
parameter list to become much shorter.  This makes the patch a bit
more invasive than it might otherwise be, but it's all internal to
spi.c, so that seems fine.

Per report from Marc Bachmann.  Back-patch to v14 where the
faulty code came in.

Discussion: https://postgr.es/m/1F2F75F0-27DF-406F-848D-8B50C7EEF06A@gmail.com
2021-10-03 13:21:20 -04:00
Alvaro Herrera 20047609d3
Error out if SKIP LOCKED and WITH TIES are both specified
Both bugs #16676[1] and #17141[2] illustrate that the combination of
SKIP LOCKED and FETCH FIRST WITH TIES breaks expectations when it comes
to rows returned to other sessions accessing the same row.  Since this
situation is detectable from the syntax and hard to fix otherwise,
forbid it for now, with the potential to fix it in the future.
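
A sketch of the now-forbidden combination (jobs is a hypothetical
table):

    SELECT * FROM jobs ORDER BY id
    FETCH FIRST 2 ROWS WITH TIES
    FOR UPDATE SKIP LOCKED;  -- now raises an error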

[1] https://postgr.es/m/16676-fd62c3c835880da6@postgresql.org
[2] https://postgr.es/m/17141-913d78b9675aac8e@postgresql.org

Backpatch-through: 13, where WITH TIES was introduced
Author: David Christensen <david.christensen@crunchydata.com>
Discussion: https://postgr.es/m/CAOxo6XLPccCKru3xPMaYDpa+AXyPeWFs+SskrrL+HKwDjJnLhg@mail.gmail.com
2021-10-01 18:29:18 -03:00
Amit Kapila 9eff859326 Invalidate all partitions for a partitioned table in publication.
Updates/Deletes on a partition were allowed even without replica identity
after the parent table was added to a publication. This would later lead
to an error on subscribers. The reason was that we were not invalidating
the partition's relcache and the publication information for partitions
was not getting rebuilt.  Similarly, we were not invalidating the
partitions' relcache after dropping a partitioned table from a
publication, which would continue to prohibit Updates/Deletes on its
partitions without replica identity even when no publication remained.

Reported-by: Haiying Tang
Author: Hou Zhijie and Vignesh C
Reviewed-by: Vignesh C and Amit Kapila
Backpatch-through: 13
Discussion: https://postgr.es/m/OS0PR01MB6113D77F583C922F1CEAA1C3FBD29@OS0PR01MB6113.jpnprd01.prod.outlook.com
2021-09-22 08:13:37 +05:30
Tom Lane 4d5b4483db Fix pull_varnos to cope with translated PlaceHolderVars.
Commit 55dc86eca changed pull_varnos to use (if possible) the associated
ph_eval_at for a PlaceHolderVar.  I missed a fine point though: we might
be looking at a PHV in the quals or tlist of a child appendrel, in which
case we need to compute a ph_eval_at value that's been translated in the
same way that the PHV itself has been (cf. adjust_appendrel_attrs).
Fortunately, enough info is available in the PlaceHolderInfo to make
such translation possible without additional outside data, so we don't
need another round of uglification of planner APIs.  This is a little
bit complicated, but since it's a hard-to-hit corner case, I'm not much
worried about adding cycles here.

Per report from Jaime Casanova.  Back-patch to v12, like the previous
commit.

Discussion: https://postgr.es/m/20210915230959.GB17635@ahch-to
2021-09-17 15:41:16 -04:00
Tom Lane 388726753b Fix EXPLAIN to handle SEARCH BREADTH FIRST queries.
The rewriter transformation for SEARCH BREADTH FIRST produces a
FieldSelect on a Var of type RECORD, where the Var references the
recursive union's worktable output.  EXPLAIN VERBOSE failed to handle
this case, because it only expected such Vars to appear in CteScans
not WorkTableScans.  Fix that, and add some test cases exercising
EXPLAIN on SEARCH and CYCLE queries.
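
For instance, the new tests exercise queries of this shape (a sketch):

    EXPLAIN (VERBOSE, COSTS OFF)
    WITH RECURSIVE t(i) AS (
        SELECT 1
      UNION ALL
        SELECT i + 1 FROM t WHERE i < 3
    ) SEARCH BREADTH FIRST BY i SET ord
    SELECT * FROM t;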

In principle this oversight is an old bug, but it seems that the
case is unreachable without SEARCH BREADTH FIRST, because the
parser fails when attempting to create such a reference manually.
So for today I'll just patch HEAD/v14.  Someday we might find that
the code portion of this patch needs to be back-patched further.

Per report from Atsushi Torikoshi.

Discussion: https://postgr.es/m/5bafa66ad529e11860339565c9e7c166@oss.nttdata.com
2021-09-16 10:45:42 -04:00
Peter Eisentraut f46dc96fcc Message style improvements 2021-09-16 15:36:58 +02:00
Tom Lane 29aa0ce361 Fix planner error with multiple copies of an AlternativeSubPlan.
It's possible for us to copy an AlternativeSubPlan expression node
into multiple places, for example the scan quals of several
partition children.  Then it's possible that we choose a different
one of the alternatives as optimal in each place.  Commit 41efb8340
failed to consider this scenario, so its attempt to remove "unused"
subplans could remove subplans that were still used elsewhere.

Fix by delaying the removal logic until we've examined all the
AlternativeSubPlans in a given query level.  (This does assume that
AlternativeSubPlans couldn't get copied to other query levels, but
for the foreseeable future that's fine; cf qual_is_pushdown_safe.)

Per report from Rajkumar Raghuwanshi.  Back-patch to v14
where the faulty logic came in.

Discussion: https://postgr.es/m/CAKcux6==O3NNZC3bZ2prRYv3cjm3_Zw1GfzmOjEVqYN4jub2+Q@mail.gmail.com
2021-09-14 15:11:21 -04:00
Tom Lane d844cd75a6 Fix some anomalies with NO SCROLL cursors.
We have long forbidden fetching backwards from a NO SCROLL cursor,
but the prohibition didn't extend to cases in which we rewind the
query altogether and then re-fetch forwards.  I think the reason is
that this logic was mainly meant to protect plan nodes that can't
be run in the reverse direction.  However, re-reading the query output
is problematic if the query is volatile (which includes SELECT FOR
UPDATE, not just queries with volatile functions): the re-read can
produce different results, which confuses the cursor navigation logic
completely.  Another reason for disliking this approach is that some
code paths will either fetch backwards or rewind-and-fetch-forwards
depending on the distance to the target row; so that seemingly
identical use-cases may or may not draw the "cursor can only scan
forward" error.  Hence, let's clean things up by disallowing rewind
as well as fetch-backwards in a NO SCROLL cursor.
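
A sketch of navigation that now draws the error (in the back branches,
only once the cursor has been held over from a previous transaction):

    BEGIN;
    DECLARE c NO SCROLL CURSOR FOR SELECT * FROM generate_series(1, 10) g;
    FETCH 5 FROM c;
    FETCH ABSOLUTE 1 FROM c;  -- needs a rewind; now treated like an
                              -- explicit FETCH BACKWARD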

Ordinarily we'd only make such a definitional change in HEAD, but
there is a third reason to consider this change now.  Commit ba2c6d6ce
created some new user-visible anomalies for non-scrollable cursors
WITH HOLD, in that navigation in the cursor result got confused if the
cursor had been partially read before committing.  The only good way
to resolve those anomalies is to forbid rewinding such a cursor, which
allows removal of the incorrect cursor state manipulations that
ba2c6d6ce added to PersistHoldablePortal.

To minimize the behavioral change in the back branches (including
v14), refuse to rewind a NO SCROLL cursor only when it has a holdStore,
ie has been held over from a previous transaction due to WITH HOLD.
This should avoid breaking most applications that have been sloppy
about whether to declare cursors as scrollable.  We'll enforce the
prohibition across-the-board beginning in v15.

Back-patch to v11, as ba2c6d6ce was.

Discussion: https://postgr.es/m/3712911.1631207435@sss.pgh.pa.us
2021-09-10 13:18:32 -04:00
Tom Lane 03d01d746b Fix rewriter to set hasModifyingCTE correctly on rewritten queries.
If we copy data-modifying CTEs from the original query to a replacement
query (from a DO INSTEAD rule), we must set hasModifyingCTE properly
in the replacement query.  Failure to do this can cause various
unpleasantness, such as unsafe usage of parallel plans.  The code also
neglected to propagate hasRecursive, though that's only cosmetic at
the moment.

A difficulty arises if the rule action is an INSERT...SELECT.  We
attach the original query's RTEs and CTEs to the sub-SELECT Query, but
data-modifying CTEs are only allowed to appear in the topmost Query.
For the moment, throw an error in such cases.  It would probably be
possible to avoid this error by attaching the CTEs to the top INSERT
Query instead; but that would require a bunch of new code to adjust
ctelevelsup references.  Given the narrowness of the use-case, and
the need to back-patch this fix, it does not seem worth the trouble
for now.  We can revisit this if we get field complaints.
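
A sketch of the case that now draws an error (v, log, and q are
hypothetical):

    CREATE RULE r AS ON UPDATE TO v DO INSTEAD
        INSERT INTO log SELECT NEW.x;
    WITH d AS (DELETE FROM q RETURNING *)
    UPDATE v SET x = 1;  -- the data-modifying CTE would have to attach
                         -- to the sub-SELECT, so this now errors out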

Per report from Greg Nancarrow.  Back-patch to all supported branches.
(The test case added here does not fail before v10, but there are
plenty of places checking top-level hasModifyingCTE in 9.6, so I have
no doubt that this code change is necessary there too.)

Greg Nancarrow and Tom Lane

Discussion: https://postgr.es/m/CAJcOf-f68DT=26YAMz_i0+Au3TcLO5oiHY5=fL6Sfuits6r+_w@mail.gmail.com
Discussion: https://postgr.es/m/CAJcOf-fAdj=nDKMsRhQzndm-O13NY4dL6xGcEvdX5Xvbbi0V7g@mail.gmail.com
2021-09-08 12:05:43 -04:00
Peter Eisentraut 054adca641 Disable anonymous record hash support except in special cases
Commit 01e658fa74 added hash support for row types.  This also added
support for hashing anonymous record types, using the same approach
that the type cache uses for comparison support for record types: It
just reports that it works, but it might fail at run time if a
component type doesn't actually support the operation.  We get away
with that for comparison because most types support that.  But some
types don't support hashing, so the current state can result in
failures at run time where the planner chooses hashing over sorting,
whereas that previously worked if only sorting was an option.

We do, however, want the record hashing support for path tracking in
recursive unions, and the SEARCH and CYCLE clauses built on that.  In
that case, hashing is the only plan option.  To enable that, this
commit implements the following approach: The type cache does not
report that hashing is available for the record type.  This undoes
that part of 01e658fa74.  Instead, callers that require hashing no
matter what can override that result themselves.  This patch only
touches the callers to make the aforementioned recursive query cases
work, namely the parse analysis of unions, as well as the hash_array()
function.

Reported-by: Sait Talha Nisanci <sait.nisanci@microsoft.com>
Bug: #17158
Discussion: https://www.postgresql.org/message-id/flat/17158-8a2ba823982537a4%40postgresql.org
2021-09-08 09:55:18 +02:00
Amit Kapila 8db27fbc11 Invalidate relcache for publications defined for all tables.
Updates/Deletes on a relation were allowed even without replica identity
after we define the publication for all tables. This would later lead to
an error on subscribers. The reason was that for such publications we were
not invalidating the relcache and the publication information for
relations was not getting rebuilt.  Similarly, we were not invalidating
the relcache after dropping such publications, which would continue to
prohibit Updates/Deletes without replica identity even when no
publication remained.

Author: Vignesh C and Hou Zhijie
Reviewed-by: Hou Zhijie, Kyotaro Horiguchi, Amit Kapila
Backpatch-through: 10, where it was introduced
Discussion: https://postgr.es/m/CALDaNm0pF6zeWqCA8TCe2sDuwFAy8fCqba=nHampCKag-qLixg@mail.gmail.com
2021-09-08 12:08:29 +05:30
Tom Lane 08b96a2b52 Fix portability issue in tests from commit ce773f230.
Modern POSIX seems to require strtod() to accept "-NaN", but there's
nothing about NaN in SUSv2, and some of our oldest buildfarm members
don't like it.  Let's try writing it as -'NaN' instead; that seems
to produce the same result, at least on Intel hardware.

Per buildfarm.
2021-09-03 10:01:02 -04:00
Tom Lane 23c6bc581d Fix float4/float8 hash functions to produce uniform results for NaNs.
The IEEE 754 standard allows a wide variety of bit patterns for NaNs,
of which at least two ("NaN" and "-NaN") are pretty easy to produce
from SQL on most machines.  This is problematic because our btree
comparison functions deem all NaNs to be equal, but our float hash
functions know nothing about NaNs and will happily produce varying
hash codes for them.  That causes unexpected results from queries
that hash a column containing different NaN values.  It could also
produce unexpected lookup failures when using a hash index on a
float column, i.e. "WHERE x = 'NaN'" will not find all the rows
it should.

To fix, special-case NaN in the float hash functions, not too much
unlike the existing special case that forces zero and minus zero
to hash the same.  I arranged for the most vanilla sort of NaN
(that coming from the C99 NAN constant) to still have the same
hash code as before, to reduce the risk to existing hash indexes.
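
For example (a sketch; -('NaN'::float8) is used for portability, cf.
commit 08b96a2b52 above):

    SELECT hashfloat8('NaN'::float8) = hashfloat8(-('NaN'::float8));  -- now true
    SELECT 'NaN'::float8 = -('NaN'::float8);                          -- already true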

I dithered about whether to back-patch this into stable branches,
but ultimately decided to do so.  It's a clear improvement for
queries that hash internally.  If there is anybody who has -NaN
in a hash index, they'd be well advised to re-index after applying
this patch ... but the misbehavior if they don't will not be much
worse than the misbehavior they had before.

Per bug #17172 from Ma Liangzhu.

Discussion: https://postgr.es/m/17172-7505bea9e04e230f@postgresql.org
2021-09-02 17:24:42 -04:00
Tomas Vondra 50ba70a957 Identify simple column references in extended statistics
Until now, when defining extended statistics, everything except a plain
column reference was treated as a complex expression.  So for example
"a" was a column reference, but "(a)" would be an expression.  In most
cases this does not matter much, but there were a couple of strange
consequences.  For example
    CREATE STATISTICS s ON a FROM t;

would fail, because extended stats require at least two columns. But

    CREATE STATISTICS s ON (a) FROM t;

would succeed, because that requirement does not apply to expressions.
Moreover, that statistics object is useless - the optimizer will always
use the regular statistics collected for attribute "a".
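
After this change, the parenthesized form behaves like the plain
column reference it is:

    CREATE STATISTICS s ON (a) FROM t;  -- now also fails: extended stats
                                        -- require at least two columns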

So do a bit more work to identify those expressions referencing a single
column, and translate them to a simple column reference. Backpatch to
14, where support for extended statistics on expressions was introduced.

Reported-by: Justin Pryzby
Backpatch-through: 14
Discussion: https://postgr.es/m/20210816013255.GS10479%40telsasoft.com
2021-09-01 18:08:43 +02:00
Tomas Vondra 4d1816ec26 Don't print extra parens around expressions in extended stats
The code printing expressions for extended statistics doubled the
parens, producing results like ((a+1)), which is unnecessary and not
consistent with how we print expressions elsewhere.

Fixed by tweaking the code to produce just a single set of parens.

Reported by Mark Dilger, fix by me. Backpatch to 14, where support for
extended statistics on expressions was added.

Reported-by: Mark Dilger
Discussion: https://postgr.es/m/20210122040101.GF27167%40telsasoft.com
2021-09-01 00:44:12 +02:00
Tomas Vondra 4090ff2a99 Rename the role in stats_ext to have regress_ prefix
Commit 5be8ce82e8 added a new role to the stats_ext regression suite,
but the role name did not start with regress_, causing failures when
running with ENFORCE_REGRESSION_TEST_NAME_RESTRICTIONS.  Fixed by
renaming the role to start with the expected regress_ prefix.

Backpatch-through: 10, same as the new regression test
Discussion: https://postgr.es/m/1F238937-7CC2-4703-A1B1-6DC225B8978A%40enterprisedb.com
2021-08-31 19:32:32 +02:00
Tomas Vondra a371a5ba34 Fix lookup error in extended stats ownership check
When an ownership check on an extended statistics object failed, the
code was calling aclcheck_error_type to report the failure, which is
clearly wrong, resulting in cache lookup errors.  Fix by calling
aclcheck_error.

This issue exists since the introduction of extended statistics, so
backpatch all the way back to PostgreSQL 10. It went unnoticed because
there were no tests triggering the error, so add one.

Reported-by: Mark Dilger
Backpatch-through: 10, where extended stats were introduced
Discussion: https://postgr.es/m/1F238937-7CC2-4703-A1B1-6DC225B8978A%40enterprisedb.com
2021-08-31 18:36:06 +02:00
Tom Lane 983d7033df Fix missed lock acquisition while inlining new-style SQL functions.
When starting to use a query parsetree loaded from the catalogs,
we must begin by applying AcquireRewriteLocks(), to obtain the same
relation locks that the parser would have gotten if the query were
entered interactively, and to do some other cleanup such as dealing
with later-dropped columns.  New-style SQL functions are just as
subject to this rule as other stored parsetrees; however, of the
places dealing with such functions, only init_sql_fcache had gotten
the memo.  In particular, if we successfully inlined a new-style
set-returning SQL function that contained any relation references,
we'd either get an assertion failure or attempt to use those
relation(s) sans locks.
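
A sketch of the affected shape (items is a hypothetical table):

    CREATE FUNCTION get_ids() RETURNS SETOF int
    LANGUAGE sql
    BEGIN ATOMIC
        SELECT id FROM items;
    END;

    SELECT * FROM get_ids();  -- when inlined, items was formerly
                              -- referenced without the proper lock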

I also added AcquireRewriteLocks calls to fmgr_sql_validator and
print_function_sqlbody.  Desultory experiments didn't demonstrate any
failures in those, but I suspect that I just didn't try hard enough.
Certainly we don't expect nearby code paths to operate without locks.

On the same logic of it-ought-to-have-the-same-effects-as-the-old-code,
call pg_rewrite_query() in fmgr_sql_validator, too.  It's possible
that neither code path there needs to bother with rewriting, but
doing the analysis to prove that is beyond my goals for today.

Per bug #17161 from Alexander Lakhin.

Discussion: https://postgr.es/m/17161-048a1cdff8422800@postgresql.org
2021-08-31 12:02:36 -04:00
Alvaro Herrera ce6b662aae
psql: Fix name quoting on extended statistics
Per our message style guidelines, for human consumption we quote
qualified names as a whole rather than each part separately; but commits
bc085205c8 introduced a deviation for extended statistics and
a4d75c86bf copied it.  I don't agree with this policy applying to
names shown by psql, but that's a poor reason to deviate from the
practice only in two obscure corners, so make said corners use the same
style as everywhere else.

Backpatch to 14.  The first of these is older, but I'm not sure we want
to destabilize the psql output in the older branches for such a small
thing.

Discussion: https://postgr.es/m/20210828181618.GS26465@telsasoft.com
2021-08-30 14:01:29 -04:00
Fujii Masao ec619102aa Avoid using ambiguous word "positive" in error message.
There are two identical error messages about the valid value of the
modulus for hash partitions in the PostgreSQL source code.  Commit
0e1275fb07 improved only one of them so that the ambiguous word
"positive" was avoided there, but forgot to improve the other.  This
commit improves the other one as well, which also reduces the
translator burden.

Back-patch to v11 where the error message exists.

Author: Kyotaro Horiguchi
Reviewed-by: Fujii Masao
Discussion: https://postgr.es/m/20210819.170315.1413060634876301811.horikyota.ntt@gmail.com
2021-08-25 11:47:43 +09:00
Tom Lane 244dd79923 Fix regexp misbehavior with capturing parens inside "{0}".
Regexps like "(.){0}...\1" drew an "invalid backreference number".
That's not unreasonable on its face, since the capture group will
never be matched if it's iterated zero times.  However, other engines
such as Perl's don't complain about this, nor do we throw an error for
related cases such as "(.)|\1", even though that backref can never
succeed either.  Also, if the zero-iterations case happens at runtime
rather than compile time --- say, "(x)*...\1" when there's no "x" to
be found --- that's not an error, we just deem the backref to not
match.  Making this even less defensible, no error was thrown for
nested cases such as "((.)){0}...\2"; and to add insult to injury,
those cases could result in assertion failures instead.  (It seems
that nothing especially bad happened in non-assert builds, though.)

Let's just fix it so that no error is thrown and instead the backref
is deemed to never match, so that compile-time detection of no
iterations behaves the same as run-time detection.
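
For example (sketches):

    SELECT 'xyz' ~ '(.){0}\1';  -- formerly "invalid backreference number",
                                -- now simply false
    SELECT 'yy' ~ '(x)*y\1';    -- zero iterations at run time: already false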

Per report from Mark Dilger.  This appears to be an aboriginal error
in Spencer's library, so back-patch to all supported versions.

Pre-v14, it turns out to also be necessary to back-patch one aspect of
commits cb76fbd7e/00116dee5, namely to create capture-node subREs with
the begin/end states of their subexpressions, not the current lp/rp
of the outer parseqatom invocation.  Otherwise delsub complains that
we're trying to disconnect a state from itself.  This is a bit scary
but code examination shows that it's safe: in the pre-v14 code, if we
want to wrap iteration around the subexpression, the first thing we do
is overwrite the atom's begin/end fields with new states.  So the
bogus values didn't survive long enough to be used for anything, except
if no iteration is required, in which case it doesn't matter.

Discussion: https://postgr.es/m/A099E4A8-4377-4C64-A98C-3DEDDC075502@enterprisedb.com
2021-08-24 16:37:27 -04:00
Amit Kapila 5cfcd46e9d Fix Alter Subscription's Add/Drop Publication behavior.
The current refresh behavior tries to just refresh the added/dropped
publications, but that leads to removing the wrong tables from the
subscription.  We can't refresh just the dropped publication because it
is quite possible that some of the tables were removed from that
publication by then, and those would now remain part of the
subscription.  Also, there is a chance that the tables that were part
of the publication being dropped are also part of another publication,
so we can't remove those.

So, we decided that by default, add/drop commands will also act like
REFRESH PUBLICATION which means they will refresh all the publications. We
can keep the old behavior for "add publication" but it is better to be
consistent with "drop publication".

Author: Hou Zhijie
Reviewed-by: Masahiko Sawada, Amit Kapila
Backpatch-through: 14, where it was introduced
Discussion: https://postgr.es/m/OS0PR01MB5716935D4C2CC85A6143073F94EF9@OS0PR01MB5716.jpnprd01.prod.outlook.com
2021-08-24 08:38:11 +05:30
Tom Lane 779557bd22 Prevent regexp back-refs from sometimes matching when they shouldn't.
The recursion in cdissect() was careless about clearing match data
for capturing parentheses after rejecting a partial match.  This
could allow a later back-reference to succeed when by rights it
should fail for lack of a defined referent.

To fix, think a little more rigorously about what the contract
between different levels of cdissect's recursion needs to be.
With the right spec, we can fix this using fewer rather than more
resets of the match data; the key decision being that a failed
sub-match is now explicitly responsible for clearing any matches
it may have set.

There are enough other cross-checks and optimizations in the code
that it's not especially easy to exhibit this problem; usually, the
match will fail as-expected.  Plus, regexps that are even potentially
vulnerable are most likely user errors, since there's just not much
point in writing a back-ref that doesn't always have a referent.
These facts perhaps explain why the issue hasn't been detected,
even though it's almost certainly a couple of decades old.

Discussion: https://postgr.es/m/151435.1629733387@sss.pgh.pa.us
2021-08-23 17:41:07 -04:00
Tom Lane 4649003933 Avoid trying to lock OLD/NEW in a rule with FOR UPDATE.
transformLockingClause neglected to exclude the pseudo-RTEs for
OLD/NEW when processing a rule's query.  This led to odd errors
or even crashes later on.  This bug is very ancient, but it's
not terribly surprising that nobody noticed, since the use-case
for SELECT FOR UPDATE in a non-view rule is somewhere between
thin and non-existent.  Still, crashing is not OK.
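
A sketch of the offending shape (t is hypothetical):

    CREATE TABLE t (x int);
    CREATE RULE r AS ON DELETE TO t DO INSTEAD
        SELECT * FROM t WHERE x = OLD.x FOR UPDATE;
    -- the unqualified FOR UPDATE formerly tried to lock the OLD/NEW
    -- pseudo-RTEs, producing odd errors or crashes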

Per bug #17151 from Zhiyong Wu.  Thanks to Masahiko Sawada
for analysis of the problem.

Discussion: https://postgr.es/m/17151-c03a3e6e4ec9aadb@postgresql.org
2021-08-19 12:12:35 -04:00
Tom Lane 61f9dae2ce Fix check_agg_arguments' examination of aggregate FILTER clauses.
Recursion into the FILTER clause was mis-implemented, such that a
relevant Var or Aggref at the very top of the FILTER clause would
be ignored.  (Of course, that'd have to be a plain boolean Var or
boolean-returning aggregate.)  The consequence would be
mis-identification of the correct semantic level of the aggregate,
which could lead to not-per-spec query behavior.  If the FILTER
expression is an aggregate, this could also lead to failure to issue
an expected "aggregate function calls cannot be nested" error, which
would likely result in a core dump later on, since the planner and
executor aren't expecting such cases to appear.
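
For example (a sketch; t is a hypothetical table with a boolean
column y):

    SELECT sum(x) FILTER (WHERE bool_and(y)) FROM t;
    -- the aggregate at the very top of the FILTER clause is now
    -- detected, raising "aggregate function calls cannot be nested"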

The root cause is that commit b560ec1b0 blindly copied some code
that assumed it's recursing into a List, and thus didn't examine the
top-level node.  To forestall questions about why this call doesn't
look like the others, as well as possible future copy-and-paste
mistakes, let's change all three check_agg_arguments_walker calls in
check_agg_arguments, even though only the one for the filter clause
is really broken.

Per bug #17152 from Zhiyong Wu.  This has been wrong since we
implemented FILTER, so back-patch to all supported versions.
(Testing suggests that pre-v11 branches manage to avoid crashing
in the bad-Aggref case, thanks to "redundant" checks in ExecInitAgg.
But I'm not sure how thorough that protection is, and anyway the
wrong-behavior issue remains, so fix 9.6 and 10 too.)

Discussion: https://postgr.es/m/17152-c7f906cc1a88e61b@postgresql.org
2021-08-18 18:12:51 -04:00
Tom Lane 2c915905e3 Really fix the ambiguity in REFRESH MATERIALIZED VIEW CONCURRENTLY.
Rather than trying to pick table aliases that won't conflict with
any possible user-defined matview column name, adjust the queries'
syntax so that the aliases are only used in places where they can't be
mistaken for column names.  Mostly this consists of writing "alias.*"
not just "alias", which adds clarity for humans as well as machines.
We do have the issue that "SELECT alias.*" acts differently from
"SELECT alias", but we can use the same hack ruleutils.c uses for
whole-row variables in SELECT lists: write "alias.*::compositetype".

We might as well revert to the original aliases after doing this;
they're a bit easier to read.

Like 75d66d10e, back-patch to all supported branches.

Discussion: https://postgr.es/m/2488325.1628261320@sss.pgh.pa.us
2021-08-07 13:29:32 -04:00
Tom Lane e5f6493e35 Don't elide casting to typmod -1.
Casting a value that's already of a type with a specific typmod
to an unspecified typmod doesn't do anything so far as run-time
behavior is concerned.  However, it really ought to change the
exposed type of the expression to match.  Up to now,
coerce_type_typmod hasn't bothered with that, which creates gotchas
in contexts such as recursive unions.  If for example one side of
the union is numeric(18,3), but it needs to be plain numeric to
match the other side, there's no direct way to express that.
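
A sketch of the recursive-union situation (the outer cast now usefully
relabels the expression as plain numeric):

    WITH RECURSIVE r(n) AS (
        SELECT 1.000::numeric(18,3)::numeric
      UNION ALL
        SELECT n / 10 FROM r WHERE n > 0.0001
    )
    SELECT count(*) FROM r;  -- formerly failed: the non-recursive term
                             -- stayed numeric(18,3)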

This is easy enough to fix, by inserting a RelabelType to update the
exposed type of the expression.  However, it's a bit nervous-making
to change this behavior, because it's stood for a really long time.
(I strongly suspect that it's like this in part because the logic
pre-dates the introduction of RelabelType in 7.0.  The commit log
message for 57b30e8e2 is interesting reading here.)  As a compromise,
we'll sneak the change into 14beta3, and consider back-patching to
stable branches if no complaints emerge in the next three months.

Discussion: https://postgr.es/m/CABNQVagu3bZGqiTjb31a8D5Od3fUMs7Oh3gmZMQZVHZ=uWWWfQ@mail.gmail.com
2021-08-06 17:32:54 -04:00
Dean Rasheed 0325565702 Adjust the integer overflow tests in the numeric code.
Formerly, the numeric code tested whether an integer value of a larger
type would fit in a smaller type by casting it to the smaller type and
then testing if the reverse conversion produced the original value.
That's perfectly fine, except that it caused a test failure on
buildfarm animal castoroides, most likely due to a compiler bug.

Instead, do these tests by comparing against PG_INT16/32_MIN/MAX. That
matches existing code in other places, such as int84(), which is more
widely tested, and so is less likely to go wrong.

While at it, add regression tests covering the numeric-to-int8/4/2
conversions, and adjust the recently added tests to the style of
434ddfb79a (on the v11 branch) to make failures easier to diagnose.

Per buildfarm via Tom Lane, reviewed by Tom Lane.

Discussion: https://postgr.es/m/2394813.1628179479%40sss.pgh.pa.us
2021-08-06 21:30:25 +01:00
Peter Eisentraut c3a135b41e Add missing message punctuation 2021-08-06 22:11:02 +02:00
Dean Rasheed ecbdbdfd90 Fix division-by-zero error in to_char() with 'EEEE' format.
This fixes a long-standing bug when using to_char() to format a
numeric value in scientific notation -- if the value's exponent is
less than -NUMERIC_MAX_DISPLAY_SCALE-1 (-1001), it produced a
division-by-zero error.

The reason for this error was that get_str_from_var_sci() divides its
input by 10^exp, which it produced using power_var_int(). However, the
underflow test in power_var_int() causes it to return zero if the
result scale is too small. That's not a problem for power_var_int()'s
only other caller, power_var(), since that limits the rscale to 1000,
but in get_str_from_var_sci() the exponent can be much smaller,
requiring a much larger rscale. Fix by introducing a new function to
compute 10^exp directly, with no rscale limit. This also allows 10^exp
to be computed more efficiently, without any numeric multiplication,
division or rounding.
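
For example (a sketch):

    SELECT to_char(1.0e-1005::numeric, '9.99EEEE');
    -- exponent below -1001: formerly a division-by-zero error,
    -- now formatted normally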

Discussion: https://postgr.es/m/CAEZATCWhojfH4whaqgUKBe8D5jNHB8ytzemL-PnRx+KCTyMXmg@mail.gmail.com
2021-08-05 09:27:35 +01:00
Dean Rasheed 0d6b87497e Fix corner-case errors and loss of precision in numeric_power().
This fixes a couple of related problems that arise when raising
numbers to very large powers.

Firstly, when raising a negative number to a very large integer power,
the result should be well-defined, but the previous code would only
cope if the exponent was small enough to go through power_var_int().
Otherwise it would throw an internal error, attempting to take the
logarithm of a negative number. Fix this by adding suitable handling
to the general case in power_var() to cope with negative bases,
checking for integer powers there.

Next, when raising a (positive or negative) number whose absolute
value is slightly less than 1 to a very large power, the result should
approach zero as the power is increased. However, in some cases, for
sufficiently large powers, this would lose all precision and return 1
instead of 0. This was due to the way that the local_rscale was being
calculated for the final full-precision calculation:

  local_rscale = rscale + (int) val - ln_dweight + 8

The first two terms on the right hand side are meant to give the
number of significant digits required in the result ("val" being the
estimated result weight). However, this failed to account for the fact
that rscale is clipped to a maximum of NUMERIC_MAX_DISPLAY_SCALE
(1000), and the result weight might be less than -1000, causing their
sum to be negative, leading to a loss of precision. Fix this by
forcing the number of significant digits calculated to be nonnegative.
It's OK for it to be zero (when the result weight is less than -1000),
since the local_rscale value then includes a few extra digits to
ensure an accurate result.

Finally, add additional underflow checks to exp_var() and power_var(),
so that they consistently return zero for cases like this where the
result is indistinguishable from zero. Some paths through this code
already returned zero in such cases, but others were throwing overflow
errors.
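
Sketches of the two symptom classes:

    SELECT power((-0.999999)::numeric, 2147483648::numeric);
    -- negative base with an integer power too large for power_var_int():
    -- formerly an internal error, now a well-defined result

    SELECT power(0.9999999999::numeric, 1e30::numeric);
    -- |base| just under 1 raised to a huge power: now underflows to
    -- zero instead of returning 1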

Dean Rasheed, reviewed by Yugo Nagata.

Discussion: http://postgr.es/m/CAEZATCW6Dvq7+3wN3tt5jLj-FyOcUgT5xNoOqce5=6Su0bCR0w@mail.gmail.com
2021-07-31 11:23:48 +01:00
John Naylor fc0d9b8c22 Disallow negative strides in date_bin()
It's not clear what the semantics of negative strides would be, so throw
an error instead.
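
For example (a sketch):

    SELECT date_bin('-1 hour'::interval,
                    timestamp '2021-07-28 12:34',
                    timestamp '2021-07-28 00:00');  -- now raises an error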

Per report from Bauyrzhan Sakhariyev

Reviewed-by: Tom Lane, Michael Paquier
Discussion: https://www.postgresql.org/message-id/CAKpL73vZmLuFVuwF26FJ%2BNk11PVHhAnQRoREFcA03x7znRoFvA%40mail.gmail.com
Backpatch to v14
2021-07-28 12:11:33 -04:00
Tom Lane b7ea0e8c41 Fix bugs in polymorphic-argument resolution for multiranges.
We failed to deal with an UNKNOWN-type input for
anycompatiblemultirange; that should throw an error indicating
that we don't know how to resolve the multirange type.

We also failed to infer the type of an anycompatiblerange output
from an anycompatiblemultirange input or vice versa.

Per bug #17066 from Alexander Lakhin.  Back-patch to v14
where multiranges were added.

Discussion: https://postgr.es/m/17066-16a37f6223a8470b@postgresql.org
2021-07-27 15:01:49 -04:00
Fujii Masao fd90f6ba7a Avoid using ambiguous word "non-negative" in error messages.
The error messages using the word "non-negative" are confusing
because it's ambiguous whether they accept zero or not.  This commit
improves those error messages by replacing that word with less
ambiguous phrases like "greater than zero" or "greater than or equal
to zero".

Also, this commit adds a note about the word "non-negative" to the
error message style guide, to help with writing new error messages.

When the postgres_fdw option fetch_size was set to zero, the error
message "fetch_size requires a non-negative integer value" was
previously reported.  This error message was outright buggy.  Therefore,
back-patch to all supported versions where such a buggy error message
could be thrown.

Reported-by: Hou Zhijie
Author: Bharath Rupireddy
Reviewed-by: Kyotaro Horiguchi, Fujii Masao
Discussion: https://postgr.es/m/OS0PR01MB5716415335A06B489F1B3A8194569@OS0PR01MB5716.jpnprd01.prod.outlook.com
2021-07-28 01:21:35 +09:00
Tomas Vondra 611e42444b psql \dX: check schema when listing statistics objects
Commit ad600bba04 added the psql command \dX listing extended
statistics objects, but it failed to consider search_path when
selecting the elements, so some of the returned elements might be
invisible.

The visibility was already considered for tab completion (added by
commit d99d58cdc8), so adding it to the query is fairly simple.

Reported and fixed by Justin Pryzby, regression tests by me.  Backpatch
to PostgreSQL 14, where \dX was introduced.

Backpatch-through: 14
Author: Justin Pryzby
Reviewed-by: Tatsuro Yamada
Discussion: https://postgr.es/m/c027a541-5856-75a5-0868-341301e1624b%40nttcom.co.jp_1
2021-07-26 17:31:16 +02:00
Tom Lane 712ba6b8b7 Fix check for conflicting session- vs transaction-level locks.
We have an implementation restriction that PREPARE TRANSACTION can't
handle cases where both session-lifespan and transaction-lifespan locks
are held on the same lockable object.  (That's because we'd otherwise
need to acquire a new PROCLOCK entry during post-prepare cleanup, which
is an operation that might fail.  The situation can only arise with odd
usages of advisory locks, so removing the restriction is probably not
worth the amount of effort it would take.)  AtPrepare_Locks attempted
to enforce this, but its logic was many bricks shy of a load, because
it only detected cases where the session and transaction locks had the
same lockmode.  Locks of different modes on the same object would lead
to the rather unhelpful message "PANIC: we seem to have dropped a bit
somewhere".

To fix, build a transient hashtable with one entry per locktag,
not one per locktag + mode, and use that to detect conflicts.
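
A sketch of the formerly-PANICking sequence:

    SELECT pg_advisory_lock(42);              -- session-lifespan, exclusive mode
    BEGIN;
    SELECT pg_advisory_xact_lock_shared(42);  -- transaction-lifespan, shared mode
    PREPARE TRANSACTION 'p1';                 -- now a clean error instead of
                                              -- "we seem to have dropped a
                                              -- bit somewhere"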

Per bug #17122 from Alexander Pyhalov.  This bug is ancient,
so back-patch to all supported branches.

Discussion: https://postgr.es/m/17122-04f3c32098a62233@postgresql.org
2021-07-24 18:35:52 -04:00
John Naylor 81322fc409 Fix division by zero error in date_bin
Bauyrzhan Sakhariyev, via GitHub

Backpatch to v14
2021-07-22 17:36:26 -04:00
Amit Kapila 40295d158f Don't allow to set replication slot_name as ''.
We don't allow creating a replication slot named '' via the SQL API
pg_create_logical_replication_slot(), but such a name could be set via
the ALTER SUBSCRIPTION command.  This would lead to the apply worker
repeatedly trying to stream data via the slot_name '', even though the
user is not allowed to create a slot with that name.
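
For example (sub is a hypothetical subscription):

    ALTER SUBSCRIPTION sub SET (slot_name = '');  -- now rejected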

Author: Japin Li
Reviewed-By: Ranier Vilela, Amit Kapila
Backpatch-through: 10, where it was introduced
Discussion: https://postgr.es/m/MEYP282MB1669CBD98E721C77CA696499B61A9@MEYP282MB1669.AUSP282.PROD.OUTLOOK.COM
2021-07-19 10:54:21 +05:30
Alexander Korotkov 244ad54155 Support for unnest(multirange)
It has been spotted that multiranges lack the ability to be decomposed
into individual ranges.  Subscripting and proper expanded object
representation require substantial work, and it's too late for v14.
This commit provides the implementation of unnest(multirange), which is
quite trivial.  unnest(multirange) is defined as a polymorphic
procedure.
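
For example:

    SELECT unnest('{[1,3), [5,7)}'::int4multirange);
    -- returns two rows: [1,3) and [5,7)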

Catversion is bumped.

Reported-by: Jonathan S. Katz
Discussion: https://postgr.es/m/flat/60258efe-bd7e-4886-82e1-196e0cac5433%40postgresql.org
Author: Alexander Korotkov
Reviewed-by: Justin Pryzby, Jonathan S. Katz, Zhihong Yu, Tom Lane
Reviewed-by: Alvaro Herrera
2021-07-18 21:11:33 +03:00
Alvaro Herrera 3c5b7c6286
Fix pg_dump for disabled triggers on partitioned tables
pg_dump failed to preserve the 'enabled' flag (which can be not only
disabled, but also REPLICA or ALWAYS) for partitions that had it
changed from their respective parents.  Handle that by including a
definition for such triggers in the dump, but replace the standard
CREATE TRIGGER line with an ALTER TRIGGER line.

Backpatch to 11, where these triggers can exist.  In branches 11 and 12,
pick up a few test lines from commit b9b408c487 to verify that
pg_upgrade is okay with these arrangements.

Co-authored-by: Justin Pryzby <pryzby@telsasoft.com>
Co-authored-by: Álvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://postgr.es/m/20200930223450.GA14848@telsasoft.com
2021-07-16 17:29:22 -04:00
Alvaro Herrera eef92de11e
Preserve firing-on state when cloning row triggers to partitions
When triggers are cloned from partitioned tables to their partitions,
the 'tgenabled' flag (origin/replica/always/disable) was not propagated.
Make it so that the flag on the partition's trigger is initially set
to the same value as on the partitioned table.

Add a test case to verify the behavior.

Backpatch to 11, where this appeared in commit 86f575948c.

Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
Reported-by: Justin Pryzby <pryzby@telsasoft.com>
Discussion: https://postgr.es/m/20200930223450.GA14848@telsasoft.com
2021-07-16 13:01:43 -04:00
Michael Paquier 0c83eb2e0e Fix unexpected error messages for various flavors of ALTER TABLE
Some commands of ALTER TABLE could fail with the following error:
ERROR:  "tab" is of the wrong type

This error is unexpected, as all the code paths leading to
ATWrongRelkindError() should use a supported set of relkinds to generate
correct error messages.  This commit closes the gap with such mistakes,
by adding all the missing relkind combinations.  Tests are added to
check all the problems found.  Note that some combinations are not used,
but these are left around as removing them could have an impact on
applications relying on this code.

2ed532e has done a much larger refactoring on HEAD to make such error
messages easier to manage in the long-term, so nothing is needed there.

Author: Kyotaro Horiguchi
Reviewed-by: Peter Eisentraut, Ahsan Hadi, Michael Paquier
Discussion: https://postgr.es/m/20210216.181415.368926598204753659.horikyota.ntt@gmail.com
Backpatch-through: 11
2021-07-14 17:15:01 +09:00
David Rowley 47ca483644 Change the name of the Result Cache node to Memoize
"Result Cache" was never a great name for this node, but nobody managed
to come up with another name that anyone liked enough.  That was until
David Johnston mentioned "Node Memoization", which Tom Lane revised to
just "Memoize".  People seem to like "Memoize", so let's do the rename.

Reviewed-by: Justin Pryzby
Discussion: https://postgr.es/m/20210708165145.GG1176@momjian.us
Backpatch-through: 14, where Result Cache was introduced
2021-07-14 12:45:00 +12:00
Dean Rasheed 06883d58ff Fix numeric_mul() overflow due to too many digits after decimal point.
This fixes an overflow error that occurred when using the numeric *
operator if the result had more than 16383 digits after the decimal
point; the result is now rounded instead.  Overflow errors should only
occur if the result has too many digits *before* the decimal point.
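
For example (a sketch):

    SELECT 1e-10000::numeric * 1e-10000::numeric;
    -- the exact result needs 20000 digits after the decimal point;
    -- formerly an overflow error, now rounded (here, to zero)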

Discussion: https://postgr.es/m/CAEZATCUmeFWCrq2dNzZpRj5+6LfN85jYiDoqm+ucSXhb9U2TbA@mail.gmail.com
2021-07-10 12:45:00 +01:00
Tom Lane 1d98fdaed8 Avoid creating a RESULT RTE that's marked LATERAL.
Commit 7266d0997 added code to pull up simple constant function
results, converting the RTE_FUNCTION RTE to a dummy RTE_RESULT
RTE since it no longer need be scanned.  But I forgot to clear
the LATERAL flag if the RTE has it set.  If the function reduced
to a constant, it surely contains no lateral references so this
simplification is logically OK.  It's needed because various other
places will Assert that RESULT RTEs aren't LATERAL.

Per bug #17097 from Yaoguang Chen.  Back-patch to v13 where the
faulty code came in.

Discussion: https://postgr.es/m/17097-3372ef9f798fc94f@postgresql.org
2021-07-09 13:38:24 -04:00
Tom Lane 39b6e85f13 Reject cases where a query in WITH rewrites to just NOTIFY.
Since the executor can't cope with a utility statement appearing
as a node of a plan tree, we can't support cases where a rewrite
rule inserts a NOTIFY into an INSERT/UPDATE/DELETE command appearing
in a WITH clause of a larger query.  (One can imagine ways around
that, but it'd be a new feature not a bug fix, and so far there's
been no demand for it.)  RewriteQuery checked for this, but it
missed the case where the DML command rewrites to *only* a NOTIFY.
That'd lead to crashes later on in planning.  Add the missed check,
and improve the level of testing of this area.
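
A sketch of the previously-crashing shape (t is hypothetical):

    CREATE RULE r AS ON INSERT TO t DO INSTEAD NOTIFY t_changed;
    WITH x AS (INSERT INTO t DEFAULT VALUES)
    SELECT 1;  -- the DML rewrites to *only* a NOTIFY: now a clean error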

Per bug #17094 from Yaoguang Chen.  It's been busted since WITH
was introduced, so back-patch to all supported branches.

Discussion: https://postgr.es/m/17094-bf15dff55eaf2e28@postgresql.org
2021-07-09 11:02:26 -04:00
David Rowley 6de3a21bbc Fix incorrect return value in pg_size_pretty(bigint)
Due to how pg_size_pretty(bigint) was implemented, it was possible
that, when given a negative number of bytes, the returned value would
not match the return value for the equivalent positive number of
bytes.  This was due to two separate issues.

1. The function used bit shifting to convert the number of bytes into
larger units.  The rounding performed by bit shifting is not the same as
dividing.  For example -3 >> 1 = -2, but -3 / 2 = -1.  These two
operations are only equivalent with positive numbers.

2. The half_rounded() macro rounded towards positive infinity.  This meant
that negative numbers rounded towards zero and positive numbers rounded
away from zero.

Here we fix #1 by dividing the values instead of bit shifting.  We fix #2
by adjusting the half_rounded macro always to round away from zero.
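
For example, outputs are now symmetric apart from the sign (a sketch):

    SELECT pg_size_pretty(-10485247::bigint)
         = '-' || pg_size_pretty(10485247::bigint);  -- now true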

Additionally, adjust the pg_size_pretty(numeric) function to be more
explicit that it's using division rather than bit shifting.  A casual
observer might have believed bit shifting was used due to a static
function being named numeric_shift_right.  However, that function was
calculating the divisor from the number of bits and performed division.
Here we make that more clear.  This change is just cosmetic and does not
affect the return value of the numeric version of the function.

Here we also add a set of regression tests for both versions of
pg_size_pretty(), testing values directly before and after the
function switches to the next unit.

This bug was introduced in 8a1fab36a. Prior to that negative values were
always displayed in bytes.

Author: Dean Rasheed, David Rowley
Discussion: https://postgr.es/m/CAEZATCXnNW4HsmZnxhfezR5FuiGgp+mkY4AzcL5eRGO4fuadWg@mail.gmail.com
Backpatch-through: 9.6, where the bug was introduced.
2021-07-09 14:04:40 +12:00
Alvaro Herrera be280cdad2
Don't reset relhasindex for partitioned tables on ANALYZE
Commit 0e69f705cc introduced code to analyze partitioned tables;
however, that code fails to preserve pg_class.relhasindex correctly.
Fix by observing whether any indexes exist rather than accidentally
falling through to assuming none do.

Backpatch to 14.

Author: Alexander Pyhalov <a.pyhalov@postgrespro.ru>
Reviewed-by: Álvaro Herrera <alvherre@alvh.no-ip.org>
Reviewed-by: Zhihong Yu <zyu@yugabyte.com>
Discussion: https://postgr.es/m/CALNJ-vS1R3Qoe5t4tbzxrkpBtzRbPq1dDcW4RmA_a+oqweF30w@mail.gmail.com
2021-07-01 12:56:30 -04:00
Alexander Korotkov 322e82b77e Fixes for multirange selectivity estimation
* Fix enumeration of the multirange operators in the calc_multirangesel()
   switches.
 * Add more regression tests for matching to empty ranges/multiranges.

Reported-by: Alexander Lakhin
Discussion: https://postgr.es/m/c5269c65-f967-77c5-ff7c-15e621c47f6a%40gmail.com
Author: Alexander Korotkov
Backpatch-through: 14, where multiranges were introduced
2021-06-29 23:18:59 +03:00
Tom Lane f8b51464c2 Fix bogus logic for reporting which hash partition conflicts.
Commit efbfb6424 added logic for reporting exactly which existing
partition conflicts when complaining that a new hash partition's
modulus isn't compatible with the existing ones.  However, it
misunderstood the partitioning data structure, and would select
the wrong partition in some cases, or crash outright due to fetching
a bogus table OID in other cases.

Per bug #17076 from Alexander Lakhin.  Fix by Amit Langote;
some further work on the code comments by me.

Discussion: https://postgr.es/m/17076-89a16ae835d329b9@postgresql.org
2021-06-29 14:34:41 -04:00
Peter Eisentraut c31833779d Message style improvements 2021-06-28 08:36:44 +02:00
Michael Paquier 09a69f6e23 Add test for CREATE INDEX CONCURRENTLY with not-so-immutable predicate
83158f7 improved index_set_state_flags() so that it is possible to use
transactional updates when updating pg_index state flags, but there was
not really a test case that directly stressed the possibility it fixed.
This commit adds such a test, using a predicate that appears valid but
calls a stable function.

Author: Andrey Lepikhov
Discussion: https://postgr.es/m/9b905019-5297-7372-0ad2-e1a4bb66a719@postgrespro.ru
Backpatch-through: 9.6
2021-06-28 11:17:05 +09:00
Peter Eisentraut e59d428f34 Fixes in ALTER SUBSCRIPTION DROP PUBLICATION code
ALTER SUBSCRIPTION DROP PUBLICATION does not actually support
copy_data option, so remove it from tab completion.

Also, reword the error message that is thrown when all the
publications from a subscription are specified to be dropped.

Also, make a few doc and cosmetic adjustments.

Author: Vignesh C <vignesh21@gmail.com>
Reviewed-by: Bharath Rupireddy <bharath.rupireddy@enterprisedb.com>
Reviewed-by: Japin Li <japinli@hotmail.com>
Discussion: https://www.postgresql.org/message-id/flat/CALDaNm21RwsDzs4xj14ApteAF7auyyomHNnp+NEL-sH8m-jMvQ@mail.gmail.com
2021-06-25 09:57:02 +02:00
Peter Geoghegan 3499df0dee Support disabling index bypassing by VACUUM.
Generalize the INDEX_CLEANUP VACUUM parameter (and the corresponding
reloption): make it into a ternary style boolean parameter.  It now
exposes a third option, "auto".  The "auto" option (which is now the
default) enables the "bypass index vacuuming" optimization added by
commit 1e55e7d1.

"VACUUM (INDEX_CLEANUP TRUE)" is redefined to once again make VACUUM
simply do any required index vacuuming, regardless of how few dead
tuples are encountered during the first scan of the target heap relation
(unless there are exactly zero).  This gives users a way of opting out
of the "bypass index vacuuming" optimization, if for whatever reason
that proves necessary.  It is also expected to be used by PostgreSQL
developers as a testing option from time to time.

"VACUUM (INDEX_CLEANUP FALSE)" does the same thing as it always has: it
forcibly disables both index vacuuming and index cleanup.  It's not
expected to be used much in PostgreSQL 14.  The failsafe mechanism added
by commit 1e55e7d1 addresses the same problem in a simpler way.
INDEX_CLEANUP can now be thought of as a testing and compatibility
option.
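
The three settings (tab is a hypothetical table):

    VACUUM (INDEX_CLEANUP AUTO) tab;   -- new default: may bypass index vacuuming
    VACUUM (INDEX_CLEANUP TRUE) tab;   -- always do required index vacuuming
    VACUUM (INDEX_CLEANUP FALSE) tab;  -- forcibly skip index vacuuming and cleanup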

Author: Peter Geoghegan <pg@bowt.ie>
Reviewed-By: Masahiko Sawada <sawada.mshk@gmail.com>
Reviewed-By: Justin Pryzby <pryzby@telsasoft.com>
Discussion: https://postgr.es/m/CAH2-WznrBoCST4_Gxh_G9hA8NzGUbeBGnOUC8FcXcrhqsv6OHQ@mail.gmail.com
2021-06-18 20:04:07 -07:00
Tom Lane d21fca0843 Fix misbehavior of DROP OWNED BY with duplicate polroles entries.
Ordinarily, a pg_policy.polroles array wouldn't list the same role
more than once; but CREATE POLICY does not prevent that.  If we
perform DROP OWNED BY on a role that is listed more than once,
RemoveRoleFromObjectPolicy either suffered an assertion failure
or encountered a tuple-updated-by-self error.  Rewrite it to cope
correctly with duplicate entries, and add a CommandCounterIncrement
call to prevent the other problem.
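
A sketch of the failing sequence (t and alice are hypothetical):

    CREATE POLICY p ON t TO alice, alice;  -- duplicate entries not prevented
    DROP OWNED BY alice;                   -- formerly an assertion failure or
                                           -- tuple-updated-by-self error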

Per discussion, there's other cleanup that ought to happen here,
but this seems like the minimum essential fix.

Per bug #17062 from Alexander Lakhin.  It's been broken all along,
so back-patch to all supported branches.

Discussion: https://postgr.es/m/17062-11f471ae3199ca23@postgresql.org
2021-06-18 18:00:09 -04:00
Andrew Dunstan 0a4efdc7eb
Don't set a fast default for anything but a plain table
The fast default code added in Release 11 omitted to check that the
table a fast default was being added to was a plain table.  Thus one
could be added to a foreign table, which predictably blows up.  Here we
perform that check.

In addition, on the back branches, since some of these might have
escaped into the wild, we simply ignore a missing value encountered
for an attribute of anything other than a plain table.

Fixes bug #17056

Backpatch to release 11.

Reviewed by: Andres Freund, Álvaro Herrera and Tom Lane
2021-06-18 06:51:12 -04:00
Alexander Korotkov 817bb0a7d1 Revert 29854ee8d1 due to buildfarm failures
Reported-by: Tom Lane
Discussion: https://postgr.es/m/CAPpHfdvcnw3x7jdV3r52p4%3D5S4WUxBCzcQKB3JukQHoicv1LSQ%40mail.gmail.com
2021-06-15 21:44:40 +03:00
Alexander Korotkov 29854ee8d1 Support for unnest(multirange) and cast multirange as an array of ranges
It has been spotted that multiranges lack the ability to be decomposed
into individual ranges.  Subscripting and proper expanded object
representation require substantial work, and it's too late for v14.
This commit provides the implementation of unnest(multirange) and a
cast of multirange to an array of ranges, which is quite trivial.

unnest(multirange) is defined as a polymorphic procedure.  The catalog
description of the cast underlying procedure is duplicated for each multirange
type because we don't have anyrangearray polymorphic type to use here.

Catversion is bumped.

Reported-by: Jonathan S. Katz
Discussion: https://postgr.es/m/flat/60258efe-bd7e-4886-82e1-196e0cac5433%40postgresql.org
Author: Alexander Korotkov
Reviewed-by: Justin Pryzby, Jonathan S. Katz, Zhihong Yu
2021-06-15 15:59:20 +03:00
Michael Paquier 2d689babe3 Improve handling of dropped objects in pg_event_trigger_ddl_commands()
An object found to have been dropped while digging into the list of
objects returned by pg_event_trigger_ddl_commands() could cause a cache
lookup error, as the calls grabbing the object address and the type
name would fail if the object was missing.

Those lookup errors could be seen with combinations of ALTER TABLE
sub-commands involving identity columns.  The lookup logic is changed in
this code path to get a behavior similar to any other SQL-callable
function by ignoring objects that are not found, taking advantage of
2a10fdc.  The back branches are not changed, as the fix depends on
that commit, which is too invasive for stable branches.

While on it, add test cases to exercise event triggers with identity
columns, and stress more cases with the event ddl_command_end for
relations.

Author: Sven Klemm, Aleksander Alekseev, Michael Paquier
Discussion: https://postgr.es/m/CAMCrgp2R1cEXU53iYKtW6yVEp2_yKUz+z=3-CTrYpPP+xryRtg@mail.gmail.com
2021-06-14 14:57:22 +09:00
Michael Paquier dbab0c07e5 Remove forced toast recompression in VACUUM FULL/CLUSTER
The extra checks added by the recompression of toast data introduced
in bbe0a81 are proving to have a performance impact on VACUUM or
CLUSTER even if no recompression is done.
toastable columns that contain non-NULL values.

Improvements could be done to make those extra checks less expensive,
but that's not material for 14 at this stage, and we are not sure either
if the code path of VACUUM FULL/CLUSTER is adapted for this job.

Per discussion with several people, including Andres Freund, Robert
Haas, Álvaro Herrera, Tom Lane and myself.

Discussion: https://postgr.es/m/20210527003144.xxqppojoiwurc2iz@alap3.anarazel.de
2021-06-14 09:25:50 +09:00
Michael Paquier a9e0b3b08f Ignore more environment variables in pg_regress.c
This is similar to the work done in 8279f68 for TestLib.pm, where
environment variables already set may cause unwanted failures when using a
temporary installation with pg_regress.  The list of variables reset is
adjusted in each stable branch depending on what is supported.

Comments are added to remember that the lists in TestLib.pm and
pg_regress.c had better be kept in sync.

Reviewed-by: Álvaro Herrera
Discussion: https://postgr.es/m/YMNR9GYDn+fHlMta@paquier.xyz
Backpatch-through: 9.6
2021-06-13 20:07:39 +09:00
Tom Lane e56bce5d43 Reconsider the handling of procedure OUT parameters.
Commit 2453ea142 redefined pg_proc.proargtypes to include the types of
OUT parameters, for procedures only.  While that had some advantages
for implementing the SQL-spec behavior of DROP PROCEDURE, it was pretty
disastrous from a number of other perspectives.  Notably, since the
primary key of pg_proc is name + proargtypes, this made it possible to
have multiple procedures with identical names + input arguments and
differing output argument types.  That would make it impossible to call
any one of the procedures by writing just NULL (or "?", or any other
data-type-free notation) for the output argument(s).  The change also
seems likely to cause grave confusion for client applications that
examine pg_proc and expect the traditional definition of proargtypes.

Hence, revert the definition of proargtypes to what it was, and
undo a number of complications that had been added to support that.

To support the SQL-spec behavior of DROP PROCEDURE, when there are
no argmode markers in the command's parameter list, we perform the
lookup both ways (that is, matching against both proargtypes and
proallargtypes), succeeding if we get just one unique match.
In principle this could result in ambiguous-function failures
that would not happen when using only one of the two rules.
However, overloading of procedure names is thought to be a pretty
rare usage, so this shouldn't cause many problems in practice.
Postgres-specific code such as pg_dump can defend against any
possibility of such failures by being careful to specify argmodes
for all procedure arguments.
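
A hedged sketch of the two-way lookup (object names are illustrative):

    CREATE PROCEDURE p(IN a int, OUT b int)
        LANGUAGE plpgsql AS $$ BEGIN b := a; END; $$;

    -- With no argmode markers, either spelling identifies p,
    -- provided the match is unique:
    DROP PROCEDURE p(int);            -- matched against proargtypes
    -- DROP PROCEDURE p(int, int);    -- matched against proallargtypes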

This also fixes a few other bugs in the area of CALL statements
with named parameters, and improves the documentation a little.

catversion bump forced because the representation of procedures
with OUT arguments changes.

Discussion: https://postgr.es/m/3742981.1621533210@sss.pgh.pa.us
2021-06-10 17:11:36 -04:00
Tom Lane bfeede9fa4 Don't crash on empty statements in SQL-standard function bodies.
gram.y should discard NULL pointers (empty statements) when
assembling a routine_body_stmt_list, as it does for other
sorts of statement lists.
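
A sketch of the formerly-crashing shape, now accepted (the function name
is illustrative):

    CREATE FUNCTION one() RETURNS int
        LANGUAGE SQL
        BEGIN ATOMIC
            ;              -- empty statement, simply discarded
            SELECT 1;
        END;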

Julien Rouhaud and Tom Lane, per report from Noah Misch.

Discussion: https://postgr.es/m/20210606044418.GA297923@rfd.leadboat.com
2021-06-08 11:59:34 -04:00
Michael Paquier 68a6d8a870 Fix portability issue in test indirect_toast
When run on a server using default_toast_compression set to LZ4, this
test would fail because of an inconsistency in the order of the
tuples processed.  LZ4 causes one tuple to be stored inline instead of
getting externalized.  As the goal of this test is to check the handling
of data stored externally, stick to pglz as the compression algorithm,
so that all the data of this test is stored the way it should be.

Analyzed-by: Dilip Kumar
Discussion: https://postgr.es/m/YLrDWxJgM8WWMoCg@paquier.xyz
2021-06-07 18:12:29 +09:00
Tom Lane 2955c2be79 Re-allow custom GUC names that have more than two components.
Commit 3db826bd5 disallowed this case, but it turns out that some
people are depending on it.  Since the core grammar has allowed
it since 3dc37cd8d, it seems like this code should fall in line.
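
For instance, a three-component name like the following is accepted again
(the name is illustrative):

    SET my.extension.option = 'on';
    SELECT current_setting('my.extension.option');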

Per bug #17045 from Robert Sosinski.

Discussion: https://postgr.es/m/17045-6a4a9f0d1513f72b@postgresql.org
2021-06-02 18:50:23 -04:00
Tom Lane 1103033aed Reject SELECT ... GROUP BY GROUPING SETS (()) FOR UPDATE.
This case should be disallowed, just as FOR UPDATE with a plain
GROUP BY is disallowed; FOR UPDATE only makes sense when each row
of the query result can be identified with a single table row.
However, we missed teaching CheckSelectLocking() to check
groupingSets as well as groupClause, so that it would allow
degenerate grouping sets.  That resulted in a bad plan and
a null-pointer dereference in the executor.
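
A sketch of the now-rejected query (the table name is illustrative):

    SELECT count(*) FROM t
        GROUP BY GROUPING SETS (()) FOR UPDATE;
    -- ERROR:  FOR UPDATE is not allowed with GROUP BY clause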

Looking around for other instances of the same bug, the only one
I found was in examine_simple_variable().  That'd just lead to
silly estimates, but it should be fixed too.

Per private report from Yaoguang Chen.
Back-patch to all supported branches.
2021-06-01 11:12:56 -04:00
Tom Lane 6ee41a301e Fix mis-planning of repeated application of a projection.
create_projection_plan contains a hidden assumption (here made
explicit by an Assert) that a projection-capable Path will yield a
projection-capable Plan.  Unfortunately, that assumption is violated
only a few lines away, by create_projection_plan itself.  This means
that two stacked ProjectionPaths can yield an outcome where we try to
jam the upper path's tlist into a non-projection-capable child node,
resulting in an invalid plan.

There isn't any good reason to have stacked ProjectionPaths; indeed the
whole concept is faulty, since the set of Vars/Aggs/etc needed by the
upper one wouldn't necessarily be available in the output of the lower
one, nor could the lower one create such values if they weren't
available from its input.  Hence, we can fix this by adjusting
create_projection_path to strip any top-level ProjectionPath from the
subpath it's given.  (This amounts to saying "oh, we changed our
minds about what we need to project here".)

The test case added here only fails in v13 and HEAD; before that, we
don't attempt to shove the Sort into the parallel part of the plan,
for reasons that aren't entirely clear to me.  However, all the
directly-related code looks generally the same as far back as v11,
where the hazard was introduced (by d7c19e62a).  So I've got no faith
that the same type of bug doesn't exist in v11 and v12, given the
right test case.  Hence, back-patch the code changes, but not the
irrelevant test case, into those branches.

Per report from Bas Poot.

Discussion: https://postgr.es/m/534fca83789c4a378c7de379e9067d4f@politie.nl
2021-05-31 12:03:00 -04:00
Michael Paquier 12cc956664 Improve some error wording with multirange type parsing
Braces were referred to in some error messages simply as brackets (not curly
brackets or curly braces), which can be confusing, as other types of
brackets could be meant.

While at it, add one test to check for junk characters detected after a
right brace.

Author: Kyotaro Horiguchi
Discussion: https://postgr.es/m/20210514.153153.1814935914483287479.horikyota.ntt@gmail.com
2021-05-31 11:35:00 +09:00
Tom Lane e6241d8e03 Rethink definition of pg_attribute.attcompression.
Redefine '\0' (InvalidCompressionMethod) as meaning "if we need to
compress, use the current setting of default_toast_compression".
This allows '\0' to be a suitable default choice regardless of
datatype, greatly simplifying code paths that initialize tupledescs
and the like.  It seems like a more user-friendly approach as well,
because now the default compression choice doesn't migrate into table
definitions, meaning that changing default_toast_compression is
usually sufficient to flip an installation's behavior; one needn't
tediously issue per-column ALTER SET COMPRESSION commands.
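
A sketch of the resulting behavior (table and column names are
illustrative):

    SET default_toast_compression = 'lz4';
    CREATE TABLE t (a text);    -- attcompression is '\0': follow the GUC
    ALTER TABLE t ALTER COLUMN a SET COMPRESSION pglz;  -- explicit override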

Along the way, fix a few minor bugs and documentation issues
with the per-column-compression feature.  Adopt more robust
APIs for SetIndexStorageProperties and GetAttributeCompression.

Bump catversion because typical contents of attcompression will now
be different.  We could get away without doing that, but it seems
better to ensure v14 installations all agree on this.  (We already
forced initdb for beta2, anyway.)

Discussion: https://postgr.es/m/626613.1621787110@sss.pgh.pa.us
2021-05-27 13:24:27 -04:00
Amit Kapila 0734b0e983 Improve docs and error messages for parallel vacuum.
The error messages, docs, and one of the options were using
'parallel degree' to indicate the parallelism used by the vacuum command.  We
normally use 'parallel workers' elsewhere, so change parallel
vacuum accordingly.

Author: Bharath Rupireddy
Reviewed-by: Dilip Kumar, Amit Kapila
Backpatch-through: 13
Discussion: https://postgr.es/m/CALj2ACWz=PYrrFXVsEKb9J1aiX4raA+UBe02hdRp_zqDkrWUiw@mail.gmail.com
2021-05-25 09:26:53 +05:30
David Rowley cba5c70b95 Fix setrefs.c code for Result Cache nodes
Result Cache, added in 9eacee2e6, neglected to properly adjust the plan
references in setrefs.c.  This could lead to the following error during
EXPLAIN:

ERROR:  cannot decompile join alias var in plan tree

Fix that.

Bug: 17030
Reported-by: Hans Buschmann
Discussion: https://postgr.es/m/17030-5844aecae42fe223@postgresql.org
2021-05-25 12:50:22 +12:00
Tom Lane 4b10074453 Disallow whole-row variables in GENERATED expressions.
This was previously allowed, but I think that was just an oversight.
It's a clear violation of the rule that a generated column cannot
depend on itself or other generated columns.  Moreover, because the
code was relying on the assumption that no such cross-references
exist, it was pretty easy to crash ALTER TABLE and perhaps other
places.  Even if you managed not to crash, you got quite unstable,
implementation-dependent results.

Per report from Vitaly Ustinov.
Back-patch to v12 where GENERATED came in.

Discussion: https://postgr.es/m/CAM_DEiWR2DPT6U4xb-Ehigozzd3n3G37ZB1+867zbsEVtYoJww@mail.gmail.com
2021-05-21 15:12:08 -04:00
Tom Lane 2b0ee126bb Fix usage of "tableoid" in GENERATED expressions.
We consider this supported (though I've got my doubts that it's a
good idea, because tableoid is not immutable).  However, several
code paths failed to fill the field in soon enough, causing such
a GENERATED expression to see zero or the wrong value.  This
occurred when ALTER TABLE adds a new GENERATED column to a table
with existing rows, and during regular INSERT or UPDATE on a
foreign table with GENERATED columns.

Noted during investigation of a report from Vitaly Ustinov.
Back-patch to v12 where GENERATED came in.

Discussion: https://postgr.es/m/CAM_DEiWR2DPT6U4xb-Ehigozzd3n3G37ZB1+867zbsEVtYoJww@mail.gmail.com
2021-05-21 15:02:06 -04:00
Tom Lane 413c1ef98e Avoid creating testtablespace directories where not wanted.
Recently we refactored things so that pg_regress makes the
"testtablespace" subdirectory used by the core regression tests,
instead of doing that in the makefiles.  That had the undesirable
side effect of making such a subdirectory in every directory that
has "input" or "output" test files.  Since these subdirectories
remain empty, git doesn't complain about them, but nonetheless
they're clutter.

To fix, invent an explicit --make-testtablespace-dir switch,
so that pg_regress only makes the subdirectory when explicitly
told to.

Discussion: https://postgr.es/m/2854388.1621284789@sss.pgh.pa.us
2021-05-19 14:04:01 -04:00
Tom Lane def5b065ff Initial pgindent and pgperltidy run for v14.
Also "make reformat-dat-files".

The only change worthy of note is that pgindent messed up the formatting
of launcher.c's struct LogicalRepWorkerId, which led me to notice that
that struct wasn't used at all anymore, so I just took it out.
2021-05-12 13:14:10 -04:00
Tom Lane e135743ef0 Reduce runtime of privileges.sql test under CLOBBER_CACHE_ALWAYS.
Several queries in the privileges regression test cause the planner
to apply the plpgsql function "leak()" to every element of the
histogram for atest12.b.  Since commit 0c882e52a increased the size
of that histogram to 10000 entries, the test invokes that function
over 100000 times, which takes an absolutely unreasonable amount of
time in clobber-cache-always mode.

However, there's no real reason why that has to be a plpgsql
function: for the purposes of this test, all that matters is that
it not be marked leakproof.  So we can replace the plpgsql
implementation with a direct call of int4lt, which has the same
behavior and is orders of magnitude faster.  This is expected to
cut several hours off the buildfarm cycle time for CCA animals.
It has some positive impact in normal builds too, though that's
probably lost in the noise.

Back-patch to v13 where 0c882e52a came in.

Discussion: https://postgr.es/m/575884.1620626638@sss.pgh.pa.us
2021-05-11 20:59:58 -04:00
Tom Lane 1df3555acc Get rid of the separate serial_schedule list of tests.
Having to maintain two lists of regression test scripts has proven
annoyingly error-prone.  We can achieve the effect of the
serial_schedule by running the parallel_schedule with
"--max_connections=1"; so do that and remove serial_schedule.

This causes cosmetic differences in the progress output, but it
doesn't seem worth restructuring pg_regress to avoid that.

Discussion: https://postgr.es/m/899209.1620759506@sss.pgh.pa.us
2021-05-11 17:52:13 -04:00
Tom Lane 6303a57309 Replace opr_sanity test's binary_coercible() function with C code.
opr_sanity's binary_coercible() function has always been meant
to match the parser's notion of binary coercibility, but it also
has always been a rather poor approximation of the parser's
real rules (as embodied in IsBinaryCoercible()).  That hasn't
bitten us so far, but it's predictable that it will eventually.

It also now emerges that implementing this check in plpgsql
performs absolutely horribly in clobber-cache-always testing.
(Perhaps we could do something about that, but I suspect it just
means that plpgsql is exploiting catalog caching to the hilt.)

Hence, let's replace binary_coercible() with a C shim that directly
invokes IsBinaryCoercible(), eliminating both the semantic hazard
and the performance issue.

Most of regress.c's C functions are declared in create_function_1,
but we can't simply move that to before opr_sanity/type_sanity
since those tests would complain about the resulting shell types.
I chose to split it into create_function_0 and create_function_1.
Since create_function_0 now runs as part of a parallel group while
create_function_1 doesn't, reduce the latter to create just those
functions that opr_sanity and type_sanity would whine about.

To make room for create_function_0 in the second parallel group
of tests, move tstypes to the third parallel group.

In passing, clean up some ordering deviations between
parallel_schedule and serial_schedule.

Discussion: https://postgr.es/m/292305.1620503097@sss.pgh.pa.us
2021-05-11 14:28:11 -04:00
Peter Eisentraut 6d177e2813 Fix typo 2021-05-11 09:06:49 +02:00
Tom Lane 049e1e2edb Fix mishandling of resjunk columns in ON CONFLICT ... UPDATE tlists.
It's unusual to have any resjunk columns in an ON CONFLICT ... UPDATE
list, but it can happen when MULTIEXPR_SUBLINK SubPlans are present.
If it happens, the ON CONFLICT UPDATE code path would end up storing
tuples that include the values of the extra resjunk columns.  That's
fairly harmless in the short run, but if new columns are added to
the table then the values would become accessible, possibly leading
to malfunctions if they don't match the datatypes of the new columns.

This had escaped notice through a confluence of missing sanity checks,
including

* There's no cross-check that a tuple presented to heap_insert or
heap_update matches the table rowtype.  While it's difficult to
check that fully at reasonable cost, we can easily add assertions
that there aren't too many columns.

* The output-column-assignment cases in execExprInterp.c lacked
any sanity checks on the output column numbers, which seems like
an oversight considering there are plenty of assertion checks on
input column numbers.  Add assertions there too.

* We failed to apply nodeModifyTable's ExecCheckPlanOutput() to
the ON CONFLICT UPDATE tlist.  That wouldn't have caught this
specific error, since that function is chartered to ignore resjunk
columns; but it sure seems like a bad omission now that we've seen
this bug.

In HEAD, the right way to fix this is to make the processing of
ON CONFLICT UPDATE tlists work the same as regular UPDATE tlists
now do, that is don't add "SET x = x" entries, and use
ExecBuildUpdateProjection to evaluate the tlist and combine it with
old values of the not-set columns.  This adds a little complication
to ExecBuildUpdateProjection, but allows removal of a comparable
amount of now-dead code from the planner.

In the back branches, the most expedient solution seems to be to
(a) use an output slot for the ON CONFLICT UPDATE projection that
actually matches the target table, and then (b) invent a variant of
ExecBuildProjectionInfo that can be told to not store values resulting
from resjunk columns, so it doesn't try to store into nonexistent
columns of the output slot.  (We can't simply ignore the resjunk columns
altogether; they have to be evaluated for MULTIEXPR_SUBLINK to work.)
This works back to v10.  In 9.6, projections work much differently and
we can't cheaply give them such an option.  The 9.6 version of this
patch works by inserting a JunkFilter when it's necessary to get rid
of resjunk columns.

In addition, v11 and up have the reverse problem when trying to
perform ON CONFLICT UPDATE on a partitioned table.  Through a
further oversight, adjust_partition_tlist() discarded resjunk columns
when re-ordering the ON CONFLICT UPDATE tlist to match a partition.
This accidentally prevented the storing-bogus-tuples problem, but
at the cost that MULTIEXPR_SUBLINK cases didn't work, typically
crashing if more than one row has to be updated.  Fix by preserving
resjunk columns in that routine.  (I failed to resist the temptation
to add more assertions there too, and to do some minor code
beautification.)

Per report from Andres Freund.  Back-patch to all supported branches.

Security: CVE-2021-32028
2021-05-10 11:02:29 -04:00
Thomas Munro c2dc19342e Revert recovery prefetching feature.
This set of commits has some bugs with known fixes, but at this late
stage in the release cycle it seems best to revert and resubmit next
time, along with some new automated test coverage for this whole area.

Commits reverted:

dc88460c: Doc: Review for "Optionally prefetch referenced data in recovery."
1d257577: Optionally prefetch referenced data in recovery.
f003d9f8: Add circular WAL decoding buffer.
323cbe7c: Remove read_page callback from XLogReader.

Remove the new GUC group WAL_RECOVERY recently added by a55a9847, as the
corresponding section of config.sgml is now reverted.

Discussion: https://postgr.es/m/CAOuzzgrn7iKnFRsB4MHp3UisEQAGgZMbk_ViTN4HV4-Ksq8zCg%40mail.gmail.com
2021-05-10 16:06:09 +12:00
Peter Eisentraut 9f989a8581 Fix typo 2021-05-07 17:53:34 +02:00
Thomas Munro ec48314708 Revert per-index collation version tracking feature.
Design problems were discovered in the handling of composite types and
record types that would cause some relevant versions not to be recorded.
Misgivings were also expressed about the use of the pg_depend catalog
for this purpose.  We're out of time for this release so we'll revert
and try again.

Commits reverted:

1bf946bd: Doc: Document known problem with Windows collation versions.
cf002008: Remove no-longer-relevant test case.
ef387bed: Fix bogus collation-version-recording logic.
0fb0a050: Hide internal error for pg_collation_actual_version(<bad OID>).
ff942057: Suppress "warning: variable 'collcollate' set but not used".
d50e3b1f: Fix assertion in collation version lookup.
f24b1569: Rethink extraction of collation dependencies.
257836a7: Track collation versions for indexes.
cd6f479e: Add pg_depend.refobjversion.
7d1297df: Remove pg_collation.collversion.

Discussion: https://postgr.es/m/CA%2BhUKGLhj5t1fcjqAu8iD9B3ixJtsTNqyCCD4V0aTO9kAKAjjA%40mail.gmail.com
2021-05-07 21:10:11 +12:00
Alvaro Herrera 6f70d7ca1d
Have ALTER CONSTRAINT recurse on partitioned tables
When ALTER TABLE .. ALTER CONSTRAINT changes deferrability properties
in a partitioned table, we failed to propagate those changes
correctly to partitions and to triggers.  Repair by adding a recursion
mechanism to affect all derived constraints and all derived triggers.
(In particular, recurse to partitions even if their respective parents
are already in the desired state: it is possible for the partitions to
have been altered individually.)  Because foreign keys involve tables on
two sides, we cannot use the standard ALTER TABLE recursion mechanism,
so we invent our own by following pg_constraint.conparentid down.

When ALTER TABLE .. ALTER CONSTRAINT is invoked on the derived
pg_constraint object that's automatically created in a partition as a
result of a constraint added to its parent, raise an error instead of
pretending to work and then failing to modify all the affected triggers.
Before this commit, such a command would be allowed but would fail to
affect all triggers, so it would silently misbehave.  (Restoring dumps of
existing databases is not affected, because pg_dump does not produce
anything for such a derived constraint anyway.)

Add some tests for the case.

Backpatch to 11, where foreign key support was added to partitioned
tables by commit 3de241dba8.  (A related change is commit f56f8f8da6
in pg12 which added support for FKs *referencing* partitioned tables;
this is what forces us to use an ad-hoc recursion mechanism for this.)

Diagnosed by Tom Lane from bug report from Ron L Johnson.  As of this
writing, no reviews were offered.

Discussion: https://postgr.es/m/75fe0761-a291-86a9-c8d8-4906da077469@gmail.com
Discussion: https://postgr.es/m/3144850.1607369633@sss.pgh.pa.us
2021-05-05 12:21:50 -04:00
Peter Eisentraut a970edbed3 Fix ALTER TABLE / INHERIT with generated columns
When running ALTER TABLE t2 INHERIT t1, we must check that columns in
t2 that correspond to a generated column in t1 are also generated and
have the same generation expression.  Otherwise, this would allow
creating setups that a normal CREATE TABLE sequence would not allow.
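
A sketch of a setup that is now rejected (table names are illustrative):

    CREATE TABLE t1 (a int, b int GENERATED ALWAYS AS (a * 2) STORED);
    CREATE TABLE t2 (a int, b int);
    ALTER TABLE t2 INHERIT t1;    -- now fails: t2.b is not generated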

Discussion: https://www.postgresql.org/message-id/22de27f6-7096-8d96-4619-7b882932ca25@2ndquadrant.com
2021-05-04 12:09:08 +02:00
Tom Lane f68970e33f Fix performance issue in new regex match-all detection code.
Commit 824bf7190 introduced a new search of the NFAs generated by
regex compilation.  I failed to think hard about the performance
characteristics of that search, with the predictable outcome
that it's bad: weird regexes can trigger exponential search time.
Worse, there's no check-for-interrupt in that code, so you can't
even cancel the query if this happens.

Fix by introducing memoization of the search results, so that any one
NFA state need be examined in detail just once.  This potentially uses
a lot of memory, but we can bound the memory usage by putting a limit
on the number of states for which we'll try to prove match-all-ness.
That is sane because we already have a limit (DUPINF) on the maximum
finite string length that a matchall regex can match; and patterns
that involve much more than DUPINF states would probably exceed that
limit anyway.

Also, rearrange the logic so that we check the basic is-the-graph-
all-RAINBOW-arcs property before we start the recursive search to
determine path lengths.  This will ensure that we fall out quickly
whenever the NFA couldn't possibly be matchall.

Also stick in a check-for-interrupt, just in case these measures
don't completely eliminate the risk of slowness.

Discussion: https://postgr.es/m/3483895.1619898362@sss.pgh.pa.us
2021-05-03 11:42:31 -04:00
Alexander Korotkov eb086056fe Make websearch_to_tsquery() parse text in quotes as a single token
websearch_to_tsquery() splits text in quotes into tokens and connects them with
the phrase operator on its own.  However, that leads to surprising results when the
token contains no words.

For instance, websearch_to_tsquery('"aaa: bbb"') is 'aaa <2> bbb', because
it is equivalent to to_tsquery(E'aaa <-> \':\' <-> bbb').  But
websearch_to_tsquery('"aaa: bbb"') has to be 'aaa <-> bbb' in order to match
to_tsvector('aaa: bbb').

Since 0c4f355c6a, we connect lexemes of complex tokens with phrase
operators anyway.  Thus, let's just have websearch_to_tsquery() parse text in
quotes as a single token, processing the quoted text in the same way
phraseto_tsquery() does.  This solution is exactly what we need and also
simplifies the code.

This commit is an incompatible change, so we don't backpatch it.

Reported-by: Valentin Gatien-Baron
Discussion: https://postgr.es/m/CA%2B0DEqiZs7gdOd4ikmg%3D0UWG%2BSwWOLxPsk_JW-sx9WNOyrb0KQ%40mail.gmail.com
Author: Alexander Korotkov
Reviewed-by: Tom Lane, Zhihong Yu
2021-05-03 04:18:19 +03:00
Bruce Momjian 651d005e76 Revert use singular for -1 (commits 9ee7d533da and 5da9868ed9)
Turns out you can specify negative values using plurals:

	https://english.stackexchange.com/questions/9735/is-1-followed-by-a-singular-or-plural-noun

so the previous code was correct enough, and consistent with other usage
in our code.  Also add comments in the two places where this could be
confusing.

Reported-by: Noah Misch

Discussion: https://postgr.es/m/20210425115726.GA2353095@rfd.leadboat.com
2021-05-01 10:42:44 -04:00
David Rowley 3c80e96dff Adjust EXPLAIN output for parallel Result Cache plans
Here we adjust the EXPLAIN ANALYZE output for Result Cache so that we
don't show any Result Cache stats for parallel workers who don't
contribute anything to Result Cache plan nodes.

I originally had ideas that workers who don't help could still have their
Result Cache stats displayed.  The idea was that I could
write some parallel Result Cache regression tests that show the EXPLAIN
ANALYZE output.  However, I realized a little too late that such tests
would just not run in a stable way on the buildfarm.

With that knowledge, before 9eacee2e6 went in, I had removed all of the
tests that were showing the EXPLAIN ANALYZE output of a parallel Result
Cache plan, however, I forgot to put back the code that adjusts the
EXPLAIN output to hide the Result Cache stats for parallel workers who
were not fast enough to help out before query execution was over. All
other nodes behave this way and so should Result Cache.

Additionally, with this change, it now seems safe enough to remove the SET
force_parallel_mode = off that I had added to the regression tests.

Also, perform some cleanup in the partition_prune tests. I had adjusted
the explain_parallel_append() function to sanitize the Result Cache
EXPLAIN ANALYZE output.  However, since I didn't actually include any
parallel Result Cache tests that show their EXPLAIN ANALYZE output, that
code does nothing and can be removed.

In passing, move the setting of memPeakKb into the scope where it's used.

Reported-by: Amit Khandekar
Author: David Rowley, Amit Khandekar
Discussion: https://postgr.es/m/CAJ3gD9d8SkfY95GpM1zmsOtX2-Ogx5q-WLsf8f0ykEb0hCRK3w@mail.gmail.com
2021-04-30 14:46:42 +12:00
Michael Paquier f7aab36d61 Fix pg_identify_object_as_address() with event triggers
Attempting to use this function with event triggers failed because,
since its introduction in a676201, this code has never associated an object
name with event triggers.  This addresses the failure by adding the
event trigger name to the set defining its object address.

Note that regression tests are added within event_trigger and not
object_address to avoid issues with concurrent connections in parallel
schedules.

Author: Joel Jacobson
Discussion: https://postgr.es/m/3c905e77-a026-46ae-8835-c3f6cd1d24c8@www.fastmail.com
Backpatch-through: 9.6
2021-04-28 11:17:58 +09:00
Amit Kapila 3fa17d3771 Use HTAB for replication slot statistics.
Previously, we used an array of size max_replication_slots to
store stats for replication slots. But that had two problems in the cases
where a message for dropping a slot gets lost: 1) the stats for the new
slot are not recorded if the array is full, and 2) we could write beyond
the end of the array if the user reduces max_replication_slots.

This commit uses HTAB for replication slot statistics, resolving both
problems. Now, pgstat_vacuum_stat() searches for all the dead replication
slots in the stats hashtable and tells the collector to remove them. To avoid
showing the stats for the already-dropped slots, pg_stat_replication_slots
view searches slot stats by the slot name taken from pg_replication_slots.

Also, we send a message for creating a slot at slot creation, initializing
the stats. This reduces the possibility that the stats are accumulated
into the old slot stats when a message for dropping a slot gets lost.

Reported-by: Andres Freund
Author: Sawada Masahiko, test case by Vignesh C
Reviewed-by: Amit Kapila, Vignesh C, Dilip Kumar
Discussion: https://postgr.es/m/20210319185247.ldebgpdaxsowiflw@alap3.anarazel.de
2021-04-27 09:09:11 +05:30
Tom Lane d479d00285 Don't crash on reference to an un-available system column.
Adopt a more consistent policy about what slot-type-specific
getsysattr functions should do when system attributes are not
available.  To wit, they should all throw the same user-oriented
error, rather than variously crashing or emitting developer-oriented
messages.

This closes an identifiable problem in commits a71cfc56b and
3fb93103a (in v13 and v12), so back-patch into those branches,
along with a test case to try to ensure we don't break it again.
It is not known that any of the former crash cases are reachable
in HEAD, but this seems like a good safety improvement in any case.

Discussion: https://postgr.es/m/141051591267657@mail.yandex.ru
2021-04-22 17:30:55 -04:00
Alvaro Herrera 7b357cc6ae
Don't add a redundant constraint when detaching a partition
On ALTER TABLE .. DETACH CONCURRENTLY, we add a new table constraint
that duplicates the partition constraint.  But if the partition already
has another constraint that implies that one, then that's unnecessary.
We were already avoiding the addition of a duplicate constraint if there
was an exact 'equal' match -- this just improves the quality of the check.

Author: Justin Pryzby <pryzby@telsasoft.com>
Reviewed-by: Álvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://postgr.es/m/20210410184226.GY6592@telsasoft.com
2021-04-21 18:12:05 -04:00
Peter Eisentraut 544b28088f doc: Improve hyphenation consistency 2021-04-21 08:14:43 +02:00
Bruce Momjian 9660834dd8 adjust query id feature to use pg_stat_activity.query_id
Previously, it was pg_stat_activity.queryid to match the
pg_stat_statements queryid column.  This is an adjustment to patch
4f0b0966c8.  This also adjusts some of the internal function calls to
match.  Catversion bumped.

Reported-by: Álvaro Herrera, Julien Rouhaud

Discussion: https://postgr.es/m/20210408032704.GA7498@alvherre.pgsql
2021-04-20 12:22:26 -04:00
Tom Lane 3753982441 Fix planner failure in some cases of sorting by an aggregate.
An oversight introduced by the incremental-sort patches caused
"could not find pathkey item to sort" errors in some situations
where a sort key involves an aggregate or window function.

The basic problem here is that find_em_expr_usable_for_sorting_rel
isn't properly modeling what prepare_sort_from_pathkeys will do
later.  Rather than hoping we can keep those functions in sync,
let's refactor so that they actually share the code for
identifying a suitable sort expression.

With this refactoring, tlist.c's tlist_member_ignore_relabel
is unused.  I removed it in HEAD but left it in place in v13,
in case any extensions are using it.

Per report from Luc Vlaming.  Back-patch to v13 where the
problem arose.

James Coleman and Tom Lane

Discussion: https://postgr.es/m/91f3ec99-85a4-fa55-ea74-33f85a5c651f@swarm64.com
2021-04-20 11:32:02 -04:00
Tom Lane f24b156997 Rethink extraction of collation dependencies.
As it stands, find_expr_references_walker() pays attention to leaf-node
collation fields while ignoring the input collations of actual function
and operator nodes.  That seems exactly backwards from a semantic
standpoint, and it leads to reporting dependencies on collations that
really have nothing to do with the expression's behavior.

Hence, rewrite to look at function input collations instead.  This
isn't completely perfect either; it fails to account for the behavior
of record_eq and its siblings.  (The previous coding at least gave an
approximation of that, though I think it could be fooled pretty easily
into considering the columns of irrelevant composite types.)  We may
be able to improve on this later, but for now this should satisfy the
buildfarm members that didn't like ef387bed8.

In passing fix some oversights in GetTypeCollations(), and get
rid of its duplicative de-duplications.  (I'm worried that it's
still potentially O(N^2) or worse, but this makes it a little
better.)

Discussion: https://postgr.es/m/3564817.1618420687@sss.pgh.pa.us
2021-04-16 22:23:46 -04:00
Tom Lane ef387bed87 Fix bogus collation-version-recording logic.
recordMultipleDependencies had the wrong scope for its "version"
variable, allowing a version label to leak from the collation entry it
was meant for to subsequent non-collation entries.  This is relatively
hard to trigger because of the OID-descending order that the inputs
will normally arrive in: subsequent non-collation items will tend to
be pinned.  But it can be exhibited easily with a custom collation.

Also, don't special-case the default collation, but instead ignore
pinned-ness of a collation when we've found a version for it.  This
avoids creating useless pg_depend entries, and removes a not-very-
future-proof assumption that C, POSIX, and DEFAULT are the only
pinned collations.

A small problem is that, because the default collation may or may
not have a version, the regression tests can't assume anything about
whether dependency entries will be made for it.  This seems OK though
since it's now handled just the same as other collations, and we have
test cases for both versioned and unversioned collations.

Fixes oversights in commit 257836a75.  Thanks to Julien Rouhaud
for review.

Discussion: https://postgr.es/m/3564817.1618420687@sss.pgh.pa.us
2021-04-16 12:26:50 -04:00
Peter Eisentraut 25593d7d33 psql: Small fixes for better translatability 2021-04-16 11:05:58 +02:00
Amit Kapila f5fc2f5b23 Add information of total data processed to replication slot stats.
This adds statistics about the total transaction count and total amount of
transaction data logically sent to the decoding output plugin from
ReorderBuffer. Users can query the pg_stat_replication_slots view to check
these stats.

Suggested-by: Andres Freund
Author: Vignesh C and Amit Kapila
Reviewed-by: Sawada Masahiko, Amit Kapila
Discussion: https://postgr.es/m/20210319185247.ldebgpdaxsowiflw@alap3.anarazel.de
2021-04-16 07:34:43 +05:30
Tom Lane 409723365b Provide query source text when parsing a SQL-standard function body.
Without this, we lose error cursor positions, as shown in the
modified regression test result.

Discussion: https://postgr.es/m/2197698.1617984583@sss.pgh.pa.us
2021-04-15 17:24:12 -04:00
Tom Lane 1111b2668d Undo decision to allow pg_proc.prosrc to be NULL.
Commit e717a9a18 changed the longstanding rule that prosrc is NOT NULL
because when a SQL-language function is written in SQL-standard style,
we don't currently have anything useful to put there.  This seems a poor
decision though, as it could easily have negative impacts on external
PLs (opening them to crashes they didn't use to have, for instance).
SQL-function-related code can just as easily test "is prosqlbody not
null" as "is prosrc null", so there's no real gain there either.
Hence, revert the NOT NULL marking removal and adjust related logic.

For now, we just put an empty string into prosrc for SQL-standard
functions.  Maybe we'll have a better idea later, although the
history of things like pg_attrdef.adsrc suggests that it's not
easy to maintain a string equivalent of a node tree.

This also adds an assertion that queryDesc->sourceText != NULL
to standard_ExecutorStart.  We'd been silently relying on that
for awhile, so let's make it less silent.

Also fix some overlooked documentation and test cases.

Discussion: https://postgr.es/m/2197698.1617984583@sss.pgh.pa.us
2021-04-15 17:17:20 -04:00
Tom Lane 3157cbe974 Stabilize recently-added information_schema test queries.
These queries could show unexpected entries if the core system,
or concurrently-running test scripts, created any functions that
would appear in the information_schema views.  Restrict them
to showing functions belonging to this test's schema, as the
far-older nearby test case does.

Per experimentation with conversion of some built-in functions
to SQL-function-body style.
2021-04-15 16:31:44 -04:00
Peter Eisentraut fae65629ce Revert "psql: Show all query results by default"
This reverts commit 3a51306722.

Per discussion, this patch had too many issues to resolve at this
point of the development cycle.  We'll try again in the future.

Discussion: https://www.postgresql.org/message-id/flat/alpine.DEB.2.21.1904132231510.8961@lancre
2021-04-15 19:42:55 +02:00
Tom Lane 6c0373ab77 Allow table-qualified variable names in ON CONFLICT ... WHERE.
Previously you could only use unqualified variable names here.
While that's not a functional deficiency, since only the target
table can be referenced, it's a surprising inconsistency with the
rules for partial-index predicates, on which this syntax is
supposedly modeled.
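
A sketch of the newly-accepted spelling (names are illustrative):

    CREATE TABLE t (id int, is_active bool);
    CREATE UNIQUE INDEX ON t (id) WHERE is_active;

    INSERT INTO t VALUES (1, true)
        ON CONFLICT (id) WHERE t.is_active DO NOTHING;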

The fix for that is no harder than passing addToRelNameSpace = true
to addNSItemToQuery.  However, it's really pretty bogus for
transformOnConflictArbiter and transformOnConflictClause to be
messing with the namespace item for the target table at all.
It's not theirs to manage, it results in duplicative creations of
namespace items, and transformOnConflictClause wasn't even doing
it quite correctly (that coding resulted in two nsitems for the
target table, since it hadn't cleaned out the existing one).
Hence, make transformInsertStmt responsible for setting up the
target nsitem once for both these clauses and RETURNING.

Also, arrange for ON CONFLICT ... UPDATE's "excluded" pseudo-relation
to be added to the rangetable before we run transformOnConflictArbiter.
This produces a more helpful HINT if someone writes "excluded.col"
in the arbiter expression.

Per bug #16958 from Lukas Eder.  Although I agree this is a bug,
the consequences are hardly severe, so no back-patch.

Discussion: https://postgr.es/m/16958-963f638020de271c@postgresql.org
2021-04-13 15:39:41 -04:00
Tom Lane cf0020080a Remove no-longer-relevant test case.
collate.icu.utf8.sql was exercising the recording of a collation
dependency for an enum comparison expression, but such an expression
should never have had any collation dependency in the first place.
After I fixed that in commit c402b02b9, the test started failing.
We don't need to test that scenario anymore, so just remove the
now-useless test steps.

(This test case is new in HEAD, so no need to back-patch.)

Discussion: https://postgr.es/m/3044030.1618261159@sss.pgh.pa.us
Discussion: https://postgr.es/m/HK0PR01MB22744393C474D503E16C8509F4709@HK0PR01MB2274.apcprd01.prod.exchangelabs.com
2021-04-12 18:58:20 -04:00
Tom Lane c402b02b9f Fix old bug with coercing the result of a COLLATE expression.
There are hacks in parse_coerce.c to push down a requested coercion
to below any CollateExpr that may appear.  However, we did that even
if the requested data type is non-collatable, leading to an invalid
expression tree in which CollateExpr is applied to a non-collatable
type.  The fix is just to drop the CollateExpr altogether, reasoning
that it's useless.

This bug is ten years old, dating to the original addition of
COLLATE support.  The lack of field complaints suggests that there
aren't a lot of user-visible consequences.  We noticed the problem
because it would trigger an assertion in DefineVirtualRelation if
the invalid structure appears as an output column of a view; however,
in a non-assert build, you don't see a crash, just a (subtly incorrect)
complaint about applying collation to a non-collatable type.  I found
that by putting the incorrect structure further down in a view, I could
make a view definition that would fail dump/reload, per the added
regression test case.  But CollateExpr doesn't do anything at run-time,
so this likely doesn't lead to any really exciting consequences.

Per report from Yulin Pei.  Back-patch to all supported branches.

Discussion: https://postgr.es/m/HK0PR01MB22744393C474D503E16C8509F4709@HK0PR01MB2274.apcprd01.prod.exchangelabs.com
2021-04-12 14:37:49 -04:00
Michael Paquier 7a3972597f Fix out-of-bound memory access for interval -> char conversion
Using Roman numerals (via "RM" or "rm") in a conversion to calculate a
number of months has never considered the case of negative numbers,
where the conversion could easily cause out-of-bounds memory accesses.  The
conversions themselves were not completely consistent either, as
specifying 12 would result in NULL, but it should mean XII.

This commit reworks the conversion calculation to have a more
consistent behavior:
- If the number of months and years is 0, return NULL.
- If the number of months is positive, return the exact month number.
- If the number of months is negative, do a backward calculation, with
-1 meaning December, -2 November, etc.
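
For example, under the rules above (exact blank-padding of the output
aside):

    SELECT to_char(interval '5 months', 'RM');     -- V
    SELECT to_char(interval '-1 months', 'RM');    -- XII (December)
    SELECT to_char(interval '-2 months', 'RM');    -- XI (November)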

Reported-by: Theodor Arsenij Larionov-Trichkin
Author: Julien Rouhaud
Discussion: https://postgr.es/m/16953-f255a18f8c51f1d5@postgresql.org
backpatch-through: 9.6
2021-04-12 11:30:50 +09:00
Peter Eisentraut 496e58bb0e Improve behavior of date_bin with origin in the future
Previously, when the origin was after the input, the result was the
timestamp at the end of the bin, rather than the beginning as
expected.  This change puts the result consistently at the beginning of the
bin.
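
For example, with an origin in the future (the values are illustrative):

    SELECT date_bin('15 minutes',
                    timestamp '2020-02-11 15:44:17',
                    timestamp '2024-01-01 00:00:00');
    -- now 2020-02-11 15:30:00 (start of the bin), not 15:45:00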

Author: John Naylor <john.naylor@enterprisedb.com>
Discussion: https://www.postgresql.org/message-id/CAFBsxsGjLDxQofRfH+d4KSAXxPf3MMevUG7s6EDfdBOvHLDLjw@mail.gmail.com
2021-04-10 19:33:46 +02:00
Fujii Masao 8ff1c94649 Allow TRUNCATE command to truncate foreign tables.
This commit introduces a new foreign data wrapper API for TRUNCATE.
It extends the TRUNCATE command so that it accepts foreign tables as
targets to truncate and invokes that API. It also extends postgres_fdw
so that it can issue TRUNCATE commands to foreign servers, by adding a
new routine for that TRUNCATE API.

The information about options specified in the TRUNCATE command, e.g.,
ONLY, CASCADE, etc., is passed to the FDW via the API, as is the list of
foreign tables to truncate. The FDW truncates the foreign data sources
that the passed foreign tables specify, based on that information.
For example, postgres_fdw constructs a TRUNCATE command from them
and issues it to the foreign server.
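
A sketch, assuming ft1 and ft2 are postgres_fdw foreign tables on the
same server (so the FDW routine is invoked once):

    TRUNCATE ft1, ft2 CASCADE;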

For performance, the TRUNCATE command invokes the FDW routine for
TRUNCATE once per foreign server to which the foreign tables being
truncated belong.

Author: Kazutaka Onishi, Kohei KaiGai, slightly modified by Fujii Masao
Reviewed-by: Bharath Rupireddy, Michael Paquier, Zhihong Yu, Alvaro Herrera, Stephen Frost, Ashutosh Bapat, Amit Langote, Daniel Gustafsson, Ibrar Ahmed, Fujii Masao
Discussion: https://postgr.es/m/CAOP8fzb_gkReLput7OvOK+8NHgw-RKqNv59vem7=524krQTcWA@mail.gmail.com
Discussion: https://postgr.es/m/CAJuF6cMWDDqU-vn_knZgma+2GMaout68YUgn1uyDnexRhqqM5Q@mail.gmail.com
2021-04-08 20:56:08 +09:00
David Rowley 50e17ad281 Speedup ScalarArrayOpExpr evaluation
ScalarArrayOpExprs with "useOr=true" and a set of Consts on the righthand
side have traditionally been evaluated by using a linear search over the
array.  When these arrays contain large numbers of elements then this
linear search could become a significant part of execution time.

Here we add a new method of evaluating ScalarArrayOpExpr expressions to
allow them to be evaluated by first building a hash table containing each
element, then on subsequent evaluations, we just probe that hash table to
determine if there is a match.

The planner is in charge of determining when this optimization is possible
and it enables it by setting hashfuncid in the ScalarArrayOpExpr.  The
executor will only perform the hash table evaluation when the hashfuncid
is set.

This means that not all cases are optimized.  For example, CHECK constraints
containing an IN clause won't go through the planner, so won't get the
hashfuncid set.  We could maybe do something about that at some later
date.  The reason we're not doing it now is from fear that we may slow
down cases where the expression is evaluated only once.  Those cases can
be common, for example, a single row INSERT to a table with a CHECK
constraint containing an IN clause.

In the planner, we enable this when there are suitable hash functions for
the ScalarArrayOpExpr's operator and only when there are at least
MIN_ARRAY_SIZE_FOR_HASHED_SAOP elements in the array.  The threshold is
currently set to 9.
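
A sketch of a query that can now use the hashed strategy, assuming a
hashable operator (table and column names are illustrative):

    EXPLAIN SELECT * FROM t
        WHERE x IN (1, 2, 3, 4, 5, 6, 7, 8, 9, 10);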

Author: James Coleman, David Rowley
Reviewed-by: David Rowley, Tomas Vondra, Heikki Linnakangas
Discussion: https://postgr.es/m/CAAaqYe8x62+=wn0zvNKCj55tPpg-JBHzhZFFc6ANovdqFw7-dA@mail.gmail.com
2021-04-08 23:51:22 +12:00
Thomas Munro 1d257577e0 Optionally prefetch referenced data in recovery.
Introduce a new GUC recovery_prefetch, disabled by default.  When
enabled, look ahead in the WAL and try to initiate asynchronous reading
of referenced data blocks that are not yet cached in our buffer pool.
For now, this is done with posix_fadvise(), which has several caveats.
Better mechanisms will follow in later work on the I/O subsystem.

The GUC maintenance_io_concurrency is used to limit the number of
concurrent I/Os we allow ourselves to initiate, based on pessimistic
heuristics used to infer that I/Os have begun and completed.

The GUC wal_decode_buffer_size is used to limit the maximum distance we
are prepared to read ahead in the WAL to find uncached blocks.

Reviewed-by: Alvaro Herrera <alvherre@2ndquadrant.com> (parts)
Reviewed-by: Andres Freund <andres@anarazel.de> (parts)
Reviewed-by: Tomas Vondra <tomas.vondra@2ndquadrant.com> (parts)
Tested-by: Tomas Vondra <tomas.vondra@2ndquadrant.com>
Tested-by: Jakub Wartak <Jakub.Wartak@tomtom.com>
Tested-by: Dmitry Dolgov <9erthalion6@gmail.com>
Tested-by: Sait Talha Nisanci <Sait.Nisanci@microsoft.com>
Discussion: https://postgr.es/m/CA%2BhUKGJ4VJN8ttxScUFM8dOKX0BrBiboo5uz1cq%3DAovOddfHpA%40mail.gmail.com
2021-04-08 23:20:42 +12:00
Peter Eisentraut 7e3c54168d Add ORDER BY to some regression test queries
Apparently, an unrelated patch introduced some variation on the build
farm.

Reported-by: Magnus Hagander <magnus@hagander.net>
2021-04-08 12:20:11 +02:00
Alvaro Herrera 0827e8af70
autovacuum: handle analyze for partitioned tables
Previously, autovacuum would completely ignore partitioned tables, which
is not good with regard to analyze -- failing to analyze those tables means
poor plans may be chosen.  Make autovacuum aware of those tables by
propagating "changes since analyze" counts from the leaf partitions up
the partitioning hierarchy.

This also introduces necessary reloptions support for partitioned tables
(autovacuum_enabled, autovacuum_analyze_scale_factor,
autovacuum_analyze_threshold).  It's unclear how best to document this
aspect.

Author: Yuzuko Hosoya <yuzukohosoya@gmail.com>
Reviewed-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Reviewed-by: Tomas Vondra <tomas.vondra@enterprisedb.com>
Reviewed-by: Álvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://postgr.es/m/CAKkQ508_PwVgwJyBY=0Lmkz90j8CmWNPUxgHvCUwGhMrouz6UA@mail.gmail.com
2021-04-08 01:19:36 -04:00
Tom Lane a3027e1e7f Allow psql's \df and \do commands to specify argument types.
When dealing with overloaded function or operator names, having
to look through a long list of matches is tedious.  Let's extend
these commands to allow specification of (input) argument types
to let such results be trimmed down.  Each additional argument
is treated the same as the pattern argument of \dT and matched
against the appropriate argument's type name.

While at it, fix \dT (and these new options) to recognize the
usual notation of "foo[]" for "the array type over foo", and
to handle the special abbreviations allowed by the backend
grammar, such as "int" for "integer".
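
For example, \df sum int trims the listing to the sum() variants whose
argument matches integer, and \dT int[] shows the array type over integer
(the patterns are illustrative):

    \df sum int
    \dT int[]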

Greg Sabino Mullane, revised rather significantly by me

Discussion: https://postgr.es/m/CAKAnmmLF9Hhu02N+s7uAyLc5J1xZReg72HQUoiKhNiJV3_jACQ@mail.gmail.com
2021-04-07 23:02:21 -04:00
Bruce Momjian bc70728693 Fix regression test failure caused by commit 4f0b0966c8
The query originally used was too simple, causing explain_filter() to be
unable to remove the JIT output text.

Reported-by: Tom Lane

Author: Julien Rouhaud
2021-04-07 18:14:46 -04:00
Peter Eisentraut e717a9a18b SQL-standard function body
This adds support for writing CREATE FUNCTION and CREATE PROCEDURE
statements for language SQL with a function body that conforms to the
SQL standard and is portable to other implementations.

Instead of the PostgreSQL-specific AS $$ string literal $$ syntax,
this allows writing out the SQL statements making up the body
unquoted, either as a single statement:

    CREATE FUNCTION add(a integer, b integer) RETURNS integer
        LANGUAGE SQL
        RETURN a + b;

or as a block

    CREATE PROCEDURE insert_data(a integer, b integer)
    LANGUAGE SQL
    BEGIN ATOMIC
      INSERT INTO tbl VALUES (a);
      INSERT INTO tbl VALUES (b);
    END;

The function body is parsed at function definition time and stored as
expression nodes in a new pg_proc column prosqlbody.  So at run time,
no further parsing is required.

However, this form does not support polymorphic arguments, because
there is no more parse analysis done at call time.

Dependencies between the function and the objects it uses are fully
tracked.

A new RETURN statement is introduced.  This can only be used inside
function bodies.  Internally, it is treated much like a SELECT
statement.

psql needs some new intelligence to keep track of function body
boundaries so that it doesn't send off statements when it sees
semicolons that are inside a function body.

Tested-by: Jaime Casanova <jcasanov@systemguards.com.ec>
Reviewed-by: Julien Rouhaud <rjuju123@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/1c11f1eb-f00c-43b7-799d-2d44132c02d7@2ndquadrant.com
2021-04-07 21:47:55 +02:00
Bruce Momjian 4f0b0966c8 Make use of in-core query id added by commit 5fd9dfa5f5
Use the in-core query id computation for pg_stat_activity,
log_line_prefix, and EXPLAIN VERBOSE.

Similar to other fields in pg_stat_activity, only the queryid of
top-level statements is exposed, and if the backend's status isn't
active then the queryid of the last executed statement is displayed.

Add a %Q placeholder to include the queryid in log_line_prefix, which
will also only expose top level statements.

For EXPLAIN VERBOSE, if a query identifier has been computed, either by
enabling compute_query_id or using a third-party module, display it.

Bump catalog version.

Discussion: https://postgr.es/m/20210407125726.tkvjdbw76hxnpwfi@nol

Author: Julien Rouhaud

Reviewed-by: Alvaro Herrera, Nitin Jadhav, Zhihong Yu
2021-04-07 14:04:06 -04:00
Tom Lane 3db826bd55 Tighten up allowed names for custom GUC parameters.
Formerly we were pretty lax about what a custom GUC's name could
be; so long as it had at least one dot in it, we'd take it.
However, corner cases such as dashes or equal signs in the name
would cause various bits of functionality to misbehave.  Rather
than trying to make the world perfectly safe for that, let's
just require that custom names look like "identifier.identifier",
where "identifier" means something that scan.l would accept
without double quotes.

Along the way, this patch refactors things slightly in guc.c
so that find_option() is responsible for reporting GUC-not-found
cases, allowing removal of duplicative code from its callers.

Per report from Hubert Depesz Lubaczewski.  No back-patch,
since the consequences of the problem don't seem to warrant
changing behavior in stable branches.

Discussion: https://postgr.es/m/951335.1612910077@sss.pgh.pa.us
2021-04-07 11:22:22 -04:00
Peter Eisentraut dd13ad9d39 Fix use of cursor sensitivity terminology
Documentation and comments in code and tests have been using the terms
sensitive/insensitive cursor incorrectly relative to the SQL standard.
(Cursor sensitivity is only relevant for changes made in the same
transaction as the cursor, not for concurrent changes in other
sessions.)  Moreover, some of the behavior of PostgreSQL is incorrect
according to the SQL standard, confusing the issue further.  (WHERE
CURRENT OF changes are not visible in insensitive cursors, but they
should be.)

This change corrects the terminology and removes the claim that
sensitive cursors are supported.  It also adds a test case that checks
the insensitive behavior in a "correct" way, using a change command
not using WHERE CURRENT OF.  Finally, it adds the ASENSITIVE cursor
option to select the default asensitive behavior, per SQL standard.

There are no changes to cursor behavior in this patch.

Discussion: https://www.postgresql.org/message-id/flat/96ee8b30-9889-9e1b-b053-90e10c050e85%40enterprisedb.com
2021-04-07 08:05:55 +02:00
Tom Lane c5b7ba4e67 Postpone some stuff out of ExecInitModifyTable.
Arrange to do some things on-demand, rather than immediately during
executor startup, because there's a fair chance of never having to do
them at all:

* Don't open result relations' indexes until needed.

* Don't initialize partition tuple routing, nor the child-to-root
tuple conversion map, until needed.

This wins in UPDATEs on partitioned tables when only some of the
partitions will actually receive updates; with larger partition
counts the savings is quite noticeable.  Also, we can remove some
sketchy heuristics in ExecInitModifyTable about whether to set up
tuple routing.

Also, remove execPartition.c's private hash table tracking which
partitions were already opened by the ModifyTable node.  Instead
use the hash added to ModifyTable itself by commit 86dc90056.

To allow lazy computation of the conversion maps, we now set
ri_RootResultRelInfo in all child ResultRelInfos.  We formerly set it
only in some, not terribly well-defined, cases.  This has user-visible
side effects in that now more error messages refer to the root
relation instead of some partition (and provide error data in the
root's column order, too).  It looks to me like this is a strict
improvement in consistency, so I don't have a problem with the
output changes visible in this commit.

Extracted from a larger patch, which seemed to me to be too messy
to push in one commit.

Amit Langote, reviewed at different times by Heikki Linnakangas and
myself

Discussion: https://postgr.es/m/CA+HiwqG7ZruBmmih3wPsBZ4s0H2EhywrnXEduckY5Hr3fWzPWA@mail.gmail.com
2021-04-06 15:57:11 -04:00
Andres Freund 90c885cdab Increment xactCompletionCount during subtransaction abort.
Snapshot caching, introduced in 623a9ba79b, did not increment
xactCompletionCount during subtransaction abort. That could lead to an older
snapshot being reused. That is, at least as far as I can see, not a
correctness issue (for MVCC snapshots there's no difference between "in
progress" and "aborted"). The only difference between the old and new
snapshots would be a newer ->xmax.

While HeapTupleSatisfiesMVCC makes the same visibility determination, reusing
the old snapshot leads HeapTupleSatisfiesMVCC to not set
HEAP_XMIN_INVALID. Which subsequently causes the kill_prior_tuple optimization
to not kick in (via HeapTupleIsSurelyDead() returning false). The performance
effects of doing the same index-lookups over and over again is how the issue
was discovered...

Fix the issue by incrementing xactCompletionCount in
XidCacheRemoveRunningXids. It already acquires ProcArrayLock exclusively,
making that an easy proposition.

Add a test to ensure that kill_prior_tuple prevents index growth when it
involves aborted subtransaction of the current transaction.

Author: Andres Freund
Discussion: https://postgr.es/m/20210406043521.lopeo7bbigad3n6t@alap3.anarazel.de
Discussion: https://postgr.es/m/20210317055718.v6qs3ltzrformqoa%40alap3.anarazel.de
2021-04-06 09:24:50 -07:00
Peter Eisentraut 3a51306722 psql: Show all query results by default
Previously, psql printed only the last result if a command string
returned multiple result sets.  Now it prints all of them.  The
previous behavior can be obtained by setting the psql variable
SHOW_ALL_RESULTS to off.

Author: Fabien COELHO <coelho@cri.ensmp.fr>
Reviewed-by: "Iwata, Aya" <iwata.aya@jp.fujitsu.com>
Reviewed-by: Daniel Verite <daniel@manitou-mail.org>
Reviewed-by: Peter Eisentraut <peter.eisentraut@2ndquadrant.com>
Reviewed-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Reviewed-by: vignesh C <vignesh21@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/alpine.DEB.2.21.1904132231510.8961@lancre
2021-04-06 17:10:24 +02:00
Tomas Vondra 518442c7f3 Fix handling of clauses incompatible with extended statistics
Handling of incompatible clauses while applying extended statistics was
a bit confused - while handling a mix of compatible and incompatible
clauses it sometimes incorrectly treated the incompatible clauses as
compatible, resulting in a crash.

Fixed by reworking the code applying the selected statistics object to
make it easier to understand, and adding a proper compatibility check.

Reported-by: David Rowley
Discussion: https://postgr.es/m/CAApHDvpYT10-nkSp8xXe-nbO3jmoaRyRFHbzh-RWMfAJynqgpQ%40mail.gmail.com
2021-04-06 16:56:06 +02:00
Heikki Linnakangas c4c393b3ec Mark test_enc_conversion() as STRICT.
Reported-by: Jaime Casanova, using SQLsmith
Discussion: https://www.postgresql.org/message-id/20210402235337.GA4082@ahch-to
2021-04-06 14:53:56 +03:00
Peter Eisentraut 82ed7748b7 ALTER SUBSCRIPTION ... ADD/DROP PUBLICATION
At present, if we want to update the publications in a subscription, we
can use SET PUBLICATION.  However, it requires supplying all existing
publications as well as the new ones, which is inconvenient when we only
want to add new publications.  The new syntax requires supplying only
the new publications.  When refresh is true, only the new publications
are refreshed.
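
A minimal sketch (subscription and publication names hypothetical):

  ALTER SUBSCRIPTION mysub ADD PUBLICATION mypub WITH (refresh = true);
  ALTER SUBSCRIPTION mysub DROP PUBLICATION mypub;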

Author: Japin Li <japinli@hotmail.com>
Author: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/MEYP282MB166939D0D6C480B7FBE7EFFBB6BC0@MEYP282MB1669.AUSP282.PROD.OUTLOOK.COM
2021-04-06 11:49:51 +02:00
Peter Eisentraut a2da77cdb4 Change return type of EXTRACT to numeric
The previous implementation of EXTRACT mapped internally to
date_part(), which returned type double precision (since it was
implemented long before the numeric type existed).  This can lead to
imprecise output in some cases, so returning numeric would be
preferable.  Changing the return type of an existing function is a
bit risky, so instead we do the following:  We implement a new set of
functions, which are now called "extract", in parallel to the existing
date_part functions.  They work the same way internally but use
numeric instead of float8.  The EXTRACT construct is now mapped by the
parser to these new extract functions.  That way, dumps of views
etc. from old versions (which would use date_part) continue to work
unchanged, but new uses will map to the new extract functions.

Additionally, the reverse compilation of EXTRACT now reproduces the
original syntax, using the new mechanism introduced in
40c24bfef9.

The following minor changes of behavior result from the new
implementation:

- The column name from an isolated EXTRACT call is now "extract"
  instead of "date_part".

- Extract from date now rejects inappropriate field names such as
  HOUR.  It was previously mapped internally to extract from
  timestamp, so it would silently accept everything appropriate for
  timestamp.

- Return values when extracting fields with possibly fractional
  values, such as second and epoch, now have the full scale that the
  value has internally (so, for example, '1.000000' instead of just
  '1').
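
A quick way to observe the new return type (a minimal check):

  SELECT pg_typeof(extract(epoch FROM now()));  -- numeric, was double precision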

Reported-by: Petr Fedorov <petr.fedorov@phystech.edu>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://www.postgresql.org/message-id/flat/42b73d2d-da12-ba9f-570a-420e0cce19d9@phystech.edu
2021-04-06 07:20:42 +02:00
Fujii Masao 43620e3286 Add function to log the memory contexts of specified backend process.
Commit 3e98c0bafb added pg_backend_memory_contexts view to display
the memory contexts of the backend process.  However, its target process
is limited to the backend that is accessing the view, so it is not very
convenient for investigating the local memory bloat of another backend
process.  To improve this situation, this commit adds
pg_log_backend_memory_contexts() function that requests to log
the memory contexts of the specified backend process.

This information can be also collected by calling
MemoryContextStats(TopMemoryContext) via a debugger. But
this technique cannot be used in some environments because no debugger
is available there.  So, pg_log_backend_memory_contexts() allows us to
see the memory contexts of a specified backend more easily.
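
For example (the target PID is hypothetical):

  SELECT pg_log_backend_memory_contexts(1234);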

Only superusers are allowed to request logging of the memory contexts,
because allowing any user to issue this request at an unbounded rate
would cause lots of log messages, which could lead to denial of service.

On receipt of the request, at the next CHECK_FOR_INTERRUPTS(),
the target backend logs its memory contexts at LOG_SERVER_ONLY level,
so that these memory contexts will appear in the server log but not
be sent to the client.  It logs one message per memory context, because
buffering all memory contexts into a StringInfo to emit them as a single
message might require the buffer to be enlarged substantially and could
lead to an out-of-memory error, given that a backend can contain a large
number of memory contexts.

When a backend process is consuming huge memory, logging all its
memory contexts might overrun available disk space. To prevent this,
now this patch limits the number of child contexts to log per parent
to 100. As with MemoryContextStats(), it supposes that practical cases
where the log gets long will typically be huge numbers of siblings
under the same parent context; while the additional debugging value
from seeing details about individual siblings beyond 100 will not be large.

There was another proposed patch to add the function to return
the memory contexts of specified backend as the result sets,
instead of logging them, in the discussion.  However, that patch is
not included in this commit because it had several issues to address.

Thanks to Tatsuhito Kasahara, Andres Freund, Tom Lane, Tomas Vondra,
Michael Paquier, Kyotaro Horiguchi and Zhihong Yu for the discussion.

Bump catalog version.

Author: Atsushi Torikoshi
Reviewed-by: Kyotaro Horiguchi, Zhihong Yu, Fujii Masao
Discussion: https://postgr.es/m/0271f440ac77f2a4180e0e56ebd944d1@oss.nttdata.com
2021-04-06 13:44:15 +09:00
Tom Lane 09c1c6ab4b Support INCLUDE'd columns in SP-GiST.
Not much to say here: does what it says on the tin.
We steal a previously-always-zero bit from the nextOffset
field of leaf index tuples in order to track whether there
is a nulls bitmap.  Otherwise it works about like included
columns in other index types.
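
A minimal sketch (table and column names hypothetical):

  CREATE TABLE pts (p point, tag text);
  CREATE INDEX ON pts USING spgist (p) INCLUDE (tag);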

Pavel Borisov, reviewed by Andrey Borodin and Anastasia Lubennikova,
and rather heavily editorialized on by me

Discussion: https://postgr.es/m/CALT9ZEFi-vMp4faht9f9Junb1nO3NOSjhpxTmbm1UGLMsLqiEQ@mail.gmail.com
2021-04-05 18:41:21 -04:00
Stephen Frost 6c3ffd697e Add pg_read_all_data and pg_write_all_data roles
A commonly requested use-case is to have a role who can run an
unfettered pg_dump without having to explicitly GRANT that user access
to all tables, schemas, et al, without that role being a superuser.
This addresses that by adding a "pg_read_all_data" role which implicitly
gives any member of this role SELECT rights on all tables, views and
sequences, and USAGE rights on all schemas.

As there may be cases where it's also useful to have a role who has
write access to all objects, pg_write_all_data is also introduced and
gives users implicit INSERT, UPDATE and DELETE rights on all tables,
views and sequences.

These roles cannot be logged into directly but instead should be
GRANT'd to a role which is able to log in.  As noted in the
documentation, if RLS is being used then an administrator may (or may
not) wish to set BYPASSRLS on the login role which these predefined
roles are GRANT'd to.
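
A minimal sketch of the intended usage (role name hypothetical):

  CREATE ROLE dump_user LOGIN;
  GRANT pg_read_all_data TO dump_user;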

Reviewed-by: Georgios Kokolatos
Discussion: https://postgr.es/m/20200828003023.GU29590@tamriel.snowman.net
2021-04-05 13:42:52 -04:00
Tom Lane ac9099fc1d Fix confusion in SP-GiST between attribute type and leaf storage type.
According to the documentation, the attType passed to the opclass
config function (and also relied on by the core code) is the type
of the heap column or expression being indexed.  But what was
actually being passed was the type stored for the index column.
This made no difference for user-defined SP-GiST opclasses,
because we weren't allowing the STORAGE clause of CREATE OPCLASS
to be used, so the two types would be the same.  But it's silly
not to allow that, seeing that the built-in poly_ops opclass
has a different value for opckeytype than opcintype, and that if you
want to do lossy storage then the types must really be different.
(Thus, user-defined opclasses doing lossy storage had to lie about
what type is in the index.)  Hence, remove the restriction, and make
sure that we use the input column type not opckeytype where relevant.

For reasons of backwards compatibility with existing user-defined
opclasses, we can't quite insist that the specified leafType match
the STORAGE clause; instead just add an amvalidate() warning if
they don't match.

Also fix some bugs that would only manifest when trying to return
index entries when attType is different from attLeafType.  It's not
too surprising that these have not been reported, because the only
usual reason for such a difference is to store the leaf value
lossily, rendering index-only scans impossible.

Add a src/test/modules module to exercise cases where attType is
different from attLeafType and yet index-only scan is supported.

Discussion: https://postgr.es/m/3728741.1617381471@sss.pgh.pa.us
2021-04-04 14:28:57 -04:00
Tomas Vondra 4908684dda Add regression test for minmax-multi macaddr8 type
The regression test for BRIN minmax-multi opclasses tested almost all
supported data types, with the exception of macaddr8. So this adds it.
2021-04-04 19:26:55 +02:00
Peter Eisentraut 9c5f67fd62 Add support for NullIfExpr in eval_const_expressions
Author: Hou Zhijie <houzj.fnst@cn.fujitsu.com>
Discussion: https://www.postgresql.org/message-id/flat/7ea5ce773bbc4eea9ff1a381acd3b102@G08CNEXMBPEKD05.g08.fujitsu.local
2021-04-02 11:01:49 +02:00
David Rowley a4fac4ffe8 Attempt to fix unstable Result Cache regression tests
force_parallel_mode = regress is causing a few more problems than I
thought.  It seems that both the leader and the single worker can
contribute to the execution. I had mistakenly thought that only the worker
process would do any work.  Since it's not deterministic as to which
of the two processes will get a chance to work on the plan, it seems just
better to disable force_parallel_mode for these tests.  At least doing
this seems better than changing to EXPLAIN only rather than EXPLAIN
ANALYZE.

Additionally, I overlooked the fact that the sub-plan below a Result
Cache will execute a varying number of times
depending on cache eviction.  32-bit machines will use less memory and
evict fewer tuples from the cache.  That results in the subnode being
executed fewer times on 32-bit machines.  Let's just blank out the number
of loops in each node.
2021-04-02 15:25:38 +13:00
David Rowley 9eacee2e62 Add Result Cache executor node (take 2)
Here we add a new executor node type named "Result Cache".  The planner
can include this node type in the plan to have the executor cache the
results from the inner side of parameterized nested loop joins.  This
allows caching of tuples for sets of parameters so that in the event that
the node sees the same parameter values again, it can just return the
cached tuples instead of rescanning the inner side of the join all over
again.  Internally, result cache uses a hash table in order to quickly
find tuples that have been previously cached.

For certain data sets, this can significantly improve the performance of
joins.  The best cases for using this new node type are for join problems
where a large portion of the tuples from the inner side of the join have
no join partner on the outer side of the join.  In such cases, hash join
would have to hash values that are never looked up, thus bloating the hash
table and possibly causing it to multi-batch.  Merge joins would have to
skip over all of the unmatched rows.  If we use a nested loop join with a
result cache, then we only cache tuples that have at least one join
partner on the outer side of the join.  The benefits of using a
parameterized nested loop with a result cache increase when there are
fewer distinct values being looked up and the number of lookups of each
value is large.  Also, hash probes to look up the cache can be much faster
than the hash probe in a hash join as it's common that the result cache's
hash table is much smaller than the hash join's due to result cache only
caching useful tuples rather than all tuples from the inner side of the
join.  This variation in hash probe performance is more significant when
the hash join's hash table no longer fits into the CPU's L3 cache, but the
result cache's hash table does.  The apparent "random" access of hash
buckets with each hash probe can cause a poor L3 cache hit ratio for large
hash tables.  Smaller hash tables generally perform better.

The hash table used for the cache limits itself to not exceeding work_mem
* hash_mem_multiplier in size.  We maintain a dlist of keys for this cache
and when we're adding new tuples and realize we've exceeded the memory
budget, we evict cache entries starting with the least recently used ones
until we have enough memory to add the new tuples to the cache.

For parameterized nested loop joins, we now consider using one of these
result cache nodes in between the nested loop node and its inner node.  We
determine when this might be useful based on cost, which is primarily
driven off of what the expected cache hit ratio will be.  Estimating the
cache hit ratio relies on having good distinct estimates on the nested
loop's parameters.

For now, the planner will only consider using a result cache for
parameterized nested loop joins.  This works for both normal joins and
also for LATERAL type joins to subqueries.  It is possible to use this new
node for other purposes in the future.  For example, to cache results from
correlated subqueries.  However, that's not done here due to some
difficulties obtaining a distinct estimation on the outer plan to
calculate the estimated cache hit ratio.  Currently we plan the inner plan
before planning the outer plan so there is no good way to know if a result
cache would be useful or not since we can't estimate the number of times
the subplan will be called until the outer plan is generated.

The functionality being added here is newly introducing a dependency on
the return value of estimate_num_groups() during the join search.
Previously, during the join search, we only ever needed to perform
selectivity estimations.  With this commit, we need to use
estimate_num_groups() in order to estimate what the hit ratio on the
result cache will be.   In simple terms, if we expect 10 distinct values
and we expect 1000 outer rows, then we'll estimate the hit ratio to be
99%.  Since cache hits are very cheap compared to scanning the underlying
nodes on the inner side of the nested loop join, this will
significantly reduce the planner's cost for the join.   However, it's
fairly easy to see here that things will go bad when estimate_num_groups()
incorrectly returns a value that's significantly lower than the actual
number of distinct values.  If this happens then that may cause us to make
use of a nested loop join with a result cache instead of some other join
type, such as a merge or hash join.  Our distinct estimations have been
known to be a source of trouble in the past, so the extra reliance on them
here could cause the planner to choose slower plans than it did prior
to having this feature.  Distinct estimations are also fairly hard to
estimate accurately when several tables have been joined already or when a
WHERE clause filters out a set of values that are correlated to the
expressions we're estimating the number of distinct values for.

For now, the costing we perform during query planning for result caches
does put quite a bit of faith in the distinct estimations being accurate.
When these are accurate then we should generally see faster execution
times for plans containing a result cache.  However, in the real world, we
may find that we need to either change the costings to put less trust in
the distinct estimations being accurate or perhaps even disable this
feature by default.  There's always an element of risk when we teach the
query planner to do new tricks that it decides to use that new trick at
the wrong time and causes a regression.  Users may opt to get the old
behavior by turning the feature off using the enable_resultcache GUC.
Currently, this is enabled by default.  It remains to be seen if we'll
maintain that setting for the release.
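
As a sketch (table and column names hypothetical), with the feature
enabled a parameterized nested loop may show a Result Cache node between
the Nested Loop and its inner scan:

  SET enable_resultcache = on;  -- the default
  EXPLAIN (COSTS OFF)
    SELECT * FROM fact f, LATERAL
      (SELECT * FROM dim d WHERE d.id = f.dim_id) s;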

Additionally, the name "Result Cache" is the best name I could think of
for this new node at the time I started writing the patch.  Nobody seems
to strongly dislike the name. A few people did suggest other names but no
other name seemed to dominate in the brief discussion there was about
names. Let's allow the beta period to see if the current name pleases
enough people.  If there's some consensus on a better name, then we can
change it before the release.  Please see the 2nd discussion link below
for the discussion on the "Result Cache" name.

Author: David Rowley
Reviewed-by: Andy Fan, Justin Pryzby, Zhihong Yu, Hou Zhijie
Tested-By: Konstantin Knizhnik
Discussion: https://postgr.es/m/CAApHDvrPcQyQdWERGYWx8J%2B2DLUNgXu%2BfOSbQ1UscxrunyXyrQ%40mail.gmail.com
Discussion: https://postgr.es/m/CAApHDvq=yQXr5kqhRviT2RhNKwToaWr9JAN5t+5_PzhuRJ3wvg@mail.gmail.com
2021-04-02 14:10:56 +13:00
Michael Paquier fe246d1c11 Improve stability of test with vacuum_truncate in reloptions.sql
This test has been using a simple VACUUM with pg_relation_size() to
check if a relation gets physically truncated or not, but forgot the
fact that some concurrent activity, like checkpoint buffer writes, could
cause some pages to be skipped.  The second test enabling
vacuum_truncate could fail, seeing a non-empty relation.  The first test
would not have failed, but could finish by testing a behavior different
than the one aimed for.  Both tests gain a FREEZE option, to make the
vacuums more aggressive and prevent page skips.

This is similar to the issues fixed in c2dc1a7.

Author: Arseny Sher
Reviewed-by: Masahiko Sawada
Discussion: https://postgr.es/m/87tuotr2hh.fsf@ars-thinkpad
backpatch-through: 12
2021-04-02 09:44:42 +09:00
Heikki Linnakangas ea1b99a661 Add 'noError' argument to encoding conversion functions.
With the 'noError' argument, you can try to convert a buffer without
knowing the character boundaries beforehand. The functions now need to
return the number of input bytes successfully converted.

This is a backwards-incompatible change if you have created a custom
encoding conversion with CREATE CONVERSION.  This adds a check to
pg_upgrade for that, refusing the upgrade if there are any user-defined
encoding conversions.  Custom conversions are very rare; there are no
commonly used extensions that I know of that use that feature.  No other
objects can depend on conversions, so if you do have one, you can fairly
easily drop it before upgrading, and recreate it after the upgrade with
an updated version.

Add regression tests for built-in encoding conversions. This doesn't cover
every conversion, but it covers all the internal functions in conv.c that
are used to implement the conversions.

Reviewed-by: John Naylor
Discussion: https://www.postgresql.org/message-id/e7861509-3960-538a-9025-b75a61188e01%40iki.fi
2021-04-01 11:45:22 +03:00
Peter Eisentraut e2639a767b Make extract(timetz) tests a bit more interesting
Use a time zone offset with nonzero minutes to make the
timezone_minute test meaningful.
2021-04-01 09:52:03 +02:00
David Rowley 28b3e3905c Revert b6002a796
This removes "Add Result Cache executor node".  It seems that something
weird is going on with the tracking of cache hits and misses as
highlighted by many buildfarm animals.  It's not yet clear what the
problem is as other parts of the plan indicate that the cache did work
correctly; it's just the hits and misses that were being reported as 0.

This is especially a bad time to have the buildfarm so broken, so
reverting before too many more animals go red.

Discussion: https://postgr.es/m/CAApHDvq_hydhfovm4=izgWs+C5HqEeRScjMbOgbpC-jRAeK3Yw@mail.gmail.com
2021-04-01 13:33:23 +13:00
David Rowley b6002a796d Add Result Cache executor node
Here we add a new executor node type named "Result Cache".  The planner
can include this node type in the plan to have the executor cache the
results from the inner side of parameterized nested loop joins.  This
allows caching of tuples for sets of parameters so that in the event that
the node sees the same parameter values again, it can just return the
cached tuples instead of rescanning the inner side of the join all over
again.  Internally, result cache uses a hash table in order to quickly
find tuples that have been previously cached.

For certain data sets, this can significantly improve the performance of
joins.  The best cases for using this new node type are for join problems
where a large portion of the tuples from the inner side of the join have
no join partner on the outer side of the join.  In such cases, hash join
would have to hash values that are never looked up, thus bloating the hash
table and possibly causing it to multi-batch.  Merge joins would have to
skip over all of the unmatched rows.  If we use a nested loop join with a
result cache, then we only cache tuples that have at least one join
partner on the outer side of the join.  The benefits of using a
parameterized nested loop with a result cache increase when there are
fewer distinct values being looked up and the number of lookups of each
value is large.  Also, hash probes to look up the cache can be much faster
than the hash probe in a hash join as it's common that the result cache's
hash table is much smaller than the hash join's due to result cache only
caching useful tuples rather than all tuples from the inner side of the
join.  This variation in hash probe performance is more significant when
the hash join's hash table no longer fits into the CPU's L3 cache, but the
result cache's hash table does.  The apparent "random" access of hash
buckets with each hash probe can cause a poor L3 cache hit ratio for large
hash tables.  Smaller hash tables generally perform better.

The hash table used for the cache limits itself to not exceeding work_mem
* hash_mem_multiplier in size.  We maintain a dlist of keys for this cache
and when we're adding new tuples and realize we've exceeded the memory
budget, we evict cache entries starting with the least recently used ones
until we have enough memory to add the new tuples to the cache.

For parameterized nested loop joins, we now consider using one of these
result cache nodes in between the nested loop node and its inner node.  We
determine when this might be useful based on cost, which is primarily
driven off of what the expected cache hit ratio will be.  Estimating the
cache hit ratio relies on having good distinct estimates on the nested
loop's parameters.

For now, the planner will only consider using a result cache for
parameterized nested loop joins.  This works for both normal joins and
also for LATERAL type joins to subqueries.  It is possible to use this new
node for other purposes in the future.  For example, to cache results from
correlated subqueries.  However, that's not done here due to some
difficulties obtaining a distinct estimation on the outer plan to
calculate the estimated cache hit ratio.  Currently we plan the inner plan
before planning the outer plan so there is no good way to know if a result
cache would be useful or not since we can't estimate the number of times
the subplan will be called until the outer plan is generated.

The functionality being added here is newly introducing a dependency on
the return value of estimate_num_groups() during the join search.
Previously, during the join search, we only ever needed to perform
selectivity estimations.  With this commit, we need to use
estimate_num_groups() in order to estimate what the hit ratio on the
result cache will be.   In simple terms, if we expect 10 distinct values
and we expect 1000 outer rows, then we'll estimate the hit ratio to be
99%.  Since cache hits are very cheap compared to scanning the underlying
nodes on the inner side of the nested loop join, this will
significantly reduce the planner's cost for the join.   However, it's
fairly easy to see here that things will go bad when estimate_num_groups()
incorrectly returns a value that's significantly lower than the actual
number of distinct values.  If this happens then that may cause us to make
use of a nested loop join with a result cache instead of some other join
type, such as a merge or hash join.  Our distinct estimations have been
known to be a source of trouble in the past, so the extra reliance on them
here could cause the planner to choose slower plans than it did prior
to having this feature.  Distinct estimations are also fairly hard to
estimate accurately when several tables have been joined already or when a
WHERE clause filters out a set of values that are correlated to the
expressions we're estimating the number of distinct values for.

For now, the costing we perform during query planning for result caches
does put quite a bit of faith in the distinct estimations being accurate.
When these are accurate then we should generally see faster execution
times for plans containing a result cache.  However, in the real world, we
may find that we need to either change the costings to put less trust in
the distinct estimations being accurate or perhaps even disable this
feature by default.  There's always an element of risk when we teach the
query planner to do new tricks that it decides to use that new trick at
the wrong time and causes a regression.  Users may opt to get the old
behavior by turning the feature off using the enable_resultcache GUC.
Currently, this is enabled by default.  It remains to be seen if we'll
maintain that setting for the release.

Additionally, the name "Result Cache" is the best name I could think of
for this new node at the time I started writing the patch.  Nobody seems
to strongly dislike the name. A few people did suggest other names but no
other name seemed to dominate in the brief discussion there was about
names. Let's allow the beta period to see if the current name pleases
enough people.  If there's some consensus on a better name, then we can
change it before the release.  Please see the 2nd discussion link below
for the discussion on the "Result Cache" name.

Author: David Rowley
Reviewed-by: Andy Fan, Justin Pryzby, Zhihong Yu
Tested-By: Konstantin Knizhnik
Discussion: https://postgr.es/m/CAApHDvrPcQyQdWERGYWx8J%2B2DLUNgXu%2BfOSbQ1UscxrunyXyrQ%40mail.gmail.com
Discussion: https://postgr.es/m/CAApHDvq=yQXr5kqhRviT2RhNKwToaWr9JAN5t+5_PzhuRJ3wvg@mail.gmail.com
2021-04-01 12:32:22 +13:00
Joe Conway b12bd4869b Fix has_column_privilege function corner case
According to the comments, when an invalid or dropped column oid is passed
to has_column_privilege(), the intention has always been to return NULL.
However, when the caller had table-level privilege, the invalid/missing
column was never discovered, because table permissions were checked first.

Fix that by introducing extended versions of pg_attribute_acl(check|mask)
and pg_class_acl(check|mask) which take a new argument, is_missing. When
is_missing is NULL, the old behavior is preserved. But when is_missing is
passed by the caller, no ERROR is thrown for dropped or missing
columns/relations, and is_missing is flipped to true. This in turn allows
has_column_privilege to check for column privileges first, providing the
desired semantics.
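
A minimal sketch (table name and attribute number hypothetical); even
with table-level privilege, an invalid or dropped column number now
yields NULL:

  SELECT has_column_privilege('mytable', 99::int2, 'SELECT');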

Not backpatched since it is a user-visible behavioral change with no previous
complaints, and the fix is a bit on the invasive side.

Author: Joe Conway
Reviewed-By: Tom Lane
Reported by: Ian Barwick
Discussion: https://postgr.es/m/flat/9b5f4311-157b-4164-7fe7-077b4fe8ed84%40joeconway.com
2021-03-31 13:55:25 -04:00
Tom Lane 86dc90056d Rework planning and execution of UPDATE and DELETE.
This patch makes two closely related sets of changes:

1. For UPDATE, the subplan of the ModifyTable node now only delivers
the new values of the changed columns (i.e., the expressions computed
in the query's SET clause) plus row identity information such as CTID.
ModifyTable must re-fetch the original tuple to merge in the old
values of any unchanged columns.  The core advantage of this is that
the changed columns are uniform across all tables of an inherited or
partitioned target relation, whereas the other columns might not be.
A secondary advantage, when the UPDATE involves joins, is that less
data needs to pass through the plan tree.  The disadvantage of course
is an extra fetch of each tuple to be updated.  However, that seems to
be very nearly free in context; even worst-case tests don't show it to
add more than a couple percent to the total query cost.  At some point
it might be interesting to combine the re-fetch with the tuple access
that ModifyTable must do anyway to mark the old tuple dead; but that
would require a good deal of refactoring and it seems it wouldn't buy
all that much, so this patch doesn't attempt it.

2. For inherited UPDATE/DELETE, instead of generating a separate
subplan for each target relation, we now generate a single subplan
that is just exactly like a SELECT's plan, then stick ModifyTable
on top of that.  To let ModifyTable know which target relation a
given incoming row refers to, a tableoid junk column is added to
the row identity information.  This gets rid of the horrid hack
that was inheritance_planner(), eliminating O(N^2) planning cost
and memory consumption in cases where there were many unprunable
target relations.

Point 2 of course requires point 1, so that there is a uniform
definition of the non-junk columns to be returned by the subplan.
We can't insist on uniform definition of the row identity junk
columns however, if we want to keep the ability to have both
plain and foreign tables in a partitioning hierarchy.  Since
it wouldn't scale very far to have every child table have its
own row identity column, this patch includes provisions to merge
similar row identity columns into one column of the subplan result.
In particular, we can merge the whole-row Vars typically used as
row identity by FDWs into one column by pretending they are type
RECORD.  (It's still okay for the actual composite Datums to be
labeled with the table's rowtype OID, though.)

There is more that can be done to file down residual inefficiencies
in this patch, but it seems to be committable now.

FDW authors should note several API changes:

* The argument list for AddForeignUpdateTargets() has changed, and so
has the method it must use for adding junk columns to the query.  Call
add_row_identity_var() instead of manipulating the parse tree directly.
You might want to reconsider exactly what you're adding, too.

* PlanDirectModify() must now work a little harder to find the
ForeignScan plan node; if the foreign table is part of a partitioning
hierarchy then the ForeignScan might not be the direct child of
ModifyTable.  See postgres_fdw for sample code.

* To check whether a relation is a target relation, it's no
longer sufficient to compare its relid to root->parse->resultRelation.
Instead, check it against all_result_relids or leaf_result_relids,
as appropriate.

Amit Langote and Tom Lane

Discussion: https://postgr.es/m/CA+HiwqHpHdqdDn48yCEhynnniahH78rwcrv1rEX65-fsZGBOLQ@mail.gmail.com
2021-03-31 11:52:37 -04:00
Peter Eisentraut 055fee7eb4 Allow an alias to be attached to a JOIN ... USING
This allows something like

    SELECT ... FROM t1 JOIN t2 USING (a, b, c) AS x

where x has the columns a, b, c and unlike a regular alias it does not
hide the range variables of the tables being joined t1 and t2.

Per SQL:2016 feature F404 "Range variable for common column names".

Reviewed-by: Vik Fearing <vik.fearing@2ndquadrant.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://www.postgresql.org/message-id/flat/454638cf-d563-ab76-a585-2564428062af@2ndquadrant.com
2021-03-31 17:10:50 +02:00
Etsuro Fujita 27e1f14563 Add support for asynchronous execution.
This implements asynchronous execution, which runs multiple parts of a
non-parallel-aware Append concurrently rather than serially to improve
performance when possible.  Currently, the only node type that can be
run concurrently is a ForeignScan that is an immediate child of such an
Append.  In the case where such ForeignScans access data on different
remote servers, this would run those ForeignScans concurrently and
overlap the remote operations so that they are performed simultaneously,
improving performance especially when the operations are time-consuming
ones such as remote joins and remote aggregation.

We may extend this to other node types such as joins or aggregates over
ForeignScans in the future.

This also adds the support for postgres_fdw, which is enabled by the
table-level/server-level option "async_capable".  The default is false.
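
A minimal sketch (server and table names hypothetical):

  ALTER SERVER remote_srv OPTIONS (ADD async_capable 'true');
  -- or per foreign table:
  ALTER FOREIGN TABLE remote_tab OPTIONS (ADD async_capable 'true');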

Robert Haas, Kyotaro Horiguchi, Thomas Munro, and myself.  This commit
is mostly based on the patch proposed by Robert Haas, but also uses
stuff from the patch proposed by Kyotaro Horiguchi and from the patch
proposed by Thomas Munro.  Reviewed by Kyotaro Horiguchi, Konstantin
Knizhnik, Andrey Lepikhov, Movead Li, Thomas Munro, Justin Pryzby, and
others.

Discussion: https://postgr.es/m/CA%2BTgmoaXQEt4tZ03FtQhnzeDEMzBck%2BLrni0UWHVVgOTnA6C1w%40mail.gmail.com
Discussion: https://postgr.es/m/CA%2BhUKGLBRyu0rHrDCMC4%3DRn3252gogyp1SjOgG8SEKKZv%3DFwfQ%40mail.gmail.com
Discussion: https://postgr.es/m/20200228.170650.667613673625155850.horikyota.ntt%40gmail.com
2021-03-31 18:45:00 +09:00
Noah Misch 0ff8bbdee1 Accept slightly-filled pages for tuples larger than fillfactor.
We always inserted a larger-than-fillfactor tuple into a newly-extended
page, even when existing pages were empty or contained nothing but an
unused line pointer.  This was unnecessary relation extension.  Start
tolerating page usage up to 1/8 the maximum space that could be taken up
by line pointers.  This is somewhat arbitrary, but it should allow more
cases to reuse pages.  This has no effect on tables with fillfactor=100
(the default).

John Naylor and Floris van Nee.  Reviewed by Matthias van de Meent.
Reported by Floris van Nee.

Discussion: https://postgr.es/m/6e263217180649339720afe2176c50aa@opammb0562.comp.optiver.com
2021-03-30 18:53:44 -07:00
Bruce Momjian 5da9868ed9 In messages, use singular nouns for -1, like we do for +1.
This outputs "-1 year", not "-1 years".

Reported-by: neverov.max@gmail.com

Bug: 16939

Discussion: https://postgr.es/m/16939-cceeb03fb72736ee@postgresql.org
2021-03-30 18:34:27 -04:00
Peter Eisentraut 6131ffc43f Add tests for date_part of epoch near upper bound of timestamp range
This exercises a special case in the implementations of
date_part('epoch', timestamp[tz]) that was previously not tested.
2021-03-30 22:05:18 +02:00
Peter Eisentraut bc9f1afdeb Add upper boundary tests for timestamp and timestamptz types
The existing regression tests only tested the lower boundary of the
range supported by the timestamp and timestamptz types because "The
upper boundary differs between integer and float timestamps, so no
check".  Since this is obsolete, add similar tests for the upper
boundary.
2021-03-30 08:46:34 +02:00
Peter Eisentraut efcc7572f5 Clean up date_part tests a bit
Some tests for timestamp and timestamptz were in the date.sql test
file.  Move them to their appropriate files, or drop test cases that
were already present there.
2021-03-29 17:53:30 +02:00
Peter Eisentraut f37fec837c Add unistr function
This allows decoding a string with Unicode escape sequences.  It is
similar to Unicode escape strings, but offers some more flexibility.
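
For example:

  SELECT unistr('d\0061t\+000061');  -- 'data'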

Author: Pavel Stehule <pavel.stehule@gmail.com>
Reviewed-by: Asif Rehman <asifr.rehman@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/CAFj8pRA5GnKT+gDVwbVRH2ep451H_myBt+NTz8RkYUARE9+qOQ@mail.gmail.com
2021-03-29 11:56:53 +02:00
Peter Eisentraut ebedd0c78f Reset standard_conforming_strings in strings test
After some tests relating to standard_conforming_strings behavior, the
value was not reset to the default value.  Therefore, the rest of the
tests in that file ran with the nondefault setting, which affected the
results of some tests.  For clarity, reset the value and run the rest
of the tests with the default setting again.
2021-03-29 08:40:39 +02:00
Tomas Vondra 2a058e938c Stabilize stats_ext test with other collations
The tests used string concatenation to test statistics on expressions,
but that made the tests locale-dependent, e.g. because the ordering of
'11' and '1X' depends on the collation. This affected both the estimated
and actual row couts, breaking some of the tests.

Fixed by replacing the string concatenation with upper() function call,
so that the text values contain only digits.

Discussion: https://postgr.es/m/b650920b-2767-fbc3-c87a-cb8b5d693cbf%40enterprisedb.com
2021-03-27 18:26:56 +01:00
Tomas Vondra a4d75c86bf Extended statistics on expressions
Allow defining extended statistics on expressions, not just on
simple column references.  With this commit, expressions are supported
by all existing extended statistics kinds, improving the same types of
estimates. A simple example may look like this:

  CREATE TABLE t (a int);
  CREATE STATISTICS s ON mod(a,10), mod(a,20) FROM t;
  ANALYZE t;

The collected statistics are useful e.g. to estimate queries with those
expressions in WHERE or GROUP BY clauses:

  SELECT * FROM t WHERE mod(a,10) = 0 AND mod(a,20) = 0;

  SELECT 1 FROM t GROUP BY mod(a,10), mod(a,20);

This introduces new internal statistics kind 'e' (expressions) which is
built automatically when the statistics object definition includes any
expressions. This represents single-expression statistics, as if there
was an expression index (but without the index maintenance overhead).
The statistics are stored in pg_statistic_ext_data as an array of
composite types, which is possible thanks to 79f6a942bd.

CREATE STATISTICS allows building statistics on a single expression, in
which case it's not possible to specify statistics kinds.

A new system view pg_stats_ext_exprs can be used to display expression
statistics, similarly to pg_stats and pg_stats_ext views.

ALTER TABLE ... ALTER COLUMN ... TYPE now treats statistics the same way
it treats indexes, i.e. it drops and recreates them.  This means
all statistics are reset, and we no longer try to preserve at least the
functional dependencies. This should not be a major issue in practice,
as the functional dependencies actually rely on per-column statistics,
which were always reset anyway.

Author: Tomas Vondra
Reviewed-by: Justin Pryzby, Dean Rasheed, Zhihong Yu
Discussion: https://postgr.es/m/ad7891d2-e90c-b446-9fe2-7419143847d7%40enterprisedb.com
2021-03-27 00:01:11 +01:00
Tomas Vondra 98376c18f1 Reduce duration of stats_ext regression tests
The regression tests of extended statistics were taking a fair amount of
time, due to using fairly large data sets with a couple thousand rows.
So far this was fine, but with tests for statistics on expressions the
duration would get a bit excessive.  So reduce the size of some of the
tests that will be used to test expressions, to keep the duration under
control.  Done in a separate commit before adding the statistics on
expressions, to make it clear which estimates are expected to change.

Author: Tomas Vondra
Discussion: https://postgr.es/m/ad7891d2-e90c-b446-9fe2-7419143847d7%40enterprisedb.com
2021-03-26 23:00:47 +01:00
Tomas Vondra 33e52ad9a3 Fix ndistinct estimates with system attributes
When estimating the number of groups using extended statistics, the code
was discarding information about system attributes. This led to strange
situation that

    SELECT 1 FROM t GROUP BY ctid;

could have produced a higher estimate (equal to pg_class.reltuples) than

    SELECT 1 FROM t GROUP BY a, b, ctid;

with extended statistics on (a,b). Fixed by retaining information about
the system attribute.

Backpatch all the way to 10, where extended statistics were introduced.

Author: Tomas Vondra
Backpatch-through: 10
2021-03-26 22:34:58 +01:00
Noah Misch a14a0118a1 Add "pg_database_owner" default role.
Membership consists, implicitly, of the current database owner.  Expect
use in template databases.  Once pg_database_owner has rights within a
template, each owner of a database instantiated from that template will
exercise those rights.
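
A minimal sketch of the intended usage (the specific grant is a
hypothetical example, run in a template database):

  GRANT CREATE ON SCHEMA public TO pg_database_owner;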

Reviewed by John Naylor.

Discussion: https://postgr.es/m/20201228043148.GA1053024@rfd.leadboat.com
2021-03-26 10:42:17 -07:00
Tomas Vondra ab596105b5 BRIN minmax-multi indexes
Adds BRIN opclasses similar to the existing minmax, except that instead
of summarizing the page range into a single [min,max] range, the summary
consists of multiple ranges and/or points, allowing gaps. This allows
more efficient handling of data with poor correlation to physical
location within the table and/or outlier values, for which the regular
minmax opclasses tend to work poorly.

It's possible to specify the number of values kept for each page range,
either as a single point or an interval boundary.

  CREATE TABLE t (a int);
  CREATE INDEX ON t
   USING brin (a int4_minmax_multi_ops(values_per_range=16));

When building the summary, the values are combined into intervals with
the goal of minimizing the "covering" (sum of interval lengths), using a
support procedure computing distance between two values.

Bump catversion, due to various catalog changes.

Author: Tomas Vondra <tomas.vondra@postgresql.org>
Reviewed-by: Alvaro Herrera <alvherre@alvh.no-ip.org>
Reviewed-by: Alexander Korotkov <aekorotkov@gmail.com>
Reviewed-by: Sokolov Yura <y.sokolov@postgrespro.ru>
Reviewed-by: John Naylor <john.naylor@enterprisedb.com>
Discussion: https://postgr.es/m/c1138ead-7668-f0e1-0638-c3be3237e812@2ndquadrant.com
Discussion: https://postgr.es/m/5d78b774-7e9c-c94e-12cf-fef51cc89b1a%402ndquadrant.com
2021-03-26 13:54:30 +01:00
Tomas Vondra 77b88cd1bb BRIN bloom indexes
Adds a BRIN opclass using a Bloom filter to summarize the range. Indexes
using the new opclasses allow only equality queries (similar to hash
indexes), but that works fine for data like UUID, MAC addresses etc. for
which range queries are not very common. This also means the indexes
work for data that is not well correlated to physical location within
the table, or perhaps even entirely random (which is a common issue with
existing BRIN minmax opclasses).

It's possible to specify opclass parameters with the usual Bloom filter
parameters, i.e. the desired false-positive rate and the expected number
of distinct values per page range.

  CREATE TABLE t (a int);
  CREATE INDEX ON t
   USING brin (a int4_bloom_ops(false_positive_rate = 0.05,
                                n_distinct_per_range = 100));

The opclasses do not operate on the indexed values directly, but compute
a 32-bit hash first, and the Bloom filter is built on the hash value.
Collisions should not be a huge issue though, as the number of distinct
values in a page range is usually fairly small.

Bump catversion, due to various catalog changes.

Author: Tomas Vondra <tomas.vondra@postgresql.org>
Reviewed-by: Alvaro Herrera <alvherre@alvh.no-ip.org>
Reviewed-by: Alexander Korotkov <aekorotkov@gmail.com>
Reviewed-by: Sokolov Yura <y.sokolov@postgrespro.ru>
Reviewed-by: Nico Williams <nico@cryptonector.com>
Reviewed-by: John Naylor <john.naylor@enterprisedb.com>
Discussion: https://postgr.es/m/c1138ead-7668-f0e1-0638-c3be3237e812@2ndquadrant.com
Discussion: https://postgr.es/m/5d78b774-7e9c-c94e-12cf-fef51cc89b1a%402ndquadrant.com
2021-03-26 13:35:32 +01:00
Robert Haas 5db1fd7823 Fix interaction of TOAST compression with expression indexes.
Before, trying to compress a value for insertion into an expression
index would crash.

Dilip Kumar, with some editing by me. Report by Jaime Casanova.

Discussion: http://postgr.es/m/CAJKUy5gcs0zGOp6JXU2mMVdthYhuQpFk=S3V8DOKT=LZC1L36Q@mail.gmail.com
2021-03-25 19:55:32 -04:00
Alvaro Herrera 71f4c8c6f7
ALTER TABLE ... DETACH PARTITION ... CONCURRENTLY
Allow a partition to be detached from its partitioned table without
blocking concurrent queries, by running in two transactions and only
requiring ShareUpdateExclusive on the partitioned table.

Because it runs in two transactions, it cannot be used in a transaction
block.  This is the main reason to use dedicated syntax: so that users
can choose to use the original mode if they need it.  But also, it
doesn't work when a default partition exists (because an exclusive lock
would still need to be obtained on it, in order to change its partition
constraint.)

In case the second transaction is cancelled or a crash occurs, there's
ALTER TABLE .. DETACH PARTITION .. FINALIZE, which executes the final
steps.
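
A minimal sketch (table and partition names hypothetical):

  ALTER TABLE parted DETACH PARTITION part1 CONCURRENTLY;
  -- if that was cancelled or crashed partway through:
  ALTER TABLE parted DETACH PARTITION part1 FINALIZE;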

The main trick to make this work is the addition of column
pg_inherits.inhdetachpending, initially false; can only be set true in
the first part of this command.  Once that is committed, concurrent
transactions that use a PartitionDirectory will include or ignore
partitions so marked: in optimizer they are ignored if the row is marked
committed for the snapshot; in executor they are always included.  As a
result, and because of the way PartitionDirectory caches partition
descriptors, queries that were planned before the detach will see the
rows in the detached partition and queries that are planned after the
detach, won't.

A CHECK constraint is created that duplicates the partition constraint.
This is probably not strictly necessary, and some users will prefer to
remove it afterwards, but if the partition is re-attached to a
partitioned table, the constraint needn't be rechecked.

Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
Reviewed-by: Amit Langote <amitlangote09@gmail.com>
Reviewed-by: Justin Pryzby <pryzby@telsasoft.com>
Discussion: https://postgr.es/m/20200803234854.GA24158@alvherre.pgsql
2021-03-25 18:00:28 -03:00
Michael Paquier a1999a01bb Sanitize the term "combo CID" in code comments
Combo CIDs were referred to in the code comments using different terms
across various places in the code, so unify the term used with what is
currently in use in some of the READMEs.

Author: "Hou, Zhijie"
Discussion: https://postgr.es/m/1d42865c91404f46af4562532fdbea31@G08CNEXMBPEKD05.g08.fujitsu.local
2021-03-25 16:08:03 +09:00
Robert Haas e5595de03e Tidy up more loose ends related to configurable TOAST compression.
Change the default_toast_compression GUC to be an enum rather than
a string. Earlier, uncommitted versions of the patch supported using
CREATE ACCESS METHOD to add new compression methods to a running
system, but that idea was dropped before commit. So, we can simplify
the GUC handling as well, which has the nice side effect of improving
the error messages.

While updating the documentation to reflect the new GUC type, also
move it back to the right place in the list. I moved this while
revising what became commit 24f0e395ac,
but apparently the intended ordering is "alphabetical" rather than
"whatever Robert thinks looks nice."

Rejigger things to avoid having access/toast_compression.h depend on
utils/guc.h, so that we don't end up with every file that includes
it also depending on something largely unrelated. Move a few
inline functions back into the C source file partly to help reduce
dependencies and partly just to avoid clutter. A few very minor
cosmetic fixes.

Original patch by Justin Pryzby, but very heavily edited by me,
and reverse reviewed by him and also reviewed by Tom Lane.

Discussion: http://postgr.es/m/CA+TgmoYp=GT_ztUCeZg2i4hkHAQv8o=-nVJ1-TKWTG1zQOmOpg@mail.gmail.com
2021-03-24 12:36:08 -04:00
Peter Eisentraut 49ab61f0bd Add date_bin function
Similar to date_trunc, but allows binning by an arbitrary interval
rather than just full units.
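
For example (timestamps chosen arbitrarily):

  SELECT date_bin('15 minutes',
                  timestamp '2021-03-24 16:25:00',
                  timestamp '2021-01-01');  -- 2021-03-24 16:15:00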

Author: John Naylor <john.naylor@enterprisedb.com>
Reviewed-by: David Fetter <david@fetter.org>
Reviewed-by: Isaac Morland <isaac.morland@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Artur Zakirov <zaartur@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/CACPNZCt4buQFRgy6DyjuZS-2aPDpccRkrJBmgUfwYc1KiaXYxg@mail.gmail.com
2021-03-24 16:18:24 +01:00
Amit Kapila 26acb54a13 Revert "Enable parallel SELECT for "INSERT INTO ... SELECT ..."."
To allow inserts in parallel mode, this feature has to ensure that all the
constraints, triggers, etc. are parallel-safe for the partition hierarchy,
which is costly, and we need to find a better way to do that. Additionally,
we could have used existing cached information in some cases like indexes,
domains, etc. to determine the parallel-safety.

List of commits reverted, in reverse chronological order:

ed62d3737c Doc: Update description for parallel insert reloption.
c8f78b6161 Add a new GUC and a reloption to enable inserts in parallel-mode.
c5be48f092 Improve FK trigger parallel-safety check added by 05c8482f7f.
e2cda3c20a Fix use of relcache TriggerDesc field introduced by commit 05c8482f7f.
e4e87a32cc Fix valgrind issue in commit 05c8482f7f.
05c8482f7f Enable parallel SELECT for "INSERT INTO ... SELECT ...".

Discussion: https://postgr.es/m/E1lMiB9-0001c3-SY@gemulon.postgresql.org
2021-03-24 11:29:15 +05:30
Tom Lane 9d523119fd Avoid possible crash while finishing up a heap rewrite.
end_heap_rewrite was not careful to ensure that the target relation
is open at the smgr level before performing its final smgrimmedsync.
In ordinary cases this is no problem, because it would have been
opened earlier during the rewrite.  However a crash can be reproduced
by re-clustering an empty table with CLOBBER_CACHE_ALWAYS enabled.

Although that exact scenario does not crash in v13, I think that's
a chance result of unrelated planner changes, and the problem is
likely still reachable with other test cases.  The true proximate
cause of this failure is commit c6b92041d, which replaced a call to
heap_sync (which was careful about opening smgr) with a direct call
to smgrimmedsync.  Hence, back-patch to v13.

Amul Sul, per report from Neha Sharma; cosmetic changes
and test case by me.

Discussion: https://postgr.es/m/CANiYTQsU7yMFpQYnv=BrcRVqK_3U3mtAzAsJCaqtzsDHfsUbdQ@mail.gmail.com
2021-03-23 11:24:16 -04:00
Peter Eisentraut a6715af1e7 Add bit_count SQL function
This function for bit and bytea counts the set bits in the bit or byte
string.  Internally, we use the existing popcount functionality.

For the name, after some discussion, we settled on bit_count, which
also exists with this meaning in MySQL, Java, and Python.
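
For example:

  SELECT bit_count(B'1101');        -- 3
  SELECT bit_count('\xff'::bytea);  -- 8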

Author: David Fetter <david@fetter.org>
Discussion: https://www.postgresql.org/message-id/flat/20201230105535.GJ13234@fetter.org
2021-03-23 10:13:58 +01:00
Robert Haas a4d5284a10 Error on invalid TOAST compression in CREATE or ALTER TABLE.
The previous coding treated an invalid compression method name as
equivalent to the default, which is certainly not right.

Justin Pryzby

Discussion: http://postgr.es/m/20210321235544.GD4203@telsasoft.com
2021-03-22 10:57:08 -04:00
Tom Lane fd1ac9a548 Make compression.sql regression test independent of default.
This test will fail in "make installcheck" if the installation's
default_toast_compression setting is not 'pglz'.  Make it robust
against that situation.

Dilip Kumar

Discussion: https://postgr.es/m/CAFiTN-t0w+Rc2U3S+y=7KWcLuOYNB5MfWeGdNa7+pg0UovVdcQ@mail.gmail.com
2021-03-21 16:26:44 -04:00
Robert Haas bbe0a81db6 Allow configurable LZ4 TOAST compression.
There is now a per-column COMPRESSION option which can be set to pglz
(the default, and the only option up until now) or lz4. Or, if you
like, you can set the new default_toast_compression GUC to lz4, and
then that will be the default for new table columns for which no value
is specified. We don't have lz4 support in the PostgreSQL code, so
to use lz4 compression, PostgreSQL must be built --with-lz4.
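
A minimal sketch (table and column names hypothetical; requires a
build with --with-lz4):

  SET default_toast_compression = 'lz4';
  CREATE TABLE blobs (id int, payload text COMPRESSION lz4);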

In general, TOAST compression means compression of individual column
values, not the whole tuple, and those values can either be compressed
inline within the tuple or compressed and then stored externally in
the TOAST table, so those properties also apply to this feature.

Prior to this commit, a TOAST pointer has two unused bits as part of
the va_extsize field, and a compressed datum has two unused bits as
part of the va_rawsize field. These bits are unused because the length
of a varlena is limited to 1GB; we now use them to indicate the
compression type that was used. This means we only have bit space for
2 more built-in compression types, but we could work around that
problem, if necessary, by introducing a new vartag_external value for
any further types we end up wanting to add. Hopefully, it won't be
too important to offer a wide selection of algorithms here, since
each one we add not only takes more coding but also adds a build
dependency for every packager. Nevertheless, it seems worth doing
at least this much, because LZ4 gets better compression than PGLZ
with less CPU usage.

It's possible for LZ4-compressed datums to leak into composite type
values stored on disk, just as it is for PGLZ. It's also possible for
LZ4-compressed attributes to be copied into a different table via SQL
commands such as CREATE TABLE AS or INSERT .. SELECT.  It would be
expensive to force such values to be decompressed, so PostgreSQL has
never done so. For the same reasons, we also don't force recompression
of already-compressed values even if the target table prefers a
different compression method than was used for the source data.  These
architectural decisions are perhaps arguable but revisiting them is
well beyond the scope of what seemed possible to do as part of this
project.  However, it's relatively cheap to recompress as part of
VACUUM FULL or CLUSTER, so this commit adjusts those commands to do
so, if the configured compression method of the table happens not to
match what was used for some column value stored therein.

Dilip Kumar.  The original patches on which this work was based were
written by Ildus Kurbangaliev, and those patches were based on even
earlier work by Nikita Glukhov, but the design has since changed very
substantially, since allowing a potentially large number of compression
methods to be added and dropped on a running system proved too
problematic given some of the architectural issues mentioned above; the
choice of which specific compression method to add first is now
different; and a lot of the code has been heavily refactored.  More
recently, Justin Pryzby helped quite a bit with
testing and reviewing and this version also includes some code
contributions from him. Other design input and review from Tomas
Vondra, Álvaro Herrera, Andres Freund, Oleg Bartunov, Alexander
Korotkov, and me.

Discussion: http://postgr.es/m/20170907194236.4cefce96%40wp.localdomain
Discussion: http://postgr.es/m/CAFiTN-uUpX3ck%3DK0mLEk-G_kUQY%3DSNOTeqdaNRR9FMdQrHKebw%40mail.gmail.com
2021-03-19 15:10:38 -04:00
Tomas Vondra be45be9c33 Implement GROUP BY DISTINCT
With grouping sets, it's possible that some of the grouping sets are
duplicates.  This is especially common with CUBE and ROLLUP clauses. For
example GROUP BY CUBE (a,b), CUBE (b,c) is equivalent to

  GROUP BY GROUPING SETS (
    (a, b, c),
    (a, b, c),
    (a, b, c),
    (a, b),
    (a, b),
    (a, b),
    (a),
    (a),
    (a),
    (c, a),
    (c, a),
    (c, a),
    (c),
    (b, c),
    (b),
    ()
  )

Some of the grouping sets are calculated multiple times, which is mostly
unnecessary.  This commit implements a new GROUP BY DISTINCT feature, as
defined in the SQL standard, which eliminates the duplicate sets.
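
For example, the duplicates in the clause above are eliminated by
writing:

  SELECT a, b, c FROM t GROUP BY DISTINCT CUBE (a, b), CUBE (b, c);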

Author: Vik Fearing
Reviewed-by: Erik Rijkers, Georgios Kokolatos, Tomas Vondra
Discussion: https://postgr.es/m/bf3805a8-d7d1-ae61-fece-761b7ff41ecc@postgresfriends.org
2021-03-18 18:22:18 +01:00
Amit Kapila c8f78b6161 Add a new GUC and a reloption to enable inserts in parallel-mode.
Commit 05c8482f7f added the implementation of parallel SELECT for
"INSERT INTO ... SELECT ..." which may incur non-negligible overhead in
the additional parallel-safety checks that it performs, even when, in the
end, those checks determine that parallelism can't be used. This is
normally only ever a problem when the target table has a
large number of partitions.

A new GUC option "enable_parallel_insert" is added, to allow inserts in
parallel-mode. The default is on.

In addition to the GUC option, the user may want to control this with
finer granularity at the table level. The new table option
"parallel_insert_enabled" allows this. The default is true.
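
Illustrative only (not part of the commit message), a sketch of both
knobs; the table name is hypothetical:

  SET enable_parallel_insert = off;  -- disable for the whole session
  ALTER TABLE target_tbl SET (parallel_insert_enabled = false);  -- one table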

Author: "Hou, Zhijie"
Reviewed-by: Greg Nancarrow, Amit Langote, Takayuki Tsunakawa, Amit Kapila
Discussion: https://postgr.es/m/CAA4eK1K-cW7svLC2D7DHoGHxdAdg3P37BLgebqBOC2ZLc9a6QQ%40mail.gmail.com
Discussion: https://postgr.es/m/CAJcOf-cXnB5cnMKqWEp2E2z7Mvcd04iLVmV=qpFJrR3AcrTS3g@mail.gmail.com
2021-03-18 07:25:27 +05:30
Peter Eisentraut e1ae40f381 Small error message improvement 2021-03-17 08:17:33 +01:00
Thomas Munro 3792959949 Fix transaction.sql tests in higher isolation levels.
It seems like a useful sanity check to be able to run "installcheck"
against a cluster running with default_transaction_isolation set to
serializable or repeatable read.  Only one thing currently fails in
those configurations, so let's fix that.
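
As a hedged sketch (not from the commit message), one way to set up
such a cluster before running "make installcheck":

  ALTER SYSTEM SET default_transaction_isolation = 'serializable';
  SELECT pg_reload_conf();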

No back-patch for now, because it fails in many other places in some of
the stable branches.  We'd have to go back and fix those too if we
included this configuration in automated testing.

Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/CA%2BhUKGJUaHeK%3DHLATxF1JOKDjKJVrBKA-zmbPAebOM0Se2FQRg%40mail.gmail.com
2021-03-17 17:26:20 +13:00
Thomas Munro 9e7ccd9ef6 Enable parallelism in REFRESH MATERIALIZED VIEW.
Pass CURSOR_OPT_PARALLEL_OK to pg_plan_query() so that parallel plans
are considered when running the underlying SELECT query.  This wasn't
done in commit e9baa5e9, which did this for CREATE MATERIALIZED VIEW,
because it wasn't yet known to be safe.

Since REFRESH always inserts into a freshly created table before later
merging or swapping the data into place with separate operations, we can
enable such plans here too.
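
For illustration (not part of the commit message), a hypothetical
materialized view whose underlying SELECT may now be planned in
parallel:

  CREATE MATERIALIZED VIEW sales_summary AS
    SELECT region, sum(amount) AS total FROM sales GROUP BY region;
  REFRESH MATERIALIZED VIEW sales_summary;  -- SELECT may use parallel workers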

Author: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>
Reviewed-by: Hou, Zhijie <houzj.fnst@cn.fujitsu.com>
Reviewed-by: Luc Vlaming <luc@swarm64.com>
Reviewed-by: Thomas Munro <thomas.munro@gmail.com>
Discussion: https://postgr.es/m/CALj2ACXg-4hNKJC6nFnepRHYT4t5jJVstYvri%2BtKQHy7ydcr8A%40mail.gmail.com
2021-03-17 15:04:17 +13:00
Michael Paquier 15639d5e8f Add some basic tests for progress reporting of COPY
This tests some basic features of progress reporting for COPY, relying
on an INSERT trigger that gets fired when doing COPY FROM with a file or
stdin, and checking the data sizes, the number of tuples processed, and
the number of tuples excluded by a WHERE clause.
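
A sketch of that trigger-based approach (not the actual regression
test; all names are hypothetical):

  CREATE TABLE t (a int);
  CREATE FUNCTION notice_progress() RETURNS trigger LANGUAGE plpgsql AS $$
  BEGIN
    RAISE NOTICE 'tuples so far: %',
      (SELECT tuples_processed FROM pg_stat_progress_copy
       WHERE pid = pg_backend_pid());
    RETURN NEW;
  END; $$;
  CREATE TRIGGER t_progress BEFORE INSERT ON t
    FOR EACH ROW EXECUTE FUNCTION notice_progress();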

Author: Josef Šimánek, Matthias van de Meent
Reviewed-by: Michael Paquier, Justin Pryzby, Bharath Rupireddy, Tomas
Vondra
Discussion: https://postgr.es/m/CAEze2WiOcgdH4aQA8NtZq-4dgvnJzp8PohdeKchPkhMY-jWZXA@mail.gmail.com
2021-03-16 09:55:43 +09:00
Thomas Munro eeb60e45d8 Drop SERIALIZABLE workaround from parallel query tests.
SERIALIZABLE no longer inhibits parallelism, so we can drop some
outdated workarounds and comments from regression tests.  The change
came in release 12, commit bb16aba5, but it's not really worth
back-patching.

Also fix a typo.

Reviewed-by: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>
Discussion: https://postgr.es/m/CA%2BhUKGJUaHeK%3DHLATxF1JOKDjKJVrBKA-zmbPAebOM0Se2FQRg%40mail.gmail.com
2021-03-15 23:30:22 +13:00
Tom Lane f52c5d6749 Forbid marking an identity column as nullable.
GENERATED ALWAYS AS IDENTITY implies NOT NULL, but the code failed
to complain if you overrode that with "GENERATED ALWAYS AS IDENTITY
NULL".  One might think the old behavior was a feature, but it was
inconsistent because the outcome varied depending on the order of
the clauses, so it seems to have been just an oversight.
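
For illustration (not part of the commit message), the declaration that
is now rejected:

  CREATE TABLE t (id int GENERATED ALWAYS AS IDENTITY NULL);
  -- now fails with a conflicting NULL/NOT NULL error,
  -- regardless of the order of the clauses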

Per bug #16913 from Pavel Boev.  Back-patch to v10 where identity
columns were introduced.

Vik Fearing (minor tweaks by me)

Discussion: https://postgr.es/m/16913-3b5198410f67d8c6@postgresql.org
2021-03-12 11:08:42 -05:00
Peter Geoghegan 9f3665fbfc Don't consider newly inserted tuples in nbtree VACUUM.
Remove the entire idea of "stale stats" within nbtree VACUUM (stop
caring about stats involving the number of inserted tuples).  Also
remove the vacuum_cleanup_index_scale_factor GUC and index storage
parameter on the master branch (on Postgres 13 they are merely
disabled).

The vacuum_cleanup_index_scale_factor/stats interface made the nbtree AM
partially responsible for deciding when pg_class.reltuples stats needed
to be updated.  This seems contrary to the spirit of the index AM API,
though -- it is not actually necessary for an index AM's bulk delete and
cleanup callbacks to provide accurate stats when it happens to be
inconvenient.  The core code owns that.  (Index AMs have the authority
to perform or not perform certain kinds of deferred cleanup based on
their own considerations, such as page deletion and recycling, but that
has little to do with pg_class.reltuples/num_index_tuples.)

This issue was fairly harmless until the introduction of the
autovacuum_vacuum_insert_threshold feature by commit b07642db, which had
an undesirable interaction with the vacuum_cleanup_index_scale_factor
mechanism: it made insert-driven autovacuums perform full index scans,
even though there is no real benefit to doing so.  This has been tied to
a regression with an append-only insert benchmark [1].

Also have remaining cases that perform a full scan of an index during a
cleanup-only nbtree VACUUM indicate that the final tuple count is only
an estimate.  This prevents vacuumlazy.c from setting the index's
pg_class.reltuples in those cases (it will now only update pg_class when
vacuumlazy.c had TIDs for nbtree to bulk delete).  This arguably fixes
an oversight in deduplication-related bugfix commit 48e12913.

[1] https://smalldatum.blogspot.com/2021/01/insert-benchmark-postgres-is-still.html

Author: Peter Geoghegan <pg@bowt.ie>
Reviewed-By: Masahiko Sawada <sawada.mshk@gmail.com>
Discussion: https://postgr.es/m/CAD21AoA4WHthN5uU6+WScZ7+J_RcEjmcuH94qcoUPuB42ShXzg@mail.gmail.com
Backpatch: 13-, where autovacuum_vacuum_insert_threshold was added.
2021-03-10 16:27:01 -08:00
Tom Lane c3ffe34863 Avoid creating duplicate cached plans for inherited FK constraints.
When a foreign key constraint is applied to a partitioned table, each
leaf partition inherits a similar FK constraint.  We were processing all
of those constraints independently, meaning that in large partitioning
trees we'd build up large collections of cached FK-checking query plans.
However, in all cases but one, the generated queries are actually
identical for all members of the inheritance tree (because, in most
cases, the query only mentions the topmost table of the other side of
the FK relationship).  So we can share a single cached plan among all
the partitions, saving memory, not to mention time to build and maintain
the cached plans.

Keisuke Kuroda and Amit Langote

Discussion: https://postgr.es/m/cab4b85d-9292-967d-adf2-be0d803c3e23@nttcom.co.jp_1
2021-03-10 14:22:31 -05:00
Michael Paquier 6c788d9f6a Move tablespace path re-creation from the makefiles to pg_regress
Moving this logic into pg_regress fixes a potential failure with
parallel tests when pg_upgrade and the main regression test suite both
trigger the makefile rule that cleans up testtablespace/ under
src/test/regress.  Even if pg_upgrade was triggering this rule, it has
no need to do so as it uses a different tablespace path.  So if
pg_upgrade triggered the makefile rule for the tablespace setup while
the main regression test suite ran the tablespace cases, it would fail.

61be85a was a similar attempt at achieving that, but it broke cases
where the regression tests need to run under an Administrator account,
as with Appveyor.

Reported-by: Andres Freund, Kyotaro Horiguchi
Reviewed-by: Peter Eisentraut
Discussion: https://postgr.es/m/20201209012911.uk4d6nxcnkp7ehrx@alap3.anarazel.de
2021-03-10 14:50:00 +09:00
Amit Kapila 05c8482f7f Enable parallel SELECT for "INSERT INTO ... SELECT ...".
Parallel SELECT can't be utilized for INSERT in the following cases:
- INSERT statement uses the ON CONFLICT DO UPDATE clause
- Target table has a parallel-unsafe trigger, index expression or
  predicate, column default expression, or check constraint
- Target table has a parallel-unsafe domain constraint on any column
- Target table is a partitioned table with a parallel-unsafe partition key
  expression or support function

The planner is updated to perform additional parallel-safety checks for
the cases listed above, for determining whether it is safe to run INSERT
in parallel-mode with an underlying parallel SELECT. The planner will
consider using parallel SELECT for "INSERT INTO ... SELECT ...", provided
nothing unsafe is found from the additional parallel-safety checks, or
from the existing parallel-safety checks for SELECT.
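
For illustration only (not from the commit message), what a qualifying
plan might look like; table names are hypothetical:

  EXPLAIN (COSTS OFF) INSERT INTO target SELECT * FROM source;
  --  Insert on target
  --    ->  Gather
  --          Workers Planned: 2
  --          ->  Parallel Seq Scan on source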

While checking parallel-safety, we need to check all the partitions of
the table, which can be costly, especially when we end up deciding not
to use a parallel plan. So, in a separate patch, we will introduce a GUC
and/or a reloption to enable/disable parallelism for INSERT statements.

Prior to entering parallel-mode for the execution of INSERT with parallel
SELECT, a TransactionId is acquired and assigned to the current
transaction state. This is necessary to prevent the INSERT from attempting
to assign the TransactionId whilst in parallel-mode, which is not allowed.
This approach has the disadvantage that if the underlying SELECT does
not return any rows, the acquired TransactionId goes unused; however,
that should rarely matter in practice.

Author: Greg Nancarrow, Amit Langote, Amit Kapila
Reviewed-by: Amit Langote, Hou Zhijie, Takayuki Tsunakawa, Antonin Houska, Bharath Rupireddy, Dilip Kumar, Vignesh C, Zhihong Yu, Amit Kapila
Tested-by: Tang, Haiying
Discussion: https://postgr.es/m/CAJcOf-cXnB5cnMKqWEp2E2z7Mvcd04iLVmV=qpFJrR3AcrTS3g@mail.gmail.com
Discussion: https://postgr.es/m/CAJcOf-fAdj=nDKMsRhQzndm-O13NY4dL6xGcEvdX5Xvbbi0V7g@mail.gmail.com
2021-03-10 07:38:58 +05:30
Fujii Masao ff99918c62 Track total amounts of times spent writing and syncing WAL data to disk.
This commit adds a new GUC, track_wal_io_timing. When this is enabled,
the total amounts of time that XLogWrite spends writing and
issue_xlog_fsync spends syncing WAL data to disk are counted in
pg_stat_wal. This information is useful for checking how much WAL
writes and syncs affect performance.

Enabling track_wal_io_timing will make the server query the operating
system for the current time every time WAL is written or synced,
which may cause significant overhead on some platforms. To avoid
imposing that additional overhead on servers that only enable
track_io_timing, this commit introduces track_wal_io_timing as a
parameter separate from track_io_timing.

Note that WAL write and sync activity by the walreceiver is not yet
tracked.

This commit also makes the server track the number of times XLogWrite
writes and issue_xlog_fsync syncs WAL data to disk, in pg_stat_wal,
regardless of the setting of track_wal_io_timing. These counters can be
used to calculate, for example, the average WAL write and sync time per
request.
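
A sketch of reading the new columns (illustrative, not from the commit
message):

  SET track_wal_io_timing = on;  -- superuser-level setting
  SELECT wal_write, wal_sync,
         wal_write_time / NULLIF(wal_write, 0) AS avg_write_ms,
         wal_sync_time  / NULLIF(wal_sync, 0)  AS avg_sync_ms
  FROM pg_stat_wal;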

Bump PGSTAT_FILE_FORMAT_ID.

Bump catalog version.

Author: Masahiro Ikeda
Reviewed-By: Japin Li, Hayato Kuroda, Masahiko Sawada, David Johnston, Fujii Masao
Discussion: https://postgr.es/m/0509ad67b585a5b86a83d445dfa75392@oss.nttdata.com
2021-03-09 16:52:06 +09:00
Michael Paquier 9d2d457009 Add support for more progress reporting in COPY
The command (TO or FROM), its type (file, pipe, program or callback),
and the number of tuples excluded by a WHERE clause in COPY FROM are
added to the progress reporting already available.

The column "lines_processed" is renamed to "tuples_processed" to
disambiguate the meaning of this column in the cases of CSV and BINARY
COPY and to be more consistent with the other catalog progress views.
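
For illustration (not part of the commit message), monitoring a running
COPY from another session:

  SELECT relid::regclass, command, type,
         tuples_processed, tuples_excluded
  FROM pg_stat_progress_copy;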

Bump catalog version, again.

Author: Matthias van de Meent
Reviewed-by: Michael Paquier, Justin Pryzby, Bharath Rupireddy, Josef
Šimánek, Tomas Vondra
Discussion: https://postgr.es/m/CAEze2WiOcgdH4aQA8NtZq-4dgvnJzp8PohdeKchPkhMY-jWZXA@mail.gmail.com
2021-03-09 14:21:03 +09:00
Michael Paquier f9264d1524 Remove support for SSL compression
PostgreSQL disabled compression as of e3bdb2d, and the documentation
has recommended against using it since.  Additionally, SSL compression has
been disabled in OpenSSL since version 1.1.0, and was disabled in many
distributions long before that.  The most recent TLS version, TLSv1.3,
disallows compression at the protocol level.

This commit removes the feature itself: support for the libpq
parameter sslcompression is dropped (the parameter is still accepted,
for compatibility with existing connection strings, but ignored), and
the equivalent field is removed from pg_stat_ssl and, de facto, from
PgBackendSSLStatus.

Note that, on top of removing the ability to activate compression by
configuration, compression is actively disabled in both frontend and
backend to avoid overrides from local configurations.
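
Illustrative only (not from the commit message), the view now reports
SSL state without a compression column:

  SELECT pid, ssl, version, cipher FROM pg_stat_ssl WHERE ssl;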

A TAP test is added for the deprecated SSL parameters, to check
backwards compatibility.

Bump catalog version.

Author: Daniel Gustafsson
Reviewed-by: Peter Eisentraut, Magnus Hagander, Michael Paquier
Discussion:  https://postgr.es/m/7E384D48-11C5-441B-9EC3-F7DB1F8518F6@yesql.se
2021-03-09 11:16:47 +09:00
Peter Eisentraut f9a0392e1c Add bit_xor aggregate function
This can be used as a checksum for unordered sets.  bit_and and bit_or
already exist.
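
A quick sketch (not from the commit message) of bit_xor as an
order-independent checksum:

  SELECT bit_xor(v) FROM (VALUES (1), (2), (3)) AS t(v);  -- 1 # 2 # 3 = 0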

Author: Alexey Bashtanov <bashtanov@imap.cc>
Reviewed-by: Ibrar Ahmed <ibrar.ahmad@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/9d4582ae-ecfc-3a13-2238-6ab5a37c1f41@imap.cc
2021-03-06 19:28:05 +01:00
Tom Lane 0a687c8f10 Add trim_array() function.
This has been in the SQL spec since 2008.  It's a pretty thin
wrapper around the array slice functionality, but the spec
says we should have it, so here it is.
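
For illustration (not part of the commit message), trimming the last
two elements:

  SELECT trim_array(ARRAY[1, 2, 3, 4, 5, 6], 2);  -- {1,2,3,4}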

Vik Fearing, reviewed by Dian Fay

Discussion: https://postgr.es/m/fc92ce17-9655-8ff1-c62a-4dc4c8ccd815@postgresfriends.org
2021-03-03 16:39:57 -05:00
Amit Kapila 19890a064e Add option to enable two_phase commits via pg_create_logical_replication_slot.
Commit 0aa8a01d04 extended the output plugin API to allow decoding of
prepared xacts and allowed the user to enable/disable the two-phase
option via pg_logical_slot_get_changes(). This can lead to a problem:
the first time changes are fetched via pg_logical_slot_get_changes()
without the two_phase option enabled, the prepared transaction is not
returned even though the prepare happened after the consistent
snapshot. On the next fetch, with the two_phase option enabled, the
prepare can be skipped because the start-decoding point has moved past
it by then, so the user will only ever see the commit prepared.

Allow this option to be enabled or disabled at slot creation time; the
default is false. This breaks existing slots, which is fine in a major
release.
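
For illustration only (not from the commit message), creating a slot
with two-phase decoding enabled:

  SELECT pg_create_logical_replication_slot('myslot', 'test_decoding',
                                            temporary => false,
                                            twophase => true);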

Author: Ajin Cherian
Reviewed-by: Amit Kapila and Vignesh C
Discussion: https://postgr.es/m/d0f60d60-133d-bf8d-bd70-47784d8fabf3@enterprisedb.com
2021-03-03 07:34:11 +05:30
Tom Lane ffd3944ab9 Improve reporting for syntax errors in multi-line JSON data.
Point to the specific line where the error was detected; the
previous code tended to include several preceding lines as well.
Avoid re-scanning the entire input to recompute which line that
was.  Simplify the logic a bit.  Add test cases.
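
A sketch of the kind of input this improves (illustrative, not from the
commit message):

  SELECT '{
    "one": 1,
    "two": oops
  }'::json;
  -- the error context now points at the line containing "oops",
  -- rather than echoing the preceding lines as well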

Simon Riggs and Hamid Akhtar, reviewed by Daniel Gustafsson and myself

Discussion: https://postgr.es/m/CANbhV-EPBnXm3MF_TTWBwwqgn1a1Ghmep9VHfqmNBQ8BT0f+_g@mail.gmail.com
2021-03-01 16:44:17 -05:00
Alvaro Herrera 25936fd46c
Fix use-after-free bug with AfterTriggersTableData.storeslot
AfterTriggerSaveEvent() wrongly allocates the slot in execution-span
memory context, whereas the correct thing is to allocate it in
a transaction-span context, because that's where the enclosing
AfterTriggersTableData instance belongs.

Backpatch to 12 (the test back to 11, where it works well with no code
changes, and it's good to have it there to confirm that the case was
previously well supported); this bug seems to have been introduced by
commit ff11e7f4b9.

Reported-by: Bertrand Drouvot <bdrouvot@amazon.com>
Author: Amit Langote <amitlangote09@gmail.com>
Discussion: https://postgr.es/m/39a71864-b120-5a5c-8cc5-c632b6f16761@amazon.com
2021-02-27 18:09:15 -03:00
David Rowley bb437f995d Add TID Range Scans to support efficient scanning ranges of TIDs
This adds a new executor node named TID Range Scan.  The query planner
will generate paths for TID Range scans when quals are discovered on base
relations which search for ranges on the table's ctid column.  These
ranges may be open at either end. For example, WHERE ctid >= '(10,0)';
will return all tuples on page 10 and over.
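
For illustration (not part of the commit message), a query that can use
the new node; the table name is hypothetical:

  EXPLAIN (COSTS OFF) SELECT * FROM t WHERE ctid >= '(10,0)';
  --  Tid Range Scan on t
  --    TID Cond: (ctid >= '(10,0)'::tid)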

To support this, two new optional callback functions have been added to
table AM.  scan_set_tidrange is used to set the scan range to just the
given range of TIDs.  scan_getnextslot_tidrange fetches the next tuple
in the given range.

For AMs where scanning ranges of TIDs would not make sense, these
can be set to NULL in the TableAmRoutine.  The query planner won't
generate TID Range Scan Paths in that case.

Author: Edmund Horner, David Rowley
Reviewed-by: David Rowley, Tomas Vondra, Tom Lane, Andres Freund, Zhihong Yu
Discussion: https://postgr.es/m/CAMyN-kB-nFTkF=VA_JPwFNo08S0d-Yk0F741S2B7LDmYAi8eyA@mail.gmail.com
2021-02-27 22:59:36 +13:00
Peter Eisentraut f4adc41c4f Enhanced cycle mark values
Per SQL:202x draft, in the CYCLE clause of a recursive query, the
cycle mark values can be of type boolean and can be omitted, in which
case they default to TRUE and FALSE.
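
A sketch of the abbreviated form now accepted (illustrative; all names
are hypothetical):

  WITH RECURSIVE search_graph (id, link) AS (
      SELECT g.id, g.link FROM graph g WHERE g.id = 1
    UNION ALL
      SELECT g.id, g.link
      FROM graph g JOIN search_graph sg ON g.id = sg.link
  ) CYCLE id SET is_cycle USING path  -- boolean mark, defaulting to TRUE/FALSE
  SELECT * FROM search_graph WHERE NOT is_cycle;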

Reviewed-by: Vik Fearing <vik@postgresfriends.org>
Discussion: https://www.postgresql.org/message-id/flat/db80ceee-6f97-9b4a-8ee8-3ba0c58e5be2@2ndquadrant.com
2021-02-27 08:13:24 +01:00
Peter Eisentraut b3a9e9897e Extend a test case a little
This will possibly help a subsequent patch by making sure the notice
messages are distinct so that it's clear that they come out in the
right order.

Author: Fabien COELHO <coelho@cri.ensmp.fr>
Discussion: https://www.postgresql.org/message-id/alpine.DEB.2.21.1904240654120.3407%40lancre
2021-02-26 09:11:15 +01:00
Thomas Munro 8556267b2b Revert "pg_collation_actual_version() -> pg_collation_current_version()."
This reverts commit 9cf184cc05.  The name change
was less well received than anticipated.

Discussion: https://postgr.es/m/afcfb97e-88a1-a540-db95-6c573b93bc2b%40eisentraut.org
2021-02-26 15:29:27 +13:00
Tom Lane 80ca8464fe Fix list-manipulation bug in WITH RECURSIVE processing.
makeDependencyGraphWalker and checkWellFormedRecursionWalker
thought they could hold onto a pointer to a list's first
cons cell while the list was modified by recursive calls.
That was okay when the cons cell was actually separately
palloc'd ... but since commit 1cff1b95a, it's quite unsafe,
leading to core dumps or incorrect complaints of faulty
WITH nesting.

In the field this'd require at least a seven-deep WITH nest
to cause an issue, but enabling DEBUG_LIST_MEMORY_USAGE
allows the bug to be seen with lesser nesting depths.

Per bug #16801 from Alexander Lakhin.  Back-patch to v13.

Michael Paquier and Tom Lane

Discussion: https://postgr.es/m/16801-393c7922143eaa4d@postgresql.org
2021-02-25 20:47:32 -05:00
Michael Paquier bcf2667bf6 Fix some typos, grammar and style in docs and comments
The portions fixing the documentation are backpatched where needed.

Author: Justin Pryzby
Discussion: https://postgr.es/m/20210210235557.GQ20012@telsasoft.com
backpatch-through: 9.6
2021-02-24 16:13:17 +09:00
Peter Eisentraut 8ec8fe0f31 Message style fix
Don't quote type name placeholders.
2021-02-24 07:00:49 +01:00
Thomas Munro 9cf184cc05 pg_collation_actual_version() -> pg_collation_current_version().
The new name seems a bit more natural.

Discussion: https://postgr.es/m/20210117215940.GE8560%40telsasoft.com
2021-02-22 23:32:16 +13:00