Commit Graph

3972 Commits

Alvaro Herrera afc7e0ad55 Raise error on concurrent drop of partitioned index
We were already raising an error for DROP INDEX CONCURRENTLY on a
partitioned table, albeit a different and confusing one:
  ERROR:  DROP INDEX CONCURRENTLY must be first action in transaction

Change that to throw a more comprehensible error:
  ERROR:  cannot drop partitioned index "%s" concurrently

Michael Paquier authored the test case for indexes on temporary
partitioned tables.
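
For illustration, a minimal sequence (hypothetical names) that now
raises the clearer error:

  CREATE TABLE parted (a int) PARTITION BY LIST (a);
  CREATE INDEX parted_idx ON parted (a);
  DROP INDEX CONCURRENTLY parted_idx;
  ERROR:  cannot drop partitioned index "parted_idx" concurrently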

Backpatch to 11, where indexes on partitioned tables were added.

Reported-by: Jan Mussler <jan.mussler@zalando.de>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/16594-d2956ca909585067@postgresql.org
2020-09-01 13:40:43 -04:00
Tom Lane 10564ee02c Fix code for re-finding scan position in a multicolumn GIN index.
collectMatchBitmap() needs to re-find the index tuple it was previously
looking at, after transiently dropping lock on the index page it's on.
The tuple should still exist and be at its prior position or somewhere
to the right of that, since ginvacuum never removes tuples but
concurrent insertions could add one.  However, there was a thinko in
that logic, to the effect of expecting any inserted tuples to have the
same index "attnum" as what we'd been scanning.  Since there's no
physical separation of tuples with different attnums, it's not terribly
hard to devise scenarios where this fails, leading to transient "lost
saved point in index" errors.  (While I've duplicated this with manual
testing, it seems impossible to make a reproducible test case with our
available testing technology.)

Fix by just continuing the scan when the attnum doesn't match.

While here, improve the error message used if we do fail, so that it
matches the wording used in btree for a similar case.

collectMatchBitmap()'s posting-tree code path was previously not
exercised at all by our regression tests.  While I can't make
a regression test that exhibits the bug, I can at least improve
the code coverage here, so do that.  The test case I made for this
is an extension of one added by 4b754d6c1, so it only works in
HEAD and v13; didn't seem worth trying hard to back-patch it.

Per bug #16595 from Jesse Kinkead.  This has been broken since
multicolumn capability was added to GIN (commit 27cb66fdf),
so back-patch to all supported branches.

Discussion: https://postgr.es/m/16595-633118be8eef9ce2@postgresql.org
2020-08-27 17:36:13 -04:00
Fujii Masao adc8fc6167 Add regression test for pg_backend_memory_contexts.
Author: Atsushi Torikoshi
Reviewed-by: Michael Paquier, Fujii Masao
Discussion: https://postgr.es/m/20200819135545.GC19121@paquier.xyz
2020-08-26 10:52:02 +09:00
Tom Lane 4d346def15 Avoid pushing quals down into sub-queries that have grouping sets.
The trouble with doing this is that an apparently-constant subquery
output column isn't really constant if it is a grouping column that
appears in only some of the grouping sets.  A qual using such a
column would be subject to incorrect const-folding after push-down,
as seen in bug #16585 from Paul Sivash.
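
A query of roughly this shape (hypothetical table) could be affected,
since b is a grouping column in only one of the grouping sets:

  SELECT * FROM (SELECT a, b, count(*) AS c
                 FROM t GROUP BY GROUPING SETS ((a), (a, b))) ss
  WHERE b = 1;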

To fix, just disable qual pushdown altogether if the sub-query has
nonempty groupingSets.  While we could imagine far less restrictive
solutions, there is not much point in working harder right now,
because subquery_planner() won't move HAVING clauses to WHERE within
such a subquery.  If the qual stays in HAVING it's not going to be
a lot more useful than if we'd kept it at the outer level.

Having said that, this restriction could be removed if we used a
parsetree representation that distinguished such outputs from actual
constants, which is something I hope to do in future.  Hence, make
the patch a minimal addition rather than integrating it more tightly
(e.g. by renumbering the existing items in subquery_is_pushdown_safe's
comment).

Back-patch to 9.5 where grouping sets were introduced.

Discussion: https://postgr.es/m/16585-9d8c340d23ade8c1@postgresql.org
2020-08-22 14:46:40 -04:00
Tom Lane 5b02d68e75 Fix ALTER TABLE's scheduling rules for AT_AddConstraint subcommands.
Commit 1281a5c90 rearranged the logic in this area rather drastically,
and it broke the case of adding a foreign key constraint in the same
ALTER that adds the pkey or unique constraint it depends on.  While
self-referential fkeys are surely a pretty niche case, this used to
work so we shouldn't break it.
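
The now-repaired case looks like this (hypothetical table):

  ALTER TABLE t ADD PRIMARY KEY (id),
    ADD FOREIGN KEY (parent_id) REFERENCES t (id);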

To fix, reorganize the scheduling rules in ATParseTransformCmd so
that a transformed AT_AddConstraint subcommand will be delayed into
a later pass in all cases, not only when it's been spit out as a
side-effect of parsing some other command type.

Also tweak the logic so that we won't run ATParseTransformCmd twice
while doing this.  It seems to work even without that, but it's
surely wasting cycles to do so.

Per bug #16589 from Jeremy Evans.  Back-patch to v13 where the new
code was introduced.

Discussion: https://postgr.es/m/16589-31c8d981ca503896@postgresql.org
2020-08-22 12:34:17 -04:00
Tom Lane 5028981923 Fix handling of CREATE TABLE LIKE with inheritance.
If a CREATE TABLE command uses both LIKE and traditional inheritance,
Vars in CHECK constraints and expression indexes that are absorbed
from a LIKE parent table tended to get mis-numbered, resulting in
wrong answers and/or bizarre error messages (though probably not any
actual crashes, thanks to validation occurring in the executor).
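
A command shape that could hit the LIKE problem (hypothetical names):

  CREATE TABLE c (LIKE src INCLUDING ALL) INHERITS (p);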

In v12 and up, the same could happen to Vars in GENERATED expressions,
even in cases with no LIKE clause but multiple traditional-inheritance
parents.

The cause of the problem for LIKE is that parse_utilcmd.c supposed
it could renumber such Vars correctly during transformCreateStmt(),
which it cannot since we have not yet accounted for columns added via
inheritance.  Fix that by postponing processing of LIKE INCLUDING
CONSTRAINTS, DEFAULTS, GENERATED, INDEXES till after we've performed
DefineRelation().

The error with GENERATED and multiple inheritance is a simple oversight
in MergeAttributes(); it knows it has to renumber Vars in inherited
CHECK constraints, but forgot to apply the same processing to inherited
GENERATED expressions (a/k/a defaults).

Per bug #16272 from Tom Gottfried.  The non-GENERATED variants of the
issue are ancient, presumably dating right back to the addition of
CREATE TABLE LIKE; hence back-patch to all supported branches.

Discussion: https://postgr.es/m/16272-6e32da020e9a9381@postgresql.org
2020-08-21 15:00:47 -04:00
Fujii Masao eabba4a3eb Fix explain regression test failure.
Commit 9d701e624f caused the regression test for EXPLAIN to fail on
the buildfarm member prion. This happened because of instability in
the test output: in text format, whether the "Planning:" line is
output varies depending on the system state.

This commit updates the regression test so that it ignores the
"Planning:" line, to produce more stable test output and get rid of
the test failure.

Back-patch to v13.

Author: Fujii Masao
Discussion: https://postgr.es/m/1803897.1598021621@sss.pgh.pa.us
2020-08-22 01:50:05 +09:00
Fujii Masao 9d701e624f Rework EXPLAIN for planner's buffer usage.
Commit ce77abe63c allowed EXPLAIN (BUFFERS) to report the information
on buffer usage during planning phase. However three issues were
reported regarding this feature.

(1) Previously, the EXPLAIN option BUFFERS required ANALYZE, so the
    query had to be actually executed even when we only wanted to see
    the planner's buffer usage. This was especially inconvenient when
    the query was a write, such as DELETE.

(2) EXPLAIN included the planner's buffer usage in the summary
    information, so the SUMMARY option had to be enabled to report it.
    This format was also confusing.

(3) The output structure for planning information was not consistent
    between TEXT format and the others. For example, the "Planning" tag
    was output in JSON format, but not in TEXT format.

For (1), this commit allows us to perform EXPLAIN (BUFFERS) without
ANALYZE to report the planner's buffer usage.

For (2), this commit changed EXPLAIN output so that the planner's
buffer usage is reported before summary information.

For (3), this commit made the output structure for planning
information more consistent between the formats.
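
With this commit, for example, the planner's buffer usage can be
inspected without executing the query (hypothetical table):

  EXPLAIN (BUFFERS) SELECT * FROM t;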

Back-patch to v13 where the planner's buffer usage was allowed to
be reported in EXPLAIN.

Reported-by: Pierre Giraud, David Rowley
Author: Fujii Masao
Reviewed-by: David Rowley, Julien Rouhaud, Pierre Giraud
Discussion: https://postgr.es/m/07b226e6-fa49-687f-b110-b7c37572f69e@dalibo.com
2020-08-21 20:48:59 +09:00
Fujii Masao 3e98c0bafb Add pg_backend_memory_contexts system view.
This view displays the usage of all the memory contexts of the server
process attached to the current session. This information is useful to
investigate the cause of backend-local memory bloat.

This information can also be collected by calling
MemoryContextStats(TopMemoryContext) via a debugger, but that technique
cannot be used in environments where no debugger is available, and it
outputs lots of text messages that are not easy to analyze. The
pg_backend_memory_contexts view allows us to access backend-local
memory context information more easily.
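
For example, a quick look at the largest memory contexts of the
current backend:

  SELECT name, parent, level, total_bytes, used_bytes
  FROM pg_backend_memory_contexts
  ORDER BY total_bytes DESC LIMIT 5;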

Bump catalog version.

Author: Atsushi Torikoshi, Fujii Masao
Reviewed-by: Tatsuhito Kasahara, Andres Freund, Daniel Gustafsson, Robert Haas, Michael Paquier
Discussion: https://postgr.es/m/72a656e0f71d0860161e0b3f67e4d771@oss.nttdata.com
2020-08-19 15:34:43 +09:00
Alvaro Herrera 6e70443eda Disable autovacuum for BRIN test table
This should improve stability in the tests.

Per buildfarm member hyrax (CLOBBER_CACHE_ALWAYS) via Tom Lane.

Discussion: https://postgr.es/m/871534.1597503261@sss.pgh.pa.us
2020-08-17 16:20:06 -04:00
Tom Lane 1e7629d2c9 Be more careful about the shape of hashable subplan clauses.
nodeSubplan.c expects that the testexpr for a hashable ANY SubPlan
has the form of one or more OpExprs whose LHS is an expression of the
outer query's, while the RHS is an expression over Params representing
output columns of the subquery.  However, the planner only went as far
as verifying that the clauses were all binary OpExprs.  This works
99.99% of the time, because the clauses have the right shape when
emitted by the parser --- but it's possible for function inlining to
break that, as reported by PegoraroF10.  To fix, teach the planner
to check that the LHS and RHS contain the right things, or more
accurately don't contain the wrong things.  Given that this has been
broken for years without anyone noticing, it seems sufficient to just
give up hashing when it happens, rather than go to the trouble of
commuting the clauses back again (which wouldn't necessarily work
anyway).

While poking at that, I also noticed that nodeSubplan.c had a baked-in
assumption that the number of hash clauses is identical to the number
of subquery output columns.  Again, that's fine as far as parser output
goes, but it's not hard to break it via function inlining.  There seems
little reason for that assumption though --- AFAICS, the only thing
it's buying us is not having to store the number of hash clauses
explicitly.  Adding code to the planner to reject such cases would take
more code than getting nodeSubplan.c to cope, so I fixed it that way.

This has been broken for as long as we've had hashable SubPlans,
so back-patch to all supported branches.

Discussion: https://postgr.es/m/1549209182255-0.post@n3.nabble.com
2020-08-14 22:14:03 -04:00
Heikki Linnakangas 5bdf694568 Fix typo in test comment. 2020-08-14 10:40:50 +03:00
Etsuro Fujita 199cec9779 Fix yet another issue with step generation in partition pruning.
Commit 13838740f fixed some issues with step generation in partition
pruning, but there was yet another one: get_steps_using_prefix() assumes
that clauses in the passed-in prefix list are sorted in ascending order
of their partition key numbers, but the caller failed to ensure this for
range partitioning, which led to an assertion failure in debug builds.
Adjust the caller function to arrange the clauses in the prefix list in
the required order for range partitioning.

Back-patch to v11, like the previous commit.

Patch by me, reviewed by Amit Langote.

Discussion: https://postgr.es/m/CAPmGK16jkXiFG0YqMbU66wte-oJTfW6D1HaNvQf%3D%2B5o9%3Dm55wQ%40mail.gmail.com
2020-08-07 14:45:00 +09:00
Tom Lane 3d2376d55c Fix oversight in ALTER TYPE: typmodin/typmodout must propagate to arrays.
If a base type supports typmods, its array type does too, with the
same interpretation.  Hence changes in pg_type.typmodin/typmodout
must be propagated to the array type.

While here, improve AlterTypeRecurse to not recurse to domains if
there is nothing we'd need to change.

Oversight in fe30e7ebf.  Back-patch to v13 where that came in.
2020-07-31 17:11:28 -04:00
Etsuro Fujita 13838740f6 Fix some issues with step generation in partition pruning.
In the case of range partitioning, get_steps_using_prefix() assumes that
the passed-in prefix list contains at least one clause for each of the
partition keys earlier than the one specified in the passed-in
step_lastkeyno, but the caller (ie, gen_prune_steps_from_opexps())
didn't take it into account, which led to a server crash or incorrect
results when the list contained no clauses for such partition keys, as
reported in bug #16500 and #16501 from Kobayashi Hisanori.  Update the
caller to call that function only when the list created there contains
at least one clause for each of the earlier partition keys in the case
of range partitioning.

While at it, fix some other issues:

* The list to pass to get_steps_using_prefix() is allowed to contain
  multiple clauses for the same partition key, as described in the
  comment for that function, but that function actually assumed that the
  list contained just a single clause for each of middle partition keys,
  which led to an assertion failure when the list contained multiple
  clauses for such partition keys.  Update that function to match the
  comment.
* In the case of hash partitioning, partition keys are allowed to be
  NULL, in which case the list to pass to get_steps_using_prefix()
  contains no clauses for NULL partition keys, but that function treats
  that case like the case of range partitioning, which led to the
  assertion failure.  Update the assertion test to take into account
  NULL partition keys in the case of hash partitioning.
* Fix a typo in a comment in get_steps_using_prefix_recurse().
* gen_partprune_steps() failed to detect self-contradiction from
  strict-qual clauses and an IS NULL clause for the same partition key
  in some cases, producing incorrect partition-pruning steps, which led
  to incorrect results of partition pruning, but fortunately didn't
  cause any user-visible problems, as the self-contradiction is
  detected later in query planning.  Update that function to detect
  the self-contradiction.

Per bug #16500 and #16501 from Kobayashi Hisanori.  Patch by me, initial
diagnosis for the reported issue and review by Dmitry Dolgov.
Back-patch to v11, where partition pruning was introduced.

Discussion: https://postgr.es/m/16500-d1613f2a78e1e090%40postgresql.org
Discussion: https://postgr.es/m/16501-5234a9a0394f6754%40postgresql.org
2020-07-28 11:00:00 +09:00
Tom Lane 8a37951eeb Mark built-in coercion functions as leakproof where possible.
Making these leakproof seems helpful since (for example) if you have a
function f(int8) that is leakproof, you don't want it to effectively
become non-leakproof when you apply it to an int4 or int2 column.
But that's what happens today, since the implicit up-coercion will
not be leakproof.

Most of the coercion functions that visibly can't throw errors are
functions that convert numeric datatypes to other, wider ones.
Notable is that float4_numeric and float8_numeric can be marked
leakproof; before commit a57d312a7 they could not have been.
I also marked the functions that coerce strings to "name" as leakproof;
that's okay today because they truncate silently, but if we ever
reconsidered that behavior then they could no longer be leakproof.

I desisted from marking rtrim1() as leakproof; it appears so right now,
but the code seems a little too complex and perhaps subject to change,
since it's shared with other SQL functions.

Discussion: https://postgr.es/m/459322.1595607431@sss.pgh.pa.us
2020-07-25 12:54:58 -04:00
Tom Lane 2f2007fbb2 Fix assorted bugs by changing TS_execute's callback API to ternary logic.
Text search sometimes failed to find valid matches, for instance
'!crew:A'::tsquery might fail to locate 'crew:1B'::tsvector during
an index search.  The root of the issue is that TS_execute's callback
functions were not changed to use ternary (yes/no/maybe) reporting
when we made the search logic itself do so.  It's somewhat annoying
to break that API, but on the other hand we now see that any code
using plain boolean logic is almost certainly broken since the
addition of phrase search.  There seem to be very few outside callers
of this code anyway, so we'll just break them intentionally to get
them to adapt.

This allows removal of tsginidx.c's private re-implementation of
TS_execute, since that's now entirely duplicative.  It's also no
longer necessary to avoid use of CALC_NOT in tsgistidx.c, since
the underlying callbacks can now do something reasonable.

Back-patch into v13.  We can't change this in stable branches,
but it seems not quite too late to fix it in v13.

Tom Lane and Pavel Borisov

Discussion: https://postgr.es/m/CALT9ZEE-aLotzBg-pOp2GFTesGWVYzXA3=mZKzRDa_OKnLF7Mg@mail.gmail.com
2020-07-24 15:26:51 -04:00
Tom Lane a57d312a77 Support infinity and -infinity in the numeric data type.
Add infinities that behave the same as they do in the floating-point
data types.  Aside from any intrinsic usefulness these may have,
this closes an important gap in our ability to convert floating
values to numeric and/or replace float-based APIs with numeric.

The new values are represented by bit patterns that were formerly
not used (although old code probably would take them for NaNs).
So there shouldn't be any pg_upgrade hazard.
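
A quick illustration of the new values:

  SELECT 'infinity'::numeric;        -- now accepted, yields Infinity
  SELECT '-infinity'::numeric < 0;   -- true, as for float8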

Patch by me, reviewed by Dean Rasheed and Andrew Gierth

Discussion: https://postgr.es/m/606717.1591924582@sss.pgh.pa.us
2020-07-22 19:19:44 -04:00
Tom Lane a4faef8f8f Fix some corner cases for window ranges with infinite offsets.
Many situations where the offset is infinity were not handled sanely.
We should generally allow the val versus base +/- offset comparison to
proceed according to the normal rules of IEEE arithmetic; however, we
must do something special for the corner cases where base +/- offset
would produce NaN due to subtracting two like-signed infinities.
That corresponds to asking which values infinitely precede +inf or
infinitely follow -inf, which should certainly be true of any finite
value or of the opposite-signed infinity.  After some discussion it
seems that the best decision is to make it true of the same-signed
infinity as well, ie, just return constant TRUE if the calculation
would produce a NaN.

(We could write this with a bit less code by subtracting anyway,
and then checking for a NaN result.  However, I prefer this
formulation because it'll be easier to transpose into numeric.c.)

Although this seems like clearly a bug fix with respect to finite
values, it is less obviously correct for infinite values.  Between
that and the fact that the whole issue only arises for very strange
window specifications (e.g. RANGE BETWEEN 'inf' PRECEDING AND 'inf'
PRECEDING), I'll desist from back-patching.

Noted by Dean Rasheed.

Discussion: https://postgr.es/m/3393130.1594925893@sss.pgh.pa.us
2020-07-20 22:03:18 -04:00
Tom Lane 4fb6aeb4f6 Make floating-point "NaN / 0" return NaN instead of raising an error.
This is more consistent with the IEEE 754 spec and our treatment of
NaNs elsewhere; in particular, the case has always acted that way in
"numeric" arithmetic.

Noted by Dean Rasheed.

Discussion: https://postgr.es/m/3421746.1594927785@sss.pgh.pa.us
2020-07-20 19:44:45 -04:00
Fujii Masao d05b172a76 Add generic_plans and custom_plans fields into pg_prepared_statements.
There was no easy way to find out how many times generic and custom
plans had been executed for a prepared statement. This commit exposes
those counts in the pg_prepared_statements view.
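
For example:

  PREPARE q AS SELECT $1::int;
  EXECUTE q(1);
  SELECT name, generic_plans, custom_plans FROM pg_prepared_statements;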

Author: Atsushi Torikoshi, Kyotaro Horiguchi
Reviewed-by: Tatsuro Yamada, Masahiro Ikeda, Fujii Masao
Discussion: https://postgr.es/m/CACZ0uYHZ4M=NZpofH6JuPHeX=__5xcDELF8hT8_2T+R55w4RQw@mail.gmail.com
2020-07-20 11:55:50 +09:00
Tom Lane 9de77b5453 Allow logical replication to transfer data in binary format.
This patch adds a "binary" option to CREATE/ALTER SUBSCRIPTION.
When that's set, the publisher will send data using the data type's
typsend function if any, rather than typoutput.  This is generally
faster, if slightly less robust.

As committed, we won't try to transfer user-defined array or composite
types in binary, for fear that type OIDs won't match at the subscriber.
This might be changed later, but it seems like fit material for a
follow-on patch.
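
The option is set at the subscription level, e.g. (hypothetical
connection string and publication name):

  CREATE SUBSCRIPTION mysub
    CONNECTION 'host=... dbname=...'
    PUBLICATION mypub
    WITH (binary = true);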

Dave Cramer, reviewed by Daniel Gustafsson, Petr Jelinek, and others;
adjusted some by me

Discussion: https://postgr.es/m/CADK3HH+R3xMn=8t3Ct+uD+qJ1KD=Hbif5NFMJ+d5DkoCzp6Vgw@mail.gmail.com
2020-07-18 12:44:51 -04:00
Michael Paquier 2a10fdc430 Eliminate cache lookup errors in SQL functions for object addresses
When using the following functions, users could see various errors of
the type "cache lookup failed for OID XXX", raised with elog(), which
should only be used for internal errors:
* pg_describe_object()
* pg_identify_object()
* pg_identify_object_as_address()

The set of APIs managing object addresses for all object types is made
smarter by gaining a new argument "missing_ok" that allows any caller to
control whether an error is raised on an undefined object.  The SQL
functions listed above are changed to handle the case where an object is
missing.

Regression tests are added for all object types for the cases where
these are undefined.  Before this commit, these cases failed with cache
lookup errors, and now they basically return NULL (minus the name of the
object type requested).

Author: Michael Paquier
Reviewed-by: Aleksander Alekseev, Dmitry Dolgov, Daniel Gustafsson,
Álvaro Herrera, Kyotaro Horiguchi
Discussion: https://postgr.es/m/CAB7nPqSZxrSmdHK-rny7z8mi=EAFXJ5J-0RbzDw6aus=wB5azQ@mail.gmail.com
2020-07-15 09:03:10 +09:00
Tom Lane 689696c711 Fix bitmap AND/OR scans on the inside of a nestloop partition-wise join.
reparameterize_path_by_child() failed to reparameterize BitmapAnd
and BitmapOr paths.  This matters only if such a path is chosen as
the inside of a nestloop partition-wise join, where we have to pass
in parameters from the outside of the nestloop.  If that did happen,
we generated a bad plan that would likely lead to crashes at execution.

This is not entirely reparameterize_path_by_child()'s fault though;
it's the victim of an ancient decision (my ancient decision, I think)
to not bother filling in param_info in BitmapAnd/Or path nodes.  That
caused the function to believe that such nodes and their children
contain no parameter references and so need not be processed.

In hindsight that decision looks pretty penny-wise and pound-foolish:
while it saves a few cycles during path node setup, we do commonly
need the information later.  In particular, by reversing the decision
and requiring valid param_info data in all nodes of a bitmap path
tree, we can get rid of indxpath.c's get_bitmap_tree_required_outer()
function, which computed the data on-demand.  It's not unlikely that
that nets out as a savings of cycles in many scenarios.  A couple
of other things in indxpath.c can be simplified as well.

While here, get rid of some cases in reparameterize_path_by_child()
that are visibly dead or useless, given that we only care about
reparameterizing paths that can be on the inside of a parameterized
nestloop.  This case reminds one of the maxim that untested code
probably does not work, so I'm unwilling to leave unreachable code
in this function.  (I did leave the T_Gather case in place even
though it's not reached in the regression tests.  It's not very
clear to me when the planner might prefer to put Gather below
rather than above a nestloop, but at least in principle the case
might be interesting.)

Per bug #16536, originally from Arne Roland but with a test case
by Andrew Gierth.  Back-patch to v11 where this code came in.

Discussion: https://postgr.es/m/16536-2213ee0b3aad41fd@postgresql.org
2020-07-14 18:56:56 -04:00
David Rowley f1fcf2d3b2 Fix timing issue with ALTER TABLE's validate constraint
An ALTER TABLE to validate a foreign key in which another subcommand
already caused a pending table rewrite could fail due to ALTER TABLE
attempting to validate the foreign key before the actual table rewrite
takes place.  This situation could result in an error such as:

ERROR:  could not read block 0 in file "base/nnnnn/nnnnn": read only 0 of 8192 bytes

The failure here was due to the SPI call which validates the foreign key
trying to access an index which is yet to be rebuilt.

Similarly, we also incorrectly tried to validate CHECK constraints before
the heap had been rewritten.
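
A sketch of the previously-failing shape (hypothetical names):

  ALTER TABLE t
    ALTER COLUMN a TYPE bigint,     -- forces a table rewrite
    VALIDATE CONSTRAINT t_fkey;     -- ran too early before this fix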

The fix for both is to delay constraint validation until phase 3, after
the table has been rewritten.  For CHECK constraints this means a slight
behavioral change.  Previously ALTER TABLE VALIDATE CONSTRAINT on
inheritance tables would be validated from the bottom up.  This was
different from the order of evaluation when a new CHECK constraint was
added.  The changes made here align the VALIDATE CONSTRAINT evaluation
order for inheritance tables to be the same as ADD CONSTRAINT, which is
generally top-down.

Reported-by: Nazli Ugur Koyluoglu, using SQLancer
Discussion: https://postgr.es/m/CAApHDvp%3DZXv8wiRyk_0rWr00skhGkt8vXDrHJYXRMft3TjkxCA%40mail.gmail.com
Backpatch-through: 9.5 (all supported versions)
2020-07-14 16:55:35 +12:00
Tom Lane a742ecf9c6 Cope with lateral references in the quals of a subquery RTE.
The qual pushdown logic assumed that all Vars in a restriction clause
must be Vars referencing subquery outputs; but since we introduced
LATERAL, it's possible for such a Var to be a lateral reference instead.
This led to an assertion failure in debug builds.  In a non-debug
build, there might be no ill effects (if qual_is_pushdown_safe decided
the qual was unsafe anyway), or we could get failures later due to
construction of an invalid plan.  I've not gone to much length to
characterize the possible failures, but at least segfaults in the
executor have been observed.

Given that this has been busted since 9.3 and it took this long for
anybody to notice, I judge that the case isn't worth going to great
lengths to optimize.  Hence, fix by just teaching qual_is_pushdown_safe
that such quals are unsafe to push down, matching the previous behavior
when it accidentally didn't fail.

Per report from Tom Ellis.  Back-patch to all supported branches.

Discussion: https://postgr.es/m/20200713175124.GQ8220@cloudinit-builder
2020-07-13 20:38:20 -04:00
Alexander Korotkov 8d2ed66e41 Improvements to psql \dAo and \dAp commands
* Strategy number and purpose are essential information for an opfamily
  operator, so show those columns in non-verbose output.
* "Left/right arg type" \dAp column names are confusing, because those
  types don't necessarily match the function arguments.  Rename them to
  "Registered left/right type".
* Replace manual assembling of operator/procedure names with casts to
  regoperator/regprocedure.
* Add schema-qualification for pg_catalog functions and tables.

Reported-by: Peter Eisentraut, Tom Lane
Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/2edc7b27-031f-b2b6-0db2-864241c91cb9%402ndquadrant.com
Backpatch-through: 13
2020-07-13 18:53:20 +03:00
Amit Kapila d973747281 Revert "Track statistics for spilling of changes from ReorderBuffer".
The stats added by the reverted commits were available only for WAL
senders; however, users might want to see them for backends doing
logical decoding via the SQL API.  Users might also want to reset these
stats and access them across server restarts, which was not possible
with the patch as committed.

List of commits reverted:

caa3c4242c   Don't call elog() while holding spinlock.
e641b2a995   Doc: Update the documentation for spilled transaction
statistics.
5883f5fe27   Fix unportable printf format introduced in commit 9290ad198.
9290ad198b   Track statistics for spilling of changes from ReorderBuffer.

Additionally, remove the release notes entry for this feature.

Backpatch-through: 13, where it was introduced
Discussion: https://postgr.es/m/CA+fd4k5_pPAYRTDrO2PbtTOe0eHQpBvuqmCr8ic39uTNmR49Eg@mail.gmail.com
2020-07-13 08:53:23 +05:30
Alexander Korotkov df646509f3 Forbid numeric NaN in jsonpath
The SQL standard doesn't define numeric Inf or NaN values.  It appears
even more ridiculous to support them in jsonpath given that JSON doesn't
support these values either.  This commit forbids returning NaN from
.double(), which was previously allowed.  NaN can't be the result of an
inner-jsonpath computation over non-NaNs, so we cannot expect NaN in the
jsonpath output.
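
For example, a conversion that previously yielded NaN now fails:

  SELECT jsonb_path_query('"NaN"', '$.double()');   -- now an error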

Reported-by: Tom Lane
Discussion: https://postgr.es/m/203949.1591879542%40sss.pgh.pa.us
Author: Alexander Korotkov
Reviewed-by: Tom Lane
Backpatch-through: 12
2020-07-11 03:21:00 +03:00
Alexander Korotkov 0657181167 Improve error reporting for jsonpath .double() method
When the jsonpath .double() method detects that a numeric or string
value can't be converted to double precision, it throws an error.  This
commit makes these errors explicitly state the reason for the failure.

Discussion: https://postgr.es/m/CAPpHfdtqJtiSXkP7tOXez18NxhLUH_-75bL8%3DOce4Ki%2Bbv7V6Q%40mail.gmail.com
Author: Alexander Korotkov
Reviewed-by: Tom Lane
Backpatch-through: 12
2020-07-11 03:20:46 +03:00
Alvaro Herrera a8aaa0c786 Morph pg_replication_slots.min_safe_lsn to safe_wal_size
The previous definition of the column was almost universally disliked,
so provide this updated definition which is more useful for monitoring
purposes: a large positive value is good, while zero or a negative value
means danger.  This should be operationally more convenient.
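
For example, to check how much WAL each slot can still safely retain:

  SELECT slot_name, safe_wal_size FROM pg_replication_slots;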

Backpatch to 13, where the new column to pg_replication_slots (and the
feature it represents) were added.

Author: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
Reported-by: Fujii Masao <masao.fujii@oss.nttdata.com>
Discussion: https://postgr.es/m/9ddfbf8c-2f67-904d-44ed-cf8bc5916228@oss.nttdata.com
2020-07-07 13:08:00 -04:00
Peter Eisentraut e61225ffab Rename enable_incrementalsort for clarity
Author: James Coleman <jtc331@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/df652910-e985-9547-152c-9d4357dc3979%402ndquadrant.com
2020-07-05 11:43:08 +02:00
Fujii Masao 9bae7e4cde Add +(pg_lsn,numeric) and -(pg_lsn,numeric) operators.
These operators allow a number of bytes to be added to and subtracted
from an LSN.
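
For example:

  SELECT '0/16AE7F8'::pg_lsn + 16::numeric;   -- advance an LSN by 16 bytes
  SELECT pg_current_wal_lsn() - 1::numeric;   -- back up by one byte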

Bump catalog version.

Author: Fujii Masao
Reviewed-by: Kyotaro Horiguchi, Michael Paquier, Asif Rehman
Discussion: https://postgr.es/m/ed9f7f74-e996-67f8-554a-52ebd3779b3b@oss.nttdata.com
2020-06-30 23:55:07 +09:00
Peter Eisentraut 78c887679d Add current substring regular expression syntax
SQL:1999 had syntax

    SUBSTRING(text FROM pattern FOR escapechar)

but this was replaced in SQL:2003 by the clearer

    SUBSTRING(text SIMILAR pattern ESCAPE escapechar)

which was never implemented in PostgreSQL.  This patch adds that
new syntax as an alternative in the parser, and updates documentation
and tests to indicate that this is the preferred alternative now.
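
For example, both of these now work and return 'oob':

  SELECT substring('foobar' from '%#"o_b#"%' for '#');
  SELECT substring('foobar' similar '%#"o_b#"%' escape '#');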

Reviewed-by: Pavel Stehule <pavel.stehule@gmail.com>
Reviewed-by: Vik Fearing <vik@postgresfriends.org>
Reviewed-by: Fabien COELHO <coelho@cri.ensmp.fr>
Discussion: https://www.postgresql.org/message-id/flat/a15db31c-d0f8-8ce0-9039-578a31758adb%402ndquadrant.com
2020-06-29 11:05:00 +02:00
Tom Lane 2c8ef9363d Future-proof regression tests against possibly-missing posixrules file.
The IANA time zone folk have deprecated use of a "posixrules" file in
the tz database.  While for now it's our choice whether to keep
supplying one in our own builds, installations built with
--with-system-tzdata will soon be needing to cope with that file not
being present, at least on some platforms.

This causes a problem for the horology test, which expected the
nonstandard POSIX zone spec "CST7CDT" to apply pre-2007 US daylight
savings rules.  That does happen if the posixrules file supplies such
information, but otherwise the test produces undesired results.
To fix, add an explicit transition date rule that matches 2005 practice.
(We could alternatively have switched the test to use some real time
zone, but it seems useful to have coverage of this type of zone spec.)

While at it, update a documentation example that also relied on
"CST7CDT"; use a real-world zone name instead.  Also, document why
the zone names EST5EDT, CST6CDT, MST7MDT, PST8PDT aren't subject to
similar failures when "posixrules" is missing.

Back-patch to all supported branches, since the hazard is the same
for all.

Discussion: https://postgr.es/m/1665379.1592581287@sss.pgh.pa.us
2020-06-19 13:55:36 -04:00
Peter Eisentraut 0a40563ead Disallow factorial of negative numbers
The previous implementation returned 1 for all negative numbers, which
is not sensible under any definition.
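
For instance:

  SELECT factorial(-4);   -- now raises an error instead of returning 1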

Discussion: https://www.postgresql.org/message-id/flat/6ce1df0e-86a3-e544-743a-f357ff663f68%402ndquadrant.com
2020-06-18 08:41:31 +02:00
Peter Eisentraut 9d402c73ad Expand tests for factorial
Move from int4 to numeric test.  (They were originally int4 functions,
but were reimplemented for numeric in
04a4821adef38155b7920ba9eb83c4c3c29156f8.)  Add some tests for edge
cases.

Discussion: https://www.postgresql.org/message-id/flat/6ce1df0e-86a3-e544-743a-f357ff663f68%402ndquadrant.com
2020-06-18 08:41:31 +02:00
Tom Lane 3a990a1263 Lobotomize test for float -Inf ^ -2, at least for now.
Per POSIX this case should produce +0, but buildfarm member fossa
(with icc (ICC) 19.0.5.281 20190815) is reporting -0.  icc has a
boatload of unsafe floating-point optimizations, with a corresponding
boatload of not-too-well-documented compiler switches, and it seems our
default use of "-mp1" isn't whacking it hard enough to keep it from
misoptimizing the stanza in dpow() that checks whether y is odd.
There's nothing wrong with that code (seeing that no other buildfarm
member has trouble with it), so I'm content to blame this on the
compiler.  But without access to the compiler I'm not going to guess at
what switches might be needed to fix it.  For now, tweak the test case
so it will accept either -0 or +0 as a correct answer.

Discussion: https://postgr.es/m/E1jkyFX-0005RR-1Q@gemulon.postgresql.org
2020-06-16 15:28:45 -04:00
Tom Lane e532b1d57d Fix power() for infinity inputs some more.
Buildfarm results for commit decbe2bfb show that AIX and illumos
have non-POSIX-compliant pow() functions, as do ancient NetBSD
and HPUX releases.  While it's dubious how much we should care
about the latter two platforms, the former two are probably enough
reason to put in manual handling of infinite-input cases.  Hence,
do so, and clean up the post-pow() error handling to reflect its
now-more-limited scope.  (Notably, while we no longer expect to
ever see EDOM from pow(), report it as a domain error if we do.
The former coding had the net effect of expensively converting the
error to ERANGE, which seems highly questionable: if pow() wanted
to report ERANGE, it would have done so.)

Patch by me; thanks to Michael Paquier for review.

Discussion: https://postgr.es/m/E1jkU7H-00024V-NZ@gemulon.postgresql.org
2020-06-15 12:15:56 -04:00
Tom Lane decbe2bfb1 Fix behavior of exp() and power() for infinity inputs.
Previously, these functions tended to throw underflow errors for
negative-infinity exponents.  The correct thing per POSIX is to
return 0, so let's do that instead.  (Note that the SQL standard
is silent on such issues, as it lacks the concepts of either Inf
or NaN; so our practice is to follow POSIX whenever a corresponding
C-library function exists.)
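
For instance, per POSIX:

  SELECT exp('-inf'::float8);           -- 0
  SELECT power(2.0::float8, '-inf');    -- 0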

Also, add a bunch of test cases verifying that exp() and power()
actually do follow POSIX for Inf and NaN inputs.  While this patch
should guarantee that exp() passes the tests, power() will not unless
the platform's pow(3) is fully POSIX-compliant.  I already know that
gaur fails some of the tests, and I am suspicious that the Windows
animals will too; the extent of compliance of other old platforms
remains to be seen.  We might choose to drop failing test cases, or
to work harder at overriding pow(3) for these cases, but first let's
see just how good or bad the situation is.

Discussion: https://postgr.es/m/582552.1591917752@sss.pgh.pa.us
2020-06-14 11:00:07 -04:00
Peter Eisentraut 378badc8eb Add test coverage for EXTRACT()
The variants for time and timetz had zero test coverage, the variant
for interval only very little.  This adds practically full coverage
for those functions.

Reviewed-by: Vik Fearing <vik@postgresfriends.org>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://www.postgresql.org/message-id/flat/c3306ac7-fcae-a1b8-1e30-6a379d605bcb%402ndquadrant.com
2020-06-14 08:04:45 +02:00
Tom Lane 23cbeda50b Sync behavior of var_samp and stddev_samp for single NaN inputs.
var_samp(numeric) and stddev_samp(numeric) disagreed with their float
cousins about what to do for a single non-null input value that is NaN.
The float versions return NULL on the grounds that the calculation is
only defined for more than one non-null input, which seems like the
right answer.  But the numeric versions returned NaN, as a result of
dealing with edge cases in the wrong order.  Fix that.  The patch
also gets rid of an insignificant memory leak in such cases.
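
That is, the numeric versions now agree with the float ones
(hypothetical one-row input):

  SELECT var_samp(x) FROM (VALUES ('NaN'::numeric)) v(x);   -- NULL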

This inconsistency is of long standing, but on the whole it seems best
not to back-patch the change into stable branches; nobody's complained
and it's such an obscure point that nobody's likely to complain.
(Note that v13 and v12 now contain test cases that will notice if we
accidentally back-patch this behavior change in future.)

Report and patch by me; thanks to Dean Rasheed for review.

Discussion: https://postgr.es/m/353062.1591898766@sss.pgh.pa.us
2020-06-13 14:01:46 -04:00
Tom Lane 03109a5302 Fix behavior of float aggregates for single Inf or NaN inputs.
When there is just one non-null input value, and it is infinity or NaN,
aggregates such as stddev_pop and covar_pop should produce a NaN
result, because the calculation is not well-defined.  They used to do
so, but since we adopted Youngs-Cramer aggregation in commit e954a727f,
they produced zero instead.  That's an oversight, so fix it.  Add tests
exercising these edge cases.
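
For example:

  SELECT stddev_pop(x) FROM (VALUES ('inf'::float8)) v(x);   -- NaN again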

Affected aggregates are

 var_pop(double precision)
 stddev_pop(double precision)
 var_pop(real)
 stddev_pop(real)
 regr_sxx(double precision,double precision)
 regr_syy(double precision,double precision)
 regr_sxy(double precision,double precision)
 regr_r2(double precision,double precision)
 regr_slope(double precision,double precision)
 regr_intercept(double precision,double precision)
 covar_pop(double precision,double precision)
 corr(double precision,double precision)

Back-patch to v12 where the behavior change was accidentally introduced.

Report and patch by me; thanks to Dean Rasheed for review.

Discussion: https://postgr.es/m/353062.1591898766@sss.pgh.pa.us
2020-06-13 13:43:40 -04:00
Tom Lane 77a3be32f7 Fix mishandling of NaN counts in numeric_[avg_]combine.
When merging two NumericAggStates, the code missed adding the new
state's NaNcount unless its N was also nonzero; since those counts
are independent, this is wrong.

This would only have visible effect if some partial aggregate scans
found only NaNs while earlier ones found only non-NaNs; then we could
end up falsely deciding that there were no NaNs and fail to return a
NaN final result as expected.  That's pretty improbable, so it's no
surprise this hasn't been reported from the field.  Still, it's a bug.

I didn't try to produce a regression test that would show the bug,
but I did notice that these functions weren't being reached at all
in our regression tests, so I improved the tests to at least
exercise them.  With these additions, I see pretty complete code
coverage on the aggregation-related functions in numeric.c.

Back-patch to 9.6 where this code was introduced.  (I only added
the improved test case as far back as v10, though, since the
relevant part of aggregates.sql isn't there at all in 9.6.)
2020-06-11 17:38:42 -04:00
Jeff Davis 92c58fd948 Rework HashAgg GUCs.
Eliminate enable_groupingsets_hash_disk, which was primarily useful
for testing grouping sets that use HashAgg and spill. Instead, hack
the table stats to convince the planner to choose hashed aggregation
for grouping sets that will spill to disk. Suggested by Melanie
Plageman.

Rename enable_hashagg_disk to hashagg_avoid_disk_plan, and invert the
meaning of on/off. The new name indicates more strongly that it only
affects the planner. Also, the word "avoid" is less definite, which
should avoid surprises when HashAgg still needs to use the
disk. Change suggested by Justin Pryzby, though I chose a different
GUC name.

Discussion: https://postgr.es/m/CAAKRu_aisiENMsPM2gC4oUY1hHG3yrCwY-fXUg22C6_MJUwQdA%40mail.gmail.com
Discussion: https://postgr.es/m/20200610021544.GA14879@telsasoft.com
Backpatch-through: 13
2020-06-11 12:57:43 -07:00
Peter Eisentraut c7eab0e97e Change default of password_encryption to scram-sha-256
Also, the legacy values on/true/yes/1 for password_encryption that
mapped to md5 are removed.  The only valid values are now
scram-sha-256 and md5.
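
The remaining valid settings:

  SET password_encryption = 'scram-sha-256';   -- new default
  SET password_encryption = 'md5';             -- still accepted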

Reviewed-by: Jonathan S. Katz <jkatz@postgresql.org>
Discussion: https://www.postgresql.org/message-id/flat/d5b0ad33-7d94-bdd1-caac-43a1c782cab2%402ndquadrant.com
2020-06-10 16:42:55 +02:00
Tom Lane 05dea2427c Repair unstable regression test.
Commit 0c882e52a tried to force table atest12 to have more-accurate-
than-default statistics; but transiently setting default_statistics_target
isn't enough for that, because autovacuum could come along and overwrite
the stats later.  This evidently explains some intermittent buildfarm
failures we've seen since then.  Repair by disabling autovac on this table.

Thanks to David Rowley for correctly diagnosing the cause.

Discussion: https://postgr.es/m/CA+hUKG+OUkQSOUTg=qo=S=fWa_tbm99i7rB7mfbHz1SYm4v-jQ@mail.gmail.com
2020-06-09 01:18:34 -04:00
Tom Lane 0c882e52a8 Improve ineq_histogram_selectivity's behavior for non-default orderings.
ineq_histogram_selectivity() can be invoked in situations where the
ordering we care about is not that of the column's histogram.  We could
be considering some other collation, or even more drastically, the
query operator might not agree at all with what was used to construct
the histogram.  (We'll get here for anything using scalarineqsel-based
estimators, so that's quite likely to happen for extension operators.)

Up to now we just ignored this issue and assumed we were dealing with
an operator/collation whose sort order exactly matches the histogram,
possibly resulting in junk estimates if the binary search gets confused.
It's past time to improve that, since the use of nondefault collations
is increasing.  What we can do is verify that the given operator and
collation match what's recorded in pg_statistic, and use the existing
code only if so.  When they don't match, instead execute the operator
against each histogram entry, and take the fraction of successes as our
selectivity estimate.  This gives an estimate that is probably good to
about 1/histogram_size, with no assumptions about ordering.  (The quality
of the estimate is likely to degrade near the ends of the value range,
since the two orderings probably don't agree on what is an extremal value;
but this is surely going to be more reliable than what we did before.)

At some point we might further improve matters by storing more than one
histogram calculated according to different orderings.  But this code
would still be good fallback logic when no matches exist, so that is
not an argument for not doing this.

While here, also improve get_variable_range() to deal more honestly
with non-default collations.

This isn't back-patchable, because it requires adding another argument
to ineq_histogram_selectivity, and because it might have significant
impact on the estimation results for extension operators relying on
scalarineqsel --- mostly for the better, one hopes, but in any case
destabilizing plan choices in back branches is best avoided.

Per investigation of a report from James Lucas.

Discussion: https://postgr.es/m/CAAFmbbOvfi=wMM=3qRsPunBSLb8BFREno2oOzSBS=mzfLPKABw@mail.gmail.com
2020-06-05 16:55:27 -04:00
Michael Paquier 1127f0e392 Preserve pg_index.indisreplident across REINDEX CONCURRENTLY
If the flag value is lost, logical decoding would work the same way as
REPLICA IDENTITY NOTHING, meaning that no old tuple values would be
included anymore in the changes produced by logical decoding.

Author: Michael Paquier
Reviewed-by: Euler Taveira
Discussion: https://postgr.es/m/20200603065340.GK89559@paquier.xyz
Backpatch-through: 12
2020-06-05 10:26:02 +09:00
Tom Lane a9632830bb Reject "23:59:60.nnn" in datetime input.
It's intentional that we don't allow values greater than 24 hours,
while we do allow "24:00:00" as well as "23:59:60" as inputs.
However, the range check was miscoded in such a way that it would
accept "23:59:60.nnn" with a nonzero fraction.  For time or timetz,
the stored result would then be greater than "24:00:00" which would
fail dump/reload, not to mention possibly confusing other operations.

Fix by explicitly calculating the result and making sure it does not
exceed 24 hours.  (This calculation is redundant with what will happen
later in tm2time or tm2timetz.  Maybe someday somebody will find that
annoying enough to justify refactoring to avoid the duplication; but
that seems too invasive for a back-patched bug fix, and the cost is
probably unmeasurable anyway.)

Note that this change also rejects such input as the time portion
of a timestamp(tz) value.
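
For example:

  SELECT '23:59:60'::time;     -- still accepted, reads as 24:00:00
  SELECT '23:59:60.1'::time;   -- now rejected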

Back-patch to v10.  The bug is far older, but to change this pre-v10
we'd need to ensure that the logic behaves sanely with float timestamps,
which is possibly nontrivial due to roundoff considerations.
Doesn't really seem worth troubling with.

Per report from Christoph Berg.

Discussion: https://postgr.es/m/20200520125807.GB296739@msg.df7cb.de
2020-06-04 16:42:23 -04:00
Peter Eisentraut f5067049cd psql: Clean up terminology in \dAp command
The preferred terminology has been support "function", not procedure,
for some time, so change that over.  The command stays \dAp, since
\dAf is already something else.
2020-06-04 22:09:41 +02:00
Michael Paquier e786be5fcb Fix crashes with currtid() and currtid2()
A relation that has no storage initializes rd_tableam to NULL, which
caused those two functions to crash because of a pointer dereference.
Note that in 11 and older versions, this has always failed with a
confusing error "could not open file".

These two functions are used by the Postgres ODBC driver, which requires
them only when connecting to a backend strictly older than 8.1.  When
connected to 8.2 or a newer version, the driver instead uses a RETURNING
clause, support for which was added in 8.2, so it should be possible to
just remove both functions in the future.  This is left as an issue to
address later.

While on it, add more regression tests for those functions as we never
really had coverage for them, and for aggregates of TIDs.

Reported-by: Jaime Casanova, via sqlsmith
Author: Michael Paquier
Reviewed-by: Álvaro Herrera
Discussion: https://postgr.es/m/CAJGNTeO93u-5APMga6WH41eTZ3Uee9f3s8dCpA-GSSqNs1b=Ug@mail.gmail.com
Backpatch-through: 12
2020-06-01 10:32:06 +09:00
Heikki Linnakangas 0099db4ce1 Fix typo in test comment.
The same comment was copied to a few different places, with the same typo.
Backpatch down to v11, where this typo was introduced.
2020-05-28 12:35:18 +03:00
Alvaro Herrera c301c2e739 WITH TIES: number of rows is optional and defaults to one
FETCH FIRST .. ONLY implements this correctly, but we failed to include
it for FETCH FIRST .. WITH TIES in commit 357889eb17.
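
So this is now accepted, with the count defaulting to one
(hypothetical table):

  SELECT * FROM t ORDER BY x FETCH FIRST ROWS WITH TIES;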

Author: Vik Fearing
Discussion: https://postgr.es/m/6aa690ef-551d-e24f-2690-c38c2442947c@postgresfriends.org
2020-05-18 19:28:46 -04:00
Alexander Korotkov 18b9d22cef Cosmetic improvement for psql opfamily-related information
* Rename column "Opfamily Name" to "Operator family" for uniformity.
* Rename column alias from "t1" to "t".
2020-05-17 13:05:11 +03:00
Alexander Korotkov b1953e67e4 Improve ordering for \dAo and \dAp psql commands
This commit changes ORDER BY clause for \dAo and \dAp psql commands in
the following way.
* Operators for the same types are grouped together.
* Same-class operators and procedures are listed before cross-class
  operators and procedures.

Modification of ORDER BY clause for \dAp required removing DISTINCT clause,
which doesn't seem to affect anything.

Discussion: https://postgr.es/m/20200511210856.GA18368%40alvherre.pgsql
Author: Alvaro Herrera revised by me
Reviewed-by: Alexander Korotkov, Nikita Glukhov
2020-05-17 12:45:17 +03:00
Michael Paquier 2c8dd05d6c Make pg_stat_wal_receiver consistent with the WAL receiver's shmem info
d140f2f3 has renamed receivedUpto to flushedUpto, and has added
writtenUpto to the WAL receiver's shared memory information, but
pg_stat_wal_receiver was not consistent with that.  This commit renames
received_lsn to flushed_lsn, and adds a new column called written_lsn.

Bump catalog version.

Author: Michael Paquier
Reviewed-by: Álvaro Herrera
Discussion: https://postgr.es/m/20200515090817.GA212736@paquier.xyz
2020-05-17 09:22:07 +09:00
Tomas Vondra 6a918c3ac8 Rework EXPLAIN format for incremental sort
The explain format used by incremental sort was somewhat inconsistent
with other nodes, making it harder to parse and understand. This commit
addresses that by

 - adding an extra space to better separate groups of values

 - using colons instead of equal signs to separate key/value

 - properly capitalizing first letter of a key

 - using separate lines for full and pre-sorted groups

These changes were proposed by Justin Pryzby and mostly copy the final
explain format used to report WAL usage.

Author: Justin Pryzby
Reviewed-by: James Coleman
Discussion: https://postgr.es/m/20200419023625.GP26953@telsasoft.com
2020-05-12 20:04:39 +02:00
Tom Lane 7b48f1b490 Do pre-release housekeeping on catalog data.
Run renumber_oids.pl to move high-numbered OIDs down, as per pre-beta
tasks specified by RELEASE_CHANGES.  For reference, the command was
./renumber_oids.pl --first-mapped-oid=8000 --target-oid=5032

Also run reformat_dat_file.pl while I'm here.

Renumbering recently-added types changed some results in the opr_sanity
test.  To make those a bit easier to eyeball-verify, change the queries
to show regtype not just bare type OIDs.  (I think we didn't have
regtype when these queries were written.)
2020-05-12 13:03:43 -04:00
Fujii Masao 81ec990a23 Correct standbycheck regression test output.
Commit 2eb34ac369 changed the error messages emitted when commands are
rejected during recovery, but forgot to update the standbycheck
regression test output with those error message changes.

Reported-by: Michail Nikolaev
Author: Michail Nikolaev
Reviewed-by: Fujii Masao
Discussion: https://postgr.es/m/CANtu0ojFPgcspH=0nNZ+qmu0XD69sXKtVttuSoYKHawRADSQGg@mail.gmail.com
2020-05-12 13:56:19 +09:00
Tom Lane 96d175e3e2 Fix findoidjoins to recognize oidvector columns.
Somehow we'd never noticed this oversight, even though it means
that such basic columns as pg_proc.proargtypes were not being
validated by the oidjoins test.  Correct the query and update
the test script with the newly-found dependencies.
2020-05-09 16:28:20 -04:00
Tom Lane 6c298881c2 Update oidjoins regression test for v13.
We seem to have forgotten to do this in the v12 cycle, so add it as
a task in the RELEASE_CHANGES list, in hopes we won't forget again.

While here, fix findoidjoins.c so that it actually works in the
new dispensation where OID is a regular column, and change it to only
consider system relations (this avoids being fooled by the OID column
in the brintest test table).

Also tweak the largeobject test so that the somewhat-recently-added
manual creation of a LO with an OID in the system range doesn't
fool findoidjoins.c.  For the moment I just made that use an unused
OID, but we might have to find a more robust solution someday.
2020-05-09 13:05:08 -04:00
Peter Eisentraut 086ffddf36 Fix several DDL issues of generated columns versus inheritance
Several combinations of generated columns and inheritance in CREATE
TABLE were not handled correctly.  Specifically:

- Disallow a child column specifying a generation expression if the
  parent column is a generated column.  The child column definition
  must be unadorned and the parent column's generation expression will
  be copied.

- Prohibit a child column of a generated parent column specifying
  default values or identity.

- Allow a child column of a not-generated parent column specifying
  itself as a generated column.  This previously did not work, but it
  was possible to arrive at the state via other means (involving ALTER
  TABLE), so it seems sensible to support it.
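
For instance, the first rule above now rejects a child overriding a
generated parent column (hypothetical tables):

  CREATE TABLE p (a int, b int GENERATED ALWAYS AS (a + 1) STORED);
  CREATE TABLE c (b int GENERATED ALWAYS AS (a * 2) STORED) INHERITS (p);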

Add tests for each case.  Also add documentation about the rules
involving generated columns and inheritance.

Discussion:
    https://www.postgresql.org/message-id/flat/15830.1575468847%40sss.pgh.pa.us
    https://www.postgresql.org/message-id/flat/2678bad1-048f-519a-ef24-b12962f41807%40enterprisedb.com
    https://www.postgresql.org/message-id/flat/CAJvUf_u4h0DxkCMCeEKAWCuzGUTnDP-G5iVmSwxLQSXn0_FWNQ%40mail.gmail.com
2020-05-08 11:31:57 +02:00
Peter Eisentraut 501e41dd3c Propagate ALTER TABLE ... SET STORAGE to indexes
When creating a new index, the attstorage setting of the table column
is copied to regular (non-expression) index columns.  But a later
ALTER TABLE ... SET STORAGE is not propagated to indexes, thus
creating an inconsistent and undumpable state.
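
For example, this now recurses to the table's indexes as well
(hypothetical names):

  ALTER TABLE t ALTER COLUMN c SET STORAGE EXTERNAL;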

Discussion: https://www.postgresql.org/message-id/flat/9765d72b-37c0-06f5-e349-2a580aafd989%402ndquadrant.com
2020-05-08 08:39:17 +02:00
Tom Lane db89f0e3a4 Fix YA text phrase search bug.
checkcondition_str() failed to report multiple matches for a prefix
pattern correctly: it would dutifully merge the match positions, but
then after exiting that loop, if the last prefix-matching word had
had no suitable positions, it would report there were no matches.
The upshot would be failing to recognize a match that the query
should match.

It looks like you need all of these conditions to see the bug:
* a phrase search (else we don't ask for match position details)
* a prefix search item (else we don't get to this code)
* a weight restriction (else checkclass_str won't fail)

Noted while investigating a problem report from Pavel Borisov,
though this is distinct from the issue he was on about.

Back-patch to 9.6 where phrase search was added.
2020-05-07 15:59:51 -04:00
Peter Geoghegan d9c501da70 Add nbtree ScalarArrayOpExpr tests.
Add test coverage for the nbtutils.c routines concerned with IndexScans
that have native ScalarArrayOpExpr quals.  The ScalarArrayOpExpr
specialized mark and restore routines, and the "find extreme element"
routine now have some test coverage.

These functions are probably infrequently exercised by real world
queries, so having some coverage seems like a good idea.  The mark and
restore routines were originally added by a bugfix that came several
weeks after the first stable release of Postgres 9.2 (see commit
70bc583319).
2020-04-30 14:33:13 -07:00
Peter Geoghegan 52b164c5a0 Add LP_DEAD deletion of a posting list tuple test.
Make sure that we have test coverage of the posting list tuple path
within _bt_xid_horizon().

Per off-list complaint from Andres Freund.
2020-04-28 16:12:56 -07:00
Alexander Korotkov ef11051bbe Fix definition of pg_statio_all_tables view
The pg_statio_all_tables view had wrong grouping: as a result, the
numbers of toast index blocks read and hit were multiplied by the number
of table indexes.  This commit fixes the view definition.

Backpatching this appears difficult: we don't have a mechanism to patch
a system catalog of existing instances in a minor upgrade.  We could
write a release-notes instruction to do this manually, but per
discussion this is probably not so critical a bug as to justify such an
intrusive fix.

Reported-by: Andrei Zubkov
Discussion: https://postgr.es/m/CAPpHfdtMYkkNudLMG9G0dxX_B%3Dn5sfKzOyxxrvWYtSicaGW0Lw%40mail.gmail.com
2020-04-28 11:30:33 +03:00
Tom Lane e81e5741a6 Fix full text search to handle NOT above a phrase search correctly.
Queries such as '!(foo<->bar)' failed to find matching rows when
implemented as a GiST or GIN index search.  That's because of
failing to handle phrase searches as tri-valued when considering
a query without any position information for the target tsvector.
We can only say that the phrase operator might match, not that it
does match; and therefore its NOT also might match.  The previous
coding incorrectly inverted the approximate phrase result to
decide that there was certainly no match.
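
A hedged reproduction sketch, assuming a hypothetical docs table:

  CREATE TABLE docs (body text);
  INSERT INTO docs VALUES ('foo baz bar');
  CREATE INDEX docs_fts_idx ON docs
    USING gin (to_tsvector('english', body));
  -- 'foo' is not immediately followed by 'bar', so the NOT should match;
  -- the index search previously missed such rows
  SELECT * FROM docs
    WHERE to_tsvector('english', body) @@ to_tsquery('!(foo <-> bar)');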

To fix, we need to make TS_phrase_execute return a real ternary result,
and then bubble that up accurately in TS_execute.  As long as we have
to do that anyway, we can simplify the baroque things TS_phrase_execute
was doing internally to manage tri-valued searching with only a bool
as explicit result.

For now, I left the externally-visible result of TS_execute as a plain
bool.  There do not appear to be any outside callers that need to
distinguish a three-way result, given that they passed in a flag
saying what to do in the absence of position data.  This might need
to change someday, but we wouldn't want to back-patch such a change.

Although tsginidx.c has its own TS_execute_ternary implementation for
use at upper index levels, that sadly managed to get this case wrong
as well :-(.  Fixing it is a lot easier fortunately.

Per bug #16388 from Charles Offenbacher.  Back-patch to 9.6 where
phrase search was introduced.

Discussion: https://postgr.es/m/16388-98cffba38d0b7e6e@postgresql.org
2020-04-27 12:21:04 -04:00
Tomas Vondra de0dc1a847 Fix cost_incremental_sort for expressions with varno 0
When estimating the number of pre-sorted groups in cost_incremental_sort
we must not pass Vars with varno 0 to estimate_num_groups, which would
cause failures in find_base_rel. This may happen when sorting the output
of set operations, thanks to generate_append_tlist.

Unlike recurse_set_operations we can't easily access the original target
list, so if we find any Vars with varno 0, we fall back to the default
estimate DEFAULT_NUM_DISTINCT.
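
A hedged sketch of the query shape that can reach this path, assuming
tables t1 and t2 with columns a and b:

  -- sorting the output of a set operation; the target list generated by
  -- generate_append_tlist carries Vars with varno 0
  SELECT a, b FROM t1 UNION SELECT a, b FROM t2 ORDER BY a, b;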

Reported-by: Justin Pryzby
Discussion: https://postgr.es/m/20200411214639.GK2228%40telsasoft.com
2020-04-23 00:15:24 +02:00
Alvaro Herrera c33869cc3b
psql \d: Display table where trigger is defined, if inherited
It's important to know that a trigger is cloned from a parent table,
because of the behavior that the trigger is dropped on detach.  Make
psql's \d display it.

We'd like to backpatch, but lack of the pg_trigger.tgparentid column
makes it more difficult.  Punt for now.  If somebody wants to volunteer
an implementation that reads pg_depend on older versions, that can
probably be backpatched.

Authors: Justin Pryzby, Amit Langote, Álvaro Herrera
Discussion: https://postgr.es/m/20200419002206.GM26953@telsasoft.com
2020-04-21 18:37:26 -04:00
Alvaro Herrera afccd76f1c
Fix detaching partitions with cloned row triggers
When a partition is detached, any triggers that had been cloned from its
parent were not properly disentangled from its parent triggers.
This resulted in triggers that could not be dropped because they
depended on the trigger in the no-longer-parent table:
  ALTER TABLE t DETACH PARTITION t1;
  DROP TRIGGER trig ON t1;
    ERROR:  cannot drop trigger trig on table t1 because trigger trig on table t requires it
    HINT:  You can drop trigger trig on table t instead.

Moreover the table can no longer be re-attached to its parent, because
the trigger name is already taken:
  ALTER TABLE t ATTACH PARTITION t1 FOR VALUES FROM (1) TO (2);
    ERROR:  trigger "trig" for relation "t1" already exists

The former is a bug introduced in commit 86f575948c.  (The latter is
not necessarily a bug, but it makes the bug more uncomfortable.)

To avoid the complexity that would be needed to tell whether the trigger
has a local definition that has to be merged with the one coming from
the parent table, establish the behavior that the trigger is removed
when the table is detached.
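
A hedged sketch of the resulting behavior:

  ALTER TABLE t DETACH PARTITION t1;
  -- the cloned trigger is removed along with the detach, so the name is
  -- free again and re-attaching works:
  ALTER TABLE t ATTACH PARTITION t1 FOR VALUES FROM (1) TO (2);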

Backpatch to pg11.

Author: Justin Pryzby <pryzby@telsasoft.com>
Reviewed-by: Amit Langote <amitlangote09@gmail.com>
Reviewed-by: Álvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://www.postgresql.org/message-id/flat/20200408152412.GZ2228@telsasoft.com
2020-04-21 13:57:00 -04:00
David Rowley 3cb02e307e Fix possible crash with GENERATED ALWAYS columns
In some corner cases, this could also lead to corrupted values being
included in the tuple.

Users who are concerned that they are affected by this should first
upgrade and then perform a base backup of their database and restore onto
an off-line server. They should then query each table with generated
columns to ensure there are no rows where the generated expression does
not match a newly calculated version of the GENERATED ALWAYS expression.
If no crashes occur and no rows are returned then you're not affected.
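
A hedged sketch of such a check, assuming a hypothetical table tab whose
column g is GENERATED ALWAYS AS (a * 2) STORED:

  SELECT * FROM tab WHERE g IS DISTINCT FROM (a * 2);
  -- any row returned holds a stored value that no longer matches the
  -- generation expression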

Fixes bug #16369.

Reported-by: Cameron Ezell
Discussion: https://postgr.es/m/16369-5845a6f1bef59884@postgresql.org
Backpatch-through: 12 (where GENERATED ALWAYS columns were added.)
2020-04-18 14:10:37 +12:00
Amit Kapila 24d2d38b1e Fix the usage of parallel and full options of vacuum command.
Earlier we were inconsistent in allowing the usage of parallel and
full options.  Change it such that we disallow them only when they are
combined in a way that we don't support.
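
A hedged sketch of the resulting rules, assuming a table t:

  VACUUM (PARALLEL 2) t;        -- allowed
  VACUUM (FULL) t;              -- allowed
  VACUUM (PARALLEL 2, FULL) t;  -- still rejected: FULL cannot run in parallel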

In passing, improve the comments in some of the existing tests of parallel
vacuum.

Reported-by: Tushar Ahuja
Author: Justin Pryzby, Amit Kapila
Reviewed-by: Sawada Masahiko, Michael Paquier, Mahendra Singh Thalor and
Amit Kapila
Discussion: https://postgr.es/m/58c8d171-e665-6fa3-a9d3-d9423b694dae%40enterprisedb.com
2020-04-16 10:55:02 +05:30
Tom Lane 2d59643dbc Account for collation when coercing the output of a SQL function.
Commit 913bbd88d overlooked that the result of coerce_to_target_type
might need collation fixups.  Per report from Andreas Joseph Krogh.

Discussion: https://postgr.es/m/VisenaEmail.72.37d08ec2b8cb8fb5.17179940cd3@tc7-visena
2020-04-14 17:30:36 -04:00
Tom Lane c9b0c678d3 Fix default text search parser's ts_headline code for phrase queries.
This code could produce very poor results when asked to highlight a
string based on a query using phrase-match operators.  The root cause
is that hlCover(), which is supposed to find a minimal substring that
matches the query, was written assuming that word position is not
significant.  I'm only 95% convinced that its algorithm was correct even
for plain AND/OR queries; but it definitely fails completely for phrase
matches, causing it to possibly not identify a cover string at all.
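
A hedged example of the kind of phrase-query highlighting involved:

  SELECT ts_headline('english',
                     'The quick brown fox jumps over the lazy dog',
                     to_tsquery('brown <-> fox'));
  -- hlCover() must find a cover honoring word adjacency, not just the
  -- presence of both words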

Hence, rewrite hlCover() with a less-tense algorithm that just tries
all the possible substrings, earlier and shorter ones first.  (This is
not as bad as it sounds performance-wise, because all of the string
matching has been done already: the repeated tsquery match checks
boil down to pointer comparisons.)

Unfortunately, since that approach produces more candidate cover
strings than before, it also exposes that there were bugs in the
heuristics in mark_hl_words() for selecting a best cover string.
Fixes there include:
* Do not apply the ShortWord filter to words that appear in the query.
* Remove a misguided optimization for quickly rejecting a cover.
* Fix order-of-operation bug that could cause computation of a
wrong figure of merit (poslen) when shortening a cover.
* Change the preference rule so that candidate headlines that do not
include their whole cover string (after MaxWords trimming) are lowest
priority, since they may not actually satisfy the user's query.

This results in some changes in existing regression test cases,
but they all seem reasonable.  Note in particular that the tests
involving strings like "1 2 3" were previously being affected by
the ShortWord filter, masking the normal matching behavior.

Per bug #16345 from Augustinas Jokubauskas; the new test cases are
based on that example.  Back-patch to 9.6 where phrase search was
added to tsquery.

Discussion: https://postgr.es/m/16345-2e0cf5cddbdcd3b4@postgresql.org
2020-04-09 13:19:23 -04:00
Peter Eisentraut e92e4a2b68 Fix CREATE TABLE LIKE INCLUDING GENERATED column order issue
CREATE TABLE LIKE INCLUDING GENERATED would fail if a generated column
referred to a column with a higher attribute number.  This is because
the column mapping mechanism created the mapping incrementally as
columns were added.  This was sufficient for previous uses of that
mechanism (omitting dropped columns), and it also happened to work if
generated columns only referred to columns with lower attribute
numbers, but here it failed.
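
A hedged reproduction sketch, with generated column b referring to the
later column c:

  CREATE TABLE src (
    a int,
    b int GENERATED ALWAYS AS (c * 2) STORED,
    c int
  );
  CREATE TABLE dst (LIKE src INCLUDING GENERATED);  -- failed before this fix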

The fix is to build the attribute mapping in a separate loop before
processing the columns in detail.

Bug: #16342
Reported-by: Ethan Waldo <ewaldo@healthetechs.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
2020-04-09 16:36:45 +02:00
Tomas Vondra cea09246e5 Stabilize incremental_sort tests
The test never did ANALYZE on the test table, so the plans depended on
various defaults (e.g. number of groups being 200). This worked most of
the time, but with CLOBBER_CACHE_ALWAYS the autoanalyze often managed
to build accurate stats, changing the plan.

Fixed by increasing the size of test tables a bit, making the Sort a bit
more expensive than Incremental Sort. The tests were constructed to test
transitions in the Incremental Sort algorithm, and this change does not
break that.

Reviewed-by: James Coleman
Discussion: https://postgr.es/m/CAPpHfds1waRZ=NOmueYq0sx1ZSCnt+5QJvizT8ndT2=etZEeAQ@mail.gmail.com
2020-04-08 18:30:11 +02:00
Peter Eisentraut 83fd4532a7 Allow publishing partition changes via ancestors
To control whether partition changes are replicated using their own
identity and schema or an ancestor's, add a new parameter that can be
set per publication named 'publish_via_partition_root'.

This allows replicating a partitioned table into a different partition
structure on the subscriber.
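
A hedged usage sketch, assuming a partitioned table parted_tab:

  CREATE PUBLICATION pub FOR TABLE parted_tab
    WITH (publish_via_partition_root = true);
  -- changes to the partitions are replicated using the root's identity
  -- and schema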

Author: Amit Langote <amitlangote09@gmail.com>
Reviewed-by: Rafia Sabih <rafia.pghackers@gmail.com>
Reviewed-by: Peter Eisentraut <peter.eisentraut@2ndquadrant.com>
Reviewed-by: Petr Jelinek <petr@2ndquadrant.com>
Discussion: https://www.postgresql.org/message-id/flat/CA+HiwqH=Y85vRK3mOdjEkqFK+E=ST=eQiHdpj43L=_eJMOOznQ@mail.gmail.com
2020-04-08 11:19:23 +02:00
Michael Paquier c0187869a0 Fix crash when using COLLATE in partition bound expressions
Attempting to use a COLLATE clause with a type that is not collatable in
a partition bound expression could crash the server.  This commit fixes
the code by adding more checks, similar to what is done when computing
index or partition attributes, making sure that there is a collation
iff the type is collatable.

Backpatch down to 12, as 7c079d7 introduced this problem.

Reported-by: Alexander Lakhin
Author: Dmitry Dolgov
Discussion: https://postgr.es/m/16325-809194cf742313ab@postgresql.org
Backpatch-through: 12
2020-04-08 15:04:51 +09:00
Tom Lane 981643dcdb Allow partitionwise join to handle nested FULL JOIN USING cases.
This case didn't work because columns merged by FULL JOIN USING are
represented in the parse tree by COALESCE expressions, and the logic
for recognizing a partitionable join failed to match upper-level join
clauses to such expressions.  To fix, synthesize suitable COALESCE
expressions and add them to the nullable_partexprs lists.  This is
pretty ugly and brute-force, but it gets the job done.  (I have
ambitions of rethinking the way outer-join output Vars are
represented, so maybe that will provide a cleaner solution someday.
For now, do this.)
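
A hedged sketch of the newly handled shape, assuming p1, p2 and p3 are
identically partitioned on column a with enable_partitionwise_join on:

  SELECT * FROM (p1 FULL JOIN p2 USING (a)) FULL JOIN p3 USING (a);
  -- the merged USING column is a COALESCE underneath; upper-level join
  -- clauses can now be matched against it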

Amit Langote, reviewed by Justin Pryzby, Richard Guo, and myself

Discussion: https://postgr.es/m/CA+HiwqG2WVUGmLJqtR0tPFhniO=H=9qQ+Z3L_ZC+Y3-EVQHFGg@mail.gmail.com
2020-04-07 22:12:14 -04:00
Etsuro Fujita c8434d64ce Allow partitionwise joins in more cases.
Previously, the partitionwise join technique only allowed partitionwise
join when input partitioned tables had exactly the same partition
bounds.  This commit extends the technique to some cases when the tables
have different partition bounds, by using an advanced partition-matching
algorithm introduced by this commit.  For both the input partitioned
tables, the algorithm checks whether every partition of one input
partitioned table matches at most one partition of the other input
partitioned table, and vice versa.  In such a case the join
between the tables can be broken down into joins between the matching
partitions, so the algorithm produces the pairs of the matching
partitions, plus the partition bounds for the join relation, to allow
partitionwise join for computing the join.  Currently, the algorithm
works for list-partitioned and range-partitioned tables, but not
hash-partitioned tables.  See comments in partition_bounds_merge().
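
A hedged sketch of bounds that now qualify, assuming
enable_partitionwise_join is on:

  CREATE TABLE t1 (a int) PARTITION BY RANGE (a);
  CREATE TABLE t1p1 PARTITION OF t1 FOR VALUES FROM (0) TO (100);
  CREATE TABLE t2 (a int) PARTITION BY RANGE (a);
  CREATE TABLE t2p1 PARTITION OF t2 FOR VALUES FROM (0) TO (50);
  -- the bounds differ, but each partition matches at most one partition
  -- on the other side, so the join can still be done partitionwise:
  SELECT * FROM t1 JOIN t2 USING (a);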

Ashutosh Bapat and Etsuro Fujita, most of regression tests by Rajkumar
Raghuwanshi, some of the tests by Mark Dilger and Amul Sul, reviewed by
Dmitry Dolgov and Amul Sul, with additional review at various points by
Ashutosh Bapat, Mark Dilger, Robert Haas, Antonin Houska, Amit Langote,
Justin Pryzby, and Tomas Vondra

Discussion: https://postgr.es/m/CAFjFpRdjQvaUEV5DJX3TW6pU5eq54NCkadtxHX2JiJG_GvbrCA@mail.gmail.com
2020-04-08 10:25:00 +09:00
Tom Lane 41a194f491 Fix circle_in to accept "(x,y),r" as it's advertised to do.
Our documentation describes four allowed input syntaxes for circles,
but the regression tests tried only three ... with predictable
consequences.  Remarkably, this has been wrong since the circle
datatype was added in 1997, but nobody noticed till now.
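
For reference, a hedged sketch of the four documented syntaxes:

  SELECT '<(1,2),3>'::circle;
  SELECT '((1,2),3)'::circle;
  SELECT '(1,2),3'::circle;   -- the previously untested (and broken) one
  SELECT '1,2,3'::circle;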

David Zhang, with some help from me

Discussion: https://postgr.es/m/332c47fa-d951-7574-b5cc-a8f7f7201202@highgo.ca
2020-04-07 20:50:28 -04:00
Alvaro Herrera c655077639
Allow users to limit storage reserved by replication slots
Replication slots are useful to retain data that may be needed by a
replication system.  But experience has shown that allowing them to
retain excessive data can lead to the primary failing because of running
out of space.  This new feature allows the user to configure a maximum
amount of space to be reserved using the new option
max_slot_wal_keep_size.  Slots that overrun that space are invalidated
at checkpoint time, enabling the storage to be released.
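
A hedged configuration sketch:

  ALTER SYSTEM SET max_slot_wal_keep_size = '10GB';
  SELECT pg_reload_conf();
  -- slots whose restart_lsn falls more than that far behind are
  -- invalidated at the next checkpoint, releasing the reserved WAL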

Author: Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>
Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
Reviewed-by: Jehan-Guillaume de Rorthais <jgdr@dalibo.com>
Reviewed-by: Álvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://postgr.es/m/20170228.122736.123383594.horiguchi.kyotaro@lab.ntt.co.jp
2020-04-07 18:35:00 -04:00
Tom Lane b63c293bcb Allow psql's \g and \gx commands to transiently change \pset options.
We invented \gx to allow the "\pset expanded" flag to be forced on
for the duration of one command output, but that turns out to not
be nearly enough to satisfy the demand for variant output formats.
Hence, make it possible to change any pset option(s) for the duration
of a single command output, by writing "option=value ..." inside
parentheses, for example
	\g (format=csv csv_fieldsep='\t') somefile

\gx can now be understood as a shorthand for including expanded=on
inside the parentheses.
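
A couple of hedged usage examples:

  SELECT 1 AS a, 2 AS b \g (format=csv)
  SELECT 1 AS a, 2 AS b \g (expanded=on)   -- same effect as \gx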

Patch by me, expanding on a proposal by Pavel Stehule

Discussion: https://postgr.es/m/CAFj8pRBx9OnBPRJVtfA5ycUpySge-XootAXAsv_4rrkHxJ8eRg@mail.gmail.com
2020-04-07 17:46:29 -04:00
Alvaro Herrera 357889eb17
Support FETCH FIRST WITH TIES
WITH TIES is an option to the FETCH FIRST N ROWS clause (the SQL
standard's spelling of LIMIT), where you additionally get rows that
compare equal to the last of those N rows by the columns in the
mandatory ORDER BY clause.
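
A hedged usage sketch, assuming a hypothetical sales table:

  SELECT * FROM sales
    ORDER BY amount DESC
    FETCH FIRST 3 ROWS WITH TIES;
  -- returns at least 3 rows, plus any further rows that tie with the
  -- third on amount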

There was a proposal by Andrew Gierth to implement this functionality in
a more powerful way that would yield more features, but the other patch
had not been finished at this time, so we decided to use this one for
now in the spirit of incremental development.

Author: Surafel Temesgen <surafel3000@gmail.com>
Reviewed-by: Álvaro Herrera <alvherre@alvh.no-ip.org>
Reviewed-by: Tomas Vondra <tomas.vondra@2ndquadrant.com>
Discussion: https://postgr.es/m/CALAY4q9ky7rD_A4vf=FVQvCGngm3LOes-ky0J6euMrg=_Se+ag@mail.gmail.com
Discussion: https://postgr.es/m/87o8wvz253.fsf@news-spur.riddles.org.uk
2020-04-07 16:22:13 -04:00
Tom Lane 26a944cf29 Adjust bytea get_bit/set_bit to use int8 not int4 for bit numbering.
Since the existing bit number argument can't exceed INT32_MAX, it's
not possible for these functions to manipulate bits beyond the first
256MB of a bytea value.  Lift that restriction by redeclaring the
bit number arguments as int8 (which requires a catversion bump,
hence is not back-patchable).
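
A hedged usage sketch; the bit-number argument is now declared bigint:

  SELECT set_bit('\x00'::bytea, 0, 1);  -- \x01
  SELECT get_bit('\x01'::bytea, 0);     -- 1
  -- with a sufficiently large bytea, the bit number may now exceed
  -- 2^31 - 1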

The similarly-named functions for bit/varbit don't really have a
problem because we restrict those types to at most VARBITMAXLEN bits;
hence leave them alone.

While here, extend the encode/decode functions in utils/adt/encode.c
to allow dealing with values wider than 1GB.  This is not a live bug
or restriction in current usage, because no input could be more than
1GB, and since none of the encoders can expand a string more than 4X,
the result size couldn't overflow uint32.  But it might be desirable
to support more in future, so make the input length values size_t
and the potential-output-length values uint64.

Also add some test cases to improve the miserable code coverage
of these functions.

Movead Li, editorialized some by me; also reviewed by Ashutosh Bapat

Discussion: https://postgr.es/m/20200312115135445367128@highgo.ca
2020-04-07 15:57:58 -04:00
Tomas Vondra d22782a539 Minor improvements in Incremental Sort explain
Some places still used "Maximum" instead of "Peak" when displaying info
about sort space, so fix that. Also, add a comment clarifying why it's
correct to check the number of full/prefix sort groups.

Author: James Coleman
Discussion: https://postgr.es/m/CAPpHfds1waRZ=NOmueYq0sx1ZSCnt+5QJvizT8ndT2=etZEeAQ@mail.gmail.com
2020-04-07 18:25:13 +02:00
Tomas Vondra ba3e76cc57 Consider Incremental Sort paths at additional places
Commit d2d8a229bc introduced Incremental Sort, but it was considered
only in create_ordered_paths() as an alternative to regular Sort. There
are many other places that require sorted input and might benefit from
considering Incremental Sort too.

This patch modifies a number of those places, but not all. The concern
is that just adding Incremental Sort to any place that already adds
Sort may increase the number of paths considered, negatively affecting
planning time, without any benefit. So we've taken a more conservative
approach, based on analysis of which places do affect a set of queries
that did seem practical. This means some less common queries may not
benefit from Incremental Sort yet.

Author: Tomas Vondra
Reviewed-by: James Coleman
Discussion: https://postgr.es/m/CAPpHfds1waRZ=NOmueYq0sx1ZSCnt+5QJvizT8ndT2=etZEeAQ@mail.gmail.com
2020-04-07 16:43:22 +02:00
Thomas Munro 4c04be9b05 Introduce xid8-based functions to replace txid_XXX.
The txid_XXX family of fmgr functions exposes 64 bit transaction IDs to
users as int8.  Now that we have an SQL type xid8 for FullTransactionId,
define a new set of functions including pg_current_xact_id() and
pg_current_snapshot() based on that.  Keep the old functions around too,
for now.

It's a bit sneaky to use the same C functions for both, but since the
binary representation is identical except for the signedness of the
type, and since older functions are the ones using the wrong signedness,
and since we'll presumably drop the older ones after a reasonable period
of time, it seems reasonable to switch to FullTransactionId internally
and share the code for both.
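
A hedged sketch of the new functions alongside their predecessors:

  SELECT pg_current_xact_id();    -- xid8; the txid_current() counterpart
  SELECT pg_current_snapshot();   -- the txid_current_snapshot() counterpart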

Reviewed-by: Fujii Masao <masao.fujii@oss.nttdata.com>
Reviewed-by: Takao Fujii <btfujiitkp@oss.nttdata.com>
Reviewed-by: Yoshikazu Imai <imai.yoshikazu@fujitsu.com>
Reviewed-by: Mark Dilger <mark.dilger@enterprisedb.com>
Discussion: https://postgr.es/m/20190725000636.666m5mad25wfbrri%40alap3.anarazel.de
2020-04-07 12:04:32 +12:00
Thomas Munro aeec457de8 Add SQL type xid8 to expose FullTransactionId to users.
Similar to xid, but 64 bits wide.  This new type is suitable for use in
various system views and administration functions.

Reviewed-by: Fujii Masao <masao.fujii@oss.nttdata.com>
Reviewed-by: Takao Fujii <btfujiitkp@oss.nttdata.com>
Reviewed-by: Yoshikazu Imai <imai.yoshikazu@fujitsu.com>
Reviewed-by: Mark Dilger <mark.dilger@enterprisedb.com>
Discussion: https://postgr.es/m/20190725000636.666m5mad25wfbrri%40alap3.anarazel.de
2020-04-07 12:03:59 +12:00
Tomas Vondra 23ba3b5ee2 Fix failures in incremental_sort due to number of workers
The last test in the incremental_sort suite prints a parallel plan, but some
of the buildfarm animals have custom max_parallel_workers_per_gather
values, causing failures. Fixed by setting the GUC to an explicit value.

Discussion: https://postgr.es/m/CAPpHfds1waRZ=NOmueYq0sx1ZSCnt+5QJvizT8ndT2=etZEeAQ@mail.gmail.com
2020-04-07 00:02:07 +02:00
Tomas Vondra d2d8a229bc Implement Incremental Sort
Incremental Sort is an optimized variant of multikey sort for cases when
the input is already sorted by a prefix of the requested sort keys. For
example when the relation is already sorted by (key1, key2) and we need
to sort it by (key1, key2, key3) we can simply split the input rows into
groups having equal values in (key1, key2), and only sort/compare the
remaining column key3.
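
A hedged sketch, assuming t already has an index providing order on
(key1, key2):

  EXPLAIN SELECT * FROM t ORDER BY key1, key2, key3;
  -- may show:
  --   Incremental Sort
  --     Sort Key: key1, key2, key3
  --     Presorted Key: key1, key2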

This has a number of benefits:

- Reduced memory consumption, because only a single group (determined by
  values in the sorted prefix) needs to be kept in memory. This may also
  eliminate the need to spill to disk.

- Lower startup cost, because Incremental Sort produces results after each
  prefix group, which is beneficial for plans where startup cost matters
  (for example, queries with a LIMIT clause).

We consider both Sort and Incremental Sort, and decide based on costing.

The implemented algorithm operates in two different modes:

- Fetching a minimum number of tuples without checking equality on the
  prefix keys, and sorting on all columns when safe.

- Fetching all tuples for a single prefix group and then sorting by
  comparing only the remaining (non-prefix) keys.

We always start in the first mode, and employ a heuristic to switch into
the second mode if we believe it's beneficial - the goal is to minimize
the number of unnecessary comparisons while keeping memory consumption
below work_mem.

This is a very old patch series. The idea was originally proposed by
Alexander Korotkov back in 2013, and then revived in 2017. In 2018 the
patch was taken over by James Coleman, who wrote and rewrote most of the
current code.

There were many reviewers/contributors since 2013 - I've done my best to
pick the most active ones, and listed them in this commit message.

Author: James Coleman, Alexander Korotkov
Reviewed-by: Tomas Vondra, Andreas Karlsson, Marti Raudsepp, Peter Geoghegan, Robert Haas, Thomas Munro, Antonin Houska, Andres Freund, Alexander Kuzmenkov
Discussion: https://postgr.es/m/CAPpHfdscOX5an71nHd8WSUH6GNOCf=V7wgDaTXdDd9=goN-gfA@mail.gmail.com
Discussion: https://postgr.es/m/CAPpHfds1waRZ=NOmueYq0sx1ZSCnt+5QJvizT8ndT2=etZEeAQ@mail.gmail.com
2020-04-06 21:35:10 +02:00
Tom Lane 3c8553547b Re-stabilize infinite_recurse() test case.
Since commit 8f59f6b9c0, CLOBBER_CACHE_ALWAYS buildfarm members have
been failing this test case because the error message now sometimes
includes an error cursor position.  It seems largely just luck that
that never happened before, and there are likely to be more ways it
could happen in future.  Hence, rather than trying to prevent it,
adjust the test script to suppress that component of the report.

At some point we might need to back-patch this, but refrain until
there's a demonstrated need.  (We'd need a different fix before v12,
anyway, since VERBOSITY=sqlstate is a recent thing.)

Tom Lane and Andres Freund

Discussion: https://postgr.es/m/30675.1586111599@sss.pgh.pa.us
2020-04-06 12:00:37 -04:00
Michael Paquier a40caf5f86 Preserve clustered index after rewrites with ALTER TABLE
A table rewritten by ALTER TABLE would lose tracking of an index usable
for CLUSTER.  This setting is tracked by pg_index.indisclustered and is
controlled by ALTER TABLE, so some extra work was needed to restore it
properly.  Note that ALTER TABLE only marks the index that can be used
for clustering, and does not do the actual operation.
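
A hedged sketch of the preserved state:

  CLUSTER tab USING tab_pkey;
  ALTER TABLE tab ALTER COLUMN a TYPE bigint;  -- forces a table rewrite
  SELECT indisclustered FROM pg_index
    WHERE indexrelid = 'tab_pkey'::regclass;   -- now still true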

Author: Amit Langote, Justin Pryzby
Reviewed-by: Ibrar Ahmed, Michael Paquier
Discussion: https://postgr.es/m/20200202161718.GI13621@telsasoft.com
Backpatch-through: 9.5
2020-04-06 11:03:49 +09:00
Tom Lane 18d85e9b8a Further improve stability fix for partition_aggregate test.
Commit 7cb0a423f overlooked that the multi-level partition test table
pagg_tab_ml still had an exactly even row split at its upper level of
partitioning, so that some of the sub-aggregation plan steps still had
exactly equal costs, leading to plan instability.  Tweak the partition
boundaries some more to make the row distribution unequal at both
levels.  This leads to more changes in the "expected" plan order than
the previous round, but it seems fine.  (Actually, I'm surprised that
this didn't affect even more plans in this test: looking at the
underlying costs shows that some of the parallel plan groups are
*not* getting sorted by cost.  Bug?)

Per buildfarm member lousyjack,
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=lousyjack&dt=2020-04-04%2021%3A03%3A04

Discussion: https://postgr.es/m/24467.1585838693@sss.pgh.pa.us
2020-04-05 00:53:28 -04:00
Tom Lane 07871d40c7 Remove bogus Assert, add some regression test cases showing why.
Commit 77ec5affb added an assertion to enforce_generic_type_consistency
that boils down to "if the function result is polymorphic, there must be
at least one polymorphic argument".  This should be true for user-created
functions, but there are built-in functions for which it's not true, as
pointed out by Jaime Casanova.  Hence, go back to the old behavior of
leaving the return type alone.  There's only a limited amount of stuff
you can do with such a function result, but it does work to some extent;
add some regression test cases to ensure we don't break that again.

Discussion: https://postgr.es/m/CAJGNTeMbhtsCUZgJJ8h8XxAJbK7U2ipsX8wkHRtZRz-NieT8RA@mail.gmail.com
2020-04-04 18:03:30 -04:00
Noah Misch c6b92041d3 Skip WAL for new relfilenodes, under wal_level=minimal.
Until now, only selected bulk operations (e.g. COPY) did this.  If a
given relfilenode received both a WAL-skipping COPY and a WAL-logged
operation (e.g. INSERT), recovery could lose tuples from the COPY.  See
src/backend/access/transam/README section "Skipping WAL for New
RelFileNode" for the new coding rules.  Maintainers of table access
methods should examine that section.

To maintain data durability, just before commit, we choose between an
fsync of the relfilenode and copying its contents to WAL.  A new GUC,
wal_skip_threshold, guides that choice.  If this change slows a workload
that creates small, permanent relfilenodes under wal_level=minimal, try
adjusting wal_skip_threshold.  Users setting a timeout on COMMIT may
need to adjust that timeout, and log_min_duration_statement analysis
will reflect time consumption moving to COMMIT from commands like COPY.
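
A hedged tuning sketch (wal_level=minimal also requires
max_wal_senders=0):

  ALTER SYSTEM SET wal_level = minimal;
  ALTER SYSTEM SET max_wal_senders = 0;
  -- data smaller than the threshold is copied to WAL at commit;
  -- larger relfilenodes are fsynced instead
  ALTER SYSTEM SET wal_skip_threshold = '8MB';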

Internally, this requires a reliable determination of whether
RollbackAndReleaseCurrentSubTransaction() would unlink a relation's
current relfilenode.  Introduce rd_firstRelfilenodeSubid.  Amend the
specification of rd_createSubid such that the field is zero when a new
rel has an old rd_node.  Make relcache.c retain entries for certain
dropped relations until end of transaction.

Bump XLOG_PAGE_MAGIC, since this introduces XLOG_GIST_ASSIGN_LSN.
Future servers accept older WAL, so this bump is discretionary.

Kyotaro Horiguchi, reviewed (in earlier, similar versions) by Robert
Haas.  Heikki Linnakangas and Michael Paquier implemented earlier
designs that materially clarified the problem.  Reviewed, in earlier
designs, by Andrew Dunstan, Andres Freund, Alvaro Herrera, Tom Lane,
Fujii Masao, and Simon Riggs.  Reported by Martijn van Oosterhout.

Discussion: https://postgr.es/m/20150702220524.GA9392@svana.org
2020-04-04 12:25:34 -07:00
Fujii Masao ce77abe63c Include information on buffer usage during planning phase, in EXPLAIN output, take two.
When the BUFFERS option is enabled, the EXPLAIN command includes
information on buffer usage for each plan node in its output. In addition
to that, this commit makes the EXPLAIN command also include information
on buffer usage during the planning phase. This feature makes it easier
to discern the cases where lots of buffer accesses happen during
planning.
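
A hedged usage sketch:

  EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM tab;
  -- the output now carries a "Planning:" section with its own
  -- "Buffers: ..." line when planning touched any buffers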

This commit revives the original commit ed7a509571 that was reverted by
commit 19db23bcbd. The original commit had to be reverted because
it caused a regression test failure on the buildfarm members prion and
dory. But since commit c0885c4c30 got rid of the cause of the test
failure, the original commit can be safely introduced again.

Author: Julien Rouhaud, slightly revised by Fujii Masao
Reviewed-by: Justin Pryzby
Discussion: https://postgr.es/m/16109-26a1a88651e90608@postgresql.org
2020-04-04 03:13:17 +09:00
Tom Lane e41955faf0 Fix bugs in gin_fuzzy_search_limit processing.
entryGetItem()'s three code paths each contained bugs associated
with filtering the entries for gin_fuzzy_search_limit.

The posting-tree path failed to advance "advancePast" after having
decided to filter an item.  If we ran out of items on the current
page and needed to advance to the next, what would actually happen
is that entryLoadMoreItems() would re-load the same page.  Eventually,
the random dropItem() test would accept one of the same items it'd
previously rejected, and we'd move on --- but it could take awhile
with small gin_fuzzy_search_limit.  To add insult to injury, this
case would inevitably cause entryLoadMoreItems() to decide it needed
to re-descend from the root, making things even slower.
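
For reference, a hedged sketch of the GUC's use, assuming a docs table
with a GIN-indexed tsvector column tsv:

  SET gin_fuzzy_search_limit = 100;
  SELECT count(*) FROM docs WHERE tsv @@ to_tsquery('word');
  -- the index returns roughly that many randomly sampled matches rather
  -- than all of them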

The posting-list path failed to implement gin_fuzzy_search_limit
filtering at all, so that all entries in the posting list would
be returned.

The bitmap-result path used a "gotitem" variable that it failed to
update in the one place where it'd actually make a difference, ie
at the one "continue" statement.  I think this was unreachable in
practice, because if we'd looped around then it shouldn't be the
case that the entries on the new page are before advancePast.
Still, the "gotitem" variable was contributing nothing to either
clarity or correctness, so get rid of it.

Refactor all three loops so that the termination conditions are
more alike and less unreadable.

The code coverage report showed that we had no coverage at all for
the re-descend-from-root code path in entryLoadMoreItems(), which
seems like a very bad thing, so add a test case that exercises it.
We also had exactly no coverage for gin_fuzzy_search_limit, so add a
simplistic test case that at least hits those code paths a little bit.

Back-patch to all supported branches.

Adé Heyward and Tom Lane

Discussion: https://postgr.es/m/CAEknJCdS-dE1Heddptm7ay2xTbSeADbkaQ8bU2AXRCVC2LdtKQ@mail.gmail.com
2020-04-03 13:15:45 -04:00
Fujii Masao c0885c4c30 Improve stability of explain regression test.
The explain regression test runs EXPLAIN commands via a function
that filters unstable outputs. To produce more stable test output,
this commit improves the function so that it also filters out text-mode
Buffers lines. This is necessary because text-mode Buffers lines vary
depending on the system state.

This improvement will get rid of the regression test failure that
commit ed7a509571 caused on the buildfarm members prion and
dory because of the instability of Buffers lines.

Author: Fujii Masao
Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/20200403025751.GB1759@paquier.xyz
2020-04-04 01:26:39 +09:00
Fujii Masao 19db23bcbd Revert "Include information on buffer usage during planning phase, in EXPLAIN output."
This reverts commit ed7a509571.

Per buildfarm member prion.
2020-04-03 12:20:42 +09:00
Fujii Masao ed7a509571 Include information on buffer usage during planning phase, in EXPLAIN output.
When the BUFFERS option is enabled, the EXPLAIN command includes
information on buffer usage for each plan node in its output. In addition
to that, this commit makes the EXPLAIN command also include information
on buffer usage during the planning phase. This feature makes it easier
to discern the cases where lots of buffer accesses happen during
planning.

Author: Julien Rouhaud, slightly revised by Fujii Masao
Reviewed-by: Justin Pryzby
Discussion: https://postgr.es/m/16109-26a1a88651e90608@postgresql.org
2020-04-03 11:27:09 +09:00
Tom Lane 7cb0a423f9 Improve stability fix for partition_aggregate test.
Instead of disabling autovacuum on these test tables, adjust the
partition boundaries so that the child partitions are not all the
same size.  That should cause the planner to use a predictable
ordering of the per-partition scan nodes even in cases where
autovacuum causes the rowcount estimates to be off a bit.
Moreover, this also lets these tests show that the planner does
properly order the tables in descending size order, something
that wasn't being proven before.

The pagg_tab1 and pagg_tab2 partitions are still all the same
size, but that should be fine, because those tables are so small
that (1) autovacuum won't fire on them, and (2) even if it did,
it couldn't change the reltuples value --- with only one page,
it can't see just part of the relation.

Discussion: https://postgr.es/m/24467.1585838693@sss.pgh.pa.us
2020-04-02 19:43:51 -04:00
David Rowley cefb82d49e Attempt to stabilize partitionwise_aggregate test
In b07642dbc, we added code to trigger autovacuums based on the number of
INSERTs into a table. This seems to have caused some destabilization of
the regression tests. Likely this is due to an autovacuum triggering
mid-test and (per theory from Tom Lane) one of the test's queries causing
autovacuum to skip some number of pages, resulting in the reltuples
estimate changing.

The failure that this is attempting to fix is around the order of subnodes
in an Append. Since the planner orders these according to the subnode
cost, then it's possible that a small change in the reltuples value changes
the subnode's cost enough that it swaps position with one of its fellow
subnodes.

The failure here only seems to occur on slower buildfarm machines. In this
case it was lousyjack, which seems to have taken over 8 minutes to run just
the partitionwise_aggregate test. Such a slow run would increase the
chances that the autovacuum launcher would trigger a vacuum mid-test.
Faster machines run this test in sub-second time, so they have a much
smaller window for an autovacuum to trigger.

Here we fix this by disabling autovacuum on all tables created in the test.

Additionally, this reverts the change made in the
partitionwise_aggregate test in 2dc16efed.

Discussion: https://postgr.es/m/22297.1585797192@sss.pgh.pa.us
2020-04-02 21:26:54 +13:00
Peter Eisentraut 2991ac5fc9 Add SQL functions for Unicode normalization
This adds SQL expressions NORMALIZE() and IS NORMALIZED to convert and
check Unicode normal forms, per SQL standard.
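
Hedged usage examples, assuming a UTF-8 database:

  SELECT normalize(U&'\0061\0301', NFC);  -- combining sequence composed
  SELECT U&'\00E1' IS NFC NORMALIZED;     -- true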

To support fast IS NORMALIZED tests, we pull in a new data file
DerivedNormalizationProps.txt from Unicode and build a lookup table
from that, using techniques similar to ones already used for other
Unicode data.  make update-unicode will keep it up to date.  We only
build and use these tables for the NFC and NFKC forms, because they
are too big for NFD and NFKD and the improvement is not significant
enough there.

Reviewed-by: Daniel Verite <daniel@manitou-mail.org>
Reviewed-by: Andreas Karlsson <andreas@proxel.se>
Discussion: https://www.postgresql.org/message-id/flat/c1909f27-c269-2ed9-12f8-3ab72c8caf7a@2ndquadrant.com
2020-04-02 08:56:27 +02:00
Tomas Vondra 28cac71bd3 Collect statistics about SLRU caches
There's a number of SLRU caches used to access important data like clog,
commit timestamps, multixact, asynchronous notifications, etc. Until now
we had no easy way to monitor these shared caches, compute hit ratios,
number of reads/writes etc.

This commit extends the statistics collector to track this information
for a predefined list of SLRUs, and also introduces a new system view
pg_stat_slru displaying the data.
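
A hedged usage sketch of the new view:

  SELECT name, blks_hit, blks_read, blks_written
    FROM pg_stat_slru;
  -- one row per built-in SLRU, plus one covering any extension-defined
  -- SLRUs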

The list of built-in SLRUs is fixed, but additional SLRUs may be defined
in extensions. Unfortunately, there's no suitable registry of SLRUs, so
this patch simply defines a fixed list of SLRUs with entries for the
built-in ones and one entry for all additional SLRUs. Extensions adding
their own SLRU are fairly rare, so this seems acceptable.

This patch only allows monitoring of SLRUs, not tuning. The SLRU sizes
are still fixed (hard-coded in the code) and it's not entirely clear
which of the SLRUs might need a GUC to tune size. In a way, allowing us
to determine that is one of the goals of this patch.

Bump catversion as the patch introduces new functions and system view.

Author: Tomas Vondra
Reviewed-by: Alvaro Herrera
Discussion: https://www.postgresql.org/message-id/flat/20200119143707.gyinppnigokesjok@development
2020-04-02 02:34:21 +02:00
Tom Lane 0936d1b6ff Still another try at stabilizing stats_ext test results.
The stats_ext test is not expecting that autovacuum will touch
any of its tables; an expectation falsified by commit b07642dbc.
Although I'm suspicious that there's something else going on that
makes extended stats estimates not 100% reproducible, it's pretty
easy to demonstrate that there are places in this test that fail
if an autovacuum updates the table's stats unexpectedly.

Hence, revert the band-aid changes made by 2dc16efed and 24566b359
in favor of summarily disabling autovacuum for all the tables that
this test checks estimated rowcounts for.

Also remove an evidently obsolete comment at the head of the test.

Discussion: https://postgr.es/m/15012.1585623298@sss.pgh.pa.us
2020-03-31 16:09:25 -04:00
Alexander Korotkov 02a5786df2 Improve error reporting in opclasscmds.c
This commit improves error reporting introduced by 911e702077.  It puts
the argument of errmsg() on a single line for easier grepping of the
source for error text.  Also, it improves the wording of errhint().
2020-03-31 17:51:57 +03:00
Peter Eisentraut de3bbfcc96 Fix INSERT OVERRIDING USER VALUE behavior
The original implementation disallowed using OVERRIDING USER VALUE on
identity columns defined as GENERATED ALWAYS, which is not per
standard.  So allow that now.
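
A hedged sketch of the now-permitted usage:

  CREATE TABLE t (id int GENERATED ALWAYS AS IDENTITY, v text);
  INSERT INTO t (id, v) OVERRIDING USER VALUE VALUES (123, 'x');
  -- the supplied 123 is discarded; the identity sequence supplies id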

Expand documentation and tests around this.

Author: Dean Rasheed <dean.a.rasheed@gmail.com>
Reviewed-by: Peter Eisentraut <peter.eisentraut@2ndquadrant.com>
Reviewed-by: Vik Fearing <vik@postgresfriends.org>
Discussion: https://www.postgresql.org/message-id/flat/CAEZATCVrh2ufCwmzzM%3Dk_OfuLhTTPBJCdFkimst2kry4oHepuQ%40mail.gmail.com
2020-03-31 08:50:39 +02:00
Alexander Korotkov 911e702077 Implement operator class parameters
PostgreSQL provides a set of template index access methods, where opclasses
have much freedom in the semantics of indexing.  These index AMs are GiST,
GIN, SP-GiST and BRIN.  There, opclasses define the representation of keys,
operations on them, and the supported search strategies.  So it's natural
that opclasses may face some tradeoffs, which require a user-side decision.
This commit implements opclass parameters, allowing users to set some
values that tell an opclass how to index the particular dataset.

This commit doesn't introduce new storage in system catalog.  Instead it uses
pg_attribute.attoptions, which is used for table column storage options but
unused for index attributes.

In order to avoid changing the signature of each opclass support function,
we implement a unified way to pass options to opclass support functions.
Options are set in fn_expr as a constant bytea expression.  This is
possible because opclass support functions are executed outside of
expressions, so fn_expr is otherwise unused for them.

This commit comes with some examples of opclass options usage.  We parametrize
signature length in GiST.  That applies to multiple opclasses: tsvector_ops,
gist__intbig_ops, gist_ltree_ops, gist__ltree_ops, gist_trgm_ops and
gist_hstore_ops.  Also we parametrize maximum number of integer ranges for
gist__int_ops.  However, the main future usage of this feature is expected
to be json, where users would be able to specify which way to index particular
json parts.
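
A hedged example of the GiST signature-length parameter mentioned above,
assuming a docs table with a tsvector column tsv:

  CREATE INDEX docs_tsv_idx ON docs
    USING gist (tsv tsvector_ops (siglen = 256));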

Catversion is bumped.

Discussion: https://postgr.es/m/d22c3a18-31c7-1879-fc11-4c1ce2f5e5af%40postgrespro.ru
Author: Nikita Glukhov, revised by me
Reviewed-by: Nikolay Shaplov, Robert Haas, Tom Lane, Tomas Vondra, Alvaro Herrera
2020-03-30 19:17:23 +03:00
David Rowley 24566b359d Attempt to fix unstable regression tests, take 2
Following up on 2dc16efed, petalura has suffered some additional
failures in stats_ext which again appear to be around the timing of an
autovacuum during the test, causing instability in the row estimates.

Again, let's fix this by explicitly performing a VACUUM on the table
and not leave it to happen by chance of an autovacuum pass.

Discussion: https://postgr.es/m/CAApHDvok5hmXr%2BbUbJe7%2B2sQzWo4B_QzSk7RKFR9fP6BjYXx5g%40mail.gmail.com
2020-03-30 23:41:11 +13:00
David Rowley 2dc16efedc Attempt to fix unstable regression tests
b07642dbc added code to trigger autovacuums based on the number of
inserts into a table. This seems to have caused some regression test
results to destabilize. I suspect this is due to autovacuum triggering a
vacuum sometime after the test's ANALYZE run and perhaps reltuples is
ending up being set to a slightly different value as a result.

Attempt to resolve this by running a VACUUM ANALYZE on the affected table
instead of just ANALYZE. pg_class.reltuples will still get set to whatever
ANALYZE chooses, but we should no longer get a subsequent autovacuum
overriding that.

The overhead this adds to each test's runtime seems small enough not to
worry about. I measure 3-4% on stats_ext and can't measure any change in
partition_aggregate.

I'm unable to recreate the issue locally, so this is a bit of a blind
fix.

Discussion: https://postgr.es/m/CAApHDvpWmpqYrKwwDQyeDq8dAyK7GMNaxDhrG69CkSuXoEg%2BVg%40mail.gmail.com
2020-03-29 19:36:20 +13:00
Dean Rasheed 4083f445c0 Improve the performance and accuracy of numeric sqrt() and ln().
Instead of using Newton's method to compute numeric square roots, use
the Karatsuba square root algorithm, which performs better for numbers
of all sizes. In practice, this is 3-5 times faster for inputs with
just a few digits and up to around 10 times faster for larger inputs.

Also, the new algorithm guarantees that the final digit of the result
is correctly rounded, since it computes an integer square root with
truncation, containing at least 1 extra decimal digit before rounding.
The former algorithm would occasionally round the wrong way because
it rounded both the intermediate and final results.

In addition, arrange for sqrt_var() to explicitly support negative
rscale values (rounding before the decimal point). This allows the
argument reduction phase of ln_var() to be optimised for large inputs,
since it only needs to compute square roots with a few more digits
than the final ln() result, rather than computing all the digits
before the decimal point. For very large inputs, this can be many
thousands of times faster.

In passing, optimise div_var_fast() in a couple of places where it was
doing unnecessary work.

Patch by me, reviewed by Tom Lane and Tels.

Discussion: https://postgr.es/m/CAEZATCV1A7+jD3P30Zu31KjaxeSEyOn3v9d6tYegpxcq3cQu-g@mail.gmail.com
2020-03-28 14:37:53 +00:00
Dean Rasheed 87779aa474 Prevent functional dependency estimates from exceeding column estimates.
Formerly we applied a functional dependency "a => b with dependency
degree f" using the formula

  P(a,b) = P(a) * [f + (1-f)*P(b)]

This leads to the possibility that the combined selectivity P(a,b)
could exceed P(b), which is not ideal. The addition of support for IN
and OR clauses (commits 8f321bd16c and ccaa3569f5) would seem to make
this more likely, since the user-supplied values in such clauses are
not necessarily compatible with the functional dependency.

Mitigate this by using the formula

  P(a,b) = f * Min(P(a), P(b)) + (1-f) * P(a) * P(b)

instead, which guarantees that the combined selectivity is less than
each column's individual selectivity. Logically, this modifies the
part of the formula that accounts for dependent rows to handle cases
where P(a) > P(b), whilst not changing the second term which accounts
for independent rows.
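
A worked example with hypothetical numbers: for f = 0.5, P(a) = 0.6 and
P(b) = 0.1, the old formula gives 0.6 * [0.5 + 0.5 * 0.1] = 0.33,
exceeding P(b) = 0.1, while the new formula gives
0.5 * Min(0.6, 0.1) + 0.5 * 0.6 * 0.1 = 0.08, which does not.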

Additionally, this refactors the way that functional dependencies are
applied, so now dependencies_clauselist_selectivity() estimates both
the implying clauses and the implied clauses for each functional
dependency (formerly only the implied clauses were estimated), and now
all clauses for each attribute are taken into account (formerly only
one clause for each implied attribute was estimated). This removes the
previously built-in assumption that only equality clauses will be
seen, which is no longer true, and opens up the possibility of
applying functional dependencies to more general clauses.

Patch by me, reviewed by Tomas Vondra.

Discussion: https://postgr.es/m/CAEZATCXaNFZyOhR4XXAfkvj1tibRBEjje6ZbXwqWUB_tqbH%3Drw%40mail.gmail.com
Discussion: https://postgr.es/m/20200318002946.6dvblukm3cfmgir2%40development
2020-03-28 12:48:34 +00:00
David Rowley b07642dbcd Trigger autovacuum based on number of INSERTs
Traditionally autovacuum has only ever invoked a worker based on the
estimated number of dead tuples in a table and for anti-wraparound
purposes. For the latter, with certain classes of tables such as
insert-only tables, anti-wraparound vacuums could be the first vacuum that
the table ever receives. This could often lead to autovacuum workers being
busy for extended periods of time due to having to potentially freeze
every page in the table. This could be particularly bad for very large
tables. New clusters, or recently pg_restored clusters could suffer even
more as many large tables may have the same relfrozenxid, which could
result in large numbers of tables requiring an anti-wraparound vacuum all
at once.

Here we aim to reduce the work required by anti-wraparound and aggressive
vacuums in general, by triggering autovacuum when the table has received
enough INSERTs. This is controlled by adding two new GUCs and reloptions;
autovacuum_vacuum_insert_threshold and
autovacuum_vacuum_insert_scale_factor. These work exactly the same as the
existing scale factor and threshold controls, only they base themselves on the
number of inserts since the last vacuum, rather than the number of dead
tuples. New controls were added rather than reusing the existing
controls, to allow these new vacuums to be tuned independently and perhaps
even completely disabled altogether, which can be done by setting
autovacuum_vacuum_insert_threshold to -1.
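
A hedged configuration sketch:

  ALTER SYSTEM SET autovacuum_vacuum_insert_threshold = 1000;
  ALTER SYSTEM SET autovacuum_vacuum_insert_scale_factor = 0.2;
  -- or per table, including disabling insert-driven vacuums entirely:
  ALTER TABLE queue_tab SET (autovacuum_vacuum_insert_threshold = -1);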

We make no attempt to skip index cleanup operations on these vacuums as
they may trigger for an insert-mostly table which continually doesn't have
enough dead tuples to trigger an autovacuum for the purpose of removing
those dead tuples. If we were to skip cleaning the indexes in this case,
then it is possible for the index(es) to become bloated over time.

There are additional benefits to triggering autovacuums based on inserts,
as tables which never contain enough dead tuples to trigger an autovacuum
are now more likely to receive a vacuum, which can mark more of the table
as "allvisible" and encourage the query planner to make use of Index Only
Scans.

Currently, we still obey vacuum_freeze_min_age when triggering these new
autovacuums based on INSERTs. For large insert-only tables, it may be
beneficial to lower the table's autovacuum_freeze_min_age so that tuples
are eligible to be frozen sooner. Here we've opted not to zero that for
these types of vacuums, since the table may just be insert-mostly and we
may otherwise freeze tuples that are still destined to be updated or
removed in the near future.

There was some debate to what exactly the new scale factor and threshold
should default to. For now, these are set to 0.2 and 1000, respectively.
There may be some motivation to adjust these before the release.

Author: Laurenz Albe, Darafei Praliaskouski
Reviewed-by: Alvaro Herrera, Masahiko Sawada, Chris Travers, Andres Freund, Justin Pryzby
Discussion: https://postgr.es/m/CAC8Q8t%2Bj36G_bLF%3D%2B0iMo6jGNWnLnWb1tujXuJr-%2Bx8ZCCTqoQ%40mail.gmail.com
2020-03-28 19:20:12 +13:00
Jeff Davis 76df765e88 Reduce test time for disk-based Hash Aggregation.
Discussion: https://postgr.es/m/23196.1584943506@sss.pgh.pa.us
2020-03-23 19:03:49 -07:00
Fujii Masao 67e0adfb3f Report NULL as total backup size if it's not estimated.
Previously 0 was reported in pg_stat_progress_basebackup.total_backup
if the total backup size was not estimated. Per discussion, our consensus
is that NULL is a better choice as the value of total_backup in that case.
So this commit makes the pg_stat_progress_basebackup view report NULL
in the total_backup column if the estimation is disabled.

Bump catversion.

Author: Fujii Masao
Reviewed-by: Amit Langote, Magnus Hagander, Alvaro Herrera
Discussion: https://postgr.es/m/CABUevExnhOD89zBDuPvfAAh243RzNpwCPEWNLtMYpKHMB8gbAQ@mail.gmail.com
2020-03-24 10:43:41 +09:00
Andres Freund f801ceb696 Add regression tests for constraint errors in partitioned tables.
While #16293 only applied to 11 (and 10 to some degree), it seems best
to add tests to all branches with partitioning support.

Reported-By: Daniel WM
Author: Andres Freund
Bug: #16293
Discussion: https://postgr.es/m/16293-26f5777d10143a66@postgresql.org
Backpatch: 10-
2020-03-23 15:06:11 -07:00
Noah Misch de9396326e Revert "Skip WAL for new relfilenodes, under wal_level=minimal."
This reverts commit cb2fd7eac2.  Per
numerous buildfarm members, it was incompatible with parallel query, and
a test case assumed LP64.  Back-patch to 9.5 (all supported versions).

Discussion: https://postgr.es/m/20200321224920.GB1763544@rfd.leadboat.com
2020-03-22 09:24:09 -07:00
Noah Misch cb2fd7eac2 Skip WAL for new relfilenodes, under wal_level=minimal.
Until now, only selected bulk operations (e.g. COPY) did this.  If a
given relfilenode received both a WAL-skipping COPY and a WAL-logged
operation (e.g. INSERT), recovery could lose tuples from the COPY.  See
src/backend/access/transam/README section "Skipping WAL for New
RelFileNode" for the new coding rules.  Maintainers of table access
methods should examine that section.

To maintain data durability, just before commit, we choose between an
fsync of the relfilenode and copying its contents to WAL.  A new GUC,
wal_skip_threshold, guides that choice.  If this change slows a workload
that creates small, permanent relfilenodes under wal_level=minimal, try
adjusting wal_skip_threshold.  Users setting a timeout on COMMIT may
need to adjust that timeout, and log_min_duration_statement analysis
will reflect time consumption moving to COMMIT from commands like COPY.

Internally, this requires a reliable determination of whether
RollbackAndReleaseCurrentSubTransaction() would unlink a relation's
current relfilenode.  Introduce rd_firstRelfilenodeSubid.  Amend the
specification of rd_createSubid such that the field is zero when a new
rel has an old rd_node.  Make relcache.c retain entries for certain
dropped relations until end of transaction.

Back-patch to 9.5 (all supported versions).  This introduces a new WAL
record type, XLOG_GIST_ASSIGN_LSN, without bumping XLOG_PAGE_MAGIC.  As
always, update standby systems before master systems.  This changes
sizeof(RelationData) and sizeof(IndexStmt), breaking binary
compatibility for affected extensions.  (The most recent commit to
affect the same class of extensions was
089e4d405d0f3b94c74a2c6a54357a84a681754b.)

Kyotaro Horiguchi, reviewed (in earlier, similar versions) by Robert
Haas.  Heikki Linnakangas and Michael Paquier implemented earlier
designs that materially clarified the problem.  Reviewed, in earlier
designs, by Andrew Dunstan, Andres Freund, Alvaro Herrera, Tom Lane,
Fujii Masao, and Simon Riggs.  Reported by Martijn van Oosterhout.

Discussion: https://postgr.es/m/20150702220524.GA9392@svana.org
2020-03-21 09:38:26 -07:00
Tom Lane 24e2885ee3 Introduce "anycompatible" family of polymorphic types.
This patch adds the pseudo-types anycompatible, anycompatiblearray,
anycompatiblenonarray, and anycompatiblerange.  They work much like
anyelement, anyarray, anynonarray, and anyrange respectively, except
that the actual input values need not match precisely in type.
Instead, if we can find a common supertype (using the same rules
as for UNION/CASE type resolution), then the parser automatically
promotes the input values to that type.  For example,
"myfunc(anycompatible, anycompatible)" can match a call with one
integer and one bigint argument, with the integer automatically
promoted to bigint.  With anyelement in the definition, the user
would have had to cast the integer explicitly.
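
A hedged sketch, using a hypothetical function:

  CREATE FUNCTION larger_of(a anycompatible, b anycompatible)
    RETURNS anycompatible
    LANGUAGE sql AS 'SELECT greatest(a, b)';
  SELECT larger_of(1, 2::bigint);  -- the integer is promoted to bigint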

The new types also provide a second, independent set of type variables
for function matching; thus with "myfunc(anyelement, anyelement,
anycompatible) returns anycompatible" the first two arguments are
constrained to be the same type, but the third can be some other
type, and the result has the type of the third argument.  The need
for more than one set of type variables was foreseen back when we
first invented the polymorphic types, but we never did anything
about it.

Pavel Stehule, revised a bit by me

Discussion: https://postgr.es/m/CAFj8pRDna7VqNi8gR+Tt2Ktmz0cq5G93guc3Sbn_NVPLdXAkqA@mail.gmail.com
2020-03-19 11:43:11 -04:00
Jeff Davis 1f39bce021 Disk-based Hash Aggregation.
While performing hash aggregation, track memory usage when adding new
groups to a hash table. If the memory usage exceeds work_mem, enter
"spill mode".

In spill mode, new groups are not created in the hash table(s), but
existing groups continue to be advanced if input tuples match. Tuples
that would cause a new group to be created are instead spilled to a
logical tape to be processed later.

The tuples are spilled in a partitioned fashion. When all tuples from
the outer plan are processed (either by advancing the group or
spilling the tuple), finalize and emit the groups from the hash
table. Then, create new batches of work from the spilled partitions,
and select one of the saved batches and process it (possibly spilling
recursively).
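
A hedged illustration; exact plans and numbers vary:

  SET work_mem = '4MB';
  EXPLAIN ANALYZE
    SELECT g % 1000000 AS k, count(*)
    FROM generate_series(1, 5000000) g GROUP BY k;
  -- the HashAggregate node may now report batches and disk usage
  -- instead of overrunning work_mem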

Author: Jeff Davis
Reviewed-by: Tomas Vondra, Adam Lee, Justin Pryzby, Taylor Vesely, Melanie Plageman
Discussion: https://postgr.es/m/507ac540ec7c20136364b5272acbcd4574aa76ef.camel@j-davis.com
2020-03-18 15:42:02 -07:00
Alvaro Herrera 487e9861d0
Enable BEFORE row-level triggers for partitioned tables
... with the limitation that the tuple must remain in the same
partition.
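
A hedged usage sketch, assuming a partitioned table parted and a trigger
function tfunc:

  CREATE TRIGGER parted_before
    BEFORE INSERT OR UPDATE ON parted
    FOR EACH ROW EXECUTE FUNCTION tfunc();
  -- now accepted; an UPDATE whose BEFORE trigger would move the row to
  -- another partition raises an error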

Reviewed-by: Ashutosh Bapat
Discussion: https://postgr.es/m/20200227165158.GA2071@alvherre.pgsql
2020-03-18 18:58:05 -03:00
Peter Eisentraut a2b1faa0f2 Implement type regcollation
This will be helpful for a following commit and it's also just
generally useful, like the other reg* types.
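
A hedged usage sketch (reg* input follows identifier quoting rules):

  SELECT '"POSIX"'::regcollation;        -- collation name to OID alias
  SELECT '"POSIX"'::regcollation::oid;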

Author: Julien Rouhaud
Reviewed-by: Thomas Munro and Michael Paquier
Discussion: https://postgr.es/m/CAEepm%3D0uEQCpfq_%2BLYFBdArCe4Ot98t1aR4eYiYTe%3DyavQygiQ%40mail.gmail.com
2020-03-18 21:21:00 +01:00
Tomas Vondra ccaa3569f5 Recognize some OR clauses as compatible with functional dependencies
Since commit 8f321bd16c, functional dependencies can handle IN clauses.
This however introduced a possible (and surprising) inconsistency,
because an IN clause may be expressed as an OR clause, which was still
considered incompatible. For example

  a IN (1, 2, 3)

may be rewritten as

  (a = 1 OR a = 2 OR a = 3)

The IN clause will work fine with functional dependencies, but the OR
clause will force the estimation to fall back to plain per-column
estimates, possibly introducing significant estimation errors.

This commit recognizes OR clauses equivalent to an IN clause (when all
arguments are compatible and reference the same attribute) as a special
case, compatible with functional dependencies. This allows applying
functional dependencies, just like for IN clauses.

This does not eliminate the difference in estimating the clause itself,
i.e. IN clause and OR clause still use different formulas. It would be
possible to change that (for these special OR clauses), but that's not
really about extended statistics - it was always like this. Moreover the
errors are usually much smaller compared to ignoring dependencies.

Author: Tomas Vondra
Reviewed-by: Dean Rasheed
Discussion: https://www.postgresql.org/message-id/flat/13902317.Eha0YfKkKy%40pierred-pdoc
2020-03-18 16:41:49 +01:00
Tom Lane e6c178b5b7 Refactor our checks for valid function and aggregate signatures.
pg_proc.c and pg_aggregate.c had near-duplicate copies of the logic
to decide whether a function or aggregate's signature is legal.
This seems like a bad thing even without the problem that the
upcoming "anycompatible" patch would roughly double the complexity
of that logic.  Hence, refactor so that the rules are localized
in new subroutines supplied by parse_coerce.c.  (One could quibble
about just where to add that code, but putting it beside
enforce_generic_type_consistency seems not totally unreasonable.)

The fact that the rules are about to change would mandate some
changes in the wording of the associated error messages in any case.
I ended up spelling things out in a fairly literal fashion in the
errdetail messages, eg "A result of type anyelement requires at
least one input of type anyelement, anyarray, anynonarray, anyenum,
or anyrange."  Perhaps this is overkill, but once there's more than
one subgroup of polymorphic types, people might get confused by
more-abstract messages.

Discussion: https://postgr.es/m/24137.1584139352@sss.pgh.pa.us
2020-03-17 19:36:41 -04:00
Tom Lane 77ec5affbc Adjust handling of an ANYARRAY actual input for an ANYARRAY argument.
Ordinarily it's impossible for an actual input of a function to have
declared type ANYARRAY, since we'd resolve that to a concrete array
type before doing argument type resolution for the function.  But an
exception arises for functions applied to certain columns of pg_statistic
or pg_stats, since we abuse the "anyarray" pseudotype by using it to
declare those columns.  So parse_coerce.c has to deal with the case.

Previously we allowed an ANYARRAY actual input to match an ANYARRAY
polymorphic argument, but only if no other argument or result was
declared ANYELEMENT.  When that logic was written, those were the only
two polymorphic types, and I fear nobody thought carefully about how it
ought to extend to the other ones that came along later.  But actually
it was wrong even then, because if a function has two ANYARRAY
arguments, it should be able to expect that they have identical element
types, and we'd not be able to ensure that.

The correct generalization is that we can match an ANYARRAY actual input
to an ANYARRAY polymorphic argument only if no other argument or result
is of any polymorphic type, so that no promises are being made about
element type compatibility.  check_generic_type_consistency can't test
that condition, but it seems better anyway to accept such matches there
and then throw an error if needed in enforce_generic_type_consistency.
That way we can produce a specific error message rather than an
unintuitive "function does not exist" complaint.  (There's some risk
perhaps of getting new ambiguous-function complaints, but I think that
any set of functions for which that could happen would be ambiguous
against ordinary array columns as well.)  While we're at it, we can
improve the message that's produced in cases that the code did already
object to, as shown in the regression test changes.
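
A sketch of the resulting rule, using the pg_stats columns mentioned above:

    -- allowed: array_dims(anyarray) has no other polymorphic
    -- argument or result
    SELECT array_dims(histogram_bounds) FROM pg_stats LIMIT 1;
    -- rejected with a specific error: unnest(anyarray) has an
    -- anyelement result, which would promise an element type
    SELECT unnest(histogram_bounds) FROM pg_stats;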

Also, remove a similar test that got cargo-culted in for ANYRANGE;
there are no catalog columns of type ANYRANGE, and I hope we never
create any, so that's not needed.  (It was incomplete anyway.)

While here, update some comments and rearrange the code a bit in
preparation for upcoming additions of more polymorphic types.

In practical situations I believe this is just exchanging one error
message for another, hopefully better, one.  So it doesn't seem
needful to back-patch, even though the mistake is ancient.

Discussion: https://postgr.es/m/21569.1584314271@sss.pgh.pa.us
2020-03-17 18:29:14 -04:00
Tom Lane 9d9784c840 Remove bogus assertion about polymorphic SQL function result.
It is possible to reach check_sql_fn_retval() with an unresolved
polymorphic rettype, resulting in an assertion failure as demonstrated
by one of the added test cases.  However, the code following that
throws what seems an acceptable error message, so just remove the
Assert and adjust commentary.

While here, I thought it'd be a good idea to provide some parallel
tests of SQL-function and PL/pgSQL-function polymorphism behavior.
Some of these cases are perhaps duplicative of tests elsewhere,
but we hadn't any organized coverage of the topic AFAICS.

Although that assertion's been wrong all along, it won't have any
effect in production builds, so I'm not bothering to back-patch.

Discussion: https://postgr.es/m/21569.1584314271@sss.pgh.pa.us
2020-03-17 14:54:46 -04:00
Tom Lane b4570d33aa Avoid holding a directory FD open across assorted SRF calls.
This extends the fixes made in commit 085b6b667 to other SRFs with the
same bug, namely pg_logdir_ls(), pgrowlocks(), pg_timezone_names(),
pg_ls_dir(), and pg_tablespace_databases().

Also adjust various comments and documentation to warn against
expecting to clean up resources during a ValuePerCall SRF's final
call.
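
The problematic usage pattern, for any of the functions above, looks like
this:

    -- the SRF is not run to completion, so its "final call" cleanup
    -- never happens:
    SELECT pg_ls_dir('pg_wal') LIMIT 1;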

Back-patch to all supported branches, since these functions were
all born broken.

Justin Pryzby, with cosmetic tweaks by me

Discussion: https://postgr.es/m/20200308173103.GC1357@telsasoft.com
2020-03-16 21:05:52 -04:00
Tomas Vondra d8cfa82d51 Improve test coverage for multi-column MCV lists
The regression tests for extended statistics were not testing a couple
of important cases for the MCV lists:

  * IS NOT NULL clauses - We did have queries with IS NULL clauses, but
    not the negative case.

  * clauses with variable on the right - All the clauses had the Var on
    the left, i.e. (Var op Const), so this adds (Const op Var) too.

  * columns with fixed-length types passed by reference - All columns
    were using either by-value or varlena types, so add a test with
    UUID columns too. This matters for (de)serialization.

  * NULL-only dimension - When one of the columns contains only NULL
    values, we treat it as a special case during (de)serialization.

  * arrays containing NULL - When the constant parameter contains a NULL
    value, we need to handle it correctly during estimation, for all
    IN, ANY and ALL clauses.

Discussion: https://www.postgresql.org/message-id/flat/20200113230008.g67iyk4cs3xbnjju@development
Author: Tomas Vondra
2020-03-14 23:09:40 +01:00
Tomas Vondra f9696782c7 Improve test coverage for functional dependencies
The regression tests for functional dependencies were only using clauses
of the form (Var op Const), i.e. with Var on the left side. This adds
a couple of queries with Var on the right, to test other code paths.

It also prints one of the functional dependencies, to test the data type
output function. The functional dependencies are "perfect" with degree
of 1.0 so this should be stable.

Discussion: https://www.postgresql.org/message-id/flat/20200113230008.g67iyk4cs3xbnjju@development
Author: Tomas Vondra
2020-03-14 23:06:23 +01:00
Tom Lane 4dbcb3f844 Restructure polymorphic-type resolution in funcapi.c.
resolve_polymorphic_tupdesc() and resolve_polymorphic_argtypes() failed to
cover the case of having to resolve anyarray given only an anyrange input.
The bug was masked if anyelement was also used (as either input or
output), which probably helps account for our not having noticed.

While looking at this I noticed that resolve_generic_type() would produce
the wrong answer if asked to make that same resolution.  ISTM that
resolve_generic_type() is confusingly defined and overly complex, so
rather than fix it, let's just make funcapi.c do the specific lookups
it requires for itself.

With this change, resolve_generic_type() is not used anywhere, so remove
it in HEAD.  In the back branches, leave it alone (complete with bug)
just in case any external code is using it.

While we're here, make some other refactoring adjustments in funcapi.c
with an eye to upcoming future expansion of the set of polymorphic types:

* Simplify quick-exit tests by adding an overall have_polymorphic_result
flag.  This is about a wash now but will be a win when there are more
flags.

* Reduce duplication of code between resolve_polymorphic_tupdesc() and
resolve_polymorphic_argtypes().

* Don't bother to validate correct matching of anynonarray or anyenum;
the parser should have done that, and even if it didn't, just doing
"return false" here would lead to a very confusing, off-point error
message.  (Really, "return false" in these two functions should only
occur if the call_expr isn't supplied or we can't obtain data type
info from it.)

* For the same reason, throw an elog rather than "return false" if
we fail to resolve a polymorphic type.

The bug's been there since we added anyrange, so back-patch to
all supported branches.

Discussion: https://postgr.es/m/6093.1584202130@sss.pgh.pa.us
2020-03-14 14:42:22 -04:00
Tomas Vondra e83daa7e33 Use multi-variate MCV lists to estimate ScalarArrayOpExpr
Commit 8f321bd16c added support for estimating ScalarArrayOpExpr (IN/ANY)
clauses using functional dependencies.  There's no good reason not to
support estimation of these clauses using multi-variate MCV lists too, so
this commit implements that.  This makes the behavior consistent, and MCV
lists can estimate all variants (ANY/ALL, inequalities, ...).
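
A hedged sketch (names hypothetical):

    CREATE STATISTICS s (mcv) ON a, b FROM t;
    ANALYZE t;
    -- both the ANY clause and the inequality can now be estimated
    -- from the multi-column MCV list:
    SELECT * FROM t WHERE a = ANY (ARRAY[1, 2, 3]) AND b < 10;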

Author: Tomas Vondra
Review: Dean Rasheed
Discussion: https://www.postgresql.org/message-id/flat/13902317.Eha0YfKkKy%40pierred-pdoc
2020-03-14 16:13:00 +01:00
Tomas Vondra 8f321bd16c Use functional dependencies to estimate ScalarArrayOpExpr
Until now functional dependencies supported only simple equality clauses
and clauses that can be trivially translated to equalities. This commit
allows estimation of some ScalarArrayOpExpr (IN/ANY) clauses.

For IN clauses we can do this thanks to using an operator with equality
semantics, which means an IN clause

    WHERE c IN (1, 2, ..., N)

can be translated to

    WHERE (c = 1 OR c = 2 OR ... OR c = N)

IN clauses are now considered compatible with functional dependencies,
and rely on the same assumption of consistency of queries with data
(which is an assumption we already used for simple equality clauses).
This also applies to ANY clauses with an equality operator, which can be
considered equivalent to an IN clause.

ALL clauses are still considered incompatible, although there's some
discussion about maybe relaxing this in the future.

Author: Pierre Ducroquet
Reviewed-by: Tomas Vondra, Dean Rasheed
Discussion: https://www.postgresql.org/message-id/flat/13902317.Eha0YfKkKy%40pierred-pdoc
2020-03-14 16:12:41 +01:00
Peter Eisentraut 1cc9c2412c Preserve replica identity index across ALTER TABLE rewrite
If an index was explicitly set as replica identity index, this setting
was lost when a table was rewritten by ALTER TABLE.  Because this
setting is part of pg_index but actually controlled by ALTER
TABLE (not part of CREATE INDEX, say), we have to do some extra work
to restore it.
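
A sketch of the case being fixed (names hypothetical):

    CREATE TABLE t (id int NOT NULL, val int);
    CREATE UNIQUE INDEX t_key ON t (id);
    ALTER TABLE t REPLICA IDENTITY USING INDEX t_key;
    ALTER TABLE t ALTER COLUMN val TYPE bigint;  -- forces a table rewrite
    -- t_key now remains the replica identity index afterward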

Based-on-patch-by: Quan Zongliang <quanzongliang@gmail.com>
Reviewed-by: Euler Taveira <euler.taveira@2ndquadrant.com>
Discussion: https://www.postgresql.org/message-id/flat/c70fcab2-4866-0d9f-1d01-e75e189db342@gmail.com
2020-03-13 11:57:06 +01:00
Tom Lane a029a0641c Fix test case instability introduced in 085b6b667.
I forgot that the WAL directory might hold other files besides WAL
segments, notably including new segments still being filled.
That means a blind test for the first file's size being 16MB can
fail.  Restrict based on file name length to make it more robust.

Per buildfarm.
2020-03-11 18:24:11 -04:00
Tom Lane 085b6b6679 Avoid holding a directory FD open across pg_ls_dir_files() calls.
This coding technique is undesirable because (a) it leaks the FD for
the rest of the transaction if the SRF is not run to completion, and
(b) allocated FDs are a scarce resource, but multiple interleaved
uses of the relevant functions could eat many such FDs.

In v11 and later, a query such as "SELECT pg_ls_waldir() LIMIT 1"
yields a warning about the leaked FD, and the only reason there's
no warning in earlier branches is that fd.c didn't whine about such
leaks before commit 9cb7db3f0.  Even disregarding the warning, it
wouldn't be too hard to run a backend out of FDs with careless use
of these SQL functions.

Hence, rewrite the function so that it reads the directory within
a single call, returning the results as a tuplestore rather than
via value-per-call mode.

There are half a dozen other built-in SRFs with similar problems,
but let's fix this one to start with, just to see if the buildfarm
finds anything wrong with the code.

In passing, fix bogus error report for stat() failure: it was
whining about the directory when it should be fingering the
individual file.  Doubtless a copy-and-paste error.

Back-patch to v10 where this function was added.

Justin Pryzby, with cosmetic tweaks and test cases by me

Discussion: https://postgr.es/m/20200308173103.GC1357@telsasoft.com
2020-03-11 15:27:59 -04:00
Tom Lane cacef17223 Ensure that CREATE TABLE LIKE copies any NO INHERIT constraint property.
Since the documentation about LIKE doesn't say that a copied constraint
has properties different from the original, it seems that ignoring
a NO INHERIT property doesn't meet the principle of least surprise.
So make it copy that.

(Note, however, that we still don't copy a NOT VALID property;
CREATE TABLE offers no way to do that, plus it seems pointless.)
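
For example:

    CREATE TABLE src (a int CHECK (a > 0) NO INHERIT);
    CREATE TABLE dst (LIKE src INCLUDING CONSTRAINTS);
    -- dst's copy of the check constraint is now NO INHERIT as well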

Arguably this is a bug fix; but no back-patch, as it seems barely
possible somebody is depending on the current behavior.

Ildar Musin and Chris Travers; reviewed by Amit Langote and myself

Discussion: https://postgr.es/m/CAONYFtMC6C+3AWCVp7Yd8H87Zn0GxG1_iQG6_bQKbaqYZY0=-g@mail.gmail.com
2020-03-10 14:54:00 -04:00
Tom Lane d01f03a495 Preserve integer and float values accurately in (de)serialize_deflist.
Previously, this code just smashed all types of DefElem values to
strings, cavalierly reasoning that nobody would care.  But in point of
fact, most of the defGetFoo functions do distinguish among different
input syntaxes; for instance defGetBoolean will accept 1 as an integer
but not "1" as a string.  This led to CREATE/ALTER TEXT SEARCH
DICTIONARY accepting 0 and 1 as values for boolean dictionary
properties, only to have the dictionary fail at runtime.

We can upgrade this behavior by teaching serialize_deflist that it
does not need to quote T_Integer or T_Float nodes' values on output,
and then teaching deserialize_deflist to restore unquoted integer or
float values as the appropriate node type.  This should not break
anything using pg_ts_dict.dictinitoption, since that field is just
defined as being something valid to include in CREATE TEXT SEARCH
DICTIONARY.
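
An illustrative sketch of the previously-failing case, using the "simple"
template's boolean accept option:

    CREATE TEXT SEARCH DICTIONARY d (TEMPLATE = simple, accept = 0);
    SELECT ts_lexize('d', 'foo');  -- used to fail at runtime, because
                                   -- the 0 had been stored as the
                                   -- string '0'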

deserialize_deflist is also used to parse the options arguments
for the ts_headline family of functions, but so far as I can see
this won't cause any problems there either: the only consumer of
that output is prsd_headline which always uses defGetString.
(Really that's a bad idea, but I won't risk changing it here.)

This is surely a bug fix, but given the lack of field complaints
I don't think it's necessary to back-patch.

Discussion: https://postgr.es/m/CAMkU=1xRcs_BUPzR0+V3WndaCAv0E_m3h6aUEJ8NF-sY1nnHsw@mail.gmail.com
2020-03-10 12:30:02 -04:00
Peter Eisentraut 17b9e7f9fe Support adding partitioned tables to publication
When a partitioned table is added to a publication, changes of all of
its partitions (current or future) are published via that publication.

This change only affects which tables a publication considers as its
members.  The receiving side still sees the data coming from the
individual leaf partitions.  So existing restrictions that partition
hierarchies can only be replicated one-to-one are not changed by this.
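
For example (names hypothetical):

    CREATE TABLE meas (ts date, v int) PARTITION BY RANGE (ts);
    CREATE PUBLICATION pub FOR TABLE meas;
    -- changes in all current and future partitions of meas are
    -- published via pub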

Author: Amit Langote <amitlangote09@gmail.com>
Reviewed-by: Rafia Sabih <rafia.pghackers@gmail.com>
Reviewed-by: Peter Eisentraut <peter.eisentraut@2ndquadrant.com>
Discussion: https://www.postgresql.org/message-id/flat/CA+HiwqH=Y85vRK3mOdjEkqFK+E=ST=eQiHdpj43L=_eJMOOznQ@mail.gmail.com
2020-03-10 09:09:32 +01:00
Alexander Korotkov b0b5e20cd8 Show opclass and opfamily related information in psql
This commit provides psql commands for listing operator classes, operator
families, and their contents.  The new commands are useful for exploring the
capabilities of built-in opclasses/opfamilies as well as those defined in
extensions.

Discussion: https://postgr.es/m/1529675324.14193.5.camel%40postgrespro.ru
Author: Sergey Cherkashin, Nikita Glukhov, Alexander Korotkov
Reviewed-by: Michael Paquier, Alvaro Herrera, Arthur Zakirov
Reviewed-by: Kyotaro Horiguchi, Andres Freund
2020-03-08 13:33:16 +03:00
Tom Lane a6525588b7 Allow Unicode escapes in any server encoding, not only UTF-8.
SQL includes provisions for numeric Unicode escapes in string
literals and identifiers.  Previously we only accepted those
if they represented ASCII characters or the server encoding
was UTF-8, making the conversion to internal form trivial.
This patch adjusts things so that we'll call the appropriate
encoding conversion function in less-trivial cases, allowing
the escape sequence to be accepted so long as it corresponds
to some character available in the server encoding.
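
For instance, this is now accepted even when the server encoding is, say,
LATIN1, because U+00E4 exists in that encoding:

    SELECT U&'d\0061t\00E4';    -- 'datä'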

This also applies to processing of Unicode escapes in JSONB.
However, the old restriction still applies to client-side
JSON processing, since that hasn't got access to the server's
encoding conversion infrastructure.

This patch includes some lexer infrastructure that simplifies
throwing errors with error cursors pointing into the middle of
a string (or other complex token).  For the moment I only used
it for errors relating to Unicode escapes, but we might later
expand the usage to some other cases.

Patch by me, reviewed by John Naylor.

Discussion: https://postgr.es/m/2393.1578958316@sss.pgh.pa.us
2020-03-06 14:17:43 -05:00
Tom Lane fe30e7ebfa Allow ALTER TYPE to change some properties of a base type.
Specifically, this patch allows ALTER TYPE to:
* Change the default TOAST strategy for a toastable base type;
* Promote a non-toastable type to toastable;
* Add/remove binary I/O functions for a type;
* Add/remove typmod I/O functions for a type;
* Add/remove a custom ANALYZE statistics functions for a type.

The first of these can be done by the type's owner; all the others
require superuser privilege since misuse could cause problems.
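
A hedged sketch, assuming an extension-supplied base type "mytype" and
matching I/O functions:

    ALTER TYPE mytype SET (STORAGE = external);
    ALTER TYPE mytype SET (RECEIVE = mytype_recv, SEND = mytype_send);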

The main motivation for this patch is to allow extensions to
upgrade the feature sets of their data types, so the set of
alterable properties is biased towards that use-case.  However
it's also true that changing some other properties would be
a lot harder, as they get baked into physical storage and/or
stored expressions that depend on the type.

Along the way, refactor GenerateTypeDependencies() to make it easier
to call, refactor DefineType's volatility checks so they can be shared
by AlterType, and teach typcache.c that it might have to reload data
from the type's pg_type row, a scenario it never handled before.
Also rearrange alter_type.sgml a bit for clarity (put the
composite-type operations together).

Tomas Vondra and Tom Lane

Discussion: https://postgr.es/m/20200228004440.b23ein4qvmxnlpht@development
2020-03-06 12:19:29 -05:00
Tom Lane bb03010b9f Remove the "opaque" pseudo-type and associated compatibility hacks.
A long time ago, it was necessary to declare datatype I/O functions,
triggers, and language handler support functions in a very type-unsafe
way involving a single pseudo-type "opaque".  We got rid of those
conventions in 7.3, but there was still support in various places to
automatically convert such functions to the modern declaration style,
to be able to transparently re-load dumps from pre-7.3 servers.
It seems unnecessary to continue to support that anymore, so take out
the hacks; whereupon the "opaque" pseudo-type itself is no longer
needed and can be dropped.

This is part of a group of patches removing various server-side kluges
for transparently upgrading pre-8.0 dump files.  Since we've had few
complaints about dropping pg_dump's support for dumping from pre-8.0
servers (commit 64f3524e2), it seems okay to now remove these kluges.

Discussion: https://postgr.es/m/4110.1583255415@sss.pgh.pa.us
2020-03-05 15:48:56 -05:00
Tom Lane d677550493 Allow to_date/to_timestamp to recognize non-English month/day names.
to_char() has long allowed the TM (translation mode) prefix to
specify output of translated month or day names; but that prefix
had no effect in input format strings.  Now it does.  to_date()
and to_timestamp() will now recognize the same month or day names
that to_char() would output for the same format code.  Matching is
case-insensitive (per the active collation's notion of what that
means), just as it has always been for English month/day names
without the TM prefix.
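
A locale-dependent sketch (assumes a Spanish locale is installed):

    SET lc_time = 'es_ES.utf8';
    SELECT to_date('enero 15, 2020', 'TMMonth DD, YYYY');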

(As per the discussion thread, there are lots of cases that this
feature will not handle, such as alternate day names.  But being
able to accept what to_char() will output seems useful enough.)

In passing, fix some shaky English and violations of message
style guidelines in jsonpath errors for the .datetime() method,
which depends on this code.

Juan José Santamaría Flecha, reviewed and modified by me,
with other commentary from Alvaro Herrera, Tomas Vondra,
Arthur Zakirov, Peter Eisentraut, Mark Dilger.

Discussion: https://postgr.es/m/CAC+AXB3u1jTngJcoC1nAHBf=M3v-jrEfo86UFtCqCjzbWS9QhA@mail.gmail.com
2020-03-03 11:06:47 -05:00
Michael Paquier 0b48f1335d Fix assertion failure with ALTER TABLE ATTACH PARTITION and indexes
Using ALTER TABLE ATTACH PARTITION causes an assertion failure when
attempting to work on a partitioned index, because partitioned indexes
cannot have partition bounds.

The grammar of ALTER TABLE ATTACH PARTITION requires partition bounds,
but not ALTER INDEX, so mixing ALTER TABLE with partitioned indexes is
confusing.  Hence, on HEAD, prevent ALTER TABLE from attaching a partition if
the relation involved is a partitioned index.  On back-branches, as
applications may rely on the existing behavior, just remove the
culprit assertion.

Reported-by: Alexander Lakhin
Author: Amit Langote, Michael Paquier
Discussion: https://postgr.es/m/16276-5cd1dcc8fb8be7b5@postgresql.org
Backpatch-through: 11
2020-03-03 13:55:41 +09:00
Fujii Masao e65497df8f Report progress of streaming base backup.
This commit adds pg_stat_progress_basebackup view that reports
the progress while an application like pg_basebackup is taking
a base backup. This uses the progress reporting infrastructure
added by c16dc1aca5, adding support for streaming base backup.
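
While a base backup is in progress, for example:

    SELECT phase, backup_streamed, backup_total
      FROM pg_stat_progress_basebackup;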

Bump catversion.

Author: Fujii Masao
Reviewed-by: Kyotaro Horiguchi, Amit Langote, Sergei Kornilov
Discussion: https://postgr.es/m/9ed8b801-8215-1f3d-62d7-65bff53f6e94@oss.nttdata.com
2020-03-03 12:03:43 +09:00
Michael Paquier d79fb88ac7 Preserve pg_index.indisclustered across REINDEX CONCURRENTLY
If the flag value is lost, a CLUSTER query following REINDEX
CONCURRENTLY could fail.  Non-concurrent REINDEX is already handling
this case consistently.
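
The affected sequence looks roughly like this (names hypothetical):

    CLUSTER t USING t_idx;           -- sets pg_index.indisclustered
    REINDEX TABLE CONCURRENTLY t;
    CLUSTER t;                       -- relies on the preserved flag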

Author: Justin Pryzby
Discussion: https://postgr.es/m/20200229024202.GH29456@telsasoft.com
Backpatch-through: 12
2020-03-03 10:12:28 +09:00
Dean Rasheed 43a899f41f Fix corner-case loss of precision in numeric ln().
When deciding on the local rscale to use for the Taylor series
expansion, ln_var() neglected to account for the fact that the result
is subsequently multiplied by a factor of 2^(nsqrt+1), where nsqrt is
the number of square root operations performed in the range reduction
step, which can be as high as 22 for very large inputs. This could
result in a loss of precision, particularly when combined with large
rscale values, for which a large number of Taylor series terms is
required (up to around 400).

Fix by computing a few extra digits in the Taylor series, based on the
weight of the multiplicative factor log10(2^(nsqrt+1)). It remains to
be proven whether or not the other 8 extra digits used for the Taylor
series is appropriate, but this at least deals with the obvious
oversight of failing to account for the effects of the final
multiplication.

Per report from Justin AnyhowStep. Reviewed by Tom Lane.

Discussion: https://postgr.es/m/16280-279f299d9c06e56f@postgresql.org
2020-03-01 14:49:25 +00:00
Tom Lane 58c47ccfff Correctly re-use hash tables in buildSubPlanHash().
Commit 356687bd8 omitted to remove leftover code for destroying
a hashed subplan's hash tables, with the result that the tables
were always rebuilt not reused; this leads to severe memory
leakage if a hashed subplan is re-executed enough times.
Moreover, the code for reusing the hashnulls table had a typo
that would have made it do the wrong thing if it were reached.

Looking at the code coverage report shows severe under-coverage
of the potential callers of ResetTupleHashTable, so add some test
cases that exercise them.

Andreas Karlsson and Tom Lane, per reports from Ranier Vilela
and Justin Pryzby.

Backpatch to v11, as the faulty commit was.

Discussion: https://postgr.es/m/edb62547-c453-c35b-3ed6-a069e4d6b937@proxel.se
Discussion: https://postgr.es/m/CAEudQAo=DCebm1RXtig9OH+QivpS97sMkikt0A9qHmMUs+g6ZA@mail.gmail.com
Discussion: https://postgr.es/m/20200210032547.GA1412@telsasoft.com
2020-02-29 13:48:09 -05:00
Alvaro Herrera b9b408c487
Record parents of triggers
This let us get rid of a recently introduced ugly hack (commit
1fa846f1c9).

Author: Álvaro Herrera
Reviewed-by: Amit Langote, Tom Lane
Discussion: https://postgr.es/m/20200217215641.GA29784@alvherre.pgsql
2020-02-27 13:23:33 -03:00
Tom Lane a477bfc1df Suppress unnecessary RelabelType nodes in more cases.
eval_const_expressions sometimes produced RelabelType nodes that
were useless because they just relabeled an expression to the same
exposed type it already had.  This is worth avoiding because it can
cause two equivalent expressions to not be equal(), preventing
recognition of useful optimizations.  In the test case added here,
an unpatched planner fails to notice that the "sqli = constant" clause
renders a sort step unnecessary, because one code path produces an
extra RelabelType and another doesn't.

Fix by ensuring that eval_const_expressions_mutator's T_RelabelType
case will not add in an unnecessary RelabelType.  Also save some
code by sharing a subroutine with the effectively-equivalent cases
for CollateExpr and CoerceToDomain.  (CollateExpr had no bug, and
I think that the case couldn't arise with CoerceToDomain, but
it seems prudent to do the same check for all three cases.)

Back-patch to v12.  In principle this has been wrong all along,
but I haven't seen a case where it causes visible misbehavior
before v12, so refrain from changing stable branches unnecessarily.

Per investigation of a report from Eric Gillum.

Discussion: https://postgr.es/m/CAMmjdmvAZsUEskHYj=KT9sTukVVCiCSoe_PBKOXsncFeAUDPCQ@mail.gmail.com
2020-02-26 18:14:12 -05:00
Peter Geoghegan 0d861bbb70 Add deduplication to nbtree.
Deduplication reduces the storage overhead of duplicates in indexes that
use the standard nbtree index access method.  The deduplication process
is applied lazily, after the point where opportunistic deletion of
LP_DEAD-marked index tuples occurs.  Deduplication is only applied at
the point where a leaf page split would otherwise be required.  New
posting list tuples are formed by merging together existing duplicate
tuples.  The physical representation of the items on an nbtree leaf page
is made more space efficient by deduplication, but the logical contents
of the page are not changed.  Even unique indexes make use of
deduplication as a way of controlling bloat from duplicates whose TIDs
point to different versions of the same logical table row.

The lazy approach taken by nbtree has significant advantages over a GIN
style eager approach.  Most individual inserts of index tuples have
exactly the same overhead as before.  The extra overhead of
deduplication is amortized across insertions, just like the overhead of
page splits.  The key space of indexes works in the same way as it has
since commit dd299df8 (the commit that made heap TID a tiebreaker
column).

Testing has shown that nbtree deduplication can generally make indexes
with about 10 or 15 tuples for each distinct key value about 2.5X - 4X
smaller, even with single column integer indexes (e.g., an index on a
referencing column that accompanies a foreign key).  The final size of
single column nbtree indexes comes close to the final size of a similar
contrib/btree_gin index, at least in cases where GIN's posting list
compression isn't very effective.  This can significantly improve
transaction throughput, and significantly reduce the cost of vacuuming
indexes.

A new index storage parameter (deduplicate_items) controls the use of
deduplication.  The default setting is 'on', so all new B-Tree indexes
automatically use deduplication where possible.  This decision will be
reviewed at the end of the Postgres 13 beta period.
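
For example (the index name is the default one that would be chosen):

    CREATE INDEX ON t (a) WITH (deduplicate_items = on);
    ALTER INDEX t_a_idx SET (deduplicate_items = off);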

There is a regression of approximately 2% of transaction throughput with
synthetic workloads that consist of append-only inserts into a table
with several non-unique indexes, where all indexes have few or no
repeated values.  The underlying issue is that cycles are wasted on
unsuccessful attempts at deduplicating items in non-unique indexes.
There doesn't seem to be a way around it short of disabling
deduplication entirely.  Note that deduplication of items in unique
indexes is fairly well targeted in general, which avoids the problem
there (we can use a special heuristic to trigger deduplication passes in
unique indexes, since we're specifically targeting "version bloat").

Bump XLOG_PAGE_MAGIC because xl_btree_vacuum changed.

No bump in BTREE_VERSION, since the representation of posting list
tuples works in a way that's backwards compatible with version 4 indexes
(i.e. indexes built on PostgreSQL 12).  However, users must still
REINDEX a pg_upgrade'd index to use deduplication, regardless of the
Postgres version they've upgraded from.  This is the only way to set the
new nbtree metapage flag indicating that deduplication is generally
safe.

Author: Anastasia Lubennikova, Peter Geoghegan
Reviewed-By: Peter Geoghegan, Heikki Linnakangas
Discussion:
    https://postgr.es/m/55E4051B.7020209@postgrespro.ru
    https://postgr.es/m/4ab6e2db-bcee-f4cf-0916-3a06e6ccbb55@postgrespro.ru
2020-02-26 13:05:30 -08:00
Peter Geoghegan 612a1ab767 Add equalimage B-Tree support functions.
Invent the concept of a B-Tree equalimage ("equality implies image
equality") support function, registered as support function 4.  This
indicates whether it is safe (or not safe) to apply optimizations that
assume that any two datums considered equal by an operator class's order
method must be interchangeable without any loss of semantic information.
This is static information about an operator class and a collation.

Register an equalimage routine for almost all of the existing B-Tree
opclasses.  We only need two trivial routines for all of the opclasses
that are included with the core distribution.  There is one routine for
opclasses that index non-collatable types (which returns 'true'
unconditionally), plus another routine for collatable types (which
returns 'true' when the collation is a deterministic collation).
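
A hedged sketch of registering the non-collatable routine for a
hypothetical operator family:

    ALTER OPERATOR FAMILY my_fam USING btree
      ADD FUNCTION 4 (mytype) btequalimage(oid);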

This patch is infrastructure for an upcoming patch that adds B-Tree
deduplication.

Author: Peter Geoghegan, Anastasia Lubennikova
Discussion: https://postgr.es/m/CAH2-Wzn3Ee49Gmxb7V1VJ3-AC8fWn-Fr8pfWQebHe8rYRxt5OQ@mail.gmail.com
2020-02-26 11:28:25 -08:00
Alvaro Herrera b2304a7174
Simplify FK-to-partitioned regression test query
Avoid a join between relations having the FK to detect FK violation.
The planner might optimize this considering the PK must exist on the
referenced side at some point, effectively masking a bug this test
tries to detect.

Tom Lane and Jehan-Guillaume de Rorthais
Discussion: https://postgr.es/m/467.1581270529@sss.pgh.pa.us
2020-02-20 14:14:20 -03:00
Tom Lane 761a5688b1 Fix confusion about event trigger vs. plain function in plpgsql.
The function hash table keys made by compute_function_hashkey() failed
to distinguish event-trigger call context from regular call context.
This meant that once we'd successfully made a hash entry for an event
trigger (either by validation, or by normal use as an event trigger),
an attempt to call the trigger function as a plain function would
find this hash entry and thereby bypass the you-can't-do-that check in
do_compile().  Thus we'd attempt to execute the function, leading to
strange errors or even crashes, depending on function contents and
server version.

To fix, add an isEventTrigger field to PLpgSQL_func_hashkey,
paralleling the longstanding infrastructure for regular triggers.
This fits into what had been pad space, so there's no risk of an ABI
break, even assuming that any third-party code is looking at these
hash keys.  (I considered replacing isTrigger with a PLpgSQL_trigtype
enum field, but felt that that carried some API/ABI risk.  Maybe we
should change it in HEAD though.)

Per bug #16266 from Alexander Lakhin.  This has been broken since
event triggers were invented, so back-patch to all supported branches.

Discussion: https://postgr.es/m/16266-fcd7f838e97ba5d4@postgresql.org
2020-02-19 14:45:17 -05:00
Fujii Masao b7e51b350c Make inherited LOCK TABLE perform access permission checks on parent table only.
Previously, LOCK TABLE command through a parent table checked
the permissions on not only the parent table but also the children
tables inherited from it. This was a bug and inherited queries should
perform access permission checks on the parent table only. This
commit fixes LOCK TABLE so that it does not check the permission
on children tables.

This patch is applied only in the master branch. We decided not to
back-patch because it's not hard to imagine that there are some
applications expecting the old behavior and the change breaks their
security.

Author: Amit Langote
Reviewed-by: Fujii Masao
Discussion: https://postgr.es/m/CAHGQGwE+GauyG7POtRfRwwthAGwTjPQYdFHR97+LzA4RHGnJxA@mail.gmail.com
2020-02-18 13:13:15 +09:00
Alvaro Herrera 55173d2e66 Fix failure to create FKs correctly in partitions
On a multi-level partitioned table, when adding a partition not directly
connected to the root table, foreign key constraints referencing the
root were not cloned to the new partition, leading to the FK being
possibly inadvertently violated later on.

This was caused by fuzzy thinking in CloneFkReferenced (commit
f56f8f8da6): it was skipping constraints marked as having parents on
the theory that cloning those would create duplicates; but that's only
correct for the top level of the partitioning hierarchy.  For levels
below that one, such constraints must still be considered and only
skipped if later on we see that we'd create duplicates.  Apparently, I
(Álvaro) wrote the comments right but the code implemented something
slightly different.

Author: Jehan-Guillaume de Rorthais
Discussion: https://postgr.es/m/20200206004948.238352db@firost
2020-02-07 18:27:18 -03:00
Alvaro Herrera 9710d3d4a8 Fix TRUNCATE .. CASCADE on partitions
When running TRUNCATE CASCADE on a child of a partitioned table
referenced by another partitioned table, the truncate was not applied to
partitions of the referencing table; this could leave rows violating the
constraint in the referencing partitioned table.  Repair by walking the
pg_constraint chain all the way up to the topmost referencing table.

Note: any partitioned tables containing FKs that reference other
partitioned tables should be checked for possible violating rows, if
TRUNCATE has occurred in partitions of the referenced table.

Reported-by: Christophe Courtois
Author: Jehan-Guillaume de Rorthais
Discussion: https://postgr.es/m/20200204183906.115f693e@firost
2020-02-07 17:09:36 -03:00
Fujii Masao cb5b28613d Fix bug in Tid scan.
Commit 147e3722f7 changed Tid scan so that it calls table_beginscan()
and uses the scan option for seq scan. This change caused two issues.

(1) The change caused Tid scan to take a predicate lock on the entire
       relation in serializable transaction even when relation-level
       lock is not necessary. This could lead to an unexpected
       serialization error.

(2) The change caused Tid scan to increment the number of seq_scan
       in pg_stat_*_tables views even though it's not a seq scan. This
       could confuse the users.

This commit adds the scan option for Tid scan and makes Tid scan
use it, to avoid those issues.

Back-patch to v12, where the bug was introduced.

Author: Tatsuhito Kasahara
Reviewed-by: Kyotaro Horiguchi, Masahiko Sawada, Fujii Masao
Discussion: https://postgr.es/m/CAP0=ZVKy+gTbFmB6X_UW0pP3WaeJ-fkUWHoD-pExS=at3CY76g@mail.gmail.com
2020-02-07 22:06:31 +09:00
Michael Paquier b025f32e0b Add leader_pid to pg_stat_activity
This new field tracks the PID of the group leader used with parallel
query.  For parallel workers and the leader, the value is set to the
PID of the group leader.  So, for the group leader, the value is the
same as its own PID.  Note that this reflects what PGPROC stores in
shared memory, so as leader_pid is NULL if a backend has never been
involved in parallel query.  If the backend is using parallel query or
has used it at least once, the value is set until the backend exits.
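
For example, to see all processes currently involved in parallel queries:

    SELECT pid, leader_pid, query
      FROM pg_stat_activity
     WHERE leader_pid IS NOT NULL;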

Author: Julien Rouhaud
Reviewed-by: Sergei Kornilov, Guillaume Lelarge, Michael Paquier, Tomas
Vondra
Discussion: https://postgr.es/m/CAOBaU_Yy5bt0vTPZ2_LUM6cUcGeqmYNoJ8-Rgto+c2+w3defYA@mail.gmail.com
2020-02-06 09:18:06 +09:00
Andrew Gierth bf6cc19e34 Force tuple conversion when the source has missing attributes.
Tuple conversion incorrectly concluded that no conversion was needed
as long as all the attributes lined up. But if the source tuple has a
missing attribute (from addition of a column with default), then the
destination tupdesc might not reflect the same default. The typical
symptom was that the affected columns would be unexpectedly NULL.

Repair by always forcing conversion if the source has missing
attributes, which will be filled in by the deform operation. (In
theory we could optimize for when the destination has the same
default, but that seemed overkill.)

Backpatch to 11 where missing attributes were added.

Per bug #16242.

Vik Fearing (discovery, code, testing) and me (analysis, testcase).

Discussion: https://postgr.es/m/16242-d1c9fca28445966b@postgresql.org
2020-02-05 20:21:20 +00:00
Tom Lane 7d91b604d9 Fix handling of "Subplans Removed" field in EXPLAIN output.
Commit 499be013d added this field in a rather poorly-thought-through
manner, with the result being that rather than being a field of the
Append or MergeAppend plan node as intended (and as it seems to be,
in text format), it was actually an element of the "Plans" subgroup.
At least in JSON format, that's flat out invalid syntax, because
"Plans" is an array not an object.

While it's not hard to move the generation of the field so that it
appears where it's supposed to, this does result in a visible change
in field order in text format, in cases where a Append or MergeAppend
plan node has any InitPlans attached.  That's slightly annoying to
do in stable branches; but the alternative of continuing to emit
broken non-text formats seems worse.

Also, since the set of fields emitted is not supposed to be
data-dependent in non-text formats, make sure that "Subplans Removed"
appears in Append and MergeAppend nodes even when it's zero, in those
formats.  (The previous coding made it look like it could appear in
some other node types such as BitmapAnd, but we don't actually support
runtime pruning there, so don't emit it in those cases.)

Per bug #16171 from Mahadevan Ramachandran.  Fix by Daniel Gustafsson
and Tom Lane, reviewed by Hamid Akhtar.  Back-patch to v11 where this
code came in.

Discussion: https://postgr.es/m/16171-b72259ab75505fa2@postgresql.org
2020-02-04 13:07:13 -05:00
Tom Lane 74b35eb468 Fix CheckAttributeType's handling of collations for ranges.
Commit fc7695891 changed CheckAttributeType to recurse into ranges,
but made it pass down the wrong collation (always InvalidOid, since
ranges as such have no collation).  This would result in guaranteed
failure when considering a range type whose subtype is collatable.
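
A sketch of the failing case, a range over a collatable subtype:

    CREATE TYPE textrange AS RANGE (subtype = text, collation = "C");
    CREATE TABLE tr (r textrange);   -- previously failed the check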

Embarrassingly, we lack any regression tests that would expose such
a problem (but fortunately, somebody noticed before we shipped this
bug in any release).

Fix it to pass down the range's subtype collation property instead,
and add some regression test cases to exercise collatable-subtype
ranges a bit more.  Back-patch to all supported branches, as the
previous patch was.

Report and patch by Julien Rouhaud, test cases tweaked by me

Discussion: https://postgr.es/m/CAOBaU_aBWqNweiGUFX0guzBKkcfJ8mnnyyGC_KBQmO12Mj5f_A@mail.gmail.com
2020-01-31 17:03:55 -05:00
Fujii Masao e6f1e560e4 Make inherited TRUNCATE perform access permission checks on parent table only.
Previously, TRUNCATE command through a parent table checked the
permissions on not only the parent table but also the children tables
inherited from it. This was a bug and inherited queries should perform
access permission checks on the parent table only. This commit fixes
that bug.
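
A sketch of the new behavior (names hypothetical):

    GRANT TRUNCATE ON parent_tbl TO app_user;  -- children not granted
    -- as app_user:
    TRUNCATE parent_tbl;   -- now checks only the parent table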

Back-patch to all supported branches.

Author: Amit Langote
Reviewed-by: Fujii Masao
Discussion: https://postgr.es/m/CAHGQGwFHdSvifhJE+-GSNqUHSfbiKxaeQQ7HGcYz6SC2n_oDcg@mail.gmail.com
2020-01-31 00:42:06 +09:00
Tom Lane 50fc694e43 Invent "trusted" extensions, and remove the pg_pltemplate catalog.
This patch creates a new extension property, "trusted".  An extension
that's marked that way in its control file can be installed by a
non-superuser who has the CREATE privilege on the current database,
even if the extension contains objects that normally would have to be
created by a superuser.  The objects within the extension will (by
default) be owned by the bootstrap superuser, but the extension itself
will be owned by the calling user.  This allows replicating the old
behavior around trusted procedural languages, without all the
special-case logic in CREATE LANGUAGE.  We have, however, chosen to
loosen the rules slightly: formerly, only a database owner could take
advantage of the special case that allowed installation of a trusted
language, but now anyone who has CREATE privilege can do so.
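
For example, assuming an extension "someext" whose control file says
trusted = true, a non-superuser with CREATE on the database can now run:

    CREATE EXTENSION someext;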

Having done that, we can delete the pg_pltemplate catalog, moving the
knowledge it contained into the extension script files for the various
PLs.  This ends up being no change at all for the in-core PLs, but it is
a large step forward for external PLs: they can now have the same ease
of installation as core PLs do.  The old "trusted PL" behavior was only
available to PLs that had entries in pg_pltemplate, but now any
extension can be marked trusted if appropriate.

This also removes one of the stumbling blocks for our Python 2 -> 3
migration, since the association of "plpythonu" with Python 2 is no
longer hard-wired into pg_pltemplate's initial contents.  Exactly where
we go from here on that front remains to be settled, but one problem
is fixed.

Patch by me, reviewed by Peter Eisentraut, Stephen Frost, and others.

Discussion: https://postgr.es/m/5889.1566415762@sss.pgh.pa.us
2020-01-29 18:42:43 -05:00
Amit Kapila 05f18c6b6b Added relation name in error messages for constraint checks.
This gives more information to the user about the error and it makes such
messages consistent with the other similar messages in the code.

Reported-by: Simon Riggs
Author: Mahendra Singh and Simon Riggs
Reviewed-by: Beena Emerson and Amit Kapila
Discussion: https://postgr.es/m/CANP8+j+7YUvQvGxTrCiw77R23enMJ7DFmyA3buR+fa2pKs4XhA@mail.gmail.com
2020-01-28 07:48:10 +05:30
Tom Lane 3ec20c7091 Fix EXPLAIN (SETTINGS) to follow policy about when to print empty fields.
In non-TEXT output formats, the "Settings" field should appear when
requested, even if it would be empty.
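
For example:

    EXPLAIN (SETTINGS, FORMAT JSON) SELECT 1;
    -- the "Settings" key now appears even if no GUC_EXPLAIN
    -- parameters differ from their defaults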

Also, get rid of the premature optimization of counting all the
GUC_EXPLAIN variables at startup.  Since there was no provision for
adjusting that count later, all it'd take would be some extension marking
a parameter as GUC_EXPLAIN to risk an assertion failure or memory stomp.
We could make get_explain_guc_options() count those variables on-the-fly,
or dynamically resize its array ... but TBH I do not think that making a
transient array of pointers a bit smaller is worth any extra complication,
especially when you consider all the other transient space EXPLAIN eats.
So just allocate that array at the max possible size.

In HEAD, also add some regression test coverage for this feature.

Because of the memory-stomp hazard, back-patch to v12 where this
feature was added.

Discussion: https://postgr.es/m/19416.1580069629@sss.pgh.pa.us
2020-01-26 16:32:19 -05:00
Tom Lane 1001368497 Clean up EXPLAIN's handling of per-worker details.
Previously, it was possible for EXPLAIN ANALYZE of a parallel query
to produce several different "Workers" fields for a single plan node,
because different portions of explain.c independently generated
per-worker data and wrapped that output in separate fields.  This
is pretty bogus, especially for the structured output formats: even
if it's not technically illegal, most programs would have a hard time
dealing with such data.

To improve matters, add infrastructure that allows redirecting
per-worker values into a side data structure, and then collect that
data into a single "Workers" field after we've finished running all
the relevant code for a given plan node.

There are a few visible side-effects:

* In text format, instead of something like

  Sort Method: external merge  Disk: 4920kB
  Worker 0:  Sort Method: external merge  Disk: 5880kB
  Worker 1:  Sort Method: external merge  Disk: 5920kB
  Buffers: shared hit=682 read=10188, temp read=1415 written=2101
  Worker 0:  actual time=130.058..130.324 rows=1324 loops=1
    Buffers: shared hit=337 read=3489, temp read=505 written=739
  Worker 1:  actual time=130.273..130.512 rows=1297 loops=1
    Buffers: shared hit=345 read=3507, temp read=505 written=744

you get

  Sort Method: external merge  Disk: 4920kB
  Buffers: shared hit=682 read=10188, temp read=1415 written=2101
  Worker 0:  actual time=130.058..130.324 rows=1324 loops=1
    Sort Method: external merge  Disk: 5880kB
    Buffers: shared hit=337 read=3489, temp read=505 written=739
  Worker 1:  actual time=130.273..130.512 rows=1297 loops=1
    Sort Method: external merge  Disk: 5920kB
    Buffers: shared hit=345 read=3507, temp read=505 written=744

* When JIT is enabled, any relevant per-worker JIT stats are attached
to the child node of the Gather or Gather Merge node, which is where
the other per-worker output has always been.  Previously, that info
was attached directly to a Gather node, or missed entirely for Gather
Merge.

* A query's summary JIT data no longer includes a bogus
"Worker Number: -1" field.

A notable code-level change is that indenting for lines of text-format
output should now be handled by calling "ExplainIndentText(es)",
instead of hard-wiring how much space to emit.  This seems a good deal
cleaner anyway.

This patch also adds a new "explain.sql" regression test script that's
dedicated to testing EXPLAIN.  There is more that can be done in that
line, certainly, but for now it just adds some coverage of the XML and
YAML output formats, which had been completely untested.

Although this is surely a bug fix, it's not clear that people would
be happy with rearranging EXPLAIN output in a minor release, so apply
to HEAD only.

Maciek Sakrejda and Tom Lane, based on an idea of Andres Freund's;
reviewed by Georgios Kokolatos

Discussion: https://postgr.es/m/CAOtHd0AvAA8CLB9Xz0wnxu1U=zJCKrr1r4QwwXi_kcQsHDVU=Q@mail.gmail.com
2020-01-25 18:16:42 -05:00
Dean Rasheed 13661ddd7e Add functions gcd() and lcm() for integer and numeric types.
These compute the greatest common divisor and least common multiple of
a pair of numbers using the Euclidean algorithm.
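
For example:

    SELECT gcd(12, 18);    -- 6
    SELECT lcm(4, 6);      -- 12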

Vik Fearing, reviewed by Fabien Coelho.

Discussion: https://postgr.es/m/adbd3e0b-e3f1-5bbc-21db-03caf1cef0f7@2ndquadrant.com
2020-01-25 14:00:59 +00:00
Tom Lane 9a3a75cb81 Fix an oversight in commit 4c70098ff.
I had supposed that the from_char_seq_search() call sites were
all passing the constant arrays you'd expect them to pass ...
but on looking closer, the one for DY format was passing the
days[] array not days_short[].  This accidentally worked because
the day abbreviations in English are all the same as the first
three letters of the full day names.  However, once we took out
the "maximum comparison length" logic, it stopped working.

As penance for that oversight, add regression test cases covering
this, as well as every other switch case in DCH_from_char() that
was not reached according to the code coverage report.

Also, fold the DCH_RM and DCH_rm cases into one --- now that
seq_search is case independent, there's no need to pass different
comparison arrays for those cases.

Back-patch, as the previous commit was.
2020-01-23 16:15:32 -05:00
Tom Lane 4c70098ffa Clean up formatting.c's logic for matching constant strings.
seq_search(), which is used to match input substrings to constants
such as month and day names, had a lot of bizarre and unnecessary
behaviors.  It was mostly possible to avert our eyes from that before,
but we don't want to duplicate those behaviors in the upcoming patch
to allow recognition of non-English month and day names.  So it's time
to clean this up.  In particular:

* seq_search scribbled on the input string, which is a pretty dangerous
thing to do, especially in the badly underdocumented way it was done here.
Fortunately the input string is a temporary copy, but that was being made
three subroutine levels away, making it something easy to break
accidentally.  The behavior is externally visible nonetheless, in the form
of odd case-folding in error reports about unrecognized month/day names.
The scribbling is evidently being done to save a few calls to pg_tolower,
but that's such a cheap function (at least for ASCII data) that it's
pretty pointless to worry about.  In HEAD I switched it to be
pg_ascii_tolower to ensure it is cheap in all cases; but there are corner
cases in Turkish where this'd change behavior, so leave it as pg_tolower
in the back branches.

* seq_search insisted on knowing the case form (all-upper, all-lower,
or initcap) of the constant strings, so that it didn't have to case-fold
them to perform case-insensitive comparisons.  This likewise seems like
excessive micro-optimization, given that pg_tolower is certainly very
cheap for ASCII data.  It seems unsafe to assume that we know the case
form that will come out of pg_locale.c for localized month/day names, so
it's better just to define the comparison rule as "downcase all strings
before comparing".  (The choice between downcasing and upcasing is
arbitrary so far as English is concerned, but it might not be in other
locales, so follow citext's lead here.)

* seq_search also had a parameter that'd cause it to report a match
after a maximum number of characters, even if the constant string were
longer than that.  This was not actually used because no caller passed
a value small enough to cut off a comparison.  Replicating that behavior
for localized month/day names seems expensive as well as useless, so
let's get rid of that too.

* from_char_seq_search used the maximum-length parameter to truncate
the input string in error reports about not finding a matching name.
This leads to rather confusing reports in many cases.  Worse, it is
outright dangerous if the input string isn't all-ASCII, because we
risk truncating the string in the middle of a multibyte character.
That'd lead either to delivering an illegible error message to the
client, or to encoding-conversion failures that obscure the actual
data problem.  Get rid of that in favor of truncating at whitespace
if any (a suggestion due to Alvaro Herrera).

In addition to fixing these things, I const-ified the input string
pointers of DCH_from_char and its subroutines, to make sure there
aren't any other scribbling-on-input problems.

The risk of generating a badly-encoded error message seems like
enough of a bug to justify back-patching, so patch all supported
branches.

Discussion: https://postgr.es/m/29432.1579731087@sss.pgh.pa.us
2020-01-23 13:42:09 -05:00
Alvaro Herrera 611ce856f5 Add BRIN test case
This test case was sketched in commit message 4c87010981 to explain an
ancient bug; it translates to a coverage increase, so add it to the BRIN
regression tests.
2020-01-22 18:37:39 -03:00
Michael Paquier a904abe2e2 Fix concurrent indexing operations with temporary tables
Attempting to use CREATE INDEX, DROP INDEX or REINDEX with CONCURRENTLY
on a temporary relation with ON COMMIT actions triggered unexpected
errors because those operations use multiple transactions internally to
complete their work.  Here is, for example, one confusing error when using
ON COMMIT DELETE ROWS:
ERROR:  index "foo" already contains data

Issues related to temporary relations and concurrent indexing are fixed
in this commit by enforcing the non-concurrent path to be taken for
temporary relations even if using CONCURRENTLY, transparently to the
user.  Using a non-concurrent path does not matter in practice as locks
cannot be taken on a temporary relation by a session different than the
one owning the relation, and the non-concurrent operation is more
effective.
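
For example, this now quietly takes the non-concurrent path:

    CREATE TEMP TABLE tt (a int) ON COMMIT DELETE ROWS;
    CREATE INDEX CONCURRENTLY tt_idx ON tt (a);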

The problem exists with REINDEX since v12 with the introduction of
CONCURRENTLY, and with CREATE/DROP INDEX since CONCURRENTLY exists for
those commands.  In all supported versions, this caused only confusing
error messages to be generated.  Note that with REINDEX, it was also
possible to issue a REINDEX CONCURRENTLY for a temporary relation owned
by a different session, leading to a server crash.

The idea to enforce transparently the non-concurrent code path for
temporary relations comes originally from Andres Freund.

Reported-by: Manuel Rigger
Author: Michael Paquier, Heikki Linnakangas
Reviewed-by: Andres Freund, Álvaro Herrera, Heikki Linnakangas
Discussion: https://postgr.es/m/CA+u7OA6gP7YAeCguyseusYcc=uR8+ypjCcgDDCTzjQ+k6S9ksQ@mail.gmail.com
Backpatch-through: 9.4
2020-01-22 09:49:18 +09:00
Tom Lane 9b9c5f279e Clarify behavior of adding and altering a column in same ALTER command.
The behavior of something like

ALTER TABLE transactions
  ADD COLUMN status varchar(30) DEFAULT 'old',
  ALTER COLUMN status SET default 'current';

is to fill existing table rows with 'old', not 'current'.  That's
intentional and desirable for a couple of reasons:

* It makes the behavior the same whether you merge the sub-commands
into one ALTER command or give them separately;

* If we applied the new default while filling the table, there would
be no way to get the existing behavior in one SQL command.

The same reasoning applies in cases that add a column and then
manipulate its GENERATED/IDENTITY status in a second sub-command,
since the generation expression is really just a kind of default.
However, that wasn't very obvious (at least not to me; earlier in
the referenced discussion thread I'd thought it was a bug to be
fixed).  And it certainly wasn't documented.

Hence, add documentation, code comments, and a test case to clarify
that this behavior is all intentional.

In passing, adjust ATExecAddColumn's defaults-related relkind check
so that it matches up exactly with ATRewriteTables, instead of being
effectively (though not literally) the negated inverse condition.
The reasoning can be explained a lot more concisely that way, too
(not to mention that the comment now matches the code, which it
did not before).

Discussion: https://postgr.es/m/10365.1558909428@sss.pgh.pa.us
2020-01-21 16:17:21 -05:00
Tom Lane 31f403e95f Further tweaking of jsonb_set_lax().
Some buildfarm members were still warning about this, because in
9c679a08f I'd missed decorating one of the ereport() code paths
with a dummy return.

Also, adjust the error messages to be more in line with project
style guide.
2020-01-20 14:26:56 -05:00
Amit Kapila 40d964ec99 Allow vacuum command to process indexes in parallel.
This feature allows the vacuum to leverage multiple CPUs in order to
process indexes.  This enables us to perform index vacuuming and index
cleanup with background workers.  This adds a PARALLEL option to the VACUUM
command, with which the user can specify the number of workers used to
perform the command; that number is limited by the number of indexes on the
table.  Specifying zero as the number of workers disables parallelism.
This option can't be used with the FULL option.
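
For example (table name hypothetical):

    VACUUM (PARALLEL 4) big_table;
    VACUUM (PARALLEL 0) big_table;   -- disables parallelism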

Each index is processed by at most one vacuum process.  Therefore parallel
vacuum can be used when the table has at least two indexes.

The parallel degree is either specified by the user or determined based on
the number of indexes that the table has, and further limited by
max_parallel_maintenance_workers.  An index can participate in parallel
vacuum iff its size is greater than min_parallel_index_scan_size.
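
A sketch of the new syntax (table name hypothetical):

    -- vacuum, processing indexes with up to 4 parallel workers
    VACUUM (PARALLEL 4) some_table;
    -- explicitly disable parallelism
    VACUUM (PARALLEL 0) some_table;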

Author: Masahiko Sawada and Amit Kapila
Reviewed-by: Dilip Kumar, Amit Kapila, Robert Haas, Tomas Vondra,
Mahendra Singh and Sergei Kornilov
Tested-by: Mahendra Singh and Prabhat Sahu
Discussion:
https://postgr.es/m/CAD21AoDTPMgzSkV4E3SFo1CH_x50bf5PqZFQf4jmqjk-C03BWg@mail.gmail.com
https://postgr.es/m/CAA4eK1J-VoR9gzS5E75pcD-OH0mEyCdp8RihcwKrcuw7J-Q0+w@mail.gmail.com
2020-01-20 07:57:49 +05:30
Alexander Korotkov 4b754d6c16 Avoid full scan of GIN indexes when possible
The strategy of a GIN index scan is driven by the opclass-specific
extract_query method.  This method may report that the needed search mode
is GIN_SEARCH_MODE_ALL.  This mode means that a matching tuple may contain
none of the extracted entries.  A simple example is the '!term' tsquery,
which doesn't need any term to exist in a matching tsvector.

In order to handle such a scan key, GIN calculates a virtual entry, which
contains all TIDs of all entries of the attribute.  In effect this is a
full scan of the index attribute.  That is typically very slow, but it
allows GIN to handle some queries correctly.  However, the current
algorithm calculates such a virtual entry for every GIN_SEARCH_MODE_ALL
scan key, even when there are several of them for the same attribute.
This is clearly not optimal.

This commit improves the situation by introducing "exclude only" scan keys.
Such scan keys cannot return a set of matching TIDs on their own.  Instead,
they can only filter TIDs produced by normal scan keys.  Therefore, each
attribute must keep at least one normal scan key, while the rest may be
"exclude only" if their search mode is GIN_SEARCH_MODE_ALL.

The same optimization might be applied to the whole scan, not per
attribute, but that leads to a problem with eliminating NULL values.
There is a trade-off between multiple possible ways to do this.  We
probably want to do this later using some cost-based decision algorithm.
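
For illustration, a sketch of a query shape this helps (table and column
names hypothetical):

    -- assuming a GIN index on body_tsv: the '!dog' key reports
    -- GIN_SEARCH_MODE_ALL and now becomes an "exclude only" key that
    -- merely filters TIDs from the 'cat' key, instead of forcing a
    -- full scan of the attribute
    SELECT * FROM docs
    WHERE body_tsv @@ to_tsquery('cat')
      AND body_tsv @@ to_tsquery('!dog');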

Discussion: https://postgr.es/m/CAOBaU_YGP5-BEt5Cc0%3DzMve92vocPzD%2BXiZgiZs1kjY0cj%3DXBg%40mail.gmail.com
Author: Nikita Glukhov, Alexander Korotkov, Tom Lane, Julien Rouhaud
Reviewed-by: Julien Rouhaud, Tomas Vondra, Tom Lane
2020-01-18 01:11:39 +03:00
Tom Lane 41c6f9db25 Repair more failures with SubPlans in multi-row VALUES lists.
Commit 9b63c13f0 turns out to have been fundamentally misguided:
the parent node's subPlan list is by no means the only way in which
a child SubPlan node can be hooked into the outer execution state.
As shown in bug #16213 from Matt Jibson, we can also get short-lived
tuple table slots added to the outer es_tupleTable list.  At this point
I have little faith that there aren't other possible connections as
well; the long time it took to notice this problem shows that this
isn't a heavily-exercised situation.

Therefore, revert that fix, returning to the coding that passed a
NULL parent plan pointer down to the transiently-built subexpressions.
That gives us a pretty good guarantee that they won't hook into the
outer executor state in any way.  But then we need some other solution
to make SubPlans work.  Adopt the solution speculated about in the
previous commit's log message: do expression initialization at plan
startup for just those VALUES rows containing SubPlans, abandoning the
goal of reclaiming memory intra-query for those rows.  In practice it
seems unlikely that queries containing a vast number of VALUES rows
would be using SubPlans in them, so this should not give up much.

(BTW, this test case also refutes my claim in connection with the prior
commit that the issue only arises with use of LATERAL.  That was just
wrong: some variants of SubLink always produce SubPlans.)
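
For illustration, a sketch of the general shape from this class of bug
reports: a multi-row VALUES list whose expressions contain SubLinks
(table and column names hypothetical):

    INSERT INTO tgt (a, b)
    VALUES (1, (SELECT count(*) FROM src)),
           (2, (SELECT max(x) FROM src));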

As with previous patch, back-patch to all supported branches.

Discussion: https://postgr.es/m/16213-871ac3bc208ecf23@postgresql.org
2020-01-17 16:17:31 -05:00
Andrew Dunstan a83586b554 Add a non-strict version of jsonb_set
jsonb_set_lax() is the same as jsonb_set, except that it takes an extra
argument that specifies what to do if the value argument is NULL. The
default is 'use_json_null'. Other possibilities are 'raise_exception',
'return_target' and 'delete_key', all these behaviours having been
suggested as reasonable by various users.
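
A sketch of the behaviour (the argument order follows the new function's
definition):

    -- jsonb_set_lax(target, path, new_value, create_if_missing,
    --               null_value_treatment)
    SELECT jsonb_set_lax('{"a": 1}', '{a}', NULL, true, 'delete_key');
    -- result: {}
    SELECT jsonb_set_lax('{"a": 1}', '{a}', NULL, true, 'return_target');
    -- result: {"a": 1}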

Discussion: https://postgr.es/m/375873e2-c957-3a8d-64f9-26c43c2b16e7@2ndQuadrant.com

Reviewed by: Pavel Stehule
2020-01-17 11:52:39 +10:30
Tom Lane 1281a5c907 Restructure ALTER TABLE execution to fix assorted bugs.
We've had numerous bug reports about how (1) IF NOT EXISTS clauses in
ALTER TABLE don't behave as expected, and (2) combining certain actions
into one ALTER TABLE doesn't work, though executing the same actions as
separate statements does.  This patch cleans up all of the cases so far
reported from the field, though there are still some oddities associated
with identity columns.

The core problem behind all of these bugs is that we do parse analysis
of ALTER TABLE subcommands too soon, before starting execution of the
statement.  The root of the bugs in group (1) is that parse analysis
schedules derived commands (such as a CREATE SEQUENCE for a serial
column) before it's known whether the IF NOT EXISTS clause should cause
a subcommand to be skipped.  The root of the bugs in group (2) is that
earlier subcommands may change the catalog state that later subcommands
need to be parsed against.

Hence, postpone parse analysis of ALTER TABLE's subcommands, and do
that one subcommand at a time, during "phase 2" of ALTER TABLE which
is the phase that does catalog rewrites.  Thus the catalog effects
of earlier subcommands are already visible when we analyze later ones.
(The sole exception is that we do parse analysis for ALTER COLUMN TYPE
subcommands during phase 1, so that their USING expressions can be
parsed against the table's original state, which is what we need.
Arguably, these bugs stem from falsely concluding that because ALTER
COLUMN TYPE must do early parse analysis, every other command subtype
can too.)

This means that ALTER TABLE itself must deal with execution of any
non-ALTER-TABLE derived statements that are generated by parse analysis.
Add a suitable entry point to utility.c to accept those recursive
calls, and create a struct to pass through the information needed by
the recursive call, rather than making the argument lists of
AlterTable() and friends even longer.

Getting this to work correctly required a little bit of fiddling
with the subcommand pass structure, in particular breaking up
AT_PASS_ADD_CONSTR into multiple passes.  But otherwise it's mostly
a pretty straightforward application of the above ideas.

Fixing the residual issues for identity columns requires refactoring of
where the dependency link from an identity column to its sequence gets
set up.  So that seems like suitable material for a separate patch,
especially since this one is pretty big already.
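
As an example of group (1), a sketch grounded in the description above
(table name hypothetical):

    -- before this patch, parse analysis scheduled the derived
    -- CREATE SEQUENCE before IF NOT EXISTS had been evaluated, so the
    -- sequence could be created even when the column was skipped
    ALTER TABLE tab ADD COLUMN IF NOT EXISTS id serial;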

Discussion: https://postgr.es/m/10365.1558909428@sss.pgh.pa.us
2020-01-15 18:49:24 -05:00
Alvaro Herrera a166d408eb Report progress of ANALYZE commands
This uses the progress reporting infrastructure added by c16dc1aca5,
adding support for ANALYZE.
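
Progress can be watched from another session via the new
pg_stat_progress_analyze view:

    SELECT * FROM pg_stat_progress_analyze;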

Co-authored-by: Álvaro Herrera <alvherre@alvh.no-ip.org>
Co-authored-by: Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp>
Reviewed-by: Julien Rouhaud, Robert Haas, Anthony Nowocien, Kyotaro Horiguchi,
	Vignesh C, Amit Langote
2020-01-15 11:14:39 -03:00
Peter Eisentraut f595117e24 ALTER TABLE ... ALTER COLUMN ... DROP EXPRESSION
Add an ALTER TABLE subcommand for dropping the generated property from
a column, per SQL standard.
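
A sketch of the new subcommand (table and column names hypothetical):

    -- turn a stored generated column into an ordinary column,
    -- keeping its current values
    ALTER TABLE tab ALTER COLUMN total DROP EXPRESSION;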

Reviewed-by: Sergei Kornilov <sk@zsrv.org>
Discussion: https://www.postgresql.org/message-id/flat/2f7f1d9c-946e-0453-d841-4f38eb9d69b6%402ndquadrant.com
2020-01-14 13:36:03 +01:00
Dean Rasheed d751ba5235 Make rewriter prevent auto-updates on views with conditional INSTEAD rules.
A view with conditional INSTEAD rules and no unconditional INSTEAD
rules or INSTEAD OF triggers is not auto-updatable. Previously we
relied on a check in the executor to catch this, but that's
problematic since the planner may fail to properly handle such a query
and thus return a particularly unhelpful error to the user, before
reaching the executor check.

Instead, trap this in the rewriter and report the correct error there.
Doing so also allows us to include more useful error detail than the
executor check can provide. This doesn't change the existing behaviour
of updatable views; it merely ensures that useful error messages are
reported when a view isn't updatable.
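
For illustration, a sketch of the shape now caught in the rewriter
(names hypothetical):

    CREATE VIEW v AS SELECT id, val FROM tab;
    -- a conditional INSTEAD rule, with no unconditional one
    CREATE RULE v_upd AS ON UPDATE TO v WHERE old.id > 0
      DO INSTEAD UPDATE tab SET val = new.val WHERE id = old.id;
    -- the view is not auto-updatable; the rewriter now reports the error
    UPDATE v SET val = 'x';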

Per report from Pengzhou Tang, though not adopting that suggested fix.
Back-patch to all supported branches.

Discussion: https://postgr.es/m/CAG4reAQn+4xB6xHJqWdtE0ve_WqJkdyCV4P=trYr4Kn8_3_PEA@mail.gmail.com
2020-01-14 09:52:21 +00:00
Tom Lane 7f380c59f8 Reduce size of backend scanner's tables.
Previously, the core scanner's yy_transition[] array had 37045 elements.
Since that number is larger than INT16_MAX, Flex generated the array to
contain 32-bit integers.  By reimplementing some of the bulkier scanner
rules, this patch reduces the array to 20495 elements.  The much smaller
total length, combined with the consequent use of 16-bit integers for
the array elements reduces the binary size by over 200kB.  This was
accomplished in two ways:

1. Consolidate handling of quote continuations into a new start condition,
rather than duplicating that logic for five different string types.

2. Treat Unicode strings and identifiers followed by a UESCAPE sequence
as three separate tokens, rather than one.  The logic to de-escape
Unicode strings is moved to the filter code in parser.c, which already
had the ability to provide special processing for token sequences.
While we could have implemented the conversion in the grammar, that
approach was rejected for performance and maintainability reasons.
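
For example, a Unicode-escaped literal followed by a UESCAPE sequence
(this is the documented syntax; the literal below spells 'data'):

    SELECT U&'d!0061t!+000061' UESCAPE '!';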

Performance in microbenchmarks of raw parsing seems equal or slightly
faster in most cases, and it's reasonable to expect that in real-world
usage (with more competition for the CPU cache) there will be a larger
win.  The exception is UESCAPE sequences; lexing those is about 10%
slower, primarily because the scanner now has to be called three times
rather than one.  This seems acceptable since that feature is very
rarely used.

The psql and ecpg lexers are likewise modified, primarily because we
want to keep them all in sync.  Since those lexers don't use the
space-hogging -CF option, the space savings is much less, but it's
still good for perhaps 10kB apiece.

While at it, merge the ecpg lexer's handling of C-style comments used
in SQL and in C.  Those have different rules regarding nested comments,
but since we already have the ability to keep track of the previous
start condition, we can use that to handle both cases within a single
start condition.  This matches the core scanner more closely.

John Naylor

Discussion: https://postgr.es/m/CACPNZCvaoa3EgVWm5yZhcSTX6RAtaLgniCPcBVOCwm8h3xpWkw@mail.gmail.com
2020-01-13 15:04:31 -05:00
Tomas Vondra eae056c19e Apply multiple multivariate MCV lists when possible
Until now we've only used a single multivariate MCV list per relation,
covering the largest number of clauses. So for example given a query

    SELECT * FROM t WHERE a = 1 AND b =1 AND c = 1 AND d = 1

and extended statistics on (a,b) and (c,d), we'd only pick and use one
of them. This commit improves on that by repeatedly picking and applying
the best statistics object (the one matching the largest number of
remaining clauses) until no additional statistics object is applicable.

This greedy algorithm is simple, but may not be optimal. A different
choice of statistics may leave fewer clauses unestimated and/or give
better estimates for some other reason.

This can however happen only when there are overlapping statistics, and
selecting one makes it impossible to use the other. E.g. with statistics
on (a,b), (c,d), (b,c,d), we may pick either (a,b) and (c,d) or (b,c,d).
But it's not clear which option is the best one.

We however assume cases like this are rare, and the easiest solution is
to define statistics covering the whole group of correlated columns. In
the future we might support overlapping stats, using some of the clauses
as conditions (in conditional probability sense).
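
A sketch matching the example above (statistics names hypothetical):

    CREATE STATISTICS s_ab (mcv) ON a, b FROM t;
    CREATE STATISTICS s_cd (mcv) ON c, d FROM t;
    ANALYZE t;
    -- the planner can now apply both MCV lists to the four clauses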

Author: Tomas Vondra
Reviewed-by: Mark Dilger, Kyotaro Horiguchi
Discussion: https://postgr.es/m/20191028152048.jc6pqv5hb7j77ocp@development
2020-01-13 01:21:17 +01:00
Tomas Vondra aaa6761876 Apply all available functional dependencies
When considering functional dependencies during selectivity estimation,
it's not necessary to bother with selecting the best extended statistic
object and then using just the dependencies from it. We can simply consider
all applicable functional dependencies at once.

This means we need to deserialize all (applicable) dependencies before
applying them to the clauses. This is a bit more expensive than picking
the best statistics and deserializing dependencies for it. To minimize
the additional cost, we ignore statistics that are not applicable.
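
A sketch (names hypothetical): with both objects defined, dependencies
from each are now applied together, rather than from only one of them.

    CREATE STATISTICS s_ab (dependencies) ON a, b FROM t;
    CREATE STATISTICS s_bc (dependencies) ON b, c FROM t;
    ANALYZE t;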

Author: Tomas Vondra
Reviewed-by: Mark Dilger
Discussion: https://postgr.es/m/20191028152048.jc6pqv5hb7j77ocp@development
2020-01-13 01:21:06 +01:00
Robert Haas ed10f32e37 Add pg_shmem_allocations view.
This tells you about allocations that have been made from the main
shared memory segment. The original patch also tried to show information
about dynamic shared memory allocation as well, but I decided to
leave that problem for another time.
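
For example:

    -- inspect allocations from the main shared memory segment
    SELECT name, allocated_size
      FROM pg_shmem_allocations
     ORDER BY allocated_size DESC;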

Andres Freund and Robert Haas, reviewed by Michael Paquier, Marti
Raudsepp, Tom Lane, Álvaro Herrera, and Kyotaro Horiguchi.

Discussion: http://postgr.es/m/20140504114417.GM12715@awork2.anarazel.de
2020-01-09 10:59:07 -05:00
Tom Lane 913bbd88dc Improve the handling of result type coercions in SQL functions.
Use the parser's standard type coercion machinery to convert the
output column(s) of a SQL function's final SELECT or RETURNING
to the type(s) they should have according to the function's declared
result type.  We'll allow any case where an assignment-level
coercion is available.  Previously, we failed unless the required
coercion was a binary-compatible one (and the documentation ignored
this, falsely claiming that the types must match exactly).

Notably, the coercion now accounts for typmods, so that cases where
a SQL function is declared to return a composite type whose columns
are typmod-constrained now behave as one would expect.  Arguably
this aspect is a bug fix, but the overall behavioral change here
seems too large to consider back-patching.

A nice side-effect is that functions can now be inlined in a
few cases where we previously failed to do so because of type
mismatches.
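
A sketch of the new behaviour (function name hypothetical):

    -- the final SELECT's int output is assignment-coerced to the
    -- declared result type, typmod included
    CREATE FUNCTION price() RETURNS numeric(10,2)
      LANGUAGE sql AS $$ SELECT 42 $$;
    SELECT price();  -- 42.00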

Discussion: https://postgr.es/m/18929.1574895430@sss.pgh.pa.us
2020-01-08 11:07:59 -05:00
Tom Lane 4ac8aaa36f Fix handling of generated columns in ALTER TABLE.
ALTER TABLE failed if a column referenced in a GENERATED expression
had been added or changed in type earlier in the ALTER command.
That's because the GENERATED expression needs to be evaluated
against the table's updated tuples, but it was being evaluated
against the original tuples.  (Fortunately the executor has adequate
cross-checks to notice the mismatch, so we just got an obscure error
message and not anything more dangerous.)
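
A sketch of the previously-failing shape (names hypothetical):

    -- the GENERATED expression references a column added earlier
    -- in the same ALTER command
    ALTER TABLE tab
      ADD COLUMN a int,
      ADD COLUMN b int GENERATED ALWAYS AS (a + 1) STORED;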

Per report from Andreas Joseph Krogh.  Back-patch to v12 where
GENERATED was added.

Discussion: https://postgr.es/m/VisenaEmail.200.231b0a41523275d0.16ea7f800c7@tc7-visena
2020-01-08 09:42:53 -05:00
Tom Lane 20d6225d16 Add functions min_scale(numeric) and trim_scale(numeric).
These allow better control of trailing zeroes in numeric values.
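
For example:

    SELECT min_scale(1.2300);   -- 2, the smallest scale losing no digits
    SELECT trim_scale(1.2300);  -- 1.23, trailing zeroes removed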

Pavel Stehule, based on an old proposal of Marko Tiikkaja's;
review by Karl Pinc

Discussion: https://postgr.es/m/CAFj8pRDjs-navGASeF0Wk74N36YGFJ+v=Ok9_knRa7vDc-qugg@mail.gmail.com
2020-01-06 12:13:53 -05:00
Alvaro Herrera 1fa846f1c9 Fix cloning of row triggers to sub-partitions
When row triggers that are not part of FKs or deferred unique constraints
exist on partitions that are themselves partitioned, they are not correctly
cloned to the sub-partitions.  That's because such triggers are marked
"internal", and internal triggers are purposefully skipped when doing the
clone-triggers dance.  Fix by relaxing the condition on which internal
triggers are skipped.

Amit Langote initially diagnosed the problem and proposed a fix, but I
used a different approach.

Reported-by: Petr Fedorov
Discussion: https://postgr.es/m/6b3f0646-ba8c-b3a9-c62d-1c6651a1920f@phystech.edu
2020-01-02 17:04:24 -03:00
Peter Eisentraut 198c7153dc Fix comment in test
The comment was apparently copy-and-pasted and did not reflect the
actual test outcome.
2020-01-02 14:40:18 +01:00
Tom Lane 823e739d4a Test GROUP BY matching of join columns that are type-coerced by USING.
If we have, say, an int column that is left-joined to a bigint column
with USING, the merged column is the int column promoted to bigint.
GROUP BY's tests for whether grouping on the merged column allows a
reference to the underlying column, or vice versa, should know about
that relationship --- and they do.  But I nearly broke this case with
an ill-advised optimization, so the lack of any test coverage for it
seems like a bad idea.
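
A sketch of the tested relationship (names hypothetical; ti.x is int,
tb.x is bigint, so the merged USING column is ti.x promoted to bigint):

    -- grouping on the merged column allows referencing the
    -- underlying column, and vice versa
    SELECT ti.x FROM ti LEFT JOIN tb USING (x) GROUP BY x;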
2020-01-01 19:31:41 -05:00
Tom Lane bb4114a4e2 Allow whole-row Vars to be used in partitioning expressions.
In the wake of commit 5b9312378, there's no particular reason
for this restriction (previously, it was problematic because of
the implied rowtype reference).  A simple constraint on a whole-row
Var probably isn't that useful, but conceivably somebody would want
to pass one to a function that extracts a partitioning key.  Besides
which, we're expending much more code to enforce the restriction than
we save by having it, since the latter quantity is now zero.
So drop the restriction.
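
A minimal sketch, under the assumption that a bare whole-row reference is
accepted in the key expression (table name hypothetical):

    -- a whole-row Var in the partition key is now allowed
    CREATE TABLE parted (a int, b text)
      PARTITION BY LIST ((parted));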

Amit Langote

Discussion: https://postgr.es/m/CA+HiwqFUzjfj9HEsJtYWcr1SgQ_=iCAvQ=O2Sx6aQxoDu4OiHw@mail.gmail.com
2019-12-25 15:44:15 -05:00
Tom Lane 5b9312378e Load relcache entries' partitioning data on-demand, not immediately.
Formerly the rd_partkey and rd_partdesc data structures were always
populated immediately when a relcache entry was built or rebuilt.
This patch changes things so that they are populated only when they
are first requested.  (Hence, callers *must* now always use
RelationGetPartitionKey or RelationGetPartitionDesc; just fetching
the pointer directly is no longer acceptable.)

This seems to have some performance benefits, but the main reason to do
it is that it eliminates a recursive-reload failure that occurs if the
partkey or partdesc expressions contain any references to the relation's
rowtype (as discovered by Amit Langote).  In retrospect, since loading
these data structures might result in execution of nearly-arbitrary code
via eval_const_expressions, it was a dumb idea to require that to happen
during relcache entry rebuild.

Also, fix things so that old copies of a relcache partition descriptor
will be dropped when the cache entry's refcount goes to zero.  In the
previous coding it was possible for such copies to survive for the
lifetime of the session, as I'd complained of in a previous discussion.
(This management technique still isn't perfect, but it's better than
before.)  Improve the commentary explaining how that works and why
it's safe to hand out direct pointers to these relcache substructures.

In passing, improve RelationBuildPartitionDesc by using the same
memory-context-parent-swap approach used by RelationBuildPartitionKey,
thereby making it less dependent on strong assumptions about what
partition_bounds_copy does.  Avoid doing get_rel_relkind in the
critical section, too.

Patch by Amit Langote and Tom Lane; Robert Haas deserves some credit
for prior work in the area, too.  Although this is a pre-existing
problem, no back-patch: the patch seems too invasive to be safe to
back-patch, and the bug it fixes is a corner case that seems
relatively unlikely to cause problems in the field.

Discussion: https://postgr.es/m/CA+HiwqFUzjfj9HEsJtYWcr1SgQ_=iCAvQ=O2Sx6aQxoDu4OiHw@mail.gmail.com
Discussion: https://postgr.es/m/CA+TgmoY3bRmGB6-DUnoVy5fJoreiBJ43rwMrQRCdPXuKt4Ykaw@mail.gmail.com
2019-12-25 14:43:13 -05:00
Tom Lane 39ebb943de Disallow partition key expressions that return pseudo-types.
This wasn't checked originally, but it should have been, because
in general pseudo-types can't be stored to and retrieved from disk.
Notably, partition bound values of type "record" would not be
interpretable by another session.
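
For illustration, a sketch of a case now rejected (names hypothetical;
ROW(a, b) is of pseudo-type "record"):

    -- the key expression returns "record", which cannot be stored
    -- to and retrieved from disk
    CREATE TABLE t (a int, b int)
      PARTITION BY RANGE ((ROW(a, b)));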

In v12 and HEAD, add another flag to CheckAttributeType's repertoire
so that it can produce a specific error message for this case.  That's
infeasible in older branches without an ABI break, so fall back to
a slightly-less-nicely-worded error message in v10 and v11.

Problem noted by Amit Langote, though this patch is not his initial
solution.  Back-patch to v10 where partitioning was introduced.

Discussion: https://postgr.es/m/CA+HiwqFUzjfj9HEsJtYWcr1SgQ_=iCAvQ=O2Sx6aQxoDu4OiHw@mail.gmail.com
2019-12-23 12:53:12 -05:00
Tom Lane fc7695891d Prevent a rowtype from being included in itself via a range.
We probably should have thought of this case when ranges were added,
but we didn't.  (It's not the fault of commit eb51af71f, because
ranges didn't exist then.)
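
A sketch of the forbidden containment, assuming a range type over the
composite can be created (names hypothetical):

    CREATE TABLE t (a int);
    CREATE TYPE t_range AS RANGE (subtype = t);
    -- rejected: t would contain itself via the range type
    ALTER TABLE t ADD COLUMN r t_range;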

It's an old bug, so back-patch to all supported branches.

Discussion: https://postgr.es/m/7782.1577051475@sss.pgh.pa.us
2019-12-23 12:08:23 -05:00
Tom Lane b265aa1f39 Avoid low-probability regression test failures in timestamp[tz] tests.
If the first transaction block in these tests were entered exactly
at midnight (California time), they'd report a bogus failure due
to 'now' and 'midnight' having the same values.  Commit 8c2ac75c5
had dismissed this as being of negligible probability, but we've
now seen it happen in the buildfarm, so let's prevent it.  We can
get pretty much the same test coverage without an it's-not-midnight
assumption by moving the does-'now'-work cases into their own test step.

While here, apply commit 47169c255's s/DELETE/TRUNCATE/ change to
timestamptz as well as timestamp (not sure why that didn't
occur to me at the time; the risk of failure is the same).

Back-patch to all supported branches, since the main point is
to get rid of potential buildfarm failures.

Discussion: https://postgr.es/m/14821.1577031117@sss.pgh.pa.us
2019-12-22 18:00:22 -05:00