Commit Graph

19101 Commits

Author SHA1 Message Date
Dean Rasheed 41531e42d3 Fix DEFAULT-handling in multi-row VALUES lists for updatable views.
INSERT ... VALUES for a single VALUES row is implemented differently
from a multi-row VALUES list, which causes inconsistent behaviour in
the way that DEFAULT items are handled. In particular, when inserting
into an auto-updatable view on top of a table with a column default, a
DEFAULT item in a single VALUES row gets correctly replaced with the
table column's default, but for a multi-row VALUES list it is replaced
with NULL.
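
For illustration, a minimal reproduction of the inconsistency (table
and view names invented here) looks like this:

    CREATE TABLE base_tbl (a int, b int DEFAULT 10);
    CREATE VIEW base_view AS SELECT * FROM base_tbl;  -- auto-updatable

    INSERT INTO base_view VALUES (1, DEFAULT);                -- b is set to 10
    INSERT INTO base_view VALUES (2, DEFAULT), (3, DEFAULT);  -- b ended up NULL
                                                              -- before this fix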

Fix this by allowing rewriteValuesRTE() to leave DEFAULT items in the
VALUES list untouched if the target relation is an auto-updatable view
and has no column default, deferring DEFAULT-expansion until the query
against the base relation is rewritten. For all other types of target
relation, including tables and trigger- and rule-updatable views, we
must continue to replace DEFAULT items with NULL in the absence of a
column default.

This is somewhat complicated by the fact that if an auto-updatable
view has DO ALSO rules attached, the VALUES lists for the product
queries need to be handled differently from the original query, since
the product queries need to act like rule-updatable views whereas the
original query has auto-updatable view semantics.

Back-patch to all supported versions.

Reported by Roger Curley (bug #15623). Patch by Amit Langote and me.

Discussion: https://postgr.es/m/15623-5d67a46788ec8b7f@postgresql.org
2019-02-20 08:30:21 +00:00
Michael Paquier 56fadbedbd Correctly mark initial slot snapshots with MVCC type when built
When building an initial slot snapshot, the snapshot is marked as a
historic MVCC snapshot, with the type field being set in
SnapBuildBuildSnapshot() but not overridden in SnapBuildInitialSnapshot().
Existing callers of SnapBuildBuildSnapshot() do not care about the type
of snapshot used, but extensions calling it actually may, as reported.

While at it, correctly mark the snapshot type when importing one.  This
is cosmetic, as the field is enforced to 0.

Author: Antonin Houska
Reviewed-by: Álvaro Herrera, Michael Paquier
Discussion: https://postgr.es/m/23215.1527665193@localhost
Backpatch-through: 9.4
2019-02-20 12:31:07 +09:00
Peter Eisentraut 90cfa49003 Use varargs macro for CACHEDEBUG
Reviewed-by: Andres Freund <andres@anarazel.de>
2019-02-19 11:42:39 +01:00
Andres Freund 22bc403029 Remove line duplicated during conflict resolution.
I included the duplicated ExecTypeFromTL in 578b2297 "Remove WITH OIDS
support".

Reported-By: Peter Eisentraut
Discussion: https://postgr.es/m/ba819888-63c6-7f98-6acb-3731142d9414@2ndquadrant.com
2019-02-18 11:07:30 -08:00
Etsuro Fujita 3fdc374b5d Save PathTargets for distinct/ordered relations in root->upper_targets[].
For the convenience of extensions, we previously saved PathTargets
only for grouped, window, and final relations in root->upper_targets[]
in grouping_planner().  To improve on that, save PathTargets for
distinct and ordered relations as well.

Author: Antonin Houska, with an additional change by me
Discussion: https://postgr.es/m/10994.1549559088@localhost
2019-02-18 16:13:46 +09:00
Michael Paquier 0dd6ff0ac8 Avoid some unnecessary block reads in WAL reader
When reading a new page internally, and depending on the way the WAL
reader facility gets used by plugins, the current implementation of the
WAL reader may end up reading a block multiple times even though that is
not actually necessary, since the requested data length may be equal to
what has already been read.  This can happen for any size, but is more
likely to happen at the end of a page.  This can cause performance
penalties in plugins which rely on the block reads being purely
sequential; zlib, for example, does not like backward reads.  The new
behavior also shaves some cycles when doing recovery.

Author: Arthur Zakirov
Reviewed-by: Andrey Lepikhov, Michael Paquier, Grigory Smolkin
Discussion: https://postgr.es/m/2ddf4a32-517e-d6f4-d992-4a63b6035bfd@postgrespro.ru
2019-02-18 09:52:02 +09:00
Thomas Munro 0b55aaacec Fix race in dsm_unpin_segment() when handles are reused.
Teach dsm_unpin_segment() to skip segments that are in the process
of being destroyed by another backend, when searching for a handle.
Such a segment cannot possibly be the one we are looking for, even
if its handle matches.  Another slot might hold a recently created
segment that has the same handle value by coincidence, and we need
to keep searching for that one.

The bug caused rare "cannot unpin a segment that is not pinned"
errors on 10 and 11.  Similar to commit 6c0fb941 for dsm_attach().

Back-patch to 10, where dsm_unpin_segment() landed.

Author: Thomas Munro
Reported-by: Justin Pryzby
Tested-by: Justin Pryzby (along with other recent DSA/DSM fixes)
Discussion: https://postgr.es/m/20190216023854.GF30291@telsasoft.com
2019-02-18 09:58:29 +13:00
Tom Lane a32ca78836 Fix CREATE VIEW to allow zero-column views.
We should logically have allowed this case when we allowed zero-column
tables, but it was overlooked.
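
As a quick illustration (not taken from the commit message), the
following now works directly:

    CREATE TABLE no_cols ();                         -- long allowed
    CREATE VIEW no_cols_v AS SELECT * FROM no_cols;  -- failed before this fix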

Although this might be thought a feature addition, it's really a bug
fix, because it was possible to create a zero-column view via
the convert-table-to-view code path, and then you'd have a situation
where dump/reload would fail.  Hence, back-patch to all supported
branches.

Arrange the added test cases to provide coverage of the related
pg_dump code paths (since these views will be dumped and reloaded
during the pg_upgrade regression test).  I also made them test
the case where pg_dump has to postpone the view rule into post-data,
which disturbingly had no regression coverage before.

Report and patch by Ashutosh Sharma (test case by me)

Discussion: https://postgr.es/m/CAE9k0PkmHdeSaeZt2ujnb_cKucmK3sDDceDzw7+d5UZoNJPYOg@mail.gmail.com
2019-02-17 12:37:31 -05:00
Tom Lane 608b167f9f Allow user control of CTE materialization, and change the default behavior.
Historically we've always materialized the full output of a CTE query,
treating WITH as an optimization fence (so that, for example, restrictions
from the outer query cannot be pushed into it).  This is appropriate when
the CTE query is INSERT/UPDATE/DELETE, or is recursive; but when the CTE
query is non-recursive and side-effect-free, there's no hazard of changing
the query results by pushing restrictions down.

Another argument for materialization is that it can avoid duplicate
computation of an expensive WITH query --- but that only applies if
the WITH query is called more than once in the outer query.  Even then
it could still be a net loss, if each call has restrictions that
would allow just a small part of the WITH query to be computed.

Hence, let's change the behavior for WITH queries that are non-recursive
and side-effect-free.  By default, we will inline them into the outer
query (removing the optimization fence) if they are called just once.
If they are called more than once, we will keep the old behavior by
default, but the user can override this and force inlining by specifying
NOT MATERIALIZED.  Lastly, the user can force the old behavior by
specifying MATERIALIZED; this would mainly be useful when the query had
deliberately been employing WITH as an optimization fence to prevent a
poor choice of plan.
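
For example (illustrative only; table and column names invented):

    WITH w AS NOT MATERIALIZED (SELECT * FROM big_tbl)
    SELECT * FROM w WHERE key = 123;   -- force inlining of w

    WITH w AS MATERIALIZED (SELECT * FROM big_tbl)
    SELECT * FROM w WHERE key = 123;   -- keep the old fence behavior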

Andreas Karlsson, Andrew Gierth, David Fetter

Discussion: https://postgr.es/m/87sh48ffhb.fsf@news-spur.riddles.org.uk
2019-02-16 16:11:12 -05:00
Tom Lane 02a6a54ecd Make use of compiler builtins and/or assembly for CLZ, CTZ, POPCNT.
Test for the compiler builtins __builtin_clz, __builtin_ctz, and
__builtin_popcount, and make use of these in preference to
handwritten C code if they're available.  Create src/port
infrastructure for "leftmost one", "rightmost one", and "popcount"
so as to centralize these decisions.

On x86_64, __builtin_popcount generally won't make use of the POPCNT
opcode because that's not universally supported yet.  Provide code
that checks CPUID and then calls POPCNT via asm() if available.
This requires indirecting through a function pointer, which is
an annoying amount of overhead for a one-instruction operation,
but it's probably not worth working harder than this for our
current use-cases.

I'm not sure we've found all the existing places that could profit
from this new infrastructure; but we at least touched all the
ones that used copied-and-pasted versions of the bitmapset.c code,
and got rid of multiple copies of the associated constant arrays.

While at it, replace c-compiler.m4's one-per-builtin-function
macros with a single one that can handle all the cases we need
to worry about so far.  Also, because I'm paranoid, make those
checks into AC_LINK checks rather than just AC_COMPILE; the
former coding failed to verify that libgcc has support for the
builtin, in cases where it's not inline code.

David Rowley, Thomas Munro, Alvaro Herrera, Tom Lane

Discussion: https://postgr.es/m/CAKJS1f9WTAGG1tPeJnD18hiQW5gAk59fQ6WK-vfdAKEHyRg2RA@mail.gmail.com
2019-02-15 23:22:33 -05:00
Alvaro Herrera 457aef0f1f Revert attempts to use POPCNT etc instructions
This reverts commits fc6c72747a, 109de05cbb, d0b4663c23 and
711bab1e4d.

Somebody will have to try harder before submitting this patch again.
I've spent entirely too much time on it already, and the #ifdef maze yet
to be written in order for it to build at all got on my nerves.  The
amount of work needed to get a platform-specific performance improvement
that's barely above the noise level is not worth it.
2019-02-15 16:32:30 -03:00
Tom Lane e89f14e2bb Refactor index cost estimation functions in view of IndexClause changes.
Get rid of deconstruct_indexquals() in favor of just iterating over the
IndexClause list directly.  The extra services that that function used to
provide, such as hiding clause commutation and associating the right index
column with each clause, are no longer useful given the new data structure.
I'd originally thought that it'd provide a useful amount of abstraction
by freeing callers from paying attention to the exact clause type of each
indexqual, but that hope proves to have been vain, because few callers can
ignore the semantic differences between different clause types.  Indeed,
removing it results in a net code savings, and probably some cycles shaved
by not having to build an extra list-of-structs data structure.

Also, export a few formerly-static support functions, with the goal
of allowing extension AMs to write functionality equivalent to
genericcostestimate() without pointless code duplication.

Discussion: https://postgr.es/m/24586.1550106354@sss.pgh.pa.us
2019-02-15 13:05:19 -05:00
Michael Paquier 331a613e9d Fix support for CREATE TABLE IF NOT EXISTS AS EXECUTE
The grammar IF NOT EXISTS for CTAS has been supported since 9.5 and is
documented as such; however, the case of using EXECUTE as the query has
never been covered, because EXECUTE CTAS statements and normal CTAS
statements are parsed separately.
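
A sketch of the now-accepted form (names invented):

    PREPARE q AS SELECT 1 AS x;
    CREATE TABLE IF NOT EXISTS tab AS EXECUTE q;  -- was a syntax error before
    CREATE TABLE IF NOT EXISTS tab AS EXECUTE q;  -- now just a NOTICE, skipped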

Author: Andreas Karlsson
Discussion: https://postgr.es/m/2ddcc188-e37c-a0be-32bf-a56b07c3559e@proxel.se
Backpatch-through: 9.5
2019-02-15 17:12:24 +09:00
Thomas Munro 6c0fb94189 Fix race in dsm_attach() when handles are reused.
DSM handle values can be reused as soon as the underlying shared memory
object has been destroyed.  That means that for a brief moment we
might have two DSM slots with the same handle.  While trying to attach,
if we encounter a slot with refcnt == 1, meaning that it is currently
being destroyed, we should continue our search in case the same handle
exists in another slot.

The race manifested as a rare "dsa_area could not attach to segment"
error, and was more likely in 10 and 11 due to the lack of a distinct
seed for random() in parallel workers.  It was made very unlikely in
master by commit 197e4af9, and older releases don't usually create new
DSM segments in background workers, so it was also unlikely there.

This fixes the root cause of bug report #15585, in which the error
could also sometimes result in a self-deadlock in the error path.
It's not yet clear if further changes are needed to avoid that failure
mode.

Back-patch to 9.4, where dsm.c arrived.

Author: Thomas Munro
Reported-by: Justin Pryzby, Sergei Kornilov
Discussion: https://postgr.es/m/20190207014719.GJ29720@telsasoft.com
Discussion: https://postgr.es/m/15585-324ff6a93a18da46@postgresql.org
2019-02-15 14:05:09 +13:00
Tom Lane 8fd3fdd85a Simplify the planner's new representation of indexable clauses a little.
In commit 1a8d5afb0, I thought it'd be a good idea to define
IndexClause.indexquals as NIL in the most common case where the given
clause (IndexClause.rinfo) is usable exactly as-is.  It'd be more
consistent to define the indexquals in that case as being a one-element
list containing IndexClause.rinfo, but I thought saving the palloc
overhead for making such a list would be worthwhile.

In hindsight, that was a great example of "premature optimization is the
root of all evil": it's complicated everyplace that needs to deal with
the indexquals, requiring duplicative code to handle both the simple
case and the not-simple case.  I'd initially found that tolerable but
it's getting less so as I mop up some areas that I'd not touched in
1a8d5afb0.  In any case, two more pallocs during a planner run are
surely at the noise level (a conclusion confirmed by a bit of
microbenchmarking).  So let's change this decision before it becomes
set in stone, and insist that IndexClause.indexquals always be a valid
list of the actual index quals for the clause.

Discussion: https://postgr.es/m/24586.1550106354@sss.pgh.pa.us
2019-02-14 19:37:30 -05:00
Peter Eisentraut 86eea78694 Get rid of another unconstify through API changes
This also makes the code in read_client_first_message() more similar
to read_client_final_message().

Reported-by: Mark Dilger <hornschnorter@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/53a28052-f9f3-1808-fed9-460fd43035ab%402ndquadrant.com
2019-02-14 20:44:47 +01:00
Tom Lane 49fa99e54e Move pattern selectivity code from selfuncs.c to like_support.c.
While at it, refactor patternsel() a bit so that it can be used from
the LIKE/regex planner support functions as well.  This makes the
planner able to deal equally well with either operator or function
syntax for these operations.  I'm not excited about that as a feature
in itself, but it provides a nice model for extensions to follow if
they want such behavior for their operations.

This change localizes the use of pattern_fixed_prefix() and
make_greater_string() so that they no longer need be exported.
(We might get pushback from extensions about that, perhaps,
in which case I'd be inclined to re-export them in a new header
file like_support.h.)

This reduces the bulk of selfuncs.c a fair amount, removing ~1370
lines or about one-sixth of that file; it's still too big, but this
is progress.

Discussion: https://postgr.es/m/24537.1550093915@sss.pgh.pa.us
2019-02-14 10:51:59 -05:00
Alvaro Herrera 711bab1e4d Add basic support for using the POPCNT and SSE4.2's LZCNT opcodes
These opcodes have been around in the AMD world since 2007, and 2008 in
the case of Intel.  They're supported in GCC and Clang via some __builtin
macros.  The opcodes may be unavailable during runtime, in which case we
fall back on a C-based implementation of the code.  In order to get the
POPCNT instruction we must pass the -mpopcnt option to the compiler.  We
do this only for the pg_bitutils.c file.

David Rowley (with fragments taken from a patch by Thomas Munro)

Discussion: https://postgr.es/m/CAKJS1f9WTAGG1tPeJnD18hiQW5gAk59fQ6WK-vfdAKEHyRg2RA@mail.gmail.com
2019-02-13 16:10:06 -03:00
Andrew Gierth 02ddd49932 Change floating-point output format for improved performance.
Previously, floating-point output was done by rounding to a specific
decimal precision; by default, to 6 or 15 decimal digits (losing
information) or as requested using extra_float_digits. Drivers that
wanted exact float values, and applications like pg_dump that must
preserve values exactly, set extra_float_digits=3 (or sometimes 2 for
historical reasons, though this isn't enough for float4).

Unfortunately, decimal rounded output is slow enough to become a
noticeable bottleneck when dealing with large result sets or COPY of
large tables when many floating-point values are involved.

Floating-point output can be done much faster when the output is not
rounded to a specific decimal length, but rather is chosen as the
shortest decimal representation that is closer to the original float
value than to any other value representable in the same precision. The
recently published Ryu algorithm by Ulf Adams is both relatively
simple and remarkably fast.

Accordingly, change float4out/float8out to output shortest decimal
representations if extra_float_digits is greater than 0, and make that
the new default. Applications that need rounded output can set
extra_float_digits back to 0 or below, and take the resulting
performance hit.
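
For example, applications or tests that depend on the old rounding can
do something like:

    SHOW extra_float_digits;      -- the default now requests shortest-exact output
    SET extra_float_digits = 0;   -- revert to rounded output (15/6 digits)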

We make one concession to portability for systems with buggy
floating-point input: we do not output decimal values that fall
exactly halfway between adjacent representable binary values (which
would rely on the reader doing round-to-nearest-even correctly). This
is known to be a problem at least for VS2013 on Windows.

Our version of the Ryu code originates from
https://github.com/ulfjack/ryu/ at commit c9c3fb1979, but with the
following (significant) modifications:

 - Output format is changed to use fixed-point notation for small
   exponents, as printf would, and also to use lowercase 'e', a
   minimum of 2 exponent digits, and a mandatory sign on the exponent,
   to keep the formatting as close as possible to previous output.

 - The output of exact midpoint values is disabled as noted above.

 - The integer fast-path code is changed somewhat (since we have
   fixed-point output and the upstream did not).

 - Our project style has been largely applied to the code with the
   exception of C99 declaration-after-statement, which has been
   retained as an exception to our present policy.

 - Most of upstream's debugging and conditionals are removed, and we
   use our own configure tests to determine things like uint128
   availability.

Changing the float output format obviously affects a number of
regression tests. This patch uses an explicit setting of
extra_float_digits=0 for test output that is not expected to be
exactly reproducible (e.g. due to numerical instability or differing
algorithms for transcendental functions).

Conversions from floats to numeric are unchanged by this patch. These
may appear in index expressions and it is not yet clear whether any
change should be made, so that can be left for another day.

This patch assumes that the only supported floating point format is
now IEEE format, and the documentation is updated to reflect that.

Code by me, adapting the work of Ulf Adams and other contributors.

References:
https://dl.acm.org/citation.cfm?id=3192369

Reviewed-by: Tom Lane, Andres Freund, Donald Dong
Discussion: https://postgr.es/m/87r2el1bx6.fsf@news-spur.riddles.org.uk
2019-02-13 15:20:33 +00:00
Andrew Gierth f397e08599 Use strtof() and not strtod() for float4 input.
Using strtod() creates a double-rounding problem; the input decimal
value is first rounded to the nearest double; rounding that to the
nearest float may then give an incorrect result.

An example is that 7.038531e-26 when input via strtod and then rounded
to float4 gives 0xAE43FEp-107 instead of the correct 0xAE43FDp-107.
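
In SQL terms, the example above corresponds to something like:

    SELECT float4 '7.038531e-26';  -- now parsed with strtof(), no double rounding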

Values output by earlier PG versions with extra_float_digits=3 should
all be read in with the same values as previously. However, values
supplied by other software using shortest representations could be
mis-read.

On platforms that lack a strtof() entirely, we fall back to the old
incorrect rounding behavior. (As strtof() is required by C99, such
platforms are considered of primarily historical interest.) On VS2013,
some workarounds are used to get correct error handling.

The regression tests now test for the correct input values, so
platforms that lack strtof() will need resultmap entries. An entry for
HP-UX 10 is included (more may be needed).

Reviewed-By: Tom Lane
Discussion: https://postgr.es/m/871s5emitx.fsf@news-spur.riddles.org.uk
Discussion: https://postgr.es/m/87d0owlqpv.fsf@news-spur.riddles.org.uk
2019-02-13 15:19:44 +00:00
Peter Eisentraut 37d9916020 More unconstify use
Replace casts whose only purpose is to cast away const with the
unconstify() macro.

Discussion: https://www.postgresql.org/message-id/flat/53a28052-f9f3-1808-fed9-460fd43035ab%402ndquadrant.com
2019-02-13 11:50:16 +01:00
Peter Eisentraut cf40dc65b6 Remove useless casts
Some of these were uselessly casting away "const", some were just
nearby, but they were all unnecessary anyway.

Discussion: https://www.postgresql.org/message-id/flat/53a28052-f9f3-1808-fed9-460fd43035ab%402ndquadrant.com
2019-02-13 11:50:09 +01:00
Michael Paquier 6ea95166a0 Fix comment related to calculation location of total_table_pages
As of commit c6e4133, the calculation happens in make_one_rel() and not
query_planner().

Author: Amit Langote
Discussion: https://postgr.es/m/c7a04a90-42e6-28a4-811a-a7e352831ba1@lab.ntt.co.jp
2019-02-13 16:31:20 +09:00
Thomas Munro 7215efdc00 Fix rare dsa_allocate() failures due to freepage.c corruption.
In a corner case, a btree page was allocated during a clean-up operation
that could cause the tracking of the largest contiguous span of free
space to get out of whack.  That was supposed to be prevented by the use
of the "soft" flag to avoid allocating internal pages during incidental
clean-up work, but the flag was ignored in the case where the FPM was
promoted from singleton format to btree format.  Repair.

Remove an obsolete comment in passing.

Back-patch to 10, where freepage.c arrived (as support for dsa.c).

Author: Robert Haas
Diagnosed-by: Thomas Munro and Robert Haas
Reported-by: Justin Pryzby, Rick Otten, Sand Stone, Arne Roland and others
Discussion: https://postgr.es/m/CAMAYy4%2Bw3NTBM5JLWFi8twhWK4%3Dk_5L4nV5%2BbYDSPu8r4b97Zg%40mail.gmail.com
2019-02-13 13:24:11 +13:00
Tom Lane 75c46149fc Clean up planner confusion between ncolumns and nkeycolumns.
We're only going to consider key columns when creating indexquals,
so there is no point in having the outer loops in indxpath.c iterate
further than nkeycolumns.

Doing so in match_pathkeys_to_index() is actually wrong, and would have
caused crashes by now, except that we have no index AMs supporting both
amcanorderbyop and amcaninclude.

It's also wrong in relation_has_unique_index_for().  The effect there is
to fail to prove uniqueness even when the index does prove it, if there
are extra columns.

Also future-proof examine_variable() for the day when extra columns can
be expressions, and fix what's either a thinko or just an oversight in
btcostestimate(): we should consider the number of key columns, not the
total, when deciding whether to derate correlation.

None of these things seemed important enough to risk changing in a
just-before-wrap patch, but since we're past the release wrap window,
time to fix 'em.

Discussion: https://postgr.es/m/25526.1549847928@sss.pgh.pa.us
2019-02-12 18:38:32 -05:00
Alvaro Herrera 8c67d29fd5 Relax overly strict assertion
Ever since its birth, ReorderBufferBuildTupleCidHash() has contained an
assertion that a catalog tuple cannot change Cmax after acquiring one.  But
that's wrong: if a subtransaction executes DDL that affects that catalog
tuple, and later aborts and another DDL affects the same tuple, it will
change Cmax.  Relax the assertion to merely verify that the Cmax remains
valid and monotonically increasing, instead.

Add a test that tickles the relevant code.

Diagnosed by, and initial patch submitted by: Arseny Sher
Co-authored-by: Arseny Sher
Discussion: https://postgr.es/m/874l9p8hyw.fsf@ars-thinkpad
2019-02-12 18:42:37 -03:00
Alvaro Herrera fe33a196de Use Getopt::Long for catalog scripts
Replace hand-rolled option parsing with the Getopt module. This is
shorter and easier to read. In passing, make some cosmetic adjustments
for consistency.

Author: John Naylor
Reviewed-by: David Fetter
Discussion: https://postgr.es/m/CACPNZCvRjepXh5b2N50njN+rO_2Nzcf=jhMkKX7=79XWUKJyKA@mail.gmail.com
2019-02-12 12:22:08 -03:00
Tom Lane 232a8e233f Fix erroneous error reports in snapbuild.c.
It's pretty unhelpful to report the wrong file name in a complaint
about syscall failure, but SnapBuildSerialize managed to do that twice
in a span of 50 lines.  Also fix half a dozen missing or poorly-chosen
errcode assignments; that's mostly cosmetic, but still wrong.

Noted while studying recent failures on buildfarm member nightjar.
I'm not sure whether those reports are actually giving the wrong
filename, because there are two places here with identically
spelled error messages.  The other one is specifically coded not
to report ENOENT, but if it's this one, how could we be getting
ENOENT from open() with O_CREAT?  Need to sit back and await results.

However, these ereports are clearly broken from birth, so back-patch.
2019-02-12 01:12:52 -05:00
Michael Paquier b7ec820559 Fix description of WAL record XLOG_PARAMETER_CHANGE
max_wal_senders and max_worker_processes got reversed in the output
generated because of ea92368.

Reported-by: Kevin Hale Boyes
Discussion: https://postgr.es/m/CADAecHVAD4=26KAx4nj5DBvxqqvJkuwsy+riiiNhQqwnZg2K8Q@mail.gmail.com
2019-02-12 13:10:59 +09:00
Tom Lane 74dfe58a59 Allow extensions to generate lossy index conditions.
For a long time, indxpath.c has had the ability to extract derived (lossy)
index conditions from certain operators such as LIKE.  For just as long,
it's been obvious that we really ought to make that capability available
to extensions.  This commit finally accomplishes that, by adding another
API for planner support functions that lets them create derived index
conditions for their functions.  As proof of concept, the hardwired
"special index operator" code formerly present in indxpath.c is pushed
out to planner support functions attached to LIKE and other relevant
operators.
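
As a reminder of what such derived conditions look like (assuming a
btree index using text_pattern_ops; plan output abbreviated and only
approximate):

    CREATE INDEX t_name_idx ON t (name text_pattern_ops);
    EXPLAIN SELECT * FROM t WHERE name LIKE 'abc%';
    --   Index Cond: ((name ~>=~ 'abc') AND (name ~<~ 'abd'))
    --   Filter: (name ~~ 'abc%')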

A weak spot in this design is that an extension needs to know OIDs for
the operators, datatypes, and opfamilies involved in the transformation
it wants to make.  The core-code prototypes use hard-wired OID references
but extensions don't have that option for their own operators etc.  It's
usually possible to look up the required info, but that may be slow and
inconvenient.  However, improving that situation is a separate task.

I want to do some additional refactorization around selfuncs.c, but
that also seems like a separate task.

Discussion: https://postgr.es/m/15193.1548028093@sss.pgh.pa.us
2019-02-11 21:26:14 -05:00
Michael Paquier ea92368cd1 Move max_wal_senders out of max_connections for connection slot handling
Since its introduction, max_wal_senders is counted as part of
max_connections when it comes to define how many connection slots can be
used for replication connections with a WAL sender context.  This can
lead to confusion for some users, as it could be possible to block a
base backup or replication from happening because other backend sessions
are already taken for other purposes by an application, and
superuser-only connection slots are not a correct solution to handle
that case.

This commit makes max_wal_senders independent of max_connections for its
handling of PGPROC entries in ProcGlobal, meaning that connection slots
for WAL senders are handled using their own free queue, like autovacuum
workers and bgworkers.

One compatibility issue that this change creates is that a standby now
needs to have a value of max_wal_senders at least equal to its
primary's.  So, if a standby is created with a lower value of
max_wal_senders than its primary, this could break failovers.
Normally this should not be an issue though, as a standby's settings
are inherited from its primary: postgresql.conf is normally copied as
part of a base backup, so the parameters would be consistent.
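
For example, on a standby whose primary runs with max_wal_senders = 10,
the standby must be configured with at least the same value:

    ALTER SYSTEM SET max_wal_senders = 10;  -- restart-only parameter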

Author: Alexander Kukushkin
Reviewed-by: Kyotaro Horiguchi, Petr Jelínek, Masahiko Sawada, Oleksii
Kliukin
Discussion: https://postgr.es/m/CAFh8B=nBzHQeYAu0b8fjK-AF1X4+_p6GRtwG+cCgs6Vci2uRuQ@mail.gmail.com
2019-02-12 10:07:56 +09:00
Tom Lane 1d92a0c9f7 Redesign the partition dependency mechanism.
The original setup for dependencies of partitioned objects had
serious problems:

1. It did not verify that a drop cascading to a partition-child object
also cascaded to at least one of the object's partition parents.  Now,
normally a child object would share all its dependencies with one or
another parent (e.g. a child index's opclass dependencies would be shared
with the parent index), so that this oversight is usually harmless.
But if some dependency failed to fit this pattern, the child could be
dropped while all its parents remain, creating a logically broken
situation.  (It's easy to construct artificial cases that break it,
such as attaching an unrelated extension dependency to the child object
and then dropping the extension.  I'm not sure if any less-artificial
cases exist.)

2. Management of partition dependencies during ATTACH/DETACH PARTITION
was complicated and buggy; for example, after detaching a partition
table it was possible to create cases where a formerly-child index
should be dropped and was not, because the correct set of dependencies
had not been reconstructed.

Less seriously, because multiple partition relationships were
represented identically in pg_depend, there was an order-of-traversal
dependency on which partition parent was cited in error messages.
We also had some pre-existing order-of-traversal hazards for error
messages related to internal and extension dependencies.  This is
cosmetic to users but causes testing problems.

To fix #1, add a check at the end of the partition tree traversal
to ensure that at least one partition parent got deleted.  To fix #2,
establish a new policy that partition dependencies are in addition to,
not instead of, a child object's usual dependencies; in this way
ATTACH/DETACH PARTITION need not cope with adding or removing the
usual dependencies.

To fix the cosmetic problem, distinguish between primary and secondary
partition dependency entries in pg_depend, by giving them different
deptypes.  (They behave identically except for having different
priorities for being cited in error messages.)  This means that the
former 'I' dependency type is replaced with new 'P' and 'S' types.

This also fixes a longstanding bug that after handling an internal
dependency by recursing to the owning object, findDependentObjects
did not verify that the current target was now scheduled for deletion,
and did not apply the current recursion level's objflags to it.
Perhaps that should be back-patched; but in the back branches it
would only matter if some concurrent transaction had removed the
internal-linkage pg_depend entry before the recursive call found it,
or the recursive call somehow failed to find it, both of which seem
unlikely.

Catversion bump because the contents of pg_depend change for
partitioning relationships.

Patch HEAD only.  It's annoying that we're not fixing #2 in v11,
but there seems no practical way to do so given that the problem
is exactly a poor choice of what entries to put in pg_depend.
We can't really fix that while staying compatible with what's
in pg_depend in existing v11 installations.

Discussion: https://postgr.es/m/CAH2-Wzkypv1R+teZrr71U23J578NnTBt2X8+Y=Odr4pOdW1rXg@mail.gmail.com
2019-02-11 14:41:17 -05:00
Peter Eisentraut 256fc004af Adjust gratuitously different error message wording
2019-02-11 14:01:05 +01:00
Peter Eisentraut 78b0cac74d Remove unused macro
Last use was removed in 2c66f9924c.
2019-02-11 10:07:25 +01:00
Tom Lane 6bdc3005b5 Fix indexable-row-comparison logic to account for covering indexes.
indxpath.c needs a good deal more attention for covering indexes than
it's gotten.  But so far as I can tell, the only really awful breakage
is in expand_indexqual_rowcompare (nee adjust_rowcompare_for_index),
which was only half fixed in c266ed31a.  The other problems aren't
bad enough to take the risk of a just-before-wrap fix.

The problem here is that if the leading column of a row comparison
matches an index (allowing this code to be reached), and some later
column doesn't match the index, it'll nonetheless believe that that
column matches the first included index column.  Typically that'll
lead to an error like "operator M is not a member of opfamily N" as
a result of fetching a garbage opfamily OID.  But with enough bad
luck, maybe a broken plan would be generated.
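
A sketch of the kind of query affected (names invented), where a later
row-comparison column only matches an INCLUDE column of the index:

    CREATE INDEX t_cov_idx ON t (a, b) INCLUDE (c);
    SELECT * FROM t WHERE (a, c) > (1, 2);  -- leading column matches the index key,
                                            -- the second one does not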

Discussion: https://postgr.es/m/25526.1549847928@sss.pgh.pa.us
2019-02-10 22:51:32 -05:00
Alvaro Herrera cb90de1aac Fix trigger drop procedure
After commit 123cc697a8, we remove redundant FK action triggers during
partition ATTACH by merely deleting the catalog tuple, but that's wrong:
it should use performDeletion() instead.  Repair, and make the comments
more explicit.

Per code review from Tom Lane.

Discussion: https://postgr.es/m/18885.1549642539@sss.pgh.pa.us
2019-02-10 10:00:11 -03:00
Tom Lane a391ff3c3d Build out the planner support function infrastructure.
Add support function requests for estimating the selectivity, cost,
and number of result rows (if a SRF) of the target function.

The lack of a way to estimate selectivity of a boolean-returning
function in WHERE has been a recognized deficiency of the planner
since Berkeley days.  This commit finally fixes it.

In addition, non-constant estimates of cost and number of output
rows are now possible.  We still fall back to looking at procost
and prorows if the support function doesn't service the request,
of course.

To make concrete use of the possibility of estimating output rowcount
for SRFs, this commit adds support functions for array_unnest(anyarray)
and the integer variants of generate_series; the lack of plausible
rowcount estimates for those, even when it's obvious to a human,
has been a repeated subject of complaints.  Obviously, much more
could now be done in this line, but I'm mostly just trying to get
the infrastructure in place.
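
For instance, the integer generate_series variants now produce a
sensible row estimate:

    EXPLAIN SELECT * FROM generate_series(1, 10) g;
    -- estimated rows is now 10, rather than the generic SRF default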

Discussion: https://postgr.es/m/15193.1548028093@sss.pgh.pa.us
2019-02-09 18:32:23 -05:00
Tom Lane 1fb57af920 Create the infrastructure for planner support functions.
Rename/repurpose pg_proc.protransform as "prosupport".  The idea is
still that it names an internal function that provides knowledge to
the planner about the behavior of the function it's attached to;
but redesign the API specification so that it's not limited to doing
just one thing, but can support an extensible set of requests.

The original purpose of simplifying a function call is handled by
the first request type to be invented, SupportRequestSimplify.
Adjust all the existing transform functions to handle this API,
and rename them from "xxx_transform" to "xxx_support" to reflect
the potential generalization of what they do.  (Since we never
previously provided any way for extensions to add transform functions,
this change doesn't create an API break for them.)

Also add DDL and pg_dump support for attaching a support function to a
user-defined function.  Unfortunately, DDL access has to be restricted
to superusers, at least for now; but seeing that support functions
will pretty much have to be written in C, that limitation is just
theoretical.  (This support is untested in this patch, but a follow-on
patch will add cases that exercise it.)
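
The DDL side looks roughly like this (function names are hypothetical,
and the support function itself has to be a C function taking and
returning internal):

    CREATE FUNCTION my_fn(int) RETURNS int
        LANGUAGE sql AS 'SELECT $1'
        SUPPORT my_support_fn;

    ALTER FUNCTION my_fn(int) SUPPORT my_support_fn;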

Discussion: https://postgr.es/m/15193.1548028093@sss.pgh.pa.us
2019-02-09 18:08:48 -05:00
Tom Lane 1a8d5afb0d Refactor the representation of indexable clauses in IndexPaths.
In place of three separate but interrelated lists (indexclauses,
indexquals, and indexqualcols), an IndexPath now has one list
"indexclauses" of IndexClause nodes.  This holds basically the same
information as before, but in a more useful format: in particular, there
is now a clear connection between an indexclause (an original restriction
clause from WHERE or JOIN/ON) and the indexquals (directly usable index
conditions) derived from it.

We also change the ground rules a bit by mandating that clause commutation,
if needed, be done up-front so that what is stored in the indexquals list
is always directly usable as an index condition.  This gets rid of repeated
re-determination of which side of the clause is the indexkey during costing
and plan generation, as well as repeated lookups of the commutator
operator.  To minimize the added up-front cost, the typical case of
commuting a plain OpExpr is handled by a new special-purpose function
commute_restrictinfo().  For RowCompareExprs, generating the new clause
properly commuted to begin with is not really any more complex than before,
it's just different --- and we can save doing that work twice, as the
pretty-klugy original implementation did.

Tracking the connection between original and derived clauses lets us
also track explicitly whether the derived clauses are an exact or lossy
translation of the original.  This provides a cheap solution to getting
rid of unnecessary rechecks of boolean index clauses, which previously
seemed like it'd be more expensive than it was worth.

Another pleasant (IMO) side-effect is that EXPLAIN now always shows
index clauses with the indexkey on the left; this seems less confusing.

This commit leaves expand_indexqual_conditions() and some related
functions in a slightly messy state.  I didn't bother to change them
any more than minimally necessary to work with the new data structure,
because all that code is going to be refactored out of existence in
a follow-on patch.

Discussion: https://postgr.es/m/22182.1549124950@sss.pgh.pa.us
2019-02-09 17:30:43 -05:00
Tom Lane 6401583863 Call set_rel_pathlist_hook before generate_gather_paths, not after.
The previous ordering of these steps satisfied the nominal requirement
that set_rel_pathlist_hook could editorialize on the whole set of Paths
constructed for a base relation.  In practice, though, trying to change
the set of partial paths was impossible.  Adding one didn't work because
(a) it was too late to be included in Gather paths made by the core code,
and (b) calling add_partial_path after generate_gather_paths is unsafe,
because it might try to delete a path it thinks is dominated, but that
is already embedded in some Gather path(s).  Nor could the hook safely
remove partial paths, for the same reason that they might already be
embedded in Gathers.

Better to call extensions first, let them add partial paths as desired,
and then gather.  In v11 and up, we already doubled down on that ordering
by postponing gathering even further for single-relation queries; so even
if the hook wished to editorialize on Gather path construction, it could
not.

Report and patch by KaiGai Kohei.  Back-patch to 9.6 where Gather paths
were added.

Discussion: https://postgr.es/m/CAOP8fzahwpKJRTVVTqo2AE=mDTz_efVzV6Get_0=U3SO+-ha1A@mail.gmail.com
2019-02-09 11:41:09 -05:00
Andres Freund 356687bd82 Reset, not recreate, execGrouping.c style hashtables.
This uses the facility added in the preceding commit to fix
performance issues caused by rebuilding the hashtable (with its
comparator expression being the most expensive bit), after every
reset. That's especially important when the comparator is JIT
compiled.

Bug: #15592 #15486
Reported-By: Jakub Janeček, Dmitry Marakasov
Author: Andres Freund
Discussion:
    https://postgr.es/m/15486-05850f065da42931@postgresql.org
    https://postgr.es/m/20190114180423.ywhdg2iagzvh43we@alap3.anarazel.de
Backpatch: 11, where I broke this in bf6c614a2f
2019-02-09 01:05:49 -08:00
Andres Freund 317ffdfeaa Allow to reset execGrouping.c style tuple hashtables.
This has the advantage that the comparator expression, the table's
slot, etc do not have to be rebuilt. Additionally the simplehash.h
hashtable within the tuple hashtable now keeps its previous size and
doesn't need to be reallocated. That both reduces allocator overhead,
and improves performance in cases where the input estimation was off
by a significant factor.

To avoid an API/ABI break, the new parameter is exposed via the new
BuildTupleHashTableExt(), and BuildTupleHashTable() now is a wrapper
around the former, that continues to allocate the table itself in the
tablecxt.

Using this fixes performance issues discovered in the two bugs
referenced. This commit however has not converted the callers, that's
done in a separate commit.

Bug: #15592 #15486
Reported-By: Jakub Janeček, Dmitry Marakasov
Author: Andres Freund
Discussion:
    https://postgr.es/m/15486-05850f065da42931@postgresql.org
    https://postgr.es/m/20190114180423.ywhdg2iagzvh43we@alap3.anarazel.de
Backpatch: 11, this is a prerequisite for other fixes
2019-02-09 01:05:49 -08:00
Andres Freund 5567d12ce0 Plug leak in BuildTupleHashTable by creating ExprContext in correct context.
In bf6c614a2f I added an expr context to evaluate the grouping
expression.  Unfortunately the code I added initialized it while in
the calling context, rather than the table context.  Additionally, I
used CreateExprContext() rather than CreateStandaloneExprContext(),
which creates the econtext in the estate's query context.

Fix that by using CreateStandaloneExprContext() while in the table's
tablecxt.  As we rely on the memory being freed by a memory context
reset, the econtext's shutdown callbacks aren't called, but that seems
OK as the expressions are tightly controlled due to
ExecBuildGroupingEqual().

Bug: #15592
Reported-By: Dmitry Marakasov
Author: Andres Freund
Discussion: https://postgr.es/m/20190114222838.h6r3fuyxjxkykf6t@alap3.anarazel.de
Backpatch: 11, where I broke this in bf6c614a2f
2019-02-09 01:05:49 -08:00
Tom Lane 0edef16d76 Defend against null error message reported by libxml2.
While this isn't really supposed to happen, it can occur in OOM
situations and perhaps others.  Instead of crashing, substitute
"(no message provided)".

I didn't worry about localizing this text, since we aren't
localizing anything else here; besides, if we're on the edge of
OOM, it's unlikely gettext() would work.

Report and fix by Sergio Conde Gómez in bug #15624.

Discussion: https://postgr.es/m/15624-4dea54091a2864e6@postgresql.org
2019-02-08 13:30:42 -05:00
Peter Eisentraut 3d462f0861 Fix error handling around ssl_*_protocol_version settings
In case of a reload, we just want to LOG errors instead of FATAL when
processing SSL configuration, but the more recent code for the
ssl_*_protocol_version settings didn't behave like that.

Author: Daniel Gustafsson <daniel@yesql.se>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
2019-02-08 11:58:19 +01:00
Peter Eisentraut 08d25d7850 Add some const decorations
These mainly help understanding the function signatures better.
2019-02-08 10:13:24 +01:00
Michael Paquier 3677a0b26b Add pg_partition_root to display top-most parent of a partition tree
This is useful when looking at partition trees with multiple layers, and
combined with pg_partition_tree, it makes it possible to display an
entire tree by knowing just one member at any level.
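
A quick illustration (relation names invented):

    CREATE TABLE parent (a int) PARTITION BY LIST (a);
    CREATE TABLE child PARTITION OF parent FOR VALUES IN (1) PARTITION BY LIST (a);
    CREATE TABLE grandchild PARTITION OF child FOR VALUES IN (1);

    SELECT pg_partition_root('grandchild');  -- returns parent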

Author: Michael Paquier
Reviewed-by: Álvaro Herrera, Amit Langote
Discussion: https://postgr.es/m/20181207014015.GP2407@paquier.xyz
2019-02-08 08:56:14 +09:00
Tom Lane 34ea1ab7fd Split create_foreignscan_path() into three functions.
Up to now postgres_fdw has been using create_foreignscan_path() to
generate not only base-relation paths, but also paths for foreign joins
and foreign upperrels.  This is wrong, because create_foreignscan_path()
calls get_baserel_parampathinfo() which will only do the right thing for
baserels.  It accidentally fails to fail for unparameterized paths, which
are the only ones postgres_fdw (thought it) was handling, but we really
need different APIs for the baserel and join cases.

In HEAD, the best thing to do seems to be to split up the baserel,
joinrel, and upperrel cases into three functions so that they can
have different APIs.  I haven't actually given create_foreign_join_path
a different API in this commit: we should spend a bit of time thinking
about just what we want to do there, since perhaps FDWs would want to
do something different from the build-up-a-join-pairwise approach that
get_joinrel_parampathinfo expects.  In the meantime, since postgres_fdw
isn't prepared to generate parameterized joins anyway, just give it a
defense against trying to plan joins with lateral refs.

In addition (and this is what triggered this whole mess) fix bug #15613
from Srinivasan S A, by teaching file_fdw and postgres_fdw that plain
baserel foreign paths still have outer refs if the relation has
lateral_relids.  Add some assertions in relnode.c to catch future
occurrences of the same error --- in particular, to catch other FDWs
doing that, but also as backstop against core-code mistakes like the
one fixed by commit bdd9a99aa.

Bug #15613 also needs to be fixed in the back branches, but the
appropriate fix will look quite a bit different there, since we don't
want to assume that existing FDWs get the word right away.

Discussion: https://postgr.es/m/15613-092be1be9576c728@postgresql.org
2019-02-07 13:11:12 -05:00
Peter Eisentraut 0c1f8f166c Use EXECUTE FUNCTION syntax for triggers more
Change pg_dump and ruleutils.c to use the FUNCTION keyword instead of
PROCEDURE in trigger and event trigger definitions.
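
Dumped trigger definitions now come out in this form (names
hypothetical):

    CREATE TRIGGER my_trg BEFORE INSERT ON my_table
        FOR EACH ROW EXECUTE FUNCTION my_trigger_fn();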

This completes the pieces of the transition started in
0a63f996e0 that were kept out of
PostgreSQL 11 because of the required catversion change.

Discussion: https://www.postgresql.org/message-id/381bef53-f7be-29c8-d977-948e389161d6@2ndquadrant.com
2019-02-07 09:21:34 +01:00
Peter Eisentraut 13b89f96d0 Allow some recovery parameters to be changed with reload
Change

archive_cleanup_command
promote_trigger_file
recovery_end_command
recovery_min_apply_delay

from PGC_POSTMASTER to PGC_SIGHUP.  This did not require any further
changes.
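
With this change, such a setting can be adjusted on a running standby,
for example:

    ALTER SYSTEM SET recovery_min_apply_delay = '5min';
    SELECT pg_reload_conf();  -- takes effect on reload, no restart needed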

Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://www.postgresql.org/message-id/flat/ca28011a-cfaa-565c-d622-c1907c33ecf7%402ndquadrant.com
2019-02-07 08:34:48 +01:00