Commit Graph

47816 Commits

Alvaro Herrera
d58c9c0e9f Make relnote item wording consistent 2019-08-04 01:31:35 -04:00
Andres Freund
a668bc7599 Fix representation of hash keys in Hash/HashJoin nodes.
In 5f32b29c18 I changed the creation of HashState.hashkeys to
actually use HashState as the parent (instead of HashJoinState, which
was incorrect, as they were executed below HashState), to fix the
problem of hashkeys expressions otherwise relying on slot types
appropriate for HashJoinState, rather than HashState as would be
correct. That reliance was only introduced in 12, which is why it
previously worked to use HashJoinState as the parent (although I'd be
unsurprised if there were problematic cases).

Unfortunately that's not a sufficient solution, because before this
commit, the to-be-hashed expressions referenced inner/outer as
appropriate for the HashJoin, not Hash. That didn't have obvious bad
consequences, because the slots containing the tuples were put into
ecxt_innertuple when hashing a tuple for HashState (even though Hash
doesn't have an inner plan).

There are less common cases, however, where this can cause visible
problems (rather than just confusion when inspecting such executor
trees). E.g. "ERROR: bogus varno: 65000", when explaining queries
containing a HashJoin where the subsidiary Hash node's hash keys
reference a subplan. While normally hashkeys aren't displayed by
EXPLAIN, if one of those expressions references a subplan, that
subplan may be printed as part of the Hash node - which then failed
because an inner plan was referenced, and Hash doesn't have that.

It seems quite possible that there are other broken cases, too.

Fix the problem by properly splitting the expression for the HashJoin
and Hash nodes at plan time, and have them reference the proper
subsidiary node. While other workarounds are possible, fixing this
correctly seems easy enough. It was a pretty ugly hack to have
ExecInitHashJoin put the expression into the already-initialized
HashState in the first place.

I decided to not just split inner/outer hashkeys inside
make_hashjoin(), but also to separate out hashoperators and
hashcollations at plan time. Otherwise we would have ended up having
two very similar loops, one at plan time and the other during executor
startup. The work seems to more appropriately belong to plan time,
anyway.
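
For illustration, a rough sketch of that plan-time split (the surrounding
function and variable names are assumptions based on this description, not
the exact patch):

    List	   *hashoperators = NIL;
    List	   *hashcollations = NIL;
    List	   *outer_hashkeys = NIL;
    List	   *inner_hashkeys = NIL;
    ListCell   *lc;

    /* split each hash clause into per-clause metadata and per-side keys */
    foreach(lc, hashclauses)
    {
        OpExpr	   *hclause = lfirst_node(OpExpr, lc);

        hashoperators = lappend_oid(hashoperators, hclause->opno);
        hashcollations = lappend_oid(hashcollations, hclause->inputcollid);
        outer_hashkeys = lappend(outer_hashkeys, linitial(hclause->args));
        inner_hashkeys = lappend(inner_hashkeys, lsecond(hclause->args));
    }
    /* outer_hashkeys end up on the HashJoin node, inner_hashkeys on Hash */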

Reported-By: Nikita Glukhov, Alexander Korotkov
Author: Andres Freund
Reviewed-By: Tom Lane, in an earlier version
Discussion: https://postgr.es/m/CAPpHfdvGVegF_TKKRiBrSmatJL2dR9uwFCuR+teQ_8tEXU8mxg@mail.gmail.com
Backpatch: 12-
2019-08-02 00:02:49 -07:00
Michael Paquier
20f5cb1958 Fix handling of previous password hooks in passwordcheck
When stacking multiple modules that use check_password_hook_type,
loading passwordcheck would remove any trace of a previously-loaded
hook.  Unloading the module would also cause any previous hooks to be
lost entirely.
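
For context, a minimal sketch of the usual hook-chaining pattern the fix
adopts (check_password is passwordcheck's own hook function; exact placement
is illustrative):

    static check_password_hook_type prev_check_password_hook = NULL;

    void
    _PG_init(void)
    {
        /* remember any hook installed by a previously-loaded module */
        prev_check_password_hook = check_password_hook;
        check_password_hook = check_password;
    }

    void
    _PG_fini(void)
    {
        /* restore the previous hook instead of resetting it to NULL */
        check_password_hook = prev_check_password_hook;
    }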

Reported-by: Rafael Castro
Author: Michael Paquier
Reviewed-by: Daniel Gustafsson
Discussion: https://postgr.es/m/15932-78f48f9ef166778c@postgresql.org
Backpatch-through: 9.4
2019-08-01 09:37:48 +09:00
Tom Lane
408f759380 Fix pg_dump's handling of dependencies for custom opclasses.
Since pg_dump doesn't treat the member operators and functions of operator
classes/families (that is, the pg_amop and pg_amproc entries, not the
underlying operators/functions) as separate dumpable objects, it missed
their dependency information.  I think this was safe when the code was
designed, because the default object sorting rule emits operators and
functions before opclasses, and there were no dependency types that could
mess that up.  However, the introduction of range types in 9.2 broke it:
now a type can have a dependency on an opclass, allowing dependency rules
to push the opclass before the type and hence before custom operators.
Lacking any information showing that it shouldn't do so, pg_dump emitted
the objects in the wrong order.

Fix by teaching getDependencies() to translate pg_depend entries for
pg_amop/amproc rows to look like dependencies for their parent opfamily.

I added a regression test for this in HEAD/v12, but not further back;
life is too short to fight with 002_pg_dump.pl.

Per bug #15934 from Tom Gottfried.  Back-patch to all supported branches.

Discussion: https://postgr.es/m/15934-58b8c8ab7a09ea15@postgresql.org
2019-07-31 15:42:50 -04:00
Andres Freund
c4b7bb3cf1 Remove superfluous newlines in function prototypes.
These were introduced by pgindent due to fixes to broken
indentation (cf. 8255c7a5ee). Previously the mis-indentation of
function prototypes was creatively used to reduce indentation in a few
places.

As that formatting only exists in master and REL_12_STABLE, it seems
better to fix it in both, rather than having some odd indentation in
v12 that somebody might copy for future patches or such.

Author: Andres Freund
Discussion: https://postgr.es/m/20190728013754.jwcbe5nfyt3533vx@alap3.anarazel.de
Backpatch: 12-
2019-07-31 00:07:09 -07:00
Heikki Linnakangas
394f7500ae Allow table AM's to use rd_amcache, too.
The rd_amcache allows an index AM to cache arbitrary information in a
relcache entry. This commit moves the cleanup of rd_amcache so that it
can also be used by table AMs. Nothing takes advantage of that yet, but
I'm sure it'll come in handy for anyone writing new table AMs.
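
A sketch of how an AM would use it; the struct name is made up, and the
choice of CacheMemoryContext for a table AM is an assumption (index AMs
allocate in rd_indexcxt):

    typedef struct MyAmMetaData
    {
        uint32		version;
        BlockNumber	root;
    } MyAmMetaData;

    if (rel->rd_amcache == NULL)
    {
        MyAmMetaData *meta;

        meta = MemoryContextAllocZero(CacheMemoryContext, sizeof(MyAmMetaData));
        /* ... fill in meta from the relation's metapage or catalogs ... */
        rel->rd_amcache = meta;
    }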

Backpatch to v12, where the table AM interface was introduced.

Reviewed-by: Julien Rouhaud
2019-07-30 21:43:58 +03:00
Heikki Linnakangas
718e313244 Print WAL position correctly in pg_rewind error message.
This has been wrong ever since pg_rewind was added. The if-branch just
above this, where we print the same error with an extra message supplied
by XLogReadRecord(), got this right, but the variable name was wrong in the
else-branch. As a consequence, the error printed the WAL position as
0/0 if there was an error reading a WAL file.

Backpatch to 9.5, where pg_rewind was added.
2019-07-30 21:14:34 +03:00
Tomas Vondra
e1947f6c3e Don't build extended statistics on inheritance trees
When performing ANALYZE on inheritance trees, we collect two samples for
each relation - one for the relation alone, and one for the inheritance
subtree (relation and its child relations). And then we build statistics
on each sample, so for each relation we get two sets of statistics.

For regular (per-column) statistics this works fine, because the catalog
includes a flag differentiating statistics built from those two samples.
But we don't have such flag in the extended statistics catalogs, and we
ended up updating the same row twice, triggering this error:

  ERROR:  tuple already updated by self

The simplest solution is to disable extended statistics on inheritance
trees, which is what this commit does. In the future we may need to do
something similar for per-column statistics, but that requires adding a
flag to the catalog - and that's not backpatchable. Moreover, the current
selectivity estimation code only works with individual relations, so
building statistics on inheritance trees would be pointless anyway.
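
The fix amounts to something like the following in do_analyze_rel()
(shown as a sketch; the argument list follows the v12-era signature as I
understand it):

    /* Build extended statistics only for the plain-relation sample. */
    if (!inh)
        BuildRelationExtStatistics(onerel, totalrows, numrows, rows,
                                   attr_cnt, vacattrstats);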

Author: Tomas Vondra
Backpatch-to: 10-
Discussion: https://postgr.es/m/20190618231233.GA27470@telsasoft.com
Reported-by: Justin Pryzby
2019-07-30 19:48:13 +02:00
Tom Lane
d933816c04 Fix busted logic for parallel lock grouping in TopoSort().
A "break" statement erroneously left behind by commit a1c1af2a1
caused TopoSort to do the wrong thing if a lock's wait list
contained multiple members of the same locking group.

Because parallel workers don't normally need any locks not already
taken by their leader, this is very hard --- maybe impossible ---
to hit in production.  Still, if it did happen, the queries involved
in an otherwise-resolvable deadlock would block until canceled.

In addition to removing the bogus "break", add an Assert showing
that the conflicting uses of the beforeConstraints[] array (for both
counts and flags) don't overlap, and add some commentary explaining
why not, because it's not obvious without explanation, IMHO.

Original report and patch from Rui Hai Jiang; additional assert
and commentary by me.  Back-patch to 9.6 where the bug came in.

Discussion: https://postgr.es/m/CAEri+mLd3bpHLyW+a9pSe1y=aEkeuJpwBSwvo-+m4n7-ceRmXw@mail.gmail.com
2019-07-29 18:49:04 -04:00
Michael Paquier
28bbf7a81b Fix handling of expressions and predicates in REINDEX CONCURRENTLY
When copying the definition of an index rebuilt concurrently for the new
entry, the index information was taken directly from the old index using
the relation cache.  In this case, predicates and expressions have
some post-processing to prepare things for the planner, which loses some
information, including the collations added to any of them.

This inconsistency can cause issues when attempting, for example, a
table rewrite, and makes the new indexes rebuilt concurrently inconsistent
with the old entries.

In order to fix the problem, fetch expressions and predicates directly
from the catalog of the old entry, and fill in IndexInfo for the new
index with that.  This makes the process more consistent with
DefineIndex(), and the code is refactored with the addition of a routine
to create an IndexInfo node.

Reported-by: Manuel Rigger
Author: Michael Paquier
Discussion: https://postgr.es/m/CA+u7OA5Hp0ra235F3czPom_FyAd-3+XwSJmX95r1+sRPOJc9VQ@mail.gmail.com
Backpatch-through: 12
2019-07-29 10:01:09 +09:00
Thomas Munro
180825fe43 Avoid macro clash with LLVM 9.
Early previews of LLVM 9 reveal that our Min() macro causes compiler
errors in LLVM headers reached by the #include directives in
llvmjit_inline.cpp.  Let's just undefine it.  Per buildfarm animal
seawasp.  Back-patch to 11.
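
The workaround is essentially a pre-include guard in llvmjit_inline.cpp,
along these lines (sketch):

    /* our c.h Min() macro collides with declarations in LLVM 9 headers */
    #ifdef Min
    #undef Min
    #endif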

Reviewed-by: Fabien Coelho, Tom Lane
Discussion: https://postgr.es/m/20190606173216.GA6306%40alvherre.pgsql
2019-07-29 10:24:20 +12:00
Michael Paquier
e396d1ee7d Doc: Fix event trigger firing table
The table has not been updated for some commands introduced in recent
releases, so refresh it.  While at it, reorder entries alphabetically.

Backpatch all the way down for all the commands which have gone
missing.

Reported-by: Jeremy Smith
Discussion: https://postgr.es/m/15883-afff0ea3cc2dbbb6@postgresql.org
Backpatch-through: 9.4
2019-07-28 22:02:30 +09:00
Peter Eisentraut
66190f371d pg_upgrade: Update obsolescent documentation note
Recently released xfsprogs 5.1.0 has reflink support enabled by
default, so the note that it's not enabled by default can be removed.
2019-07-27 08:41:40 +02:00
Alvaro Herrera
5ac684381d Don't uselessly escape a string that doesn't need escaping
Per gripe from Ian Barwick

Co-authored-by: Ian Barwick <ian@2ndquadrant.com>
Discussion: https://postgr.es/m/CABvVfJWNnNKb8cHsTLhkTsvL1+G6BVcV+57+w1JZ61p8YGPdWQ@mail.gmail.com
2019-07-26 17:46:40 -04:00
Tom Lane
d095b2fe67 Tweak our special-case logic for the IANA "Factory" timezone.
pg_timezone_names() tries to avoid showing the "Factory" zone in
the view, mainly because that has traditionally had a very long
"abbreviation" such as "Local time zone must be set--see zic manual page",
so that showing it messes up psql's formatting of the whole view.
Since tzdb version 2016g, IANA instead uses the abbreviation "-00",
which is sane enough that there's no reason to discriminate against it.

On the other hand, it emerges that FreeBSD and possibly other packagers
are so wedded to backwards compatibility that they hack the IANA data
to keep the old spelling --- and not just that old spelling, but even
older spellings that IANA used back in the stone age.  This caused the
filter logic to fail to suppress "Factory" at all on such platforms,
though the formatting problem is definitely real in that case.

To solve both problems, get rid of the hard-wired assumption about
exactly what Factory's abbreviation is, and instead reject abbreviations
exceeding 31 characters.  This will allow Factory to appear in the view
if and only if it's using the modern abbreviation.
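
In pg_timezone_names() the filter then becomes a simple length test,
roughly like this (assuming tzn holds the zone's abbreviation):

    /* ignore zones with implausibly long abbreviations, e.g. old "Factory" */
    if (tzn && strlen(tzn) > 31)
        continue;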

In passing, simplify the code we add to zic.c to support "zic -P"
to remove its now-obsolete hacks to not print the Factory zone's
abbreviation.  Unlike pg_timezone_names(), there's no reason for
that code to support old/nonstandard timezone data.

Since we generally prefer to keep timezone-related behavior the
same in all branches, and since this is arguably a bug fix,
back-patch to all supported branches.

Discussion: https://postgr.es/m/3961.1564086915@sss.pgh.pa.us
2019-07-26 13:07:08 -04:00
Tom Lane
e31dfe99c8 Avoid choosing "localtime" or "posixrules" as TimeZone during initdb.
Some platforms create a file named "localtime" in the system
timezone directory, making it a copy or link to the active time
zone file.  If Postgres is built with --with-system-tzdata, initdb
will see that file as an exact match to localtime(3)'s behavior,
and it may decide that "localtime" is the most preferred spelling of
the active zone.  That's a very bad choice though, because it's
neither informative, nor portable, nor stable if someone changes
the system timezone setting.  Extend the preference logic added by
commit e3846a00c so that we will prefer any other zone file that
matches localtime's behavior over "localtime".

On the same logic, also discriminate against "posixrules", which
is another not-really-a-zone file that is often present in the
timezone directory.  (Since we install "posixrules" but not
"localtime", this change can affect the behavior of Postgres
with or without --with-system-tzdata.)

Note that this change doesn't prevent anyone from choosing these
pseudo-zones if they really want to (i.e., by setting TZ for initdb,
or modifying the timezone GUC later on).  It just prevents initdb
from preferring these zone names when there are multiple matches to
localtime's behavior.

Since we generally prefer to keep timezone-related behavior the
same in all branches, and since this is arguably a bug fix,
back-patch to all supported branches.

Discussion: https://postgr.es/m/CADT4RqCCnj6FKLisvT8tTPfTP4azPhhDFJqDF1JfBbOH5w4oyQ@mail.gmail.com
Discussion: https://postgr.es/m/27991.1560984458@sss.pgh.pa.us
2019-07-26 12:45:52 -04:00
Tom Lane
01e0538e8b Fix loss of fractional digits for large values in cash_numeric().
Money values exceeding about 18 digits (depending on lc_monetary)
could be inaccurately converted to numeric, due to select_div_scale()
deciding it didn't need to compute any fractional digits.  Force
its hand by setting the dscale of one division input to equal the
number of fractional digits we need.

In passing, rearrange the logic to not do useless work in locales
where money values are considered integral.

Per bug #15925 from Slawomir Chodnicki.  Back-patch to all supported
branches.

Discussion: https://postgr.es/m/15925-da9953e2674bb5c8@postgresql.org
2019-07-26 11:59:00 -04:00
Bruce Momjian
ba2347a6d9 doc: PG 12 relnotes - add item about amcheck index root check
Reported-by: Peter Geoghegan

Discussion: https://postgr.es/m/LGbT2ncB7tiDsndK0eXHTKmogLjJ5rO52HqXigP8bCA@mail.gmail.com

Backpatch-through: 12
2019-07-25 21:37:04 -04:00
Bruce Momjian
551394ff3a doc: PG 12 relnotes, add item - pg_test_fsync fix on Windows
Reported-by: Laurenz Albe

Discussion: https://postgr.es/m/39fd196ca3af345f67595087519268d9da6891b3.camel@cybertec.at

Backpatch-through: 12
2019-07-25 21:05:51 -04:00
Thomas Munro
3964d3bce9 Fix LDAP test instability.
After starting slapd, wait until it can accept a connection before
beginning the real test work.  This avoids occasional test failures.
Back-patch to 11, where the LDAP tests arrived.

Author: Thomas Munro
Reviewed-by: Michael Paquier
Discussion: https://postgr.es/m/20190719033013.GI1859%40paquier.xyz
2019-07-26 10:09:31 +12:00
Andres Freund
c4944a93eb Add missing (COSTS OFF) to EXPLAIN added in previous commit.
Backpatch: 12-, like the previous commit
2019-07-25 14:51:57 -07:00
Andres Freund
8677c62eae Fix slot type handling for Agg nodes performing internal sorts.
Since 15d8f8312 we assert that - and since 7ef04e4d2c, 4da597edf1
rely on - the slot type for an expression's
ecxt_{outer,inner,scan}tuple not changing, unless explicitly flagged
as such. That allows us to either skip deforming (for a virtual tuple
slot) or optimize the code for JIT accelerated deforming
appropriately (for other known slot types).

This assumption was sometimes violated for grouping sets, when
nodeAgg.c internally uses tuplesorts, and the child node doesn't
return a TTSOpsMinimalTuple type slot. Detect that case, and flag that
the outer slot might not be "fixed".

It's probably worthwhile to optimize this further in the future, and
more granularly determine whether the slot is fixed. As we already
instantiate per-phase transition and equal expressions, we could
cheaply set the slot type appropriately for each phase.  But that's a
separate change from this bugfix.

This commit does include a very minor optimization: avoid creating a
slot for handling tuplesorts if no such sorts are performed. Previously
we created that slot unnecessarily in the common case of computing all
grouping sets via hashing. The code looked too confusing without that
change, as the conditions for needing a sort slot and for flagging that
the slot type isn't fixed are the same.
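
A rough sketch of the ExecInitAgg() change described above; the flag name
need_sort_slot and the exact placement are assumptions:

    if (need_sort_slot)
    {
        /* tuples may be fed back from a tuplesort, as minimal tuples */
        aggstate->sort_slot =
            ExecInitExtraTupleSlot(estate,
                                   aggstate->ss.ss_ScanTupleSlot->tts_tupleDescriptor,
                                   &TTSOpsMinimalTuple);
        /* so the outer slot type can no longer be assumed to be fixed */
        aggstate->ss.ps.outeropsfixed = false;
    }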

Reported-By: Ashutosh Sharma
Author: Andres Freund
Discussion: https://postgr.es/m/CAE9k0PmNaMD2oHTEAhRyxnxpaDaYkuBYkLa1dpOpn=RS0iS2AQ@mail.gmail.com
Backpatch: 12-, where the bug was introduced in 15d8f8312
2019-07-25 14:29:26 -07:00
Tom Lane
b245370467 Fix syntax error in commit 20e99cddd.
Per buildfarm.
2019-07-25 14:42:08 -04:00
Tom Lane
c58cf97f2f Fix failures to ignore \r when reading Windows-style newlines.
libpq failed to ignore Windows-style newlines in connection service files.
This normally wasn't a problem on Windows itself, because fgets() would
convert \r\n to just \n.  But if libpq were running inside a program that
changes the default fopen mode to binary, it would see the \r's and think
they were data.  In any case, it's project policy to ignore \r in text
files unconditionally, because people sometimes try to use files with
DOS-style newlines on Unix machines, where the C library won't hide that
from us.

Hence, adjust parseServiceFile() to ignore \r as well as \n at the end of
the line.  In HEAD, go a little further and make it ignore all trailing
whitespace, to match what it's always done with leading whitespace.
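
The chomping pattern looks roughly like this (line being the buffer just
read):

    size_t		len = strlen(line);

    /* strip the trailing newline, including any Windows-style \r */
    while (len > 0 && (line[len - 1] == '\n' || line[len - 1] == '\r'))
        line[--len] = '\0';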

In HEAD, also run around and fix up everyplace where we have
newline-chomping code to make all those places look consistent and
uniformly drop \r.  It is not clear whether any of those changes are
fixing live bugs.  Most of the non-cosmetic changes are in places that
are reading popen output, and the jury is still out as to whether popen
on Windows can return \r\n.  (The Windows-specific code in pipe_read_line
seems to think so, but our lack of support for this elsewhere suggests
maybe it's not a problem in practice.)  Hence, I desisted from applying
those changes to back branches, except in run_ssl_passphrase_command()
which is new enough and little-tested enough that we'd probably not have
heard about any problems there.

Tom Lane and Michael Paquier, per bug #15827 from Jorge Gustavo Rocha.
Back-patch the parseServiceFile() change to all supported branches,
and the run_ssl_passphrase_command() change to v11 where that was added.

Discussion: https://postgr.es/m/15827-e6ba53a3a7ed543c@postgresql.org
2019-07-25 12:11:18 -04:00
Andrew Dunstan
6e5417d773 Honor MSVC WindowsSDKVersion if set
Add a line to the project file setting the target SDK. Otherwise, in
VS2017 for example, if the default but optional 8.1 SDK is not installed,
the build will fail.

Patch from Peifeng Qiu, slightly edited by me.

Discussion: https://postgr.es/m/CABmtVJhw1boP_bd4=b3Qv5YnqEdL696NtHFi2ruiyQ6mFHkeQQ@mail.gmail.com

Backpatch to all live branches.
2019-07-25 11:39:26 -04:00
Tom Lane
665329abe7 Fix contrib/sepgsql test policy to work with latest SELinux releases.
As of Fedora 30, it seems that the system-provided macros for setting
up user privileges in SELinux policies don't grant the ability to read
/etc/passwd, as they formerly did.  This restriction breaks psql
(which tries to use getpwuid() to obtain the user name it's running
under) and thereby the contrib/sepgsql regression test.  Add explicit
specifications that we need the right to read /etc/passwd.

Mike Palmiotto, per a report from me.  Back-patch to all supported
branches.

Discussion: https://postgr.es/m/23856.1563381159@sss.pgh.pa.us
2019-07-25 11:02:57 -04:00
Peter Eisentraut
2e60117aa9 doc: Fix typo 2019-07-25 13:59:23 +02:00
Andres Freund
f9257cffef Fix system column accesses in ON CONFLICT ... RETURNING.
After 277cb78983, ON CONFLICT ... SET ... RETURNING failed with
  ERROR:  virtual tuple table slot does not have system attributes
when taking the update path, as the slot used to insert into the
table (and then process RETURNING) was defined to be a virtual slot in
that commit. Virtual slots don't support system columns except for
tableoid and ctid, as the other system columns are AM dependent.

Fix that by using a slot of the table's type. Add tests for system
column accesses in ON CONFLICT ...  RETURNING.
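
The shape of the fix is roughly the following (field names and placement in
ExecInitModifyTable() are assumptions):

    /* use a slot matching the target table's AM, not a virtual slot, so
     * AM-dependent system columns stay accessible for RETURNING */
    resultRelInfo->ri_onConflict->oc_ProjSlot =
        table_slot_create(resultRelInfo->ri_RelationDesc,
                          &mtstate->ps.state->es_tupleTable);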

Reported-By: Roby, bisected to the relevant commit by Jeff Janes
Author: Andres Freund
Discussion: https://postgr.es/m/73436355-6432-49B1-92ED-1FE4F7E7E100@finefun.com.au
Backpatch: 12-, where the bug was introduced in 277cb78983
2019-07-24 18:46:20 -07:00
Michael Paquier
24a6b6af65 Fix failure with pgperlcritic from the TAP test of synchronous replication
Oversight in 7d81bdc, which introduced a new Perl routine lacking a
return clause.  Per buildfarm member crake.

Backpatch down to 9.6 like its parent.

Reported-by: Andrew Dunstan
Discussion: https://postgr.es/m/16da29fa-d504-1380-7095-40de586dc038@2ndQuadrant.com
Backpatch-through: 9.6
2019-07-25 07:55:28 +09:00
Tom Lane
d16e514bef Fix infelicities in describeOneTableDetails' partitioned-table handling.
describeOneTableDetails issued a partition-constraint-fetching query
for every table, even ones it knows perfectly well are not partitions.

To add insult to injury, it then proceeded to leak the empty PGresult
if the table wasn't a partition.  Doing that a lot of times might
amount to a meaningful leak, so this seems like a back-patchable bug.

Fix that, and also fix a related PGresult leak in the partition-parent
case (though that leak would occur only if we got no row, which is
unexpected).

Minor code beautification too, to make this code look more like the
pre-existing code around it.

Back-patch the whole change into v12.  However, the fact that we already
know whether the table is a partition dates only to commit 1af25ca0c;
back-patching the relevant changes from that is probably more churn
than is justified in released branches.  Hence, in v11 and v10, just
do the minimum to fix the PGresult leaks.

Noted while messing around with adjacent code for yesterday's \d
improvements.
2019-07-24 18:14:45 -04:00
Heikki Linnakangas
fb5344c969 Use full 64-bit XID for checking if a deleted GiST page is old enough.
Otherwise, after a deleted page gets even older, it becomes unrecyclable
again. B-tree has the same problem, and has had since time immemorial,
but let's at least fix this in GiST, where this is new.

Backpatch to v12, where GiST page deletion was introduced.

Reviewed-by: Andrey Borodin
Discussion: https://www.postgresql.org/message-id/835A15A5-F1B4-4446-A711-BF48357EB602%40yandex-team.ru
2019-07-24 20:25:22 +03:00
Heikki Linnakangas
e2e992c931 Refactor checks for deleted GiST pages.
The explicit check in gistScanPage() isn't currently really necessary, as
a deleted page is always empty, so the loop would fall through without
doing anything, anyway. But it's a marginal optimization, and it gives a
nice place to attach a comment to explain how it works.

Backpatch to v12, where GiST page deletion was introduced.

Reviewed-by: Andrey Borodin
Discussion: https://www.postgresql.org/message-id/835A15A5-F1B4-4446-A711-BF48357EB602%40yandex-team.ru
2019-07-24 20:25:21 +03:00
Andrew Dunstan
bfa4263e71 Don't assume expr is available in pgbench tests
Windows hosts do not normally come with expr, so instead of using that
to test the \setshell command, use echo, which is fairly universally
available.

Backpatch to release 11, where this came in.

Problem found by me, patch by Fabien Coelho.
2019-07-24 11:47:58 -04:00
Michael Paquier
a15a40aaae Doc: Clarify interactions of pg_receivewal with remote_apply
Using pg_receivewal as a synchronous standby is incompatible with
synchronous_commit = remote_apply set in the backend, since pg_receivewal
never applies WAL, so document this problem and solutions to it.

Backpatch to 9.6, where remote_apply has been added.

Author: Robert Haas, Jesper Pedersen
Reviewed-by: Laurenz Albe, Álvaro Herrera, Michael Paquier
Discussion: https://postgr.es/m/1427a2d3-1e51-9335-1931-4f8853d90d5e@redhat.com
Backpatch-through: 9.6
2019-07-24 11:26:24 +09:00
Michael Paquier
0ceb28da06 Improve stability of TAP test for synchronous replication
Slow buildfarm machines have run into issues with this TAP test caused
by a race condition related to the startup of a set of standbys, where
it is possible to finish with an unexpected order in the WAL sender
array of the primary.

This closes the race condition by making sure that any standby started
is registered into the WAL sender array of the primary before starting
the next one based on lookups of pg_stat_replication.

Backpatch down to 9.6 where the test has been introduced.

Author: Michael Paquier
Reviewed-by: Álvaro Herrera, Noah Misch
Discussion: https://postgr.es/m/20190617055145.GB18917@paquier.xyz
Backpatch-through: 9.6
2019-07-24 10:54:20 +09:00
Alvaro Herrera
3ffe655c0a Check that partitions are not in use when dropping constraints
If the user creates a deferred constraint in a partition, and in a
transaction they cause the constraint's trigger execution to be deferred
until commit time *and* drop the constraint, then when commit time comes
the queued trigger will fail to run because the trigger object will have
been dropped.

This happens because when a constraint gets dropped in a
partitioned table, the recursion to drop the ones in partitions is done
by the dependency mechanism, not by ALTER TABLE traversing the recursion
tree as in all other cases.  In the non-partitioned case, this problem
is avoided by checking that the table is not "in use" by alter-table;
other alter-table subcommands that recurse to partitions do that check
for each partition.  But the dependency mechanism doesn't have a way to
do that.  Fix the problem by applying the same check to all partitions
during ALTER TABLE's "prep" phase, which correctly raises the necessary
error.
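
A sketch of that prep-phase check (the list building and lock level shown
here are assumptions):

    List	   *children = find_all_inheritors(RelationGetRelid(rel),
                                               AccessExclusiveLock, NULL);
    ListCell   *lc;

    foreach(lc, children)
    {
        Relation	childrel = table_open(lfirst_oid(lc), NoLock);

        /* raise the usual error if the partition has pending trigger events */
        CheckTableNotInUse(childrel, "ALTER TABLE");
        table_close(childrel, NoLock);
    }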

Reported-by: Rajkumar Raghuwanshi <rajkumar.raghuwanshi@enterprisedb.com>
Discussion: https://postgr.es/m/CAKcux6nZiO9-eEpr1ZD84bT1mBoVmeZkfont8iSpcmYrjhGWgA@mail.gmail.com
2019-07-23 17:22:15 -04:00
Tom Lane
79e573fa49 Install dependencies to prevent dropping partition key columns.
The logic in ATExecDropColumn that rejects dropping partition key
columns is quite an inadequate defense, because it doesn't execute
in cases where a column needs to be dropped due to cascade from
something that only the column, not the whole partitioned table,
depends on.  That leaves us with a badly broken partitioned table;
even an attempt to load its relcache entry will fail.

We really need to have explicit pg_depend entries that show that the
column can't be dropped without dropping the whole table.  Hence,
add those entries.  In v12 and HEAD, bump catversion to ensure that
partitioned tables will have such entries.  We can't do that in
released branches of course, so in v10 and v11 this patch affords
protection only to partitioned tables created after the patch is
installed.  Given the lack of field complaints (this bug was found
by fuzz-testing not by end users), that's probably good enough.

In passing, fix ATExecDropColumn and ATPrepAlterColumnType
messages to be more specific about which partition key column
they're complaining about.

Per report from Manuel Rigger.  Back-patch to v10 where partitioned
tables were added.

Discussion: https://postgr.es/m/CA+u7OA4JKCPFrdrAbOs7XBiCyD61XJxeNav4LefkSmBLQ-Vobg@mail.gmail.com
Discussion: https://postgr.es/m/31920.1562526703@sss.pgh.pa.us
2019-07-22 14:55:23 -04:00
Peter Geoghegan
980224b4a2 Don't rely on estimates for amcheck Bloom filters.
Solely relying on a relation's reltuples/relpages estimate to size the
Bloom filters used by amcheck verification makes verification less
effective when the estimates are very stale.  In extreme cases,
verification options that use Bloom filters internally could be totally
ineffective, without users receiving any clear indication that certain
types of corruption might easily be missed.

To fix, use RelationGetNumberOfBlocks() instead of relpages to size the
downlink block Bloom filter.  Use the same RelationGetNumberOfBlocks()
value to derive a minimum size for the heapallindexed Bloom filter,
rather than completely trusting reltuples.  Verification will still be
reasonably effective when the projected/estimated number of Bloom filter
elements is at least 1/5 of the final number of elements, which is
assured by the new sizing logic.
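
The sizing then looks roughly like this (state->rel being the index under
verification; treat the field names as approximate):

    uint64		seed = (uint64) random();
    int64		total_pages = (int64) RelationGetNumberOfBlocks(state->rel);

    /* size from the live number of blocks, not the stale relpages estimate */
    bloom_filter *filter = bloom_create(Max(total_pages, 1),
                                        maintenance_work_mem, seed);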

Reported-By: Alexander Korotkov
Discussion: https://postgr.es/m/CAH2-Wzk0ke2J42KrNYBKu0Xovjy-sU5ub7PWjgpbsKdAQcL4OA@mail.gmail.com
Backpatch: 11-, where downlink/heapallindexed verification were added.
2019-07-20 11:11:54 -07:00
Tomas Vondra
5a8ee9f2db Use column collation for extended statistics
The current extended statistics code was a bit confused about which collation
to use.  When building the statistics, the collations defined as default
for the data types were used (since commit 5e0928005).  The MCV code was
however using the column collations for MCV serialization, and then
DEFAULT_COLLATION_OID when computing estimates. So overall the code was
using all three possible options, inconsistently.

This commit uses the column collation everywhere, which makes it consistent
with what 5e0928005 did for regular stats.  We however do not track the
collations in a catalog, because we can derive them from column-level
information.  This may need to change in the future, e.g. after allowing
statistics on expressions.

Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/8736jdhbhc.fsf%40ansel.ydns.eu
Backpatch-to: 12
2019-07-20 16:35:15 +02:00
Tomas Vondra
fc4faea179 Rework examine_opclause_expression to use varonleft
The examine_opclause_expression function needs to return information on
which side of the operator we found the Var, but the variable was called
"isgt" which is rather misleading (it assumes the operator is either
less-than or greater-than, but it may be equality or something else).
Other places in the planner use a variable called "varonleft" for this
purpose, so just adopt the same convention here.

The code also assumed we don't care about this flag for equality, as
(Var = Const) and (Const = Var) should be the same thing. But that does
not work for cross-type operators, in which case we need to pass the
parameters to the procedure in the right order. So just use the same
code for all types of expressions.

This means we don't need to care about the selectivity estimation
function anymore, at least not in this code. We should only get the
supported cases here (thanks to statext_is_compatible_clause).

Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/8736jdhbhc.fsf%40ansel.ydns.eu
Backpatch-to: 12
2019-07-20 16:34:58 +02:00
Tom Lane
533522846b Silence compiler warning, hopefully.
Absorb commit e5e04c962a5d12eebbf867ca25905b3ccc34cbe0 from upstream
IANA code, in hopes of silencing warnings from MSVC about negating
a bool value.

Discussion: https://postgr.es/m/20190719035347.GJ1859@paquier.xyz
2019-07-19 14:49:05 -04:00
Michael Paquier
b990cc38dc Doc: clarify when table rewrites happen with column addition and DEFAULT
16828d5 has improved ALTER TABLE so as a column addition does not
require a rewrite for a non-NULL default with constant expressions, but
one spot in the documentation did not get updated consistently.
The documentation also now clarifies the fact that this does not apply
if the expression is volatile, where a table rewrite is still required.

Reported-by: Daniel Westermann
Author: Ian Barwick
Reviewed-by: Michael Paquier, Daniel Westermann
Discussion: https://postgr.es/m/DB6PR0902MB2184C7D5645CF15D75EB7957D2CF0@DB6PR0902MB2184.eurprd09.prod.outlook.com
Backpatch-through: 11
2019-07-19 11:43:05 +09:00
Jeff Davis
5245552ddb Fix error in commit e6feef57.
I was careless and passed a datum directly to DATE_NOT_FINITE without
calling DatumGetDateADT() first.
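
The corrected pattern is simply (sketch):

    DateADT		bound = DatumGetDateADT(datum);	/* not DATE_NOT_FINITE(datum) */

    if (DATE_NOT_FINITE(bound))
        return;					/* infinite date bounds need no adjustment */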

Backpatch-through: 9.4
2019-07-18 17:26:56 -07:00
Jeff Davis
613eabcd51 Fix daterange canonicalization for +/- infinity.
The values 'infinity' and '-infinity' are a part of the DATE type
itself, so a bound of the date 'infinity' is not the same as an
unbounded/infinite range. However, it is still wrong to try to
canonicalize such values, because adding or subtracting one has no
effect. Fix by treating 'infinity' and '-infinity' the same as
unbounded ranges for the purposes of canonicalization (but not other
purposes).
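
The lower-bound half of the fix looks roughly like this inside
daterange_canonical() (sketch):

    if (!lower.infinite && !DATE_NOT_FINITE(DatumGetDateADT(lower.val)) &&
        !lower.inclusive)
    {
        /* turn an exclusive finite lower bound into the inclusive next day */
        lower.val = DirectFunctionCall2(date_pli, lower.val, Int32GetDatum(1));
        lower.inclusive = true;
    }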

Backpatch to all versions because it is inconsistent with the
documented behavior. Note that this could be an incompatibility for
applications relying on the behavior contrary to the documentation.

Author: Laurenz Albe
Reviewed-by: Thomas Munro
Discussion: https://postgr.es/m/77f24ea19ab802bc9bc60ddbb8977ee2d646aec1.camel%40cybertec.at
Backpatch-through: 9.4
2019-07-18 17:26:47 -07:00
Peter Geoghegan
7772dece98 Fix nbtree metapage cache upgrade bug.
Commit 857f9c36cd, which taught nbtree VACUUM to avoid unnecessary
index scans, bumped the nbtree version number from 2 to 3, while adding
the ability for nbtree indexes to be upgraded on-the-fly.  Various
assertions that assumed that an nbtree index was always on version 2 had
to be changed to accept any supported version (version 2 or 3 on
Postgres 11).

However, a few assertions were missed in the initial commit, all of
which were in code paths that cache a local copy of the metapage
metadata, where the index had been expected to be on the current version
(no longer version 2) as a generic sanity check.  Rather than simply
update the assertions, follow-up commit 0a64b45152 intentionally made
the metapage caching code update the per-backend cached metadata version
without changing the on-disk version at the same time.  This could even
happen when the planner needed to determine the height of a B-Tree for
costing purposes.  The assertions only fail on Postgres v12 when
upgrading from v10, because they were adjusted to use the authoritative
shared memory metapage by v12's commit dd299df8.

To fix, remove the cache-only upgrade mechanism entirely, and update the
assertions themselves to accept any supported version (go back to using
the cached version in v12).  The fix is almost a full revert of commit
0a64b45152 on the v11 branch.

VACUUM only considers the authoritative metapage, and never bothers with
a locally cached version, whereas everywhere else isn't interested in
the metapage fields that were added by commit 857f9c36cd.  It seems
unlikely that this bug has affected any user on v11.

Reported-By: Christoph Berg
Bug: #15896
Discussion: https://postgr.es/m/15896-5b25e260fdb0b081%40postgresql.org
Backpatch: 11-, where VACUUM was taught to avoid unnecessary index scans.
2019-07-18 13:22:54 -07:00
Tomas Vondra
79d3a1e52a Simplify bitmap updates in multivariate MCV code
When evaluating clauses on a multivariate MCV list, we build a bitmap
tracking how the clauses match each item of the MCV list.  When updating
the bitmap we need to consider the current value (tracking how the item
matches preceding clauses), the match for the current clause, and whether
the clauses are connected by AND or OR.

Until now the logic was duplicated in every place updating the bitmap, which
was not quite readable.  So just move it to a separate function and call
it where needed.
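
The helper boils down to something like this (a hypothetical name and
signature, for illustration):

    /* combine one MCV item's bitmap value with the current clause's match */
    static bool
    combine_match(bool prev_match, bool clause_match, bool is_or)
    {
        return is_or ? (prev_match || clause_match)
                     : (prev_match && clause_match);
    }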

Backpatch to 12, where the code was introduced. While not a bugfix, this
should make maintenance and future backpatches easier.

Discussion: https://postgr.es/m/8736jdhbhc.fsf%40ansel.ydns.eu
2019-07-18 11:30:12 +02:00
Tomas Vondra
1c2acc32dc Fix handling of NULLs in MCV items and constants
There were two issues in how the extended statistics handled NULL values
in opclauses. Firstly, the code was oblivious to the possibility that
Const may be NULL (constisnull=true) in which case the constvalue is
undefined. We need to treat this as a mismatch, and not call the proc.

Secondly, the MCV item itself may contain NULL values too - the code
already did check that, and updated the match bitmap accordingly, but
failed to ensure we won't call the operator procedure anyway. It did
work for AND-clauses, because in that case false in the bitmap stops
evaluation of further clauses. But for OR-clauses it was possible to
get incorrect estimates or even trigger a crash.

This fixes both issues by extending the existing check so that it looks
at constisnull too, and making sure it skips calling the procedure.
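
The match loop then looks roughly like this (variable names are
illustrative):

    /* never call the comparison proc when either side is NULL */
    if (cst->constisnull || item->isnull[idx])
        match = false;
    else
        match = DatumGetBool(FunctionCall2Coll(&opproc, collid,
                                               item->values[idx],
                                               cst->constvalue));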

Discussion: https://postgr.es/m/8736jdhbhc.fsf%40ansel.ydns.eu
2019-07-18 11:30:12 +02:00
Tomas Vondra
42276976a1 Fix handling of opclauses in extended statistics
We expect opclauses to have exactly one Var and one Const, but the code
was checking the Const by calling is_pseudo_constant_clause(), which is
incorrect - we need a proper constant.

Fixed by using plain IsA(x,Const) to check the type of the node. We need to
do these checks in two places, so move it into a separate function that
can be called in both places.
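
The stricter check amounts to the following shape (sketch):

    bool		ok;

    /* require a genuine Const; is_pseudo_constant_clause() also accepts
     * Params and other pseudo-constant expressions */
    if (IsA(leftop, Var) && IsA(rightop, Const))
        ok = true;
    else if (IsA(leftop, Const) && IsA(rightop, Var))
        ok = true;
    else
        ok = false;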

Reported by Andreas Seltenreich, based on crash reported by sqlsmith.

Backpatch to v12, where this code was introduced.

Discussion: https://postgr.es/m/8736jdhbhc.fsf%40ansel.ydns.eu
Backpatch-to: 12
2019-07-18 11:30:12 +02:00
Tomas Vondra
3944e855bc Remove unnecessary TYPECACHE_GT_OPR lookup
The TYPECACHE_GT_OPR lookup is not needed (it used to be in an older version
of the MCV code), but the compiler failed to detect this, as the result was
used in a fmgr_info() call, populating a FmgrInfo entry.

Backpatch to v12, where this code was introduced.

Discussion: https://postgr.es/m/8736jdhbhc.fsf%40ansel.ydns.eu
Backpatch-to: 12
2019-07-18 11:30:12 +02:00
Andres Freund
fa7bb93453 tableam: comment improvements.
Author: Brad DeJong
Discussion: https://postgr.es/m/CAJnrtnxDYOQFsDfWz2iri0T_fFL2ZbbzgCOE=4yaMcszgcsf4A@mail.gmail.com
Backpatch: 12-
2019-07-17 19:51:17 -07:00