Commit Graph

34152 Commits

Author SHA1 Message Date
Alexander Korotkov
3218ff5c6a Adjust string comparison in jsonpath
We have implemented jsonpath string comparison using the default database
locale.  However, the standard requires us to compare Unicode codepoints.
This commit implements that, but for performance reasons we still use per-byte
comparison for the "==" operator.  Thus, for consistency, the other comparison
operators do per-byte comparison if the Unicode codepoints appear to be equal.

In some edge cases, when the same Unicode codepoints have different binary
representations in the database encoding, we diverge from the standard to
achieve better performance of the "==" operator.  In the future, to implement
strict standard conformance, we can normalize input JSON strings.

Original patch was written by Nikita Glukhov, rewritten by me.

Reported-by: Markus Winand
Discussion: https://postgr.es/m/8B7FA3B4-328D-43D7-95A8-37B8891B8C78%40winand.at
Author: Nikita Glukhov, Alexander Korotkov
Backpatch-through: 12
2019-08-11 22:55:21 +03:00
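
To illustrate where the comparison semantics fixed by 3218ff5c6a apply, a
minimal SQL sketch using jsonpath filter expressions (values invented, not
taken from the patch):

    -- "<" and friends now compare Unicode codepoints; "==" keeps the faster
    -- per-byte comparison, which matches when the strings are binary-equal
    SELECT jsonb_path_query('{"s": "café"}', '$.s ? (@ == "café")');
    SELECT jsonb_path_query('{"s": "café"}', '$.s ? (@ < "cafz")');
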
Tom Lane
6ce0366be2 Fix "ANALYZE t, t" inside a transaction block.
This failed with either "tuple already updated by self" or "duplicate
key value violates unique constraint", depending on whether the table
had previously been analyzed or not.  The reason is that ANALYZE tried
to insert or update the same pg_statistic rows twice, and there was no
CommandCounterIncrement between.  So add one.  The same case works fine
outside a transaction block, because then there's a whole transaction
boundary between, as a consequence of the way VACUUM works.

This issue has been latent all along, but the problem was unreachable
before commit 11d8d72c2 added the ability to specify multiple tables
in ANALYZE.  We could, perhaps, alternatively fix it by adding code to
de-duplicate the list of VacuumRelations --- but that would add a
lot of overhead to work around dumb commands, so it's not attractive.

Per bug #15946 from Yaroslav Schekin.  Back-patch to v11.

(Note: in v11 I also back-patched the test added by commit 23224563d;
otherwise the problem doesn't manifest in the test I added, because
"vactst" is empty when the tests for multiple ANALYZE targets are
reached.  That seems like not a very good thing anyway, so I did this
rather than rethinking the choice of test case.)

Discussion: https://postgr.es/m/15946-5c7570a2884a26cf@postgresql.org
2019-08-10 11:30:11 -04:00
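
A minimal sketch of the failing case fixed by 6ce0366be2 (table name
invented; the table must contain rows so that pg_statistic rows get written):

    BEGIN;
    CREATE TABLE vt (a int);
    INSERT INTO vt SELECT generate_series(1, 100);
    ANALYZE vt, vt;   -- formerly: ERROR:  tuple already updated by self
    COMMIT;
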
Tom Lane
59d3789294 Fix SIGSEGV in pruning for ScalarArrayOp with constant-null array.
Not much to be said here: commit 9fdb675fc should have checked
constisnull, didn't.

Per report from Piotr Włodarczyk.  Back-patch to v11 where
bug was introduced.

Discussion: https://postgr.es/m/CAP-dhMr+vRpwizEYjUjsiZ1vwqpohTm+3Pbdt6Pr7FEgPq9R0Q@mail.gmail.com
2019-08-09 13:20:28 -04:00
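
A plausible reproducer shape for the crash fixed by 59d3789294, assuming the
constant-null array arrives via "= ANY" against a partitioned table (names
invented, not taken from the report):

    CREATE TABLE prt (a int) PARTITION BY LIST (a);
    CREATE TABLE prt1 PARTITION OF prt FOR VALUES IN (1);
    -- partition pruning sees a ScalarArrayOp whose array Const is null
    SELECT * FROM prt WHERE a = ANY (NULL::int[]);
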
Andrew Dunstan
4116420bae Fix certificate subjects in ldap test
openssl doesn't like lower case subject attribute names. Error observed
in buildfarm results.

Backpatch to release 11.
2019-08-08 15:02:13 -04:00
Etsuro Fujita
ca8a57b636 Fix typos in comments. 2019-08-07 19:05:18 +09:00
Heikki Linnakangas
f8d30182b1 Fix predicate-locking of HOT updated rows.
In serializable mode, heap_hot_search_buffer() incorrectly acquired a
predicate lock on the root tuple, not the returned tuple that satisfied
the visibility checks. As explained in README-SSI, the predicate lock does
not need to be copied or extended to other tuple versions, but for that to
work, the correct, visible, tuple version must be locked in the first
place.

The original SSI commit had this bug in it, but it was fixed back in 2013,
in commit 81fbbfe335. But unfortunately, it was reintroduced a few months
later in commit b89e151054. Wising up from that, add a regression test
to cover this, so that it doesn't get reintroduced again. Also, move the
code that sets 't_self', so that it happens at the same time that the
other HeapTuple fields are set, to make it clearer that all the code in
the loop operates on the "current" tuple in the chain, not the root tuple.

Bug spotted by Andres Freund, analysis and original fix by Thomas Munro,
test case and some additional changes to the fix by Heikki Linnakangas.
Backpatch to all supported versions (9.4).

Discussion: https://www.postgresql.org/message-id/20190731210630.nqhszuktygwftjty%40alap3.anarazel.de
2019-08-07 12:41:00 +03:00
Michael Paquier
d8652ec555 Fix some incorrect parsing of time with time zone strings
When parsing a timetz string with a dynamic timezone abbreviation or with
no timezone specified, it was possible to generate incorrect timestamps
based on a date that uses some non-initialized variables if the input
string did not fully specify a date to parse.  This was already checked
when a full timezone spec was included in the input string, but the two
other cases mentioned above lacked the same checks.

This is fixed by generating an error for such invalid input, in short
whenever a date is not fully specified.

Valgrind was complaining about this problem.

Bug: #15910
Author: Alexander Lakhin
Discussion: https://postgr.es/m/15910-2eba5106b9aa0c61@postgresql.org
Backpatch-through: 9.4
2019-08-07 18:17:34 +09:00
Tom Lane
514e7e911f Save Kerberos and LDAP daemon logs where the buildfarm can find them.
src/test/kerberos and src/test/ldap try to run private authentication
servers, which of course might fail.  The logs from these servers
were being dropped into the tmp_check/ subdirectory, but they should
be put in tmp_check/log/, because the buildfarm will only capture
log files in that subdirectory.  Without the log output there's
little hope of diagnosing buildfarm failures related to these servers.

Backpatch to v11 where these test suites were added.

Discussion: https://postgr.es/m/16017.1565047605@sss.pgh.pa.us
2019-08-06 17:08:32 -04:00
Tom Lane
7c45b994f1 Stamp 12beta3. 2019-08-05 17:10:44 -04:00
Tom Lane
de4b75c154 Fix choice of comparison operators for cross-type hashed subplans.
Commit bf6c614a2 rearranged the lookup of the comparison operators
needed in a hashed subplan, and in so doing, broke the cross-type
case: it caused the original LHS-vs-RHS operator to be used to compare
hash table entries too (which of course are all of the RHS type).
This leads to C functions being passed a Datum that is not of the
type they expect, with the usual hazards of crashes and unauthorized
server memory disclosure.

For the set of hashable cross-type operators present in v11 core
Postgres, this bug is nearly harmless on 64-bit machines, which
may explain why it escaped earlier detection.  But it is a live
security hazard on 32-bit machines; and of course there may be
extensions that add more hashable cross-type operators, which
would increase the risk.

Reported by Andreas Seltenreich.  Back-patch to v11 where the
problem came in.

Security: CVE-2019-10209
2019-08-05 11:20:33 -04:00
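
For illustration, the kind of query that reaches the cross-type
hashed-subplan path addressed by de4b75c154 (tables and columns are
hypothetical):

    -- outer side is int8, subplan output is int4; the hash table comparisons
    -- must use int4-vs-int4 operators, not the original int8-vs-int4 one
    SELECT * FROM big_orders o
    WHERE o.customer_id8 NOT IN (SELECT c.id4 FROM customers c);
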
Noah Misch
9993fa9dd2 Require the schema qualification in pg_temp.type_name(arg).
Commit aa27977fe2 introduced this
restriction for pg_temp.function_name(arg); do likewise for types
created in temporary schemas.  Programs that this breaks should add
"pg_temp." schema qualification or switch to arg::type_name syntax.
Back-patch to 9.4 (all supported versions).

Reviewed by Tom Lane.  Reported by Tom Lane.

Security: CVE-2019-10208
2019-08-05 07:48:45 -07:00
Peter Eisentraut
106c6635b5 Translation updates
Source-Git-URL: https://git.postgresql.org/git/pgtranslation/messages.git
Source-Git-Hash: e255bc8b15d0f173f9de9048d3d6ad6e40085a48
2019-08-05 15:54:23 +02:00
Michael Paquier
159e4bf740 Fix tab completion for ALTER LANGUAGE in psql
OWNER_TO was used for the completion, which is not a supported grammar,
but OWNER TO is.

This error has been introduced by d37b816, so backpatch down to 9.6.

Author: Alexander Lakhin
Discussion: https://postgr.es/m/7ab243e0-116d-3e44-d120-76b3df7abefd@gmail.com
Backpatch-through: 9.6
2019-08-05 14:30:05 +09:00
Tomas Vondra
d5f53a8e26 Revert "Add log_statement_sample_rate parameter"
This reverts commit 88bdbd3f74.

As committed, statement sampling used the existing duration threshold
(log_min_duration_statement) when deciding which statements to sample.
The issue is that even the longest statements are subject to sampling,
and so may not end up logged. An improvement was proposed, introducing
a second duration threshold, but it would not be backwards compatible.
So we've decided to revert this feature - the separate threshold should
be part of the feature itself.

Discussion: https://postgr.es/m/CAFj8pRDS8tQ3Wviw9%3DAvODyUciPSrGeMhJi_WPE%2BEB8%2B4gLL-Q%40mail.gmail.com
2019-08-04 23:37:44 +02:00
Tomas Vondra
d8453ccfbf Revert "Silence compiler warning"
This reverts commit 9dc1225855.

As committed, statement sampling used the existing duration threshold
(log_min_duration_statement) when deciding which statements to sample.
The issue is that even the longest statements are subject to sampling,
and so may not end up logged. An improvement was proposed, introducing
a second duration threshold, but it would not be backwards compatible.
So we've decided to revert this feature - the separate threshold should
be part of the feature itself.

Discussion: https://postgr.es/m/CAFj8pRDS8tQ3Wviw9%3DAvODyUciPSrGeMhJi_WPE%2BEB8%2B4gLL-Q%40mail.gmail.com
2019-08-04 23:37:33 +02:00
Tom Lane
4844c63032 Avoid picking already-bound TCP ports in kerberos and ldap test suites.
src/test/kerberos and src/test/ldap need to run a private authentication
server of the relevant type, for which they need a free TCP port.
They were just picking a random port number in 48K-64K, which works
except when something's already using the particular port.  Notably,
the probability of failure rises dramatically if one simply runs those
tests in a tight loop, because each test cycle leaves behind a bunch of
high ports that are transiently in TIME_WAIT state.

To fix, split out the code that PostgresNode.pm already had for
identifying a free TCP port number, so that it can be invoked to choose
a port for the KDC or LDAP server.  This isn't 100% bulletproof, since
conceivably something else on the machine could grab the port between
the time we check and the time we actually start the server.  But that's
a pretty short window, so in practice this should be good enough.

Back-patch to v11 where these test suites were added.

Patch by me, reviewed by Andrew Dunstan.

Discussion: https://postgr.es/m/3397.1564872168@sss.pgh.pa.us
2019-08-04 13:07:12 -04:00
Alvaro Herrera
8654407148 Improve pruning of a default partition
When querying a partitioned table containing a default partition, we
were wrongly deciding to include it in the scan too early in the
process, failing to exclude it in some cases.  If we reinterpret the
PruneStepResult.scan_default flag slightly, we can do a better job at
detecting that it can be excluded.  The change is that we avoid setting
the flag for that pruning step unless the step absolutely requires the
default partition to be scanned (in contrast with the previous
arrangement, which was to set it unless the step was able to prune it).
So get_matching_partitions() must explicitly check the partition that
each returned bound value corresponds to in order to determine whether
the default one needs to be included, rather than relying on the flag
from the final step result.

Author: Yuzuko Hosoya <hosoya.yuzuko@lab.ntt.co.jp>
Reviewed-by: Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>
Discussion: https://postgr.es/m/00e601d4ca86$932b8bc0$b982a340$@lab.ntt.co.jp
2019-08-04 11:18:45 -04:00
Andres Freund
a668bc7599 Fix representation of hash keys in Hash/HashJoin nodes.
In 5f32b29c18 I changed the creation of HashState.hashkeys to
actually use HashState as the parent (instead of HashJoinState, which
was incorrect, as they were executed below HashState), to fix the
problem of hashkeys expressions otherwise relying on slot types
appropriate for HashJoinState, rather than HashState as would be
correct. That reliance was only introduced in 12, which is why it
previously worked to use HashJoinState as the parent (although I'd be
unsurprised if there were problematic cases).

Unfortunately that's not a sufficient solution, because before this
commit, the to-be-hashed expressions referenced inner/outer as
appropriate for the HashJoin, not Hash. That didn't have obvious bad
consequences, because the slots containing the tuples were put into
ecxt_innertuple when hashing a tuple for HashState (even though Hash
doesn't have an inner plan).

There are less common cases where this can cause visible problems
however (rather than just confusion when inspecting such executor
trees). E.g. "ERROR: bogus varno: 65000", when explaining queries
containing a HashJoin where the subsidiary Hash node's hash keys
reference a subplan. While normally hashkeys aren't displayed by
EXPLAIN, if one of those expressions references a subplan, that
subplan may be printed as part of the Hash node - which then failed
because an inner plan was referenced, and Hash doesn't have that.

It seems quite possible that there are other broken cases, too.

Fix the problem by properly splitting the expression for the HashJoin
and Hash nodes at plan time, and have them reference the proper
subsidiary node. While other workarounds are possible, fixing this
correctly seems easy enough. It was a pretty ugly hack to have
ExecInitHashJoin put the expression into the already initialized
HashState, in the first place.

I decided to not just split inner/outer hashkeys inside
make_hashjoin(), but also to separate out hashoperators and
hashcollations at plan time. Otherwise we would have ended up having
two very similar loops, one at plan time and the other during executor
startup. The work seems to more appropriately belong to plan time,
anyway.

Reported-By: Nikita Glukhov, Alexander Korotkov
Author: Andres Freund
Reviewed-By: Tom Lane, in an earlier version
Discussion: https://postgr.es/m/CAPpHfdvGVegF_TKKRiBrSmatJL2dR9uwFCuR+teQ_8tEXU8mxg@mail.gmail.com
Backpatch: 12-
2019-08-02 00:02:49 -07:00
Tom Lane
408f759380 Fix pg_dump's handling of dependencies for custom opclasses.
Since pg_dump doesn't treat the member operators and functions of operator
classes/families (that is, the pg_amop and pg_amproc entries, not the
underlying operators/functions) as separate dumpable objects, it missed
their dependency information.  I think this was safe when the code was
designed, because the default object sorting rule emits operators and
functions before opclasses, and there were no dependency types that could
mess that up.  However, the introduction of range types in 9.2 broke it:
now a type can have a dependency on an opclass, allowing dependency rules
to push the opclass before the type and hence before custom operators.
Lacking any information showing that it shouldn't do so, pg_dump emitted
the objects in the wrong order.

Fix by teaching getDependencies() to translate pg_depend entries for
pg_amop/amproc rows to look like dependencies for their parent opfamily.

I added a regression test for this in HEAD/v12, but not further back;
life is too short to fight with 002_pg_dump.pl.

Per bug #15934 from Tom Gottfried.  Back-patch to all supported branches.

Discussion: https://postgr.es/m/15934-58b8c8ab7a09ea15@postgresql.org
2019-07-31 15:42:50 -04:00
Andres Freund
c4b7bb3cf1 Remove superfluous newlines in function prototypes.
These were introduced by pgindent due to fixes to broken
indentation (c.f. 8255c7a5ee). Previously the mis-indentation of
function prototypes was creatively used to reduce indentation in a few
places.

As that formatting only exists in master and REL_12_STABLE, it seems
better to fix it in both, rather than having some odd indentation in
v12 that somebody might copy for future patches or such.

Author: Andres Freund
Discussion: https://postgr.es/m/20190728013754.jwcbe5nfyt3533vx@alap3.anarazel.de
Backpatch: 12-
2019-07-31 00:07:09 -07:00
Heikki Linnakangas
394f7500ae Allow table AM's to use rd_amcache, too.
The rd_amcache allows an index AM to cache arbitrary information in a
relcache entry. This commit moves the cleanup of rd_amcache so that it
can also be used by table AMs. Nothing takes advantage of that yet, but
I'm sure it'll come handy for anyone writing new table AMs.

Backpatch to v12, where table AM interface was introduced.

Reviewed-by: Julien Rouhaud
2019-07-30 21:43:58 +03:00
Heikki Linnakangas
718e313244 Print WAL position correctly in pg_rewind error message.
This has been wrong ever since pg_rewind was added. The if-branch just
above this, where we print the same error with an extra message supplied
by XLogReadRecord(), got this right, but the variable name was wrong in the
else-branch. As a consequence, the error printed the WAL position as
0/0 if there was an error reading a WAL file.

Backpatch to 9.5, where pg_rewind was added.
2019-07-30 21:14:34 +03:00
Tomas Vondra
e1947f6c3e Don't build extended statistics on inheritance trees
When performing ANALYZE on inheritance trees, we collect two samples for
each relation - one for the relation alone, and one for the inheritance
subtree (relation and its child relations). And then we build statistics
on each sample, so for each relation we get two sets of statistics.

For regular (per-column) statistics this works fine, because the catalog
includes a flag differentiating statistics built from those two samples.
But we don't have such a flag in the extended statistics catalogs, and we
ended up updating the same row twice, triggering this error:

  ERROR:  tuple already updated by self

The simplest solution is to disable extended statistics on inheritance
trees, which is what this commit does. In the future we may need to do
something similar to what per-column statistics do, but that requires adding
a flag to the catalog - and that's not backpatchable. Moreover, the current
selectivity estimation code only works with individual relations, so
building statistics on inheritance trees would be pointless anyway.

Author: Tomas Vondra
Backpatch-to: 10-
Discussion: https://postgr.es/m/20190618231233.GA27470@telsasoft.com
Reported-by: Justin Pryzby
2019-07-30 19:48:13 +02:00
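
A minimal sketch of the failing scenario fixed by e1947f6c3e (names
invented):

    CREATE TABLE parent (a int, b int);
    CREATE TABLE child () INHERITS (parent);
    CREATE STATISTICS parent_stats ON a, b FROM parent;
    INSERT INTO parent SELECT i % 10, i % 10 FROM generate_series(1, 1000) i;
    INSERT INTO child  SELECT i % 10, i % 10 FROM generate_series(1, 1000) i;
    ANALYZE parent;   -- formerly: ERROR:  tuple already updated by self
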
Tom Lane
d933816c04 Fix busted logic for parallel lock grouping in TopoSort().
A "break" statement erroneously left behind by commit a1c1af2a1
caused TopoSort to do the wrong thing if a lock's wait list
contained multiple members of the same locking group.

Because parallel workers don't normally need any locks not already
taken by their leader, this is very hard --- maybe impossible ---
to hit in production.  Still, if it did happen, the queries involved
in an otherwise-resolvable deadlock would block until canceled.

In addition to removing the bogus "break", add an Assert showing
that the conflicting uses of the beforeConstraints[] array (for both
counts and flags) don't overlap, and add some commentary explaining
why not; because it's not obvious without explanation, IMHO.

Original report and patch from Rui Hai Jiang; additional assert
and commentary by me.  Back-patch to 9.6 where the bug came in.

Discussion: https://postgr.es/m/CAEri+mLd3bpHLyW+a9pSe1y=aEkeuJpwBSwvo-+m4n7-ceRmXw@mail.gmail.com
2019-07-29 18:49:04 -04:00
Michael Paquier
28bbf7a81b Fix handling of expressions and predicates in REINDEX CONCURRENTLY
When copying the definition of an index rebuilt concurrently for the new
entry, the index information was taken directly from the old index using
the relation cache.  In this case, predicates and expressions have
some post-processing to prepare things for the planner, which loses some
information including the collations added in any of them.

This inconsistency can cause issues when attempting, for example, a table
rewrite, and makes the new indexes rebuilt concurrently inconsistent
with the old entries.

In order to fix the problem, fetch expressions and predicates directly
from the catalog of the old entry, and fill in IndexInfo for the new
index with that.  This makes the process more consistent with
DefineIndex(), and the code is refactored with the addition of a routine
to create an IndexInfo node.

Reported-by: Manuel Rigger
Author: Michael Paquier
Discussion: https://postgr.es/m/CA+u7OA5Hp0ra235F3czPom_FyAd-3+XwSJmX95r1+sRPOJc9VQ@mail.gmail.com
Backpatch-through: 12
2019-07-29 10:01:09 +09:00
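
To illustrate the kind of index definition affected by 28bbf7a81b, a sketch
with an index expression carrying an explicit collation (names invented):

    CREATE TABLE docs (title text);
    CREATE INDEX docs_title_idx ON docs (lower(title) COLLATE "C");
    -- the rebuilt index now keeps the collation attached to the expression
    REINDEX INDEX CONCURRENTLY docs_title_idx;
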
Thomas Munro
180825fe43 Avoid macro clash with LLVM 9.
Early previews of LLVM 9 reveal that our Min() macro causes compiler
errors in LLVM headers reached by the #include directives in
llvmjit_inline.cpp.  Let's just undefine it.  Per buildfarm animal
seawasp.  Back-patch to 11.

Reviewed-by: Fabien Coelho, Tom Lane
Discussion: https://postgr.es/m/20190606173216.GA6306%40alvherre.pgsql
2019-07-29 10:24:20 +12:00
Alvaro Herrera
5ac684381d Don't uselessly escape a string that doesn't need escaping
Per gripe from Ian Barwick

Co-authored-by: Ian Barwick <ian@2ndquadrant.com>
Discussion: https://postgr.es/m/CABvVfJWNnNKb8cHsTLhkTsvL1+G6BVcV+57+w1JZ61p8YGPdWQ@mail.gmail.com
2019-07-26 17:46:40 -04:00
Tom Lane
d095b2fe67 Tweak our special-case logic for the IANA "Factory" timezone.
pg_timezone_names() tries to avoid showing the "Factory" zone in
the view, mainly because that has traditionally had a very long
"abbreviation" such as "Local time zone must be set--see zic manual page",
so that showing it messes up psql's formatting of the whole view.
Since tzdb version 2016g, IANA instead uses the abbreviation "-00",
which is sane enough that there's no reason to discriminate against it.

On the other hand, it emerges that FreeBSD and possibly other packagers
are so wedded to backwards compatibility that they hack the IANA data
to keep the old spelling --- and not just that old spelling, but even
older spellings that IANA used back in the stone age.  This caused the
filter logic to fail to suppress "Factory" at all on such platforms,
though the formatting problem is definitely real in that case.

To solve both problems, get rid of the hard-wired assumption about
exactly what Factory's abbreviation is, and instead reject abbreviations
exceeding 31 characters.  This will allow Factory to appear in the view
if and only if it's using the modern abbreviation.

In passing, simplify the code we add to zic.c to support "zic -P"
to remove its now-obsolete hacks to not print the Factory zone's
abbreviation.  Unlike pg_timezone_names(), there's no reason for
that code to support old/nonstandard timezone data.

Since we generally prefer to keep timezone-related behavior the
same in all branches, and since this is arguably a bug fix,
back-patch to all supported branches.

Discussion: https://postgr.es/m/3961.1564086915@sss.pgh.pa.us
2019-07-26 13:07:08 -04:00
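
For illustration, how the Factory zone shows up after d095b2fe67:

    -- Factory is listed only when the installed tzdata uses the modern "-00"
    -- abbreviation; zones with abbreviations over 31 characters are hidden
    SELECT name, abbrev FROM pg_timezone_names WHERE name = 'Factory';
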
Tom Lane
e31dfe99c8 Avoid choosing "localtime" or "posixrules" as TimeZone during initdb.
Some platforms create a file named "localtime" in the system
timezone directory, making it a copy or link to the active time
zone file.  If Postgres is built with --with-system-tzdata, initdb
will see that file as an exact match to localtime(3)'s behavior,
and it may decide that "localtime" is the most preferred spelling of
the active zone.  That's a very bad choice though, because it's
neither informative, nor portable, nor stable if someone changes
the system timezone setting.  Extend the preference logic added by
commit e3846a00c so that we will prefer any other zone file that
matches localtime's behavior over "localtime".

On the same logic, also discriminate against "posixrules", which
is another not-really-a-zone file that is often present in the
timezone directory.  (Since we install "posixrules" but not
"localtime", this change can affect the behavior of Postgres
with or without --with-system-tzdata.)

Note that this change doesn't prevent anyone from choosing these
pseudo-zones if they really want to (i.e., by setting TZ for initdb,
or modifying the timezone GUC later on).  It just prevents initdb
from preferring these zone names when there are multiple matches to
localtime's behavior.

Since we generally prefer to keep timezone-related behavior the
same in all branches, and since this is arguably a bug fix,
back-patch to all supported branches.

Discussion: https://postgr.es/m/CADT4RqCCnj6FKLisvT8tTPfTP4azPhhDFJqDF1JfBbOH5w4oyQ@mail.gmail.com
Discussion: https://postgr.es/m/27991.1560984458@sss.pgh.pa.us
2019-07-26 12:45:52 -04:00
Tom Lane
01e0538e8b Fix loss of fractional digits for large values in cash_numeric().
Money values exceeding about 18 digits (depending on lc_monetary)
could be inaccurately converted to numeric, due to select_div_scale()
deciding it didn't need to compute any fractional digits.  Force
its hand by setting the dscale of one division input to equal the
number of fractional digits we need.

In passing, rearrange the logic to not do useless work in locales
where money values are considered integral.

Per bug #15925 from Slawomir Chodnicki.  Back-patch to all supported
branches.

Discussion: https://postgr.es/m/15925-da9953e2674bb5c8@postgresql.org
2019-07-26 11:59:00 -04:00
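
A sketch of the conversion fixed by 01e0538e8b (the exact threshold depends
on lc_monetary; the value here is just an example):

    -- with enough digits before the decimal point, the cents used to be lost
    SELECT '12345678901234567.89'::money::numeric;
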
Thomas Munro
3964d3bce9 Fix LDAP test instability.
After starting slapd, wait until it can accept a connection before
beginning the real test work.  This avoids occasional test failures.
Back-patch to 11, where the LDAP tests arrived.

Author: Thomas Munro
Reviewed-by: Michael Paquier
Discussion: https://postgr.es/m/20190719033013.GI1859%40paquier.xyz
2019-07-26 10:09:31 +12:00
Andres Freund
c4944a93eb Add missing (COSTS OFF) to EXPLAIN added in previous commit.
Backpatch: 12-, like the previous commit
2019-07-25 14:51:57 -07:00
Andres Freund
8677c62eae Fix slot type handling for Agg nodes performing internal sorts.
Since 15d8f8312 we assert that - and since 7ef04e4d2c, 4da597edf1
rely on - the slot type for an expression's
ecxt_{outer,inner,scan}tuple not changing, unless explicitly flagged
as such. That allows us to either skip deforming (for a virtual tuple
slot) or optimize the code for JIT accelerated deforming
appropriately (for other known slot types).

This assumption was sometimes violated for grouping sets, when
nodeAgg.c internally uses tuplesorts, and the child node doesn't
return a TTSOpsMinimalTuple type slot. Detect that case, and flag that
the outer slot might not be "fixed".

It's probably worthwhile to optimize this further in the future, and
more granularly determine whether the slot is fixed. As we already
instantiate per-phase transition and equal expressions, we could
cheaply set the slot type appropriately for each phase.  But that's a
separate change from this bugfix.

This commit does include a very minor optimization by not creating a
slot for handling tuplesorts if no such sorts are
performed. Previously we created that slot unnecessarily in the common
case of computing all grouping sets via hashing. The code looked too
confusing without that, as the conditions for needing a sort slot and
flagging that the slot type isn't fixed, are the same.

Reported-By: Ashutosh Sharma
Author: Andres Freund
Discussion: https://postgr.es/m/CAE9k0PmNaMD2oHTEAhRyxnxpaDaYkuBYkLa1dpOpn=RS0iS2AQ@mail.gmail.com
Backpatch: 12-, where the bug was introduced in 15d8f8312
2019-07-25 14:29:26 -07:00
Tom Lane
b245370467 Fix syntax error in commit 20e99cddd.
Per buildfarm.
2019-07-25 14:42:08 -04:00
Tom Lane
c58cf97f2f Fix failures to ignore \r when reading Windows-style newlines.
libpq failed to ignore Windows-style newlines in connection service files.
This normally wasn't a problem on Windows itself, because fgets() would
convert \r\n to just \n.  But if libpq were running inside a program that
changes the default fopen mode to binary, it would see the \r's and think
they were data.  In any case, it's project policy to ignore \r in text
files unconditionally, because people sometimes try to use files with
DOS-style newlines on Unix machines, where the C library won't hide that
from us.

Hence, adjust parseServiceFile() to ignore \r as well as \n at the end of
the line.  In HEAD, go a little further and make it ignore all trailing
whitespace, to match what it's always done with leading whitespace.

In HEAD, also run around and fix up everyplace where we have
newline-chomping code to make all those places look consistent and
uniformly drop \r.  It is not clear whether any of those changes are
fixing live bugs.  Most of the non-cosmetic changes are in places that
are reading popen output, and the jury is still out as to whether popen
on Windows can return \r\n.  (The Windows-specific code in pipe_read_line
seems to think so, but our lack of support for this elsewhere suggests
maybe it's not a problem in practice.)  Hence, I desisted from applying
those changes to back branches, except in run_ssl_passphrase_command()
which is new enough and little-tested enough that we'd probably not have
heard about any problems there.

Tom Lane and Michael Paquier, per bug #15827 from Jorge Gustavo Rocha.
Back-patch the parseServiceFile() change to all supported branches,
and the run_ssl_passphrase_command() change to v11 where that was added.

Discussion: https://postgr.es/m/15827-e6ba53a3a7ed543c@postgresql.org
2019-07-25 12:11:18 -04:00
Andrew Dunstan
6e5417d773 Honor MSVC WindowsSDKVersion if set
Add a line to the project file setting the target SDK. Otherwise, in
VS2017 for example, if the default but optional 8.1 SDK is not installed,
the build will fail.

Patch from Peifeng Qiu, slightly edited by me.

Discussion: https://postgr.es/m/CABmtVJhw1boP_bd4=b3Qv5YnqEdL696NtHFi2ruiyQ6mFHkeQQ@mail.gmail.com

Backpatch to all live branches.
2019-07-25 11:39:26 -04:00
Andres Freund
f9257cffef Fix system column accesses in ON CONFLICT ... RETURNING.
After 277cb78983 ON CONFLICT ... SET ... RETURNING failed with
ERROR:  virtual tuple table slot does not have system attributes
when taking the update path, as the slot used to insert into the
table (and then process RETURNING) was defined to be a virtual slot in
that commit. Virtual slots don't support system columns except for
tableoid and ctid, as the other system columns are AM dependent.

Fix that by using a slot of the table's type. Add tests for system
column accesses in ON CONFLICT ...  RETURNING.

Reported-By: Roby, bisected to the relevant commit by Jeff Janes
Author: Andres Freund
Discussion: https://postgr.es/m/73436355-6432-49B1-92ED-1FE4F7E7E100@finefun.com.au
Backpatch: 12-, where the bug was introduced in 277cb78983
2019-07-24 18:46:20 -07:00
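
A minimal sketch of the failing case fixed by f9257cffef (table name
invented):

    CREATE TABLE kv (k int PRIMARY KEY, v int);
    INSERT INTO kv VALUES (1, 1);
    INSERT INTO kv VALUES (1, 2)
        ON CONFLICT (k) DO UPDATE SET v = EXCLUDED.v
        RETURNING ctid, xmin;
    -- formerly: ERROR:  virtual tuple table slot does not have system attributes
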
Michael Paquier
24a6b6af65 Fix failure with pgperlcritic from the TAP test of synchronous replication
Oversight in 7d81bdc, which introduced a new Perl routine lacking a
return clause.  Per buildfarm member crake.

Backpatch down to 9.6 like its parent.

Reported-by: Andrew Dunstan
Discussion: https://postgr.es/m/16da29fa-d504-1380-7095-40de586dc038@2ndQuadrant.com
Backpatch-through: 9.6
2019-07-25 07:55:28 +09:00
Tom Lane
d16e514bef Fix infelicities in describeOneTableDetails' partitioned-table handling.
describeOneTableDetails issued a partition-constraint-fetching query
for every table, even ones it knows perfectly well are not partitions.

To add insult to injury, it then proceeded to leak the empty PGresult
if the table wasn't a partition.  Doing that a lot of times might
amount to a meaningful leak, so this seems like a back-patchable bug.

Fix that, and also fix a related PGresult leak in the partition-parent
case (though that leak would occur only if we got no row, which is
unexpected).

Minor code beautification too, to make this code look more like the
pre-existing code around it.

Back-patch the whole change into v12.  However, the fact that we already
know whether the table is a partition dates only to commit 1af25ca0c;
back-patching the relevant changes from that is probably more churn
than is justified in released branches.  Hence, in v11 and v10, just
do the minimum to fix the PGresult leaks.

Noted while messing around with adjacent code for yesterday's \d
improvements.
2019-07-24 18:14:45 -04:00
Heikki Linnakangas
fb5344c969 Use full 64-bit XID for checking if a deleted GiST page is old enough.
Otherwise, after a deleted page gets even older, it becomes unrecyclable
again. B-tree has the same problem, and has had since time immemorial,
but let's at least fix this in GiST, where this is new.

Backpatch to v12, where GiST page deletion was introduced.

Reviewed-by: Andrey Borodin
Discussion: https://www.postgresql.org/message-id/835A15A5-F1B4-4446-A711-BF48357EB602%40yandex-team.ru
2019-07-24 20:25:22 +03:00
Heikki Linnakangas
e2e992c931 Refactor checks for deleted GiST pages.
The explicit check in gistScanPage() isn't currently really necessary, as
a deleted page is always empty, so the loop would fall through without
doing anything, anyway. But it's a marginal optimization, and it gives a
nice place to attach a comment to explain how it works.

Backpatch to v12, where GiST page deletion was introduced.

Reviewed-by: Andrey Borodin
Discussion: https://www.postgresql.org/message-id/835A15A5-F1B4-4446-A711-BF48357EB602%40yandex-team.ru
2019-07-24 20:25:21 +03:00
Andrew Dunstan
bfa4263e71 Don't assume expr is available in pgbench tests
Windows hosts do not normally come with expr, so instead of using that
to test the \setshell command, use echo instead, which is fairly
universally available.

Backpatch to release 11, where this came in.

Problem found by me, patch by Fabien Coelho.
2019-07-24 11:47:58 -04:00
Michael Paquier
0ceb28da06 Improve stability of TAP test for synchronous replication
Slow buildfarm machines have run into issues with this TAP test caused
by a race condition related to the startup of a set of standbys, where
it is possible to finish with an unexpected order in the WAL sender
array of the primary.

This closes the race condition by making sure that any standby started
is registered into the WAL sender array of the primary before starting
the next one based on lookups of pg_stat_replication.

Backpatch down to 9.6 where the test has been introduced.

Author: Michael Paquier
Reviewed-by: Álvaro Herrera, Noah Misch
Discussion: https://postgr.es/m/20190617055145.GB18917@paquier.xyz
Backpatch-through: 9.6
2019-07-24 10:54:20 +09:00
Alvaro Herrera
3ffe655c0a Check that partitions are not in use when dropping constraints
If the user creates a deferred constraint in a partition, and in a
transaction they cause the constraint's trigger execution to be deferred
until commit time *and* drop the constraint, then when commit time comes
the queued trigger will fail to run because the trigger object will have
been dropped.

This happens because, when a constraint gets dropped in a
partitioned table, the recursion to drop the ones in partitions is done
by the dependency mechanism, not by ALTER TABLE traversing the recursion
tree as in all other cases.  In the non-partitioned case, this problem
is avoided by checking that the table is not "in use" by alter-table;
other alter-table subcommands that recurse to partitions do that check
for each partition.  But the dependency mechanism doesn't have a way to
do that.  Fix the problem by applying the same check to all partitions
during ALTER TABLE's "prep" phase, which correctly raises the necessary
error.

Reported-by: Rajkumar Raghuwanshi <rajkumar.raghuwanshi@enterprisedb.com>
Discussion: https://postgr.es/m/CAKcux6nZiO9-eEpr1ZD84bT1mBoVmeZkfont8iSpcmYrjhGWgA@mail.gmail.com
2019-07-23 17:22:15 -04:00
Tom Lane
79e573fa49 Install dependencies to prevent dropping partition key columns.
The logic in ATExecDropColumn that rejects dropping partition key
columns is quite an inadequate defense, because it doesn't execute
in cases where a column needs to be dropped due to cascade from
something that only the column, not the whole partitioned table,
depends on.  That leaves us with a badly broken partitioned table;
even an attempt to load its relcache entry will fail.

We really need to have explicit pg_depend entries that show that the
column can't be dropped without dropping the whole table.  Hence,
add those entries.  In v12 and HEAD, bump catversion to ensure that
partitioned tables will have such entries.  We can't do that in
released branches of course, so in v10 and v11 this patch affords
protection only to partitioned tables created after the patch is
installed.  Given the lack of field complaints (this bug was found
by fuzz-testing not by end users), that's probably good enough.

In passing, fix ATExecDropColumn and ATPrepAlterColumnType
messages to be more specific about which partition key column
they're complaining about.

Per report from Manuel Rigger.  Back-patch to v10 where partitioned
tables were added.

Discussion: https://postgr.es/m/CA+u7OA4JKCPFrdrAbOs7XBiCyD61XJxeNav4LefkSmBLQ-Vobg@mail.gmail.com
Discussion: https://postgr.es/m/31920.1562526703@sss.pgh.pa.us
2019-07-22 14:55:23 -04:00
Tomas Vondra
5a8ee9f2db Use column collation for extended statistics
The current extended statistics code was a bit confused about which
collation to use.  When building the statistics, the collations defined as default
for the data types were used (since commit 5e0928005).  The MCV code was
however using the column collations for MCV serialization, and then
DEFAULT_COLLATION_OID when computing estimates. So overall the code was
using all three possible options, inconsistently.

This uses the column collation everywhere, making it consistent with
what 5e0928005 did for regular stats.  We however do not track the
collations in a catalog, because we can derive them from column-level
information.  This may need to change in the future, e.g. after allowing
statistics on expressions.

Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/8736jdhbhc.fsf%40ansel.ydns.eu
Backpatch-to: 12
2019-07-20 16:35:15 +02:00
Tomas Vondra
fc4faea179 Rework examine_opclause_expression to use varonleft
The examine_opclause_expression function needs to return information on
which side of the operator we found the Var, but the variable was called
"isgt" which is rather misleading (it assumes the operator is either
less-than or greater-than, but it may be equality or something else).
Other places in the planner use a variable called "varonleft" for this
purpose, so just adopt the same convention here.

The code also assumed we don't care about this flag for equality, as
(Var = Const) and (Const = Var) should be the same thing. But that does
not work for cross-type operators, in which case we need to pass the
parameters to the procedure in the right order. So just use the same
code for all types of expressions.

This means we don't need to care about the selectivity estimation
function anymore, at least not in this code. We should only get the
supported cases here (thanks to statext_is_compatible_clause).

Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/8736jdhbhc.fsf%40ansel.ydns.eu
Backpatch-to: 12
2019-07-20 16:34:58 +02:00
Tom Lane
533522846b Silence compiler warning, hopefully.
Absorb commit e5e04c962a5d12eebbf867ca25905b3ccc34cbe0 from upstream
IANA code, in hopes of silencing warnings from MSVC about negating
a bool value.

Discussion: https://postgr.es/m/20190719035347.GJ1859@paquier.xyz
2019-07-19 14:49:05 -04:00
Jeff Davis
5245552ddb Fix error in commit e6feef57.
I was careless passing a datum directly to DATE_NOT_FINITE without
calling DatumGetDateADT() first.

Backpatch-through: 9.4
2019-07-18 17:26:56 -07:00
Jeff Davis
613eabcd51 Fix daterange canonicalization for +/- infinity.
The values 'infinity' and '-infinity' are a part of the DATE type
itself, so a bound of the date 'infinity' is not the same as an
unbounded/infinite range. However, it is still wrong to try to
canonicalize such values, because adding or subtracting one has no
effect. Fix by treating 'infinity' and '-infinity' the same as
unbounded ranges for the purposes of canonicalization (but not other
purposes).

Backpatch to all versions because it is inconsistent with the
documented behavior. Note that this could be an incompatibility for
applications relying on the behavior contrary to the documentation.

Author: Laurenz Albe
Reviewed-by: Thomas Munro
Discussion: https://postgr.es/m/77f24ea19ab802bc9bc60ddbb8977ee2d646aec1.camel%40cybertec.at
Backpatch-through: 9.4
2019-07-18 17:26:47 -07:00
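
For illustration, range constructors touched by the canonicalization change
in 613eabcd51 (dates invented):

    -- 'infinity' bounds are now treated like unbounded ends for
    -- canonicalization, so no attempt is made to add or subtract a day
    SELECT daterange('2019-01-01', 'infinity', '[]');
    SELECT daterange('-infinity', '2019-01-01', '(]');
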
Peter Geoghegan
7772dece98 Fix nbtree metapage cache upgrade bug.
Commit 857f9c36cd, which taught nbtree VACUUM to avoid unnecessary
index scans, bumped the nbtree version number from 2 to 3, while adding
the ability for nbtree indexes to be upgraded on-the-fly.  Various
assertions that assumed that an nbtree index was always on version 2 had
to be changed to accept any supported version (version 2 or 3 on
Postgres 11).

However, a few assertions were missed in the initial commit, all of
which were in code paths that cache a local copy of the metapage
metadata, where the index had been expected to be on the current version
(no longer version 2) as a generic sanity check.  Rather than simply
update the assertions, follow-up commit 0a64b45152 intentionally made
the metapage caching code update the per-backend cached metadata version
without changing the on-disk version at the same time.  This could even
happen when the planner needed to determine the height of a B-Tree for
costing purposes.  The assertions only fail on Postgres v12 when
upgrading from v10, because they were adjusted to use the authoritative
shared memory metapage by v12's commit dd299df8.

To fix, remove the cache-only upgrade mechanism entirely, and update the
assertions themselves to accept any supported version (go back to using
the cached version in v12).  The fix is almost a full revert of commit
0a64b45152 on the v11 branch.

VACUUM only considers the authoritative metapage, and never bothers with
a locally cached version, whereas everywhere else isn't interested in
the metapage fields that were added by commit 857f9c36cd.  It seems
unlikely that this bug has affected any user on v11.

Reported-By: Christoph Berg
Bug: #15896
Discussion: https://postgr.es/m/15896-5b25e260fdb0b081%40postgresql.org
Backpatch: 11-, where VACUUM was taught to avoid unnecessary index scans.
2019-07-18 13:22:54 -07:00
Tomas Vondra
79d3a1e52a Simplify bitmap updates in multivariate MCV code
When evaluating clauses on a multivariate MCV list, we build a bitmap
tracking how the clauses match each item of the MCV list.  When updating
the bitmap we need to consider the current value (tracking how the item
matches preceding clauses), the match for the current clause, and whether the
clauses are connected by AND or OR.

Until now the logic was copied in every place that updates the bitmap, which
was not very readable.  So just move it to a separate function and call
it where needed.

Backpatch to 12, where the code was introduced. While not a bugfix, this
should make maintenance and future backpatches easier.

Discussion: https://postgr.es/m/8736jdhbhc.fsf%40ansel.ydns.eu
2019-07-18 11:30:12 +02:00
Tomas Vondra
1c2acc32dc Fix handling of NULLs in MCV items and constants
There were two issues in how the extended statistics handled NULL values
in opclauses. Firstly, the code was oblivious to the possibility that
Const may be NULL (constisnull=true) in which case the constvalue is
undefined. We need to treat this as a mismatch, and not call the proc.

Secondly, the MCV item itself may contain NULL values too - the code
already did check that, and updated the match bitmap accordingly, but
failed to ensure we won't call the operator procedure anyway. It did
work for AND-clauses, because in that case false in the bitmap stops
evaluation of further clauses. But for OR-clauses it was possible to
get incorrect estimates or even trigger a crash.

This fixes both issues by extending the existing check so that it looks
at constisnull too, and by making sure it skips calling the procedure.

Discussion: https://postgr.es/m/8736jdhbhc.fsf%40ansel.ydns.eu
2019-07-18 11:30:12 +02:00
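
A sketch of the kind of clause combination addressed by 1c2acc32dc (names
invented; "a = NULL" is written deliberately, since it produces an opclause
whose constant is NULL rather than an IS NULL test):

    CREATE TABLE t2 (a int, b int);
    CREATE STATISTICS t2_mcv (mcv) ON a, b FROM t2;
    INSERT INTO t2 SELECT i % 10, i % 10 FROM generate_series(1, 1000) i;
    ANALYZE t2;
    -- OR-clauses could formerly reach the operator with an undefined constvalue
    EXPLAIN SELECT * FROM t2 WHERE a = NULL OR b = 1;
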
Tomas Vondra
42276976a1 Fix handling of opclauses in extended statistics
We expect opclauses to have exactly one Var and one Const, but the code
was checking the Const by calling is_pseudo_constant_clause() which is
incorrect - we need a proper constant.

Fixed by using plain IsA(x,Const) to check the type of the node. We need to
do these checks in two places, so move it into a separate function that
can be called in both places.

Reported by Andreas Seltenreich, based on a crash reported by sqlsmith.

Backpatch to v12, where this code was introduced.

Discussion: https://postgr.es/m/8736jdhbhc.fsf%40ansel.ydns.eu
Backpatch-to: 12
2019-07-18 11:30:12 +02:00
Tomas Vondra
3944e855bc Remove unnecessary TYPECACHE_GT_OPR lookup
The TYPECACHE_GT_OPR lookup is not needed (it used to be, in an older version
of the MCV code), but the compiler failed to detect this as the result was
used in a fmgr_info() call, populating a FmgrInfo entry.

Backpatch to v12, where this code was introduced.

Discussion: https://postgr.es/m/8736jdhbhc.fsf%40ansel.ydns.eu
Backpatch-to: 12
2019-07-18 11:30:12 +02:00
Andres Freund
fa7bb93453 tableam: comment improvements.
Author: Brad DeJong
Discussion: https://postgr.es/m/CAJnrtnxDYOQFsDfWz2iri0T_fFL2ZbbzgCOE=4yaMcszgcsf4A@mail.gmail.com
Backpatch: 12-
2019-07-17 19:51:17 -07:00
Tom Lane
d4f283162e Update time zone data files to tzdata release 2019b.
Brazil no longer observes DST.
Historical corrections for Palestine, Hong Kong, and Italy.
2019-07-17 19:15:35 -04:00
Tom Lane
59cea8f36b Sync our copy of the timezone library with IANA release tzcode2019b.
A large fraction of this diff is just due to upstream's somewhat
random decision to rename a bunch of internal variables and struct
fields.  However, there is an interesting new feature in zic:
it's grown a "-b slim" option that emits zone files without 32-bit
data and other backwards-compatibility hacks.  We should consider
whether we wish to enable that.
2019-07-17 18:26:23 -04:00
Tom Lane
e0d13dc6fd Fix thinko in construction of old_conpfeqop list.
This should lappend the OIDs, not lcons them; the existing code produced
a list in reversed order.  This is harmless for single-key FKs or FKs
where all the key columns are of the same type, which probably explains
how it went unnoticed.  But if those conditions are not met,
ATAddForeignKeyConstraint would make the wrong decision about whether an
existing FK needs to be revalidated.  I think it would almost always err
in the safe direction by revalidating a constraint that didn't need it.
You could imagine scenarios where the pfeqop check was fooled by
swapping the types of two FK columns in one ALTER TABLE, but that case
would probably be rejected by other tests, so it might be impossible to
get to the worst-case scenario where an FK should be revalidated and
isn't.  (And even then, it's likely to be fine, unless there are weird
inconsistencies in the equality behavior of the replacement types.)
However, this is a performance bug at least.

Noted while poking around to see whether lcons calls could be converted
to lappend.

This bug is old, dating to commit cb3a7c2b9, so back-patch to all
supported branches.
2019-07-16 18:17:47 -04:00
Peter Geoghegan
ac37bd51a4 Correct nbtsplitloc.c comment.
The logic just added by commit e3899ffd falls back on a 50:50 page split
in the event of a new item that's just to the right of our provisional
"many duplicates" split point.  Fix a comment that incorrectly claimed
that the new item had to be just to the left of our provisional split
point.

Backpatch: 12-, just like commit e3899ffd.
2019-07-15 14:35:05 -07:00
Peter Geoghegan
2f23453e49 Fix pathological nbtree split point choice issue.
Specific ever-decreasing insertion patterns could cause successive
unbalanced nbtree page splits.  Problem cases involve a large group of
duplicates to the left, and ever-decreasing insertions to the right.

To fix, detect the situation by considering the newitem offset before
performing a split using nbtsplitloc.c's "many duplicates" strategy.  If
the new item was inserted just to the right of our provisional "many
duplicates" split point, infer ever-decreasing insertions and fall back
on a 50:50 (space delta optimal) split.  This seems to barely affect
cases that already had acceptable space utilization.

An alternative fix also seems possible.  Instead of changing
nbtsplitloc.c split choice logic, we could instead teach _bt_truncate()
to generate a new value for new high keys by interpolating from the
lastleft and firstright key values.  That would certainly be a more
elegant fix, but it isn't suitable for backpatching.

Discussion: https://postgr.es/m/CAH2-WznCNvhZpxa__GqAa1fgQ9uYdVc=_apArkW2nc-K3O7_NA@mail.gmail.com
Backpatch: 12-, where the nbtree page split enhancements were introduced.
2019-07-15 13:19:12 -07:00
Noah Misch
dffb891d78 Revive test of concurrent OID generation.
Commit 578b229718 replaced it with a
concurrent "nextval" test.  That version does not detect PostgreSQL's
incompatibility with xlc 13.1.3, so bring back an OID-based test that
does.  Back-patch to v12, where that commit first appeared.

Discussion: https://postgr.es/m/20190707170035.GA1485546@rfd.leadboat.com
2019-07-13 13:34:38 -07:00
Tom Lane
cee976c4e8 Fix get_actual_variable_range() to cope with broken HOT chains.
Commit 3ca930fc3 modified get_actual_variable_range() to use a new
"SnapshotNonVacuumable" snapshot type for selecting tuples that it
would consider valid.  However, because that snapshot type can accept
recently-dead tuples, this caused a bug when using a recently-created
index: we might accept a recently-dead tuple that is an early member
of a broken HOT chain and does not actually match the index entry.
Then, the data extracted from the heap tuple would not necessarily be
an endpoint value of the column; it could even be NULL, leading to
get_actual_variable_range() itself reporting "found unexpected null
value in index".  Even without an error, this could lead to poor
plan choices due to an erroneous notion of the endpoint value.

We can improve matters by changing the code to use the index-only
scan technique (which didn't exist when get_actual_variable_range was
originally written).  If any of the tuples in a HOT chain are live
enough to satisfy SnapshotNonVacuumable, we take the data from the
index entry, ignoring what is in the heap.  This fixes the problem
without changing the live-vs-dead-tuple behavior from what was
intended by commit 3ca930fc3.

A side benefit is that for static tables we might not have to touch
the heap at all (when the extremal value is in an all-visible page).
In addition, we can save some overhead by not having to create a
complete ExecutorState, and we don't need to run FormIndexDatum,
avoiding more cycles as well as the possibility of failure for
indexes on expressions.  (I'm not sure that this code would ever
be used to determine the extreme value of an expression, in the
current state of the planner; but it's definitely possible that
lower-order columns of the selected index could be expressions.
So one could construct perhaps-artificial examples in which the
old code unexpectedly failed due to trying to compute an
expression's value for a now-dead row.)

Per report from Manuel Rigger.  Back-patch to v11 where commit
3ca930fc3 came in.

Discussion: https://postgr.es/m/CA+u7OA7W4NWEhCvftdV6_8bbm2vgypi5nuxfnSEJQqVKFSUoMg@mail.gmail.com
2019-07-12 16:24:59 -04:00
David Rowley
30a268ed2e Fix RANGE partition pruning with multiple boolean partition keys
match_clause_to_partition_key would incorrectly return
PARTCLAUSE_UNSUPPORTED if a bool qual could not be matched to the current
partition key.  This was a problem, as it caused the calling function to
discard the qual and not try to match it to any other partition key.  If
there was another partition key which did match this qual, then the qual
would not be checked again and we could fail to prune some partitions.

The worst this could do was to cause partitions not to be pruned when they
could have been, so there was no danger of incorrect query results here.

Fix this by changing match_boolean_partition_clause to have it return a
PartClauseMatchStatus rather than a boolean value.  This allows it to
communicate whether the qual is unsupported or just does not match this
particular partition key; previously these two cases were treated the
same.  Now, if match_clause_to_partition_key is unable to match the qual
to any other qual type then we can simply return the value from the
match_boolean_partition_clause call so that the calling function properly
treats the qual as either unmatched or unsupported.

Reported-by: Rares Salcudean
Reviewed-by: Amit Langote
Backpatch-through: 11 where partition pruning was introduced
Discussion: https://postgr.es/m/CAHp_FN2xwEznH6oyS0hNTuUUZKp5PvegcVv=Co6nBXJ+mC7Y5w@mail.gmail.com
2019-07-12 19:11:45 +12:00
Michael Paquier
4d71359591 Fix variable initialization when using buffering build with GiST
This can cause valgrind to complain, as the flag marking a buffer as a
temporary copy was not getting initialized.

While at it, fill newly-created buffer pages with zeros.  This does
not matter when loading a block from a temporary file, but it makes
pushing an index tuple into a new buffer page safer.

This has been introduced by 1d27dcf, so backpatch all the way down to
9.4.

Author: Alexander Lakhin
Discussion: https://postgr.es/m/15899-0d24fb273b3dd90c@postgresql.org
Backpatch-through: 9.4
2019-07-10 15:15:03 +09:00
David Rowley
b8ef33b2d1 Fix missing calls to table_finish_bulk_insert during COPY, take 2
86b85044e abstracted calls to heap functions in COPY FROM to support a
generic table AM.  However, when performing a copy into a partitioned
table, this commit neglected to call table_finish_bulk_insert for each
partition.  Before 86b85044e, when we always called the heap functions,
there was no need to call heapam_finish_bulk_insert for partitions since
it only did any work when performing a copy without WAL.  For partitioned
tables, this was unsupported anyway, so there was no issue.  With
pluggable storage, we can't make any assumptions about what the table AM
might want to do in its equivalent function, so we'd better ensure we
always call table_finish_bulk_insert for each partition that's received a row.

For now, we make the table_finish_bulk_insert call whenever we evict a
CopyMultiInsertBuffer out of the CopyMultiInsertInfo.  This does mean
that it's possible that we call table_finish_bulk_insert multiple times
per partition, which is not a problem other than being an inefficiency.
Improving this requires a more invasive patch, so let's leave that for
another day.

This also changes things so that we no longer needlessly call
table_finish_bulk_insert when performing a COPY FROM for a non-partitioned
table when not using multi-inserts.

Reported-by: Robert Haas
Backpatch-through: 12
Discussion: https://postgr.es/m/CA+TgmoYK=6BpxiJ0tN-p9wtH0BTAfbdxzHhwou0mdud4+BkYuQ@mail.gmail.com
2019-07-10 16:02:18 +12:00
Amit Kapila
e33a1435be Fix few typos and minor word smithing in tableam comments.
Reported-by: Ashwin Agrawal
Author: Ashwin Agrawal
Reviewed-by: Amit Kapila
Backpatch-through: 12, where it was introduced
Discussion: https://postgr.es/m/CALfoeisgdZhYDrJOukaBzvXfJOK2FQ0szVMK7dzmcy6w93iDUA@mail.gmail.com
2019-07-10 07:59:51 +05:30
Thomas Munro
abbb2e1432 Pass QueryEnvironment down to EvalPlanQual's EState.
Otherwise the executor can't see trigger transition tables during
EPQ evaluation.  Fixes bug #15900 and almost certainly also #15720.
Back-patch to 10, where trigger transition tables landed.

Author: Alex Aktsipetrov
Reviewed-by: Thomas Munro, Tom Lane
Discussion: https://postgr.es/m/15900-bc482754fe8d7415%40postgresql.org
Discussion: https://postgr.es/m/15720-38c2b29e5d720187%40postgresql.org
2019-07-10 10:15:43 +12:00
Alvaro Herrera
ae0a2945b4 Propagate trigger arguments to partitions
We were creating the cloned triggers with an empty list of arguments,
losing the ones that had been specified by the user when creating the
trigger in the partitioned table.  Repair.

This was forgotten in commit 86f575948c.

Author: Patrick McHardy
Reviewed-by: Tomas Vondra
Discussion: https://postgr.es/m/20190709130027.amr2cavjvo7rdvac@access1.trash.net
Discussion: https://postgr.es/m/15752-123bc90287986de4@postgresql.org
2019-07-09 17:16:36 -04:00
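
A minimal sketch of the behavior fixed by ae0a2945b4 (names invented):

    CREATE TABLE parted (a int) PARTITION BY LIST (a);
    CREATE TABLE parted_1 PARTITION OF parted FOR VALUES IN (1);
    CREATE FUNCTION parted_trg_fn() RETURNS trigger LANGUAGE plpgsql AS
        $$ BEGIN RAISE NOTICE 'arg: %', TG_ARGV[0]; RETURN NULL; END $$;
    CREATE TRIGGER parted_trg AFTER INSERT ON parted
        FOR EACH ROW EXECUTE FUNCTION parted_trg_fn('some_arg');
    -- the cloned trigger on parted_1 now also receives 'some_arg' in TG_ARGV
    INSERT INTO parted VALUES (1);
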
Peter Eisentraut
48d6d5a2af Message style improvements 2019-07-09 15:57:58 +02:00
Thomas Munro
5526baf549 Force hash joins to be enabled in the hash join regression tests.
Otherwise the regressplans.sh tests generate extremely slow nested
loop joins.  Back-patch to 11 where the hash join tests came in.

Reported-by: Michael Paquier
Discussion: https://postgr.es/m/20190708055256.GB2709%40paquier.xyz
2019-07-09 18:30:37 +12:00
Michael Meskes
58006f8041 Fix small memory leak in ecpglib when ecpg_update_declare_statement() is called
a second time.

Author: "Zhang, Jie" <zhangjie2@cn.fujitsu.com>
2019-07-08 08:30:31 +02:00
Tom Lane
cf665ad4c8 In pg_log_generic(), be more paranoid about preserving errno.
This code failed to account for the possibility that malloc() would
change errno, resulting in wrong output for %m, not to mention the
possibility of message truncation.  Such a change is obviously
expected when malloc fails, but there's reason to fear that on some
platforms even a successful malloc call can modify errno.

Discussion: https://postgr.es/m/2576.1527382833@sss.pgh.pa.us
2019-07-06 11:25:48 -04:00
Peter Eisentraut
c8e0997239 Add missing source files to nls.mk 2019-07-06 15:02:53 +02:00
Peter Eisentraut
d8ec5ebaea psql: Fix logging output format
In normal interactive mode, psql's log messages accidentally got a
"psql:" prefix that was not supposed to be there.  This only happened
if there was no .psqlrc file being read, so it wasn't discovered for a
while.  Fix this by adding the appropriate logging format
configuration call in the right code path.

Discussion: https://www.postgresql.org/message-id/7586.1560540361@sss.pgh.pa.us
2019-07-06 14:58:08 +02:00
Amit Kapila
e5eba10d79 Add missing assertions for required table am callbacks.
Reported-by: Ashwin Agrawal
Author: Ashwin Agrawal
Reviewed-by: Amit Kapila
Backpatch-through: 12, where it was introduced
Discussion: https://postgr.es/m/CALfoeisgdZhYDrJOukaBzvXfJOK2FQ0szVMK7dzmcy6w93iDUA@mail.gmail.com
2019-07-06 11:45:39 +05:30
Tomas Vondra
a5dc37866a Remove unused variable in statext_mcv_serialize()
The itemlen variable used to be referenced in multiple places, but since
reworking the serialization code it's used only in one assert. Fixed by
removing the variable and calling the macro from the assert directly.

Backpatch to 12, where this code was introduced.

Reported-by: Jeff Janes
Discussion: https://postgr.es/m/CAMkU=1zc_ovH9NZd_9ovuiEWkF9yX06URUDdXCmgDydf-bqB5A@mail.gmail.com
2019-07-05 18:52:31 +02:00
Tomas Vondra
e74d141d0d Simplify pg_mcv_list (de)serialization
The serialization format of multivariate MCV lists included alignment in
order to allow direct access to part of the serialized data, but despite
multiple fixes (see for example commits d85e0f366a and ea4e1c0e8f) this
proved to be problematic.

This commit abandons alignment in the serialized format, and just copies
everything during deserialization.  We now also track amount of memory
needed after deserialization (including alignment), which allows us to
deserialize the MCV list in a single pass.

Bump catversion, as this affects contents of pg_statistic_ext_data.

Backpatch to 12, where multi-column MCV lists were introduced.

Author: Tomas Vondra
Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/2201.1561521148@sss.pgh.pa.us
2019-07-05 16:45:05 +02:00
Tomas Vondra
cc052b423a Fix pg_mcv_list_items() to produce text[]
The function pg_mcv_list_items() returns values stored in MCV items. The
items may contain columns with different data types, so the function was
generating text array-like representation, but in an ad-hoc way without
properly escaping various characters etc.

Fixed by simply building a text[] array, which also makes it easier to
use from queries etc.
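
For example, a query along these lines (the statistics object name 'stts'
is hypothetical) now gets the MCV values back as a text[] column:

SELECT m.*
  FROM pg_statistic_ext s
  JOIN pg_statistic_ext_data d ON d.stxoid = s.oid,
       pg_mcv_list_items(d.stxdmcv) m
 WHERE s.stxname = 'stts';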

Requires changes to pg_proc entry, so bump catversion.

Backpatch to 12, where multi-column MCV lists were introduced.

Author: Tomas Vondra
Reviewed-by: Dean Rasheed
Discussion: https://postgr.es/m/20190618205920.qtlzcu73whfpfqne@development
2019-07-05 16:18:10 +02:00
Tomas Vondra
57f459cf6c Speed-up build of MCV lists with many distinct values
When building multi-column MCV lists, we compute base frequency for each
item, i.e. a product of per-column frequencies for values from the item.
As a value may be in multiple groups, the code was scanning the whole
array of groups while adding items to the MCV list.  This works fine as
long as the number of distinct groups is small, but it's easy to trigger
O(N^2) behavior, especially after increasing the statistics target.

This commit precomputes frequencies for values in all columns, so that
when computing the base frequency it's enough to make a simple bsearch
lookup in the array.

Backpatch to 12, where multi-column MCV lists were introduced.

Discussion: https://postgr.es/m/20190618205920.qtlzcu73whfpfqne@development
2019-07-05 16:00:12 +02:00
Tom Lane
1dd8cf1b46 Ensure plpgsql result tuples have the right composite type marking.
A function that is declared to return a named composite type must
return tuple datums that are physically marked as having that type.
The plpgsql code path that allowed directly returning an expanded-record
datum forgot to check that, so that an expanded record marked as type
RECORDOID could be returned if it had a physically-compatible tupdesc.
This'd be harmless, I think, if the record value never escaped the
current session --- but it's possible for it to get stored into a table,
and then subsequent sessions can't interpret the anonymous record type.

Fix by flattening the record into a tuple datum and overwriting its
type/typmod fields, if its declared type doesn't match the function's
declared type.  (In principle it might be possible to just change the
expanded record's stored type ID info, but there are enough tricky
consequences that I didn't want to mess with that, especially not in
a back-patched bug fix.)
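
A hedged sketch of the shape of function affected (type and names are
hypothetical): a plpgsql function declared to return a named composite type
while internally working with a record variable, whose result may then be
stored in a table:

CREATE TYPE two_ints AS (a int, b int);
CREATE FUNCTION make_two_ints() RETURNS two_ints LANGUAGE plpgsql AS $$
DECLARE r record;
BEGIN
  r := ROW(1, 2);
  RETURN r;  -- the result tuple must be marked as two_ints, not RECORD
END$$;
-- storing the result means later sessions must be able to interpret it
CREATE TABLE stored AS SELECT make_two_ints() AS val;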

Per bug report from Steve Rogerson.  Back-patch to v11 where the bug
was introduced.

Discussion: https://postgr.es/m/cbaecae6-7b87-584e-45f6-4d047b92ca2a@yewtc.demon.co.uk
2019-07-03 18:08:53 -04:00
David Rowley
0cce43a716 Don't remove surplus columns from GROUP BY for inheritance parents
d4c3a156c added code to remove columns that were not part of a table's
PRIMARY KEY constraint from the GROUP BY clause when all the primary key
columns were present in the group by.  This is fine to do since we know
that there will only be one row per group coming from this relation.
However, the logic failed to consider inheritance parent relations.  These
can have child relations without a primary key, but even if they did, they
could duplicate one of the parent's rows or one from another child
relation.  In this case, those additional GROUP BY columns are required.

Fix this by disabling the optimization for inheritance parent tables.
In v11 and beyond, partitioned tables are fine since partitions cannot
overlap and before v11 partitioned tables could not have a primary key.
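
A hedged sketch of the kind of query affected (names are hypothetical): the
parent has a primary key but the child does not, so "payload" must be kept
in the GROUP BY when the whole inheritance tree is scanned:

CREATE TABLE parent (id int PRIMARY KEY, payload text);
CREATE TABLE child (CHECK (id > 1000)) INHERITS (parent);
SELECT id, payload FROM parent GROUP BY id, payload;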

Reported-by: Manuel Rigger
Discussion: http://postgr.es/m/CA+u7OA7VLKf_vEr6kLF3MnWSA9LToJYncgpNX2tQ-oWzYCBQAw@mail.gmail.com
Backpatch-through: 9.6
2019-07-03 23:45:25 +12:00
Michael Paquier
f2d069ce95 Add support for Visual Studio 2019 in build scripts
At the same time, this fixes a set of inconsistencies in the
documentation and the scripts related to the supported versions of the
Windows SDK.

Author: Haribabu Kommi
Reviewed-by: Andrew Dunstan, Juan José Santamaría Flecha, Michael
Paquier
Discussion: https://postgr.es/m/CAJrrPGcfqXhfPyMrny9apoDU7M1t59dzVAvoJ9AeAh5BJi+UzA@mail.gmail.com
Backpatch-through: 9.4
2019-07-03 08:57:22 +09:00
Peter Eisentraut
91be5387c2 Fix accidentally swapped error message arguments
Author: Alexey Kondratov <a.kondratov@postgrespro.ru>
2019-07-02 23:45:00 +01:00
Peter Eisentraut
7956d7ac98 Remove redundant newlines from error messages
These are no longer needed/allowed with the new logging API.
2019-07-02 23:28:42 +01:00
Tom Lane
a4ce93395e Don't treat complete_from_const as equivalent to complete_from_list.
Commit 4f3b38fe2 supposed that complete_from_const() is equivalent to
the one-element-list case of complete_from_list(), but that's not
really true at all.  complete_from_const() supposes that the completion
is certain enough to justify wiping out whatever the user typed, while
complete_from_list() will only provide completions that match the
word-so-far.

In practice, given the lame parsing technology used by tab-complete.c,
it's fairly hard to believe that we're *ever* certain enough about
a completion to justify auto-correcting user input that doesn't match.

Hence, remove the inappropriate unification of the two cases.
As things now stand, complete_from_const() is used only for the
situation where we have no matches and we need to keep readline
from applying its default complete-with-file-names behavior.

This (mis?) behavior actually exists much further back, but
I'm hesitant to change it in released branches.  It's not too
late for v12, though, especially seeing that the aforesaid
commit is new in v12.

Per gripe from Ken Tanzer.

Discussion: https://postgr.es/m/CAD3a31XpXzrZA9TT3BqLSHghdTK+=cXjNCE+oL2Zn4+oWoc=qA@mail.gmail.com
2019-07-02 14:04:42 -04:00
Tom Lane
4f3648de2a Fix tab completion of "SET variable TO|=" to not offer bogus completions.
Don't think that the context "UPDATE tab SET var =" is a GUC-setting
command.

If we have "SET var =" but the "var" is not a known GUC variable,
don't offer any completions.  The most likely explanation is that
we've misparsed the context and it's not really a GUC-setting command.

Per gripe from Ken Tanzer.  Back-patch to 9.6.  The issue exists
further back, but before 9.6 the code looks very different and it
doesn't actually know whether the "var" name matches anything,
so I desisted from trying to fix it.

Discussion: https://postgr.es/m/CAD3a31XpXzrZA9TT3BqLSHghdTK+=cXjNCE+oL2Zn4+oWoc=qA@mail.gmail.com
2019-07-02 13:35:14 -04:00
Tom Lane
d4b090f54f Revert "Insert temporary debugging output in regression tests."
This reverts commit f03a9ca436,
in the v12 branch only.  We don't want to ship v12 with that,
since it causes occasional test failures (as a result of statistics
transmission not being entirely reliable).

I'll leave it in HEAD though, in hopes that we'll eventually
capture an instance of the original problematic behavior.
2019-07-01 19:46:04 -04:00
Tom Lane
9e1c9f9594 pgindent run prior to branching v12.
pgperltidy and reformat-dat-files too, though the latter didn't
find anything to change.
2019-07-01 12:37:52 -04:00
David Rowley
f5db56fc4d Revert fix missing call to table_finish_bulk_insert during COPY
This reverts commits 4de60244e and b2d69806d. Further thought is
required to make this work properly.
2019-07-02 03:44:56 +12:00
David Rowley
b2d69806d8 Remove surplus call to table_finish_bulk_insert
4de60244e added the call to table_finish_bulk_insert to the
CopyMultiInsertBufferCleanup function.  We use a CopyMultiInsertBuffer even
for non-partitioned tables, so having the cleanup do that meant we would
call table_finish_bulk_insert twice when performing COPY FROM with
a non-partitioned table.

Here we can just remove the direct call in CopyFrom and let
CopyMultiInsertBufferCleanup handle the call instead.
2019-07-02 03:07:15 +12:00
David Rowley
4de60244e2 Fix missing call to table_finish_bulk_insert during COPY
86b85044e abstracted calls to heap functions in COPY FROM to support a
generic table AM.  However, when performing a copy into a partitioned
table, this commit neglected to call table_finish_bulk_insert for each
partition.  Before 86b85044e, when we always called the heap functions,
there was no need to call heapam_finish_bulk_insert for partitions since
it only did any work when performing a copy without WAL. For partitioned
tables, this was unsupported anyway, so there was no issue. With pluggable
storage, we can't make any assumptions about what the table AM might want
to do in its equivalent function, so we'd better ensure we always call
table_finish_bulk_insert for each partition that's received a row.

For now, we make the table_finish_bulk_insert call whenever we evict a
CopyMultiInsertBuffer out of the CopyMultiInsertInfo.  This does mean
that it's possible that we call table_finish_bulk_insert multiple times
per partition, which is not a problem other than being an inefficiency.
Improving this requires a more invasive patch, so let's leave that for
another day.

In passing, move the table_finish_bulk_insert for the target of the COPY
command so that it's only called when we're actually performing bulk
inserts.  We don't need to call this when inserting 1 row at a time.

Reported-by: Robert Haas
Discussion: https://postgr.es/m/CA+TgmoYK=6BpxiJ0tN-p9wtH0BTAfbdxzHhwou0mdud4+BkYuQ@mail.gmail.com
2019-07-02 01:23:26 +12:00
Peter Eisentraut
95bbe5d82e Convert some stragglers to new frontend logging API 2019-07-01 13:34:31 +02:00
Peter Eisentraut
1b29e990e3 Add missing serial commas 2019-07-01 13:07:14 +02:00
Michael Paquier
c74d49d41c Fix many typos and inconsistencies
Author: Alexander Lakhin
Discussion: https://postgr.es/m/af27d1b3-a128-9d62-46e0-88f424397f44@gmail.com
2019-07-01 10:00:23 +09:00
Noah Misch
459c3cdb4a Don't read fields of a misaligned ExpandedObjectHeader or AnyArrayType.
UBSan complains about this.  Instead, cast to a suitable type requiring
only 4-byte alignment.  DatumGetAnyArrayP() already assumes one can cast
between AnyArrayType and ArrayType, so this doesn't introduce a new
assumption.  Back-patch to 9.5, where AnyArrayType was introduced.

Reviewed by Tom Lane.

Discussion: https://postgr.es/m/20190629210334.GA1244217@rfd.leadboat.com
2019-06-30 17:34:17 -07:00
Andrew Gierth
da53be23d1 Repair logic for reordering grouping sets optimization.
The logic in reorder_grouping_sets to order grouping set elements to
match a pre-specified sort ordering was defective, resulting in
unnecessary sort nodes (though the query output would still be
correct). Repair, simplifying the code a little, and add a test.

Per report from Richard Guo, though I didn't use their patch. Original
bug seems to have been my fault.

Backpatch back to 9.5 where grouping sets were introduced.

Discussion: https://postgr.es/m/CAN_9JTzyjGcUjiBHxLsgqfk7PkdLGXiM=pwM+=ph2LsWw0WO1A@mail.gmail.com
2019-06-30 23:49:13 +01:00
Tom Lane
c000a47ad1 Exclude new src/test/modules/unsafe_tests directory from MSVC build.
There's nothing to build here, and that was confusing AddContrib().
Per buildfarm.
2019-06-30 14:05:24 -04:00
Tom Lane
681cca86f5 Blind attempt to fix SSPI-auth case in 010_dump_connstr.pl.
Up to now, pg_regress --config-auth had a hard-wired assumption
that the target cluster uses the default bootstrap superuser name.
pg_dump's 010_dump_connstr.pl TAP test uses non-default superuser
names, and was klugily getting around the restriction by listing
the desired superuser name as a role to "create".  This is pretty
confusing (or at least, it confused me).  Let's make it clearer by
allowing --config-auth mode to be told the bootstrap superuser name.
Repurpose the existing --user switch for that, since it has no
other function in --config-auth mode.

Per buildfarm.  I don't have an environment at hand in which I can
test this fix, but the buildfarm should soon show if it works.

Discussion: https://postgr.es/m/3142.1561840611@sss.pgh.pa.us
2019-06-30 13:34:45 -04:00
Tom Lane
c91504b958 Move rolenames test out of the core regression tests.
This test script is unsafe to run in "make installcheck" mode for
(at least) two reasons: it creates and destroys some role names
that don't follow the "regress_xxx" naming convention, and it
sets and then resets the application_name GUC attached to every
existing role.  While we've not had complaints, these surely are
not good things to do within a production installation, and
regress.sgml pretty clearly implies that we won't do them.

Rather than lose test coverage altogether, let's just move this
script somewhere where it will get run by "make check" but not
"make installcheck".  src/test/modules/ already has that property.

Since it seems likely that we'll want other regression tests in
future that also exceed the constraints of "make installcheck",
create a generically-named src/test/modules/unsafe_tests/
directory to hold them.

Discussion: https://postgr.es/m/16638.1468620817@sss.pgh.pa.us
2019-06-30 12:51:12 -04:00
Peter Eisentraut
2e810508f6 Fix breakage introduced in pg_lsn_in()
Using PG_RETURN_LSN() from non-fmgr pg_lsn_in_internal() happened to
work on some platforms, but should just be a plain "return".
2019-06-30 13:25:33 +02:00
Peter Eisentraut
21f428ebde Don't call data type input functions in GUC check hooks
Instead of calling pg_lsn_in() in check_recovery_target_lsn and
timestamptz_in() in check_recovery_target_time, reorganize the
respective code so that we don't raise any errors in the check hooks.
The previous code tried to use PG_TRY/PG_CATCH to handle errors in a
way that is not safe, so now the code contains no ereport() calls and
can operate safely within the GUC error handling system.

Moreover, since the interpretation of the recovery_target_time string
may depend on the time zone, we cannot do the final processing of that
string until all the GUC processing is done.  Instead,
check_recovery_target_time() now does some parsing for syntax
checking, but the actual conversion to a timestamptz value is done
later in the recovery code that uses it.

Reported-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://www.postgresql.org/message-id/flat/20190611061115.njjwkagvxp4qujhp%40alap3.anarazel.de
2019-06-30 10:27:43 +02:00
Peter Eisentraut
666cbae16d Remove explicit error handling for obsolete date/time values
The date/time values 'current', 'invalid', and 'undefined' were
removed a long time ago, but the code still contains explicit error
handling for the transition.  To simplify the code and avoid having to
handle these values everywhere, just remove the recognition of these
tokens altogether now.

Reviewed-by: Michael Paquier <michael@paquier.xyz>
2019-06-30 10:27:35 +02:00
Tom Lane
54100f5c60 Add an enforcement mechanism for global object names in regression tests.
In commit 18555b132 we tentatively established a rule that regression
tests should use names containing "regression" for databases, and names
starting with "regress_" for all other globally-visible object names, so
as to circumscribe the side-effects that "make installcheck" could have
on an existing installation.

This commit adds a simple enforcement mechanism for that rule: if the code
is compiled with ENFORCE_REGRESSION_TEST_NAME_RESTRICTIONS defined, it
will emit a warning (not an error) whenever a database, role, tablespace,
subscription, or replication origin name is created that doesn't obey the
rule.  Running one or more buildfarm members with that symbol defined
should be enough to catch new violations, at least in the regular
regression tests.  Most TAP tests wouldn't notice such warnings, but
that's actually fine because TAP tests don't execute against an existing
server anyway.

Since it's already the case that running src/test/modules/ tests in
installcheck mode is deprecated, we can use that as a home for tests
that seem unsafe to run against an existing server, such as tests that
might have side-effects on existing roles.  Document that (though this
commit doesn't in itself make it any less safe than before).

Update regress.sgml to define these restrictions more clearly, and
to clean up assorted lack-of-up-to-date-ness in its descriptions of
the available regression tests.

Discussion: https://postgr.es/m/16638.1468620817@sss.pgh.pa.us
2019-06-29 11:34:00 -04:00
Tom Lane
ca129e58c0 Fix regression tests to use only global names beginning with "regress_".
In commit 18555b132 we tentatively established a rule that regression
tests should use names containing "regression" for databases, and names
starting with "regress_" for all other globally-visible object names, so
as to circumscribe the side-effects that "make installcheck" could have on
an existing installation.  However, no enforcement mechanism was created,
so it's unsurprising that some new violations have crept in since then.

In fact, a whole new *category* of violations has crept in, to wit we now
also have globally-visible subscription and replication origin names, and
"make installcheck" could very easily clobber user-created objects of
those types.  So it's past time to do something about this.

This commit sanitizes the tests enough that they will pass (i.e. not
generate any visible warnings) with the enforcement mechanism I'll add
in the next commit.  There are some TAP tests that still trigger the
warnings, but the warnings do not cause test failure.  Since these tests
do not actually run against a pre-existing installation, there's no need
to worry whether they could conflict with user-created objects.

The problem with rolenames.sql testing special role names like "user"
is still there, and is dealt with only very cosmetically in this patch
(by hiding the warnings :-().  What we actually need to do to be safe is
to take that test script out of "make installcheck" altogether, but that
seems like material for a separate patch.

Discussion: https://postgr.es/m/16638.1468620817@sss.pgh.pa.us
2019-06-29 11:09:03 -04:00
Tom Lane
a1e61badf9 Disallow user-created replication origins named "pg_xxx".
Since we generate such names internally, it seems like a good idea
to have a policy of disallowing them for user use, as we do for many
other object types.  Otherwise attempts to use them will randomly
fail due to collisions with internally-generated names.
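
For instance (the origin names are made up):

-- still allowed
SELECT pg_replication_origin_create('my_origin');
-- now rejected, since the "pg_" prefix is reserved for internal use
SELECT pg_replication_origin_create('pg_custom');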

Discussion: https://postgr.es/m/3606.1561747369@sss.pgh.pa.us
2019-06-29 10:30:08 -04:00
Michael Paquier
c0faa72750 Remove unnecessary header from be-secure-gssapi.c
libpq/libpq-be.h is included by libpq/libpq.h so there is no need to
explicitly include it separately.

Author: Daniel Gustafsson
Reviewed-by: Julien Rouhaud
Discussion: https://postgr.es/m/A4852E46-9ED1-4861-A23B-22A83E34A084@yesql.se
2019-06-29 11:17:37 +09:00
Alvaro Herrera
23cccb17fe Fix for dropped columns in a partitioned table's default partition
We forgot to map column numbers to/from the default partition for
various operations, leading to valid cases failing with spurious
errors, such as
ERROR:  attribute N of type some_partition has been dropped

It was also possible that the search for conflicting rows in the default
partition when attaching another partition would fail to detect some.
Secondarily, it was also possible that such a search should be skipped
(because the constraint was implied) but wasn't.

Fix all this by mapping column numbers when necessary.
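
A hedged sketch of one way the attribute numbers can diverge (names are
hypothetical): the parent carries a dropped column, so a default partition
created afterwards uses different column numbers and needs mapping when
another partition is attached:

CREATE TABLE p (a int, junk int, k int) PARTITION BY LIST (k);
ALTER TABLE p DROP COLUMN junk;
CREATE TABLE p_def PARTITION OF p DEFAULT;
CREATE TABLE p1 (a int, k int);
-- attaching scans p_def for rows that would conflict with p1's bound
ALTER TABLE p ATTACH PARTITION p1 FOR VALUES IN (1);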

Reported by: Daniel Wilches
Author: Amit Langote
Discussion: https://postgr.es/m/15873-8c61945d6b3ef87c@postgresql.org
2019-06-28 14:51:08 -04:00
Thomas Munro
74b7cc8c02 Fix misleading comment in nodeIndexonlyscan.c.
The stated reason for acquiring predicate locks on heap pages hasn't
existed since commit c01262a8, so fix the comment.  Perhaps in a later
release we'll also be able to change the code to use tuple locks.

Back-patch all the way.

Reviewed-by: Ashwin Agrawal
Discussion: https://postgr.es/m/CAEepm%3D2GK3FVdnt5V3d%2Bh9njWipCv_fNL%3DwjxyUhzsF%3D0PcbNg%40mail.gmail.com
2019-06-28 17:13:08 +12:00
Tomas Vondra
69fd82fedd Update reference to sampling algorithm in analyze.c
Commit 83e176ec1 moved row sampling functions from analyze.c to
utils/misc/sampling.c, but failed to update a comment referring to
the sampling algorithm from Jeff Vitter's paper. Correct the
comment by pointing to utils/misc/sampling.c.

Author: Etsuro Fujita
Discussion: https://postgr.es/m/CAPmGK154gp%2BQd%3DcorQOv%2BPmbyVyZBjp_%2Bhb766UJeD1e_ie6XQ%40mail.gmail.com
2019-06-27 18:01:54 +02:00
Alvaro Herrera
050098b14e Fix use-after-free introduced in 55ed3defc9
Evidenced by failure under RELCACHE_FORCE_RELEASE (buildfarm member
prion).

Author: Amit Langote
Discussion: https://postgr.es/m/CA+HiwqGV=k_Eh4jBiQw66ivvdG+EUkrEYeHTYL1SvDj_YOYV0g@mail.gmail.com
2019-06-27 11:57:10 -04:00
Peter Eisentraut
f2f0082ef5 Update comment
Function was renamed/replaced in
c2fe139c20 but the header comment was
not updated.
2019-06-27 15:57:14 +02:00
Michael Paquier
322c5bfdc3 Remove remaining traces of Rand_OpenSSL() from the tree
fe0a0b5 has removed the last use of this routine from pgcrypto, leading
to a useless symbol definition and an extra configure check.

Author: Michael Paquier
Reviewed-by: Daniel Gustafsson, Tom Lane
Discussion: https://postgr.es/m/20190626142544.GN1714@paquier.xyz
2019-06-27 08:25:26 +09:00
Alvaro Herrera
55ed3defc9 Fix partitioned index creation with foreign partitions
When a partitioned table contains foreign tables as partitions, it is
not possible to implement unique or primary key indexes -- but when
regular indexes are created, there is no reason to do anything other
than ignoring such partitions.  We were raising errors upon encountering
the foreign partitions, which is unfriendly and doesn't protect against
any actual problems.

Relax this restriction so that index creation is allowed on partitioned
tables containing foreign partitions, becoming a no-op on them.  (We may
later want to redefine this so that the FDW is told to create the
indexes on the foreign side.)  This applies to CREATE INDEX, as well as
ALTER TABLE / ATTACH PARTITION and CREATE TABLE / PARTITION OF.
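
A hedged sketch (it assumes a postgres_fdw server named "remote" already
exists; all other names are illustrative):

CREATE TABLE orders (id int, region text) PARTITION BY LIST (region);
CREATE TABLE orders_local PARTITION OF orders FOR VALUES IN ('local');
CREATE FOREIGN TABLE orders_remote PARTITION OF orders
    FOR VALUES IN ('remote') SERVER remote;
-- previously failed; now creates the index everywhere except orders_remote
CREATE INDEX ON orders (id);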

Backpatch to 11, where indexes on partitioned tables were introduced.

Discussion: https://postgr.es/m/15724-d5a58fa9472eef4f@postgresql.org
Author: Álvaro Herrera
Reviewed-by: Amit Langote
2019-06-26 18:38:51 -04:00
Tom Lane
c360477d2e Follow the rule that regression-test-created roles are named "regress_xxx".
Commit 1c5d9270e had not gotten the word about this.  (For previous
context, see 18555b132.)
2019-06-25 22:53:42 -04:00
Michael Paquier
d993e0fb82 Add support for OpenSSL 1.1.0 and newer versions in MSVC scripts
Up to now, the MSVC build scripts are able to support only one fixed
version of OpenSSL, and they lacked logic to detect the version of
OpenSSL a given compilation of Postgres is linking to (currently 1.0.2,
the latest LTS of upstream which will be EOL'd at the end of 2019).

This commit adds more logic to detect the version of OpenSSL used by a
build and makes use of it to add support for compilation with OpenSSL
1.1.0 which requires a new set of compilation flags to work properly.

The supported OpenSSL installers have changed their library layer with
various library renames with the upgrade to 1.1.0, making the logic a
bit more complicated.  The scripts are now able to adapt to the new
world order.

Reported-by: Sergey Pashkov
Author: Juan José Santamaría Flecha, Michael Paquier
Reviewed-by: Álvaro Herrera
Discussion: https://postgr.es/m/15789-8fc75dea3c5a17c8@postgresql.org
2019-06-26 10:44:46 +09:00
Michael Paquier
ce59b75d44 Add toast-level reloption for vacuum_index_cleanup
a96c41f introduced the option for heap relations, but it still lacked a
variant to control the behavior for toast relations.

While at it, refactor the tests so that they stress more scenarios with
the various values that vacuum_index_cleanup can use.  It would be
useful to couple those tests with pageinspect to check that pages are
actually cleaned up, but this is left for later.
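
For example (the table name is hypothetical, and it is assumed to have a
TOAST table):

-- heap-level option introduced by a96c41f
ALTER TABLE my_table SET (vacuum_index_cleanup = false);
-- toast-level option added by this commit
ALTER TABLE my_table SET (toast.vacuum_index_cleanup = false);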

Author: Masahiko Sawada, Michael Paquier
Reviewed-by: Peter Geoghegan
Discussion: https://postgr.es/m/CAD21AoCqs8iN04RX=i1KtLSaX5RrTEM04b7NHYps4+rqtpWNEg@mail.gmail.com
2019-06-25 09:09:27 +09:00
Thomas Munro
a2dec37480 Remove misleading comment from pathnodes.h.
As of commit e5253fdc, it is no longer true that the leader always
executes the subplan of a Gather Merge node.  Remove comment to that
effect.

Back-patch to 11.

Discussion: https://postgr.es/m/CA%2BhUKGJEaZJYezXAOutuiWT%2BfxCA44%2BoKtVPAND2ubLiigR%3D-w%40mail.gmail.com
2019-06-25 09:21:54 +12:00
Tom Lane
ccfcc8fdbd Purely-cosmetic adjustments in tablecmds.c.
Move ATExecAlterColumnGenericOptions away from where it was unthinkingly
dropped, in the middle of a lot of ALTER COLUMN TYPE code.  I don't have
any high principles about where to put it instead, so let's just put it
after ALTER COLUMN TYPE and before ALTER OWNER, matching existing
decisions about how to order related code stanzas.

Also add the minimal function header comment that the original author
was too cool to bother with.

Along the way, upgrade header comments for nearby ALTER COLUMN TYPE
functions.

Discussion: https://postgr.es/m/14787.1561403130@sss.pgh.pa.us
2019-06-24 17:19:37 -04:00
Peter Eisentraut
82be666ee3 Update unicode_norm_table.h to Unicode 12.1.0 2019-06-24 22:50:56 +02:00
Peter Eisentraut
2cadefbb97 Make script output more pgindent compatible 2019-06-24 22:50:56 +02:00
Peter Eisentraut
eb8d05bfec Correct script name in README file 2019-06-24 22:50:56 +02:00
Tom Lane
f946a40914 Further fix ALTER COLUMN TYPE's handling of indexes and index constraints.
This patch reverts all the code changes of commit e76de8861, which turns
out to have been seriously misguided.  We can't wait till later to compute
the definition string for an index; we must capture that before applying
the data type change for any column it depends on, else ruleutils.c will
deliver wrong/misleading results.  (This fine point was documented
nowhere, of course.)

I'd also managed to forget that ATExecAlterColumnType executes once per
ALTER COLUMN TYPE clause, not once per statement; which resulted in the
code being basically completely broken for any case in which multiple ALTER
COLUMN TYPE clauses are applied to a table having non-constraint indexes
that must be rebuilt.  Through very bad luck, none of the existing test
cases nor the ones added by e76de8861 caught that, but of course it was
soon found in the field.

The previous patch also had an implicit assumption that if a constraint's
index had a dependency on a table column, so would the constraint --- but
that isn't actually true, so it didn't fix such cases.

Instead of trying to delete unneeded index dependencies later, do the
is-there-a-constraint lookup immediately on seeing an index dependency,
and switch to remembering the constraint if so.  In the unusual case of
multiple column dependencies for a constraint index, this will result in
duplicate constraint lookups, but that's not that horrible compared to all
the other work that happens here.  Besides, such cases did not work at all
before, so it's hard to argue that they're performance-critical for anyone.
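
A hedged sketch of the previously-broken shape (names are hypothetical):
multiple ALTER COLUMN TYPE clauses in one statement, with a non-constraint
expression index that has to be rebuilt with the right definition:

CREATE TABLE docs (title text, body text);
CREATE INDEX docs_expr_idx ON docs ((title || body));
ALTER TABLE docs
    ALTER COLUMN title TYPE varchar(200),
    ALTER COLUMN body  TYPE varchar(2000);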

Per bug #15865 from Keith Fiske.  As before, back-patch to all supported
branches.

Discussion: https://postgr.es/m/15865-17940eacc8f8b081@postgresql.org
2019-06-24 16:43:21 -04:00
Tom Lane
f31111bbe8 Drop test user when done with it.
Commit d7f8d26d9 added a test case that created a user, but forgot
to drop it again.  This is no good; for one thing, it causes repeated
"make installcheck" runs to fail.
2019-06-24 12:36:51 -04:00
Peter Eisentraut
12e037e209 Upgrade internal error message to external
As part of REINDEX CONCURRENTLY, this formerly internal-only error
message becomes potentially user-visible (see regression tests), so
change from errmsg_internal() to errmsg(), and update comment.
2019-06-24 10:39:12 +02:00
Noah Misch
9a81c9fa3f Don't call PG_RETURN_BOOL() in a function not returning Datum.
This code is new in v12, and the defect probably was not user-visible.
2019-06-23 12:02:19 -07:00
Dean Rasheed
d7f8d26d9f Add security checks to the multivariate MCV estimation code.
The multivariate MCV estimation code may run user-defined operators on
the values in the MCV list, which means that those operators may
potentially leak the values from the MCV list. Guard against leaking
data to unprivileged users by checking that the user has SELECT
privileges on the table or all of the columns referred to by the
statistics.

Additionally, if there are any securityQuals on the RTE (either due to
RLS policies on the table, or accessing the table via a security
barrier view), not all rows may be visible to the current user, even
if they have table or column privileges. Thus we further insist that
the operator be leakproof in this case.

Dean Rasheed, reviewed by Tomas Vondra.

Discussion: https://postgr.es/m/CAEZATCUhT9rt7Ui=Vdx4N==VV5XOK5dsXfnGgVOz_JhAicB=ZA@mail.gmail.com
2019-06-23 18:50:08 +01:00
Thomas Munro
89ff7c08ee Remove unnecessary comment.
Author: Vik Fearing
Discussion: https://postgr.es/m/150d3e9f-c7ec-3fb3-4fdb-def47c4144af%402ndquadrant.com
2019-06-23 22:19:59 +12:00
Tom Lane
1323bfce55 Fix spinlock assembly code for MIPS so it works on MIPS r6.
Original MIPS-I processors didn't have the LL/SC instructions (nor any
other userland synchronization primitive).  If the build toolchain
targets that ISA variant by default, as an astonishingly large fraction
of MIPS platforms still do, the assembler won't take LL/SC without
coercion in the form of a ".set mips2" instruction.  But we issued that
unconditionally, making it an ISA downgrade for chips later than MIPS2.
That breaks things for the latest MIPS r6 ISA, which encodes these
instructions differently.  Adjust the code so we don't change ISA level
if it's >= 2.

Note that this patch doesn't change what happens on an actual MIPS-I
processor: either the kernel will emulate these instructions
transparently, or you'll get a SIGILL failure.  That tradeoff seemed
fine in 2002 when this code was added (cf 3cbe6b247), and it's even
more so today when MIPS-I is basically extinct.  But let's add a
comment about that.

YunQiang Su (with cosmetic adjustments by me).  Back-patch to all
supported branches.

Discussion: https://postgr.es/m/15844-8f62fe7e163939b3@postgresql.org
2019-06-22 20:31:50 -04:00
Noah Misch
660a2b1903 Consolidate methods for translating a Perl path to a Windows path.
This fixes some TAP suites when using msys Perl and a builddir located
in an msys mount point other than "/".  For example, builddir=/c/pg
exhibited the problem, since /c/pg falls in mount point "/c".
Back-patch to 9.6, where tests first started to perform such
translations.  In back branches, offer both new and old APIs.

Reviewed by Andrew Dunstan.

Discussion: https://postgr.es/m/20190610045838.GA238501@rfd.leadboat.com
2019-06-21 20:34:23 -07:00
Thomas Munro
25b93a2967 Remove obsolete comments about semaphores from proc.c.
Commit 6753333f switched from a semaphore-based wait to a latch-based
wait for ProcSleep()/ProcWakeup(), but left behind some stray references
to semaphores.

Back-patch to 9.5.

Reviewed-by: Daniel Gustafsson, Michael Paquier
Discussion: https://postgr.es/m/CA+hUKGLs5H6zhmgTijZ1OaJvC1sG0=AFXc1aHuce32tKiQrdEA@mail.gmail.com
2019-06-21 10:57:07 +12:00
Michael Paquier
20e1cc898d Rework some error strings for REINDEX CONCURRENTLY with system catalogs
This makes the whole user experience more consistent when bumping into
failures, and more in line with the rewording done via 508300e.

Author: Michael Paquier
Reviewed-by: Álvaro Herrera
Discussion: https://postgr.es/m/20190514153252.GA22168@alvherre.pgsql
2019-06-20 13:28:12 +09:00
Peter Eisentraut
cd917ffb9a pg_upgrade: Improve error messages
Make wording more accurate and add strerror() information.

Discussion: https://www.postgresql.org/message-id/24c8bd05-aed1-6301-919d-8acbabdb8c24@2ndquadrant.com
2019-06-19 21:50:24 +02:00
Peter Eisentraut
aba78ab4a9 pg_upgrade: Improve invalid option handling
Currently, calling pg_upgrade with an invalid command-line option
aborts pg_upgrade but leaves a pg_upgrade_internal.log file lying
around.  Reorder things a bit so that that file is not created until
all the options have been parsed.

Discussion: https://www.postgresql.org/message-id/24c8bd05-aed1-6301-919d-8acbabdb8c24@2ndquadrant.com
2019-06-19 21:50:24 +02:00
Alexander Korotkov
261a5c1928 Support 'q' flag in jsonpath 'like_regex' predicate
The SQL/JSON standard specifies that the jsonpath 'like_regex' predicate should
support the same set of flags as XQuery/XPath.  It appears that the
implementation of the 'q' flag was missed.  This commit fixes that.
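
With the "q" flag the pattern is taken as a literal string rather than a
regular expression, for example:

SELECT '"a.b"'::jsonb @? '$ ? (@ like_regex "a.b" flag "q")';  -- true
SELECT '"axb"'::jsonb @? '$ ? (@ like_regex "a.b" flag "q")';  -- false, "." is literal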

Discussion: https://postgr.es/m/CAPpHfdtyfPsxLYiTjp5Ov8T5xGsB5t3CwE5%2B3PS%3DLLwA%2BxTJog%40mail.gmail.com
Author: Nikita Glukhov, Alexander Korotkov
2019-06-19 22:41:57 +03:00
Peter Eisentraut
d8594d123c Update list of combining characters
The list of combining characters to ignore for calculating the display
width of a string (used for example by psql) was wildly outdated and
incorrect.

Discussion: https://www.postgresql.org/message-id/flat/bbb19114-af1e-513b-08a9-61272794bd5c%402ndquadrant.com
2019-06-19 21:35:41 +02:00
Magnus Hagander
66013fe730 Fix typo
Author: Daniel Gustafsson
2019-06-19 14:59:26 +02:00
Michael Paquier
414cca40d5 Remove last references to WAL segment size in MSVC scripts
fc49e24 removed the last use of this compile-time variable, since the WAL
segment size can now be set at initdb time, but that commit missed a few
references to it.

Discussion: https://postgr.es/m/20190617073228.GE18917@paquier.xyz
2019-06-19 11:18:50 +09:00
Michael Paquier
3c28fd2281 Fix description of WAL record XLOG_BTREE_META_CLEANUP
This record uses one metadata buffer and registers some data associated
with the buffer, but when parsing the record for its description, the
record data was accessed directly even though there is none.  This
usually leads to an incorrect description, but can also cause crashes,
for example in pg_waldump.  Instead, fix things so that the parsing uses
the data associated with the metadata block.

This is an oversight from 3d92796, so backpatch down to 11.

Author: Michael Paquier
Discussion: https://postgr.es/m/20190617013059.GA3153@paquier.xyz
Backpatch-through: 11
2019-06-19 11:02:19 +09:00
Andres Freund
23224563d9 Fix memory corruption/crash in ANALYZE.
This fixes an embarrassing oversight I (Andres) made in 737a292b,
namely missing two place where liverows/deadrows were used when
converting those variables to pointers, leading to incrementing the
pointer, rather than the value.

It's not actually that easy to trigger a crash: one needs tuples
deleted by the current transaction, followed by a tuple deleted in
another session, all in one page. Which is presumably why this hasn't
been noticed before.

Reported-By: Steve Singer
Author: Steve Singer
Discussion: https://postgr.es/m/c7988239-d42c-ddc4-41db-171b23b35e4f@ssinger.info
2019-06-18 15:51:04 -07:00
Alvaro Herrera
8b21b416ed Avoid spurious deadlocks when upgrading a tuple lock
This puts back reverted commit de87a084c0, with some bug fixes.

When two (or more) transactions are waiting for transaction T1 to release a
tuple-level lock, and transaction T1 upgrades its lock to a higher level, a
spurious deadlock can be reported among the waiting transactions when T1
finishes.  The simplest example case seems to be:

T1: select id from job where name = 'a' for key share;
Y: select id from job where name = 'a' for update; -- starts waiting for T1
Z: select id from job where name = 'a' for key share;
T1: update job set name = 'b' where id = 1;
Z: update job set name = 'c' where id = 1; -- starts waiting for T1
T1: rollback;

At this point, transaction Y is rolled back on account of a deadlock: Y
holds the heavyweight tuple lock and is waiting for the Xmax to be released,
while Z holds part of the multixact and tries to acquire the heavyweight
lock (per protocol) and goes to sleep; once T1 releases its part of the
multixact, Z is awakened only to be put back to sleep on the heavyweight
lock that Y is holding while sleeping.  Kaboom.

This can be avoided by having Z skip the heavyweight lock acquisition.  As
far as I can see, the biggest downside is that if there are multiple Z
transactions, the order in which they resume after T1 finishes is not
guaranteed.

Backpatch to 9.6.  The patch applies cleanly on 9.5, but the new tests don't
work there (because isolationtester is not smart enough), so I'm not going
to risk it.

Author: Oleksii Kliukin
Discussion: https://postgr.es/m/B9C9D7CD-EB94-4635-91B6-E558ACEC0EC3@hintbits.com
Discussion: https://postgr.es/m/2815.1560521451@sss.pgh.pa.us
2019-06-18 18:23:16 -04:00
Thomas Munro
aca127c105 Prevent Parallel Hash Join for JOIN_UNIQUE_INNER.
WHERE EXISTS (...) queries cannot be executed by Parallel Hash Join
with jointype JOIN_UNIQUE_INNER, because there is no way to make a
partial plan totally unique.  The consequence of allowing such plans
was duplicate results from some EXISTS queries.
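
A hedged sketch of the query shape involved (table names are hypothetical);
the planner may implement such an EXISTS by unique-ifying the inner side,
i.e. JOIN_UNIQUE_INNER, which must not be combined with Parallel Hash Join:

SELECT *
  FROM orders o
 WHERE EXISTS (SELECT 1 FROM order_lines l WHERE l.order_id = o.id);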

Back-patch to 11.  Bug #15857.

Author: Thomas Munro
Reviewed-by: Tom Lane
Reported-by: Vladimir Kriukov
Discussion: https://postgr.es/m/15857-d1ba2a64bce0795e%40postgresql.org
2019-06-19 01:25:57 +12:00
Tom Lane
0ab7110bcb Stamp 12beta2. 2019-06-17 17:12:29 -04:00
Peter Eisentraut
91acff7a53 Translation updates
Source-Git-URL: https://git.postgresql.org/git/pgtranslation/messages.git
Source-Git-Hash: 1a710c413ce4c4cd081843e563cde256bb95f490
2019-06-17 15:30:20 +02:00
Michael Paquier
b674211788 Fix buffer overflow when processing SCRAM final message in libpq
When a client connects to a rogue server sending specifically-crafted
messages, this can suffice to execute arbitrary code as the operating
system account used by the client.

While at it, fix one error-handling path when decoding an incorrect salt
included in the first message received from the server.

Author: Michael Paquier
Reviewed-by: Jonathan Katz, Heikki Linnakangas
Security: CVE-2019-10164
Backpatch-through: 10
2019-06-17 22:13:57 +09:00
Michael Paquier
09ec55b933 Fix buffer overflow when parsing SCRAM verifiers in backend
Any authenticated user can overflow a stack-based buffer by changing the
user's own password to a purpose-crafted value.  This often suffices to
execute arbitrary code as the PostgreSQL operating system account.

This fix is contributed by multiple folks, based on an initial analysis
from Tom Lane.  This issue was introduced by 68e61ee, so it was
possible to make use of it at authentication time.  It became easier to
trigger after ccae190, which made the SCRAM parsing stricter when
changing a password, in the case where the client passes
down a verifier already hashed using SCRAM.  Back-patch to v10 where
SCRAM has been introduced.

Reported-by: Alexander Lakhin
Author: Jonathan Katz, Heikki Linnakangas, Michael Paquier
Security: CVE-2019-10164
Backpatch-through: 10
2019-06-17 21:48:17 +09:00
Michael Paquier
3412030205 Fix more typos and inconsistencies in the tree
Author: Alexander Lakhin
Discussion: https://postgr.es/m/0a5419ea-1452-a4e6-72ff-545b1a5a8076@gmail.com
2019-06-17 16:13:16 +09:00
Alvaro Herrera
9d20b0ec8f Revert "Avoid spurious deadlocks when upgrading a tuple lock"
This reverts commits 3da73d6839 and de87a084c0.

This code has some tricky corner cases that I'm not sure are correct and
not properly tested anyway, so I'm reverting the whole thing for next
week's releases (reintroducing the deadlock bug that we set out to fix).
I'll try again afterwards.

Discussion: https://postgr.es/m/E1hbXKQ-0003g1-0C@gemulon.postgresql.org
2019-06-16 22:24:21 -04:00
Tom Lane
6973b058bc Further fix privileges on pg_statistic_ext[_data].
We don't need to restrict column privileges on pg_statistic_ext;
all of that data is OK to read publicly.  What we *do* need to do,
which was overlooked by 6cbfb784c, is revoke public read access on
pg_statistic_ext_data; otherwise we still have the same security
hole we started with.

Catversion bump to ensure that installations calling themselves
beta2 will have this fix.

Diagnosis/correction by Dean Rasheed and Tomas Vondra, but I'm
going to go ahead and push this fix ASAP so we get more buildfarm
cycles on it.

Discussion: https://postgr.es/m/8833.1560647898@sss.pgh.pa.us
2019-06-16 11:00:23 -04:00
Tomas Vondra
fc8cf3df47 Fix privileges on pg_statistic_ext.tableoid
The GRANT in system_views allowed SELECT privileges on various columns in
the pg_statistic_ext catalog, but tableoid was not included in the list.
That made pg_dump fail because it accesses this column when building
the list of extended statistics to dump.

Discussion: https://postgr.es/m/8833.1560647898%40sss.pgh.pa.us
2019-06-16 12:12:16 +02:00