Commit Graph

34148 Commits

Michael Paquier 5ab892c391 Add support for --jobs in reindexdb
When doing a schema-level or a database-level operation, a list of
relations to build is created which gets processed in parallel using
multiple connections, based on the recent refactoring for parallel slots
in src/bin/scripts/.  System catalogs are processed first in a
serialized fashion to prevent deadlocks, followed by the rest done in
parallel.

This new option is not compatible with --system, as reindexing system
catalogs in parallel can lead to deadlocks, nor with --index, as there is
no conflict handling for indexes rebuilt in parallel that depend on the
same relation.

Author: Julien Rouhaud
Reviewed-by: Sergei Kornilov, Michael Paquier
Discussion: https://postgr.es/m/CAOBaU_YrnH_Jqo46NhaJ7uRBiWWEcS40VNRQxgFbqYo9kApUsg@mail.gmail.com
2019-07-27 22:21:18 +09:00
Peter Eisentraut 959f6d6a18 pg_upgrade: Default new bindir to pg_upgrade location
Make the directory where the pg_upgrade binary resides the default for
new bindir, as running the pg_upgrade binary from where the new
cluster is installed is a very common scenario.  Setting this as the
default bindir for the new cluster will remove the need to provide it
explicitly via -B in many cases.

To support directories being missing from option parsing, extend the
directory check with a missingOk mode where the path must be filled at
a later point before being used.  Also move the exec_path check to
earlier in setup to make sure we know the new cluster bindir when we
scan for required executables.

This removes the exec_path from the OSInfo struct as it is not used
anywhere.

Author: Daniel Gustafsson <daniel@yesql.se>
Reviewed-by: Peter Eisentraut <peter.eisentraut@2ndquadrant.com>
Discussion: https://www.postgresql.org/message-id/flat/9328.1552952117@sss.pgh.pa.us
2019-07-27 08:19:04 +02:00
Peter Eisentraut 0befb4f313 pg_upgrade: Check all used executables
Expand the validate_exec() calls to cover all the used binaries.

Author: Daniel Gustafsson <daniel@yesql.se>
Reviewed-by: Peter Eisentraut <peter.eisentraut@2ndquadrant.com>
Discussion: https://www.postgresql.org/message-id/flat/9328.1552952117@sss.pgh.pa.us
2019-07-27 07:48:08 +02:00
Peter Eisentraut 28cb0555c1 Fix typo in pg_upgrade file header
Author: Daniel Gustafsson <daniel@yesql.se>
2019-07-27 07:46:02 +02:00
Alvaro Herrera 0994cfc0ac Don't uselessly escape a string that doesn't need escaping
Per gripe from Ian Barwick

Co-authored-by: Ian Barwick <ian@2ndquadrant.com>
Discussion: https://postgr.es/m/CABvVfJWNnNKb8cHsTLhkTsvL1+G6BVcV+57+w1JZ61p8YGPdWQ@mail.gmail.com
2019-07-26 17:46:40 -04:00
Tom Lane 8ab66081ca Tweak our special-case logic for the IANA "Factory" timezone.
pg_timezone_names() tries to avoid showing the "Factory" zone in
the view, mainly because that has traditionally had a very long
"abbreviation" such as "Local time zone must be set--see zic manual page",
so that showing it messes up psql's formatting of the whole view.
Since tzdb version 2016g, IANA instead uses the abbreviation "-00",
which is sane enough that there's no reason to discriminate against it.

On the other hand, it emerges that FreeBSD and possibly other packagers
are so wedded to backwards compatibility that they hack the IANA data
to keep the old spelling --- and not just that old spelling, but even
older spellings that IANA used back in the stone age.  This caused the
filter logic to fail to suppress "Factory" at all on such platforms,
though the formatting problem is definitely real in that case.

To solve both problems, get rid of the hard-wired assumption about
exactly what Factory's abbreviation is, and instead reject abbreviations
exceeding 31 characters.  This will allow Factory to appear in the view
if and only if it's using the modern abbreviation.

In passing, simplify the code we add to zic.c to support "zic -P"
to remove its now-obsolete hacks to not print the Factory zone's
abbreviation.  Unlike pg_timezone_names(), there's no reason for
that code to support old/nonstandard timezone data.

Since we generally prefer to keep timezone-related behavior the
same in all branches, and since this is arguably a bug fix,
back-patch to all supported branches.

Discussion: https://postgr.es/m/3961.1564086915@sss.pgh.pa.us
2019-07-26 13:07:08 -04:00
Tom Lane 3754113f33 Avoid choosing "localtime" or "posixrules" as TimeZone during initdb.
Some platforms create a file named "localtime" in the system
timezone directory, making it a copy or link to the active time
zone file.  If Postgres is built with --with-system-tzdata, initdb
will see that file as an exact match to localtime(3)'s behavior,
and it may decide that "localtime" is the most preferred spelling of
the active zone.  That's a very bad choice though, because it's
neither informative, nor portable, nor stable if someone changes
the system timezone setting.  Extend the preference logic added by
commit e3846a00c so that we will prefer any other zone file that
matches localtime's behavior over "localtime".

On the same logic, also discriminate against "posixrules", which
is another not-really-a-zone file that is often present in the
timezone directory.  (Since we install "posixrules" but not
"localtime", this change can affect the behavior of Postgres
with or without --with-system-tzdata.)

Note that this change doesn't prevent anyone from choosing these
pseudo-zones if they really want to (i.e., by setting TZ for initdb,
or modifying the timezone GUC later on).  It just prevents initdb
from preferring these zone names when there are multiple matches to
localtime's behavior.

Since we generally prefer to keep timezone-related behavior the
same in all branches, and since this is arguably a bug fix,
back-patch to all supported branches.

Discussion: https://postgr.es/m/CADT4RqCCnj6FKLisvT8tTPfTP4azPhhDFJqDF1JfBbOH5w4oyQ@mail.gmail.com
Discussion: https://postgr.es/m/27991.1560984458@sss.pgh.pa.us
2019-07-26 12:45:32 -04:00
Tom Lane b9d2c5c7ac Fix loss of fractional digits for large values in cash_numeric().
Money values exceeding about 18 digits (depending on lc_monetary)
could be inaccurately converted to numeric, due to select_div_scale()
deciding it didn't need to compute any fractional digits.  Force
its hand by setting the dscale of one division input to equal the
number of fractional digits we need.

In passing, rearrange the logic to not do useless work in locales
where money values are considered integral.

Per bug #15925 from Slawomir Chodnicki.  Back-patch to all supported
branches.

Discussion: https://postgr.es/m/15925-da9953e2674bb5c8@postgresql.org
2019-07-26 11:59:00 -04:00
Thomas Munro 27cd521e6e Fix LDAP test instability.
After starting slapd, wait until it can accept a connection before
beginning the real test work.  This avoids occasional test failures.
Back-patch to 11, where the LDAP tests arrived.

Author: Thomas Munro
Reviewed-by: Michael Paquier
Discussion: https://postgr.es/m/20190719033013.GI1859%40paquier.xyz
2019-07-26 10:01:18 +12:00
Andres Freund f63d9e68d4 Add missing (COSTS OFF) to EXPLAIN added in previous commit.
Backpatch: 12-, like the previous commit
2019-07-25 14:52:36 -07:00
Andres Freund af3deff3f2 Fix slot type handling for Agg nodes performing internal sorts.
Since 15d8f8312 we assert that - and since 7ef04e4d2c, 4da597edf1
rely on - the slot type for an expression's
ecxt_{outer,inner,scan}tuple not changing, unless explicitly flagged
as such. That allows us either to skip deforming (for a virtual tuple
slot) or to optimize the code appropriately for JIT-accelerated deforming
(for other known slot types).

This assumption was sometimes violated for grouping sets, when
nodeAgg.c internally uses tuplesorts, and the child node doesn't
return a TTSOpsMinimalTuple type slot. Detect that case, and flag that
the outer slot might not be "fixed".

It's probably worthwhile to optimize this further in the future, and
more granularly determine whether the slot is fixed. As we already
instantiate per-phase transition and equal expressions, we could
cheaply set the slot type appropriately for each phase.  But that's a
separate change from this bugfix.

This commit does include a very minor optimization: it avoids creating
a slot for handling tuplesorts if no such sorts are performed.
Previously we created that slot unnecessarily in the common case of
computing all grouping sets via hashing. The code looked too confusing
without that, as the conditions for needing a sort slot and for flagging
that the slot type isn't fixed are the same.

Reported-By: Ashutosh Sharma
Author: Andres Freund
Discussion: https://postgr.es/m/CAE9k0PmNaMD2oHTEAhRyxnxpaDaYkuBYkLa1dpOpn=RS0iS2AQ@mail.gmail.com
Backpatch: 12-, where the bug was introduced in 15d8f8312
2019-07-25 14:28:55 -07:00
Tom Lane cb9bb15783 Fix syntax error in commit 20e99cddd.
Per buildfarm.
2019-07-25 14:42:02 -04:00
Tom Lane b654714f9b Fix failures to ignore \r when reading Windows-style newlines.
libpq failed to ignore Windows-style newlines in connection service files.
This normally wasn't a problem on Windows itself, because fgets() would
convert \r\n to just \n.  But if libpq were running inside a program that
changes the default fopen mode to binary, it would see the \r's and think
they were data.  In any case, it's project policy to ignore \r in text
files unconditionally, because people sometimes try to use files with
DOS-style newlines on Unix machines, where the C library won't hide that
from us.

Hence, adjust parseServiceFile() to ignore \r as well as \n at the end of
the line.  In HEAD, go a little further and make it ignore all trailing
whitespace, to match what it's always done with leading whitespace.
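
For illustration only (this is not the committed code), the HEAD behavior
amounts to a chomp loop along these lines:

#include <string.h>

/* Illustrative sketch: strip trailing \n, \r, and other whitespace. */
static void
chomp_trailing_whitespace(char *line)
{
    size_t      len = strlen(line);

    while (len > 0 &&
           (line[len - 1] == '\n' || line[len - 1] == '\r' ||
            line[len - 1] == ' ' || line[len - 1] == '\t'))
        line[--len] = '\0';
}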

In HEAD, also run around and fix up everyplace where we have
newline-chomping code to make all those places look consistent and
uniformly drop \r.  It is not clear whether any of those changes are
fixing live bugs.  Most of the non-cosmetic changes are in places that
are reading popen output, and the jury is still out as to whether popen
on Windows can return \r\n.  (The Windows-specific code in pipe_read_line
seems to think so, but our lack of support for this elsewhere suggests
maybe it's not a problem in practice.)  Hence, I desisted from applying
those changes to back branches, except in run_ssl_passphrase_command()
which is new enough and little-tested enough that we'd probably not have
heard about any problems there.

Tom Lane and Michael Paquier, per bug #15827 from Jorge Gustavo Rocha.
Back-patch the parseServiceFile() change to all supported branches,
and the run_ssl_passphrase_command() change to v11 where that was added.

Discussion: https://postgr.es/m/15827-e6ba53a3a7ed543c@postgresql.org
2019-07-25 12:11:17 -04:00
Andrew Dunstan 20e99cdddb Honor MSVC WindowsSDKVersion if set
Add a line to the project file setting the target SDK. Otherwise, in
VS2017 for example, if the default but optional 8.1 SDK is not installed,
the build will fail.

Patch from Peifeng Qiu, slightly edited by me.

Discussion: https://postgr.es/m/CABmtVJhw1boP_bd4=b3Qv5YnqEdL696NtHFi2ruiyQ6mFHkeQQ@mail.gmail.com

Backpatch to all live branches.
2019-07-25 11:38:43 -04:00
Andres Freund ecbdd00934 Fix system column accesses in ON CONFLICT ... RETURNING.
After 277cb78983 ON CONFLICT ... SET ... RETURNING failed with
ERROR:  virtual tuple table slot does not have system attributes
when taking the update path, as the slot used to insert into the
table (and then process RETURNING) was defined to be a virtual slot in
that commit. Virtual slots don't support system columns except for
tableoid and ctid, as the other system columns are AM dependent.

Fix that by using a slot of the table's type. Add tests for system
column accesses in ON CONFLICT ...  RETURNING.

Reported-By: Roby, bisected to the relevant commit by Jeff Janes
Author: Andres Freund
Discussion: https://postgr.es/m/73436355-6432-49B1-92ED-1FE4F7E7E100@finefun.com.au
Backpatch: 12-, where the bug was introduced in 277cb78983
2019-07-24 18:45:58 -07:00
Michael Paquier c8e177f0bb Fix failure with pgperlcritic from the TAP test of synchronous replication
Oversight in 7d81bdc, which introduced a new Perl routine lacking a
return clause.  Per buildfarm member crake.

Backpatch down to 9.6 like its parent.

Reported-by: Andrew Dunstan
Discussion: https://postgr.es/m/16da29fa-d504-1380-7095-40de586dc038@2ndQuadrant.com
Backpatch-through: 9.6
2019-07-25 07:55:23 +09:00
Tom Lane 4e784f3514 Fix infelicities in describeOneTableDetails' partitioned-table handling.
describeOneTableDetails issued a partition-constraint-fetching query
for every table, even ones it knows perfectly well are not partitions.

To add insult to injury, it then proceeded to leak the empty PGresult
if the table wasn't a partition.  Doing that a lot of times might
amount to a meaningful leak, so this seems like a back-patchable bug.

Fix that, and also fix a related PGresult leak in the partition-parent
case (though that leak would occur only if we got no row, which is
unexpected).

Minor code beautification too, to make this code look more like the
pre-existing code around it.

Back-patch the whole change into v12.  However, the fact that we already
know whether the table is a partition dates only to commit 1af25ca0c;
back-patching the relevant changes from that is probably more churn
than is justified in released branches.  Hence, in v11 and v10, just
do the minimum to fix the PGresult leaks.

Noted while messing around with adjacent code for yesterday's \d
improvements.
2019-07-24 18:14:43 -04:00
Heikki Linnakangas 6655a7299d Use full 64-bit XID for checking if a deleted GiST page is old enough.
Otherwise, after a deleted page gets even older, it becomes unrecyclable
again. B-tree has the same problem, and has had it since time immemorial,
but let's at least fix this in GiST, where this is new.

Backpatch to v12, where GiST page deletion was introduced.

Reviewed-by: Andrey Borodin
Discussion: https://www.postgresql.org/message-id/835A15A5-F1B4-4446-A711-BF48357EB602%40yandex-team.ru
2019-07-24 20:24:07 +03:00
Heikki Linnakangas 9eb5607e69 Refactor checks for deleted GiST pages.
The explicit check in gistScanPage() isn't currently really necessary, as
a deleted page is always empty, so the loop would fall through without
doing anything, anyway. But it's a marginal optimization, and it gives a
nice place to attach a comment to explain how it works.

Backpatch to v12, where GiST page deletion was introduced.

Reviewed-by: Andrey Borodin
Discussion: https://www.postgresql.org/message-id/835A15A5-F1B4-4446-A711-BF48357EB602%40yandex-team.ru
2019-07-24 20:24:05 +03:00
Andrew Dunstan 1a721248f3 Don't assume expr is available in pgbench tests
Windows hosts do not normally come with expr, so to test the \setshell
command, use echo instead, which is fairly universally available.

Backpatch to release 11, where this came in.

Problem found by me, patch by Fabien Coelho.
2019-07-24 11:41:39 -04:00
Michael Paquier 7d81bdc8c0 Improve stability of TAP test for synchronous replication
Slow buildfarm machines have run into issues with this TAP test caused
by a race condition related to the startup of a set of standbys, where
it is possible to finish with an unexpected order in the WAL sender
array of the primary.

This closes the race condition by making sure that any standby started
is registered into the WAL sender array of the primary before starting
the next one based on lookups of pg_stat_replication.

Backpatch down to 9.6 where the test has been introduced.

Author: Michael Paquier
Reviewed-by: Álvaro Herrera, Noah Misch
Discussion: https://postgr.es/m/20190617055145.GB18917@paquier.xyz
Backpatch-through: 9.6
2019-07-24 10:53:39 +09:00
Alvaro Herrera 5562272a42 Check that partitions are not in use when dropping constraints
If the user creates a deferred constraint in a partition, and in a
transaction they cause the constraint's trigger execution to be deferred
until commit time *and* drop the constraint, then when commit time comes
the queued trigger will fail to run because the trigger object will have
been dropped.

This happens because when a constraint gets dropped in a
partitioned table, the recursion to drop the ones in partitions is done
by the dependency mechanism, not by ALTER TABLE traversing the recursion
tree as in all other cases.  In the non-partitioned case, this problem
is avoided by checking that the table is not "in use" by alter-table;
other alter-table subcommands that recurse to partitions do that check
for each partition.  But the dependency mechanism doesn't have a way to
do that.  Fix the problem by applying the same check to all partitions
during ALTER TABLE's "prep" phase, which correctly raises the necessary
error.

Reported-by: Rajkumar Raghuwanshi <rajkumar.raghuwanshi@enterprisedb.com>
Discussion: https://postgr.es/m/CAKcux6nZiO9-eEpr1ZD84bT1mBoVmeZkfont8iSpcmYrjhGWgA@mail.gmail.com
2019-07-23 17:22:15 -04:00
Tom Lane 24f62e93f3 Improve psql's \d output for partitioned indexes.
Include partitioning information much as we do for partitioned tables.
(However, \d+ doesn't show the partition bounds, because those are
not stored for indexes.)

In passing, fix a couple of queries to look less messy in -E output.

Also, add some tests for \d on tables with nondefault tablespaces.
(Somebody previously added a rather silly number of tests for \d
on partitioned indexes, yet completely neglected other cases.)

Justin Pryzby, reviewed by Fabien Coelho

Discussion: https://postgr.es/m/20190422154902.GH14223@telsasoft.com
2019-07-23 17:04:21 -04:00
Tom Lane eb5472da9f Improve psql's \d output for TOAST tables.
Add the name of the owning table to the footers for a TOAST table.
Also, show all the same footers as for a regular table (in practice,
this adds the index and perhaps the tablespace and access method).

Justin Pryzby, reviewed by Fabien Coelho

Discussion: https://postgr.es/m/20190422154902.GH14223@telsasoft.com
2019-07-23 15:25:56 -04:00
Peter Eisentraut 06140c201b Add CREATE DATABASE LOCALE option
This sets both LC_COLLATE and LC_CTYPE with one option.  Similar
behavior is already supported in initdb, CREATE COLLATION, and
createdb.

Reviewed-by: Fabien COELHO <coelho@cri.ensmp.fr>
Discussion: https://www.postgresql.org/message-id/flat/d9d5043a-dc70-da8a-0166-1e218e6e34d4%402ndquadrant.com
2019-07-23 14:47:24 +02:00
Michael Paquier 3cae75f420 Remove more progname references in vacuumdb.c
Oversight in 5f384037.

Author: Álvaro Herrera
Discussion: https://postgr.es/m/20190722151806.GA22634@alvherre.pgsql
2019-07-23 14:29:34 +09:00
Tom Lane a0555ddab9 Install dependencies to prevent dropping partition key columns.
The logic in ATExecDropColumn that rejects dropping partition key
columns is quite an inadequate defense, because it doesn't execute
in cases where a column needs to be dropped due to cascade from
something that only the column, not the whole partitioned table,
depends on.  That leaves us with a badly broken partitioned table;
even an attempt to load its relcache entry will fail.

We really need to have explicit pg_depend entries that show that the
column can't be dropped without dropping the whole table.  Hence,
add those entries.  In v12 and HEAD, bump catversion to ensure that
partitioned tables will have such entries.  We can't do that in
released branches of course, so in v10 and v11 this patch affords
protection only to partitioned tables created after the patch is
installed.  Given the lack of field complaints (this bug was found
by fuzz-testing not by end users), that's probably good enough.

In passing, fix ATExecDropColumn and ATPrepAlterColumnType
messages to be more specific about which partition key column
they're complaining about.

Per report from Manuel Rigger.  Back-patch to v10 where partitioned
tables were added.

Discussion: https://postgr.es/m/CA+u7OA4JKCPFrdrAbOs7XBiCyD61XJxeNav4LefkSmBLQ-Vobg@mail.gmail.com
Discussion: https://postgr.es/m/31920.1562526703@sss.pgh.pa.us
2019-07-22 14:55:40 -04:00
Peter Eisentraut 7961886580 Revert "initdb: Change authentication defaults"
This reverts commit 09f08930f0.

The buildfarm client needs some adjustments first.
2019-07-22 19:28:25 +02:00
Peter Eisentraut 09f08930f0 initdb: Change authentication defaults
Change the defaults for the pg_hba.conf generated by initdb to "peer"
for local (if supported, else "md5") and "md5" for host.

(Changing from "md5" to SCRAM is left as a separate exercise.)

"peer" is currently not supported on AIX, HP-UX, and Windows.  Users
on those operating systems will now either have to provide a password
to initdb or choose a different authentication method when running
initdb.

Reviewed-by: Julien Rouhaud <rjuju123@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/bec17f0a-ddb1-8b95-5e69-368d9d0a3390%40postgresql.org
2019-07-22 15:14:27 +02:00
David Rowley 1e6a759838 Use appendBinaryStringInfo in more places where the length is known
When we already know the length that we're going to append, then it
makes sense to use appendBinaryStringInfo instead of
appendStringInfoString so that the append can be performed with a simple
memcpy() using a known length rather than having to first perform a
strlen() call to obtain the length.

Discussion: https://postgr.es/m/CAKJS1f8+FRAM1s5+mAa3isajeEoAaicJ=4e0WzrH3tAusbbiMQ@mail.gmail.com
2019-07-23 00:14:11 +12:00
Peter Eisentraut 19781729f7 Make identity sequence management more robust
Some code could get confused when certain catalog state involving both
identity and serial sequences was present, perhaps during an attempt
to upgrade the latter to the former.  Specifically, dropping the
default of a serial column maintains the ownership of the sequence by
the column, and so it would then be possible to afterwards make the
column an identity column that would now own two sequences.  This
causes the code that looks up the identity sequence to error out,
making the new identity column inoperable until the ownership of the
previous sequence is released.

To fix this, make the identity sequence lookup only consider sequences
with the appropriate dependency type for an identity sequence, so it
only ever finds one (unless something else is broken).  In the above
example, the old serial sequence would then be ignored.  Reorganize
the various owned-sequence-lookup functions a bit to make this
clearer.

Reported-by: Laurenz Albe <laurenz.albe@cybertec.at>
Discussion: https://www.postgresql.org/message-id/flat/470c54fc8590be4de0f41b0d295fd6390d5e8a6c.camel@cybertec.at
2019-07-22 12:07:10 +02:00
David Rowley efdcca55a3 Make better use of the new List implementation in a couple of places
In nodeAppend.c and nodeMergeAppend.c there were some foreach loops which
looped over the list of subplans and only performed any work if the
subplan index was found in a Bitmapset.  With the old linked list
implementation of List, this form made sense as accessing the Nth list
element was O(N).  However, thanks to 1cff1b95a we now have array-based
lists, so accessing the Nth element has become O(1).

Here we make the most of the O(1) lookups and just loop over the set
members of the Bitmapset with bms_next_member().  This performs slightly
better when a small number of the list items are in the Bitmapset.  Micro
benchmarks show that when the Bitmapset contains all or most of the list
items then the new code is ever so slightly slower.  In practice, the cost
is so small that it's drowned out by various other things such as locking
the relations belonging to each subplan, etc.

The primary goal here is to leave around better code examples that
benefit from the new list implementation.
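
For example, a sketch of the new loop shape (variable names invented):

#include "postgres.h"
#include "nodes/bitmapset.h"
#include "nodes/pg_list.h"
#include "nodes/plannodes.h"

static void
process_valid_subplans(List *subplans, Bitmapset *validsubplans)
{
    int         i = -1;

    /* visit only the set members; list_nth() is now O(1) */
    while ((i = bms_next_member(validsubplans, i)) >= 0)
    {
        Plan       *subplan = (Plan *) list_nth(subplans, i);

        /* ... set up or run this subplan ... */
        (void) subplan;
    }
}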

Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/CAKJS1f8ZcsLVgkF4wOfRyMYTcPgLFiUAOedFC+U2vK_aFZk-BA@mail.gmail.com
2019-07-22 19:03:12 +12:00
Michael Paquier 23bccc823d Fix inconsistencies and typos in the tree
This is numbered take 7, and addresses a set of issues with code
comments, variable names and unreferenced variables.

Author: Alexander Lakhin
Discussion: https://postgr.es/m/dff75442-2468-f74f-568c-6006e141062f@gmail.com
2019-07-22 10:01:50 +09:00
David Rowley e1a0f6a983 Adjust overly strict Assert
3373c7155 changed how we determine EquivalenceClasses for relations and
added an Assert to ensure that all relations mentioned in each EC's
ec_relids were RELOPT_BASERELs.  However, the join removal code may remove
a LEFT JOIN, and since it does not clean up EC members belonging to the
removed relations, it can leave RELOPT_DEADREL rels in ec_relids.

Fix this by adjusting the Assert to allow RELOPT_DEADREL rels too.

Reported-by: sqlsmith via Andreas Seltenreich
Discussion: https://postgr.es/m/87y30r8sls.fsf@ansel.ydns.eu
2019-07-22 10:29:41 +12:00
Tom Lane 330cafdfaa Remove no-longer-helpful reliance on fixed-size local array.
Coverity complained about this code, apparently because it uses a local
array of size FUNC_MAX_ARGS without a guard that the input argument list
is no longer than that.  (Not sure why it complained today, since this
code's been the same for a long time; possibly it re-analyzed everything
the List API change touched?)

Rather than add a guard, though, let's just get rid of the local array
altogether.  It was only there to avoid list_nth() calls, and those are
no longer expensive.
2019-07-21 11:42:11 -04:00
Michael Paquier 90317ab7e6 Fix compilation warning of pg_basebackup with MinGW
Several buildfarm members, like jacana, have been complaining about this
with gcc.  Weirdly enough, Visual Studio's compilers do not find this
issue.

Author: Michael Paquier
Reviewed-by: Andrew Dunstan
Discussion: https://postgr.es/m/20190719050830.GK1859@paquier.xyz
2019-07-21 22:27:11 +09:00
David Rowley 3373c71553 Speed up finding EquivalenceClasses for a given set of rels
Previously in order to determine which ECs a relation had members in, we
had to loop over all ECs stored in PlannerInfo's eq_classes and check if
ec_relids mentioned the relation.  For the most part, this was fine, as
generally, unless queries were fairly complex, the overhead of performing
the lookup would have not been that significant.  However, when queries
contained large numbers of joins and ECs, the overhead to find the set of
classes matching a given set of relations could become a significant
portion of the overall planning effort.

Here we allow a much more efficient method to access the ECs which match a
given relation or set of relations.  A new Bitmapset field in RelOptInfo
now exists to store the indexes into PlannerInfo's eq_classes list which
each relation is mentioned in.  This allows very fast lookups to find all
ECs belonging to a single relation.  When we need to lookup ECs belonging
to a given pair of relations, we can simply bitwise-AND the Bitmapsets from
each relation and use the result to perform the lookup.
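
A sketch of the pairwise lookup described above (only eclass_indexes and
eq_classes are taken from the description; the surrounding code is
invented):

#include "postgres.h"
#include "nodes/bitmapset.h"
#include "nodes/pathnodes.h"
#include "nodes/pg_list.h"

static void
visit_join_eclasses(PlannerInfo *root, RelOptInfo *rel1, RelOptInfo *rel2)
{
    Bitmapset  *matching;
    int         i = -1;

    /* ECs with members in both relations: AND the two index sets */
    matching = bms_intersect(rel1->eclass_indexes, rel2->eclass_indexes);

    while ((i = bms_next_member(matching, i)) >= 0)
    {
        EquivalenceClass *ec = (EquivalenceClass *)
            list_nth(root->eq_classes, i);

        /* ... consider this EC for join clauses ... */
        (void) ec;
    }

    bms_free(matching);
}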

We also take the opportunity to write a new implementation of
generate_join_implied_equalities which makes use of the new indexes.
generate_join_implied_equalities_for_ecs must remain as is as it can be
given a custom list of ECs, which we can't easily determine the indexes of.

This was originally intended to fix the performance penalty of looking up
foreign keys matching a join condition which was introduced by 100340e2d.
However, we're speeding up much more than just that here.

Author: David Rowley, Tom Lane
Reviewed-by: Tom Lane, Tomas Vondra
Discussion: https://postgr.es/m/6970.1545327857@sss.pgh.pa.us
2019-07-21 17:30:58 +12:00
Tomas Vondra a63378a03e Use column collation for extended statistics
The current extended statistics code was a bit confused about which collation
to use.  When building the statistics, the collations defined as default
for the data types were used (since commit 5e0928005).  The MCV code was
however using the column collations for MCV serialization, and then
DEFAULT_COLLATION_OID when computing estimates. So overall the code was
using all three possible options, inconsistently.

This makes the code use the column collation everywhere, consistent with
what 5e0928005 did for regular stats.  We do not, however, track the
collations in a catalog, because we can derive them from column-level
information.  This may need to change in the future, e.g. after allowing
statistics on expressions.

Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/8736jdhbhc.fsf%40ansel.ydns.eu
Backpatch-to: 12
2019-07-20 16:37:37 +02:00
Tomas Vondra e38a55ba46 Rework examine_opclause_expression to use varonleft
The examine_opclause_expression function needs to return information on
which side of the operator we found the Var, but the variable was called
"isgt" which is rather misleading (it assumes the operator is either
less-than or greater-than, but it may be equality or something else).
Other places in the planner use a variable called "varonleft" for this
purpose, so just adopt the same convention here.

The code also assumed we don't care about this flag for equality, as
(Var = Const) and (Const = Var) should be the same thing. But that does
not work for cross-type operators, in which case we need to pass the
parameters to the procedure in the right order. So just use the same
code for all types of expressions.

This means we don't need to care about the selectivity estimation
function anymore, at least not in this code. We should only get the
supported cases here (thanks to statext_is_compatible_clause).

Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/8736jdhbhc.fsf%40ansel.ydns.eu
Backpatch-to: 12
2019-07-20 16:37:30 +02:00
Tom Lane 4214668635 Silence compiler warning, hopefully.
Absorb commit e5e04c962a5d12eebbf867ca25905b3ccc34cbe0 from upstream
IANA code, in hopes of silencing warnings from MSVC about negating
a bool value.

Discussion: https://postgr.es/m/20190719035347.GJ1859@paquier.xyz
2019-07-19 14:48:57 -04:00
Michael Paquier 5f3840370b Refactor parallelization processing code in src/bin/scripts/
The existing facility of vacuumdb to handle parallel connections into a
given database with an authentication set is moved to a common file in
src/bin/scripts/, named scripts_parallel.c.  This introduces a set of
routines to initialize, wait for, and terminate a set of connections,
simplifying the code of vacuumdb a bit along the way.  More routines
related to result handling and database connection are moved to
common.c.

The initial plan is to use that for reindexdb, but it could be applied
to other tools like clusterdb.

While at it, clean up a set of "progname" variables which were passed
as routine arguments for error messages.  Since most of the callers have
switched to pg_log_error() and friends, there is no need for these
variables.

Author: Julien Rouhaud
Reviewed-by: Michael Paquier, Álvaro Herrera
Discussion: https://postgr.es/m/CAOBaU_YrnH_Jqo46NhaJ7uRBiWWEcS40VNRQxgFbqYo9kApUsg@mail.gmail.com
2019-07-19 09:31:58 +09:00
Jeff Davis b538c90b1b Fix error in commit e6feef57.
I was careless passing a datum directly to DATE_NOT_FINITE without
calling DatumGetDateADT() first.

Backpatch-through: 9.4
2019-07-18 17:04:50 -07:00
Michael Paquier 70a33b2109 Fix typo in mvdistinct.c
Noticed while browsing the code.
2019-07-19 08:50:14 +09:00
Jeff Davis e6feef571a Fix daterange canonicalization for +/- infinity.
The values 'infinity' and '-infinity' are a part of the DATE type
itself, so a bound of the date 'infinity' is not the same as an
unbounded/infinite range. However, it is still wrong to try to
canonicalize such values, because adding or subtracting one has no
effect. Fix by treating 'infinity' and '-infinity' the same as
unbounded ranges for the purposes of canonicalization (but not other
purposes).

Backpatch to all versions because it is inconsistent with the
documented behavior. Note that this could be an incompatibility for
applications relying on the behavior contrary to the documentation.

Author: Laurenz Albe
Reviewed-by: Thomas Munro
Discussion: https://postgr.es/m/77f24ea19ab802bc9bc60ddbb8977ee2d646aec1.camel%40cybertec.at
Backpatch-through: 9.4
2019-07-18 13:41:10 -07:00
Peter Geoghegan d004147eb3 Fix nbtree metapage cache upgrade bug.
Commit 857f9c36cd, which taught nbtree VACUUM to avoid unnecessary
index scans, bumped the nbtree version number from 2 to 3, while adding
the ability for nbtree indexes to be upgraded on-the-fly.  Various
assertions that assumed that an nbtree index was always on version 2 had
to be changed to accept any supported version (version 2 or 3 on
Postgres 11).

However, a few assertions were missed in the initial commit, all of
which were in code paths that cache a local copy of the metapage
metadata, where the index had been expected to be on the current version
(no longer version 2) as a generic sanity check.  Rather than simply
update the assertions, follow-up commit 0a64b45152 intentionally made
the metapage caching code update the per-backend cached metadata version
without changing the on-disk version at the same time.  This could even
happen when the planner needed to determine the height of a B-Tree for
costing purposes.  The assertions only fail on Postgres v12 when
upgrading from v10, because they were adjusted to use the authoritative
shared memory metapage by v12's commit dd299df8.

To fix, remove the cache-only upgrade mechanism entirely, and update the
assertions themselves to accept any supported version (go back to using
the cached version in v12).  The fix is almost a full revert of commit
0a64b45152 on the v11 branch.

VACUUM only considers the authoritative metapage, and never bothers with
a locally cached version, whereas everywhere else isn't interested in
the metapage fields that were added by commit 857f9c36cd.  It seems
unlikely that this bug has affected any user on v11.

Reported-By: Christoph Berg
Bug: #15896
Discussion: https://postgr.es/m/15896-5b25e260fdb0b081%40postgresql.org
Backpatch: 11-, where VACUUM was taught to avoid unnecessary index scans.
2019-07-18 13:22:56 -07:00
Tom Lane bc8393cf27 Further adjust SPITupleTable to provide a public row-count field.
Now that commit fec0778c8 drew a clear line between public and private
fields in SPITupleTable, it seems pretty silly that the count of valid
tuples isn't on the public side of that line.  The reason why not was
that there wasn't such a count.  For reasons lost in the mists of time,
spi.c preferred to keep a count of remaining free entries in the array.
But that seems pretty pointless: it's unlike the way we handle similar
code everywhere else, and it involves extra subtractions that surely
outweigh having to do a comparison rather than test-for-zero to check
for array-full.

Hence, rearrange so that this code does the expansible array logic
the same as everywhere else, with a count of valid entries alongside
the allocated array length.  And document the count as public.

I looked for core-code callers where it would make sense to start
relying on tuptable->numvals rather than the separate SPI_processed
variable.  Right now there don't seem to be places where it'd be
a win to do so without more code restructuring than I care to
undertake today.  In principle, though, having SPITupleTables be
fully self-contained should be helpful down the line.
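
As a hedged sketch, reading a result set through the now-public count
could look like this (assumes SPI_connect() and SPI_execute() already
succeeded):

#include "postgres.h"
#include "executor/spi.h"

static void
print_first_column(void)
{
    SPITupleTable *tuptable = SPI_tuptable;
    uint64      i;

    for (i = 0; i < tuptable->numvals; i++)
    {
        char       *val = SPI_getvalue(tuptable->vals[i],
                                       tuptable->tupdesc, 1);

        elog(INFO, "row " UINT64_FORMAT ": %s", i, val ? val : "NULL");
    }
}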

Discussion: https://postgr.es/m/16852.1563395722@sss.pgh.pa.us
2019-07-18 10:37:13 -04:00
Tomas Vondra 7d24f6a490 Simplify bitmap updates in multivariate MCV code
When evaluating clauses on a multivariate MCV list, we build a bitmap
tracking how the clauses match each item of the MCV list.  When updating
the bitmap we need to consider the current value (tracking how the item
matches preceding clauses), the match for the current clause, and whether
the clauses are connected by AND or OR.

Until now the logic was copied in every place that updates the bitmap,
which was not very readable.  So just move it to a separate function and
call it where needed.
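
A hypothetical illustration of the merge rule (the name and shape are
invented; the committed helper may differ):

#include "postgres.h"

static bool
merge_clause_match(bool is_or, bool prev_match, bool clause_match)
{
    /* OR: any matching clause keeps the item; AND: every clause must match */
    return is_or ? (prev_match || clause_match)
                 : (prev_match && clause_match);
}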

Backpatch to 12, where the code was introduced. While not a bugfix, this
should make maintenance and future backpatches easier.

Discussion: https://postgr.es/m/8736jdhbhc.fsf%40ansel.ydns.eu
2019-07-18 11:29:38 +02:00
Tomas Vondra e4deae7396 Fix handling of NULLs in MCV items and constants
There were two issues in how the extended statistics handled NULL values
in opclauses. Firstly, the code was oblivious to the possibility that
Const may be NULL (constisnull=true) in which case the constvalue is
undefined. We need to treat this as a mismatch, and not call the proc.

Secondly, the MCV item itself may contain NULL values too - the code
already did check that, and updated the match bitmap accordingly, but
failed to ensure we don't call the operator procedure anyway. That
happened to work for AND-clauses, because in that case a false value in
the bitmap stops evaluation of further clauses, but for OR-clauses it was
possible to get incorrect estimates or even trigger a crash.

This fixes both issues by extending the existing check so that it looks
at constisnull too, and making sure it skips calling the procedure.

Discussion: https://postgr.es/m/8736jdhbhc.fsf%40ansel.ydns.eu
2019-07-18 11:29:38 +02:00
Tomas Vondra e8b6ae2130 Fix handling of opclauses in extended statistics
We expect opclauses to have exactly one Var and one Const, but the code
was checking the Const by calling is_pseudo_constant_clause() which is
incorrect - we need a proper constant.

Fixed by using plain IsA(x,Const) to check type of the node. We need to
do these checks in two places, so move it into a separate function that
can be called in both places.

Reported by Andreas Seltenreich, based on crash reported by sqlsmith.

Backpatch to v12, where this code was introduced.

Discussion: https://postgr.es/m/8736jdhbhc.fsf%40ansel.ydns.eu
Backpatch-to: 12
2019-07-18 11:29:38 +02:00
Tomas Vondra a4303a078c Remove unnecessary TYPECACHE_GT_OPR lookup
The TYPECACHE_GT_OPR lookup is not needed (it used to be needed in an
older version of the MCV code), but the compiler failed to detect this
because the result was used in a fmgr_info() call, populating a FmgrInfo
entry.

Backpatch to v12, where this code was introduced.

Discussion: https://postgr.es/m/8736jdhbhc.fsf%40ansel.ydns.eu
Backpatch-to: 12
2019-07-18 11:29:38 +02:00
Andres Freund 21039555cd tableam: comment improvements.
Author: Brad DeJong
Discussion: https://postgr.es/m/CAJnrtnxDYOQFsDfWz2iri0T_fFL2ZbbzgCOE=4yaMcszgcsf4A@mail.gmail.com
Backpatch: 12-
2019-07-17 19:48:47 -07:00
Tom Lane 93907478e1 Update time zone data files to tzdata release 2019b.
Brazil no longer observes DST.
Historical corrections for Palestine, Hong Kong, and Italy.
2019-07-17 19:15:21 -04:00
Tom Lane f285322f9c Sync our copy of the timezone library with IANA release tzcode2019b.
A large fraction of this diff is just due to upstream's somewhat
random decision to rename a bunch of internal variables and struct
fields.  However, there is an interesting new feature in zic:
it's grown a "-b slim" option that emits zone files without 32-bit
data and other backwards-compatibility hacks.  We should consider
whether we wish to enable that.
2019-07-17 18:26:23 -04:00
Tom Lane fec0778c80 Clarify the distinction between public and private SPITupleTable fields.
The fields that we consider public are "tupdesc" and "vals", which
historically are in the middle of the struct.  Move them to the front
(this should be perfectly safe to do in HEAD) and add comments to make
it quite clear which fields are public or not.

Also adjust spi.sgml's documentation of the struct to match.
That doc had bit-rotted somewhat, as it was missing some fields.
(Arguably we should just remove all the private fields from the docs,
but for now I refrained.)

Daniel Gustafsson, reviewed by Fabien Coelho

Discussion: https://postgr.es/m/0D19F836-B743-4340-B6A2-F148CA3DD1F0@yesql.se
2019-07-17 14:55:13 -04:00
Tom Lane d97b714a21 Avoid using lcons and list_delete_first where it's easy to do so.
Formerly, lcons was about the same speed as lappend, but with the new
List implementation, that's not so; with a long List, data movement
imposes an O(N) cost on lcons and list_delete_first, but not lappend.

Hence, invent list_delete_last with semantics parallel to
list_delete_first (but O(1) cost), and change various places to use
lappend and list_delete_last where this can be done without much
violence to the code logic.
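
An invented sketch of the stack-style conversion (push with lappend, pop
with llast plus list_delete_last, both O(1) now):

#include "postgres.h"
#include "nodes/pg_list.h"

static void
drain_pending(List *pending)
{
    while (pending != NIL)
    {
        void       *item = llast(pending);

        pending = list_delete_last(pending);

        /* ... process item ... */
        (void) item;
    }
}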

There are quite a few places that construct result lists using lcons not
lappend.  Some have semantic rationales for that; I added comments about
it to a couple that didn't have them already.  In many such places though,
I think the coding is that way only because back in the dark ages lcons
was faster than lappend.  Hence, switch to lappend where this can be done
without causing semantic changes.

In ExecInitExprRec(), this results in aggregates and window functions that
are in the same plan node being executed in a different order than before.
Generally, the executions of such functions ought to be independent of
each other, so this shouldn't result in visibly different query results.
But if you push it, as one regression test case does, you can show that
the order is different.  The new order seems saner; it's closer to
the order of the functions in the query text.  And we never documented
or promised anything about this, anyway.

Also, in gistfinishsplit(), don't bother building a reverse-order list;
it's easy now to iterate backwards through the original list.

It'd be possible to go further towards removing uses of lcons and
list_delete_first, but it'd require more extensive logic changes,
and I'm not convinced it's worth it.  Most of the remaining uses
deal with queues that probably never get long enough to be worth
sweating over.  (Actually, I doubt that any of the changes in this
patch will have measurable performance effects either.  But better
to have good examples than bad ones in the code base.)

Patch by me, thanks to David Rowley and Daniel Gustafsson for review.

Discussion: https://postgr.es/m/21272.1563318411@sss.pgh.pa.us
2019-07-17 11:15:34 -04:00
Thomas Munro dfd0121dc7 Move some md.c-specific logic from smgr.c to md.c.
Potential future SMGR implementations may not want to create
tablespace directories when creating an SMGR relation.  Move that
logic to mdcreate().  Move the initialization of md-specific
data structures from smgropen() to a new callback mdopen().

Author: Thomas Munro
Reviewed-by: Shawn Debnath (as part of an earlier patch set)
Discussion: https://postgr.es/m/CA%2BhUKG%2BOZqOiOuDm5tC5DyQZtJ3FH4%2BFSVMqtdC4P1atpJ%2Bqhg%40mail.gmail.com
2019-07-17 15:00:22 +12:00
Tom Lane 3093eb2b83 Fix thinko in construction of old_conpfeqop list.
This should lappend the OIDs, not lcons them; the existing code produced
a list in reversed order.  This is harmless for single-key FKs or FKs
where all the key columns are of the same type, which probably explains
how it went unnoticed.  But if those conditions are not met,
ATAddForeignKeyConstraint would make the wrong decision about whether an
existing FK needs to be revalidated.  I think it would almost always err
in the safe direction by revalidating a constraint that didn't need it.
You could imagine scenarios where the pfeqop check was fooled by
swapping the types of two FK columns in one ALTER TABLE, but that case
would probably be rejected by other tests, so it might be impossible to
get to the worst-case scenario where an FK should be revalidated and
isn't.  (And even then, it's likely to be fine, unless there are weird
inconsistencies in the equality behavior of the replacement types.)
However, this is a performance bug at least.

Noted while poking around to see whether lcons calls could be converted
to lappend.

This bug is old, dating to commit cb3a7c2b9, so back-patch to all
supported branches.
2019-07-16 18:17:47 -04:00
Tom Lane c245776906 Remove lappend_cell...() family of List functions.
It seems worth getting rid of these functions because they require the
caller to retain a ListCell pointer into a List that it's modifying,
which is a dangerous practice with the new List implementation.
(The only other List-modifying function that takes a ListCell pointer
as input is list_delete_cell, which nowadays is preferentially used
via the constrained API foreach_delete_current.)

There was only one remaining caller of these functions after commit
2f5b8eb5a, and that was some fairly ugly GEQO code that can be much
more clearly expressed using a list-index variable and list_insert_nth.
Hence, rewrite that code, and remove the functions.

Discussion: https://postgr.es/m/26193.1563228600@sss.pgh.pa.us
2019-07-16 13:12:24 -04:00
Tom Lane 2f5b8eb5a2 Clean up some ad-hoc code for sorting and de-duplicating Lists.
heap.c and relcache.c contained nearly identical copies of logic
to insert OIDs into an OID list while preserving the list's OID
ordering (and rejecting duplicates, in one case but not the other).

The comments argue that this is faster than qsort for small numbers
of OIDs, which is at best unproven, and seems even less likely to be
true now that lappend_cell_oid has to move data around.  In any case
it's ugly and hard-to-follow code, and if we do have a lot of OIDs
to consider, it's O(N^2).

Hence, replace with simply lappend'ing OIDs to a List, then list_sort
the completed List, then remove adjacent duplicates if necessary.
This is demonstrably O(N log N) and it's much simpler for the
callers.  It's possible that this would be somewhat inefficient
if there were a very large number of duplicates, but that seems
unlikely in the existing usage.

This adds list_deduplicate_oid and list_oid_cmp infrastructure
to list.c.  I didn't bother with equivalent functionality for
integer or pointer Lists, but such could always be added later
if we find a use for it.
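
A sketch of the new pattern with invented values: append, sort with
list_oid_cmp, then drop adjacent duplicates.

#include "postgres.h"
#include "nodes/pg_list.h"

static List *
collect_sorted_oids(Oid a, Oid b, Oid c)
{
    List       *oids = NIL;

    oids = lappend_oid(oids, a);
    oids = lappend_oid(oids, b);
    oids = lappend_oid(oids, c);

    list_sort(oids, list_oid_cmp);      /* O(N log N), sorts in place */
    list_deduplicate_oid(oids);         /* removes adjacent duplicates */

    return oids;
}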

Discussion: https://postgr.es/m/26193.1563228600@sss.pgh.pa.us
2019-07-16 12:04:06 -04:00
Tom Lane 569ed7f483 Redesign the API for list sorting (list_qsort becomes list_sort).
In the wake of commit 1cff1b95a, the obvious way to sort a List
is to apply qsort() directly to the array of ListCells.  list_qsort
was building an intermediate array of pointers-to-ListCells, which
we no longer need, but getting rid of it forces an API change:
the comparator functions need to do one less level of indirection.

Since we're having to touch the callers anyway, let's do two additional
changes: sort the given list in-place rather than making a copy (as
none of the existing callers have any use for the copying behavior),
and rename list_qsort to list_sort.  It was argued that the old name
exposes more about the implementation than it should, which I find
pretty questionable, but a better reason to rename it is to be sure
we get the attention of any external callers about the need to fix
their comparator functions.
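
A sketch of the new comparator shape (invented example): the comparator
receives ListCell pointers directly rather than pointers to pointers.

#include "postgres.h"
#include "nodes/pg_list.h"

static int
my_name_cmp(const ListCell *a, const ListCell *b)
{
    const char *na = (const char *) lfirst(a);
    const char *nb = (const char *) lfirst(b);

    return strcmp(na, nb);
}

Calling list_sort(names, my_name_cmp) then sorts the list in place rather
than returning a sorted copy.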

While we're at it, change four existing callers of qsort() to use
list_sort instead; previously, they all had local reinventions
of list_qsort, ie build-an-array-from-a-List-and-qsort-it.
(There are some other places where changing to list_sort perhaps
would be worthwhile, but they're less obviously wins.)

Discussion: https://postgr.es/m/29361.1563220190@sss.pgh.pa.us
2019-07-16 11:51:44 -04:00
Michael Paquier 0896ae561b Fix inconsistencies and typos in the tree
This is numbered take 7, and addresses a set of issues around:
- Fixes for typos and incorrect reference names.
- Removal of unneeded comments.
- Removal of unreferenced functions and structures.
- Fixes regarding variable name consistency.

Author: Alexander Lakhin
Discussion: https://postgr.es/m/10bfd4ac-3e7c-40ab-2b2e-355ed15495e8@gmail.com
2019-07-16 13:23:53 +09:00
Tom Lane 4c3d05d875 Remove dead code.
These memory context switches are useless in the wake of commit
1cff1b95a.  Noted by Jesper Pedersen.

Discussion: https://postgr.es/m/f078ce63-9e04-0f3e-d200-d7ee66279abe@redhat.com
2019-07-15 23:27:13 -04:00
Thomas Munro 5823677acc Provide pgbench --show-script to dump built-in scripts.
Author: Fabien Coelho
Reviewed-by: Ibrar Ahmed
Discussion: https://postgr.es/m/alpine.DEB.2.21.1904081737390.5867%40lancre
2019-07-16 11:57:49 +12:00
Thomas Munro ce8f946764 Report the time taken by pgbench initialization steps.
Author: Fabien Coelho
Reviewed-by: Ibrar Ahmed
Discussion: https://postgr.es/m/alpine.DEB.2.21.1904061810510.3678%40lancre
2019-07-16 11:37:37 +12:00
Peter Geoghegan bfdbac2ab3 Correct nbtsplitloc.c comment.
The logic just added by commit e3899ffd falls back on a 50:50 page split
in the event of a new item that's just to the right of our provisional
"many duplicates" split point.  Fix a comment that incorrectly claimed
that the new item had to be just to the left of our provisional split
point.

Backpatch: 12-, just like commit e3899ffd.
2019-07-15 14:35:06 -07:00
Peter Geoghegan e3899ffd8b Fix pathological nbtree split point choice issue.
Specific ever-decreasing insertion patterns could cause successive
unbalanced nbtree page splits.  Problem cases involve a large group of
duplicates to the left, and ever-decreasing insertions to the right.

To fix, detect the situation by considering the newitem offset before
performing a split using nbtsplitloc.c's "many duplicates" strategy.  If
the new item was inserted just to the right of our provisional "many
duplicates" split point, infer ever-decreasing insertions and fall back
on a 50:50 (space delta optimal) split.  This seems to barely affect
cases that already had acceptable space utilization.

An alternative fix also seems possible.  Instead of changing
nbtsplitloc.c split choice logic, we could instead teach _bt_truncate()
to generate a new value for new high keys by interpolating from the
lastleft and firstright key values.  That would certainly be a more
elegant fix, but it isn't suitable for backpatching.

Discussion: https://postgr.es/m/CAH2-WznCNvhZpxa__GqAa1fgQ9uYdVc=_apArkW2nc-K3O7_NA@mail.gmail.com
Backpatch: 12-, where the nbtree page split enhancements were introduced.
2019-07-15 13:19:13 -07:00
Tom Lane 1cff1b95ab Represent Lists as expansible arrays, not chains of cons-cells.
Originally, Postgres Lists were a more or less exact reimplementation of
Lisp lists, which consist of chains of separately-allocated cons cells,
each having a value and a next-cell link.  We'd hacked that once before
(commit d0b4399d8) to add a separate List header, but the data was still
in cons cells.  That makes some operations -- notably list_nth() -- O(N),
and it's bulky because of the next-cell pointers and per-cell palloc
overhead, and it's very cache-unfriendly if the cons cells end up
scattered around rather than being adjacent.

In this rewrite, we still have List headers, but the data is in a
resizable array of values, with no next-cell links.  Now we need at
most two palloc's per List, and often only one, since we can allocate
some values in the same palloc call as the List header.  (Of course,
extending an existing List may require repalloc's to enlarge the array.
But this involves just O(log N) allocations not O(N).)

Of course this is not without downsides.  The key difficulty is that
addition or deletion of a list entry may now cause other entries to
move, which it did not before.

For example, that breaks foreach() and sister macros, which historically
used a pointer to the current cons-cell as loop state.  We can repair
those macros transparently by making their actual loop state be an
integer list index; the exposed "ListCell *" pointer is no longer state
carried across loop iterations, but is just a derived value.  (In
practice, modern compilers can optimize things back to having just one
loop state value, at least for simple cases with inline loop bodies.)
In principle, this is a semantics change for cases where the loop body
inserts or deletes list entries ahead of the current loop index; but
I found no such cases in the Postgres code.

The change is not at all transparent for code that doesn't use foreach()
but chases lists "by hand" using lnext().  The largest share of such
code in the backend is in loops that were maintaining "prev" and "next"
variables in addition to the current-cell pointer, in order to delete
list cells efficiently using list_delete_cell().  However, we no longer
need a previous-cell pointer to delete a list cell efficiently.  Keeping
a next-cell pointer doesn't work, as explained above, but we can improve
matters by changing such code to use a regular foreach() loop and then
using the new macro foreach_delete_current() to delete the current cell.
(This macro knows how to update the associated foreach loop's state so
that no cells will be missed in the traversal.)
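
An invented sketch of the new deletion idiom:

#include "postgres.h"
#include "nodes/pg_list.h"

static List *
drop_invalid_oids(List *oids)
{
    ListCell   *lc;

    foreach(lc, oids)
    {
        Oid         oid = lfirst_oid(lc);

        if (!OidIsValid(oid))
            oids = foreach_delete_current(oids, lc);
    }

    return oids;
}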

There remains a nontrivial risk of code assuming that a ListCell *
pointer will remain good over an operation that could now move the list
contents.  To help catch such errors, list.c can be compiled with a new
define symbol DEBUG_LIST_MEMORY_USAGE that forcibly moves list contents
whenever that could possibly happen.  This makes list operations
significantly more expensive so it's not normally turned on (though it
is on by default if USE_VALGRIND is on).

There are two notable API differences from the previous code:

* lnext() now requires the List's header pointer in addition to the
current cell's address.

* list_delete_cell() no longer requires a previous-cell argument.

These changes are somewhat unfortunate, but on the other hand code using
either function needs inspection to see if it is assuming anything
it shouldn't, so it's not all bad.

Programmers should be aware of these significant performance changes:

* list_nth() and related functions are now O(1); so there's no
major access-speed difference between a list and an array.

* Inserting or deleting a list element now takes time proportional to
the distance to the end of the list, due to moving the array elements.
(However, it typically *doesn't* require palloc or pfree, so except in
long lists it's probably still faster than before.)  Notably, lcons()
used to be about the same cost as lappend(), but that's no longer true
if the list is long.  Code that uses lcons() and list_delete_first()
to maintain a stack might usefully be rewritten to push and pop at the
end of the list rather than the beginning.

* There are now list_insert_nth...() and list_delete_nth...() functions
that add or remove a list cell identified by index.  These have the
data-movement penalty explained above, but there's no search penalty.

* list_concat() and variants now copy the second list's data into
storage belonging to the first list, so there is no longer any
sharing of cells between the input lists.  The second argument is
now declared "const List *" to reflect that it isn't changed.

This patch just does the minimum needed to get the new implementation
in place and fix bugs exposed by the regression tests.  As suggested
by the foregoing, there's a fair amount of followup work remaining to
do.

Also, the ENABLE_LIST_COMPAT macros are finally removed in this
commit.  Code using those should have been gone a dozen years ago.

Patch by me; thanks to David Rowley, Jesper Pedersen, and others
for review.

Discussion: https://postgr.es/m/11587.1550975080@sss.pgh.pa.us
2019-07-15 13:41:58 -04:00
Thomas Munro 67b9b3ca32 Provide XLogRecGetFullXid().
In order to be able to work with FullTransactionId values during replay
without increasing the size of the WAL, infer the epoch.  In general we
can't do that safely, but during replay we can because we know that
nextFullXid can't advance concurrently.
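
A sketch of the inference (not necessarily the exact function body): if
the 32-bit xid on the record is numerically ahead of nextFullXid's low
32 bits, it must belong to the previous epoch.

#include "postgres.h"
#include "access/transam.h"

static FullTransactionId
infer_full_xid(TransactionId xid, FullTransactionId next_fxid)
{
    uint32      epoch = EpochFromFullTransactionId(next_fxid);
    TransactionId next_xid = XidFromFullTransactionId(next_fxid);

    if (xid > next_xid)
        epoch--;                /* xid predates the latest epoch bump */

    return FullTransactionIdFromEpochAndXid(epoch, xid);
}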

Prevent frontend code from seeing this new function, due to the above
restriction.  Perhaps in future it will be possible to extract the value
entirely from independent WAL records, and then this restriction can be
lifted.

Author: Thomas Munro, based on earlier code from Andres Freund
Discussion: https://postgr.es/m/CA%2BhUKG%2BmLmuDjMi6o1dxkKvGRL56Y2Rz%2BiXAcrZV03G9ZuFQ8Q%40mail.gmail.com
2019-07-15 17:04:29 +12:00
Peter Eisentraut 5925e55498 Add gen_random_uuid function
This adds a built-in function to generate UUIDs.

PostgreSQL hasn't had a built-in function to generate a UUID yet,
relying on external modules such as uuid-ossp and pgcrypto to provide
one.  Now that we have a strong random number generator built-in, we
can easily provide a version 4 (random) UUID generation function.

This patch takes the existing function gen_random_uuid() from pgcrypto
and makes it a built-in function.  The pgcrypto implementation now
internally redirects to the built-in one.

Reviewed-by: Fabien COELHO <coelho@cri.ensmp.fr>
Discussion: https://www.postgresql.org/message-id/6a65610c-46fc-2323-6b78-e8086340a325@2ndquadrant.com
2019-07-14 14:30:27 +02:00
Alexander Korotkov 565f339000 Forgotten catversion bump
6254c55f81, c085e1c1cb and 075f0a880f all change the system catalogs, but
the catversion bump was missed in all of them.  So, bump catversion now.

Also, mention patch reviewer Fabien Coelho, who was missed in the commit
messages of 6254c55f81, c085e1c1cb and 075f0a880f.
2019-07-14 15:22:21 +03:00
Alexander Korotkov 075f0a880f Add support for <-> (box, point) operator to SP-GiST box_ops
Opclass support functions can already handle this operator; only a catalog
adjustment appears to be required.

Discussion: https://postgr.es/m/f71ba19d-d989-63b6-f04a-abf02ad9345d%40postgrespro.ru
Author: Nikita Glukhov
Reviewed-by: Tom Lane, Alexander Korotkov
2019-07-14 15:09:23 +03:00
Alexander Korotkov c085e1c1cb Add support for <-> (box, point) operator to GiST box_ops
Index-based calculation of this operator is exact.  So, the signature of
the gist_bbox_distance() function is changed so that the caller is
responsible for setting the *recheck flag.

Discussion: https://postgr.es/m/f71ba19d-d989-63b6-f04a-abf02ad9345d%40postgrespro.ru
Author: Nikita Glukhov
Reviewed-by: Tom Lane, Alexander Korotkov
2019-07-14 15:09:15 +03:00
Alexander Korotkov 6254c55f81 Add missing commutators for distance operators
Some of the <-> operators between geometric types are missing their
commutators.  This commit adds them.  The motivation is upcoming kNN
support for some of those operators.

Discussion: https://postgr.es/m/f71ba19d-d989-63b6-f04a-abf02ad9345d%40postgrespro.ru
Author: Nikita Glukhov
Reviewed-by: Tom Lane, Alexander Korotkov
2019-07-14 14:55:01 +03:00
Noah Misch 8a0cbb8852 Revive test of concurrent OID generation.
Commit 578b229718 replaced it with a
concurrent "nextval" test.  That version does not detect PostgreSQL's
incompatibility with xlc 13.1.3, so bring back an OID-based test that
does.  Back-patch to v12, where that commit first appeared.

Discussion: https://postgr.es/m/20190707170035.GA1485546@rfd.leadboat.com
2019-07-13 13:34:22 -07:00
Michael Paquier 39aadc9842 Fix some inconsistencies in MSVC scripts
In the configure scripts, --with-ossp-uuid is obsolete and is replaced by
--with-uuid, and it needs to specify a path to its library builds when
building with the MSVC scripts.  --with-perl also needs to specify a path.

Author: Kyotaro Horiguchi
Discussion: https://postgr.es/m/20190712.121529.194600624.horikyota.ntt@gmail.com
2019-07-13 16:51:31 +09:00
Thomas Munro 5b51bbfbd5 Fix tab completion for UPDATE.
Previously it suggested an extra "=" after "SET x=".

Reported-by: Kyotaro Horiguchi
Discussion: https://postgr.es/m/CA%2BhUKGLk%3D0yLDjfviONJLzcHEzygj%3Dx6VbGH43LnXbBUvQb52g%40mail.gmail.com
2019-07-13 16:08:13 +12:00
Thomas Munro 7bdc6556fb Tab completion for CREATE TYPE.
Author: Thomas Munro
Reviewed-by: Kyotaro Horiguchi
Discussion: https://postgr.es/m/CA%2BhUKGLk%3D0yLDjfviONJLzcHEzygj%3Dx6VbGH43LnXbBUvQb52g%40mail.gmail.com
2019-07-13 16:07:47 +12:00
Thomas Munro b91dd9de5e Forward received condition variable signals on cancel.
After a process decides not to wait for a condition variable, it can
still consume a signal before it reaches ConditionVariableCancelSleep().
In that case, pass the signal on to another waiter if possible, so that
a signal doesn't go missing when there is another process ready to
receive it.

Author: Thomas Munro
Reviewed-by: Shawn Debnath
Discussion: https://postgr.es/m/CA%2BhUKGLQ_RW%2BXs8znDn36e-%2Bmq2--zrPemBqTQ8eKT-VO1OF4Q%40mail.gmail.com
2019-07-13 14:50:18 +12:00
Thomas Munro 1321509fa4 Introduce timed waits for condition variables.
Provide ConditionVariableTimedSleep(), like ConditionVariableSleep()
but with a timeout argument.
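
The wait-loop shape is the familiar one; a generic pthreads sketch of a
condition-variable sleep bounded by a deadline (illustrative only, not the
PostgreSQL API):

    #include <errno.h>
    #include <pthread.h>
    #include <stdbool.h>
    #include <stdio.h>
    #include <time.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
    static bool ready = false;

    /* Wait for "ready" for at most timeout_ms; return true on timeout. */
    static bool
    timed_wait(long timeout_ms)
    {
        struct timespec deadline;
        bool timed_out = false;

        clock_gettime(CLOCK_REALTIME, &deadline);
        deadline.tv_sec += timeout_ms / 1000;
        deadline.tv_nsec += (timeout_ms % 1000) * 1000000L;
        if (deadline.tv_nsec >= 1000000000L)
        {
            deadline.tv_sec++;
            deadline.tv_nsec -= 1000000000L;
        }

        pthread_mutex_lock(&lock);
        while (!ready && !timed_out)
        {
            if (pthread_cond_timedwait(&cond, &lock, &deadline) == ETIMEDOUT)
                timed_out = true;
        }
        pthread_mutex_unlock(&lock);
        return timed_out;
    }

    int
    main(void)
    {
        printf("timed out: %d\n", timed_wait(100));
        return 0;
    }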

Author: Shawn Debnath
Reviewed-by: Kyotaro Horiguchi, Thomas Munro
Discussion: https://postgr.es/m/eeb06007ccfe46e399df6af18bfcd15a@EX13D05UWC002.ant.amazon.com
2019-07-13 13:51:05 +12:00
Thomas Munro b31fbe852c Warn if wal_level is too low when creating a publication.
Provide a hint to users that they need to increase wal_level before
subscriptions can work.

Author: Lucas Viecelli, with some adjustments by Thomas Munro
Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/CAPjy-57rn5Y9g4e5u--eSOP-7P4QrE9uOZmT2ZcUebF8qxsYhg%40mail.gmail.com
2019-07-13 10:35:34 +12:00
Tom Lane d3751adcf1 Fix get_actual_variable_range() to cope with broken HOT chains.
Commit 3ca930fc3 modified get_actual_variable_range() to use a new
"SnapshotNonVacuumable" snapshot type for selecting tuples that it
would consider valid.  However, because that snapshot type can accept
recently-dead tuples, this caused a bug when using a recently-created
index: we might accept a recently-dead tuple that is an early member
of a broken HOT chain and does not actually match the index entry.
Then, the data extracted from the heap tuple would not necessarily be
an endpoint value of the column; it could even be NULL, leading to
get_actual_variable_range() itself reporting "found unexpected null
value in index".  Even without an error, this could lead to poor
plan choices due to an erroneous notion of the endpoint value.

We can improve matters by changing the code to use the index-only
scan technique (which didn't exist when get_actual_variable_range was
originally written).  If any of the tuples in a HOT chain are live
enough to satisfy SnapshotNonVacuumable, we take the data from the
index entry, ignoring what is in the heap.  This fixes the problem
without changing the live-vs-dead-tuple behavior from what was
intended by commit 3ca930fc3.

A side benefit is that for static tables we might not have to touch
the heap at all (when the extremal value is in an all-visible page).
In addition, we can save some overhead by not having to create a
complete ExecutorState, and we don't need to run FormIndexDatum,
avoiding more cycles as well as the possibility of failure for
indexes on expressions.  (I'm not sure that this code would ever
be used to determine the extreme value of an expression, in the
current state of the planner; but it's definitely possible that
lower-order columns of the selected index could be expressions.
So one could construct perhaps-artificial examples in which the
old code unexpectedly failed due to trying to compute an
expression's value for a now-dead row.)

Per report from Manuel Rigger.  Back-patch to v11 where commit
3ca930fc3 came in.

Discussion: https://postgr.es/m/CA+u7OA7W4NWEhCvftdV6_8bbm2vgypi5nuxfnSEJQqVKFSUoMg@mail.gmail.com
2019-07-12 16:24:59 -04:00
David Rowley cfde234939 Fix RANGE partition pruning with multiple boolean partition keys
match_clause_to_partition_key would incorrectly return
PARTCLAUSE_UNSUPPORTED if a bool qual could not be matched to the current
partition key.  This was a problem, as it caused the calling function to
discard the qual and not try to match it to any other partition key.  If
there was another partition key which did match this qual, then the qual
would not be checked again and we could fail to prune some partitions.

The worst this could do was to cause partitions not to be pruned when they
could have been, so there was no danger of incorrect query results here.

Fix this by changing match_boolean_partition_clause to have it return a
PartClauseMatchStatus rather than a boolean value.  This allows it to
communicate whether the qual is unsupported or whether it just does not
match this particular partition key; previously these two cases were treated
the same.  Now, if match_clause_to_partition_key is unable to match the qual
to any other qual type then we can simply return the value from the
match_boolean_partition_clause call so that the calling function properly
treats the qual as either unmatched or unsupported.
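
To see why a plain boolean loses information here, consider a hedged
sketch with made-up names (not the actual PartClauseMatchStatus values):
the caller keeps trying the remaining keys on a per-key mismatch, but
still discards genuinely unsupported quals.

    #include <stdbool.h>
    #include <stdio.h>

    /* Illustrative three-way result; names are hypothetical. */
    typedef enum
    {
        CLAUSE_MATCH,           /* clause matches this partition key */
        CLAUSE_NOMATCH,         /* no match for this key; try the next one */
        CLAUSE_UNSUPPORTED      /* clause can never be used for pruning */
    } ClauseMatchStatus;

    static ClauseMatchStatus
    match_bool_clause(int clause_key, int part_key, bool is_bool_clause)
    {
        if (!is_bool_clause)
            return CLAUSE_UNSUPPORTED;
        return (clause_key == part_key) ? CLAUSE_MATCH : CLAUSE_NOMATCH;
    }

    int
    main(void)
    {
        int nkeys = 2;
        int clause_key = 1;     /* boolean clause referencing the second key */

        for (int key = 0; key < nkeys; key++)
        {
            ClauseMatchStatus s = match_bool_clause(clause_key, key, true);

            if (s == CLAUSE_MATCH)
            {
                printf("clause matched partition key %d\n", key);
                break;
            }
            if (s == CLAUSE_UNSUPPORTED)
            {
                printf("clause unusable for pruning\n");
                break;
            }
            /* CLAUSE_NOMATCH: fall through and try the next key */
        }
        return 0;
    }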

Reported-by: Rares Salcudean
Reviewed-by: Amit Langote
Backpatch-through: 11 where partition pruning was introduced
Discussion: https://postgr.es/m/CAHp_FN2xwEznH6oyS0hNTuUUZKp5PvegcVv=Co6nBXJ+mC7Y5w@mail.gmail.com
2019-07-12 19:12:38 +12:00
Tom Lane b5810de3f4 Reduce memory consumption for multi-statement query strings.
Previously, exec_simple_query always ran parse analysis, rewrite, and
planning in MessageContext, allowing all the data generated thereby
to persist until the end of processing of the whole query string.
That's fine for single-command strings, but if a client sends many
commands in a single simple-Query message, this strategy could result
in annoying memory bloat, as complained of by Andreas Seltenreich.

To fix, create a child context to do this work in, and reclaim it
after each command.  But we only do so for parsetrees that are not
last in their query string.  That avoids adding any memory management
overhead for the typical case of a single-command string.  Memory
allocated for the last parsetree would be freed immediately after
finishing the command string anyway.

Similarly, adjust extension.c's execute_sql_string() to reclaim memory
after each command.  In that usage, multi-command strings are the norm,
so it's a bit surprising that no one has yet complained of bloat ---
especially since the bloat extended to whatever data ProcessUtility
execution might leak.
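
A toy sketch of the reclaim-per-command idea, using a trivial bump
allocator in place of a real MemoryContext (names and sizes are
illustrative):

    #include <stdio.h>

    /* Toy "context": a bump allocator that can be reset as a whole. */
    typedef struct
    {
        char   buf[4096];
        size_t used;
    } Scratch;

    static void *
    scratch_alloc(Scratch *s, size_t n)
    {
        void *p = s->buf + s->used;     /* no overflow checks, for brevity */

        s->used += n;
        return p;
    }

    static void
    scratch_reset(Scratch *s)
    {
        s->used = 0;                    /* releases everything at once */
    }

    static void
    run_one_command(Scratch *s, const char *cmd)
    {
        char *work = scratch_alloc(s, 64);  /* per-command working data */

        snprintf(work, 64, "plan for: %s", cmd);
        printf("%s\n", work);
    }

    int
    main(void)
    {
        const char *cmds[] = {"SELECT 1", "SELECT 2", "SELECT 3"};
        int ncmds = 3;
        Scratch scratch = {{0}, 0};

        for (int i = 0; i < ncmds; i++)
        {
            run_one_command(&scratch, cmds[i]);

            /*
             * Reclaim after each command except the last; the last one's
             * memory would be released at end of the string anyway.
             */
            if (i < ncmds - 1)
                scratch_reset(&scratch);
        }
        return 0;
    }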

Amit Langote, reviewed by Julien Rouhaud

Discussion: https://postgr.es/m/87ftp6l2qr.fsf@credativ.de
2019-07-10 14:32:38 -04:00
Michael Paquier fa19a08d71 Fix variable initialization when using buffering build with GiST
This can cause valgrind to complain, as the flag marking a buffer as a
temporary copy was not getting initialized.

While on it, fill newly-created buffer pages with zeros.  This does not
matter when loading a block from a temporary file, but it makes the push
of an index tuple into a new buffer page safer.

This has been introduced by 1d27dcf, so backpatch all the way down to
9.4.

Author: Alexander Lakhin
Discussion: https://postgr.es/m/15899-0d24fb273b3dd90c@postgresql.org
Backpatch-through: 9.4
2019-07-10 15:14:54 +09:00
David Rowley f7c830f1ab Fix missing calls to table_finish_bulk_insert during COPY, take 2
86b85044e abstracted calls to heap functions in COPY FROM to support a
generic table AM.  However, when performing a copy into a partitioned
table, this commit neglected to call table_finish_bulk_insert for each
partition.  Before 86b85044e, when we always called the heap functions,
there was no need to call heapam_finish_bulk_insert for partitions since
it only did any work when performing a copy without WAL.  For partitioned
tables, this was unsupported anyway, so there was no issue.  With
pluggable storage, we can't make any assumptions about what the table AM
might want to do in its equivalent function, so we'd better ensure we
always call table_finish_bulk_insert for each partition that's received a
row.

For now, we make the table_finish_bulk_insert call whenever we evict a
CopyMultiInsertBuffer out of the CopyMultiInsertInfo.  This does mean
that it's possible that we call table_finish_bulk_insert multiple times
per partition, which is not a problem other than being an inefficiency.
Improving this requires a more invasive patch, so let's leave that for
another day.

This also changes things so that we no longer needlessly call
table_finish_bulk_insert when performing a COPY FROM for a non-partitioned
table when not using multi-inserts.

Reported-by: Robert Haas
Backpatch-through: 12
Discussion: https://postgr.es/m/CA+TgmoYK=6BpxiJ0tN-p9wtH0BTAfbdxzHhwou0mdud4+BkYuQ@mail.gmail.com
2019-07-10 16:03:04 +12:00
Amit Kapila bd56cd75d2 Fix few typos and minor wordsmithing in tableam comments.
Reported-by: Ashwin Agrawal
Author: Ashwin Agrawal
Reviewed-by: Amit Kapila
Backpatch-through: 12, where it was introduced
Discussion: https://postgr.es/m/CALfoeisgdZhYDrJOukaBzvXfJOK2FQ0szVMK7dzmcy6w93iDUA@mail.gmail.com
2019-07-10 07:52:51 +05:30
Thomas Munro f5825853e3 Pass QueryEnvironment down to EvalPlanQual's EState.
Otherwise the executor can't see trigger transition tables during
EPQ evaluation.  Fixes bug #15900 and almost certainly also #15720.
Back-patch to 10, where trigger transition tables landed.

Author: Alex Aktsipetrov
Reviewed-by: Thomas Munro, Tom Lane
Discussion: https://postgr.es/m/15900-bc482754fe8d7415%40postgresql.org
Discussion: https://postgr.es/m/15720-38c2b29e5d720187%40postgresql.org
2019-07-10 10:15:32 +12:00
Alvaro Herrera 2c84ea6cf9 Propagate trigger arguments to partitions
We were creating the cloned triggers with an empty list of arguments,
losing the ones that had been specified by the user when creating the
trigger in the partitioned table.  Repair.

This was forgotten in commit 86f575948c.

Author: Patrick McHardy
Reviewed-by: Tomas Vondra
Discussion: https://postgr.es/m/20190709130027.amr2cavjvo7rdvac@access1.trash.net
Discussion: https://postgr.es/m/15752-123bc90287986de4@postgresql.org
2019-07-09 17:16:36 -04:00
Peter Eisentraut e435c1e7d9 Message style improvements 2019-07-09 15:47:09 +02:00
Thomas Munro cba0fe024e Force hash joins to be enabled in the hash join regression tests.
Otherwise the regressplans.sh tests generate extremely slow nested
loop joins.  Back-patch to 11 where the hash join tests came in.

Reported-by: Michael Paquier
Discussion: https://postgr.es/m/20190708055256.GB2709%40paquier.xyz
2019-07-09 18:33:44 +12:00
Bruce Momjian ba09342518 Adjust ssl_ciphers to be specific to OpenSSL
Syntax is OpenSSL-specific, so only use it for OpenSSL.

Discussion: https://postgr.es/m/8232E273-7B25-47F4-B0E7-3D4264106F82@yesql.se

Author: Daniel Gustafsson

Backpatch-through: head
2019-07-08 19:39:48 -04:00
Bruce Momjian 481837783b Remove unused C structure member
Remove quote_all_identifiers from struct _dumpOptions.

Discussion: https://postgr.es/m/d3d92ce9-78a4-8adb-0393-d3deeec29f7e@postgrespro.ru

Author: Arthur Zakirov

Backpatch-through: head
2019-07-08 19:31:16 -04:00
Robert Haas 554106b116 tableam: Provide helper functions for relation sizing.
Most block-based table AMs will need the exact same implementation of
the relation_size callback as the heap, and if they use a standard
page layout, they will likely need an implementation of the
relation_estimate_size callback that is very similar to that of the
heap.  Rearrange to facilitate code reuse.

Patch by me, reviewed by Michael Paquier, Daniel Gustafsson, and
Álvaro Herrera.

Discussion: http://postgr.es/m/CA+TgmoZ6DBPnP1E-vRpQZUJQijJFD54F+SR_pxGiAAS-MyrigA@mail.gmail.com
2019-07-08 14:51:53 -04:00
Michael Paquier 6b8548964b Fix inconsistencies in the code
This addresses a couple of issues in the code:
- Typos and inconsistencies in comments and function declarations.
- Removal of unreferenced function declarations.
- Removal of unnecessary compile flags.
- A cleanup error in regressplans.sh.

Author: Alexander Lakhin
Discussion: https://postgr.es/m/0c991fdf-2670-1997-c027-772a420c4604@gmail.com
2019-07-08 13:15:09 +09:00
Peter Eisentraut 7e9a4c5c3d Use consistent style for checking return from system calls
Use

    if (something() != 0)
        error ...

instead of just

    if (something)
        error ...

The latter is not incorrect, but it's a bit confusing and not the
common style.
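
For instance (a hedged example using a real system call, not code taken
from the patch):

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int
    main(void)
    {
        int fd = -1;            /* deliberately invalid descriptor */

        /* Preferred style: compare the return value against 0 explicitly. */
        if (close(fd) != 0)
            fprintf(stderr, "could not close file: %s\n", strerror(errno));
        return 0;
    }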

Discussion: https://www.postgresql.org/message-id/flat/5de61b6b-8be9-7771-0048-860328efe027%402ndquadrant.com
2019-07-07 15:28:49 +02:00
Michael Paquier d1a040543b Remove more unreferenced function declarations
Author: Masahiko Sawada
Discussion: https://postgr.es/m/CAD21AoDuAYsRb3Q9aobkFZ6DZMWxsyg4HOmgkwgeWNfSkTwGxw@mail.gmail.com
2019-07-07 09:58:33 +09:00
Tom Lane fb30c9c1c5 In pg_log_generic(), be more paranoid about preserving errno.
This code failed to account for the possibility that malloc() would
change errno, resulting in wrong output for %m, not to mention the
possibility of message truncation.  Such a change is obviously
expected when malloc fails, but there's reason to fear that on some
platforms even a successful malloc call can modify errno.
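
The usual defensive pattern looks roughly like this (an illustrative
sketch, not the actual pg_log_generic() code):

    #include <errno.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int
    main(void)
    {
        errno = ENOENT;             /* pretend a prior call just failed */

        int save_errno = errno;     /* save before calling malloc() */
        char *buf = malloc(1024);

        /* ... format the message into buf ... */

        errno = save_errno;         /* restore so %m / strerror(errno)
                                     * still reports the original error */
        fprintf(stderr, "error: %s\n", strerror(errno));
        free(buf);
        return 0;
    }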

Discussion: https://postgr.es/m/2576.1527382833@sss.pgh.pa.us
2019-07-06 11:25:37 -04:00
Peter Eisentraut b33283c364 Add missing source files to nls.mk 2019-07-06 15:03:21 +02:00
Peter Eisentraut 3f3542621f psql: Fix logging output format
In normal interactive mode, psql's log messages accidentally got a
"psql:" prefix that was not supposed to be there.  This only happened
if there was no .psqlrc file being read, so it wasn't discovered for a
while.  Fix this by adding the appropriate logging format
configuration call in the right code path.

Discussion: https://www.postgresql.org/message-id/7586.1560540361@sss.pgh.pa.us
2019-07-06 15:03:17 +02:00
Amit Kapila 78d41f6c9b Add missing assertions for required table am callbacks.
Reported-by: Ashwin Agrawal
Author: Ashwin Agrawal
Reviewed-by: Amit Kapila
Backpatch-through: 12, where it was introduced
Discussion: https://postgr.es/m/CALfoeisgdZhYDrJOukaBzvXfJOK2FQ0szVMK7dzmcy6w93iDUA@mail.gmail.com
2019-07-06 11:41:23 +05:30