Commit Graph

20894 Commits

Jeff Davis a547e68675 Adjust cost model for HashAgg that spills to disk.
Tomas Vondra observed that the IO behavior for HashAgg tends to be
worse than for Sort. Penalize HashAgg IO costs accordingly.

Also, account for the CPU effort of spilling the tuples and reading
them back.

Discussion: https://postgr.es/m/20200906212112.nzoy5ytrzjjodpfh@development
Reviewed-by: Tomas Vondra
Backpatch-through: 13
2020-09-07 13:31:59 -07:00
Tom Lane 53367e6c62 Clarify comments in enforce_generic_type_consistency().
Some of the pre-existing comments were vague about whether they
referred to all polymorphic types or only the old-style ones.

Also be more consistent about using the "family 1" vs "family 2"
terminology.

Himanshu Upadhyaya and Tom Lane

Discussion: https://postgr.es/m/CAPF61jBUg9XoMPNuLpoZ+h6UZ2VxKdNt3rQL1xw1GOBwjWzAXQ@mail.gmail.com
2020-09-07 14:52:33 -04:00
Tom Lane 9c79e646c6 Frob numeric.c loop so that clang will auto-vectorize it too.
Experimentation shows that clang will auto-vectorize the critical
multiplication loop if the termination condition is written "i2 < limit"
rather than "i2 <= limit".  This seems unbelievably stupid, but I've
reproduced it on both clang 9.0.1 (RHEL8) and 11.0.3 (macOS Catalina).
gcc doesn't care, so tweak the code to do it that way.
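
An illustrative standalone sketch of the two spellings (hypothetical names, not the real mul_var() code):

    /* clang 9/11 auto-vectorizes the exclusive-bound form of the loop... */
    static void
    inner_loop_vectorized(int *dig, const int *var2digits, int var1digit, int limit)
    {
        int     i2;

        for (i2 = 0; i2 < limit; i2++)
            dig[i2] += var1digit * var2digits[i2];
    }

    /* ...but, per the observation above, it declined to vectorize the "<="
     * spelling (whose bound is computed one lower in the real code). */
    static void
    inner_loop_not_vectorized(int *dig, const int *var2digits, int var1digit, int last)
    {
        int     i2;

        for (i2 = 0; i2 <= last; i2++)
            dig[i2] += var1digit * var2digits[i2];
    }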

Discussion: https://postgr.es/m/CAJ3gD9evtA_vBo+WMYMyT-u=keHX7-r8p2w7OSRfXf42LTwCZQ@mail.gmail.com
2020-09-07 12:03:04 -04:00
Thomas Munro 861c6e7c8e Skip unnecessary stat() calls in walkdir().
Some kernels can tell us the type of a "dirent", so we can avoid a call
to stat() or lstat() in many cases.  Define a new function
get_dirent_type() to contain that logic, for use by the backend and
frontend versions of walkdir(), and perhaps other callers in future.
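
A minimal sketch of the idea (not the actual get_dirent_type() implementation, which distinguishes more file types):

    #include <dirent.h>
    #include <stdbool.h>
    #include <stdio.h>
    #include <sys/stat.h>

    /* Return true if "de" under "parent" is a directory, avoiding a syscall
     * when the kernel already reported the entry type via d_type. */
    static bool
    entry_is_directory(const char *parent, const struct dirent *de)
    {
        char        path[4096];
        struct stat st;

    #if defined(DT_DIR) && defined(DT_UNKNOWN)
        if (de->d_type != DT_UNKNOWN)
            return de->d_type == DT_DIR;    /* no stat()/lstat() needed */
    #endif

        /* Fall back to lstat() when d_type is unsupported or unknown. */
        snprintf(path, sizeof(path), "%s/%s", parent, de->d_name);
        return lstat(path, &st) == 0 && S_ISDIR(st.st_mode);
    }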

Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Juan José Santamaría Flecha <juanjo.santamaria@gmail.com>
Discussion: https://postgr.es/m/CA%2BhUKG%2BFzxupGGN4GpUdbzZN%2Btn6FQPHo8w0Q%2BAPH5Wz8RG%2Bww%40mail.gmail.com
2020-09-07 18:28:06 +12:00
Tom Lane 8870917623 Apply auto-vectorization to the inner loop of numeric multiplication.
Compile numeric.c with -ftree-vectorize where available, and adjust
the innermost loop of mul_var() so that it is amenable to being
auto-vectorized.  (Mainly, that involves making it process the arrays
left-to-right not right-to-left.)
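
A rough sketch of that restructuring (hypothetical names, not the committed mul_var() code):

    /* Previously the equivalent loop ran right-to-left
     * ("for (i = ndigits - 1; i >= 0; i--)"), which the vectorizer did not
     * handle; written left-to-right it becomes SIMD-able under
     * -ftree-vectorize. */
    static void
    accumulate(int *dig, const int *var2digits, int var1digit, int ndigits)
    {
        int     i;

        for (i = 0; i < ndigits; i++)
            dig[i] += var1digit * var2digits[i];
    }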

Applying -ftree-vectorize actually makes numeric.o smaller, at least
with my compiler (gcc 8.3.1 on x86_64), and it's a little faster too.
Independently of that, fixing the inner loop to be vectorizable also
makes things a bit faster.  But doing both is a huge win for
multiplications with lots of digits.  For me, the numeric regression
test is the same speed to within measurement noise, but numeric_big
is a full 45% faster.

We also looked into applying -funroll-loops, but that makes numeric.o
bloat quite a bit, and the additional speed improvement is very
marginal.

Amit Khandekar, reviewed and edited a little by me

Discussion: https://postgr.es/m/CAJ3gD9evtA_vBo+WMYMyT-u=keHX7-r8p2w7OSRfXf42LTwCZQ@mail.gmail.com
2020-09-06 21:40:39 -04:00
Tom Lane 695de5d1ed Split Makefile symbol CFLAGS_VECTOR into two symbols.
Replace CFLAGS_VECTOR with CFLAGS_UNROLL_LOOPS and CFLAGS_VECTORIZE,
allowing us to distinguish whether we want to apply -funroll-loops,
-ftree-vectorize, or both to a particular source file.  Up to now
the only consumer of the symbol has been checksum.c which wants
both, so that there was no need to distinguish; but that's about
to change.

Amit Khandekar, reviewed and edited a little by me

Discussion: https://postgr.es/m/CAJ3gD9evtA_vBo+WMYMyT-u=keHX7-r8p2w7OSRfXf42LTwCZQ@mail.gmail.com
2020-09-06 21:28:16 -04:00
Tom Lane 8e3c58e6e4 Refactor pg_get_line() to expose an alternative StringInfo-based API.
Letting the caller provide a StringInfo to read into is helpful when
the caller needs to merge lines or otherwise modify the data after
it's been read.  Notably, now the code added by commit 8f8154a50
can use pg_get_line_append() instead of having its own copy of that
logic.  A follow-on commit will also make use of this.

Also, since StringInfo buffers are a minimum of 1KB long, blindly
using pg_get_line() in a loop can eat a lot more memory than one would
expect.  I discovered for instance that commit e0f05cd5b caused initdb
to consume circa 10MB to read postgres.bki, even though that's under
1MB worth of data.  A less memory-hungry alternative is to re-use the
same StringInfo for all lines and pg_strdup the results.
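
A frontend-style sketch of that reuse pattern (the exact pg_get_line_append() signature as of this commit is assumed, and the usual src/common and fe_memutils headers are taken as given):

    static void
    read_all_lines(const char *path)
    {
        StringInfoData buf;
        FILE       *fp = fopen(path, "r");

        if (fp == NULL)
            return;
        initStringInfo(&buf);
        while (pg_get_line_append(fp, &buf))    /* appends one line to buf */
        {
            char   *line = pg_strdup(buf.data); /* keep a right-sized copy */

            /* ... store or process "line" ... */
            (void) line;
            resetStringInfo(&buf);              /* reuse the same allocation */
        }
        pfree(buf.data);
        fclose(fp);
    }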

Discussion: https://postgr.es/m/1315832.1599345736@sss.pgh.pa.us
2020-09-06 14:13:19 -04:00
Magnus Hagander 2a093355aa Fix typo in comment
Author: Hou, Zhijie
2020-09-06 19:26:55 +02:00
Tom Lane 19ad7e1d7b Fix misleading error message about inconsistent moving-aggregate types.
We reported the wrong types when complaining that an aggregate's
moving-aggregate implementation is inconsistent with its regular
implementation.

This was wrong since the feature was introduced, so back-patch
to all supported branches.

Jeff Janes

Discussion: https://postgr.es/m/CAMkU=1x808LH=LPhZp9mNSP0Xd1xDqEd+XeGcvEe48dfE6xV=A@mail.gmail.com
2020-09-06 12:55:13 -04:00
Tom Lane e0f05cd5ba Improve some ancient, crufty code in bootstrap + initdb.
At some point back in the last century, somebody felt that reading
all of pg_type twice was cheaper, or at least easier, than using
repalloc() to resize the Typ[] array dynamically.  That seems like an
entirely wacko proposition, so rewrite the code to do it the other
way.  (To add insult to injury, there were two not-quite-identical
copies of said code.)

initdb.c's readfile() function had the same disease of preferring
to do double the I/O to avoid resizing its output array.  Here,
we can make things easier by using the just-invented pg_get_line()
function to handle reading individual lines without a predetermined
notion of how long they are.

On my machine, it's difficult to detect any net change in the
overall runtime of initdb from these changes; but they should
help on slower buildfarm machines (especially since a buildfarm
cycle involves a lot of initdb's these days).

My attention was drawn to these places by scan-build complaints,
but on inspection they needed a lot more work than just suppressing
dead stores :-(
2020-09-05 16:20:04 -04:00
Tom Lane a5cc4dab6d Yet more elimination of dead stores and useless initializations.
I'm not sure what tool Ranier was using, but the ones I contributed
were found by using a newer version of scan-build than I tried before.

Ranier Vilela and Tom Lane

Discussion: https://postgr.es/m/CAEudQAo1+AcGppxDSg8k+zF4+Kv+eJyqzEDdbpDg58-=MQcerQ@mail.gmail.com
2020-09-05 13:17:32 -04:00
Michael Paquier 8febfd1855 Switch to multi-inserts when registering dependencies for many code paths
This commit improves dependency registration by taking advantage of the
preliminary work done in 63110c62 to group together insertions of
dependencies of the same type into pg_depend.  With the current layer
of routines available, and as only dependencies of the same type can be
grouped, there are code paths still doing more than one multi-insert
when it is necessary to register dependencies of multiple types
(constraint and index creation are two cases doing that).

While on it, this refactors some of the code to use ObjectAddressSet()
when manipulating object addresses.
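
For reference, ObjectAddressSet() replaces field-by-field assignment; a minimal sketch (relid is a placeholder, and the usual catalog headers are assumed):

    static ObjectAddress
    make_rel_address(Oid relid)
    {
        ObjectAddress referenced;

        /* Spelled out field by field ... */
        referenced.classId = RelationRelationId;
        referenced.objectId = relid;
        referenced.objectSubId = 0;

        /* ... versus the one-line macro the refactoring standardizes on. */
        ObjectAddressSet(referenced, RelationRelationId, relid);

        return referenced;
    }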

Author: Daniel Gustafsson, Michael Paquier
Reviewed-by: Andres Freund, Álvaro Herrera
Discussion: https://postgr.es/m/20200807061619.GA23955@paquier.xyz
2020-09-05 21:33:53 +09:00
Peter Eisentraut 556cbdfce4 Fix typo in comment 2020-09-05 11:32:20 +02:00
Michael Paquier 63110c6264 Use multi-inserts for pg_depend
This is a follow-up of the work done in e3931d01.  This case is a bit
different from pg_attribute and pg_shdepend: the maximum number of items
to insert is known in advance, but there is no need to handle pinned
dependencies.  Hence, the base allocation for slots is based on the
number of items, capped at the maximum allowed of 64kB.  Slots are
initialized only once used, to minimize the overhead of the operation.

The insertions can be done for dependencies of the same type.  More
could be done by grouping the insertion of multiple dependency types in
a single batch.  This is left as future work.

Some of the multi-insert logic is also simplified for pg_shdepend, as
per the feedback discussed for this specific patch.  This also moves to
indexing.h the variable capping the maximum amount of data that can be
used at once for a multi-insert, instead of having separate definitions
for pg_attribute, pg_depend and pg_shdepend.

Author: Daniel Gustafsson, Michael Paquier
Reviewed-by: Andres Freund, Álvaro Herrera
Discussion: https://postgr.es/m/20200807061619.GA23955@paquier.xyz
2020-09-05 13:52:47 +09:00
Tom Lane c8746f999e Fix over-eager ping'ing in logical replication receiver.
Commit 3f60f690f only partially fixed the broken-status-tracking
issue in LogicalRepApplyLoop: we need ping_sent to have the same
lifetime as last_recv_timestamp.  The effects are much less serious
than what that commit fixed, though.  AFAICS this would just lead to
extra ping requests being sent, once per second until the sender
responds.  Still, it's a bug, so backpatch to v10 as before.
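
A simplified sketch of the lifetime point (loop structure and the helpers data_received(), timeout_elapsed() and request_reply() are stand-ins, not the actual LogicalRepApplyLoop code): if ping_sent lives in an inner scope, it is re-initialized every iteration and a ping goes out each second.

    static void
    apply_loop_sketch(void)
    {
        TimestampTz last_recv_timestamp = GetCurrentTimestamp();
        bool        ping_sent = false;  /* must share last_recv_timestamp's
                                         * lifetime, not an inner scope */

        for (;;)
        {
            if (data_received())
            {
                last_recv_timestamp = GetCurrentTimestamp();
                ping_sent = false;      /* reset only on real traffic */
            }
            else if (timeout_elapsed(last_recv_timestamp) && !ping_sent)
            {
                request_reply();        /* ask the sender to respond */
                ping_sent = true;       /* don't re-ping every second */
            }
        }
    }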

Discussion: https://postgr.es/m/959627.1599248476@sss.pgh.pa.us
2020-09-04 20:33:36 -04:00
Tom Lane 9a851039aa Remove still more useless assignments.
Fix some more things scan-build pointed to as dead stores.  In some of
these cases, rearranging the code a little leads to more readable
code IMO.  It's all cosmetic, though.

Discussion: https://postgr.es/m/CAEudQAo1+AcGppxDSg8k+zF4+Kv+eJyqzEDdbpDg58-=MQcerQ@mail.gmail.com
2020-09-04 20:33:36 -04:00
Jeff Davis 0852006a94 Fix bogus MaxAllocSize check in logtape.c.
Reported-by: Peter Geoghegan
Discussion: https://postgr.es/m/CAH2-Wz=NZPZc3-fkdmvu=w2itx0PiB-G6QpxHXZOjuvFAzPdZw@mail.gmail.com
Backpatch-through: 13
2020-09-04 12:09:52 -07:00
Alvaro Herrera f43e295f68 Report expected contrecord length on mismatch
When reading a WAL record fails to find continuation record(s) of the
proper length, report what it expects, for clarity.

Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/20200903212152.GA15319@alvherre.pgsql
2020-09-04 14:58:32 -04:00
Tom Lane 38a2d70329 Remove some more useless assignments.
Found with clang's scan-build tool.  It also whines about a lot of
other dead stores that we should *not* change IMO, either as a matter
of style or future-proofing.  But these places seem like clear
oversights.

Discussion: https://postgr.es/m/CAEudQAo1+AcGppxDSg8k+zF4+Kv+eJyqzEDdbpDg58-=MQcerQ@mail.gmail.com
2020-09-04 14:32:19 -04:00
Amit Kapila ac15b499f7 Fix inline marking introduced in commit 464824323e.
Forgot to add the inline marking in the changes_filename() declaration.  In
passing, add the inline marking for the similar function subxact_filename().

Reported-By: Nathan Bossart
Discussion: https://postgr.es/m/E98FBE8F-B878-480D-A728-A60C6EED3047@amazon.com
2020-09-04 11:25:16 +05:30
Bruce Momjian e36e936e0e remove redundant initializations
Reported-by: Ranier Vilela

Discussion: https://postgr.es/m/CAEudQAo1+AcGppxDSg8k+zF4+Kv+eJyqzEDdbpDg58-=MQcerQ@mail.gmail.com

Author: Ranier Vilela

Backpatch-through: master
2020-09-03 22:57:35 -04:00
Michael Paquier 844c05abc3 Remove variable "concurrent" from ReindexStmt
This node already handles multiple options using a bitmask, so having a
separate boolean flag is not necessary.  This simplifies the code a bit,
with fewer arguments to pass to the reindex routines, by replacing the
boolean with an equivalent bitmask value.

Reviewed-by: Julien Rouhaud
Discussion: https://postgr.es/m/20200902110326.GA14963@paquier.xyz
2020-09-04 10:43:32 +09:00
Tom Lane 67a472d71c Remove arbitrary restrictions on password length.
This patch started out with the goal of harmonizing various arbitrary
limits on password length, but after awhile a better idea emerged:
let's just get rid of those fixed limits.

recv_password_packet() has an arbitrary limit on the packet size,
which we don't really need, so just drop it.  (Note that this doesn't
really affect anything for MD5 or SCRAM password verification, since
those will hash the user's password to something shorter anyway.
It does matter for auth methods that require a cleartext password.)

Likewise remove the arbitrary error condition in pg_saslprep().

The remaining limits are mostly in client-side code that prompts
for passwords.  To improve those, refactor simple_prompt() so that
it allocates its own result buffer that can be made as big as
necessary.  Actually, it proves best to make a separate routine
pg_get_line() that has essentially the semantics of fgets(), except
that it allocates a suitable result buffer and hence will never
return a truncated line.  (pg_get_line has a lot of potential
applications to replace randomly-sized fgets buffers elsewhere,
but I'll leave that for another patch.)
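
A generic sketch of those semantics (not the actual pg_get_line() implementation): read like fgets(), but grow the buffer until a whole line fits.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Returns a malloc'd line including its newline, or NULL at EOF/error. */
    static char *
    read_whole_line(FILE *fp)
    {
        size_t  cap = 128, len = 0;
        char   *buf = malloc(cap);

        if (buf == NULL)
            return NULL;
        while (fgets(buf + len, (int) (cap - len), fp) != NULL)
        {
            len += strlen(buf + len);
            if (len > 0 && buf[len - 1] == '\n')
                return buf;             /* complete line, never truncated */

            /* Line longer than the buffer: double it and keep reading. */
            char   *tmp = realloc(buf, cap * 2);

            if (tmp == NULL)
            {
                free(buf);
                return NULL;
            }
            buf = tmp;
            cap *= 2;
        }
        if (len > 0)
            return buf;                 /* final line lacked a newline */
        free(buf);
        return NULL;
    }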

I built pg_get_line() atop stringinfo.c, which requires moving
that code to src/common/; but that seems fine since it was a poor
fit for src/port/ anyway.

This patch is mostly mine, but it owes a good deal to Nathan Bossart
who pressed for a solution to the password length problem and
created a predecessor patch.  Also thanks to Peter Eisentraut and
Stephen Frost for ideas and discussion.

Discussion: https://postgr.es/m/09512C4F-8CB9-4021-B455-EF4C4F0D55A0@amazon.com
2020-09-03 20:09:18 -04:00
Tom Lane be4b0c0077 Avoid lockup of a parallel worker when reporting a long error message.
Because sigsetjmp() will restore the initial state with signals blocked,
the code path in bgworker.c for reporting an error and exiting would
execute that way.  Usually this is fairly harmless; but if a parallel
worker had an error message exceeding the shared-memory communication
buffer size (16K) it would lock up, because it would wait for a
resume-sending signal from its parallel leader which it would never
detect.

To fix, just unblock signals at the appropriate point.
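
A sketch of the failure mode and fix (names simplified; the backend's own signal-mask helpers would be used rather than raw sigprocmask()):

    #include <setjmp.h>
    #include <signal.h>

    static sigjmp_buf local_sigjmp_buf;

    static void
    worker_body(void)
    {
        /* The "1" means the signal mask is saved and restored by siglongjmp. */
        if (sigsetjmp(local_sigjmp_buf, 1) != 0)
        {
            /*
             * Control arrives here with the mask from sigsetjmp() time, i.e.
             * with signals still blocked from early startup.  Unblock them
             * before sending the error report, or a worker waiting on a full
             * shared-memory queue never sees the leader's resume signal.
             */
            sigset_t    unblock;

            sigemptyset(&unblock);
            sigprocmask(SIG_SETMASK, &unblock, NULL);

            /* ... report the error and exit ... */
            return;
        }

        /* ... normal worker work; errors siglongjmp() back above ... */
    }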

This can be shown to fail back to 9.6.  The lack of parallel query
infrastructure makes it difficult to provide a simple test case for
9.5; but I'm pretty sure the issue exists in some form there as well,
so apply the code change there too.

Vignesh C, reviewed by Bharath Rupireddy, Robert Haas, and myself

Discussion: https://postgr.es/m/CALDaNm1d1hHPZUg3xU4XjtWBOLCrA+-2cJcLpw-cePZ=GgDVfA@mail.gmail.com
2020-09-03 16:52:09 -04:00
Tom Lane 8f8154a503 Allow records to span multiple lines in pg_hba.conf and pg_ident.conf.
A backslash at the end of a line now causes the next line to be appended
to the current one (effectively, the backslash and newline are discarded).
This allows long HBA entries to be created without legibility problems.
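
For example, an entry can now be wrapped like this (a made-up entry, shown only to illustrate the continuation syntax):

    host    all    all    10.0.0.0/8 \
            ldap ldapserver=ldap.example.org ldapbasedn="dc=example,dc=org"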

While we're here, get rid of the former hard-wired length limit on
pg_hba.conf lines, by using an expansible StringInfo buffer instead
of a fixed-size local variable.

Since the same code is used to read the ident map file, these changes
apply there as well.

Fabien Coelho, reviewed by Justin Pryzby and David Zhang

Discussion: https://postgr.es/m/alpine.DEB.2.21.2003251906140.15243@pseudo
2020-09-03 12:16:48 -04:00
Amit Kapila 464824323e Add support for streaming to built-in logical replication.
To add support for streaming of in-progress transactions into the
built-in logical replication, we need to do three things:

* Extend the logical replication protocol to identify in-progress
transactions, and allow adding additional bits of information (e.g. the
XID of subtransactions).

* Modify the output plugin (pgoutput) to implement the new stream
API callbacks, by leveraging the extended replication protocol.

* Modify the replication apply worker to properly handle streamed
in-progress transactions by spilling the data to disk and then
replaying them on commit.

We must, however, explicitly disable streaming replication during
replication slot creation, even if the plugin supports it.  We
don't need to replicate the changes accumulated during this phase,
and moreover we don't have a replication connection open, so we
have nowhere to send the data anyway.

Author: Tomas Vondra, Dilip Kumar and Amit Kapila
Reviewed-by: Amit Kapila, Kuntal Ghosh and Ajin Cherian
Tested-by: Neha Sharma, Mahendra Singh Thalor and Ajin Cherian
Discussion: https://postgr.es/m/688b0b7f-2f6c-d827-c27b-216a8e3ea700@2ndquadrant.com
2020-09-03 07:54:07 +05:30
Tom Lane 66f1630680 Add string_to_table() function.
This splits a string at occurrences of a delimiter.  It is exactly like
string_to_array() except for producing a set of values instead of an
array of values.  Thus, the relationship of these two functions is
the same as between regexp_split_to_table() and regexp_split_to_array().

Although the same results could be had from unnest(string_to_array()),
this is somewhat faster than that, and anyway it seems reasonable to
have it for symmetry with the regexp functions.

Pavel Stehule, reviewed by Peter Smith

Discussion: https://postgr.es/m/CAFj8pRD8HOpjq2TqeTBhSo_QkzjLOhXzGCpKJ4nCs7Y9SQkuPw@mail.gmail.com
2020-09-02 18:23:56 -04:00
Fujii Masao be9788e998 Avoid unnecessary acquisition of SyncRepLock in transaction commit time.
In the SyncRepWaitForLSN() routine, called at transaction commit time,
SyncRepLock is necessary to atomically both check the shared
sync_standbys_defined flag and operate on the sync replication wait-queue.
On the other hand, when the flag is false, the lock is not necessary
because the wait-queue is not touched.  But due to the changes made by
commit 48c9f49265, the lock was previously taken regardless of the flag.
This could cause unnecessary performance overhead at every transaction
commit.  Therefore this commit avoids that unnecessary acquisition
of SyncRepLock.

Author: Fujii Masao
Reviewed-by: Asim Praveen, Masahiko Sawada
Discussion: https://postgr.es/m/20200406050332.nsscfqjzk2d57zyx@alap3.anarazel.de
2020-09-02 10:55:55 +09:00
Michael Paquier 1d65416661 Improve handling of dropped relations for REINDEX DATABASE/SCHEMA/SYSTEM
When multiple relations are reindexed, a scan of pg_class is done first
to build the list of relations to work on.  However, the REINDEX logic
has never checked whether a listed relation still exists when beginning
the work on it, causing, for example, sudden cache lookup failures.

This commit adds safeguards against dropped relations for REINDEX,
similarly to VACUUM or CLUSTER where we try to open the relation,
ignoring it if it is missing.  A new option is added to the REINDEX
routines to control if a missed relation is OK to ignore or not.

An isolation test, based on REINDEX SCHEMA, is added for the concurrent
and non-concurrent cases.

Author: Michael Paquier
Reviewed-by: Anastasia Lubennikova
Discussion: https://postgr.es/m/20200813043805.GE11663@paquier.xyz
2020-09-02 09:08:12 +09:00
Tom Lane a7212be8b9 Set cutoff xmin more aggressively when vacuuming a temporary table.
Since other sessions aren't allowed to look into a temporary table
of our own session, we do not need to worry about the global xmin
horizon when setting the vacuum XID cutoff.  Indeed, if we're not
inside a transaction block, we may set oldestXmin to be the next
XID, because there cannot be any in-doubt tuples in a temp table,
nor any tuples that are dead but still visible to some snapshot of
our transaction.  (VACUUM, of course, is never inside a transaction
block; but we need to test that because CLUSTER shares the same code.)

This approach allows us to always clean out a temp table completely
during VACUUM, independently of concurrent activity.  Aside from
being useful in its own right, that simplifies building reproducible
test cases.

Discussion: https://postgr.es/m/3490536.1598629609@sss.pgh.pa.us
2020-09-01 18:40:43 -04:00
Alvaro Herrera afc7e0ad55 Raise error on concurrent drop of partitioned index
We were already raising an error for DROP INDEX CONCURRENTLY on a
partitioned table, albeit a different and confusing one:
  ERROR:  DROP INDEX CONCURRENTLY must be first action in transaction

Change that to throw a more comprehensible error:
  ERROR:  cannot drop partitioned index \"%s\" concurrently

Michael Paquier authored the test case for indexes on temporary
partitioned tables.

Backpatch to 11, where indexes on partitioned tables were added.

Reported-by: Jan Mussler <jan.mussler@zalando.de>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/16594-d2956ca909585067@postgresql.org
2020-09-01 13:40:43 -04:00
Amit Kapila 4ab77697f6 Fix the SharedFileSetUnregister API.
Commit 808e13b282 introduced a few APIs to extend the existing BufFile
interface.  In SharedFileSetDeleteOnProcExit, it tries to delete the list
element while traversing the list with the 'foreach' construct, which makes the
behavior of list traversal unpredictable.
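
The general hazard and one safe pattern look roughly like this (the names filesetlist and input_fileset are assumed, and this is not necessarily the exact shape of the committed fix):

    static void
    unregister_fileset(SharedFileSet *input_fileset)
    {
        ListCell   *lc;

        /* Unsafe: deleting while a plain foreach() walks the list leaves the
         * loop's bookkeeping out of step with the shifted-down elements. */
        foreach(lc, filesetlist)
        {
            SharedFileSet *fileset = (SharedFileSet *) lfirst(lc);

            if (fileset == input_fileset)
                filesetlist = list_delete_ptr(filesetlist, fileset);
        }

        /* Safe: the deletion-aware iteration macro from the PG13 list API. */
        foreach(lc, filesetlist)
        {
            SharedFileSet *fileset = (SharedFileSet *) lfirst(lc);

            if (fileset == input_fileset)
                filesetlist = foreach_delete_current(filesetlist, lc);
        }
    }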

Author: Amit Kapila
Reviewed-by: Dilip Kumar
Tested-by: Dilip Kumar and Neha Sharma
Discussion: https://postgr.es/m/CAA4eK1JhLatVcQ2OvwA_3s0ih6Hx9+kZbq107cXVsSWWukH7vA@mail.gmail.com
2020-09-01 08:11:39 +05:30
Tom Lane 3d351d916b Redefine pg_class.reltuples to be -1 before the first VACUUM or ANALYZE.
Historically, we've considered the state with relpages and reltuples
both zero as indicating that we do not know the table's tuple density.
This is problematic because it's impossible to distinguish "never yet
vacuumed" from "vacuumed and seen to be empty".  In particular, a user
cannot use VACUUM or ANALYZE to override the planner's normal heuristic
that an empty table should not be believed to be empty because it is
probably about to get populated.  That heuristic is a good safety
measure, so I don't care to abandon it, but there should be a way to
override it if the table is indeed intended to stay empty.

Hence, represent the initial state of ignorance by setting reltuples
to -1 (relpages is still set to zero), and apply the minimum-ten-pages
heuristic only when reltuples is still -1.  If the table is empty,
VACUUM or ANALYZE (but not CREATE INDEX) will override that to
reltuples = relpages = 0, and then we'll plan on that basis.

This requires a bunch of fiddly little changes, but we can get rid of
some ugly kluges that were formerly needed to maintain the old definition.

One notable point is that FDWs' GetForeignRelSize methods will see
baserel->tuples = -1 when no ANALYZE has been done on the foreign table.
That seems like a net improvement, since those methods were formerly
also in the dark about what baserel->tuples = 0 really meant.  Still,
it is an API change.
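
An FDW's size-estimation callback might handle the new convention along these lines (a hedged sketch; the fallback row count is arbitrary and real FDWs also apply restriction selectivity):

    static void
    exampleGetForeignRelSize(PlannerInfo *root, RelOptInfo *baserel,
                             Oid foreigntableid)
    {
        if (baserel->tuples < 0)
        {
            /* Never ANALYZEd under the new convention: pick a default. */
            baserel->rows = 1000;
        }
        else
            baserel->rows = clamp_row_est(baserel->tuples);
    }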

I bumped catversion because code predating this change would get confused
by seeing reltuples = -1.

Discussion: https://postgr.es/m/F02298E0-6EF4-49A1-BCB6-C484794D9ACC@thebuild.com
2020-08-30 12:21:51 -04:00
Michael Paquier 9511fb37ac Reset indisreplident for an invalid index in DROP INDEX CONCURRENTLY
A failure when concurrently dropping an index used as a replica identity
could leave in pg_index an index marked as !indisvalid and
indisreplident.  Reindexing this index would switch back indisvalid to
true, and if the replica identity of the parent relation was switched to
use a different index, it would be possible to finish with more than one
index marked as indisreplident.  If that were to happen, this could mess
up the relation cache, as an incorrect index could be used for the
replica identity.

Indexes marked as invalid are discarded as candidates for the replica
identity, as of RelationGetIndexList(), so similarly to what is done
with indisclustered, resetting indisreplident when the index is marked
as invalid keeps things consistent.  REINDEX CONCURRENTLY's swapping
already resets the flag for the old index, while the new index inherits
the value of the old index to-be-dropped, so only DROP INDEX was an
issue.

Even if this is a bug, the sequence needed to reproduce the problem
requires a failure while running DROP INDEX CONCURRENTLY, something
unlikely to happen in the field, so no backpatch is done.

Author: Michael Paquier
Reviewed-by: Dmitry Dolgov
Discussion: https://postgr.es/m/20200827025721.GN2017@paquier.xyz
2020-08-30 14:14:34 +09:00
Tom Lane 10564ee02c Fix code for re-finding scan position in a multicolumn GIN index.
collectMatchBitmap() needs to re-find the index tuple it was previously
looking at, after transiently dropping lock on the index page it's on.
The tuple should still exist and be at its prior position or somewhere
to the right of that, since ginvacuum never removes tuples but
concurrent insertions could add one.  However, there was a thinko in
that logic, to the effect of expecting any inserted tuples to have the
same index "attnum" as what we'd been scanning.  Since there's no
physical separation of tuples with different attnums, it's not terribly
hard to devise scenarios where this fails, leading to transient "lost
saved point in index" errors.  (While I've duplicated this with manual
testing, it seems impossible to make a reproducible test case with our
available testing technology.)

Fix by just continuing the scan when the attnum doesn't match.

While here, improve the error message used if we do fail, so that it
matches the wording used in btree for a similar case.

collectMatchBitmap()'s posting-tree code path was previously not
exercised at all by our regression tests.  While I can't make
a regression test that exhibits the bug, I can at least improve
the code coverage here, so do that.  The test case I made for this
is an extension of one added by 4b754d6c1, so it only works in
HEAD and v13; didn't seem worth trying hard to back-patch it.

Per bug #16595 from Jesse Kinkead.  This has been broken since
multicolumn capability was added to GIN (commit 27cb66fdf),
so back-patch to all supported branches.

Discussion: https://postgr.es/m/16595-633118be8eef9ce2@postgresql.org
2020-08-27 17:36:13 -04:00
Michael Paquier 77c7267c37 Fix comment in procarray.c
The description of GlobalVisDataRels was missing; GlobalVisCatalogRels
was mentioned instead.

Author: Jim Nasby
Discussion: https://postgr.es/m/8e06c883-2858-1fd4-07c5-560c28b08dcd@amazon.com
2020-08-27 16:40:34 +09:00
Tom Lane e942af7b82 Suppress compiler warning in non-cassert builds.
Oversight in 808e13b28, reported by Bruce Momjian.

Discussion: https://postgr.es/m/20200826160251.GB21909@momjian.us
2020-08-26 17:08:11 -04:00
Amit Kapila 7e453634bb Add additional information in the vacuum error context.
The additional information added is an offset number for heap
operations.  This information will help us find the exact tuple that
caused the error.

Author: Mahendra Singh Thalor and Amit Kapila
Reviewed-by: Sawada Masahiko, Justin Pryzby and Amit Kapila
Discussion: https://postgr.es/m/CAKYtNApK488TDF4bMbw+1QH8HJf9cxdNDXquhU50TK5iv_FtCQ@mail.gmail.com
2020-08-26 09:40:52 +05:30
Amit Kapila 808e13b282 Extend the BufFile interface.
Allow BufFile to support temporary files that can be used by a single
backend when the corresponding files need to survive across the
transaction and need to be opened and closed multiple times.  Such files
need to be created as a member of a SharedFileSet.

Additionally, this commit implements the interface for BufFileTruncate to
allow files to be truncated up to a particular offset and extends the
BufFileSeek API to support the SEEK_END case. This also adds an option to
provide a mode while opening the shared BufFiles instead of always opening
in read-only mode.

These enhancements to the BufFile interface are required for the upcoming
patch that allows the replication apply worker to handle streamed
in-progress transactions.

Author: Dilip Kumar, Amit Kapila
Reviewed-by: Amit Kapila
Tested-by: Neha Sharma
Discussion: https://postgr.es/m/688b0b7f-2f6c-d827-c27b-216a8e3ea700@2ndquadrant.com
2020-08-26 07:36:43 +05:30
Fujii Masao 50db5964ee Move codes for pg_backend_memory_contexts from mmgr/mcxt.c to adt/mcxtfuncs.c.
Previously the code for pg_backend_memory_contexts was in
src/backend/utils/mmgr/mcxt.c.  This commit moves it to
src/backend/utils/adt/mcxtfuncs.c so that mcxt.c basically includes
only the low-level interface for memory contexts.

Author: Atsushi Torikoshi
Reviewed-by: Michael Paquier, Fujii Masao
Discussion: https://postgr.es/m/20200819135545.GC19121@paquier.xyz
2020-08-26 10:51:31 +09:00
Fujii Masao 29dd6d8bc6 Prevent non-superusers from reading pg_backend_memory_contexts, by default.
The pg_backend_memory_contexts view contains some internal information
about memory contexts.  Since exposing it to all users by default may
cause a security issue, this commit allows only superusers to read this
view by default, as we do for the pg_shmem_allocations view.

Bump catalog version.

Author: Atsushi Torikoshi
Reviewed-by: Michael Paquier, Fujii Masao
Discussion: https://postgr.es/m/1414992.1597849297@sss.pgh.pa.us
2020-08-26 10:50:02 +09:00
David Rowley c34605daed Fixup some misusages of bms_num_members()
It's a bit inefficient to test if a Bitmapset is empty by counting all the
members and seeing if that number is zero. It's much better just to use
bms_is_empty().  Likewise for checking if there are at least two members,
just use bms_membership(), which does not need to do anything more after
finding two members.
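
Concretely, the substitutions are of this form (relids and consider_parallel are placeholders for whatever set and flag the real call sites use; both variants are shown together only for contrast):

    static void
    examine_relids(Bitmapset *relids, bool *consider_parallel)
    {
        /* Before: counts every member just to answer "empty?" / "several?". */
        if (bms_num_members(relids) == 0)
            return;
        if (bms_num_members(relids) > 1)
            *consider_parallel = false;

        /* After: stops as soon as the answer is known. */
        if (bms_is_empty(relids))
            return;
        if (bms_membership(relids) == BMS_MULTIPLE)
            *consider_parallel = false;
    }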

Discussion: https://postgr.es/m/CAApHDvpvwm_QjbDOb5xga%2BKmX9XkN9xQavNGm3SvDbVnCYOerQ%40mail.gmail.com
Reviewed-by: Tomas Vondra
2020-08-26 10:51:36 +12:00
Amit Kapila a3c66de6c5 Improve the vacuum error context phase information.
We were displaying the wrong phase information for 'info' messages in the
index cleanup phase because we were switching to the previous phase a bit
early.  We were also not displaying context information for the heap phase
unless the block number was valid, which is fine for error cases but, for
messages at 'info' or lower error level, appears inconsistent with the
index phase information.

Reported-by: Sawada Masahiko
Author: Sawada Masahiko
Reviewed-by: Amit Kapila
Backpatch-through: 13, where it was introduced
Discussion: https://postgr.es/m/CA+fd4k4HcbhPnCs7paRTw1K-AHin8y4xKomB9Ru0ATw0UeTy2w@mail.gmail.com
2020-08-24 08:16:19 +05:30
Tom Lane 4d346def15 Avoid pushing quals down into sub-queries that have grouping sets.
The trouble with doing this is that an apparently-constant subquery
output column isn't really constant if it is a grouping column that
appears in only some of the grouping sets.  A qual using such a
column would be subject to incorrect const-folding after push-down,
as seen in bug #16585 from Paul Sivash.

To fix, just disable qual pushdown altogether if the sub-query has
nonempty groupingSets.  While we could imagine far less restrictive
solutions, there is not much point in working harder right now,
because subquery_planner() won't move HAVING clauses to WHERE within
such a subquery.  If the qual stays in HAVING it's not going to be
a lot more useful than if we'd kept it at the outer level.

Having said that, this restriction could be removed if we used a
parsetree representation that distinguished such outputs from actual
constants, which is something I hope to do in future.  Hence, make
the patch a minimal addition rather than integrating it more tightly
(e.g. by renumbering the existing items in subquery_is_pushdown_safe's
comment).

Back-patch to 9.5 where grouping sets were introduced.

Discussion: https://postgr.es/m/16585-9d8c340d23ade8c1@postgresql.org
2020-08-22 14:46:40 -04:00
Tom Lane 5b02d68e75 Fix ALTER TABLE's scheduling rules for AT_AddConstraint subcommands.
Commit 1281a5c90 rearranged the logic in this area rather drastically,
and it broke the case of adding a foreign key constraint in the same
ALTER that adds the pkey or unique constraint it depends on.  While
self-referential fkeys are surely a pretty niche case, this used to
work so we shouldn't break it.

To fix, reorganize the scheduling rules in ATParseTransformCmd so
that a transformed AT_AddConstraint subcommand will be delayed into
a later pass in all cases, not only when it's been spit out as a
side-effect of parsing some other command type.

Also tweak the logic so that we won't run ATParseTransformCmd twice
while doing this.  It seems to work even without that, but it's
surely wasting cycles to do so.

Per bug #16589 from Jeremy Evans.  Back-patch to v13 where the new
code was introduced.

Discussion: https://postgr.es/m/16589-31c8d981ca503896@postgresql.org
2020-08-22 12:34:17 -04:00
Tom Lane 5028981923 Fix handling of CREATE TABLE LIKE with inheritance.
If a CREATE TABLE command uses both LIKE and traditional inheritance,
Vars in CHECK constraints and expression indexes that are absorbed
from a LIKE parent table tended to get mis-numbered, resulting in
wrong answers and/or bizarre error messages (though probably not any
actual crashes, thanks to validation occurring in the executor).

In v12 and up, the same could happen to Vars in GENERATED expressions,
even in cases with no LIKE clause but multiple traditional-inheritance
parents.

The cause of the problem for LIKE is that parse_utilcmd.c supposed
it could renumber such Vars correctly during transformCreateStmt(),
which it cannot since we have not yet accounted for columns added via
inheritance.  Fix that by postponing processing of LIKE INCLUDING
CONSTRAINTS, DEFAULTS, GENERATED, INDEXES till after we've performed
DefineRelation().

The error with GENERATED and multiple inheritance is a simple oversight
in MergeAttributes(); it knows it has to renumber Vars in inherited
CHECK constraints, but forgot to apply the same processing to inherited
GENERATED expressions (a/k/a defaults).

Per bug #16272 from Tom Gottfried.  The non-GENERATED variants of the
issue are ancient, presumably dating right back to the addition of
CREATE TABLE LIKE; hence back-patch to all supported branches.

Discussion: https://postgr.es/m/16272-6e32da020e9a9381@postgresql.org
2020-08-21 15:00:47 -04:00
Fujii Masao 9d701e624f Rework EXPLAIN for planner's buffer usage.
Commit ce77abe63c allowed EXPLAIN (BUFFERS) to report information
on buffer usage during the planning phase.  However, three issues were
reported regarding this feature.

(1) Previously, EXPLAIN option BUFFERS required ANALYZE. So the query
    had to be actually executed by specifying ANALYZE even when we
    want to see only the planner's buffer usage.  This was inconvenient
    especially when the query was a write query such as DELETE.

(2) EXPLAIN included the planner's buffer usage in the summary
    information, so the SUMMARY option had to be enabled to report it.
    This format was also confusing.

(3) The output structure for planning information was not consistent
    between the TEXT format and the others. For example, the "Planning"
    tag was output in JSON format but not in TEXT format.

For (1), this commit allows us to perform EXPLAIN (BUFFERS) without
ANALYZE to report the planner's buffer usage.

For (2), this commit changed EXPLAIN output so that the planner's
buffer usage is reported before summary information.

For (3), this commit made the output structure for planning
information more consistent between the formats.

Back-patch to v13 where the planner's buffer usage was allowed to
be reported in EXPLAIN.

Reported-by: Pierre Giraud, David Rowley
Author: Fujii Masao
Reviewed-by: David Rowley, Julien Rouhaud, Pierre Giraud
Discussion: https://postgr.es/m/07b226e6-fa49-687f-b110-b7c37572f69e@dalibo.com
2020-08-21 20:48:59 +09:00
Fujii Masao d259afa736 Fix typos in comments.
Author: Masahiko Sawada
Reviewed-by: Fujii Masao
Discussion: https://postgr.es/m/CA+fd4k4m9hFSrRLB3etPWO5_v5=MujVZWRtz63q+55hM0Dz25Q@mail.gmail.com
2020-08-21 12:35:22 +09:00
David Rowley 8431d33079 Fix a few typos in JIT comments and README
Reviewed-by: Abhijit Menon-Sen
Reviewed-by: Andres Freund
Discussion: https://postgr.es/m/CAApHDvobgmCs6CohqhKTUf7D8vffoZXQTCBTERo9gbOeZmvLTw%40mail.gmail.com
Backpatch-through: 11, where JIT was added
2020-08-21 09:33:56 +12:00
Andres Freund c62a0a49f3 Revert "Make vacuum a bit more verbose to debug BF failure."
This reverts commit 49967da65a.

Enough time has passed that we can be confident that 07f32fcd23
resolved the issue. Therefore we can remove the temporary debugging
aids.

Author: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/E1k7tGP-0005V0-5k@gemulon.postgresql.org
2020-08-20 12:59:00 -07:00