Since slot_keep_segs indicates a number of WAL segments, not an LSN,
its datatype should not be XLogRecPtr.
Back-patch to v13 where this issue was added.
Reported-by: Atsushi Torikoshi
Author: Atsushi Torikoshi, tweaked by Fujii Masao
Discussion: https://postgr.es/m/ebd0d674f3e050222238a960cac5251a@oss.nttdata.com
I neglected to test this scenario while preparing commit f3faf35f3,
so of course it was broken, thanks to some very obscure and undocumented
code in pg_dump. Pre-v12 databases might have toast tables attached to
partitioned tables, which we need to ignore since newer servers never
create such useless toast tables. There was a filter for this case in
binary_upgrade_set_type_oids_by_rel_oid(), which appeared to just
prevent the pg_type OID from being copied. But actually it managed to
prevent the toast table from being created at all --- or it did before
I took out that logic. But that was a fundamentally bizarre place to be
making the test in the first place. The place where the filter should
have been, one would think, is binary_upgrade_set_pg_class_oids(), so
add it there.
While at it, reorganize binary_upgrade_set_pg_class_oids() so that it
doesn't make a completely useless query when it knows it's being
invoked for an index. And correct a comment that mis-described the
scenario where we need to force creation of a TOAST table.
Per buildfarm.
Commit f7f70d5e2 left one inconsistency behind: we're still creating
pg_type entries for the composite types of sequences and toast tables,
but not arrays over those composites. But there seems precious little
reason to have named composite types for toast tables, and not much more
to have them for sequences (especially given the thought that sequences
may someday not be standalone relations at all).
So, let's close that inconsistency by removing these composite types,
rather than adding arrays for them. This buys back a little bit of
the initial pg_type bloat added by the previous patch, and could be
a significant savings in a large database with many toast tables.
Aside from a small logic rearrangement in heap_create_with_catalog,
this patch mostly needs to clean up some places that were assuming that
pg_class.reltype always has a valid value. Those are really pre-existing
bugs, given that it's documented otherwise; notably, the plpgsql changes
fix code that gives "cache lookup failed for type 0" on indexes today.
But none of these seem interesting enough to back-patch.
Also, remove the pg_dump/pg_upgrade infrastructure for propagating
a toast table's pg_type OID into the new database, since we no longer
need that.
Discussion: https://postgr.es/m/761F1389-C6A8-4C15-80CE-950C961F5341@gmail.com
The previous definition of the column was almost universally disliked,
so provide this updated definition which is more useful for monitoring
purposes: a large positive value is good, while zero or a negative value
means danger. This should be operationally more convenient.
Backpatch to 13, where the new column to pg_replication_slots (and the
feature it represents) were added.
Author: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
Reported-by: Fujii Masao <masao.fujii@oss.nttdata.com>
Discussion: https://postgr.es/m/9ddfbf8c-2f67-904d-44ed-cf8bc5916228@oss.nttdata.com
Previously we checked that the ssl pointer was not null, but that
presumes such a pointer exists, which may no longer hold once multiple
SSL libraries are supported. This seems to have been an
oversight in 9029f4b374, but hasn't really had any effect since we only
have one library.
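For illustration, a minimal hedged sketch of the library-agnostic test
(conn->ssl_in_use is libpq's flag tracking whether SSL is active,
independent of any particular library's handle; the surrounding use is
assumed):

    /* Sketch: test the library-independent flag, not the handle */
    if (conn->ssl_in_use)
        appendPQExpBufferStr(&buf, "SSL connection established\n");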
Author: Daniel Gustafsson
nbtree index builds cannot write out an empty page. That would mean
that there was no way to create a pivot tuple pointing to the page one
level up, since _bt_truncate() generates one based on the page's
firstright tuple.
Replace the unnecessary PageIsEmpty() check with an assertion that
checks that the page has space for at least two line pointers (the
would-be high key line pointer, plus at least one valid "data item"
tuple line pointer).
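As a hedged sketch of the replacement (the macro names here are the
usual nbtree/bufpage conventions, assumed rather than quoted from the
patch):

    /* Room for the would-be high key plus at least one data item */
    Assert(PageGetMaxOffsetNumber(page) >= P_FIRSTKEY);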
The PageIsEmpty() check was added by commit 5d9f146c over 20 years ago.
It looks like it has always been unnecessary.
When we invented arrays of composite types (commit bc8036fc6),
we excluded system catalogs, basically just on the grounds of not
wanting to bloat pg_type. However, it's definitely inconsistent that
catalogs' composite types can't be put into arrays when others can.
Another problem is that the exclusion is done by checking
IsUnderPostmaster in heap_create_with_catalog, which means that
(1) If a user tries to create a table in single-user mode, it doesn't
get an array type. That's bad in itself, plus it breaks pg_upgrade.
(2) If someone drops and recreates a system view or information_schema
view (as we occasionally recommend doing), it will now have an array
type where it did not before, making for still more inconsistency.
So this is all pretty messy. Let's just get rid of the inconsistency
and decree that system-created relations should have array types if
similar user-created ones would, i.e. it only depends on the relkind.
As of HEAD, that means that the initial contents of pg_type grow from
411 rows to 605, which is a lot of growth percentage-wise, but it's
still quite a small catalog compared to others.
Wenjing Zeng, reviewed by Shawn Wang, further hacking by me
Discussion: https://postgr.es/m/761F1389-C6A8-4C15-80CE-950C961F5341@gmail.com
This introduces a new set of extended routines for procedure and
operator name lookups, with a flag bitmask argument that can modify the
result. The following options are available:
- Force schema qualification, ignoring search_path. This is similar to
the existing option for format_{operator|procedure}_qualified().
- Force NULL as result instead of a numeric OID for an undefined
object. This option is new.
This is a refactoring similar to 1185c78, which will be used by a future
patch to improve the SQL functions providing information using object
addresses for undefined objects.
Author: Michael Paquier
Reviewed-by: Aleksander Alekseev, Dmitry Dolgov, Daniel Gustafsson,
Álvaro Herrera
Discussion: https://postgr.es/m/CAB7nPqSZxrSmdHK-rny7z8mi=EAFXJ5J-0RbzDw6aus=wB5azQ@mail.gmail.com
If a type scanned is undefined, type format routines have two behaviors
depending on if FORMAT_TYPE_ALLOW_INVALID is used by the caller or not:
- Issue a cache lookup error
- Return an undefined type name "???", "???[]" or "-"
The current interface is not really helpful for callers that want to
format a type name properly while still making sure that the type is
defined, as there could be real types whose names match the strings
generated for an undefined type, even if that should not be a problem in
practice. In order to counter that, add a new flag called
FORMAT_TYPE_INVALID_AS_NULL that returns a NULL result instead of "???"
or "-", and does not generate an error. This flag will be used in a
follow-up patch improving the set of SQL functions showing information
for object addresses when it comes to undefined objects.
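A hedged usage sketch (format_type_extended() and its bits16 flags
argument are the existing API; the call site is illustrative):

    char   *typname = format_type_extended(typid, -1,
                                           FORMAT_TYPE_INVALID_AS_NULL);

    if (typname == NULL)
        PG_RETURN_NULL();   /* undefined type: NULL result, no error */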
Author: Michael Paquier
Reviewed-by: Aleksander Alekseev, Dmitry Dolgov, Daniel Gustafsson,
Álvaro Herrera
Discussion: https://postgr.es/m/CAB7nPqSZxrSmdHK-rny7z8mi=EAFXJ5J-0RbzDw6aus=wB5azQ@mail.gmail.com
This commit includes two improvements for build.pl in src/tools/msvc/:
- Fix two warnings related to $ARGV[0] being uninitialized, something
that happens when calling build.pl (or build.bat) without arguments to
compile all the components with a release quality.
- If calling the script with more than two arguments, exit immediately
and show a new help output. build.pl was not failing in this case
before this commit, and the extra arguments were just ignored, but the
new behavior helps in understanding how to run the script without
looking at the documentation for the Windows builds.
Reported-by: Ranier Vilela
Author: Michael Paquier
Reviewed-by: Juan José Santamaría Flecha, David Zhang
Discussion: https://postgr.es/m/CAEudQAo38dfR_0Ugt2OHy4mq-6Hz93XPSBAGEUV67RhKdgp8Zg@mail.gmail.com
In the common case where this function isn't actually asked to perform
any type conversion, there's nothing it has to do beyond comparing the
arguments. Arrange for that part to be inlined into callers, with the
slower path remaining out-of-line. This seems to be good for several
percent speedup on simple cases, with only minimal code bloat.
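The shape of the change, as a hedged sketch (the function and parameter
names here are illustrative, not quoted from the patch):

    /* Fast path inlined into callers; the real work stays out-of-line. */
    static inline Datum
    cast_value(Datum value, Oid valtype, int32 valtypmod,
               Oid reqtype, int32 reqtypmod)
    {
        if (valtype != reqtype || valtypmod != reqtypmod)
            return do_cast_value(value, valtype, valtypmod,
                                 reqtype, reqtypmod);   /* slow path */
        return value;       /* common case: nothing to convert */
    }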
Amit Khandekar
Discussion: https://postgr.es/m/CAJ3gD9eBNrmUD7WBBLG8ohaZ485H9y+4eihQTgr+K8Lhka3vcQ@mail.gmail.com
The cfbot and some BF animals are complaining about the previous
read_binary_file commit because of ignoring return value of ‘fread’.
So let's make everyone happy by testing the return value even though
not strictly needed.
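The pattern being applied, as a standalone hedged sketch:

    #include <stdio.h>

    /* Check fread's return value even when a short read is impossible
     * in practice, to keep compilers and static checkers quiet. */
    static int
    read_exact(FILE *file, char *buf, size_t len)
    {
        if (fread(buf, 1, len, file) != len)
            return -1;      /* let the caller report the short read */
        return 0;
    }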
Reported by Justin Pryzby, and suggested patch by Tom Lane. Backpatched
to v11 same as the previous commit.
Reported-By: Justin Pryzby
Reviewed-By: Tom Lane
Discussion: https://postgr.es/m/flat/969b8d82-5bb2-5fa8-4eb1-f0e685c5d736%40joeconway.com
Backpatch-through: 11
read_binary_file(), used by SQL functions pg_read_file() and friends,
uses stat to determine file length to read, when not passed an explicit
length as an argument. This is problematic, for example, if the file
being read is a virtual file with a stat-reported length of zero.
Arrange to read until EOF, or the StringInfo data string length limit,
is reached instead.
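A standalone hedged sketch of the read-to-EOF approach (the real
function appends to a StringInfo and enforces its length limit):

    #include <stdio.h>
    #include <stdlib.h>

    /* Grow a buffer and read until EOF, since st_size can be zero or
     * wrong for virtual files (e.g. under /proc). Error checks trimmed. */
    static char *
    slurp_file(FILE *f, size_t *len)
    {
        size_t  used = 0, alloced = 8192;
        char   *buf = malloc(alloced);

        for (;;)
        {
            size_t  got = fread(buf + used, 1, alloced - used, f);

            used += got;
            if (got == 0)
                break;              /* EOF, or error: check ferror(f) */
            if (used == alloced)
                buf = realloc(buf, alloced *= 2);
        }
        *len = used;
        return buf;
    }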
Original complaint and patch by me, with significant review, corrections,
advice, and code optimizations by Tom Lane. Backpatched to v11. Prior to
that only paths relative to the data and log dirs were allowed for files,
so no "zero length" files were reachable anyway.
Reviewed-By: Tom Lane
Discussion: https://postgr.es/m/flat/969b8d82-5bb2-5fa8-4eb1-f0e685c5d736%40joeconway.com
Backpatch-through: 11
After running GetForeignRelSize for a foreign table, adjust rel->tuples
to be at least as large as rel->rows. This prevents bizarre behavior
in estimate_num_groups() and perhaps other places, especially in the
scenario where rel->tuples is zero because pg_class.reltuples is zero
(suggesting that ANALYZE has never been run for the table). As things
stood, we'd end up estimating one group out of any GROUP BY on such a
table, whereas the default group-count estimate is more likely to result
in a sane plan.
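The adjustment itself is tiny; a hedged sketch against the planner's
RelOptInfo:

    /* After GetForeignRelSize: keep the tuple estimate consistent with
     * the row estimate the FDW just produced. */
    if (rel->tuples < rel->rows)
        rel->tuples = rel->rows;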
Also, clarify in the documentation that GetForeignRelSize has the option
to override the rel->tuples value if it has a better idea of what to use
than what is in pg_class.reltuples.
Per report from Jeff Janes. Back-patch to all supported branches.
Patch by me; thanks to Etsuro Fujita for review
Discussion: https://postgr.es/m/CAMkU=1xNo9cnan+Npxgz0eK7394xmjmKg-QEm8wYG9P5-CcaqQ@mail.gmail.com
Commit ecd9e9f0b fixed the problem in the wrong place, causing unwanted
side-effects on the behavior of GetNextTempTableSpace(). Instead,
let's make SharedFileSetInit() responsible for subbing in the value
of MyDatabaseTableSpace when the default tablespace is called for.
The convention about what is in the tempTableSpaces[] array is
evidently insufficiently documented, so try to improve that.
It also looks like SharedFileSetInit() is doing the wrong thing in the
case where temp_tablespaces is empty. It was hard-wiring use of the
pg_default tablespace, but it seems like using MyDatabaseTableSpace
is more consistent with what happens for other temp files.
Back-patch the reversion of PrepareTempTablespaces()'s behavior to
9.5, as ecd9e9f0b was. The changes in SharedFileSetInit() go back
to v11 where that was introduced. (Note there is net zero code change
before v11 from these two patch sets, so nothing to release-note.)
Magnus Hagander and Tom Lane
Discussion: https://postgr.es/m/CABUevExg5YEsOvqMxrjoNvb3ApVyH+9jggWGKwTDFyFCVWczGQ@mail.gmail.com
This saves one level of C function call per plpgsql statement executed,
and permits a tiny additional optimization of not saving and restoring
estate->err_stmt for each statement in a block. The net effect seems
nearly un-measurable on x86_64, but there's a clear win on aarch64,
amounting to two or three percent in a loop over a few simple plpgsql
statements.
To do this, we have to get rid of the other existing call sites for
exec_stmt(). Replace them with exec_toplevel_block(), which is just
defined to do what exec_stmts() does, but for a single
PLpgSQL_stmt_block statement. Hard-wiring the expectation of which
statement type applies here allows us to skip the dispatch switch,
making this not much uglier than the previous factorization.
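A hedged sketch of the new entry point (the commit names
exec_toplevel_block(); its body here is illustrative):

    /* Run the function's outermost block directly: we know it is a
     * PLpgSQL_stmt_block, so skip the per-statement dispatch switch. */
    static int
    exec_toplevel_block(PLpgSQL_execstate *estate, PLpgSQL_stmt_block *block)
    {
        estate->err_stmt = (PLpgSQL_stmt *) block;
        return exec_stmt_block(estate, block);
    }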
Amit Khandekar, tweaked a bit by me
Discussion: https://postgr.es/m/CAJ3gD9eBNrmUD7WBBLG8ohaZ485H9y+4eihQTgr+K8Lhka3vcQ@mail.gmail.com
A likely copy/paste error in 98e8b48053 from back in 2004 would
cause temp tablespace to be reset to InvalidOid if temp_tablespaces
was set to the same value as the primary tablespace in the database.
This would cause shared filesets (such as for parallel hash joins)
to ignore them, putting the temporary files in the default tablespace
instead of the configured one. The bug is in the old code, but it
appears to have been exposed only once we had shared filesets.
Reviewed-By: Daniel Gustafsson
Discussion: https://postgr.es/m/CABUevExg5YEsOvqMxrjoNvb3ApVyH+9jggWGKwTDFyFCVWczGQ@mail.gmail.com
Backpatch-through: 9.5
Do the same for the maintenance_work_mem global variable.
Oversight in commit 848ae330a4, which increased the previous defaults
for work_mem and maintenance_work_mem by 4X.
Make some of the variable names in _bt_search() consistent with
corresponding variables within _bt_getstackbuf(). This naming scheme is
clearer because the variable names always express a relationship between
the currently locked buffer/page and some other page.
src/backend/replication/README includes since 32bc08b a basic
description of the WAL receiver hooks available in walreceiver.h for a
module like libpqwalreceiver, but the README has never been updated to
reflect changes made to the hooks, so its contents have rotted over
time. This commit moves the description from the README
to walreceiver.h, where it will be hard to miss that a description
update or addition is needed depending on the modifications done to the
hooks.
Each hook now includes a description of what it does in walreceiver.h,
and the replication's README mentions walreceiver.h.
Thanks also to Amit Kapila for the discussion.
Author: Michael Paquier
Reviewed-by: Peter Eisentraut
Discussion: https://postgr.es/m/20200502024606.GA471944@paquier.xyz
This is a follow-up commit similar to 68de144, with more places in the
backend code simplified by using the macros that assign values to the
fields of ObjectAddress. The code paths changed here could be
transitioned later into using more grouping when inserting dependency
records, simplifying this future work.
Author: Daniel Gustafsson, Michael Paquier
Discussion: https://postgr.es/m/20190213182737.mxn6hkdxwrzgxk35@alap3.anarazel.de
Use separate functions to save and restore error context information,
as that makes the code easier to understand. Also, make it clear that the index
information required for error context is sane.
Author: Andres Freund, Justin Pryzby, Amit Kapila
Backpatch-through: 13, where it was introduced
Discussion: https://postgr.es/m/CAA4eK1LWo+v1OWu=Sky27GTGSCuOmr7iaURNbc5xz6jO+SaPeA@mail.gmail.com
When creating an extension, the same type of dependency is used when
registering dependencies on the schema and on required extensions. This
improves the code so that those dependencies are not recorded one-by-one,
but grouped together. Note that this has the side effect of removing
duplicate dependency entries, even if that should not happen in practice
as extensions listed as required in a control file should be listed only
once.
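A hedged sketch of the grouped recording (ObjectAddressSet() and
recordMultipleDependencies() are existing catalog APIs; the array
contents are illustrative):

    ObjectAddress refs[2];

    ObjectAddressSet(refs[0], NamespaceRelationId, schemaOid);
    ObjectAddressSet(refs[1], ExtensionRelationId, requiredExtOid);

    /* One insertion pass instead of one recordDependencyOn() per entry */
    recordMultipleDependencies(&extensionAddr, refs, 2, DEPENDENCY_NORMAL);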
Extracted from a larger patch by the same author.
Author: Daniel Gustafsson
Discussion: https://postgr.es/m/20200629065535.GA183079@paquier.xyz
001_ssltests.pl and 002_scram.pl both generated an extra file for a
client key used in the tests that was not removed. In Debian, this
causes repeated builds to fail.
The code refactoring done in 4dc6355 broke the cleanup done in
001_ssltests.pl, and the new tests added in 002_scram.pl via d6e612f
forgot the removal of one file. While on it, fix a second issue
introduced in 002_scram.pl where we use the same file name in 001 and
002 for the temporary client key whose permissions are changed in the
test, as using the same file name in both tests could cause failures
with parallel jobs of src/test/ssl/ if one test removes a file still
needed by the second test.
Reported-by: Felix Lechner
Author: Daniel Gustafsson, Felix Lechner
Reviewed-by: Tom Lane, Michael Paquier
Discussion: https://postgr.es/m/CAFHYt543sjX=Cm_aEeoejStyP47C+Y3+Wh6WbirLXsgUMaw7iw@mail.gmail.com
Backpatch-through: 13
The "Disk Usage" and "HashAgg Batches" properties in the EXPLAIN ANALYZE
output for HashAgg were previously only shown if the number of batches
was greater than 0. Here we change this so that these properties are
always shown for EXPLAIN ANALYZE formats other than "text". The idea here
is that since the HashAgg could have spilled to disk if there had been
more data or groups to aggregate, it's relevant that we're clear in
the EXPLAIN ANALYZE output when no spilling occurred in this particular
execution of the given plan.
For the "text" EXPLAIN format, we still hide these properties when no
spilling occurs. This EXPLAIN format is designed to be easy for humans
to read. To maintain the readability we have a higher threshold for which
properties we display for this format.
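Shape of the change, as a hedged sketch using explain.c's property API
(the exact condition and field names are assumed):

    if (es->format != EXPLAIN_FORMAT_TEXT || aggstate->hash_batches_used > 0)
    {
        ExplainPropertyInteger("Disk Usage", "kB",
                               aggstate->hash_disk_used, es);
        ExplainPropertyInteger("HashAgg Batches", NULL,
                               aggstate->hash_batches_used, es);
    }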
Discussion: https://postgr.es/m/CAApHDvo_dmNozQQTmN-2jGp1vT%3Ddxx7Q0vd%2BMvD1cGpv2HU%3DSg%40mail.gmail.com
Backpatch-through: 13, where the hashagg spilling code was added.
Any frontend-only file of src/common/ should include a guard to
prevent such code from being included in the backend compilation.
fe_memutils.c and restricted_token.c have been doing that, while
file_utils.c (since bf5bb2e) and logging.c (since fc9a62a) forgot it.
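The guard follows the pattern already present in fe_memutils.c; sketch:

    #ifndef FRONTEND
    #error "This file is not expected to be compiled for backend code"
    #endif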
Reviewed-by: Daniel Gustafsson
Discussion: https://postgr.es/m/20200625080757.GI130132@paquier.xyz
The IANA tzcode library has a feature to read a time zone file named
"posixrules" and apply the daylight-savings transition dates and times
therein, when it is given a POSIX-style time zone specification that
lacks an explicit transition rule. However, there's a problem with
that code: it doesn't work for dates past the Y2038 time_t rollover.
(Effectively, all times beyond that point are treated as standard
time.) The IANA crew regard this feature as legacy, so their plan is
to remove it, not fix it. The time frame in which that will happen
is unclear, but presumably it'll happen well before 2038.
Moreover, effective with the next IANA data update (probably this
fall), the recommended default will be to not install a "posixrules"
file in the first place. The time frame in which tzdata packagers
might adopt that suggestion is likewise unclear, but at least some
platforms will probably do it in the next year or so. While we could
ignore that recommendation so far as PG-supplied tzdata trees are
concerned, builds using --with-system-tzdata will be subject to
whatever the platform's tzdata packager decides to do.
Thus, whether or not we do anything, some increasing fraction of
Postgres users will be exposed to the behavior observed when there
is no "posixrules" file; and if we do nothing, we'll have essentially
no control over the timing of that change.
The best thing to do to ameliorate the uncertainty seems to be to
proactively remove the posixrules-reading feature. If we do that in
a scheduled release then at least we can release-note the behavioral
change, rather than having users be surprised by it after a routine
tzdata update.
The change in question is fairly minor anyway: to be affected,
you have to be using a POSIX-style timezone spec, it has to not
have an explicit rule, and it has to not be one of the four traditional
continental-USA zone names (EST5EDT, CST6CDT, MST7MDT, or PST8PDT),
as those are special-cased. Since the default "posixrules" file
provides USA DST rules, the number of people who are likely to find
such a zone spec useful is probably quite small. Moreover, the
fallback behavior with no explicit rule and no "posixrules" file is to
apply current USA rules, so the only thing that really breaks is the
DST transitions in years before 2007 (and you get the countervailing
fix that transitions after 2038 will be applied).
Now, some installations might have replaced the "posixrules" file,
allowing e.g. EU rules to be applied to a POSIX-style timezone spec.
That won't work anymore. But it's not exactly clear why this solution
would be preferable to using a regular named zone. In any case, given
the Y2038 issue, we need to be pushing users to stop depending on this.
Back-patch into v13; it hasn't been released yet, so it seems OK to
change its behavior. (Personally I think we ought to back-patch
further, but I've been outvoted.)
Discussion: https://postgr.es/m/1390.1562258309@sss.pgh.pa.us
Discussion: https://postgr.es/m/20200621211855.6211-1-eggert@cs.ucla.edu
Commit 54cd4f045 added some kluges to work around an old glibc bug,
namely that %.*s could misbehave if glibc thought any characters in
the supplied string were incorrectly encoded. Now that we use our
own snprintf.c implementation, we need not worry about that bug (even
if it still exists in the wild). Revert a couple of particularly
ugly hacks, and remove or improve assorted comments.
Note that there can still be encoding-related hazards here: blindly
clipping at a fixed length risks producing wrongly-encoded output
if the clip splits a multibyte character. However, code that's
doing correct multibyte-aware clipping doesn't really need a comment
about that, while code that isn't needs an explanation why not,
rather than a red-herring comment about an obsolete bug.
Discussion: https://postgr.es/m/279428.1593373684@sss.pgh.pa.us
Since %c only passes a C "char" to printf, it's incapable of dealing
with multibyte characters. Passing just the first byte of such a
character leads to an output string that is visibly not correctly
encoded, resulting in undesirable behavior such as encoding conversion
failures while sending error messages to clients.
We've lived with this issue for a long time because it was inconvenient
to avoid in a portable fashion. However, now that we always use our own
snprintf code, it's reasonable to use the %.*s format to print just one
possibly-multibyte character in a string. (We previously avoided that
obvious-looking answer in order to work around glibc's bug #6530, cf
commits 54cd4f045 and ed437e2b2.)
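A hedged sketch of the idiom (pg_mblen() returns the byte length of the
multibyte character at the given pointer; the message is illustrative):

    /* Print one possibly-multibyte character, not just its first byte */
    ereport(ERROR,
            (errcode(ERRCODE_SYNTAX_ERROR),
             errmsg("unexpected character %.*s", pg_mblen(cp), cp)));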
Hence, run around and fix a bunch of places that used %c to report
a character found in a user-supplied string. For simplicity, I did
not touch places that were emitting non-user-facing debug messages,
or reporting catalog data that should always be ASCII. (It's also
unclear how useful this approach could be in frontend code, where
it's less certain that we know what encoding we're dealing with.)
In passing, improve a couple of poorly-written error messages in
pageinspect/heapfuncs.c.
This is a longstanding issue, but I'm hesitant to back-patch because
of the impact on translatable message strings. In any case this fix
would not work reliably before v12.
Tom Lane and Quan Zongliang
Discussion: https://postgr.es/m/a120087c-4c88-d9d4-1ec5-808d7a7f133d@gmail.com
SQL:1999 had syntax
SUBSTRING(text FROM pattern FOR escapechar)
but this was replaced in SQL:2003 by the clearer
SUBSTRING(text SIMILAR pattern ESCAPE escapechar)
which had never been implemented in PostgreSQL. This patch adds that
new syntax as an alternative in the parser, and updates documentation
and tests to indicate that this is the preferred alternative now.
Reviewed-by: Pavel Stehule <pavel.stehule@gmail.com>
Reviewed-by: Vik Fearing <vik@postgresfriends.org>
Reviewed-by: Fabien COELHO <coelho@cri.ensmp.fr>
Discussion: https://www.postgresql.org/message-id/flat/a15db31c-d0f8-8ce0-9039-578a31758adb%402ndquadrant.com
The logic used to build the set of dependencies needed for a type is
rather repetitive with direct assignments for each ObjectAddress field.
This refactors the code to use the macro ObjectAddressSet() instead, to
do the same work. There are more areas of the backend code that could
use this macro, but these are left for a follow-up patch that will
partially rework the way dependencies are recorded as well. Type
dependencies are left out of the follow-up patch, so they are refactored
separately here.
Extracted from a larger patch by the same author.
Author: Daniel Gustafsson
Discussion: https://postgr.es/m/20190213182737.mxn6hkdxwrzgxk35@alap3.anarazel.de
OpenSSL's native reports about problems related to protocol version
restrictions are pretty opaque and inconsistent. When we get an
SSL error that is plausibly due to this, emit a hint message that
includes the range of SSL protocol versions we (think we) are
allowing. This should at least get the user thinking in the right
direction to resolve the problem, even if the hint isn't totally
accurate, which it might not be for assorted reasons.
Back-patch to v13 where we increased the default minimum protocol
version, thereby increasing the risk of this class of failure.
Patch by me, reviewed by Daniel Gustafsson
Discussion: https://postgr.es/m/a9408304-4381-a5af-d259-e55d349ae4ce@2ndquadrant.com
When we initially created this parameter, in commit ff8ca5fad, we left
the default as "allow any protocol version" on grounds of backwards
compatibility. However, that's inconsistent with the backend's default
since b1abfec82; protocol versions prior to 1.2 are not considered very
secure; and OpenSSL has had TLSv1.2 support since 2012, so the number
of PG servers that need a lesser minimum is probably quite small.
On top of those things, it emerges that some popular distros (including
Debian and RHEL) set MinProtocol=TLSv1.2 in openssl.cnf. Thus, far
from having "allow any protocol version" behavior in practice, what
we actually have as things stand is a platform-dependent lower limit.
So, change our minds and set the min version to TLSv1.2. Anybody
wanting to connect with a new libpq to a pre-2012 server can either
set ssl_min_protocol_version=TLSv1 or accept the fallback to non-SSL.
Back-patch to v13 where the aforementioned patches appeared.
Patch by me, reviewed by Daniel Gustafsson
Discussion: https://postgr.es/m/a9408304-4381-a5af-d259-e55d349ae4ce@2ndquadrant.com
We failed to save the slot to disk after invalidating it, so the state was
lost in case of server restart or crash. Fix by marking it dirty and
flushing.
Also, if the slot is known invalidated we don't need to reason about the
LSN at all -- it's known invalidated. Only test the LSN if the slot is
known not invalidated.
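A hedged sketch of persisting the invalidation (ReplicationSlotMarkDirty()
and ReplicationSlotSave() are the existing slot APIs; the locking details
around the checkpointer's use are assumed):

    SpinLockAcquire(&s->mutex);
    s->data.invalidated_at = s->data.restart_lsn;
    s->data.restart_lsn = InvalidXLogRecPtr;
    SpinLockRelease(&s->mutex);

    /* Without these, the invalidation would not survive a restart */
    ReplicationSlotMarkDirty();
    ReplicationSlotSave();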
Author: Fujii Masao <masao.fujii@oss.nttdata.com>
Author: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Reviewed-by: Álvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://postgr.es/m/17a69cfe-f1c1-a416-ee25-ae15427c69eb@oss.nttdata.com
Commit 0d861bbb, which added deduplication to nbtree, had
_bt_check_unique() pass a TID to table_index_fetch_tuple_check() that
isn't safe to mutate. table_index_fetch_tuple_check()'s tid argument is
modified when the TID in question is not the latest visible tuple in a
hot chain, though this wasn't documented.
To fix, go back to using a local copy of the TID in _bt_check_unique(),
and update comments above table_index_fetch_tuple_check().
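A hedged sketch of the fix (table_index_fetch_tuple_check() may overwrite
its tid argument with the latest visible TID of a HOT chain, so hand it a
scratch copy; the surrounding variables are illustrative):

    /* Local copy: the callee may mutate the TID it is given */
    ItemPointerData htid = itup->t_tid;

    if (table_index_fetch_tuple_check(heapRel, &htid,
                                      &SnapshotDirty, &all_dead))
        conflict = true;    /* illustrative: a live duplicate exists */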
Backpatch: 13-, where B-Tree deduplication was introduced.
If the restart_lsn of a logical replication slot falls more than
max_slot_wal_keep_size behind the current LSN, the slot is invalidated
and its restart_lsn is reset to an invalid LSN.
If such a slot with an invalid restart_lsn was specified as the source
slot in pg_copy_logical_replication_slot(), the function unexpectedly
caused an assertion failure.
This assertion was added because restart_lsn could never be invalid
before. But in v13 it can be, thanks to max_slot_wal_keep_size. Since
the assertion is no longer useful, this commit removes it.
This commit also changes the errcode in the error message that
pg_copy_logical_replication_slot() emits when a slot with an invalid
restart_lsn is specified, to a more appropriate one.
Back-patch to v13 where max_slot_wal_keep_size was added and
the assertion was no longer valid.
Author: Fujii Masao
Reviewed-by: Alvaro Herrera, Kyotaro Horiguchi
Discussion: https://postgr.es/m/f91de4fb-a7ab-b90e-8132-74796e049d51@oss.nttdata.com
I forgot that INT64_FORMAT can't be used with sscanf on Windows.
Use the same trick of sscanf'ing into a temp variable as we do in
some other places in zic.c.
The upstream IANA code avoids the portability problem by relying on
<inttypes.h>'s SCNdFAST64 macro. Once we're requiring C99 in all
branches, we should do likewise and drop this set of diffs from
upstream. For now, though, a hack seems fine, since we do not
actually care about leapseconds anyway.
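The trick, as a standalone hedged sketch (the values involved here are
small counts, so a plain long is wide enough):

    #include <stdio.h>
    #include <stdint.h>

    /* Scan into a type sscanf portably understands, then assign,
     * instead of putting INT64_FORMAT in the format string. */
    static int
    scan_int64(const char *str, int64_t *out)
    {
        long    tmp;

        if (sscanf(str, "%ld", &tmp) != 1)
            return -1;
        *out = tmp;
        return 0;
    }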
Discussion: https://postgr.es/m/4e5d1a5b-143e-e70e-a99d-a3b01c1ae7c3@2ndquadrant.com
In pg_replication_slots, change output from normal/reserved/lost to
reserved/extended/unreserved/lost, which better expresses the possible
states, particularly near the time when segments are no longer safe but
the checkpoint has not run yet.
Under the new definition, reserved means the slot is consuming WAL
that's still under the normal WAL size constraints; extended means it's
consuming WAL that's being protected by wal_keep_segments or the slot
itself, whose size is below max_slot_wal_keep_size; unreserved means the
WAL is no longer safe, but checkpoint has not yet removed those files.
Such a slot is in imminent danger, but can still continue for a little
while and may catch up to the reserved WAL space.
Also, there were some bugs in the calculations used to report the
status; fixed those.
Backpatch to 13.
Reported-by: Fujii Masao <masao.fujii@oss.nttdata.com>
Author: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Reviewed-by: Fujii Masao <masao.fujii@oss.nttdata.com>
Reviewed-by: Álvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://postgr.es/m/20200616.120236.1809496990963386593.horikyota.ntt@gmail.com
The current definition is dangerous. No bugs exist in our code at
present, but backpatch nonetheless to 11, where it was introduced.
Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
The description of InsertPgAttributeTuple() has not matched its handling
of pg_attribute contents with NULL values for a long time, and 911e702
made things more inconsistent. This adjusts the description to match
reality.
Author: Daniel Gustafsson
Discussion: https://postgr.es/m/4E4E4B33-9FDF-4D21-B77A-642D027AEAD9@yesql.se
explain_get_index_name() applied quote_identifier() to the index name.
This is fine for text output, but the non-text output formats all have
their own quoting conventions and would much rather start from the
actual index name. For example in JSON you'd get something like
"Index Name": "\"My Index\"",
which is surely not desirable, especially when the same does not
happen for table names. Hence, move the responsibility for applying
quoting out to the callers, where it can go into already-existing
special code paths for text format.
This changes the API spec for users of explain_get_index_name_hook:
before, they were supposed to apply quote_identifier() if necessary,
now they should not. Research suggests that the only publicly
available user of the hook is hypopg, and it actually forgot to
apply quoting anyway, so it's fine. (In any case, there's no
behavioral change for the output of a hook as seen in non-text
EXPLAIN formats, so this won't break any case that programs should
be relying on.)
Digging in the commit logs, it appears that quoting was included in
explain_get_index_name's duties when commit 604ffd280 invented it;
and that was fine at the time because we only had text output format.
This should have been rethought when non-text formats were invented,
but it wasn't.
This is a fairly clear bug for users of non-text EXPLAIN formats,
so back-patch to all supported branches.
Per bug #16502 from Maciek Sakrejda. Patch by me (based on
investigation by Euler Taveira); thanks to Julien Rouhaud for review.
Discussion: https://postgr.es/m/16502-57bd1c9f913ed1d1@postgresql.org
Both INDEX_CLEANUP and TRUNCATE have been available since v12, and are
enabled by default unless vacuum_index_cleanup or vacuum_truncate,
respectively, is disabled for a given relation. This change adds
support for disabling these options from vacuumdb.
Author: Nathan Bossart
Reviewed-by: Michael Paquier, Masahiko Sawada
Discussion: https://postgr.es/m/6F7F17EF-B1F2-4681-8D03-BA96365717C0@amazon.com
spg_mask() didn't take into account that pd_lower equal to
SizeOfPageHeaderData is still a valid value. This commit fixes that.
Backpatch to 11, where spg_mask()'s pd_lower check was introduced.
Reported-by: Michael Paquier
Discussion: https://postgr.es/m/20200615131405.GM52676%40paquier.xyz
Backpatch-through: 11
The function has been reading global variable forceSyncCommit, mirroring
the intent of the caller that passed forceSync=forceSyncCommit. The
other caller, RecordTransactionCommitPrepared(), passed false. Since
COMMIT PREPARED can't share a transaction with any command, it certainly
doesn't share a transaction with a command that sets forceSyncCommit.
Reviewed by Michael Paquier.
Discussion: https://postgr.es/m/20200617032615.GC2916904@rfd.leadboat.com
A few places calling fwrite and gzwrite were not setting errno to ENOSPC
when reporting errors, as is customary; this led to some failures being
reported as
"could not write file: Success"
which makes us look silly. Make a few of these places in pg_dump and
pg_basebackup use our customary pattern.
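The customary pattern, as a standalone hedged sketch:

    #include <errno.h>
    #include <stdio.h>

    /* fwrite doesn't reliably set errno on short writes; assume the
     * likely cause so the report isn't "could not write: Success". */
    static int
    write_all(FILE *fp, const void *buf, size_t len)
    {
        errno = 0;
        if (fwrite(buf, 1, len, fp) != len)
        {
            if (errno == 0)
                errno = ENOSPC;
            return -1;
        }
        return 0;
    }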
Backpatch-to: 9.5
Author: Justin Pryzby <pryzby@telsasoft.com>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://postgr.es/m/20200611153753.GU14879@telsasoft.com
The IANA time zone folk have deprecated use of a "posixrules" file in
the tz database. While for now it's our choice whether to keep
supplying one in our own builds, installations built with
--with-system-tzdata will soon be needing to cope with that file not
being present, at least on some platforms.
This causes a problem for the horology test, which expected the
nonstandard POSIX zone spec "CST7CDT" to apply pre-2007 US daylight
savings rules. That does happen if the posixrules file supplies such
information, but otherwise the test produces undesired results.
To fix, add an explicit transition date rule that matches 2005 practice.
(We could alternatively have switched the test to use some real time
zone, but it seems useful to have coverage of this type of zone spec.)
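For illustration (a hedged example, not quoted from the patch), a
POSIX zone spec with an explicit rule matching 2005 US practice,
i.e. DST from the first Sunday of April to the last Sunday of October:

    CST7CDT,M4.1.0,M10.5.0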
While at it, update a documentation example that also relied on
"CST7CDT"; use a real-world zone name instead. Also, document why
the zone names EST5EDT, CST6CDT, MST7MDT, PST8PDT aren't subject to
similar failures when "posixrules" is missing.
Back-patch to all supported branches, since the hazard is the same
for all.
Discussion: https://postgr.es/m/1665379.1592581287@sss.pgh.pa.us
It was possible for deduplication's single value strategy to mistakenly
believe that a very small duplicate tuple counts as one of the six large
tuples that it aims to leave behind after the page finally splits. This
could cause slightly suboptimal space utilization with very low
cardinality indexes, though only under fairly narrow conditions.
To fix, be particular about what kind of tuple counts as a
maxpostingsize-capped tuple. This avoids confusion in the event of a
small tuple that gets "wedged" between two large tuples, where all
tuples on the page are duplicates of the same value.
Discussion: https://postgr.es/m/CAH2-Wz=Y+sgSFc-O3LpiZX-POx2bC+okec2KafERHuzdVa7-rQ@mail.gmail.com
Backpatch: 13-, where deduplication was introduced (by commit 0d861bbb)
This commit fixes the following issues.
1. There is a case where the slot is dropped while trying to invalidate
it. InvalidateObsoleteReplicationSlots() did not handle this case, which
could cause a checkpoint to fail.
2. InvalidateObsoleteReplicationSlots() could emit the same log message
multiple times unnecessarily. It should be logged only once.
3. When marking the slot as used, we always searched for the target slot
across all the replication slots even after we had already found it,
wasting cycles.
Back-patch to v13 where these issues were added as a part of
max_slot_wal_keep_size code.
Author: Fujii Masao
Reviewed-by: Kyotaro Horiguchi, Alvaro Herrera
Discussion: https://postgr.es/m/66c05b67-3396-042c-1b41-bfa6c3ddcf82@oss.nttdata.com
Since 1f39bce02, HashAgg nodes have had the ability to spill to disk when
memory consumption exceeds work_mem. That commit added new properties to
EXPLAIN ANALYZE to show the maximum memory usage and disk usage, however,
it didn't quite go as far as showing that information for parallel
workers. Since workers may have experienced something very different from
the main process, we should show this information per worker, as is done
in Sort.
Reviewed-by: Justin Pryzby
Reviewed-by: Jeff Davis
Discussion: https://postgr.es/m/CAApHDvpEKbfZa18mM1TD7qV6PG+w97pwCWq5tVD0dX7e11gRJw@mail.gmail.com
Backpatch-through: 13, where the hashagg spilling code was added.
This was a danger only for --disable-spinlocks in combination with
atomic operations unsupported by the current platform.
While atomics.c was careful to signal that a separate semaphore ought
to be used when spinlock emulation is active, spin.c didn't actually
implement that mechanism. That's my (Andres') fault, it seems to have
gotten lost during the development of the atomic operations support.
Fix that issue and add test for nesting atomic operations inside a
spinlock.
Author: Andres Freund
Discussion: https://postgr.es/m/20200605023302.g6v3ydozy5txifji@alap3.anarazel.de
Backpatch: 9.5-
As s_lock_test, the already existing test for spinlocks, isn't run in
an automated fashion (and doesn't test a normal backend environment),
adding tests that are run as part of a normal regression run is a good
idea. Particularly in light of several recent and upcoming spinlock
related fixes.
Currently the new tests are run as part of the pre-existing
test_atomic_ops() test. That perhaps can be quibbled about, but for
now seems ok.
The only operations that s_lock_test tests but the new tests don't are
the detection of a stuck spinlock and S_LOCK_FREE (which is otherwise
unused, not implemented on all platforms, and will be removed).
This currently contains a test for more than INT_MAX spinlocks (only
run with --disable-spinlocks), to ensure the recent commit fixing a
bug with more than INT_MAX spinlock initializations is correct. That
test is somewhat slow, so we might want to disable it after a few
days.
It might be worth retiring s_lock_test after this. The added coverage
of a stuck spinlock probably isn't worth the added complexity?
Author: Andres Freund
Discussion: https://postgr.es/m/20200606023103.avzrctgv7476xj7i@alap3.anarazel.de
Advancing a replication slot did not recompute the oldest xmin and LSN
values across replication slots, preventing resource removal such as
segments not being recycled at checkpoint time. The original commit that
introduced slot advancing, 9c7d06d, never updated those
oldest values, and b0afdca removed this code.
This commit adds a TAP test to check segment recycling with advancing
for physical slots, enforcing an extra segment switch before advancing
to check if the segment gets correctly recycled after a checkpoint.
Reported-by: Andres Freund
Reviewed-by: Alexey Kondratov, Kyotaro Horiguchi
Discussion: https://postgr.es/m/20200609171904.kpltxxvjzislidks@alap3.anarazel.de
Backpatch-through: 11
testtablespace is an extra path used as tablespace location in the main
regression test suite, computed from --outputdir as defined by the
caller of pg_regress (current directory if undefined).
This special handling was introduced as of f10589e to be specific to
MSVC, as we let pg_regress' Makefile handle this cleanup in other
environments. This moves the cleanup to the MSVC script running
regression tests instead where needed: check, installcheck and
upgradecheck. I have also checked this patch on MSVC with repeated runs
of each target.
Author: Kyotaro Horiguchi, Michael Paquier
Discussion: https://postgr.es/m/20200219.142519.437573253063431435.horikyota.ntt@gmail.com
This absorbs a leap-second-related bug fix in localtime.c, and
teaches zic to handle an expiration marker in the leapseconds file.
Neither are of any interest to us (for the foreseeable future
anyway), but we need to stay more or less in sync with upstream.
Also adjust some over-eager changes in the README from commit 957338418.
I have no intention of making changes that require C99 in this code,
until such time as all the live back branches require C99. Otherwise
back-patching will get too exciting.
For the same reason, absorb assorted whitespace and other cosmetic
changes from HEAD into the back branches; mostly this reflects use of
improved versions of pgindent.
All in all then, quite a boring update. But I figured I'd get it
done while I was looking at this code.
On platforms without support for 64bit atomic operations where we also
cannot rely on 64bit reads to have single copy atomicity, such atomics
are implemented using a spinlock based fallback. That means it's not
safe to even read such atomics from within a signal handler (since the
signal handler might run when the spinlock already is held).
To avoid this issue defer global barrier processing out of the signal
handler. Instead of checking local / shared barrier generation to
determine whether to set ProcSignalBarrierPending, introduce
PROCSIGNAL_BARRIER and always set ProcSignalBarrierPending when
receiving such a signal. Additionally avoid redundant work in
ProcessProcSignalBarrier if ProcSignalBarrierPending is set unnecessarily.
Also do a small amount of other polishing.
Author: Andres Freund
Reviewed-By: Robert Haas
Discussion: https://postgr.es/m/20200609193723.eu5ilsjxwdpyxhgz@alap3.anarazel.de
Backpatch: 13-, where the code was introduced.
Don't use fread(), since that doesn't necessarily set errno. We could
use read() instead, but it's even better to use pg_pread(), which
allows us to avoid some extra calls to seek to the desired location in
the file.
Also, advertise a wait event while reading from a file, as we do for
most other places where we're reading data from files.
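A hedged sketch of the combined change (pg_pread() and the pgstat
wait-event calls exist; the wait event and error text are illustrative):

    pgstat_report_wait_start(WAIT_EVENT_SLRU_READ);     /* illustrative */
    nread = pg_pread(fd, buf, nbytes, offset);          /* no extra lseek */
    pgstat_report_wait_end();

    if (nread < 0)
        ereport(ERROR,
                (errcode_for_file_access(),
                 errmsg("could not read file \"%s\": %m", path)));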
Patch by me, reviewed by Hamid Akhtar.
Discussion: http://postgr.es/m/CA+TgmobBw-3573vMosGj06r72ajHsYeKtksT_oTxH8XvTL7DxA@mail.gmail.com
Merge two calls to sendDir() that are exactly the same except for
the fifth argument. Adjust comments to match.
Also, don't bother checking whether tblspc_map_file is NULL. We
initialize it in all cases, so it can't be.
Patch by me, reviewed by Amit Kapila and Kyotaro Horiguchi.
Discussion: http://postgr.es/m/CA+TgmoYq+59SJ2zBbP891ngWPA9fymOqntqYcweSDYXS2a620A@mail.gmail.com
Commit 72d422a522 made xlog.c call
sendTablespace() with the 'sizeonly' argument set to true, which
required basebackup.c to export sendTablespace(). However, that's
kind of ugly, so instead defer the call to sendTablespace() until
basebackup.c regains control. That way, it can still be a static
function.
Patch by me, reviewed by Amit Kapila and Kyotaro Horiguchi.
Discussion: http://postgr.es/m/CA+TgmoYq+59SJ2zBbP891ngWPA9fymOqntqYcweSDYXS2a620A@mail.gmail.com
Per POSIX this case should produce +0, but buildfarm member fossa
(with icc (ICC) 19.0.5.281 20190815) is reporting -0. icc has a
boatload of unsafe floating-point optimizations, with a corresponding
boatload of not-too-well-documented compiler switches, and it seems our
default use of "-mp1" isn't whacking it hard enough to keep it from
misoptimizing the stanza in dpow() that checks whether y is odd.
There's nothing wrong with that code (seeing that no other buildfarm
member has trouble with it), so I'm content to blame this on the
compiler. But without access to the compiler I'm not going to guess at
what switches might be needed to fix it. For now, tweak the test case
so it will accept either -0 or +0 as a correct answer.
Discussion: https://postgr.es/m/E1jkyFX-0005RR-1Q@gemulon.postgresql.org
I failed to notice that we don't really need to check for y being an
integer in the code path where x = -inf; we already did.
Also make some further cosmetic rearrangements in that spot in hopes
of dodging the seeming compiler bug that buildfarm member fossa is
hitting. And be consistent about declaring variables as "float8"
not "double", since the pre-existing variables in this function are
like that.
Discussion: https://postgr.es/m/E1jkyFX-0005RR-1Q@gemulon.postgresql.org
Convert buffile.c error handling to use ereport. This fixes cases where
I/O errors were indistinguishable from EOF or not reported. Also remove
"%m" from error messages where errno would be bogus. While we're
modifying those strings, add block numbers and short read byte counts
where appropriate.
Back-patch to all supported releases.
Reported-by: Amit Khandekar <amitdkhan.pg@gmail.com>
Reviewed-by: Melanie Plageman <melanieplageman@gmail.com>
Reviewed-by: Alvaro Herrera <alvherre@2ndquadrant.com>
Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Reviewed-by: Ibrar Ahmed <ibrar.ahmad@gmail.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/CA%2BhUKGJE04G%3D8TLK0DLypT_27D9dR8F1RQgNp0jK6qR0tZGWOw%40mail.gmail.com
Non-zero vacuum_defer_cleanup_age values cause pg_upgrade freezing of
the system catalogs to be incomplete, or do nothing. This will cause
the upgrade to fail in confusing ways.
Reported-by: Laurenz Albe
Discussion: https://postgr.es/m/7d6f6c22ba05ce0c526e9e8b7bfa8105e7da45e6.camel@cybertec.at
Backpatch-through: 9.5
Buildfarm results for commit e532b1d57 reveal the error in my thinking
about the unexpected-EDOM case. I'd supposed this was no longer really
a live issue, but it seems the fix for glibc's bug #3866 is not all that
old, and we still have at least one buildfarm animal (lapwing) with the
bug. Hence, resurrect essentially the previous logic (but, I hope, less
opaquely presented), and explain what it is we're really doing here.
Also, blindly try to fix fossa's failure by tweaking the logic that
figures out whether y is an odd integer when x is -inf. This smells
a whole lot like a compiler bug, but I lack access to icc to try to
pin it down. Maybe doing division instead of multiplication will
dodge the issue.
Discussion: https://postgr.es/m/E1jkU7H-00024V-NZ@gemulon.postgresql.org
Introduce TAR_BLOCK_SIZE and replace many instances of 512 with
the new constant. Introduce function tarPaddingBytesRequired
and use it to replace numerous repetitions of (x + 511) & ~511.
Add preprocessor guards against multiple inclusion to pgtar.h.
Reformat the prototype for tarCreateHeader so it doesn't extend
beyond 80 characters.
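The helper is the classic round-up-to-block idiom; a standalone hedged
sketch (the exact signature is assumed):

    #include <stdint.h>

    #define TAR_BLOCK_SIZE 512

    /* Zero-padding bytes needed to reach the next 512-byte boundary */
    static inline uint64_t
    tarPaddingBytesRequired(uint64_t len)
    {
        return ((len + TAR_BLOCK_SIZE - 1) &
                ~((uint64_t) (TAR_BLOCK_SIZE - 1))) - len;
    }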
Discussion: http://postgr.es/m/CA+TgmobWbfReO9-XFk8urR1K4wTNwqoHx_v56t7=T8KaiEoKNw@mail.gmail.com
Buildfarm results for commit decbe2bfb show that AIX and illumos
have non-POSIX-compliant pow() functions, as do ancient NetBSD
and HPUX releases. While it's dubious how much we should care
about the latter two platforms, the former two are probably enough
reason to put in manual handling of infinite-input cases. Hence,
do so, and clean up the post-pow() error handling to reflect its
now-more-limited scope. (Notably, while we no longer expect to
ever see EDOM from pow(), report it as a domain error if we do.
The former coding had the net effect of expensively converting the
error to ERANGE, which seems highly questionable: if pow() wanted
to report ERANGE, it would have done so.)
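The POSIX rules for infinite exponents, as a standalone hedged sketch
(NaN inputs and the rest of dpow()'s logic are handled elsewhere):

    #include <math.h>

    /* pow() for y = +/-Inf per POSIX: depends only on |x| versus 1 */
    static double
    pow_inf_y(double x, double y)
    {
        double  absx = fabs(x);

        if (absx == 1.0)
            return 1.0;             /* pow(+/-1, +/-Inf) is 1 */
        if ((absx > 1.0) == (y > 0.0))
            return INFINITY;        /* |x|>1,y=+Inf or |x|<1,y=-Inf */
        return 0.0;                 /* the remaining combinations */
    }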
Patch by me; thanks to Michael Paquier for review.
Discussion: https://postgr.es/m/E1jkU7H-00024V-NZ@gemulon.postgresql.org
Previously, these functions tended to throw underflow errors for
negative-infinity exponents. The correct thing per POSIX is to
return 0, so let's do that instead. (Note that the SQL standard
is silent on such issues, as it lacks the concepts of either Inf
or NaN; so our practice is to follow POSIX whenever a corresponding
C-library function exists.)
Also, add a bunch of test cases verifying that exp() and power()
actually do follow POSIX for Inf and NaN inputs. While this patch
should guarantee that exp() passes the tests, power() will not unless
the platform's pow(3) is fully POSIX-compliant. I already know that
gaur fails some of the tests, and I am suspicious that the Windows
animals will too; the extent of compliance of other old platforms
remains to be seen. We might choose to drop failing test cases, or
to work harder at overriding pow(3) for these cases, but first let's
see just how good or bad the situation is.
Discussion: https://postgr.es/m/582552.1591917752@sss.pgh.pa.us
This patch removes the hardcoded check for superuser privileges when
executing replication origin functions. Instead, execution is revoked
from public, meaning that those functions can be executed by a superuser
and that access to them can be granted.
Author: Martín Marqués
Reviewed-by: Kyotaro Horiguchi, Michael Paquier, Masahiko Sawada
Discussion: https://postgr.es/m/CAPdiE1xJMZOKQL3dgHMUrPqysZkgwzSMXETfKkHYnBAB7-0VRQ@mail.gmail.com
var_samp(numeric) and stddev_samp(numeric) disagreed with their float
cousins about what to do for a single non-null input value that is NaN.
The float versions return NULL on the grounds that the calculation is
only defined for more than one non-null input, which seems like the
right answer. But the numeric versions returned NaN, as a result of
dealing with edge cases in the wrong order. Fix that. The patch
also gets rid of an insignificant memory leak in such cases.
This inconsistency is of long standing, but on the whole it seems best
not to back-patch the change into stable branches; nobody's complained
and it's such an obscure point that nobody's likely to complain.
(Note that v13 and v12 now contain test cases that will notice if we
accidentally back-patch this behavior change in future.)
Report and patch by me; thanks to Dean Rasheed for review.
Discussion: https://postgr.es/m/353062.1591898766@sss.pgh.pa.us
When there is just one non-null input value, and it is infinity or NaN,
aggregates such as stddev_pop and covar_pop should produce a NaN
result, because the calculation is not well-defined. They used to do
so, but since we adopted Youngs-Cramer aggregation in commit e954a727f,
they produced zero instead. That's an oversight, so fix it. Add tests
exercising these edge cases.
Affected aggregates are
var_pop(double precision)
stddev_pop(double precision)
var_pop(real)
stddev_pop(real)
regr_sxx(double precision,double precision)
regr_syy(double precision,double precision)
regr_sxy(double precision,double precision)
regr_r2(double precision,double precision)
regr_slope(double precision,double precision)
regr_intercept(double precision,double precision)
covar_pop(double precision,double precision)
corr(double precision,double precision)
Back-patch to v12 where the behavior change was accidentally introduced.
Report and patch by me; thanks to Dean Rasheed for review.
Discussion: https://postgr.es/m/353062.1591898766@sss.pgh.pa.us
Unify the grammar of COMMENT, DROP, and SECURITY LABEL further. They
all effectively just take an object address for later processing, so
we can make the grammar more generalized. Some extra checking about
which object types are supported can be done later in the statement
execution.
Discussion: https://www.postgresql.org/message-id/flat/163c00a5-f634-ca52-fc7c-0e53deda8735%402ndquadrant.com
Using --outputdir with a custom output directory has never created the
sql/ and expected/ paths by default (their contents are generated from
input/ and output/ respectively) when they don't exist, while the base
output directory gets created if it does not exist. If sql/ and
expected/ are not present, pg_regress would fail with the path missing,
requiring test scripts to create those extra paths by themselves. This
commit changes pg_regress so as both get created by default if they do
not exist, removing the need for external test scripts to do so.
This cleans up two code paths in the tree for pg_upgrade tests in MSVC
and environments able to use test.sh. sql/ and expected/ were created
as part of each test script, but this is not needed anymore as
pg_regress handles the work now.
Author: Roman Zharkov, Daniel Gustafsson
Reviewed-by: Michael Paquier, Tom Lane
Discussion: https://postgr.es/m/16484-4d89e9cc11241996@postgresql.org
Core by no means makes excessive use of these functions, but quite a large
number of those usages do require the caller to call strlen() on the
returned string. This is quite wasteful since these functions do already
have a good idea of the length of the string, so we might as well just
have them return that.
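Shape of the change, as a hedged sketch (the exact signatures aren't
quoted here):

    /* Before: void pg_ltoa(int32 value, char *a);
     * After:  int  pg_ltoa(int32 value, char *a);  -- returns length */
    int     len = pg_ltoa(val, buf);

    appendBinaryStringInfo(&si, buf, len);  /* no strlen() call needed */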
Reviewed-by: Andrew Gierth
Discussion: https://postgr.es/m/CAApHDvrm2A5x2uHYxsqriO2cUaGcFvND%2BksC9e7Tjep0t2RK_A%40mail.gmail.com
In passing, also remove a few surplus empty lines from pg_ltoa and
pg_ulltoa_n in numutils.c
Reported-by: Andrew Gierth
Discussion: https://postgr.es/m/87y2ou3xuh.fsf@news-spur.riddles.org.uk
Backpatch-through: 13, where these changes were introduced
plpgsql has always executed the query given in a RETURN QUERY command
by opening it as a cursor and then fetching a few rows at a time,
which it turns around and dumps into the function's result tuplestore.
The point of this was to keep from blowing out memory with an oversized
SPITupleTable result (note that while a tuplestore can spill tuples
to disk, SPITupleTable cannot). However, it's rather inefficient, both
because of extra data copying and because of executor entry/exit
overhead. In recent versions, a new performance problem has emerged:
use of a cursor prevents use of a parallel plan for the executed query.
We can improve matters by skipping use of a cursor and having the
executor push result tuples directly into the function's result
tuplestore. However, a moderate amount of new infrastructure is needed
to make that idea work:
* We can use the existing tstoreReceiver.c DestReceiver code to funnel
executor output to the tuplestore, but it has to be extended to support
plpgsql's requirement for possibly applying a tuple conversion map.
* SPI needs to be extended to allow use of a caller-supplied
DestReceiver instead of its usual receiver that puts tuples into
a SPITupleTable. Two new API calls are needed to handle both the
RETURN QUERY and RETURN QUERY EXECUTE cases.
I also felt that I didn't want these new API calls to use the legacy
method of specifying query parameter values with "char" null flags
(the old ' '/'n' convention); rather they should accept ParamListInfo
objects containing the parameter type and value info. This required
a bit of additional new infrastructure since we didn't yet have any
parse analysis callback that would interpret $N parameter symbols
according to type data supplied in a ParamListInfo. There seems to be
no harm in letting makeParamList install that callback by default,
rather than leaving a new ParamListInfo's parserSetup hook as NULL.
(Indeed, as of HEAD, I couldn't find anyplace that was using the
parserSetup field at all; plpgsql was using parserSetupArg for its
own purposes, but parserSetup seemed to be write-only.)
We can actually get plpgsql out of the business of using legacy null
flags altogether, and using ParamListInfo instead of its ad-hoc
PreparedParamsData structure; but this requires inventing one more
SPI API call that can replace SPI_cursor_open_with_args. That seems
worth doing, though.
SPI_execute_with_args and SPI_cursor_open_with_args are now unused
anywhere in the core PG distribution. Perhaps someday we could
deprecate/remove them. But cleaning up the crufty bits of the SPI
API is a task for a different patch.
Per bug #16040 from Jeremy Smith. This is unfortunately too invasive to
consider back-patching. Patch by me; thanks to Hamid Akhtar for review.
Discussion: https://postgr.es/m/16040-eaacad11fecfb198@postgresql.org
Rewrite the documentation of these functions, in light of recent bug fix
commit 5940ffb2.
Back-patch to 13 where the check-for-conflict-out code was split up into
AM-specific and generic parts, and new documentation was added that now
looked wrong.
Reviewed-by: Peter Geoghegan <pg@bowt.ie>
Discussion: https://postgr.es/m/db7b729d-0226-d162-a126-8a8ab2dc4443%40jepsen.io
When merging two NumericAggStates, the code missed adding the new
state's NaNcount unless its N was also nonzero; since those counts
are independent, this is wrong.
This would only have visible effect if some partial aggregate scans
found only NaNs while earlier ones found only non-NaNs; then we could
end up falsely deciding that there were no NaNs and fail to return a
NaN final result as expected. That's pretty improbable, so it's no
surprise this hasn't been reported from the field. Still, it's a bug.
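The shape of the fix, as a hedged sketch against numeric.c's combine
logic (field names follow NumericAggState, but this is simplified, not
the verbatim diff):

    /* merge the non-NaN accumulators only if state2 saw non-NaN inputs */
    if (state2->N > 0)
    {
        state1->N += state2->N;
        /* ... combine sumX, sumX2, maxScale, etc. (elided) ... */
    }

    /* NaNcount is tracked independently of N, so it must be merged
     * unconditionally; previously this addition sat inside the N > 0
     * block, silently losing NaNs */
    state1->NaNcount += state2->NaNcount;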
I didn't try to produce a regression test that would show the bug,
but I did notice that these functions weren't being reached at all
in our regression tests, so I improved the tests to at least
exercise them. With these additions, I see pretty complete code
coverage on the aggregation-related functions in numeric.c.
Back-patch to 9.6 where this code was introduced. (I only added
the improved test case as far back as v10, though, since the
relevant part of aggregates.sql isn't there at all in 9.6.)
Eliminate enable_groupingsets_hash_disk, which was primarily useful
for testing grouping sets that use HashAgg and spill. Instead, hack
the table stats to convince the planner to choose hashed aggregation
for grouping sets that will spill to disk. Suggested by Melanie
Plageman.
Rename enable_hashagg_disk to hashagg_avoid_disk_plan, and invert the
meaning of on/off. The new name indicates more strongly that it only
affects the planner. Also, the word "avoid" is less definite, which
should avoid surprises when HashAgg still needs to use the
disk. Change suggested by Justin Pryzby, though I chose a different
GUC name.
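For illustration, the renamed GUC's guc.c entry would look roughly like
this (a sketch; the description string and group are assumptions):

    {
        {"hashagg_avoid_disk_plan", PGC_USERSET, QUERY_TUNING_METHOD,
         gettext_noop("Causes the planner to avoid hashed aggregation "
                      "plans expected to spill to disk."),
         NULL
        },
        &hashagg_avoid_disk_plan,
        false,          /* default: do not avoid such plans */
        NULL, NULL, NULL
    },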
Discussion: https://postgr.es/m/CAAKRu_aisiENMsPM2gC4oUY1hHG3yrCwY-fXUg22C6_MJUwQdA%40mail.gmail.com
Discussion: https://postgr.es/m/20200610021544.GA14879@telsasoft.com
Backpatch-through: 13
SSI's HeapCheckForSerializableConflictOut() test failed to correctly
handle conditions involving a concurrently inserted tuple which is later
concurrently updated by a separate transaction. A SELECT statement
that called HeapCheckForSerializableConflictOut() could end up using the
same XID (the updater's) for both the original tuple and the successor
tuple, missing the XID of the xact that created the original tuple
entirely. This only happened when neither tuple from the chain was
visible to the transaction's MVCC snapshot.
The observable symptoms of this bug were subtle. A pair of transactions
could commit, with the later transaction failing to observe the effects
of the earlier transaction (because of the confusion created by the
update to the non-visible row). This bug dates all the way back to
commit dafaa3ef, which added SSI.
To fix, make sure that we check the xmin of concurrently inserted tuples
that happen to also have been updated concurrently.
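A simplified sketch of the intended XID selection follows;
xid_for_conflict_check is a hypothetical helper, not the actual patched
function:

    static TransactionId
    xid_for_conflict_check(HeapTuple tuple, HTSV_Result htsv)
    {
        /* A tuple inserted by an in-progress xact must be tested
         * against its creator's xmin, even if a second xact has
         * already set xmax by concurrently updating it. */
        if (htsv == HEAPTUPLE_INSERT_IN_PROGRESS)
            return HeapTupleHeaderGetXmin(tuple->t_data);

        /* Only when the delete/update itself is in progress do we
         * want the updater's XID. */
        if (htsv == HEAPTUPLE_DELETE_IN_PROGRESS)
            return HeapTupleHeaderGetUpdateXid(tuple->t_data);

        return InvalidTransactionId;    /* other cases elided */
    }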
Author: Peter Geoghegan
Reported-By: Kyle Kingsbury
Reviewed-By: Thomas Munro
Discussion: https://postgr.es/m/db7b729d-0226-d162-a126-8a8ab2dc4443@jepsen.io
Backpatch: All supported versions
fe_archive.c was compiled only for the frontend in src/common/, but as
it will never share anything with the backend, it makes most sense to
move this file to src/fe_utils/.
Reported-by: Peter Eisentraut
Discussion: https://postgr.es/m/e9766d71-8655-ac86-bdf6-77e0e7169977@2ndquadrant.com
Backpatch-through: 13
access_method, database_name, and index_name are all just name, and
they are not used consistently for their alleged purpose, so remove
them. They have been around since ancient times but have no current
reason for existing. Removing them can simplify future grammar
refactoring.
Discussion: https://www.postgresql.org/message-id/flat/163c00a5-f634-ca52-fc7c-0e53deda8735%402ndquadrant.com
The previous description string still described the pre-PostgreSQL 10
(pre eb61136dc7) behavior of selecting between encrypted and
unencrypted, but the parameter now chooses between encryption
algorithms.
Commit cec2edfa78 introduced logical_decoding_work_mem to limit
ReorderBuffer memory usage. We spill the changes once the memory occupied
by changes exceeds logical_decoding_work_mem. There was an assumption
in the code that by evicting the largest (sub)transaction we will come
under the memory limit as the selected transaction will be at least as
large as the most recent change (which caused us to go over the memory
limit). However, that is not true because a user can reduce the
logical_decoding_work_mem to a smaller value before the most recent
change.
We fix this by continuing to evict transactions until memory usage
falls below the limit.
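The fix amounts to looping in ReorderBufferCheckMemoryLimit() until
usage is back under the limit (a close sketch of the described change):

    /* Evict (sub)transactions, largest first, until we are back under
     * logical_decoding_work_mem; one eviction is no longer assumed to
     * suffice, since the GUC may have been lowered in the meantime. */
    while (rb->size >= logical_decoding_work_mem * 1024L)
    {
        ReorderBufferTXN *txn = ReorderBufferLargestTXN(rb);

        /* spill this transaction's changes to disk, reducing rb->size */
        ReorderBufferSerializeTXN(rb, txn);
    }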
Reported-by: Fujii Masao
Author: Amit Kapila
Reviewed-by: Fujii Masao
Backpatch-through: 13, where it was introduced
Discussion: https://postgr.es/m/2b7ba291-22e0-a187-d167-9e5309a3458d@oss.nttdata.com
There are a number of Remove${Something}ById() functions that are
essentially identical in structure and only different in which catalog
they are working on. Refactor this to be one generic function. The
information about which OID column, index, etc. to use was already
available in ObjectProperty for most catalogs; in the few cases where
it was missing, it was easily added.
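A hedged sketch of the unified function's shape (names are illustrative;
a real implementation must also handle catalogs that lack a syscache by
scanning the OID index instead):

    static void
    DropGenericById(const ObjectAddress *object)
    {
        Relation    rel;
        HeapTuple   tup;
        int         cacheId = get_object_catcache_oid(object->classId);

        rel = table_open(object->classId, RowExclusiveLock);
        tup = SearchSysCache1(cacheId, ObjectIdGetDatum(object->objectId));
        if (!HeapTupleIsValid(tup))
            elog(ERROR, "cache lookup failed for object %u of catalog \"%s\"",
                 object->objectId, RelationGetRelationName(rel));
        CatalogTupleDelete(rel, &tup->t_self);
        ReleaseSysCache(tup);
        table_close(rel, RowExclusiveLock);
    }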
Reviewed-by: Pavel Stehule <pavel.stehule@gmail.com>
Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/331d9661-1743-857f-1cbb-d5728bcd62cb%402ndquadrant.com
Commit 0c882e52a tried to force table atest12 to have more-accurate-
than-default statistics; but transiently setting default_statistics_target
isn't enough for that, because autovacuum could come along and overwrite
the stats later. This evidently explains some intermittent buildfarm
failures we've seen since then. Repair by disabling autovac on this table.
Thanks to David Rowley for correctly diagnosing the cause.
Discussion: https://postgr.es/m/CA+hUKG+OUkQSOUTg=qo=S=fWa_tbm99i7rB7mfbHz1SYm4v-jQ@mail.gmail.com
Previously we used pg_atomic_write_u64_impl inside
pg_atomic_init_u64. That works correctly, but on platforms without
64-bit single-copy atomicity it could trigger spurious valgrind errors
about uninitialized memory, because we use compare-and-swap for atomic
writes on such platforms.
I previously suppressed one instance of this problem (6c878edc1d),
but as Tom reports that wasn't enough. As the atomic variable cannot
yet be concurrently accessible during initialization, it seems better
to have pg_atomic_init_64_impl set the value directly.
Change pg_atomic_init_u32_impl for symmetry.
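The resulting implementation is essentially a plain store (a sketch,
simplified from the generic atomics fallbacks):

    static inline void
    pg_atomic_init_u64_impl(volatile pg_atomic_uint64 *ptr, uint64 val_)
    {
        /* No concurrent access is possible yet, so a plain assignment
         * is safe and keeps valgrind from seeing a compare-and-swap
         * read of uninitialized memory. */
        ptr->value = val_;
    }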
Reported-By: Tom Lane
Author: Andres Freund
Discussion: https://postgr.es/m/1714601.1591503815@sss.pgh.pa.us
Backpatch: 9.5-
The redo routines for XLOG_CHECKPOINT_{ONLINE,SHUTDOWN} must acquire
ControlFileLock before modifying ControlFile->checkPointCopy, or the
checkpointer could write out a control file with a bad checksum.
Likewise, XLogReportParameters() must acquire ControlFileLock before
modifying ControlFile and calling UpdateControlFile().
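The required pattern, sketched (simplified; checkPoint stands for the
checkpoint record being replayed):

    LWLockAcquire(ControlFileLock, LW_EXCLUSIVE);
    ControlFile->checkPointCopy = checkPoint;   /* modify shared copy */
    UpdateControlFile();                        /* recompute CRC, write */
    LWLockRelease(ControlFileLock);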
Back-patch to all supported releases.
Author: Nathan Bossart <bossartn@amazon.com>
Author: Fujii Masao <masao.fujii@oss.nttdata.com>
Reviewed-by: Fujii Masao <masao.fujii@oss.nttdata.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Thomas Munro <thomas.munro@gmail.com>
Discussion: https://postgr.es/m/70BF24D6-DC51-443F-B55A-95735803842A%40amazon.com
Since database connections can be used with WAL senders in 9.4, it is
possible to use physical replication. This commit fixes a crash when
starting physical replication with a WAL sender using a database
connection, caused by the refactoring done in 850196b.
There have been discussions about forbidding the use of physical
replication in a database connection, but that is left for later;
this commit takes care only of the crash, which is new in 13.
While at it, add a test to check for a failure when attempting logical
replication if the WAL sender does not have a database connection. This
part is extracted from a larger patch by Kyotaro Horiguchi.
Reported-by: Vladimir Sitnikov
Author: Michael Paquier, Kyotaro Horiguchi
Reviewed-by: Kyotaro Horiguchi, Álvaro Herrera
Discussion: https://postgr.es/m/CAB=Je-GOWMj1PTPkeUhjqQp-4W3=nW-pXe2Hjax6rJFffB5_Aw@mail.gmail.com
Backpatch-through: 13
Commit 7be5d8df1f surfaced the logic
error, which had no functional implications, by adding "use warnings".
The buildfarm always customizes PROVE_FLAGS, so the warning did not
appear there. Back-patch to 9.5 (all supported versions).
Even when we've concluded that we have a hard write failure on the
socket, we should continue to try to read data. This gives us an
opportunity to collect any final error message that the backend might
have sent before closing the connection; moreover it is the job of
pqReadData not pqSendSome to close the socket once EOF is detected.
Due to an oversight in 1f39a1c06, pqSendSome failed to try to collect
data in the case where we'd already set write_failed. The problem was
masked for ordinary query operations (which really only make one write
attempt anyway), but COPY to the server would continue to send data
indefinitely after a mid-COPY connection loss.
Hence, add pqReadData calls into the paths where pqSendSome drops data
because of write_failed. If we've lost the connection, this will
eventually result in closing the socket and setting CONNECTION_BAD,
which will cause PQputline and siblings to report failure, allowing
the application to terminate the COPY sooner. (Basically this restores
what happened before 1f39a1c06.)
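A sketch of the added path in pqSendSome() (close to the described fix,
but simplified):

    if (conn->write_failed)
    {
        /* Drop the outgoing data; the write error is already recorded */
        conn->outCount = 0;

        /* Keep absorbing input so that pqReadData() can detect EOF,
         * close the socket, and set CONNECTION_BAD */
        if (conn->sock != PGINVALID_SOCKET && pqReadData(conn) < 0)
            return -1;
        return 0;
    }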
There are related issues that this does not solve; for example, if the
backend sends an error but doesn't drop the connection, we did and
still will keep pumping COPY data as long as the application sends it.
Fixing that will require application-visible behavior changes though,
and anyway it's an ancient behavior that we've had few complaints about.
For now I'm just trying to fix the regression from 1f39a1c06.
Per a complaint from Andres Freund. Back-patch into v12 where
1f39a1c06 came in.
Discussion: https://postgr.es/m/20200603201242.ofvm4jztpqytwfye@alap3.anarazel.de
As it stands, this flag is only set when we've successfully sent a
cancel request, not if we get SIGINT and then fail to send a cancel.
However, for almost all callers, that's the Wrong Thing: we'd prefer
to abort processing after control-C even if no cancel could be sent.
As an example, since commit 1d468b9ad "pgbench -i" fails to give up
sending COPY data even after control-C, if the postmaster has been
stopped, which is clearly not what the code intends and not what anyone
would want. (The fact that it keeps going at all is the fault of a
separate bug in libpq, but not letting CancelRequested become set is
clearly not what we want here.)
The sole exception, as far as I can find, is that scripts_parallel.c's
ParallelSlotsGetIdle tries to consume a query result after issuing a
cancel, which of course might not terminate quickly if no cancel
happened. But that behavior was poorly thought out too. No user of
ParallelSlotsGetIdle tries to continue processing after a cancel,
so there is really no point in trying to clear the connection's state.
Moreover this has the same defect as for other users of cancel.c,
that if the cancel request fails for some reason then we end up with
control-C being completely ignored. (On top of that, select_loop failed
to distinguish clearly between SIGINT and other reasons for select(2)
failing, which means that it's possible that the existing code would
think that a cancel has been sent when it hasn't.)
Hence, redefine CancelRequested as simply meaning that SIGINT was
received. We could add a second flag with the other meaning, but
in the absence of any compelling argument why such a flag is needed,
I think it would just offer an opportunity for future callers to
get it wrong. Also remove the consumeQueryResult call in
ParallelSlotsGetIdle's failure exit. In passing, simplify the
API of select_loop.
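Under the new definition, the SIGINT handler records the interrupt
before attempting the cancel (a simplified sketch of cancel.c's
handler, omitting its locking details):

    static void
    handle_sigint(SIGNAL_ARGS)
    {
        int     save_errno = errno;
        char    errbuf[256];

        /* Set the flag unconditionally: SIGINT was received, whether
         * or not we manage to send a cancel request. */
        CancelRequested = true;

        if (cancelConn != NULL)
            (void) PQcancel(cancelConn, errbuf, sizeof(errbuf));

        errno = save_errno;
    }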
It would now be possible to re-unify psql's cancel_pressed with
CancelRequested, partly undoing 5d43c3c54. But I'm not really
convinced that that's worth the trouble, so I left psql alone,
other than fixing a misleading comment.
This code is new in v13 (cf a4fd3aa71), so no need for back-patch.
Per investigation of a complaint from Andres Freund.
Discussion: https://postgr.es/m/20200603201242.ofvm4jztpqytwfye@alap3.anarazel.de
Commit 24d85952 made a change that indirectly caused a performance
regression by triggering a change in the way GCC optimizes memcpy() on
some platforms.
The behavior seemed to contradict a GCC document, so I filed a report:
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=95556
This patch implements a narrow workaround which eliminates the
regression I observed. The workaround is benign enough that it seems
unlikely to cause a different regression on another platform.
Discussion: https://postgr.es/m/99b2eab335c1592c925d8143979c8e9e81e1575f.camel@j-davis.com
ineq_histogram_selectivity() can be invoked in situations where the
ordering we care about is not that of the column's histogram. We could
be considering some other collation, or even more drastically, the
query operator might not agree at all with what was used to construct
the histogram. (We'll get here for anything using scalarineqsel-based
estimators, so that's quite likely to happen for extension operators.)
Up to now we just ignored this issue and assumed we were dealing with
an operator/collation whose sort order exactly matches the histogram,
possibly resulting in junk estimates if the binary search gets confused.
It's past time to improve that, since the use of nondefault collations
is increasing. What we can do is verify that the given operator and
collation match what's recorded in pg_statistic, and use the existing
code only if so. When they don't match, instead execute the operator
against each histogram entry, and take the fraction of successes as our
selectivity estimate. This gives an estimate that is probably good to
about 1/histogram_size, with no assumptions about ordering. (The quality
of the estimate is likely to degrade near the ends of the value range,
since the two orderings probably don't agree on what is an extremal value;
but this is surely going to be more reliable than what we did before.)
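The fallback path is essentially this (a sketch; sslot stands for the
histogram's AttStatsSlot):

    int     nmatch = 0;
    double  selec;

    /* Probe every histogram entry with the query's own operator and
     * collation; no ordering assumption is needed. */
    for (int i = 0; i < sslot->nvalues; i++)
    {
        if (DatumGetBool(FunctionCall2Coll(opproc, collation,
                                           sslot->values[i], constval)))
            nmatch++;
    }
    selec = (double) nmatch / (double) sslot->nvalues;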
At some point we might further improve matters by storing more than one
histogram calculated according to different orderings. But this code
would still be good fallback logic when no matches exist, so that is
not an argument for not doing this.
While here, also improve get_variable_range() to deal more honestly
with non-default collations.
This isn't back-patchable, because it requires adding another argument
to ineq_histogram_selectivity, and because it might have significant
impact on the estimation results for extension operators relying on
scalarineqsel --- mostly for the better, one hopes, but in any case
destabilizing plan choices in back branches is best avoided.
Per investigation of a report from James Lucas.
Discussion: https://postgr.es/m/CAAFmbbOvfi=wMM=3qRsPunBSLb8BFREno2oOzSBS=mzfLPKABw@mail.gmail.com
Commit 5e0928005 changed the planner so that, instead of blindly using
DEFAULT_COLLATION_OID when invoking operators for selectivity estimation,
it would use the collation of the column whose statistics we're
considering. This was recognized as still being not quite the right
thing, but it seemed like a good incremental improvement. However,
shortly thereafter we introduced nondeterministic collations, and that
creates cases where operators can fail if they're passed the wrong
collation. We don't want planning to fail in cases where the query itself
would work, so this means that we *must* use the query's collation when
invoking operators for estimation purposes.
The only real problem this creates is in ineq_histogram_selectivity, where
the binary search might produce a garbage answer if we perform comparisons
using a different collation than the column's histogram is ordered with.
However, when the query's collation is significantly different from the
column's default collation, the estimate we previously generated would be
pretty irrelevant anyway; so it's not clear that this will result in
noticeably worse estimates in practice. (A follow-on patch will improve
this situation in HEAD, but it seems too invasive for back-patch.)
The patch requires changing the signatures of mcv_selectivity and allied
functions, which are exported and very possibly are used by extensions.
In HEAD, I just did that, but an API/ABI break of this sort isn't
acceptable in stable branches. Therefore, in v12 the patch introduces
"mcv_selectivity_ext" and so on, with signatures matching HEAD, and makes
the old functions into wrappers that assume DEFAULT_COLLATION_OID should
be used. That does not match the prior behavior, but it should avoid risk
of failure in most cases. (In practice, I think most extension datatypes
aren't collation-aware, so the change probably doesn't matter to them.)
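In the back branches the compatibility wrappers look roughly like this
(assumed shape; the exact v12 signature may differ):

    double
    mcv_selectivity(VariableStatData *vardata, FmgrInfo *opproc,
                    Datum constval, bool varonleft, double *sumcommonp)
    {
        /* Old entry point: keep the existing signature and assume the
         * default collation, as described above. */
        return mcv_selectivity_ext(vardata, opproc, DEFAULT_COLLATION_OID,
                                   constval, varonleft, sumcommonp);
    }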
Per report from James Lucas. Back-patch to v12 where the problem was
introduced.
Discussion: https://postgr.es/m/CAAFmbbOvfi=wMM=3qRsPunBSLb8BFREno2oOzSBS=mzfLPKABw@mail.gmail.com
DES has been deprecated in OpenSSL 3.0.0 which makes loading keys
encrypted with DES fail with "fetch failed". Solve by changing the
cipher used to aes256 which has been supported since 1.0.1 (and is
more realistic to use anyways).
Note that the minimum supported OpenSSL version is 1.0.1 as of
7b283d0e1d, so this does not introduce
any new version requirements.
Author: Daniel Gustafsson <daniel@yesql.se>
Discussion: https://www.postgresql.org/message-id/flat/FEF81714-D479-4512-839B-C769D2605F8A%40yesql.se
If the flag value is lost, logical decoding would work the same way as
REPLICA IDENTITY NOTHING, meaning that old tuple values would no longer
be included in the changes produced by logical decoding.
Author: Michael Paquier
Reviewed-by: Euler Taveira
Discussion: https://postgr.es/m/20200603065340.GK89559@paquier.xyz
Backpatch-through: 12