max_slot_wal_keep_size, added in v13, and wal_keep_segments are the GUC
parameters that specify how much WAL to retain for the standby servers.
While max_slot_wal_keep_size accepts a size in bytes of WAL,
wal_keep_segments accepts a number of WAL files. This difference in units
between two otherwise similar parameters could be confusing to users.
To alleviate this situation, this commit renames wal_keep_segments to
wal_keep_size, and makes users specify the amount of WAL to keep as a size
instead of a number of WAL files.
The discussion also considered renaming max_slot_wal_keep_size to
max_slot_wal_keep_segments instead. But we have been moving away from
measuring in segments; for example, checkpoint_segments was replaced by
max_wal_size. So we concluded that renaming wal_keep_segments to
wal_keep_size was the right direction.
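As a minimal sketch of the change (the retention amount below is made up),
the parameter is now set as a size rather than a segment count:

    -- Formerly: wal_keep_segments = 64  (64 segments of 16MB, i.e. 1GB)
    -- Now the same retention is expressed directly as a size:
    ALTER SYSTEM SET wal_keep_size = '1GB';
    SELECT pg_reload_conf();  -- the parameter is reloadable without restart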
Back-patch to v13 where max_slot_wal_keep_size was added.
Author: Fujii Masao
Reviewed-by: Álvaro Herrera, Kyotaro Horiguchi, David Steele
Discussion: https://postgr.es/m/574b4ea3-e0f9-b175-ead2-ebea7faea855@oss.nttdata.com
The logical decoding infrastructure needs to know which top-level
transaction a subxact belongs to, in order to decode all the changes.
Until now that might be delayed until commit, due to the caching
(PGPROC_MAX_CACHED_SUBXIDS), preventing features that require
incremental decoding.
So we now also write the assignment info into WAL immediately, as part
of the next WAL record (to minimize overhead), but only when wal_level=logical.
We cannot remove the existing XLOG_XACT_ASSIGNMENT WAL record, as that is
required for avoiding overflow in the hot standby snapshot.
Bump XLOG_PAGE_MAGIC, since this introduces XLR_BLOCK_ID_TOPLEVEL_XID.
Author: Tomas Vondra, Dilip Kumar, Amit Kapila
Reviewed-by: Amit Kapila
Tested-by: Neha Sharma and Mahendra Singh Thalor
Discussion: https://postgr.es/m/688b0b7f-2f6c-d827-c27b-216a8e3ea700@2ndquadrant.com
There was no easy way to find out how many times generic and custom plans
have been executed for a prepared statement. This commit exposes those
counts in the pg_prepared_statements view.
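As a sketch of the new output (the prepared statement is made up; the
counter columns added are generic_plans and custom_plans):

    PREPARE q (int) AS SELECT count(*) FROM pg_class WHERE relpages > $1;
    EXECUTE q(10);
    SELECT name, generic_plans, custom_plans FROM pg_prepared_statements;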
Author: Atsushi Torikoshi, Kyotaro Horiguchi
Reviewed-by: Tatsuro Yamada, Masahiro Ikeda, Fujii Masao
Discussion: https://postgr.es/m/CACZ0uYHZ4M=NZpofH6JuPHeX=__5xcDELF8hT8_2T+R55w4RQw@mail.gmail.com
Valgrind builds with assertions enabled sometimes perform a
theoretically unsafe page access inside an assertion in
heapam_tuple_lock(). This happened when the eval-plan-qual isolation
test ran one of the permutations added by commit a2418f9e23.
Avoid complaints from Valgrind by moving the assertion ever so slightly.
This is minor cleanup for commit 1e0dfd16, which added Valgrind buffer
access instrumentation.
No backpatch, since this only happens within an assertion, and seems
very unlikely to cause any real problems even with assert-enabled
builds.
Make PinBuffer() mark buffers as defined to Valgrind unconditionally,
including when the buffer header spinlock must be acquired. Failure to
handle that case could lead to false positive reports from Valgrind.
This theoretically creates a risk that we'll mark buffers defined even
when external callers don't end up with a buffer pin. That seems
perfectly acceptable, though, since in general we make no guarantees
about buffers that are unsafe to access being reliably marked as unsafe.
Oversight in commit 1e0dfd16, which added Valgrind buffer access
instrumentation.
Due to the layout of this catalog, subslotname has to be explicitly
marked BKI_FORCE_NULL, else initdb will default to the assumption
that it's non-nullable. Since, in fact, CREATE/ALTER SUBSCRIPTION
will store null values there, the existing marking is just wrong,
and has been since this catalog was invented.
We haven't noticed because not much in the system actually depends
on attnotnull being truthful. However, JIT'ed tuple deconstruction
does depend on that in some cases, allowing crashes or wrong answers
in queries that inspect pg_subscription. Commit 9de77b545 quite
accidentally exposed this on the buildfarm members that force JIT
activation.
Back-patch to v13. The problem goes further back, but we cannot force
initdb in released branches, so some kludgier solution will be needed
there. Before working on that, push this simple fix to try to get the
buildfarm back to green.
Discussion: https://postgr.es/m/4118109.1595096139@sss.pgh.pa.us
This header hasn't changed recently, so the fact that it now fails
headerscheck/cpluspluscheck testing must be due to changes in what
it includes. Probably f21916791 is to blame, but I didn't try to
verify that.
Discussion: https://postgr.es/m/3699703.1595016554@sss.pgh.pa.us
This patch adds a "binary" option to CREATE/ALTER SUBSCRIPTION.
When that's set, the publisher will send data using the data type's
typsend function if any, rather than typoutput. This is generally
faster, if slightly less robust.
As committed, we won't try to transfer user-defined array or composite
types in binary, for fear that type OIDs won't match at the subscriber.
This might be changed later, but it seems like fit material for a
follow-on patch.
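A minimal sketch of the new option (connection string, subscription, and
publication names are hypothetical):

    CREATE SUBSCRIPTION mysub
        CONNECTION 'host=publisher dbname=postgres'
        PUBLICATION mypub
        WITH (binary = true);

    -- The option can also be toggled later:
    ALTER SUBSCRIPTION mysub SET (binary = false);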
Dave Cramer, reviewed by Daniel Gustafsson, Petr Jelinek, and others;
adjusted some by me
Discussion: https://postgr.es/m/CADK3HH+R3xMn=8t3Ct+uD+qJ1KD=Hbif5NFMJ+d5DkoCzp6Vgw@mail.gmail.com
The term "hash_mem" will take on new significance when pending work to
add a new hash_mem_multiplier GUC is committed. Rename a local variable
that happens to have been called hash_mem now to avoid confusion.
Teach Valgrind memcheck to maintain the "defined-ness" of each shared
buffer based on whether the backend holds at least one pin at the point
it is accessed by access method code. Bugs like the one fixed by commit
b0229f26 can be detected using this new instrumentation.
Note that backends running with Valgrind naturally have their own
independent ideas about whether any given byte in shared memory is safe
or unsafe to access. There is no risk that concurrent access by
multiple backends to the same shared memory will confuse Valgrind's
instrumentation, because everything already works at the process level
(or at the memory mapping level, if you prefer).
Author: Álvaro Herrera, Peter Geoghegan
Reviewed-By: Anastasia Lubennikova
Discussion: https://postgr.es/m/20150723195349.GW5596@postgresql.org
Discussion: https://postgr.es/m/CAH2-WzkLgyN3zBvRZ1pkNJThC=xi_0gpWRUb_45eexLH1+k2_Q@mail.gmail.com
pg_dump produces custom-format archive files that lack data offsets
when it is unable to seek its output. Up to now that's been a hazard
for pg_restore. But if pg_restore is able to seek in the archive
file, there is no reason to throw up our hands when asked to restore
data blocks out of order. Instead, whenever we are searching for a
data block, record the locations of the blocks we passed over (that
is, fill in the missing data-offset fields in our in-memory copy of
the TOC data). Then, when we hit a case that requires going
backwards, we can just seek back.
Also track the furthest point that we've searched to, and seek back
to there when beginning a search for a new data block. This avoids
possible O(N^2) time consumption, by ensuring that each data block
is examined at most twice. (On Unix systems, that's at most twice
per parallel-restore job; but since Windows uses threads here, the
threads can share block location knowledge, reducing the amount of
duplicated work.)
We can also improve the code a bit by using fseeko() to skip over
data blocks during the search.
This is all of some use even in simple restores, but it's really
significant for parallel pg_restore. In that case, we require
seekability of the input already, and we will very probably need
to do out-of-order restores.
Back-patch to v12, as this fixes a regression introduced by commit
548e50976. Before that, parallel restore avoided requesting
out-of-order restores, so it would work on a data-offset-less
archive. Now it will again.
Ideally this patch would include some test coverage, but there are
other open bugs that need to be fixed before we can extend our
coverage of parallel restore very much. Plan to revisit that later.
David Gilman and Tom Lane; reviewed by Justin Pryzby
Discussion: https://postgr.es/m/CALBH9DDuJ+scZc4MEvw5uO-=vRyR2=QF9+Yh=3hPEnKHWfS81A@mail.gmail.com
We do not really need to track the file position by hand. We were
already relying on ftello() whenever the archive file is seekable,
while if it's not seekable we don't need the file position info
anyway because we're not going to be able to re-write the TOC.
Moreover, that tracking was buggy since it failed to account for
the effects of fseeko(). Somewhat remarkably, that seems not to
have made for any live bugs up to now. We could fix the oversights,
but it seems better to just get rid of the whole error-prone mess.
In itself this is merely code cleanup. However, it's necessary
infrastructure for an upcoming bug-fix patch (because that code
*does* need valid file position after fseeko). The bug fix
needs to go back as far as v12; hence, back-patch that far.
Discussion: https://postgr.es/m/CALBH9DDuJ+scZc4MEvw5uO-=vRyR2=QF9+Yh=3hPEnKHWfS81A@mail.gmail.com
There is no advantage to attempting deduplication for a unique index
during CREATE INDEX, since there cannot possibly be any duplicates.
Doing so wastes cycles due to unnecessary copying. Make sure that we
avoid it consistently.
We already avoided unique index deduplication in the case where there
were some spool2 tuples to merge. That didn't account for the fact that
spool2 is removed early/unset in the common case where it has no tuples
that need to be merged (i.e. it failed to account for the "spool2 turns
out to be unnecessary" optimization in _bt_spools_heapscan()).
Oversight in commit 0d861bbb, which added nbtree deduplication.
Backpatch: 13-, where nbtree deduplication was introduced.
We had two occurrences of "Mitteleuropäische Zeit" in Europe.txt,
though the corresponding entries in Default were spelled
"Mitteleuropaeische Zeit". Standardize on the latter spelling to
avoid questions of which encoding to use.
While here, correct a couple of other trivial inconsistencies between
the Default file and the supposedly-matching entries in the *.txt
files, as exposed by some checking with comm(1). Also, add BDST to
the Europe.txt file; it previously was only listed in Default.
None of this has any direct functional effect.
Per complaint from Christoph Berg. As usual for timezone data patches,
apply to all branches.
Discussion: https://postgr.es/m/20200716100743.GE3534683@msg.df7cb.de
Commit 1e53fe0e70 unified the usage of the config-file reload flag by
using the same signal handler function for SIGHUP in many places in the
code. By mistake, it installed that handler for the wrong signal in the
apply launcher process.
Author: Bharath Rupireddy
Reviewed-by: Dilip Kumar
Backpatch-through: 13, where it was introduced
Discussion: https://postgr.es/m/CALj2ACVzHCRnS20bOiEHaLtP5PVBENZQn4khdsSJQgOv_GM-LA@mail.gmail.com
This representation saves 8 bytes per tuple compared to HeapTuple, and
avoids the need to allocate, copy and free on the receiving side.
Gather can emit the returned MinimalTuple directly, but GatherMerge now
needs to make an explicit copy because it buffers multiple tuples at a
time. That should be no worse than before.
Reviewed-by: Soumyadeep Chakraborty <soumyadeep2007@gmail.com>
Discussion: https://postgr.es/m/CA%2BhUKG%2B8T_ggoUTAE-U%3DA%2BOcPc4%3DB0nPPHcSfffuQhvXXjML6w%40mail.gmail.com
Windows has junction points which function as symbolic links for
directories. This patch introduces a new function TestLib::dir_symlink()
which creates a junction point on Windows and a standard Unix type
symbolic link elsewhere.
The function TestLib::perl2host is also modified, first to use cygpath
where it's available (e.g. msys2) and second to allow it to succeed if
the grandparent directory exists but the parent does not.
Given these changes the only symlink tests that need to be skipped on
Windows are those related to permissions or to use of readlink. The
relevant tests for pg_basebackup and pg_rewind are therefore adjusted
accordingly.
Andrew Dunstan, reviewed by Peter Eisentraut and Michael Paquier.
Discussion: https://postgr.es/m/c50a646c-d9bb-7c62-a4bf-8256ff6ff338@2ndquadrant.com
pg_test_fsync has always opened files in text mode on Windows, as this is
the default mode used if not enforced by _setmode().
This caused a failure when running pg_test_fsync on 12 and newer, because
O_DSYNC and text mode are not able to work together nicely. We fixed the
handling of O_DSYNC in 12~ for the tool by switching to the
concurrent-safe version of fopen() in src/port/ with 0ba06e0. And
40cfe86, by enforcing text mode for compatibility reasons if O_TEXT or
O_BINARY is not specified by the caller, broke pg_test_fsync. For all
versions, this change avoids any translation overhead, and pg_test_fsync
should be testing binary writes anyway, so it is a gain in all cases.
Note that O_DSYNC is still not handled correctly in ~11, causing
pg_test_fsync to show insanely high numbers for open_datasync() (using
this property it is easy to notice that binary mode is much faster).
Fixing that would require a backpatch of 0ba06e0 and 40cfe86, which could
potentially break existing applications, so it is left out.
There are no TAP tests for this tool yet, so I have checked all builds
manually using MSVC. We could invent a new option to run a single
transaction instead of using a duration of 1s to keep the tests as short
as possible, but this is left as future work.
Thanks to Bruce Momjian for the discussion.
Reported-by: Jeff Janes
Author: Michael Paquier
Discussion: https://postgr.es/m/16526-279ded30a230d275@postgresql.org
Backpatch-through: 9.5
When working with an online source cluster, pg_rewind gets a list of all
the files in the source data directory using a WITH RECURSIVE query,
returning a NULL result for a file's metadata if it gets removed between
the moment it is listed in a directory and the moment its metadata is
obtained with pg_stat_file() (say, a recycled WAL segment). The query
result was processed in such a way that, for each tuple, we checked only
whether the first tuple's metadata was NULL. This could have two
consequences, both resulting in a failure of the rewind:
- If the first tuple referred to a removed file, all files from the
source would be ignored.
- Any file actually missing would not be considered as such.
While on it, slightly rework the code so that no values are saved if we
know that a file is going to be skipped.
Issue introduced by b36805f, so backpatch down to 9.5.
Author: Justin Pryzby, Michael Paquier
Reviewed-by: Daniel Gustafsson, Masahiko Sawada
Discussion: https://postgr.es/m/20200713061010.GC23581@telsasoft.com
Backpatch-through: 9.5
When using the following functions, users could see various errors of the
type "cache lookup failed for OID XXX", raised with elog(), which is meant
to be used only for internal errors:
* pg_describe_object()
* pg_identify_object()
* pg_identify_object_as_address()
The set of APIs managing object addresses for all object types is made
smarter by gaining a new argument "missing_ok" that allows any caller to
control whether an error is raised for an undefined object. The SQL
functions listed above are changed to handle the case where an object is
missing.
Regression tests are added for all object types for the cases where these
are undefined. Before this commit, these cases failed with cache lookup
errors; now they basically return NULL (minus the name of the object type
requested).
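A sketch of the changed behavior (the OID below is made up and assumed to
match no existing function):

    -- Previously raised "cache lookup failed"; now returns NULL fields
    SELECT * FROM pg_identify_object('pg_proc'::regclass, 987654, 0);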
Author: Michael Paquier
Reviewed-by: Aleksander Alekseev, Dmitry Dolgov, Daniel Gustafsson,
Álvaro Herrera, Kyotaro Horiguchi
Discussion: https://postgr.es/m/CAB7nPqSZxrSmdHK-rny7z8mi=EAFXJ5J-0RbzDw6aus=wB5azQ@mail.gmail.com
reparameterize_path_by_child() failed to reparameterize BitmapAnd
and BitmapOr paths. This matters only if such a path is chosen as
the inside of a nestloop partition-wise join, where we have to pass
in parameters from the outside of the nestloop. If that did happen,
we generated a bad plan that would likely lead to crashes at execution.
This is not entirely reparameterize_path_by_child()'s fault though;
it's the victim of an ancient decision (my ancient decision, I think)
to not bother filling in param_info in BitmapAnd/Or path nodes. That
caused the function to believe that such nodes and their children
contain no parameter references and so need not be processed.
In hindsight that decision looks pretty penny-wise and pound-foolish:
while it saves a few cycles during path node setup, we do commonly
need the information later. In particular, by reversing the decision
and requiring valid param_info data in all nodes of a bitmap path
tree, we can get rid of indxpath.c's get_bitmap_tree_required_outer()
function, which computed the data on-demand. It's not unlikely that
that nets out as a savings of cycles in many scenarios. A couple
of other things in indxpath.c can be simplified as well.
While here, get rid of some cases in reparameterize_path_by_child()
that are visibly dead or useless, given that we only care about
reparameterizing paths that can be on the inside of a parameterized
nestloop. This case reminds one of the maxim that untested code
probably does not work, so I'm unwilling to leave unreachable code
in this function. (I did leave the T_Gather case in place even
though it's not reached in the regression tests. It's not very
clear to me when the planner might prefer to put Gather below
rather than above a nestloop, but at least in principle the case
might be interesting.)
Per bug #16536, originally from Arne Roland but with a test case
by Andrew Gierth. Back-patch to v11 where this code came in.
Discussion: https://postgr.es/m/16536-2213ee0b3aad41fd@postgresql.org
Three groups of issues needed to be addressed:
load_external_function() and related functions returned PGFunction,
even though not necessarily all callers are looking for a function of
type PGFunction. Since these functions are really just wrappers
around dlsym(), change to return void * just like dlsym().
In dynahash.c, we are using strlcpy() where a function with a
signature like memcpy() is expected. This should be safe, as the new
comment there explains, but the cast needs to be augmented to avoid
the warning.
In PL/Python, methods all need to be cast to PyCFunction, per Python
API, but this now runs afoul of these warnings. (This issue also
exists in core CPython.)
To fix the second and third cases, we add a new type pg_funcptr_t that
is defined specifically so that gcc accepts it as a special function
pointer that can be cast to any other function pointer without the
warning.
Also add -Wcast-function-type to the standard warning flags, subject
to configure check.
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://www.postgresql.org/message-id/flat/1e97628e-6447-b4fd-e230-d109cec2d584%402ndquadrant.com
An ALTER TABLE to validate a foreign key in which another subcommand
already caused a pending table rewrite could fail due to ALTER TABLE
attempting to validate the foreign key before the actual table rewrite
takes place. This situation could result in an error such as:
ERROR: could not read block 0 in file "base/nnnnn/nnnnn": read only 0 of 8192 bytes
The failure here was due to the SPI call which validates the foreign key
trying to access an index which is yet to be rebuilt.
Similarly, we also incorrectly tried to validate CHECK constraints before
the heap had been rewritten.
The fix for both is to delay constraint validation until phase 3, after
the table has been rewritten. For CHECK constraints this means a slight
behavioral change. Previously ALTER TABLE VALIDATE CONSTRAINT on
inheritance tables would be validated from the bottom up. This was
different from the order of evaluation when a new CHECK constraint was
added. The changes made here align the VALIDATE CONSTRAINT evaluation
order for inheritance tables with that of ADD CONSTRAINT, which is
generally top-down.
Reported-by: Nazli Ugur Koyluoglu, using SQLancer
Discussion: https://postgr.es/m/CAApHDvp%3DZXv8wiRyk_0rWr00skhGkt8vXDrHJYXRMft3TjkxCA%40mail.gmail.com
Backpatch-through: 9.5 (all supported versions)
Incorrect function names were referenced. As this fixes some portions of
tableam.h, which is mentioned in the docs as something to look at when
implementing a table AM, backpatch down to 12 where tableam.h was
introduced.
Author: Hironobu Suzuki
Discussion: https://postgr.es/m/8fe6d672-28dd-3f1d-7aed-ac2f6d599d3f@interdb.jp
Backpatch-through: 12
The qual pushdown logic assumed that all Vars in a restriction clause
must be Vars referencing subquery outputs; but since we introduced
LATERAL, it's possible for such a Var to be a lateral reference instead.
This led to an assertion failure in debug builds. In a non-debug
build, there might be no ill effects (if qual_is_pushdown_safe decided
the qual was unsafe anyway), or we could get failures later due to
construction of an invalid plan. I've not gone to much length to
characterize the possible failures, but at least segfaults in the
executor have been observed.
Given that this has been busted since 9.3 and it took this long for
anybody to notice, I judge that the case isn't worth going to great
lengths to optimize. Hence, fix by just teaching qual_is_pushdown_safe
that such quals are unsafe to push down, matching the previous behavior
when it accidentally didn't fail.
Per report from Tom Ellis. Back-patch to all supported branches.
Discussion: https://postgr.es/m/20200713175124.GQ8220@cloudinit-builder
Remove the previous hack in KeepLogSeg that added a case to deal with a
(badly represented) invalid segment number. This was added for the sake
of GetWALAvailability. But it's not needed if in that function we
initialize the segment number to be retreated to with the segment
currently being written, so do that instead.
Per valgrind-running buildfarm member skink, and some sparc64 animals.
Discussion: https://postgr.es/m/1724648.1594230917@sss.pgh.pa.us
GSS-related resources should be cleaned up in pqDropConnection,
not freePGconn, else the wrong things happen when resetting
a connection or trying to switch to a different server.
It's also critical to reset conn->gssenc there.
During connection setup, initialize conn->try_gss at the correct
place, else switching to a different server won't work right.
Remove now-redundant cleanup of GSS resources around one (and, for
some reason, only one) pqDropConnection call in connectDBStart.
Per report from Kyotaro Horiguchi that psql would freeze up,
rather than successfully resetting a GSS-encrypted connection
after a server restart.
This is YA oversight in commit b0b39f72b, so back-patch to v12.
Discussion: https://postgr.es/m/20200710.173803.435804731896516388.horikyota.ntt@gmail.com
* Strategy number and purpose are essential information for an operator
family's operators. So, show those columns in non-verbose output.
* The "Left/right arg type" \dAp column names are confusing, because those
types don't necessarily match the function arguments. Rename them to
"Registered left/right type".
* Replace manual assembly of operator/procedure names with casts to
regoperator/regprocedure.
* Add schema-qualification for pg_catalog functions and tables.
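For instance, the revised output can be inspected against a built-in
operator family (shown here only as an example):

    \dAo btree integer_ops    -- operators, now with strategy and purpose
    \dAp btree integer_ops    -- support functions, with the renamed columns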
Reported-by: Peter Eisentraut, Tom Lane
Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/2edc7b27-031f-b2b6-0db2-864241c91cb9%402ndquadrant.com
Backpatch-through: 13
The stats added by this commit were available only for WAL senders;
however, users might want to see them for backends doing logical decoding
via the SQL API as well. Also, users might want to reset and access these
stats across server restarts, which was not possible with the current
patch.
List of commits reverted:
caa3c4242c Don't call elog() while holding spinlock.
e641b2a995 Doc: Update the documentation for spilled transaction
statistics.
5883f5fe27 Fix unportable printf format introduced in commit 9290ad198.
9290ad198b Track statistics for spilling of changes from ReorderBuffer.
Additionally, remove the release notes entry for this feature.
Backpatch-through: 13, where it was introduced
Discussion: https://postgr.es/m/CA+fd4k5_pPAYRTDrO2PbtTOe0eHQpBvuqmCr8ic39uTNmR49Eg@mail.gmail.com
Switching the regression tests to use tstzrange() has proved to not be a
good idea for environments where the timestamp precision is low, as
internal range checks exclude the upper bound. So, if the commit
timestamp of a transaction matched now() from the next query, the test
would fail. This changes the test to use two bound checks instead of the
range function, with an inclusive upper bound.
Per buildfarm member jacana.
Discussion: https://postgr.es/m/20200712122507.GD21680@paquier.xyz
Replication origins created by regression tests should have names
starting with "regress_", and the test introduced in b1e48bb for commit
timestamps did not do that.
Per buildfarm member longfin.
Discussion: https://postgr.es/m/20200712122507.GD21680@paquier.xyz
This includes two changes, sketched below:
- Addition of a new function pg_xact_commit_timestamp_origin() able, for
a given transaction ID, to return the commit timestamp and replication
origin of that transaction. An equivalent function existed in
pglogical.
- Addition of the replication origin to pg_last_committed_xact().
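A usage sketch (requires track_commit_timestamp = on; the XID is made up):

    SELECT * FROM pg_xact_commit_timestamp_origin('546'::xid);
    SELECT * FROM pg_last_committed_xact();  -- now also shows the origin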
The commit timestamp manager already includes APIs able to return the
replication origin of a transaction on top of its commit timestamp, but
the code paths for replication origins were never stressed, as those
functions have never looked for a replication origin, and the SQL
functions available have never included this information since their
introduction in 73c986a.
While on it, refactor a test of modules/commit_ts/ to use tstzrange() to
check that a transaction timestamp is within the wanted range, making
the test a bit easier to read.
Bump catalog version.
Author: Movead Li
Reviewed-by: Madan Kumar, Michael Paquier
Discussion: https://postgr.es/m/2020051116430836450630@highgo.ca
The raw_buf and line_buf buffers aren't used when reading binary format,
so skip allocating them. raw_buf is 64K so that seems like a worthwhile
savings. An unused line_buf only wastes 1K, but as long as we're checking,
it's free to avoid allocating that too.
Bharath Rupireddy, tweaked a bit by me
Discussion: https://postgr.es/m/CALj2ACXcCKaGPY0whowqrJ4OPJvDnTssgpGCzvuFQu5z0CXb-g@mail.gmail.com
Parallel pg_restore has always supposed that ACL items for different
objects are independent and can be restored in parallel without
conflicts. However, there is one case where this fails: because
REVOKE on a table is defined to also revoke the privilege(s) at
column level, we can't restore per-column ACLs till after we restore
any table-level privileges on their table. Failure to honor this
restriction can lead to "tuple concurrently updated" errors during
parallel restore, or even to the per-column ACLs silently disappearing
because the table-level REVOKE is executed afterwards.
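A minimal illustration of the hazard (table and role names are
hypothetical):

    GRANT SELECT (id) ON t TO alice;   -- per-column ACL
    REVOKE SELECT ON t FROM alice;     -- table-level REVOKE also removes
                                       -- the column-level SELECT above
    -- If parallel restore issues the REVOKE after the column-level GRANT,
    -- the column ACL silently disappears.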
To fix, add a dependency from each column-level ACL item to its table's
ACL item, if there is one. Note that this doesn't fix the hazard
for pre-existing archive files, only for ones made with a corrected
pg_dump. Given that the bug's been there quite a while without
field reports, I think this is acceptable.
This requires changing the API of pg_dump's dumpACL() function.
To keep its argument list from getting even longer, I removed the
"CatalogId objCatId" argument, which has been unused for ages.
Per report from Justin Pryzby. Back-patch to all supported branches.
Discussion: https://postgr.es/m/20200706050129.GW4107@telsasoft.com
"relkind" normally refers to the char field from pg_class. However, in
the parse nodes AlterTableStmt and CreateTableAsStmt, "relkind" was used
for a field of type enum ObjectType, that could refer to other object
types than those possible for a relkind. Such fields being usually
named "objtype", switch the name in both structures to make things more
consistent. Note that this led to some confusion in functions that
also operate on a RangeTableEntry object, which also has a field named
"relkind".
This naming goes back to commit 09d4e96, where only OBJECT_TABLE and
OBJECT_INDEX were used. This got extended later to use as well
OBJECT_TYPE with e440e12, not really a relation kind.
Author: Mark Dilger
Reviewed-by: Daniel Gustafsson, Álvaro Herrera, Michael Paquier
Discussion: https://postgr.es/m/609181AE-E399-47C7-9221-856E0F96BF93@enterprisedb.com
The SQL standard doesn't define numeric Inf or NaN values. It appears
even more ridiculous to support them in jsonpath, given that JSON doesn't
support these values either. This commit forbids returning NaN from
.double(), which was previously allowed. NaN can't be the result of an
inner jsonpath computation over non-NaNs, so we cannot expect NaN in the
jsonpath output.
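A sketch of the changed behavior:

    -- Previously returned NaN; now raises an error instead
    SELECT jsonb_path_query('"NaN"', '$.double()');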
Reported-by: Tom Lane
Discussion: https://postgr.es/m/203949.1591879542%40sss.pgh.pa.us
Author: Alexander Korotkov
Reviewed-by: Tom Lane
Backpatch-through: 12
When the jsonpath .double() method detects that a numeric or string value
can't be converted to double precision, it throws an error. This commit
makes these errors explicitly state the reason for the failure.
Discussion: https://postgr.es/m/CAPpHfdtqJtiSXkP7tOXez18NxhLUH_-75bL8%3DOce4Ki%2Bbv7V6Q%40mail.gmail.com
Author: Alexander Korotkov
Reviewed-by: Tom Lane
Backpatch-through: 12
This reverts commit 2b2a070, which moved the reset of the path
"testtablespace" used by the regression tests as a location for
tablespaces (via --outputdir) from pg_regress to the MSVC script
vcregress.pl, as this broke the behavior added by ce5d342 to be able to
safely run the regression test suite with an administrative Windows
account using a restricted token.
Note that before 2b2a070, the code doing the reset in pg_regress.c
included a comment telling that we had better move that out to a
different place, which led to the mistake made in 2b2a070. Fix this
comment, and document instead that we had better never remove this code,
for the sake of not breaking again the behavior we expect on Windows.
Thanks to Thomas Munro and Andrew Dunstan for the discussion.
Discussion: https://postgr.es/m/6d9eee97-54c8-e14a-48f7-3194e712f54f@2ndQuadrant.com
Discussion: https://postgr.es/m/CA+hUKGLiieEzfrdWxWFE+_wnXho_F5Smx972X1wEubhS7v1q9g@mail.gmail.com
This message was being emitted on the grounds that only crashed
summarization could cause it, but in reality even an aborted vacuum
could do it ... which makes it way too noisy, particularly since it
shows up in regression tests and makes them die.
Reported by Tom Lane.
Discussion: https://postgr.es/m/489091.1593534251@sss.pgh.pa.us
Due to not having our signals straight about CRLF vs. LF line
termination, the output of pg_current_logfile() included a trailing
\r on Windows. To fix, force the file descriptor it uses into text
mode.
While here, move a couple of local variable declarations to make
the function's logic clearer.
In v12 and v13, also back-patch the test added by 1c4e88e2f so that
this function has some test coverage. However, the 004_logrotate.pl
test script doesn't exist before v12, and it didn't seem worth adding
to older branches just for this.
Per report from Thomas Kellerer. Back-patch to v10 where this
function was added.
Discussion: https://postgr.es/m/412ae8da-76bb-640f-039a-f3513499e53d@gmx.net
Buildfarm results now imply that Perl's IPC::Run does CRLF conversion
for us if we're using native Perl, but not when using MSys Perl.
Restrict the conversions done by PostgresNode.pm to act only in the
latter case. (Similar conversions done in TestLib.pm and RewindTest.pm
were already handled this way.)
Discussion: https://postgr.es/m/412ae8da-76bb-640f-039a-f3513499e53d@gmx.net
The previous approach was to search-and-destroy all \r occurrences
no matter what. That seems more likely to hide bugs than anything
else; indeed it seems to be hiding one now. Fix things so that
we only transform \r\n to \n.
Side effects: must do this before, not after, chomp'ing if we're
going to chomp, else we'd fail to clean up a trailing \r\n. Also,
remove safe_psql's redundant repetition of what psql already did;
else it might reduce \r\r\n to \n, which is exactly the scenario
I'm hoping to expose.
Perhaps this should be back-patched, but for now I'm content to
see what happens in HEAD.
Discussion: https://postgr.es/m/412ae8da-76bb-640f-039a-f3513499e53d@gmx.net
The Sort node does not put a space between the number of kilobytes and
the "kB" of memory or disk space used, but HashAgg does. Here we align
HashAgg to do the same as Sort. Sort has been displaying this
information for longer than HashAgg, so it makes sense to align HashAgg
to Sort rather than the other way around.
Reported-by: Justin Pryzby
Discussion: https://postgr.es/m/20200708163021.GW4107@telsasoft.com
Backpatch-through: 13, where the hashagg started showing these details
Since slot_keep_segs indicates a number of WAL segments, not an LSN, its
datatype should not be XLogRecPtr.
Back-patch to v13 where this issue was introduced.
Reported-by: Atsushi Torikoshi
Author: Atsushi Torikoshi, tweaked by Fujii Masao
Discussion: https://postgr.es/m/ebd0d674f3e050222238a960cac5251a@oss.nttdata.com
I neglected to test this scenario while preparing commit f3faf35f3,
so of course it was broken, thanks to some very obscure and undocumented
code in pg_dump. Pre-v12 databases might have toast tables attached to
partitioned tables, which we need to ignore since newer servers never
create such useless toast tables. There was a filter for this case in
binary_upgrade_set_type_oids_by_rel_oid(), which appeared to just
prevent the pg_type OID from being copied. But actually it managed to
prevent the toast table from being created at all --- or it did before
I took out that logic. But that was a fundamentally bizarre place to be
making the test in the first place. The place where the filter should
have been, one would think, is binary_upgrade_set_pg_class_oids(), so
add it there.
While at it, reorganize binary_upgrade_set_pg_class_oids() so that it
doesn't make a completely useless query when it knows it's being
invoked for an index. And correct a comment that mis-described the
scenario where we need to force creation of a TOAST table.
Per buildfarm.
Commit f7f70d5e2 left one inconsistency behind: we're still creating
pg_type entries for the composite types of sequences and toast tables,
but not arrays over those composites. But there seems precious little
reason to have named composite types for toast tables, and not much more
to have them for sequences (especially given the thought that sequences
may someday not be standalone relations at all).
So, let's close that inconsistency by removing these composite types,
rather than adding arrays for them. This buys back a little bit of
the initial pg_type bloat added by the previous patch, and could be
a significant savings in a large database with many toast tables.
Aside from a small logic rearrangement in heap_create_with_catalog,
this patch mostly needs to clean up some places that were assuming that
pg_class.reltype always has a valid value. Those are really pre-existing
bugs, given that it's documented otherwise; notably, the plpgsql changes
fix code that gives "cache lookup failed for type 0" on indexes today.
But none of these seem interesting enough to back-patch.
Also, remove the pg_dump/pg_upgrade infrastructure for propagating
a toast table's pg_type OID into the new database, since we no longer
need that.
Discussion: https://postgr.es/m/761F1389-C6A8-4C15-80CE-950C961F5341@gmail.com
The previous definition of the column was almost universally disliked,
so provide this updated definition which is more useful for monitoring
purposes: a large positive value is good, while zero or a negative value
means danger. This should be operationally more convenient.
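Assuming the column in question is pg_replication_slots.safe_wal_size, a
monitoring query becomes a simple sign check:

    -- Zero or negative means the slot is in danger of losing required WAL
    SELECT slot_name, safe_wal_size
    FROM pg_replication_slots
    WHERE safe_wal_size <= 0;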
Backpatch to 13, where the new column in pg_replication_slots (and the
feature it represents) was added.
Author: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
Reported-by: Fujii Masao <masao.fujii@oss.nttdata.com>
Discussion: https://postgr.es/m/9ddfbf8c-2f67-904d-44ed-cf8bc5916228@oss.nttdata.com
Previously we checked that the ssl pointer was not null, but this puts a
requirement on there being such a pointer, which may not hold once
multiple SSL libraries are supported. This seems to have been an
oversight in 9029f4b374, but hasn't really had any effect since we only
have one library.
Author: Daniel Gustafsson
nbtree index builds cannot write out an empty page. That would mean
that there was no way to create a pivot tuple pointing to the page one
level up, since _bt_truncate() generates one based on the page's
firstright tuple.
Replace the unnecessary PageIsEmpty() check with an assertion that
checks that the page has space for at least two line pointers (the
would-be high key line pointer, plus at least one valid "data item"
tuple line pointer).
The PageIsEmpty() check was added by commit 5d9f146c over 20 years ago.
It looks like it has always been unnecessary.
When we invented arrays of composite types (commit bc8036fc6),
we excluded system catalogs, basically just on the grounds of not
wanting to bloat pg_type. However, it's definitely inconsistent that
catalogs' composite types can't be put into arrays when others can.
Another problem is that the exclusion is done by checking
IsUnderPostmaster in heap_create_with_catalog, which means that
(1) If a user tries to create a table in single-user mode, it doesn't
get an array type. That's bad in itself, plus it breaks pg_upgrade.
(2) If someone drops and recreates a system view or information_schema
view (as we occasionally recommend doing), it will now have an array
type where it did not before, making for still more inconsistency.
So this is all pretty messy. Let's just get rid of the inconsistency
and decree that system-created relations should have array types if
similar user-created ones would, i.e. it only depends on the relkind.
As of HEAD, that means that the initial contents of pg_type grow from
411 rows to 605, which is a lot of growth percentage-wise, but it's
still quite a small catalog compared to others.
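A quick sketch of what now works (previously this failed for catalogs):

    -- Build a one-element array of pg_class's composite type
    SELECT ARRAY[c] FROM pg_class c WHERE relname = 'pg_class';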
Wenjing Zeng, reviewed by Shawn Wang, further hacking by me
Discussion: https://postgr.es/m/761F1389-C6A8-4C15-80CE-950C961F5341@gmail.com
This introduces a new set of extended routines for procedure and
operator name lookups, with a flag bitmask argument that can modify the
result. The following options are available:
- Force schema qualification, ignoring search_path. This is similar to
the existing option for format_{operator|procedure}_qualified().
- Force NULL as result instead of a numeric OID for an undefined
object. This option is new.
This is a refactoring similar to 1185c78, which will be used by a future
patch to improve the SQL functions providing information using object
addresses for undefined objects.
Author: Michael Paquier
Reviewed-by: Aleksander Alekseev, Dmitry Dolgov, Daniel Gustafsson,
Álvaro Herrera
Discussion: https://postgr.es/m/CAB7nPqSZxrSmdHK-rny7z8mi=EAFXJ5J-0RbzDw6aus=wB5azQ@mail.gmail.com
If the type scanned is undefined, the type format routines have two
behaviors, depending on whether FORMAT_TYPE_ALLOW_INVALID is used by the
caller:
- Issue a cache lookup error
- Return an undefined type name: "???", "???[]" or "-"
The current interface is not really helpful for callers wishing to format
a type name properly while still making sure that the type is defined, as
there could be types matching the strings generated for an undefined
type, even if that should not be a problem in practice. To counter that,
add a new flag called FORMAT_TYPE_INVALID_AS_NULL that returns a NULL
result instead of "???" or "-", without generating an error. This flag
will be used in a follow-up patch improving the set of SQL functions
showing information for object addresses when it comes to undefined
objects.
Author: Michael Paquier
Reviewed-by: Aleksander Alekseev, Dmitry Dolgov, Daniel Gustafsson,
Álvaro Herrera
Discussion: https://postgr.es/m/CAB7nPqSZxrSmdHK-rny7z8mi=EAFXJ5J-0RbzDw6aus=wB5azQ@mail.gmail.com
This commit includes two improvements for build.pl in src/tools/msvc/:
- Fix two warnings related to $ARGV[0] being uninitialized, something
that happens when calling build.pl (or build.bat) without arguments to
compile all the components in release quality.
- If the script is called with more than two arguments, exit immediately
and show a new help output. build.pl was not failing in this case before
this commit, and the extra arguments were just ignored, but the new
behavior helps in understanding how to run the script without looking at
the documentation for the Windows builds.
Reported-by: Ranier Vilela
Author: Michael Paquier
Reviewed-by: Juan José Santamaría Flecha, David Zhang
Discussion: https://postgr.es/m/CAEudQAo38dfR_0Ugt2OHy4mq-6Hz93XPSBAGEUV67RhKdgp8Zg@mail.gmail.com
In the common case where this function isn't actually asked to perform
any type conversion, there's nothing it has to do beyond comparing the
arguments. Arrange for that part to be inlined into callers, with the
slower path remaining out-of-line. This seems to be good for several
percent speedup on simple cases, with only minimal code bloat.
Amit Khandekar
Discussion: https://postgr.es/m/CAJ3gD9eBNrmUD7WBBLG8ohaZ485H9y+4eihQTgr+K8Lhka3vcQ@mail.gmail.com
The cfbot and some BF animals are complaining about the previous
read_binary_file commit because of "ignoring return value of 'fread'"
warnings. So let's make everyone happy by testing the return value even
though it is not strictly needed.
Reported by Justin Pryzby, and suggested patch by Tom Lane. Backpatched
to v11 same as the previous commit.
Reported-By: Justin Pryzby
Reviewed-By: Tom Lane
Discussion: https://postgr.es/m/flat/969b8d82-5bb2-5fa8-4eb1-f0e685c5d736%40joeconway.com
Backpatch-through: 11
read_binary_file(), used by SQL functions pg_read_file() and friends,
uses stat to determine the file length to read when not passed an
explicit length as an argument. This is problematic, for example, if the
file being read is a virtual file with a stat-reported length of zero.
Arrange to read until EOF, or until the StringInfo data string length
limit is reached, instead.
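A sketch of the fixed case, using a Linux procfs file whose stat-reported
size is zero (absolute paths require superuser):

    -- The read now continues to EOF instead of stopping at 0 bytes
    SELECT length(pg_read_file('/proc/self/status'));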
Original complaint and patch by me, with significant review, corrections,
advice, and code optimizations by Tom Lane. Backpatched to v11. Prior to
that only paths relative to the data and log dirs were allowed for files,
so no "zero length" files were reachable anyway.
Reviewed-By: Tom Lane
Discussion: https://postgr.es/m/flat/969b8d82-5bb2-5fa8-4eb1-f0e685c5d736%40joeconway.com
Backpatch-through: 11
After running GetForeignRelSize for a foreign table, adjust rel->tuples
to be at least as large as rel->rows. This prevents bizarre behavior in
estimate_num_groups() and perhaps other places, especially in the
scenario where rel->tuples is zero because pg_class.reltuples is zero
(suggesting that ANALYZE has never been run for the table). As things
stood, we'd end up estimating one group out of any GROUP BY on such a
table, whereas the default group-count estimate is more likely to result
in a sane plan.
Also, clarify in the documentation that GetForeignRelSize has the option
to override the rel->tuples value if it has a better idea of what to use
than what is in pg_class.reltuples.
Per report from Jeff Janes. Back-patch to all supported branches.
Patch by me; thanks to Etsuro Fujita for review
Discussion: https://postgr.es/m/CAMkU=1xNo9cnan+Npxgz0eK7394xmjmKg-QEm8wYG9P5-CcaqQ@mail.gmail.com
Commit ecd9e9f0b fixed the problem in the wrong place, causing unwanted
side-effects on the behavior of GetNextTempTableSpace(). Instead,
let's make SharedFileSetInit() responsible for subbing in the value
of MyDatabaseTableSpace when the default tablespace is called for.
The convention about what is in the tempTableSpaces[] array is
evidently insufficiently documented, so try to improve that.
It also looks like SharedFileSetInit() is doing the wrong thing in the
case where temp_tablespaces is empty. It was hard-wiring use of the
pg_default tablespace, but it seems like using MyDatabaseTableSpace
is more consistent with what happens for other temp files.
Back-patch the reversion of PrepareTempTablespaces()'s behavior to
9.5, as ecd9e9f0b was. The changes in SharedFileSetInit() go back
to v11 where that was introduced. (Note there is net zero code change
before v11 from these two patch sets, so nothing to release-note.)
Magnus Hagander and Tom Lane
Discussion: https://postgr.es/m/CABUevExg5YEsOvqMxrjoNvb3ApVyH+9jggWGKwTDFyFCVWczGQ@mail.gmail.com
This saves one level of C function call per plpgsql statement executed,
and permits a tiny additional optimization of not saving and restoring
estate->err_stmt for each statement in a block. The net effect seems
nearly un-measurable on x86_64, but there's a clear win on aarch64,
amounting to two or three percent in a loop over a few simple plpgsql
statements.
To do this, we have to get rid of the other existing call sites for
exec_stmt(). Replace them with exec_toplevel_block(), which is just
defined to do what exec_stmts() does, but for a single
PLpgSQL_stmt_block statement. Hard-wiring the expectation of which
statement type applies here allows us to skip the dispatch switch,
making this not much uglier than the previous factorization.
Amit Khandekar, tweaked a bit by me
Discussion: https://postgr.es/m/CAJ3gD9eBNrmUD7WBBLG8ohaZ485H9y+4eihQTgr+K8Lhka3vcQ@mail.gmail.com
A likely copy/paste error in 98e8b48053 from back in 2004 would
cause temp tablespace to be reset to InvalidOid if temp_tablespaces
was set to the same value as the primary tablespace in the database.
This would cause shared filesets (such as for parallel hash joins)
to ignore them, putting the temporary files in the default tablespace
instead of the configured one. The bug is in the old code, but it
appears to have been exposed only once we had shared filesets.
Reviewed-By: Daniel Gustafsson
Discussion: https://postgr.es/m/CABUevExg5YEsOvqMxrjoNvb3ApVyH+9jggWGKwTDFyFCVWczGQ@mail.gmail.com
Backpatch-through: 9.5
Do the same for the maintenance_work_mem global variable.
Oversight in commit 848ae330a4, which increased the previous defaults
for work_mem and maintenance_work_mem by 4X.
Make some of the variable names in _bt_search() consistent with
corresponding variables within _bt_getstackbuf(). This naming scheme is
clearer because the variable names always express a relationship between
the currently locked buffer/page and some other page.
src/backend/replication/README has included since 32bc08b a basic
description of the WAL receiver hooks available in walreceiver.h for a
module like libpqwalreceiver, but the README has never been updated to
reflect changes done to the hooks, so its contents have rotted over time.
This commit moves the description from the README to walreceiver.h, where
it will be hard to miss that a description update or addition is needed
depending on the modifications done to the hooks.
Each hook now includes a description of what it does in walreceiver.h,
and the replication's README mentions walreceiver.h.
Thanks also to Amit Kapila for the discussion.
Author: Michael Paquier
Reviewed-by: Peter Eisentraut
Discussion: https://postgr.es/m/20200502024606.GA471944@paquier.xyz
This is a follow-up commit similar to 68de144, simplifying more places in
the backend code with the macros able to assign values to the fields of
ObjectAddress. The code paths changed here could later be transitioned
into using more grouping when inserting dependency records, simplifying
that future work.
Author: Daniel Gustafsson, Michael Paquier
Discussion: https://postgr.es/m/20190213182737.mxn6hkdxwrzgxk35@alap3.anarazel.de
Use separate functions to save and restore error context information, as
that makes the code easier to understand. Also, make it clear that the
index information required for the error context is sane.
Author: Andres Freund, Justin Pryzby, Amit Kapila
Backpatch-through: 13, where it was introduced
Discussion: https://postgr.es/m/CAA4eK1LWo+v1OWu=Sky27GTGSCuOmr7iaURNbc5xz6jO+SaPeA@mail.gmail.com
When creating an extension, the same type of dependency is used when
registering a dependency on the schema and on required extensions. This
improves the code so that those dependencies are not recorded one by one,
but grouped together. Note that this has the side effect of removing
duplicate dependency entries, even if that should not happen in practice,
as extensions listed as required in a control file should be listed only
once.
Extracted from a larger patch by the same author.
Author: Daniel Gustafsson
Discussion: https://postgr.es/m/20200629065535.GA183079@paquier.xyz
001_ssltests.pl and 002_scram.pl both generated an extra file for a
client key used in the tests that was not removed. In Debian, this causes
repeated builds to fail.
The code refactoring done in 4dc6355 broke the cleanup done in
001_ssltests.pl, and the new tests added in 002_scram.pl via d6e612f
forgot the removal of one file. While on it, fix a second issue
introduced in 002_scram.pl where we use the same file name in 001 and
002 for the temporary client key whose permissions are changed in the
test, as using the same file name in both tests could cause failures
with parallel jobs of src/test/ssl/ if one test removes a file still
needed by the second test.
Reported-by: Felix Lechner
Author: Daniel Gustafsson, Felix Lechner
Reviewed-by: Tom Lane, Michael Paquier
Discussion: https://postgr.es/m/CAFHYt543sjX=Cm_aEeoejStyP47C+Y3+Wh6WbirLXsgUMaw7iw@mail.gmail.com
Backpatch-through: 13
The "Disk Usage" and "HashAgg Batches" properties in the EXPLAIN ANALYZE
output for HashAgg were previously only shown if the number of batches
was greater than 0. Here we change this so that these properties are
always shown for EXPLAIN ANALYZE formats other than "text". The idea here
is that since the HashAgg could have spilled to disk if there had been
more data or groups to aggregate, then it's relevant that we're clear in
the EXPLAIN ANALYZE output when no spilling occurred in this particular
execution of the given plan.
For the "text" EXPLAIN format, we still hide these properties when no
spilling occurs. This EXPLAIN format is designed to be easy for humans
to read. To maintain the readability we have a higher threshold for which
properties we display for this format.
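For instance, with a non-text format the properties now appear even when
nothing spilled (the query is only for illustration):

    EXPLAIN (ANALYZE, FORMAT JSON)
    SELECT g % 10, count(*) FROM generate_series(1, 1000) g GROUP BY 1;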
Discussion: https://postgr.es/m/CAApHDvo_dmNozQQTmN-2jGp1vT%3Ddxx7Q0vd%2BMvD1cGpv2HU%3DSg%40mail.gmail.com
Backpatch-through: 13, where the hashagg spilling code was added.
Any frontend-only file in src/common/ should include a guard to prevent
such code from being included in backend compilation. fe_memutils.c and
restricted_token.c have been doing that, while file_utils.c (since
bf5bb2e) and logging.c (since fc9a62a) forgot it.
Reviewed-by: Daniel Gustafsson
Discussion: https://postgr.es/m/20200625080757.GI130132@paquier.xyz
The IANA tzcode library has a feature to read a time zone file named
"posixrules" and apply the daylight-savings transition dates and times
therein, when it is given a POSIX-style time zone specification that
lacks an explicit transition rule. However, there's a problem with
that code: it doesn't work for dates past the Y2038 time_t rollover.
(Effectively, all times beyond that point are treated as standard
time.) The IANA crew regard this feature as legacy, so their plan is
to remove it, not fix it. The time frame in which that will happen
is unclear, but presumably it'll happen well before 2038.
Moreover, effective with the next IANA data update (probably this
fall), the recommended default will be to not install a "posixrules"
file in the first place. The time frame in which tzdata packagers
might adopt that suggestion is likewise unclear, but at least some
platforms will probably do it in the next year or so. While we could
ignore that recommendation so far as PG-supplied tzdata trees are
concerned, builds using --with-system-tzdata will be subject to
whatever the platform's tzdata packager decides to do.
Thus, whether or not we do anything, some increasing fraction of
Postgres users will be exposed to the behavior observed when there
is no "posixrules" file; and if we do nothing, we'll have essentially
no control over the timing of that change.
The best thing to do to ameliorate the uncertainty seems to be to
proactively remove the posixrules-reading feature. If we do that in
a scheduled release then at least we can release-note the behavioral
change, rather than having users be surprised by it after a routine
tzdata update.
The change in question is fairly minor anyway: to be affected,
you have to be using a POSIX-style timezone spec, it has to not
have an explicit rule, and it has to not be one of the four traditional
continental-USA zone names (EST5EDT, CST6CDT, MST7MDT, or PST8PDT),
as those are special-cased. Since the default "posixrules" file
provides USA DST rules, the number of people who are likely to find
such a zone spec useful is probably quite small. Moreover, the
fallback behavior with no explicit rule and no "posixrules" file is to
apply current USA rules, so the only thing that really breaks is the
DST transitions in years before 2007 (and you get the countervailing
fix that transitions after 2038 will be applied).
Now, some installations might have replaced the "posixrules" file,
allowing e.g. EU rules to be applied to a POSIX-style timezone spec.
That won't work anymore. But it's not exactly clear why this solution
would be preferable to using a regular named zone. In any case, given
the Y2038 issue, we need to be pushing users to stop depending on this.
Back-patch into v13; it hasn't been released yet, so it seems OK to
change its behavior. (Personally I think we ought to back-patch
further, but I've been outvoted.)
Discussion: https://postgr.es/m/1390.1562258309@sss.pgh.pa.us
Discussion: https://postgr.es/m/20200621211855.6211-1-eggert@cs.ucla.edu
Commit 54cd4f045 added some kluges to work around an old glibc bug,
namely that %.*s could misbehave if glibc thought any characters in
the supplied string were incorrectly encoded. Now that we use our
own snprintf.c implementation, we need not worry about that bug (even
if it still exists in the wild). Revert a couple of particularly
ugly hacks, and remove or improve assorted comments.
Note that there can still be encoding-related hazards here: blindly
clipping at a fixed length risks producing wrongly-encoded output
if the clip splits a multibyte character. However, code that's
doing correct multibyte-aware clipping doesn't really need a comment
about that, while code that isn't needs an explanation why not,
rather than a red-herring comment about an obsolete bug.
Discussion: https://postgr.es/m/279428.1593373684@sss.pgh.pa.us
Since %c only passes a C "char" to printf, it's incapable of dealing
with multibyte characters. Passing just the first byte of such a
character leads to an output string that is visibly not correctly
encoded, resulting in undesirable behavior such as encoding conversion
failures while sending error messages to clients.
We've lived with this issue for a long time because it was inconvenient
to avoid in a portable fashion. However, now that we always use our own
snprintf code, it's reasonable to use the %.*s format to print just one
possibly-multibyte character in a string. (We previously avoided that
obvious-looking answer in order to work around glibc's bug #6530, cf
commits 54cd4f045 and ed437e2b2.)
Hence, run around and fix a bunch of places that used %c to report
a character found in a user-supplied string. For simplicity, I did
not touch places that were emitting non-user-facing debug messages,
or reporting catalog data that should always be ASCII. (It's also
unclear how useful this approach could be in frontend code, where
it's less certain that we know what encoding we're dealing with.)
In passing, improve a couple of poorly-written error messages in
pageinspect/heapfuncs.c.
This is a longstanding issue, but I'm hesitant to back-patch because
of the impact on translatable message strings. In any case this fix
would not work reliably before v12.
Tom Lane and Quan Zongliang
Discussion: https://postgr.es/m/a120087c-4c88-d9d4-1ec5-808d7a7f133d@gmail.com
SQL:1999 had the syntax
SUBSTRING(text FROM pattern FOR escapechar)
which was replaced in SQL:2003 by the clearer
SUBSTRING(text SIMILAR pattern ESCAPE escapechar)
but the latter was never implemented in PostgreSQL. This patch adds the
new syntax as an alternative in the parser, and updates documentation
and tests to indicate that this is the preferred alternative now.
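A sketch of the two equivalent spellings (the pattern marks the part to
extract with escaped double quotes):

    -- New, preferred SQL:2003 form:
    SELECT substring('abcdef' SIMILAR '%#"cd#"%' ESCAPE '#');  -- 'cd'
    -- Legacy SQL:1999 form, still accepted:
    SELECT substring('abcdef' FROM '%#"cd#"%' FOR '#');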
Reviewed-by: Pavel Stehule <pavel.stehule@gmail.com>
Reviewed-by: Vik Fearing <vik@postgresfriends.org>
Reviewed-by: Fabien COELHO <coelho@cri.ensmp.fr>
Discussion: https://www.postgresql.org/message-id/flat/a15db31c-d0f8-8ce0-9039-578a31758adb%402ndquadrant.com
The logic used to build the set of dependencies needed for a type is
rather repetitive with direct assignments for each ObjectAddress field.
This refactors the code to use the macro ObjectAddressSet() instead, to
do the same work. There are more areas of the backend code that could
use this macro, but these are left for a follow-up patch that will
partially rework the way dependencies are recorded as well. Type
dependencies are left out of the follow-up patch, so they are refactored
separately here.
Extracted from a larger patch by the same author.
Author: Daniel Gustafsson
Discussion: https://postgr.es/m/20190213182737.mxn6hkdxwrzgxk35@alap3.anarazel.de
OpenSSL's native reports about problems related to protocol version
restrictions are pretty opaque and inconsistent. When we get an
SSL error that is plausibly due to this, emit a hint message that
includes the range of SSL protocol versions we (think we) are
allowing. This should at least get the user thinking in the right
direction to resolve the problem, even if the hint isn't totally
accurate, which it might not be for assorted reasons.
Back-patch to v13 where we increased the default minimum protocol
version, thereby increasing the risk of this class of failure.
Patch by me, reviewed by Daniel Gustafsson
Discussion: https://postgr.es/m/a9408304-4381-a5af-d259-e55d349ae4ce@2ndquadrant.com
When we initially created this parameter, in commit ff8ca5fad, we left
the default as "allow any protocol version" on grounds of backwards
compatibility. However, that's inconsistent with the backend's default
since b1abfec82; protocol versions prior to 1.2 are not considered very
secure; and OpenSSL has had TLSv1.2 support since 2012, so the number
of PG servers that need a lesser minimum is probably quite small.
On top of those things, it emerges that some popular distros (including
Debian and RHEL) set MinProtocol=TLSv1.2 in openssl.cnf. Thus, far
from having "allow any protocol version" behavior in practice, what
we actually have as things stand is a platform-dependent lower limit.
So, change our minds and set the min version to TLSv1.2. Anybody
wanting to connect with a new libpq to a pre-2012 server can either
set ssl_min_protocol_version=TLSv1 or accept the fallback to non-SSL.
Back-patch to v13 where the aforementioned patches appeared.
Patch by me, reviewed by Daniel Gustafsson
Discussion: https://postgr.es/m/a9408304-4381-a5af-d259-e55d349ae4ce@2ndquadrant.com
We failed to save the slot to disk after invalidating it, so the state
was lost in case of server restart or crash. Fix by marking the slot
dirty and flushing it.
Also, if the slot is known invalidated we don't need to reason about the
LSN at all -- it's known invalidated. Only test the LSN if the slot is
known not invalidated.
Author: Fujii Masao <masao.fujii@oss.nttdata.com>
Author: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Reviewed-by: Álvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://postgr.es/m/17a69cfe-f1c1-a416-ee25-ae15427c69eb@oss.nttdata.com
Commit 0d861bbb, which added deduplication to nbtree, had
_bt_check_unique() pass a TID to table_index_fetch_tuple_check() that
isn't safe to mutate. table_index_fetch_tuple_check()'s tid argument is
modified when the TID in question is not the latest visible tuple in a
HOT chain, though this wasn't documented.
To fix, go back to using a local copy of the TID in _bt_check_unique(),
and update comments above table_index_fetch_tuple_check().
Backpatch: 13-, where B-Tree deduplication was introduced.
If the restart_lsn of a logical replication slot falls more than
max_slot_wal_keep_size behind the current LSN, the slot is invalidated
and its restart_lsn is reset to an invalid LSN. If such a slot was
specified as the source slot in pg_copy_logical_replication_slot(), the
function unexpectedly hit an assertion failure.
The assertion was added because restart_lsn could never be invalid
before. But in v13 it can be, thanks to max_slot_wal_keep_size, so the
assertion is no longer useful and this commit removes it.
This commit also changes the errcode of the error that
pg_copy_logical_replication_slot() emits when a slot with an invalid
restart_lsn is specified, to a more appropriate one.
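For example (slot names hypothetical), copying an invalidated slot now
fails with a regular error rather than an assertion failure:
SELECT pg_copy_logical_replication_slot('invalidated_slot', 'new_slot');
-- reports an error about the invalid restart_lsn instead of crashing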
Back-patch to v13 where max_slot_wal_keep_size was added and
the assertion was no longer valid.
Author: Fujii Masao
Reviewed-by: Alvaro Herrera, Kyotaro Horiguchi
Discussion: https://postgr.es/m/f91de4fb-a7ab-b90e-8132-74796e049d51@oss.nttdata.com
I forgot that INT64_FORMAT can't be used with sscanf on Windows.
Use the same trick of sscanf'ing into a temp variable as we do in
some other places in zic.c.
The upstream IANA code avoids the portability problem by relying on
<inttypes.h>'s SCNdFAST64 macro. Once we're requiring C99 in all
branches, we should do likewise and drop this set of diffs from
upstream. For now, though, a hack seems fine, since we do not
actually care about leapseconds anyway.
Discussion: https://postgr.es/m/4e5d1a5b-143e-e70e-a99d-a3b01c1ae7c3@2ndquadrant.com
In pg_replication_slots, change the output from normal/reserved/lost to
reserved/extended/unreserved/lost, which better expresses the possible
states, particularly near the point where segments are no longer safe
but a checkpoint has not run yet.
Under the new definitions, reserved means the slot is consuming WAL
that's still under the normal WAL size constraints; extended means it's
consuming WAL that's being protected by wal_keep_segments or by the slot
itself, whose size is below max_slot_wal_keep_size; unreserved means the
WAL is no longer safe, but checkpoint has not yet removed those files.
Such a slot is in imminent danger, but can still continue for a little
while and may catch up to the reserved WAL space.
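The state can be inspected directly, for example:
SELECT slot_name, wal_status FROM pg_replication_slots;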
Also, there were some bugs in the calculations used to report the
status; fixed those.
Backpatch to 13.
Reported-by: Fujii Masao <masao.fujii@oss.nttdata.com>
Author: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Reviewed-by: Fujii Masao <masao.fujii@oss.nttdata.com>
Reviewed-by: Álvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://postgr.es/m/20200616.120236.1809496990963386593.horikyota.ntt@gmail.com
The current definition is dangerous. No bugs exist in our code at
present, but backpatch to 11 nonetheless where it was introduced.
Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
The description of InsertPgAttributeTuple() has not matched its
handling of pg_attribute contents with NULL values for a long time,
and 911e702 made things more inconsistent. This adjusts the
description to match reality.
Author: Daniel Gustafsson
Discussion: https://postgr.es/m/4E4E4B33-9FDF-4D21-B77A-642D027AEAD9@yesql.se
explain_get_index_name() applied quote_identifier() to the index name.
This is fine for text output, but the non-text output formats all have
their own quoting conventions and would much rather start from the
actual index name. For example in JSON you'd get something like
"Index Name": "\"My Index\"",
which is surely not desirable, especially when the same does not
happen for table names. Hence, move the responsibility for applying
quoting out to the callers, where it can go into already-existing
special code paths for text format.
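A sketch of the fixed behavior, using a hypothetical index whose name
needs quoting in text format:
CREATE TABLE t (x int);
CREATE INDEX "My Index" ON t (x);
SET enable_seqscan = off;  -- encourage an index scan for the demo
EXPLAIN (FORMAT JSON) SELECT x FROM t WHERE x = 1;
-- "Index Name" now comes out as "My Index", not "\"My Index\""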
This changes the API spec for users of explain_get_index_name_hook:
before, they were supposed to apply quote_identifier() if necessary,
now they should not. Research suggests that the only publicly
available user of the hook is hypopg, and it actually forgot to
apply quoting anyway, so it's fine. (In any case, there's no
behavioral change for the output of a hook as seen in non-text
EXPLAIN formats, so this won't break any case that programs should
be relying on.)
Digging in the commit logs, it appears that quoting was included in
explain_get_index_name's duties when commit 604ffd280 invented it;
and that was fine at the time because we only had text output format.
This should have been rethought when non-text formats were invented,
but it wasn't.
This is a fairly clear bug for users of non-text EXPLAIN formats,
so back-patch to all supported branches.
Per bug #16502 from Maciek Sakrejda. Patch by me (based on
investigation by Euler Taveira); thanks to Julien Rouhaud for review.
Discussion: https://postgr.es/m/16502-57bd1c9f913ed1d1@postgresql.org
Both INDEX_CLEANUP and TRUNCATE have been available as VACUUM options
since v12, and are enabled by default unless the reloptions
vacuum_index_cleanup and vacuum_truncate, respectively, are disabled
for a given relation. This change adds support for disabling these
options from vacuumdb.
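The new switches (--no-index-cleanup and --no-truncate) map onto the
existing VACUUM options; the equivalent SQL, for a hypothetical table,
is:
VACUUM (INDEX_CLEANUP FALSE, TRUNCATE FALSE) mytable;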
Author: Nathan Bossart
Reviewed-by: Michael Paquier, Masahiko Sawada
Discussion: https://postgr.es/m/6F7F17EF-B1F2-4681-8D03-BA96365717C0@amazon.com
spg_mask() didn't take into account that pd_lower equal to
SizeOfPageHeaderData is still a valid value. This commit fixes that.
Backpatch to 11, where spg_mask()'s pd_lower check was introduced.
Reported-by: Michael Paquier
Discussion: https://postgr.es/m/20200615131405.GM52676%40paquier.xyz
Backpatch-through: 11
The function has been reading global variable forceSyncCommit, mirroring
the intent of the caller that passed forceSync=forceSyncCommit. The
other caller, RecordTransactionCommitPrepared(), passed false. Since
COMMIT PREPARED can't share a transaction with any command, it certainly
doesn't share a transaction with a command that sets forceSyncCommit.
Reviewed by Michael Paquier.
Discussion: https://postgr.es/m/20200617032615.GC2916904@rfd.leadboat.com
A few places calling fwrite and gzwrite were not setting errno to ENOSPC
when reporting errors, as is customary; this led to some failures being
reported as
"could not write file: Success"
which makes us look silly. Make a few of these places in pg_dump and
pg_basebackup use our customary pattern.
Backpatch-to: 9.5
Author: Justin Pryzby <pryzby@telsasoft.com>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://postgr.es/m/20200611153753.GU14879@telsasoft.com
The IANA time zone folk have deprecated use of a "posixrules" file in
the tz database. While for now it's our choice whether to keep
supplying one in our own builds, installations built with
--with-system-tzdata will soon be needing to cope with that file not
being present, at least on some platforms.
This causes a problem for the horology test, which expected the
nonstandard POSIX zone spec "CST7CDT" to apply pre-2007 US daylight
savings rules. That does happen if the posixrules file supplies such
information, but otherwise the test produces undesired results.
To fix, add an explicit transition date rule that matches 2005 practice.
(We could alternatively have switched the test to use some real time
zone, but it seems useful to have coverage of this type of zone spec.)
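For instance, a POSIX zone spec with explicit transition rules matching
2005 US practice (DST from the first Sunday of April to the last Sunday
of October) looks like this:
SET timezone = 'CST7CDT,M4.1.0,M10.5.0';
SELECT '2005-07-15 12:00'::timestamptz;  -- interpreted as CDT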
While at it, update a documentation example that also relied on
"CST7CDT"; use a real-world zone name instead. Also, document why
the zone names EST5EDT, CST6CDT, MST7MDT, PST8PDT aren't subject to
similar failures when "posixrules" is missing.
Back-patch to all supported branches, since the hazard is the same
for all.
Discussion: https://postgr.es/m/1665379.1592581287@sss.pgh.pa.us
It was possible for deduplication's single value strategy to mistakenly
believe that a very small duplicate tuple counts as one of the six large
tuples that it aims to leave behind after the page finally splits. This
could cause slightly suboptimal space utilization with very low
cardinality indexes, though only under fairly narrow conditions.
To fix, be particular about what kind of tuple counts as a
maxpostingsize-capped tuple. This avoids confusion in the event of a
small tuple that gets "wedged" between two large tuples, where all
tuples on the page are duplicates of the same value.
Discussion: https://postgr.es/m/CAH2-Wz=Y+sgSFc-O3LpiZX-POx2bC+okec2KafERHuzdVa7-rQ@mail.gmail.com
Backpatch: 13-, where deduplication was introduced (by commit 0d861bbb)
This commit fixes the following issues.
1. There is a case where the slot is dropped while we are trying to
invalidate it. InvalidateObsoleteReplicationSlots() did not handle
this case, which could cause a checkpoint to fail.
2. InvalidateObsoleteReplicationSlots() could emit the same log message
multiple times unnecessarily. It should be logged only once.
3. When marking the slot as used, we always searched for the target
slot across all the replication slots even if we had already found it,
wasting cycles.
Back-patch to v13 where these issues were added as a part of
max_slot_wal_keep_size code.
Author: Fujii Masao
Reviewed-by: Kyotaro Horiguchi, Alvaro Herrera
Discussion: https://postgr.es/m/66c05b67-3396-042c-1b41-bfa6c3ddcf82@oss.nttdata.com
Since 1f39bce02, HashAgg nodes have had the ability to spill to disk
when memory consumption exceeds work_mem. That commit added new
properties to EXPLAIN ANALYZE to show the maximum memory usage and disk
usage; however, it didn't quite go as far as showing that information
for parallel workers. Since workers may have experienced something very
different from the main process, we should show this information per
worker, as is done in Sort.
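For example, in the output of a query like the following (table name
hypothetical), each worker's peak memory and disk usage now appears
alongside the leader's:
EXPLAIN (ANALYZE) SELECT col, count(*) FROM big_table GROUP BY col;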
Reviewed-by: Justin Pryzby
Reviewed-by: Jeff Davis
Discussion: https://postgr.es/m/CAApHDvpEKbfZa18mM1TD7qV6PG+w97pwCWq5tVD0dX7e11gRJw@mail.gmail.com
Backpatch-through: 13, where the hashagg spilling code was added.
This was a danger only for --disable-spinlocks in combination with
atomic operations unsupported by the current platform.
While atomics.c was careful to signal that a separate semaphore ought
to be used when spinlock emulation is active, spin.c didn't actually
implement that mechanism. That's my (Andres') fault, it seems to have
gotten lost during the development of the atomic operations support.
Fix that issue and add a test for nesting atomic operations inside a
spinlock.
Author: Andres Freund
Discussion: https://postgr.es/m/20200605023302.g6v3ydozy5txifji@alap3.anarazel.de
Backpatch: 9.5-
As s_lock_test, the already existing test for spinlocks, isn't run in
an automated fashion (and doesn't test a normal backend environment),
adding tests that are run as part of a normal regression run is a good
idea. Particularly in light of several recent and upcoming spinlock
related fixes.
Currently the new tests are run as part of the pre-existing
test_atomic_ops() test. That perhaps can be quibbled about, but for
now seems ok.
The only operations that s_lock_test tests but the new tests don't are
the detection of a stuck spinlock and S_LOCK_FREE (which is otherwise
unused, not implemented on all platforms, and will be removed).
This currently contains a test for more than INT_MAX spinlocks (only
run with --disable-spinlocks), to ensure the recent commit fixing a
bug with more than INT_MAX spinlock initializations is correct. That
test is somewhat slow, so we might want to disable it after a few
days.
It might be worth retiring s_lock_test after this. The added coverage
of a stuck spinlock probably isn't worth the added complexity?
Author: Andres Freund
Discussion: https://postgr.es/m/20200606023103.avzrctgv7476xj7i@alap3.anarazel.de
Advancing a replication slot did not recompute the oldest xmin and LSN
values across replication slots, preventing resources from being
removed, such as WAL segments not being recycled at checkpoint time.
The original commit that introduced slot advancing, 9c7d06d, never
updated those oldest values, and b0afdca removed this code.
This commit adds a TAP test to check segment recycling with advancing
for physical slots, enforcing an extra segment switch before advancing
to check if the segment gets correctly recycled after a checkpoint.
Reported-by: Andres Freund
Reviewed-by: Alexey Kondratov, Kyotaro Horiguchi
Discussion: https://postgr.es/m/20200609171904.kpltxxvjzislidks@alap3.anarazel.de
Backpatch-through: 11
testtablespace is an extra path used as tablespace location in the main
regression test suite, computed from --outputdir as defined by the
caller of pg_regress (current directory if undefined).
This special handling was introduced as of f10589e to be specific to
MSVC, as we let pg_regress' Makefile handle this cleanup in other
environments. This moves the cleanup to the MSVC script running
regression tests instead where needed: check, installcheck and
upgradecheck. I have also checked this patch on MSVC with repeated runs
of each target.
Author: Kyotaro Horiguchi, Michael Paquier
Discussion: https://postgr.es/m/20200219.142519.437573253063431435.horikyota.ntt@gmail.com
This absorbs a leap-second-related bug fix in localtime.c, and
teaches zic to handle an expiration marker in the leapseconds file.
Neither are of any interest to us (for the foreseeable future
anyway), but we need to stay more or less in sync with upstream.
Also adjust some over-eager changes in the README from commit 957338418.
I have no intention of making changes that require C99 in this code,
until such time as all the live back branches require C99. Otherwise
back-patching will get too exciting.
For the same reason, absorb assorted whitespace and other cosmetic
changes from HEAD into the back branches; mostly this reflects use of
improved versions of pgindent.
All in all then, quite a boring update. But I figured I'd get it
done while I was looking at this code.
On platforms without support for 64bit atomic operations where we also
cannot rely on 64bit reads to have single copy atomicity, such atomics
are implemented using a spinlock based fallback. That means it's not
safe to even read such atomics from within a signal handler (since the
signal handler might run when the spinlock already is held).
To avoid this issue, defer global barrier processing out of the signal
handler. Instead of checking the local / shared barrier generation to
determine whether to set ProcSignalBarrierPending, introduce
PROCSIGNAL_BARRIER and always set ProcSignalBarrierPending when
receiving such a signal. Additionally, avoid redundant work in
ProcessProcSignalBarrier if ProcSignalBarrierPending was set
unnecessarily.
Also do a small amount of other polishing.
Author: Andres Freund
Reviewed-By: Robert Haas
Discussion: https://postgr.es/m/20200609193723.eu5ilsjxwdpyxhgz@alap3.anarazel.de
Backpatch: 13-, where the code was introduced.
Don't use fread(), since that doesn't necessarily set errno. We could
use read() instead, but it's even better to use pg_pread(), which
allows us to avoid some extra calls to seek to the desired location in
the file.
Also, advertise a wait event while reading from a file, as we do for
most other places where we're reading data from files.
Patch by me, reviewed by Hamid Akhtar.
Discussion: http://postgr.es/m/CA+TgmobBw-3573vMosGj06r72ajHsYeKtksT_oTxH8XvTL7DxA@mail.gmail.com
Merge two calls to sendDir() that are exactly the same except for
the fifth argument. Adjust comments to match.
Also, don't bother checking whether tblspc_map_file is NULL. We
initialize it in all cases, so it can't be.
Patch by me, reviewed by Amit Kapila and Kyotaro Horiguchi.
Discussion: http://postgr.es/m/CA+TgmoYq+59SJ2zBbP891ngWPA9fymOqntqYcweSDYXS2a620A@mail.gmail.com
Commit 72d422a522 made xlog.c call
sendTablespace() with the 'sizeonly' argument set to true, which
required basebackup.c to export sendTablespace(). However, that's
kind of ugly, so instead defer the call to sendTablespace() until
basebackup.c regains control. That way, it can still be a static
function.
Patch by me, reviewed by Amit Kapila and Kyotaro Horiguchi.
Discussion: http://postgr.es/m/CA+TgmoYq+59SJ2zBbP891ngWPA9fymOqntqYcweSDYXS2a620A@mail.gmail.com
Per POSIX this case should produce +0, but buildfarm member fossa
(with icc (ICC) 19.0.5.281 20190815) is reporting -0. icc has a
boatload of unsafe floating-point optimizations, with a corresponding
boatload of not-too-well-documented compiler switches, and it seems our
default use of "-mp1" isn't whacking it hard enough to keep it from
misoptimizing the stanza in dpow() that checks whether y is odd.
There's nothing wrong with that code (seeing that no other buildfarm
member has trouble with it), so I'm content to blame this on the
compiler. But without access to the compiler I'm not going to guess at
what switches might be needed to fix it. For now, tweak the test case
so it will accept either -0 or +0 as a correct answer.
Discussion: https://postgr.es/m/E1jkyFX-0005RR-1Q@gemulon.postgresql.org
I failed to notice that we don't really need to check for y being an
integer in the code path where x = -inf; we already did.
Also make some further cosmetic rearrangements in that spot in hopes
of dodging the seeming compiler bug that buildfarm member fossa is
hitting. And be consistent about declaring variables as "float8"
not "double", since the pre-existing variables in this function are
like that.
Discussion: https://postgr.es/m/E1jkyFX-0005RR-1Q@gemulon.postgresql.org
Convert buffile.c error handling to use ereport. This fixes cases where
I/O errors were indistinguishable from EOF or not reported. Also remove
"%m" from error messages where errno would be bogus. While we're
modifying those strings, add block numbers and short read byte counts
where appropriate.
Back-patch to all supported releases.
Reported-by: Amit Khandekar <amitdkhan.pg@gmail.com>
Reviewed-by: Melanie Plageman <melanieplageman@gmail.com>
Reviewed-by: Alvaro Herrera <alvherre@2ndquadrant.com>
Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Reviewed-by: Ibrar Ahmed <ibrar.ahmad@gmail.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/CA%2BhUKGJE04G%3D8TLK0DLypT_27D9dR8F1RQgNp0jK6qR0tZGWOw%40mail.gmail.com
Non-zero vacuum_defer_cleanup_age values cause pg_upgrade freezing of
the system catalogs to be incomplete, or do nothing. This will cause
the upgrade to fail in confusing ways.
Reported-by: Laurenz Albe
Discussion: https://postgr.es/m/7d6f6c22ba05ce0c526e9e8b7bfa8105e7da45e6.camel@cybertec.at
Backpatch-through: 9.5
Buildfarm results for commit e532b1d57 reveal the error in my thinking
about the unexpected-EDOM case. I'd supposed this was no longer really
a live issue, but it seems the fix for glibc's bug #3866 is not all that
old, and we still have at least one buildfarm animal (lapwing) with the
bug. Hence, resurrect essentially the previous logic (but, I hope, less
opaquely presented), and explain what it is we're really doing here.
Also, blindly try to fix fossa's failure by tweaking the logic that
figures out whether y is an odd integer when x is -inf. This smells
a whole lot like a compiler bug, but I lack access to icc to try to
pin it down. Maybe doing division instead of multiplication will
dodge the issue.
Discussion: https://postgr.es/m/E1jkU7H-00024V-NZ@gemulon.postgresql.org
Introduce TAR_BLOCK_SIZE and replace many instances of 512 with
the new constant. Introduce function tarPaddingBytesRequired
and use it to replace numerous repetitions of (x + 511) & ~511.
Add preprocessor guards against multiple inclusion to pgtar.h.
Reformat the prototype for tarCreateHeader so it doesn't extend
beyond 80 characters.
Discussion: http://postgr.es/m/CA+TgmobWbfReO9-XFk8urR1K4wTNwqoHx_v56t7=T8KaiEoKNw@mail.gmail.com
Buildfarm results for commit decbe2bfb show that AIX and illumos
have non-POSIX-compliant pow() functions, as do ancient NetBSD
and HPUX releases. While it's dubious how much we should care
about the latter two platforms, the former two are probably enough
reason to put in manual handling of infinite-input cases. Hence,
do so, and clean up the post-pow() error handling to reflect its
now-more-limited scope. (Notably, while we no longer expect to
ever see EDOM from pow(), report it as a domain error if we do.
The former coding had the net effect of expensively converting the
error to ERANGE, which seems highly questionable: if pow() wanted
to report ERANGE, it would have done so.)
Patch by me; thanks to Michael Paquier for review.
Discussion: https://postgr.es/m/E1jkU7H-00024V-NZ@gemulon.postgresql.org
Previously, these functions tended to throw underflow errors for
negative-infinity exponents. The correct thing per POSIX is to
return 0, so let's do that instead. (Note that the SQL standard
is silent on such issues, as it lacks the concepts of either Inf
or NaN; so our practice is to follow POSIX whenever a corresponding
C-library function exists.)
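A couple of examples of the POSIX-mandated results:
SELECT exp('-inf'::float8);         -- 0
SELECT power(2.0::float8, '-inf');  -- 0
SELECT power(0.5::float8, '-inf');  -- Infinity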
Also, add a bunch of test cases verifying that exp() and power()
actually do follow POSIX for Inf and NaN inputs. While this patch
should guarantee that exp() passes the tests, power() will not unless
the platform's pow(3) is fully POSIX-compliant. I already know that
gaur fails some of the tests, and I am suspicious that the Windows
animals will too; the extent of compliance of other old platforms
remains to be seen. We might choose to drop failing test cases, or
to work harder at overriding pow(3) for these cases, but first let's
see just how good or bad the situation is.
Discussion: https://postgr.es/m/582552.1591917752@sss.pgh.pa.us
This patch removes the hardcoded check for superuser privileges when
executing replication origin functions. Instead, execution is revoked
from public, meaning that those functions can be executed by a superuser
and that access to them can be granted.
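For example, a superuser can now delegate access to a specific function
(role name hypothetical):
GRANT EXECUTE ON FUNCTION pg_replication_origin_create(text)
  TO origin_admin;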
Author: Martín Marqués
Reviewed-by: Kyotaro Horiguchi, Michael Paquier, Masahiko Sawada
Discussion: https://postgr.es/m/CAPdiE1xJMZOKQL3dgHMUrPqysZkgwzSMXETfKkHYnBAB7-0VRQ@mail.gmail.com
var_samp(numeric) and stddev_samp(numeric) disagreed with their float
cousins about what to do for a single non-null input value that is NaN.
The float versions return NULL on the grounds that the calculation is
only defined for more than one non-null input, which seems like the
right answer. But the numeric versions returned NaN, as a result of
dealing with edge cases in the wrong order. Fix that. The patch
also gets rid of an insignificant memory leak in such cases.
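A quick illustration of the now-consistent behavior:
SELECT var_samp(x::numeric) AS num, var_samp(x::float8) AS flt
FROM (VALUES ('nan')) AS v(x);
-- both columns are now NULL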
This inconsistency is of long standing, but on the whole it seems best
not to back-patch the change into stable branches; nobody's complained
and it's such an obscure point that nobody's likely to complain.
(Note that v13 and v12 now contain test cases that will notice if we
accidentally back-patch this behavior change in future.)
Report and patch by me; thanks to Dean Rasheed for review.
Discussion: https://postgr.es/m/353062.1591898766@sss.pgh.pa.us
When there is just one non-null input value, and it is infinity or NaN,
aggregates such as stddev_pop and covar_pop should produce a NaN
result, because the calculation is not well-defined. They used to do
so, but since we adopted Youngs-Cramer aggregation in commit e954a727f,
they produced zero instead. That's an oversight, so fix it. Add tests
exercising these edge cases.
Affected aggregates are
var_pop(double precision)
stddev_pop(double precision)
var_pop(real)
stddev_pop(real)
regr_sxx(double precision,double precision)
regr_syy(double precision,double precision)
regr_sxy(double precision,double precision)
regr_r2(double precision,double precision)
regr_slope(double precision,double precision)
regr_intercept(double precision,double precision)
covar_pop(double precision,double precision)
corr(double precision,double precision)
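For example:
SELECT var_pop(x) FROM (VALUES ('inf'::float8)) AS v(x);
-- now returns NaN rather than 0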
Back-patch to v12 where the behavior change was accidentally introduced.
Report and patch by me; thanks to Dean Rasheed for review.
Discussion: https://postgr.es/m/353062.1591898766@sss.pgh.pa.us
Unify the grammar of COMMENT, DROP, and SECURITY LABEL further. They
all effectively just take an object address for later processing, so
we can make the grammar more generalized. Some extra checking about
which object types are supported can be done later in the statement
execution.
Discussion: https://www.postgresql.org/message-id/flat/163c00a5-f634-ca52-fc7c-0e53deda8735%402ndquadrant.com
Using --outputdir with a custom output directory has never created the
sql/ and expected/ subdirectories by default (generated with contents
from respectively input/ and output/), even though the base output
directory gets created if it does not exist. If sql/ and expected/ are
not present, pg_regress would fail because of the missing paths,
requiring test scripts to create those extra paths themselves. This
commit changes pg_regress so that both get created by default if they
do not exist, removing the need for external test scripts to do so.
This cleans up two code paths in the tree for pg_upgrade tests in MSVC
and environments able to use test.sh. sql/ and expected/ were created
as part of each test script, but this is not needed anymore as
pg_regress handles the work now.
Author: Roman Zharkov, Daniel Gustafsson
Reviewed-by: Michael Paquier, Tom Lane
Discussion: https://postgr.es/m/16484-4d89e9cc11241996@postgresql.org
Core by no means makes excessive use of these functions, but quite a large
number of those usages do require the caller to call strlen() on the
returned string. This is quite wasteful since these functions do already
have a good idea of the length of the string, so we might as well just
have them return that.
Reviewed-by: Andrew Gierth
Discussion: https://postgr.es/m/CAApHDvrm2A5x2uHYxsqriO2cUaGcFvND%2BksC9e7Tjep0t2RK_A%40mail.gmail.com
In passing, also remove a few surplus empty lines from pg_ltoa and
pg_ulltoa_n in numutils.c
Reported-by: Andrew Gierth
Discussion: https://postgr.es/m/87y2ou3xuh.fsf@news-spur.riddles.org.uk
Backpatch-through: 13, where these changes were introduced
plpgsql has always executed the query given in a RETURN QUERY command
by opening it as a cursor and then fetching a few rows at a time,
which it turns around and dumps into the function's result tuplestore.
The point of this was to keep from blowing out memory with an oversized
SPITupleTable result (note that while a tuplestore can spill tuples
to disk, SPITupleTable cannot). However, it's rather inefficient, both
because of extra data copying and because of executor entry/exit
overhead. In recent versions, a new performance problem has emerged:
use of a cursor prevents use of a parallel plan for the executed query.
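For example, the query in a function like this one (names hypothetical)
becomes eligible for a parallel plan:
CREATE FUNCTION all_ids() RETURNS SETOF integer
LANGUAGE plpgsql AS $$
BEGIN
  -- executor output now goes straight into the result tuplestore
  RETURN QUERY SELECT id FROM big_table;
END;
$$;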
We can improve matters by skipping use of a cursor and having the
executor push result tuples directly into the function's result
tuplestore. However, a moderate amount of new infrastructure is needed
to make that idea work:
* We can use the existing tstoreReceiver.c DestReceiver code to funnel
executor output to the tuplestore, but it has to be extended to support
plpgsql's requirement for possibly applying a tuple conversion map.
* SPI needs to be extended to allow use of a caller-supplied
DestReceiver instead of its usual receiver that puts tuples into
a SPITupleTable. Two new API calls are needed to handle both the
RETURN QUERY and RETURN QUERY EXECUTE cases.
I also felt that I didn't want these new API calls to use the legacy
method of specifying query parameter values with "char" null flags
(the old ' '/'n' convention); rather they should accept ParamListInfo
objects containing the parameter type and value info. This required
a bit of additional new infrastructure since we didn't yet have any
parse analysis callback that would interpret $N parameter symbols
according to type data supplied in a ParamListInfo. There seems to be
no harm in letting makeParamList install that callback by default,
rather than leaving a new ParamListInfo's parserSetup hook as NULL.
(Indeed, as of HEAD, I couldn't find anyplace that was using the
parserSetup field at all; plpgsql was using parserSetupArg for its
own purposes, but parserSetup seemed to be write-only.)
We can actually get plpgsql out of the business of using legacy null
flags altogether, and using ParamListInfo instead of its ad-hoc
PreparedParamsData structure; but this requires inventing one more
SPI API call that can replace SPI_cursor_open_with_args. That seems
worth doing, though.
SPI_execute_with_args and SPI_cursor_open_with_args are now unused
anywhere in the core PG distribution. Perhaps someday we could
deprecate/remove them. But cleaning up the crufty bits of the SPI
API is a task for a different patch.
Per bug #16040 from Jeremy Smith. This is unfortunately too invasive to
consider back-patching. Patch by me; thanks to Hamid Akhtar for review.
Discussion: https://postgr.es/m/16040-eaacad11fecfb198@postgresql.org
Rewrite the documentation of these functions, in light of recent bug fix
commit 5940ffb2.
Back-patch to 13 where the check-for-conflict-out code was split up into
AM-specific and generic parts, and new documentation was added that now
looked wrong.
Reviewed-by: Peter Geoghegan <pg@bowt.ie>
Discussion: https://postgr.es/m/db7b729d-0226-d162-a126-8a8ab2dc4443%40jepsen.io
When merging two NumericAggStates, the code missed adding the new
state's NaNcount unless its N was also nonzero; since those counts
are independent, this is wrong.
This would only have visible effect if some partial aggregate scans
found only NaNs while earlier ones found only non-NaNs; then we could
end up falsely deciding that there were no NaNs and fail to return a
NaN final result as expected. That's pretty improbable, so it's no
surprise this hasn't been reported from the field. Still, it's a bug.
I didn't try to produce a regression test that would show the bug,
but I did notice that these functions weren't being reached at all
in our regression tests, so I improved the tests to at least
exercise them. With these additions, I see pretty complete code
coverage on the aggregation-related functions in numeric.c.
Back-patch to 9.6 where this code was introduced. (I only added
the improved test case as far back as v10, though, since the
relevant part of aggregates.sql isn't there at all in 9.6.)
Eliminate enable_groupingsets_hash_disk, which was primarily useful
for testing grouping sets that use HashAgg and spill. Instead, hack
the table stats to convince the planner to choose hashed aggregation
for grouping sets that will spill to disk. Suggested by Melanie
Plageman.
Rename enable_hashagg_disk to hashagg_avoid_disk_plan, and invert the
meaning of on/off. The new name indicates more strongly that it only
affects the planner. Also, the word "avoid" is less definite, which
should avoid surprises when HashAgg still needs to use the
disk. Change suggested by Justin Pryzby, though I chose a different
GUC name.
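For example:
SET hashagg_avoid_disk_plan = on;
-- the planner avoids choosing HashAgg when it expects spilling;
-- the executor may still spill if the estimate turns out wrong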
Discussion: https://postgr.es/m/CAAKRu_aisiENMsPM2gC4oUY1hHG3yrCwY-fXUg22C6_MJUwQdA%40mail.gmail.com
Discussion: https://postgr.es/m/20200610021544.GA14879@telsasoft.com
Backpatch-through: 13
SSI's HeapCheckForSerializableConflictOut() test failed to correctly
handle conditions involving a concurrently inserted tuple which is
later concurrently updated by a separate transaction. A SELECT statement
that called HeapCheckForSerializableConflictOut() could end up using the
same XID (updater's XID) for both the original tuple, and the successor
tuple, missing the XID of the xact that created the original tuple
entirely. This only happened when neither tuple from the chain was
visible to the transaction's MVCC snapshot.
The observable symptoms of this bug were subtle. A pair of transactions
could commit, with the later transaction failing to observe the effects
of the earlier transaction (because of the confusion created by the
update to the non-visible row). This bug dates all the way back to
commit dafaa3ef, which added SSI.
To fix, make sure that we check the xmin of concurrently inserted tuples
that happen to also have been updated concurrently.
Author: Peter Geoghegan
Reported-By: Kyle Kingsbury
Reviewed-By: Thomas Munro
Discussion: https://postgr.es/m/db7b729d-0226-d162-a126-8a8ab2dc4443@jepsen.io
Backpatch: All supported versions
fe_archive.c was compiled only for the frontend in src/common/, but as
it will never share anything with the backend, it makes most sense to
move this file to src/fe_utils/.
Reported-by: Peter Eisentraut
Discussion: https://postgr.es/m/e9766d71-8655-ac86-bdf6-77e0e7169977@2ndquadrant.com
Backpatch-through: 13
access_method, database_name, and index_name are all just name, and
they are not used consistently for their alleged purpose, so remove
them. They have been around since ancient times but have no current
reason for existing. Removing them can simplify future grammar
refactoring.
Discussion: https://www.postgresql.org/message-id/flat/163c00a5-f634-ca52-fc7c-0e53deda8735%402ndquadrant.com
The previous description string still described the pre-PostgreSQL 10
(pre eb61136dc7) behavior of selecting between encrypted and
unencrypted, but the parameter now chooses between encryption
algorithms.
Commit cec2edfa78 introduced logical_decoding_work_mem to limit
ReorderBuffer memory usage. We spill the changes once the memory occupied
by changes exceeds logical_decoding_work_mem. There was an assumption
in the code that by evicting the largest (sub)transaction we will come
under the memory limit as the selected transaction will be at least as
large as the most recent change (which caused us to go over the memory
limit). However, that is not true because a user can reduce the
logical_decoding_work_mem to a smaller value before the most recent
change.
We fix it by evicting transactions until we come under the memory
limit.
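The scenario arises, for instance, when the limit is lowered on a
running server:
ALTER SYSTEM SET logical_decoding_work_mem = '64kB';  -- the minimum
SELECT pg_reload_conf();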
Reported-by: Fujii Masao
Author: Amit Kapila
Reviewed-by: Fujii Masao
Backpatch-through: 13, where it was introduced
Discussion: https://postgr.es/m/2b7ba291-22e0-a187-d167-9e5309a3458d@oss.nttdata.com
There are a number of Remove${Something}ById() functions that are
essentially identical in structure and only different in which catalog
they are working on. Refactor this to be one generic function. The
information about which oid column, index, etc. to use was already
available in ObjectProperty for most catalogs, in a few cases it was
easily added.
Reviewed-by: Pavel Stehule <pavel.stehule@gmail.com>
Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/331d9661-1743-857f-1cbb-d5728bcd62cb%402ndquadrant.com
Commit 0c882e52a tried to force table atest12 to have more-accurate-
than-default statistics; but transiently setting default_statistics_target
isn't enough for that, because autovacuum could come along and overwrite
the stats later. This evidently explains some intermittent buildfarm
failures we've seen since then. Repair by disabling autovac on this table.
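The per-table switch for that is the autovacuum_enabled reloption,
along the lines of:
ALTER TABLE atest12 SET (autovacuum_enabled = off);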
Thanks to David Rowley for correctly diagnosing the cause.
Discussion: https://postgr.es/m/CA+hUKG+OUkQSOUTg=qo=S=fWa_tbm99i7rB7mfbHz1SYm4v-jQ@mail.gmail.com
Previously we used pg_atomic_write_64_impl inside
pg_atomic_init_u64. That works correctly, but on platforms without
64bit single copy atomicity it could trigger spurious valgrind errors
about uninitialized memory, because we use compare_and_swap for atomic
writes on such platforms.
I previously suppressed one instance of this problem (6c878edc1d),
but as Tom reports that wasn't enough. As the atomic variable cannot
yet be concurrently accessible during initialization, it seems better
to have pg_atomic_init_u64_impl set the value directly.
Change pg_atomic_init_u32_impl for symmetry.
Reported-By: Tom Lane
Author: Andres Freund
Discussion: https://postgr.es/m/1714601.1591503815@sss.pgh.pa.us
Backpatch: 9.5-
The redo routines for XLOG_CHECKPOINT_{ONLINE,SHUTDOWN} must acquire
ControlFileLock before modifying ControlFile->checkPointCopy, or the
checkpointer could write out a control file with a bad checksum.
Likewise, XLogReportParameters() must acquire ControlFileLock before
modifying ControlFile and calling UpdateControlFile().
Back-patch to all supported releases.
Author: Nathan Bossart <bossartn@amazon.com>
Author: Fujii Masao <masao.fujii@oss.nttdata.com>
Reviewed-by: Fujii Masao <masao.fujii@oss.nttdata.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Thomas Munro <thomas.munro@gmail.com>
Discussion: https://postgr.es/m/70BF24D6-DC51-443F-B55A-95735803842A%40amazon.com
Since database connections can be used with WAL senders as of 9.4, it
is possible to run physical replication over such a connection. This
commit fixes a crash when starting physical replication with a WAL
sender using a database connection, caused by the refactoring done in
850196b.
There have been discussions about forbidding the use of physical
replication in a database connection, but this is left for later,
taking care only of the crash new to 13.
While on it, add a test to check for a failure when attempting logical
replication if the WAL sender does not have a database connection. This
part is extracted from a larger patch by Kyotaro Horiguchi.
Reported-by: Vladimir Sitnikov
Author: Michael Paquier, Kyotaro Horiguchi
Reviewed-by: Kyotaro Horiguchi, Álvaro Herrera
Discussion: https://postgr.es/m/CAB=Je-GOWMj1PTPkeUhjqQp-4W3=nW-pXe2Hjax6rJFffB5_Aw@mail.gmail.com
Backpatch-through: 13
Commit 7be5d8df1f surfaced the logic
error, which had no functional implications, by adding "use warnings".
The buildfarm always customizes PROVE_FLAGS, so the warning did not
appear there. Back-patch to 9.5 (all supported versions).
Even when we've concluded that we have a hard write failure on the
socket, we should continue to try to read data. This gives us an
opportunity to collect any final error message that the backend might
have sent before closing the connection; moreover it is the job of
pqReadData not pqSendSome to close the socket once EOF is detected.
Due to an oversight in 1f39a1c06, pqSendSome failed to try to collect
data in the case where we'd already set write_failed. The problem was
masked for ordinary query operations (which really only make one write
attempt anyway), but COPY to the server would continue to send data
indefinitely after a mid-COPY connection loss.
Hence, add pqReadData calls into the paths where pqSendSome drops data
because of write_failed. If we've lost the connection, this will
eventually result in closing the socket and setting CONNECTION_BAD,
which will cause PQputline and siblings to report failure, allowing
the application to terminate the COPY sooner. (Basically this restores
what happened before 1f39a1c06.)
There are related issues that this does not solve; for example, if the
backend sends an error but doesn't drop the connection, we did and
still will keep pumping COPY data as long as the application sends it.
Fixing that will require application-visible behavior changes though,
and anyway it's an ancient behavior that we've had few complaints about.
For now I'm just trying to fix the regression from 1f39a1c06.
Per a complaint from Andres Freund. Back-patch into v12 where
1f39a1c06 came in.
Discussion: https://postgr.es/m/20200603201242.ofvm4jztpqytwfye@alap3.anarazel.de
As it stands, this flag is only set when we've successfully sent a
cancel request, not if we get SIGINT and then fail to send a cancel.
However, for almost all callers, that's the Wrong Thing: we'd prefer
to abort processing after control-C even if no cancel could be sent.
As an example, since commit 1d468b9ad "pgbench -i" fails to give up
sending COPY data even after control-C, if the postmaster has been
stopped, which is clearly not what the code intends and not what anyone
would want. (The fact that it keeps going at all is the fault of a
separate bug in libpq, but not letting CancelRequested become set is
clearly not what we want here.)
The sole exception, as far as I can find, is that scripts_parallel.c's
ParallelSlotsGetIdle tries to consume a query result after issuing a
cancel, which of course might not terminate quickly if no cancel
happened. But that behavior was poorly thought out too. No user of
ParallelSlotsGetIdle tries to continue processing after a cancel,
so there is really no point in trying to clear the connection's state.
Moreover this has the same defect as for other users of cancel.c,
that if the cancel request fails for some reason then we end up with
control-C being completely ignored. (On top of that, select_loop failed
to distinguish clearly between SIGINT and other reasons for select(2)
failing, which means that it's possible that the existing code would
think that a cancel has been sent when it hasn't.)
Hence, redefine CancelRequested as simply meaning that SIGINT was
received. We could add a second flag with the other meaning, but
in the absence of any compelling argument why such a flag is needed,
I think it would just offer an opportunity for future callers to
get it wrong. Also remove the consumeQueryResult call in
ParallelSlotsGetIdle's failure exit. In passing, simplify the
API of select_loop.
It would now be possible to re-unify psql's cancel_pressed with
CancelRequested, partly undoing 5d43c3c54. But I'm not really
convinced that that's worth the trouble, so I left psql alone,
other than fixing a misleading comment.
This code is new in v13 (cf a4fd3aa71), so no need for back-patch.
Per investigation of a complaint from Andres Freund.
Discussion: https://postgr.es/m/20200603201242.ofvm4jztpqytwfye@alap3.anarazel.de
Commit 24d85952 made a change that indirectly caused a performance
regression by triggering a change in the way GCC optimizes memcpy() on
some platforms.
The behavior seemed to contradict a GCC document, so I filed a report:
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=95556
This patch implements a narrow workaround which eliminates the
regression I observed. The workaround is benign enough that it seems
unlikely to cause a different regression on another platform.
Discussion: https://postgr.es/m/99b2eab335c1592c925d8143979c8e9e81e1575f.camel@j-davis.com
ineq_histogram_selectivity() can be invoked in situations where the
ordering we care about is not that of the column's histogram. We could
be considering some other collation, or even more drastically, the
query operator might not agree at all with what was used to construct
the histogram. (We'll get here for anything using scalarineqsel-based
estimators, so that's quite likely to happen for extension operators.)
Up to now we just ignored this issue and assumed we were dealing with
an operator/collation whose sort order exactly matches the histogram,
possibly resulting in junk estimates if the binary search gets confused.
It's past time to improve that, since the use of nondefault collations
is increasing. What we can do is verify that the given operator and
collation match what's recorded in pg_statistic, and use the existing
code only if so. When they don't match, instead execute the operator
against each histogram entry, and take the fraction of successes as our
selectivity estimate. This gives an estimate that is probably good to
about 1/histogram_size, with no assumptions about ordering. (The quality
of the estimate is likely to degrade near the ends of the value range,
since the two orderings probably don't agree on what is an extremal value;
but this is surely going to be more reliable than what we did before.)
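For instance, an estimate for a clause like the following (table and
column hypothetical) is now computed by probing the histogram entries
with the query's own operator and collation:
SELECT * FROM people WHERE surname < 'M' COLLATE "C";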
At some point we might further improve matters by storing more than one
histogram calculated according to different orderings. But this code
would still be good fallback logic when no matches exist, so that is
not an argument for not doing this.
While here, also improve get_variable_range() to deal more honestly
with non-default collations.
This isn't back-patchable, because it requires adding another argument
to ineq_histogram_selectivity, and because it might have significant
impact on the estimation results for extension operators relying on
scalarineqsel --- mostly for the better, one hopes, but in any case
destabilizing plan choices in back branches is best avoided.
Per investigation of a report from James Lucas.
Discussion: https://postgr.es/m/CAAFmbbOvfi=wMM=3qRsPunBSLb8BFREno2oOzSBS=mzfLPKABw@mail.gmail.com
Commit 5e0928005 changed the planner so that, instead of blindly using
DEFAULT_COLLATION_OID when invoking operators for selectivity estimation,
it would use the collation of the column whose statistics we're
considering. This was recognized as still being not quite the right
thing, but it seemed like a good incremental improvement. However,
shortly thereafter we introduced nondeterministic collations, and that
creates cases where operators can fail if they're passed the wrong
collation. We don't want planning to fail in cases where the query itself
would work, so this means that we *must* use the query's collation when
invoking operators for estimation purposes.
The only real problem this creates is in ineq_histogram_selectivity, where
the binary search might produce a garbage answer if we perform comparisons
using a different collation than the column's histogram is ordered with.
However, when the query's collation is significantly different from the
column's default collation, the estimate we previously generated would be
pretty irrelevant anyway; so it's not clear that this will result in
noticeably worse estimates in practice. (A follow-on patch will improve
this situation in HEAD, but it seems too invasive for back-patch.)
The patch requires changing the signatures of mcv_selectivity and allied
functions, which are exported and very possibly are used by extensions.
In HEAD, I just did that, but an API/ABI break of this sort isn't
acceptable in stable branches. Therefore, in v12 the patch introduces
"mcv_selectivity_ext" and so on, with signatures matching HEAD, and makes
the old functions into wrappers that assume DEFAULT_COLLATION_OID should
be used. That does not match the prior behavior, but it should avoid risk
of failure in most cases. (In practice, I think most extension datatypes
aren't collation-aware, so the change probably doesn't matter to them.)
Per report from James Lucas. Back-patch to v12 where the problem was
introduced.
Discussion: https://postgr.es/m/CAAFmbbOvfi=wMM=3qRsPunBSLb8BFREno2oOzSBS=mzfLPKABw@mail.gmail.com
DES has been deprecated in OpenSSL 3.0.0 which makes loading keys
encrypted with DES fail with "fetch failed". Solve by changing the
cipher used to aes256 which has been supported since 1.0.1 (and is
more realistic to use anyways).
Note that the minimum supported OpenSSL version is 1.0.1 as of
7b283d0e1d, so this does not introduce
any new version requirements.
Author: Daniel Gustafsson <daniel@yesql.se>
Discussion: https://www.postgresql.org/message-id/flat/FEF81714-D479-4512-839B-C769D2605F8A%40yesql.se
If the flag value is lost, logical decoding would work the same way as
REPLICA IDENTITY NOTHING, meaning that no old tuple values would be
included anymore in the changes produced by logical decoding.
Author: Michael Paquier
Reviewed-by: Euler Taveira
Discussion: https://postgr.es/m/20200603065340.GK89559@paquier.xyz
Backpatch-through: 12
It's intentional that we don't allow values greater than 24 hours,
while we do allow "24:00:00" as well as "23:59:60" as inputs.
However, the range check was miscoded in such a way that it would
accept "23:59:60.nnn" with a nonzero fraction. For time or timetz,
the stored result would then be greater than "24:00:00" which would
fail dump/reload, not to mention possibly confusing other operations.
Fix by explicitly calculating the result and making sure it does not
exceed 24 hours. (This calculation is redundant with what will happen
later in tm2time or tm2timetz. Maybe someday somebody will find that
annoying enough to justify refactoring to avoid the duplication; but
that seems too invasive for a back-patched bug fix, and the cost is
probably unmeasurable anyway.)
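The resulting behavior:
SELECT '24:00:00'::time;    -- allowed
SELECT '23:59:60'::time;    -- allowed, taken as 24:00:00
SELECT '23:59:60.5'::time;  -- now rejected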
Note that this change also rejects such input as the time portion
of a timestamp(tz) value.
Back-patch to v10. The bug is far older, but to change this pre-v10
we'd need to ensure that the logic behaves sanely with float timestamps,
which is possibly nontrivial due to roundoff considerations.
Doesn't really seem worth troubling with.
Per report from Christoph Berg.
Discussion: https://postgr.es/m/20200520125807.GB296739@msg.df7cb.de
The preferred terminology has been support "function", not procedure,
for some time, so change that over. The command stays \dAp, since
\dAf is already something else.
Fix some more violations of the "only straight-line code inside a
spinlock" rule. These are hazardous not only because they risk
holding the lock for an excessively long time, but because it's
possible for palloc to throw elog(ERROR), leaving a stuck spinlock
behind.
copy_replication_slot() had two separate places that did pallocs
while holding a spinlock. We can make the code simpler and safer
by copying the whole ReplicationSlot struct into a local variable
while holding the spinlock, and then referencing that copy.
(While that's arguably more cycles than we really need to spend
holding the lock, the struct isn't all that big, and this way seems
far more maintainable than copying fields piecemeal. Anyway this
is surely much cheaper than a palloc.) That bug goes back to v12.
InvalidateObsoleteReplicationSlots() not only did a palloc while
holding a spinlock, but for extra sloppiness then leaked the memory
--- probably for the lifetime of the checkpointer process, though
I didn't try to verify that. Fortunately that silliness is new
in HEAD.
pg_get_replication_slots() had a cosmetic violation of the rule,
in that it only assumed it's safe to call namecpy() while holding
a spinlock. Still, that's a hazard waiting to bite somebody, and
there were some other cosmetic coding-rule violations in the same
function, so clean it up. I back-patched this as far as v10; the
code exists before that but it looks different, and this didn't
seem important enough to adapt the patch further back.
Discussion: https://postgr.es/m/20200602.161518.1399689010416646074.horikyota.ntt@gmail.com
Previously UpdateSpillStats() called elog(DEBUG2) while holding
the spinlock even though the local variables that the elog() accesses
don't need to be protected by the lock. Since spinlocks are intended
for very short-term locks, they should not be used when calling
elog(DEBUG2). So this commit moves that elog() out of the section
protected by the spinlock.
Author: Kyotaro Horiguchi
Reviewed-by: Amit Kapila and Fujii Masao
Discussion: https://postgr.es/m/20200602.161518.1399689010416646074.horikyota.ntt@gmail.com
The recipe was previously given in comments in the module's test
script, but now we have an explicit recipe in the Makefile. The now
redundant comments in the script are removed.
This recipe shouldn't be needed in normal use, as the certificate and
key are in git and don't need to be regenerated.
Discussion: https://postgr.es/m/ae8f21fc-95cb-c98a-f241-1936133f466f@2ndQuadrant.com
This issue has been present since the introduction of this code as of
a3519a2 from 2002, and has been found by buildfarm member prion that
uses RELCACHE_FORCE_RELEASE via the tests introduced recently in
e786be5.
Discussion: https://postgr.es/m/20200601022055.GB4121@paquier.xyz
Backpatch-through: 9.5
A relation that has no storage initializes rd_tableam to NULL, which
caused those two functions to crash because of a pointer dereference.
Note that in 11 and older versions, this has always failed with a
confusing error "could not open file".
These two functions are used by the Postgres ODBC driver, which requires
them only when connecting to a backend strictly older than 8.1. When
connected to 8.2 or a newer version, the driver uses a RETURNING clause
instead whose support has been added in 8.2, so it should be possible to
just remove both functions in the future. This is left as an issue to
address later.
While on it, add more regression tests for those functions as we never
really had coverage for them, and for aggregates of TIDs.
Reported-by: Jaime Casanova, via sqlsmith
Author: Michael Paquier
Reviewed-by: Álvaro Herrera
Discussion: https://postgr.es/m/CAJGNTeO93u-5APMga6WH41eTZ3Uee9f3s8dCpA-GSSqNs1b=Ug@mail.gmail.com
Backpatch-through: 12
Commit 1f39bce021 added disk-based hash aggregation, which may spill
incoming tuples to disk. It however did not request projection to make
the tuples as narrow as possible, which may mean having to spill much
more data than necessary (increasing I/O, pushing other stuff from page
cache, etc.).
This adds CP_SMALL_TLIST in places that may use hash aggregation - we do
that only for AGG_HASHED. It's unnecessary for AGG_SORTED, because that
either uses explicit Sort (which already does projection) or pre-sorted
input (which does not need spilling to disk).
Author: Tomas Vondra
Reviewed-by: Jeff Davis
Discussion: https://postgr.es/m/20200519151202.u2p2gpiawoaznsv2%40development
The repeat() function loops for potentially a long time without
ever checking for interrupts. This prevents, for example, a query
cancel from interrupting until the work is all done. Fix by
inserting a CHECK_FOR_INTERRUPTS() into the loop.
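For example, a deliberately large call such as
SELECT length(repeat('x', 100000000));
now responds promptly to a query cancel instead of running to
completion first.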
Backpatch to all supported versions.
Discussion: https://www.postgresql.org/message-id/flat/8692553c-7fe8-17d9-cbc1-7cddb758f4c6%40joeconway.com
ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE was used in an ereport with the
same message but different errdetail a few lines earlier, so use that
here as well.
Backpatch-through: 11
Disk-based HashAgg relies on writing to multiple tapes
concurrently. Avoid fragmentation of the tapes' blocks by
preallocating many blocks for a tape at once. No file operations are
performed during preallocation; only the block numbers are reserved.
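A sketch of the idea, with hypothetical field names; blocks are handed
out from a reserved batch that doubles in size (up to a cap) each time
it runs dry:

    static long
    ltsGetPreallocBlock(LogicalTapeSet *lts, LogicalTape *lt)
    {
        if (lt->nprealloc == 0)
        {
            lt->prealloc_size = Min(lt->prealloc_size * 2, PREALLOC_MAX);
            /* reserve block numbers only; no file operations here */
            for (int i = lt->prealloc_size - 1; i >= 0; i--)
                lt->prealloc[i] = ltsGetFreeBlock(lts);
            lt->nprealloc = lt->prealloc_size;
        }
        return lt->prealloc[--lt->nprealloc];
    }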
Reviewed-by: Tomas Vondra
Discussion: https://postgr.es/m/20200519151202.u2p2gpiawoaznsv2%40development
These were missed when they were added to pg_hba.conf in PG 12;
this updates the docs and pg_hba.conf.sample.
Reported-by: Arthur Nascimento
Bug: 16380
Discussion: https://postgr.es/m/20200421182736.GG19613@momjian.us
Backpatch-through: 12
The following commands have been missing calls to object access hooks
InvokeObjectPost{Create|Alter}Hook normally applied to all commands:
- ALTER RULE RENAME TO
- ALTER USER MAPPING
- CREATE ACCESS METHOD
- CREATE STATISTICS
Thanks also to Robert Haas for the discussion.
Author: Mark Dilger
Reviewed-by: Álvaro Herrera, Michael Paquier
Discussion: https://postgr.es/m/435CD295-F409-44E0-91EC-DF32C7AFCD76@enterprisedb.com
The previous indentation of optimizer functions was unclear; adjust the
indentation dashes so that a deeper level of indentation indicates that
the outer optimizer function calls the inner one.
Author: Richard Guo, with additional change by me
Reviewed-by: Kyotaro Horiguchi
Discussion: https://postgr.es/m/CAMbWs4-U-ogzpchGsP2BBMufCss1hktm%2B%2BeTJK_dUC196pw0cQ%40mail.gmail.com
When installing binaries and libraries using the MSVC installation
routines, the operation is done after moving to the root folder, whose
location is detected by checking twice in a row whether "configure"
exists. So, calling the installation script from src/tools/msvc/ with an
extra "configure" file four levels above the root of the code tree
causes the execution to go further up, leading to a failure in finding
the builds. This commit fixes the issue by moving to the root folder of
the code tree only once, when necessary.
Author: Arnold Müller
Reviewed-by: Daniel Gustafsson
Discussion: https://postgr.es/m/16343-f638f67e7e52b86c@postgresql.org
Backpatch-through: 9.5
This should be plain old ColId. Making it so makes the grammar less
complicated, and makes the compiled tables a kilobyte or so smaller
(likely because they don't have to deal with a keyword classification
that's not used anyplace else).
Commit 624686abcf added an assertion that verified that _bt_search
successfully relocated the leaf page undergoing deletion. Page deletion
cannot deal with the case where the descent stack is to the right of the
page, so this seemed critical (deletion can only handle the case where
the descent stack is to the left of the leaf/target page). However, the
assertion went a bit too far.
Since only a buffer pin is held on the leaf page throughout the call to
_bt_search, nothing guarantees that it can't have split during this
small window. And if it does actually split, _bt_search may end up
"relocating" a page to the right of the original target leaf page. This
scenario seems extremely unlikely, but it must still be considered.
Remove the assertion, and document how we cope in this scenario.
Make the number of translate_columns elements match the number of output
columns. The only "true" value previously specified seems to have been
intended for the opfamily operator "purpose" column. But that column
already has translated values substituted, so all elements in
translate_columns[] should be "false".
This commit changes ORDER BY clause for \dAo and \dAp psql commands in
the following way.
* Operators for the same types are grouped together.
* Same-class operators and procedures are listed before cross-class operators
and procedures.
Modifying the ORDER BY clause for \dAp required removing the DISTINCT
clause, which doesn't seem to affect anything.
Discussion: https://postgr.es/m/20200511210856.GA18368%40alvherre.pgsql
Author: Alvaro Herrera, revised by me
Reviewed-by: Alexander Korotkov, Nikita Glukhov
Synchronize the event names for parallel hash join waits with other
event names, by getting rid of the slashes and dropping "-ing"
suffixes. Rename ClogGroupUpdate to XactGroupUpdate, to match the
new SLRU name. Move the ProcSignalBarrier event to the IPC category;
it doesn't belong under IO.
Also a bit more wordsmithing in the wait event documentation tables.
Discussion: https://postgr.es/m/4505.1589640417@sss.pgh.pa.us
d140f2f3 has renamed receivedUpto to flushedUpto, and has added
writtenUpto to the WAL receiver's shared memory information, but
pg_stat_wal_receiver was not consistent with that. This commit renames
received_lsn to flushed_lsn, and adds a new column called written_lsn.
Bump catalog version.
Author: Michael Paquier
Reviewed-by: Álvaro Herrera
Discussion: https://postgr.es/m/20200515090817.GA212736@paquier.xyz
libpq's exports.txt was overlooked in commit 36d108761, which the
buildfarm is quite unhappy about.
Also, I'd gathered that the plan included renaming PQgetSSLKeyPassHook
to PQgetSSLKeyPassHook_OpenSSL, but that didn't happen in the patch
as committed. I'm taking it on my own authority to do so now, since
the window before beta1 is closing fast.
4dc6355210 provided a way for libraries and clients to modify how libpq
handles client certificate passphrases, by installing a hook. However,
these routines are quite specific to how OpenSSL works, so it's
misleading and not future-proof to have these names not refer to OpenSSL.
Change all the names to add "_OpenSSL" after "Hook", and fix the docs
accordingly.
Author: Daniel Gustafsson
Discussion: https://postgr.es/m/981DE552-E399-45C2-9F60-3F0E3770CC61@yesql.se
Thomas Munro fixed a longstanding annoyance in pg_bsd_indent, that
it would misformat lines containing IsA() macros on the assumption
that the IsA() call should be treated like a cast. This improves
some other cases involving field/variable names that match typedefs,
too. The only places that get worse are a couple of uses of the
OpenSSL macro STACK_OF(); we'll gladly take that trade-off.
Discussion: https://postgr.es/m/20200114221814.GA19630@alvherre.pgsql
In a logical replication subscriber, a table using REPLICA IDENTITY FULL
that has a primary key would try to use the primary key's index to scan
for a tuple, but an assertion assumed as correct only the case of an
index associated with REPLICA IDENTITY USING INDEX. This commit corrects
the assertion so that the use of a primary key index is also a valid
case.
Reported-by: Dilip Kumar
Analyzed-by: Dilip Kumar
Author: Euler Taveira
Reviewed-by: Michael Paquier, Masahiko Sawada
Discussion: https://postgr.es/m/CAFiTN-u64S5bUiPL1q5kwpHNd0hRnf1OE-bzxNiOs5zo84i51w@mail.gmail.com
Backpatch-through: 10
It's just weird that this name wasn't chosen to look like an
identifier. The suspicion that it wasn't thought about too
hard is reinforced by the fact that it wasn't documented in
the pg_locks view (until I did so, a day or two back).
Update, and add a comment reminding future adjusters of this
array to fix the docs too.
Do some desultory wordsmithing on various entries in the wait
events tables.
Discussion: https://postgr.es/m/24595.1589326879@sss.pgh.pa.us
In commit 850196b610 I (Álvaro) failed to handle the case of walsender
shutting down on an error before setting up its 'xlogreader' pointer;
the error handling code dereferences the pointer, causing a crash.
Fix by testing the pointer before trying to dereference it.
Kyotaro authored the code fix; I adapted Nathan's test case to be used
by the TAP tests and added the necessary PostgresNode change.
Reported-by: Nathan Bossart <bossartn@amazon.com>
Author: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://postgr.es/m/C04FC24E-903D-4423-B312-6910E4D846E5@amazon.com
This was mostly confusing, especially since some wait events in
this class had the suffix and some did not.
While at it, stop exposing MainLWLockNames[] as a globally visible
name; any code using that directly is almost certainly wrong, as
its name has been misleading for some time.
(GetLWLockIdentifier() is what to use instead.)
Discussion: https://postgr.es/m/28683.1589405363@sss.pgh.pa.us
The previous coding zeroed out offsetof(ReplicationStateCtl, states)
more bytes than it was entitled to, as a consequence of starting the
zeroing from the wrong pointer (or, if you prefer, using the wrong
calculation of how much to zero).
It's unsurprising that this has not caused any reported problems,
since it can be expected that the newly-allocated block is at the end
of what we've used in shared memory, and we always make the shmem
block substantially bigger than minimally necessary. Nonetheless,
this is wrong and it could bite us someday; plus it's a dangerous
model for somebody to copy.
This dates back to the introduction of this code (commit 5aa235042),
so back-patch to all supported branches.
Choose names that fit into the conventions for wait event names
(particularly, that multi-word names are in the style MultiWordName)
and hopefully convey more information to non-hacker users than the
previous names did.
Also rename SerializablePredicateLockListLock to
SerializablePredicateListLock; the old name was long enough to cause
table formatting problems, plus the double occurrence of "Lock" seems
confusing/error-prone.
Also change a couple of particularly opaque LWLock field names.
Discussion: https://postgr.es/m/28683.1589405363@sss.pgh.pa.us
_bt_killitems marks btree items dead when a scan leaves the page where
they live, but it does so with only a share lock (to improve
concurrency). This was historically okay, since killing a dead item has
no consequences. However, with the advent of data checksums and
wal_log_hints, this action incurs a WAL full-page-image record of the
page. Multiple concurrent processes would write the same page several
times, leading to WAL bloat. The probability of this happening can be
reduced by only killing items if they're not already dead, so change the
code to do that.
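A minimal sketch of the guard (loop context omitted):

    ItemId  iid = PageGetItemId(page, offnum);

    /* only dirty the page for items not already marked dead */
    if (!ItemIdIsDead(iid))
    {
        ItemIdMarkDead(iid);
        killedsomething = true;
    }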
The problem could be eliminated completely by having _bt_killitems
upgrade to an exclusive lock upon seeing a killable item, but that would
reduce concurrency, so it's considered a cure worse than the disease.
Backpatch all the way back to 9.5, since wal_log_hints was introduced in
9.4.
Author: Masahiko Sawada <masahiko.sawada@2ndquadrant.com>
Discussion: https://postgr.es/m/CA+fd4k6PeRj2CkzapWNrERkja5G0-6D-YQiKfbukJV+qZGFZ_Q@mail.gmail.com
Originally, the names assigned to SLRUs had no purpose other than
being shmem lookup keys, so not a lot of thought went into them.
As of v13, though, we're exposing them in the pg_stat_slru view and
the pg_stat_reset_slru function, so it seems advisable to take a bit
more care. Rename them to names based on the associated on-disk
storage directories (which fortunately we *did* think about, to some
extent; since those are also visible to DBAs, consistency seems like
a good thing). Also rename the associated LWLocks, since those names
are likewise user-exposed now as wait event names.
For the most part I only touched symbols used in the respective modules'
SimpleLruInit() calls, not the names of other related objects. This
renaming could have been taken further, and maybe someday we will do so.
But for now it seems undesirable to change the names of any globally
visible functions or structs, so some inconsistency is unavoidable.
(But I *did* terminate "oldserxid" with prejudice, as I found that
name both unreadable and not descriptive of the SLRU's contents.)
Table 27.12 needs re-alphabetization now, but I'll leave that till
after the other LWLock renamings I have in mind.
Discussion: https://postgr.es/m/28683.1589405363@sss.pgh.pa.us
COPY TO released the ACCESS SHARE lock immediately when it was done rather
than holding on to it until the end of the transaction.
This broke REPEATABLE READ semantics: such a transaction could see an
empty table if it repeated a COPY statement and somebody truncated the
table in the meantime.
Before 4dded12faa the lock was also released after COPY FROM, but that
commit failed to notice the irregularity in COPY TO.
This is old behavior, but it doesn't seem important enough to backpatch.
Author: Laurenz Albe, based on suggestion by Robert Haas and Tom Lane
Reviewed-by: Amit Kapila
Discussion: https://postgr.es/m/7bcfc39d4176faf85ab317d0c26786953646a411.camel@cybertec.at
The reasons why event triggers are disabled in standalone mode are
documented in the code path of ddl_command_start, and other places that
check whether standalone mode is enabled refer to the comment for
ddl_command_start, except for table_rewrite, which duplicated the same
explanation.
Reported-by: David G. Johnston
Discussion: https://postgr.es/m/CAKFQuwYqHtXpvr2mBJRwH9f+Y5y1GXw3rhbaAu0Dk2MoNevsmA@mail.gmail.com
Includes some manual cleanup of places that pgindent messed up,
most of which weren't per project style anyway.
Notably, it seems some people didn't absorb the style rules of
commit c9d297751, because there were a bunch of new occurrences
of function calls with a newline just after the left paren, all
with faulty expectations about how the rest of the call would get
indented.
There is little point in using the LWLockRegisterTranche mechanism for
built-in tranche names. It wastes cycles, it creates opportunities for
bugs (since failing to register a tranche name is a very hard-to-detect
problem), and the lack of any centralized list of names encourages
sloppy nonconformity in name choices. Moreover, since we have a
centralized list of the tranches anyway in enum BuiltinTrancheIds, we're
certainly not buying any flexibility in return for these disadvantages.
Hence, nuke all the backend-internal LWLockRegisterTranche calls,
and instead provide a const array of the builtin tranche names.
(I have in mind to change a bunch of these names shortly, but this
patch is just about getting them into one place.)
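As an illustration of the approach (entries abridged; the real table
has one entry per BuiltinTrancheIds member):

    static const char *const BuiltinTrancheNames[] = {
        /* LWTRANCHE_CLOG_BUFFERS */
        "clog",
        /* LWTRANCHE_COMMITTS_BUFFERS */
        "commit_timestamp",
        /* ... */
    };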
Discussion: https://postgr.es/m/9056.1589419765@sss.pgh.pa.us
Commit 3eb77eba5a moved the loop and refactored it, and inadvertently
changed the effect of fsync=off so that it also skipped removing entries
from the pendingOps table. That was not intentional, and leads to an
assertion failure if you turn fsync on while the server is running and
reload the config.
Backpatch-through: 12
Reviewed-By: Thomas Munro
Discussion: https://www.postgresql.org/message-id/3cbc7f4b-a5fa-56e9-9591-c886deb07513%40iki.fi
Visual Studio 2015 and later versions should still be able to do the
same as Visual Studio 2012, but the declaration of locale_name is
missing in _locale_t, causing compilation to fail. Hence, this instead
falls back to enumerating all system locales with EnumSystemLocalesEx
to find the required locale name. If the input argument is in Unix
style, then we can get the ISO locale name directly by using
GetLocaleInfoEx() with LCType as LOCALE_SNAME.
In passing, update the now-obsolete links referenced in the
documentation.
Note that this problem occurs only with NLS-enabled builds.
Author: Juan José Santamaría Flecha, Davinder Singh and Amit Kapila
Reviewed-by: Ranier Vilela and Amit Kapila
Backpatch-through: 9.5
Discussion: https://postgr.es/m/CAHzhFSFoJEWezR96um4-rg5W6m2Rj9Ud2CNZvV4NWc9tXV7aXQ@mail.gmail.com
The defect suppressed a Standby Status Update message when bytes flushed
to disk had changed but bytes received had not changed. If
pg_recvlogical then exited with no intervening Standby Status Update,
the next pg_recvlogical repeated already-flushed records. The defect
could also cause superfluous messages, which are functionally harmless.
Back-patch to 9.5 (all supported versions).
Discussion: https://postgr.es/m/20200502221647.GA3941274@rfd.leadboat.com
pg_recvlogical merely called PQfinish(), so the backend sent messages
after the disconnect. When that caused EPIPE in internal_flush(),
before a LogicalConfirmReceivedLocation(), the next pg_recvlogical would
repeat already-acknowledged records. Whether or not the defect causes
EPIPE, post-disconnect messages could contain an ErrorResponse that the
user should see. One properly ends PGRES_COPY_OUT by repeating
PQgetCopyData() until it returns a negative value. Augment one of the
tests to cover the case of WAL past --endpos. Back-patch to v10, where
commit 7c030783a5 first appeared. Before
that commit, pg_recvlogical never reached PGRES_COPY_OUT.
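For reference, a hedged sketch of draining COPY OUT properly with
libpq (error handling abbreviated):

    char   *buf;
    int     len;

    /* > 0 is a data row; -1 means COPY is done; -2 means error */
    while ((len = PQgetCopyData(conn, &buf, 0)) > 0)
        PQfreemem(buf);

    /* now fetch the result, which may carry an ErrorResponse */
    res = PQgetResult(conn);
    if (PQresultStatus(res) != PGRES_COMMAND_OK)
        fprintf(stderr, "%s", PQerrorMessage(conn));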
Reported by Thomas Munro.
Discussion: https://postgr.es/m/CAEepm=1MzM2Z_xNe4foGwZ1a+MO_2S9oYDq3M5D11=JDU_+0Nw@mail.gmail.com
Previously, AsyncShmemInit forcibly initialized the first page of the
async SLRU, to save dealing with that case in asyncQueueAddEntries.
But this is a poor tradeoff, since many installations do not ever use
NOTIFY; for them, expending those cycles in AsyncShmemInit is a
complete waste. Besides, this only saves a couple of instructions
in asyncQueueAddEntries, which hardly seems likely to be measurable.
The real reason to change this now, though, is that now that we track
SLRU access stats, the existing code is causing the postmaster to
accumulate some access counts, which then get inherited into child
processes by fork(), messing up the statistics. Delaying the
initialization into the first child that does a NOTIFY fixes that.
Hence, we can revert f3d23d83e, which was an incorrect attempt at
fixing that issue. Also, add an Assert to pgstat.c that should
catch any future errors of the same sort.
Discussion: https://postgr.es/m/8367.1589391884@sss.pgh.pa.us
The additional pain from level 4 is excessive for the gain.
Also revert all the source annotation changes to their original
wordings, to avoid back-patching pain.
Discussion: https://postgr.es/m/31166.1589378554@sss.pgh.pa.us
Instead of re-identifying which statistics bucket to use for a given
SLRU on every counter increment, do it once during shmem initialization.
This saves a fair number of cycles, and there's no real cost because
we could not have a bucket assignment that varies over time or across
backends anyway.
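A sketch of the idea (field name hypothetical):

    /* in SimpleLruInit(): resolve the stats bucket exactly once */
    shared->slru_stats_idx = pgstat_slru_index(name);

    /* on each counter bump, pass the cached index; no name lookup */
    pgstat_count_slru_page_hit(shared->slru_stats_idx);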
Also, get rid of the ill-considered decision to let pgstat.c pry
directly into SLRU's shared state; it's cleaner just to have slru.c
pass the stats bucket number.
In consequence of these changes, there's no longer any need to store
an SLRU's LWLock tranche info in shared memory, so get rid of that,
making this a net reduction in shmem consumption. (That partly
reverts fe702a7b3.)
This is basically code review for 28cac71bd, so I also cleaned up
some comments, removed a dangling extern declaration, fixed some
things that should be static and/or const, etc.
Discussion: https://postgr.es/m/3618.1589313035@sss.pgh.pa.us
* Have both physical and logical walsender share a 'xlogreader' state
struct for tracking state. This replaces the existing globals sendSeg
and sendCxt.
* Change WALRead not to receive XLogReaderState->seg and ->segcxt as
separate arguments anymore; just use the ones from 'state'. This is
made possible by the above change.
* Have the XLogReader segment_open contract require the callbacks to
install the file descriptor in the state struct themselves instead of
returning it. xlogreader was already ignoring any possible failed
return from the callbacks, relying solely on them never returning.
(This point is not altogether excellent, as it means the callbacks
have to know more of XLogReaderState; but to really improve on that
we would have to pass back error info from the callbacks to
xlogreader. And the complexity would not be saved but instead just
transferred to the callers of WALRead, which would have to learn how
to throw errors from the open_segment callback in addition to, as
currently, from pg_pread.) See the sketch after this list.
* segment_open no longer receives the 'segcxt' as a separate argument,
since it's part of the XLogReaderState argument.
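A hedged sketch of the new segment_open contract, modeled on the
walsender callback (timeline handling omitted):

    static void
    WALSndSegmentOpen(XLogReaderState *state, XLogSegNo nextSegNo,
                      TimeLineID *tli_p)
    {
        char    path[MAXPGPATH];

        XLogFilePath(path, *tli_p, nextSegNo, state->segcxt.ws_segsize);
        state->seg.ws_file = BasicOpenFile(path, O_RDONLY | PG_BINARY);
        if (state->seg.ws_file < 0)
            ereport(ERROR,
                    (errcode_for_file_access(),
                     errmsg("could not open file \"%s\": %m", path)));
    }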
Per comments from Kyotaro Horiguchi.
Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://postgr.es/m/20200511203336.GA9913@alvherre.pgsql
This commit changes pg_stat_get_slru() so that it uses
TimestampTzGetDatum() for the stats_reset field, because that field
stores a timestamp with time zone value. Previously Int64GetDatum()
was used.
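In effect (a sketch; the field and array index are illustrative):

    /* before: misinterpreted the timestamptz value as a plain int64 */
    values[i] = Int64GetDatum(stat.stat_reset_timestamp);
    /* after */
    values[i] = TimestampTzGetDatum(stat.stat_reset_timestamp);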
Author: Fujii Masao
Reviewed-by: Tomas Vondra
Discussion: https://postgr.es/m/b8784fe6-1401-ab35-aa14-d57b5bb8e312@oss.nttdata.com
Previously, since SLRUStats was not initialized, SLRU stats counters
could begin with a non-zero value, which could lead to incorrect results
in the pg_stat_slru view.
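The fix amounts to zeroing the counters during stats initialization
(sketch):

    memset(&SLRUStats, 0, sizeof(SLRUStats));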
Author: Fujii Masao
Reviewed-by: Tomas Vondra
Discussion: https://postgr.es/m/976bbb73-a112-de3c-c488-b34b64609793@oss.nttdata.com
Use it at level 4, a bit more restrictive than the default level, and
tweak the comments marking intentional fall-through to FALLTHROUGH.
(However, leave zic.c alone, since it's external code; to avoid the
warnings that would appear there, change CFLAGS for that file in the
Makefile.)
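For reference, the comment style the warning recognizes looks like
this (hypothetical switch):

    switch (kind)
    {
        case KIND_A:
            handle_a();
            /* FALLTHROUGH */
        case KIND_B:
            handle_b();
            break;
    }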
Author: Julien Rouhaud <rjuju123@gmail.com>
Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/20200412081825.qyo5vwwco3fv4gdo@nol
Discussion: https://postgr.es/m/flat/E1fDenm-0000C8-IJ@gemulon.postgresql.org
The explain format used by incremental sort was somewhat inconsistent
with other nodes, making it harder to parse and understand. This commit
addresses that by
- adding an extra space to better separate groups of values
- using colons instead of equal signs to separate key/value
- properly capitalizing first letter of a key
- using separate lines for full and pre-sorted groups
These changes were proposed by Justin Pryzby and mostly copy the final
explain format used to report WAL usage.
Author: Justin Pryzby
Reviewed-by: James Coleman
Discussion: https://postgr.es/m/20200419023625.GP26953@telsasoft.com
Run renumber_oids.pl to move high-numbered OIDs down, as per pre-beta
tasks specified by RELEASE_CHANGES. For reference, the command was
./renumber_oids.pl --first-mapped-oid=8000 --target-oid=5032
Also run reformat_dat_file.pl while I'm here.
Renumbering recently-added types changed some results in the opr_sanity
test. To make those a bit easier to eyeball-verify, change the queries
to show regtype not just bare type OIDs. (I think we didn't have
regtype when these queries were written.)
The existing callers of XLogReadDetermineTimeline() performing recovery
need to check a replay LSN position when determining on which timeline
to read a WAL page. A portion of the comment describing this function
said exactly that, but referred to a routine for fetching a write
LSN, something not available in recovery.
Author: Kyotaro Horiguchi
Discussion: https://postgr.es/m/20200511.101619.2043820539323292957.horikyota.ntt@gmail.com
Restructure the function that locates the root of the to-be-deleted
subtree during nbtree page deletion. Handle the conditions that make
page deletion unsafe in a slightly more uniform way, and acknowledge the
fact that the behavior with incomplete splits on internal pages is
different (as pointed out in the nbtree README as of commit 35bc0ec7).
Also invent new terminology that avoids ambiguity around which pages are
about to be deleted. Consistently use the term "to-be-deleted subtree",
not the ambiguous term "branch".
We were calling the subtree parent page the "top parent page", but that
was quite misleading. The top parent page usually refers to a page
unlinked from its siblings and marked deleted (during the second stage
of page deletion). There was one kind of top parent page that we merely
removed a downlink from, and another kind of top parent page that we
actually marked deleted. Eliminate the ambiguity by inventing a new
term ("subtree parent page") that refers to the former kind of page
only.
The one in xlogreader.h was pointed out by Antonin Houska; I (Álvaro) noticed the
others by grepping.
Author: Antonin Houska <ah@cybertec.at>
Discussion: https://postgr.es/m/28250.1589186654@antos
Somehow we'd never noticed this oversight, even though it means
that such basic columns as pg_proc.proargtypes were not being
validated by the oidjoins test. Correct the query and update
the test script with the newly-found dependencies.
Incremental sort always processes at least one full group before
switching to prefix groups, so it's enough to check just the number of
full groups. There was no risk of division by zero due to the extra
condition, but it made the code harder to understand.
Reported-by: Ranier Vilela
Discussion: https://postgr.es/m/CAEudQAp+7qoS92-4V1vLChpdY3vEkLCbf+gye6P-4cirE-0z0A@mail.gmail.com
ExecReScanIncrementalSort was resetting bounded=false, which means the
optimization would be disabled on all rescans. This happens because
ExecSetTupleBound is called before the rescan, not after it.
Author: James Coleman
Reviewed-by: Tomas Vondra
Discussion: https://postgr.es/m/20200414065336.GI1492@paquier.xyz
The executor flags were not handled entirely correctly, although the
bugs were mostly harmless, amounting mostly to comment inaccuracy. We
don't need to strip any of the flags for child nodes.
Incremental sort does not support backward scans or mark/restore, so
MARK/BACKWARDS flags should not be possible. So we simply ensure this
using an assert, and we don't bother removing them when initializing
the child node.
With REWIND it's a bit less clear - incremental sort does not support
REWIND, but there is no way to signal this - it's legal to just ignore
the flag. We however continue passing the flag to child nodes, because
they might be able to leverage it.
Reported-by: Michael Paquier
Author: James Coleman
Reviewed-by: Tomas Vondra
Discussion: https://postgr.es/m/20200414065336.GI1492@paquier.xyz
We seem to have forgotten to do this in the v12 cycle, so add it as
a task in the RELEASE_CHANGES list, in hopes we won't forget again.
While here, fix findoidjoins.c so that it actually works in the
new dispensation where OID is a regular column, and change it to only
consider system relations (this avoids being fooled by the OID column
in the brintest test table).
Also tweak the largeobject test so that the somewhat-recently-added
manual creation of a LO with an OID in the system range doesn't
fool findoidjoins.c. For the moment I just made that use an unused
OID, but we might have to find a more robust solution someday.