Somehow we'd never noticed this oversight, even though it means
that such basic columns as pg_proc.proargtypes were not being
validated by the oidjoins test. Correct the query and update
the test script with the newly-found dependencies.
Incremental sort always processes at least one full group before
switching to prefix groups, so it's enough to check just the number of
full groups. There was no risk of division by zero due to the extra
condition, but it made the code harder to understand.
Reported-by: Ranier Vilela
Discussion: https://postgr.es/m/CAEudQAp+7qoS92-4V1vLChpdY3vEkLCbf+gye6P-4cirE-0z0A@mail.gmail.com
ExecReScanIncrementalSort was resetting bounded=false, which means the
optimization would be disabled on all rescans. This happens because
ExecSetTupleBound is called before the rescan, not after it.
Author: James Coleman
Reviewed-by: Tomas Vondra
Discussion: https://postgr.es/m/20200414065336.GI1492@paquier.xyz
The executor flags were not handled entirely correctly, although the
bugs were mostly harmless; it was mostly a matter of inaccurate comments. We don't
need to strip any of the flags for child nodes.
Incremental sort does not support backward scans or mark/restore, so
the MARK/BACKWARD flags should never be set. We simply ensure this
using an assert, and we don't bother removing them when initializing
the child node.
With REWIND it's a bit less clear - incremental sort does not support
REWIND, but there is no way to signal this - it's legal to just ignore
the flag. We however continue passing the flag to child nodes, because
they might be able to take advantage of it.
Reported-by: Michael Paquier
Author: James Coleman
Reviewed-by: Tomas Vondra
Discussion: https://postgr.es/m/20200414065336.GI1492@paquier.xyz
We seem to have forgotten to do this in the v12 cycle, so add it as
a task in the RELEASE_CHANGES list, in hopes we won't forget again.
While here, fix findoidjoins.c so that it actually works in the
new dispensation where OID is a regular column, and change it to only
consider system relations (this avoids being fooled by the OID column
in the brintest test table).
Also tweak the largeobject test so that the somewhat-recently-added
manual creation of a LO with an OID in the system range doesn't
fool findoidjoins.c. For the moment I just made that use an unused
OID, but we might have to find a more robust solution someday.
Code review for 0dc8ead463, prompted by a bug closed by 91c40548d5.
XLogReader's system for opening and closing segments had gotten too
complicated, with callbacks being passed at both the XLogReaderAllocate
level (read_page) as well as at the WALRead level (segment_open). This
was confusing and hard to follow, so restructure things so that these
callbacks are passed together at XLogReaderAllocate time, and add
another callback to the set (segment_close) to make it a coherent whole.
Also, ensure XLogReaderState is an argument to all the callbacks, so
that they can grab at the ->private data if necessary.
Document the whole arrangement more clearly.
Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
Reviewed-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Discussion: https://postgr.es/m/20200422175754.GA19858@alvherre.pgsql
There were a few different ways to line-wrap the error messages. Make
them all the same, and use placeholders for the actual program names,
to save translation work.
TimelineHistoryRead and TimelineHistoryWrite wait events are reported
during waiting for a read and write of a timeline history file, respectively.
However, previously, TimelineHistoryRead wait event was not reported
while readTimeLineHistory() was reading a timeline history file. Also
TimelineHistoryWrite was not reported while writeTimeLineHistory() was
writing one line with the details of the timeline split, at the end.
This commit fixes these issues.
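For illustration (a hedged example, not part of the commit), both events
can now be observed with a query like:

SELECT pid, wait_event_type, wait_event
FROM pg_stat_activity
WHERE wait_event IN ('TimelineHistoryRead', 'TimelineHistoryWrite');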
Back-patch to v10 where wait events for a timeline history file were added.
Author: Masahiro Ikeda
Reviewed-by: Michael Paquier, Fujii Masao
Discussion: https://postgr.es/m/d11b0c910b63684424e06772eb844ab5@oss.nttdata.com
Factor out code common to _bt_lock_branch_parent() and _bt_pagedel()
into a new utility function. This new function is used to check that
the left sibling of a deletion target page does not have the
INCOMPLETE_SPLIT page flag set. If it is set then deletion is unsafe;
there won't be a usable pivot tuple (with a downlink) in the parent page
that points to the deletion target page. The page deletion algorithm is
not prepared to deal with that. Also restructure an existing, related
utility function that checks if the right sibling of the target page has
the ISHALFDEAD page flag set.
This organization highlights the symmetry between the two cases. The
goal is to make the design of page deletion clearer. Both functions
involve a sibling page with a flag that indicates that there was an
interrupted operation (a page split or a page deletion) that resulted in
a page pointed to by sibling pages, but not pointed to in the parent.
And, both functions indicate if page deletion is unsafe due to the
absence of a particular downlink in the parent page.
checkcondition_str() failed to report multiple matches for a prefix
pattern correctly: it would dutifully merge the match positions, but
then after exiting that loop, if the last prefix-matching word had
had no suitable positions, it would report there were no matches.
The upshot would be failing to recognize matches that the query
should have found.
It looks like you need all of these conditions to see the bug:
* a phrase search (else we don't ask for match position details)
* a prefix search item (else we don't get to this code)
* a weight restriction (else checkclass_str won't fail)
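For instance, a query shape combining all three conditions (a hedged
illustration, not taken from the report) would be:

SELECT to_tsvector('simple', 'hello world') @@
       to_tsquery('simple', 'hello <-> wor:*A');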
Noted while investigating a problem report from Pavel Borisov,
though this is distinct from the issue he was on about.
Back-patch to 9.6 where phrase search was added.
We were acquiring object locks then deleting objects one by one, instead
of acquiring all object locks first, ignoring those that did not exist,
and then deleting all objects together. The latter is the correct
protocol to use, and it is what this commit changes the code to do. Failing
to follow that leads to "cache lookup failed for relation XYZ" error
reports when DROP OWNED runs concurrently with other DDL -- for example,
a session termination that removes some temp tables.
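A hedged sketch of the affected scenario (role name invented):

DROP OWNED BY temp_role;
-- while another session concurrently drops some of temp_role's temp
-- tables, formerly producing: ERROR: cache lookup failed for relation NNN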
Author: Álvaro Herrera
Reported-by: Mithun Chicklore Yogendra (Mithun CY)
Reviewed-by: Ahsan Hadi, Tom Lane
Discussion: https://postgr.es/m/CADq3xVZTbzK4ZLKq+dn_vB4QafXXbmMgDP3trY-GuLnib2Ai1w@mail.gmail.com
Attempting to use an installation path of Python that includes spaces
caused the MSVC builds to fail. This fixes the issue by using the same
quoting method as ad7595b for OpenSSL.
Author: Victor Wagner
Discussion: https://postgr.es/m/20200430150608.6dc6b8c4@antares.wagner.home
Backpatch-through: 9.5
Both the backend and libpq leaked buffers containing encrypted data
to be transmitted, so that the process size would grow roughly as
the total amount of data sent.
There were also far-less-critical leaks of the same sort in GSSAPI
session establishment.
Oversight in commit b0b39f72b, which I failed to notice while
reviewing the code in 2c0cdc818.
Per complaint from pmc@citylink.
Back-patch to v12 where this code was introduced.
Discussion: https://postgr.es/m/20200504115649.GA77072@gate.oper.dinoex.org
In commit 33e05f89c5, we have added the option to display WAL usage
statistics in Explain and auto_explain. The display format used two spaces
between each field, which is inconsistent with Buffer usage statistics,
which uses one space between each field. Change the format to make WAL usage
statistics consistent with Buffer usage statistics.
This commit also changed the usage of "full page writes" to
"full page images" for WAL usage statistics to make it consistent with
other parts of code and docs.
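As a hedged illustration (counter values invented), the output now reads:

EXPLAIN (ANALYZE, WAL) INSERT INTO t VALUES (1);
...
 WAL: records=1 fpi=0 bytes=59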
Author: Julien Rouhaud, Amit Kapila
Reviewed-by: Justin Pryzby, Kyotaro Horiguchi and Amit Kapila
Discussion: https://postgr.es/m/CAB-hujrP8ZfUkvL5OYETipQwA=e3n7oqHFU=4ZLxWS_Cza3kQQ@mail.gmail.com
Remove one of the arguments to btvacuumpage(), and give up on the idea
that it's a recursive function. We now use the term "backtracking" to
refer to the case where an earlier block must be visited to make sure no
tuples that need to be removed were missed.
Advertising btvacuumpage() as a recursive function was unhelpful. In
reality the function always simulates recursion with a loop (it doesn't
actually call itself). This wasn't just necessary as a precaution (per
the comments mentioning tail recursion), though. There is no reliable
natural limit on the number of times we can backtrack.
There are important behavioral differences when "recursing"/backtracking,
mostly related to page deletion. We don't perform page deletion when
backtracking, due to the extra complexity. And when we backtrack, we're
not performing a physical order scan anymore, so we expect fairly
different conditions to hold for the page. Structuring the code like
this makes it clearer how _bt_pagedel() cooperates with btvacuumpage()
and btvacuumscan() (as established in commit b0229f26 and commit
73a076b0).
Author: Peter Geoghegan
Reviewed-By: Masahiko Sawada
Discussion: https://postgr.es/m/CAH2-WzmRGMDWiLMcb+zagG9652PboNN4Gfcq1Gc_wJL6A716MA@mail.gmail.com
If the client is compiled with GSSAPI support and tries to start up GSS
with the server, but the server is not compiled with GSSAPI support, we
would mistakenly end up falling through to call ProcessStartupPacket
with secure_done = true, but the client might then try to perform SSL,
which the backend wouldn't understand and we'd end up failing the
connection with:
FATAL: unsupported frontend protocol 1234.5679: server supports 2.0 to 3.0
Fix by arranging to track ssl_done independently from gss_done, instead
of trying to use the same boolean for both.
Author: Andrew Gierth
Discussion: https://postgr.es/m/87h82kzwqn.fsf@news-spur.riddles.org.uk
Backpatch: 12-, where GSSAPI encryption was added.
Writing a trailing semicolon in a macro is almost never the right thing,
because you almost always want to write a semicolon after each macro
call instead. (Even if there was some reason to prefer not to, pgindent
would probably make a hash of code formatted that way; so within PG the
rule should basically be "don't do it".) Thus, if we have a semi inside
the macro, the compiler sees "something;;". Much of the time the extra
empty statement is harmless, but it could lead to mysterious syntax
errors at call sites. In perhaps an overabundance of neatnik-ism, let's
run around and get rid of the excess semicolons wherever possible.
The only thing worse than a mysterious syntax error is a mysterious
syntax error that only happens in the back branches; therefore,
backpatch these changes where relevant, which is most of them because
most of these mistakes are old. (The lack of reported problems shows
that this is largely a hypothetical issue, but still, it could bite
us in some future patch.)
John Naylor and Tom Lane
Discussion: https://postgr.es/m/CACPNZCs0qWTqJ2QUSGJ07B7uvAvzMb-KbG2q+oo+J3tsWN5cqw@mail.gmail.com
On further reflection, code comments added by commit b0229f26 slightly
misrepresented how we determine the oldest btpo.xact for the index.
btvacuumpage() does not treat the btpo.xact of a page that it put in the
FSM as a candidate to be the oldest deleted page (the delete-marked page
that has the oldest btpo.xact XID among all pages encountered).
The definition of a deleted page for the purposes of the btpo.xact
calculation is different from the definition used by the bulk delete
statistics. The bulk delete statistics don't distinguish between pages
that were deleted by the current VACUUM, pages deleted by a previous
VACUUM operation but not yet recyclable/reusable, and pages that are
reusable (though reusable pages are counted separately).
Backpatch: 11-, just like commit b0229f26.
The logic for determining how many nbtree pages in an index are deleted
pages sometimes undercounted pages. Pages that were deleted by the
current VACUUM operation (as opposed to some previous VACUUM operation
whose deleted pages have yet to be reused) were sometimes overlooked.
The final count is exposed to users through VACUUM VERBOSE's "%u index
pages have been deleted" output.
btvacuumpage() avoided double-counting when _bt_pagedel() deleted more
than one page by assuming that only one page was deleted, and that the
additional deleted pages would get picked up during a future call to
btvacuumpage() by the same VACUUM operation. _bt_pagedel() can
legitimately delete pages that the btvacuumscan() scan will not visit
again, though, so that assumption was slightly faulty.
Fix the accounting by teaching _bt_pagedel() about its caller's
requirements. It now only reports on pages that it knows btvacuumscan()
won't visit again (including the current btvacuumpage() page), so
everything works out in the end.
This bug has been around forever. Only backpatch to v11, though, to
keep _bt_pagedel() in sync on the branches that have today's bugfix
commit b0229f26da. Note that this commit changes the signature of
_bt_pagedel(), just like commit b0229f26da.
Author: Peter Geoghegan
Reviewed-By: Masahiko Sawada
Discussion: https://postgr.es/m/CAH2-WzkrXBcMQWAYUJMFTTvzx_r4q=pYSjDe07JnUXhe+OZnJA@mail.gmail.com
Backpatch: 11-
Commit 857f9c36cd (which taught nbtree VACUUM to skip a scan of the
index in btvacuumcleanup() in situations where it doesn't seem worth it) made
VACUUM maintain the oldest btpo.xact among all deleted pages for the
index as a whole. It failed to handle all the details surrounding pages
that are deleted by the current VACUUM operation correctly (though pages
deleted by some previous VACUUM operation were processed correctly).
The most immediate problem was that the special area of the page was
examined without a buffer pin at one point. More fundamentally, the
handling failed to account for the full range of _bt_pagedel()
behaviors. For example, _bt_pagedel() sometimes deletes internal pages
in passing, as part of deleting an entire subtree with btvacuumpage()
caller's page as the leaf level page. The original leaf page passed to
_bt_pagedel() might not be the page that it deletes first in cases where
deletion can take place.
It's unclear how disruptive this bug may have been, or what symptoms
users might want to look out for. The issue was spotted during
unrelated code review.
To fix, push down the logic for maintaining the oldest btpo.xact to
_bt_pagedel(). btvacuumpage() is now responsible for pages that were
fully deleted by a previous VACUUM operation, while _bt_pagedel() is now
responsible for pages that were deleted by the current VACUUM operation
(this includes half-dead pages from a previous interrupted VACUUM
operation that become fully deleted in _bt_pagedel()). Note that
_bt_pagedel() should never encounter an existing deleted page.
This commit theoretically breaks the ABI of a stable release by changing
the signature of _bt_pagedel(). However, if any third party extension
is actually affected by this, then it must already be completely broken
(since there are numerous assumptions made in _bt_pagedel() that cannot
be met outside of VACUUM). It seems highly unlikely that such an
extension actually exists, in any case.
Author: Peter Geoghegan
Reviewed-By: Masahiko Sawada
Discussion: https://postgr.es/m/CAH2-WzkrXBcMQWAYUJMFTTvzx_r4q=pYSjDe07JnUXhe+OZnJA@mail.gmail.com
Backpatch: 11-, where the "skip full scan" feature was introduced.
Add test coverage for the nbtutils.c routines concerned with IndexScans
that have native ScalarArrayOpExpr quals. The ScalarArrayOpExpr
specialized mark and restore routines, and the "find extreme element"
routine now have some test coverage.
These functions are probably infrequently exercised by real world
queries, so having some coverage seems like a good idea. The mark and
restore routines were originally added by a bugfix that came several
weeks after the first stable release of Postgres 9.2 (see commit
70bc583319).
The libpq parameters ssl{max|min}protocolversion are renamed to use
underscores, to become ssl_{max|min}_protocol_version. The related
environment variables still use the names introduced in commit ff8ca5f
that added the feature.
Per complaint from Peter Eisentraut (this was also mentioned by me in
the original patch review but the issue got discarded).
Author: Daniel Gustafsson
Reviewed-by: Peter Eisentraut, Michael Paquier
Discussion: https://postgr.es/m/b319e449-318d-e691-4997-1327e166fcc4@2ndquadrant.com
Commit 79dfa8a has introduced a check to catch when the minimum protocol
version was set higher than the maximum version; however, an error was
getting generated when both bounds were set even if they could work
together, causing a backend to not use a new SSL context but keep the old
one.
Author: Daniel Gustafsson
Discussion: https://postgr.es/m/14BFD060-8C9D-43B4-897D-D5D9AA6FC92B@yesql.se
Checkpointer uses its MyLatch to wake up when a checkpoint request is
received. But before commit c655077639 the latch was not used for
anything else, so the code could just go to sleep after each loop
without rechecking the sleeping condition. That commit added a separate
ResetLatch in its code path[1], which can cause a checkpoint to go
unnoticed for potentially a long time.
Fix by skipping sleep if any checkpoint flags are set. Also add a test
to verify this; authored by Kyotaro Horiguchi.
[1] CreateCheckPoint -> InvalidateObsoleteReplicationSlots ->
ConditionVariableTimeSleep
Report and diagnosis by Kyotaro Horiguchi.
Co-authored-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Co-authored-by: Álvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://postgr.es/m/20200408.141956.891237856186513376.horikyota.ntt@gmail.com
Lack of these checks could cause visible misbehavior, including
assertion failures. This was missed in commit c655077639, whereby
restart_lsn becomes invalid when the size limit is exceeded.
Also reword some existing error messages, and add errdetail(), so that
the reported errors all match in spirit.
Author: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Reviewed-by: Álvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://postgr.es/m/20200408.093710.447591748588426656.horikyota.ntt@gmail.com
The pg_statio_all_tables view appears to have had a wrong grouping. As a
result, the numbers of toast index blocks read and hit were multiplied by
the number of table indexes. This commit fixes the view definition.
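The affected counters can be inspected with a query like this hedged
example (table name invented):

SELECT relname, tidx_blks_read, tidx_blks_hit
FROM pg_statio_all_tables
WHERE relname = 'my_table';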
Backpatching this appears difficult. We don't have a mechanism to patch
a system catalog of existing instances in a minor upgrade. We could write
a release-note instruction for doing this manually, but per discussion
this bug is probably not critical enough to justify such an intrusive fix.
Reported-by: Andrei Zubkov
Discussion: https://postgr.es/m/CAPpHfdtMYkkNudLMG9G0dxX_B%3Dn5sfKzOyxxrvWYtSicaGW0Lw%40mail.gmail.com
This part of the test was included originally in 4e87c48, but got
reverted as of f9c1b8d because of timing issues in the test, where,
after more analysis, we found that the standbys may not have recovered
from the archives all the segments needed by the test. This stabilizes
the test by making sure that standbys replay up to the position returned
by pg_switch_wal(), meaning that all segments are recovered before the
manual checkpoint that tests WAL segment recycling and its interactions
with archive status files.
Author: Jehan-Guillaume de Rorthais
Reviewed-by: Kyotaro Horiguchi, Michael Paquier
Discussion: https://postgr.es/m/20200424034248.GL33034@paquier.xyz
When pg_basebackup -R is used, we inject standby.signal into the
tar file for the main tablespace. The proper thing to do is to pad
each file injected into the tar file out to a 512-byte boundary
by appending nulls, but here the file is of length 0 and we add
511 zero bytes. Since 0 is already a multiple of 512, we should
not add any zero bytes. Do that instead.
Patch by me, reviewed by Tom Lane.
Discussion: http://postgr.es/m/CA+TgmobWbfReO9-XFk8urR1K4wTNwqoHx_v56t7=T8KaiEoKNw@mail.gmail.com
Queries such as '!(foo<->bar)' failed to find matching rows when
implemented as a GiST or GIN index search. That's because of
failing to handle phrase searches as tri-valued when considering
a query without any position information for the target tsvector.
We can only say that the phrase operator might match, not that it
does match; and therefore its NOT also might match. The previous
coding incorrectly inverted the approximate phrase result to
decide that there was certainly no match.
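For example, a search of this shape could miss rows when answered via the
index (a hedged sketch; table and column names invented):

SELECT * FROM docs
WHERE to_tsvector('english', body) @@ to_tsquery('english', '!(foo <-> bar)');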
To fix, we need to make TS_phrase_execute return a real ternary result,
and then bubble that up accurately in TS_execute. As long as we have
to do that anyway, we can simplify the baroque things TS_phrase_execute
was doing internally to manage tri-valued searching with only a bool
as explicit result.
For now, I left the externally-visible result of TS_execute as a plain
bool. There do not appear to be any outside callers that need to
distinguish a three-way result, given that they passed in a flag
saying what to do in the absence of position data. This might need
to change someday, but we wouldn't want to back-patch such a change.
Although tsginidx.c has its own TS_execute_ternary implementation for
use at upper index levels, that sadly managed to get this case wrong
as well :-(. Fixing it is a lot easier fortunately.
Per bug #16388 from Charles Offenbacher. Back-patch to 9.6 where
phrase search was introduced.
Discussion: https://postgr.es/m/16388-98cffba38d0b7e6e@postgresql.org
Buildfarm member chipmunk has failed twice due to taking >30s, and
twenty-four runs of other members have used >5s. The test is new in
v13, so no back-patch.
Avoid accessing the leaf page's top parent tuple without a buffer lock
held during the second phase of nbtree page deletion. The old approach
was safe, though only because VACUUM never drops its buffer pin (and
because only VACUUM itself can modify a half-dead page). Even still, it
seems like a good idea to be strict here. Tighten things up by copying
the top parent page's block number to a local variable before releasing
the buffer lock on the leaf page -- not after.
This is a follow-up to commit fa7ff642, which fixed a similar issue in
the first phase of nbtree page deletion.
Update some related comments in passing.
Discussion: https://postgr.es/m/CAH2-WzkLgyN3zBvRZ1pkNJThC=xi_0gpWRUb_45eexLH1+k2_Q@mail.gmail.com
Avoid accessing the deletion target page's special area during nbtree
page deletion at a point where there is no buffer lock held. This issue
was detected by a patch that extends Valgrind's memcheck tool to mark
nbtree pages that are unsafe to access (due to not having a buffer lock
or buffer pin) as NOACCESS.
We do hold a buffer pin at this point, and only access the special area,
so the old approach was safe. Even still, it seems like a good idea to
tighten up the rules in this area. There is no reason to not simply
insist on always holding a buffer lock (not just a pin) when accessing
nbtree pages.
Update some related comments in passing.
Discussion: https://postgr.es/m/CAH2-WzkLgyN3zBvRZ1pkNJThC=xi_0gpWRUb_45eexLH1+k2_Q@mail.gmail.com
Before sleeping, WalSndWaitForWal() sends a keepalive if MyWalSnd->write
< sentPtr. When the latest physical LSN yields no logical replication
messages (a common case), that keepalive elicits a reply. Processing
the reply updates pg_stat_replication.replay_lsn. WalSndLoop() lacks
that; when WalSndLoop() slept, replay_lsn advancement could stall until
wal_receiver_status_interval elapsed. This sometimes stalled
src/test/subscription/t/001_rep_changes.pl for up to 10s.
Reviewed by Fujii Masao and Michael Paquier.
Discussion: https://postgr.es/m/20200418070142.GA1075445@rfd.leadboat.com
CreateRole() was passing a Value node, not a RoleSpec node, for the
newly-created role name when adding the role as a member of existing
roles for the IN ROLE syntax.
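The affected syntax, for reference (role names are placeholders):

CREATE ROLE new_role IN ROLE existing_role;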
This mistake went unnoticed because the node in question is used only
for error messages and is not accessed on non-error paths.
In older pg versions (such as 9.5 where this was found), this results
in an "unexpected node type" error in place of the real error. That
node type check was removed at some point, after which the code would
accidentally fail to fail on 64-bit platforms (on which accessing the
Value node as if it were a RoleSpec would be mostly harmless) or give
an "unexpected role type" error on 32-bit platforms.
Fix the code to pass the correct node type, and add an lfirst_node
assertion just in case.
Per report on irc from user m1chelangelo.
Backpatch all the way, because this error has been around for a long
time.
Sticking this comment at the end of the last line was a bad idea: it's
not particularly readable, and it tempts pgindent to mess with line
breaks within the comment, which in turn reveals that win32tzlist.pl's
clean_displayname() does the wrong thing to clean up such line breaks.
While that's not hard to fix, there's basically no excuse for this
arrangement to begin with, especially since it makes the table layout
needlessly vary across back branches with different pgindent rules.
Let's just put the comment inside the braces, instead.
This commit just moves and reformats the comments, and updates
win32tzlist.pl to match; there's no actual data change.
Per odd-looking results from Juan José Santamaría Flecha.
Back-patch, since the point is to make win32_tzmap[] look the
same in all supported branches again.
Discussion: https://postgr.es/m/5752.1587740484@sss.pgh.pa.us
Commit 32ff26911 introduced use of rank() into the triggers view to
calculate the spec-mandated action_order column. As written, this
prevents query constraints on the table-name column from being pushed
below the window aggregate step. That's bad for performance of this
typical usage pattern, since the view now has to be evaluated for all
tables not just the one(s) the user wants to see. It's also the cause
of some recent buildfarm failures, in which trying to evaluate the view
rows for triggers in process of being dropped resulted in "cache lookup
failed for function NNN" errors. Those rows aren't of interest to the
test script doing the query, but the filter that would eliminate them
is being applied too late. None of this happened before the rank()
call was there, so it's a regression compared to v10 and before.
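The typical usage pattern in question looks like this hedged example
(names invented):

SELECT trigger_name, action_order
FROM information_schema.triggers
WHERE event_object_table = 'my_table';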
We can improve matters by changing the rank() call so that instead of
partitioning by OIDs, it partitions by nspname and relname, casting
those to sql_identifier so that they match the respective view output
columns exactly. The planner has enough intelligence to know that
constraints on partitioning columns are safe to push down, so this
eliminates the performance problem and the regression test failure
risk. We could make the other partitioning columns match view outputs
as well, but it'd be more complicated and the performance benefits
are questionable.
Side note: as this stands, the planner will push down constraints on
event_object_table and trigger_schema, but not on event_object_schema,
because it checks for ressortgroupref matches not expression
equivalence. That might be worth improving someday, but it's not
necessary to fix the immediate concern.
Back-patch to v11 where the rank() call was added. Ordinarily we'd not
change information_schema in released branches, but the test failure has
been seen in v12 and presumably could happen in v11 as well, so we need
to do this to keep the buildfarm happy. The change is harmless so far
as users are concerned. Some might wish to apply it to existing
installations if performance of this type of query is of concern,
but those who don't are no worse off.
I bumped catversion in HEAD as a pro forma matter (there's no
catalog incompatibility that would really require a re-initdb).
Obviously that can't be done in the back branches.
Discussion: https://postgr.es/m/5891.1587594470@sss.pgh.pa.us
DST law changes in Morocco and the Canadian Yukon.
Historical corrections for Shanghai.
The America/Godthab zone is renamed to America/Nuuk to reflect
current English usage; however, the old name remains available as a
compatibility link.
The test is proving to have timing issues when looking at archive status
files on standbys after crash recovery, while other parts of the test
rely on pg_stat_archiver as a wait point to make sure that a given state
of the archiving is reached. The coverage is not heavily impacted by
the removal of those extra tests.
Per reports from several buildfarm animals, like crake, piculet,
culicidae and francolin.
Discussion: https://postgr.es/m/20200424005929.GK33034@paquier.xyz
Backpatch-through: 9.5
78ea8b5 has fixed an issue related to the recycling of WAL segments on
standbys depending on archive_mode. However, it has introduced a
regression with the handling of WAL segments ready to be archived during
crash recovery, causing those files to be recycled without getting
archived.
This commit fixes the regression by tracking in shared memory if a live
cluster is either in crash recovery or archive recovery as the handling
of WAL segments ready to be archived is different in both cases (those
WAL segments should not be removed during crash recovery), and by using
this new shared memory state to decide if a segment can be recycled or
not. Previously, it was not possible to know if a cluster was in crash
recovery or archive recovery as the shared state was able to track only
if recovery was happening or not, leading to the problem.
A set of TAP tests is added to close the gap here, making sure that WAL
segments ready to be archived are correctly handled when a cluster is in
archive or crash recovery with archive_mode set to "on" or "always", for
both standby and primary.
Reported-by: Benoît Lobréau
Author: Jehan-Guillaume de Rorthais
Reviewed-by: Kyotaro Horiguchi, Fujii Masao, Michael Paquier
Discussion: https://postgr.es/m/20200331172229.40ee00dc@firost
Backpatch-through: 9.5
In the footsteps of aaf069aa3, remove ACLDEBUG, which was the only
other remaining undocumented symbol in pg_config_manual.h. The fact
that nobody had bothered to document it in seventeen years is a good
clue to its usefulness. In practice, none of the tracing logic it
enabled would be of any value without additional effort.
Discussion: https://postgr.es/m/6631.1587565046@sss.pgh.pa.us
Nobody really uses this stuff, especially not since we created
valgrind-based infrastructure that does the same thing better.
It is thus unsurprising that the generation.c and slab.c versions
were actually broken. Rather than fix 'em, let's just remove 'em.
Alexander Lakhin
Discussion: https://postgr.es/m/8936216c-3492-3f6e-634b-d638fddc5f91@gmail.com
Function names declared "extern" now use BackupManifest in the name
rather than just Manifest, and data types use backup_manifest
rather than just manifest.
Per note from Michael Paquier.
Discussion: http://postgr.es/m/20200418125713.GG350229@paquier.xyz
In a9c35cf85c I changed ExecMakeTableFunctionResult() to dynamically
allocate the FunctionCallInfo used to call the SRF. Unfortunately I
did not account for the fact that the surrounding memory context has
query lifetime, leading to a leak till the end of the query.
In most cases the leak is fairly inconsequential, but if the
FunctionScan is done many times in the query, the leak can add
up. This happens e.g. if the function scan is on the inner side of a
nested loop, due to a lateral join.
EXPLAIN SELECT sum(f) FROM generate_series(1, 100000000) g(i), generate_series(i, i+1) f;
quickly shows the leak.
Instead of explicitly freeing the FunctionCallInfo it seems better to
make sure all the per-set temporary state in
ExecMakeTableFunctionResult() is cleaned up wholesale. Currently
that's probably just the FunctionCallInfo allocation, but since
there's some initialization work, and since there's already an
appropriate context, this seems like a more robust approach.
Bug: #16112
Reported-By: Ben Cornett
Author: Andres Freund
Reviewed-By: Tom Lane
Discussion: https://postgr.es/m/16112-4448bbf55a404189%40postgresql.org
Backpatch: 12, a9c35cf85c
This commit does:
- get rid of the garbage code for unused --print-parse-wal option.
- add help message for --quiet option into usage().
- fix typo of option name in help message.
Author: Fujii Masao
Reviewed-by: Robert Haas
Discussion: https://postgr.es/m/ff4710f7-2331-4f6b-012e-d76da3275e91@oss.nttdata.com
The mask was added by commit 8224de4f, which introduced INCLUDE nbtree
indexes. The status bits really were reserved initially. We now use 2
out of 4 of the bits for additional tuple metadata, though. Rename the
mask to BT_STATUS_OFFSET_MASK.
Also consolidate related nbtree.h code comments about the format of
pivot tuples and posting list tuples.
When estimating the number of pre-sorted groups in cost_incremental_sort
we must not pass Vars with varno 0 to estimate_num_groups, which would
cause failures in find_base_rel. This may happen when sorting output of
set operations, thanks to generate_append_tlist.
Unlike recurse_set_operations we can't easily access the original target
list, so if we find any Vars with varno 0, we fall back to the default
estimate DEFAULT_NUM_DISTINCT.
Reported-by: Justin Pryzby
Discussion: https://postgr.es/m/20200411214639.GK2228%40telsasoft.com
This Assert was trying to ensure that the number of columns in the foreign
key being cloned was the same as the number of attributes in the parentRel.
Of course, it's perfectly valid to have columns in the table which are not
part of the foreign key constraint. It appears that this Assert
misunderstood that.
Reported-by: Rajkumar Raghuwanshi
Reviewed-by: amul sul
Discussion: https://postgr.es/m/CAKcux6=z1dtiWw5BOpqDx-U6KTiq+zD0Y2m810zUtWL+giVXWA@mail.gmail.com
readOneRecord() is now used when looking for a checkpoint record to
check if the target server is an ancestor of the source across multiple
timelines, and using a restore_command if available improves the
stability of the operation. This part was missed in a7e8ece.
Reported-by: Kyotaro Horiguchi
Discussion: https://postgr.es/m/20200421.150830.1410714948345179794.horikyota.ntt@gmail.com
It's important to know that a trigger is cloned from a parent table,
because of the behavior that the trigger is dropped on detach. Make
psql's \d display it.
We'd like to backpatch, but lack of the pg_trigger.tgparentid column
makes it more difficult. Punt for now. If somebody wants to volunteer
an implementation that reads pg_depend on older versions, that can
probably be backpatched.
Authors: Justin Pryzby, Amit Langote, Álvaro Herrera
Discussion: https://postgr.es/m/20200419002206.GM26953@telsasoft.com
Checking whether Subject Alternative Names (SANs) from a certificate match
the hostname being connected to leaked memory after each lookup.
This is broken since acd08d7 that added support for SANs in SSL
certificates, so backpatch down to 9.5.
Author: Roman Peshkurov
Reviewed-by: Hamid Akhtar, Michael Paquier, David Steele
Discussion: https://postgr.es/m/CALLDf-pZ-E3mjxd5=bnHsDu9zHEOnpgPgdnO84E2RuwMCjjyPw@mail.gmail.com
Backpatch-through: 9.5
index.c supposed that it could just use a PG_TRY block to clean up the
state associated with an active REINDEX operation. However, that code
doesn't run if we do a FATAL exit --- for example, due to a SIGTERM
shutdown signal --- while the REINDEX is happening. And that state does
get consulted during catalog accesses, which makes it problematic if we
do any catalog accesses during shutdown --- for example, to clean up any
temp tables created in the session.
If this combination of circumstances occurred, we could find ourselves
trying to access already-freed memory. In debug builds that'd fairly
reliably cause an assertion failure. In production we might often
get away with it, but with some bad luck it could cause a core dump.
Another possible bad outcome is an erroneous conclusion that an
index-to-be-accessed is being reindexed; but it looks like that would
be unlikely to have any consequences worse than failing to drop temp
tables right away. (They'd still get dropped by the next session that
uses that temp schema.)
To fix, get rid of the use of PG_TRY here, and instead hook into
the transaction abort mechanisms to clean up reindex state.
Per bug #16378 from Alexander Lakhin. This has been wrong for a
very long time, so back-patch to all supported branches.
Discussion: https://postgr.es/m/16378-7a70ca41b3ec2009@postgresql.org
Working on commit 1c455078b led me to check through FunctionCallInvoke
call sites to see if every one was being honest about (a) making sure
that fcinfo.isnull is initially false, and (b) checking its state after
the call. Sure enough, I found some violations.
The main one is that finalize_partialaggregate re-used serialfn_fcinfo
without resetting isnull, even though it clearly intends to cater for
serialfns that return NULL. There would only be an issue with a
non-strict serialfn, since it's unlikely that a serialfn would return
NULL for non-null input. We have no non-strict serialfns in core, and
there may be none in the wild either, which would account for the lack
of complaints. Still, it's clearly wrong, so back-patch that fix to
9.6 where finalize_partialaggregate was introduced.
Also, arrayfuncs.c and rowtypes.c contained various callers that were
not bothering to check for result nulls. While what's being called is
a comparison or hash function that probably *shouldn't* return null,
that's a lousy excuse for not having any check at all. There are
existing places that just Assert(!fcinfo->isnull) in comparable
situations, so I added that to the places that were calling btree
comparison or hash support functions. In the places calling
boolean-returning equality functions, it's quite cheap to have them
treat isnull as FALSE, so make those places do that. Also remove some
"locfcinfo->isnull = false" assignments that are unnecessary given the
assumption that no previous call returned null. These changes seem like
mostly neatnik-ism or debugging support, so I didn't back-patch.
When a partition is detached, any triggers that had been cloned from its
parent were not properly disentangled from its parent triggers.
This resulted in triggers that could not be dropped because they
depended on the trigger in the no-longer-parent table:
ALTER TABLE t DETACH PARTITION t1;
DROP TRIGGER trig ON t1;
ERROR: cannot drop trigger trig on table t1 because trigger trig on table t requires it
HINT: You can drop trigger trig on table t instead.
Moreover the table can no longer be re-attached to its parent, because
the trigger name is already taken:
ALTER TABLE t ATTACH PARTITION t1 FOR VALUES FROM (1) TO (2);
ERROR: trigger "trig" for relation "t1" already exists
The former is a bug introduced in commit 86f575948c. (The latter is
not necessarily a bug, but it makes the bug more uncomfortable.)
To avoid the complexity that would be needed to tell whether the trigger
has a local definition that has to be merged with the one coming from
the parent table, establish the behavior that the trigger is removed
when the table is detached.
Backpatch to pg11.
Author: Justin Pryzby <pryzby@telsasoft.com>
Reviewed-by: Amit Langote <amitlangote09@gmail.com>
Reviewed-by: Álvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://www.postgresql.org/message-id/flat/20200408152412.GZ2228@telsasoft.com
Commit 0d861bbb, which introduced deduplication to nbtree, added some
logic to take large posting list tuples into account when choosing a
split point. We subtract firstright posting list overhead from the
projected new high key size when calculating leftfree/rightfree values
for an affected candidate split point. Posting list tuples aren't
special to nbtsplitloc.c, but taking them into account like this makes a
huge difference in practice. Posting list tuples are frequently tuple
size outliers.
However, commit 0d861bbb missed a closely related issue: split interval
itself is calculated based on the assumption that tuples on the page
being split are roughly equisized. That assumption was acceptable back
when commit fab25024 taught the logic for choosing a split point about
suffix truncation, but it's pretty questionable now that very large
tuple sizes are common. This oversight led to unbalanced page splits in
low cardinality multi-column indexes when deduplication was used: page
splits that don't give sufficient weight to how unbalanced the split is
when the interval happens to include some large posting list tuples (and
when most other tuples on the page are not so large).
Nail this down by calculating an initial split interval in a way that's
attuned to the actual cost that we want to keep under control (not a
fuzzy proxy for the cost): apply a leftfree + rightfree evenness test to
each candidate split point that actually gets included in the split
interval (for the default strategy). This replaces logic that used a
percentage of all legal split points for the page as the basis of the
initial split interval.
Discussion: https://postgr.es/m/CAH2-WznJt5aT2uUB2Bs+JBLdwe0XTX67+xeLFcaNvCKxO=QBVQ@mail.gmail.com
Although selfuncs.c will never call a target operator with null inputs,
some functions might return null anyway. The existing coding will fail
if that happens (since FunctionCall2Coll will punt), which seems
undesirable given that matchingsel() has such a broad range of potential
applicability --- in fact, we already have a problem because we apply it
to jsonb_path_exists_opr, which can return null. Hence, rejigger the
underlying functions mcv_selectivity and histogram_selectivity to cope,
treating a null result as false.
While we are at it, we can move the InitFunctionCallInfoData overhead
out of the inner loops, which isn't a huge number of cycles but might
save something considering we are likely calling functions as cheap
as int4eq(). Plus, the number of loop cycles to be expected is much
more than it was when this code was written, since typical settings
of default_statistics_target are higher.
In view of that consideration, let's apply the same change to
var_eq_const, eqjoinsel_inner, and eqjoinsel_semi. We do not expect
equality functions to ever return null for non-null inputs (and
certainly that code has been that way a long time without complaints),
but the cycle savings seem attractive, especially in the eqjoinsel loops
where there's potentially an O(N^2) savings.
Similar code exists in ineq_histogram_selectivity and
get_variable_range, but I forbore from changing those for now.
The performance argument for changing ineq_histogram_selectivity
is really weak anyway, since that will only iterate log2(N) times.
Nikita Glukhov and Tom Lane
Discussion: https://postgr.es/m/9d3b0959-95d6-c37e-2c0b-287bcfe5c705@postgrespro.ru
Older gcc versions don't like duplicate typedefs, so get rid of
that in favor of doing it like we do it elsewhere, ie just use
a "struct" declaration when trying to avoid importing a whole
header file.
Also, there seems no reason to include stringinfo.h here at all,
so get rid of that addition too.
Discussion: https://postgr.es/m/27239.1587415696@sss.pgh.pa.us
... as added in the prior commit.
(We'd like to have tab-completion for the other object types too, but
they don't have sub-command completion yet.)
Author: Ibrar Ahmed <ibrar.ahmad@gmail.com>
Reviewed-by: Álvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://postgr.es/m/CALtqXTcogrFEVP9uou5vFtnGsn+vHZUu9+9a0inarfYVOHScYQ@mail.gmail.com
Commit f2fcad27d5 (9.6 era) added the ability to mark objects as
dependent on an extension, but forgot to add a way for such dependencies to
be removed. This commit fixes that oversight.
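A hedged example of the resulting syntax (object names invented),
mirroring the pre-existing DEPENDS ON EXTENSION form:

ALTER INDEX my_idx DEPENDS ON EXTENSION my_ext;
ALTER INDEX my_idx NO DEPENDS ON EXTENSION my_ext;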
Strictly speaking this should be backpatched to 9.6, but due to lack of
demand we're not doing so at this time.
Discussion: https://postgr.es/m/20200217225333.GA30974@alvherre.pgsql
Reviewed-by: ahsan hadi <ahsan.hadi@gmail.com>
Reviewed-by: Ibrar Ahmed <ibrar.ahmad@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
The views pg_stat_progress_* had not gotten the memo that
pg_read_all_stats is supposed to be able to read all statistics. Also
make a pass over all text-returning pg_stat_xyz functions that could
return "insufficient privilege" and make sure they also respect
pg_read_all_stats.
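As a hedged illustration (role name invented):

GRANT pg_read_all_stats TO monitor_user;
-- the progress views are now fully visible to monitor_user:
SELECT * FROM pg_stat_progress_vacuum;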
Reported-by: Andrey M. Borodin
Reviewed-by: Andrey M. Borodin, Kyotaro Horiguchi
Discussion: https://postgr.es/m/13145F2F-8458-4977-9D2D-7B2E862E5722@yandex-team.ru
The normalization-check target needs to be run last, after moving the
newly generated files into place. Also, we need an additional
dependency so that unicode_norm.o is rebuilt first. Otherwise,
norm_test will still test the old files but against the new expected
results, which will probably fail.
We have repeatedly seen the buildfarm reach the Assert(false) in
SyncRepGetSyncStandbysPriority. This apparently is due to failing to
consider the possibility that the sync_standby_priority values in
shared memory might be inconsistent; but they will be whenever only
some of the walsenders have updated their values after a change in
the synchronous_standby_names setting. That function is vastly too
complex for what it does, anyway, so rewriting it seems better than
trying to apply a band-aid fix.
Furthermore, the API of SyncRepGetSyncStandbys is broken by design:
it returns a list of WalSnd array indexes, but there is nothing
guaranteeing that the contents of the WalSnd array remain stable.
Thus, if some walsender exits and then a new walsender process
takes over that WalSnd array slot, a caller might make use of
WAL position data that it should not, potentially leading to
incorrect decisions about whether to release transactions that
are waiting for synchronous commit.
To fix, replace SyncRepGetSyncStandbys with a new function
SyncRepGetCandidateStandbys that copies all the required data
from shared memory while holding the relevant mutexes. If the
associated walsender process then exits, this data is still safe to
make release decisions with, since we know that that much WAL *was*
sent to a valid standby server. This incidentally means that we no
longer need to treat sync_standby_priority as protected by the
SyncRepLock rather than the per-walsender mutex.
SyncRepGetSyncStandbys is no longer used by the core code, so remove
it entirely in HEAD. However, it seems possible that external code is
relying on that function, so do not remove it from the back branches.
Instead, just remove the known-incorrect Assert. When the bug occurs,
the function will return a too-short list, which callers should treat
as meaning there are not enough sync standbys, which seems like a
reasonably safe fallback until the inconsistent state is resolved.
Moreover it's bug-compatible with what has been happening in non-assert
builds. We cannot do anything about the walsender-replacement race
condition without an API/ABI break.
The bogus assertion exists back to 9.6, but 9.6 is sufficiently
different from the later branches that the patch doesn't apply at all.
I chose to just remove the bogus assertion in 9.6, feeling that the
probability of a bad outcome from the walsender-replacement race
condition is too low to justify rewriting the whole patch for 9.6.
Discussion: https://postgr.es/m/21519.1585272409@sss.pgh.pa.us
In some corner cases, this could also lead to corrupted values being
included in the tuple.
Users who are concerned that they are affected by this should first
upgrade and then perform a base backup of their database and restore onto
an off-line server. They should then query each table with generated
columns to ensure there are no rows where the generated expression does
not match a newly calculated version of the GENERATED ALWAYS expression.
If no crashes occur and no rows are returned then you're not affected.
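A hedged sketch of such a check, assuming a table t with a column g
defined as GENERATED ALWAYS AS (a * 2) STORED:

SELECT * FROM t WHERE g IS DISTINCT FROM (a * 2);
-- any rows returned carry a stale generated value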
Fixes bug #16369.
Reported-by: Cameron Ezell
Discussion: https://postgr.es/m/16369-5845a6f1bef59884@postgresql.org
Backpatch-through: 12 (where GENERATED ALWAYS columns were added.)
Apparently in some language versions of Visual Studio nmake outputs some
material after the version number and before the end of the line. This
has been seen in Chinese versions. Therefore, we no longer demand that
the version string comes at the end of a line.
Per complaint from Cuiping Lin.
Backpatch to all live branches.
In commit 4dc6355210 I neglected to put #ifdef USE_OPENSSL around the
declarations of the new items. This is remedied here.
Per complaint from Daniel Gustafsson.
recordExtObjInitPriv and removeExtObjInitPriv were sloppy about
calling ReleaseSysCache. The cases cannot occur given current usage
in ALTER EXTENSION ADD/DROP, since we wouldn't get here for these
relkinds; but it seems wise to clean up better.
In passing, extend test logic in test_pg_dump to exercise the
dropped-column code paths here.
Since the case is unreachable at present, there seems no great
need to back-patch; hence fix HEAD only.
Kyotaro Horiguchi, with test case and comment adjustments by me
Discussion: https://postgr.es/m/20200417.151831.1153577605111650154.horikyota.ntt@gmail.com
The result of the query used to retrieve the WAL segment size from the
backend was not getting freed in two code paths. Both pg_basebackup and
pg_receivewal exit immediately if a failure happened on this query, so
this was not an actual problem, but it could be an issue if this code
gets used for other tools in different ways, be they future tools in
this code tree or external, existing, ones.
Oversight in commit fc49e24, so backpatch down to 11.
Author: Jie Zhang
Discussion: https://postgr.es/m/970ad9508461469b9450b64027842331@G08CNEXMBPEKD06.g08.fujitsu.local
Backpatch-through: 11
It was previously thought that remove_useless_groupby_columns() needed to
keep track of which constraints the generated plan depended upon, however,
this is unnecessary. The confusion likely arose regarding this because of
check_functional_grouping(), which does need to track the dependency to
ensure VIEWs with columns which are functionally dependent on the GROUP BY
remain so. For remove_useless_groupby_columns(), cached plans will just
become invalidated when the primary key's underlying index is removed
through the normal relcache invalidation code.
Here we just remove the unneeded code which records the dependency and
updates the comments. The previous comments claimed that we could not use
UNIQUE constraints for the same optimization due to lack of a
pg_constraint record for NOT NULL constraints (which are required because
NULLs can be duplicated in a unique index). Since we don't actually need a
pg_constraint record to handle the invalidation, it looks like we could
add code to do this in the future. But not today.
We're not really fixing any bug in the code here, this fix is just to set
the record straight on UNIQUE constraints. This code was added back in
9.6, but due to lack of any bug, we'll not be backpatching this.
Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/CAApHDvrdYa=VhOoMe4ZZjZ-G4ALnD-xuAeUNCRTL+PYMVN8OnQ@mail.gmail.com
Earlier we were inconsistent in allowing the usage of parallel and
full options. Change it such that we disallow them only when they are
combined in a way that we don't support.
In passing, improve the comments in some of the existing tests of parallel
vacuum.
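Hedged examples of the resulting behavior (table name invented):

VACUUM (PARALLEL 2) t;        -- parallel index vacuuming
VACUUM (PARALLEL 0, FULL) t;  -- allowed: parallelism explicitly disabled
VACUUM (PARALLEL 2, FULL) t;  -- rejected: unsupported combination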
Reported-by: Tushar Ahuja
Author: Justin Pryzby, Amit Kapila
Reviewed-by: Sawada Masahiko, Michael Paquier, Mahendra Singh Thalor and
Amit Kapila
Discussion: https://postgr.es/m/58c8d171-e665-6fa3-a9d3-d9423b694dae%40enterprisedb.com
Since 0d8c9c1, pg_basebackup would generate an error if connected to a
backend version older than 12 where backup manifests are not supported.
Avoiding this error is possible by using the --no-manifest option.
This error handling could be confusing for some users, where patching a
backup script that interacts with multiple backend versions would cause
the addition of --no-manifest to potentially not generate a backup
manifest even for Postgres 13 and newer versions. As we want to
encourage the use of backup manifests as much as possible, this commit
silently disables manifests where not supported, instead of generating
an error.
While on it, rework the code a bit to make it more consistent with the
surroundings when generating the BASE_BACKUP command.
Per discussion with Andres Freund, Stephen Frost, Robert Haas, Álvaro
Herrera, Kyotaro Horiguchi, Tom Lane, David Steele, and me.
Author: Michael Paquier
Discussion: https://postgr.es/m/20200410080910.GZ1606@paquier.xyz
This commit prevents pg_basebackup from receiving backup_manifest file
when --no-manifest is specified. Previously, when pg_basebackup was
writing a tarfile to stdout, it tried to receive backup_manifest file even
when --no-manifest was specified, and reported an error.
Also remove unused -m option from pg_basebackup.
Also fix typo in BASE_BACKUP command documentation.
Author: Fujii Masao
Reviewed-by: Michael Paquier, Robert Haas
Discussion: https://postgr.es/m/01e3ed3a-8729-5aaa-ca84-e60e3ca59db8@oss.nttdata.com
A comment from the Berkeley days incorrectly claimed that the page
management code cares about the contents of the hole in the center of
the page (at least in the case of the left half of an nbtree page
split). Commit 8fa30f906b added an addendum that stated that the
original comment was "probably obsolete". It's definitely obsolete,
though, so remove the original comment plus the addendum.
Back in commits 1df92eeafe, f884a96819, and 592123efbb I used some
hackish code to set the script search path, unaware despite decades of
perl that there was a completely standard way to do this. This patch
changes those cases to use the standard perl FindBin package.
Nest the "update metapage as part of insert into root-like page" branch
inside the broader "insert into internal page" branch. This improves
readability.
Clearly it's not okay for nbtree to split a page that is the only page
on its level, and then find that it has to split the parent one level up
in turn. There is simply no code to handle the split_only_page case in
the _bt_insertonpg() "newitem won't fit" branch (only the "newitem fits"
branch handles split_only_page). Add a defensive assertion that will
fail if a split_only_page call to _bt_insertonpg() somehow ends up
splitting the target/parent page.
I (pgeoghegan) believe that we don't need split_only_page handling for
the "newitem won't fit" branch because anybody calling _bt_insertonpg()
like this would have to hold a lock on the same one and only child page.
It seems like a good idea for nbtree's retail insert code to be
absolutely consistent with nbtree's page split code for anything that
naturally requires equivalent handling. Anything that concerns
inserting newitem (which is handled as part of the page split atomic
action when a page split is required) should work in exactly the same
way. With that in mind, make _bt_insertonpg() handle 'cbuf' in a way
that matches _bt_split().
An nbtree split point can be thought of as a point between two adjoining
tuples from an imaginary version of the page being split that includes
the incoming/new item (in addition to the items that really are on the
page). These adjoining tuples are called the lastleft and firstright
tuples.
The variables that represent split points contained a field called
firstright, which is an offset number of the first data item from the
original page that goes on the new right page. The corresponding tuple
from origpage was usually the same thing as the actual firstright tuple,
but not always: the firstright tuple is sometimes the new/incoming item
instead. This situation seems unnecessarily confusing.
Make things clearer by renaming the origpage offset returned by
_bt_findsplitloc() to "firstrightoff". We now have a firstright tuple
and a firstrightoff offset number which are comparable to the
newitem/lastleft tuples and the newitemoff/lastleftoff offset numbers
respectively. Also make sure that we are consistent about how we
describe nbtree page split point state.
Push the responsibility for dealing with pg_upgrade'd !heapkeyspace
indexes down to lower level code, relieving _bt_split() from dealing
with it directly. This means that we always have a palloc'd left page
high key on the leaf level, no matter what. This enables simplifying
some of the code (and code comments) within _bt_split().
Finally, restructure the page split code to make it clearer why suffix
truncation (which only takes place during leaf page splits) is
completely different to the first data item truncation that takes place
during internal page splits. Tuples are marked as having fewer
attributes stored in both cases, and the firstright tuple is truncated
in both cases, so it's easy to imagine somebody missing the distinction.
This replaces a few occurrences of ugly code with cleaner and more
idiomatic usage. The problem was highlighted by perlcritic, but we're
not enforcing the policy that led to the discovery.
Discussion: https://postgr.es/m/20200412074245.GB623763@rfd.leadboat.com
We've had a mixture of the warnings pragma, the -w switch on the shebang
line, and no warnings at all. This patch removes the -w switch and adds
the warnings pragma to all perl sources missing it. It raises the
severity of the TestingAndDebugging::RequireUseWarnings perlcritic
policy to level 5, so that we catch any future violations.
Discussion: https://postgr.es/m/20200412074245.GB623763@rfd.leadboat.com
This makes it easier to do a web search for details of the policy that's
been violated, as well as displaying the name that might be needed for a
policy override.
Various perlcritic settings changes are being discussed, but this one
should be uncontroversial.
The placement of the fall-through comment in this code appears not to
work to suppress the warning in recent gcc. Move it to the bottom of
the case group, and add an assertion that we didn't get there through
some other code path. Also improve wording of nearby comments.
Julien Rouhaud, comment hacking by me
Discussion: https://postgr.es/m/CAOBaU_aLdPGU5wCpaowNLF-Q8328iR7mj1yJAhMOVsdLwY+sdg@mail.gmail.com
Specifically, remember lookup_type_cache() results instead of retrieving
them once per comparison. Under CLOBBER_CACHE_ALWAYS, this reduced
src/test/subscription/t/001_rep_changes.pl elapsed time by an order of
magnitude, which reduced check-world elapsed time by 9%.
Discussion: https://postgr.es/m/20200406085420.GC162712@rfd.leadboat.com
Before sleeping, WalSndWaitForWal() sends a keepalive if MyWalSnd->write
< sentPtr. That is important in logical replication. When the latest
physical LSN yields no logical replication messages (a common case),
that keepalive elicits a reply, and processing the reply updates
pg_stat_replication.replay_lsn. WalSndLoop() lacks that; when
WalSndLoop() slept, replay_lsn advancement could stall until
wal_receiver_status_interval elapsed. This sometimes stalled
src/test/subscription/t/001_rep_changes.pl for up to 10s.
Discussion: https://postgr.es/m/20200406063649.GA3738151@rfd.leadboat.com
Before discarding the old hash table in ExecReScanHashJoin, capture
its statistics, ensuring that we report the maximum hashtable size
across repeated rescans of the hash input relation. We can repurpose
the existing code for reporting hashtable size in parallel workers
to help with this, making the patch pretty small. This also ensures
that if rescans happen within parallel workers, we get the correct
maximums across all instances.
Konstantin Knizhnik and Tom Lane, per diagnosis by Thomas Munro
of a trouble report from Alvaro Herrera.
Discussion: https://postgr.es/m/20200323165059.GA24950@alvherre.pgsql
ExecReScanHashJoin will destroy the join's hash table if it expects
that the inner relation will produce different rows on rescan.
Up to now it's not bothered to clear the additional pointer to that
hash table that exists in the child HashState node. However, it's
possible for the query to terminate without building a fresh hash
table (this happens if the outer relation is found to be empty
during the final rescan). So we can end up with a dangling pointer
to a deleted hash table. That was harmless originally, but since
9.0 EXPLAIN ANALYZE has used that pointer to print hash table
statistics. In debug builds this reproducibly results in garbage
statistics. In non-debug builds there's frequently no ill effects,
but in principle one could get wrong EXPLAIN ANALYZE output, or
perhaps even a crash if free() has released the hashtable memory
back to the OS.
To fix, just make sure we clear the additional pointer when destroying
the hash table. In problematic cases, EXPLAIN ANALYZE will then print
no hashtable statistics (reverting to its pre-9.0 behavior). This isn't
ideal, but since the problem manifests only in unusual corner cases,
it's hard to justify taking any risks to do better in the back
branches. A follow-on patch will improve matters in HEAD.
Konstantin Knizhnik and Tom Lane, per diagnosis by Thomas Munro
of a trouble report from Alvaro Herrera.
Discussion: https://postgr.es/m/20200323165059.GA24950@alvherre.pgsql
Depending on specific values of restart_lsn or pg_current_wal_lsn()
is obviously unsafe. The previous coding tried to dodge this issue
by rounding off, but that's not good enough, as shown by multiple
buildfarm members. Nuke all the uses of these values except for
null-ness checks, pending some credible argument why we should think
something else could be usefully stable.
Kyotaro Horiguchi, further modified by me
Discussion: https://postgr.es/m/E1jM1Sa-0003mS-99@gemulon.postgresql.org
Suppress a probably-meaningless uninitialized-variable warning
(induced by my previous patch, I'm sorry to say).
Improve mark_hl_fragments()'s test for overlapping cover strings:
it failed to consider the possibility that the current string is
strictly within another one. That's unlikely given the preceding
splitting into MaxWords fragments, but I don't think it's impossible.
Discussion: https://postgr.es/m/16345-2e0cf5cddbdcd3b4@postgresql.org
This code could produce very poor results when asked to highlight a
string based on a query using phrase-match operators. The root cause
is that hlCover(), which is supposed to find a minimal substring that
matches the query, was written assuming that word position is not
significant. I'm only 95% convinced that its algorithm was correct even
for plain AND/OR queries; but it definitely fails completely for phrase
matches, causing it to possibly not identify a cover string at all.
Hence, rewrite hlCover() with a less-tense algorithm that just tries
all the possible substrings, earlier and shorter ones first. (This is
not as bad as it sounds performance-wise, because all of the string
matching has been done already: the repeated tsquery match checks
boil down to pointer comparisons.)
Unfortunately, since that approach produces more candidate cover
strings than before, it also exposes that there were bugs in the
heuristics in mark_hl_words() for selecting a best cover string.
Fixes there include:
* Do not apply the ShortWord filter to words that appear in the query.
* Remove a misguided optimization for quickly rejecting a cover.
* Fix order-of-operation bug that could cause computation of a
wrong figure of merit (poslen) when shortening a cover.
* Change the preference rule so that candidate headlines that do not
include their whole cover string (after MaxWords trimming) are lowest
priority, since they may not actually satisfy the user's query.
This results in some changes in existing regression test cases,
but they all seem reasonable. Note in particular that the tests
involving strings like "1 2 3" were previously being affected by
the ShortWord filter, masking the normal matching behavior.
Per bug #16345 from Augustinas Jokubauskas; the new test cases are
based on that example. Back-patch to 9.6 where phrase search was
added to tsquery.
Discussion: https://postgr.es/m/16345-2e0cf5cddbdcd3b4@postgresql.org
This code was woefully unreadable and under-commented. Try to improve
matters by adding comments, using some macros to make complicated
if-tests more readable, using boolean type where appropriate, etc.
There are a couple of tiny coding improvements too, but this commit
includes (I hope) no behavioral change.
Nonetheless, back-patch as far as 9.6, because a followup bug-fixing
commit depends on this.
Discussion: https://postgr.es/m/16345-2e0cf5cddbdcd3b4@postgresql.org
CREATE TABLE LIKE INCLUDING GENERATED would fail if a generated column
referred to a column with a higher attribute number. This is because
the column mapping mechanism created the mapping incrementally as
columns are added. This was sufficient for previous uses of that
mechanism (omitting dropped columns), and it also happened to work if
generated columns only referred to columns with lower attribute
numbers, but here it failed.
The fix is to build the attribute mapping in a separate loop before
processing the columns in detail.
Bug: #16342
Reported-by: Ethan Waldo <ewaldo@healthetechs.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
If there is already a backup_manifest file in the database cluster,
it belongs to the past backup that was used to start this server and
is not correct for the backup being taken now. So this commit changes
pg_basebackup so that it always skips such a backup_manifest file.
The backup_manifest file for the current backup will be injected
separately if users want it.
Author: Fujii Masao
Reviewed-by: Robert Haas
Discussion: https://postgr.es/m/78f76a3d-1a28-a97d-0394-5c96985dd1c0@oss.nttdata.com
Currently, we don't account for buffer usage incurred by parallel workers
for parallel CREATE INDEX. This commit allows each worker to record its
buffer usage stats and the leader backend to accumulate those stats at
the end of the operation. This allows pg_stat_statements to display
correct buffer usage stats for the (parallel) CREATE INDEX command.
Reported-by: Julien Rouhaud
Author: Sawada Masahiko
Reviewed-by: Dilip Kumar, Julien Rouhaud and Amit Kapila
Backpatch-through: 11, where this was introduced
Discussion: https://postgr.es/m/20200328151721.GB12854@nol
The test never did ANALYZE on the test table, so the plans depended on
various defaults (e.g. number of groups being 200). This worked most of
the time, but with CLOBBER_CACHE_ALWAYS the autoanalyze often managed
to build accurate stats, changing the plan.
Fixed by increasing the size of test tables a bit, making the Sort a bit
more expensive than Incremental Sort. The tests were constructed to test
transitions in the Incremental Sort algorithm, and this change does not
break that.
Reviewed-by: James Coleman
Discussion: https://postgr.es/m/CAPpHfds1waRZ=NOmueYq0sx1ZSCnt+5QJvizT8ndT2=etZEeAQ@mail.gmail.com
Repair an oversight in commit 8728b2c70: if we're postponing restore
of event triggers to the end, we must also postpone restoring any
comments on them, since of course we cannot create the comments first.
(This opens yet another opportunity for an event trigger to bollix
the restore, but there's no help for that.)
Per bug #16346 from Alexander Lakhin.
Like the previous commit, back-patch to all supported branches.
Hamid Akhtar and Tom Lane
Discussion: https://postgr.es/m/16346-6210ad7a0ea81be1@postgresql.org
GetWalRcvWriteRecPtr() previously reported the latest *flushed*
location. Adopt the conventional terminology used elsewhere in the tree
by renaming it to GetWalRcvFlushRecPtr(), and likewise for some related
variables that used the term "received".
Add a new definition of GetWalRcvWriteRecPtr(), which returns the latest
*written* value. This will allow later patches to use the value for
non-data-integrity purposes, without having to wait for the flush
pointer to advance.
Reviewed-by: Alvaro Herrera <alvherre@2ndquadrant.com>
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/CA%2BhUKGJ4VJN8ttxScUFM8dOKX0BrBiboo5uz1cq%3DAovOddfHpA%40mail.gmail.com
To control whether partition changes are replicated using their own
identity and schema or an ancestor's, add a new per-publication
parameter named 'publish_via_partition_root'.
This allows replicating a partitioned table into a different partition
structure on the subscriber.
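For example (with a hypothetical partitioned table "measurement"), a
publication that replicates changes using the root table's identity and
schema could be created as:
    CREATE PUBLICATION pub_measurement FOR TABLE measurement
        WITH (publish_via_partition_root = true);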
Author: Amit Langote <amitlangote09@gmail.com>
Reviewed-by: Rafia Sabih <rafia.pghackers@gmail.com>
Reviewed-by: Peter Eisentraut <peter.eisentraut@2ndquadrant.com>
Reviewed-by: Petr Jelinek <petr@2ndquadrant.com>
Discussion: https://www.postgresql.org/message-id/flat/CA+HiwqH=Y85vRK3mOdjEkqFK+E=ST=eQiHdpj43L=_eJMOOznQ@mail.gmail.com
0f5ca02f53 introduces 3 new keywords, which seems like too much for a
relatively small feature. Given that we are now past feature freeze, it
is too late to discuss the new syntax. So, revert.
Discussion: https://postgr.es/m/28209.1586294824%40sss.pgh.pa.us
2nd pass of modifying various places that obtain the next power of 2 of
a number, making them use the new functions added in f0705bb62.
In passing, also modify num_combinations(). This can be implemented
using simple bitshifting rather than looping.
Reviewed-by: John Naylor
Discussion: https://postgr.es/m/20200114173553.GE32763%40fetter.org
Attempting to use a COLLATE clause with a type that is not collatable in
a partition bound expression could crash the server. This commit fixes
the code by adding checks similar to those done when computing index or
partition attributes, making sure that there is a collation iff the type
is collatable.
Backpatch down to 12, as 7c079d7 introduced this problem.
Reported-by: Alexander Lakhin
Author: Dmitry Dolgov
Discussion: https://postgr.es/m/16325-809194cf742313ab@postgresql.org
Backpatch-through: 12
First pass of modifying various places that obtain the next power of 2
of a number, making them use the new functions added in pg_bitutils.h
instead.
This also removes the _hash_log2() function. There are no longer any
callers in core. Other users can swap their _hash_log2(n) call to make use
of pg_ceil_log2_32(n).
Author: David Fetter, with some minor adjustments by me
Reviewed-by: John Naylor, Jesse Zhang
Discussion: https://postgr.es/m/20200114173553.GE32763%40fetter.org
If the memory context's maxBlockSize is too big, a single block
allocation can suddenly exceed work_mem. For Hash Aggregation, this
can mean spilling to disk too early or reporting a confusing memory
usage number for EXPLAIN ANALYZE.
Introduce CreateWorkExprContext(), which is like CreateExprContext(),
except that it creates the AllocSet with a maxBlockSize that is
reasonable in proportion to work_mem.
Right now, CreateWorkExprContext() is only used by Hash Aggregation,
but it may be generally useful in the future.
Discussion: https://postgr.es/m/412a3fbf306f84d8d78c4009e11791867e62b87c.camel@j-davis.com
There are many areas in the code where we need to determine the next
highest power of 2 of a given number. We tend to always do that in an
ad-hoc way each time, generally with some tight for loop which performs a
bitshift left once per loop and goes until it finds a number above the
given number.
Here we add two generic functions which make use of the existing
pg_leftmost_one_pos* functions which, when available, will allow us to
calculate the next power of 2 without any looping.
Here we don't add any code which uses these new functions. That will be
done in follow-up commits.
Author: David Fetter, with some minor adjustments by me
Reviewed-by: John Naylor, Jesse Zhang
Discussion: https://postgr.es/m/20200114173553.GE32763%40fetter.org
Trying to ensure that a slot's restart_lsn or amount of reserved bytes
exactly match some specific values seems unnecessary, and fragile as
shown by failures in multiple buildfarm members.
Discussion: https://postgr.es/m/20200407232602.GA21559@alvherre.pgsql
Provide PrefetchSharedBuffer(), a variant that takes SMgrRelation, for
use in recovery. Rename LocalPrefetchBuffer() to PrefetchLocalBuffer()
for consistency.
Add a return value to all of these. In recovery, tolerate and report
missing files, so we can handle relations unlinked before crash recovery
began. Also report cache hits and misses, so that callers can do faster
buffer lookups and better I/O accounting.
Reviewed-by: Alvaro Herrera <alvherre@2ndquadrant.com>
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/CA%2BhUKGJ4VJN8ttxScUFM8dOKX0BrBiboo5uz1cq%3DAovOddfHpA%40mail.gmail.com
This case didn't work because columns merged by FULL JOIN USING are
represented in the parse tree by COALESCE expressions, and the logic
for recognizing a partitionable join failed to match upper-level join
clauses to such expressions. To fix, synthesize suitable COALESCE
expressions and add them to the nullable_partexprs lists. This is
pretty ugly and brute-force, but it gets the job done. (I have
ambitions of rethinking the way outer-join output Vars are
represented, so maybe that will provide a cleaner solution someday.
For now, do this.)
Amit Langote, reviewed by Justin Pryzby, Richard Guo, and myself
Discussion: https://postgr.es/m/CA+HiwqG2WVUGmLJqtR0tPFhniO=H=9qQ+Z3L_ZC+Y3-EVQHFGg@mail.gmail.com
Previously, the partitionwise join technique only allowed partitionwise
join when input partitioned tables had exactly the same partition
bounds. This commit extends the technique to some cases when the tables
have different partition bounds, by using an advanced partition-matching
algorithm introduced by this commit. For both input partitioned tables,
the algorithm checks whether every partition of one table matches at
most one partition of the other, and vice versa. In such a case the
join between the tables can be broken down into joins between the
matching partitions, so the algorithm produces the pairs of matching
partitions, plus the partition bounds for the join relation, to allow
partitionwise join to compute the join. Currently, the algorithm
works for list-partitioned and range-partitioned tables, but not
hash-partitioned tables. See comments in partition_bounds_merge().
Ashutosh Bapat and Etsuro Fujita, most of regression tests by Rajkumar
Raghuwanshi, some of the tests by Mark Dilger and Amul Sul, reviewed by
Dmitry Dolgov and Amul Sul, with additional review at various points by
Ashutosh Bapat, Mark Dilger, Robert Haas, Antonin Houska, Amit Langote,
Justin Pryzby, and Tomas Vondra
Discussion: https://postgr.es/m/CAFjFpRdjQvaUEV5DJX3TW6pU5eq54NCkadtxHX2JiJG_GvbrCA@mail.gmail.com
Our documentation describes four allowed input syntaxes for circles,
but the regression tests tried only three ... with predictable
consequences. Remarkably, this has been wrong since the circle
datatype was added in 1997, but nobody noticed till now.
David Zhang, with some help from me
Discussion: https://postgr.es/m/332c47fa-d951-7574-b5cc-a8f7f7201202@highgo.ca
The goal of separating hotly accessed per-backend data from PGPROC
into PGXACT is to make accesses fast (GetSnapshotData() in
particular). But delayChkpt is not actually accessed frequently; only
when starting a checkpoint. As it is frequently modified (multiple
times in the course of a single transaction), storing it in the same
cacheline as hotly accessed data unnecessarily dirties a contended
cacheline.
Therefore move delayChkpt to PGPROC.
This is part of a larger series of patches intending to improve
GetSnapshotData() scalability. It is committed and pushed separately,
as it is independently beneficial (small but measurable win, limited
by the other frequent modifications of PGXACT).
Author: Andres Freund
Reviewed-By: Robert Haas, Thomas Munro, David Rowley
Discussion: https://postgr.es/m/20200301083601.ews6hz5dduc3w2se@alap3.anarazel.de
SLRU page hits were tracked only in SimpleLruReadPage, but that's not
enough because we may hit the page in SimpleLruReadPage_ReadOnly in
which case we don't call SimpleLruReadPage at all.
Reported-by: Kuntal Ghosh
Discussion: https://postgr.es/m/20200119143707.gyinppnigokesjok@development
Before the fix every 2PC commit/abort leaked a file descriptor. As the
files are opened using BasicOpenFile(), that quickly leads to the
backend running out of file descriptors.
Once enough 2PC aborts/commits have caused enough FDs to leak, any IO
in the backend will fail with "Too many open files", as
BasicOpenFilePerm() will have triggered all open files known to fd.c
to be closed.
The leak causing the problem at hand is a consequence of 0dc8ead46,
but is only exacerbated by it. Previously most XLogPageReadCB
callbacks used static variables to cache one open file, but after the
commit the cache is private to each XLogReader instance. There never
was infrastructure to close FDs at the time of XLogReaderFree, but the
way XLogReader was used limited the leak to one FD.
This commit just closes the FD during XLogReaderFree() if the FD is
stored in XLogReaderState.seg.ws_segno. This may not be the way to
solve this medium/long term, but at least it unbreaks 2PC.
Discussion: https://postgr.es/m/20200406025651.fpzdb5yyb7qyhqko@alap3.anarazel.de
Since heap TID is supposed to be just another key attribute to the
implementation, it doesn't make much sense to have separate
BTreeTupleSetNAtts() and BTreeTupleSetAltHeapTID() functions. Merge the
two functions together. This slightly simplifies _bt_truncate().
Replication slots are useful to retain data that may be needed by a
replication system. But experience has shown that allowing them to
retain excessive data can lead to the primary failing because of running
out of space. This new feature allows the user to configure a maximum
amount of space to be reserved using the new option
max_slot_wal_keep_size. Slots that overrun that space are invalidated
at checkpoint time, enabling the storage to be released.
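For example, to cap the WAL kept around for slots at an illustrative
10GB:
    ALTER SYSTEM SET max_slot_wal_keep_size = '10GB';
    SELECT pg_reload_conf();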
Author: Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>
Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
Reviewed-by: Jehan-Guillaume de Rorthais <jgdr@dalibo.com>
Reviewed-by: Álvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://postgr.es/m/20170228.122736.123383594.horiguchi.kyotaro@lab.ntt.co.jp
We invented \gx to allow the "\pset expanded" flag to be forced on
for the duration of one command output, but that turns out to not
be nearly enough to satisfy the demand for variant output formats.
Hence, make it possible to change any pset option(s) for the duration
of a single command output, by writing "option=value ..." inside
parentheses, for example
\g (format=csv csv_fieldsep='\t') somefile
\gx can now be understood as a shorthand for including expanded=on
inside the parentheses.
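As a further illustration, these two commands now behave identically:
    SELECT * FROM pg_class \gx
    SELECT * FROM pg_class \g (expanded=on)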
Patch by me, expanding on a proposal by Pavel Stehule
Discussion: https://postgr.es/m/CAFj8pRBx9OnBPRJVtfA5ycUpySge-XootAXAsv_4rrkHxJ8eRg@mail.gmail.com
This commit adds the following optional clause to the BEGIN and START
TRANSACTION commands.
WAIT FOR LSN lsn [ TIMEOUT timeout ]
The new clause postpones transaction start until the given LSN is
applied on the standby. This allows the user to be sure that changes
previously made on the primary will be visible on the standby.
A new shared memory struct is used to track the awaited LSN per backend.
The recovery process wakes up a backend once the required LSN is applied.
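A usage sketch on a standby, based on the syntax above (the LSN value
and table name are hypothetical, and the timeout is assumed here to be
in milliseconds):
    BEGIN WAIT FOR LSN '0/3F07A6B1' TIMEOUT 10000;
    SELECT count(*) FROM orders;  -- sees changes applied up to that LSN
    COMMIT;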
Author: Ivan Kartyshov, Anna Akenteva
Reviewed-by: Craig Ringer, Thomas Munro, Robert Haas, Kyotaro Horiguchi
Reviewed-by: Masahiko Sawada, Ants Aasma, Dmitry Ivanov, Simon Riggs
Reviewed-by: Amit Kapila, Alexander Korotkov
Discussion: https://postgr.es/m/0240c26c-9f84-30ea-fca9-93ab2df5f305%40postgrespro.ru
WITH TIES is an option to the FETCH FIRST N ROWS clause (the SQL
standard's spelling of LIMIT), where you additionally get rows that
compare equal to the last of those N rows by the columns in the
mandatory ORDER BY clause.
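For example, with a hypothetical employees table, this returns the three
highest salaries plus any further rows tied with the third one:
    SELECT * FROM employees
    ORDER BY salary DESC
    FETCH FIRST 3 ROWS WITH TIES;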
There was a proposal by Andrew Gierth to implement this functionality in
a more powerful way that would yield more features, but the other patch
had not been finished at this time, so we decided to use this one for
now in the spirit of incremental development.
Author: Surafel Temesgen <surafel3000@gmail.com>
Reviewed-by: Álvaro Herrera <alvherre@alvh.no-ip.org>
Reviewed-by: Tomas Vondra <tomas.vondra@2ndquadrant.com>
Discussion: https://postgr.es/m/CALAY4q9ky7rD_A4vf=FVQvCGngm3LOes-ky0J6euMrg=_Se+ag@mail.gmail.com
Discussion: https://postgr.es/m/87o8wvz253.fsf@news-spur.riddles.org.uk
Since the existing bit number argument can't exceed INT32_MAX, it's
not possible for these functions to manipulate bits beyond the first
256MB of a bytea value. Lift that restriction by redeclaring the
bit number arguments as int8 (which requires a catversion bump,
hence is not back-patchable).
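A small illustration (bytea bit numbers count from the least significant
bit of the first byte; only the widened argument type is new here):
    SELECT set_bit('\x00'::bytea, 0, 1);  -- yields \x01
    SELECT get_bit('\x01'::bytea, 0);     -- yields 1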
The similarly-named functions for bit/varbit don't really have a
problem because we restrict those types to at most VARBITMAXLEN bits;
hence leave them alone.
While here, extend the encode/decode functions in utils/adt/encode.c
to allow dealing with values wider than 1GB. This is not a live bug
or restriction in current usage, because no input could be more than
1GB, and since none of the encoders can expand a string more than 4X,
the result size couldn't overflow uint32. But it might be desirable
to support more in future, so make the input length values size_t
and the potential-output-length values uint64.
Also add some test cases to improve the miserable code coverage
of these functions.
Movead Li, editorialized some by me; also reviewed by Ashutosh Bapat
Discussion: https://postgr.es/m/20200312115135445367128@highgo.ca
Previously when there were multiple timelines listed in the history file
of the recovery target timeline, archive recovery searched all of them,
starting from the newest timeline to the oldest one, to find the segment
to read. That is, archive recovery had to continuously fail scanning
the segment until it reached the timeline that the segment belonged to.
These scans for a non-existent segment could be harmful to recovery
performance, especially when the archive area was located on remote
storage and each scan could take a long time.
To address the issue, this commit changes archive recovery so that
it skips scanning the timeline that the segment to read doesn't belong to.
Author: Kyotaro Horiguchi, tweaked a bit by Fujii Masao
Reviewed-by: David Steele, Pavel Suderevsky, Grigory Smolkin
Discussion: https://postgr.es/m/16159-f5a34a3a04dc67e0@postgresql.org
Discussion: https://postgr.es/m/20200129.120222.1476610231001551715.horikyota.ntt@gmail.com
Commit d2d8a229bc introduced Incremental Sort, but it was considered
only in create_ordered_paths() as an alternative to regular Sort. There
are many other places that require sorted input and might benefit from
considering Incremental Sort too.
This patch modifies a number of those places, but not all. The concern
is that just adding Incremental Sort to any place that already adds
Sort may increase the number of paths considered, negatively affecting
planning time, without any benefit. So we've taken a more conservative
approach, based on analysis of which places affect a set of queries
that seemed practical. This means some less common queries may not
benefit from Incremental Sort yet.
Author: Tomas Vondra
Reviewed-by: James Coleman
Discussion: https://postgr.es/m/CAPpHfds1waRZ=NOmueYq0sx1ZSCnt+5QJvizT8ndT2=etZEeAQ@mail.gmail.com
It turns out that the code did indeed rely on a zeroed
TuplesortInstrumentation.sortMethod field to indicate
"this worker never did anything", although it seems the
issue only comes up during certain race-condition-y cases.
Hence, rearrange the TuplesortMethod enum to restore
SORT_TYPE_STILL_IN_PROGRESS to having the value zero,
and add some comments reinforcing that that isn't optional.
Also future-proof a loop over the possible values of the enum.
sizeof(bits32) happened to be the correct limit value,
but only by purest coincidence.
Per buildfarm and local investigation.
Discussion: https://postgr.es/m/12222.1586223974@sss.pgh.pa.us
The txid_XXX family of fmgr functions exposes 64 bit transaction IDs to
users as int8. Now that we have an SQL type xid8 for FullTransactionId,
define a new set of functions including pg_current_xact_id() and
pg_current_snapshot() based on that. Keep the old functions around too,
for now.
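For example, the xid8-based replacements can be called like their txid
counterparts:
    SELECT pg_current_xact_id();   -- xid8 analog of txid_current()
    SELECT pg_current_snapshot();  -- xid8 analog of txid_current_snapshot()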
It's a bit sneaky to use the same C functions for both, but since the
binary representation is identical except for the signedness of the
type, and since older functions are the ones using the wrong signedness,
and since we'll presumably drop the older ones after a reasonable period
of time, it seems reasonable to switch to FullTransactionId internally
and share the code for both.
Reviewed-by: Fujii Masao <masao.fujii@oss.nttdata.com>
Reviewed-by: Takao Fujii <btfujiitkp@oss.nttdata.com>
Reviewed-by: Yoshikazu Imai <imai.yoshikazu@fujitsu.com>
Reviewed-by: Mark Dilger <mark.dilger@enterprisedb.com>
Discussion: https://postgr.es/m/20190725000636.666m5mad25wfbrri%40alap3.anarazel.de
Similar to xid, but 64 bits wide. This new type is suitable for use in
various system views and administration functions.
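For instance, xid8 values can be entered and compared like ordinary
data (xid, by contrast, lacks ordering operators):
    SELECT '42'::xid8;
    SELECT '42'::xid8 < '100'::xid8;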
Reviewed-by: Fujii Masao <masao.fujii@oss.nttdata.com>
Reviewed-by: Takao Fujii <btfujiitkp@oss.nttdata.com>
Reviewed-by: Yoshikazu Imai <imai.yoshikazu@fujitsu.com>
Reviewed-by: Mark Dilger <mark.dilger@enterprisedb.com>
Discussion: https://postgr.es/m/20190725000636.666m5mad25wfbrri%40alap3.anarazel.de
An assertion added by commit 0d861bbb checked that _bt_killitems() only
processes a BTScanPosItem whose heap TID is contained in a posting list
tuple when its page offset number still matches what is on the page
(i.e. when it matches the posting list tuple's current offset number).
This was only correct in the common case where the page can't have
changed since we first read it. It was not correct in cases where we
don't drop the buffer pin (and don't need to verify the page hasn't
changed using its LSN). The latter category includes scans involving
unlogged tables, and scans that use a non-MVCC snapshot, per the logic
originally introduced by commit 2ed5b87f.
The assertion still seems helpful. Fix it by taking cases where the
page may have been concurrently modified into account.
Reported-By: Anastasia Lubennikova, Alexander Lakhin
Discussion: https://postgr.es/m/c4e38e9a-0f9c-8e53-e639-adf343f94472@postgrespro.ru
Incremental Sort is an optimized variant of multikey sort for cases when
the input is already sorted by a prefix of the requested sort keys. For
example when the relation is already sorted by (key1, key2) and we need
to sort it by (key1, key2, key3) we can simply split the input rows into
groups having equal values in (key1, key2), and only sort/compare the
remaining column key3.
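For example (hypothetical table and index), given an index on
(key1, key2), a query ordering by (key1, key2, key3) only needs to sort
each (key1, key2) group on key3:
    CREATE INDEX ON t (key1, key2);
    EXPLAIN SELECT * FROM t ORDER BY key1, key2, key3;
    -- may use an Incremental Sort (Presorted Key: key1, key2)
    -- on top of the index scan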
This has a number of benefits:
- Reduced memory consumption, because only a single group (determined by
values in the sorted prefix) needs to be kept in memory. This may also
eliminate the need to spill to disk.
- Lower startup cost, because Incremental Sort produces results after each
prefix group, which is beneficial for plans where startup cost matters
(like for example queries with LIMIT clause).
We consider both Sort and Incremental Sort, and decide based on costing.
The implemented algorithm operates in two different modes:
- Fetching a minimum number of tuples without checking equality on the
prefix keys, and sorting on all columns when safe.
- Fetching all tuples for a single prefix group and then sorting by
comparing only the remaining (non-prefix) keys.
We always start in the first mode, and employ a heuristic to switch into
the second mode if we believe it's beneficial - the goal is to minimize
the number of unnecessary comparisons while keeping memory consumption
below work_mem.
This is a very old patch series. The idea was originally proposed by
Alexander Korotkov back in 2013, and then revived in 2017. In 2018 the
patch was taken over by James Coleman, who wrote and rewrote most of the
current code.
There were many reviewers/contributors since 2013 - I've done my best to
pick the most active ones, and listed them in this commit message.
Author: James Coleman, Alexander Korotkov
Reviewed-by: Tomas Vondra, Andreas Karlsson, Marti Raudsepp, Peter Geoghegan, Robert Haas, Thomas Munro, Antonin Houska, Andres Freund, Alexander Kuzmenkov
Discussion: https://postgr.es/m/CAPpHfdscOX5an71nHd8WSUH6GNOCf=V7wgDaTXdDd9=goN-gfA@mail.gmail.com
Discussion: https://postgr.es/m/CAPpHfds1waRZ=NOmueYq0sx1ZSCnt+5QJvizT8ndT2=etZEeAQ@mail.gmail.com
Since commit 8f59f6b9c0, CLOBBER_CACHE_ALWAYS buildfarm members have
been failing this test case because the error message now sometimes
includes an error cursor position. It seems largely just luck that
that never happened before, and there are likely to be more ways it
could happen in future. Hence, rather than trying to prevent it,
adjust the test script to suppress that component of the report.
At some point we might need to back-patch this, but refrain until
there's a demonstrated need. (We'd need a different fix before v12,
anyway, since VERBOSITY=sqlstate is a recent thing.)
Tom Lane and Andres Freund
Discussion: https://postgr.es/m/30675.1586111599@sss.pgh.pa.us
Mainly, this adds support code in logical/worker.c for applying
replicated operations whose target is a partitioned table to its
relevant partitions.
Author: Amit Langote <amitlangote09@gmail.com>
Reviewed-by: Rafia Sabih <rafia.pghackers@gmail.com>
Reviewed-by: Peter Eisentraut <peter.eisentraut@2ndquadrant.com>
Reviewed-by: Petr Jelinek <petr@2ndquadrant.com>
Discussion: https://www.postgresql.org/message-id/flat/CA+HiwqH=Y85vRK3mOdjEkqFK+E=ST=eQiHdpj43L=_eJMOOznQ@mail.gmail.com
This new cache lookup routine has been introduced in a40caf5, and more
code paths can directly use it.
Note that in cluster_rel(), the code was returning immediately if the
tuple's entry in pg_index for the clustered index was not valid. This
commit changes the code so that a lookup error is raised instead,
something that could not happen in the first place because we check for
the existence of the index beforehand, while holding an exclusive lock on
the parent table.
Author: Justin Pryzby
Reviewed-by: Álvaro Herrera, Michael Paquier
Discussion: https://postgr.es/m/20200202161718.GI13621@telsasoft.com
This commit adds a new option WAL, similar to the existing option BUFFERS,
to the EXPLAIN command. This option allows including information on WAL
record generation, added by commit df3b181499, in the EXPLAIN output.
This also allows the WAL usage information to be displayed via
the auto_explain module. A new parameter auto_explain.log_wal controls
whether WAL usage statistics are printed when an execution plan is logged.
This parameter has no effect unless auto_explain.log_analyze is enabled.
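For example (with a hypothetical table t):
    EXPLAIN (ANALYZE, WAL) INSERT INTO t SELECT generate_series(1, 1000);
    -- nodes then report a line of the form:
    --   WAL: records=N fpi=N bytes=N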
Author: Julien Rouhaud
Reviewed-by: Dilip Kumar and Amit Kapila
Discussion: https://postgr.es/m/CAB-hujrP8ZfUkvL5OYETipQwA=e3n7oqHFU=4ZLxWS_Cza3kQQ@mail.gmail.com
A table rewritten by ALTER TABLE would lose tracking of an index usable
for CLUSTER. This setting is tracked by pg_index.indisclustered and is
controlled by ALTER TABLE, so some extra work was needed to restore it
properly. Note that ALTER TABLE only marks the index that can be used
for clustering, and does not do the actual operation.
Author: Amit Langote, Justin Pryzby
Reviewed-by: Ibrar Ahmed, Michael Paquier
Discussion: https://postgr.es/m/20200202161718.GI13621@telsasoft.com
Backpatch-through: 9.5
This is for the benefit of running postgres under the rr
debugger. When using rr, signal handlers running while a syscall is
active use an alternative stack. As e.g. bgworkers are started from
within signal handlers, the forked backend then has a different stack
base than postmaster. Previously that subsequently led to those
processes triggering spurious "stack depth limit exceeded" errors.
Discussion: https://postgr.es/m/20200327182217.ubrrl32lyfhxfwk5@alap3.anarazel.de
There's a very low risk that RecentGlobalXmin could be far enough in
the past to be older than relfrozenxid, or even wrapped
around. Luckily the consequences of that having happened wouldn't be
too bad - the page wouldn't be pruned for a while.
Avoid that risk by using TransactionXmin instead. As that's announced
via MyPgXact->xmin, it is protected against wrapping around (see code
comments for details around relfrozenxid).
Author: Andres Freund
Discussion: https://postgr.es/m/20200328213023.s4eyijhdosuc4vcj@alap3.anarazel.de
Backpatch: 9.5-
Commit 7cb0a423f overlooked that the multi-level partition test table
pagg_tab_ml still had an exactly even row split at its upper level of
partitioning, so that some of the sub-aggregation plan steps still had
exactly equal costs, leading to plan instability. Tweak the partition
boundaries some more to make the row distribution unequal at both
levels. This leads to more changes in the "expected" plan order than
the previous round, but it seems fine. (Actually, I'm surprised that
this didn't affect even more plans in this test: looking at the
underlying costs shows that some of the parallel plan groups are
*not* getting sorted by cost. Bug?)
Per buildfarm member lousyjack,
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=lousyjack&dt=2020-04-04%2021%3A03%3A04
Discussion: https://postgr.es/m/24467.1585838693@sss.pgh.pa.us
Commit 77ec5affb added an assertion to enforce_generic_type_consistency
that boils down to "if the function result is polymorphic, there must be
at least one polymorphic argument". This should be true for user-created
functions, but there are built-in functions for which it's not true, as
pointed out by Jaime Casanova. Hence, go back to the old behavior of
leaving the return type alone. There's only a limited amount of stuff
you can do with such a function result, but it does work to some extent;
add some regression test cases to ensure we don't break that again.
Discussion: https://postgr.es/m/CAJGNTeMbhtsCUZgJJ8h8XxAJbK7U2ipsX8wkHRtZRz-NieT8RA@mail.gmail.com
Until now, only selected bulk operations (e.g. COPY) did this. If a
given relfilenode received both a WAL-skipping COPY and a WAL-logged
operation (e.g. INSERT), recovery could lose tuples from the COPY. See
src/backend/access/transam/README section "Skipping WAL for New
RelFileNode" for the new coding rules. Maintainers of table access
methods should examine that section.
To maintain data durability, just before commit, we choose between an
fsync of the relfilenode and copying its contents to WAL. A new GUC,
wal_skip_threshold, guides that choice. If this change slows a workload
that creates small, permanent relfilenodes under wal_level=minimal, try
adjusting wal_skip_threshold. Users setting a timeout on COMMIT may
need to adjust that timeout, and log_min_duration_statement analysis
will reflect time consumption moving to COMMIT from commands like COPY.
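For example, to prefer an fsync over WAL-copying for relfilenodes up to
an arbitrary 8MB:
    SET wal_skip_threshold = '8MB';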
Internally, this requires a reliable determination of whether
RollbackAndReleaseCurrentSubTransaction() would unlink a relation's
current relfilenode. Introduce rd_firstRelfilenodeSubid. Amend the
specification of rd_createSubid such that the field is zero when a new
rel has an old rd_node. Make relcache.c retain entries for certain
dropped relations until end of transaction.
Bump XLOG_PAGE_MAGIC, since this introduces XLOG_GIST_ASSIGN_LSN.
Future servers accept older WAL, so this bump is discretionary.
Kyotaro Horiguchi, reviewed (in earlier, similar versions) by Robert
Haas. Heikki Linnakangas and Michael Paquier implemented earlier
designs that materially clarified the problem. Reviewed, in earlier
designs, by Andrew Dunstan, Andres Freund, Alvaro Herrera, Tom Lane,
Fujii Masao, and Simon Riggs. Reported by Martijn van Oosterhout.
Discussion: https://postgr.es/m/20150702220524.GA9392@svana.org
This allows gathering the WAL generation statistics for each statement
execution. The three statistics that we collect are the number of WAL
records, the number of full page writes and the amount of WAL bytes
generated.
This helps users who have write-intensive workloads to see the impact
of I/O due to WAL. This further enables us to see approximately what
percentage of overall WAL is due to full page writes.
In the future, we can extend this functionality to allow us to compute
the exact amount of WAL data due to full page writes.
This patch in itself is just an infrastructure to compute WAL usage data.
The upcoming patches will expose this data via explain, auto_explain,
pg_stat_statements and verbose (auto)vacuum output.
Author: Kirill Bychik, Julien Rouhaud
Reviewed-by: Dilip Kumar, Fujii Masao and Amit Kapila
Discussion: https://postgr.es/m/CAB-hujrP8ZfUkvL5OYETipQwA=e3n7oqHFU=4ZLxWS_Cza3kQQ@mail.gmail.com
Don't try to be precise about it, just use a constant 16 bytes of
chunk overhead. Being smarter would require knowing the memory context
where the chunk will be allocated, which is not known by all callers.
Discussion: https://postgr.es/m/20200325220936.il3ni2fj2j2b45y5@alap3.anarazel.de
Commit 0d8c9c1210 allowed BASE_BACKUP to
acquire a ResourceOwner without a transaction so that the backup
manifest functionality could use a BufFile, but it overlooked the fact
that when a walsender is used with replication=database, it might have
a transaction in progress, because in that mode, SQL and replication
commands can be mixed. Try to fix things up so that the two cleanup
mechanisms don't conflict.
Per buildfarm member serinus, which triggered the problem when
CREATE_REPLICATION_SLOT failed from inside a transaction. It passed
on the subsequent run, so evidently the failure doesn't happen every
time.
lapwing is complaining about a call to pg_gmtime, saying that
it "expected 'const pg_time_t *' but argument is of type 'time_t *'".
I at first thought that the problem had something to do with const,
but Thomas Munro suggested that it might be just because time_t
and pg_time_t are different identifiers. lapwing is i686 rather than
x86_64, and pg_time_t is always int64, so that seems like a good
guess.
There is other code that just casts time_t to pg_time_t without
any conversion function, so try that approach here.
Introduced in commit 0d8c9c1210.
It may be necessary to go further and remove this test altogether,
but I'm going to try this fix first. It's not clear, at least to
me, exactly how this is breaking buildfarm members, but it appears
to be doing so.
Move have_partkey_equi_join and match_expr_to_partition_keys to
relnode.c, since they're used only there. Refactor
build_joinrel_partition_info to split out the code that fills the
joinrel's partition key lists; this doesn't have any non-cosmetic
impact, but it seems like a useful separation of concerns.
Improve assorted nearby comments.
Amit Langote, with a little further editorialization by me
Discussion: https://postgr.es/m/CA+HiwqG2WVUGmLJqtR0tPFhniO=H=9qQ+Z3L_ZC+Y3-EVQHFGg@mail.gmail.com
The buildfarm is very unhappy right now because TAP test
003_corruption.pl uses TestLib::tempdir to generate the name of
a temporary directory that is used as a tablespace name, and
this results in a 'symbolic link target too long' error message
on many of the buildfarm machines, but not on my machine.
It appears that other people have run into similar problems in
the past and that TestLib::tempdir_short was the solution, so
let's try using that instead.
It seems that we have a policy that every Perl subroutine should
end with an explicit "return", so add explicit "return"
statements to all the new subroutines added by my prior
commit 0d8c9c1210.
Per buildfarm.
A manifest is a JSON document which includes (1) the file name, size,
last modification time, and an optional checksum for each file backed
up, (2) timelines and LSNs for whatever WAL will need to be replayed
to make the backup consistent, and (3) a checksum for the manifest
itself. By default, we use CRC-32C when checksumming data files,
because we are trying to detect corruption and user error, not foil an
adversary. However, pg_basebackup and the server-side BASE_BACKUP
command now have options to select a different algorithm, so users
wanting a cryptographic hash function can select SHA-224, SHA-256,
SHA-384, or SHA-512. Users not wanting file checksums at all can
disable them, or disable generation of the backup manifest altogether.
Using a cryptographic hash function in place of CRC-32C consumes
significantly more CPU cycles, which may slow down backups in some
cases.
A new tool called pg_validatebackup can validate a backup against the
manifest. If no checksums are present, it can still check that the
right files exist and that they have the expected sizes. If checksums
are present, it can also verify that each file has the expected
checksum. Additionally, it calls pg_waldump to verify that the
expected WAL files are present and parseable. Only plain format
backups can be validated directly, but tar format backups can be
validated after extracting them.
Robert Haas, with help, ideas, review, and testing from David Steele,
Stephen Frost, Andrew Dunstan, Rushabh Lathia, Suraj Kharage, Tushar
Ahuja, Rajkumar Raghuwanshi, Mark Dilger, Davinder Singh, Jeevan
Chalke, Amit Kapila, Andres Freund, and Noah Misch.
Discussion: http://postgr.es/m/CA+TgmoZV8dw1H2bzZ9xkKwdrk8+XYa+DC9H=F7heO2zna5T6qg@mail.gmail.com
When the BUFFERS option is enabled, the EXPLAIN command includes
information on buffer usage for each plan node in its output. In addition,
this commit makes the EXPLAIN command also include information on buffer
usage during the planning phase in its output. This feature makes it
easier to discern cases where lots of buffer accesses happen during
planning.
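For example (query shape arbitrary):
    EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM pg_class;
    -- the output now also includes buffer usage incurred while planning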
This commit revives the original commit ed7a509571 that was reverted by
commit 19db23bcbd. The original commit had to be reverted because
it caused the regression test failure on the buildfarm members prion and
dory. But since commit c0885c4c30 got rid of the cause of the test failure,
the original commit can be safely introduced again.
Author: Julien Rouhaud, slightly revised by Fujii Masao
Reviewed-by: Justin Pryzby
Discussion: https://postgr.es/m/16109-26a1a88651e90608@postgresql.org
entryGetItem()'s three code paths each contained bugs associated
with filtering the entries for gin_fuzzy_search_limit.
The posting-tree path failed to advance "advancePast" after having
decided to filter an item. If we ran out of items on the current
page and needed to advance to the next, what would actually happen
is that entryLoadMoreItems() would re-load the same page. Eventually,
the random dropItem() test would accept one of the same items it'd
previously rejected, and we'd move on --- but it could take awhile
with small gin_fuzzy_search_limit. To add insult to injury, this
case would inevitably cause entryLoadMoreItems() to decide it needed
to re-descend from the root, making things even slower.
The posting-list path failed to implement gin_fuzzy_search_limit
filtering at all, so that all entries in the posting list would
be returned.
The bitmap-result path used a "gotitem" variable that it failed to
update in the one place where it'd actually make a difference, ie
at the one "continue" statement. I think this was unreachable in
practice, because if we'd looped around then it shouldn't be the
case that the entries on the new page are before advancePast.
Still, the "gotitem" variable was contributing nothing to either
clarity or correctness, so get rid of it.
Refactor all three loops so that the termination conditions are
more alike and less unreadable.
The code coverage report showed that we had no coverage at all for
the re-descend-from-root code path in entryLoadMoreItems(), which
seems like a very bad thing, so add a test case that exercises it.
We also had exactly no coverage for gin_fuzzy_search_limit, so add a
simplistic test case that at least hits those code paths a little bit.
Back-patch to all supported branches.
Adé Heyward and Tom Lane
Discussion: https://postgr.es/m/CAEknJCdS-dE1Heddptm7ay2xTbSeADbkaQ8bU2AXRCVC2LdtKQ@mail.gmail.com
The explain regression test runs EXPLAIN commands via the function
that filters unstable outputs. To produce more stable test output,
this commit improves the function so that it also filters out text-mode
Buffers lines. This is necessary because text-mode Buffers lines vary
depending on the system state.
This improvement will get rid of the regression test failure that
the commit ed7a509571 caused on the buildfarm members prion and
dory because of the instability of Buffers lines.
Author: Fujii Masao
Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/20200403025751.GB1759@paquier.xyz
contrib/lo's lo_manage() thought it could use
trigdata->tg_trigger->tgname in its error message about
not being called as a trigger. That naturally led to a core dump.
unique_key_recheck() figured it could Assert that fcinfo->context
is a TriggerData node in advance of having checked that it's
being called as a trigger. That's harmless in production builds,
and perhaps not that easy to reach in any case, but it's logically
wrong.
The first of these per bug #16340 from William Crowell;
the second from manual inspection of other CALLED_AS_TRIGGER
call sites.
Back-patch the lo.c change to all supported branches, the
other to v10 where the thinko crept in.
Discussion: https://postgr.es/m/16340-591c7449dc7c8c47@postgresql.org
This commit introduces new wait events RecoveryConflictSnapshot and
RecoveryConflictTablespace. The former is reported while waiting for
recovery conflict resolution on a vacuum cleanup. The latter is reported
while waiting for recovery conflict resolution on dropping tablespace.
Also this commit changes the code so that the wait event Lock is reported
while waiting in ResolveRecoveryConflictWithVirtualXIDs() for recovery
conflict resolution on a lock. The wait event Lock is normally reported
during such a wait, but previously it was not reported when that wait
happened in ResolveRecoveryConflictWithVirtualXIDs().
Author: Masahiko Sawada
Reviewed-by: Fujii Masao
Discussion: https://postgr.es/m/CA+fd4k4mXWTwfQLS3RPwGr4xnfAEs1ysFfgYHvmmoUgv6Zxvmg@mail.gmail.com
This option is similar to \gset, except that it is able to store all
results from combined SQL queries into separate variables. If a query
returns multiple rows, the last result is stored and if a query returns
no rows, nothing is stored.
While at it, add a TAP test for \gset to check for a failure when a
query returns multiple rows.
Author: Fabien Coelho
Reviewed-by: Ibrar Ahmed, Michael Paquier
Discussion: https://postgr.es/m/alpine.DEB.2.21.1904081914200.2529@lancre
When the BUFFERS option is enabled, the EXPLAIN command includes
information on buffer usage for each plan node in its output. In addition,
this commit makes the EXPLAIN command also include information on buffer
usage during the planning phase in its output. This feature makes it
easier to discern cases where lots of buffer accesses happen during
planning.
Author: Julien Rouhaud, slightly revised by Fujii Masao
Reviewed-by: Justin Pryzby
Discussion: https://postgr.es/m/16109-26a1a88651e90608@postgresql.org
The primary motivation for this change is that it will be used by the
upcoming patch to add backup manifests, but it also seems to have some
potential more general use.
Andres Freund and Robert Haas
Discussion: http://postgr.es/m/20200330020814.nspra4mvby42yoa4@alap3.anarazel.de
Instead of disabling autovacuum on these test tables, adjust the
partition boundaries so that the child partitions are not all the
same size. That should cause the planner to use a predictable
ordering of the per-partition scan nodes even in cases where
autovacuum causes the rowcount estimates to be off a bit.
Moreover, this also lets these tests show that the planner does
properly order the tables in descending size order, something
that wasn't being proven before.
The pagg_tab1 and pagg_tab2 partitions are still all the same
size, but that should be fine, because those tables are so small
that (1) autovacuum won't fire on them, and (2) even if it did,
it couldn't change the reltuples value --- with only one page,
it can't see just part of the relation.
Discussion: https://postgr.es/m/24467.1585838693@sss.pgh.pa.us
This patch replaces the boolean GUC log_parameters_on_error introduced
by commit ba79cb5dc with an integer log_parameter_max_length_on_error,
adding the ability to specify how many bytes to trim each logged
parameter value to. (The previous coding hard-wired that choice at
64 bytes.)
In addition, add a new parameter log_parameter_max_length that provides
similar control over truncation of query parameters that are logged in
response to statement-logging options, as opposed to errors. Previous
releases always logged such parameters in full, possibly causing log
bloat.
For backwards compatibility with prior releases,
log_parameter_max_length defaults to -1 (log in full), while
log_parameter_max_length_on_error defaults to 0 (no logging).
Per discussion, log_parameter_max_length is SUSET since the DBA should
control routine logging behavior, but log_parameter_max_length_on_error
is USERSET because it also affects errcontext data sent back to the
client.
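A configuration sketch (the values are illustrative):
    ALTER SYSTEM SET log_parameter_max_length = '1kB';  -- SUSET
    SET log_parameter_max_length_on_error = 64;         -- USERSET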
Alexey Bashtanov, editorialized a little by me
Discussion: https://postgr.es/m/b10493cc-a399-a03a-67c7-068f2791ee50@imap.cc
In b07642dbc, we added code to trigger autovacuums based on the number of
INSERTs into a table. This seems to have caused some destabilization of
the regression tests. Likely this is due to an autovacuum triggering
mid-test and (per theory from Tom Lane) one of the test's queries causes
autovacuum to skip some number of pages, resulting in the reltuples
estimate changing.
The failure that this is attempting to fix is around the order of subnodes
in an Append. Since the planner orders these according to the subnode
cost, then it's possible that a small change in the reltuples value changes
the subnode's cost enough that it swaps position with one of its fellow
subnodes.
The failure here only seems to occur on slower buildfarm machines. In this
case, lousyjack, which seems to have taken over 8 minutes to run just
the partitionwise_aggregate test. Such a slow run would increase the
chances that the autovacuum launcher would trigger a vacuum mid-test.
Faster machines run this test in sub second time, so have a much smaller
window for an autovacuum to trigger.
Here we fix this by disabling autovacuum on all tables created in the test.
Additionally, this reverts the change made in the
partitionwise_aggregate test in 2dc16efed.
Discussion: https://postgr.es/m/22297.1585797192@sss.pgh.pa.us
This adds SQL expressions NORMALIZE() and IS NORMALIZED to convert and
check Unicode normal forms, per SQL standard.
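For example:
    SELECT normalize(U&'\0061\0301', NFC);  -- composes to U+00E1
    SELECT U&'\00E1' IS NFC NORMALIZED;     -- true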
To support fast IS NORMALIZED tests, we pull in a new data file
DerivedNormalizationProps.txt from Unicode and build a lookup table
from that, using techniques similar to ones already used for other
Unicode data. make update-unicode will keep it up to date. We only
build and use these tables for the NFC and NFKC forms, because they
are too big for NFD and NFKD and the improvement is not significant
enough there.
Reviewed-by: Daniel Verite <daniel@manitou-mail.org>
Reviewed-by: Andreas Karlsson <andreas@proxel.se>
Discussion: https://www.postgresql.org/message-id/flat/c1909f27-c269-2ed9-12f8-3ab72c8caf7a@2ndquadrant.com
Commit 40d964ec99 allowed the VACUUM command to process indexes in
parallel but forgot to accumulate the buffer usage stats of parallel
workers. This commit allows the leader backend to accumulate the buffer
usage stats of all the parallel workers.
Reported-by: Julien Rouhaud
Author: Sawada Masahiko
Reviewed-by: Dilip Kumar, Amit Kapila and Julien Rouhaud
Discussion: https://postgr.es/m/20200328151721.GB12854@nol
This commit makes pg_stat_statements support new GUC
pg_stat_statements.track_planning. If this option is enabled,
pg_stat_statements tracks the planning statistics of the statements,
e.g., the number of times the statement was planned, the total time
spent planning the statement, etc. This feature is useful to check
the statements that it takes a long time to plan. Previously since
pg_stat_statements tracked only the execution statistics, we could
not use that for the purpose.
The planning and execution statistics are stored at the end of
each phase separately. So there are not always one-to-one relationship
between them. For example, if the statement is successfully planned
but fails in the execution phase, only its planning statistics are stored.
This may cause the users to be able to see different pg_stat_statements
results from the previous version. To avoid this,
pg_stat_statements.track_planning needs to be disabled.
This commit bumps the version of pg_stat_statements to 1.8
since it changes the definition of pg_stat_statements function.
Author: Julien Rouhaud, Pascal Legrand, Thomas Munro, Fujii Masao
Reviewed-by: Sergei Kornilov, Tomas Vondra, Yoshikazu Imai, Haribabu Kommi, Tom Lane
Discussion: https://postgr.es/m/CAHGQGwFx_=DO-Gu-MfPW3VQ4qC7TfVdH2zHmvZfrGv6fQ3D-Tw@mail.gmail.com
Discussion: https://postgr.es/m/CAEepm=0e59Y_6Q_YXYCTHZkqOc6H2pJ54C_Xe=VFu50Aqqp_sA@mail.gmail.com
Discussion: https://postgr.es/m/DB6PR0301MB21352F6210E3B11934B0DCC790B00@DB6PR0301MB2135.eurprd03.prod.outlook.com
There are a number of SLRU caches used to access important data like
clog, commit timestamps, multixact, asynchronous notifications, etc.
Until now we had no easy way to monitor these shared caches or compute
hit ratios, numbers of reads/writes, etc.
This commit extends the statistics collector to track this information
for a predefined list of SLRUs, and also introduces a new system view
pg_stat_slru displaying the data.
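For example, hit ratios can be computed from the new view with a query
like:
    SELECT name, blks_hit, blks_read,
           round(blks_hit::numeric / nullif(blks_hit + blks_read, 0), 2)
               AS hit_ratio
    FROM pg_stat_slru;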
The list of built-in SLRUs is fixed, but additional SLRUs may be defined
in extensions. Unfortunately, there's no suitable registry of SLRUs, so
this patch simply defines a fixed list of SLRUs with entries for the
built-in ones and one entry for all additional SLRUs. Extensions adding
their own SLRU are fairly rare, so this seems acceptable.
This patch only allows monitoring of SLRUs, not tuning. The SLRU sizes
are still fixed (hard-coded in the code) and it's not entirely clear
which of the SLRUs might need a GUC to tune size. In a way, allowing us
to determine that is one of the goals of this patch.
Bump catversion as the patch introduces new functions and a new system view.
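For instance, per-SLRU hit ratios can be computed from the view like
this (a sketch):

    SELECT name, blks_hit, blks_read,
           round(blks_hit::numeric / nullif(blks_hit + blks_read, 0), 2)
             AS hit_ratio
      FROM pg_stat_slru;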
Author: Tomas Vondra
Reviewed-by: Alvaro Herrera
Discussion: https://www.postgresql.org/message-id/flat/20200119143707.gyinppnigokesjok@development
We require the partition key to be a subset of the set of columns
being made unique, so that physically-separate indexes on the different
partitions are sufficient to enforce the uniqueness constraint.
The existing code checked that the listed columns appear, but did not
inquire into the index semantics, which is a serious oversight given
that different index opclasses might enforce completely different
notions of uniqueness.
Ideally, perhaps, we'd just match the partition key opfamily to the
index opfamily. But hash partitioning uses hash opfamilies which we
can't directly match to btree opfamilies. Hence, look up the equality
operator in each family, and accept if it's the same operator. This
should be okay in a fairly general sense, since the equality operator
ought to precisely represent the opfamily's notion of uniqueness.
A remaining weak spot is that we don't have a cross-index-AM notion of
which opfamily member is "equality". But we know which one to use for
hash and btree AMs, and those are the only two that are relevant here
at present. (Any non-core AMs that know how to enforce equality are
out of luck, for now.)
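To illustrate the rule being enforced (hypothetical table):

    CREATE TABLE t (a int, b int) PARTITION BY HASH (a);
    CREATE UNIQUE INDEX ON t (a, b);  -- accepted: the btree and hash
                                      -- opfamilies agree on int equality
    CREATE UNIQUE INDEX ON t (b);     -- rejected: partition key not included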
Back-patch to v11 where this feature was introduced.
Guancheng Luo, revised a bit by me
Discussion: https://postgr.es/m/D9C3CEF7-04E8-47A1-8300-CA1DCD5ED40D@gmail.com
Quite a few matching operators such as JSONB's @> used "contsel" and
"contjoinsel" as their selectivity estimators. That was a bad idea,
because (a) contsel is only a stub, yielding a fixed default estimate,
and (b) that default is 0.001, meaning we estimate these operators as
five times more selective than equality, which is surely pretty silly.
There's a good model for improving this in ltree's ltreeparentsel():
for any "var OP constant" query, we can try applying the operator
to all of the column's MCV and histogram values, taking the latter
as being a random sample of the non-MCV values. That code is
actually 100% generic, except for the question of exactly what
default selectivity ought to be plugged in when we don't have stats.
Hence, migrate the guts of ltreeparentsel() into the core code, provide
wrappers "matchingsel" and "matchingjoinsel" with a more-appropriate
default estimate, and use those for the non-geometric operators that
formerly used contsel (mostly JSONB containment operators and tsquery
matching).
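For example, an estimate such as the one below now consults the
column's MCV and histogram entries instead of returning a flat 0.001
(table and data hypothetical):

    EXPLAIN SELECT * FROM docs WHERE attrs @> '{"status": "open"}';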
Also apply this code to some match-like operators in hstore, ltree, and
pg_trgm, including the former users of ltreeparentsel as well as ones
that improperly used contsel. Since commit 911e70207 just created new
versions of those extensions that we haven't released yet, we can sneak
this change into those new versions instead of having to create an
additional generation of update scripts.
Patch by me, reviewed by Alexey Bashtanov
Discussion: https://postgr.es/m/12237.1582833074@sss.pgh.pa.us
In a psql session, if the connection to the server is abruptly cut, the
referenced connection would become NULL as of CheckConnection(). This
could cause a hard crash with psql if attempting to connect by reusing
the past connection's data because of a null-pointer dereference with
either PQhost() or PQdb(). Fix this by making sure that the past
connection is not reused if it no longer exists.
This issue was introduced by 6e5f8d4, so backpatch down to 12.
Reported-by: Hugh Wang
Author: Michael Paquier
Reviewed-by: Álvaro Herrera, Tom Lane
Discussion: https://postgr.es/m/16330-b34835d83619e25d@postgresql.org
Backpatch-through: 12
Coverity complained that dividing two integer expressions and then
converting the integer quotient to type "double" loses the fractional
part. Casting one of the arguments of the expression to double fixes
the report, e.g. writing (double) a / b rather than (double) (a / b).
Author: Mahendra Singh Thalor
Reviewed-by: Amit Kapila
Discussion: https://postgr.es/m/20200329224818.6phnhv7o2q2rfovf@alap3.anarazel.de
pg_rewind needs to copy from the source cluster to the target cluster
the set of relation blocks changed between the last checkpoint prior
to the point where WAL forked and the end of WAL on the target.
Building this list of relation blocks requires a range of WAL segments
that may no longer be present in the target's pg_wal, causing pg_rewind
to fail. It is possible to work around this issue by manually copying
the needed WAL segments, but this may lead to extra and actually
useless work.
This commit introduces a new option allowing pg_rewind to use a
restore_command while doing the rewind by grabbing the parameter value
of restore_command from the target cluster configuration. This allows
the rewind operation to be more reliable, as only the WAL segments
needed by the rewind are restored from the archives.
In order to be able to do that, a new routine is added to src/common/ to
allow frontend tools to restore files from archives using an
already-built restore command. This version is simpler than the
backend equivalent as there is no need to handle the non-recovery case.
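A sketch of the setup, assuming an archive at a hypothetical path (in
the committed form, the new pg_rewind option is -c/--restore-target-wal):

    ALTER SYSTEM SET restore_command = 'cp /mnt/server/archive/%f %p';
    -- run on the target before stopping it; pg_rewind reads the value
    -- from the target's configuration and uses it to fetch missing WAL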
Author: Alexey Kondratov
Reviewed-by: Andrey Borodin, Andres Freund, Alvaro Herrera, Alexander
Korotkov, Michael Paquier
Discussion: https://postgr.es/m/a3acff50-5a0d-9a2c-b3b2-ee36168955c1@postgrespro.ru
Must hold some lock on the pg_statistic_ext_data catalog *before*
we look up the tuple we aim to replace. Otherwise a concurrent
VACUUM FULL or similar operation could move it to a different TID,
leaving us trying to replace the wrong tuple.
Back-patch to v12 where this got broken.
Credit goes to Dean Rasheed; I'm just doing the clerical work.
Discussion: https://postgr.es/m/CAEZATCU0zHMDiQV0g8P2U+YSP9C1idUPrn79DajsbonwkN0xvQ@mail.gmail.com
The stats_ext test is not expecting that autovacuum will touch
any of its tables; an expectation falsified by commit b07642dbc.
Although I'm suspicious that there's something else going on that
makes extended stats estimates not 100% reproducible, it's pretty
easy to demonstrate that there are places in this test that fail
if an autovacuum updates the table's stats unexpectedly.
Hence, revert the band-aid changes made by 2dc16efed and 24566b359
in favor of summarily disabling autovacuum for all the tables that
this test checks estimated rowcounts for.
Also remove an evidently obsolete comment at the head of the test.
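The per-table knob used for that looks like this (table and columns
hypothetical):

    CREATE TABLE mcv_lists (a int, b int, c int)
      WITH (autovacuum_enabled = off);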
Discussion: https://postgr.es/m/15012.1585623298@sss.pgh.pa.us
When the recovery target is reached and recovery is paused because of
recovery_target_action=pause, executing pg_wal_replay_resume() causes
the standby to promote, i.e., the recovery to end. So, in this case,
the message "Execute pg_wal_replay_resume() to continue" previously
logged was confusing, because pg_wal_replay_resume() doesn't cause
the recovery to continue.
This commit improves the message logged when recovery is paused,
so that the proper message is output depending on what caused the
pause (pg_wal_replay_pause or recovery_target_action).
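For reference, a sketch of the functions involved on a standby:

    SELECT pg_is_wal_replay_paused();  -- check whether replay is paused
    SELECT pg_wal_replay_resume();     -- continue replay, or end recovery
                                       -- (promote) if the target was reached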
Author: Sergei Kornilov, revised by Fujii Masao
Reviewed-by: Robert Haas
Discussion: https://postgr.es/m/19168211580382043@myt5-b646bde4b8f3.qloud-c.yandex.net
Buildfarm experience shows that this function can fail with ENOENT
if some other process unlinks a file between when we read the directory
entry and when we try to stat() it. The problem is old but we had
not noticed it until 085b6b667 added regression test coverage.
To fix, just ignore ENOENT failures. There is one other case that
this might hide: a symlink that points to nowhere. That seems okay
though, at least better than erroring.
Back-patch to v10 where this function was added, since the regression
test cases were too.
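The SQL-visible functions built on this helper include pg_ls_waldir()
and pg_ls_logdir(); e.g. a listing like the following no longer errors
out if a file vanishes mid-scan:

    SELECT name, size, modification FROM pg_ls_waldir();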
Discussion: https://postgr.es/m/20200308173103.GC1357@telsasoft.com
This commit improves the error reporting introduced by 911e702077.
It puts the argument of errmsg() on a single line, making it easier
to grep the source for the error text. It also improves the wording
of errhint().
The original implementation disallowed using OVERRIDING USER VALUE on
identity columns defined as GENERATED ALWAYS, which is not per
standard. So allow that now.
Expand documentation and tests around this.
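A sketch of the now-allowed usage (hypothetical table; the supplied
value is ignored in favor of the sequence):

    CREATE TABLE items (id int GENERATED ALWAYS AS IDENTITY, v text);
    INSERT INTO items (id, v) OVERRIDING USER VALUE VALUES (123, 'x');
    -- 123 is discarded; id gets the next identity sequence value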
Author: Dean Rasheed <dean.a.rasheed@gmail.com>
Reviewed-by: Peter Eisentraut <peter.eisentraut@2ndquadrant.com>
Reviewed-by: Vik Fearing <vik@postgresfriends.org>
Discussion: https://www.postgresql.org/message-id/flat/CAEZATCVrh2ufCwmzzM%3Dk_OfuLhTTPBJCdFkimst2kry4oHepuQ%40mail.gmail.com
The definitions of the routines defined in xlogarchive.c have been part
of xlog_internal.h which is included by several frontend tools, but all
those routines are only called by the backend. More cleanup could be
done within xlog_internal.h, but that's already a nice cut.
This will help a follow-up patch for pg_rewind where handling of
restore_command is added for frontends.
Author: Alexey Kondratov, Michael Paquier
Reviewed-by: Álvaro Herrera, Alexander Korotkov
Discussion: https://postgr.es/m/a3acff50-5a0d-9a2c-b3b2-ee36168955c1@postgrespro.ru
After commit b61d161c14, during vacuum, we cache the relation name
and relation namespace in the local structure LVRelStats so that we
can use them in an error callback function. We can use the cached
information to avoid the calls to RelationGetRelationName(),
RelationGetNamespace() and get_namespace_name(). This is mainly for
consistency in the vacuum code path, but it also avoids the extra
syscache lookup we do in get_namespace_name().
Author: Justin Pryzby
Reviewed-by: Amit Kapila
Discussion: https://www.postgresql.org/message-id/20191120210600.GC30362@telsasoft.com
Commit 7c2dbc69 reorganized _bt_truncate() in a way that enables a
further simplification that I (pgeoghegan) missed: Since we mark the
tuple that is returned to the caller as a pivot tuple before the point
where its heap TID is set as of 7c2dbc69, it is possible to use the high
level BTreeTupleGetHeapTID() inline function to get an item pointer. Do
it that way now. This approach is clearer and more maintainable.
This reverts commit 2aa6e33, which added a fast path to skip
anti-wraparound, non-aggressive autovacuum jobs (these make no sense,
as anti-wraparound implies aggressive). On a cluster with a large
number of relations, a portion of them heavily updated, this could
cause autovacuum to lock down, with autovacuum workers repeatedly
attempting those jobs on the same relations of the same database,
jobs that just kept being skipped. This lockdown can be resolved
with a manual VACUUM FREEZE.
Justin King has reported one environment where the issue happened, and
Julien Rouhaud and I have been able to reproduce it in a second
environment. With a very aggressive autovacuum_freeze_max_age,
triggering those jobs with pgbench is a matter of minutes, and hitting
the lockdown is a lot harder (my local tests failed to do so).
Note that anti-wraparound and non-aggressive jobs can only be triggered
on a subset of shared catalogs:
- pg_auth_members
- pg_authid
- pg_database
- pg_replication_origin
- pg_shseclabel
- pg_subscription
- pg_tablespace
While the lockdown was possible down to v12, the root cause of those
jobs is a much older issue, which needs more analysis.
Bonus thanks to Andres Freund for the discussion.
Reported-by: Justin King
Discussion: https://postgr.es/m/CAE39h22zPLrkH17GrkDgAYL3kbjvySYD1io+rtnAUFnaJJVS4g@mail.gmail.com
Backpatch-through: 12
Simplify _bt_truncate(), the routine that generates truncated leaf page
high keys. Remove a micro-optimization that avoided a second palloc0()
call (this was used when a heap TID was needed in the final pivot tuple,
though only when the index happened to not be an INCLUDE index).
Removing this dubious micro-optimization allows _bt_truncate() to use
the index_truncate_tuple() indextuple.c utility routine in all cases.
This was already the common case.
This commit is a HEAD-only follow up to bugfix commit 4b42a899.