The serialization code dumped core for a string-valued GUC whose value
is NULL, which is a legal state. The infrastructure isn't capable of
transmitting that state exactly, but fortunately, transmitting an empty
string instead should be close enough (compare, eg, commit e45e990e4).
The code potentially underestimated the space required to format a
real-valued variable, both because it made an unwarranted assumption that
%g output would never be longer than %e output, and because it didn't count
right even for %e format. In practice this would pretty much always be
masked by overestimates for other variables, but it's still wrong.
Also fix boundary-case error in read_gucstate, incorrect handling of the
case where guc_sourcefile is non-NULL but zero length (not clear that can
happen, but if it did, this code would get totally confused), and
confusingly useless check for a NULL result from read_gucstate.
Andreas Seltenreich discovered the core dump; other issues noted while
reading nearby code. Back-patch to 9.5 where this code was introduced.
Michael Paquier and Tom Lane
Discussion: <871sy78wno.fsf@credativ.de>
Like pg_tables, pg_views, and others, this view contains information
about sequences in a way that is independent of the system catalog
layout but more comprehensive than the information schema.
To help implement the view, add a new internal function
pg_sequence_last_value() to return the last value of a sequence. This
is kept separate from pg_sequence_parameters() to separate querying
run-time state from catalog-like information.
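As a rough usage sketch (the sequence name is hypothetical, and the view
is assumed to be exposed as pg_sequences, by analogy with pg_tables and
pg_views):

CREATE SEQUENCE serial_seq;
-- catalog-like information, independent of the system catalog layout
SELECT * FROM pg_sequences;
-- run-time state: the last value obtained from the sequence, if any
SELECT pg_sequence_last_value('serial_seq'::regclass);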
Reviewed-by: Andreas Karlsson <andreas@proxel.se>
Add a loop to check that each test covers all of the pg_dump runs. We
(I) had been a bit sloppy when adding new runs and not making sure to
mark if they should be under like or unlike for each test; this loop

makes sure that the test system will complain if any are forgotten in
the future.
The loop also correctly handles the 'catch all' cases, which are used to
avoid running unnecessary specific checks when a single catch-all can be
done (eg: a no-acl run should not have any GRANT commands).
Also, re-enable the testing of blobs, but use lo_from_bytea() instead of
trying to be cute and writing out to a file and then reading it back in
with psql, which proved to be difficult for some buildfarm members.
This allows us to add support for testing the --no-blobs option which
will be getting added shortly, provided the buildfarm doesn't blow up on
this.
The debug messages that merely print StartTransactionCommand,
CommitTransactionCommand, ProcessUtility, or ProcessQuery with no
additional details seem to be useless. Get rid of them.
The transaction status messages produced by ShowTransactionState are
occasionally useful, but they are extremely verbose, producing
multiple lines of log output every time they fire, which can happen
multiple times per transaction. So, reduce the level to DEBUG5; avoid
emitting an extra line just to explain which debug point is at issue;
and tighten up the rest of the message so it doesn't use quite so much
horizontal space.
With these changes, it's possible to run a somewhat busy system with a
log level even as high as DEBUG4, whereas previously anything above
DEBUG2 would flood the log with output that probably wasn't really all
that useful.
pg_dump's traditional solution for breaking a circular dependency involving
a view was to create the view with CREATE TABLE and then later issue CREATE
RULE "_RETURN" ... to convert the table to a view, relying on the backend's
very very ancient code that supports making views that way. We've wanted
to get rid of that kluge for a long time, but the thing that finally
motivates doing something about it is the recognition that this method
fails with the --clean option, because it leads to issuing DROP RULE
"_RETURN" followed by DROP TABLE --- and the backend won't let you drop a
view's _RETURN rule.
Instead, let's break circular dependencies by initially creating the view
using CREATE VIEW AS SELECT NULL::columntype AS columnname, ... (so that
it has the right column names and types to support external references,
but no dependencies beyond the column data types), and then later dumping
the ON SELECT rule using the spelling CREATE OR REPLACE VIEW. This method
wasn't available when this code was originally written, but it's been
possible since PG 7.3, so it seems fine to start relying on it now.
To solve the --clean problem, make the dropStmt for an ON SELECT rule
be CREATE OR REPLACE VIEW with the same dummy target list as above.
In this way, during the DROP phase, we first reduce the view to have
no extra dependencies, and then we can drop it entirely when we've
gotten rid of whatever had a circular dependency on it.
(Note: this should work adequately well with the --if-exists option, since
the CREATE OR REPLACE VIEW will go through whether the view exists or not.
It could fail if the view exists with a conflicting column set, but we
don't really support --clean against a non-matching database anyway.)
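A sketch of the new restore sequence, with hypothetical view, table, and
column names:

-- break the circularity: correct column names and types, no dependencies
CREATE VIEW v AS SELECT NULL::integer AS id, NULL::text AS name;
-- ... objects that v depends on are created in between ...
-- later, install the real definition
CREATE OR REPLACE VIEW v AS SELECT t.id, t.name FROM t;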
This allows cleaning up some other kluges inside pg_dump, notably that
we don't need a notion of reloptions attached to a rule anymore.
Although this is a bug fix, commit to HEAD only for now. The problem's
existed for a long time and we've had relatively few complaints, so it
doesn't really seem worth taking risks to fix it in the back branches.
We might revisit that choice if no problems emerge.
Discussion: <19092.1479325184@sss.pgh.pa.us>
Teach it not to complain if the dropStmt attached to an archive entry
is actually spelled CREATE OR REPLACE VIEW, since that will happen due to
an upcoming bug fix. Also, if it doesn't recognize a dropStmt, have it
print a WARNING and then emit the dropStmt unmodified. That seems like a
much saner behavior than Assert'ing or dumping core due to a null-pointer
dereference, which is what would happen before :-(.
Back-patch to 9.4 where this option was introduced.
Discussion: <19092.1479325184@sss.pgh.pa.us>
Replay of XLOG_BTREE_VACUUM during Hot Standby was previously thought to
require complex interlocking that matched the requirements on the
master. This required an O(N) operation that became a significant
problem with large indexes, causing replication delays of seconds or in
some cases minutes while the XLOG_BTREE_VACUUM was replayed.
This commit skips the “pin scan” that was previously required, by
observing in detail when and how it is safe to do so, with full
documentation. The pin scan is skipped only in replay; the VACUUM code
path on master is not touched here.
No tests included. Manual tests using an additional patch to view WAL records
and their timing have shown the change in WAL records and their handling has
successfully reduced replication delay.
This is a back-patch of commits 687f2cd7a0, 3e4b7d8798, b602842613
by Simon Riggs, to branches 9.4 and 9.5. No further backpatch is
possible because this depends on catalog scans being MVCC. I (Álvaro)
additionally fixed a slight problem in the README, which explains why
this touches the 9.6 and master branches.
Previously, the handle for the control segment could not be zero, but
some other DSM segment could potentially have a handle value of zero.
However, that means that if someone wanted to store a dsm_handle that
might or might not be valid, they would need a separate boolean to
keep track of whether the associated value is legal. That's annoying,
so change things so that no DSM segment can ever have a handle of 0 -
or as we call it here, DSM_HANDLE_INVALID.
Thomas Munro. This was submitted as part of a much larger patch to
add a malloc-like allocator for dynamic shared memory, but this part
seems like a good idea independently of the rest of the patch.
On Windows, libc will mask \r\n line endings for us, since we read the
password file in text mode. But that doesn't happen on Unix. People
who share password files across both systems might have \r\n line endings
in a file they use on Unix, so as a convenience, ignore trailing \r.
Per gripe from Josh Berkus.
In passing, put the existing check for empty line somewhere where it's
actually useful, ie after stripping the newline not before.
Vik Fearing, adjusted a bit by me
Discussion: <0de37763-5843-b2cc-855e-5d0e5df25807@agliodbs.com>
The CatalogSnapshot was not plugged into SnapshotResetXmin()'s accounting
for whether MyPgXact->xmin could be cleared or advanced. In normal
transactions this was masked by the fact that the transaction snapshot
would be older, but during backend startup and certain utility commands
it was possible to re-use the CatalogSnapshot after MyPgXact->xmin had
been cleared, meaning that recently-deleted rows could be pruned even
though this snapshot could still see them, causing unexpected catalog
lookup failures. This effect appears to be the explanation for a recent
failure on buildfarm member piculet.
To fix, add the CatalogSnapshot to the RegisteredSnapshots heap whenever
it is valid.
In the previous logic, it was possible for the CatalogSnapshot to remain
valid across waits for client input, but with this change that would mean
it delays advance of global xmin in cases where it did not before. To
avoid possibly causing new table-bloat problems with clients that sit idle
for long intervals, add code to invalidate the CatalogSnapshot before
waiting for client input. (When the backend is busy, it's unlikely that
the CatalogSnapshot would be the oldest snap for very long, so we don't
worry about forcing early invalidation of it otherwise.)
In passing, remove the CatalogSnapshotStale flag in favor of using
"CatalogSnapshot != NULL" to represent validity, as we do for the other
special snapshots in snapmgr.c. And improve some obsolete comments.
No regression test because I don't know a deterministic way to cause this
failure. But the stress test shown in the original discussion provokes
"cache lookup failed for relation 1255" within a few dozen seconds for me.
Back-patch to 9.4 where MVCC catalog scans were introduced. (Note: it's
quite easy to produce similar failures with the same test case in branches
before 9.4. But MVCC catalog scans were supposed to fix that.)
Discussion: <16447.1478818294@sss.pgh.pa.us>
Commit 41124a91e6 allowed the
transaction log file prefix to be changed but left in place the
existing 64-character limit on the total length of a log file name.
It's possible that could be inconvenient for somebody, so increase the
limit to MAXPGPATH, which ought to be enough for anybody.
Per a suggestion from Tom Lane.
This gives us a less ugly way to avoid running the regression tests in
contrib modules like pg_stat_statement.
Discussion: <22432.1478968242@sss.pgh.pa.us>
Add a new optional Makefile variable PROVE_TESTS that, if passed as a
space-separated list of paths relative to the Makefile invoking
$(prove_check) or $(prove_installcheck), runs just those tests instead
of t/*.pl .
From: Craig Ringer <craig@2ndquadrant.com>
Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
Previously, pg_upgrade migrated sequence data like tables by copying the
on-disk file. This does not allow any changes in the on-disk format for
sequences. It's simpler to just have pg_dump set the new sequence
values as it normally does. To do that, create a hidden submode in
pg_dump that dumps sequence data even when a schema-only dump is
requested, and trigger that submode in binary upgrade mode. (This new
submode could easily be exposed as a command-line option, but it has
limited use outside of pg_dump and would probably cause some confusion,
so we don't do that at this time.)
Reviewed-by: Anastasia Lubennikova <a.lubennikova@postgrespro.ru>
Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
Instead of handling both sequence data and table data internally as
"table data", handle sequences separately under a "sequence set" type.
We already handled materialized view data differently, so it makes the
code somewhat cleaner to handle each relation kind separately at the top
level.
This does not change the output format, since there already was a
separate "SEQUENCE SET" archive entry type. A noticeable difference is
that SEQUENCE SET entries now always appear after TABLE DATA entries.
And in parallel mode there is less sorting to do, because the sequence
data entries are no longer considered table data.
Reviewed-by: Anastasia Lubennikova <a.lubennikova@postgrespro.ru>
Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
Be sure to pull up the subquery's hasRowSecurity flag when flattening a
subquery in pull_up_simple_subquery(). This isn't a bug today because
we don't look at the hasRowSecurity flag during planning, but it could
easily be a bug tomorrow.
Likewise, make rewriteRuleAction() pull up the hasRowSecurity flag when
absorbing RTEs from a rule action. This isn't a bug either, for the
opposite reason: the flag should never be set yet. But again, it seems
like good future proofing.
Add a comment explaining why rewriteTargetView() should *not* set
hasRowSecurity when adding stuff to securityQuals.
Improve some nearby comments about securityQuals processing, and document
that field more completely in parsenodes.h.
Patch by me, analysis by Dean Rasheed.
Discussion: <CAEZATCXZ8tb2DV6f=bkhsMV6u_gRcZ0CZBw2J-qU84RxSukZog@mail.gmail.com>
The reloptions stuff allows this option to be set on a matview.
While it's questionable whether that is useful or was really intended,
it does work, and we shouldn't change that in minor releases. Commit
e3e66d8a9 disabled the option since I didn't realize that it was
possible for it to be set on a matview. Tweak the test to re-allow it.
Discussion: <19749.1478711862@sss.pgh.pa.us>
We just pass the data to the INSTEAD trigger.
Haribabu Kommi, reviewed by Dilip Kumar
Patch: <CAJrrPGcSQkrNkO+4PhLm4B8UQQQmU9YVUuqmtgM=pmzMfxWaWQ@mail.gmail.com>
The plan generated for sorted partial aggregation with "GROUP BY constant"
included a Sort node with no sort keys, which the executor does not like.
Per report from Steve Randall. I'd add a regression test case if I could
think of a compact one, but it doesn't seem worth expending lots of cycles
on.
Report: <CABVd52UAdGXpg_rCk46egpNKYdXOzCjuJ1zG26E2xBe_8bj+Fg@mail.gmail.com>
The idea behind SPI_push was to allow transitioning back into an
"unconnected" state when a SPI-using procedure calls unrelated code that
might or might not invoke SPI. That sounds good, but in practice the only
thing it does for us is to catch cases where a called SPI-using function
forgets to call SPI_connect --- which is a highly improbable failure mode,
since it would be exposed immediately by direct testing of said function.
As against that, we've had multiple bugs induced by forgetting to call
SPI_push/SPI_pop around code that might invoke SPI-using functions; these
are much harder to catch and indeed have gone undetected for years in some
cases. And we've had to band-aid around some problems of this ilk by
introducing conditional push/pop pairs in some places, which really kind
of defeats the purpose altogether; if we can't draw bright lines between
connected and unconnected code, what's the point?
Hence, get rid of SPI_push[_conditional], SPI_pop[_conditional], and the
underlying state variable _SPI_curid. It turns out SPI_restore_connection
can go away too, which is a nice side benefit since it was never more than
a kluge. Provide no-op macros for the deleted functions so as to avoid an
API break for external modules.
A side effect of this removal is that SPI_palloc and allied functions no
longer permit being called when unconnected; they'll throw an error
instead. The apparent usefulness of the previous behavior was a mirage
as well, because it was depended on by only a few places (which I fixed in
preceding commits), and it posed a risk of allocations being unexpectedly
long-lived if someone forgot a SPI_push call.
Discussion: <20808.1478481403@sss.pgh.pa.us>
For ALTER TYPE .. RENAME, add "VALUE" to the list of possible
completions. Complete ALTER TYPE .. RENAME VALUE with possible
enum values. After that, complete with "TO".
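For example, tab completion now assists with a statement like this (enum
type and labels are hypothetical):

ALTER TYPE mood RENAME VALUE 'sad' TO 'unhappy';
-- after RENAME VALUE, psql offers the type's existing enum labels;
-- after the old label, it offers TO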
Dagfinn Ilmari Mannsåker, reviewed by Artur Zakirov.
Invent a new function heap_modify_tuple_by_cols() that is functionally
equivalent to SPI_modifytuple except that it always allocates its result
by simple palloc. I chose however to make the API details a bit more
like heap_modify_tuple: pass a tupdesc rather than a Relation, and use
bool convention for the isnull array.
Use this function in place of SPI_modifytuple at all call sites where the
intended behavior is to allocate in current context. (There actually are
only two call sites left that depend on the old behavior, which makes me
wonder if we should just drop this function rather than keep it.)
This new function is easier to use than heap_modify_tuple() for purposes
of replacing a single column (or, really, any fixed number of columns).
There are a number of places where it would simplify the code to change
over, but I resisted that temptation for the moment ... everywhere except
in plpgsql's exec_assign_value(); changing that might offer some small
performance benefit, so I did it.
This is on the way to removing SPI_push/SPI_pop, but it seems like
good code cleanup in its own right.
Discussion: <9633.1478552022@sss.pgh.pa.us>
There's basically no scenario where it's sensible for this to match
dropped columns, so put a test for dropped-ness into SPI_fnumber()
itself, and excise the test from the small number of callers that
were paying attention to the case. (Most weren't :-(.)
In passing, normalize tests at call sites: always reject attnum <= 0
if we're disallowing system columns. Previously there was a mixture
of "< 0" and "<= 0" tests. This makes no practical difference since
SPI_fnumber() never returns 0, but I'm feeling pedantic today.
Also, in the places that are actually live user-facing code and not
legacy cruft, distinguish "column not found" from "can't handle
system column".
Per discussion with Jim Nasby; this supersedes his original patch
that just changed the behavior at one call site.
Discussion: <b2de8258-c4c0-1cb8-7b97-e8538e5c975c@BlueTreble.com>
The intent of the test is to check whether XLOG_SEG_SIZE is in a
particular range, but actually in one case it compares XLOG_BLCKSZ
by mistake. Repair.
Commit 88e9823026 introduced this
faulty test.
Kuntal Ghosh, reviewed by Michael Paquier.
The code here would need some change anyway given planned change in
SPI_modifytuple semantics, since this executes after we've exited the
SPI environment. But really it's better to just use heap_modify_tuple.
While at it, normalize use of SPI_fnumber: make error messages distinguish
no-such-column from can't-set-system-column, and remove test for deleted
column which is going to migrate into SPI_fnumber. The lack of a check
for system column names is actually a pre-existing bug here, and might
even qualify as a security bug except that we don't have any trusted
version of plpython.
The code here would need some change anyway given planned change in
SPI_modifytuple semantics, since this executes after we've exited the
SPI environment. But really it's better to just use heap_modify_tuple.
The code's actually shorter this way, and this avoids depending on some
rather indirect reasoning about why the temporary arrays can't be overrun.
(I think the old code is safe, as long as Perl hashes can't contain
duplicate keys; but with this way we don't need that assumption, only
the assumption that SPI_fnumber doesn't return an out-of-range attnum.)
While at it, normalize use of SPI_fnumber: make error messages distinguish
no-such-column from can't-set-system-column, and remove test for deleted
column which is going to migrate into SPI_fnumber.
When squeezing a bucket during vacuum, it's not necessary to retain
any tuples already marked as dead, so ignore them when deciding which
tuples must be moved in order to empty a bucket page. Similarly, when
splitting a bucket, relocating dead tuples to the new bucket is a
waste of effort; instead, just ignore them.
Amit Kapila, reviewed by me. Testing help provided by Ashutosh
Sharma.
In each case, absence of a trailing newline would itself constitute a
PostgreSQL bug. Therefore, this slightly enhances the changed tests.
This works around a bug that last appeared in Perl 5.8.8, fixing
src/test/modules/test_pg_dump when run against that version. Commit
e7293e3271 worked around the bug, but the
subsequent addition of test_pg_dump introduced affected code. As that
commit had shown, slight increases in pattern complexity can suppress
the bug. This commit edits qr/foo$/m patterns too complex to encounter
the bug today, for style consistency and robustness against unrelated
pattern changes. Back-patch to 9.6, where test_pg_dump was introduced.
As of this writing, a fresh MSYS installation includes an affected Perl
5.8.8. The Perl 5.8.8 in Red Hat Enterprise Linux 5.11 carries a patch
that renders it unaffected, but the Perl 5.8.5 of Red Hat Enterprise
Linux 4.4 is affected.
We really ought to make StdRdOptions and the other decoded forms of
reloptions self-identifying, but for the moment, assume that only plain
relations could possibly be user_catalog_tables. Fixes problem with bogus
"ON CONFLICT is not supported on table ... used as a catalog table" error
when target is a view with cascade option.
Discussion: <26681.1477940227@sss.pgh.pa.us>
This reverts commit c8ead2a397.
Seems there is no way to do this that doesn't cause MSVC to give
warnings, so let's just go back to the way we've been doing it.
Discussion: <11843.1478358206@sss.pgh.pa.us>
Use a macro to generate the in and out functions for pseudotypes that
reject all input and output, saving many lines of redundant code.
Parameterize the error messages to reduce translatable strings.
Use Tcl_ListObjGetElements instead of Tcl_SplitList. Aside from being
possibly more efficient in its own right, this means we are no longer
responsible for freeing a malloc'd result array, so we can get rid of
a PG_TRY/PG_CATCH block.
Use heap_form_tuple instead of SPI_modifytuple. We don't need the
extra generality of the latter, since we're always replacing all
columns. Nor do we need its memory-context-munging, since at this
point we're already out of the SPI environment.
Per comparison of this code to tuple-building code submitted by Jim Nasby.
I've abandoned the thought of merging the two cases into a single routine,
but we may as well make the older code simpler and faster where we can.
For a very long time, pltcl's spi_exec and spi_execp commands have had
a behavior of storing the current row number as an element of output
arrays, but this was never documented. Fix that.
For an equally long time, pltcl_trigger_handler had a behavior of silently
ignoring ".tupno" as an output column name, evidently so that the result
of spi_exec could be used directly as a trigger result tuple. Not sure
how useful that really is, but in any case it's bad that it would break
attempts to use ".tupno" as an actual column name. We can fix it by not
checking for ".tupno" until after we check for a column name match. This
comports with the effective behavior of spi_exec[p] that ".tupno" is only
magic when you don't have an actual column named that.
In passing, wordsmith the description of returning modified tuples from
a pltcl trigger.
Noted while working on Jim Nasby's patch to support composite results
from pltcl. The inability to return trigger tuples using ".tupno" as
a column name is a bug, so back-patch to all supported branches.
We must do this in case the expression evaluation results in calling
another plpgsql function (or, really, anything using SPI). I missed
the need for this when I converted exec_cast_value() from doing a
simple InputFunctionCall() to doing ExecEvalExpr() in commit 1345cc67b.
There is a SPI_push_conditional in InputFunctionCall(), so that there
was no bug before that.
Per bug #14414 from Marcos Castedo. Add a regression test based on his
example, which was that a plpgsql function in a domain check constraint
didn't work when assigning to a domain-type variable within plpgsql.
Report: <20161106010947.1387.66380@wrigleys.postgresql.org>
The workaround the IANA guys chose to get rid of the clang warning
we'd silenced in commit 23ed2ba81 turns out not to satisfy Coverity.
Go back to the previous solution, ie, remove the useless comparison
to SIZE_MAX. (In principle, there could be machines out there where
it's not useless because ptrdiff_t is wider than size_t. But the whole
thing is pretty academic anyway, as we could never approach this limit
for any sane estimate of the amount of data that zic will ever be asked
to work with.)
Also, s/lineno/lineno_t/g, because if we accept their decision to start
using "lineno" as a typedef, it is going to have very unpleasant
consequences in our next pgindent run. Noted that while fooling with
pltcl yesterday.
Don't ask Tcl_GetIndexFromObj to store an error message in the interpreter
in cases where the next argument isn't necessarily one of the options
we're asking it to check for. At best that is a waste of time, and at
worst it might cause an inappropriate error result to get left behind.
Be sure to check for valid syntax (ie, no command arguments) in
pltcl_SPI_lastoid.
Extracted from a larger and otherwise-unrelated patch.
Jim Nasby
Patch: <f2134651-14b3-efeb-f274-c69f3c084031@BlueTreble.com>
Commit 7a2fe9bd0 improved merge append so that replacement of a tuple
takes log(N) operations, not twice log(N). Since cost_merge_append knew
about that explicitly, we should adjust it. This probably makes little
difference in practice, but the obsolete comment is confusing.
Ideally this would have been put in in 9.3 with the underlying behavior
change; but I'm not going to back-patch it, since there's some small chance
of changing a plan choice that somebody's optimized for.
Thomas Munro
Discussion: <CAEepm=0WQBSvuYcMOUj4Ga4NXpu2J=ejZcE=e=eiTjTX-6_gDw@mail.gmail.com>
Second try at the change originally made in commit 8518583cd;
this time with contrib updates so that manual extern declarations
are also marked with PGDLLEXPORT. The release notes should point
this out as a significant source-code change for extension authors,
since they'll have to make similar additions to avoid trouble on Windows.
Laurenz Albe, doc change by me
Patch: <A737B7A37273E048B164557ADEF4A58B53962ED8@ntex2010a.host.magwien.gv.at>
Generally, WAL resource managers are only supposed to examine the
top 4 bits of a WAL record's xl_info; the rest are reserved for
the WAL mechanism itself. A few places were not consistent about
doing this with respect to XLOG_CHECKPOINT and XLOG_SWITCH records.
There's no bug currently, since no additional bits ever get set in
these specific record types, but that might not be true forever.
Let's follow the generic coding rule here too.
Michael Paquier
This is infrastructure for the complete SQL standard feature. No
support is included at this point for execution nodes or PLs. The
intent is to add that soon.
As this patch leaves things, standard syntax can create tuplestores
to contain old and/or new versions of rows affected by a statement.
References to these tuplestores are in the TriggerData structure.
C triggers can access the tuplestores directly, so they are usable,
but they cannot yet be referenced within a SQL statement.
LockBufferForCleanup() acquires a cleanup lock unconditionally, and
ConditionalLockBufferForCleanup() acquires a cleanup lock if it is
possible to do so without waiting; this patch adds a new API,
IsBufferCleanupOK(), which tests whether an exclusive lock already
held happens to be a cleanup lock. This is possible because a cleanup
lock simply means an exclusive lock plus the assurance any other pins
on the buffer are newer than our own pin. Therefore, just as the
existing functions decide that the exclusive lock that they've just
taken is a cleanup lock if they observe the pin count to be 1, this
new function allows us to observe that the pin count is 1 on a buffer
we've already locked.
This is useful in situations where a backend definitely wishes to
modify the buffer and also wishes to perform cleanup operations if
possible. The patch to eliminate heavyweight locking by hash indexes
uses this, and it may have other applications as well.
Amit Kapila, per a suggestion from me. Some comment adjustments by me
as well.
This patch absorbs some unreleased fixes for symlink manipulation bugs
introduced in tzcode 2016g. Ordinarily I'd wait around for a released
version, but in this case it seems like we could do with extra testing,
in particular checking whether it works in EDB's VMware build environment.
This corresponds to commit aec59156abbf8472ba201b6c7ca2592f9c10e077 in
https://github.com/eggert/tz.
Per a report from Sandeep Thakkar, building in an environment where hard
links are not supported in the timezone data installation directory failed,
because upstream code refactoring had broken the case of symlinking from an
existing symlink. Further experimentation also showed that the symlinks
were sometimes made incorrectly, with too many or too few "../"'s in the
symlink contents.
This should get back-patched, but first let's see what the buildfarm
makes of it. I'm not too sure about the new dependency on linkat(2).
Report: <CANFyU94_p6mqRQc2i26PFp5QAOQGB++AjGX=FO8LDpXw0GSTjw@mail.gmail.com>
Discussion: http://mm.icann.org/pipermail/tz/2016-November/024431.html
It's also possible to specify a separate port for each host.
Previously, we'd loop over every address returned by looking up the
host name; now, we'll try every address for every host name.
Patch by me. Victor Wagner wrote an earlier patch for this feature,
which I read, but I didn't use any of his code. Review by Mithun Cy.
The foreign-key-aware logic for estimation of join sizes (added in commit
100340e2d) blindly tried to apply the concept to rels that are actually
parents of inheritance trees. This is just plain wrong so far as the
referenced relation is concerned, since the inheritance scan may well
produce lots of rows that are not participating in the constraint. It's
wrong for the referencing relation too, for the same reason; although on
that end we could conceivably detect whether all members of the inheritance
tree have equivalent FK constraints pointing to the same referenced rel,
and then proceed more or less as we do now. But pending somebody writing
code to do that, we must disable this, because it's producing completely
silly estimates when there's an FK linking the heads of inheritance trees.
Per bug #14404 from Clinton Adams. Back-patch to 9.6 where the new
estimation logic came in.
Report: <20161028200412.15987.96482@wrigleys.postgresql.org>
While converting expressions in an upper-level plan node so that they
reference Vars and expressions provided by the input plan node(s),
don't convert plain Const items, even if there happens to be a matching
Const in the input. It's silly to do so because a Var is more expensive to
execute than a Const. Moreover, converting can fool ExecCheckPlanOutput's
check that an insert or update query inserts nulls into dropped columns,
leading to "query provides a value for a dropped column" errors during
INSERT or UPDATE on a table with a dropped column. We could solve this
by making that check more complicated, but I don't see the point; this fix
should save a marginal number of cycles, and it also makes for less messy
EXPLAIN output, as shown by the ensuing regression test result changes.
Per report from Pavel Hanák. I have not incorporated a test case based
on that example, as there doesn't seem to be a simple way of checking
this in isolation without making a bunch of assumptions about other
planner and SQL-function behavior.
Back-patch to 9.6. This setrefs.c behavior exists much further back,
but there is currently no reason to think that it causes problems
before 9.6.
Discussion: <83shraampf.fsf@is-it.eu>
This serves as implicit documentation and is handy if someone wants to
tweak things. The rules are not part of a normal build, like this
entire directory.
Commit 2bd9e412f9 added these in error.
They were part of an earlier design for that patch and survived in the
committed version only by inadvertency.
Julien Rouhaud
tsquery_rewrite() tries to find matches to subsets of AND/OR conditions;
for example, in the query 'a | b | c' the substitution subquery 'a | c'
should match and lead to replacement of the first and third items.
That's fine, but the matching algorithm apparently takes about O(2^N)
for an N-clause query (I say "apparently" because the code is also both
unintelligible and uncommented). We could probably do better than that
even without any extra assumptions --- but actually, we know that the
subclauses are sorted, indeed are depending on that elsewhere in this very
same function. So we can just scan the two lists a single time to detect
matches, as though we were doing a merge join.
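As a hedged illustration at the SQL level (the entry point is ts_rewrite;
the exact ordering of the output clauses may differ):

SELECT ts_rewrite('a | b | c'::tsquery, 'a | c'::tsquery, 'd'::tsquery);
-- the substitution subquery 'a | c' matches the first and third items,
-- so the result is equivalent to 'b | d'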
Also do a re-flattening call (QTNTernary()) in tsquery_rewrite_query, just
to make sure that the tree fits the expectations of the next search cycle.
I didn't try to devise a test case for this, but I'm pretty sure that the
oversight could have led to failure to match in some cases where a match
would be expected.
Improve comments, and also stick a CHECK_FOR_INTERRUPTS into
dofindsubquery, just in case it's still too slow for somebody.
Per report from Andreas Seltenreich. Back-patch to all supported branches.
Discussion: <8760oasf2y.fsf@credativ.de>
QTNTernary() contains logic to flatten, eg, '(a & b) & c' into 'a & b & c',
which is all well and good, but it tries to do that to NOT nodes as well,
so that '!!a' gets changed to '!a'. Explicitly restrict the conversion to
be done only on AND and OR nodes, and add a test case illustrating the bug.
In passing, provide some comments for the sadly naked functions in
tsquery_util.c, and simplify some baroque logic in QTNFree(), which
I think may have been leaking some items it intended to free.
Noted while investigating a complaint from Andreas Seltenreich.
Back-patch to all supported versions.
In the previous coding, if an aggregate's transition function returned an
expanded array, nodeAgg.c and nodeWindowAgg.c would always copy it and thus
force it into the flat representation. This led to ping-ponging between
flat and expanded formats, which costs a lot. For an aggregate using
array_append as transition function, I measured about a 15X slowdown
compared to the pre-9.5 code, when working on simple int[] arrays.
Of course, the old code was already O(N^2) in this usage due to copying
flat arrays all the time, but it wasn't quite this inefficient.
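For context, an aggregate of the affected kind can be declared as below
(a sketch; the aggregate name is arbitrary, and any aggregate whose
transition function is array_append behaves the same way):

CREATE AGGREGATE array_accum(anyelement) (
    sfunc = array_append,
    stype = anyarray,
    initcond = '{}'
);
-- before this fix, accumulating N rows this way was effectively O(N^2)
SELECT array_accum(x) FROM generate_series(1, 100000) AS g(x);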
To fix, teach nodeAgg.c and nodeWindowAgg.c to allow expanded transition
values without copying, so long as the transition function takes care to
return the transition value already properly parented under the aggcontext.
That puts a bit of extra responsibility on the transition function, but
doing it this way allows us to not need any extra logic in the fast path
of advance_transition_function (ie, with a pass-by-value transition value,
or with a modified-in-place pass-by-reference value). We already know
that that's a hot spot so I'm loath to add any cycles at all there. Also,
while only array_append currently knows how to follow this convention,
this solution allows other transition functions to opt-in without needing
to have a whitelist in the core aggregation code.
(The reason we would need a whitelist is that currently, if you pass a
R/W expanded-object pointer to an arbitrary function, it's allowed to do
anything with it including deleting it; that breaks the core agg code's
assumption that it should free discarded values. Returning a value under
aggcontext is the transition function's signal that it knows it is an
aggregate transition function and will play nice. Possibly the API rules
for expanded objects should be refined, but that would not be a
back-patchable change.)
With this fix, an aggregate using array_append is no longer O(N^2), so it's
much faster than pre-9.5 code rather than much slower. It's still a bit
slower than the bespoke infrastructure for array_agg, but the differential
seems to be only about 10%-20% rather than orders of magnitude.
Discussion: <6315.1477677885@sss.pgh.pa.us>
The error message when we couldn't determine the encoding from a locale
said to report a bug about that. That might have been appropriate when
this code was first added, but by now this works pretty solidly and any
encodings we don't recognize we probably just don't support. We still
print the warning, but no longer invite the bug report.
This causes the supplied function name to appear in any error message,
making the error message friendlier and relieving us from having to
provide our own in some cases.
This bug exists as far back as 9.0, when Hot Standby was introduced,
so back-patch to all supported branches.
Report and patch by Takayuki Tsunakawa, reviewed by Michael Paquier
and Kuntal Ghosh.
If a restartpoint flushed no dirty buffers, it could fail to update
the minimum recovery point, leading to a minimum recovery point prior
to the starting REDO location. perform_base_backup() would interpret
that as meaning that no WAL files at all needed to be included in the
backup, failing an internal sanity check. To fix, have restartpoints
always update the minimum recovery point to just after the checkpoint
record itself, so that the file (or files) containing the checkpoint
record will always be included in the backup.
Code by Amit Kapila, per a design suggestion by me, with some
additional work on the code comment by me. Test case by Michael
Paquier. Report by Kyotaro Horiguchi.
The code to change the deferrability properties of a foreign-key constraint
updated all the associated triggers to match; but a moment's examination of
the code that creates those triggers in the first place shows that only
some of them should track the constraint's deferrability properties. This
leads to odd failures in subsequent exercise of the foreign key, as the
triggers are fired at the wrong times. Fix that, and add a regression test
comparing the trigger properties produced by ALTER CONSTRAINT with those
you get by creating the constraint as-intended to begin with.
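The affected operation looks like this (table and constraint names are
hypothetical):

ALTER TABLE orders
    ALTER CONSTRAINT orders_customer_id_fkey DEFERRABLE INITIALLY DEFERRED;
-- only some of the FK's underlying triggers should track these
-- deferrability properties; updating all of them made the triggers
-- fire at the wrong times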
Per report from James Parks. Back-patch to 9.4 where this ALTER
functionality was introduced.
Report: <CAJ3Xv+jzJ8iNNUcp4RKW8b6Qp1xVAxHwSXVpjBNygjKxcVuE9w@mail.gmail.com>
I broke this in commit f3094920a. Apparently it's dead code anyway,
at least as far as our buildfarm is concerned (and the upstream IANA
code doesn't worry at all about symlink() not being present).
But as long as the rest of our code is willing to guard against not
having symlink(), this should too. Noted while investigating a
tangentially-related complaint from Sandeep Thakkar.
Back-patch to keep branches in sync.
Turns out that the output format of Python Decimal isn't totally platform-
independent either. There are other tests for multi-dimensional arrays,
so rather than try to fix this test case, just remove it.
Per buildfarm member prairiedog.
Instead of treating all python sequence types as array dimensions, except
for tuples and various kinds of strings, only treat Python lists as
dimensions. The PyBytes_Check() function used previously is only available
on Python 2.6 and newer, and it was a bit fiddly anyway. The list of
exceptions would require adjustment if Python got a new kind of a sequence
similar to bytes/unicodes/strings, so only checking for Lists seems more
future-proof. The documentation only mentioned using Lists, so this is
closer to what was documented, anyway.
This should fix the buildfarm failures on systems building with Python 2.5,
although I don't have Python 2.5 installed myself to test with.
The number of decimals printed for floats varied in this test case, as
noted by several buildfarm members. There's nothing special about floats
and arrays in the code being tested, so replace the floats with numerics to
make the output platform-independent.
That used to be accepted, so let's try to give a hint to users on why
their PL/python functions no longer work.
Reviewed by Pavel Stehule.
Discussion: <CAH38_tmbqwaUyKs9yagyRra=SMaT45FPBxk1pmTYcM0TyXGG7Q@mail.gmail.com>
Multi-dimensional arrays can now be used as arguments to a PL/python function
(used to throw an error), and they can be returned as nested Python lists.
This makes a backwards-incompatible change to the handling of composite
types in arrays. Previously, you could return an array of composite types
as "[[col1, col2], [col1, col2]]", but now that is interpreted as a two-
dimensional array. Composite types in arrays must now be returned as
Python tuples, not lists, to resolve the ambiguity. I.e. "[(col1, col2),
(col1, col2)]".
To avoid breaking backwards-compatibility, when not necessary, () is still
accepted for arrays at the top-level, but it is always treated as a
single-dimensional array. Likewise, [] is still accepted for composite types,
when they are not in an array. Update the documentation to recommend using []
for arrays, and () for composite types, with a mention that those other things
are also accepted in some contexts.
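A minimal sketch of the new convention (type and function names are
hypothetical; plpythonu is assumed as the language):

CREATE TYPE pair AS (col1 int, col2 text);
CREATE FUNCTION pairs() RETURNS pair[]
LANGUAGE plpythonu AS $$
    # an array of composites: a Python list of tuples
    return [(1, 'a'), (2, 'b')]
$$;
SELECT pairs();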
This needs to be mentioned in the release notes.
Alexey Grishchenko, Dave Cramer and me. Reviewed by Pavel Stehule.
Discussion: <CAH38_tmbqwaUyKs9yagyRra=SMaT45FPBxk1pmTYcM0TyXGG7Q@mail.gmail.com>
The ArchiveHandle structure contained the archive format version number
twice, once as a single field and once split into components. Simplify
that by just keeping the single field and adding some macros to extract
the components. Introduce some macros for composing version numbers, to
eliminate the repeated use of magic formulas. Drop the unused trailing
zero byte from the run-time composite version representation.
reviewed by Tom Lane
An oversight in setting the boundaries of known commit timestamps during
startup caused old commit timestamps to become inaccessible after a
server restart.
Author and reporter: Julien Rouhaud
Review, test code: Craig Ringer
INSERT ... ON CONFLICT (specifically ExecCheckHeapTupleVisible) contains
another example of this unsafe coding practice. It is much harder to get
a failure out of it than the case fixed in commit 6292c2339, because in
most scenarios any hint bits that could be set would have already been set
earlier in the command. However, Konstantin Knizhnik reported a failure
with a custom transaction manager, and it's clearly possible to get a
failure via a race condition in async-commit mode.
For lack of a reproducible example, no regression test case in this
commit.
I did some testing with Asserts added to tqual.c's functions, and can say
that running "make check-world" exposed these two bugs and no others.
The Asserts are messy enough that I've not added them to the code for now.
Report: <57EE93C8.8080504@postgrespro.ru>
Related-Discussion: <CAO3NbwOycQjt2Oqy2VW-eLTq2M5uGMyHnGm=RNga4mjqcYD7gQ@mail.gmail.com>
A transaction that conflicts against itself, for example
INSERT INTO t(pk) VALUES (1),(1) ON CONFLICT DO NOTHING;
should behave the same regardless of isolation level. It certainly
shouldn't throw a serialization error, as retrying will not help.
We got this wrong due to the ON CONFLICT logic not considering the case,
as reported by Jason Dusek.
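With the fix, the same kind of statement succeeds (doing nothing for the
duplicate) even under the strictest isolation level, for example:

CREATE TABLE t (pk int PRIMARY KEY);
BEGIN ISOLATION LEVEL SERIALIZABLE;
INSERT INTO t(pk) VALUES (1),(1) ON CONFLICT DO NOTHING;
COMMIT;
-- before the fix, this could throw a serialization error at higher
-- isolation levels, even though retrying could never help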
Core of this patch is by Peter Geoghegan (based on an earlier patch by
Thomas Munro), though I didn't take his proposed code refactoring for fear
that it might have unexpected side-effects. Test cases by Thomas Munro
and myself.
Report: <CAO3NbwOycQjt2Oqy2VW-eLTq2M5uGMyHnGm=RNga4mjqcYD7gQ@mail.gmail.com>
Related-Discussion: <57EE93C8.8080504@postgrespro.ru>
Despite the argumentation I wrote in commit 7a2fe85b0, it's unsafe to do
this, because in corner cases it's possible for HeapTupleSatisfiesSelf
to try to set hint bits on the target tuple; and at least since 8.2 we
have required the buffer content lock to be held while setting hint bits.
The added regression test exercises one such corner case. Unpatched, it
causes an assertion failure in assert-enabled builds, or otherwise would
cause a hint bit change in a buffer we don't hold lock on, which given
the right race condition could result in checksum failures or other data
consistency problems. The odds of a problem in the field are probably
pretty small, but nonetheless back-patch to all supported branches.
Report: <19391.1477244876@sss.pgh.pa.us>
Using the name fsync clashed with the #define we have on Windows that
redefines it to _commit. Naming it sync should remove that conflict.
Per all the Windows buildfarm members
This will write the received transaction log into a file called
pg_wal.tar(.gz) next to the other tarfiles instead of writing it to
base.tar. When using fetch mode, the transaction log is still written to
base.tar like before, and when used against a pre-10 server, the file
is named pg_xlog.tar.
To do this, implement a new concept of a "walmethod", which is
responsible for writing the WAL. Two implementations exist, one that
writes to a plain directory (which is also used by pg_receivexlog) and
one that writes to a tar file with optional compression.
Reviewed by Michael Paquier
Now that the upper planner uses paths, and now that we have proper hooks
to inject paths into the upper planning process, it's possible for
foreign data wrappers to arrange to push aggregates to the remote side
instead of fetching all of the rows and aggregating them locally. This
figures to be a massive win for performance, so teach postgres_fdw to
do it.
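A hedged way to observe the change (foreign table and column names are
hypothetical, and the exact plan output is not reproduced here):

EXPLAIN (VERBOSE, COSTS OFF)
SELECT count(*), sum(amount) FROM remote_orders;
-- with pushdown, the aggregate is evaluated by the remote server and
-- the foreign scan returns a single row, instead of shipping every row
-- to be aggregated locally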
Jeevan Chalke and Ashutosh Bapat. Reviewed by Ashutosh Bapat with
additional testing by Prabhat Sahu. Various mostly cosmetic changes
by me.
With track_io_timing = on, EXPLAIN (ANALYZE, BUFFERS) will emit fields
named like "I/O Read Time". The slash makes that invalid as an XML
element name, so that adding FORMAT XML would produce invalid XML.
We already have code in there to translate spaces to dashes, so let's
generalize that to convert anything that isn't a valid XML name character,
viz letters, digits, hyphens, underscores, and periods. We could just
reject slashes, which would run a bit faster. But the fact that this went
unnoticed for so long doesn't give me a warm feeling that we'd notice the
next creative violation, so let's make it a permanent fix.
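To illustrate the failure mode (the table name is hypothetical; the
sanitized element names follow the translation rule described above):

SET track_io_timing = on;
EXPLAIN (ANALYZE, BUFFERS, FORMAT XML) SELECT * FROM some_table;
-- the field reported as "I/O Read Time" must have its slash translated,
-- or the emitted XML is not well-formed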
Reported by Markus Winand, though this isn't his initial patch proposal.
Back-patch to 9.2 where track_io_timing was added. The problem is only
latent in 9.1, so I don't feel a need to fix it there.
Discussion: <E0BF6A45-68E8-45E6-918F-741FB332C6BB@winand.at>
This absorbs a fix for a symlink-manipulation bug in zic that was
introduced in 2016g. It probably isn't interesting for our use-case,
but I'm not quite sure, so let's update while we're at it.
"xlog" is not a particularly clear abbreviation for "write-ahead log",
and it sometimes confuses users into believing that the contents of the
"pg_xlog" directory are not critical data, leading to unpleasant
consequences. So, rename the directory to "pg_wal".
This patch modifies pg_upgrade and pg_basebackup to understand both
the old and new directory layouts; the former is necessary given the
purpose of the tool, while the latter merely avoids an unnecessary
backward-compatibility break.
We may wish to consider renaming other programs, switches, and
functions which still use the old "xlog" naming to also refer to
"wal". However, that's still under discussion, so let's do just this
much for now.
Discussion: CAB7nPqTeC-8+zux8_-4ZD46V7YPwooeFxgndfsq5Rg8ibLVm1A@mail.gmail.com
Michael Paquier
This is mostly to absorb some corner-case fixes in zic for year-2037
timestamps. The other changes that have been made are unlikely to affect
our usage, but nonetheless we may as well take 'em.
IANA got rid of the really silly "abbreviation" and replaced it with one
that's only moderately silly. But it's still pointless, so keep on not
showing it.
DST law changes in Turkey. Historical corrections for America/Los_Angeles,
Europe/Kirov, Europe/Moscow, Europe/Samara, and Europe/Ulyanovsk.
Rename Asia/Rangoon to Asia/Yangon, with a backward compatibility link.
The IANA crew continue their campaign to replace invented time zone
abbreviations with numeric GMT offsets. This update changes numerous zones
in Antarctica and the former Soviet Union, for instance Antarctica/Casey
now reports "+08" not "AWST" in the pg_timezone_names view. I kept these
abbreviations in the tznames/ data files, however, so that we will still
accept them for input. (We may want to start trimming those files someday,
but today is not that day.)
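For example (the abbreviation shown depends on the installed tzdata
version):

SELECT name, abbrev FROM pg_timezone_names
WHERE name = 'Antarctica/Casey';
-- now reports the numeric offset "+08" rather than the invented "AWST"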
An exception is that since IANA no longer claims that "AMT" is in use
in Armenia for GMT+4, I replaced it in the Default file with GMT-4,
corresponding to Amazon Time which is in use in South America. It may be
that that meaning is also invented and IANA will drop it in a future
update; but for now, it seems silly to give pride of place to a meaning
not traceable to IANA over one that is.
Switch TAP tests to use the new wait mode of pg_ctl promote. This
allows avoiding extra logic with poll_query_until() to be sure that a
promoted standby is ready for read-write queries.
From: Michael Paquier <michael.paquier@gmail.com>
--noclean and --nosync were the only options spelled without a hyphen,
so change this for consistency with other options. The options in
pg_basebackup have not been in a release, so we just rename them. For
initdb, we retain the old variants.
Vik Fearing and me
When a relation is truncated, it is important that the FSM is truncated as
well. Otherwise, after recovery, the FSM can return a page that has been
truncated away, leading to errors like:
ERROR: could not read block 28991 in file "base/16390/572026": read only 0
of 8192 bytes
We were using MarkBufferDirtyHint() to dirty the buffer holding the last
remaining page of the FSM, but during recovery, that might in fact not
dirty the page, and the FSM update might be lost.
To fix, use the stronger MarkBufferDirty() function. MarkBufferDirty()
requires us to do WAL-logging ourselves, to protect from a torn page, if
checksumming is enabled.
Also fix an oversight in visibilitymap_truncate: it also needs to WAL-log
when checksumming is enabled.
Analysis by Pavan Deolasee.
Discussion: <CABOikdNr5vKucqyZH9s1Mh0XebLs_jRhKv6eJfNnD2wxTn=_9A@mail.gmail.com>
On my system, this improves coverage for src/backend/access/hash from
61.3% of lines to 88.2% of lines, and from 83.5% of functions to 97.5%
of functions, which is pretty good for 36 lines of tests.
Mithun Cy, reviewed by Amit Kapila and Álvaro Herrera
CommandId is declared as uint32, and values up to 4G are indeed legal.
cidout() handles them properly by treating the value as unsigned int.
But cidin() was just using atoi(), which has platform-dependent behavior
for values outside the range of signed int, as reported by Bart Lengkeek
in bug #14379. Use strtoul() instead, as xidin() does.
In passing, make some purely cosmetic changes to make xidin/xidout
look more like cidin/cidout; the former didn't have a monopoly on
best practice IMO.
Neither xidin nor cidin make any attempt to throw error for invalid input.
I didn't change that here, and am not sure it's worth worrying about
since neither is really a user-facing type. The point is just to ensure
that indubitably-valid inputs work as expected.
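For instance, a command ID above the signed-int range should now
round-trip the same way on every platform (a sketch):

SELECT '4000000000'::cid;
-- atoi() gave platform-dependent results for values past INT_MAX;
-- strtoul() accepts the full uint32 range, matching cidout()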
It's been like this for a long time, so back-patch to all supported
branches.
Report: <20161018152550.1413.6439@wrigleys.postgresql.org>
This reverts commit 9e083fd468. That was a
few bricks shy of a load:
* Query cancel stopped working
* Buildfarm member pademelon stopped working, because the box doesn't have
/dev/urandom nor /dev/random.
This clearly needs some more discussion, and a quite different patch, so
revert for now.
This value might not be to everyone's taste; in particular, some
people might prefer %t to %m, and others may want %u, %d, or other
fields. However, it's a vast improvement on the old default of ''.
Christoph Berg
Have tuplesort_gettupleslot() copy the contents of its current table slot
as needed. This is based on an approach taken by tuplestore_gettupleslot().
In the future, tuplesort_gettupleslot() may also be taught to avoid copying
the tuple where caller can determine that that is safe (the
tuplestore_gettupleslot() interface already offers this option to callers).
Patch by Peter Geoghegan. Fixes bug #14344, reported by Regina Obe.
Report: <20160929035538.20224.39628@wrigleys.postgresql.org>
Backpatch-through: 9.6
This adds a new routine, pg_strong_random() for generating random bytes,
for use in both frontend and backend. At the moment, it's only used in
the backend, but the upcoming SCRAM authentication patches need strong
random numbers in libpq as well.
pg_strong_random() is based on, and replaces, the existing implementation
in pgcrypto. It can acquire strong random numbers from a number of sources,
depending on what's available:
- OpenSSL RAND_bytes(), if built with OpenSSL
- On Windows, the native cryptographic functions are used
- /dev/urandom
- /dev/random
Original patch by Magnus Hagander, with further work by Michael Paquier
and me.
Discussion: <CAB7nPqRy3krN8quR9XujMVVHYtXJ0_60nqgVc6oUk8ygyVkZsA@mail.gmail.com>
The more efficient hashtable speeds up hash-aggregations with more than
a few hundred groups significantly. Improvements of over 120% have been
measured.
Due to the different hash table, queries whose result order is not
fully determined (e.g. GROUP BY without ORDER BY) may change their
result order.
The conversion is largely straight-forward, except that, due to the
static element types of simplehash.h type hashes, the additional data
some users store in elements (e.g. the per-group working data for hash
aggregates) is now stored in TupleHashEntryData->additional. The
meaning of BuildTupleHashTable's entrysize (renamed to additionalsize)
has been changed to only be about the additionally stored size. That
size is only used for the initial sizing of the hash-table.
Reviewed-By: Tomas Vondra
Discussion: <20160727004333.r3e2k2y6fvk2ntup@alap3.anarazel.de>
Use the new simplehash.h to speed up tidbitmap.c uses. For bitmap scan
heavy queries speedups of over 100% have been measured. Both lossy and
exact scans benefit, but the wins are bigger for mostly exact scans.
The conversion is mostly trivial, except that tbm_lossify() now restarts
lossifying at the point it previously stopped. Otherwise the hash table
becomes unbalanced because the scan is done in hash order, leaving the
end of the hashtable more densely filled than the beginning. That caused
performance issues with dynahash as well, but due to the open chaining
they were less pronounced than with the linear addressing from
simplehash.h.
Reviewed-By: Tomas Vondra
Discussion: <20160727004333.r3e2k2y6fvk2ntup@alap3.anarazel.de>
dynahash.c hash tables aren't quite fast enough for some
use-cases. There are several reasons for lacking performance:
- the use of chaining for collision handling makes them cache
inefficient; that's especially an issue when the tables get bigger.
- as the element sizes for dynahash are only determined at runtime,
offset computations are somewhat expensive
- hash and element comparisons are indirect function calls, causing
unnecessary pipeline stalls
- its two-level structure has some benefits (somewhat natural
partitioning), but increases the number of indirections
To fix several of these, the hash tables have to be adjusted to the
individual use-case at compile-time. C unfortunately doesn't provide a
good way to do compile-time code generation (like e.g. C++'s templates for
all their weaknesses do). Thus the somewhat ugly approach taken here is
to allow for code generation using a macro-templatized header file,
which generates functions and types based on a prefix and other
parameters.
Later patches use this infrastructure to use such hash tables for
tidbitmap.c (bitmap scans) and execGrouping.c (hash aggregation,
...). In queries where these use up a large fraction of the time, this
has been measured to lead to performance improvements of over 100%.
There are other cases where this could be useful (e.g. catcache.c).
The hash table design chosen is a variant of linear open-addressing. The
biggest disadvantages of simple linear addressing schemes are highly
variable lookup times due to clustering, and deletions leaving a lot of
tombstones around. To address these issues a variant of "robin hood"
hashing is employed. Robin hood hashing optimizes chaining lengths by
moving elements close to their optimal bucket ("rich" elements), out of
the way if a to-be-inserted element is further away from its optimal
position (i.e. it's "poor"). While that can make insertions slower, the
average lookup performance is a lot better, and higher fill factors can
be used in a still performant manner. To avoid tombstones - which
normally solve the issue that a deleted node's presence is relevant to
determine whether a lookup needs to continue looking or is done -
buckets following a deleted element are shifted backwards, unless
they're empty or already at their optimal position.
There are further possible improvements that could be made to this
implementation, amongst others:
- Use distance as a termination criterion during searches. This is
generally a good idea, but I've been able to see the overhead of
distance calculations in some cases.
- Consider combining the 'empty' status into the hashvalue, and enforce
storing the hashvalue. That could, in some cases, increase memory
density and remove a few instructions.
- Experiment further with the, very conservatively chosen, fillfactor.
- Make the maximum size of the hashtable configurable, to allow storing very
very large tables. That'd require 64bit hash values to be more common
than now, though.
- some smaller memcpy calls could be optimized to copy larger chunks
But since the new implementation is already considerably faster than
dynahash, it seems sensible to start using it.
Reviewed-By: Tomas Vondra
Discussion: <20160727004333.r3e2k2y6fvk2ntup@alap3.anarazel.de>
These are useful for very hot code paths. Because it's easy to guess
wrongly about likelihood, and because such likelihoods change over time,
they should be used sparingly.
Past tests have shown it'd be a good idea to use them in some places,
e.g. in error checks around ereports that ERROR out, but that's work for
later.
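Roughly, the hints boil down to the GCC builtin below (a sketch; the exact
definitions in c.h may differ in detail):

    #if defined(__GNUC__)
    #define likely(x)   __builtin_expect((x) != 0, 1)
    #define unlikely(x) __builtin_expect((x) != 0, 0)
    #else
    #define likely(x)   ((x) != 0)
    #define unlikely(x) ((x) != 0)
    #endif

    /* Typical use: tell the compiler the failure path is cold. */
    if (unlikely(buf == NULL))
        return -1;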
Discussion: <20160727004333.r3e2k2y6fvk2ntup@alap3.anarazel.de>
bitshiftright() and bitshiftleft() would recursively call each other
infinitely if the user passed INT_MIN for the shift amount, due to integer
overflow in negating the shift amount. To fix, clamp to -VARBITMAXLEN.
That doesn't change the results since any shift distance larger than the
input bit string's length produces an all-zeroes result.
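The clamp works because the shift distance is bounded anyway; a standalone
sketch of the idea (the VARBITMAXLEN value below is illustrative, not the
real definition from varbit.h):

    #include <limits.h>

    #define VARBITMAXLEN  (INT_MAX - 8)     /* illustrative bound only */

    /* Clamp before negating: -INT_MIN would overflow, but any shift wider
     * than the input bit string produces all zeroes regardless. */
    static int
    clamp_shift_amount(int shft)
    {
        if (shft < -VARBITMAXLEN)
            shft = -VARBITMAXLEN;
        return shft;
    }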
Also fix some places that seemed inadequately paranoid about input typmods
exceeding VARBITMAXLEN. While a typmod accepted by anybit_typmodin() will
certainly be much less than that, at least some of these spots are
reachable with user-chosen integer values.
Andreas Seltenreich and Tom Lane
Discussion: <87d1j2zqtz.fsf@credativ.de>
pgstat_twophase_postcommit is supposed to duplicate the math in
AtEOXact_PgStat, but it had missed out the bit about clearing
t_delta_live_tuples/t_delta_dead_tuples for a TRUNCATE.
It's harder than you might think to replicate the issue here, because
those counters would only be nonzero when a previous transaction in
the same backend had added/deleted tuples in the truncated table,
and those counts hadn't been sent to the stats collector yet.
Evident oversight in commit d42358efb. I've not added a regression
test for this; we tried to add one in d42358efb, and had to revert it
because it was too timing-sensitive for the buildfarm.
Back-patch to 9.5 where d42358efb came in.
Stas Kelvich
Discussion: <EB57BF68-C06D-4737-BDDC-4BA778F4E62B@postgrespro.ru>
It's not good for an inherited child constraint to be marked connoinherit;
that would result in the constraint not propagating to grandchild tables,
if any are created later. The code mostly prevented this from happening
but there was one case that was missed.
This is somewhat related to commit e55a946a8, which also tightened checks
on constraint merging. Hence, back-patch to 9.2 like that one. This isn't
so much because there's a concrete feature-related reason to stop there,
as to avoid having more distinct behaviors than we have to in this area.
Amit Langote
Discussion: <b28ee774-7009-313d-dd55-5bdd81242c41@lab.ntt.co.jp>
I'm not sure if this provision for "pg_backup" behaving a bit differently
from "pg_dump" ever did anything useful in a released version. But it's
definitely dead code now.
Michael Paquier
Even if Linux's mmap() is okay with a partial-hugepage request, munmap()
is not, as reported by Chris Richards. Therefore it behooves us to try
a bit harder to find out the actual hugepage size, instead of assuming
that we can skate by with a guess.
For the moment, just look into /proc/meminfo to find out the default
hugepage size, and use that. Later, on kernels that support requests
for nondefault sizes, we might try to consider other alternatives.
But that smells more like a new feature than a bug fix, especially if
we want to provide any way for the DBA to control it, so leave it for
another day.
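For reference, a minimal standalone sketch of the /proc/meminfo probe (not
the actual sysv_shmem.c code):

    #include <stdio.h>

    /* Return the default huge page size in kB, or 0 if it can't be found. */
    static long
    default_hugepage_size_kb(void)
    {
        FILE   *f = fopen("/proc/meminfo", "r");
        char    line[128];
        long    kb = 0;

        if (f == NULL)
            return 0;
        while (fgets(line, sizeof(line), f))
        {
            if (sscanf(line, "Hugepagesize: %ld kB", &kb) == 1)
                break;
        }
        fclose(f);
        return kb;
    }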
I set this up to allow easy addition of platform-specific code for
non-Linux platforms, if needed; but right now there are no reports
suggesting that we need to work harder on other platforms.
Back-patch to 9.4 where hugepage support was introduced.
Discussion: <31056.1476303954@sss.pgh.pa.us>
Fix detaching of the mmap'd segment to have its own on_shmem_exit callback,
rather than piggybacking on the one for detaching from the SysV segment.
That was confusing, and given the distance between the two attach calls,
it was trouble waiting to happen.
Make the detaching calls idempotent by clearing AnonymousShmem to show
we've already unmapped. I spent quite a bit of time yesterday trying
to find a path that would allow the munmap()'s to be done twice, and
while I did not succeed, it seems silly that there's even a question.
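The idempotency pattern, sketched (variable names per this message; not the
verbatim sysv_shmem.c code):

    #include <stddef.h>
    #include <sys/mman.h>

    static void *AnonymousShmem = NULL;
    static size_t AnonymousShmemSize = 0;

    /* Safe to run more than once: after the first munmap() the pointer is
     * cleared, so any later call is a no-op. */
    static void
    detach_anonymous_shmem(void)
    {
        if (AnonymousShmem != NULL)
        {
            munmap(AnonymousShmem, AnonymousShmemSize);
            AnonymousShmem = NULL;
        }
    }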
Make the #ifdef logic less confusing by separating "do we want to use
anonymous shmem" from EXEC_BACKEND. Even though there's no current
scenario where those conditions are different, it is not helpful for
different places in the same file to be testing EXEC_BACKEND for what
are fundamentally different reasons.
Don't do on_exit_reset() in StartBackgroundWorker(). At best that's
useless (InitPostmasterChild would have done it already) and at worst
it could zap some callback that's unrelated to shared memory.
Improve comments, and simplify the huge_pages enablement logic slightly.
Back-patch to 9.4 where hugepage support was introduced.
Arguably this should go into 9.3 as well, but the code looks
significantly different there, and I doubt it's worth the
trouble of adapting the patch given I can't show a live bug.
The expected results in commit b4fc64578 seem to have been generated
in a non-C locale, which just points up the fact that the ORDER BY
clause was locale-sensitive.
Per buildfarm.
Commit 0b62fd036 did a fairly sloppy job of refactoring setPath()
to support jsonb_insert() along with jsonb_set(). In its defense,
though, there was no regression test case exercising the case of
replacing an existing element in a jsonb array.
Per bug #14366 from Peng Sun. Back-patch to 9.6 where bug was introduced.
Report: <20161012065349.1412.47858@wrigleys.postgresql.org>
Similar to 0137caf273, this makes contrib and pl tests less dependent on
hash-table order. After this commit, at least some order affecting
changes to execGrouping.c don't result in regression test changes
anymore.
Previously GRANT order on databases was not well defined, due to the use
of EXCEPT without an ORDER BY. Add an ORDER BY, adapt test output.
I don't, at the moment, see reason to backpatch this.
This turns out not to be as harmless as I thought: MSVC will complain
if it sees an "extern" declaration without PGDLLEXPORT and then one with.
(Seems fairly silly, given that this can be changed after the fact by the
linker, but there you have it.) Therefore, contrib modules that have
externs for V1 functions in header files are falling over in the
buildfarm, since none of those externs are marked PGDLLEXPORT.
We might or might not conclude that we're willing to plaster those
declarations with PGDLLEXPORT in HEAD, but in any case there's no way we're
going to ship this change in the back branches. Third-party authors would
not thank us for breaking their code in a minor release. Hence, revert
the addition of PGDLLEXPORT (but let's keep the extra info in the comment).
If we do the other changes we can revert this commit in HEAD.
Per buildfarm.
Commit 64f3524e2 removed the stanza of code that examined these values.
I failed to notice they were unnecessary because my compiler didn't
warn about the un-read variables. Noted by Peter Eisentraut.
These functions were originally added in commit d8cedf67a to support
use of int2vector columns as catcache lookup keys. However, there are
no catcaches that use such columns. (Indeed I now think it must always
have been dead code: a catcache with such a key column would need an
underlying unique index on the column, but we've never had an int2vector
btree opclass.)
Getting rid of the int2vector-specific operator and function does not
lose any functionality, because operations on int2vectors will now fall
back to the generic anyarray support. This avoids a wart that a btree
index on an int2vector column (made using anyarray_ops) would fail to
match equality searches, because int2vectoreq wasn't a member of the
opclass. We don't really care much about that, since int2vector is not
meant as a type for users to use, but it's silly to have extra code and
less functionality.
If we ever do want a catcache to be indexed by an int2vector column,
we'd need to put back full btree and hash opclasses for int2vector,
comparable to the support for oidvector. (The anyarray code can't be
used at such a low level, because it needs to do catcache lookups.)
But we'll deal with that if/when the need arises.
Also worth noting is that removal of the hash int2vector_ops opclass will
break any user-created hash indexes on int2vector columns. While hash
anyarray_ops would serve the same purpose, it would probably not compute
the same hash values and thus wouldn't be on-disk-compatible. Given that
int2vector isn't a user-facing type and we're planning other incompatible
changes in hash indexes for v10 anyway, this doesn't seem like something
to worry about, but it's probably worth mentioning here.
Amit Langote
Discussion: <d9bb74f8-b194-7307-9ebd-90645d377e45@lab.ntt.co.jp>
This isn't really necessary for our own code, because we use a .DEF file
in MSVC builds (see gendef.pl), or --export-all-symbols in MinGW and
Cygwin builds, to ensure that all global symbols in loadable modules
will be exported on Windows. However, third-party authors might use
different build processes that need this marker, and it's harmless
enough for our own builds.
To some extent, this is an oversight in commit e7128e8db, so back-patch
to 9.4 where that was added.
Laurenz Albe
Discussion: <A737B7A37273E048B164557ADEF4A58B539300BD@ntex2010a.host.magwien.gv.at>
The need for dumping from such ancient servers has decreased to about nil
in the field, so let's remove all the code that catered to it. Aside
from removing a lot of boilerplate variant queries, this allows us to not
have to cope with servers that don't have (a) schemas or (b) pg_depend.
That means we can get rid of assorted squishy code around that. There
may be some nonobvious additional simplifications possible, but this patch
already removes about 1500 lines of code.
I did not remove the ability for pg_restore to read custom-format archives
generated by these old versions (and light testing says that that does
still work). If you have an old server, you probably also have a pg_dump
that will work with it; but if an old custom-format backup file is all
you have, the ability to read it still matters.
It'd be possible at this point to remove fmtQualifiedId()'s version
argument, but I refrained since that would affect code outside pg_dump.
Discussion: <2661.1475849167@sss.pgh.pa.us>
Pass the buffer size as argument to LogicalTapeRewindForRead, rather than
setting it earlier with the separate LogicalTapeAssignReadBufferSize call.
This way, the buffer size is set closer to where it's actually used, which
makes the code easier to understand.
This makes the calculation for how much memory to use for the buffers less
precise. We now use the same amount of memory for every tape, rounded down
to the nearest BLCKSZ boundary, instead of using one more block for some
tapes to get the total up to the exact amount of memory available. That should
be OK; merging isn't too sensitive to the exact amount of memory used.
Reviewed by Peter Geoghegan
Discussion: <0f607c4b-df23-353e-bf56-c0389d28495f@iki.fi>
While this isn't a lot of code, it's been essentially untestable for
a very long time, because libpq doesn't support anything older than
protocol 2.0, and has not since release 6.3. There's no reason to
believe any other client-side code still uses that protocol, either.
Discussion: <2661.1475849167@sss.pgh.pa.us>
SCO OpenServer and SCO UnixWare are more or less dead platforms.
We have never had a buildfarm member testing the "sco" port, and
the last "unixware" member was last heard from in 2012, so it's
fair to doubt that the code even compiles anymore on either one.
Remove both ports. We can always undo this if someone shows up
with an interest in maintaining and testing these platforms.
Discussion: <17177.1476136994@sss.pgh.pa.us>
Upcoming changes to the hash table code used, among others, for grouping
and set operations will change the output order for a few queries. To
make it less likely that actual bugs are hidden between regression test
ordering changes, and to make the tests robust against platform
dependent ordering, add ORDER BYs guaranteeing the output order.
As it's possible that some of the changes expose platform dependent
ordering, push this earlier, to let the buildfarm shake out potentially
unstable results.
Discussion: <20160727004333.r3e2k2y6fvk2ntup@alap3.anarazel.de>
Ordinarily there would not be an async result sitting around at this
point, but it appears that in corner cases there can be. Considering
all the work we're about to launch, it's hardly going to cost anything
noticeable to check.
It's been like this forever, so back-patch to all supported branches.
Report: <CAD-Qf1eLUtBOTPXyFQGW-4eEsop31tVVdZPu4kL9pbQ6tJPO8g@mail.gmail.com>
We've had support for using unnamed POSIX semaphores instead of System V
semaphores for quite some time, but it was not used by default on any
platform. Since many systems have rather small limits on the number of
SysV semaphores allowed, it seems desirable to switch to POSIX semaphores
where they're available and don't create performance or kernel resource
problems. Experimentation by me shows that unnamed POSIX semaphores
are at least as good as SysV semaphores on Linux, and we previously had
a report from Maksym Sobolyev that FreeBSD is significantly worse with
SysV semaphores than POSIX ones. So adjust those two platforms to use
unnamed POSIX semaphores, if configure can find the necessary library
functions. If this goes well, we may switch other platforms as well,
but it would be advisable to test them individually first.
It's not currently contemplated that we'd encourage users to select
a semaphore API for themselves, but anyone who wants to experiment
can add PREFERRED_SEMAPHORES=UNNAMED_POSIX (or NAMED_POSIX, or SYSV)
to their configure command line to do so.
I also tweaked configure to report which API it's selected, mainly
so that we can tell that from buildfarm reports.
I did not touch the user documentation's discussion about semaphores;
that will need some adjustment once the dust settles.
Discussion: <8536.1475704230@sss.pgh.pa.us>
The transfunction was told that its first argument and result were
of the window function output type, not the aggregate state type.
This'd only matter if the transfunction consults get_fn_expr_argtype,
which typically only polymorphic functions would do.
Although we have several regression tests around polymorphic aggs,
none of them detected this mistake --- in fact, they still didn't
fail when I injected the same mistake into nodeAgg.c. So add some
more tests covering both plain agg and window-function-agg cases.
Per report from Sebastian Luque. Back-patch to 9.6 where the error
was introduced (by sloppy refactoring in commit 804163bc2, looks like).
Report: <87int2qkat.fsf@gmail.com>
Historically, we've allowed users to add a CHECK constraint to a child
table and then add an identical CHECK constraint to the parent. This
results in "merging" the two constraints so that the pre-existing
child constraint ends up with both conislocal = true and coninhcount > 0.
However, if you tried to do it in the other order, you got a duplicate
constraint error. This is problematic for pg_dump, which needs to issue
separate ADD CONSTRAINT commands in some cases, but has no good way to
ensure that the constraints will be added in the required order.
And it's more than a bit arbitrary, too. The goal of complaining about
duplicated ADD CONSTRAINT commands can be served if we reject the case of
adding a constraint when the existing one already has conislocal = true;
but if it has conislocal = false, let's just make the ADD CONSTRAINT set
conislocal = true. In this way, either order of adding the constraints
has the same end result.
Another problem was that the code allowed creation of a parent constraint
marked convalidated that is merged with a child constraint that is
!convalidated. In this case, an inheritance scan of the parent table could
emit some rows violating the constraint condition, which would be an
unexpected result given the marking of the parent constraint as validated.
Hence, forbid merging of constraints in this case. (Note: valid child and
not-valid parent seems fine, so continue to allow that.)
Per report from Benedikt Grundmann. Back-patch to 9.2 where we introduced
possibly-not-valid check constraints. The second bug obviously doesn't
apply before that, and I think the first doesn't either, because pg_dump
only gets into this situation when dealing with not-valid constraints.
Report: <CADbMkNPT-Jz5PRSQ4RbUASYAjocV_KHUWapR%2Bg8fNvhUAyRpxA%40mail.gmail.com>
Discussion: <22108.1475874586@sss.pgh.pa.us>
The need for this was previously obscured even on picky platforms
by the hack we used to support direct cross-module references in
the transforms contrib modules. Now that that hack is gone, the
undefined symbol is exposed, as reported by Robert Haas.
Back-patch to 9.5 where we started to use -Wl,-undefined,dynamic_lookup.
I'm a bit surprised that the older branches don't seem to contain
any gettext references in this module, but since they don't fail
at build time, they must not. (We might be able to get away with
leaving this alone in 9.5/9.6, but I think it's cleaner if the
reference gets resolved at link time.)
Report: <CA+TgmoaHJKU5kcWZcYduATYVT7Mnx+8jUnycaYYL7OtCwCigug@mail.gmail.com>
I somehow had assumed that, in the spinlock-based (in turn possibly using
semaphores) fallback atomics implementation, 32-bit writes could be
done without a lock. As far as the write itself goes that's correct, since
postgres supports only platforms with single-copy atomicity for aligned
32-bit writes. But writing without holding the spinlock breaks
read-modify-write operations like pg_atomic_compare_exchange_u32(),
since they'll potentially "miss" a concurrent write, which can't happen
in actual hardware implementations.
In 9.6+ when using the fallback atomics implementation this could lead
to buffer header locks not being properly marked as released, and
potentially some related state corruption. I don't see a related danger
in 9.5 (earliest release with the API), because pg_atomic_write_u32()
wasn't used in a concurrent manner there.
The state variable of local buffers was, before this change, manipulated
using pg_atomic_write_u32() to avoid unnecessary synchronization
overhead. As that would no longer be cheap, introduce and use
pg_atomic_unlocked_write_u32(), which skips the lock and therefore must
not be mixed with concurrent RMW operations on the same variable.
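A hedged sketch of the resulting usage convention (the variables are
illustrative; the function names come from the atomics API):

    /* Shared state that participates in compare-and-exchange must use the
     * locking write, so the fallback spinlock serializes it with RMW ops. */
    pg_atomic_write_u32(&shared_bufhdr_state, new_state);

    /* Backend-local state never races with concurrent RMW operations, so
     * the cheaper unlocked variant is sufficient there. */
    pg_atomic_unlocked_write_u32(&local_bufhdr_state, new_state);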
This bug only caused issues when postgres is compiled on platforms
without atomics support (i.e. no common new platform), or when compiled
with --disable-atomics, which explains why this wasn't noticed in
testing.
Reported-By: Tom Lane
Discussion: <14947.1475690465@sss.pgh.pa.us>
Backpatch: 9.5-, where the atomic operations API was introduced.
0xc19c is not a valid UTF-8 byte sequence. It doesn't do any harm, AFAICS,
but it's surely not intentional. No backpatching though, just to be sure.
In passing, also add a file header comment to the file, like the
UCS_to_SJIS.pl script would produce. (The file was originally created with
UCS_to_SJIS.pl, but has been modified by hand since then. That's
questionable, but I'll leave fixing that for later.)
Kyotaro Horiguchi
Discussion: <20160907.155050.233844095.horiguchi.kyotaro@lab.ntt.co.jp>
Recent Perl and/or new Linux distributions are starting to remove "." from
the @INC list by default. That breaks pg_rewind and ssl test suites, which
use helper perl modules that reside in the same directory. To fix, add the
current source directory explicitly to prove's include dir.
The vcregress.pl script probably also needs something like this, but I
wasn't able to remove '.' from @INC on Windows to test this, and don't want
to try doing that blindly.
Discussion: <20160908204529.flg6nivjuwp5vaoy@alap3.anarazel.de>
getBlobs' queries for pre-9.0 servers were broken in two ways:
the 7.x/8.x query uses DISTINCT so it can't have unspecified-type
NULLs in the target list, and both that query and the 7.0 one
failed to provide the correct output column labels, so that the
subsequent code to extract data from the PGresult would fail.
Back-patch to 9.6 where the breakage was introduced (by commit 23f34fa4b).
Amit Langote and Tom Lane
Discussion: <0a3e7a0e-37bd-8427-29bd-958135862f0a@lab.ntt.co.jp>
They are supposed to be mutually exclusive, but there was no check for
that.
Michael Banck
Discussion: <20161007103414.GD12247@nighthawk.caipicrew.dd-dns.de>
Leaving the error in the error queue used to be harmless, because the
X509_STORE_load_locations() call used to be the last step in
initialize_SSL(), and we would clear the queue before the next
SSL_connect() call. But the previous commit moved things around. The symptom
was that if a CRL file was not found, and one of the subsequent
initialization steps, like loading the client certificate or private key,
failed, we would incorrectly print the "no such file" error message from
the earlier X509_STORE_load_locations() call as the reason.
Backpatch to all supported versions, like the previous patch.
There were several issues with the old coding:
1. There was a race condition, if two threads opened a connection at the
same time. We used a mutex around SSL_CTX_* calls, but that was not
enough, e.g. if one thread called SSL_CTX_load_verify_locations() with one
path, and another thread then set it with a different path, before the first
thread got to establish the connection.
2. Opening two different connections, with different sslrootcert settings,
seemed to fail outright with "SSL error: block type is not 01". Not sure
why.
3. We created the SSL object, before calling SSL_CTX_load_verify_locations
and SSL_CTX_use_certificate_chain_file on the SSL context. That was
wrong, because the options set on the SSL context are propagated to the
SSL object, when the SSL object is created. If they are set after the
SSL object has already been created, they won't take effect until the
next connection. (This is bug #14329)
At least some of these could've been fixed while still using a shared
context, but it would've been more complicated and error-prone. To keep
things simple, let's just use a separate SSL context for each connection,
and accept the overhead.
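In outline, the per-connection approach looks like this (OpenSSL calls only;
error handling, libpq specifics, and the certificate path variables are
illustrative):

    #include <openssl/ssl.h>

    /* One context per connection: configure it completely first, ... */
    SSL_CTX *ctx = SSL_CTX_new(SSLv23_method());

    SSL_CTX_load_verify_locations(ctx, sslrootcert_path, NULL);
    SSL_CTX_use_certificate_chain_file(ctx, sslcert_path);

    /* ... then create the SSL object, which inherits those settings. */
    SSL *ssl = SSL_new(ctx);

    /* The SSL object keeps its own reference, so the context handle can be
     * released here; proceed with SSL_set_fd() and SSL_connect(). */
    SSL_CTX_free(ctx);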
Backpatch to all supported versions.
Report, analysis and test case by Kacper Zuk.
Discussion: <20160920101051.1355.79453@wrigleys.postgresql.org>
If you point pg_rewind to a server that is using synchronous replication,
with "pg_rewind --source-server=...", and the replication is not working
for some reason, pg_rewind will get stuck because it creates a temporary
table, which needs to be replicated. You could call broken replication a
pilot error, but pg_rewind is often used in special circumstances, when
there are changes to the replication setup.
We don't do any "real" updates, and we don't care about fsyncing or
replicating the operations on the temporary tables, so fix that by
setting synchronous_commit off.
Michael Banck, Michael Paquier. Backpatch to 9.5, where pg_rewind was
introduced.
Discussion: <20161005143938.GA12247@nighthawk.caipicrew.dd-dns.de>
LogicalTapeRewind() should not allocate a large read buffer if the tape
is completely empty. The calling code relies on that for its
calculation of how much memory to allocate for the read buffers. That
led to massive overallocation of memory if maxTapes was high, but
only a few tapes were actually used.
Reported by Tomas Vondra
Discussion: <7303da46-daf7-9c68-3cc1-9f83235cf37e@2ndquadrant.com>
We don't need this anymore, and it prevents build-time error checking
that's usually good to have, so remove it. Undoes one change of commit
cac765820.
Unfortunately, it's much harder to get a similar effect on other common
platforms, because we don't want the linker to throw errors for symbols
that will be resolved in the core backend. Only macOS and AIX expect the
core backend executable to be available while linking loadable modules,
so only these platforms can usefully throw errors for unresolved symbols
at link time.
Discussion: <2652.1475512158@sss.pgh.pa.us>
Windows apparently has a constant named WAIT_TIMEOUT, and some of these
other names are pretty generic, too. Insert "PG_" at the front of each
name in order to disambiguate.
Michael Paquier
Just turning the crank on the project started in commit d51924be8.
These cases turn out to be exact subsets of the boilerplate needed
for hstore_plpython.
Discussion: <2652.1475512158@sss.pgh.pa.us>
WaitLatch, WaitLatchOrSocket, and WaitEventSetWait now take an
additional wait_event_info parameter; legal values are defined in
pgstat.h. This makes it possible to uniquely identify every point in
the core code where we are waiting for a latch; extensions can pass
WAIT_EXTENSION.
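For example, an extension's wait loop under the new signature might look like
this (the flags and timeout are illustrative):

    int     rc;

    /* Sleep up to 1s or until the latch is set; the last argument labels
     * the wait so it can be reported in pg_stat_activity. */
    rc = WaitLatch(MyLatch,
                   WL_LATCH_SET | WL_TIMEOUT | WL_POSTMASTER_DEATH,
                   1000L,
                   WAIT_EXTENSION);
    if (rc & WL_LATCH_SET)
        ResetLatch(MyLatch);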
Because latches were the major wait primitive not previously covered
by this patch, it is now possible to see information in
pg_stat_activity on a large number of important wait events not
previously addressed, such as ClientRead, ClientWrite, and SyncRep.
Unfortunately, many of the wait events added by this patch will fail
to appear in pg_stat_activity because they're only used in background
processes which don't currently appear in pg_stat_activity. We should
fix this either by creating a separate view for such information, or
else by deciding to include them in pg_stat_activity after all.
Michael Paquier and Robert Haas, reviewed by Alexander Korotkov and
Thomas Munro.
Before initializing iteration over a subtransaction's changes, the last
few changes were not spilled to disk. That's correct if the transaction
didn't spill to disk, but otherwise... This bug can lead to missed or
misordered subtransaction contents when they were spilled to disk.
Move spilling of the remaining in-memory changes to
ReorderBufferIterTXNInit(), where it can easily be applied to the top
transaction and, if present, subtransactions.
Since this code had too many bugs already, noticeably increase test
coverage.
Fixes: #14319
Reported-By: Huan Ruan
Discussion: <20160909012610.20024.58169@wrigleys.postgresql.org>
Backpatch: 9.4-, where logical decoding was added
Previously, on most platforms, we allowed hstore_plpython's references
to hstore and plpython to be unresolved symbols at link time, trusting
the dynamic linker to resolve them when the module is loaded. This
has a number of problems, the worst being that the dynamic linker
does not know where the references come from and can do nothing but
fail if those other modules haven't been loaded. We've more or less
gotten away with that for the limited use-case of datatype transform
modules, but even there, it requires some awkward hacks, most recently
commit 83c249200.
Instead, let's not treat these references as linker-resolvable at all,
but use function pointers that are manually filled in by the module's
_PG_init function. There are few enough contact points that this
doesn't seem unmaintainable, at least for these use-cases. (Note that
the same technique wouldn't work at all for decoupling from libpython
itself, but fortunately that's just a standard shared library and can
be linked to normally.)
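A hedged sketch of the technique (hstoreUpgrade is a real hstore entry point,
used here only for illustration; the opaque typedef stands in for the real
header):

    #include "postgres.h"
    #include "fmgr.h"

    typedef struct HStore HStore;   /* stand-in for hstore.h's definition */

    /* Filled in at module load; all other code calls through the pointer. */
    typedef HStore *(*hstoreUpgrade_t) (Datum orig);
    static hstoreUpgrade_t hstoreUpgrade_p;

    void
    _PG_init(void)
    {
        /* Loads hstore if necessary and returns the symbol's address. */
        hstoreUpgrade_p = (hstoreUpgrade_t)
            load_external_function("$libdir/hstore", "hstoreUpgrade",
                                   true, NULL);
    }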
This is an initial patch that just converts hstore_plpython. If the
buildfarm doesn't find any fatal problems, I'll work on the other
transform modules soon.
Tom Lane, per an idea of Andres Freund's.
Discussion: <2652.1475512158@sss.pgh.pa.us>
Commit 88e982302 invented GUC_UNIT_XSEGS for min_wal_size and max_wal_size,
but neglected to make it display sensibly in pg_settings.unit (by adding a
case to the switch in GetConfigOptionByNum). Fix that, and adjust said
switch to throw a run-time error the next time somebody forgets.
In passing, avoid using a static buffer for the output string --- the rest
of this function pstrdup's from a local buffer, and I see no very good
reason why the units code should do it differently and less safely.
Per report from Otar Shavadze. Back-patch to 9.5 where the new unit type
was added.
Report: <CAG-jOyA=iNFhN+yB4vfvqh688B7Tr5SArbYcFUAjZi=0Exp-Lg@mail.gmail.com>
Attempting to COPY a subset of columns from a table with RLS enabled
would fail due to an invalid query being constructed (using a single
ColumnRef with the list of fields to extract in 'fields', but that's for
the different levels of an indirection for a single column, not for
specifying multiple columns).
Correct this by building a ColumnRef and then a ResTarget for each column
being requested and then adding those to the targetList for the select
query. Include regression tests to hopefully catch if this is broken
again in the future.
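In outline (assuming attlist is a List of String nodes naming the requested
columns; a sketch of the construction, not the verbatim copy.c change):

    #include "nodes/makefuncs.h"
    #include "nodes/parsenodes.h"

    List       *targetList = NIL;
    ListCell   *lc;

    foreach(lc, attlist)
    {
        ColumnRef  *cr = makeNode(ColumnRef);
        ResTarget  *target = makeNode(ResTarget);

        cr->fields = list_make1(lfirst(lc));    /* one column name each */
        cr->location = -1;

        target->name = NULL;
        target->indirection = NIL;
        target->val = (Node *) cr;
        target->location = -1;

        targetList = lappend(targetList, target);
    }
    /* targetList then becomes the constructed SELECT's target list. */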
Patch-By: Adam Brightwell
Reviewed-By: Michael Paquier
pg_upgrade checks whether all the shared libraries used in the old cluster
are also available in the new one by issuing LOAD for each library name.
Previously, it cared not what order it did the LOADs in. Ideally it
should not have to care, but currently the transform modules in contrib
fail unless both the language and datatype modules they depend on are
loaded first. A backend-side solution for that looks possible but
probably not back-patchable, so as a stopgap measure, let's do the LOAD
tests in order by library name length. That should fix the problem for
reasonably-named transform modules, eg "hstore_plpython" will be loaded
after both "hstore" and "plpython". (Yeah, it's a hack.)
In a larger sense, having a predictable order of these probes is a good
thing, since it will make upgrades predictably work or not work in the
face of inter-library dependencies. Also, this patch replaces O(N^2)
de-duplication logic with O(N log N) logic, which could matter in
installations with very many databases. So I don't foresee reverting this
even after we have a proper fix for the library-dependency problem.
In passing, improve a couple of SQL queries used here.
Per complaint from Andrew Dunstan that pg_upgrade'ing the transform contrib
modules failed. Back-patch to 9.5 where transform modules were introduced.
Discussion: <f7ac29f3-515c-2a44-21c5-ec925053265f@dunslane.net>
Don't pre-read tuples into SortTuple slots during merge. Instead, use the
memory for larger read buffers in logtape.c. We're doing the same number
of READTUP() calls either way, but managing the pre-read SortTuple slots
is much more complicated. Also, the on-tape representation is more compact
than SortTuples, so we can fit more pre-read tuples into the same amount
of memory this way. And we have better cache-locality, when we use just a
small number of SortTuple slots.
Now that we only hold one tuple from each tape in the SortTuple slots, we
can greatly simplify the "batch memory" management. We now maintain a
small set of fixed-sized slots, to hold the tuples, and fall back to
palloc() for larger tuples. We use this method during all merge phases,
not just the final merge, and also when randomAccess is requested, and
also in the TSS_SORTEDONTAPE case. In other words, it's used whenever we
do an external sort.
Reviewed by Peter Geoghegan and Claudio Freire.
Discussion: <CAM3SWZTpaORV=yQGVCG8Q4axcZ3MvF-05xe39ZvORdU9JcD6hQ@mail.gmail.com>
Without this, an extension containing an access method is not properly
dumped/restored during pg_upgrade --- the AM ends up not being a member
of the extension after upgrading.
Another oversight in commit 473b93287, reported by Andrew Dunstan.
Report: <f7ac29f3-515c-2a44-21c5-ec925053265f@dunslane.net>
Fixes errors introduced in commit bc34223bc, as detected by Coverity.
In passing, report ENOSPC for a short write while padding a new wal file in
open_walfile, make certain that close_walfile closes walfile in all cases,
and improve a couple of comments.
Michael Paquier and Tom Lane
In standard Unix builds, postmaster child processes do ClosePostmasterPorts
immediately after InitPostmasterChild, that is almost immediately after
being spawned. This is important because we don't want children holding
open the postmaster's end of the postmaster death watch pipe.
However, in EXEC_BACKEND builds, SubPostmasterMain was postponing this
responsibility significantly, in order to make it slightly more convenient
to pass the right flag value to ClosePostmasterPorts. This is bad,
particularly seeing that process_shared_preload_libraries() might invoke
nearly-arbitrary code. Rearrange so that we do it as soon as we've
fetched the socket FDs via read_backend_variables().
Also move the comment explaining about randomize_va_space to before the
call of PGSharedMemoryReAttach, which is where it's relevant. The old
placement was appropriate when the reattach happened inside
CreateSharedMemoryAndSemaphores, but that was a long time ago.
Back-patch to 9.3; the patch doesn't apply cleanly before that, and
it doesn't seem worth a lot of effort given that we've had no actual
field complaints traceable to this.
Discussion: <4157.1475178360@sss.pgh.pa.us>
There is no need for "all: all-lib" to be placed before inclusion of
Makefile.shlib. Makefile.global is what ensures that "all" is the
default target, and we already document that that has to be included
first. Per comment from Pavel Raiskup.
Discussion: <1925924.izSMJEZO3x@unused-4-107.brq.redhat.com>
The sequence "configure; cd src/pl/plpython; make -j" failed due to
trying to compile plpython's .o files before the generated headers
finished building. (This is an important real-world case, since it's
the typical second step when building both plpython2 and plpython3.)
This happens because the submake-generated-headers target is not
placed in a way to make it a prerequisite to compiling the .o files.
Fix that.
Checking other uses of submake-generated-headers, I noted that the one
attached to pg_regress was similarly misplaced; but it's actually not
needed at all for pg_regress.o, rather regress.o, so move it to be a
prerequisite of that.
Back-patch to 9.6 where submake-generated-headers was introduced
(by commit 548af97fc). It's not immediately clear to me why the
previous coding didn't have the same issue; but since we've not
had field reports of plpython make failing, leave it alone in the
older branches.
Pavel Raiskup and Tom Lane
Discussion: <1925924.izSMJEZO3x@unused-4-107.brq.redhat.com>
Before pg_regress runs psql, set the application name to the test name.
Similarly, set the application name to the test file name in the TAP
tests. Also, set a default log_line_prefix that shows the application
name, as well as the PID and a time stamp.
That way, the server log output can be correlated to the test input
files, making debugging a bit easier.
The previous design for this had copyFile(), linkFile(), and
rewriteVisibilityMap() returning strerror strings, with the caller
producing one-size-fits-all error messages based on that. This made it
impossible to produce messages that described the failures with any degree
of precision, especially not short-read problems since those don't set
errno at all.
Since pg_upgrade has no intention of continuing after any error in this
area, let's fix this by just letting these functions call pg_fatal() for
themselves, making it easy for each point of failure to have a suitable
error message. Taking this approach also allows dropping cleanup code
that was unnecessary and was often rather sloppy about preserving errno.
To not lose relevant info that was reported before, pass in the schema name
and table name of the current table so that they can be included in the
error reports.
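A hedged sketch of the new reporting style at one failure site (message
wording illustrative; pg_fatal() is pg_upgrade's fatal-error helper and
PG_BINARY is the portability macro for binary-mode opens):

    if ((src_fd = open(fromfile, O_RDONLY | PG_BINARY, 0)) < 0)
        pg_fatal("error while copying relation \"%s.%s\": could not open file \"%s\": %s\n",
                 schemaName, relName, fromfile, strerror(errno));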
An additional problem was the use of getErrorText(), which was flat out
wrong for all but a couple of call sites, because it unconditionally did
"_dosmaperr(GetLastError())" on Windows. That's only appropriate when
reporting an error from a Windows-native API, which only a couple of
the callers were actually doing. Thus, even the reported strerror string
would be unrelated to the actual failure in many cases on Windows.
To fix, get rid of getErrorText() altogether, and just have call sites
do strerror(errno) instead, since that's the way all the rest of our
frontend programs do it. Add back the _dosmaperr() calls in the two
places where that's actually appropriate.
In passing, make assorted messages hew more closely to project style
guidelines, notably by removing initial capitals in not-complete-sentence
primary error messages. (I didn't make any effort to clean up places
I didn't have another reason to touch, though.)
Per discussion of a report from Thomas Kellerer. Back-patch to 9.6,
but no further; given the relative infrequency of reports of problems
here, it's not clear it's worth adapting the patch to older branches.
Patch by me, but with credit to Alvaro Herrera for spotting the issue
with getErrorText's misuse of _dosmaperr().
Discussion: <nsjrbh$8li$1@blaine.gmane.org>
This is new code in 9.6, and evidently we missed out on testing it as
thoroughly as it should have been. Bugs fixed here:
1. Use binary not text mode to open the files on Windows. Before, if
the visibility map chanced to contain two bytes that looked like \r\n,
Windows' read() would convert that to \n, which both corrupts the map
data and causes the file to look shorter than it should. Unless you
were *very* unlucky and had an exact multiple of 8K such occurrences
in each VM file, this would cause pg_upgrade to report a failure,
though with a rather obscure error message.
2. The code for copying rebuilt bytes into the output was simply wrong.
It chanced to work okay on little-endian machines but would emit the
bytes in the wrong order on big-endian, leading to silent corruption
of the visibility map data.
3. The code was careless about alignment of the working buffers. Given
all three of an alignment-picky architecture, a compiler that chooses
to put the new_vmbuf[] local variable at an odd starting address, and
a checksum-enabled database, pg_upgrade would dump core.
Point one was reported by Thomas Kellerer, the other two detected by
code-reading.
Point two is much the nastiest of these issues from an impact standpoint,
though fortunately it affects only a minority of users. The Windows issue
will definitely bite people, but it seems quite unlikely that there would
be undetected corruption from that.
In addition, I failed to resist the temptation to do some minor cosmetic
adjustments, mostly improving the comments.
It would be a good idea to try to improve the error reporting here, but
that seems like material for a separate patch.
Discussion: <nsjrbh$8li$1@blaine.gmane.org>
There is a small window between when the server closes out the existing
segment and the new one is created. Put a loop around the open call in
this case to make sure we wait for the new file to actually appear.