When decoding from a logical slot, it's necessary for xlog reading to be
able to read xlog from historical (i.e. not current) timelines;
otherwise, decoding fails after failover, because the archives are in
the historical timeline. This is required to make "failover logical
slots" possible; it currently has no other use, although theoretically
it could be used by an extension that creates a slot on a standby and
continues to replay from the slot when the standby is promoted.
This commit includes a module in src/test/modules with functions to
manipulate the slots (which is not otherwise possible in SQL code) in
order to enable testing, and a new test in src/test/recovery to ensure
that the behavior is as expected.
Author: Craig Ringer
Reviewed-By: Oleksii Kliukin, Andres Freund, Petr Jelínek
Some minor tweaks and comment additions, for cleanliness' sake and to
avoid having the upcoming timeline-following patch be polluted with
unrelated cleanup.
Extracted from a larger patch by Craig Ringer, reviewed by Andres
Freund, with some additions by myself.
Formerly, the geometric I/O routines such as box_in and point_out relied
directly on strtod() and sprintf() for conversion of the float8 component
values of their data types. However, the behavior of those functions is
pretty platform-dependent, especially for edge-case values such as
infinities and NaNs. This was exposed by commit acdf2a8b37, which
added test cases involving boxes with infinity endpoints, and immediately
failed on Windows and AIX buildfarm members. We solved these problems
years ago in the main float8in and float8out functions, so let's fix it
by making the geometric types use that code instead of depending directly
on the platform-supplied functions.
To do this, refactor the float8in code so that it can be used to parse
just part of a string, and as a convenience make the guts of float8out
usable without going through DirectFunctionCall.
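For illustration, the refactored entry point ends up looking roughly like
this (signature paraphrased from memory, not quoted from the tree), so that
a geometric input routine can consume one float8 component at a time:

/* Parse one float8 starting at "num"; *endptr_p is set just past it.
 * type_name and orig_string feed the eventual error message. */
extern double float8in_internal(char *num, char **endptr_p,
                                const char *type_name,
                                const char *orig_string);

/* hypothetical fragment of a box_in-style routine */
char       *cp = str;
double      x,
            y;

x = float8in_internal(cp, &cp, "box", str);
if (*cp++ != ',')
    ereport(ERROR,
            (errcode(ERRCODE_INVALID_TEXT_REPRESENTATION),
             errmsg("invalid input syntax for type %s: \"%s\"",
                    "box", str)));
y = float8in_internal(cp, &cp, "box", str);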
While at it, get rid of geo_ops.c's fairly shaky assumptions about the
maximum output string length for a double, by having it build results in
StringInfo buffers instead of fixed-length strings.
In passing, convert all the "invalid input syntax for type foo" messages
in this area of the code into "invalid input syntax for type %s" to reduce
the number of distinct translatable strings, per recent discussion.
We would have needed a fair number of the latter anyway for code-sharing
reasons, so we might as well just go whole hog.
Note: this patch is by no means intended to guarantee that the geometric
types uniformly behave sanely for infinity or NaN component values.
But any bugs we have in that line were there all along, they were just
harder to reach in a platform-independent way.
My compiler doesn't like the lack of initialization of "flag", and
I think it's right: if there were zero keys we'd have an undefined
result. The AND of zero items is TRUE, so initialize to TRUE.
This patch implements a quad-tree over boxes. The naive approach of a 2D
quad tree will not work for non-point objects, because splitting the space
at a node is not efficient for them. The idea of the patch is to treat 2D
boxes as 4D points, so that objects do not overlap (in 4D space).
The performance tests reveal that this technique is especially beneficial
with heavily overlapping objects, so-called "spaghetti data".
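As a rough sketch of the mapping (struct and function names here are
invented for illustration, not taken from the patch):

#include "utils/geo_decls.h"    /* for BOX and Point */

typedef struct Point4D
{
    double      coords[4];      /* low.x, low.y, high.x, high.y */
} Point4D;

/* Two boxes may overlap in 2D, but each maps to a single distinct point
 * in 4D, so quad-tree-style splitting (2^4 = 16 quadrants per node)
 * partitions them cleanly. */
static void
box_to_4d_point(const BOX *box, Point4D *p)
{
    p->coords[0] = box->low.x;
    p->coords[1] = box->low.y;
    p->coords[2] = box->high.x;
    p->coords[3] = box->high.y;
}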
Author: Alexander Lebedev, with editorialization by Emre Hasegeli and me
During a scan it is sometimes very helpful to know something about the
parent node or all ancestor nodes. Right now reconstructedValue could be
used for that, but it is not a correct use of it (the range opclass does so).
traversalValue is an arbitrary piece of memory in a separate MemoryContext,
while reconstructedValue must have the same type as the indexed column.
Subsequent patches for the range opclass and the quad-4D tree will use it.
Author: Alexander Lebedev, Teodor Sigaev
Reporting the specific out-of-range input value produces platform-dependent
results. We could skip reporting the value, but that's contrary to our
message style guidelines and unhelpful to users. Or we could add a
separate expected-output file for Windows, but that would be a substantial
maintenance burden, and these test cases seem unlikely to be worth it.
Per buildfarm.
In this mode, the master waits for the transaction to be applied on
the remote side, not just written to disk. That means that you can
count on a transaction started on the standby to see all commits
previously acknowledged by the master.
To make this work, the standby sends a reply after replaying each
commit record generated with synchronous_commit >= 'remote_apply'.
This introduces a small inefficiency: the extra replies will be sent
even by standbys that aren't the current synchronous standby. But
previously-existing synchronous_commit levels make no attempt at all
to optimize which replies are sent based on what the primary cares
about, so this is no worse, and at least avoids any extra replies for
people not using the feature at all.
Thomas Munro, reviewed by Michael Paquier and by me. Some additional
tweaks by me.
interval_mul() attempts to prevent its calculations from producing silly
results, but it forgot that zero times infinity yields NaN in IEEE
arithmetic. Hence, a case like '1 second'::interval * 'infinity'::float8
produced a NaN for the months product, which didn't trigger the range
check, resulting in bogus and possibly platform-dependent output.
This isn't terribly obvious to the naked eye because if you try that
exact case, you get "interval out of range" which is what you expect
--- but if you look closer, the error is coming from interval_out not
interval_mul. interval_mul has allowed a bogus value into the system.
Fix by adding isnan tests.
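A minimal sketch of the added guard (paraphrased, not the literal code):

double      months = span->month * factor;

/* 0 * Infinity yields NaN, and NaN compares false against everything,
 * so a plain "months < MIN || months > MAX" check lets it through;
 * test for NaN explicitly. */
if (isnan(months) || months < INT32_MIN || months > INT32_MAX)
    ereport(ERROR,
            (errcode(ERRCODE_DATETIME_VALUE_OUT_OF_RANGE),
             errmsg("interval out of range")));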
Noted while testing Vitaly Burovoy's fix for infinity input to
to_timestamp(). Given the lack of field complaints, I doubt this
is worth a back-patch.
With the original SQL-function implementation, such cases failed because
we don't support infinite intervals. Converting the function to C lets
us bypass the interval representation, which should be a bit faster as
well as more flexible.
Vitaly Burovoy, reviewed by Anastasia Lubennikova
resulttypeLen and resulttypeByVal must be set correctly when serializing
aggregates, not just when finalizing them. This was in David's final
patch but I downloaded the wrong version by mistake and failed to spot
the error.
David Rowley
This is necessary infrastructure for supporting parallel aggregation
for aggregates whose transition type is "internal". Such values
can't be passed between cooperating processes, because they are
just pointers.
David Rowley, reviewed by Tomas Vondra and by me.
This refines the previous weight range and allows a script to be "turned
off" by passing a zero weight, which is useful when scripting multiple
pgbench runs.
I did not apply the suggested warning when a script uses zero weight; we
use the principle elsewhere that if there's nothing to be done, do
nothing quietly.
Adjust docs accordingly.
Author: Jeff Janes, Fabien Coelho
You can now do the same thing via \set using the appropriate function,
either random(), random_gaussian(), or random_exponential(), depending
on the desired distribution. This is not backward-compatible, but per
discussion, it's worth it to avoid having the old syntax hang around
forever.
Fabien Coelho, reviewed by Michael Paquier, and adjusted by me.
Whenever this function is used with the FORMAT_MESSAGE_FROM_SYSTEM flag,
it's good practice to include FORMAT_MESSAGE_IGNORE_INSERTS as well.
Otherwise, if the message contains any %n insertion markers, the function
will try to fetch argument strings to substitute --- which we are not
passing, possibly leading to a crash. This is exactly analogous to the
rule about not giving printf() a format string you're not in control of.
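For reference, a hedged example of the safe calling pattern (flags per
the Windows API documentation):

char        errbuf[256];
DWORD       err = GetLastError();

/* IGNORE_INSERTS stops expansion of %1-style insertion markers in the
 * message text, which matters because we pass no argument array (the
 * last parameter is NULL). */
FormatMessage(FORMAT_MESSAGE_FROM_SYSTEM | FORMAT_MESSAGE_IGNORE_INSERTS,
              NULL, err,
              MAKELANGID(LANG_NEUTRAL, SUBLANG_DEFAULT),
              errbuf, sizeof(errbuf), NULL);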
Noted and patched by Christian Ullrich.
Back-patch to all supported branches.
When tsearch was implemented I made several mistakes in the hostname/email
definition rules:
1) allowed underscores in hostnames, which is prohibited by the RFC
2) forgot to allow leading digits followed by a hyphen (like 123-x.com)
in hostnames
3) did not allow underscore/hyphen after leading digits in the localpart
of an email address
Artur's patch resolves the last two issues, but in passing allows hostnames
like 123_x.com together with 123-x.com. The RFC forbids underscores in
hostnames, but pg has allowed them since the initial in-core tsearch
version, although only for non-digits. The patch syncs support for digits
and non-digits in both hostnames and emails.
Forbidding underscores in hostnames may break existing uses of tsearch
and, anyhow, that should be done in a separate patch.
Author: Artur Zakirov
BUG: #13964
Per discussion, the new extensible node framework is thought to be
better designed than the custom path/scan/scanstate stuff we added
in PostgreSQL 9.5. Rework the latter to be more like the former.
This is not backward-compatible, but we generally don't promise that
for C APIs, and there probably aren't many people using this yet
anyway.
KaiGai Kohei, reviewed by Petr Jelinek and me. Some further
cosmetic changes by me.
The IANA crew seem to think that symlink() exists everywhere nowadays,
and they may well be right. But we use #ifdef HAVE_SYMLINK elsewhere
so for consistency we should do it here too. Noted by Michael Paquier.
The new coding of dolink() is dependent on link() returning an on-point
errno when it fails; but the quick-hack implementation of link() that
we'd put in for Windows didn't bother with setting errno. Fix that.
Analysis and patch by Christian Ullrich.
INT64_MIN/MAX should be spelled PG_INT64_MIN/MAX, per well established
convention in our sources. Less obviously, a symbol named DOUBLE causes
problems on Windows builds, so rename that to DOUBLE_CONST; and rename
INTEGER to INTEGER_CONST for consistency.
Also, get rid of incorrect/obsolete hand-munging of yycolumn, and fix
the grammar for float constants to handle expected cases such as ".1".
First two items by Michael Paquier, second two by me.
Commit fbe5a3fb73 accidentally changed
this behavior; put things back the way they were, and add some
regression tests.
Report by Andres Freund; patch by Ashutosh Bapat, with a bit of
kibitzing by me.
Up until now, we've been using timezone and tzname, but Visual Studio
2015 (for which we wish to add support) no longer declares those
symbols. All versions since Visual Studio 2003 apparently support the
underscore-equipped names, and we don't support anything older than
Visual Studio 2005, so this should work OK everywhere. But let's see
what the buildfarm thinks.
Michael Paquier, reviewed by Petr Jelinek
The new functions are pi(), random(), random_exponential(),
random_gaussian(), and sqrt(). I was worried that this would be
slower than before, but, if anything, it actually turns out to be
slightly faster, because we now express the built-in pgbench scripts
using fewer lines; each \setrandom can be merged into a subsequent
\set.
Fabien Coelho
Corrects an oversight in 2c83f435a3 where the $timed_out reference var
isn't initialized; using it would require the caller to initialize it
beforehand, which is cumbersome.
Author: Craig Ringer
This brings us a bit closer to matching upstream, but since it affects
files outside src/timezone/, we might choose not to back-patch it.
Hence keep it separate from the main update patch.
We hadn't done this in about six years, which proves to have been a mistake
because there's been a lot of code churn upstream, making the merge rather
painful. But putting it off any further isn't going to lessen the pain,
and there are at least two incompatible changes that we need to absorb
before someone starts complaining that --with-system-tzdata doesn't work
at all on their platform, or we get blindsided by a tzdata release that
our out-of-date zic can't compile. Last week's "time zone abbreviation
differs from POSIX standard" mess was a wake-up call in that regard.
This is a sufficiently large patch that I'm afraid to back-patch it
immediately, though the foregoing considerations imply that we probably
should do so eventually. For the moment, just put it in HEAD so that
it can get some testing. Maybe we can wait till the end of the 9.6
beta cycle before deeming it okay.
Change the slightly different variations of the message
function FOO must return type BAR
to a single wording, removing the variability in type name so that they
all create a single translation entry; since the type name is not to be
translated, there's no point in it being part of the message anyway.
Also, change them all to use the same quoting convention, namely that
the function name is not to be quoted but the type name is. (I'm not
quite sure why this is so, but it's the clear majority.)
Some similar messages such as "encoding conversion function FOO must ..."
are also changed.
In the plancache, we check if the environment we planned the query under
has changed in a way which requires us to re-plan, such as when the user
for whom the plan was prepared changes and RLS is being used (and,
therefore, there may be different policies to apply).
Unfortunately, while those values were set and checked, they were not
being reset when the query was re-planned and therefore, in cases where
we change role, re-plan, and then change role again, we weren't
re-planning again. This leads to potentially incorrect policies being
applied in cases where role-specific policies are used and a given query
is planned under one role and then executed under other roles, which
could happen under security definer functions or when a common user and
query is planned initially and then re-used across multiple SET ROLEs.
Further, extensions which made use of CopyCachedPlan() may suffer from
similar issues as the RLS-related fields were not properly copied as
part of the plan and therefore RevalidateCachedQuery() would copy in the
current settings without invalidating the query.
Fix by using the same approach used for 'search_path', where we set the
correct values in CompleteCachedPlan(), check them early on in
RevalidateCachedQuery() and then properly reset them if re-planning.
Also, copy through the values during CopyCachedPlan().
Pointed out by Ashutosh Bapat. Reviewed by Michael Paquier.
Back-patch to 9.5 where RLS was introduced.
Security: CVE-2016-2193
Fix up check for high-bit-set characters, which provoked "comparison is
always true due to limited range of data type" warnings on some compilers,
and was unlike the way we do it elsewhere anyway. Fix omission of "$"
from the set of valid identifier continuation characters. Get rid of
sanitize_text(), which was utterly inconsistent with any other error report
anywhere in the system, and wasn't even well designed on its own terms
(double-quoting the result string without escaping contained double quotes
doesn't seem very well thought out). Fix up error messages, which didn't
follow the message style guidelines very well, and were overly specific in
situations where the actual mistake might not be what they said. Improve
documentation.
(I started out just intending to fix the compiler warning, but the more
I looked at the patch the less I liked it.)
If the referenced rel was proven empty, we'd compute 0/0 here, which
results in the function returning NaN. That's a bit more serious
than the other zero-divide case. Still, it only seems to be possible
in HEAD, so no back-patch.
Per report from Piotr Stefaniak. I looked through the rest of selfuncs.c
and found no other likely trouble spots.
This avoids a possible divide-by-zero in the following calculation,
and rounding the number to an integer seems like saner behavior anyway.
Assuming IEEE math, the division would yield +Infinity which would get
replaced by 1.0 at the bottom of the function, so nothing really
interesting would ensue; but avoiding divide-by-zero seems like a
good idea on general principles.
Per report from Piotr Stefaniak. No back-patch since this seems
mostly cosmetic.
Previously pg_rewind did not fsync any files. That's problematic, given
that the target directory is modified. If the database was started
afterwards, 2ce439f33 luckily already caused the data directory to be
synced to disk at postmaster startup, reducing the scope of the problem.
To fix, use initdb -S at the end of the pg_rewind run. It doesn't seem
worthwhile to duplicate the code into pg_rewind, and initdb -S is
already used that way by pg_upgrade.
Reported-By: Andres Freund
Author: Michael Paquier, somewhat edited by me
Discussion: 20160310034352.iuqgvpmg5qmnxtkz@alap3.anarazel.de,
CAB7nPqSytVG1o4S3S2pA1O=692ekurJ+fckW2PywEG3sNw54Ow@mail.gmail.com
Backpatch: 9.5, where pg_rewind was introduced
This was a relatively harmless leak, as createBackupLabel() is only
called once per pg_rewind invocation.
Author: Michael Paquier
Reported-By: Michael Paquier
Discussion: CAB7nPqRnOw30gOXe2_SPLjh37bgm4V+txbYAPwoXb97nGQ297w@mail.gmail.com
Backpatch: 9.5, where pg_rewind was introduced
Returning the direct result of bit arithmetic, in a macro intended to be
used in a boolean manner, can be problematic if the return value is
stored in a variable of type 'bool'. If bool is implemented using C99's
_Bool, that can lead to comparison failures if the variable is then
compared again with the expression (see ginStepRight() for an example
that fails), as _Bool forces the result to be 0/1. That happens in some
configurations of newer MSVC compilers. It's also problematic when
storing the result of such an expression in a narrower type.
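To illustrate the hazard with invented names (the real macros live in
gin's private headers):

#define XYZ_FLAG        0x02

/* Problematic: "true" here is 0x02, which a C99 _Bool squashes to 1,
 * so ((bool) PageIsXyzBad(p)) == PageIsXyzBad(p) can evaluate false. */
#define PageIsXyzBad(p)     (GetFlags(p) & XYZ_FLAG)

/* Fixed style: force a genuine 0/1 result */
#define PageIsXyz(p)        ((GetFlags(p) & XYZ_FLAG) != 0)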
Several gin macros have been declared in that style since gin's initial
commit in 8a3631f8d8.
There's a lot more macros like this, but this is the only one causing
regression test failures; and I don't want to commit and backpatch a
larger patch with lots of conflicts just before the next set of minor
releases.
Discussion: 20150811154237.GD17575@awork2.anarazel.de
Backpatch: All supported branches
We really need to sync all of our IANA-derived timezone code with upstream,
but that's going to be a large patch and I certainly don't care to shove
such a thing into stable branches immediately before a release. As a
stopgap, copy just the tzcode2016c logic that checks validity of timezone
abbreviations. This prevents getting multiple "time zone abbreviation
differs from POSIX standard" bleats with tzdata 2014b and later.
cost_subplan() supposed that the given subplan must have plan_rows > 0,
which as far as I can tell was true until recent refactoring of the
code in createplan.c; but now that code allows the Result for a provably
empty subquery to have plan_rows = 0. Rather than undo that change,
put in a clamp to prevent zero divide.
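Sketchily, the fix amounts to this shape (a hypothetical helper;
clamp_row_est() is the existing function that forces an estimate to at
least one row):

static Cost
cost_per_retrieved_tuple(Plan *plan)
{
    double      nrows = clamp_row_est(plan->plan_rows);     /* >= 1.0 */

    return plan->total_cost / nrows;    /* can no longer divide by zero */
}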
get_cheapest_fractional_path() likewise supposed that best_path->rows > 0.
This assumption has been wrong for longer. It's actually harmless given
IEEE float math, because a positive value divided by zero gives +Infinity
and compare_fractional_path_costs() will do the right thing with that.
Still, best not to assume that.
final_cost_nestloop() also seems to have some risks in this area, so
borrow the clamping logic already present in the mergejoin cost functions.
Lastly, remove unnecessary clamp_row_est() in planner.c's calls to
get_number_of_groups(). The only thing that function does with path_rows
is pass it to estimate_num_groups() which already has an internal clamp,
so we don't need the extra call; and if we did, the callers are arguably
the wrong place for it anyway.
First two items reported by Piotr Stefaniak, the others are products
of my nosing around for similar problems. No back-patch since there's
no evidence that problems arise in the back branches.
DST law changes in Azerbaijan, Chile, Haiti, Palestine, and Russia (Altai,
Astrakhan, Kirov, Sakhalin, Ulyanovsk regions). Historical corrections
for Lithuania, Moldova, Russia (Kaliningrad, Samara, Volgograd).
As of 2015b, the keepers of the IANA timezone database started to use
numeric time zone abbreviations (e.g., "+04") instead of inventing
abbreviations not found in the wild like "ASTT". This causes our rather
old copy of zic to whine "warning: time zone abbreviation differs from
POSIX standard" several times during "make install". This warning is
harmless according to the IANA folk, and I don't see any problems with
these abbreviations in some simple tests; but it seems like now would be
a good time to update our copy of the tzcode stuff. I'll look into that
soon.
We don't really want to encourage people to write numeric SQLSTATEs in
programs; that's unreadable and error-prone. Copy plpgsql's infrastructure
for converting between SQLSTATEs and exception names shown in Appendix A,
and modify examples in tests and documentation to do it that way.
Tcl has a convention for returning additional info about an error in a
global variable named errorCode. Up to now PL/Tcl has ignored that,
but this patch causes database errors caught by PL/Tcl to fill in
errorCode with useful information from the ErrorData struct.
Jim Nasby, reviewed by Pavel Stehule and myself
This avoids leaving dangling links in pg_operator, which while fairly
harmless are also unsightly.
While we're at it, simplify OperatorUpd, which went through
heap_modify_tuple for no very good reason considering it had already made
a tuple copy it could just scribble on.
Roma Sokolov, reviewed by Tomas Vondra, additional hacking by Robert Haas
and myself.
In commit 9118d03a8c we taught the planner to postpone evaluation of
set-returning functions in a SELECT's targetlist until after any sort done
to satisfy ORDER BY. However, if we postpone some SRFs this way while
others do not get postponed (because they're sort or group key columns)
we will break the traditional behavior by which all SRFs in the tlist run
in-step during ExecTargetList(), so that you get the least common multiple
of their periods, not the product. Fix make_sort_input_target() so it will
not split up SRF evaluation in such cases.
There is still a hazard of similar odd behavior if there's a SRF in a
grouping column and another one that isn't, but that was true before
and we're just trying to preserve bug-compatibility with the traditional
behavior. This whole area is overdue to be rethought and reimplemented,
but we'll try to avoid changing behavior until then.
Per report from Regina Obe.
Some of the non-MSVC Windows buildfarm members seem to need this to avoid
getting "undefined symbol" errors on libpgfeutils' references to libpq.
I could understand that if libpq were a static library, but surely it is
not? Oh well, at least the extra reference is no more harmful than it is
for libpgcommon or libpgport.
This completes (at least for now) the project of getting rid of ad-hoc
linkages among the src/bin/ subdirectories. Everything they share is now
in src/fe_utils/ and is included from a static library at link time.
A side benefit is that we can restore the FLEX_NO_BACKUP check for
psqlscanslash.l. We might need to think of another way to do that check
if we ever need to build two lexers with that property in the same source
directory, but there's no foreseeable reason to need that.
Compilers that don't know that elog(ERROR) doesn't return complained
that this function might fail to return a value. Per buildfarm.
While at it, const-ify the function's declaration, since the intent
is evidently to always return a constant string.
Per discussion, we want to create a static library and put the stuff into
it that until now has been shared across src/bin/ directories by ad-hoc
methods like symlinking a source file. This commit creates the library and
populates it with a couple of files that contain the widely-useful portions
of pg_dump's dumputils.c file. dumputils.c survives, because it has some
stuff that didn't seem appropriate for fe_utils, but it's significantly
smaller and is no longer referenced from any other directory.
Follow-on patches will move more stuff into fe_utils.
The Mkvcbuild.pm hacking here is just a best guess; we'll see how the
buildfarm likes it.
This enables external code to create access methods. This is useful so
that extensions can add their own access methods which can be formally
tracked for dependencies, so that DROP operates correctly. Also, having
explicit support makes pg_dump work correctly.
Currently only index AMs are supported, but we expect different types to
be added in the future.
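Sketch of what an extension supplies (callback names hypothetical):

PG_FUNCTION_INFO_V1(myam_handler);

Datum
myam_handler(PG_FUNCTION_ARGS)
{
    IndexAmRoutine *amroutine = makeNode(IndexAmRoutine);

    amroutine->ambuild = myam_build;        /* hypothetical callbacks */
    amroutine->aminsert = myam_insert;
    amroutine->amgettuple = myam_gettuple;
    /* ... fill in the remaining fields ... */

    PG_RETURN_POINTER(amroutine);
}

after which CREATE ACCESS METHOD myam TYPE INDEX HANDLER myam_handler
registers it with a proper dependency entry.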
Authors: Alexander Korotkov, Petr Jelínek
Reviewed-By: Teodor Sigaev, Petr Jelínek, Jim Nasby
Commitfest-URL: https://commitfest.postgresql.org/9/353/
Discussion: https://www.postgresql.org/message-id/CAPpHfdsXwZmojm6Dx+TJnpYk27kT4o7Ri6X_4OSWcByu1Rm+VA@mail.gmail.com
Now that we have src/common/ for code shared between frontend and backend,
we can get rid of (most of) the klugy ways that the keyword table and
keyword lookup code were formerly shared between different uses.
This is a first step towards a more general plan of getting rid of
special-purpose kluges for sharing code in src/bin/.
I chose to merge kwlookup.c back into keywords.c, as it once was, and
always has been so far as keywords.h is concerned. We could have
kept them separate, but there is noplace that uses ScanKeywordLookup
without also wanting access to the backend's keyword list, so there
seems little point.
ecpg is still a bit weird, but at least now the trickiness is documented.
I think that the MSVC build script should require no adjustments beyond
what's done here ... but we'll soon find out.
Unfortunately, every version of glibc thus far tested has bugs whereby
strcoll() ordering does not match strxfrm() ordering as required by
the standard. This can result in, for example, corrupted indexes.
Disabling abbreviated keys in these cases slows down non-C-collation
string sorting considerably, but there seems to be no practical
alternative. Users who are confident that their libc implementations
are solid in this regard can re-enable the optimization by compiling
with TRUST_STRXFRM.
Users who have built indexes using PostgreSQL 9.5 or PostgreSQL 9.5.1
should REINDEX if there is a possibility that they may have been
affected by this problem.
Report by Marc-Olaf Jaschke. Investigation mostly by Tom Lane, with
help from Peter Geoghegan, Noah Misch, Stephen Frost, and me. Patch
by me, reviewed by Peter Geoghegan and Tom Lane.
Without this, contention on the freelist can become a pretty serious
problem on large servers.
Aleksander Alekseev, reviewed by Anastasia Lubennikova, Dilip Kumar,
and me.
User-facing (even tested by regression tests) error conditions were thrown
with elog(), hence had wrong SQLSTATE and were untranslatable. And the
error message texts weren't up to project style, either.
jsonb_set() could produce wrong answers or incorrect error reports, or in
the worst case even crash, when trying to convert a path-array element into
an integer for use as an array subscript. Per report from Vitaly Burovoy.
Back-patch to 9.5 where the faulty code was introduced (in commit
c6947010ce).
Michael Paquier
In commit afb9249d06, we (probably I) made ExecLockRows assign
null test tuples to all relations of the query while setting up to do an
EvalPlanQual recheck for a newly-updated locked row. This was sheerest
brain fade: we should only set test tuples for relations that are lockable
by the LockRows node, and in particular empty test tuples are only sensible
for inheritance child relations that weren't the source of the current
tuple from their inheritance tree. Setting a null test tuple for an
unrelated table causes it to return NULLs when it should not, as exhibited
in bug #14034 from Bronislav Houdek. To add insult to injury, doing it the
wrong way required two loops where one would suffice; so the corrected code
is even a bit shorter and faster.
Add a regression test case based on his example, and back-patch to 9.5
where the bug was introduced.
Instead of just "2" seconds, allow e.g. "2.5" seconds. Per request
from Alvaro Herrera. No docs change since the docs didn't say you
couldn't do this already.
Include the \pset title string if there is one, and shorten the prefab
part of the header to be "timestamp (every Ns)". Per suggestion by
David Johnston.
Michael Paquier and Tom Lane
Parallel workers can now partially aggregate the data and pass the
transition values back to the leader, which can combine the partial
results to produce the final answer.
David Rowley, based on earlier work by Haribabu Kommi. Reviewed by
Álvaro Herrera, Tomas Vondra, Amit Kapila, James Sewell, and me.
Commit ac1d794 ("Make idle backends exit if the postmaster dies.")
introduced a regression on, at least, large Linux systems. Constantly
adding the same postmaster_alive_fds to the OS's internal data structures
for implementing poll/select can cause significant contention, leading
to a performance regression of nearly 3x in one example.
This can be avoided by using e.g. linux' epoll, which avoids having to
add/remove file descriptors to the wait datastructures at a high rate.
Unfortunately the current latch interface makes it hard to allocate any
persistent per-backend resources.
Replace, with a backward compatibility layer, WaitLatchOrSocket with a
new WaitEventSet API. Users can allocate such a Set across multiple
calls, and add more than one file-descriptor to wait on. The latter has
been added because there's upcoming postgres features where that will be
helpful.
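In rough outline (argument lists paraphrased, not authoritative), usage
looks like:

WaitEventSet *set = CreateWaitEventSet(CurrentMemoryContext, 2);
WaitEvent   event;

AddWaitEventToSet(set, WL_LATCH_SET, PGINVALID_SOCKET, MyLatch, NULL);
AddWaitEventToSet(set, WL_SOCKET_READABLE, sock, NULL, NULL);

/* The set persists, so repeated waits reuse the registered descriptors
 * instead of re-adding them to the kernel's structures every time. */
WaitEventSetWait(set, -1L /* no timeout */, &event, 1);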
In addition to the previously existing poll(2), select(2),
WaitForMultipleObjects() implementations also provide an epoll_wait(2)
based implementation to address the aforementioned performance
problem. Epoll is only available on linux, but that is the most likely
OS for machines large enough (four sockets) to reproduce the problem.
To actually address the aforementioned regression, create and use a
long-lived WaitEventSet for FE/BE communication. There are additional
places that would benefit from a long-lived set, but that's a task for
another day.
Thanks to Amit Kapila, who helped make the windows code I blindly wrote
actually work.
Reported-By: Dmitry Vasilyev
Discussion: CAB-SwXZh44_2ybvS5Z67p_CDz=XFn4hNAD=CnMEF+QqkXwFrGg@mail.gmail.com,
20160114143931.GG10941@awork2.anarazel.de
Previously latches for Windows and Unix had been implemented in
different files. A later patch introduces an expanded wait
infrastructure; keeping the implementations separate would introduce too
much duplication.
This basically just moves the functions, without too much change. The
reason to keep this separate is that it allows blame to continue working
a little less badly, and makes review a tiny bit easier.
Discussion: 20160114143931.GG10941@awork2.anarazel.de
After the previous fix in 6f1f34c9 msvc ended up looking for psqlscan.c
in the wrong directory.
David's fix just forces the path to be adjusted. That's not a
particularly pretty fix, but it hopefully will make the buildfarm green
again.
Author: David Rowley
Discussion: CAKJS1f_9CCi_t+LEgV5GWoCj3wjavcMoDc5qfcf_A0UwpQoPoA@mail.gmail.com
The point of this change is to use %pure-parser in pgbench's exprparse.y.
The immediate reason is that it turns out very ancient versions of bison
have a bug with the combination of a reentrant lexer and non-reentrant
parser. We could consider dropping support for such ancient bisons; but
considering that we might well need exprparse.y to be reentrant some day,
it seems better to make it so right now than to move the portability
goalposts. (AFAICT there's no particular performance consequence to this
change, either, so there's no good reason not to do it.)
Now, %pure-parser assumes that the called lexer is built with %option
bison-bridge. Because we're assuming bitwise compatibility of yyscan_t
(yyguts_t) data structures among all the psql/pgbench lexers, that
requirement propagates back to psql's lexers as well. But it's just a
few lines of change on that side too; and if psqlscan.l is to set the
baseline for a possibly-large family of lexers, it should err on the
side of including, not omitting, useful features.
pgbench now needs to use src/bin/psql/psqlscan.l, but it's not very clear
how to fit that into the MSVC build system. If this doesn't work I'm going
to need some help from somebody who actually understands those scripts ...
To allow multiline SQL commands in scripts, adopt the same rules psql uses
to decide what is the end of a SQL command, to wit, an unquoted semicolon
not encased in parentheses. Do this by importing the same flex lexer that
psql uses, since coping with stuff like dollar-quoted literals is hard to
get right without going the full nine yards.
This makes use of the infrastructure added in commit 0ea9efbe9e to
support independently-written flex lexers scanning the same PsqlScanState
input-buffer data structure. Since that infrastructure isn't very
friendly to ad-hoc parsing code such as strtok(), improve exprscan.l
so that it can parse either whitespace-separated words or expression
tokens, on demand, and rewrite pgbench.c's backslash-command parsing
code to always use the lexer to fetch tokens.
It's still the case that pgbench backslash commands extend to the end
of the line, no more and no less. That could be changed in a fairly
localized way now, and there was some interest in doing so, but it
seems like material for a separate patch.
In passing, make some marginal cleanups in syntax error reporting,
const-ify a few data structures that could use it, and run some of
this code through pgindent.
I can't tell whether the MSVC build scripts need to be taught explicitly
about the changes here or not, but the buildfarm will soon tell us.
Kyotaro Horiguchi and Tom Lane
Modern Perl has removed psed from its core distribution, so it might not
be readily available on some build platforms. We therefore replace its
use with a Perl script generated by s2p, which is equivalent to the sed
script. The latter is retained for non-MSVC builds to avoid creating a
new hard dependency on Perl for non-Windows tarball builds.
Backpatch to all live branches.
Michael Paquier and me.
This is a necessary preliminary step for making it play with psqlscan.l
given the way I set up the lexer input-buffer sharing mechanism in commit
0ea9efbe9e.
I've not tried to make it *actually* reentrant; there are still some static
variables lying about. But flex thinks it's reentrant, and that's what
counts.
In support of that, fix exprparse.y to pass through the yyscan_t from the
caller. Also do some minor code beautification, like not casting away
const.
Make some minor formatting adjustments to make it easier to diff these
files and see that they indeed implement the same flex rules (at least
to the extent that we want them to be the same).
(Someday it'd be nice to make ecpg's pgc.l more easily diff'able too,
but today is not that day.)
Also run relevant parts of these files and psqlscanslash.l through
pgindent.
No actual behavioral changes here, just obsessive neatnik-ism.
Now that we know about the %top{} trick, we can revert to building flex
lexers as separate .o files. This is worth doing for a couple of reasons
besides sheer cleanliness. We can narrow the scope of the -Wno-error flag
that's forced on scan.c. Also, since these grammar and lexer files are
so large, splitting them into separate build targets should have some
advantages in build speed, particularly in parallel or ccache'd builds.
We have quite a few other .l files that could be changed likewise, but the
above arguments don't apply to them, so the benefit of fixing them seems
pretty minimal. Leave the rest for some other day.
Previously, all scripts had the same probability of being chosen when
multiple of them were specified via -b, -f, -N, -S. With this commit,
-b and -f now search for an "@" in the script name and use the integer
found after it as the drawing probability for that script.
(One disadvantage is that if you have scripts whose names contain @, you
are now forced to specify "@1" at the end; otherwise the name's @ is
confused with a weight separator. We don't expect many pgbench scripts
with @ in their names in the wild, so this shouldn't be too serious a
problem.)
While at it, rework the interface between addScript, process_file,
process_builtin, and findBuiltin. It had gotten a bit out of hand with
recent commits.
Author: Fabien Coelho
Reviewed-By: Andres Freund, Robert Haas, Álvaro Herrera, Michaël Paquier
Discussion: http://www.postgresql.org/message-id/alpine.DEB.2.10.1603160721240.1666@sto
Buildfarm results show that the ability to attach pg_attribute_printf
decoration to a function pointer appeared somewhere between gcc 2.95.3
and gcc 4.0.1. Guess that it was there in 4.0.
We used to require the server key file to have permissions 0600 or less
for best security. But some systems (such as Debian) have certificate
and key files managed by the operating system that can be shared with
other services. In those cases, the "postgres" user is made a member of
a special group that has access to those files, and the server key file
has permissions 0640. To accommodate that kind of setup, also allow the
key file to have permissions 0640 but only if owned by root.
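A condensed sketch of the intended check (function name hypothetical,
not the literal server code):

#include <sys/stat.h>
#include <unistd.h>

static bool
key_file_permissions_ok(const struct stat *buf)
{
    if (buf->st_uid == geteuid())   /* owned by the server user */
        return (buf->st_mode & (S_IRWXG | S_IRWXO)) == 0;   /* <= 0600 */
    if (buf->st_uid == 0)           /* owned by root */
        return (buf->st_mode &
                (S_IWGRP | S_IXGRP | S_IRWXO)) == 0;        /* <= 0640 */
    return false;                   /* any other owner: reject */
}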
From: Christoph Berg <myon@debian.org>
Reviewed-by: Alvaro Herrera <alvherre@alvh.no-ip.org>
The existing infrastructure for FLEX_NO_BACKUP doesn't work reliably
when two lexers are built in parallel in the same directory. We can
probably fix that, but as a short-term workaround, just don't make
the check for psqlscanslash.l.
Per buildfarm.
This gets us to a point where psqlscan.l can be used by other frontend
programs for the same purpose psql uses it for, ie to detect when it's
collected a complete SQL command from input that is divided across
line boundaries. Moreover, other programs can supply their own lexers
for backslash commands of their own choosing. A follow-on patch will
use this in pgbench.
The end result here is roughly the same as in Kyotaro Horiguchi's
0001-Make-SQL-parser-part-of-psqlscan-independent-from-ps.patch, although
the details of the method for switching between lexers are quite different.
Basically, in this patch we share the entire PsqlScanState, YY_BUFFER_STATE
stack, *and* yyscan_t between different lexers. The only thing we need
to do to switch to a different lexer is to make sure the start_state is
valid for the new lexer. This works because flex doesn't keep any other
persistent state that depends on the specific lexing tables generated for
a particular .l file. (We are assuming that both lexers are built with
the same flex version, or at least versions that are compatible with
respect to the contents of yyscan_t; but that doesn't seem likely to
be a big problem in practice, considering how slowly flex changes.)
Aside from being more efficient than Horiguchi-san's original solution,
this avoids possible corner-case changes in semantics: the original code
was capable of popping the input buffer stack while still staying in
backslash-related parsing states. I'm not sure that that equates to any
useful user-visible behaviors, but I'm not sure it doesn't either, so
I'm loath to assume that we only need to consider the topmost buffer when
parsing a backslash command.
I've attempted to update the MSVC build scripts for the added .l file,
but will rely on the buildfarm to see if I missed anything.
Kyotaro Horiguchi and Tom Lane
Change psqlscan.l to specify '%option reentrant', adjust internal APIs
to match, and get rid of its internal static variables. While this is
good cleanup in an abstract sense, the reason to do it right now is that
it seems the only practical way to support use of separate flex lexers
with common PsqlScanState infrastructure. If we build two non-reentrant
lexers then we are going to have problems with dangling buffer pointers
in whichever lexer isn't active when we transition from one buffer to
another, as well as curious side-effects if we try to share any code
between the files. (Horiguchi-san had a different solution to that in his
pending patch, but I find it ugly and probably broken for corner cases.)
Depending on which version of flex you're using, this may result in getting
a "warning: unused variable 'yyg'" warning from psqlscan, similar to the
one you'd have seen for a long time in backend/parser/scan.l. I put a
local -Wno-error into CFLAGS for the file, for the convenience of those
who compile with -Werror.
Also, stop compiling psqlscan as part of mainloop.c, and make it a
standalone build target instead. This is a lot cleaner than before, though
it doesn't really change much in practice as of this commit. (I'm not sure
whether the MSVC build scripts will need some help with this part, but the
buildfarm will soon tell us.)
The distinction between "archive" and "hot_standby" existed only because
at the time "hot_standby" was added, there was some uncertainty about
stability. This is now a long time ago. We would like to move forward
with simplifying the replication configuration, but this distinction is
in the way, because a primary server cannot tell (without asking a
standby or predicting the future) which one of these would be the
appropriate level.
Pick a new name for the combined setting to make it clearer that it
covers all (non-logical) backup and replication uses. The old values
are still accepted but are converted internally.
Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
Reviewed-by: David Steele <david@pgmasters.net>
Remove assorted external references from psqlscan.l in preparation for
making it usable by other frontend programs. This mostly involves
getting rid of direct calls to psql_error() and GetVariable() in favor
of introducing a callback-functions struct to encapsulate variable
fetching and error printing. In addition, pass the current encoding
and standard-strings status as additional parameters to psql_scan_setup
instead of looking directly at "pset" or calling additional functions.
I did not bother to change some references to psql_error that are in
functions that will soon migrate to a psql-specific backslash-command
lexer. Other than that, this version of psqlscan.l is capable of
compiling standalone. It still depends on assorted src/common functions
as well as some encoding-related libpq functions, but we expect that
all programs using it will be happy with those dependencies.
Kyotaro Horiguchi, somewhat editorialized on by me
This avoids a good number of individually quite fast system calls in
scenarios with many quick queries. Besides the aesthetic benefit of
seeing fewer superfluous system calls with strace, it also improves
performance by ~2%, measured by pgbench -M prepared -c 96 -j 8 -S (scale
100).
Without having benchmarked it, this patch also adjusts the Windows code,
as that makes it easier to unify the unix/windows codepaths in a later
patch. There's little reason to diverge in behaviour between the
platforms.
Discussion: CA+TgmoYc1Zm+Szoc_Qbzi92z2c1vRHZmjhfPn5uC=w8bXv6Avg@mail.gmail.com
Reviewed-By: Robert Haas
This allows for easier testing of the different primitives; in
preparation for adding a new primitive.
Discussion: 20160114143931.GG10941@awork2.anarazel.de
Reviewed-By: Robert Haas
postgres_fdw can now send an UPDATE or DELETE statement directly to
the foreign server in simple cases, rather than sending a SELECT FOR
UPDATE statement and then updating or deleting rows one-by-one.
Etsuro Fujita, reviewed by Rushabh Lathia, Shigeru Hanada, Kyotaro
Horiguchi, Albe Laurenz, Thom Brown, and me.
Random .h files have no business including postgres-fe.h (or postgres.h).
If that wasn't the first #include done by the calling .c file, it's the
.c file that's broken. Noted while prepping Kyotaro Horiguchi's psql
lexer refactoring patch.
This means that, for example, "SELECT expensive_func(a) FROM bigtab
WHERE something" can compute expensive_func(a) in the workers rather
than the leader if it happens to be parallel-safe, which figures to be
a big win in some practical cases.
Currently, we can only do this if the entire target list is
parallel-safe. If we worked harder, we might be able to evaluate
parallel-safe targets in the worker and any parallel-restricted
targets in the leader, but that would be more complicated, and there
aren't that many parallel-restricted functions that people are likely
to use in queries anyway. I think. So just do the simple thing for
the moment.
Robert Haas, Amit Kapila, and Tom Lane
I'm committing these changes separately so that it's clear what is
Peter's original work versus what I changed. This is a followup to
commit 0011c0091e, and these changes
are all by me.
Introduce a new memory context which stores tuple data, and reset it
at the end of each merge pass; this helps avoid memory fragmentation
and, consequently, overallocation. Also, for the final merge pass,
eliminate memory context chunk header overhead entirely by allocating
all of the memory used for buffering tuples during the merge in a
single chunk. Since this modestly increases the number of tuples we
can store, grow the memtuples array a bit so that we're less likely to
run short of slots there.
Peter Geoghegan. Review and testing of patches in this series by
Jeff Janes, Greg Stark, Mithun Cy, and me.
In HEAD, fix incorrect field width for hours part of OF when tm_gmtoff is
negative. This was introduced by commit 2d87eedc1d as a result of
falsely applying a pattern that's correct when + signs are omitted, which
is not the case for OF.
In 9.4, fix missing abs() call that allowed a sign to be attached to the
minutes part of OF. This was fixed in 9.5 by 9b43d73b3f, but for
inscrutable reasons not back-patched.
In all three versions, ensure that the sign of tm_gmtoff is correctly
reported even when the GMT offset is less than 1 hour.
Add regression tests, which evidently we desperately need here.
Thomas Munro and Tom Lane, per report from David Fetter
- Allow use of non-ASCII characters as affix flags. Non-numeric affix
flags are now stored as strings instead of the numeric value of a
character.
- Allow use of 0 as an affix flag in numerically encoded affixes.
This adds support for the Arabic, Hungarian, Turkish and Brazilian
Portuguese languages.
Author: Artur Zakirov, with heavy editorialization by me
This didn't work because when we dropped and re-established a database
connection, we did not bother to reset session-specific state such as
the statements-are-prepared flags.
The st->prepared[] array certainly needs to be flushed, and I cleared a
couple of other fields as well that couldn't possibly retain meaningful
state for a new connection.
In passing, fix some bogus comments and strange field order choices.
Per report from Robins Tharakan.
Somebody had apparently once figured that casting to unsigned int would
produce the right output for negative inputs, but that would only be
true if 2^32 were a multiple of 7, which of course it ain't. We need
to use a signed division and then correct the sign of the remainder.
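The corrected pattern is roughly this (function name hypothetical; the
exact weekday numbering is beside the point):

int
day_of_week(int jd)             /* jd may be negative for BC dates */
{
    int         day = jd % 7;   /* C's % takes the dividend's sign */

    if (day < 0)
        day += 7;               /* fold back into 0..6 */
    return day;
}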
AFAICT, the only case where this would arise currently is when doing
ISO-week calculations for dates in 4714BC, where we'd compute a
negative Julian date representing 4714-01-04BC and then do some
arithmetic with it. Since we don't even really document support for
such dates, this is not of much consequence. But we may as well
get it right.
Per report from Vitaly Burovoy.
Tighten the semantics of boundary-case timestamptz so that we allow
timestamps >= '4714-11-24 00:00+00 BC' and < 'ENDYEAR-01-01 00:00+00 AD'
exactly, no more and no less, but it is allowed to enter timestamps
within that range using non-GMT timezone offsets (which could make the
nominal date 4714-11-23 BC or ENDYEAR-01-01 AD). This eliminates
dump/reload failure conditions for timestamps near the endpoints.
To do this, separate checking of the inputs for date2j() from the
final range check, and allow the Julian date code to handle a range
slightly wider than the nominal range of the datatypes.
Also add a bunch of checks to detect out-of-range dates and timestamps
that formerly could be returned by operations such as date-plus-integer.
All C-level functions that return date, timestamp, or timestamptz should
now be proof against returning a value that doesn't pass IS_VALID_DATE()
or IS_VALID_TIMESTAMP().
Vitaly Burovoy, reviewed by Anastasia Lubennikova, and substantially
whacked around by me
Vinayak Pokale provided a patch for a copy-and-paste error in a
comment. I noticed that I'd used the word "automatically" nearby where
I meant to talk about things being "atomic". Rahila Syed spotted a
misplaced counter update. Fix all that stuff.
INSERT ... ON CONFLICT's precheck may have to wait on the outcome of
another insertion, which may or may not itself be a speculative
insertion. This wait is not necessarily associated with an exclusion
constraint, but was always reported that way in log messages if the wait
happened to involve a tuple that had no speculative token.
Initially discovered through use of ON CONFLICT DO NOTHING, where
spurious references to exclusion constraints in log messages were more
likely.
Patch by Peter Geoghegan.
Reviewed by Julien Rouhaud.
Back-patch to 9.5 where INSERT ... ON CONFLICT was added.
There's a lot more that could be done here yet - in particular, this
reports only very coarse-grained information about the index vacuuming
phase - but even as it stands, the new pg_stat_progress_vacuum can
tell you quite a bit about what a long-running vacuum is actually
doing.
Amit Langote and Robert Haas, based on earlier work by Vinayak Pokale
and Rahila Syed.
Previously, we included <xlocale.h> only if necessary to get the definition
of type locale_t. According to notes in PGAC_TYPE_LOCALE_T, this is
important because on some versions of glibc that file supplies an
incompatible declaration of locale_t. (This info may be obsolete, because
on my RHEL6 box that seems to be the *only* definition of locale_t; but
there may still be glibcs in the wild for which it's a live concern.)
It turns out though that on FreeBSD and maybe other BSDen, you can get
locale_t from stdlib.h or locale.h but mbstowcs_l() and friends only from
<xlocale.h>. This was leaving us compiling calls to mbstowcs_l() and
friends with no visible prototype, which causes a warning and could
possibly cause actual trouble, since it's not declared to return int.
Hence, adjust the configure checks so that we'll include <xlocale.h>
either if it's necessary to get type locale_t or if it's necessary to
get a declaration of mbstowcs_l().
Report and patch by Aleksander Alekseev, somewhat whacked around by me.
Back-patch to all supported branches, since we have been using
mbstowcs_l() since 9.1.
This is basically like the just-added create_upper_paths_hook, but
control is funneled only to the FDW responsible for all the baserels
of the current query; so providing such a callback is much less likely
to add useless overhead than using the hook function is.
The documentation is a bit sketchy. We'll likely want to improve it,
and/or adjust the call conventions, when we get some experience with
actually using this callback. Hopefully somebody will find time to
experiment with it before 9.6 feature freeze.
We don't support any parallel write operations at present, so choosing
a parallel plan causes us to error out. Also, add a new regression
test that uses EXPLAIN ANALYZE SELECT INTO; if we'd had this previously,
force_parallel_mode testing would have caught this issue.
Mithun Cy and Robert Haas
In the initial revision of the upper-planner pathification work, the only
available way for an FDW or custom-scan provider to inject Paths
representing post-scan-join processing was to insert them during scan-level
GetForeignPaths or similar processing. While that's not impossible, it'd
require quite a lot of duplicative processing to look forward and see if
the extension would be capable of implementing the whole query. To improve
matters for custom-scan providers, provide a hook function at the point
where the core code is about to start filling in upperrel Paths. At this
point Paths are available for the whole scan/join tree, which should reduce
the amount of redundant effort considerably.
(An alternative design that was suggested was to provide a separate hook
for each post-scan-join processing step, but that seems messy and not
clearly more useful.)
Following our time-honored tradition, there's no documentation for this
hook outside the source code.
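Skeleton of how a provider might latch onto it (hook signature
paraphrased; add_custom_agg_paths is hypothetical):

static create_upper_paths_hook_type prev_hook = NULL;

static void
my_upper_paths(PlannerInfo *root, UpperRelationKind stage,
               RelOptInfo *input_rel, RelOptInfo *output_rel)
{
    /* "stage" identifies which upperrel (grouping, window, ordering,
     * final, ...) is being filled; scan/join Paths are complete here. */
    if (stage == UPPERREL_GROUP_AGG)
        add_custom_agg_paths(root, input_rel, output_rel);

    if (prev_hook)
        prev_hook(root, stage, input_rel, output_rel);
}

void
_PG_init(void)
{
    prev_hook = create_upper_paths_hook;
    create_upper_paths_hook = my_upper_paths;
}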
As-is, this hook is only meant for custom scan providers, which we can't
assume very much about. A followon patch will implement an FDW callback
to let FDWs do the same thing in a somewhat more structured fashion.
Although the default choice of rel->reltarget should typically be
sufficient for scan or join paths, it's not at all sufficient for the
purposes PathTargets were invented for; in particular not for
upper-relation Paths. So break API compatibility by adding a PathTarget
argument to create_foreignscan_path(). To ease updating of existing
code, accept a NULL value of the argument as selecting rel->reltarget.
In commit 19a541143a I did not make PathTarget a subtype of Node,
and embedded a RelOptInfo's reltarget directly into it rather than having
a separately-allocated Node. In hindsight that was misguided
micro-optimization, enabled by the fact that at that point we didn't have
any Paths with custom PathTargets. Now that PathTarget processing has
been fleshed out some more, it's easier to see that it's better to have
PathTarget as an independent Node type, even if it does cost us one more
palloc to create a RelOptInfo. So change it while we still can.
This commit just changes the representation, without doing anything more
interesting than that.
Perl's integers are pointer-sized, so can hold more than INT_MAX on LP64
platforms, and come in both signed (IV) and unsigned (UV). Floating
point values (NV) may also be larger than double.
Since Perl 5.19.4, array indices are SSize_t instead of I32, so allow up
to SSize_t_max on those versions. The limit is not imposed just by
av_extend's argument type, but by all the array-handling code, so remove
the speculative comment.
Dagfinn Ilmari Mannsåker
Commit 23a27b039d widened the rows-stored counters to uint64, but
that's academic unless we allow the tuple pointer array to exceed 1GB.
(It might be a good idea to provide some other limit on how much storage
a SPITupleTable can eat. On the other hand, there are plenty of other
ways to drive a backend into swap hell.)
Dagfinn Ilmari Mannsåker
The old code is bad for two reasons. First, it has an off-by-one
error. Second, it won't help if you aren't running with assertions
enabled. Per discussion, we want a check here in that case too.
Author: KaiGai Kohei, adjusted by me.
Reviewed-by: Petr Jelinek
Discussion: 56E0D547.1030101@2ndquadrant.com
Commit d88976cfa1 removed this code from ginFreeScanKeys():
- if (entry->list)
- pfree(entry->list);
evidently in the belief that that ItemPointer array is allocated in the
keyCtx and so would be reclaimed by the following MemoryContextReset.
Unfortunately, it isn't and it won't. It'd likely be a good idea for
that to become so, but as a simple and back-patchable fix in the
meantime, restore this code to ginFreeScanKeys().
Also, add a similar pfree to where startScanEntry() is about to zero out
entry->list. I am not sure if there are any code paths where this
change prevents a leak today, but it seems like cheap future-proofing.
In passing, make the initial allocation of so->entries[] use palloc
not palloc0. The code doesn't depend on unused entries being zero;
if it did, the array-enlargement code in ginFillScanEntry() would be
wrong. So using palloc0 initially can only serve to confuse readers
about what the invariant is.
Per report from Felipe de Jesús Molina Bravo, via Jaime Casanova in
<CAJGNTeMR1ndMU2Thpr8GPDUfiHTV7idELJRFusA5UXUGY1y-eA@mail.gmail.com>
This longstanding functionality evidently got lost in commit
3d6d1b5855. Noted while studying an OOM report from Jaime
Casanova. Backpatch to 9.5 where the bug was introduced.
Commit a2dabf0e1d had the bright idea that it could modify a "const"
global variable if it merely cast away const from a pointer. This does
not work on platforms where the compiler puts "const" variables into
read-only storage. Depressingly, we evidently have no such platforms in
our buildfarm ... an oversight I have now remedied. (The one platform
that is known to catch this is recent OS X with -fno-common.)
Per report from Chris Ruprecht. Back-patch to 9.5 where the bogus
code was introduced.
This patch widens SPI_processed, EState's es_processed field, PortalData's
portalPos field, FuncCallContext's call_cntr and max_calls fields,
ExecutorRun's count argument, PortalRunFetch's result, and the max number
of rows in a SPITupleTable to uint64, and deals with (I hope) all the
ensuing fallout. Some of these values were declared uint32 before, and
others "long".
I also removed PortalData's posOverflow field, since that logic seems
pretty useless given that portalPos is now always 64 bits.
The user-visible results are that command tags for SELECT etc will
correctly report tuple counts larger than 4G, as will plpgsql's
GET DIAGNOSTICS ... ROW_COUNT command. Queries processing more tuples
than that are still not exactly the norm, but they're becoming more
common.
Most values associated with FETCH/MOVE distances, such as PortalRun's count
argument and the count argument of most SPI functions that have one, remain
declared as "long". It's not clear whether it would be worth promoting
those to int64; but it would definitely be a large dollop of additional
API churn on top of this, and it would only help 32-bit platforms which
seem relatively less likely to see any benefit.
Andreas Scherbaum, reviewed by Christian Ullrich, additional hacking by me
Buildfarm members gaur and pademelon are old enough not to know about
MAP_FAILED, which is used in 428b1d6. Include portability/mem.h to fix,
as already done in a bunch of other places.
CitusDB is using these and doesn't wish to redesign their code right now.
I am not on board with this being a good idea, or a good precedent,
but I lack the energy to fight about it.
Commit a892234f83 added a second bit per
page to the visibility map, but pg_upgrade has been unaware of it up
until now. Therefore, a pg_upgrade from an earlier major release of
PostgreSQL to any commit preceding this one and following the one
mentioned above would result in invalid visibility map contents on the
new cluster, very possibly leading to data corruption. This plugs
that hole.
Masahiko Sawada, reviewed by Jeff Janes, Bruce Momjian, Simon Riggs,
Michael Paquier, Andres Freund, me, and others.
It is frequently useful for volatile, set-returning, or expensive functions
in a SELECT's targetlist to be postponed till after ORDER BY and LIMIT are
done. Otherwise, the functions might be executed for every row of the
table despite the presence of LIMIT, and/or be executed in an unexpected
order. For example, in
SELECT x, nextval('seq') FROM tab ORDER BY x LIMIT 10;
it's probably desirable that the nextval() values are ordered the same
as x, and that nextval() is not run more than 10 times.
In the past, Postgres was inconsistent in this area: you would get the
desirable behavior if the ordering were performed via an indexscan, but
not if it had to be done by an explicit sort step. Getting the desired
behavior reliably required contortions like
SELECT x, nextval('seq')
FROM (SELECT x FROM tab ORDER BY x) ss LIMIT 10;
This patch conditionally postpones evaluation of pure-output target
expressions (that is, those that are not used as DISTINCT, ORDER BY, or
GROUP BY columns) so that they effectively occur after sorting, even if an
explicit sort step is necessary. Volatile expressions and set-returning
expressions are always postponed, so as to provide consistent semantics.
Expensive expressions (costing more than 10 times typical operator cost,
which by default would include any user-defined function) are postponed
if there is a LIMIT or if there are expressions that must be postponed.
We could be more aggressive and postpone any nontrivial expression, but
there are costs associated with doing so: it requires an extra Result plan
node which adds some overhead, and postponement changes the volume of data
going through the sort step, perhaps for the worse. Since we tend not to
have very good estimates of the output width of nontrivial expressions,
it's hard to have much confidence in our ability to predict whether
postponement would increase or decrease the cost of the sort; therefore
this patch doesn't attempt to make decisions conditionally on that.
Between these factors and a general desire not to change query behavior
when there's not a demonstrable benefit, it seems best to be conservative
about applying postponement. We might tweak the decision rules in the
future, though.
Konstantin Knizhnik, heavily rewritten by me
Some dictionaries have duplicated base words with different affix sets;
we merge those sets into one. But previously, merging sets of affixes
was actually a concatenation of strings, which is wrong for the numeric
representation of affixes, because that representation uses a comma to
separate affixes.
Author: Artur Zakirov
Adds several tsvector editing functions: convert a tsvector to/from a
text array, set the weight for given lexemes, delete lexeme(s), unnest,
and filter lexemes with given weights.
Author: Stas Kelvich with some editorization by me
Reviewers: Tomas Vondra, Teodor Sigaev
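For illustration, a minimal sketch of the new functions in use. The
function names here (tsvector_to_array, array_to_tsvector, setweight,
ts_delete, ts_filter, unnest) follow the feature as released and are
not spelled out above, so treat them as an assumption:
SELECT tsvector_to_array('fat:2,4 cat:3 rat:5A'::tsvector);
SELECT array_to_tsvector(ARRAY['fat', 'cat', 'rat']);
-- set weight 'A' on the lexeme 'cat' only
SELECT setweight('fat:2,4 cat:3'::tsvector, 'A', '{cat}');
-- delete a lexeme, filter by weight, unnest into rows
SELECT ts_delete('fat:2,4 cat:3'::tsvector, 'fat');
SELECT ts_filter('fat:2,4 cat:3 rat:5A'::tsvector, '{a}');
SELECT * FROM unnest('fat:2,4 cat:3'::tsvector);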
Teach make_group_input_target() and make_window_input_target() to work
entirely with the PathTarget representation of tlists, rather than
constructing a tlist and immediately deconstructing it into PathTarget
format. In itself this only saves a few palloc's; the bigger picture is
that it opens the door for sharing cost_qual_eval work across all of
planner.c's constructions of PathTargets. I'll come back to that later.
In support of this, flesh out tlist.c's infrastructure for PathTargets
a bit more.
Much cruft had accumulated over time, with a large number of parameters
passed down through deeply nested function calls. With this
refactoring, introduce a StreamCtl structure that holds the parameters,
and pass around a pointer to this structure instead. This makes it much
easier to add or remove fields that are needed deeper down in the
implementation without having to modify every function header in the
file.
Patch by me after much nagging from Andres
Reviewed by Craig Ringer and Daniel Gustafsson
emit_log_hook could only see the translated text, making it harder to
identify which message was being sent. Pass the original text as well,
to allow the exact message to be identified whichever language is used
for logging.
Discussion: 20160216.184755.59721141.horiguchi.kyotaro@lab.ntt.co.jp
Author: Kyotaro Horiguchi
The old code is wrong, because it returns a pointer to an automatic
variable. And it's also more clever than we really need to be,
considering that the case it's worrying about should never happen.
Up to now checkpoints were written in the order they're in the
BufferDescriptors. That's nearly random in a lot of cases, which
performs badly on rotating media, but even on SSDs it causes slowdowns.
To avoid that, sort a checkpoint's buffers before writing them out. We
currently sort by tablespace, relfilenode, fork and block number.
One of the major reasons that previously wasn't done was fear of
imbalance between tablespaces. To address that, balance writes between
tablespaces.
The other prime concern was that the relatively large allocation to sort
the buffers in might fail, preventing checkpoints from happening. Thus
pre-allocate the required memory in shared memory, at server startup.
This particularly makes it more efficient to have checkpoint flushing
enabled, because that'll often result in a lot of writes that can be
coalesced into one flush.
Discussion: alpine.DEB.2.10.1506011320000.28433@sto
Author: Fabien Coelho and Andres Freund
Currently writes to the main data files of postgres all go through the
OS page cache. This means that some operating systems can end up
collecting a large number of dirty buffers in their respective page
caches. When these dirty buffers are flushed to storage rapidly, be it
because of fsync(), timeouts, or dirty ratios, latency for other reads
and writes can increase massively. This is the primary reason for
regular massive stalls observed in real world scenarios and artificial
benchmarks; on rotating disks stalls on the order of hundreds of seconds
have been observed.
On linux it is possible to control this by significantly reducing the
global dirty limits, mitigating the above problem. But global
configuration is rather problematic because it'll affect other
applications; also PostgreSQL itself doesn't generally want this
behavior, e.g. for temporary files it's undesirable.
Several operating systems allow some control over the kernel page
cache. Linux has sync_file_range(2), several posix systems have msync(2)
and posix_fadvise(2). sync_file_range(2) is preferable because it
requires no special setup, whereas msync() requires the to-be-flushed
range to be mmap'ed. For the purpose of flushing dirty data
posix_fadvise(2) is the worst alternative, as flushing dirty data is
just a side-effect of POSIX_FADV_DONTNEED, which also removes the pages
from the page cache. Thus the feature is enabled by default only on
linux, but can be enabled on all systems that have any of the above
APIs.
While desirable and likely possible, an implementation for windows is
not included in this patch.
With the infrastructure added, writes made via checkpointer, bgwriter
and normal user backends can be flushed after a configurable number of
writes. Each of these sources of writes is controlled by a separate
GUC, checkpointer_flush_after, bgwriter_flush_after and
backend_flush_after respectively; they're separate because the write
count after which flushing is beneficial differs for each, and because
the performance considerations of controlled flushing for each of these
are different.
A later patch will add checkpoint sorting - after that, flushes from
the checkpoint will almost always be desirable. Bgwriter flushes are
most of the time going to be random, which is slow on lots of storage
hardware.
Flushing in backends works well if the storage and bgwriter can keep up,
but if not it can have negative consequences. This patch is likely to
have negative performance consequences without checkpoint sorting, but
unfortunately so has sorting without flush control.
Discussion: alpine.DEB.2.10.1506011320000.28433@sto
Author: Fabien Coelho and Andres Freund
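As a minimal sketch, the knobs might be set like this, assuming the GUC
names given above; the values are illustrative only:
ALTER SYSTEM SET bgwriter_flush_after = '512kB';
ALTER SYSTEM SET backend_flush_after = '256kB';
SELECT pg_reload_conf();  -- apply without a restart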
All along, this function should have treated WindowFuncs in a manner
similar to Aggrefs, ie with an option whether or not to recurse into them.
By not considering the case, it was always recursing, which is OK for most
callers (although I suspect that the case in prepare_sort_from_pathkeys
might represent a bug). But now we need return-without-recursing behavior
as well. There are also more than a few callers that should never see a
WindowFunc, and now we'll get some error checking on that.
Commit a892234f83 gave us enough
infrastructure to avoid vacuuming pages where every tuple on the
page is already frozen. So, replace the notion of a scan_all or
whole-table vacuum with the less onerous notion of an "aggressive"
vacuum, which will scan pages that are all-visible, but still skip those
that are all-frozen.
This should greatly reduce the cost of anti-wraparound vacuuming
on large clusters where the majority of data is never touched
between one cycle and the next, because we'll no longer have to
read all of those pages only to find out that we don't need to
do anything with them.
Patch by me, reviewed by Masahiko Sawada.
In commit 1d97c19a0f and later c1d9579dd8, we extended
pull_var_clause's API by adding enum-type arguments. That's sort of a pain
to maintain, though, because it means every time we add a new behavior we
must touch every last one of the call sites, even if there's a reasonable
default behavior that most of them could use. Let's switch over to using a
bitmask of flags, instead; that seems more maintainable and might save a
nanosecond or two as well. This commit changes no behavior in itself,
though I'm going to follow it up with one that does add a new behavior.
In passing, remove flatten_tlist(), which has not been used since 9.1
and would otherwise need the same API changes.
Removing these enums means that optimizer/tlist.h no longer needs to
depend on optimizer/var.h. Changing that caused a number of C files to
need addition of #include "optimizer/var.h" (probably we can thank old
runs of pgrminclude for that); but on balance it seems like a good change
anyway.
An earlier version committed in 9.0 caused spurious waits in some cases.
New infrastructure for lock waits in 9.3 is used to correct and improve
this.
Jeff Janes based upon a proposal by Simon Riggs, who also reviewed
Additional review comments from Amit Kapila
When a process is waiting for a heavyweight lock, we will now indicate
the type of heavyweight lock for which it is waiting. Also, you can
now see when a process is waiting for a lightweight lock - in which
case we will indicate the individual lock name or the tranche, as
appropriate - or for a buffer pin.
Amit Kapila, Ildus Kurbangaliev, reviewed by me. Lots of helpful
discussion and suggestions by many others, including Alexander
Korotkov and Vladimir Borodin.
The Visual Studio 2013 CRT generates invalid code when it makes a 64-bit
build that is later used on a CPU that supports AVX2 instructions using a
version of Windows before 7SP1/2008R2SP1.
Detect this combination, and in those cases turn off the generation of
FMA3, per recommendation from the Visual Studio team.
The bug is actually in the CRT shipping with Visual Studio 2013, but
Microsoft have stated they're only fixing it in newer major versions.
The fix is therefore conditioned specifically on being built with this
version of Visual Studio, and not previous or later versions.
Author: Christian Ullrich
Previously the 2PC header was fixed at 200 bytes, which in most cases
wasted WAL space for workloads using 2PC heavily.
Pavan Deolasee, reviewed by Petr Jelinek
There's no point in pstrdup'ing the result of TextDatumGetCString,
since that's necessarily already a freshly-palloc'd C string.
These particular calls are unlikely to be of any consequence
performance-wise, but still they're a bad precedent that can confuse
future patch authors.
Noted by Chapman Flack.
Renaming a file using rename(2) is not guaranteed to be durable in the
face of crashes. Use the previously added
durable_rename()/durable_link_or_rename()
in various places where we previously just renamed files.
Most of the changed call sites are arguably not critical, but it seems
better to err on the side of too much durability. The most prominent
known case where the previously missing fsyncs could cause data loss is
crashes at the end of a checkpoint. After the actual checkpoint has been
performed, old WAL files are recycled. When they're filled, their
contents are fdatasynced, but we did not fsync the containing
directory. An OS/hardware crash in an unfortunate moment could then end
up leaving that file with its old name, but new content; WAL replay
would thus not replay it.
Reported-By: Tomas Vondra
Author: Michael Paquier, Tomas Vondra, Andres Freund
Discussion: 56583BDD.9060302@2ndquadrant.com
Backpatch: All supported branches
Renaming a file using rename(2) is not guaranteed to be durable in the
face of crashes, especially on filesystems like xfs and ext4 when
mounted
with data=writeback. To be certain that a rename() atomically replaces
the previous file contents in the face of crashes and different
filesystems, one has to fsync the old filename, rename the file, fsync
the new filename, fsync the containing directory. This sequence is not
generally adhered to currently; which exposes us to data loss risks. To
avoid having to repeat this arduous sequence, introduce
durable_rename(), which wraps all that.
Also add durable_link_or_rename(). Several places use link() (with a
fallback to rename()) to rename a file, trying to avoid replacing the
target file out of paranoia. Some of those rename sequences need to be
durable as well. There seems little reason to extend several copies of
the same logic, so centralize the link() callers.
This commit does not yet make use of the new functions; they're used in
a followup commit.
Author: Michael Paquier, Andres Freund
Discussion: 56583BDD.9060302@2ndquadrant.com
Backpatch: All supported branches
These simple methods rely on RecursiveCopy to create a filesystem-level
backup of a server. They aren't used anywhere yet, but will be
useful for future tests.
Author: Craig Ringer
Reviewed-By: Michael Paquier, Salvador Fandino, Álvaro Herrera
Commitfest-URL: https://commitfest.postgresql.org/9/569/
This allows skipping copying certain files and subdirectories in tests.
This is useful in some circumstances such as copying a data directory;
future tests want this feature.
Also POD-ify the module.
Authors: Craig Ringer, Pallavi Sontakke
Reviewed-By: Álvaro Herrera
An index search using a row comparison such as ROW(a, b) > ROW('x', 'y')
would stop upon reaching a NULL entry in the "b" column, ignoring the
fact that there might be non-NULL "b" values associated with later values
of "a". This happens because _bt_mark_scankey_required() marks the
subsidiary scankey for "b" as required, which is just wrong: it's for
a column after the one with the first inequality key (namely "a"), and
thus can't be considered a required match.
This bit of brain fade dates back to the very beginnings of our support
for indexed ROW() comparisons, in 2006. Kind of astonishing that no one
came across it before Glen Takahashi, in bug #14010.
Back-patch to all supported versions.
Note: the given test case doesn't actually fail in unpatched 9.1, evidently
because the fix for bug #6278 (i.e., stopping at nulls in either scan
direction) is required to make it fail. I'm sure I could devise a case
that fails in 9.1 as well, perhaps with something involving making a cursor
back up; but it doesn't seem worth the trouble.
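A hypothetical sketch of the affected query shape; the table and index
names are invented for illustration:
CREATE TABLE rowcmp (a text, b text);
CREATE INDEX rowcmp_ab_idx ON rowcmp (a, b);
-- before the fix, a NULL "b" entry could stop the scan early, hiding
-- rows with larger "a" values that should have matched
SELECT * FROM rowcmp WHERE ROW(a, b) > ROW('x', 'y');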
At low rates, this can lead to pgbench taking significantly longer to
terminate than the user might expect. Repair.
Fabien Coelho, reviewed by Aleksander Alekseev, Álvaro Herrera, and me.
Using this facility, any utility command can report the target relation
upon which it is operating, if there is one, and up to 10 64-bit
counters; the intent of this is that users should be able to figure out
what a utility command is doing without having to resort to ugly hacks
like attaching strace to a backend.
As a demonstration, this adds very crude reporting to lazy vacuum; we
just report the target relation and nothing else. A forthcoming patch
will make VACUUM report a bunch of additional data that will make this
much more interesting. But this gets the basic framework in place.
Vinayak Pokale, Rahila Syed, Amit Langote, Robert Haas, reviewed by
Kyotaro Horiguchi, Jim Nasby, Thom Brown, Masahiko Sawada, Fujii Masao,
and Masanori Oyama.
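As a hedged illustration, the counters might be read back with
something like the following; the function name and result shape are an
assumption, not spelled out above:
-- one row per backend running the named command, with the target
-- relation and the counters described above
SELECT * FROM pg_stat_get_progress_info('VACUUM');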
This function is written as though Gather doesn't project; but it does.
Even if it did not project, though, we must use build_path_tlist to ensure
that the output columns receive correct sortgroupref labeling.
Per report from Amit Kapila.
Refactor so that the internal APIs in planner.c deal in PathTargets not
targetlists, and establish a more regular structure for deriving the
targets needed for successive steps.
There is more that could be done here; calculating the eval costs of each
successive target independently is both inefficient and wrong in detail,
since we won't actually recompute values available from the input node's
tlist. But it's no worse than what happened before the pathification
rewrite. In any case this seems like a good starting point for considering
how to handle Konstantin Knizhnik's function-evaluation-postponement patch.
Python's allocator does some low-level tricks for efficiency;
unfortunately they trigger valgrind errors. Those tricks can be
disabled, making instrumentation easier; but few people testing postgres
will have
such a build of python. So add broad suppressions of the resulting
errors.
See also https://svn.python.org/projects/python/trunk/Misc/README.valgrind
This possibly will suppress valid errors, but without it it's basically
impossible to use valgrind with plpython code.
Author: Andres Freund
Backpatch: 9.4, where we started to maintain valgrind suppressions
Instead of having planner.c compute a groupColIdx array and store it in
GroupingSetsPaths, make create_groupingsets_plan() find the grouping
columns by searching in the child plan node's tlist. Although that's
probably a bit slower for create_groupingsets_plan(), it's more like
the way every other plan node type does this, and it provides positive
confirmation that we know which child output columns we're supposed to be
grouping on. (Indeed, looking at this now, I'm not at all sure that it
wasn't broken before, because create_groupingsets_plan() isn't demanding
an exact tlist match from its child node.) Also, this allows substantial
simplification in planner.c, because it no longer needs to compute the
groupColIdx array at all; no other cases were using it.
I'd intended to put off this refactoring until later (like 9.7), but
in view of the likely bug fix and the need to rationalize planner.c's
tlist handling so we can do something sane with Konstantin Knizhnik's
function-evaluation-postponement patch, I think it can't wait.
Struct QL_HELP used to be defined as static in the sql_help.h header
file, which is included in sql_help.c and help.c, thus creating two
separate instances of the struct. This causes a warning from GCC 6,
because the struct is not used in sql_help.c.
Instead, declare the struct as extern in the header file and define it
in sql_help.c. This also allows making a bunch of functions static
because they are no longer needed outside of sql_help.c.
Reviewed-by: Thomas Munro <thomas.munro@enterprisedb.com>
I passed the wrong "root" struct to create_pathtarget in build_minmax_path.
Since the subroot is a clone of the outer root, this would not cause any
serious problems, but it would waste some cycles because
set_pathtarget_cost_width would not have access to Var width estimates
set up while running query_planner on the subroot.
plperl_ref_from_pg_array() didn't consider the case that postgres
arrays can have 0 dimensions (when they're empty) and accessed the first
dimension without a check. Fix that by special-casing the empty-array
case.
Author: Alex Hunsaker
Reported-By: Andres Freund / valgrind / buildfarm animal skink
Discussion: 20160308063240.usnzg6bsbjrne667@alap3.anarazel.de
Backpatch: 9.1-
This patch removes some redundant cost calculations that I left for later
cleanup in commit 3fc6e2d7f5. There's now a uniform policy that the
make_foo() convenience functions don't do any cost calculations. Most of
their callers copy costs from the source Path node, and for those that
don't, the calculation in the make_foo() function wasn't necessarily right
anyhow. (make_result() was particularly a mess, as it was serving multiple
callers using cost calcs designed for only the first one or two that had
ever existed.) Aside from saving a few cycles, this ensures that what
EXPLAIN prints matches the costs we used for planning purposes. It does
not change any planner decisions, since the decisions are already made.
Without this fix, it inevitably bombs out with "ERROR: failed to
initialize transaction_read_only to 0". Repair.
Ashutosh Sharma; comments adjusted by me.
For example, if you want to perform an ioctl() on a file descriptor
opened through the fd.c routines, there's no way to do that without
being able to get at the underlying fd.
KaiGai Kohei
Commit a892234f83 added a second bit per
page to the visibility map, which still seems like a good idea, but it
also added a second page-level bit alongside PD_ALL_VISIBLE to track
whether the visibility map bit was set. That no longer seems like a
clever plan, because we don't really need that bit for anything. We
always clear both bits when the page is modified anyway.
Patch by me, reviewed by Kyotaro Horiguchi and Masahiko Sawada.
We've not found a use for this so far, and the current need, which
is to convert the visibility map to a new format, does not suit the
existing design anyway. So just rip it out.
Author: Masahiko Sawada, slightly revised by me.
Discussion: 20160215211313.GB31273@momjian.us
As originally committed, get_controlfile() used a non-standard approach
to error logging. Make it consistent with the majority of error logging
done in src/common.
Applies to master only.
Coverity and inspection for the issue addressed in fd45d16f found some
questionable code.
Specifically coverity noticed that the wrong length was added in
ReorderBufferSerializeChange() - without immediate negative consequences
as the variable isn't used afterwards. During code-review and testing I
noticed that a bit of space was wasted when allocating tuple bufs in
several places. Thirdly, the debug memset()s in
ReorderBufferGetTupleBuf() reduce the error checking valgrind can do.
Backpatch: 9.4, like c8f621c43.
I've been saying we needed to do this for more than five years, and here it
finally is. This patch removes the ever-growing tangle of spaghetti logic
that grouping_planner() used to use to try to identify the best plan for
post-scan/join query steps. Now, there is (nearly) independent
consideration of each execution step, and entirely separate construction of
Paths to represent each of the possible ways to do that step. We choose
the best Path or set of Paths using the same add_path() logic that's been
used inside query_planner() for years.
In addition, this patch removes the old restriction that subquery_planner()
could return only a single Plan. It now returns a RelOptInfo containing a
set of Paths, just as query_planner() does, and the parent query level can
use each of those Paths as the basis of a SubqueryScanPath at its level.
This allows finding some optimizations that we missed before, wherein a
subquery was capable of returning presorted data and thereby avoiding a
sort in the parent level, making the overall cost cheaper even though
delivering sorted output was not the cheapest plan for the subquery in
isolation. (A couple of regression test outputs change in consequence of
that. However, there is very little change in visible planner behavior
overall, because the point of this patch is not to get immediate planning
benefits but to create the infrastructure for future improvements.)
There is a great deal left to do here. This patch unblocks a lot of
planner work that was basically impractical in the old code structure,
such as allowing FDWs to implement remote aggregation, or rewriting
plan_set_operations() to allow consideration of multiple implementation
orders for set operations. (The latter will likely require a full
rewrite of plan_set_operations(); what I've done here is only to fix it
to return Paths not Plans.) I have also left unfinished some localized
refactoring in createplan.c and planner.c, because it was not necessary
to get this patch to a working state.
Thanks to Robert Haas, David Rowley, and Amit Kapila for review.
A thinko in a96761391 caused pg_ctl to get it exactly backwards when
deciding whether to report problems to the Windows eventlog or to stderr.
Per bug #14001 from Manuel Mathar, who also identified the fix.
Like the previous patch, back-patch to all supported branches.
As written, this would accept e.g. 123e9 as a function name. Aside
from being mildly astonishing, that would come back to haunt us if
we ever try to add float constants to the expression syntax. Insist
that function names start with letters (or at least non-digits).
In passing reset yyline as well as yycol when starting a new expression.
This variable is useless since it's used nowhere, but if we're going
to have it we should have it act sanely.
In c8f621c43 I forgot to account for MAXALIGN when allocating a new
tuplebuf in ReorderBufferGetTupleBuf(). That happens to currently not
cause active problems on a number of platforms because the affected
pointer is already aligned, but others, like ppc and hppa, trigger this
in the regression test, due to a debug memset clearing memory.
Fix that.
Backpatch: 9.4, like the previous commit.
There were two places in spell.c that supposed that they could search
for a location in a string produced by lowerstr() and then transpose
the offset into the original string. But this fails completely if
lowerstr() transforms any characters into characters of different byte
length, as can happen in Turkish UTF8 for instance.
We'd added some comments about this coding in commit 51e78ab4ff,
but failed to realize that it was not merely confusing but wrong.
Coverity complained about this code years ago, but in such an opaque
fashion that nobody understood what it was on about. I'm not entirely
sure that this issue *is* what it's on about, actually, but perhaps
this patch will shut it up -- and in any case the problem is clear.
Back-patch to all supported branches.
isdigit(), isspace(), etc are likely to give surprising results if passed a
signed char. We should always cast the argument to unsigned char to avoid
that. Error in commit d78a7d9c7f, found by buildfarm member gaur.
When decoding the old version of an UPDATE or DELETE change, and if that
tuple was bigger than MaxHeapTupleSize, we either Assert'ed out, or
failed in more subtle ways in non-assert builds. Normally individual
tuples aren't bigger than MaxHeapTupleSize, with big datums toasted.
But that's not the case for the old version of a tuple for logical
decoding; the replica identity is logged as one piece. With the default
replica identity, btree limits that to small tuples, but that's not the
case for FULL.
Change the tuple buffer infrastructure to separate allocate over-large
tuples, instead of always going through the slab cache.
This unfortunately requires changing the ReorderBufferTupleBuf
definition, since we need to store the allocated size someplace. To
avoid requiring output plugins to recompile, don't store
HeapTupleHeaderData directly after HeapTupleData, but point to it via
t_data; that leaves room for the allocated size. As there's no reason
for an output plugin
to look at ReorderBufferTupleBuf->t_data.header, remove the field. It
was just a minor convenience having it directly accessible.
Reported-By: Adam Dratwiński
Discussion: CAKg6ypLd7773AOX4DiOGRwQk1TVOQKhNwjYiVjJnpq8Wo+i62Q@mail.gmail.com
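A hedged sketch of the conditions under which the old code could fail;
the table and column names are invented for illustration:
-- with REPLICA IDENTITY FULL the entire old row is logged as one
-- piece, so updating a sufficiently wide row produced an over-large
-- old tuple in the decoded stream
ALTER TABLE wide_rows REPLICA IDENTITY FULL;
UPDATE wide_rows SET payload = 'new value' WHERE id = 1;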
Somehow I managed to flip the order of restoring old & new tuples when
de-spooling a change in a large transaction from disk. This happens to
only take effect when a change is spooled to disk which has old/new
versions of the tuple. That is only the case for UPDATEs where the
primary key changed or where the replica identity is changed to FULL.
The tests didn't catch this because either spooled updates, or updates
that changed primary keys, were tested; not both at the same time.
Found while adding tests for the following commit.
Backpatch: 9.4, where logical decoding was added
Logical decoding's reorderbuffer keeps transactions in an LSN ordered
list for efficiency. To make that efficiently possible, upper-level
xids are forced to be logged before nested subtransaction xids. That
only works though if these records are all looked at: Unfortunately we
didn't do so for e.g. row level locks, which are otherwise uninteresting
for logical decoding.
This could lead to errors like:
"ERROR: subxact logged without previous toplevel record".
It's not sufficient to just look at row locking records, the xid could
appear first due to a lot of other types of records (which will trigger
the transaction to be marked logged with MarkCurrentTransactionIdLoggedIfAny).
So invent infrastructure to tell reorderbuffer about xids seen, when
they'd otherwise not pass through reorderbuffer.c.
Reported-By: Jarred Ward
Bug: #13844
Discussion: 20160105033249.1087.66040@wrigleys.postgresql.org
Backpatch: 9.4, where logical decoding was added
Add four new SQL accessible functions: pg_control_system(),
pg_control_checkpoint(), pg_control_recovery(), and pg_control_init()
which expose a subset of the control file data.
Along the way move the code to read and validate the control file to
src/common, where it can be shared by the new backend functions
and the original pg_controldata frontend program.
Patch by me, significant input, testing, and review by Michael Paquier.
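For example, the new functions can be queried directly:
SELECT * FROM pg_control_system();
SELECT * FROM pg_control_checkpoint();
SELECT * FROM pg_control_recovery();
SELECT * FROM pg_control_init();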
Previously recovery_min_apply_delay was applied even before recovery
had reached consistency. This could cause us to wait a long time
unexpectedly for read-only connections to be allowed. It's problematic
because the standby was useless during that wait time.
This patch changes recovery_min_apply_delay so that it's applied once
the database has reached the consistent state. That is, even if the delay
is set, the standby tries to replay WAL records as fast as possible until
it has reached consistency.
Author: Michael Paquier
Reviewed-By: Julien Rouhaud
Reported-By: Greg Clough
Backpatch: 9.4, where recovery_min_apply_delay was added
Bug: #13770
Discussion: http://www.postgresql.org/message-id/20151111155006.2644.84564@wrigleys.postgresql.org
Historically, the wait_for_stats() function in this test has simply checked
for a report of an indexscan on tenk2, corresponding to the last command
issued before we expect stats updates to appear. However, with parallel
query that indexscan could be done by a parallel worker that will emit
its stats counters to the collector before the session's main backend does
(a full second before, in fact, thanks to the "pg_sleep(1.0)" added by
commit 957d08c81f). That leaves a sizable window in which an
autovacuum-triggered write of the stats files would present a state in
which the indexscan on tenk2 appears to have been done, but none of the
write updates performed by the test have been. This is evidently the
explanation for intermittent failures seen by me and on buildfarm member
mandrill.
To fix, we should check separately for both the tenk2 seqscan and indexscan
counts, since those might be reported by different processes that could be
delayed arbitrarily on an overloaded test machine. And we need to check
for at least one update-related count. If we ever allow parallel workers
to do writes, this will get even more complicated ... but in view of all
the other hard problems that will entail, I don't feel a need to solve this
one today.
Per research by Rahila Syed and myself; part of this patch is Rahila's.
A simple SELECT is handled by PortalRunSelect, not ProcessQuery. Also,
the previous indentation was unclear: change it so that a deeper level
of indentation indicates that the outer function calls the inner one.
Stas Kelvich
Originally, we didn't have nworkers_launched, so code that used parallel
contexts had to be prepared for the possibility that not all of the
workers requested actually got launched. But now we can count on knowing
the number of workers that were successfully launched, which can shave
off a few cycles and simplify some code slightly.
Amit Kapila, reviewed by Haribabu Kommi, per a suggestion from Peter
Geoghegan.
Now it's possible to load recent versions of Hunspell dictionaries for
several languages. To handle these dictionaries, the patch adds support
for:
* FLAG long - sets the double extended ASCII character flag type
* FLAG num - sets the decimal number flag type (from 1 to 65535)
* AF parameter - alias for flag's set
Also it moves test dictionaries into separate directory.
Author: Artur Zakirov with editorization by me
The existing code confuses the byte length of the string (which is
relevant when passing it to pg_strncasecmp) with the character length
of the string (which is relevant when it is used with the SQL substring
function). Separate those two concepts.
Report and patch by Kyotaro Horiguchi, reviewed by Thomas Munro and
reviewed and further revised by me.
This makes the flag more visible for testers using the default file as a
template, increasing the likelihood that the test suite will be run.
Also have the flag be displayed in the fake "configure" output, if set.
This patch is two new lines only, but perltidy decides to shift things
around which makes it appear a bit bigger.
Author: Michaël Paquier
Reviewed-by: Craig Ringer
Discussion: https://www.postgresql.org/message-id/CAB7nPqRet6UAP2APhZAZw%3DVhJ6w-Q-gGLdZkrOqFgd2vc9-ZDw%40mail.gmail.com
This makes the psql() method much more capable: it captures both stdout
and stderr; it now returns the psql exit code rather than stdout; a
timeout can now be specified, as can ON_ERROR_STOP behavior; it gained a
new "on_error_die" (defaulting to off) parameter to raise an exception
if there's any problem. Finally, additional parameters to psql can be
passed if there's need for further tweaking.
For convenience, a new safe_psql() method retains much of the old
behavior of psql(), except that it turns on_error_die on, so that
problems like syntax errors in SQL commands can be detected more easily.
Many existing TAP test files now use safe_psql, which is what is really
wanted. A couple of ->psql() calls are now added in the commit_ts
tests, which verify that the right thing is happening on certain errors.
Some ->command_fails() calls in recovery tests that were verifying that
psql failed also became ->psql() calls now.
Author: Craig Ringer. Some tweaks by Álvaro Herrera
Reviewed-By: Michaël Paquier
Also, mention in README that Perl files should be perltidy'ed. This
isn't really the best place (since we have Perl files elsewhere in the
tree) and this is already in pgindent's README, but this subdir is
likely to get hacked a whole lot more than the other Perl files, so it
seems okay to spend two lines on this.
Author: Craig Ringer
One test was relying on method remove_tree that isn't implemented in the
oldest Perl we support; fix it by using the older rmtree instead.
Another test had a typo in a SQL command, which wasn't noticed because
the PostgresNode->psql() method doesn't check that queries return
correctly. That's undesirable and will also be fixed later on, but for
now let's make the test actually work.
Author: Craig Ringer
606c0123d6 attempted to reduce the cost of index scans using > and <
strategies, but got that completely wrong in a few complex cases.
Revert the whole patch until we find a safe optimization.
When adding replication origins in 5aa235042, I somehow managed to set
the timestamp of decoded transactions to InvalidXLogRecptr when decoding
one made without a replication origin. Fix that, and the wrong type of
the new commit_time variable.
This didn't trigger a regression test failure because we explicitly
don't show commit timestamps in the regression tests, as they obviously
are variable. Add a test that checks that a decoded commit's timestamp
is within minutes of NOW() from before the commit.
Reported-By: Weiping Qu
Diagnosed-By: Artur Zakirov
Discussion: 56D4197E.9050706@informatik.uni-kl.de,
56D42918.1010108@postgrespro.ru
Backpatch: 9.5, where 5aa235042 originates.
A thinko concerning nesting depth caused json_to_record() to produce bogus
output if a field of its input object contained a sub-object with a field
name matching one of the requested output column names. Per bug #13996
from Johann Visagie.
I added a regression test case based on his example, plus parallel tests
for json_to_recordset, jsonb_to_record, jsonb_to_recordset. The latter
three do not exhibit the same bug (which suggests that we may be missing
some opportunities to share code...) but testing seems like a good idea
in any case.
Back-patch to 9.4 where these functions were introduced.
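A sketch of the trouble case, modeled on the bug description; the
values are invented:
-- the sub-object "b" contains a field named "a", matching a requested
-- output column; before the fix this could produce bogus output
SELECT * FROM json_to_record('{"a": 1, "b": {"a": 2}}')
         AS x(a int, b json);
-- expected: a = 1, b = {"a": 2}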
Commits 9ff60273e3 and dbe2328959 adjusted the declarations
of some core functions referenced by contrib/tsearch2's install script,
forgetting that in a pg_upgrade situation, we'll be trying to restore
operator class definitions that reference the old signatures. We've
hit this problem before; solve it in the same way as before, namely by
installing stub functions that have the expected signature and just
invoke the correct function. Per report from Jeff Janes.
(Someday we ought to stop supporting contrib/tsearch2, but I'm not
sure today is that day.)
This makes it easier to relate the temporary data dirs to each node in
a test script.
Author: Kyotaro Horiguchi
Reviewed-By: Craig Ringer, Alvaro Herrera
PL/Tcl appears to contain logic to convert strings between the database
encoding and UTF8, which is the only encoding modern Tcl will deal with.
However, that code has been disabled since commit 034895125d, which
made it "#if defined(UNICODE_CONVERSION)" and neglected to provide any way
for that symbol to become defined. That might have been all right back
in 2001, but these days we take a dim view of allowing strings with
incorrect encoding into the database.
Remove the conditional compilation, fix warnings about signed/unsigned char
conversions, clean up assorted places that didn't bother with conversions.
(Notably, there were lots of assumptions that database table and field
names didn't need conversion...)
Add a regression test based on plpython_unicode. It's not terribly
thorough, but better than no test at all.
As of commit 2878220682, PL/Tcl will not
compile against pre-8.0 Tcl, whereas it used to work (more or less anyway)
with quite prehistoric versions. As long as we're moving these goalposts,
let's reinstall them at someplace that has some thought behind it. This
commit sets the minimum allowed Tcl version at 8.4, and rips out some bits
of compatibility cruft that are in consequence no longer needed. Reasons
for requiring 8.4 include:
* 8.4 was released in 2002; there seems little reason to believe that
anyone would want to use older versions with Postgres 9.6+.
* We have no buildfarm members testing anything older than 8.4, and
thus no way to know if it's broken.
* We need at least 8.1 to allow enforcement of database encoding
security (8.1 standardized Tcl on using UTF8 internally, before that
it was pretty unpredictable).
* Some versions between 8.1 and 8.4 allowed the backend to become
multithreaded, which is disastrous. We need at least 8.4 to be able
to disable the Tcl notifier subsystem to prevent that.
A small side benefit is that we can make the code more readable by
doing s/CONST84/const/g.
The original implementation of Tcl was all strings, but performance was
improved significantly by introducing typed "objects" (integers, lists,
code, etc) in Tcl 8.0, released in 1997. It's past time we made use of
that.
This patch also modernizes some of the error-reporting code, which may
cause small changes in the spelling of complaints about bad calls to
PL/Tcl-provided commands.
Jim Nasby and Karl Lehenbauer, reviewed by Victor Wagner
Commit 7132810c (Retain tempdirs for failed tests) used Test::More's
is_passing method, but that was added in Test::More 0.89_01 which is
sometime later than Perl 5.10.1. Popular platforms such as RHEL6 don't
have that, never mind some of our older dinosaurs. Do it the hard way.
Michael Paquier, based on research by Craig Ringer
The new bit indicates whether every tuple on the page is already frozen.
It is cleared only when the all-visible bit is cleared, and it can be
set only when we vacuum a page and find that every tuple on that page is
both visible to every transaction and in no need of any future
vacuuming.
A future commit will use this new bit to optimize away full-table scans
that would otherwise be triggered by XID wraparound considerations. A
page which is merely all-visible must still be scanned in that case, but
a page which is all-frozen need not be. This commit does not attempt
that optimization, although that optimization is the goal here. It
seems better to get the basic infrastructure in place first.
Per discussion, it's very desirable for pg_upgrade to automatically
migrate existing VM forks from the old format to the new format. That,
too, will be handled in a follow-on patch.
Masahiko Sawada, reviewed by Kyotaro Horiguchi, Fujii Masao, Amit
Kapila, Simon Riggs, Andres Freund, and others, and substantially
revised by me.
Test composite-type arguments and the argisnull and spi_lastoid Tcl
commands. This stuff was not covered before, but needs to be exercised
since the upcoming Tcl object-conversion patch changes these code paths
(and broke at least one of them).
These tests verify that 1) WAL replay preserves the stored value,
2) a streaming standby server replays the value obtained from the
master, and 3) the behavior is sensible in the face of repeated
configuration changes.
One annoyance is that tmp_check/ subdir from the TAP tests is clobbered
when the pg_regress test runs in the same subdirectory. This is
bothersome but not too terrible a problem, since the pg_regress test is
not run anyway if the TAP tests fail (unless "make -k" is used).
I had these tests around since commit 69e7235c93e2; add them now that we
have the recovery test framework in place.
Fabien Coelho, reviewed mostly by Michael Paquier and me, but also by
Heikki Linnakangas, BeomYong Lee, Kyotaro Horiguchi, Oleksandr
Shulgin, and Álvaro Herrera.
I noticed that the async-notify test results in log messages like these:
LOG: could not send data to client: Broken pipe
FATAL: connection to client lost
This is because it unceremoniously disconnects a client session that is
about to have some NOTIFY messages delivered to it. Such log messages
during a regression test might well cause people to go looking for a
problem that doesn't really exist (it did cause me to waste some time that
way). We can shut it up by adding an UNLISTEN command to session teardown.
Patch HEAD only; this doesn't seem significant enough to back-patch.
This error message was written with only ON SELECT rules in mind, but since
then we also made RETURNING-clause targetlists go through the same logic.
This means that you got a rather off-topic error message if you tried to
add a rule with RETURNING to a table having dropped columns. Ideally we'd
just support that, but some preliminary investigation says that it might be
a significant amount of work. Seeing that Nicklas Avén's complaint is the
first one we've gotten about this in the ten years or so that the code's
been like that, I'm unwilling to put much time into it. Instead, improve
the error report by issuing a different message for RETURNING cases, and
revise the associated comment based on this investigation.
Discussion: 1456176604.17219.9.camel@jordogskog.no
The original coding of the test was relying too much on the ordering in
which backends are awakened once an advisory lock which they wait for is
released. Change the code so that each backend uses its own advisory
lock instead, so that the output becomes stable. Also add a few seconds
of sleep between lock releases, so that the test isn't broken in
overloaded buildfarm animals, as suggested by Tom Lane.
Per buildfarm members spoonbill and guaibasaurus.
Discussion: https://www.postgresql.org/message-id/19294.1456551587%40sss.pgh.pa.us
It's harmless, but might confuse readers. Seems to have been introduced
in 6bc8ef0b7f. Back-patch, just to avoid cosmetic cross-branch
differences.
Amit Langote
In commit 19a541143a I replaced RelOptInfo.width with
RelOptInfo.reltarget.width, but I missed updating debug_print_rel()
for that because it's not compiled by default.
Reported by Salvador Fandino, patch by Michael Paquier.
When converting an RTE with securityQuals into a security barrier
subquery RTE, ensure that the Vars in the new subquery's targetlist
all have varlevelsup = 0 so that they correctly refer to the
underlying base relation being wrapped.
The original code was creating new Vars by copying them from existing
Vars referencing the base relation found elsewhere in the query, but
failed to account for the fact that such Vars could come from sublink
subqueries, and hence have varlevelsup > 0. In practice it looks like
this could only happen with nested security barrier views, where the
outer view has a WHERE clause containing a correlated subquery, due to
the order in which the Vars are processed.
Bug: #13988
Reported-by: Adam Guthrie
Backpatch-to: 9.4, where updatable SB views were introduced
A failure partway through PGLC_localeconv() led to a situation where
the next call would call free_struct_lconv() a second time, leading
to free() on already-freed strings, typically leading to a core dump.
Add a flag to remember whether we need to do that.
Per report from Thom Brown. His example case only provokes the failure
as far back as 9.4, but nonetheless this code is obviously broken, so
back-patch to all supported branches.
In 0e5680f473, I fixed a bug in heapam that caused spurious deadlocks
when multiple updates concurrently attempted to modify the old version
of an updated tuple whose new version was key-share locked. I proposed
an isolationtester spec file that reproduced the bug, but back then
isolationtester wasn't mature enough to be able to run it. Now that
38f8bdcac4 is in the tree, we can have this spec file too.
Discussion: https://www.postgresql.org/message-id/20141212205254.GC1768%40alvh.no-ip.org
I applied the previous-to-last revision of Michaël's submitted patch
instead of the last; these two tweaks pointed out by Craig were left out
of the previous commit by accident.
Some code in the RewindTest test suite is more generally useful than
just for that suite, so put it where other test suites can reach it.
Some postgresql.conf parameters now have different default values when
a cluster is initialized with 'allows_streaming' than they did
previously; most notably, autovacuum is no longer turned off.
(Also, we no longer call pg_ctl promote with -w, but that flag doesn't
actually do anything in promote so there's no behavior change.)
Author: Michael Paquier
CREATE TABLE .. AS EXECUTE can turn an apparently read-only query into
a write operation, which parallel query can't handle. It's a bit of a
shame that this requires us to avoid parallel query for queries prepared
via PREPARE in all cases, but for right now it does.
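For example (a sketch; the names are invented):
PREPARE q AS SELECT generate_series(1, 1000) AS g;
-- the prepared query looks read-only, but this turns it into a write,
-- so a parallel plan for q must be avoided
CREATE TABLE t AS EXECUTE q;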
This is basically a bug fix; the old code assumes that a ForeignScan
is always parallel-safe, but for postgres_fdw, for example, this is
definitely false. It should be true for file_fdw, though, since a
worker can read a file from the filesystem just as well as any other
backend process.
Original patch by Thomas Munro. Documentation, and changes to the
comments, by me.
Also, the dump_info method got split into another method that returns
the stuff as a string instead of just printing it to stdout.
Add a new README in src/test/perl too.
Author: Craig Ringer
Reviewed by: Michaël Paquier
Parallel query can't handle running a query only partially rather than
to completion. However, there seems to be no way to run a statement
prepared via SQL PREPARE other than to completion, so we can enable it
there without a problem.
The situation is more complicated for the extended query protocol.
libpq seems to provide no way to send an Execute message with a
non-zero rowcount, but some other client might. If that happens, and
a parallel plan was chosen, we'll execute the parallel plan without
using any workers, which may be somewhat inefficient but should still
work. Hopefully this won't be a problem; users can always set
max_parallel_degree=0 to avoid choosing parallel plans in the first
place.
Amit Kapila, reviewed by me.
This patch introduces "pg_blocking_pids(int) returns int[]", which returns
the PIDs of any sessions that are blocking the session with the given PID.
Historically people have obtained such information using a self-join on
the pg_locks view, but it's unreasonably tedious to do it that way with any
modicum of correctness, and the addition of parallel queries has pretty
much broken that approach altogether. (Given some more columns in the view
than there are today, you could imagine handling parallel-query cases with
a 4-way join; but ugh.)
The new function has the following behaviors that are painful or impossible
to get right via pg_locks:
1. Correctly understands which lock modes block which other ones.
2. In soft-block situations (two processes both waiting for conflicting lock
modes), only the one that's in front in the wait queue is reported to
block the other.
3. In parallel-query cases, reports all sessions blocking any member of
the given PID's lock group, and reports a session by naming its leader
process's PID, which will be the pg_backend_pid() value visible to
clients.
The motivation for doing this right now is mostly to fix the isolation
tests. Commit 38f8bdcac4 lobotomized
isolationtester's is-it-waiting query by removing its ability to recognize
nonconflicting lock modes, as a crude workaround for the inability to
handle soft-block situations properly. But even without the lock mode
tests, the old query was excessively slow, particularly in
CLOBBER_CACHE_ALWAYS builds; some of our buildfarm animals fail the new
deadlock-hard test because the deadlock timeout elapses before they can
probe the waiting status of all eight sessions. Replacing the pg_locks
self-join with use of pg_blocking_pids() is not only much more correct, but
a lot faster: I measure it at about 9X faster in a typical dev build with
Asserts, and 3X faster in CLOBBER_CACHE_ALWAYS builds. That should provide
enough headroom for the slower CLOBBER_CACHE_ALWAYS animals to pass the
test, without having to lengthen deadlock_timeout yet more and thus slow
down the test for everyone else.
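Typical usage might look like this; joining against pg_stat_activity
is one convenient way to find waiting sessions (a sketch, not from the
patch itself):
-- list each blocked session together with the PIDs blocking it
SELECT pid, pg_blocking_pids(pid) AS blocked_by
FROM pg_stat_activity
WHERE cardinality(pg_blocking_pids(pid)) > 0;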
We don't really need this field, because it's either zero or redundant with
PGPROC.pid. The use of zero to mark "not a group leader" is not necessary
since we can just as well test whether lockGroupLeader is NULL. This does
not save very much, either as to code or data, but the simplification seems
worthwhile anyway.
In 4b4b680c3 I accidentally used sizeof(PrivateRefCountArray) instead of
sizeof(PrivateRefCountEntry) when creating the refcount overflow
hashtable. As the former is bigger than the latter, this luckily only
resulted in a slightly increased memory usage when many buffers are
pinned in a backend.
Reported-By: Takashi Horikawa
Discussion: 73FA3881462C614096F815F75628AFCD035A48C3@BPXM01GP.gisp.nec.co.jp
Backpatch: 9.5, where the new ref count infrastructure was introduced
Coverity griped about use of unchecked strcpy() into a local variable.
There's unlikely to be any actual bug there, since no caller would be
passing a path longer than MAXPGPATH, but nonetheless use of strlcpy()
seems preferable.
While at it, get rid of unmaintainable separation between list of
field names and list of field values in favor of initializing them
in parallel. And we might as well declare get_configdata()'s path
argument as const char *, even though no current caller needs that.
Some over-eager copy-and-pasting on my part resulted in a nonsense
result being returned in this case. I have adopted the same pattern for
handling this case as is used in the one argument form of the function,
i.e. we just skip over the code that adds values to the object.
Diagnosis and patch from Michael Paquier, although not quite his
solution.
Fixes bug #13936.
Backpatch to 9.5 where jsonb_object was introduced.
Reflow text in lock manager README so that it fits within 80 columns.
Correct some mistakes. Expand the README to explain not only why group
locking exists but also the data structures that support it. Improve
comments related to group locking in several files. Change the name of a
macro argument for improved clarity.
Most of these problems were reported by Tom Lane, but I found a few
of them myself.
Robert Haas and Tom Lane
Commit 53874c5228 broke various 32-bit
buildfarm machines because it incorrectly used an 'L' suffix for what
needed to be a 64-bit literal. Thanks to Michael Paquier for helping
to diagnose this.
This will parse strings in the format produced by pg_size_pretty() and
return sizes in bytes. This allows queries to be written with clauses
like "pg_total_relation_size(oid) > pg_size_bytes('10 GB')".
Author: Pavel Stehule with various improvements by Vitaly Burovoy
Discussion: http://www.postgresql.org/message-id/CAFj8pRD-tGoDKnxdYgECzA4On01_uRqPrwF-8LdkSE-6bDHp0w@mail.gmail.com
Reviewed-by: Vitaly Burovoy, Oleksandr Shulgin, Kyotaro Horiguchi,
Michael Paquier and Robert Haas
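For example, assuming the same 1024-based units that pg_size_pretty()
produces:
SELECT pg_size_bytes('10 GB');  -- 10737418240
SELECT relname FROM pg_class
WHERE pg_total_relation_size(oid) > pg_size_bytes('10 GB');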
Architecture reference material specifies this order, and s_lock.h
inline assembly agrees. The former order failed to provide mutual
exclusion to lwlock.c and perhaps to other clients. The two xlc
buildfarm members, hornet and mandrill, have failed sixteen times with
duplicate key errors involving pg_class_oid_index or pg_type_oid_index.
Back-patch to 9.5, where commit b64d92f1a5
introduced atomics.
Reviewed by Andres Freund and Tom Lane.
StartupSUBTRANS() incorrectly handled cases near the max pageid in the subtrans
data structure, which in some cases could lead to errors in startup for Hot
Standby.
This patch wraps the pageids correctly, avoiding any such errors.
Identified by exhaustive crash testing by Jeff Janes.
Jeff Janes
It was using %u to read back, into a signed integer variable, a string
that was earlier produced by snprintf with %d. This seems to work in
practice but is incorrect.
found by cppcheck
Up to now, there's been an assumption that all Paths for a given relation
compute the same output column set (targetlist). However, there are good
reasons to remove that assumption. For example, an indexscan on an
expression index might be able to return the value of an expensive function
"for free". While we have the ability to generate such a plan today in
simple cases, we don't have a way to model that it's cheaper than a plan
that computes the function from scratch, nor a way to create such a plan
in join cases (where the function computation would normally happen at
the topmost join node). Also, we need this so that we can have Paths
representing post-scan/join steps, where the targetlist may well change
from one step to the next. Therefore, invent a "struct PathTarget"
representing the columns we expect a plan step to emit. It's convenient
to include the output tuple width and tlist evaluation cost in this struct,
and there will likely be additional fields in future.
While Path nodes that actually do have custom outputs will need their own
PathTargets, it will still be true that most Paths for a given relation
will compute the same tlist. To reduce the overhead added by this patch,
keep a "default PathTarget" in RelOptInfo, and allow Paths that compute
that column set to just point to their parent RelOptInfo's reltarget.
(In the patch as committed, actually every Path is like that, since we
do not yet have any cases of custom PathTargets.)
I took this opportunity to provide some more-honest costing of
PlaceHolderVar evaluation. Up to now, the assumption that "scan/join
reltargetlists have cost zero" was applied not only to Vars, where it's
reasonable, but also PlaceHolderVars where it isn't. Now, we add the eval
cost of a PlaceHolderVar's expression to the first plan level where it can
be computed, by including it in the PathTarget cost field and adding that
to the cost estimates for Paths. This isn't perfect yet but it's much
better than before, and there is a way forward to improve it more. This
costing change affects the join order chosen for a couple of the regression
tests, changing expected row ordering.
Suppress creation of the pg_upgrade delete script when the new data
directory is inside the old data directory.
Reported-by: IRC
Backpatch-through: 9.3, where delete script tests were added
In commit a5c43b88 the behavior of command line pg_config was
inadvertently changed to include the config name when specific
configs are requested, similar to when none are requested and
all are emitted. This breaks scripts that expect to use
pg_config for e.g. PGXS. Revert to the previous behavior.
Move and refactor the underlying code for the pg_config client
application to src/common in support of sharing it with a new
system information SRF called pg_config() which makes the same
information available via SQL. Additionally wrap the SRF with a
new system view, also called pg_config.
Patch by me with extensive input and review by Michael Paquier
and additional review by Alvaro Herrera.
When processing ordered aggregates following a sort that could make use
of the abbreviated key optimization, only call the equality operator to
compare successive pairs of tuples when their abbreviated keys were not
equal.
Peter Geoghegan, reviewed by Andreas Karlsson and by me.
A function name that's double-quoted in SQL can contain almost any
characters, but we were using that name directly as part of the name
generated for the Python-level function, and Python doesn't like
anything that isn't pretty much a standard identifier. To fix,
replace anything that isn't an ASCII letter or digit with an underscore
in the generated name. This doesn't create any risk of duplicate Python
function names because we were already appending the function OID to
the generated name to ensure uniqueness. Per bug #13960 from Jim Nasby.
Patch by Jim Nasby, modified a bit by me. Back-patch to all
supported branches.
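A sketch of the case fixed; the function name is invented:
-- characters not legal in a Python identifier are now mapped to
-- underscores in the generated Python-level name; the appended OID
-- keeps generated names unique
CREATE FUNCTION "my.odd name!"() RETURNS integer
AS 'return 42' LANGUAGE plpythonu;
SELECT "my.odd name!"();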