LibreSSL defines OPENSSL_VERSION_NUMBER to claim that it is version 2.0.0,
but it doesn't have the functions added in OpenSSL 1.1.0. Add autoconf
checks for the individual functions we need, and stop relying on
OPENSSL_VERSION_NUMBER.
Backport to 9.5 and 9.6, like the patch that broke this. In the
back-branches, there are still a few OPENSSL_VERSION_NUMBER checks left,
to check for OpenSSL 0.9.8 or 0.9.7. I left them as they were - LibreSSL
has all those functions, so they work as intended.
Per buildfarm member curculio.
Discussion: <2442.1473957669@sss.pgh.pa.us>
The documentation states that the default value is 8MB, but this was
only true at BLCKSZ = 8kB, because the default was hard-coded as 1024.
Make the code match the docs by computing the default as 8MB/BLCKSZ.
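The arithmetic, as a trivial standalone illustration (the BLCKSZ value and variable names here are assumed for the example, not the actual GUC code):

    #include <stdio.h>

    /* BLCKSZ normally comes from pg_config.h; 8kB is assumed here. */
    #define BLCKSZ 8192

    int
    main(void)
    {
        int old_default = 1024;                         /* only 8MB when BLCKSZ is 8kB */
        int new_default = (8 * 1024 * 1024) / BLCKSZ;   /* 8MB at any block size */

        printf("old = %d pages, new = %d pages\n", old_default, new_default);
        return 0;
    }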
Oversight in commit 75be66464, noted pursuant to a gripe from Peter E.
Discussion: <90634e20-097a-e4fd-67d5-fb2c42f0dd71@2ndquadrant.com>
Changes needed to build at all:
- Check for SSL_new in configure, now that SSL_library_init is a macro.
- Do not access struct members directly. This includes some new code in
pgcrypto, to use the resource owner mechanism to ensure that we don't
leak OpenSSL handles, now that we can't embed them in other structs
anymore. (A minimal sketch of this follows the list.)
- RAND_SSLeay() -> RAND_OpenSSL()
Changes that were needed to silence deprecation warnings, but were not
strictly necessary:
- RAND_pseudo_bytes() -> RAND_bytes().
- SSL_library_init() and OPENSSL_config() -> OPENSSL_init_ssl()
- ASN1_STRING_data() -> ASN1_STRING_get0_data()
- DH_generate_parameters() -> DH_generate_parameters_ex()
- Locking callbacks are not needed with OpenSSL 1.1.0 anymore. (Good
riddance!)
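As a minimal sketch of the opaque-struct point above (EVP_MD_CTX chosen as the example; error handling is trimmed, and pgcrypto's real code additionally ties the handle to a resource owner):

    #include <openssl/evp.h>

    int
    digest_len(void)
    {
        /* In 1.1.0 EVP_MD_CTX is opaque: it can no longer be embedded in
         * another struct or put on the stack, so allocate and free it
         * through the library. */
        EVP_MD_CTX *ctx = EVP_MD_CTX_new();
        int         len;

        if (ctx == NULL)
            return -1;
        EVP_DigestInit_ex(ctx, EVP_sha256(), NULL);
        len = EVP_MD_CTX_size(ctx);
        EVP_MD_CTX_free(ctx);
        return len;
    }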
Also replace references to SSLEAY_VERSION_NUMBER with OPENSSL_VERSION_NUMBER,
for the sake of consistency. OPENSSL_VERSION_NUMBER has existed since time
immemorial.
Fix SSL test suite to work with OpenSSL 1.1.0. CA certificates must have
the "CA:true" basic constraint extension now, or OpenSSL will refuse them.
Regenerate the test certificates with that. The "openssl" binary, used to
generate the certificates, is also now more picky, and throws an error
if an X509 extension is specified in "req_extensions", but that section
is empty.
Backpatch to all supported branches, per popular demand. In back-branches,
we still support OpenSSL 0.9.7 and above. OpenSSL 0.9.6 should still work
too, but I didn't test it. In master, we only support 0.9.8 and above.
Patch by Andreas Karlsson, with additional changes by me.
Discussion: <20160627151604.GD1051@msg.df7cb.de>
The money type input function did not have any overflow checks at all.
There were some regression tests that purported to check for overflow,
but they actually checked for the overflow behavior of the int8 type
before casting to money. Remove those unnecessary checks and add some
that actually check the money input function.
Reviewed-by: Fabien COELHO <coelho@cri.ensmp.fr>
We were misapplying NameGetDatum() to plain C strings in some places.
This worked, because it was just a pointer cast anyway, but it's a type
cheat in some sense. Use CStringGetDatum instead, and modify the
NameGetDatum macro so it won't compile if applied to something that's
not a pointer to NameData. This should result in no changes to
generated code, but it is logically cleaner.
Mark Dilger, tweaked a bit by me
Discussion: <EFD8AC94-4C1F-40C1-A5EA-304080089C1B@gmail.com>
Teach the parser to reject misplaced set-returning functions during parse
analysis using p_expr_kind, in much the same way as we do for aggregates
and window functions (cf commit eaccfded9). While this isn't complete
(it misses nesting-based restrictions), it's much better than the previous
error reporting for such cases, and it allows elimination of assorted
ad-hoc expression_returns_set() error checks. We could add nesting checks
later if it seems important to catch all cases at parse time.
There is one case for which the parser will now throw an error although previous
versions allowed it, which is SRFs in the tlist of an UPDATE. That never
behaved sensibly (since it's ill-defined which generated row should be
used to perform the update) and it's hard to see why it should not be
treated as an error. It's a release-note-worthy change though.
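For illustration, a much-simplified standalone sketch of expression-kind-based rejection; the real parser check works with ParseState.p_expr_kind and ereport(), and covers many more expression kinds than this toy enum:

    #include <stdio.h>
    #include <stdlib.h>

    typedef enum
    {
        EXPR_KIND_SELECT_TARGET,    /* SELECT target list: SRFs allowed */
        EXPR_KIND_WHERE,            /* WHERE clause: SRFs disallowed */
        EXPR_KIND_UPDATE_SOURCE     /* UPDATE target list: now disallowed too */
    } ExprKind;

    static void
    check_srf_call_placement(ExprKind kind)
    {
        switch (kind)
        {
            case EXPR_KIND_SELECT_TARGET:
                break;              /* OK */
            case EXPR_KIND_WHERE:
            case EXPR_KIND_UPDATE_SOURCE:
                fprintf(stderr, "set-returning functions are not allowed here\n");
                exit(1);
        }
    }

    int
    main(void)
    {
        check_srf_call_placement(EXPR_KIND_SELECT_TARGET);     /* fine */
        check_srf_call_placement(EXPR_KIND_UPDATE_SOURCE);     /* errors out */
        return 0;
    }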
Also, add a new Query field hasTargetSRFs reporting whether there are
any SRFs in the targetlist (including GROUP BY/ORDER BY expressions).
The parser can now set that basically for free during parse analysis,
and we can use it in a number of places to avoid expression_returns_set
searches. (There will be more such checks soon.) In some places, this
allows decontorting the logic since it's no longer expensive to check for
SRFs in the tlist --- so I made the checks parallel to the handling of
hasAggs/hasWindowFuncs wherever it seemed appropriate.
catversion bump because adding a Query field changes stored rules.
Andres Freund and Tom Lane
Discussion: <24639.1473782855@sss.pgh.pa.us>
lockdefs.h was only split from lock.h relatively recently, and
represents a minimal subset of the old lock.h. heapam.h only needs
that smaller subset, so adjust it to include only that. This requires
some corresponding adjustments elsewhere.
Peter Geoghegan
The output of the function changes whenever previous (or, as in this
case, concurrent) tests leave a table in place. That causes unneeded
churn.
This should fix failures due to the tests added in bfe16d1a5, like on
lapwing, caused by the tsrf test running concurrently with misc. Those
could also have been addressed by using temp tables, but that test has
annoyed me before.
Discussion: <27626.1473729905@sss.pgh.pa.us>
We're considering changing the implementation of targetlist SRFs
considerably, and a lot of the current behaviour isn't tested in our
regression tests. Thus it seems useful to increase coverage to avoid
accidental behaviour changes.
It's quite possible that some of the plans here will require adjustments
to avoid falling afoul of ordering differences (e.g. hashed group
bys). The buildfarm will tell us.
Reviewed-By: Tom Lane
Discussion: <20160827214829.zo2dfb5jaikii5nw@alap3.anarazel.de>
Like initdb, clean up created data and xlog directories, unless the new
-n/--noclean option is specified.
Tablespace directories are not cleaned up, but a message is written
about that.
Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
The details of commit 52803098ab were
based on a misunderstanding of the role inheritance allowing use
of a database for a template. While the CREATEDB privilege is not
inherited, the database ownership privileges are.
Pointed out by Vitaly Burovoy and Tom Lane.
Fix provided by Tom Lane, reviewed by Vitaly Burovoy.
Following 8299471c37 walsender procs are now visible in pg_stat_activity.
Set query to 'walsender' for walsender procs to allow them to be identified.
Discussion: <CAB7nPqS8c76KPSufK_HSDeYrbtg+zZ7D0EEkjeM6txSEuCB_jA@mail.gmail.com>
Michael Paquier, issue raised by Fujii Masao, reviewed by Tom Lane
Previously, to update an extension you had to produce both a version-update
script and a new base installation script. It's become more and more
obvious that that's tedious, duplicative, and error-prone. This patch
attempts to improve matters by allowing the new base installation script
to be omitted. CREATE EXTENSION will install a requested version if it
can find a base script and a chain of update scripts that will get there.
As in the existing update logic, shorter chains are preferred if there's
more than one possibility, with an arbitrary tie-break rule for chains
of equal length.
Also adjust the pg_available_extension_versions view to show such versions
as installable.
While at it, refactor the code so that CASCADE processing works for
extensions requested during ApplyExtensionUpdates(). Without this,
addition of a new requirement in an updated extension would require
creating a new base script, even if there was no other reason to do that.
(It would be easy at this point to add a CASCADE option to ALTER EXTENSION
UPDATE, to allow the same thing to happen during a manually-commanded
version update, but I have not done that here.)
Tom Lane, reviewed by Andres Freund
Discussion: <20160905005919.jz2m2yh3und2dsuy@alap3.anarazel.de>
Solution.pm mistakenly believed that the xml option requires the xslt
option, when actually the dependency is the other way around; and it
believed that libxml requires libiconv, which is not necessarily so,
so we shouldn't enforce it here. Fix the option cross-checking logic.
Also, since AddProject already takes care of adding libxml and libxslt
include and library dependencies to every project, there's no need
for the custom code that did that in mkvcbuild. While at it, let's
handle the similar dependencies for uuid in a similar fashion.
Given the lack of field complaints about these overly strict build
dependency requirements, there seems no need for a back-patch.
Michael Paquier
Discussion: <CAB7nPqR0+gpu3mRQvFjf-V-bMxmiSJ6NpTg9_WzVDL+a31cV2g@mail.gmail.com>
In external sort's merge phase, we maintain a binary heap holding the next
tuple from each input tape. On each step, the topmost tuple is returned,
and replaced with the next tuple from the same tape. We were doing the
replacement by deleting the top node in one operation, and inserting the
next tuple after that. However, you can do a "replace-top" operation more
efficiently, in one "sift-up". A deletion will always walk the heap from
top to bottom, but in a replacement, we can stop as soon as we find the
right place for the new tuple. This is particularly helpful if the tapes
are not in completely random order, so that the next tuple from a tape is
likely to land near the top of the heap.
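A minimal standalone sketch of replace-top on a binary min-heap with integer keys (the real tuplesort.c code compares SortTuples and tracks tape numbers):

    #include <stdio.h>

    /* Sift the root element down to its proper place in a binary min-heap. */
    static void
    sift_down(int *heap, int nelems)
    {
        int pos = 0;

        for (;;)
        {
            int child = 2 * pos + 1;
            int tmp;

            if (child >= nelems)
                break;
            if (child + 1 < nelems && heap[child + 1] < heap[child])
                child++;
            if (heap[pos] <= heap[child])
                break;              /* new element found its place; stop early */
            tmp = heap[pos];
            heap[pos] = heap[child];
            heap[child] = tmp;
            pos = child;
        }
    }

    /*
     * Replace-top: overwrite the root with the next tuple from the same tape
     * and do one sift pass from the top.  The old coding deleted the root
     * (one full top-to-bottom pass) and then inserted the new element
     * (a second pass back up from the bottom).
     */
    static void
    replace_top(int *heap, int nelems, int newval)
    {
        heap[0] = newval;
        sift_down(heap, nelems);
    }

    int
    main(void)
    {
        int heap[] = {1, 3, 2, 7, 4};

        replace_top(heap, 5, 5);    /* 5 sinks only as far as it needs to */
        printf("new top: %d\n", heap[0]);
        return 0;
    }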
Peter Geoghegan, reviewed by Claudio Freire, with some editing by me.
Discussion: <CAM3SWZRhBhiknTF_=NjDSnNZ11hx=U_SEYwbc5vd=x7M4mMiCw@mail.gmail.com>
Some experimentation with an older version of gcc showed that it is able
to determine whether "if (elevel_ >= ERROR)" is compile-time constant
if elevel_ is declared "const", but otherwise not so much. We had
accounted for that in ereport() but were too miserly with braces to
make it so in elog(). I don't know how many currently-interesting
compilers have the same quirk, but in case it will save some code
space, let's make sure that elog() is on the same footing as ereport()
for this purpose.
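A standalone sketch of the pattern, with stub definitions standing in for the real elog machinery (the actual macro in elog.h is more elaborate):

    #include <stdio.h>
    #include <stdlib.h>

    #define ERROR 20
    #define pg_unreachable() abort()

    static void
    emit_message(int elevel, const char *msg)
    {
        fprintf(stderr, "level %d: %s\n", elevel, msg);
        if (elevel >= ERROR)
            exit(1);
    }

    /*
     * Declaring elevel_ "const" lets even older compilers treat the trailing
     * "if" as a compile-time constant when the call site passes a constant
     * level, so the pg_unreachable() hint costs no code in the common case.
     */
    #define my_elog(elevel, msg) \
        do { \
            const int elevel_ = (elevel); \
            emit_message(elevel_, (msg)); \
            if (elevel_ >= ERROR) \
                pg_unreachable(); \
        } while (0)

    int
    main(void)
    {
        my_elog(10, "just a notice");
        my_elog(ERROR, "this one does not return");
        return 0;
    }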
Back-patch to 9.3 where we introduced pg_unreachable() calls into
elog/ereport.
Commit dd1a3bccc replaced a test on whether a subroutine returned a
null pointer with a test on whether &pointer->backendStatus was null.
This accidentally failed to fail, at least on common compilers, because
backendStatus is the first field in the struct; but it was surely trouble
waiting to happen. Commit f91feba87 then messed things up further,
changing the logic to
    local_beentry = pgstat_fetch_stat_local_beentry(curr_backend);
    if (!local_beentry)
        continue;
    beentry = &local_beentry->backendStatus;
    if (!beentry)
    {
where the second "if" is now dead code, so that the intended behavior of
printing a row with "<backend information not available>" cannot occur.
I suspect this is all moot because pgstat_fetch_stat_local_beentry
will never actually return null in this function's usage, but it's still
very poor coding. Repair back to 9.4 where the original problem was
introduced.
The full generality of deleting an arbitrary number of tuples is no longer
needed, so let's save some code and cycles by replacing the original coding
with an implementation based on PageIndexTupleDelete.
We can always get back the old code from git if we need it again for new
callers (though I don't care for its willingness to mess with line pointers
it wasn't told to mess with).
Discussion: <552.1473445163@sss.pgh.pa.us>
Nowadays this is just a backwards-compatibility wrapper around
PageAddItemExtended, so let's avoid the extra level of function call.
In addition, because pretty much all callers are passing constants
for the two bool arguments, compilers will be able to constant-fold
the conversion to a flags bitmask.
Discussion: <552.1473445163@sss.pgh.pa.us>
PageIndexTupleOverwrite performs approximately the same function as
PageIndexTupleDelete (or PageIndexDeleteNoCompact) followed by PageAddItem
targeting the same item pointer offset. But in the case where the new
tuple is the same size as the old, it avoids shuffling other data around on
the page, because the new tuple is placed where the old one was rather than
being appended to the end of the page. This has been shown to provide a
substantial speedup for some GiST use-cases.
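A hedged usage sketch, compiling only inside the backend tree; the signature shown (Page, OffsetNumber, Item, Size) is the one introduced by this patch:

    #include "postgres.h"
    #include "storage/bufpage.h"

    /*
     * Replace the tuple at 'offnum' with an equal-sized new version.  With
     * PageIndexTupleOverwrite the data stays in place; the older pattern of
     * PageIndexTupleDelete() + PageAddItem() would shuffle the rest of the
     * page contents around.
     */
    static void
    replace_index_tuple(Page page, OffsetNumber offnum, Item newtup, Size newsize)
    {
        if (!PageIndexTupleOverwrite(page, offnum, newtup, newsize))
            elog(ERROR, "failed to overwrite index tuple");
    }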
Also, this change allows some API simplifications: we can get rid of
the rather klugy and error-prone PAI_ALLOW_FAR_OFFSET flag for
PageAddItemExtended, since that was used only to cover a corner case
for BRIN that's better expressed by using PageIndexTupleOverwrite.
Note that this patch causes a rather subtle WAL incompatibility: the
physical page content change represented by certain WAL records is now
different than it was before, because while the tuples have the same
itempointer line numbers, the tuples themselves are in different places.
I have not bumped the WAL version number because I think it doesn't matter
unless you are trying to do bitwise comparisons of original and replayed
pages, and in any case we're early in a devel cycle and there will probably
be more WAL changes before v10 gets out the door.
There is probably room to make use of PageIndexTupleOverwrite in SP-GiST
and GIN too, but that is left for a future patch.
Andrey Borodin, reviewed by Anastasia Lubennikova, whacked around a bit
by me
Discussion: <CAJEAwVGQjGGOj6mMSgMwGvtFd5Kwe6VFAxY=uEPZWMDjzbn4VQ@mail.gmail.com>
When heap_lock_tuple decides to follow the update chain, it tried to
also lock any version of the tuple that was created by an update that
was subsequently rolled back. This is pointless, since for all intents
and purposes that tuple exists no more; and moreover it causes
misbehavior, as reported independently by Marko Tiikkaja and Marti
Raudsepp: some SELECT FOR UPDATE/SHARE queries may fail to return
the tuples, and assertion-enabled builds crash.
Fix by having heap_lock_updated_tuple test the xmin and return success
immediately if the tuple was created by an aborted transaction.
The condition where tuples become invisible occurs when an updated tuple
chain is followed by heap_lock_updated_tuple, which reports the problem
as HeapTupleSelfUpdated to its caller heap_lock_tuple, which in turn
propagates that code outwards possibly leading the calling code
(ExecLockRows) to believe that the tuple exists no longer.
Backpatch to 9.3. Only in 9.5 and newer does this lead to a visible
failure, because of commit 27846f02c176; before that, heap_lock_tuple
skips the whole dance when the tuple is already locked by the same
transaction, because of the ancient HeapTupleSatisfiesUpdate behavior.
Still, the buggy condition may also exist in more convoluted scenarios
involving concurrent transactions, so it seems safer to fix the bug in
the old branches too.
Discussion:
https://www.postgresql.org/message-id/CABRT9RC81YUf1=jsmWopcKJEro=VoeG2ou6sPwyOUTx_qteRsg@mail.gmail.com
https://www.postgresql.org/message-id/48d3eade-98d3-8b9a-477e-1a8dc32a724d@joh.to
PageAddItem stores the item length as-is. It MAXALIGN's the amount of
space actually allocated for each tuple, but not the stored length.
PageRepairFragmentation, PageIndexMultiDelete, and PageIndexDeleteNoCompact
are all on board with this and MAXALIGN item lengths after fetching them.
But PageIndexTupleDelete expects the stored length to be a MAXALIGN
multiple already. This accidentally works for existing index AMs because
they all maxalign their tuple sizes internally; but we don't do that for
heap tuples, and it shouldn't be a requirement for index tuples either.
So, sync PageIndexTupleDelete with the rest of bufpage.c by having it
maxalign the item size after fetching.
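The pattern, as an in-tree sketch using the usual bufpage macros:

    #include "postgres.h"
    #include "storage/bufpage.h"
    #include "storage/itemid.h"

    /* Fetch an item's stored length and maxalign it, as the other bufpage.c
     * routines already did; the stored lp_len itself need not be aligned. */
    static Size
    fetch_aligned_item_size(Page page, OffsetNumber offnum)
    {
        ItemId      itemId = PageGetItemId(page, offnum);

        return MAXALIGN(ItemIdGetLength(itemId));
    }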
Also add a check that pd_special is maxaligned, to ensure that the test
"(offset + size) > phdr->pd_special" is still doing the right thing.
(If offset and pd_special are aligned, it doesn't matter whether size is.)
Again, this is in sync with the rest of the routines here, except for
PageAddItem which doesn't test because it doesn't actually do anything
for which pd_special alignment matters.
This shouldn't have any immediate functional impact; it just adds the
flexibility to use PageIndexTupleDelete on index tuples with non-aligned
lengths.
Discussion: <3814.1473366762@sss.pgh.pa.us>
plpgsql.h defines a number of enums, but most of the code passes them
around as ints. Update structs and function prototypes to take enum
types instead. This clarifies the struct definitions in plpgsql.h in
particular.
Reviewed-by: Pavel Stehule <pavel.stehule@gmail.com>
We have a not-terribly-thoroughly-enforced-yet project policy that internal
errors with SQLSTATE XX000 (ie, plain elog) should not be triggerable from
SQL. record_in, domain_in, and PL validator functions all failed to meet
this standard, because they threw plain elog("cache lookup failed for XXX")
errors on bad OIDs, and those are all invokable from SQL.
For record_in, the best fix is to upgrade typcache.c (lookup_type_cache)
to throw a user-facing error for this case. That seems consistent because
it was more than halfway there already, having user-facing errors for shell
types and non-composite types. Having done that, tweak domain_in to rely
on the typcache to throw an appropriate error. (This costs little because
InitDomainConstraintRef would fetch the typcache entry anyway.)
For the PL validator functions, we already have a single choke point at
CheckFunctionValidatorAccess, so just fix its error to be user-facing.
Dilip Kumar, reviewed by Haribabu Kommi
Discussion: <87wpxfygg9.fsf@credativ.de>
Reading 2PC state files during recovery was borked, causing corruptions during
recovery. Effect limited to servers with 2PC, subtransactions and
recovery/replication.
Stas Kelvich, reviewed by Michael Paquier and Pavan Deolasee
So far md.c used a linked list of segments. That proved to be a problem
when processing large relations, because every smgr.c/md.c level access
to a page incurred walking through a linked list of all preceding
segments, thus making page access O(#segments).
Replace the linked list of segments hanging off SMgrRelationData with an
array of opened segments. That allows O(1) access to individual
segments, if they've previously been opened.
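A simplified, standalone sketch of the difference (not the actual md.c data structures):

    typedef struct SegLink
    {
        int             fd;
        struct SegLink *next;
    } SegLink;

    /* Old: reaching segment N means chasing N list links: O(#segments). */
    static int
    fd_for_segment_list(SegLink *head, int segno)
    {
        while (segno-- > 0)
            head = head->next;
        return head->fd;
    }

    /* New: an array of opened segments gives O(1) access to any of them. */
    typedef struct SegArray
    {
        int     nsegs;
        int    *fds;            /* fds[i] is segment i, if already opened */
    } SegArray;

    static int
    fd_for_segment_array(SegArray *segs, int segno)
    {
        return segs->fds[segno];
    }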
Discussion: <20140331101001.GE13135@alap3.anarazel.de>
Reviewed-By: Peter Geoghegan, Tom Lane (in an older version)
That's primarily useful for testing very large relations, using sparse
files.
Discussion: <20140331101001.GE13135@alap3.anarazel.de>
Reviewed-By: Peter Geoghegan
mdtruncate() forgot to FileClose() a segment's mdfd_vfd, when deleting
it. That led to a fd.c handle to a truncated file being kept open until
backend exit.
The issue appears to have been introduced way back in 1a5c450f30,
before that the handle was closed inside FileUnlink().
The impact of this bug is limited - only VACUUM and ON COMMIT TRUNCATE
for temporary tables truncate files in place (i.e. TRUNCATE itself is
not affected), and the relation has to be bigger than 1GB. The
consequences of a leaked fd.c handle aren't severe either.
Discussion: <20160908220748.oqh37ukwqqncbl3n@alap3.anarazel.de>
Backpatch: all supported releases
commit_ts and test_pg_dump were declaring targets before including the
PGXS stanza, which meant that the "all" target customarily defined as
the first (and therefore default target) was not the default anymore.
Fix that by moving those target definitions to after PGXS.
commit_ts was initially good, but I broke it in commit 9def031bd2;
test_pg_dump was born broken, probably copying from commit_ts' mistake.
In passing, fix a comment mistake in test_pg_dump/Makefile.
Backpatch to 9.6.
Noted by Tom Lane.
Previously, if a schema was created by an extension, a normal pg_dump run
(not --binary-upgrade) would summarily skip every object in that schema.
In a case where an extension creates a schema and then users create other
objects within that schema, this does the wrong thing: we want pg_dump
to skip the schema but still create the non-extension-owned objects.
There's no easy way to fix this pre-9.6, because in earlier versions the
"dump" status for a schema is just a bool and there's no way to distinguish
"dump me" from "dump my members". However, as of 9.6 we do have enough
state to represent that, so this is a simple correction of the logic in
selectDumpableNamespace.
In passing, make some cosmetic fixes in nearby code.
Martín Marqués, reviewed by Michael Paquier
Discussion: <99581032-71de-6466-c325-069861f1947d@2ndquadrant.com>
If the database has a non-default tablespace, we emitted a TABLESPACE
clause in the CREATE DATABASE command emitted by -C, even if
--no-tablespaces was also specified. This seems wrong, and it's
inconsistent with what pg_dumpall does, so change it. Per bug #14315
from Danylo Hlynskyi.
Back-patch to 9.5. The bug is much older, but it'd be a more invasive
change before 9.5 because dumpDatabase() hasn't got an easy way to get
to the outputNoTablespaces flag. Doesn't seem worth the work given
the lack of previous complaints.
Report: <20160908081953.1402.75347@wrigleys.postgresql.org>
StandbyRecoverPreparedTransactions() leaked the buffer
used for the two-phase state file. This was leaked once
at startup and at every shutdown checkpoint seen.
Backpatch to 9.6
Stas Kelvich
Until now, it used the current working directory. This makes it safe
for simultaneous invocations of gendef.pl, with different target
directories, to run from a single current working directory, such as
$(top_srcdir). The MSVC build system will soon rely on this.
Christian Ullrich, reviewed by Michael Paquier.
Not much to be said about this patch: it does what it says on the tin.
In passing, rename AlterEnumStmt.skipIfExists to skipIfNewValExists
to clarify what it actually does. In the discussion of this patch
we considered supporting other similar options, such as IF EXISTS
on the type as a whole or IF NOT EXISTS on the target name. This
patch doesn't actually add any such feature, but it might happen later.
Dagfinn Ilmari Mannsåker, reviewed by Emre Hasegeli
Discussion: <CAO=2mx6uvgPaPDf-rHqG8=1MZnGyVDMQeh8zS4euRyyg4D35OQ@mail.gmail.com>
Users often get confused between COPY and \copy and try to use client-side
paths with COPY. The server then cannot find the file (if remote), or sees
a permissions problem (if local), or some variant of that. Emit a hint
about this in the most common cases.
In future we might want to expand the set of errnos for which the hint
gets printed, but be conservative for now.
Craig Ringer, reviewed by Christoph Berg and Tom Lane
Discussion: <CAMsr+YEqtD97qPEzQDqrCt5QiqPbWP_X4hmvy2pQzWC0GWiyPA@mail.gmail.com>
Add a location field to the DefElem struct, used to parse many utility
commands. Update various error messages to supply error position
information.
To propagate the error position information in a more systematic way,
create a ParseState in standard_ProcessUtility() and pass that to
interested functions implementing the utility commands. This seems
better than passing the query string and then reassembling a parse state
ad hoc, which violates the encapsulation of the ParseState type.
Reviewed-by: Pavel Stehule <pavel.stehule@gmail.com>
Negative availMemLessRefund would be problematic. It's not entirely
clear whether the case can be hit in the code as it stands, but this
seems like good future-proofing in any case. While we're at it,
insist that the value be not merely positive but not tiny, so as to
avoid doing a lot of repalloc work for little gain.
Peter Geoghegan
Discussion: <CAM3SWZRVkuUB68DbAkgw=532gW0f+fofKueAMsY7hVYi68MuYQ@mail.gmail.com>
Brain fade in commit a00c58314: I was thinking that a string starting with
"-" could be taken as a switch depending on command line syntax. That's
true, but having appendShellString() quote it will not help, so we may as
well not do so. Per complaint from Peter Eisentraut.
What used to be four spaces somehow turned into a tab and a couple of
spaces in commit a00c58314, no doubt from overhelpful emacs autoindent.
Noted by Peter Eisentraut.
lazy_truncate_heap() was waiting for
VACUUM_TRUNCATE_LOCK_WAIT_INTERVAL, but in microseconds
not milliseconds as originally intended.
Found by code inspection.
Simon Riggs
Some of the comments added by the CREATE EXTENSION CASCADE patch were
a bit sloppy, and I didn't care for redeclaring the same local variable
inside a nested block either. No functional changes.
Previously, we failed to recognize Unicode characters above U+7FF as
being members of locale-dependent character classes such as [[:alpha:]].
(Actually, the same problem occurs for large pg_wchar values in any
multibyte encoding, but UTF8 is the only case people have actually
complained about.) It's impractical to get Spencer's original code to
handle character classes or ranges containing many thousands of characters,
because it insists on considering each member character individually at
regex compile time, whether or not the character will ever be of interest
at run time. To fix, choose a cutoff point MAX_SIMPLE_CHR below which
we process characters individually as before, and deal with entire ranges
or classes as single entities above that. We can actually make things
cheaper than before for chars below the cutoff, because the color map can
now be a simple linear array for those chars, rather than the multilevel
tree structure Spencer designed. It's more expensive than before for
chars above the cutoff, because we must do a binary search in a list of
high chars and char ranges used in the regex pattern, plus call iswalpha()
and friends for each locale-dependent character class used in the pattern.
However, multibyte encodings are normally designed to give smaller codes
to popular characters, so that we can expect that the slow path will be
taken relatively infrequently. In any case, the speed penalty appears
minor except when we have to apply iswalpha() etc. to high character codes
at runtime --- and the previous coding gave wrong answers for those cases,
so whether it was faster is moot.
Tom Lane, reviewed by Heikki Linnakangas
Discussion: <15563.1471913698@sss.pgh.pa.us>
To prevent possibly breaking indexes on enum columns, we must keep
uncommitted enum values from getting stored in tables, unless we
can be sure that any such column is new in the current transaction.
Formerly, we enforced this by disallowing ALTER TYPE ... ADD VALUE
from being executed at all in a transaction block, unless the target
enum type had been created in the current transaction. This patch
removes that restriction, and instead insists that an uncommitted enum
value can't be referenced unless it belongs to an enum type created
in the same transaction as the value. Per discussion, this should be
a bit less onerous. It does require each function that could possibly
return a new enum value to SQL operations to check this restriction,
but there aren't so many of those that this seems unmaintainable.
Andrew Dunstan and Tom Lane
Discussion: <4075.1459088427@sss.pgh.pa.us>
When pg_logical_slot_get_changes(...) sets confirmed_flush_lsn to the point at
which replay stopped, it doesn't dirty the replication slot. So if the replay
didn't cause restart_lsn or catalog_xmin to change as well, this change will
not get written out to disk. Even on a clean shutdown.
If Pg crashes or restarts, a subsequent pg_logical_slot_get_changes(...) call
will see the same changes already replayed since it uses the slot's
confirmed_flush_lsn as the start point for fetching changes. The caller can't
specify a start LSN when using the SQL interface.
Mark the slot as dirty after reading changes using the SQL interface so that
users won't see repeated changes after a clean shutdown. Repeated changes still
occur when using the walsender interface or after an unclean shutdown.
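Roughly the shape of the fix, as a sketch against the existing slot.c entry points (not the exact patch):

    #include "postgres.h"
    #include "access/xlogdefs.h"
    #include "replication/slot.h"
    #include "storage/spin.h"

    /*
     * After advancing confirmed_flush for the SQL interface, mark the slot
     * dirty so the new position is persisted even if restart_lsn and
     * catalog_xmin did not move; otherwise a clean shutdown could forget
     * how far we had already replayed.
     */
    static void
    advance_confirmed_flush(XLogRecPtr lsn)
    {
        SpinLockAcquire(&MyReplicationSlot->mutex);
        MyReplicationSlot->data.confirmed_flush = lsn;
        SpinLockRelease(&MyReplicationSlot->mutex);

        ReplicationSlotMarkDirty();
    }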
Craig Ringer
Andres is apparently the only hacker who thinks this code is better as-is.
I (tgl) follow some of his logic, but the fact that it's setting off
warnings from static code analyzers seems like a sufficient reason to
put the complexity into a comment rather than the code.
Aleksander Alekseev
Discussion: <20160404190345.54d84ee8@fujitsu>
After further reflection about the mess cleaned up in commit 39b691f25,
I decided the main bit of test coverage that was still missing was to
check that the non-default abbreviation-set files we supply are usable.
Add that.
Back-patch to supported branches, just because it seems like a good
idea to keep this all in sync.
Commit b2cbced9e instituted a policy of referring to the timezone database
as the "IANA timezone database" in our user-facing documentation.
Propagate that wording into a couple of places that were still using "zic"
to refer to the database, which is definitely not right (zic is the
compilation tool, not the data).
Back-patch, not because this is very important in itself, but because
we routinely cherry-pick updates to the tznames files and I don't want
to risk future merge failures.
split_to_stringlist() doesn't modify its first argument nor expect it
to remain valid after exit, so there's no need to duplicate the optarg
string at the call sites. Per Coverity. (This has been wrong all along,
but commit 052cc223d changed the useless calls from "strdup" to
"pg_strdup", which apparently made Coverity think it's a new bug.
It's not, but it's also not worth back-patching.)
Coverity complained about the for(;;) loop, because it never actually
iterated. It was used just to be able to use "break" to exit it early. I
agree with Coverity, that's a bit confusing, so refactor the code to
use if-else instead.
While we're at it, use a local variable to hold the "current" node. That's
shorter and clearer than referring to "iter->last_visited" all the time.
In addition to the existing decimal-milliseconds output value,
display the same value in mm:ss.fff format if it exceeds one second.
Tack on hours and even days fields if the interval is large enough.
This avoids needing mental arithmetic to convert the values into
customary time units.
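A standalone sketch of the conversion (the format details here are illustrative, not psql's exact output):

    #include <stdio.h>
    #include <stdint.h>

    /* Break a millisecond interval into days, hours, minutes, and seconds
     * with fractional milliseconds, for an mm:ss.fff style display. */
    static void
    print_elapsed(double elapsed_ms)
    {
        if (elapsed_ms < 1000.0)
        {
            printf("Time: %.3f ms\n", elapsed_ms);
            return;
        }

        int64_t total_ms = (int64_t) elapsed_ms;
        int64_t seconds = total_ms / 1000;
        int64_t minutes = seconds / 60;
        int64_t hours   = minutes / 60;
        int64_t days    = hours / 24;
        double  frac_s  = (elapsed_ms - (double) (seconds * 1000)) / 1000.0
                          + (double) (seconds % 60);

        printf("Time: %.3f ms (", elapsed_ms);
        if (days > 0)
            printf("%lld d ", (long long) days);
        if (hours > 0)
            printf("%02lld:", (long long) (hours % 24));
        printf("%02lld:%06.3f)\n", (long long) (minutes % 60), frac_s);
    }

    int
    main(void)
    {
        print_elapsed(456.789);     /* Time: 456.789 ms */
        print_elapsed(75123.4);     /* Time: 75123.400 ms (01:15.123) */
        return 0;
    }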
Corey Huinker, reviewed by Gerdan Santos; bikeshedding by many
Discussion: <CADkLM=dbC4R8sbbuFXQVBFWoJGQkTEW8RWnC0PbW9nZsovZpJQ@mail.gmail.com>
These were evidently introduced by yesterday's commit 9cca11c91,
which perhaps needs more review than it got.
Per report from Andreas Seltenreich and additional examination
of nearby code.
Report: <87oa45qfwq.fsf@credativ.de>
computeLeafRecompressWALData() tried to produce a uint16 WAL log field by
memcpy'ing the first two bytes of an int-sized variable. That accidentally
works on little-endian hardware, but not at all on big-endian. Replay then
thinks it's looking at an ADDITEMS action with zero entries, and reads the
first two bytes of the first TID therein as the next segno/action,
typically leading to "unexpected GIN leaf action" errors during replay.
Even if replay failed to crash, the resulting GIN index page would surely
be incorrect. To fix, just declare the variable as uint16 instead.
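A standalone illustration of the endianness hazard:

    #include <stdio.h>
    #include <string.h>
    #include <stdint.h>

    int
    main(void)
    {
        int         wide = 0x1234;      /* value that fits in 16 bits */
        uint16_t    narrow_ok;
        uint16_t    narrow_bug;

        /* Correct: declare the variable with the width the WAL record expects. */
        uint16_t    field = (uint16_t) wide;
        memcpy(&narrow_ok, &field, sizeof(narrow_ok));

        /* Buggy: the first two bytes of an int hold the low-order bytes only
         * on little-endian hardware; on big-endian they are the high-order
         * bytes, so the copied value comes out as zero here. */
        memcpy(&narrow_bug, &wide, 2);

        printf("correct = 0x%x, memcpy-from-int = 0x%x\n", narrow_ok, narrow_bug);
        return 0;
    }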
Per bug #14295 from Spencer Thomason (much thanks to Spencer for turning
his problem into a self-contained test case). This likely also explains
a previous report of the same symptom from Bernd Helmle.
Back-patch to 9.4 where the problem was introduced (by commit 14d02f0bb).
Discussion: <20160826072658.15676.7628@wrigleys.postgresql.org>
Possible-Report: <2DA7350F7296B2A142272901@eje.land.credativ.lan>
Previously, we threw an error if a dynamic timezone abbreviation did not
match any abbreviation recorded in the referenced IANA time zone entry.
That seemed like a good consistency check at the time, but it turns out
that a number of the abbreviations in the IANA database are things that
Olson and crew made up out of whole cloth. Their current policy is to
remove such names in favor of using simple numeric offsets. Perhaps
unsurprisingly, a lot of these made-up abbreviations have varied in meaning
over time, which meant that our commit b2cbced9e and later changes made
them into dynamic abbreviations. So with newer IANA database versions
that don't mention these abbreviations at all, we fail, as reported in bug
#14307 from Neil Anderson. It's worse than just a few unused-in-the-wild
abbreviations not working, because the pg_timezone_abbrevs view stops
working altogether (since its underlying function tries to compute the
whole view result in one call).
We considered deleting these abbreviations from our abbreviations list, but
the problem with that is that we can't stay ahead of possible future IANA
changes. Instead, let's leave the abbreviations list alone, and treat any
"orphaned" dynamic abbreviation as just meaning the referenced time zone.
It will behave a bit differently than it used to, in that you can't any
longer override the zone's standard vs. daylight rule by using the "wrong"
abbreviation of a pair, but that's better than failing entirely. (Also,
this solution can be interpreted as adding a small new feature, which is
that any abbreviation a user wants can be defined as referencing a time
zone name.)
Back-patch to all supported branches, since this problem affects all
of them when using tzdata 2016f or newer.
Report: <20160902031551.15674.67337@wrigleys.postgresql.org>
Discussion: <6189.1472820913@sss.pgh.pa.us>
When building libpq, ip.c and md5.c were symlinked or copied from
src/backend/libpq into src/interfaces/libpq, but now that we have a
directory specifically for routines that are shared between the server and
client binaries, src/common/, move them there.
Some routines in ip.c were only used in the backend. Keep those in
src/backend/libpq, but rename to ifaddr.c to avoid confusion with the file
that's now in common.
Fix the comment in src/common/Makefile to reflect how libpq actually links
those files.
There are two more files that libpq symlinks directly from src/backend:
encnames.c and wchar.c. I don't feel compelled to move those right now,
though.
Patch by Michael Paquier, with some changes by me.
Discussion: <69938195-9c76-8523-0af8-eb718ea5b36e@iki.fi>
This introduces a numeric sum accumulator, which performs better than
repeatedly calling add_var(). The performance comes from using wider digits
and delaying carry propagation, tallying positive and negative values
separately, and avoiding a round of palloc/pfree on every value. This
speeds up SUM(), as well as other standard aggregates like AVG() and
STDDEV() that also calculate a sum internally.
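A much-simplified, standalone sketch of the two ideas (wider per-digit accumulators with delayed carry propagation, and separate positive/negative tallies); the real NumericSumAccum in numeric.c works on variable-length NBASE digit arrays, and the final sum is the positive tally minus the negative tally after a last carry pass:

    #include <stdint.h>

    #define NBASE 10000
    #define NDIGITS 8           /* fixed number of base-10000 digits, for the sketch */

    typedef struct
    {
        int64_t pos[NDIGITS];   /* tally of positive inputs, least significant first */
        int64_t neg[NDIGITS];   /* tally of absolute values of negative inputs */
        int     uncarried;      /* values added since the last carry propagation */
    } SumAccum;

    /* Fold pending carries back into proper base-NBASE digits. */
    static void
    carry_propagate(int64_t *lanes)
    {
        int64_t carry = 0;

        for (int i = 0; i < NDIGITS; i++)
        {
            int64_t v = lanes[i] + carry;

            carry = v / NBASE;
            lanes[i] = v % NBASE;
        }
    }

    /*
     * Add one value into the accumulator.  Digits go into 64-bit lanes, so a
     * carry pass is needed only occasionally rather than after every
     * addition, and no per-value palloc/pfree is required.
     */
    static void
    accum_add(SumAccum *acc, const int16_t *digits, int is_negative)
    {
        int64_t    *lanes = is_negative ? acc->neg : acc->pos;

        for (int i = 0; i < NDIGITS; i++)
            lanes[i] += digits[i];

        if (++acc->uncarried >= 100000)     /* well before any lane could overflow */
        {
            carry_propagate(acc->pos);
            carry_propagate(acc->neg);
            acc->uncarried = 0;
        }
    }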
Reviewed-by: Andrey Borodin
Discussion: <c0545351-a467-5b76-6d46-4840d1ea8aa4@iki.fi>
While we don't need multiple iterators at the moment, the interface is
nicer and less dangerous this way.
Aleksander Alekseev, with some changes by me.
A majority of callers seem to have believed that this was the API spec
already, because they omitted any check for a NULL result, and hence
would crash on an out-of-shared-memory failure. The original proposal
was to just add such error checks everywhere, but that does nothing to
prevent similar omissions in future. Instead, let's make ShmemAlloc()
throw the error (so we can remove the caller-side checks that do exist),
and introduce a new function ShmemAllocNoError() that has the previous
behavior of returning NULL, for the small number of callers that need
that and are prepared to do the right thing. This also lets us remove
the rather wishy-washy behavior of printing a WARNING for out-of-shmem,
which never made much sense: either the caller has a strategy for
dealing with that, or it doesn't. It's not ShmemAlloc's business to
decide whether a warning is appropriate.
The v10 release notes will need to call this out as a significant
source-code change. It's likely that it will be a bug fix for
extension callers too, but if not, they'll need to change to using
ShmemAllocNoError().
This is nominally a bug fix, but the odds that it's fixing any live
bug are actually rather small, because in general the requests
being made by the unchecked callers were already accounted for in
determining the overall shmem size, so really they ought not fail.
Between that and the possible impact on extensions, no back-patch.
Discussion: <24843.1472563085@sss.pgh.pa.us>
Unlike PL/Tcl, PL/Perl at least made an attempt to clean up after itself
when a function gets redefined. But it was still using TopMemoryContext
for the fn_mcxt of argument/result I/O functions, resulting in the
potential for memory leaks depending on what those functions did, and the
retail alloc/free logic was pretty bulky as well. Fix things to use a
per-function memory context like the other PLs now do. Tweak a couple of
places where things were being done in a not-very-safe order (on the
principle that a memory leak is better than leaving global state
inconsistent after an error). Also make some minor cosmetic adjustments,
mostly in field names, to make the code look similar to the way PL/Tcl does
now wherever it's essentially the same logic.
Michael Paquier and Tom Lane
Discussion: <CAB7nPqSOyAsHC6jL24J1B+oK3p=yyNoFU0Vs_B6fd2kdd5g5WQ@mail.gmail.com>
Formerly, the memory used to represent a PL/Tcl function was allocated with
malloc() or in TopMemoryContext, and we'd leak it all if the function got
redefined during the session. Instead, create a per-function context and
keep everything in or under that context. Add a reference-counting
mechanism (like the one plpgsql has long had) so that we can safely clean
up an old function definition, either immediately if it's not being
executed or at the end of the outermost execution.
Currently, we only detect that a cached function is obsolete when we next
attempt to call that function. So this covers the updated-definition case
but leaves cruft around after DROP FUNCTION. It's not clear whether it's
worth installing a syscache invalidation callback to watch for drops;
none of the other PLs do, so for now we won't do it here either.
Michael Paquier and Tom Lane
Discussion: <CAB7nPqSOyAsHC6jL24J1B+oK3p=yyNoFU0Vs_B6fd2kdd5g5WQ@mail.gmail.com>
The hack embodied in commit 4ba61a487 no longer works after today's change
to allow DatumGetFloat4/Float4GetDatum to be inlined (commit 14cca1bf8).
Probably what's happening is that the faulty compilers are deciding that
the now-inlined assignment is a no-op and so they're not required to
round to float4 width.
We had a bunch of similar issues earlier this year in the degree-based
trig functions, and eventually settled on using volatile intermediate
variables as the least ugly method of forcing recalcitrant compilers
to do what the C standard says (cf commit 82311bcdd). Let's see if
that method works here.
Discussion: <4640.1472664476@sss.pgh.pa.us>
Since we removed SSL renegotiation, there's no longer any reason to
keep track of the amount of data transferred over the link.
Daniel Gustafsson
Discussion: <FEA7F89C-ECDF-4799-B789-2F8DDCBA467F@yesql.se>
Now that we are OK with using static inline functions, we can use them
to avoid function call overhead of pass-by-val versions of Float4GetDatum,
DatumGetFloat8, and Float8GetDatum. Those functions are only a few CPU
instructions long, but they could not be written into macros previously,
because we need a local union variable for the conversion.
I kept the pass-by-ref versions as regular functions. They are very simple
too, but they call palloc() anyway, so shaving a few instructions from the
function call doesn't seem so important there.
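Roughly what the pass-by-val float8 conversions look like as static inlines, assuming a 64-bit Datum and simplified from the real definitions:

    #include <stdint.h>

    typedef uintptr_t Datum;    /* assumes a 64-bit Datum, as on most platforms */

    /* The conversion needs a local union, which a macro could not provide. */
    static inline double
    DatumGetFloat8(Datum X)
    {
        union
        {
            int64_t value;
            double  retval;
        }           myunion;

        myunion.value = (int64_t) X;
        return myunion.retval;
    }

    static inline Datum
    Float8GetDatum(double X)
    {
        union
        {
            double  value;
            int64_t retval;
        }           myunion;

        myunion.value = X;
        return (Datum) myunion.retval;
    }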
Discussion: <dbb82a4a-2c15-ba27-dd0a-009d2aa72b77@iki.fi>
This can't really work because standby_mode expects there to be more
WAL arriving, which there will not ever be because there's no WAL
receiver process to fetch it. Moreover, if standby_mode is on then
hot standby might also be turned on, causing even more strangeness
because that expects read-only sessions to be executing in parallel.
Bernd Helmle reported a case where btree_xlog_delete_get_latestRemovedXid
got confused, but rather than band-aiding individual problems it seems
best to prevent getting anywhere near this state in the first place.
Back-patch to all supported branches.
In passing, also fix some omissions of errcodes in other ereport's in
readRecoveryCommandFile().
Michael Paquier (errcode hacking by me)
Discussion: <00F0B2CEF6D0CEF8A90119D4@eje.credativ.lan>
Where possible, use palloc or pg_malloc instead; otherwise, insert
explicit NULL checks.
Generally speaking, these are places where an actual OOM is quite
unlikely, either because they're in client programs that don't
allocate all that much, or they're very early in process startup
so that we'd likely have had a fork() failure instead. Hence,
no back-patch, even though this is nominally a bug fix.
Michael Paquier, with some adjustments by me
Discussion: <CAB7nPqRu07Ot6iht9i9KRfYLpDaF2ZuUv5y_+72uP23ZAGysRg@mail.gmail.com>
The previous API for this function had it returning a malloc'd string.
That meant that callers had to check for NULL return, which few of them
were doing, and it also meant that callers had to remember to free()
the string later, which required extra logic in most cases.
Instead, make simple_prompt() write into a buffer supplied by the caller.
Anywhere that the maximum required input length is reasonably small,
which is almost all of the callers, we can just use a local or static
array as the buffer instead of dealing with malloc/free.
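The new calling convention looks roughly like this; the prototype below is an assumption based on this description, not a verbatim copy of the header, and the buffer size and flag name are illustrative:

    #include <stdbool.h>
    #include <string.h>

    /* Assumed post-patch convention: the caller supplies the buffer. */
    extern void simple_prompt(const char *prompt, char *destination,
                              size_t destlen, bool echo);

    static char password[100];
    static bool have_password = false;     /* replaces "pointer == NULL" tests */

    static void
    get_password_once(void)
    {
        if (!have_password)
        {
            simple_prompt("Password: ", password, sizeof(password), false);
            have_password = true;
        }
    }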
A fair number of callers used "pointer == NULL" as a proxy for "haven't
requested the password yet". Maintaining the same behavior requires
adding a separate boolean flag for that, which adds back some of the
complexity we save by removing free()s. Nonetheless, this nets out
at a small reduction in overall code size, and considerably less code
than we would have had if we'd added the missing NULL-return checks
everywhere they were needed.
In passing, clean up the API comment for simple_prompt() and get rid
of a very-unnecessary malloc/free in its Windows code path.
This is nominally a bug fix, but it does not seem worth back-patching,
because the actual risk of an OOM failure in any of these places seems
pretty tiny, and all of them are client-side not server-side anyway.
This patch is by me, but it owes a great deal to Michael Paquier
who identified the problem and drafted a patch for fixing it the
other way.
Discussion: <CAB7nPqRu07Ot6iht9i9KRfYLpDaF2ZuUv5y_+72uP23ZAGysRg@mail.gmail.com>
While testing simple_prompt() revisions, I happened to notice that
current initdb behaves rather badly when --pwprompt is specified and
the user miskeys the second password. It complains about the mismatch,
does "rm -rf" on the data directory, and exits. The problem is that
since commit c4a8812cf, there's a standalone backend sitting waiting
for commands at that point. It gets unhappy about its datadir having
gone away, and spews a PANIC message at the user, which is not nice.
(And the shell then adds to the mess with meaningless bleating about a
core dump...) We don't really want that sort of thing to happen unless
there's an internal failure in initdb, which this surely is not.
The best fix seems to be to move the collection of the password
earlier, so that it's done essentially as part of argument collection,
rather than at the rather ad-hoc time it was done before.
Back-patch to 9.6 where the problem was introduced.
Since the hash AM is going to be revamped to have WAL, this is a good
opportunity to clean up the include file a little bit to avoid including
a lot of extra stuff in the future.
Author: Amit Kapila
OpenSSL officially only supports 1.0.1 and newer. Some OS distributions
still provide patches for 0.9.8, but anything older than that is not
interesting anymore. Let's simplify things by removing compatibility code.
Andreas Karlsson, with small changes by me.
The previous behavior was to silently change them to something valid.
That obscured the bugs fixed in commit ea268cdc9, and generally seems
less useful than complaining. Unlike the previous commit, though,
we'll do this in HEAD only --- it's a bit too late to be possibly
breaking third-party code in 9.6.
Discussion: <CA+TgmobNcELVd3QmLD3tx=w7+CokRQiC4_U0txjz=WHpfdkU=w@mail.gmail.com>
Previously pg_xlogdump failed to dump the contents of the WAL file
if the file starts with the continuation WAL record which spans
more than one pages. Since pg_xlogdump assumed that the continuation
record always fits on a page, it could not find the valid WAL record to
start reading from in that case.
This patch changes pg_xlogdump so that it can handle a continuation
WAL record which crosses a page boundary and find the valid record
to start reading from.
Back-patch to 9.3 where pg_xlogdump was introduced.
Author: Pavan Deolasee
Reviewed-By: Michael Paquier and Craig Ringer
Discussion: CABOikdPsPByMiG6J01DKq6om2+BNkxHTPkOyqHM2a4oYwGKsqQ@mail.gmail.com
I found that half a dozen (nearly 5%) of our AllocSetContextCreate calls
had typos in the context-sizing parameters. While none of these led to
especially significant problems, they did create minor inefficiencies,
and it's now clear that expecting people to copy-and-paste those calls
accurately is not a great idea. Let's reduce the risk of future errors
by introducing single macros that encapsulate the common use-cases.
Three such macros are enough to cover all but two special-purpose contexts;
those two calls can be left as-is, I think.
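With the new macros, a typical call looks like this (in-tree sketch):

    #include "postgres.h"
    #include "utils/memutils.h"

    /*
     * Instead of spelling out minContextSize/initBlockSize/maxBlockSize by
     * hand (and risking copy-and-paste typos), use one of the standard size
     * macros: ALLOCSET_DEFAULT_SIZES, ALLOCSET_SMALL_SIZES, or
     * ALLOCSET_START_SMALL_SIZES.
     */
    static MemoryContext
    make_work_context(void)
    {
        return AllocSetContextCreate(CurrentMemoryContext,
                                     "my work context",
                                     ALLOCSET_DEFAULT_SIZES);
    }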
While this patch doesn't in itself improve matters for third-party
extensions, it doesn't break anything for them either, and they can
gradually adopt the simplified notation over time.
In passing, change TopMemoryContext to use the default allocation
parameters. Formerly it could only be extended 8K at a time. That was
probably reasonable when this code was written; but nowadays we create
many more contexts than we did then, so that it's not unusual to have a
couple hundred K in TopMemoryContext, even without considering various
dubious code that sticks other things there. There seems no good reason
not to let it use growing blocks like most other contexts.
Back-patch to 9.6, mostly because that's still close enough to HEAD that
it's easy to do so, and keeping the branches in sync can be expected to
avoid some future back-patching pain. The bugs fixed by these changes
don't seem to be significant enough to justify fixing them further back.
Discussion: <21072.1472321324@sss.pgh.pa.us>
This has been requested a few times, but the use-case for it was never
entirely clear. The reason for adding it now is that transmission of
error reports from parallel workers fails when NLS is active, because
pq_parse_errornotice() wrongly assumes that the existing severity field
is nonlocalized. There are other ways we could have fixed that, but the
other options were basically kluges, whereas this way provides something
that's at least arguably a useful feature along with the bug fix.
Per report from Jakob Egger. Back-patch into 9.6, because otherwise
parallel query is essentially unusable in non-English locales. The
problem exists in 9.5 as well, but we don't want to risk changing
on-the-wire behavior in 9.5 (even though the possibility of new error
fields is specifically called out in the protocol document). It may
be sufficient to leave the issue unfixed in 9.5, given the very limited
usefulness of pq_parse_errornotice in that version.
Discussion: <A88E0006-13CB-49C6-95CC-1A77D717213C@eggerapps.at>
HandleParallelMessages leaked memory into the caller's context. Since it's
called from ProcessInterrupts, there is basically zero certainty as to what
CurrentMemoryContext is, which means we could be leaking into long-lived
contexts. Over the processing of many worker messages that would grow to
be a problem. Things could be even worse than just a leak, if we happened
to service the interrupt while ErrorContext is current: elog.c thinks it
can reset that on its own whim, possibly yanking storage out from under
HandleParallelMessages.
Give HandleParallelMessages its own dedicated context instead, which we can
reset during each call to ensure there's no accumulation of wasted memory.
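A sketch of the pattern, not the committed code verbatim: keep a dedicated context, reset it at each entry so nothing accumulates, and never trust whatever CurrentMemoryContext happens to be when an interrupt is serviced.

    #include "postgres.h"
    #include "utils/memutils.h"

    static MemoryContext hpm_context = NULL;

    static void
    handle_messages_sketch(void)
    {
        MemoryContext oldcontext;

        if (hpm_context == NULL)
            hpm_context = AllocSetContextCreate(TopMemoryContext,
                                                "HandleParallelMessages",
                                                ALLOCSET_DEFAULT_SIZES);
        else
            MemoryContextReset(hpm_context);

        oldcontext = MemoryContextSwitchTo(hpm_context);

        /* ... process queued worker messages here ... */

        MemoryContextSwitchTo(oldcontext);
    }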
Discussion: <16610.1472222135@sss.pgh.pa.us>
The guiding principle for the last few patches in this area apparently
involved throwing darts.
Cosmetic only, but back-patch to 9.6 because there is no reason for
9.6 and HEAD to diverge yet in this file.
Copy the palloc'd strings into the correct context, ie ErrorContext
not wherever the source ErrorData is. This would be a large bug,
except that it appears that all catchers of thrown errors do either
EmitErrorReport or CopyErrorData before doing anything that would
cause transient memory contexts to be cleaned up. Still, it's wrong
and it will bite somebody someday.
Fix failure to copy cursorpos and internalpos.
Utter the appropriate incantations involving recursion_depth, so that
we'll behave sanely if we get an error inside pstrdup. (In general,
the body of this function ought to act like, eg, errdetail().)
Per code reading induced by Jakob Egger's report.
The previous coding here was capable of adding a "parallel worker" context
line to errors that were not, in fact, returned from a parallel worker.
Instead of using an errcontext callback to add that annotation, just paste
it onto the message by hand; this looks uglier but is more reliable.
Discussion: <19757.1472151987@sss.pgh.pa.us>
Commit f0c7b789a added a test case in case.sql that creates and then drops
both an '=' operator and the type it's for. Given the right timing, that
can cause a "cache lookup failed for type" failure in concurrent sessions,
which see the '=' operator as a potential match for '=' in a query, but
then the type is gone by the time they inquire into its properties.
It might be nice to make that behavior more robust someday, but as a
back-patchable solution, adjust the new test case so that the operator
is never visible to other sessions. Like the previous commit, back-patch
to all supported branches.
Discussion: <5983.1471371667@sss.pgh.pa.us>
When there is an identifiable REPLICA IDENTITY index on the target table,
heap_update leaks the id_attrs bitmapset. That's not many bytes, but it
adds up over enough rows, since the code typically runs in a query-lifespan
context. Bug introduced in commit e55704d8b, which did a rather poor job
of cloning the existing use-pattern for RelationGetIndexAttrBitmap().
Per bug #14293 from Zhou Digoal. Back-patch to 9.4 where the bug was
introduced.
Report: <20160824114320.15676.45171@wrigleys.postgresql.org>
ExecReScanAgg's check for whether it could re-use a previously calculated
hashtable neglected the possibility that the Agg node might reference
PARAM_EXEC Params that are not referenced by its input plan node. That's
okay if the Params are in upper tlist or qual expressions; but if one
appears in aggregate input expressions, then the hashtable contents need
to be recomputed when the Param's value changes.
To avoid unnecessary performance degradation in the case of a Param that
isn't within an aggregate input, add logic to the planner to determine
which Params are within aggregate inputs. This requires a new field in
struct Agg, but fortunately we never write plans to disk, so this isn't
an initdb-forcing change.
Per report from Jeevan Chalke. This has been broken since forever,
so back-patch to all supported branches.
Andrew Gierth, with minor adjustments by me
Report: <CAM2+6=VY8ykfLT5Q8vb9B6EbeBk-NGuLbT6seaQ+Fq4zXvrDcA@mail.gmail.com>
Every program having -lpgfeutils in LDFLAGS must have this dependency,
whether or not the program uses a libpgfeutils symbol. Back-patch to
9.6, where libpgfeutils was introduced.
With Asserts off, these variables are set but never used, resulting
in warnings from pickier compilers. Fix that with our standard solution.
Per report from Jeff Janes.
AF_INET is apparently defined in something that's pulled in automatically
on Linux, but the buildfarm says that's not true everywhere. Comparing
to network_gist.c suggests that including <sys/socket.h> ought to fix it,
and the POSIX standard concurs.
This seems to offer significantly better search performance than the
existing GiST opclass for inet/cidr, at least on data with a wide mix
of network mask lengths. (That may suggest that the data splitting
heuristics in the GiST opclass could be improved.)
Emre Hasegeli, with mostly-cosmetic adjustments by me
Discussion: <CAE2gYzxtth9qatW_OAqdOjykS0bxq7AYHLuyAQLPgT7H9ZU0Cw@mail.gmail.com>
If you have previously pinned a segment and decide that you don't
actually want to keep it around until shutdown, this new API lets you
remove the pin. This is pretty trivial except on Windows, where it
requires closing the duplicate handle that was used to implement the
pin.
Thomas Munro and Amit Kapila, reviewed by Amit Kapila and by me.
Previously, the spgSplitTuple action could only create a new upper tuple
containing a single labeled node. This made it useless for opclasses
that prefer to work with fixed sets of nodes (labeled or otherwise),
which meant that restrictive prefixes could not be used with such
node definitions. Change the output field set for the choose() method
to allow it to specify any valid node set for the new upper tuple,
and to specify which of these nodes to place the modified lower tuple in.
In addition to its primary use for fixed node sets, this feature could
allow existing opclasses that use variable node sets to skip a separate
spgAddNode action when splitting a tuple, by setting up the node needed
for the incoming value as part of the spgSplitTuple action. However, care
would have to be taken to add the extra node only when it would not make
the tuple bigger than before. (spgAddNode can enlarge the tuple,
spgSplitTuple can't.)
This is a prerequisite for an upcoming SP-GiST inet opclass, but is
being committed separately to increase the visibility of the API change.
In passing, improve the documentation about the traverse-values feature
that was added by commit ccd6eb49a.
Emre Hasegeli, with cosmetic adjustments and documentation rework by me
Discussion: <CAE2gYzxtth9qatW_OAqdOjykS0bxq7AYHLuyAQLPgT7H9ZU0Cw@mail.gmail.com>
Add a variant of txid_current() that returns NULL if no transaction ID
is assigned. This version can be used even on a standby server,
although it will always return NULL since no transaction IDs can be
assigned during recovery.
Craig Ringer, per suggestion from Jim Nasby. Reviewed by Petr Jelinek
and by me.
Merge several copies of "copy an inet value and adjust the mask length"
code to create a single, conveniently C-callable function. This function
is exported for future use by inet SPGiST support, but it's good cleanup
anyway since we had three slightly-different-for-no-good-reason copies.
(Extracted from a larger patch, to separate new code from refactoring
of old code)
Emre Hasegeli
Due to an error in the abbreviated key abort logic, the most recently
processed SortTuple could be incorrectly marked NULL, resulting in an
incorrect final sort order.
In the worst case, this could result in a corrupt btree index, which
would need to be rebuilt using REINDEX. However, abbreviation doesn't
abort very often, not all data types use it, and only one tuple would
end up in the wrong place, so the practical impact of this mistake may
be somewhat limited.
Report and patch by Peter Geoghegan.
Remove the plpgsql wrapping that hides the context. So now the test
will fail if the work doesn't actually happen in a parallel worker. Run
the test in its own test group to ensure it won't run out of resources
for that.
It is redundant with appendConnStrVal(), which became an extern function
in commit 41f18f021a. This changes the
handling of out-of-memory and of certain inputs for which quoting is
optional, but pg_basebackup has no need for unusual treatment thereof.
The original coding here was not nearly careful enough about quoting
special characters, and it didn't get corner cases right for constructing
the pg_ctl path either. Use join_path_components() and appendShellString()
to do it honestly, so that the string will more likely work if blindly
copied-and-pasted.
While at it, teach appendShellString() not to quote strings that clearly
don't need it, so that the output from initdb doesn't become uglier than
it was before in typical cases where quoting is not needed.
Ryan Murphy, reviewed by Michael Paquier and myself
Discussion: <CAHeEsBeAe1FeBypT3E8R1ZVZU0e8xv3A-7BHg6bEOi=jZny2Uw@mail.gmail.com>
This might have been too much of a foot-gun before 9.6, but with the
new commands-end-at-semicolons parsing rule, the only way to get an
empty query into a script is to explicitly write an extra ";".
So we may as well allow the case.
Fabien Coelho
Patch: <alpine.DEB.2.20.1607090922170.3412@sto>
In particular, left join to pg_authid so that rows in pg_stat_activity
don't disappear if the session's owning user has been dropped.
Also convert a few joins to pg_database to left joins, in the same spirit,
though that case might be harder to hit. We were doing this in other
views already, so it was a bit inconsistent that these views didn't.
Oskari Saarenmaa, with some further tweaking by me
Discussion: <56E87CD8.60007@ohmu.fi>
Obvious brain fade in set_rel_consider_parallel(). Noticed it while
adjusting the adjacent RTE_FUNCTION case.
In 9.6, also make the code look more like what I just did in HEAD
by removing the unnecessary function_rte_parallel_ok subroutine
(it does nothing that expression_tree_walker wouldn't do).
We need to scan the whole parse tree for parallel-unsafe functions.
If there are none, we'll later need to determine whether particular
subtrees contain any parallel-restricted functions. The previous coding
retained no knowledge from the first scan, which is very wasteful in
the common case where the query contains only parallel-safe
functions. We can bypass all of the later scans by remembering that fact.
This provides a small but measurable speed improvement when the case
applies, and shouldn't cost anything when it doesn't.
Patch by me, reviewed by Robert Haas
Discussion: <3740.1471538387@sss.pgh.pa.us>
Clobbering errno during cleanup after an error is an oft-repeated,
easy-to-make mistake. Deal with it here as everywhere else, by saving it
aside and restoring after cleanup, before ereport'ing.
In passing, add a missing errcode declaration in another ereport() call
in the same file, which I noticed while skimming the file looking for
similar problems.
Backpatch to 9.4, where this code was introduced.
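The pattern, as a minimal sketch with illustrative names:

    #include "postgres.h"
    #include <unistd.h>

    static void
    cleanup_and_report(const char *path, int fd)
    {
        int     save_errno = errno;

        if (fd >= 0)
            close(fd);          /* cleanup may clobber errno */

        errno = save_errno;     /* so %m reports the original failure */
        ereport(ERROR,
                (errcode_for_file_access(),
                 errmsg("could not access file \"%s\": %m", path)));
    }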
pcolor was used to represent function arguments that are nominally of
type color, but when using a pre-ANSI C compiler would be passed as the
promoted integer type. We really don't need that anymore.
The regex library used to have a notion of a "collating element" that was
distinct from a "character", but Henry Spencer never actually implemented
his planned support for multi-character collating elements, and the Tcl
crew ripped out most of the stubs for that years ago. The only thing left
that distinguished the "celt" typedef from the "chr" typedef was that
"celt" was supposed to also be able to hold the not-a-character "NOCELT"
value. However, NOCELT was not used anywhere after the MCCE stub removal
changes, which means there's no need for celt to be different from chr.
Removing the separate typedef simplifies matters and also removes a trap
for the unwary, in that celt is signed while chr may not be, so comparisons
could mean different things. There's no bug there today because we
restrict CHR_MAX to be less than INT_MAX, but I think there may have been
such bugs before we did that, and there could be again if anyone ever
decides to fool with the range of chr.
This patch also removes assorted unnecessary casts to "chr" of values
that are already chrs. Many of these seem to be leftover from days when
the code was compatible with pre-ANSI C.
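A standalone illustration of the signedness trap mentioned above (not regex
library code):

    #include <stdio.h>

    typedef int celt_like;              /* signed, like the removed celt */
    typedef unsigned int chr_like;      /* chr may be unsigned */

    int
    main(void)
    {
        celt_like   sc = -1;
        chr_like    uc = 1;

        printf("%d\n", sc < (celt_like) uc);    /* 1: -1 < 1 as signed */
        printf("%d\n", (chr_like) sc < uc);     /* 0: -1 wraps to UINT_MAX */
        return 0;
    }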
On some buildfarm animals the isolation test added in 07ef0351 failed, as
the order in which processes are run after unlocking is not
guaranteed. Add an alternative output for that.
Discussion: <7969.1471484738@sss.pgh.pa.us>
Backpatch: 9.6, like the test in the aforementioned commit
As usual, we've been pretty awful about maintaining these counts.
They're not all that critical, perhaps, but let's get them right
at release time. Also fix 9.5, which I notice is just as bad.
It's probably wrong further back, but the lack of --help=foo
options before 9.5 makes it too painful to count.
These types are storage-compatible with real arrays, but they don't support
toasting, so of course they can't support expansion either.
Per bug #14289 from Michael Overmeyer. Back-patch to 9.5 where expanded
arrays were introduced.
Report: <20160818174414.1529.37913@wrigleys.postgresql.org>
Offer a list of available versions for that extension. Formerly, since
there was no special support for this, it triggered off the UPDATE
keyword and offered a list of table names --- not too helpful.
Jeff Janes, reviewed by Gerdan Santos
Patch: <CAMkU=1z0gxEOLg2BWa69P4X4Ot8xBxipGUiGkXe_tC+raj79-Q@mail.gmail.com>
This file had some unusual comment layout. Most of the comments
introducing structs ended up to the right of the screen and following
the start of the struct. Some comments for struct members ended up
after the member definition.
Fix that by moving comments consistently before what they are
describing. Also add struct tags where they were missing, so that it is
easier to tell what each struct is.
All we need is 4 bytes at the moment, for MD5 authentication. But
upcoming patches for SCRAM authentication will need a salt of a
different length. In any case, it's less scary for the caller to pass
the buffer length explicitly than to assume a certain-sized output buffer.
Author: Michael Paquier
Discussion: <CAB7nPqQvO4sxLFeS9D+NM3wpy08ieZdAj_6e117MQHZAfxBFsg@mail.gmail.com>
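A hedged sketch of the interface change (names illustrative, and random()
merely stands in for the real randomness source):

    #include <stdlib.h>

    static void
    generate_salt(char *salt, int len)
    {
        int     i;

        for (i = 0; i < len; i++)
            salt[i] = (char) (random() & 0xFF);
    }

    static void
    md5_caller_example(void)
    {
        char    md5_salt[4];    /* MD5 still needs exactly 4 bytes */

        generate_salt(md5_salt, sizeof(md5_salt));
        /* a SCRAM caller would simply pass a different length */
        (void) md5_salt;
    }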
This adds a couple of new timezones that are present in the newer
versions of Windows. It also updates comments to reference UTC rather
than GMT, as this change has been made in Windows.
Michael Paquier
This way sendAuthRequest doesn't need to know the details of all the
different authentication methods. This is in preparation for adding SCRAM
authentication, which will add yet another authentication request message
type, with a different payload.
Reviewed-By: Michael Paquier
Discussion: <CAB7nPqQvO4sxLFeS9D+NM3wpy08ieZdAj_6e117MQHZAfxBFsg@mail.gmail.com>
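A hedged, simplified sketch of the refactored routine: the method-specific
payload is passed as opaque bytes, so nothing here knows what an MD5 salt or
a SCRAM challenge looks like.

    #include "postgres.h"
    #include "libpq/libpq.h"
    #include "libpq/pqformat.h"

    static void
    send_auth_request_sketch(AuthRequest areq, const char *extradata, int extralen)
    {
        StringInfoData buf;

        pq_beginmessage(&buf, 'R');     /* AuthenticationRequest message */
        pq_sendint(&buf, (int32) areq, sizeof(int32));
        if (extralen > 0)
            pq_sendbytes(&buf, extradata, extralen);
        pq_endmessage(&buf);
        pq_flush();
    }

    /* e.g. MD5 just passes its salt: send_auth_request_sketch(AUTH_REQ_MD5, salt, 4); */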
INSERT .. ON CONFLICT runs a pre-check of the possible conflicting
constraints before performing the actual speculative insertion. If the
inserted tuple included TOASTed columns, the ON CONFLICT condition was
handled correctly when the conflict was caught by the pre-check; but if
two transactions entered the speculative insertion phase at the same
time, one of them would have to retry, and the code for aborting a
speculative insertion did not handle deletion of the speculatively
inserted TOAST datums correctly.
TOAST deletion would fail with "ERROR: attempted to delete invisible
tuple" as we attempted to remove the TOAST tuples using
simple_heap_delete which reasoned that the given tuples should not be
visible to the command that wrote them.
This commit updates heap_abort_speculative(), which aborts the
conflicting tuple, to delete the associated TOAST datums itself via
toast_delete. As before, the inserted TOAST rows are not marked as
being speculative.
This commit also adds an isolationtester spec test, exercising the
relevant code path. Unfortunately 9.5 cannot handle two waiting
sessions, and thus cannot execute this test.
Reported-By: Viren Negi, Oskari Saarenmaa
Author: Oskari Saarenmaa, edited a bit by me
Bug: #14150
Discussion: <20160519123338.12513.20271@wrigleys.postgresql.org>
Backpatch: 9.5, where ON CONFLICT was introduced
regexp_match() is like regexp_matches(), but it disallows the 'g' flag
and in consequence does not need to return a set. Instead, it returns
a simple text array value, or NULL if there's no match. Previously people
usually got that behavior with a sub-select, but this way is considerably
more efficient.
Documentation adjusted so that regexp_match() is presented first and then
regexp_matches() is introduced as a more complicated version. This is
a bit historically revisionist but seems pedagogically better.
Still TODO: extend contrib/citext to support this function.
Emre Hasegeli, reviewed by David Johnston
Discussion: <CAE2gYzy42sna2ME_e3y1KLQ-4UBrB-eVF0SWn8QG39sQSeVhEw@mail.gmail.com>
Slot creation did not clear all fields. After a server start the shared
memory is zeroed, but when a physical replication slot was created in
the shared memory previously occupied by a logical slot, catalog_xmin
would not be cleared. That in turn would prevent vacuum from doing its
duties.
To fix, initialize all the fields. To make similar future bugs less
likely, zero all of ReplicationSlotPersistentData, and re-order the
rest of the initialization to be in struct member order.
Analysis: Andrew Gierth
Reported-By: md@chewy.com
Author: Michael Paquier
Discussion: <20160705173502.1398.70934@wrigleys.postgresql.org>
Backpatch: 9.4, where replication slots were introduced
In some cases, exiting out of a plpgsql statement due to an error, then
catching the error in a surrounding exception block, led to leakage of
temporary data the statement was working with, because we kept all such
data in the function-lifespan SPI Proc context. Iterating such behavior
many times within one function call thus led to noticeable memory bloat.
To fix, create an additional memory context meant to have statement
lifespan. Since many plpgsql statements, particularly the simpler/more
common ones, don't need this, create it only on demand. Reset this context
at the end of any statement that uses it, and arrange for exception cleanup
to reset it too, thereby fixing the memory-leak issue. Allow a stack of
such contexts to exist to handle cases where a compound statement needs
statement-lifespan data that persists across calls of inner statements.
While at it, clean up code and improve comments referring to the existing
short-term memory context, which by plpgsql convention is the per-tuple
context of the eval_econtext ExprContext. We now uniformly refer to that
as the eval_mcontext, whereas the new statement-lifespan memory contexts
are called stmt_mcontext.
This change adds some context-creation overhead, but on the other hand
it allows removal of some retail pfree's in favor of context resets.
On balance it seems to be about a wash performance-wise.
In principle this is a bug fix, but it seems too invasive for a back-patch,
and the infrequency of complaints weighs against taking the risk in the
back branches. So we'll fix it only in HEAD, at least for now.
Tom Lane, reviewed by Pavel Stehule
Discussion: <17863.1469142152@sss.pgh.pa.us>
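The create-on-demand pattern, as a hedged sketch (the parent-context choice
and field name are assumptions, and the real code also keeps a stack of these
contexts for nested statements):

    #include "postgres.h"
    #include "utils/memutils.h"
    #include "plpgsql.h"

    static MemoryContext
    get_stmt_mcontext_sketch(PLpgSQL_execstate *estate)
    {
        if (estate->stmt_mcontext == NULL)
            estate->stmt_mcontext =
                AllocSetContextCreate(CurrentMemoryContext,
                                      "PLpgSQL per-statement data",
                                      ALLOCSET_DEFAULT_SIZES);
        return estate->stmt_mcontext;
    }

    /*
     * At the end of any statement that used it, and from exception cleanup:
     *     MemoryContextReset(estate->stmt_mcontext);
     */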
The performance overhead of this can be significant on Windows, and most
people don't have the tools to view it anyway as Windows does not have
native support for process titles.
Discussion: <0A3221C70F24FB45833433255569204D1F5BE3E8@G01JPEXMBYT05>
Takayuki Tsunakawa
We implement a dozen or so parameterless functions that the SQL standard
defines special syntax for. Up to now, that was done by converting them
into more or less ad-hoc constructs such as "'now'::text::date". That's
messy for multiple reasons: it exposes what should be implementation
details to users, and performance is worse than it needs to be in several
cases. To improve matters, invent a new expression node type
SQLValueFunction that can represent any of these parameterless functions.
Bump catversion because this changes stored parsetrees for rules.
Discussion: <30058.1463091294@sss.pgh.pa.us>
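Roughly, such a node only needs to record which function was written and what
it returns; a hedged paraphrase of the shape (the enum is abbreviated here,
and the committed definition lives in nodes/primnodes.h, which also supplies
the Expr, Oid, and int32 types used below):

    typedef enum SQLValueFunctionOp
    {
        SVFOP_CURRENT_DATE,
        SVFOP_CURRENT_TIME,
        SVFOP_CURRENT_TIMESTAMP,
        SVFOP_LOCALTIME,
        SVFOP_LOCALTIMESTAMP,
        SVFOP_CURRENT_ROLE,
        SVFOP_CURRENT_USER,
        SVFOP_SESSION_USER
        /* ... */
    } SQLValueFunctionOp;

    typedef struct SQLValueFunction
    {
        Expr        xpr;
        SQLValueFunctionOp op;  /* which SQL-standard function */
        Oid         type;       /* result type OID */
        int32       typmod;     /* e.g. precision for CURRENT_TIME(n) */
        int         location;   /* token location, or -1 if unknown */
    } SQLValueFunction;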
I'm not sure which bozo thought it's a problem to use strtol() only
for its endptr result, but silence the warning using same method
used elsewhere.
Report: <f845d3a6-5328-3e2a-924f-f8e91aa2b6d2@2ndquadrant.com>
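Minimal illustration of the pattern (not the actual psql code): strtol() is
called only so that endptr tells us whether the whole string was numeric, and
its return value is deliberately discarded.

    #include <stdbool.h>
    #include <stdlib.h>

    static bool
    looks_like_integer(const char *str)
    {
        char   *endptr;

        (void) strtol(str, &endptr, 10);    /* value intentionally ignored */
        return (endptr != str && *endptr == '\0');
    }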
This is somewhat cosmetic, since as long as you know what you are looking
at, "10.0" is a serviceable substitute for "10". But there is a potential
for confusion between version numbers with minor numbers and those without
--- we don't want people asking "why is psql saying 10.0 when my server is
10.2". Therefore, back-patch as far as practical, which turns out to be
9.3. I could have redone the patch to use fprintf(stderr) in place of
psql_error(), but it seems more work than is warranted for branches that
will be EOL or nearly so by the time v10 comes out.
Although only psql seems to contain any code that needs this, I chose
to put the support function into fe_utils, since it seems likely we'll
need it in other client programs in future. (In 9.3-9.5, use dumputils.c,
the predecessor of fe_utils/string_utils.c.)
In HEAD, also fix the backend code that whines about loadable-library
version mismatch. I don't see much need to back-patch that.
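A hedged sketch of what the support function has to do (its actual name and
signature may differ): render the numeric server version in the user-visible
form, one-part major for v10 and later, two-part major before that.

    #include <stdbool.h>
    #include <stdio.h>

    static void
    format_version_sketch(int version_num, bool include_minor,
                          char *buf, size_t buflen)
    {
        if (version_num >= 100000)
        {
            /* new scheme: 100002 => "10.2", or major-only "10" */
            if (include_minor)
                snprintf(buf, buflen, "%d.%d",
                         version_num / 10000, version_num % 10000);
            else
                snprintf(buf, buflen, "%d", version_num / 10000);
        }
        else
        {
            /* old scheme: 90603 => "9.6.3", or "9.6" */
            if (include_minor)
                snprintf(buf, buflen, "%d.%d.%d", version_num / 10000,
                         (version_num / 100) % 100, version_num % 100);
            else
                snprintf(buf, buflen, "%d.%d",
                         version_num / 10000, (version_num / 100) % 100);
        }
    }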
Up to now we've manually adjusted these numbers in several different
Makefiles at the start of each development cycle. While that's not
much work, it's easily forgotten, so let's get rid of it by setting
the SO_MINOR_VERSION values directly from $(MAJORVERSION).
In the case of libpq, this dev cycle's value of SO_MINOR_VERSION happens
to be "10" anyway, so this switch is transparent. For ecpg's shared
libraries, this will result in skipping one or two minor version numbers
between v9.6 and v10, which seems like no big problem; and it was a bit
inconsistent that they didn't have equal minor version numbers anyway.
Discussion: <21969.1471287988@sss.pgh.pa.us>
Commit af33039317 aimed to reduce
leakage from tqueue.c, which is good. Unfortunately, by changing the
memory context in which all of gather_readnext() executes, it also
changed the context in which ExecShutdownGatherWorkers executes, which
is not good, because that function eventually causes a call to
ExecParallelRetrieveInstrumentation, which proceeds to allocate
planstate->worker_instrument in a short-lived context, causing a
crash.
Rushabh Lathia, reviewed by Amit Kapila and by me.
Once upon a time, it made sense for the ecpg preprocessor to have its
own version number, because it used a manually-maintained grammar that
wasn't always in sync with the core grammar. But those days are
thankfully long gone, leaving only a maintenance nuisance behind.
Let's use the PG v10 version numbering changeover as an excuse to get
rid of the ecpg version number and just have ecpg identify itself by
PG_VERSION. From the user's standpoint, ecpg will go from "4.12" in
the 9.6 branch to "10" in the 10 branch, so there's no failure of
monotonicity.
Discussion: <1471332659.4410.67.camel@postgresql.org>
Prior to commit 7882c3b0b9, it was
possible to use LWLocks within DSM segments, but that commit broke
this use case by switching from a doubly linked list to a circular
linked list. Switch back, using a new bit of general infrastructure
for maintaining lists of PGPROCs.
Thomas Munro, reviewed by me.
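The property that makes this work across processes, sketched with
illustrative type names (not the actual proclist API): elements are linked by
PGPROC array index rather than by pointer, so a list header stored in a DSM
segment remains meaningful even though each backend may map the segment at a
different address.

    typedef struct proclist_node_sketch
    {
        int     next_pgprocno;      /* index of next element, or -1 */
        int     prev_pgprocno;      /* index of previous element, or -1 */
    } proclist_node_sketch;

    typedef struct proclist_head_sketch
    {
        int     head_pgprocno;      /* first element, or -1 if empty */
        int     tail_pgprocno;      /* last element, or -1 if empty */
    } proclist_head_sketch;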