Commit Graph

3148 Commits

Tom Lane
d86a2b7b58 Re-establish postgres_fdw connections after server or user mapping changes.
Previously, postgres_fdw would keep on using an existing connection even
if the user did ALTER SERVER or ALTER USER MAPPING commands that should
affect connection parameters.  Teach it to watch for catcache invals
on these catalogs and re-establish connections when the relevant catalog
entries change.  Per bug #14738 from Michal Lis.

In passing, clean up some rather crufty decisions in commit ae9bfc5d6
about where fields of ConnCacheEntry should be reset.  We now reset
all the fields whenever we open a new connection.
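
For illustration, a minimal sketch of the invalidation hook this
describes; field and helper names are stand-ins, not the committed
postgres_fdw code:

    #include "postgres.h"
    #include "utils/hsearch.h"
    #include "utils/inval.h"
    #include "utils/syscache.h"

    typedef struct ConnCacheEntrySketch    /* illustrative stand-in */
    {
        void       *conn;                  /* cached libpq connection */
        bool        invalidated;           /* reconnect before next use? */
        uint32      server_hashvalue;      /* syscache hash of the server row */
        uint32      mapping_hashvalue;     /* syscache hash of the mapping row */
    } ConnCacheEntrySketch;

    static HTAB *ConnectionHash = NULL;

    /* Catcache inval callback: flag affected connections as stale. */
    static void
    conn_inval_callback(Datum arg, int cacheid, uint32 hashvalue)
    {
        HASH_SEQ_STATUS scan;
        ConnCacheEntrySketch *entry;

        hash_seq_init(&scan, ConnectionHash);
        while ((entry = (ConnCacheEntrySketch *) hash_seq_search(&scan)) != NULL)
        {
            /* hashvalue == 0 means "flush everything" */
            if (hashvalue == 0 ||
                (cacheid == FOREIGNSERVEROID &&
                 entry->server_hashvalue == hashvalue) ||
                (cacheid == USERMAPPINGOID &&
                 entry->mapping_hashvalue == hashvalue))
                entry->invalidated = true; /* re-establish on next use */
        }
    }

    /* Registered once, when the connection cache is first created. */
    static void
    register_conn_inval_callbacks(void)
    {
        CacheRegisterSyscacheCallback(FOREIGNSERVEROID,
                                      conn_inval_callback, (Datum) 0);
        CacheRegisterSyscacheCallback(USERMAPPINGOID,
                                      conn_inval_callback, (Datum) 0);
    }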

Kyotaro Horiguchi, reviewed by Ashutosh Bapat and myself.
Back-patch to 9.3 where postgres_fdw appeared.

Discussion: https://postgr.es/m/20170710113917.7727.10247@wrigleys.postgresql.org
2017-07-21 12:51:38 -04:00
Tom Lane
b217459877 Fix low-probability leaks of PGresult objects in the backend.
We had three occurrences of essentially the same coding pattern
wherein we tried to retrieve a query result from a libpq connection
without blocking.  In the case where PQconsumeInput failed (typically
indicating a lost connection), all three loops simply gave up and
returned, forgetting to clear any previously-collected PGresult
object.  Since those are malloc'd not palloc'd, the oversight results
in a process-lifespan memory leak.

One instance, in libpqwalreceiver, is of little significance because
the walreceiver process would just quit anyway if its connection fails.
But we might as well fix it.

The other two instances, in postgres_fdw, are somewhat more worrisome
because at least in principle the scenario could be repeated, allowing
the amount of memory leaked to build up to something worth worrying
about.  Moreover, in these cases the loops contain CHECK_FOR_INTERRUPTS
calls, as well as other calls that could potentially elog(ERROR),
providing another way to exit without having cleared the PGresult.
Here we need to add PG_TRY logic similar to what exists in quite a
few other places in postgres_fdw.
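
A minimal sketch of the corrected pattern (illustrative, not the literal
postgres_fdw code): poll the connection without blocking, and make sure
the malloc'd PGresult is cleared on every exit path, including an
elog(ERROR) thrown from CHECK_FOR_INTERRUPTS():

    #include "postgres.h"

    #include "libpq-fe.h"
    #include "miscadmin.h"

    static PGresult *
    get_result_safely(PGconn *conn)
    {
        PGresult   *volatile last_res = NULL;

        PG_TRY();
        {
            for (;;)
            {
                PGresult   *res;

                while (PQisBusy(conn))
                {
                    CHECK_FOR_INTERRUPTS();     /* may throw elog(ERROR) */
                    if (!PQconsumeInput(conn))  /* lost connection, etc. */
                        elog(ERROR, "connection error: %s",
                             PQerrorMessage(conn));
                    /* real code waits on the socket instead of spinning */
                }

                res = PQgetResult(conn);
                if (res == NULL)
                    break;              /* query is complete */
                PQclear(last_res);
                last_res = res;
            }
        }
        PG_CATCH();
        {
            PQclear(last_res);          /* the fix: no process-lifespan leak */
            PG_RE_THROW();
        }
        PG_END_TRY();

        return last_res;
    }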

Coverity noted the libpqwalreceiver bug; I found the other two cases
by checking all calls of PQconsumeInput.

Back-patch to all supported versions as appropriate (9.2 lacks
postgres_fdw, so this is really quite unexciting for that branch).

Discussion: https://postgr.es/m/22620.1497486981@sss.pgh.pa.us
2017-06-15 15:03:53 -04:00
Robert Haas
fd849956cc postgres_fdw: Allow cancellation of transaction control commands.
Commit f039eaac71, later back-patched
with commit 1b812afb0e, allowed many of
the queries issued by postgres_fdw to fetch remote data to respond to
cancel interrupts in a timely fashion.  However, it didn't do anything
about the transaction control commands, which remained
noninterruptible.

Improve the situation by changing do_sql_command() to retrieve query
results using pgfdw_get_result(), which uses the asynchronous
interface to libpq so that it can check for interrupts every time
libpq returns control.  Since this might result in a situation
where we can no longer be sure that the remote transaction state
matches the local transaction state, add a facility to force all
levels of the local transaction to abort if we've lost track of
the remote state; without this, an apparently-successful commit of
the local transaction might fail to commit changes made on the
remote side.  Also, add a 60-second timeout for queries issued during
transaction abort; if that expires, give up and mark the state of
the connection as unknown.  Drop all such connections when we exit
the local transaction.  Together, these changes mean that if we're
aborting the local toplevel transaction anyway, we can just drop the
remote connection in lieu of waiting (possibly for a very long time)
for it to complete an abort.

This still leaves quite a bit of room for improvement.  PQcancel()
has no asynchronous interface, so if we get stuck sending the cancel
request we'll still hang.  Also, PQsetnonblocking() is not used, which
means we could block uninterruptibly when sending a query.  There
might be some other optimizations possible as well.  Nonetheless,
this allows us to escape a wait for an unresponsive remote server
quickly in many more cases than previously.
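
Roughly what an interruptible do_sql_command() looks like, assuming an
asynchronous result-fetching helper such as the sketch shown above for
commit b217459877 (names and error texts here are illustrative):

    #include "postgres.h"
    #include "libpq-fe.h"

    /* interruptible result fetch, e.g. the sketch shown further above */
    extern PGresult *get_result_safely(PGconn *conn);

    static void
    do_sql_command_sketch(PGconn *conn, const char *sql)
    {
        PGresult   *res;

        if (!PQsendQuery(conn, sql))    /* queue the command, don't block */
            elog(ERROR, "could not send command to remote server: %s",
                 PQerrorMessage(conn));

        res = get_result_safely(conn);  /* polls libpq, honors interrupts */
        if (PQresultStatus(res) != PGRES_COMMAND_OK)
            elog(ERROR, "remote command failed: %s",
                 PQresultErrorMessage(res));
        PQclear(res);
    }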

Report by Suraj Kharage.  Patch by me and Rafia Sabih.  Review
and testing by Amit Kapila and Tushar Ahuja.

Discussion: http://postgr.es/m/CAF1DzPU8Kx+fMXEbFoP289xtm3bz3t+ZfxhmKavr98Bh-C0TqQ@mail.gmail.com
2017-06-07 15:24:22 -04:00
Peter Eisentraut
47b4913050 Fix new warnings from GCC 7
This addresses the new warning types -Wformat-truncation and
-Wformat-overflow that are part of -Wall, via -Wformat, in GCC 7.
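
For illustration (not code from the tree), the sort of call GCC 7 flags:
an snprintf() whose worst-case output the compiler can show may not fit
the destination buffer:

    #include <stdio.h>

    void
    format_range(char dest[8], int lo, int hi)
    {
        /*
         * GCC 7 typically warns here: the "%d-%d" output may need up to
         * 23 characters plus the terminator, but only 8 bytes are
         * available.  Typical fixes: a larger buffer, an allocating
         * helper, or checking snprintf()'s return value.
         */
        snprintf(dest, 8, "%d-%d", lo, hi);
    }
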
2017-05-16 08:52:39 -04:00
Andrew Dunstan
69db3b0cfc Suppress indentation from Data::Dumper in regression tests
Ultra-modern versions of the perl Data::Dumper module have apparently
changed how they indent output. Instead of trying to keep up, we choose
to tell it to suppress all indentation in the hstore_plperl regression
tests.

Backpatch to 9.5 where this feature was introduced.
2017-05-14 01:13:35 -04:00
Andres Freund
75784859cd Fix race condition leading to hanging logical slot creation.
The snapshot assembly during the creation of logical slots relied on
waiting for transactions in xl_running_xacts to end, by checking for
their commit/abort records.  Unfortunately, despite locking, it is
possible to see an xl_running_xacts record listing transactions as
running that have already WAL-logged a commit/abort record, because the
locking only prevents the ProcArray from being adjusted, and the commit
record has to be logged first.

That led to either delayed or hanging snapshot creation, because
snapbuild.c would wait "forever" to see commit/abort records for some
transactions.  That hang would resolve only if an xl_running_xacts record
without any running transactions happened to be logged, which is far from
certain on a busy server.

It's impractical to prevent that via more heavyweight locking; the
likelihood of deadlocks and significantly increased contention would be
too great.

Instead change the initial snapshot creation to be solely based on
tracking the oldest running transaction via
xl_running_xacts->oldestRunningXid - that actually ends up
significantly simplifying the code.  That has two disadvantages:
1) Because we cannot fully "trust" the contents of xl_running_xacts,
   we cannot use it to build the initial snapshot.  Instead we have to
   wait twice for all running transactions to finish.
2) Previously a slot, unless the race occurred, could be created as soon
   as all transactions perceived as running had been seen to commit or
   abort; now we have to wait for the next xl_running_xacts record.
To address that, trigger logging a new xl_running_xacts record from
within snapbuild.c exactly when necessary.

Unfortunately snapbuild.c's SnapBuild is stored on disk, one of the
stupider ideas of a certain Mr Freund, so we can't change it in a
minor release.  As this is going to be backpatched, we have to hack
around a bit to keep on-disk compatibility.  A later commit will
rejigger that on master.

Author: Andres Freund, based on a quite different patch from Petr Jelinek
Analyzed-By: Petr Jelinek
Reviewed-By: Petr Jelinek
Discussion: https://postgr.es/m/f37e975c-908f-858e-707f-058d3b1eb214@2ndquadrant.com
Backpatch: 9.4-, where logical decoding was introduced
2017-05-13 14:21:00 -07:00
Peter Eisentraut
86e640a694 postgres_fdw: Fix join push down with extensions
Objects in an extension are shippable to a foreign server if the
extension is part of the foreign server definition's shippable
extensions list.  But this was not properly considered in some cases
when checking whether a join condition that uses an object from a
shippable extension can be pushed to a foreign server, so the join
would never be pushed down in those cases.

To fix, the list of extensions needs to be made available in the fpinfo
of the relation being considered for pushdown before any expressions are
assessed for shippability.  Teach foreign_join_ok() to do that for a
join relation.

David Rowley and Ashutosh Bapat, per report from David Rowley

Reduced version of patch 332bec1e60 for
back-patch to 9.6, the first release with join pushdown.
2017-04-26 09:14:21 -04:00
Tom Lane
d51279433c Further fix pg_trgm's extraction of trigrams from regular expressions.
Commit 9e43e8714 turns out to have been insufficient: not only is it
necessary to track tentative parent links while considering a set of
arc removals, but it's necessary to track tentative flag additions
as well.  This is because we always merge arc target states into
arc source states; therefore, when considering a merge of the final
state with some other, it is the other state that will acquire a new
TSTATE_FIN bit.  If there's another arc for the same color trigram
that would cause merging of that state with the initial state, we
failed to recognize the problem.  The test cases for the prior commit
evidently only exercised situations where a tentative merge with the
initial state occurs before one with the final state.  If it goes the
other way around, we'll happily merge the initial and final states,
either producing a broken final graph that would never match anything,
or triggering the Assert added by the prior commit.

It's tempting to consider switching the merge direction when the merge
involves the final state, but I lack the time to analyze that idea in
detail.  Instead just keep track of the flag changes that would result
from proposed merges, in the same way that the prior commit tracked
proposed parent links.

Along the way, add some more debugging support, because I'm not entirely
confident that this is the last bug here.  And tweak matters so that
the transformed.dot file uses small integers rather than pointer values
to identify states; that makes it more readable if you're just eyeballing
it rather than fooling with Graphviz.  And rename a couple of identically
named struct fields to reduce confusion.

Per report from Corey Csuhta.  Add a test case based on his example.
(Note: this case does not trigger the bug under 9.3, apparently because
its different measurement of costs causes it to stop merging states before
it hits the failure.  I spent some time trying to find a variant that would
fail in 9.3, without success; but I'm sure such cases exist.)

Like the previous patch, back-patch to 9.3 where this code was added.

Report: https://postgr.es/m/E2B01A4B-4530-406B-8D17-2F67CF9A16BA@csuhta.com
2017-04-14 14:52:03 -04:00
Tom Lane
a70b18b894 Fix regexport.c to behave sanely with lookaround constraints.
regexport.c thought it could just ignore LACON arcs, but the correct
behavior is to treat them as satisfiable while consuming zero input
(rather reminiscently of commit 9f1e642d5).  Otherwise, the emitted
simplified-NFA representation may contain no paths leading from initial
to final state, which unsurprisingly confuses pg_trgm, as seen in
bug #14623 from Jeff Janes.

Since regexport's output representation has no concept of an arc that
consumes zero input, recurse internally to find the next normal arc(s)
after any LACON transitions.  We'd be forced into changing that
representation if a LACON could be the last arc reaching the final
state, but fortunately the regex library never builds NFAs with such
a configuration, so there always is a next normal arc.

Back-patch to 9.3 where this logic was introduced.

Discussion: https://postgr.es/m/20170413180503.25948.94871@wrigleys.postgresql.org
2017-04-13 17:18:35 -04:00
Joe Conway
dd93afca3a Silence compiler warning in sepgsql
<selinux/label.h> includes <stdbool.h>, which creates an incompatible
redefinition of "bool".  We don't care if <stdbool.h> redefines
"true"/"false"; those are close enough.

Complaint and initial patch by Mike Palmiotto. Final approach per
Tom Lane's suggestion, as discussed on hackers. Backpatching to
all supported branches.

Discussion: https://postgr.es/m/flat/623bcaae-112e-ced0-8c22-a84f75ae0c53%40joeconway.com
2017-04-06 14:24:41 -07:00
Peter Eisentraut
18dc2aee5f Spelling fixes
From: Josh Soref <jsoref@gmail.com>
2017-03-14 13:45:54 -04:00
Noah Misch
08c6d42c8c Fix pg_file_write() error handling.
Detect fclose() failures; given "ln -s /dev/full $PGDATA/devfull",
"pg_file_write('devfull', 'x', true)" now fails as it should.  Don't
leak a stream when fwrite() fails.  Remove a born-ineffective test that
aimed to skip zero-length writes.  Back-patch to 9.2 (all supported
versions).
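
A sketch of the error handling being described (helper name invented;
the real code lives in contrib/adminpack):

    #include "postgres.h"

    #include <stdio.h>

    static int64
    write_file_checked(const char *path, const char *data, size_t len)
    {
        FILE       *f = fopen(path, "wb");
        size_t      written;

        if (f == NULL)
            elog(ERROR, "could not open \"%s\": %m", path);

        written = fwrite(data, 1, len, f);
        if (written != len)
        {
            fclose(f);              /* don't leak the stream on failure */
            elog(ERROR, "could not write \"%s\": %m", path);
        }
        if (fclose(f) != 0)         /* catches e.g. ENOSPC on /dev/full */
            elog(ERROR, "could not close \"%s\": %m", path);

        return (int64) written;
    }
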
2017-03-12 19:35:49 -04:00
Joe Conway
8469923f3e Fix ancient connection leak in dblink
When using unnamed connections with dblink, every time a new
connection is made, the old one is leaked. Fix that.

This has been an issue probably since dblink was first committed.
Someone complained almost ten years ago, but apparently I decided
not to pursue it at the time, and neither did anyone else, so it
slipped between the cracks. Now that someone else has complained,
fix in all supported branches.
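
The gist of the fix, as a sketch (the struct is an illustrative stand-in
for dblink's cached unnamed connection, not the actual code):

    #include "libpq-fe.h"

    typedef struct
    {
        PGconn     *conn;       /* the single unnamed connection, if any */
    } UnnamedConnSketch;

    static void
    set_unnamed_connection(UnnamedConnSketch *pconn, const char *connstr)
    {
        /* Close any previous unnamed connection instead of leaking it. */
        if (pconn->conn != NULL)
            PQfinish(pconn->conn);
        pconn->conn = PQconnectdb(connstr);
    }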

Discussion: (orig) https://postgr.es/m/flat/F680AB59-6D6F-4026-9599-1BE28880273D%40decibel.org#F680AB59-6D6F-4026-9599-1BE28880273D@decibel.org
Discussion: (new) https://postgr.es/m/flat/0A3221C70F24FB45833433255569204D1F6ADF8C@G01JPEXMBYT05
Reported by: Jim Nasby and Takayuki Tsunakawa
2017-03-11 13:32:26 -08:00
Tom Lane
16500d2278 Fix contrib/pg_trgm's extraction of trigrams from regular expressions.
The logic for removing excess trigrams from the result was faulty.
It intends to avoid merging the initial and final states of the NFA,
which is necessary, but in testing whether removal of a specific trigram
would cause that, it failed to consider the combined effects of all the
state merges that that trigram's removal would cause.  This could result
in a broken final graph that would never match anything, leading to GIN
or GiST indexscans not finding anything.

To fix, add a "tentParent" field that is used only within this loop,
and set it to show state merges that we are tentatively going to do.
While examining a particular arc, we must chase up through tentParent
links as well as regular parent links (the former can only appear atop
the latter), and we must account for state init/fin flag merges that
haven't actually been done yet.

To simplify the latter, combine the separate init and fin bool fields
into a bitmap flags field.  I also chose to get rid of the "children"
state list, which seems entirely inessential.

Per bug #14563 from Alexey Isayko, which the added test cases are based on.
Back-patch to 9.3 where this code was added.

Report: https://postgr.es/m/20170222111446.1256.67547@wrigleys.postgresql.org
Discussion: https://postgr.es/m/8816.1487787594@sss.pgh.pa.us
2017-02-22 15:04:07 -05:00
Heikki Linnakangas
50baad4433 Fix typo also in expected output.
Commit 181bdb90ba fixed the typo in the .sql file, but forgot to update the
expected output.
2017-02-06 12:04:28 +02:00
Heikki Linnakangas
90e8599219 Fix typos in comments.
Backpatch to all supported versions, where applicable, to make backpatching
of future fixes go more smoothly.

Josh Soref

Discussion: https://www.postgresql.org/message-id/CACZqfqCf+5qRztLPgmmosr-B0Ye4srWzzw_mo4c_8_B_mtjmJQ@mail.gmail.com
2017-02-06 11:34:15 +02:00
Tom Lane
4103a2f200 Invalidate cached plans on FDW option changes.
This fixes problems where a plan must change but fails to do so,
as seen in a bug report from Rajkumar Raghuwanshi.

For ALTER FOREIGN TABLE OPTIONS, do this through the standard method of
forcing a relcache flush on the table.  For ALTER FOREIGN DATA WRAPPER
and ALTER SERVER, just flush the whole plan cache on any change in
pg_foreign_data_wrapper or pg_foreign_server.  That matches the way
we handle some other low-probability cases such as opclass changes, and
it's unclear that the case arises often enough to be worth working harder.
Besides, that gives a patch that is simple enough to back-patch with
confidence.

Back-patch to 9.3.  In principle we could apply the code change to 9.2 as
well, but (a) we lack postgres_fdw to test it with, (b) it's doubtful that
anyone is doing anything exciting enough with FDWs that far back to need
this desperately, and (c) the patch doesn't apply cleanly.

Patch originally by Amit Langote, reviewed by Etsuro Fujita and Ashutosh
Bapat, who each contributed substantial changes as well.

Discussion: https://postgr.es/m/CAKcux6m5cA6rRPTKkqVdJ-R=KKDfe35Q_ZuUqxDSV_4hwga=og@mail.gmail.com
2017-01-06 14:12:52 -05:00
Joe Conway
51126ccdb1 Make dblink try harder to form useful error messages
When libpq encounters a connection-level error, e.g. runs out of memory
while forming a result, there will be no error associated with PGresult,
but a message will be placed into PGconn's error buffer. postgres_fdw
takes care to use the PGconn error message when PGresult does not have
one, but dblink has been negligent in that regard. Modify dblink to mirror
what postgres_fdw has been doing.
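
A sketch of the fallback order described above (not the literal dblink
code):

    #include "libpq-fe.h"

    static const char *
    best_error_message(PGconn *conn, PGresult *res)
    {
        const char *msg = res ? PQresultErrorMessage(res) : NULL;

        /* e.g. out-of-memory while forming the result: only PGconn knows */
        if (msg == NULL || msg[0] == '\0')
            msg = PQerrorMessage(conn);
        if (msg == NULL || msg[0] == '\0')
            msg = "unknown error";
        return msg;
    }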

Back-patch to all supported branches.

Author: Joe Conway
Reviewed-By: Tom Lane
Discussion: https://postgr.es/m/02fa2d90-2efd-00bc-fefc-c23c00eb671e%40joeconway.com
2016-12-22 09:47:55 -08:00
Joe Conway
150841fb94 Protect dblink from invalid options when using postgres_fdw server
When dblink uses a postgres_fdw server name for its connection, it
is possible for the connection to have options that are invalid
with dblink (e.g. "updatable"). The recommended way to avoid this
problem is to use dblink_fdw servers instead.  However, there are use
cases for using postgres_fdw, and possibly other FDWs, for dblink
connection options; therefore, protect against trying to use any
options that do not apply by calling is_valid_dblink_option() when
building the connection string from the options.

Back-patch to 9.3. Although 9.2 supports FDWs for connection info,
is_valid_dblink_option() did not yet exist, and neither did
postgres_fdw, at least in the postgres source tree. Given the lack
of previous complaints, fixing that seems too invasive/not worth it.

Author: Corey Huinker
Reviewed-By: Joe Conway
Discussion: https://postgr.es/m/CADkLM%3DfWyXVEyYcqbcRnxcHutkP45UHU9WD7XpdZaMfe7S%3DRwA%40mail.gmail.com
2016-12-22 09:19:34 -08:00
Tom Lane
60ad2ff586 Give a useful error message if uuid-ossp is built without preconfiguration.
Before commit b8cc8f947, it was possible to build contrib/uuid-ossp without
having told configure you meant to; you could just cd into that directory
and "make".  That no longer works because the code depends on configure to
have done header and library probes, but the ensuing error messages are
not so easy to interpret if you're not an old C hand.  We've gotten a
couple of complaints recently from people trying to do this the low-tech
way, so add an explicit #error directing the user to use --with-uuid.

(In principle we might want to do something similar in the other
optionally-built contrib modules; but I don't think any of the others have
ever worked without preconfiguration, so there are no bad habits to break
people of.)

Back-patch to 9.4 where the previous commit came in.

Report: https://postgr.es/m/CAHeEsBf42AWTnk=1qJvFv+mYgRFm07Knsfuc86Ono8nRjf3tvQ@mail.gmail.com
Report: https://postgr.es/m/CAKYdkBrUaZX+F6KpmzoHqMtiUqCtAW_w6Dgvr6F0WTiopuGxow@mail.gmail.com
2016-12-22 11:19:13 -05:00
Joe Conway
88673c9d3b Improve dblink error message when remote does not provide it
When dblink or postgres_fdw detects an error on the remote side of the
connection, it will try to construct a local error message as best it
can using libpq's PQresultErrorField(). When no primary message is
available, it was bailing out with an unhelpful "unknown error". Make
that message better and more style guide compliant. Per discussion
on hackers.

Backpatch to 9.2 except postgres_fdw which didn't exist before 9.3.

Discussion: https://postgr.es/m/19872.1482338965%40sss.pgh.pa.us
2016-12-21 15:52:01 -08:00
Tom Lane
512f27cb35 In contrib/uuid-ossp, #include headers needed for ntohl() and ntohs().
Oversight in commit b8cc8f947.  I just noticed this causes compiler
warnings on FreeBSD, and it really ought to cause warnings elsewhere too:
all references I can find say that <arpa/inet.h> is required for these.
We have a lot of code elsewhere that thinks that both <netinet/in.h>
and <arpa/inet.h> should be included for these functions, so do it that
way here too, even though <arpa/inet.h> ought to be sufficient according
to the references I consulted.

Back-patch to 9.4 where the previous commit landed.
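
The include recipe in question, shown with a toy caller:

    #include <stdint.h>
    #include <string.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>

    static uint32_t
    read_be32(const unsigned char *buf)
    {
        uint32_t    v;

        memcpy(&v, buf, sizeof(v));
        return ntohl(v);        /* network (big-endian) to host order */
    }
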
2016-12-17 22:24:27 -05:00
Robert Haas
1ed3c6ff9e Log the creation of an init fork unconditionally.
Previously, it was thought that this only needed to be done for the
benefit of possible standbys, so wal_level = minimal skipped it.
But that's not safe, because during crash recovery we might replay an
XLOG_DBASE_CREATE or XLOG_TBLSPC_CREATE record, which recursively
removes the directory that contains the new init fork.  So log it
always.

The user-visible effect of this bug is that if you create a database
or tablespace, then create an unlogged table, then crash without
checkpointing, then restart, accessing the table will fail because it
won't have been properly reset.  This commit fixes that.

Michael Paquier, per a report from Konstantin Knizhnik.  Wording of
the comments per a suggestion from me.
2016-12-08 14:13:55 -05:00
Tom Lane
292765ce35 Make contrib/test_decoding regression tests safe for CZ locale.
A little COLLATE "C" goes a long way.

Pavel Stehule, per suggestion from Craig Ringer

Discussion: <CAFj8pRA8nJZcozgxN=RMSqMmKuHVOkcGAAKPKdFeiMWGDSUDLA@mail.gmail.com>
2016-11-21 20:39:28 -05:00
Tom Lane
4774f61830 Back-patch portability fixes for contrib/pageinspect/ginfuncs.c.
Back-patch commits 84ad68d64 and 367b99bbb.
2016-11-04 12:37:29 -04:00
Tom Lane
2a8783e440 Fix portability bug in gin_page_opaque_info().
Somebody apparently thought that "if Int32GetDatum is good,
Int64GetDatum must be better".  Per buildfarm failures now
that Peter has added some regression tests here.
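
For illustration (the variable names are invented), the distinction
matters because on many 32-bit builds Int64GetDatum() hands back a
pointer to pass-by-reference storage, so using it for an int4 column
returns garbage:

    #include "postgres.h"

    static void
    fill_flags_column(Datum *values, int32 flags)
    {
        values[0] = Int32GetDatum(flags);   /* matches an int4 column */
        /* values[0] = Int64GetDatum(flags);   wrong: may be a pointer Datum */
    }
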
2016-11-02 00:09:28 -04:00
Heikki Linnakangas
e3c5e48bb8 Fix typo in comment.
Daniel Gustafsson
2016-10-26 11:13:11 +03:00
Andres Freund
76c0b73df6 Correct logical decoding restore behaviour for subtransactions.
Before initializing iteration over a subtransaction's changes, the last
few changes were not spilled to disk. That's correct if the transaction
didn't spill to disk, but otherwise... This bug can lead to missed or
misordered subtransaction contents when they were spilled to disk.

Move spilling of the remaining in-memory changes to
ReorderBufferIterTXNInit(), where it can easily be applied to the top
transaction and, if present, subtransactions.

Since this code had too many bugs already, noticeably increase test
coverage.

Fixes: #14319
Reported-By: Huan Ruan
Discussion: <20160909012610.20024.58169@wrigleys.postgresql.org>
Backport: 9.4-, where logical decoding was added
2016-10-03 22:12:31 -07:00
Tom Lane
f4e787c826 Fix bugs in contrib/pg_visibility.
collect_corrupt_items() failed to initialize tuple.t_self.  While
HeapTupleSatisfiesVacuum() doesn't actually use that value, it does
Assert that it's valid, so that the code would dump core if ip_posid
chanced to be zero.  (That's somewhat unlikely, which probably explains
how this got missed.  In any case it wouldn't matter for field use.)

Also, collect_corrupt_items was returning the wrong TIDs, that is the
contents of t_ctid rather than the tuple's own location.  This would
be the same thing in simple cases, but it could be wrong if, for
example, a past update attempt had been rolled back, leaving a live
tuple whose t_ctid doesn't point at itself.

Also, in pg_visibility(), guard against trying to read a page past
the end of the rel.  The VM code handles inquiries beyond the end
of the map by silently returning zeroes, and it seems like we should
do the same thing here.
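
The first two fixes reduce to a small pattern (variable names here are
illustrative, not the committed code):

    #include "postgres.h"
    #include "access/htup.h"
    #include "storage/itemptr.h"

    static void
    note_tuple_location(HeapTupleData *tuple,
                        BlockNumber blkno, OffsetNumber offnum)
    {
        /* t_self must be valid before HeapTupleSatisfiesVacuum()'s Assert */
        ItemPointerSet(&tuple->t_self, blkno, offnum);

        /*
         * When reporting a corrupt item, use the tuple's own location
         * (tuple->t_self), not tuple->t_data->t_ctid, which may point at
         * the target of a rolled-back update instead.
         */
    }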

I ran into the assertion failure while using pg_visibility to check
pg_upgrade's behavior, and then noted the other problems while
reading the code.

Report: <29043.1475288648@sss.pgh.pa.us>
2016-10-01 16:32:55 -04:00
Heikki Linnakangas
9895818d56 Fix building with LibreSSL.
LibreSSL defines OPENSSL_VERSION_NUMBER to claim that it is version 2.0.0,
but it doesn't have the functions added in OpenSSL 1.1.0. Add autoconf
checks for the individual functions we need, and stop relying on
OPENSSL_VERSION_NUMBER.

Backport to 9.5 and 9.6, like the patch that broke this. In the
back-branches, there are still a few OPENSSL_VERSION_NUMBER checks left,
to check for OpenSSL 0.9.8 or 0.9.7. I left them as they were - LibreSSL
has all those functions, so they work as intended.

Per buildfarm member curculio.

Discussion: <2442.1473957669@sss.pgh.pa.us>
2016-09-15 22:45:08 +03:00
Robert Haas
bea38f34a4 pg_buffercache: Allow huge allocations.
Otherwise, users who have configured shared_buffers >= 256GB won't
be able to use this module.  There probably aren't many of those, but
it doesn't hurt anything to fix it so that it works.

Backpatch to 9.4, where MemoryContextAllocHuge was introduced.  The
same problem exists in older branches, but there's no easy way to
fix it there.
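
A minimal sketch of the change, assuming the per-buffer working array is
what overflows (the function name is invented):

    #include "postgres.h"
    #include "utils/memutils.h"

    static void *
    alloc_buffer_records(Size record_size, int nbuffers)
    {
        /*
         * palloc() is capped at MaxAllocSize (just under 1GB); with very
         * large shared_buffers the array exceeds that, so use the "huge"
         * allocator instead.
         */
        return MemoryContextAllocHuge(CurrentMemoryContext,
                                      record_size * (Size) nbuffers);
    }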

KaiGai Kohei
2016-09-15 09:30:37 -04:00
Heikki Linnakangas
fcd93e4af9 Support OpenSSL 1.1.0.
Changes needed to build at all:

- Check for SSL_new in configure, now that SSL_library_init is a macro.
- Do not access struct members directly. This includes some new code in
  pgcrypto, to use the resource owner mechanism to ensure that we don't
  leak OpenSSL handles, now that we can't embed them in other structs
  anymore.
- RAND_SSLeay() -> RAND_OpenSSL()

Changes that were needed to silence deprecation warnings, but were not
strictly necessary:

- RAND_pseudo_bytes() -> RAND_bytes().
- SSL_library_init() and OpenSSL_config() -> OPENSSL_init_ssl()
- ASN1_STRING_data() -> ASN1_STRING_get0_data()
- DH_generate_parameters() -> DH_generate_parameters_ex()
- Locking callbacks are not needed with OpenSSL 1.1.0 anymore. (Good
  riddance!)

Also replace references to SSLEAY_VERSION_NUMBER with OPENSSL_VERSION_NUMBER,
for the sake of consistency. OPENSSL_VERSION_NUMBER has existed since time
immemorial.

Fix SSL test suite to work with OpenSSL 1.1.0. CA certificates must have
the "CA:true" basic constraint extension now, or OpenSSL will refuse them.
Regenerate the test certificates with that. The "openssl" binary, used to
generate the certificates, is also now more picky, and throws an error
if an X509 extension is specified in "req_extensions", but that section
is empty.

Backpatch to 9.5 and 9.6, per popular demand. The file structure was
somewhat different in earlier branches, so I didn't bother to go further
than that. In back-branches, we still support OpenSSL 0.9.7 and above.
OpenSSL 0.9.6 should still work too, but I didn't test it. In master, we
only support 0.9.8 and above.

Patch by Andreas Karlsson, with additional changes by me.

Discussion: <20160627151604.GD1051@msg.df7cb.de>
2016-09-15 12:55:38 +03:00
Tom Lane
b9fe6cbc81 Add macros to make AllocSetContextCreate() calls simpler and safer.
I found that half a dozen (nearly 5%) of our AllocSetContextCreate calls
had typos in the context-sizing parameters.  While none of these led to
especially significant problems, they did create minor inefficiencies,
and it's now clear that expecting people to copy-and-paste those calls
accurately is not a great idea.  Let's reduce the risk of future errors
by introducing single macros that encapsulate the common use-cases.
Three such macros are enough to cover all but two special-purpose contexts;
those two calls can be left as-is, I think.
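
The new shorthand versus the old copy-and-paste spelling, as a minimal
sketch:

    #include "postgres.h"
    #include "utils/memutils.h"

    static MemoryContext
    make_work_context(MemoryContext parent)
    {
        /*
         * Old spelling, where it was easy to mix up the three sizing
         * parameters:
         *
         *   AllocSetContextCreate(parent, "work context",
         *                         ALLOCSET_DEFAULT_MINSIZE,
         *                         ALLOCSET_DEFAULT_INITSIZE,
         *                         ALLOCSET_DEFAULT_MAXSIZE);
         */

        /* New spelling using one of the macros introduced here: */
        return AllocSetContextCreate(parent, "work context",
                                     ALLOCSET_DEFAULT_SIZES);
    }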

While this patch doesn't in itself improve matters for third-party
extensions, it doesn't break anything for them either, and they can
gradually adopt the simplified notation over time.

In passing, change TopMemoryContext to use the default allocation
parameters.  Formerly it could only be extended 8K at a time.  That was
probably reasonable when this code was written; but nowadays we create
many more contexts than we did then, so that it's not unusual to have a
couple hundred K in TopMemoryContext, even without considering various
dubious code that sticks other things there.  There seems no good reason
not to let it use growing blocks like most other contexts.

Back-patch to 9.6, mostly because that's still close enough to HEAD that
it's easy to do so, and keeping the branches in sync can be expected to
avoid some future back-patching pain.  The bugs fixed by these changes
don't seem to be significant enough to justify fixing them further back.

Discussion: <21072.1472321324@sss.pgh.pa.us>
2016-08-27 17:50:38 -04:00
Tom Lane
d715b76d12 Fix -e option in contrib/intarray/bench/bench.pl.
As implemented, -e ran an EXPLAIN but then discarded the output, which
certainly seems pointless.  Make it print to stdout instead.  It's been
like that forever, so back-patch to all supported branches.

Daniel Gustafsson, reviewed by Andreas Scherbaum

Patch: <B97BDCB7-A3B3-4734-90B5-EDD586941629@yesql.se>
2016-08-17 15:51:10 -04:00
Tom Lane
d6c9e05cb7 Fix assorted bugs in contrib/bloom.
In blinsert(), cope with the possibility that a page we pull from the
notFullPage list is marked BLOOM_DELETED.  This could happen if VACUUM
recently marked it deleted but hasn't (yet) updated the metapage.
We can re-use such a page safely, but we *must* reinitialize it so that
it's no longer marked deleted.

Fix blvacuum() so that it updates the notFullPage list even if it's
going to update it to empty.  The previous "optimization" of skipping
the update seems pretty dubious, since it means that the next blinsert()
will uselessly visit whatever pages we left in the list.

Uniformly treat PageIsNew pages the same as deleted pages.  This should
allow proper recovery if a crash occurs just after relation extension.

Properly use vacuum_delay_point, not assorted ad-hoc CHECK_FOR_INTERRUPTS
calls, in the blvacuum() main loop.

Fix broken tuple-counting logic: blvacuum.c counted the number of live
index tuples over again in each scan, leading to VACUUM VERBOSE reporting
some multiple of the actual number of surviving index tuples after any
vacuum that removed any tuples (since they'd be counted in blvacuum, maybe
more than once, and then again in blvacuumcleanup, without ever zeroing the
counter).  It's sufficient to count them in blvacuumcleanup.

stats->estimated_count is a boolean, not a counter, and we don't want
to set it true, so don't add tuple counts to it.

Add a couple of Asserts that we don't overrun available space on a bloom
page.  I don't think there's any bug there today, but the way the
FreeBlockNumberArray size calculation is set up is scarily fragile, and
BloomPageGetFreeSpace isn't much better.  The Asserts should help catch
any future mistakes.

Per investigation of a report from Jeff Janes.  I think the first item
above may explain his report; the other changes were things I noticed
while casting about for an explanation.

Report: <CAMkU=1xEUuBphDwDmB1WjN4+td4kpnEniFaTBxnk1xzHCw8_OQ@mail.gmail.com>
2016-08-13 22:24:48 -04:00
Tom Lane
ed0097e4f9 Add SQL-accessible functions for inspecting index AM properties.
Per discussion, we should provide such functions to replace the lost
ability to discover AM properties by inspecting pg_am (cf commit
65c5fcd35).  The added functionality is also meant to displace any code
that was looking directly at pg_index.indoption, since we'd rather not
believe that the bit meanings in that field are part of any client API
contract.

As future-proofing, define the SQL API to not assume that properties that
are currently AM-wide or index-wide will remain so unless they logically
must be; instead, expose them only when inquiring about a specific index
or even specific index column.  Also provide the ability for an index
AM to override the behavior.

In passing, document pg_am.amtype, overlooked in commit 473b93287.

Andrew Gierth, with kibitzing by me and others

Discussion: <87mvl5on7n.fsf@news-spur.riddles.org.uk>
2016-08-13 18:31:14 -04:00
Tom Lane
e3049285a3 Trivial cosmetic cleanup in bloom/blutils.c.
Don't spell "InvalidOid" as "0".  Initialize method fields in the same
order as amapi.h declares them (and every other AM handler initializes
them).
2016-08-11 12:23:35 -04:00
Peter Eisentraut
ab0a23c7c9 Fix typo 2016-08-09 19:08:00 -04:00
Tom Lane
bcbecbce2f Don't propagate a null subtransaction snapshot up to parent transaction.
This oversight could cause logical decoding to fail to decode an outer
transaction containing changes, if a subtransaction had an XID but no
actual changes.  Per bug #14279 from Marko Tiikkaja.  Patch by Marko
based on analysis by Andrew Gierth.

Discussion: <20160804191757.1430.39011@wrigleys.postgresql.org>
2016-08-07 13:15:55 -04:00
Tom Lane
9492cf86e4 Fix assorted fallout from IS [NOT] NULL patch.
Commits 4452000f3 et al established semantics for NullTest.argisrow that
are a bit different from its initial conception: rather than being merely
a cache of whether we've determined the input to have composite type,
the flag now has the further meaning that we should apply field-by-field
testing as per the standard's definition of IS [NOT] NULL.  If argisrow
is false and yet the input has composite type, the construct instead has
the semantics of IS [NOT] DISTINCT FROM NULL.  Update the comments in
primnodes.h to clarify this, and fix ruleutils.c and deparse.c to print
such cases correctly.  In the case of ruleutils.c, this merely results in
cosmetic changes in EXPLAIN output, since the case can't currently arise
in stored rules.  However, it represents a live bug for deparse.c, which
would formerly have sent a remote query that had semantics different
from the local behavior.  (From the user's standpoint, this means that
testing a remote nested-composite column for null-ness could have had
unexpected recursive behavior much like that fixed in 4452000f3.)

In a related but somewhat independent fix, make plancat.c set argisrow
to false in all NullTest expressions constructed to represent "attnotnull"
constructs.  Since attnotnull is actually enforced as a simple null-value
check, this is a more accurate representation of the semantics; we were
previously overpromising what it meant for composite columns, which might
possibly lead to incorrect planner optimizations.  (It seems that what the
SQL spec expects a NOT NULL constraint to mean is an IS NOT NULL test, so
arguably we are violating the spec and should fix attnotnull to do the
other thing.  If we ever do, this part should get reverted.)

Back-patch, same as the previous commit.

Discussion: <10682.1469566308@sss.pgh.pa.us>
2016-07-28 16:09:15 -04:00
Robert Haas
fe5e3fce79 Repair damage done by citext--1.1--1.2.sql.
That script is incorrect in that it sets the combine function for
max(citext) twice instead of setting the combine function for
max(citext) once and the combine function for min(citext) once.  The
consequence is that if you install 1.0 or 1.1 and then update to 1.2,
you end up with min(citext) not having a combine function, contrary to
what was intended.  If you install 1.2 directly, you're OK.

Fix things up by defining a new 1.3 version.  Upgrading from 1.2 to
1.3 won't change anything for people who first installed the 1.2
version, but people upgrading from 1.0 or 1.1 will get the right
catalog contents once they reach 1.3.

Report and patch by David Rowley, reviewed by Andreas Karlsson.
2016-07-26 15:32:57 -04:00
Peter Eisentraut
40fcfec82c Message style improvements 2016-07-25 22:07:44 -04:00
Fujii Masao
1804d1555f Fix typo in comment.
Author: Masahiko Sawada
2016-07-25 17:51:26 +09:00
Tom Lane
d70d119151 Make contrib regression tests safe for Danish locale.
In btree_gin and citext, avoid some not-particularly-interesting
dependencies on the sorting of 'aa'.  In tsearch2, use COLLATE "C" to
remove an uninteresting dependency on locale sort order (and thereby
allow removal of a variant expected-file).

Also, in citext, avoid assuming that lower('I') = 'i'.  This isn't relevant
to Danish but it does fail in Turkish.
2016-07-21 16:52:35 -04:00
Tom Lane
18555b1323 Establish conventions about global object names used in regression tests.
To ensure that "make installcheck" can be used safely against an existing
installation, we need to be careful about what global object names
(database, role, and tablespace names) we use; otherwise we might
accidentally clobber important objects.  There's been a weak consensus that
test databases should have names including "regression", and that test role
names should start with "regress_", but we didn't have any particular rule
about tablespace names; and neither of the other rules was followed with
any consistency either.

This commit moves us a long way towards having a hard-and-fast rule that
regression test databases must have names including "regression", and that
test role and tablespace names must start with "regress_".  It's not
completely there because I did not touch some test cases in rolenames.sql
that test creation of special role names like "session_user".  That will
require some rethinking of exactly what we want to test, whereas the intent
of this patch is just to hit all the cases in which the needed renamings
are cosmetic.

There is no enforcement mechanism in this patch either, but if we don't
add one we can expect that the tests will soon be violating the convention
again.  Again, that's not such a cosmetic change and it will require
discussion.  (But I did use a quick-hack enforcement patch to find these
cases.)

Discussion: <16638.1468620817@sss.pgh.pa.us>
2016-07-17 18:42:43 -04:00
Peter Eisentraut
f36ca9af05 Use correct symbol for minimum int64 value
The old code used SEQ_MINVALUE to get the smallest int64 value.  This
was done as a convenience to avoid having to deal with INT64_IS_BUSTED,
but that is obsolete now.  Also, it is incorrect because the smallest
int64 value is actually SEQ_MINVALUE-1.  Fix by using PG_INT64_MIN.
2016-07-17 09:15:37 -04:00
Tom Lane
45639a0525 Avoid invalidating all foreign-join cached plans when user mappings change.
We must not push down a foreign join when the foreign tables involved
should be accessed under different user mappings.  Previously we tried
to enforce that rule literally during planning, but that meant that the
resulting plans were dependent on the current contents of the
pg_user_mapping catalog, and we had to blow away all cached plans
containing any remote join when anything at all changed in pg_user_mapping.
This could have been improved somewhat, but the fact that a syscache inval
callback has very limited info about what changed made it hard to do better
within that design.  Instead, let's change the planner to not consider user
mappings per se, but to allow a foreign join if both RTEs have the same
checkAsUser value.  If they do, then they necessarily will use the same
user mapping at runtime, and we don't need to know specifically which one
that is.  Post-plan-time changes in pg_user_mapping no longer require any
plan invalidation.

This rule does give up some optimization ability, to wit where two foreign
table references come from views with different owners or one's from a view
and one's directly in the query, but nonetheless the same user mapping
would have applied.  We'll sacrifice the first case, but to not regress
more than we have to in the second case, allow a foreign join involving
both zero and nonzero checkAsUser values if the nonzero one is the same as
the prevailing effective userID.  In that case, mark the plan as only
runnable by that userID.

The plancache code already had a notion of plans being userID-specific,
in order to support RLS.  It was a little confused though, in particular
lacking clarity of thought as to whether it was the rewritten query or just
the finished plan that's dependent on the userID.  Rearrange that code so
that it's clearer what depends on which, and so that the same logic applies
to both RLS-injected role dependency and foreign-join-injected role
dependency.

Note that this patch doesn't remove the other issue mentioned in the
original complaint, which is that while we'll reliably stop using a foreign
join if it's disallowed in a new context, we might fail to start using a
foreign join if it's now allowed, but we previously created a generic
cached plan that didn't use one.  It was agreed that the chance of winning
that way was not high enough to justify the much larger number of plan
invalidations that would have to occur if we tried to cause it to happen.

In passing, clean up randomly-varying spelling of EXPLAIN commands in
postgres_fdw.sql, and fix a COSTS ON example that had been allowed to
leak into the committed tests.

This reverts most of commits fbe5a3fb7 and 5d4171d1c, which were the
previous attempt at ensuring we wouldn't push down foreign joins that
span permissions contexts.

Etsuro Fujita and Tom Lane

Discussion: <d49c1e5b-f059-20f4-c132-e9752ee0113e@lab.ntt.co.jp>
2016-07-15 17:23:02 -04:00
Robert Haas
86437ddf8c postgres_fdw: Fix cache lookup failure while creating error context.
This is fallout from join pushdown; get_relid_attribute_name can't
handle an attribute number of 0, indicating a whole-row reference,
and shouldn't be called in that case.

Etsuro Fujita, reviewed by Ashutosh Bapat
2016-07-01 11:29:25 -04:00
Robert Haas
5f3499b2b5 postgres_fdw: Remove schema-qualification from cast to text.
As pointed out by Ashutosh Bapat, the header comments for this file
say that schema-qualification is needed for all and only those types
outside pg_catalog.  pg_catalog.text is not outside pg_catalog.
2016-07-01 10:14:42 -04:00
Alvaro Herrera
e3ad3ffa68 Fix handling of multixacts predating pg_upgrade
After pg_upgrade, it is possible that some tuples' Xmax have multixacts
corresponding to the old installation; such multixacts cannot have
running members anymore.  In many code sites we already know not to read
them and clobber them silently, but at least when VACUUM tries to freeze
a multixact or determine whether one needs freezing, there's an attempt
to resolve it to its member transactions by calling GetMultiXactIdMembers,
and if the multixact value is "in the future" with regards to the
current valid multixact range, an error like this is raised:
    ERROR:  MultiXactId 123 has not been created yet -- apparent wraparound
and vacuuming fails.  Per discussion with Andrew Gierth, it is completely
bogus to try to resolve multixacts coming from before a pg_upgrade,
regardless of where they stand with regards to the current valid
multixact range.

It's possible to get from under this problem by doing SELECT FOR UPDATE
of the problem tuples, but if tables are large, this is slow and
tedious, so a more thorough solution is desirable.

To fix, we realize that multixacts in xmax created in 9.2 and previous
have a specific bit pattern that is never used in 9.3 and later (we
already knew this, per comments and infomask tests sprinkled in various
places, but we weren't leveraging this knowledge appropriately).
Whenever the infomask of the tuple matches that bit pattern, we just
ignore the multixact completely as if Xmax wasn't set; or, in the case
of tuple freezing, we act as if an unwanted value is set and clobber it
without decoding.  This guarantees that no errors will be raised, and
that the values will be progressively removed until all tables are
clean.  Most callers of GetMultiXactIdMembers are patched to recognize
directly that the value is a removable "empty" multixact and avoid
calling GetMultiXactIdMembers altogether.

To avoid changing the signature of GetMultiXactIdMembers() in back
branches, we keep the "allow_old" boolean flag but rename it to
"from_pgupgrade"; if the flag is true, we always return an empty set
instead of looking up the multixact.  (I suppose we could remove the
argument in the master branch, but I chose not to do so in this commit).

This was broken all along, but the error-facing message appeared first
because of commit 8e9a16ab8f and was partially fixed in a25c2b7c4d.
This fix, backpatched all the way back to 9.3, goes approximately in the
same direction as a25c2b7c4d but should cover all cases.

Bug analysis by Andrew Gierth and Álvaro Herrera.

A number of public reports match this bug:
  https://www.postgresql.org/message-id/20140330040029.GY4582@tamriel.snowman.net
  https://www.postgresql.org/message-id/538F3D70.6080902@publicrelay.com
  https://www.postgresql.org/message-id/556439CF.7070109@pscs.co.uk
  https://www.postgresql.org/message-id/SG2PR06MB0760098A111C88E31BD4D96FB3540@SG2PR06MB0760.apcprd06.prod.outlook.com
  https://www.postgresql.org/message-id/20160615203829.5798.4594@wrigleys.postgresql.org
2016-06-24 18:29:28 -04:00
Robert Haas
9e9c38e159 postgres_fdw: Fix incorrect NULL handling in join pushdown.
something.* IS NOT NULL means that every attribute of the row is not
NULL, not that the row itself is non-NULL (e.g. because it's coming
from below an outer join).  Use (somevar.*)::pg_catalog.text IS NOT
NULL instead.

Ashutosh Bapat, per a report by Rushabh Lathia.  Reviewed by
Amit Langote and Etsuro Fujita.  Schema-qualification added by me.
2016-06-24 15:14:15 -04:00
Robert Haas
267569b24c postgres_fdw: Remove useless return statement.
Etsuro Fujita
2016-06-24 14:33:13 -04:00
Tom Lane
e611515dd6 pg_trgm's set_limit() function is parallel unsafe, not parallel restricted.
Per buildfarm.  Fortunately, it's not quite too late to squeeze this fix
into the pg_trgm 1.3 update.
2016-06-20 11:29:54 -04:00
Tom Lane
9c852566a3 Fix comparison of similarity to threshold in GIST trigram searches.
There was some very strange code here, dating to commit b525bf77, that
purported to work around an ancient gcc bug by forcing a float4 comparison
to be done as int instead.  Commit 5871b8848 broke that when it changed
one side of the comparison to "double" but left the comparison code alone.
Commit f576b17cd doubled down on the weirdness by introducing a "volatile"
marker, which had nothing to do with the actual problem.

Guess that the gcc bug, even if it's still present in the wild, was
triggered by comparison of float4's and can be avoided if we store the
result of cnt_sml() into a double before comparing to the double "nlimit".
This will at least work correctly on non-broken compilers, and it's way
more readable.
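
A hypothetical helper mirroring the fix (the real change is inside
pg_trgm's GiST consistent function):

    #include "postgres.h"

    static bool
    similarity_meets_threshold(float4 similarity, double nlimit)
    {
        double      sml = (double) similarity;  /* widen to double once */

        /* plain comparison; no compare-as-int workaround needed */
        return sml >= nlimit;
    }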

Per bug #14202 from Greg Navis.  Add a regression test based on his
example.

Report: <20160620115321.5792.10766@wrigleys.postgresql.org>
2016-06-20 10:49:19 -04:00
Tom Lane
7e81a18d49 Fix parallel-safety markings for contrib/dblink.
As shown by buildfarm reports, dblink_build_sql_insert and
dblink_build_sql_update are *not* parallel safe, because they
may attempt to access temporary tables of the local session.

Although dblink_build_sql_delete doesn't actually touch the
contents of the referenced table, it seems consistent and prudent
to mark it PARALLEL RESTRICTED too.
2016-06-17 23:08:21 -04:00
Robert Haas
71d05a2c7b pg_visibility: Add pg_truncate_visibility_map function.
This requires some core changes as well so that we can properly
WAL-log the truncation.  Specifically, it changes the format of the
XLOG_SMGR_TRUNCATE WAL record, so bump XLOG_PAGE_MAGIC.

Patch by me, reviewed but not fully endorsed by Andres Freund.
2016-06-17 17:37:30 -04:00
Robert Haas
20eb2731b7 Update dblink extension for parallel query.
Almost all functions provided by this extension are PARALLEL
RESTRICTED.  Mostly, that's because the leader's TCP connections won't
be shared with the workers, but in some cases like dblink_get_pkey
it's because they obtain locks which might be released early if taken
within a parallel worker.  dblink_fdw_validator probably can't be used
in a query anyway, but there would be no problem from the point of
view of parallel query if it were, so it's PARALLEL SAFE.

Andreas Karlsson
2016-06-17 15:18:44 -04:00
Robert Haas
177c56d608 postgres_fdw: Rephrase comment.
Per gripe from Thomas Munro, who only complained about a more
localized problem, but I couldn't resist a bit more wordsmithing.
2016-06-17 13:02:22 -04:00
Robert Haas
e472ce9624 Add integrity-checking functions to pg_visibility.
The new pg_check_visible() and pg_check_frozen() functions can be used to
verify that the visibility map bits for a relation's data pages match the
actual state of the tuples on those pages.

Amit Kapila and Robert Haas, reviewed (in earlier versions) by Andres
Freund.  Additional testing help by Thomas Munro.
2016-06-15 14:33:58 -04:00
Robert Haas
13e7453135 Update xml2 extension for parallel query.
All functions provided by this extension are PARALLEL SAFE.

Andreas Karlsson
2016-06-14 15:49:32 -04:00
Robert Haas
20f6c3a2a1 Update uuid-ossp extension for parallel query.
All functions provided by this extension are PARALLEL SAFE.

Andreas Karlsson
2016-06-14 14:56:21 -04:00
Robert Haas
202ac08c08 Update unaccent extension for parallel query.
All functions provided by this extension are PARALLEL SAFE.

Andreas Karlsson
2016-06-14 14:55:49 -04:00
Robert Haas
6b7d11ffda Update sslinfo extension for parallel query.
All functions provided by this extension are PARALLEL RESTRICTED,
because they provide information about the connection state.  Parallel
workers don't have this information and therefore these functions
can't be executed in a worker (but they can be present in a query some
other part of which uses parallelism).

Andreas Karlsson
2016-06-14 14:52:55 -04:00
Robert Haas
2910fc8239 Update extensions with GIN/GIST support for parallel query.
Commit 749a787c5b bumped the extension
version on all of these extensions already, and we haven't had a
release since then, so we can make further changes without bumping the
extension version again.  Take this opportunity to mark all of the
functions exported by these modules PARALLEL SAFE -- except for
pg_trgm's set_limit().  Mark that one PARALLEL RESTRICTED, because it
makes a persistent change to a GUC value.

Note that some of the markings added by this commit don't have any
effect; for example, gseg_picksplit() isn't likely to be mentioned
explicitly in a query and therefore its parallel-safety marking will
never be consulted.  But this commit just marks everything for
consistency: if it were somehow used in a query, that would be fine as
far as parallel query is concerned, since it does not consult any
backend-private state, attempt to write data, etc.

Andreas Karlsson, with a few revisions by me.
2016-06-14 13:34:37 -04:00
Robert Haas
131c7e70b4 postgres_fdw: Check PlaceHolderVars before pushing down a join.
As discovered by Andreas Seltenreich via sqlsmith, it's possible for a
remote join to need to generate a target list which contains a
PlaceHolderVar which would need to be evaluated on the remote server.
This happens when we try to push down a join tree which contains outer
joins and the nullable side of the join contains a subquery which
evaluates some expression which can go to NULL above the level of the
join.  Since the deparsing logic can't build a remote query that
involves subqueries, it fails while trying to produce an SQL query
that can be sent to the remote side.  Detect such cases and don't try
to push down the join at all.

It's actually fine to push down the join if the PlaceHolderVar needs
to be evaluated at the current join level.  This patch makes a small
change to build_tlist_to_deparse so that this case will work.

Amit Langote, Ashutosh Bapat, and me.
2016-06-14 11:48:27 -04:00
Tom Lane
5484c0a980 Minor fixes in contrib installation scripts.
Extension scripts should never use CREATE OR REPLACE for initial object
creation.  If there is a collision with a pre-existing (probably
user-created) object, we want extension installation to fail, not silently
overwrite the user's object.  Bloom and sslinfo both violated this precept.

Also fix a number of scripts that had no standard header (the file name
comment and the \echo...\quit guard).  Probably the \echo...\quit hack
is less important now than it was in 9.1 days, but that doesn't mean
that individual extensions get to choose whether to use it or not.

And fix a couple of evident copy-and-pasteos in file name comments.

No need for back-patch: the REPLACE bugs are both new in 9.6, and the
rest of this is pretty much cosmetic.

Andreas Karlsson and Tom Lane
2016-06-14 10:47:06 -04:00
Robert Haas
332fdbef20 postgres_fdw: Promote an Assert() to elog().
Andreas Seltenreich reports that it is possible for a PlaceHolderVar
to creep into this tlist, and I fear that even after that's fixed we
might have other, similar bugs in this area either now or in the
future.  There's a lot of action-at-a-distance here, because the
validity of this assertion depends on core planner behavior; so, let's
use elog() to make sure we catch this even in non-assert builds,
rather than just crashing.
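
The general shape of the change (condition and message are illustrative):

    #include "postgres.h"

    static void
    check_tlist_is_sane(bool tlist_contains_only_vars)
    {
        /* Assert(tlist_contains_only_vars);   silent in production builds */
        if (!tlist_contains_only_vars)
            elog(ERROR, "unexpected expression in remote target list");
    }
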
2016-06-14 09:00:12 -04:00
Noah Misch
3be0a62ffe Finish pgindent run for 9.6: Perl files. 2016-06-12 04:19:56 -04:00
Robert Haas
a8501ba119 Update pgstattuple extension for parallel query.
All functions provided by this extension are PARALLEL SAFE.

Andreas Karlsson
2016-06-10 10:42:03 -04:00
Robert Haas
496899ccc2 Update pg_stat_statements extension for parallel query.
All functions provided by this extension are PARALLEL SAFE.  Given the
general prohibition against write operations in parallel queries, it is
perhaps a bit surprising that pg_stat_statements_reset() is parallel safe.
But since it only modifies shared memory, not the database, it's OK.

Andreas Karlsson
2016-06-10 10:42:01 -04:00
Robert Haas
3d8fc8c68c Schema-qualify some references to regprocedure.
Andreas Karlsson, per a gripe from Tom Lane.
2016-06-10 10:41:58 -04:00
Robert Haas
4bc424b968 pgindent run for 9.6 2016-06-09 18:02:36 -04:00
Robert Haas
9164deea2f Update pgrowlocks extension for parallel query.
The pgrowlocks function provided by this extension is PARALLEL SAFE.

Andreas Karlsson
2016-06-09 17:35:53 -04:00
Robert Haas
6b3586caa8 Update pg_prewarm extension for parallel query.
The pg_prewarm function provided by this extension is PARALLEL SAFE.

Andreas Karlsson
2016-06-09 17:18:18 -04:00
Robert Haas
42d4257a06 Update pg_freespacemap extension for parallel query.
All functions provided by this extension are PARALLEL SAFE.

Andreas Karlsson
2016-06-09 17:18:16 -04:00
Robert Haas
0dbf3ce0e0 Update pgcrypto extension for parallel query.
All functions provided by this extension are PARALLEL SAFE.

Andreas Karlsson
2016-06-09 17:18:14 -04:00
Robert Haas
06d7fd6e29 Update pg_buffercache extension for parallel query.
The pg_buffercache_pages function provided by this extension is
PARALLEL SAFE.

Andreas Karlsson
2016-06-09 17:18:12 -04:00
Robert Haas
e3b607cd7a Update pageinspect extension for parallel query.
All functions provided by this extension are PARALLEL SAFE.

Andreas Karlsson
2016-06-09 17:18:09 -04:00
Tom Lane
749a787c5b Handle contrib's GIN/GIST support function signature changes honestly.
In commits 9ff60273e3 and dbe2328959 I (tgl) fixed the
signatures of a bunch of contrib's GIN and GIST support functions so that
they would pass validation by the recently-added amvalidate functions.
The backend does not actually consult or check those signatures otherwise,
so I figured this was basically cosmetic and did not require an extension
version bump.  However, Alexander Korotkov pointed out that that would
leave us in a pretty messy situation if we ever wanted to redefine those
functions later, because there wouldn't be a unique way to name them.
Since we're going to be bumping these extensions' versions anyway for
parallel-query cleanups, let's take care of this now.

Andreas Karlsson, adjusted for more search-path-safety by me
2016-06-09 16:44:25 -04:00
Alvaro Herrera
4f04b66f97 Fix loose ends for SQL ACCESS METHOD objects
COMMENT ON ACCESS METHOD was missing; add it, along with psql tab-completion
support for it.

psql was also missing a way to list existing access methods; the new \dA
command does that.

Also add tab-completion support for DROP ACCESS METHOD.

Author: Michael Paquier
Discussion: https://www.postgresql.org/message-id/CAB7nPqTzdZdu8J7EF8SXr_R2U5bSUUYNOT3oAWBZdEoggnwhGA@mail.gmail.com
2016-06-07 17:59:34 -04:00
Robert Haas
e7880e5d39 Update lo extension for parallel query.
The lo_oid function provided by this extension is PARALLEL SAFE.

Andreas Karlsson
2016-06-07 11:26:42 -04:00
Robert Haas
b79b8d8f55 Update isn extension for parallel query.
All functions provided by this extension are PARALLEL SAFE.

Andreas Karlsson
2016-06-07 11:26:42 -04:00
Robert Haas
1ab194a3a9 Update intagg extension for parallel query.
All functions provided by this extension are PARALLEL SAFE.

Andreas Karlsson
2016-06-07 11:26:41 -04:00
Robert Haas
ffab82fbda Update fuzzystrmatch extension for parallel query.
All functions provided by this extension are PARALLEL SAFE.

Andreas Karlsson
2016-06-07 11:26:41 -04:00
Robert Haas
50e5226bb3 Update earthdistance extension for parallel query.
All functions provided by this extension are PARALLEL SAFE.

Andreas Karlsson
2016-06-07 11:26:41 -04:00
Robert Haas
a89b4b1be0 Update citext extension for parallel query.
All citext functions are PARALLEL SAFE, and a couple of them can
benefit from having aggregate combine functions.

Andreas Karlsson
2016-06-07 11:26:41 -04:00
Tom Lane
8a859691d5 Properly initialize SortSupport for ORDER BY rechecks in nodeIndexscan.c.
Fix still another bug in commit 35fcb1b3d: it failed to fully initialize
the SortSupport states it introduced to allow the executor to re-check
ORDER BY expressions containing distance operators.  That led to a null
pointer dereference if the sortsupport code tried to use ssup_cxt.  The
problem only manifests in narrow cases, explaining the lack of previous
field reports.  It requires a GiST-indexable distance operator that lacks
SortSupport and is on a pass-by-ref data type, which among core+contrib
seems to be only btree_gist's interval opclass; and it requires the scan
to be done as an IndexScan not an IndexOnlyScan, which explains how
btree_gist's regression test didn't catch it.  Per bug #14134 from
Jihyun Yu.

Peter Geoghegan

Report: <20160511154904.2603.43889@wrigleys.postgresql.org>
2016-06-05 11:53:06 -04:00
Tom Lane
de33af8823 Update contrib/tsearch2/expected/tsearch2_1.out for phrase FTS.
Commits bb140506d and 38627f687 didn't bother with this.
Per buildfarm member magpie.
2016-06-04 00:49:42 -04:00
Tom Lane
ee4af347ba Measure Bloom index signature-length reloption in bits, not words.
Per discussion, this is a more understandable and future-proof way of
exposing the setting to users.  On-disk, we can still store it in words,
so as to not break on-disk compatibility with beta1.

Along the way, clean up the code associated with Bloom reloptions.
Provide explicit macros for default and maximum lengths rather than
having magic numbers buried in multiple places in the code.  Drop
the adjustBloomOptions() code altogether: it was useless in view of
the fact that reloptions.c already performed default-substitution and
range checking for the options.  Rename a couple of macros and types
for more clarity.

Discussion: <23767.1464926580@sss.pgh.pa.us>
2016-06-03 10:52:45 -04:00
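A sketch of the user-visible effect, with illustrative table, column, and option values (the reloption is now given in bits):

    -- Assumes CREATE EXTENSION bloom; table and column names are hypothetical.
    -- Signature length of 80 bits; 2 bits set per indexed column.
    CREATE INDEX bloomidx ON tbl USING bloom (i1, i2)
        WITH (length = 80, col1 = 2, col2 = 2);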
Tom Lane
abaffa9075 Fix contrib/bloom to work for unlogged indexes.
blbuildempty did not do even approximately the right thing: it tried
to add a metapage to the relation's regular data fork, which already
has one at that point.  It should look like the ambuildempty methods
for all the standard index types, ie, initialize a metapage image in
some transient storage and then write it directly to the init fork.
To support that, refactor BloomInitMetapage into two functions.

In passing, fix BloomInitMetapage so it doesn't leave the rd_options
field of the index's relcache entry pointing at transient storage.
I'm not sure this had any visible consequence, since nothing much
else is likely to look at a bloom index's rd_options, but it's
certainly poor practice.

Per bug #14155 from Zhou Digoal.

Report: <20160524144146.22598.42558@wrigleys.postgresql.org>
2016-05-24 21:04:35 -04:00
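A sketch of the previously-failing case, assuming the bloom extension is installed (table and index names are illustrative):

    CREATE EXTENSION IF NOT EXISTS bloom;
    -- An unlogged table exercises the AM's ambuildempty path,
    -- which must write the metapage image to the init fork.
    CREATE UNLOGGED TABLE t (i int);
    CREATE INDEX t_bloom_idx ON t USING bloom (i);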
Tom Lane
e13ac5586c Avoid possible crash in contrib/bloom's blendscan().
It's possible to begin and end an indexscan without ever calling
amrescan.  contrib/bloom, unlike every other index AM, allocated
its "scan->opaque" storage at amrescan time, and thus would crash
in amendscan if amrescan hadn't been called.  We could fix this
by putting in a null-pointer check in blendscan, but I see no very
good reason why contrib/bloom should march to its own drummer in
this respect.  Let's move that initialization to blbeginscan
instead.  Per report from Jeff Janes.
2016-05-17 17:01:18 -04:00
Robert Haas
02a568a027 postgres_fdw: Fix the fix for crash when pushing down multiple joins.
Commit 3151f16e18 was intended to be
a commit of a patch from Ashutosh Bapat, but instead I mistakenly
committed an earlier version from Michael Paquier (because both
patches were submitted with the same filename, and I confused them).
Michael's patch fixes the crash but doesn't actually implement the
correct test.

Repair the incorrect logic, and also expand the comments considerably
so that this is all more clear.

Ashutosh Bapat and Robert Haas
2016-05-16 11:28:28 -04:00
Robert Haas
1b812afb0e Fix multiple problems in postgres_fdw query cancellation logic.
First, even if we cancel a query, we still have to roll back the
containing transaction; otherwise, the session will be left in a
failed transaction state.

Second, we need to support canceling queries when aborting a
subtransaction as well as when aborting a toplevel transaction.

Etsuro Fujita, reviewed by Michael Paquier
2016-05-16 11:19:10 -04:00
Tom Lane
d94977ef1c Ensure plan stability in contrib/btree_gist regression test.
Buildfarm member skink failed with symptoms suggesting that an
auto-analyze had happened and changed the plan displayed for a
test query.  Although this is evidently of low probability,
regression tests that sometimes fail are no fun, so add commands
to force a bitmap scan to be chosen.
2016-05-12 20:04:38 -04:00
Robert Haas
8826d85078 Tweak a few more things in preparation for upcoming pgindent run.
These changes adjust code and comments in minor ways to prevent
pgindent from mangling them.  Among other things, I tried to avoid
situations where pgindent would emit "a +b" instead of "a + b", and I
tried to avoid having it break up inline comments across multiple
lines.
2016-05-03 10:52:25 -04:00
Heikki Linnakangas
d22b85fbd4 Remove unused macros.
CHECK_PAGE_OFFSET_RANGE() has been unused forever.
CHECK_RELATION_BLOCK_RANGE() has been unused in pgstatindex.c ever since
the bt_page_stats() and bt_page_items() functions were moved from pgstattuple
to the pageinspect module. It still exists in pageinspect/btreefuncs.c.

Daniel Gustafsson
2016-05-02 10:07:49 +03:00
Tom Lane
f050423052 Revert "Convert contrib/seg's bool-returning SQL functions to V1 call convention."
This reverts commit c8e81afc60.
That turns out to have been based on a faulty diagnosis of why the
VS2015 build was misbehaving.  Instead, we need to fix DatumGetBool().
2016-04-28 11:46:07 -04:00
Teodor Sigaev
f8467f7da8 Prevent use of magic constants
Use macros to define the amstrategies/amsupport fields instead of
hardcoded values.

Author: Nikolay Shaplov with addition for contrib/bloom
2016-04-28 16:39:25 +03:00
Tom Lane
c8e81afc60 Convert contrib/seg's bool-returning SQL functions to V1 call convention.
It appears that we can no longer get away with using V0 call convention
for bool-returning functions in newer versions of MSVC.  The compiler
seems to generate code that doesn't clear the higher-order bits of the
result register, causing the bool result Datum to often read as "true"
when "false" was intended.  This is not very surprising, since the
function thinks it's returning a bool-width result but fmgr_oldstyle
assumes that V0 functions return "char *"; what's surprising is that
that hack worked for so long on so many platforms.

The only functions of this description in core+contrib are in contrib/seg,
which we'd intentionally left mostly in V0 style to serve as a warning
canary if V0 call convention breaks.  We could imagine hacking things
so that they're still V0 (we'd have to redeclare the bool-returning
functions as returning some suitably wide integer type, like size_t,
at the C level).  But on the whole it seems better to convert 'em to V1.
We can still leave the pointer- and int-returning functions in V0 style,
so that the test coverage isn't gone entirely.

Back-patch to 9.5, since our intention is to support VS2015 in 9.5
and later.  There's no SQL-level change in the functions' behavior
so back-patching should be safe enough.

Discussion: <22094.1461273324@sss.pgh.pa.us>

Michael Paquier, adjusted some by me
2016-04-22 11:54:23 -04:00
Tom Lane
14216649f3 PGDLLIMPORT-ify old_snapshot_threshold.
Revert commit 7cb1db1d95, which represented
a misunderstanding of the problem (if snapmgr.h weren't already included
in bufmgr.h, things wouldn't compile anywhere).  Instead install what
I think is the real fix.
2016-04-21 14:33:34 -04:00