Commit Graph

3481 Commits

Author SHA1 Message Date
Robert Haas 7086be6e36 When WCOs are present, disable direct foreign table modification.
If the user modifies a view that has CHECK OPTIONs and this gets
translated into a modification to an underlying relation which happens
to be a foreign table, the check options should be enforced.  In the
normal code path, that was happening properly, but it was not working
properly for "direct" modification because the whole operation gets
pushed to the remote side in that case and we never have an option to
enforce the constraint against individual tuples.  Fix by disabling
direct modification when there is a need to enforce CHECK OPTIONs.
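
As a rough illustration (server, table, and view names are made up), the
scenario looks like this:

    CREATE FOREIGN TABLE remote_t (a int, b text) SERVER loopback;
    CREATE VIEW v AS SELECT * FROM remote_t WHERE a > 0
        WITH CHECK OPTION;
    -- EXPLAIN (VERBOSE) UPDATE v SET a = a + 1; no longer shows a direct
    -- "Foreign Update"; the update goes through the regular path so the
    -- CHECK OPTION can be enforced against each tuple.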

Etsuro Fujita, reviewed by Kyotaro Horiguchi and by me.

Discussion: http://postgr.es/m/f8a48f54-6f02-9c8a-5250-9791603171ee@lab.ntt.co.jp
2017-07-24 15:57:24 -04:00
Tom Lane 88f48b57fd Stabilize postgres_fdw regression tests.
The new test cases added in commit 8bf58c0d9 turn out to have output
that can vary depending on the lc_messages setting prevailing on the
test server.  Hide the remote end's error messages to ensure stable
output.  This isn't a terribly desirable solution; we'd rather know
that the connection failed for the expected reason and not some other
one.  But there seems little choice for the moment.

Per buildfarm.

Discussion: https://postgr.es/m/18419.1500658570@sss.pgh.pa.us
2017-07-21 14:20:43 -04:00
Tom Lane 8bf58c0d9b Re-establish postgres_fdw connections after server or user mapping changes.
Previously, postgres_fdw would keep on using an existing connection even
if the user did ALTER SERVER or ALTER USER MAPPING commands that should
affect connection parameters.  Teach it to watch for catcache invals
on these catalogs and re-establish connections when the relevant catalog
entries change.  Per bug #14738 from Michal Lis.

In passing, clean up some rather crufty decisions in commit ae9bfc5d6
about where fields of ConnCacheEntry should be reset.  We now reset
all the fields whenever we open a new connection.
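
For illustration (assuming a postgres_fdw server named "loopback" whose
host and password options were set when it was defined):

    ALTER SERVER loopback OPTIONS (SET host 'replica.example.com');
    ALTER USER MAPPING FOR CURRENT_USER SERVER loopback
        OPTIONS (SET password 'newsecret');
    -- The next query against a foreign table of this server now opens a
    -- fresh connection with the updated parameters instead of reusing
    -- the cached one.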

Kyotaro Horiguchi, reviewed by Ashutosh Bapat and myself.
Back-patch to 9.3 where postgres_fdw appeared.

Discussion: https://postgr.es/m/20170710113917.7727.10247@wrigleys.postgresql.org
2017-07-21 12:51:38 -04:00
Tom Lane decb08ebdf Code review for NextValueExpr expression node type.
Add missing infrastructure for this node type, notably in ruleutils.c where
its lack could demonstrably cause EXPLAIN to fail.  Add outfuncs/readfuncs
support.  (outfuncs support is useful today for debugging purposes.  The
readfuncs support may never be needed, since at present it would only
matter for parallel query and NextValueExpr should never appear in a
parallelizable query; but it seems like a bad idea to have a primnode type
that isn't fully supported here.)  Teach planner infrastructure that
NextValueExpr is a volatile, parallel-unsafe, non-leaky expression node
with cost cpu_operator_cost.  Given its limited scope of usage, there
*might* be no live bug today from the lack of that knowledge, but it's
certainly going to bite us on the rear someday.  Teach pg_stat_statements
about the new node type, too.

While at it, also teach cost_qual_eval() that MinMaxExpr, SQLValueFunction,
XmlExpr, and CoerceToDomain should be charged as cpu_operator_cost.
Failing to do this for SQLValueFunction was an oversight in my commit
0bb51aa96.  The others are longer-standing oversights, but no time like the
present to fix them.  (In principle, CoerceToDomain could have cost much
higher than this, but it doesn't presently seem worth trying to examine the
domain's constraints here.)

Modify execExprInterp.c to execute NextValueExpr as an out-of-line
function; it seems quite unlikely to me that it's worth insisting that
it be inlined in all expression eval methods.  Besides, providing the
out-of-line function doesn't stop anyone from inlining if they want to.

Adjust some places where NextValueExpr support had been inserted with the
aid of a dartboard rather than keeping it in the same order as elsewhere.

Discussion: https://postgr.es/m/23862.1499981661@sss.pgh.pa.us
2017-07-14 15:25:43 -04:00
Heikki Linnakangas bf723a274c Forbid gen_random_uuid() with --disable-strong-random
Previously, gen_random_uuid() would fall back to a weak random number
generator, unlike gen_random_bytes() which would just fail. And this was
not made very clear in the docs. For consistency, also make
gen_random_uuid() fail outright, if compiled with --disable-strong-random.

Re-word the error message you get with --disable-strong-random. It is also
used by pgp functions that require random salts, and now also
gen_random_uuid().
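
A minimal example (requires the pgcrypto extension):

    CREATE EXTENSION IF NOT EXISTS pgcrypto;
    SELECT gen_random_uuid();
    -- In a build configured with --disable-strong-random this now fails
    -- with an error instead of silently using a weak generator.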

Reported by Radek Slupik.

Discussion: https://www.postgresql.org/message-id/20170101232054.10135.50528@wrigleys.postgresql.org
2017-07-03 12:10:11 +03:00
Peter Eisentraut 1db49c3b6d Fix typo in comment
Author: Albe Laurenz <laurenz.albe@wien.gv.at>
2017-06-30 14:51:15 -04:00
Robert Haas 2c77903b2b postgres_fdw: Move function prototype to correct section.
Etsuro Fujita, reviewed by Ashutosh Bapat.

Discussion: http://postgr.es/m/93a9c487-9920-a38f-da96-503422c50f59@lab.ntt.co.jp
2017-06-22 12:44:53 -04:00
Peter Eisentraut b56818abd4 Teach pgrowlocks to check relkind before scanning
Author: Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>
2017-06-21 23:19:13 -04:00
Tom Lane 382ceffdf7 Phase 3 of pgindent updates.
Don't move parenthesized lines to the left, even if that means they
flow past the right margin.

By default, BSD indent lines up statement continuation lines that are
within parentheses so that they start just to the right of the preceding
left parenthesis.  However, traditionally, if that resulted in the
continuation line extending to the right of the desired right margin,
then indent would push it left just far enough to not overrun the margin,
if it could do so without making the continuation line start to the left of
the current statement indent.  That makes for a weird mix of indentations
unless one has been completely rigid about never violating the 80-column
limit.

This behavior has been pretty universally panned by Postgres developers.
Hence, disable it with indent's new -lpl switch, so that parenthesized
lines are always lined up with the preceding left paren.

This patch is much less interesting than the first round of indent
changes, but also bulkier, so I thought it best to separate the effects.

Discussion: https://postgr.es/m/E1dAmxK-0006EE-1r@gemulon.postgresql.org
Discussion: https://postgr.es/m/30527.1495162840@sss.pgh.pa.us
2017-06-21 15:35:54 -04:00
Tom Lane c7b8998ebb Phase 2 of pgindent updates.
Change pg_bsd_indent to follow upstream rules for placement of comments
to the right of code, and remove pgindent hack that caused comments
following #endif to not obey the general rule.

Commit e3860ffa4d wasn't actually using
the published version of pg_bsd_indent, but a hacked-up version that
tried to minimize the amount of movement of comments to the right of
code.  The situation of interest is where such a comment has to be
moved to the right of its default placement at column 33 because there's
code there.  BSD indent has always moved right in units of tab stops
in such cases --- but in the previous incarnation, indent was working
in 8-space tab stops, while now it knows we use 4-space tabs.  So the
net result is that in about half the cases, such comments are placed
one tab stop left of before.  This is better all around: it leaves
more room on the line for comment text, and it means that in such
cases the comment uniformly starts at the next 4-space tab stop after
the code, rather than sometimes one and sometimes two tabs after.

Also, ensure that comments following #endif are indented the same
as comments following other preprocessor commands such as #else.
That inconsistency turns out to have been self-inflicted damage
from a poorly-thought-through post-indent "fixup" in pgindent.

This patch is much less interesting than the first round of indent
changes, but also bulkier, so I thought it best to separate the effects.

Discussion: https://postgr.es/m/E1dAmxK-0006EE-1r@gemulon.postgresql.org
Discussion: https://postgr.es/m/30527.1495162840@sss.pgh.pa.us
2017-06-21 15:19:25 -04:00
Tom Lane e3860ffa4d Initial pgindent run with pg_bsd_indent version 2.0.
The new indent version includes numerous fixes thanks to Piotr Stefaniak.
The main changes visible in this commit are:

* Nicer formatting of function-pointer declarations.
* No longer unexpectedly removes spaces in expressions using casts,
  sizeof, or offsetof.
* No longer wants to add a space in "struct structname *varname", as
  well as some similar cases for const- or volatile-qualified pointers.
* Declarations using PG_USED_FOR_ASSERTS_ONLY are formatted more nicely.
* Fixes bug where comments following declarations were sometimes placed
  with no space separating them from the code.
* Fixes some odd decisions for comments following case labels.
* Fixes some cases where comments following code were indented to less
  than the expected column 33.

On the less good side, it now tends to put more whitespace around typedef
names that are not listed in typedefs.list.  This might encourage us to
put more effort into typedef name collection; it's not really a bug in
indent itself.

There are more changes coming after this round, having to do with comment
indentation and alignment of lines appearing within parentheses.  I wanted
to limit the size of the diffs to something that could be reviewed without
one's eyes completely glazing over, so it seemed better to split up the
changes as much as practical.

Discussion: https://postgr.es/m/E1dAmxK-0006EE-1r@gemulon.postgresql.org
Discussion: https://postgr.es/m/30527.1495162840@sss.pgh.pa.us
2017-06-21 14:39:04 -04:00
Tom Lane a3bed62d44 Fix low-probability leaks of PGresult objects in the backend.
We had three occurrences of essentially the same coding pattern
wherein we tried to retrieve a query result from a libpq connection
without blocking.  In the case where PQconsumeInput failed (typically
indicating a lost connection), all three loops simply gave up and
returned, forgetting to clear any previously-collected PGresult
object.  Since those are malloc'd not palloc'd, the oversight results
in a process-lifespan memory leak.

One instance, in libpqwalreceiver, is of little significance because
the walreceiver process would just quit anyway if its connection fails.
But we might as well fix it.

The other two instances, in postgres_fdw, are somewhat more worrisome
because at least in principle the scenario could be repeated, allowing
the amount of memory leaked to build up to something worth worrying
about.  Moreover, in these cases the loops contain CHECK_FOR_INTERRUPTS
calls, as well as other calls that could potentially elog(ERROR),
providing another way to exit without having cleared the PGresult.
Here we need to add PG_TRY logic similar to what exists in quite a
few other places in postgres_fdw.

Coverity noted the libpqwalreceiver bug; I found the other two cases
by checking all calls of PQconsumeInput.

Back-patch to all supported versions as appropriate (9.2 lacks
postgres_fdw, so this is really quite unexciting for that branch).

Discussion: https://postgr.es/m/22620.1497486981@sss.pgh.pa.us
2017-06-15 15:03:52 -04:00
Peter Eisentraut 272171279f psql: Use more consistent capitalization of some output headings 2017-06-13 14:41:14 -04:00
Joe Conway 06c0afe56e Fix contrib/sepgsql regr tests for tup-routing constraint check change.
Commit 15ce775 changed tuple-routing constraint checking logic.
This affects the expected output for contrib/sepgsql, because
there's no longer LOG entries reporting allowance of int4eq()
execution. Per buildfarm.
2017-06-07 17:54:33 -07:00
Robert Haas ae9bfc5d65 postgres_fdw: Allow cancellation of transaction control commands.
Commit f039eaac71, later back-patched
with commit 1b812afb0e, allowed many of
the queries issued by postgres_fdw to fetch remote data to respond to
cancel interrupts in a timely fashion.  However, it didn't do anything
about the transaction control commands, which remained
noninterruptible.

Improve the situation by changing do_sql_command() to retrieve query
results using pgfdw_get_result(), which uses the asynchronous
interface to libpq so that it can check for interrupts every time
libpq returns control.  Since this might result in a situation
where we can no longer be sure that the remote transaction state
matches the local transaction state, add a facility to force all
levels of the local transaction to abort if we've lost track of
the remote state; without this, an apparently-successful commit of
the local transaction might fail to commit changes made on the
remote side.  Also, add a 60-second timeout for queries issued during
transaction abort; if that expires, give up and mark the state of
the connection as unknown.  Drop all such connections when we exit
the local transaction.  Together, these changes mean that if we're
aborting the local toplevel transaction anyway, we can just drop the
remote connection in lieu of waiting (possibly for a very long time)
for it to complete an abort.

This still leaves quite a bit of room for improvement.  PQcancel()
has no asynchronous interface, so if we get stuck sending the cancel
request we'll still hang.  Also, PQsetnonblocking() is not used, which
means we could block uninterruptibly when sending a query.  There
might be some other optimizations possible as well.  Nonetheless,
this allows us to escape a wait for an unresponsive remote server
quickly in many more cases than previously.

Report by Suraj Kharage.  Patch by me and Rafia Sabih.  Review
and testing by Amit Kapila and Tushar Ahuja.

Discussion: http://postgr.es/m/CAF1DzPU8Kx+fMXEbFoP289xtm3bz3t+ZfxhmKavr98Bh-C0TqQ@mail.gmail.com
2017-06-07 15:14:55 -04:00
Tom Lane e7941a9766 Replace over-optimistic Assert in partitioning code with a runtime test.
get_partition_parent felt that it could simply Assert that systable_getnext
found a tuple.  This is unlike any other caller of that function, and it's
unsafe IMO --- in fact, the reason I noticed it was that the Assert failed.
(OK, I was working with known-inconsistent catalog contents, but I wasn't
expecting the DB to fall over quite that violently.  The behavior in a
non-assert-enabled build wouldn't be very nice, either.)  Fix it to do what
other callers do, namely an actual runtime-test-and-elog.

Also, standardize the wording of elog messages that are complaining about
unexpected failure of systable_getnext.  90% of them say "could not find
tuple for <object>", so make the remainder do likewise.  Many of the
holdouts were using the phrasing "cache lookup failed", which is outright
misleading since no catcache search is involved.
2017-06-04 16:20:03 -04:00
Alvaro Herrera e6785a5ca1 Fix wording in amvalidate error messages
Remove some gratuitous message differences by making the AM name
previously embedded in each message be a %s instead.  While at it, get
rid of terminology that's unclear and unnecessary in one message.

Discussion: https://postgr.es/m/20170523001557.bq2hbq7hxyvyw62q@alvherre.pgsql
2017-05-30 15:45:42 -04:00
Tom Lane 2dd510e630 Fix contrib/sepgsql regression tests for partition NOT NULL change.
Commit 3ec76ff1f changed the partitioning logic to not install a forced
NOT NULL constraint on range partitioning columns.  This affects the
expected output for contrib/sepgsql, because there's no longer LOG
entries reporting allowance of such a constraint.  Per buildfarm.
2017-05-21 11:46:04 -04:00
Robert Haas 3ec76ff1f2 Don't explicitly mark range partitioning columns NOT NULL.
This seemed like a good idea originally because there's no way to mark
a range partition as accepting NULL, but that now seems more like a
current limitation than something we want to lock down for all time.
For example, there's a proposal to add the notion of a default
partition which accepts all rows not otherwise routed, which directly
conflicts with the idea that a range-partitioned table should never
allow nulls anywhere.  So let's change this while we still can, by
putting the NOT NULL test into the partition constraint instead of
changing the column properties.
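
A sketch of the user-visible effect (table names are made up):

    CREATE TABLE measurements (logdate date, value int)
        PARTITION BY RANGE (logdate);
    CREATE TABLE measurements_2017 PARTITION OF measurements
        FOR VALUES FROM ('2017-01-01') TO ('2018-01-01');
    -- logdate is no longer marked NOT NULL on the partition; the
    -- "logdate IS NOT NULL" test is part of the partition constraint.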

Amit Langote and Robert Haas, reviewed by Amit Kapila

Discussion: http://postgr.es/m/8e2dd63d-c6fb-bb74-3c2b-ed6d63629c9d@lab.ntt.co.jp
2017-05-18 13:49:31 -04:00
Bruce Momjian ce55481032 Post-PG 10 beta1 pgperltidy run 2017-05-17 19:01:23 -04:00
Bruce Momjian a6fd7b7a5f Post-PG 10 beta1 pgindent run
perltidy run not included.
2017-05-17 16:31:56 -04:00
Tom Lane c079673dcb Preventive maintenance in advance of pgindent run.
Reformat various places in which pgindent will make a mess, and
fix a few small violations of coding style that I happened to notice
while perusing the diffs from a pgindent dry run.

There is one actual bug fix here: the need-to-enlarge-the-buffer code
path in icu_convert_case was obviously broken.  Perhaps it's unreachable
in our usage?  Or maybe this is just sadly undertested.
2017-05-16 20:36:35 -04:00
Andrew Dunstan 12ad38b3b4 Suppress indentation from Data::Dumper in regression tests
Ultra-modern versions of the perl Data::Dumper module have apparently
changed how they indent output. Instead of trying to keep up, we choose
to tell it to suppress all indentation in the hstore_plperl regression
tests.

Backpatch to 9.5 where this feature was introduced.
2017-05-14 01:10:18 -04:00
Andres Freund 955a684e04 Fix race condition leading to hanging logical slot creation.
The snapshot assembly during the creation of logical slots relied on
waiting for transactions in xl_running_xacts to end, by checking for
their commit/abort records.  Unfortunately, despite locking, it is
possible to see an xl_running_xacts record listing transactions as
running that have already WAL-logged a commit/abort record, as the
locking just prevents the ProcArray from being adjusted, and the commit
record has to be logged first.

That led to either delayed or hanging snapshot creation, because
snapbuild.c would wait "forever" to see commit/abort records for some
transactions.  That hang resolved only if an xl_running_xacts record
without any running transactions happened to be logged, which is far
from certain on a busy server.

It's impractical to prevent that via more heavyweight locking; the
likelihood of deadlocks and significantly increased contention would
be too great.

Instead change the initial snapshot creation to be solely based on
tracking the oldest running transaction via
xl_running_xacts->oldestRunningXid - that actually ends up
significantly simplifying the code.  That has two disadvantages:
1) Because we cannot fully "trust" the contents of xl_running_xacts,
   we cannot use it to build the initial snapshot.  Instead we have to
   wait twice for all running transactions to finish.
2) Previously a slot could (unless the race occurred) be created as soon
   as all transactions perceived as running, based on commit/abort
   records, had ended; now we have to wait for the next xl_running_xacts
   record.
To address that, trigger logging a new xl_running_xacts record from
within snapbuild.c exactly when necessary.

Unfortunately snapbuild.c's SnapBuild is stored on disk, one of the
stupider ideas of a certain Mr Freund, so we can't change it in a
minor release.  As this is going to be backpatched, we have to hack
around a bit to keep on-disk compatibility.  A later commit will
rejigger that on master.

Author: Andres Freund, based on a quite different patch from Petr Jelinek
Analyzed-By: Petr Jelinek
Reviewed-By: Petr Jelinek
Discussion: https://postgr.es/m/f37e975c-908f-858e-707f-058d3b1eb214@2ndquadrant.com
Backpatch: 9.4-, where logical decoding has been introduced
2017-05-13 14:21:00 -07:00
Tom Lane 9aab83fc50 Redesign get_attstatsslot()/free_attstatsslot() for more safety and speed.
The mess cleaned up in commit da0759600 is clear evidence that it's a
bug hazard to expect the caller of get_attstatsslot()/free_attstatsslot()
to provide the correct type OID for the array elements in the slot.
Moreover, we weren't even getting any performance benefit from that,
since get_attstatsslot() was extracting the real type OID from the array
anyway.  So we ought to get rid of that requirement; indeed, it would
make more sense for get_attstatsslot() to pass back the type OID it found,
in case the caller isn't sure what to expect, which is likely in binary-
compatible-operator cases.

Another problem with the current implementation is that if the stats array
element type is pass-by-reference, we incur a palloc/memcpy/pfree cycle
for each element.  That seemed acceptable when the code was written because
we were targeting O(10) array sizes --- but these days, stats arrays are
almost always bigger than that, sometimes much bigger.  We can save a
significant number of cycles by doing one palloc/memcpy/pfree of the whole
array.  Indeed, in the now-probably-common case where the array is toasted,
that happens anyway so this method is basically free.  (Note: although the
catcache code will inline any out-of-line toasted values, it doesn't
decompress them.  At the other end of the size range, it doesn't expand
short-header datums either.  In either case, DatumGetArrayTypeP would have
to make a copy.  We do end up using an extra array copy step if the element
type is pass-by-value and the array length is neither small enough for a
short header nor large enough to have suffered compression.  But that
seems like a very acceptable price for winning in pass-by-ref cases.)

Hence, redesign to take these insights into account.  While at it,
convert to an API in which we fill a struct rather than passing a bunch
of pointers to individual output arguments.  That will make it less
painful if we ever want further expansion of what get_attstatsslot can
pass back.

It's certainly arguable that this is new development and not something to
push post-feature-freeze.  However, I view it as primarily bug-proofing
and therefore something that's better to have sooner not later.  Since
we aren't quite at beta phase yet, let's put it in.

Discussion: https://postgr.es/m/16364.1494520862@sss.pgh.pa.us
2017-05-13 15:14:39 -04:00
Tom Lane d10c626de4 Rename WAL-related functions and views to use "lsn" not "location".
Per discussion, "location" is a rather vague term that could refer to
multiple concepts.  "LSN" is an unambiguous term for WAL locations and
should be preferred.  Some function names, view column names, and function
output argument names used "lsn" already, but others used "location",
as well as yet other terms such as "wal_position".  Since we've already
renamed a lot of things in this area from "xlog" to "wal" for v10,
we may as well incur a bit more compatibility pain and make these names
all consistent.
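
For example, with the new names:

    SELECT pg_current_wal_lsn();
    -- previously pg_current_wal_location(); likewise pg_wal_lsn_diff()
    -- replaces pg_wal_location_diff(), and so on.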

David Rowley, minor additional docs hacking by me

Discussion: https://postgr.es/m/CAKJS1f8O0njDKe8ePFQ-LK5-EjwThsDws6ohJ-+c6nWK+oUxtg@mail.gmail.com
2017-05-11 11:49:59 -04:00
Heikki Linnakangas eb61136dc7 Remove support for password_encryption='off' / 'plain'.
Storing passwords in plaintext hasn't been a good idea for a very long
time, if ever. Now seems like a good time to finally forbid it, since we're
messing with this in PostgreSQL 10 anyway.

Remove the CREATE/ALTER USER UNENCRYPTED PASSWORD 'foo' syntax, since
storing passwords unencrypted is no longer supported. ENCRYPTED PASSWORD
'foo' is still accepted, but ENCRYPTED is now just a noise-word, it does
the same as just PASSWORD 'foo'.
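
For example (role name made up):

    CREATE ROLE app_user LOGIN PASSWORD 'secret';
    -- CREATE ROLE app_user LOGIN UNENCRYPTED PASSWORD 'secret';  -- now rejected
    ALTER ROLE app_user ENCRYPTED PASSWORD 'secret2';  -- ENCRYPTED is a noise word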

Likewise, remove the --unencrypted option from createuser, but accept
--encrypted as a no-op for backward compatibility. AFAICS, --encrypted was
a no-op even before this patch, because createuser encrypted the password
before sending it to the server even if --encrypted was not specified. It
added the ENCRYPTED keyword to the SQL command, but since the password was
already in encrypted form, it didn't make any difference. The documentation
was not clear on whether that was intended or not, but it's moot now.

Also, while password_encryption='on' is still accepted as an alias for
'md5', it is now marked as hidden, so that it is not listed as an accepted
value in error hints, for example. That's not directly related to removing
'plain', but it seems better this way.

Reviewed by Michael Paquier

Discussion: https://www.postgresql.org/message-id/16e9b768-fd78-0b12-cfc1-7b6b7f238fde@iki.fi
2017-05-08 11:26:07 +03:00
Peter Eisentraut 332bec1e60 postgres_fdw: Fix join push down with extensions
Objects in an extension are shippable to a foreign server if the
extension is part of the foreign server definition's shippable
extensions list.  But this was not properly considered in some cases
when checking whether a join condition can be pushed to a foreign server
and the join condition uses an object from a shippable extension.  So
the join would never be pushed down in those cases.

So, the list of extensions needs to be made available in fpinfo of the
relation being considered to be pushed down before any expressions are
assessed for being shippable.  Fix foreign_join_ok() to do that for a
join relation.
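
A hedged example of the setup that now benefits (assuming the cube
extension is installed on both servers and "loopback" is a postgres_fdw
server):

    ALTER SERVER loopback OPTIONS (ADD extensions 'cube');
    -- A join between two foreign tables whose join condition uses a cube
    -- operator can now be pushed down to the remote server.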

The code to save FDW options in fpinfo is scattered at multiple places.
Bring all of that together into functions apply_server_options(),
apply_table_options(), and merge_fdw_options().

David Rowley and Ashutosh Bapat, per report from David Rowley
2017-04-24 22:50:07 -04:00
Peter Eisentraut 6275f5d28a Fix new warnings from GCC 7
This addresses the new warning types -Wformat-truncation
-Wformat-overflow that are part of -Wall, via -Wformat, in GCC 7.
2017-04-17 13:59:46 -04:00
Tom Lane 2040bb4a0b Clean up manipulations of hash indexes' hasho_flag field.
Standardize on testing a hash index page's type by doing
	(opaque->hasho_flag & LH_PAGE_TYPE) == LH_xxx_PAGE
Various places were taking shortcuts like
	opaque->hasho_flag & LH_BUCKET_PAGE
which while not actually wrong, is still bad practice because
it encourages use of
	opaque->hasho_flag & LH_UNUSED_PAGE
which *is* wrong (LH_UNUSED_PAGE == 0, so the above is constant false).
hash_xlog.c's hash_mask() contained such an incorrect test.

This also ensures that we mask out the additional flag bits that
hasho_flag has accreted since 9.6.  pgstattuple's pgstat_hash_page(),
for one, was failing to do that and was thus actively broken.

Also fix assorted comments that hadn't been updated to reflect the
extended usage of hasho_flag, and fix some macros that were testing
just "(hasho_flag & bit)" to use the less dangerous, project-approved
form "((hasho_flag & bit) != 0)".

Coverity found the bug in hash_mask(); I noted the one in
pgstat_hash_page() through code reading.
2017-04-14 17:04:25 -04:00
Tom Lane 1dffabed49 Further fix pg_trgm's extraction of trigrams from regular expressions.
Commit 9e43e8714 turns out to have been insufficient: not only is it
necessary to track tentative parent links while considering a set of
arc removals, but it's necessary to track tentative flag additions
as well.  This is because we always merge arc target states into
arc source states; therefore, when considering a merge of the final
state with some other, it is the other state that will acquire a new
TSTATE_FIN bit.  If there's another arc for the same color trigram
that would cause merging of that state with the initial state, we
failed to recognize the problem.  The test cases for the prior commit
evidently only exercised situations where a tentative merge with the
initial state occurs before one with the final state.  If it goes the
other way around, we'll happily merge the initial and final states,
either producing a broken final graph that would never match anything,
or triggering the Assert added by the prior commit.

It's tempting to consider switching the merge direction when the merge
involves the final state, but I lack the time to analyze that idea in
detail.  Instead just keep track of the flag changes that would result
from proposed merges, in the same way that the prior commit tracked
proposed parent links.

Along the way, add some more debugging support, because I'm not entirely
confident that this is the last bug here.  And tweak matters so that
the transformed.dot file uses small integers rather than pointer values
to identify states; that makes it more readable if you're just eyeballing
it rather than fooling with Graphviz.  And rename a couple of identically
named struct fields to reduce confusion.

Per report from Corey Csuhta.  Add a test case based on his example.
(Note: this case does not trigger the bug under 9.3, apparently because
its different measurement of costs causes it to stop merging states before
it hits the failure.  I spent some time trying to find a variant that would
fail in 9.3, without success; but I'm sure such cases exist.)

Like the previous patch, back-patch to 9.3 where this code was added.

Report: https://postgr.es/m/E2B01A4B-4530-406B-8D17-2F67CF9A16BA@csuhta.com
2017-04-14 14:52:21 -04:00
Peter Eisentraut 0c22327f26 Remove useless trailing spaces in queries in C strings
Author: Alexander Law <exclusion@gmail.com>
2017-04-13 23:47:46 -04:00
Tom Lane 6cfaffc0dd Fix regexport.c to behave sanely with lookaround constraints.
regexport.c thought it could just ignore LACON arcs, but the correct
behavior is to treat them as satisfiable while consuming zero input
(rather reminiscently of commit 9f1e642d5).  Otherwise, the emitted
simplified-NFA representation may contain no paths leading from initial
to final state, which unsurprisingly confuses pg_trgm, as seen in
bug #14623 from Jeff Janes.

Since regexport's output representation has no concept of an arc that
consumes zero input, recurse internally to find the next normal arc(s)
after any LACON transitions.  We'd be forced into changing that
representation if a LACON could be the last arc reaching the final
state, but fortunately the regex library never builds NFAs with such
a configuration, so there always is a next normal arc.

Back-patch to 9.3 where this logic was introduced.

Discussion: https://postgr.es/m/20170413180503.25948.94871@wrigleys.postgresql.org
2017-04-13 17:18:35 -04:00
Robert Haas 9cc27566c1 Fix pgstattuple's handling of unused hash pages.
Hash indexes can contain both pages which are all-zeroes (i.e.
PageIsNew()) and pages which have been initialized but currently
aren't used.  The latter category can happen either when a page
has been reserved but not yet used or when it is used for a time
and then freed.  pgstattuple was only prepared to deal with the
pages that are actually all-zeroes, which it called zero_pages.
Rename the column to unused_pages (extension version 1.5 is
as-yet-unreleased) and make it count both kinds of unused pages.
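
A hedged usage sketch (index name made up; pgstathashindex() ships with
pgstattuple 1.5):

    SELECT * FROM pgstathashindex('my_hash_index');
    -- the result now has an unused_pages column counting both reserved
    -- and freed-but-initialized pages, replacing zero_pages.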

Along the way, slightly tidy up the way we test for pages of
various types.

Robert Haas and Ashutosh Sharma, reviewed by Amit Kapila

Discussion: http://postgr.es/m/CAE9k0PkTtKFB3YndOyQMjwuHx+-FtUP1ynK8E-nHtetoow3NtQ@mail.gmail.com
2017-04-12 11:53:00 -04:00
Tom Lane 88e902b769 Simplify handling of remote-qual pass-forward in postgres_fdw.
Commit 0bf3ae88a encountered a need to pass the finally chosen remote qual
conditions forward from postgresGetForeignPlan to postgresPlanDirectModify.
It solved that by sticking them into the plan node's fdw_private list,
which in hindsight was a pretty bad idea.  In the first place, there's no
use for those qual trees either in EXPLAIN or execution; indeed they could
never safely be used for any post-planning purposes, because they would not
get processed by setrefs.c.  So they're just dead weight to carry around in
the finished plan tree, plus being an attractive nuisance for somebody who
might get the idea that they could be used that way.  Secondly, because
those qual trees (sometimes) contained RestrictInfos, they created a
plan-transmission hazard for parallel query, which is how come we noticed a
problem.  We dealt with that symptom in commit 28b047875, but really a more
straightforward and more efficient fix is to pass the data through in a new
field of struct PgFdwRelationInfo.  So do it that way.  (There's no need
to revert 28b047875, as it has sufficient reason to live anyway.)

Per fuzz testing by Andreas Seltenreich.

Discussion: https://postgr.es/m/87tw5x4vcu.fsf@credativ.de
2017-04-11 13:53:21 -04:00
Tom Lane 28b0478755 Handle restriction clause lists more uniformly in postgres_fdw.
Clauses in the lists retained by postgres_fdw during planning were
sometimes bare boolean clauses, sometimes RestrictInfos, and sometimes
a mixture of the two in the same list.  The comment about that situation
didn't come close to telling the full truth, either.  Aside from being
confusing, this had a couple of bad practical consequences:
* waste of planning cycles due to inability to cache per-clause selectivity
and cost estimates;
* sometimes, RestrictInfos would sneak into the fdw_private list of a
finished Plan node, causing failures if, for example, we tried to ship
the Plan tree to a parallel worker.
(It may well be that it's a bug in the parallel-query logic that we
would ever try to ship such a plan to a parallel worker, but in any
case this deserves to be cleaned up.)

To fix, rearrange so that clause lists in PgFdwRelationInfo are always
lists of RestrictInfos, and then strip the RestrictInfos at the last
minute when making a Plan node.  In passing do a bit of refactoring and
comment cleanup in postgresGetForeignPlan and foreign_join_ok.

Although the messiness here dates back at least to 9.6, there's no evidence
that it causes anything worse than wasted planning cycles in 9.6, so no
back-patch for now.

Per fuzz testing by Andreas Seltenreich.

Tom Lane and Ashutosh Bapat

Discussion: https://postgr.es/m/87tw5x4vcu.fsf@credativ.de
2017-04-11 11:59:09 -04:00
Tom Lane 8f0530f580 Improve castNode notation by introducing list-extraction-specific variants.
This extends the castNode() notation introduced by commit 5bcab1114 to
provide, in one step, extraction of a list cell's pointer and coercion to
a concrete node type.  For example, "lfirst_node(Foo, lc)" is the same
as "castNode(Foo, lfirst(lc))".  Almost half of the uses of castNode
that have appeared so far include a list extraction call, so this is
pretty widely useful, and it saves a few more keystrokes compared to the
old way.

As with the previous patch, back-patch the addition of these macros to
pg_list.h, so that the notation will be available when back-patching.

Patch by me, after an idea of Andrew Gierth's.

Discussion: https://postgr.es/m/14197.1491841216@sss.pgh.pa.us
2017-04-10 13:51:53 -04:00
Peter Eisentraut 244f1c8907 Fix whitespace 2017-04-10 11:18:15 -04:00
Joe Conway 86fa9b2d1b Make sepgsql regression tests robust vs. collation differences
In commit 25542d77, regression test coverage was added to sepgsql
for partitioned tables. Unfortunately it was not robust in the face
of collation differences, per the buildfarm. Force "C" collation
in order to fix that.

Discussion: https://postgr.es/m/flat/623bcaae-112e-ced0-8c22-a84f75ae0c53%40joeconway.com
2017-04-09 15:59:02 -07:00
Joe Conway 25542d77dd Add partitioned table support to sepgsql
The new partitioned table capability added a new relkind, namely
RELKIND_PARTITIONED_TABLE. Update sepgsql to treat this new relkind
exactly the same way it does RELKIND_RELATION.

In addition, add regression test coverage for partitioned tables.

Issue raised by Stephen Frost and initial patch by Mike Palmiotto.
Review by Tom Lane and Robert Haas, and editorializing by me.

Discussion: https://postgr.es/m/flat/623bcaae-112e-ced0-8c22-a84f75ae0c53%40joeconway.com
2017-04-09 14:01:58 -07:00
Tom Lane 9c7f5229ad Optimize joins when the inner relation can be proven unique.
If there can certainly be no more than one matching inner row for a given
outer row, then the executor can move on to the next outer row as soon as
it's found one match; there's no need to continue scanning the inner
relation for this outer row.  This saves useless scanning in nestloop
and hash joins.  In merge joins, it offers the opportunity to skip
mark/restore processing, because we know we have not advanced past the
first possible match for the next outer row.
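
A rough example (table names made up; assume customers.id is a primary
key):

    EXPLAIN (VERBOSE, COSTS OFF)
    SELECT * FROM orders o JOIN customers c ON c.id = o.customer_id;
    -- the join node may now carry an "Inner Unique: true" annotation,
    -- and at run time the inner side is not rescanned after the first
    -- match for a given outer row.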

Of course, the devil is in the details: the proof of uniqueness must
depend only on joinquals (not otherquals), and if we want to skip
mergejoin mark/restore then it must depend only on merge clauses.
To avoid adding more planning overhead than absolutely necessary,
the present patch errs in the conservative direction: there are cases
where inner_unique or skip_mark_restore processing could be used, but
it will not do so because it's not sure that the uniqueness proof
depended only on "safe" clauses.  This could be improved later.

David Rowley, reviewed and rather heavily editorialized on by me

Discussion: https://postgr.es/m/CAApHDvqF6Sw-TK98bW48TdtFJ+3a7D2mFyZ7++=D-RyPsL76gw@mail.gmail.com
2017-04-07 22:20:13 -04:00
Alvaro Herrera 8bf74967da Reduce the number of pallocs() in BRIN
Instead of allocating memory in brin_deform_tuple and brin_copy_tuple
over and over during a scan, allow reuse of previously allocated memory.
This is said to make for a measurable performance improvement.

Author: Jinyu Zhang, Álvaro Herrera
Reviewed by: Tomas Vondra
Discussion: https://postgr.es/m/495deb78.4186.1500dacaa63.Coremail.beijing_pg@163.com
2017-04-07 19:08:43 -03:00
Simon Riggs ac2b095088 Reset API of clause_selectivity()
Discussion: https://postgr.es/m/CAKJS1f9yurJQW9pdnzL+rmOtsp2vOytkpXKGnMFJEO-qz5O5eA@mail.gmail.com
2017-04-06 19:10:51 -04:00
Joe Conway af8a94d18d Silence uninitialized variable compiler warning in sepgsql
At -Og optimization gcc warns that variable tclass may be used
uninitialized when relkind == RELKIND_INDEX. Actually that can't
happen due to an early return, but quiet the compiler by initializing
tclass to 0.

In passing, use uint16_t consistently for the declaration of tclass.

Complaint and initial patch by Mike Palmiotto. Editorializing by me.
Probably not worth backpatching given that it is cosmetic, so apply
to development head only.

Discussion: https://postgr.es/m/flat/623bcaae-112e-ced0-8c22-a84f75ae0c53%40joeconway.com
2017-04-06 14:28:19 -07:00
Joe Conway bd190eae36 Silence compiler warning in sepgsql
<selinux/label.h> includes <stdbool.h>, which clashes with our own
definitions of "true" and "false".  We don't care if <stdbool.h>
redefines "true"/"false"; those are close enough.

Complaint and initial patch by Mike Palmiotto. Final approach per
Tom Lane's suggestion, as discussed on hackers. Backpatching to
all supported branches.

Discussion: https://postgr.es/m/flat/623bcaae-112e-ced0-8c22-a84f75ae0c53%40joeconway.com
2017-04-06 14:24:29 -07:00
Peter Eisentraut 301ca0d9a2 Fix AclResult vs bool type mix-up
Using AclResult as a bool or vice versa works by accident, but it's
unusual and possibly confusing style, so write it out more explicitly.
2017-04-06 11:32:07 -04:00
Simon Riggs 2686ee1b7c Collect and use multi-column dependency stats
Follow on patch in the multi-variate statistics patch series.

CREATE STATISTICS s1 WITH (dependencies) ON (a, b) FROM t;
ANALYZE;
will collect dependency stats on (a, b) and then use the measured
dependency in subsequent query planning.
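
For instance, a query with correlated clauses on the same table can then
get a better row estimate:

    EXPLAIN SELECT * FROM t WHERE a = 1 AND b = 1;
    -- the estimate for the combined clauses now uses the measured
    -- functional dependency instead of assuming independence.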

Commit 7b504eb282 added
CREATE STATISTICS with n-distinct coefficients. These are now
specified using the mutually exclusive option WITH (ndistinct).

Author: Tomas Vondra, David Rowley
Reviewed-by: Kyotaro HORIGUCHI, Álvaro Herrera, Dean Rasheed, Robert Haas
and many other comments and contributions
Discussion: https://postgr.es/m/56f40b20-c464-fad2-ff39-06b668fac47c@2ndquadrant.com
2017-04-05 18:00:42 -04:00
Robert Haas 633e15ea0f Fix pageinspect failures on hash indexes.
Make every page in a hash index which isn't all-zeroes have a valid
special space, so that tools like pageinspect don't error out.

Also, make pageinspect cope with all-zeroes pages, because
_hash_alloc_buckets can leave behind large numbers of those until
they're consumed by splits.

Ashutosh Sharma and Robert Haas, reviewed by Amit Kapila.
Original trouble report from Jeff Janes.

Discussion: http://postgr.es/m/CAMkU=1y6NjKmqbJ8wLMhr=F74WzcMALYWcVFhEpm7i=mV=XsOg@mail.gmail.com
2017-04-05 14:18:15 -04:00
Peter Eisentraut e59b74a3fc dblink: Small code rearrangement for clarity
suggested by Tom Lane
2017-04-05 09:03:11 -04:00
Peter Eisentraut afd79873a0 Capitalize names of PLs consistently
Author: Daniel Gustafsson <daniel@yesql.se>
2017-04-05 00:38:25 -04:00
Peter Eisentraut 193f5f9e91 pageinspect: Add bt_page_items function with bytea argument
Author: Tomas Vondra <tomas.vondra@2ndquadrant.com>
Reviewed-by: Ashutosh Sharma <ashu.coek88@gmail.com>
2017-04-04 23:52:55 -04:00
Andres Freund bae9b80160 Force synchronous commit in new-ish test_decoding test.
This was missed in a924c327 ff.
2017-04-04 14:38:00 -07:00
Robert Haas ea69a0dead Expand hash indexes more gradually.
Since hash indexes typically have very few overflow pages, adding a
new splitpoint essentially doubles the on-disk size of the index,
which can lead to large and abrupt increases in disk usage (and
perhaps long delays on occasion).  To mitigate this problem to some
degree, divide larger splitpoints into four equal phases.  This means
that, for example, instead of growing from 4GB to 8GB all at once, a
hash index will now grow from 4GB to 5GB to 6GB to 7GB to 8GB, which
is perhaps still not as smooth as we'd like but certainly an
improvement.

This changes the on-disk format of the metapage, so bump HASH_VERSION
from 2 to 3.  This will force a REINDEX of all existing hash indexes,
but that's probably a good idea anyway.  First, hash indexes from
pre-10 versions of PostgreSQL could easily be corrupted, and we don't
want to confuse corruption carried over from an older release with any
corruption caused despite the new write-ahead logging in v10.  Second,
it will let us remove some backward-compatibility code added by commit
293e24e507.
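
For example (index name made up):

    REINDEX INDEX my_hash_index;
    -- needed once per pre-existing hash index after the HASH_VERSION bump.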

Mithun Cy, reviewed by Amit Kapila, Jesper Pedersen and me.  Regression
test outputs updated by me.

Discussion: http://postgr.es/m/CAD__OuhG6F1gQLCgMQNnMNgoCvOLQZz9zKYJQNYvYmmJoM42gA@mail.gmail.com
Discussion: http://postgr.es/m/CA+TgmoYty0jCf-pa+m+vYUJ716+AxM7nv_syvyanyf5O-L_i2A@mail.gmail.com
2017-04-03 23:46:33 -04:00
Robert Haas 7a39b5e4d1 Abstract logic to allow for multiple kinds of child rels.
Currently, the only type of child relation is an "other member rel",
which is the child of a baserel, but in the future joins and even
upper relations may have child rels.  To facilitate that, introduce
macros that test for particular RelOptKind values, and use
them in various places where they help to clarify the sense of a test.
(For example, a test may allow RELOPT_OTHER_MEMBER_REL either because
it intends to allow child rels, or because it intends to allow simple
rels.)

Also, remove find_childrel_top_parent, which will not work for a
child rel that is not a baserel.  Instead, add a new RelOptInfo
member top_parent_relids to track the same kind of information in a
more generic manner.

Ashutosh Bapat, slightly tweaked by me.  Review and testing of the
patch set from which this was taken by Rajkumar Raghuwanshi and Rafia
Sabih.

Discussion: http://postgr.es/m/CA+TgmoagTnF2yqR3PT2rv=om=wJiZ4-A+ATwdnriTGku1CLYxA@mail.gmail.com
2017-04-03 22:41:31 -04:00
Kevin Grittner 01fd6f8f2d Try to fix breakage of sepgsql hooks by ENR patch.
Turned up by buildfarm animal rhinoceros.  Fixing blind.  Will have
to wait for next run by rhinoceros to know whether it worked.
2017-04-01 00:10:12 -05:00
Kevin Grittner 18ce3a4ab2 Add infrastructure to support EphemeralNamedRelation references.
A QueryEnvironment concept is added, which allows new types of
objects to be passed into queries from parsing on through
execution.  At this point, the only thing implemented is a
collection of EphemeralNamedRelation objects -- relations which
can be referenced by name in queries, but do not exist in the
catalogs.  The only type of ENR implemented is NamedTuplestore, but
provision is made to add more types fairly easily.

An ENR can carry its own TupleDesc or reference a relation in the
catalogs by relid.

Although these features can be used without SPI, convenience
functions are added to SPI so that ENRs can easily be used by code
run through SPI.

The initial use of all this is going to be transition tables in
AFTER triggers, but that will be added to each PL as a separate
commit.

An incidental effect of this patch is to produce a more informative
error message if an attempt is made to modify the contents of a CTE
from a referencing DML statement.  No tests previously covered that
possibility, so one is added.

Kevin Grittner and Thomas Munro
Reviewed by Heikki Linnakangas, David Fetter, and Thomas Munro
with valuable comments and suggestions from many others
2017-03-31 23:17:18 -05:00
Robert Haas f49bcd4ef3 postgres_fdw: Teach IMPORT FOREIGN SCHEMA about partitioning.
Don't import partitions.  Do import partitioned tables which are
not themselves partitions.
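
A hedged example (server and schema names made up):

    IMPORT FOREIGN SCHEMA public FROM SERVER remote_srv INTO imported;
    -- remote partitioned tables become foreign tables; their individual
    -- partitions are not imported.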

Report by Stephen Frost.  Design and patch by Michael Paquier,
reviewed by Amit Langote.  Documentation revised by me.

Discussion: http://postgr.es/m/20170309141531.GD9812@tamriel.snowman.net
2017-03-31 15:06:34 -04:00
Simon Riggs ceb3158abe Fix pgrowlocks minor coding oversight 2017-03-30 14:53:07 -04:00
Simon Riggs 25fff40798 Default monitoring roles
Three nologin roles with non-overlapping privs are created by default
* pg_read_all_settings - read all GUCs.
* pg_read_all_stats - pg_stat_*, pg_database_size(), pg_tablespace_size()
* pg_stat_scan_tables - may lock/scan tables

Top level role - pg_monitor includes all of the above by default, plus others
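
For example (the granted role name is made up):

    GRANT pg_monitor TO monitoring_user;
    -- or grant the finer-grained roles individually:
    GRANT pg_read_all_stats, pg_stat_scan_tables TO monitoring_user;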

Author: Dave Page
Reviewed-by: Stephen Frost, Robert Haas, Peter Eisentraut, Simon Riggs
2017-03-30 14:18:53 -04:00
Andres Freund 5ded4bd214 Remove support for version-0 calling conventions.
The V0 convention is failure prone because we've so far assumed that a
function is V0 if PG_FUNCTION_INFO_V1 is missing, leading to crashes
if a function was coded against the V1 interface.  V0 doesn't allow
proper NULL, SRF and toast handling.  V0 offers no features that
V1 doesn't.

Thus remove V0 support and obsolete fmgr README contents relating to
it.

Author: Andres Freund, with contributions by Peter Eisentraut & Craig Ringer
Reviewed-By: Peter Eisentraut, Craig Ringer
Discussion: https://postgr.es/m/20161208213441.k3mbno4twhg2qf7g@alap3.anarazel.de
2017-03-30 06:25:46 -07:00
Andres Freund 389bb2818f Move contrib/seg to only use V1 calling conventions.
A later commit will remove V0 support.

Author: Andres Freund, with contributions by Craig Ringer
Reviewed-By: Peter Eisentraut, Craig Ringer
Discussion: https://postgr.es/m/20161208213441.k3mbno4twhg2qf7g@alap3.anarazel.de
2017-03-30 06:25:46 -07:00
Alvaro Herrera ce96ce60ca Remove direct uses of ItemPointer.{ip_blkid,ip_posid}
There are no functional changes here; this simply encapsulates knowledge
of the ItemPointerData struct so that a future patch can change things
without more breakage.

All direct users of ip_blkid and ip_posid are changed to use existing
macros ItemPointerGetBlockNumber and ItemPointerGetOffsetNumber
respectively.  For callers where that's inappropriate (because they
Assert that the itempointer is valid-looking), add
ItemPointerGetBlockNumberNoCheck and ItemPointerGetOffsetNumberNoCheck,
which lack the assertion but are otherwise identical.

Author: Pavan Deolasee
Discussion: https://postgr.es/m/CABOikdNnFon4cJiL=h1mZH3bgUeU+sWHuU4Yr8AB=j3A2p1GiA@mail.gmail.com
2017-03-28 19:02:23 -03:00
Peter Eisentraut 85163641f8 dblink: Fix error reporting
The conname variable was not initialized in some code paths, resulting
in error reports referring to the "unnamed" connection rather than the
correct connection name.

Author: Rushabh Lathia <rushabh.lathia@gmail.com>
2017-03-28 11:08:38 -04:00
Tom Lane a6f22e8356 Show ignored constants as "$N" rather than "?" in pg_stat_statements.
The trouble with the original choice here is that "?" is a valid (and
indeed used) operator name, so that you could end up with ambiguous
statement texts like "SELECT ? ? ?".  With this patch, you instead
see "SELECT $1 ? $2", which seems significantly more readable.
The numbers used for this purpose begin after the last actual $N parameter
in the particular query.  The conflict with external parameters has its own
potential for confusion of course, but it was agreed to be an improvement
over the previous behavior.
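
For example:

    SELECT count(*) FROM pg_class WHERE relpages > 100;
    SELECT query FROM pg_stat_statements WHERE query LIKE '%pg_class%';
    -- the stored text now reads
    --   SELECT count(*) FROM pg_class WHERE relpages > $1
    -- rather than using "?" for the constant.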

Lukas Fittl

Discussion: https://postgr.es/m/CAP53PkxeaCuwYmF-A4J5z2-qk5fYFo5_NH3gpXGJJBxv1DMwEw@mail.gmail.com
2017-03-27 20:14:36 -04:00
Peter Eisentraut facde2a98f Clean up Perl code according to perlcritic
Fix all perlcritic warnings of severity level 5, except in
src/backend/utils/Gen_dummy_probes.pl, which is automatically generated.

Reviewed-by: Dagfinn Ilmari Mannsåker <ilmari@ilmari.org>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
2017-03-27 08:18:22 -04:00
Andrew Gierth b5635948ab Support hashed aggregation with grouping sets.
This extends the Aggregate node with two new features: HashAggregate
can now run multiple hashtables concurrently, and a new strategy
MixedAggregate populates hashtables while doing sorted grouping.

The planner will now attempt to save as many sorts as possible when
planning grouping sets queries, while not exceeding work_mem for the
estimated combined sizes of all hashtables used.  No SQL-level changes
are required.  There should be no user-visible impact other than the
new EXPLAIN output and possible changes to result ordering when ORDER
BY was not used (which affected a few regression tests).  The
enable_hashagg option is respected.
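
A sketch of the kind of query affected (table and column names made up):

    SELECT brand, size, count(*)
      FROM items
     GROUP BY GROUPING SETS ((brand), (size), ());
    -- the plan may now use HashAggregate or MixedAggregate rather than
    -- sorting separately for every grouping set.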

Author: Andrew Gierth
Reviewers: Mark Dilger, Andres Freund
Discussion: https://postgr.es/m/87vatszyhj.fsf@news-spur.riddles.org.uk
2017-03-27 04:20:54 +01:00
Andres Freund d253b0f6e3 Blindly attempt to fix sepgsql tests #2. 2017-03-25 20:54:23 -07:00
Andres Freund 83bbcb04ab Blindly attempt to fix sepgsql tests.
Due to b8d7f053c5 some permission checks are now happening even on
empty tables, and some of the checks move around.

Discussion: https://postgr.es/m/95bdb608-093c-160f-c6be-983a36ccd7f9@joeconway.com
2017-03-25 20:35:55 -07:00
Andres Freund b8d7f053c5 Faster expression evaluation and targetlist projection.
This replaces the old, recursive tree-walk based evaluation, with
non-recursive, opcode dispatch based, expression evaluation.
Projection is now implemented as part of expression evaluation.

This both leads to significant performance improvements, and makes
future just-in-time compilation of expressions easier.

The speed gains primarily come from:
- non-recursive implementation reduces stack usage / overhead
- simple sub-expressions are implemented with a single jump, without
  function calls
- sharing some state between different sub-expressions
- reduced amount of indirect/hard to predict memory accesses by laying
  out operation metadata sequentially; including the avoidance of
  nearly all of the previously used linked lists
- more code has been moved to expression initialization, avoiding
  constant re-checks at evaluation time

Future just-in-time compilation (JIT) has become easier, as
demonstrated by released patches intended to be merged in a later
release, for primarily two reasons: Firstly, due to a stricter split
between expression initialization and evaluation, less code has to be
handled by the JIT. Secondly, due to the non-recursive nature of the
generated "instructions", less performance-critical code-paths can
easily be shared between interpreted and compiled evaluation.

The new framework allows for significant future optimizations. E.g.:
- basic infrastructure to later reduce the per-executor-startup
  overhead of expression evaluation, by caching state in prepared
  statements.  That'd be helpful in OLTPish scenarios where
  initialization overhead is measurable.
- optimizing the generated "code". A number of proposals for potential
  work has already been made.
- optimizing the interpreter. Similarly a number of proposals have
  been made here too.

The move of logic into the expression initialization step leads to some
backward-incompatible changes:
- Function permission checks are now done during expression
  initialization, whereas previously they were done during
  execution. In edge cases this can lead to errors being raised that
  previously wouldn't have been, e.g. a NULL array being coerced to a
  different array type previously didn't perform checks.
- The set of domain constraints to be checked is now evaluated once
  during expression initialization; previously it was re-built
  every time a domain check was evaluated. For normal queries this
  doesn't change much, but e.g. for plpgsql functions, which cache
  ExprStates, the old set could stick around longer.  The behavior
  in this area might still change.

Author: Andres Freund, with significant changes by Tom Lane,
	changes by Heikki Linnakangas
Reviewed-By: Tom Lane, Heikki Linnakangas
Discussion: https://postgr.es/m/20161206034955.bh33paeralxbtluv@alap3.anarazel.de
2017-03-25 14:52:06 -07:00
Robert Haas 691b8d5928 Allow for parallel execution whenever ExecutorRun() is done only once.
Previously, it was unsafe to execute a plan in parallel if
ExecutorRun() might be called with a non-zero row count.  However,
it's quite easy to fix things up so that we can support that case,
provided that it is known that we will never call ExecutorRun() a
second time for the same QueryDesc.  Add infrastructure to signal
this, and cross-checks to make sure that a caller who claims this is
true doesn't later renege.

While that pattern never happens with queries received directly from a
client -- there's no way to know whether multiple Execute messages
will be sent unless the first one requests all the rows -- it's pretty
common for queries originating from procedural languages, which often
limit the result to a single tuple or to a user-specified number of
tuples.

This commit doesn't actually enable parallelism in any additional
cases, because currently none of the places that would be able to
benefit from this infrastructure pass CURSOR_OPT_PARALLEL_OK in the
first place, but it makes it much more palatable to pass
CURSOR_OPT_PARALLEL_OK in places where we currently don't, because it
eliminates some cases where we'd end up having to run the parallel
plan serially.

Patch by me, based on some ideas from Rafia Sabih and corrected by
Rafia Sabih based on feedback from Dilip Kumar and myself.

Discussion: http://postgr.es/m/CA+TgmobXEhvHbJtWDuPZM9bVSLiTj-kShxQJ2uM5GPDze9fRYA@mail.gmail.com
2017-03-23 13:14:36 -04:00
Peter Eisentraut 7c4f52409a Logical replication support for initial data copy
Add functionality for a new subscription to copy the initial data in the
tables and then sync with the ongoing apply process.

For the copying, add a new internal COPY option to have the COPY source
data provided by a callback function.  The initial data copy works on
the subscriber by receiving COPY data from the publisher and then
providing it locally into a COPY that writes to the destination table.

A WAL receiver can now execute full SQL commands.  This is used here to
obtain information about tables and publications.

Several new options were added to CREATE and ALTER SUBSCRIPTION to
control whether and when initial table syncing happens.
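
A minimal usage sketch (names and connection string made up):

    -- on the publisher
    CREATE PUBLICATION mypub FOR TABLE accounts;
    -- on the subscriber: existing rows of accounts are copied first,
    -- then the ongoing changes are applied
    CREATE SUBSCRIPTION mysub
        CONNECTION 'host=publisher dbname=appdb'
        PUBLICATION mypub;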

Change pg_dump option --no-create-subscription-slots to
--no-subscription-connect and use the new CREATE SUBSCRIPTION
... NOCONNECT option for that.

Author: Petr Jelinek <petr.jelinek@2ndquadrant.com>
Tested-by: Erik Rijkers <er@xs4all.nl>
2017-03-23 08:55:37 -04:00
Simon Riggs af4b1a0869 Refactor GetOldestXmin() to use flags
Replace ignoreVacuum parameter with more flexible flags.

Author: Eiji Seki
Review: Haribabu Kommi
2017-03-22 16:51:01 +00:00
Andrew Dunstan 4ad0f88c44 Add btree_gin support for enum types
Reviewed by Tom Lane and Anastasia Lubennikova

Discussion:  http://postgr.es/m/56EA8A71.8060107@dunslane.net
2017-03-21 11:04:17 -04:00
Andrew Dunstan f7946a92b6 Add btree_gist support for enum types.
This will allow enums to be used in exclusion constraints.
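
A sketch of the kind of constraint this enables (type and table names
made up):

    CREATE EXTENSION btree_gist;
    CREATE TYPE room_status AS ENUM ('free', 'booked');
    CREATE TABLE booking (
        status room_status,
        during tsrange,
        EXCLUDE USING gist (status WITH =, during WITH &&)
    );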

The code uses the new CallerFInfoFunctionCall infrastructure in fmgr,
and the support for it added to btree_gist in commit 393bb504d7.

Reviewed by Tom Lane and Anastasia Lubennikova

Discussion:  http://postgr.es/m/56EA8A71.8060107@dunslane.net
2017-03-21 10:43:27 -04:00
Andrew Dunstan 65a9138b9b Use CallerFInfoFunctionCall with btree_gist for varlena types
Follow up to commit 393bb504d7 which did this for numeric types.
2017-03-21 10:43:27 -04:00
Andrew Dunstan 4b1c68d63e Use CallerFInfoFunctionCall with btree_gist for numeric types
None of the existing types actually need to use this mechanism, but this
will allow support for enum types which will need it. A separate patch
will adjust the varlena types support for consistency.

Reviewed by Tom Lane and Anastasia Lubennikova

Discussion:  http://postgr.es/m/27220.1478360811@sss.pgh.pa.us
2017-03-21 10:43:27 -04:00
Peter Eisentraut fef2bcdcba pageinspect: Add page_checksum function
Author: Tomas Vondra <tomas.vondra@2ndquadrant.com>
Reviewed-by: Ashutosh Sharma <ashu.coek88@gmail.com>
2017-03-17 10:55:17 -04:00
Peter Eisentraut a02731cb10 pageinspect: Add test for page_header function 2017-03-17 09:23:39 -04:00
Robert Haas b30fb56b07 postgres_fdw: Push down FULL JOINs with restriction clauses.
The previous deparsing logic wasn't smart enough to produce subqueries
when deparsing; make it smart enough to do that.  However, we only do
it that way when necessary, because it generates more complicated SQL
which will be harder for any humans reading the queries to understand.
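
A sketch of the shape of query this affects (foreign table names are
invented): when each input of the FULL JOIN carries its own restriction,
the remote SQL now wraps those inputs in subqueries instead of giving up
on pushing the join down.

    SELECT t1.c1, t2.c1
    FROM (SELECT c1 FROM ft1 WHERE c1 BETWEEN 50 AND 60) t1
    FULL JOIN (SELECT c1 FROM ft2 WHERE c1 BETWEEN 50 AND 60) t2
        ON (t1.c1 = t2.c1);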

Etsuro Fujita, reviewed by Ashutosh Bapat

Discussion: http://postgr.es/m/c449261a-b033-dc02-9254-2fe5b7044795@lab.ntt.co.jp
2017-03-16 13:34:59 -04:00
Stephen Frost c7a9fa399d Add support for EUI-64 MAC addresses as macaddr8
This adds in support for EUI-64 MAC addresses by adding a new data type
called 'macaddr8' (using our usual convention of indicating the number
of bytes stored).

This was largely a copy-and-paste from the macaddr data type, with
appropriate adjustments for having 8 bytes instead of 6 and adding
support for converting a provided EUI-48 (6 byte format) to the EUI-64
format.  Conversion from EUI-48 to EUI-64 inserts FFFE as the 4th and
5th bytes but does not perform the IPv6 modified EUI-64 action of
flipping the 7th bit, but we add a function to perform that specific
action for the user as it may be commonly done by users who wish to
calculate their IPv6 address based on their network prefix and 48-bit
MAC address.
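
A short sketch of the behavior described above (macaddr8_set7bit is the
7th-bit helper referred to; the output lines are illustrative):

    -- Casting an EUI-48 address inserts ff:fe as the 4th and 5th bytes.
    SELECT '08:00:2b:01:02:03'::macaddr8;
    --  08:00:2b:ff:fe:01:02:03

    -- Flip the 7th bit to get the IPv6 modified EUI-64 form.
    SELECT macaddr8_set7bit('08:00:2b:01:02:03'::macaddr8);
    --  0a:00:2b:ff:fe:01:02:03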

Author: Haribabu Kommi, with a good bit of rework of macaddr8_in by me.
Reviewed by: Vitaly Burovoy, Kuntal Ghosh

Discussion: https://postgr.es/m/CAJrrPGcUi8ZH+KkK+=TctNQ+EfkeCEHtMU_yo1mvX8hsk_ghNQ@mail.gmail.com
2017-03-15 11:16:25 -04:00
Andres Freund 60f826c5e6 Improve isolation tests infrastructure.
Previously if a directory had both isolationtester and plain
regression tests, they couldn't be run in parallel, because they'd
access the same files/directories.  That, so far, only affected
contrib/test_decoding.

Rather than fix that locally in contrib/test_decoding, improve
pg_regress_isolation_[install]check to use separate resources from
plain regression tests.

That requires a minor change in pg_regress, namely that the
--outputdir is created if it does not already exist; that seems like a
good idea anyway.

Use the improved helpers even where previously not used.

Author: Tom Lane and Andres Freund
Discussion: https://postgr.es/m/20170311194831.vm5ikpczq52c2drg@alap3.anarazel.de
2017-03-14 15:56:17 -07:00
Andres Freund 7150402655 amcheck: Harden tests against concurrent autovacuums.
The previous coding of the test was vulnerable against autovacuum
triggering work on one of the tables in check_btree.sql.

For the purpose of the test it's entirely sufficient to check for
locks taken by the current process, so add an appropriate restriction.
While touching the test, expand it to also check for locks on the
underlying relations, rather than just the indexes.
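
The restriction boils down to filtering the lock view by the current
backend, roughly like this (a sketch):

    -- Only look at locks held by this session, so a concurrent
    -- autovacuum worker cannot perturb the expected output.
    SELECT relation::regclass, mode
    FROM pg_locks
    WHERE pid = pg_backend_pid()
      AND relation IS NOT NULL;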

Reported-By: Tom Lane
Discussion: https://postgr.es/m/30354.1489434301@sss.pgh.pa.us
2017-03-14 13:07:38 -07:00
Robert Haas c11453ce0a hash: Add write-ahead logging support.
The warning about hash indexes not being write-ahead logged and their
use being discouraged has been removed.  "snapshot too old" is now
supported for tables with hash indexes.  Most importantly, barring
bugs, hash indexes will now be crash-safe and usable on standbys.

This commit doesn't yet add WAL consistency checking for hash
indexes, as we now have for other index types; a separate patch has
been submitted to cure that lack.

Amit Kapila, reviewed and slightly modified by me.  The larger patch
series of which this is a part has been reviewed and tested by Álvaro
Herrera, Ashutosh Sharma, Mark Kirkwood, Jeff Janes, and Jesper
Pedersen.

Discussion: http://postgr.es/m/CAA4eK1JOBX=YU33631Qh-XivYXtPSALh514+jR8XeD7v+K3r_Q@mail.gmail.com
2017-03-14 13:27:02 -04:00
Peter Eisentraut a47b38c9ee Spelling fixes
From: Josh Soref <jsoref@gmail.com>
2017-03-14 12:58:39 -04:00
Peter Eisentraut f97a028d8e Spelling fixes in code comments
From: Josh Soref <jsoref@gmail.com>
2017-03-14 12:58:39 -04:00
Peter Eisentraut 57488c1ce3 Fix compiler warning
From: David Rowley <david.rowley@2ndquadrant.com>
2017-03-13 15:44:50 -04:00
Noah Misch 3a0d473192 Use wrappers of PG_DETOAST_DATUM_PACKED() more.
This makes almost all core code follow the policy introduced in the
previous commit.  Specific decisions:

- Text search support functions with char* and length arguments, such as
  prsstart and lexize, may receive unaligned strings.  I doubt
  maintainers of non-core text search code will notice.

- Use plain VARDATA() on values detoasted or synthesized earlier in the
  same function.  Use VARDATA_ANY() on varlenas sourced outside the
  function, even if they happen to always have four-byte headers.  As an
  exception, retain the universal practice of using VARDATA() on return
  values of SendFunctionCall().

- Retain PG_GETARG_BYTEA_P() in pageinspect.  (Page images are too large
  for a one-byte header, so this misses no optimization.)  Sites that do
  not call get_page_from_raw() typically need the four-byte alignment.

- For now, do not change btree_gist.  Its use of four-byte headers in
  memory is partly entangled with storage of 4-byte headers inside
  GBT_VARKEY, on disk.

- For now, do not change gtrgm_consistent() or gtrgm_distance().  They
  incorporate the varlena header into a cache, and there are multiple
  credible implementation strategies to consider.
2017-03-12 19:35:34 -04:00
Noah Misch 944a026b4e Fix pg_file_write() error handling.
Detect fclose() failures; given "ln -s /dev/full $PGDATA/devfull",
"pg_file_write('devfull', 'x', true)" now fails as it should.  Don't
leak a stream when fwrite() fails.  Remove a born-ineffective test that
aimed to skip zero-length writes.  Back-patch to 9.2 (all supported
versions).
2017-03-12 19:35:31 -04:00
Noah Misch 2fd26b23b6 Assume deconstruct_array() outputs are untoasted.
In functions that issue a deconstruct_array() call, consistently use
plain VARSIZE()/VARDATA() on the array elements.  Prior practice was
divided between those and VARSIZE_ANY_EXHDR()/VARDATA_ANY().
2017-03-12 19:35:31 -04:00
Joe Conway cd1e23e93b Fix ancient connection leak in dblink
When using unnamed connections with dblink, every time a new
connection is made, the old one is leaked. Fix that.

This has been an issue probably since dblink was first committed.
Someone complained almost ten years ago, but apparently I decided
not to pursue it at the time, and neither did anyone else, so it
slipped between the cracks. Now that someone else has complained,
fix in all supported branches.

Discussion: (orig) https://postgr.es/m/flat/F680AB59-6D6F-4026-9599-1BE28880273D%40decibel.org#F680AB59-6D6F-4026-9599-1BE28880273D@decibel.org
Discussion: (new) https://postgr.es/m/flat/0A3221C70F24FB45833433255569204D1F6ADF8C@G01JPEXMBYT05
Reported by: Jim Nasby and Takayuki Tsunakawa
2017-03-11 13:32:18 -08:00
Peter Eisentraut 22ef6b041a dblink: Change some StringInfo to StringInfoData
For consistency with other code and to avoid wasting some small amount
of memory.

From: Tsunakawa, Takayuki <tsunakawa.takay@jp.fujitsu.com>
2017-03-10 09:59:10 -05:00
Peter Eisentraut acaf7ccb94 dblink: Replace some macros by static functions
Also remove some unused code and the no longer useful dblink.h file.

Reviewed-by: Tsunakawa, Takayuki <tsunakawa.takay@jp.fujitsu.com>
2017-03-10 09:42:30 -05:00
Tom Lane 9c2635e26f Fix hard-coded relkind constants in assorted other files.
Although it's reasonable to expect that most of these constants will
never change, that does not make it good programming style to hard-code
the value rather than using the RELKIND_FOO macros.

I think I've now gotten all the hard-coded references in C code.
Unfortunately there's no equally convenient way to parameterize
SQL files ...

Discussion: https://postgr.es/m/11145.1488931324@sss.pgh.pa.us
2017-03-09 23:36:52 -05:00
Andres Freund fcd8d25d38 amcheck: editorialize variable name & comment.
No exclusive lock is taken anymore...
2017-03-09 20:03:30 -08:00
Tom Lane 574268e37b Add .gitignore to contrib/amcheck.
Oversight in commit 3717dc149.
2017-03-09 22:45:24 -05:00
Stephen Frost 90e91e242f pgstattuple: Fix typo partitiond -> partitioned
Pointed out by Michael Paquier
2017-03-09 20:06:11 -05:00
Andres Freund 3717dc149e Add amcheck extension to contrib.
This is the beginning of a collection of SQL-callable functions to
verify the integrity of data files.  For now it only contains code to
verify B-Tree indexes.

This adds two SQL-callable functions, validating B-Tree consistency to
a varying degree.  Check the extensive docs for details.
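
A minimal usage sketch (bt_index_check and bt_index_parent_check are
the two functions referred to above; the index name is a placeholder):

    CREATE EXTENSION amcheck;

    -- Lighter-weight structural check of a btree index.
    SELECT bt_index_check('my_btree_index'::regclass);

    -- Stricter check that also verifies parent/child relationships;
    -- it takes stronger locks.
    SELECT bt_index_parent_check('my_btree_index'::regclass);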

The goal is to later extend the coverage of the module to further
access methods, possibly including the heap.  Once checks for
additional access methods exist, we'll likely add some "dispatch"
functions that cover multiple access methods.

Author: Peter Geoghegan, editorialized by Andres Freund
Reviewed-By: Andres Freund, Tomas Vondra, Thomas Munro,
   Anastasia Lubennikova, Robert Haas, Amit Langote
Discussion: CAM3SWZQzLMhMwmBqjzK+pRKXrNUZ4w90wYMUWfkeV8mZ3Debvw@mail.gmail.com
2017-03-09 16:33:02 -08:00
Stephen Frost c08d82f38e Add relkind checks to certain contrib modules
The contrib extensions pageinspect, pg_visibility and pgstattuple only
work against regular relations which have storage.  They don't work
against foreign tables, partitioned (parent) tables, views, et al.

Add checks to the user-callable functions to return a useful error
message to the user if they mistakenly pass an invalid relation to a
function which doesn't accept that kind of relation.

In passing, improve some of the existing checks to use ereport() instead
of elog(), add a function to consolidate common checks where
appropriate, and add some regression tests.

Author: Amit Langote, with various changes by me
Reviewed by: Michael Paquier and Corey Huinker
Discussion: https://postgr.es/m/ab91fd9d-4751-ee77-c87b-4dd704c1e59c@lab.ntt.co.jp
2017-03-09 16:34:25 -05:00
Alvaro Herrera fcec6caafa Support XMLTABLE query expression
XMLTABLE is defined by the SQL/XML standard as a feature that allows
turning XML-formatted data into relational form, so that it can be used
as a <table primary> in the FROM clause of a query.

This new construct provides significant simplicity and performance
benefit for XML data processing; what in a client-side custom
implementation was reported to take 20 minutes can be executed in 400ms
using XMLTABLE.  (The same functionality was said to take 10 seconds
using nested PostgreSQL XPath function calls, and 5 seconds using
XMLReader under PL/Python).

The implemented syntax deviates slightly from what the standard
requires.  First, the standard indicates that the PASSING clause is
optional and that multiple XML input documents may be given to it; we
make it mandatory and accept a single document only.  Second, we don't
currently support a default namespace to be specified.
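
A small example of the accepted form, with the mandatory PASSING clause
and a single document (data, paths, and column names are invented):

    SELECT xt.*
    FROM xmldata d,
         XMLTABLE('/rows/row' PASSING d.doc
                  COLUMNS id   int  PATH '@id',
                          name text PATH 'name') AS xt;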

This implementation relies on a new executor node based on a hardcoded
method table.  (Because the grammar is fixed, there is no extensibility
in the current approach; further constructs can be implemented on top of
this such as JSON_TABLE, but they require changes to core code.)

Author: Pavel Stehule, Álvaro Herrera
Extensively reviewed by: Craig Ringer
Discussion: https://postgr.es/m/CAFj8pRAgfzMD-LoSmnMGybD0WsEznLHWap8DO79+-GTRAPR4qA@mail.gmail.com
2017-03-08 12:40:26 -03:00
Heikki Linnakangas 818fd4a67d Support SCRAM-SHA-256 authentication (RFC 5802 and 7677).
This introduces a new generic SASL authentication method, similar to the
GSS and SSPI methods. The server first tells the client which SASL
authentication mechanism to use, and then the mechanism-specific SASL
messages are exchanged in AuthenticationSASLcontinue and PasswordMessage
messages. Only SCRAM-SHA-256 is supported at the moment, but this allows
adding more SASL mechanisms in the future, without changing the overall
protocol.

Support for channel binding, aka SCRAM-SHA-256-PLUS is left for later.

The SASLPrep algorithm, for pre-processing the password, is not yet
implemented. That could cause trouble, if you use a password with
non-ASCII characters, and a client library that does implement SASLprep.
That will hopefully be added later.

Authorization identities, as specified in the SCRAM-SHA-256 specification,
are ignored. SET SESSION AUTHORIZATION provides more or less the same
functionality, anyway.

If a user doesn't exist, perform a "mock" authentication, by constructing
an authentic-looking challenge on the fly. The challenge is derived from
a new system-wide random value, "mock authentication nonce", which is
created at initdb, and stored in the control file. We go through these
motions, in order to not give away the information on whether the user
exists, to unauthenticated users.

Bumps PG_CONTROL_VERSION, because of the new field in control file.

Patch by Michael Paquier and Heikki Linnakangas, reviewed at different
stages by Robert Haas, Stephen Frost, David Steele, Aleksander Alekseev,
and many others.

Discussion: https://www.postgresql.org/message-id/CAB7nPqRbR3GmFYdedCAhzukfKrgBLTLtMvENOmPrVWREsZkF8g%40mail.gmail.com
Discussion: https://www.postgresql.org/message-id/CAB7nPqSMXU35g%3DW9X74HVeQp0uvgJxvYOuA4A-A3M%2B0wfEBv-w%40mail.gmail.com
Discussion: https://www.postgresql.org/message-id/55192AFE.6080106@iki.fi
2017-03-07 14:25:40 +02:00
Heikki Linnakangas 273c458a2b Refactor SHA2 functions and move them to src/common/.
This way both frontend and backends can use them. The functions are taken
from pgcrypto, which now fetches the source files it needs from
src/common/.

A new interface is designed for the SHA2 functions, which allow linking
to either OpenSSL or the in-core stuff taken from KAME as needed.

Michael Paquier, reviewed by Robert Haas.

Discussion: https://www.postgresql.org/message-id/CAB7nPqTGKuTM5jiZriHrNaQeVqp5e_iT3X4BFLWY_HyHxLvySQ%40mail.gmail.com
2017-03-07 14:23:49 +02:00
Peter Eisentraut 788af6f854 Move atooid() definition to a central place 2017-03-01 11:55:28 -05:00
Peter Eisentraut 2ed193c904 chomp PQerrorMessage() in backend uses
PQerrorMessage() returns an error message with a trailing newline, but
in backend use (dblink, postgres_fdw, libpqwalreceiver), we want to have
the error message without that for emitting via ereport().  To simplify
that, add a function pchomp() that returns a pstrdup'ed string with the
trailing newline characters removed.
2017-02-27 08:54:51 -05:00
Tom Lane 9e3755ecb2 Remove useless duplicate inclusions of system header files.
c.h #includes a number of core libc header files, such as <stdio.h>.
There's no point in re-including these after having read postgres.h,
postgres_fe.h, or c.h; so remove code that did so.

While at it, also fix some places that were ignoring our standard pattern
of "include postgres[_fe].h, then system header files, then other Postgres
header files".  While there's not any great magic in doing it that way
rather than system headers last, it's silly to have just a few files
deviating from the general pattern.  (But I didn't attempt to enforce this
globally, only in files I was touching anyway.)

I'd be the first to say that this is mostly compulsive neatnik-ism,
but over time it might save enough compile cycles to be useful.
2017-02-25 16:12:55 -05:00
Tom Lane c29aff959d Consistently declare timestamp variables as TimestampTz.
Twiddle the replication-related code so that its timestamp variables
are declared TimestampTz, rather than the uninformative "int64" that
was previously used for meant-to-be-always-integer timestamps.
This resolves the int64-vs-TimestampTz declaration inconsistencies
introduced by commit 7c030783a, though in the opposite direction to
what was originally suggested.

This required including datatype/timestamp.h in a couple more places
than before.  I decided it would be a good idea to slim down that
header by not having it pull in <float.h> etc, as those headers are
no longer at all relevant to its purpose.  Unsurprisingly, a small number
of .c files turn out to have been depending on those inclusions, so add
them back in the .c files as needed.

Discussion: https://postgr.es/m/26788.1487455319@sss.pgh.pa.us
Discussion: https://postgr.es/m/27694.1487456324@sss.pgh.pa.us
2017-02-23 15:57:08 -05:00
Tom Lane b9d092c962 Remove now-dead code for !HAVE_INT64_TIMESTAMP.
This is a basically mechanical removal of #ifdef HAVE_INT64_TIMESTAMP
tests and the negative-case controlled code.

Discussion: https://postgr.es/m/26788.1487455319@sss.pgh.pa.us
2017-02-23 14:04:43 -05:00
Tom Lane 9e43e8714c Fix contrib/pg_trgm's extraction of trigrams from regular expressions.
The logic for removing excess trigrams from the result was faulty.
It intends to avoid merging the initial and final states of the NFA,
which is necessary, but in testing whether removal of a specific trigram
would cause that, it failed to consider the combined effects of all the
state merges that that trigram's removal would cause.  This could result
in a broken final graph that would never match anything, leading to GIN
or GiST indexscans not finding anything.

To fix, add a "tentParent" field that is used only within this loop,
and set it to show state merges that we are tentatively going to do.
While examining a particular arc, we must chase up through tentParent
links as well as regular parent links (the former can only appear atop
the latter), and we must account for state init/fin flag merges that
haven't actually been done yet.

To simplify the latter, combine the separate init and fin bool fields
into a bitmap flags field.  I also chose to get rid of the "children"
state list, which seems entirely inessential.

Per bug #14563 from Alexey Isayko, which the added test cases are based on.
Back-patch to 9.3 where this code was added.

Report: https://postgr.es/m/20170222111446.1256.67547@wrigleys.postgresql.org
Discussion: https://postgr.es/m/8816.1487787594@sss.pgh.pa.us
2017-02-22 15:04:26 -05:00
Robert Haas b4316928d5 Fix incorrect typecast.
Ashutosh Sharma, per a report from Mithun Cy.

Discussion: http://postgr.es/m/CAD__OujgqNNnCujeFTmKpjNu+W4smS8Hbi=RcWAhf1ZUs3H4WA@mail.gmail.com
2017-02-22 12:05:42 +05:30
Peter Eisentraut 04aad40186 Drop support for Python 2.3
There is no specific reason for this right now, but keeping support for
old Python versions around indefinitely increases the maintenance
burden.  The oldest supported Python version is now Python 2.4, which is
still shipped in RHEL/CentOS 5 by default.

In configure, add a check for the required Python version and give a
friendly error message for an old version, instead of relying on an
obscure build error later on.
2017-02-21 09:49:22 -05:00
Robert Haas 5262f7a4fc Add optimizer and executor support for parallel index scans.
In combination with 569174f1be, which
taught the btree AM how to perform parallel index scans, this allows
parallel index scan plans on btree indexes.  This infrastructure
should be general enough to support parallel index scans for other
index AMs as well, if someone updates them to support parallel
scans.

Amit Kapila, reviewed and tested by Anastasia Lubennikova, Tushar
Ahuja, Haribabu Kommi, and me.
2017-02-15 13:53:24 -05:00
Robert Haas 7ada2d31f4 Remove contrib/tsearch2.
This module was intended to ease migrations of applications that used
the pre-8.3 version of text search to the in-core version introduced
in that release.  However, since all pre-8.3 releases of the database
have been out of support for more than 5 years at this point, we
expect that few people are depending on it at this point.  If some
people still need it, nothing prevents it from being maintained as a
separate extension, outside of core.

Discussion: http://postgr.es/m/CA+Tgmob5R8aDHiFRTQsSJbT1oreKg2FOSBrC=2f4tqEH3dOMAg@mail.gmail.com
2017-02-13 11:06:11 -05:00
Robert Haas 806091c96f Remove all references to "xlog" from SQL-callable functions in pg_proc.
Commit f82ec32ac3 renamed the pg_xlog
directory to pg_wal.  To make things consistent, and because "xlog" is
terrible terminology for either "transaction log" or "write-ahead log"
rename all SQL-callable functions that contain "xlog" in the name to
instead contain "wal".  (Note that this may pose an upgrade hazard for
some users.)
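
For example, a call site using the old spelling needs the obvious
adjustment:

    -- Previously: SELECT pg_switch_xlog();
    SELECT pg_switch_wal();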

Similarly, rename the xlog_position argument of the functions that
create slots to be called wal_position.

Discussion: https://www.postgresql.org/message-id/CA+Tgmob=YmA=H3DbW1YuOXnFVgBheRmyDkWcD9M8f=5bGWYEoQ@mail.gmail.com
2017-02-09 15:10:09 -05:00
Robert Haas fc8219dc54 pageinspect: Fix hash_bitmap_info not to read the underlying page.
It did that to verify that the page was an overflow page rather than
anything else, but that means that checking the status of all the
overflow bits requires reading the entire index.  So don't do that.
The new code validates that the page is not a primary bucket page
or bitmap page by looking at the metapage, so that using this on
large numbers of pages can be reasonably efficient.

Ashutosh Sharma, per a complaint from me, and with further
modifications by me.
2017-02-09 14:34:34 -05:00
Tom Lane 86d911ec0f Allow index AMs to cache data across aminsert calls within a SQL command.
It's always been possible for index AMs to cache data across successive
amgettuple calls within a single SQL command: the IndexScanDesc.opaque
field is meant for precisely that.  However, no comparable facility
exists for amortizing setup work across successive aminsert calls.
This patch adds such a feature and teaches GIN, GIST, and BRIN to use it
to amortize catalog lookups they'd previously been doing on every call.
(The other standard index AMs keep everything they need in the relcache,
so there's little to improve there.)

For GIN, the overall improvement in a statement that inserts many rows
can be as much as 10%, though it seems a bit less for the other two.
In addition, this makes a really significant difference in runtime
for CLOBBER_CACHE_ALWAYS tests, since in those builds the repeated
catalog lookups are vastly more expensive.

The reason this has been hard up to now is that the aminsert function is
not passed any useful place to cache per-statement data.  What I chose to
do is to add suitable fields to struct IndexInfo and pass that to aminsert.
That's not widening the index AM API very much because IndexInfo is already
within the ken of ambuild; in fact, by passing the same info to aminsert
as to ambuild, this is really removing an inconsistency in the AM API.

Discussion: https://postgr.es/m/27568.1486508680@sss.pgh.pa.us
2017-02-09 11:52:12 -05:00
Robert Haas 293e24e507 Cache hash index's metapage in rel->rd_amcache.
This avoids a very significant amount of buffer manager traffic and
contention when scanning hash indexes, because it's no longer
necessary to lock and pin the metapage for every scan.  We do need
some way of figuring out when the cache is too stale to use any more,
so that when we lock the primary bucket page to which the cached
metapage points us, we can tell whether a split has occurred since we
cached the metapage data.  To do that, we use the hash_prevblkno field
in the primary bucket page, which would otherwise always be set to
InvalidBuffer.

This patch contains code so that it will continue working (although
less efficiently) with hash indexes built before this change, but
perhaps we should consider bumping the hash version and ripping out
the compatibility code.  That decision can be made later, though.

Mithun Cy, reviewed by Jesper Pedersen, Amit Kapila, and by me.
Before committing, I made a number of cosmetic changes to the last
posted version of the patch, adjusted _hash_getcachedmetap to be more
careful about order of operation, and made some necessary updates to
the pageinspect documentation and regression tests.
2017-02-07 12:35:45 -05:00
Heikki Linnakangas d93b7535a6 Fix typo also in expected output.
Commit 181bdb90ba fixed the typo in the .sql file, but forgot to update the
expected output.
2017-02-06 12:04:04 +02:00
Heikki Linnakangas 181bdb90ba Fix typos in comments.
Backpatch to all supported versions, where applicable, to make backpatching
of future fixes go more smoothly.

Josh Soref

Discussion: https://www.postgresql.org/message-id/CACZqfqCf+5qRztLPgmmosr-B0Ye4srWzzw_mo4c_8_B_mtjmJQ@mail.gmail.com
2017-02-06 11:33:58 +02:00
Robert Haas 871ec0e336 pageinspect: More type-sanity surgery on the new hash index code.
Uniformly expose unsigned quantities using the next-wider signed
integer type (since we have no unsigned types at the SQL level).
At the SQL level, this results in a change to report itemoffset as
int4 rather than int2.  Also at the SQL level, report one value
that is an OID as type oid.  Under the hood, uniformly use macros
that match the SQL output type as to both width and signedness.
2017-02-03 16:28:13 -05:00
Robert Haas e759854a09 pgstattuple: Add pgstathashindex.
Since pgstattuple v1.5 hasn't been released yet, no need for a new
extension version.  The new function exposes statistics about hash
indexes similar to what other pgstatindex functions return for other
index types.
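
Usage follows the other pgstatindex-style functions (the index name is
a placeholder):

    -- Aggregate statistics for a hash index, analogous to pgstatindex()
    -- for btree indexes.
    SELECT * FROM pgstathashindex('my_hash_index');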

Ashutosh Sharma, reviewed by Kuntal Ghosh.  Substantial further
revisions by me.
2017-02-03 14:37:16 -05:00
Tom Lane 14e9b18fed In pageinspect/hashfuncs.c, avoid crashes on alignment-picky machines.
On machines with MAXALIGN = 8, the payload of a bytea is not maxaligned,
since it will start 4 bytes into a palloc'd value.  On alignment-picky
hardware, this will cause failures in accesses to 8-byte-wide values
within the page.  We already encountered this problem when we introduced
GIN index inspection functions, and fixed it in commit 84ad68d64.  Make
use of the same function for hash indexes.

A small difficulty is that up to now contrib/pageinspect has not shared
any functions at all across files.  To support that, introduce a common
header file "pageinspect.h" for the module.

Also, move get_page_from_raw() out of ginfuncs.c, where it didn't
especially belong, and put it in rawpage.c which seems a more natural home.

Per buildfarm.

Discussion: https://postgr.es/m/17311.1486134714@sss.pgh.pa.us
2017-02-03 11:34:47 -05:00
Robert Haas 29e312bc13 pageinspect: Remove platform-dependent values from hash tests.
Per a report from Tom Lane, the ffactor reported by hash_metapage_info
and the free_size reported by hash_page_stats vary by platform.

Ashutosh Sharma and Robert Haas
2017-02-03 11:06:41 -05:00
Tom Lane c6eeb67dcc Fix a bunch more portability bugs in commit 08bf6e529.
It seems like somebody used a dartboard while choosing integer widths
for the various values taken and returned by these functions ... and
then threw a fresh set of darts while writing the SQL declarations.

This patch brings the C code into line with what the SQL declarations
say, which is enough to make it not dump core on the particular 32-bit
machine I'm testing on.  But I think we could do with another round
of looking at what the datum widths *should* be.  For instance, it's
not all that sensible that hash_bitmap_info decided to use int64 to
represent a BlockNumber input when get_raw_page doesn't do it that way.

There's also a remaining problem that the expected outputs from the
test script are platform-dependent, but I'll leave that issue for
somebody else.

Per buildfarm.
2017-02-02 23:11:08 -05:00
Robert Haas ed807fda6d pageinspect: Try to fix some bugs in previous commit.
Commit 08bf6e5295 seems not to have
used the correct *GetDatum and PG_GETARG_* macros for the SQL types
in some cases, and some of the SQL types seem to have been poorly
chosen, too.  Try to fix it.  I'm not sure if this is the reason
why the buildfarm is currently unhappy with this code, but it
seems like a good place to start.

Buildfarm unhappiness reported by Tom Lane.
2017-02-02 22:32:06 -05:00
Robert Haas 08bf6e5295 pageinspect: Support hash indexes.
Patch by Jesper Pedersen and Ashutosh Sharma, with some error handling
improvements by me.  Tests from Peter Eisentraut.  Reviewed by Álvaro
Herrera, Michael Paquier, Jesper Pedersen, Jeff Janes, Peter
Eisentraut, Amit Kapila, Mithun Cy, and me.

Discussion: http://postgr.es/m/e2ac6c58-b93f-9dd9-f4e6-d6d30add7fdf@redhat.com
2017-02-02 14:19:32 -05:00
Noah Misch acd73ad1a1 Code review for avoidance of direct cross-module links.
Remove $(pkglibdir) from $(rpathdir), since commits
d51924be88 and
eda04886c1 removed direct linkage to
objects stored there.  Users are unlikely to notice the difference.
Accompany every $(python_libspec) with $(python_additional_libs); this
doesn't fix a demonstrated bug, but it might do so on rare Python
configurations.  With these changes, AIX ceases to be a special case.
2017-02-02 11:21:16 -05:00
Heikki Linnakangas dbd69118c0 Replace isMD5() with a more future-proof way to check if pw is encrypted.
The rule is that if pg_authid.rolpassword begins with "md5" and has the
right length, it's an MD5 hash, otherwise it's a plaintext password. The
idiom has been to use isMD5() to check for that, but that gets awkward,
when we add new kinds of verifiers, like the verifiers for SCRAM
authentication in the pending SCRAM patch set. Replace isMD5() with a new
get_password_type() function, so that when new verifier types are added, we
don't need to remember to modify every place that currently calls isMD5(),
to also recognize the new kinds of verifiers.

Also, use the new plain_crypt_verify function in passwordcheck, so that it
doesn't need to know about MD5, or in the future, about other kinds of
hashes or password verifiers.

Reviewed by Michael Paquier and Peter Eisentraut.

Discussion: https://www.postgresql.org/message-id/2d07165c-1793-e243-a2a9-e45b624c7580@iki.fi
2017-02-01 13:11:37 +02:00
Robert Haas 4bf371cf2a Fix typo in comment.
Etsuro Fujita
2017-01-27 17:22:40 -05:00
Andres Freund 9ba8a9ce45 Use the new castNode() macro in a number of places.
This is far from a pervasive conversion, but it's a good starting
point.

Author: Peter Eisentraut, with some minor changes by me
Reviewed-By: Tom Lane
Discussion: https://postgr.es/m/c5d387d9-3440-f5e0-f9d4-71d53b9fbe52@2ndquadrant.com
2017-01-26 16:47:03 -08:00
Tom Lane aa7f593b1f Improve speed of contrib/postgres_fdw regression tests.
Commit 7012b132d added some tests that consumed an excessive amount of
time, more than tripling the time needed for "make installcheck" for this
module.  Add filter conditions to reduce the number of rows scanned,
bringing the runtime down to within hailing distance of what it was before.

Jeevan Chalke and Ashutosh Bapat, per a gripe from me

Discussion: https://postgr.es/m/16565.1478104765@sss.pgh.pa.us
2017-01-25 08:31:31 -05:00
Robert Haas 7b4ac19982 Extend index AM API for parallel index scans.
This patch doesn't actually make any index AM parallel-aware, but it
provides the necessary functions at the AM layer to do so.

Rahila Syed, Amit Kapila, Robert Haas
2017-01-24 16:42:58 -05:00
Peter Eisentraut f21a563d25 Move some things from builtins.h to new header files
This avoids that builtins.h has to include additional header files.
2017-01-20 20:29:53 -05:00
Andres Freund ea15e18677 Remove obsoleted code relating to targetlist SRF evaluation.
Since 69f4b9c plain expression evaluation (and thus normal projection)
can't return sets of tuples anymore. Thus remove code dealing with
that possibility.

This will require adjustments in external code using
ExecEvalExpr()/ExecProject() - that should be neither hard nor very
common.

Author: Andres Freund and Tom Lane
Discussion: https://postgr.es/m/20160822214023.aaxz5l4igypowyri@alap3.anarazel.de
2017-01-19 14:40:41 -08:00
Alvaro Herrera 9a34123bc3 Make messages mentioning type names more uniform
This avoids additional translatable strings for each distinct type, as
well as making our quoting style around type names more consistent
(namely, that we don't quote type names).  This continues what started
as f402b99501.

Discussion: https://postgr.es/m/20160401170642.GA57509@alvherre.pgsql
2017-01-18 16:08:20 -03:00
Peter Eisentraut 352a24a1f9 Generate fmgr prototypes automatically
Gen_fmgrtab.pl creates a new file fmgrprotos.h, which contains
prototypes for all functions registered in pg_proc.h.  This avoids
having to manually maintain these prototypes across a random variety of
header files.  It also automatically enforces a correct function
signature, and since there are warnings about missing prototypes, it
will detect functions that are defined but not registered in
pg_proc.h (or otherwise used).

Reviewed-by: Pavel Stehule <pavel.stehule@gmail.com>
2017-01-17 14:06:07 -05:00
Peter Eisentraut 6fc547960d Rename C symbols for backend lo_ functions
Rename the C symbols for lo_* to be_lo_*, so they don't conflict with
libpq prototypes.

Reviewed-by: Pavel Stehule <pavel.stehule@gmail.com>
2017-01-17 12:35:30 -05:00
Peter Eisentraut 5d7c9c906a Remove unnecessary prototypes in loadable modules
Reviewed-by: Pavel Stehule <pavel.stehule@gmail.com>
2017-01-17 12:35:11 -05:00
Tom Lane 83f2061dd0 Teach contrib/pg_stat_statements to handle multi-statement commands better.
Make use of the statement boundary info added by commit ab1f0c822
to let pg_stat_statements behave more sanely when multiple SQL queries
are jammed into one query string.  It now records just the relevant
part of the source string, not the whole thing, for each individual
query.

Even when no multi-statement strings are involved, users may notice small
changes in the output: leading and trailing whitespace and semicolons will
be stripped from statements, which did not happen before.
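
For example, if a client sends "SELECT 1; SELECT 2;" as a single query
string, pg_stat_statements now records two entries, each containing only
its own statement text (a sketch; constants are normalized as usual):

    SELECT query FROM pg_stat_statements ORDER BY query;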

Also, significantly expand pg_stat_statements' regression test script.

Fabien Coelho, reviewed by Craig Ringer and Kyotaro Horiguchi,
some mods by me

Discussion: https://postgr.es/m/alpine.DEB.2.20.1612200926310.29821@lancre
2017-01-14 16:17:30 -05:00
Tom Lane ab1f0c8225 Change representation of statement lists, and add statement location info.
This patch makes several changes that improve the consistency of
representation of lists of statements.  It's always been the case
that the output of parse analysis is a list of Query nodes, whatever
the types of the individual statements in the list.  This patch brings
similar consistency to the outputs of raw parsing and planning steps:

* The output of raw parsing is now always a list of RawStmt nodes;
the statement-type-dependent nodes are one level down from that.

* The output of pg_plan_queries() is now always a list of PlannedStmt
nodes, even for utility statements.  In the case of a utility statement,
"planning" just consists of wrapping a CMD_UTILITY PlannedStmt around
the utility node.  This list representation is now used in Portal and
CachedPlan plan lists, replacing the former convention of intermixing
PlannedStmts with bare utility-statement nodes.

Now, every list of statements has a consistent head-node type depending
on how far along it is in processing.  This allows changing many places
that formerly used generic "Node *" pointers to use a more specific
pointer type, thus reducing the number of IsA() tests and casts needed,
as well as improving code clarity.

Also, the post-parse-analysis representation of DECLARE CURSOR is changed
so that it looks more like EXPLAIN, PREPARE, etc.  That is, the contained
SELECT remains a child of the DeclareCursorStmt rather than getting flipped
around to be the other way.  It's now true for both Query and PlannedStmt
that utilityStmt is non-null if and only if commandType is CMD_UTILITY.
That allows simplifying a lot of places that were testing both fields.
(I think some of those were just defensive programming, but in many places,
it was actually necessary to avoid confusing DECLARE CURSOR with SELECT.)

Because PlannedStmt carries a canSetTag field, we're also able to get rid
of some ad-hoc rules about how to reconstruct canSetTag for a bare utility
statement; specifically, the assumption that a utility is canSetTag if and
only if it's the only one in its list.  While I see no near-term need for
relaxing that restriction, it's nice to get rid of the ad-hocery.

The API of ProcessUtility() is changed so that what it's passed is the
wrapper PlannedStmt not just the bare utility statement.  This will affect
all users of ProcessUtility_hook, but the changes are pretty trivial; see
the affected contrib modules for examples of the minimum change needed.
(Most compilers should give pointer-type-mismatch warnings for uncorrected
code.)

There's also a change in the API of ExplainOneQuery_hook, to pass through
cursorOptions instead of expecting hook functions to know what to pick.
This is needed because of the DECLARE CURSOR changes, but really should
have been done in 9.6; it's unlikely that any extant hook functions
know about using CURSOR_OPT_PARALLEL_OK.

Finally, teach gram.y to save statement boundary locations in RawStmt
nodes, and pass those through to Query and PlannedStmt nodes.  This allows
more intelligent handling of cases where a source query string contains
multiple statements.  This patch doesn't actually do anything with the
information, but a follow-on patch will.  (Passing this information through
cleanly is the true motivation for these changes; while I think this is all
good cleanup, it's unlikely we'd have bothered without this end goal.)

catversion bump because addition of location fields to struct Query
affects stored rules.

This patch is by me, but it owes a good deal to Fabien Coelho who did
a lot of preliminary work on the problem, and also reviewed the patch.

Discussion: https://postgr.es/m/alpine.DEB.2.20.1612200926310.29821@lancre
2017-01-14 16:02:35 -05:00
Peter Eisentraut 05cd12ed5b pg_ctl: Change default to wait for all actions
The different actions in pg_ctl had different defaults for -w and -W,
mostly for historical reasons.  Most users will want the -w behavior, so
make that the default.

Remove the -w option in most example and test code, to avoid confusion
and reduce verbosity.  pg_upgrade is not touched, so it can continue to
work with older installations.

Reviewed-by: Beena Emerson <memissemerson@gmail.com>
Reviewed-by: Ryan Murphy <ryanfmurphy@gmail.com>
2017-01-14 09:15:08 -05:00
Peter Eisentraut e574f15d62 Updates to reflect that pg_ctl stop -m fast is the default
Various example and test code used -m fast explicitly, but since it's
the default, this can be omitted now or should be replaced by a better
example.

pg_upgrade is not touched, so it can continue to operate with older
installations.
2017-01-13 21:25:36 -05:00
Peter Eisentraut da0dbea9c3 Make whitespace consistent inside some script files
I don't know what the global standard might be, but at least adjacent
code should use the same whitespace.
2017-01-12 10:17:37 -05:00
Tom Lane c52d37c8b3 Invalidate cached plans on FDW option changes.
This fixes problems where a plan must change but fails to do so,
as seen in a bug report from Rajkumar Raghuwanshi.

For ALTER FOREIGN TABLE OPTIONS, do this through the standard method of
forcing a relcache flush on the table.  For ALTER FOREIGN DATA WRAPPER
and ALTER SERVER, just flush the whole plan cache on any change in
pg_foreign_data_wrapper or pg_foreign_server.  That matches the way
we handle some other low-probability cases such as opclass changes, and
it's unclear that the case arises often enough to be worth working harder.
Besides, that gives a patch that is simple enough to back-patch with
confidence.
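
So, roughly speaking, after either of the following (names are
placeholders, and the options are assumed to have been set at creation
time), cached plans touching the affected foreign tables get replanned:

    -- Flushes the whole plan cache.
    ALTER SERVER remote_srv OPTIONS (SET host 'new-host.example.com');

    -- Forces a relcache flush on just this table.
    ALTER FOREIGN TABLE remote_tab OPTIONS (SET fetch_size '500');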

Back-patch to 9.3.  In principle we could apply the code change to 9.2 as
well, but (a) we lack postgres_fdw to test it with, (b) it's doubtful that
anyone is doing anything exciting enough with FDWs that far back to need
this desperately, and (c) the patch doesn't apply cleanly.

Patch originally by Amit Langote, reviewed by Etsuro Fujita and Ashutosh
Bapat, who each contributed substantial changes as well.

Discussion: https://postgr.es/m/CAKcux6m5cA6rRPTKkqVdJ-R=KKDfe35Q_ZuUqxDSV_4hwga=og@mail.gmail.com
2017-01-06 14:12:52 -05:00
Peter Eisentraut 933b46644c Use 'use strict' in all Perl programs 2017-01-05 12:34:48 -05:00
Bruce Momjian 1d25779284 Update copyright via script for 2017 2017-01-03 13:48:53 -05:00
Joe Conway 2f802d95b4 Make dblink try harder to form useful error messages
When libpq encounters a connection-level error, e.g. runs out of memory
while forming a result, there will be no error associated with PGresult,
but a message will be placed into PGconn's error buffer. postgres_fdw
takes care to use the PGconn error message when PGresult does not have
one, but dblink has been negligent in that regard. Modify dblink to mirror
what postgres_fdw has been doing.

Back-patch to all supported branches.

Author: Joe Conway
Reviewed-By: Tom Lane
Discussion: https://postgr.es/m/02fa2d90-2efd-00bc-fefc-c23c00eb671e%40joeconway.com
2016-12-22 09:48:55 -08:00
Joe Conway c444868389 Protect dblink from invalid options when using postgres_fdw server
When dblink uses a postgres_fdw server name for its connection, it
is possible for the connection to have options that are invalid
with dblink (e.g. "updatable"). The recommended way to avoid this
problem is to use dblink_fdw servers instead. However there are use
cases for using postgres_fdw, and possibly other FDWs, for dblink
connection options, therefore protect against trying to use any
options that do not apply by using is_valid_dblink_option() when
building the connection string from the options.
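
A sketch of the guarded-against scenario (names are invented): the
server carries a postgres_fdw-only option such as updatable, which
dblink now filters out when assembling its libpq connection string.

    CREATE SERVER pgfdw_srv FOREIGN DATA WRAPPER postgres_fdw
        OPTIONS (host 'remote', dbname 'app', updatable 'false');

    CREATE USER MAPPING FOR CURRENT_USER SERVER pgfdw_srv
        OPTIONS (user 'app', password 'secret');

    -- Works even though "updatable" is meaningless to libpq.
    SELECT dblink_connect('conn1', 'pgfdw_srv');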

Back-patch to 9.3. Although 9.2 supports FDWs for connection info,
is_valid_dblink_option() did not yet exist, and neither did
postgres_fdw, at least in the postgres source tree. Given the lack
of previous complaints, fixing that seems too invasive/not worth it.

Author: Corey Huinker
Reviewed-By: Joe Conway
Discussion: https://postgr.es/m/CADkLM%3DfWyXVEyYcqbcRnxcHutkP45UHU9WD7XpdZaMfe7S%3DRwA%40mail.gmail.com
2016-12-22 09:20:35 -08:00
Tom Lane b86515da1a Give a useful error message if uuid-ossp is built without preconfiguration.
Before commit b8cc8f947, it was possible to build contrib/uuid-ossp without
having told configure you meant to; you could just cd into that directory
and "make".  That no longer works because the code depends on configure to
have done header and library probes, but the ensuing error messages are
not so easy to interpret if you're not an old C hand.  We've gotten a
couple of complaints recently from people trying to do this the low-tech
way, so add an explicit #error directing the user to use --with-uuid.

(In principle we might want to do something similar in the other
optionally-built contrib modules; but I don't think any of the others have
ever worked without preconfiguration, so there are no bad habits to break
people of.)

Back-patch to 9.4 where the previous commit came in.

Report: https://postgr.es/m/CAHeEsBf42AWTnk=1qJvFv+mYgRFm07Knsfuc86Ono8nRjf3tvQ@mail.gmail.com
Report: https://postgr.es/m/CAKYdkBrUaZX+F6KpmzoHqMtiUqCtAW_w6Dgvr6F0WTiopuGxow@mail.gmail.com
2016-12-22 11:19:04 -05:00
Joe Conway ea0aa9698c Improve dblink error message when remote does not provide it
When dblink or postgres_fdw detects an error on the remote side of the
connection, it will try to construct a local error message as best it
can using libpq's PQresultErrorField(). When no primary message is
available, it was bailing out with an unhelpful "unknown error". Make
that message better and more style guide compliant. Per discussion
on hackers.

Backpatch to 9.2 except postgres_fdw which didn't exist before 9.3.

Discussion: https://postgr.es/m/19872.1482338965%40sss.pgh.pa.us
2016-12-21 15:51:31 -08:00
Tom Lane 4a0a34b5b6 In contrib/uuid-ossp, #include headers needed for ntohl() and ntohs().
Oversight in commit b8cc8f947.  I just noticed this causes compiler
warnings on FreeBSD, and it really ought to cause warnings elsewhere too:
all references I can find say that <arpa/inet.h> is required for these.
We have a lot of code elsewhere that thinks that both <netinet/in.h>
and <arpa/inet.h> should be included for these functions, so do it that
way here too, even though <arpa/inet.h> ought to be sufficient according
to the references I consulted.

Back-patch to 9.4 where the previous commit landed.
2016-12-17 22:24:13 -05:00
Peter Eisentraut fdf71389dd Various temporary slots test improvements
Fix the tests on slow machines (per buildfarm).

Add test for dropping on error.  And also try to consume real changes
from temporary slots.

From: Petr Jelinek <petr.jelinek@2ndquadrant.com>
2016-12-15 08:50:09 -05:00
Tom Lane 23f722ba8e Fix race condition in test_decoding "slot" test.
This test, just added in commit a924c327e, sometimes fails because
the old backend hasn't finished dropping the temporary replication slot
when the new backend looks.  Borrow the previously-invented methodology
for waiting for the old process to disappear from pg_stat_activity.

Petr Jelinek

Discussion: https://postgr.es/m/62935e6f-4f1b-c433-e0fa-7f936a38b3e5@2ndquadrant.com
2016-12-12 14:32:13 -05:00
Peter Eisentraut a924c327e2 Add support for temporary replication slots
This allows creating temporary replication slots that are removed
automatically at the end of the session or on error.
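
A minimal sketch, assuming the temporary flag is exposed as an extra
boolean argument to the slot-creation functions:

    -- The third argument marks the slot as temporary; it is dropped
    -- automatically at session end or on error.
    SELECT pg_create_logical_replication_slot('tmp_slot',
                                               'test_decoding',
                                               true);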

From: Petr Jelinek <petr.jelinek@2ndquadrant.com>
2016-12-12 08:38:17 -05:00
Heikki Linnakangas 58445c5c8d Further cleanup from the strong-random patch.
Also use the new facility for generating RADIUS authenticator requests,
and salt in chkpass extension.

Reword the error messages to be nicer. Fix bogus error code used in the
message in BackendStartup.
2016-12-12 11:55:32 +02:00
Heikki Linnakangas 9bbbf029dd Fix pgcrypto compilation with OpenSSL 1.1.0.
Was broken by the switch to using OpenSSL's EVP interface for ciphers, in
commit 5ff4a67f.

Reported by Andres Freund. Fix by Michael Paquier with some kibitzing by me.

Discussion: https://www.postgresql.org/message-id/20161201014826.ic72tfkahmevpwz7@alap3.anarazel.de
2016-12-12 11:14:44 +02:00
Robert Haas fa0f466d53 Log the creation of an init fork unconditionally.
Previously, it was thought that this only needed to be done for the
benefit of possible standbys, so wal_level = minimal skipped it.
But that's not safe, because during crash recovery we might replay
XLOG_DBASE_CREATE or XLOG_TBLSPC_CREATE record which recursively
removes the directory that contains the new init fork.  So log it
always.

The user-visible effect of this bug is that if you create a database
or tablespace, then create an unlogged table, then crash without
checkpointing, then restart, accessing the table will fail, because
it won't have been properly reset.  This commit fixes that.

Michael Paquier, per a report from Konstantin Knizhnik.  Wording of
the comments per a suggestion from me.
2016-12-08 14:12:08 -05:00
Heikki Linnakangas fe0a0b5993 Replace PostmasterRandom() with a stronger source, second attempt.
This adds a new routine, pg_strong_random() for generating random bytes,
for use in both frontend and backend. At the moment, it's only used in
the backend, but the upcoming SCRAM authentication patches need strong
random numbers in libpq as well.

pg_strong_random() is based on, and replaces, the existing implementation
in pgcrypto. It can acquire strong random numbers from a number of sources,
depending on what's available:

- OpenSSL RAND_bytes(), if built with OpenSSL
- On Windows, the native cryptographic functions are used
- /dev/urandom

Unlike the current pgcrypto function, the source is chosen by configure.
That makes it easier to test different implementations, and ensures that
we don't accidentally fall back to a less secure implementation, if the
primary source fails. All of those methods are quite reliable, it would be
pretty surprising for them to fail, so we'd rather find out by failing
hard.

If no strong random source is available, we fall back to using erand48(),
seeded from current timestamp, like PostmasterRandom() was. That isn't
cryptographically secure, but allows us to still work on platforms that
don't have any of the above stronger sources. Because it's not very secure,
the built-in implementation is only used if explicitly requested with
--disable-strong-random.

This replaces the more complicated Fortuna algorithm we used to have in
pgcrypto, which is unfortunate, but all modern platforms have /dev/urandom,
so it doesn't seem worth the maintenance effort to keep that. pgcrypto
functions that require strong random numbers will be disabled with
--disable-strong-random.

Original patch by Magnus Hagander, tons of further work by Michael Paquier
and me.

Discussion: https://www.postgresql.org/message-id/CAB7nPqRy3krN8quR9XujMVVHYtXJ0_60nqgVc6oUk8ygyVkZsA@mail.gmail.com
Discussion: https://www.postgresql.org/message-id/CAB7nPqRWkNYRRPJA7-cF+LfroYV10pvjdz6GNvxk-Eee9FypKA@mail.gmail.com
2016-12-05 13:42:59 +02:00
Heikki Linnakangas b2cc748b09 Remove dead stuff from pgcrypto.
pgp-pubkey-DISABLED test has been unused since 2006, when support for
built-in bignum math was added (commit 1abf76e8). pgp-encrypt-DISABLED has
been unused forever, AFAICS.

Also remove a couple of unused error codes.
2016-11-30 13:04:16 +02:00
Tom Lane ade49c605f Test all contrib-created operator classes with amvalidate.
I'd supposed that people would do this manually when creating new operator
classes, but the folly of that was exposed today.  The tests seem fast
enough that we can just apply them during the normal regression tests.
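
The check itself is essentially a query over pg_opclass along these
lines (a sketch):

    -- amvalidate() returns false for opclasses with incomplete or
    -- inconsistent definitions.
    SELECT oc.opcname, amvalidate(oc.oid)
    FROM pg_opclass oc
    ORDER BY oc.opcname;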

contrib/isn fails the checks for lack of complete sets of cross-type
operators.  That's a nice-to-have policy rather than a functional
requirement, so leave it as-is, but insert ORDER BY in the query to
ensure consistent cross-platform output.

Discussion: https://postgr.es/m/7076.1480446837@sss.pgh.pa.us
2016-11-29 15:05:22 -05:00
Tom Lane 11da83a0e7 Add uuid to the set of types supported by contrib/btree_gist.
Paul Jungwirth, reviewed and hacked on by Teodor Sigaev, Ildus
Kurbangaliev, Adam Brusselback, Chris Bandy, and myself.

Discussion: https://postgr.es/m/CA+renyUEE29=X01JXdz8_TQvo6n9=2XoEBBRnQ8rkLyr+kjPxQ@mail.gmail.com
Discussion: https://postgr.es/m/55F6EE82.8080209@sigaev.ru
2016-11-29 14:08:34 -05:00
Tom Lane e2a0ee6900 Make contrib/test_decoding regression tests safe for CZ locale.
A little COLLATE "C" goes a long way.

Pavel Stehule, per suggestion from Craig Ringer

Discussion: <CAFj8pRA8nJZcozgxN=RMSqMmKuHVOkcGAAKPKdFeiMWGDSUDLA@mail.gmail.com>
2016-11-21 20:39:28 -05:00
Andres Freund ffa8c3d852 Provide NO_INSTALLCHECK option for pgxs.
This allows us to avoid running the regression tests in contrib modules
like pg_stat_statement in a less ugly manner.

Discussion: <22432.1478968242@sss.pgh.pa.us>
2016-11-14 14:53:07 -08:00
Peter Eisentraut b7292a8670 pg_stat_statements: Add .gitignore file for tests 2016-11-13 08:24:43 -05:00
Andres Freund 9be244dbe8 Add minimal set of regression tests for pg_stat_statements.
While the set of covered functionality is fairly small, the added tests
still are useful to get some basic buildfarm testing of
pg_stat_statements itself, but also to exercise the lwlock tranch code
on the buildfarm.

Author: Amit Kapila, slightly editorialized by me
Reviewed-By: Ashutosh Sharma, Andres Freund
Discussion: <CAA4eK1JOjkdXYtHxh=2aDK4VgDtN-LNGKY_YqX0N=YEvuzQVWg@mail.gmail.com>
2016-11-12 05:01:48 -08:00
Tom Lane 9257f07872 Replace uses of SPI_modifytuple that intend to allocate in current context.
Invent a new function heap_modify_tuple_by_cols() that is functionally
equivalent to SPI_modifytuple except that it always allocates its result
by simple palloc.  I chose however to make the API details a bit more
like heap_modify_tuple: pass a tupdesc rather than a Relation, and use
bool convention for the isnull array.

Use this function in place of SPI_modifytuple at all call sites where the
intended behavior is to allocate in current context.  (There actually are
only two call sites left that depend on the old behavior, which makes me
wonder if we should just drop this function rather than keep it.)

This new function is easier to use than heap_modify_tuple() for purposes
of replacing a single column (or, really, any fixed number of columns).
There are a number of places where it would simplify the code to change
over, but I resisted that temptation for the moment ... everywhere except
in plpgsql's exec_assign_value(); changing that might offer some small
performance benefit, so I did it.

This is on the way to removing SPI_push/SPI_pop, but it seems like
good code cleanup in its own right.

Discussion: <9633.1478552022@sss.pgh.pa.us>
2016-11-08 15:36:44 -05:00
Tom Lane 6d30fb1f75 Make SPI_fnumber() reject dropped columns.
There's basically no scenario where it's sensible for this to match
dropped columns, so put a test for dropped-ness into SPI_fnumber()
itself, and excise the test from the small number of callers that
were paying attention to the case.  (Most weren't :-(.)

In passing, normalize tests at call sites: always reject attnum <= 0
if we're disallowing system columns.  Previously there was a mixture
of "< 0" and "<= 0" tests.  This makes no practical difference since
SPI_fnumber() never returns 0, but I'm feeling pedantic today.

Also, in the places that are actually live user-facing code and not
legacy cruft, distinguish "column not found" from "can't handle
system column".

Per discussion with Jim Nasby; this supersedes his original patch
that just changed the behavior at one call site.

Discussion: <b2de8258-c4c0-1cb8-7b97-e8538e5c975c@BlueTreble.com>
2016-11-08 13:11:26 -05:00
Tom Lane c59f94e81e Revert "Delete contrib/xml2's legacy implementation of xml_is_well_formed()."
This partly reverts commit 20540710e8.
Since we've given up on adding PGDLLEXPORT markers to PG_FUNCTION_INFO_V1,
there's no need to remove the legacy compatibility function.  I kept the
documentation changes, though, as they seem appropriate anyway.
2016-11-07 10:27:52 -05:00
Tom Lane 33cb96ba1a Revert "Provide DLLEXPORT markers for C functions via PG_FUNCTION_INFO_V1 macro."
This reverts commit c8ead2a397.
Seems there is no way to do this that doesn't cause MSVC to give
warnings, so let's just go back to the way we've been doing it.

Discussion: <11843.1478358206@sss.pgh.pa.us>
2016-11-07 10:19:22 -05:00
Tom Lane c8ead2a397 Provide DLLEXPORT markers for C functions via PG_FUNCTION_INFO_V1 macro.
Second try at the change originally made in commit 8518583cd;
this time with contrib updates so that manual extern declarations
are also marked with PGDLLEXPORT.  The release notes should point
this out as a significant source-code change for extension authors,
since they'll have to make similar additions to avoid trouble on Windows.

Laurenz Albe, doc change by me

Patch: <A737B7A37273E048B164557ADEF4A58B53962ED8@ntex2010a.host.magwien.gv.at>
2016-11-04 19:04:56 -04:00
Tom Lane 20540710e8 Delete contrib/xml2's legacy implementation of xml_is_well_formed().
This function is unreferenced in modern usage; it was superseded in 9.1
by a core function of the same name.  It has been left in place in the C
code only so that pre-9.1 SQL definitions of the contrib/xml2 functions
would continue to work.  Six years seems like enough time for people to
have updated to the extension-style version of the xml2 module, so let's
drop this.

The key reason for not keeping it any longer is that we want to stick
an explicit PGDLLEXPORT into PG_FUNCTION_INFO_V1(), and the similarity
of name to the core function creates a conflict that compilers will
complain about.

Extracted from a larger patch for that purpose.  I'm committing this
change separately to give it more visibility in the commit logs.

While at it, remove the documentation entry that claimed that
xml_is_well_formed() is a function provided by contrib/xml2, and
instead mention the even more ancient alias xml_valid().

Laurenz Albe, doc change by me

Patch: <A737B7A37273E048B164557ADEF4A58B53962ED8@ntex2010a.host.magwien.gv.at>
2016-11-04 18:29:53 -04:00
Tom Lane 367b99bbb1 Fix gin_leafpage_items().
On closer inspection, commit 84ad68d64 broke gin_leafpage_items(),
because the aligned copy of the page got palloc'd in a short-lived
context whereas it needs to be in the SRF's multi_call_memory_ctx.
This was not exposed by the regression test, because the regression
test doesn't actually exercise the function in a meaningful way.
Fix the code bug, and extend the test in what I hope is a portable
fashion.
2016-11-04 12:11:54 -04:00
Peter Eisentraut 84ad68d645 pageinspect: Fix unaligned struct access in GIN functions
The raw page data that is passed into the functions will not be aligned
at 8-byte boundaries.  Casting that to a struct and accessing int64
fields will result in unaligned access.  On most platforms, you get away
with it, but it will result in a crash on pickier platforms such as ia64
and sparc64.
2016-11-04 10:05:37 -04:00
Robert Haas 7016e4c4f5 postgres_fdw: Fix typo in comment.
Etsuro Fujita
2016-11-04 09:03:42 -04:00
Peter Eisentraut a0f357e570 psql: Split up "Modifiers" column in \d and \dD
Make separate columns "Collation", "Nullable", "Default".

Reviewed-by: Kuntal Ghosh <kuntalghosh.2007@gmail.com>
2016-11-03 14:02:46 -04:00
Robert Haas c6ff380e38 Use NIL instead of NULL for an empty List.
Tatsuro Yamada, reviewed by Ashutosh Bapat
2016-11-03 11:24:01 -04:00
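
For illustration only, a minimal sketch of the convention this cleanup
enforces, using the standard pg_list.h API (the helper name collect_two
is invented):

    #include "postgres.h"
    #include "nodes/pg_list.h"

    static List *
    collect_two(void *a, void *b)
    {
        List   *items = NIL;        /* an empty List is NIL, never NULL */

        items = lappend(items, a);  /* lappend accepts NIL and allocates */
        items = lappend(items, b);
        return items;
    }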
Tom Lane da8f3ebf30 Don't convert Consts into Vars during setrefs.c processing.
While converting expressions in an upper-level plan node so that they
reference Vars and expressions provided by the input plan node(s),
don't convert plain Const items, even if there happens to be a matching
Const in the input.  It's silly to do so because a Var is more expensive to
execute than a Const.  Moreover, converting can fool ExecCheckPlanOutput's
check that an insert or update query inserts nulls into dropped columns,
leading to "query provides a value for a dropped column" errors during
INSERT or UPDATE on a table with a dropped column.  We could solve this
by making that check more complicated, but I don't see the point; this fix
should save a marginal number of cycles, and it also makes for less messy
EXPLAIN output, as shown by the ensuing regression test result changes.

Per report from Pavel Hanák.  I have not incorporated a test case based
on that example, as there doesn't seem to be a simple way of checking
this in isolation without making a bunch of assumptions about other
planner and SQL-function behavior.

Back-patch to 9.6.  This setrefs.c behavior exists much further back,
but there is currently no reason to think that it causes problems
before 9.6.

Discussion: <83shraampf.fsf@is-it.eu>
2016-11-02 14:32:13 -04:00
Peter Eisentraut 00a86856c1 pageinspect: Make page test more portable
Choose test data that makes the output independent of endianness.
2016-11-02 08:45:17 -04:00
Tom Lane 14ee35799f Fix portability bug in gin_page_opaque_info().
Somebody apparently thought that "if Int32GetDatum is good,
Int64GetDatum must be better".  Per buildfarm failures now
that Peter has added some regression tests here.
2016-11-02 00:09:27 -04:00
Peter Eisentraut f7c9a6e083 pageinspect: Make btree test more portable
Choose test data that makes the output independent of endianness and
alignment.
2016-11-01 22:02:39 -04:00
Robert Haas 9779f71c40 postgres_fdw: Fix typo in comment.
Etsuro Fujita
2016-11-01 16:21:29 -04:00
Peter Eisentraut adfb81d9e1 pageinspect: Add tests 2016-11-01 14:02:16 -04:00
Robert Haas d4b5d4cadd pgstattuple: Don't take heavyweight locks when examining a hash index.
It's currently necessary to take a heavyweight lock when scanning a
hash bucket, but pgstattuple only examines individual pages, so it
doesn't need to do this.  If, for some hypothetical reason, it did
need to do any heavyweight locking here, this logic would probably
still be incorrect, because most of the locks that it is taking are
meaningless.  Only a heavyweight lock on a primary bucket page has any
meaning, but this takes heavyweight locks on all pages regardless of
function - and in particular overflow pages, where you might imagine
that we'd want to lock the primary bucket page if we needed to lock
anything at all.

This is arguably a bug that has existed since this code was added in
commit dab42382f4, but I'm not going to
bother back-patching it because in most cases the only consequence is
that running pgstattuple() on a hash index is a little slower than it
otherwise might be, which is no big deal.

Extracted from a vastly larger patch by Amit Kapila which removes heavyweight
locking for hash indexes entirely; analysis of why this can be done
independently of the rest by me.
2016-10-28 12:21:15 -04:00
Tom Lane 8529686ccb Suppress unused-variable warning in non-assert builds.
Introduced in commit 7012b132d.

Kyotaro Horiguchi
2016-10-26 10:19:41 -04:00
Heikki Linnakangas 8a2f08fbea Fix typo in comment.
Daniel Gustafsson
2016-10-26 11:10:13 +03:00
Robert Haas f5d6bce63c postgres_fdw: Try again to stabilize aggregate pushdown regression tests.
A query that only aggregates one row isn't a great argument for pushdown,
and buildfarm member brolga decides against it.  Adjust the query a bit
in the hopes of getting remote aggregation to win consistently.

Jeevan Chalke, per suggestion from Tom Lane
2016-10-24 22:36:24 -04:00
Robert Haas ad13a09d76 postgres_fdw: Attempt to stabilize regression results.
Set enable_hashagg to false for tests involving least_agg(), so that
we get the same plan regardless of local costing variances.  Also,
remove a test involving sqrt(); it's there to test deparsing of
HAVING clauses containing expressions, but that's tested elsewhere
anyway, and sqrt(2) deparses with different amounts of precision on
different machines.

Per buildfarm.
2016-10-21 11:29:33 -04:00
Robert Haas 7012b132d0 postgres_fdw: Push down aggregates to remote servers.
Now that the upper planner uses paths, and now that we have proper hooks
to inject paths into the upper planning process, it's possible for
foreign data wrappers to arrange to push aggregates to the remote side
instead of fetching all of the rows and aggregating them locally.  This
figures to be a massive win for performance, so teach postgres_fdw to
do it.

Jeevan Chalke and Ashutosh Bapat.  Reviewed by Ashutosh Bapat with
additional testing by Prabhat Sahu.  Various mostly cosmetic changes
by me.
2016-10-21 09:54:29 -04:00
Heikki Linnakangas faae1c918e Revert "Replace PostmasterRandom() with a stronger way of generating randomness."
This reverts commit 9e083fd468. That was a
few bricks shy of a load:

* Query cancel stopped working
* Buildfarm member pademelon stopped working, because the box has neither
  /dev/urandom nor /dev/random.

This clearly needs some more discussion, and a quite different patch, so
revert for now.
2016-10-18 16:28:23 +03:00
Heikki Linnakangas 5ff4a67f63 Use OpenSSL EVP API for symmetric encryption in pgcrypto.
The old "low-level" API is deprecated, and doesn't support hardware
acceleration. And this makes the code simpler, too.

Discussion: <561274F1.1030000@iki.fi>
2016-10-17 17:29:33 +03:00
Heikki Linnakangas 9e083fd468 Replace PostmasterRandom() with a stronger way of generating randomness.
This adds a new routine, pg_strong_random() for generating random bytes,
for use in both frontend and backend. At the moment, it's only used in
the backend, but the upcoming SCRAM authentication patches need strong
random numbers in libpq as well.

pg_strong_random() is based on, and replaces, the existing implementation
in pgcrypto. It can acquire strong random numbers from a number of sources,
depending on what's available:
- OpenSSL RAND_bytes(), if built with OpenSSL
- On Windows, the native cryptographic functions are used
- /dev/urandom
- /dev/random

Original patch by Magnus Hagander, with further work by Michael Paquier
and me.

Discussion: <CAB7nPqRy3krN8quR9XujMVVHYtXJ0_60nqgVc6oUk8ygyVkZsA@mail.gmail.com>
2016-10-17 11:52:50 +03:00
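
As a hedged usage sketch only (the exact prototype is an assumption based on
the description above, roughly bool pg_strong_random(void *buf, size_t len)):

    char    nonce[16];

    /* fail hard if no strong randomness source is available */
    if (!pg_strong_random(nonce, sizeof(nonce)))
        elog(ERROR, "could not generate random bytes");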
Andres Freund ccbb852cd6 Fix further hash table order dependent tests.
Similar to 0137caf273, this makes contrib and pl tests less dependent on
hash-table order.  After this commit, at least some order-affecting
changes to execGrouping.c don't result in regression test changes
anymore.
2016-10-12 18:31:45 -07:00
Robert Haas d2ce38e204 Rename WAIT_* constants to PG_WAIT_*.
Windows apparently has a constant named WAIT_TIMEOUT, and some of these
other names are pretty generic, too.  Insert "PG_" at the front of each
name in order to disambiguate.

Michael Paquier
2016-10-05 08:04:52 -04:00
Tom Lane eda04886c1 Avoid direct cross-module links in hstore_plperl and ltree_plpython, too.
Just turning the crank on the project started in commit d51924be8.
These cases turn out to be exact subsets of the boilerplate needed
for hstore_plpython.

Discussion: <2652.1475512158@sss.pgh.pa.us>
2016-10-04 17:49:07 -04:00
Robert Haas 6f3bd98ebf Extend framework from commit 53be0b1ad to report latch waits.
WaitLatch, WaitLatchOrSocket, and WaitEventSetWait now take an
additional wait_event_info parameter; legal values are defined in
pgstat.h.  This makes it possible to uniquely identify every point in
the core code where we are waiting for a latch; extensions can pass
WAIT_EXTENSION.

Because latches were the major wait primitive not previously covered
by this patch, it is now possible to see information in
pg_stat_activity on a large number of important wait events not
previously addressed, such as ClientRead, ClientWrite, and SyncRep.

Unfortunately, many of the wait events added by this patch will fail
to appear in pg_stat_activity because they're only used in background
processes which don't currently appear in pg_stat_activity.  We should
fix this either by creating a separate view for such information, or
else by deciding to include them in pg_stat_activity after all.

Michael Paquier and Robert Haas, reviewed by Alexander Korotkov and
Thomas Munro.
2016-10-04 11:01:42 -04:00
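
A minimal sketch of the extended call as an extension might issue it; the
flag set and timeout are illustrative, and the constant is spelled
PG_WAIT_EXTENSION per the later rename in commit d2ce38e204:

    int     rc;

    /* wait up to 1s, tagging the wait so it shows in pg_stat_activity */
    rc = WaitLatch(MyLatch,
                   WL_LATCH_SET | WL_TIMEOUT | WL_POSTMASTER_DEATH,
                   1000L,
                   PG_WAIT_EXTENSION);
    if (rc & WL_LATCH_SET)
        ResetLatch(MyLatch);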
Tom Lane 490ed1ebb9 Fix hstore_plpython for Python 3.
In commit d51924be8, I overlooked the need to provide linkage for
PLyUnicode_FromStringAndSize, because that's only used (and indeed
only exists) in Python 3 builds.

In light of the need to #if this item, rearrange the ordering of
the code related to each function pointer, so as not to need more
#if's than absolutely necessary.

Per buildfarm.
2016-10-04 09:38:43 -04:00
Andres Freund 61633f7904 Correct logical decoding restore behaviour for subtransactions.
Before initializing iteration over a subtransaction's changes, the last
few changes were not spilled to disk. That's correct if the transaction
didn't spill to disk, but otherwise... This bug can lead to missed or
misordered subtransaction contents when they were spilled to disk.

Move spilling of the remaining in-memory changes to
ReorderBufferIterTXNInit(), where it can easily be applied to the top
transaction and, if present, subtransactions.

Since this code had too many bugs already, noticeably increase test
coverage.

Fixes: #14319
Reported-By: Huan Ruan
Discussion: <20160909012610.20024.58169@wrigleys.postgresql.org>
Backport: 9.4-, where logical decoding was added
2016-10-03 22:11:36 -07:00
Tom Lane d51924be88 Convert contrib/hstore_plpython to not use direct linking to other modules.
Previously, on most platforms, we allowed hstore_plpython's references
to hstore and plpython to be unresolved symbols at link time, trusting
the dynamic linker to resolve them when the module is loaded.  This
has a number of problems, the worst being that the dynamic linker
does not know where the references come from and can do nothing but
fail if those other modules haven't been loaded.  We've more or less
gotten away with that for the limited use-case of datatype transform
modules, but even there, it requires some awkward hacks, most recently
commit 83c249200.

Instead, let's not treat these references as linker-resolvable at all,
but use function pointers that are manually filled in by the module's
_PG_init function.  There are few enough contact points that this
doesn't seem unmaintainable, at least for these use-cases.  (Note that
the same technique wouldn't work at all for decoupling from libpython
itself, but fortunately that's just a standard shared library and can
be linked to normally.)

This is an initial patch that just converts hstore_plpython.  If the
buildfarm doesn't find any fatal problems, I'll work on the other
transform modules soon.

Tom Lane, per an idea of Andres Freund's.

Discussion: <2652.1475512158@sss.pgh.pa.us>
2016-10-03 22:27:11 -04:00
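
A hedged sketch of the technique; the function name hstoreUpgrade and the
library path are illustrative of the kind of contact point involved, not a
claim about the exact patch contents:

    typedef Datum (*hstoreUpgrade_t) (Datum orig);

    static hstoreUpgrade_t hstoreUpgrade_p;

    void
    _PG_init(void)
    {
        /* resolve the cross-module reference by hand at load time */
        hstoreUpgrade_p = (hstoreUpgrade_t)
            load_external_function("$libdir/hstore", "hstoreUpgrade",
                                   true, NULL);
    }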
Tom Lane 9a109452da Fix bugs in contrib/pg_visibility.
collect_corrupt_items() failed to initialize tuple.t_self.  While
HeapTupleSatisfiesVacuum() doesn't actually use that value, it does
Assert that it's valid, so that the code would dump core if ip_posid
chanced to be zero.  (That's somewhat unlikely, which probably explains
how this got missed.  In any case it wouldn't matter for field use.)

Also, collect_corrupt_items was returning the wrong TIDs, that is the
contents of t_ctid rather than the tuple's own location.  This would
be the same thing in simple cases, but it could be wrong if, for
example, a past update attempt had been rolled back, leaving a live
tuple whose t_ctid doesn't point at itself.

Also, in pg_visibility(), guard against trying to read a page past
the end of the rel.  The VM code handles inquiries beyond the end
of the map by silently returning zeroes, and it seems like we should
do the same thing here.

I ran into the assertion failure while using pg_visibility to check
pg_upgrade's behavior, and then noted the other problems while
reading the code.

Report: <29043.1475288648@sss.pgh.pa.us>
2016-10-01 16:32:54 -04:00
Peter Eisentraut 0665023b44 Remove unnecessary prototypes
Prototypes for functions implementing V1-callable functions are no
longer necessary.

Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
Reviewed-by: Thomas Munro <thomas.munro@enterprisedb.com>
2016-09-30 14:04:16 -04:00
Peter Eisentraut f1a469c9f1 Fix use of offsetof()
Using offsetof() with a run-time computed argument is not allowed in
either C or C++.  Apparently, gcc allows it, but g++ doesn't.

Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
Reviewed-by: Thomas Munro <thomas.munro@enterprisedb.com>
2016-09-30 14:00:44 -04:00
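
To illustrate the constraint (struct and variable names invented, not the
code the patch touched): offsetof() needs a constant member designator, so a
run-time index has to be added on separately:

    typedef struct ItemArray
    {
        int     nitems;
        int     items[FLEXIBLE_ARRAY_MEMBER];
    } ItemArray;

    /* not portable: offsetof(ItemArray, items[n]) with run-time n */
    Size    off = offsetof(ItemArray, items) + n * sizeof(int);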
Stephen Frost fd321a1dfd Remove superuser checks in pgstattuple
Now that we track initial privileges on extension objects and changes to
those permissions, we can drop the superuser() checks from the various
functions which are part of the pgstattuple extension and rely on the
GRANT system to control access to those functions.

Since a pg_upgrade will preserve the version of the extension which
existed prior to the upgrade, we can't simply modify the existing
functions but instead need to create new functions which remove the
checks and update the SQL-level functions to use the new functions
(and to REVOKE EXECUTE rights on those functions from PUBLIC).

Thanks to Tom and Andres for adding support for extensions to follow
update paths (see: 40b449a), allowing this patch to be much smaller
since no new base version script needed to be included.

Approach suggested by Noah.

Reviewed by Michael Paquier.
2016-09-29 22:13:38 -04:00
Tom Lane 8e91e12bc3 Allow contrib/file_fdw to read from a program, like COPY FROM PROGRAM.
This patch just exposes COPY's FROM PROGRAM option in contrib/file_fdw.
There don't seem to be any security issues with that that are any worse
than what already exist with file_fdw and COPY; as in the existing cases,
only superusers are allowed to control what gets executed.

A regression test case might be nice here, but choosing a 100% portable
command to run is hard.  (We haven't got a test for COPY FROM PROGRAM
itself, either.)

Corey Huinker and Adam Gomaa, reviewed by Amit Langote

Discussion: <CADkLM=dGDGmaEiZ=UDepzumWg-CVn7r8MHPjr2NArj8S3TsROQ@mail.gmail.com>
2016-09-29 13:32:34 -04:00
Heikki Linnakangas 6e654546fb Don't bother to lock bufmgr partitions in pg_buffercache.
That makes the view a lot less disruptive to use on a production system.
Without the locks, you don't get a consistent snapshot across all buffers,
but that's OK. It wasn't a very useful guarantee in practice.

Ivan Kartyshov, reviewed by Tomas Vondra and Robert Haas.

Discussion: <f9d6cab2-73a7-7a84-55a8-07dcb8516ae5@postgrespro.ru>
2016-09-29 13:16:30 +03:00
Tom Lane f31a931fad Improve contrib/cube's handling of zero-D cubes, infinities, and NaNs.
It's always been possible to create a zero-dimensional cube by converting
from a zero-length float8 array, but cube_in failed to accept the '()'
representation that cube_out produced for that case, resulting in a
dump/reload hazard.  Make it accept the case.  Also fix a couple of
other places that didn't behave sanely for zero-dimensional cubes:
cube_size would produce 1.0 when surely the answer should be 0.0,
and g_cube_distance risked a divide-by-zero failure.

Likewise, it's always been possible to create cubes containing float8
infinity or NaN coordinate values, but cube_in couldn't parse such input,
and cube_out produced platform-dependent spellings of the values.  Convert
them to use float8in_internal and float8out_internal so that the behavior
will be the same as for float8, as we recently did for the core geometric
types (cf commit 50861cd68).  As in that commit, I don't pretend that this
patch fixes all insane corner-case behaviors that may exist for NaNs, but
it's a step forward.

(This change allows removal of the separate cube_1.out and cube_3.out
expected-files, as the platform dependency that previously required them
is now gone: an underflowing coordinate value will now produce an error
not plus or minus zero.)

Make errors from cube_in follow project conventions as to spelling
("invalid input syntax for cube" not "bad cube representation")
and errcode (INVALID_TEXT_REPRESENTATION not SYNTAX_ERROR).

Also a few marginal code cleanups and comment improvements.

Tom Lane, reviewed by Amul Sul

Discussion: <15085.1472494782@sss.pgh.pa.us>
2016-09-27 11:38:33 -04:00
Heikki Linnakangas 5c6df67e0c Fix building with LibreSSL.
LibreSSL defines OPENSSL_VERSION_NUMBER to claim that it is version 2.0.0,
but it doesn't have the functions added in OpenSSL 1.1.0. Add autoconf
checks for the individual functions we need, and stop relying on
OPENSSL_VERSION_NUMBER.

Backport to 9.5 and 9.6, like the patch that broke this. In the
back-branches, there are still a few OPENSSL_VERSION_NUMBER checks left,
to check for OpenSSL 0.9.8 or 0.9.7. I left them as they were - LibreSSL
has all those functions, so they work as intended.

Per buildfarm member curculio.

Discussion: <2442.1473957669@sss.pgh.pa.us>
2016-09-15 22:52:51 +03:00
Robert Haas 8a503526e4 pg_buffercache: Allow huge allocations.
Otherwise, users who have configured shared_buffers >= 256GB won't
be able to use this module.  There probably aren't many of those, but
it doesn't hurt anything to fix it so that it works.

Backpatch to 9.4, where MemoryContextAllocHuge was introduced.  The
same problem exists in older branches, but there's no easy way to
fix it there.

KaiGai Kohei
2016-09-15 09:30:38 -04:00
Heikki Linnakangas 593d4e47db Support OpenSSL 1.1.0.
Changes needed to build at all:

- Check for SSL_new in configure, now that SSL_library_init is a macro.
- Do not access struct members directly. This includes some new code in
  pgcrypto, to use the resource owner mechanism to ensure that we don't
  leak OpenSSL handles, now that we can't embed them in other structs
  anymore.
- RAND_SSLeay() -> RAND_OpenSSL()

Changes that were needed to silence deprecation warnings, but were not
strictly necessary:

- RAND_pseudo_bytes() -> RAND_bytes().
- SSL_library_init() and OPENSSL_config() -> OPENSSL_init_ssl()
- ASN1_STRING_data() -> ASN1_STRING_get0_data()
- DH_generate_parameters() -> DH_generate_parameters_ex()
- Locking callbacks are not needed with OpenSSL 1.1.0 anymore. (Good
  riddance!)

Also change references to SSLEAY_VERSION_NUMBER with OPENSSL_VERSION_NUMBER,
for the sake of consistency. OPENSSL_VERSION_NUMBER has existed since time
immemorial.

Fix SSL test suite to work with OpenSSL 1.1.0. CA certificates must have
the "CA:true" basic constraint extension now, or OpenSSL will refuse them.
Regenerate the test certificates with that. The "openssl" binary, used to
generate the certificates, is also now more picky, and throws an error
if an X509 extension is specified in "req_extensions", but that section
is empty.

Backpatch to all supported branches, per popular demand. In back-branches,
we still support OpenSSL 0.9.7 and above. OpenSSL 0.9.6 should still work
too, but I didn't test it. In master, we only support 0.9.8 and above.

Patch by Andreas Karlsson, with additional changes by me.

Discussion: <20160627151604.GD1051@msg.df7cb.de>
2016-09-15 14:42:29 +03:00
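
A hedged sketch of the "do not access struct members directly" item, showing
the accessor style that 1.1.0 requires (the helper and variable names are
illustrative):

    #include <openssl/x509.h>

    static const unsigned char *
    entry_data(X509_NAME_ENTRY *entry)
    {
        ASN1_STRING *field = X509_NAME_ENTRY_get_data(entry);

    #if OPENSSL_VERSION_NUMBER >= 0x10100000L
        return ASN1_STRING_get0_data(field);    /* struct is opaque in 1.1.0 */
    #else
        return field->data;                     /* old direct member access */
    #endif
    }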
Peter Eisentraut 49eb0fd097 Add location field to DefElem
Add a location field to the DefElem struct, used to parse many utility
commands.  Update various error messages to supply error position
information.

To propagate the error position information in a more systematic way,
create a ParseState in standard_ProcessUtility() and pass that to
interested functions implementing the utility commands.  This seems
better than passing the query string and then reassembling a parse state
ad hoc, which violates the encapsulation of the ParseState type.

Reviewed-by: Pavel Stehule <pavel.stehule@gmail.com>
2016-09-06 12:00:00 -04:00
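
A minimal sketch of the intended use, with the new field feeding
parser_errposition() so clients get a cursor position; the surrounding
option-checking condition is illustrative:

    if (strcmp(defel->defname, "format") == 0 && format_specified)
        ereport(ERROR,
                (errcode(ERRCODE_SYNTAX_ERROR),
                 errmsg("conflicting or redundant options"),
                 parser_errposition(pstate, defel->location)));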
Heikki Linnakangas ec136d19b2 Move code shared between libpq and backend from backend/libpq/ to common/.
When building libpq, ip.c and md5.c were symlinked or copied from
src/backend/libpq into src/interfaces/libpq, but now that we have a
directory specifically for routines that are shared between the server and
client binaries, src/common/, move them there.

Some routines in ip.c were only used in the backend. Keep those in
src/backend/libpq, but rename to ifaddr.c to avoid confusion with the file
that's now in common.

Fix the comment in src/common/Makefile to reflect how libpq actually links
those files.

There are two more files that libpq symlinks directly from src/backend:
encnames.c and wchar.c. I don't feel compelled to move those right now,
though.

Patch by Michael Paquier, with some changes by me.

Discussion: <69938195-9c76-8523-0af8-eb718ea5b36e@iki.fi>
2016-09-02 13:49:59 +03:00
Tom Lane 052cc223d5 Fix a bunch of places that called malloc and friends with no NULL check.
Where possible, use palloc or pg_malloc instead; otherwise, insert
explicit NULL checks.

Generally speaking, these are places where an actual OOM is quite
unlikely, either because they're in client programs that don't
allocate all that much, or they're very early in process startup
so that we'd likely have had a fork() failure instead.  Hence,
no back-patch, even though this is nominally a bug fix.

Michael Paquier, with some adjustments by me

Discussion: <CAB7nPqRu07Ot6iht9i9KRfYLpDaF2ZuUv5y_+72uP23ZAGysRg@mail.gmail.com>
2016-08-30 18:22:43 -04:00
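
Illustrative only, the two shapes of fix described above, assuming frontend
code (pg_malloc comes from common/fe_memutils.h):

    /* preferred: pg_malloc reports the failure and exits for us */
    char   *buf = pg_malloc(len);

    /* where plain malloc must stay, add the missing check */
    char   *raw = malloc(len);

    if (raw == NULL)
    {
        fprintf(stderr, "out of memory\n");
        exit(1);
    }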
Tom Lane 9daec77e16 Simplify correct use of simple_prompt().
The previous API for this function had it returning a malloc'd string.
That meant that callers had to check for NULL return, which few of them
were doing, and it also meant that callers had to remember to free()
the string later, which required extra logic in most cases.

Instead, make simple_prompt() write into a buffer supplied by the caller.
Anywhere that the maximum required input length is reasonably small,
which is almost all of the callers, we can just use a local or static
array as the buffer instead of dealing with malloc/free.

A fair number of callers used "pointer == NULL" as a proxy for "haven't
requested the password yet".  Maintaining the same behavior requires
adding a separate boolean flag for that, which adds back some of the
complexity we save by removing free()s.  Nonetheless, this nets out
at a small reduction in overall code size, and considerably less code
than we would have had if we'd added the missing NULL-return checks
everywhere they were needed.

In passing, clean up the API comment for simple_prompt() and get rid
of a very-unnecessary malloc/free in its Windows code path.

This is nominally a bug fix, but it does not seem worth back-patching,
because the actual risk of an OOM failure in any of these places seems
pretty tiny, and all of them are client-side not server-side anyway.

This patch is by me, but it owes a great deal to Michael Paquier
who identified the problem and drafted a patch for fixing it the
other way.

Discussion: <CAB7nPqRu07Ot6iht9i9KRfYLpDaF2ZuUv5y_+72uP23ZAGysRg@mail.gmail.com>
2016-08-30 17:02:02 -04:00
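
A sketch of the revised calling convention, assuming the post-patch signature
simple_prompt(prompt, buf, buflen, echo):

    char    password[100];

    /* caller-supplied buffer: no malloc, no NULL check, no free() */
    simple_prompt("Password: ", password, sizeof(password), false);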
Heikki Linnakangas 9b7cd59af1 Remove support for OpenSSL versions older than 0.9.8.
OpenSSL officially only supports 1.0.1 and newer. Some OS distributions
still provide patches for 0.9.8, but anything older than that is not
interesting anymore. Let's simplify things by removing compatibility code.

Andreas Karlsson, with small changes by me.
2016-08-29 20:16:02 +03:00
Tom Lane ea268cdc9a Add macros to make AllocSetContextCreate() calls simpler and safer.
I found that half a dozen (nearly 5%) of our AllocSetContextCreate calls
had typos in the context-sizing parameters.  While none of these led to
especially significant problems, they did create minor inefficiencies,
and it's now clear that expecting people to copy-and-paste those calls
accurately is not a great idea.  Let's reduce the risk of future errors
by introducing single macros that encapsulate the common use-cases.
Three such macros are enough to cover all but two special-purpose contexts;
those two calls can be left as-is, I think.

While this patch doesn't in itself improve matters for third-party
extensions, it doesn't break anything for them either, and they can
gradually adopt the simplified notation over time.

In passing, change TopMemoryContext to use the default allocation
parameters.  Formerly it could only be extended 8K at a time.  That was
probably reasonable when this code was written; but nowadays we create
many more contexts than we did then, so that it's not unusual to have a
couple hundred K in TopMemoryContext, even without considering various
dubious code that sticks other things there.  There seems no good reason
not to let it use growing blocks like most other contexts.

Back-patch to 9.6, mostly because that's still close enough to HEAD that
it's easy to do so, and keeping the branches in sync can be expected to
avoid some future back-patching pain.  The bugs fixed by these changes
don't seem to be significant enough to justify fixing them further back.

Discussion: <21072.1472321324@sss.pgh.pa.us>
2016-08-27 17:50:38 -04:00
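
A minimal sketch of the simplified notation (the context name is
illustrative); the macro stands in for the three historical size parameters:

    MemoryContext cxt;

    cxt = AllocSetContextCreate(CurrentMemoryContext,
                                "working data",
                                ALLOCSET_DEFAULT_SIZES);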
Heikki Linnakangas ae025a1598 Support OID system column in postgres_fdw.
You can use ALTER FOREIGN TABLE SET WITH OIDS on a foreign table, but the
oid column read out as zeros, because the postgres_fdw didn't know about
it. Teach postgres_fdw how to fetch it.

Etsuro Fujita, with an additional test case by me.

Discussion: <56E90A76.5000503@lab.ntt.co.jp>
2016-08-26 16:33:57 +03:00
Robert Haas dcb7a54bd1 postgres_fdw: Cosmetic cleanup.
Etsuro Fujita
2016-08-24 16:29:10 -04:00
Tom Lane f9d747a4e9 Support the new regexp_match() function for citext.
Emre Hasegeli

Patch: <CAE2gYzzF24ZHWqkMukkHwqa0otbES9Rex22LrjQUNbi=oKziNQ@mail.gmail.com>
2016-08-18 10:52:31 -04:00
Tom Lane 6657acc010 Fix -e option in contrib/intarray/bench/bench.pl.
As implemented, -e ran an EXPLAIN but then discarded the output, which
certainly seems pointless.  Make it print to stdout instead.  It's been
like that forever, so back-patch to all supported branches.

Daniel Gustafsson, reviewed by Andreas Scherbaum

Patch: <B97BDCB7-A3B3-4734-90B5-EDD586941629@yesql.se>
2016-08-17 15:51:10 -04:00
Tom Lane 0bb51aa967 Improve parsetree representation of special functions such as CURRENT_DATE.
We implement a dozen or so parameterless functions that the SQL standard
defines special syntax for.  Up to now, that was done by converting them
into more or less ad-hoc constructs such as "'now'::text::date".  That's
messy for multiple reasons: it exposes what should be implementation
details to users, and performance is worse than it needs to be in several
cases.  To improve matters, invent a new expression node type
SQLValueFunction that can represent any of these parameterless functions.

Bump catversion because this changes stored parsetrees for rules.

Discussion: <30058.1463091294@sss.pgh.pa.us>
2016-08-16 20:33:01 -04:00
Tom Lane d6c9e05cb7 Fix assorted bugs in contrib/bloom.
In blinsert(), cope with the possibility that a page we pull from the
notFullPage list is marked BLOOM_DELETED.  This could happen if VACUUM
recently marked it deleted but hasn't (yet) updated the metapage.
We can re-use such a page safely, but we *must* reinitialize it so that
it's no longer marked deleted.

Fix blvacuum() so that it updates the notFullPage list even if it's
going to update it to empty.  The previous "optimization" of skipping
the update seems pretty dubious, since it means that the next blinsert()
will uselessly visit whatever pages we left in the list.

Uniformly treat PageIsNew pages the same as deleted pages.  This should
allow proper recovery if a crash occurs just after relation extension.

Properly use vacuum_delay_point, not assorted ad-hoc CHECK_FOR_INTERRUPTS
calls, in the blvacuum() main loop.

Fix broken tuple-counting logic: blvacuum.c counted the number of live
index tuples over again in each scan, leading to VACUUM VERBOSE reporting
some multiple of the actual number of surviving index tuples after any
vacuum that removed any tuples (since they'd be counted in blvacuum, maybe
more than once, and then again in blvacuumcleanup, without ever zeroing the
counter).  It's sufficient to count them in blvacuumcleanup.

stats->estimated_count is a boolean, not a counter, and we don't want
to set it true, so don't add tuple counts to it.

Add a couple of Asserts that we don't overrun available space on a bloom
page.  I don't think there's any bug there today, but the way the
FreeBlockNumberArray size calculation is set up is scarily fragile, and
BloomPageGetFreeSpace isn't much better.  The Asserts should help catch
any future mistakes.

Per investigation of a report from Jeff Janes.  I think the first item
above may explain his report; the other changes were things I noticed
while casting about for an explanation.

Report: <CAMkU=1xEUuBphDwDmB1WjN4+td4kpnEniFaTBxnk1xzHCw8_OQ@mail.gmail.com>
2016-08-13 22:24:48 -04:00
Tom Lane ed0097e4f9 Add SQL-accessible functions for inspecting index AM properties.
Per discussion, we should provide such functions to replace the lost
ability to discover AM properties by inspecting pg_am (cf commit
65c5fcd35).  The added functionality is also meant to displace any code
that was looking directly at pg_index.indoption, since we'd rather not
believe that the bit meanings in that field are part of any client API
contract.

As future-proofing, define the SQL API to not assume that properties that
are currently AM-wide or index-wide will remain so unless they logically
must be; instead, expose them only when inquiring about a specific index
or even specific index column.  Also provide the ability for an index
AM to override the behavior.

In passing, document pg_am.amtype, overlooked in commit 473b93287.

Andrew Gierth, with kibitzing by me and others

Discussion: <87mvl5on7n.fsf@news-spur.riddles.org.uk>
2016-08-13 18:31:14 -04:00
Tom Lane e3049285a3 Trivial cosmetic cleanup in bloom/blutils.c.
Don't spell "InvalidOid" as "0".  Initialize method fields in the same
order as amapi.h declares them (and every other AM handler initializes
them).
2016-08-11 12:23:35 -04:00
Peter Eisentraut ab0a23c7c9 Fix typo 2016-08-09 19:08:00 -04:00
Tom Lane bcbecbce2f Don't propagate a null subtransaction snapshot up to parent transaction.
This oversight could cause logical decoding to fail to decode an outer
transaction containing changes, if a subtransaction had an XID but no
actual changes.  Per bug #14279 from Marko Tiikkaja.  Patch by Marko
based on analysis by Andrew Gierth.

Discussion: <20160804191757.1430.39011@wrigleys.postgresql.org>
2016-08-07 13:15:55 -04:00
Tom Lane 9492cf86e4 Fix assorted fallout from IS [NOT] NULL patch.
Commits 4452000f3 et al established semantics for NullTest.argisrow that
are a bit different from its initial conception: rather than being merely
a cache of whether we've determined the input to have composite type,
the flag now has the further meaning that we should apply field-by-field
testing as per the standard's definition of IS [NOT] NULL.  If argisrow
is false and yet the input has composite type, the construct instead has
the semantics of IS [NOT] DISTINCT FROM NULL.  Update the comments in
primnodes.h to clarify this, and fix ruleutils.c and deparse.c to print
such cases correctly.  In the case of ruleutils.c, this merely results in
cosmetic changes in EXPLAIN output, since the case can't currently arise
in stored rules.  However, it represents a live bug for deparse.c, which
would formerly have sent a remote query that had semantics different
from the local behavior.  (From the user's standpoint, this means that
testing a remote nested-composite column for null-ness could have had
unexpected recursive behavior much like that fixed in 4452000f3.)

In a related but somewhat independent fix, make plancat.c set argisrow
to false in all NullTest expressions constructed to represent "attnotnull"
constructs.  Since attnotnull is actually enforced as a simple null-value
check, this is a more accurate representation of the semantics; we were
previously overpromising what it meant for composite columns, which might
possibly lead to incorrect planner optimizations.  (It seems that what the
SQL spec expects a NOT NULL constraint to mean is an IS NOT NULL test, so
arguably we are violating the spec and should fix attnotnull to do the
other thing.  If we ever do, this part should get reverted.)

Back-patch, same as the previous commit.

Discussion: <10682.1469566308@sss.pgh.pa.us>
2016-07-28 16:09:15 -04:00
Robert Haas fe5e3fce79 Repair damage done by citext--1.1--1.2.sql.
That script is incorrect in that it sets the combine function for
max(citext) twice instead of setting the combine function for
max(citext) once and the combine function for min(citext) once.  The
consequence is that if you install 1.0 or 1.1 and then update to 1.2,
you end up with min(citext) not having a combine function, contrary to
what was intended.  If you install 1.2 directly, you're OK.

Fix things up by defining a new 1.3 version.  Upgrading from 1.2 to
1.3 won't change anything for people who first installed the 1.2
version, but people upgrading from 1.0 or 1.1 will get the right
catalog contents once they reach 1.3.

Report and patch by David Rowley, reviewed by Andreas Karlsson.
2016-07-26 15:32:57 -04:00
Peter Eisentraut 40fcfec82c Message style improvements 2016-07-25 22:07:44 -04:00
Fujii Masao 1804d1555f Fix typo in comment.
Author: Masahiko Sawada
2016-07-25 17:51:26 +09:00
Tom Lane d70d119151 Make contrib regression tests safe for Danish locale.
In btree_gin and citext, avoid some not-particularly-interesting
dependencies on the sorting of 'aa'.  In tsearch2, use COLLATE "C" to
remove an uninteresting dependency on locale sort order (and thereby
allow removal of a variant expected-file).

Also, in citext, avoid assuming that lower('I') = 'i'.  This isn't relevant
to Danish but it does fail in Turkish.
2016-07-21 16:52:35 -04:00
Tom Lane 18555b1323 Establish conventions about global object names used in regression tests.
To ensure that "make installcheck" can be used safely against an existing
installation, we need to be careful about what global object names
(database, role, and tablespace names) we use; otherwise we might
accidentally clobber important objects.  There's been a weak consensus that
test databases should have names including "regression", and that test role
names should start with "regress_", but we didn't have any particular rule
about tablespace names; and neither of the other rules was followed with
any consistency either.

This commit moves us a long way towards having a hard-and-fast rule that
regression test databases must have names including "regression", and that
test role and tablespace names must start with "regress_".  It's not
completely there because I did not touch some test cases in rolenames.sql
that test creation of special role names like "session_user".  That will
require some rethinking of exactly what we want to test, whereas the intent
of this patch is just to hit all the cases in which the needed renamings
are cosmetic.

There is no enforcement mechanism in this patch either, but if we don't
add one we can expect that the tests will soon be violating the convention
again.  Again, that's not such a cosmetic change and it will require
discussion.  (But I did use a quick-hack enforcement patch to find these
cases.)

Discussion: <16638.1468620817@sss.pgh.pa.us>
2016-07-17 18:42:43 -04:00
Peter Eisentraut f36ca9af05 Use correct symbol for minimum int64 value
The old code used SEQ_MINVALUE to get the smallest int64 value.  This
was done as a convenience to avoid having to deal with INT64_IS_BUSTED,
but that is obsolete now.  Also, it is incorrect because the smallest
int64 value is actually SEQ_MINVALUE-1.  Fix by using PG_INT64_MIN.
2016-07-17 09:15:37 -04:00
Tom Lane 45639a0525 Avoid invalidating all foreign-join cached plans when user mappings change.
We must not push down a foreign join when the foreign tables involved
should be accessed under different user mappings.  Previously we tried
to enforce that rule literally during planning, but that meant that the
resulting plans were dependent on the current contents of the
pg_user_mapping catalog, and we had to blow away all cached plans
containing any remote join when anything at all changed in pg_user_mapping.
This could have been improved somewhat, but the fact that a syscache inval
callback has very limited info about what changed made it hard to do better
within that design.  Instead, let's change the planner to not consider user
mappings per se, but to allow a foreign join if both RTEs have the same
checkAsUser value.  If they do, then they necessarily will use the same
user mapping at runtime, and we don't need to know specifically which one
that is.  Post-plan-time changes in pg_user_mapping no longer require any
plan invalidation.

This rule does give up some optimization ability, to wit where two foreign
table references come from views with different owners or one's from a view
and one's directly in the query, but nonetheless the same user mapping
would have applied.  We'll sacrifice the first case, but to not regress
more than we have to in the second case, allow a foreign join involving
both zero and nonzero checkAsUser values if the nonzero one is the same as
the prevailing effective userID.  In that case, mark the plan as only
runnable by that userID.

The plancache code already had a notion of plans being userID-specific,
in order to support RLS.  It was a little confused though, in particular
lacking clarity of thought as to whether it was the rewritten query or just
the finished plan that's dependent on the userID.  Rearrange that code so
that it's clearer what depends on which, and so that the same logic applies
to both RLS-injected role dependency and foreign-join-injected role
dependency.

Note that this patch doesn't remove the other issue mentioned in the
original complaint, which is that while we'll reliably stop using a foreign
join if it's disallowed in a new context, we might fail to start using a
foreign join if it's now allowed, but we previously created a generic
cached plan that didn't use one.  It was agreed that the chance of winning
that way was not high enough to justify the much larger number of plan
invalidations that would have to occur if we tried to cause it to happen.

In passing, clean up randomly-varying spelling of EXPLAIN commands in
postgres_fdw.sql, and fix a COSTS ON example that had been allowed to
leak into the committed tests.

This reverts most of commits fbe5a3fb7 and 5d4171d1c, which were the
previous attempt at ensuring we wouldn't push down foreign joins that
span permissions contexts.

Etsuro Fujita and Tom Lane

Discussion: <d49c1e5b-f059-20f4-c132-e9752ee0113e@lab.ntt.co.jp>
2016-07-15 17:23:02 -04:00
Robert Haas 86437ddf8c postgres_fdw: Fix cache lookup failure while creating error context.
This is fallout from join pushdown; get_relid_attribute_name can't
handle an attribute number of 0, indicating a whole-row reference,
and shouldn't be called in that case.

Etsuro Fujita, reviewed by Ashutosh Bapat
2016-07-01 11:29:25 -04:00
Robert Haas 5f3499b2b5 postgres_fdw: Remove schema-qualification from cast to text.
As pointed out by Ashutosh Bapat, the header comments for this file
say that schema-qualification is needed for all and only those types
outside pg_catalog.  pg_catalog.text is not outside pg_catalog.
2016-07-01 10:14:42 -04:00
Alvaro Herrera e3ad3ffa68 Fix handling of multixacts predating pg_upgrade
After pg_upgrade, it is possible that some tuples' Xmax have multixacts
corresponding to the old installation; such multixacts cannot have
running members anymore.  In many code sites we already know not to read
them and clobber them silently, but at least when VACUUM tries to freeze
a multixact or determine whether one needs freezing, there's an attempt
to resolve it to its member transactions by calling GetMultiXactIdMembers,
and if the multixact value is "in the future" with regards to the
current valid multixact range, an error like this is raised:
    ERROR:  MultiXactId 123 has not been created yet -- apparent wraparound
and vacuuming fails.  Per discussion with Andrew Gierth, it is completely
bogus to try to resolve multixacts coming from before a pg_upgrade,
regardless of where they stand with regards to the current valid
multixact range.

It's possible to get from under this problem by doing SELECT FOR UPDATE
of the problem tuples, but if tables are large, this is slow and
tedious, so a more thorough solution is desirable.

To fix, we realize that multixacts in xmax created in 9.2 and previous
have a specific bit pattern that is never used in 9.3 and later (we
already knew this, per comments and infomask tests sprinkled in various
places, but we weren't leveraging this knowledge appropriately).
Whenever the infomask of the tuple matches that bit pattern, we just
ignore the multixact completely as if Xmax wasn't set; or, in the case
of tuple freezing, we act as if an unwanted value is set and clobber it
without decoding.  This guarantees that no errors will be raised, and
that the values will be progressively removed until all tables are
clean.  Most callers of GetMultiXactIdMembers are patched to recognize
directly that the value is a removable "empty" multixact and avoid
calling GetMultiXactIdMembers altogether.

To avoid changing the signature of GetMultiXactIdMembers() in back
branches, we keep the "allow_old" boolean flag but rename it to
"from_pgupgrade"; if the flag is true, we always return an empty set
instead of looking up the multixact.  (I suppose we could remove the
argument in the master branch, but I chose not to do so in this commit).

This was broken all along, but the user-facing error message appeared first
because of commit 8e9a16ab8f and was partially fixed in a25c2b7c4d.
This fix, backpatched all the way back to 9.3, goes approximately in the
same direction as a25c2b7c4d but should cover all cases.

Bug analysis by Andrew Gierth and Álvaro Herrera.

A number of public reports match this bug:
  https://www.postgresql.org/message-id/20140330040029.GY4582@tamriel.snowman.net
  https://www.postgresql.org/message-id/538F3D70.6080902@publicrelay.com
  https://www.postgresql.org/message-id/556439CF.7070109@pscs.co.uk
  https://www.postgresql.org/message-id/SG2PR06MB0760098A111C88E31BD4D96FB3540@SG2PR06MB0760.apcprd06.prod.outlook.com
  https://www.postgresql.org/message-id/20160615203829.5798.4594@wrigleys.postgresql.org
2016-06-24 18:29:28 -04:00
Robert Haas 9e9c38e159 postgres_fdw: Fix incorrect NULL handling in join pushdown.
something.* IS NOT NULL means that every attribute of the row is not
NULL, not that the row itself is non-NULL (e.g. because it's coming
from below an outer join.  Use (somevar.*)::pg_catalog.text IS NOT
NULL instead.

Ashutosh Bapat, per a report by Rushabh Lathia.  Reviewed by
Amit Langote and Etsuro Fujita.  Schema-qualification added by me.
2016-06-24 15:14:15 -04:00
Robert Haas 267569b24c postgres_fdw: Remove useless return statement.
Etsuro Fujita
2016-06-24 14:33:13 -04:00
Tom Lane e611515dd6 pg_trgm's set_limit() function is parallel unsafe, not parallel restricted.
Per buildfarm.  Fortunately, it's not quite too late to squeeze this fix
into the pg_trgm 1.3 update.
2016-06-20 11:29:54 -04:00
Tom Lane 9c852566a3 Fix comparison of similarity to threshold in GIST trigram searches.
There was some very strange code here, dating to commit b525bf77, that
purported to work around an ancient gcc bug by forcing a float4 comparison
to be done as int instead.  Commit 5871b8848 broke that when it changed
one side of the comparison to "double" but left the comparison code alone.
Commit f576b17cd doubled down on the weirdness by introducing a "volatile"
marker, which had nothing to do with the actual problem.

Guess that the gcc bug, even if it's still present in the wild, was
triggered by comparison of float4's and can be avoided if we store the
result of cnt_sml() into a double before comparing to the double "nlimit".
This will at least work correctly on non-broken compilers, and it's way
more readable.

Per bug #14202 from Greg Navis.  Add a regression test based on his
example.

Report: <20160620115321.5792.10766@wrigleys.postgresql.org>
2016-06-20 10:49:19 -04:00
Tom Lane 7e81a18d49 Fix parallel-safety markings for contrib/dblink.
As shown by buildfarm reports, dblink_build_sql_insert and
dblink_build_sql_update are *not* parallel safe, because they
may attempt to access temporary tables of the local session.

Although dblink_build_sql_delete doesn't actually touch the
contents of the referenced table, it seems consistent and prudent
to mark it PARALLEL RESTRICTED too.
2016-06-17 23:08:21 -04:00
Robert Haas 71d05a2c7b pg_visibility: Add pg_truncate_visibility_map function.
This requires some core changes as well so that we can properly
WAL-log the truncation.  Specifically, it changes the format of the
XLOG_SMGR_TRUNCATE WAL record, so bump XLOG_PAGE_MAGIC.

Patch by me, reviewed but not fully endorsed by Andres Freund.
2016-06-17 17:37:30 -04:00
Robert Haas 20eb2731b7 Update dblink extension for parallel query.
Almost all functions provided by this extension are PARALLEL
RESTRICTED.  Mostly, that's because the leader's TCP connections won't
be shared with the workers, but in some cases like dblink_get_pkey
it's because they obtain locks which might be released early if taken
within a parallel worker.  dblink_fdw_validator probably can't be used
in a query anyway, but there would be no problem from the point of
view of parallel query if it were, so it's PARALLEL SAFE.

Andreas Karlsson
2016-06-17 15:18:44 -04:00
Robert Haas 177c56d608 postgres_fdw: Rephrase comment.
Per gripe from Thomas Munro, who only complained about a more
localized problem, but I couldn't resist a bit more wordsmithing.
2016-06-17 13:02:22 -04:00
Robert Haas e472ce9624 Add integrity-checking functions to pg_visibility.
The new pg_check_visible() and pg_check_frozen() functions can be used to
verify that the visibility map bits for a relation's data pages match the
actual state of the tuples on those pages.

Amit Kapila and Robert Haas, reviewed (in earlier versions) by Andres
Freund.  Additional testing help by Thomas Munro.
2016-06-15 14:33:58 -04:00
Robert Haas 13e7453135 Update xml2 extension for parallel query.
All functions provided by this extension are PARALLEL SAFE.

Andreas Karlsson
2016-06-14 15:49:32 -04:00
Robert Haas 20f6c3a2a1 Update uuid-ossp extension for parallel query.
All functions provided by this extension are PARALLEL SAFE.

Andreas Karlsson
2016-06-14 14:56:21 -04:00
Robert Haas 202ac08c08 Update unaccent extension for parallel query.
All functions provided by this extension are PARALLEL SAFE.

Andreas Karlsson
2016-06-14 14:55:49 -04:00
Robert Haas 6b7d11ffda Update sslinfo extension for parallel query.
All functions provided by this extension are PARALLEL RESTRICTED,
because they provide information about the connection state.  Parallel
workers don't have this information and therefore these functions
can't be executed in a worker (but they can be present in a query some
other part of which uses parallelism).

Andreas Karlsson
2016-06-14 14:52:55 -04:00
Robert Haas 2910fc8239 Update extensions with GIN/GIST support for parallel query.
Commit 749a787c5b bumped the extension
version on all of these extensions already, and we haven't had a
release since then, so we can make further changes without bumping the
extension version again.  Take this opportunity to mark all of the
functions exported by these modules PARALLEL SAFE -- except for
pg_trgm's set_limit().  Mark that one PARALLEL RESTRICTED, because it
makes a persistent change to a GUC value.

Note that some of the markings added by this commit don't have any
effect; for example, gseg_picksplit() isn't likely to be mentioned
explicitly in a query and therefore its parallel-safety marking will
never be consulted.  But this commit just marks everything for
consistency: if it were somehow used in a query, that would be fine as
far as parallel query is concerned, since it does not consult any
backend-private state, attempt to write data, etc.

Andreas Karlsson, with a few revisions by me.
2016-06-14 13:34:37 -04:00
Robert Haas 131c7e70b4 postgres_fdw: Check PlaceHolderVars before pushing down a join.
As discovered by Andreas Seltenreich via sqlsmith, it's possible for a
remote join to need to generate a target list which contains a
PlaceHolderVar which would need to be evaluated on the remote server.
This happens when we try to push down a join tree which contains outer
joins and the nullable side of the join contains a subquery which
evaluates some expression which can go to NULL above the level of the
join.  Since the deparsing logic can't build a remote query that
involves subqueries, it fails while trying to produce an SQL query
that can be sent to the remote side.  Detect such cases and don't try
to push down the join at all.

It's actually fine to push down the join if the PlaceHolderVar needs
to be evaluated at the current join level.  This patch makes a small
change to build_tlist_to_deparse so that this case will work.

Amit Langote, Ashutosh Bapat, and me.
2016-06-14 11:48:27 -04:00
Tom Lane 5484c0a980 Minor fixes in contrib installation scripts.
Extension scripts should never use CREATE OR REPLACE for initial object
creation.  If there is a collision with a pre-existing (probably
user-created) object, we want extension installation to fail, not silently
overwrite the user's object.  Bloom and sslinfo both violated this precept.

Also fix a number of scripts that had no standard header (the file name
comment and the \echo...\quit guard).  Probably the \echo...\quit hack
is less important now than it was in 9.1 days, but that doesn't mean
that individual extensions get to choose whether to use it or not.

And fix a couple of evident copy-and-pasteos in file name comments.

No need for back-patch: the REPLACE bugs are both new in 9.6, and the
rest of this is pretty much cosmetic.

Andreas Karlsson and Tom Lane
2016-06-14 10:47:06 -04:00
Robert Haas 332fdbef20 postgres_fdw: Promote an Assert() to elog().
Andreas Seltenreich reports that it is possible for a PlaceHolderVar
to creep into this tlist, and I fear that even after that's fixed we
might have other, similar bugs in this area either now or in the
future.  There's a lot of action-at-a-distance here, because the
validity of this assertion depends on core planner behavior; so, let's
use elog() to make sure we catch this even in non-assert builds,
rather than just crashing.
2016-06-14 09:00:12 -04:00