Commit Graph

3125 Commits

Author SHA1 Message Date
Teodor Sigaev eb7308d298 Fix typo
Michael Paquier
2016-04-04 14:55:29 +03:00
Tom Lane a75a418d07 Clean up dubious code in contrib/seg.
The restore() function assumed that the result of sprintf() with %e format
would necessarily contain an 'e', which is false: what if the supplied
number is an infinity or NaN?  If that did happen, we'd get a
null-pointer-dereference core dump.  The case appears impossible currently,
because seg_in() does not accept such values, and there are no seg-creating
functions that would create one.  But it seems unwise to rely on it never
happening in future.

Quite aside from that, the code was pretty ugly: it relied on modifying a
static format string when it could use a "*" precision argument, and it
used strtok() entirely gratuitously, and it stripped off trailing spaces
by hand instead of just not asking for them to begin with.

Coverity noticed the potential null pointer dereference (though I wonder
why it didn't complain years ago, since this code is ancient).

Since this is just code cleanup and forestalling a hypothetical future
bug, there seems no need for back-patching.
2016-04-03 17:36:53 -04:00
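
A minimal C sketch of the approach described above -- format with a runtime "*" precision and do not assume an 'e' is present -- offered for illustration only, not as the committed contrib/seg code:

    #include <stdio.h>
    #include <string.h>

    /*
     * Format a float with a caller-supplied number of significant digits via
     * "%.*e" instead of patching a static format string, and cope with
     * infinities and NaNs, whose output contains no 'e' to search for.
     */
    static void
    format_seg_bound(char *buf, size_t buflen, float val, int sigdigits)
    {
        char   *e;

        snprintf(buf, buflen, "%.*e", sigdigits - 1, val);

        e = strchr(buf, 'e');
        if (e == NULL)
            return;             /* "inf", "-inf" or "nan": nothing to trim */

        /* ... exponent-based trimming of insignificant digits goes here ... */
    }
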
Tom Lane 8f75fd1f40 Fix contrib/bloom to not fail under CLOBBER_CACHE_ALWAYS.
The code was supposing that rd_amcache wouldn't disappear from under it
during a scan; which is wrong.  Copy the data out of the relcache rather
than trying to reference it there.
2016-04-03 15:16:07 -04:00
Tom Lane a9284849b4 Clean up some stuff in new contrib/bloom module.
Coverity complained about implicit sign-extension in the
BloomPageGetFreeSpace macro, probably because sizeOfBloomTuple isn't wide
enough for size calculations.  No overflow is really possible as long as
maxoff and sizeOfBloomTuple are small enough to represent a realistic
situation, but it seems like a good idea to declare sizeOfBloomTuple as
Size not int32.

Add missing check on BloomPageAddItem() result, again from Coverity.

Avoid core dump due to not allocating so->sign array when
scan->numberOfKeys is zero.  Also thanks to Coverity.

Use FLEXIBLE_ARRAY_MEMBER rather than declaring an array as size 1
when it isn't necessarily.

Very minor beautification of related code.

Unfortunately, none of the Coverity-detected mistakes look like they
could account for the remaining buildfarm unhappiness with this
module.  It's barely possible that the FLEXIBLE_ARRAY_MEMBER mistake
does account for that, if it's enabling bogus compiler optimizations;
but I'm not terribly optimistic.  We probably still have bugs to
find here.
2016-04-03 14:17:23 -04:00
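
A hedged illustration of the two declaration-level points above; the struct and field names here are hypothetical, not the actual contrib/bloom definitions:

    #include "postgres.h"

    /* Use a real flexible array member rather than a fake size-1 array. */
    typedef struct ExampleBloomTuple
    {
        uint32      heapPtr;
        uint16      sign[FLEXIBLE_ARRAY_MEMBER];    /* variable-length signature */
    } ExampleBloomTuple;

    /* Do size arithmetic in Size, not int32, to avoid implicit sign extension. */
    static Size
    example_tuple_size(int sig_words)
    {
        return MAXALIGN(offsetof(ExampleBloomTuple, sign) +
                        sig_words * sizeof(uint16));
    }
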
Tom Lane 5a5b917184 Add missing "static".
Per buildfarm member pademelon.
2016-04-02 13:59:11 -04:00
Teodor Sigaev 9c50372d20 Fix condition in e9e441c9fa
The comment was right, but the if-condition was not.
2016-04-02 18:38:16 +03:00
Teodor Sigaev e9e441c9fa Prevent a page from being marked both deleted and 'has free space' in bloom module
Vacuum might put a page into the list of pages with some free space and mark
it as deleted at the same time.
2016-04-02 14:20:46 +03:00
Teodor Sigaev 80afb62db0 Fixes in bloom contrib module
Looking at the results from buildfarm member jaguarundi, it seems that
BloomOptions sometimes isn't initialized, though I don't yet see how that's
possible. Nevertheless, a check on signature length was missing, so add
a limit on it. Also add a missed GenericXLogAbort() for the case of an
already-deleted page in vacuum, plus minor code refactoring.
2016-04-02 13:47:04 +03:00
Noah Misch 4ad6f13500 Copyedit comments and documentation. 2016-04-01 21:53:10 -04:00
Teodor Sigaev 27f3bbfad4 Fixes in bloom contrib module missed during review
- macros like (var & FLAG) are changed to ((var & FLAG) != 0)
- do not copy the uninitialized part of the notFullPage array to the page
2016-04-01 20:09:13 +03:00
Teodor Sigaev 9ee014fc89 Bloom index contrib module
The module provides a new access method: a simple Bloom filter implemented
as a PostgreSQL index. It can give some benefit for searches over a large
number of columns.

The module is also, so far, the only way to test the generic WAL interface
committed earlier.

Author: Teodor Sigaev, Alexander Korotkov
Reviewers: Aleksander Alekseev, Michael Paquier, Jim Nasby
2016-04-01 16:42:24 +03:00
Robert Haas 5d4171d1c7 Don't require a user mapping for FDWs to work.
Commit fbe5a3fb73 accidentally changed
this behavior; put things back the way they were, and add some
regression tests.

Report by Andres Freund; patch by Ashutosh Bapat, with a bit of
kibitzing by me.
2016-03-28 21:50:28 -04:00
Alvaro Herrera 3e1338475f Add missing checks to some of pageinspect's BRIN functions
brin_page_type() and brin_metapage_info() did not enforce being called
by superuser, like other pageinspect functions that take bytea do.
Since they don't verify the passed page thoroughly, it is possible to
use them to read the server memory with a carefully crafted bytea value,
up to a few kilobytes from where the input bytea is located.

Have them throw errors if called by a non-superuser.

Report and initial patch: Andreas Seltenreich

Security: CVE-2016-3065
2016-03-28 10:57:42 -03:00
Andres Freund 1a7a43672b Don't use !! but != 0/NULL to force boolean evaluation.
I introduced several uses of !! to force bit arithmetic to be boolean,
but per discussion the project prefers != 0/NULL.

Discussion: CA+TgmoZP5KakLGP6B4vUjgMBUW0woq_dJYi0paOz-My0Hwt_vQ@mail.gmail.com
2016-03-27 18:10:19 +02:00
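
A small C illustration of the style preference being applied (the flag name is made up):

    #include <stdbool.h>

    #define FLAG_FROZEN 0x02

    static bool
    flag_is_set(unsigned int flags)
    {
        /* discouraged double negation: return !!(flags & FLAG_FROZEN); */
        return (flags & FLAG_FROZEN) != 0;
    }
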
Robert Haas 3151f16e18 postgres_fdw: Fix crash when pushing down multiple joins.
A join clause might mention multiple relations on either side, so it
need not be the case that a given joinrel's constituent relations are
all on one side of the join clause or all on the other.

Report by Rajkumar Raghuwanshi.  Analysis and fix by Michael Paquier
and Ashutosh Bapat.
2016-03-23 12:28:01 -04:00
Tom Lane 92b7902deb Clean up some Coverity complaints about commit 0bf3ae88af.
The two get_tle_by_resno() calls introduced by this commit lacked any
check for a NULL return, unlike any other calls of that function anywhere
in our tree.  Coverity quite properly complained about it.  Also fix a
misindented line in process_query_params(), which Coverity also complained
about on the grounds that the bad indentation suggested possible programmer
misinterpretation.
2016-03-21 12:00:02 -04:00
Tom Lane d5351fcb03 Fix phony .PHONY.
A couple makefiles had misspelled the magic .PHONY target as PHONY.
2016-03-19 17:19:37 -04:00
Robert Haas 0bf3ae88af Directly modify foreign tables.
postgres_fdw can now send an UPDATE or DELETE statement directly to
the foreign server in simple cases, rather than sending a SELECT FOR
UPDATE statement and then updating or deleting rows one-by-one.

Etsuro Fujita, reviewed by Rushabh Lathia, Shigeru Hanada, Kyotaro
Horiguchi, Albe Laurenz, Thom Brown, and me.
2016-03-18 13:55:52 -04:00
Robert Haas 2d8a1e22b1 Various minor corrections of and improvements to comments.
Aleksander Alekseev
2016-03-18 09:38:59 -04:00
Teodor Sigaev aa698d7535 pg_trgm's set_limit() now uses SetConfigOption()
The deprecated set_limit() is modified to use SetConfigOption() to set
similarity_threshold, which is actually backed by the
pg_trgm.similarity_threshold GUC variable. The previous coding set
similarity_threshold directly, which could cause an inconsistency between
the actual variable and its GUC representation.

Per gripe from Tom Lane
2016-03-18 12:26:27 +03:00
Teodor Sigaev e4b523e5b5 Add files forgotten in f576b17cd6 2016-03-16 19:23:41 +03:00
Teodor Sigaev f576b17cd6 Add word_similarity to pg_trgm contrib module.
The patch introduces the concept of similarity between a string and just a
word from another string.

The extension version is not changed because 1.2 was already introduced in
the 9.6 release cycle, so there hasn't been a public release of it.

Author: Alexander Korotkov, Artur Zakirov
2016-03-16 18:59:21 +03:00
Teodor Sigaev 5871b88487 GUC variable pg_trgm.similarity_threshold instead of set_limit()
Use the GUC variable pg_trgm.similarity_threshold instead of
set_limit()/show_limit(), which were introduced back when modules could not
define GUC variables of their own.

Author: Artur Zakirov
2016-03-16 17:44:58 +03:00
Teodor Sigaev ce91b9209f fix typo in comment 2016-03-16 17:18:14 +03:00
Teodor Sigaev 9a206d063c Improve script generating unaccent rules
The script now uses the standard Unicode transliterator Latin-ASCII.

Author: Leonard Benedetti
2016-03-16 16:47:03 +03:00
Robert Haas 3aff33aa68 Fix typos.
Oskari Saarenmaa
2016-03-15 18:06:11 -04:00
Robert Haas 4a46a99d89 postgres_fdw: make_tuple_from_result_row should set cur_attno for ctid.
There's no reason for this function to do this for every other
attribute number and omit it for CTID, especially since
conversion_error_callback has code to handle that case.  This seems
to be an oversight in commit e690b95150.

Etsuro Fujita
2016-03-15 16:51:56 -04:00
Tom Lane 28048cbaa2 Allow callers of create_foreignscan_path to specify nondefault PathTarget.
Although the default choice of rel->reltarget should typically be
sufficient for scan or join paths, it's not at all sufficient for the
purposes PathTargets were invented for; in particular not for
upper-relation Paths.  So break API compatibility by adding a PathTarget
argument to create_foreignscan_path().  To ease updating of existing
code, accept a NULL value of the argument as selecting rel->reltarget.
2016-03-14 17:31:28 -04:00
Tom Lane 307c78852f Rethink representation of PathTargets.
In commit 19a541143a I did not make PathTarget a subtype of Node,
and embedded a RelOptInfo's reltarget directly into it rather than having
a separately-allocated Node.  In hindsight that was misguided
micro-optimization, enabled by the fact that at that point we didn't have
any Paths with custom PathTargets.  Now that PathTarget processing has
been fleshed out some more, it's easier to see that it's better to have
PathTarget as an independent Node type, even if it does cost us one more
palloc to create a RelOptInfo.  So change it while we still can.

This commit just changes the representation, without doing anything more
interesting than that.
2016-03-14 16:59:59 -04:00
Robert Haas 6be84eeb8d Update more comments for 96198d94cb.
Etsuro Fujita, reviewed (though not completely endorsed) by Ashutosh
Bapat, and slightly expanded by me.
2016-03-14 14:29:12 -04:00
Magnus Hagander 7a8d874836 Rename auto_explain.sample_ratio to sample_rate
Per suggestion from Tomas Vondra

Author: Julien Rouhaud
2016-03-13 13:18:03 +01:00
Tom Lane 23a27b039d Widen query numbers-of-tuples-processed counters to uint64.
This patch widens SPI_processed, EState's es_processed field, PortalData's
portalPos field, FuncCallContext's call_cntr and max_calls fields,
ExecutorRun's count argument, PortalRunFetch's result, and the max number
of rows in a SPITupleTable to uint64, and deals with (I hope) all the
ensuing fallout.  Some of these values were declared uint32 before, and
others "long".

I also removed PortalData's posOverflow field, since that logic seems
pretty useless given that portalPos is now always 64 bits.

The user-visible results are that command tags for SELECT etc will
correctly report tuple counts larger than 4G, as will plpgsql's
GET DIAGNOSTICS ... ROW_COUNT command.  Queries processing more tuples
than that are still not exactly the norm, but they're becoming more
common.

Most values associated with FETCH/MOVE distances, such as PortalRun's count
argument and the count argument of most SPI functions that have one, remain
declared as "long".  It's not clear whether it would be worth promoting
those to int64; but it would definitely be a large dollop of additional
API churn on top of this, and it would only help 32-bit platforms which
seem relatively less likely to see any benefit.

Andreas Scherbaum, reviewed by Christian Ullrich, additional hacking by me
2016-03-12 16:05:29 -05:00
Magnus Hagander 92f03fe76f Allow setting sample ratio for auto_explain
New configuration parameter auto_explain.sample_ratio makes it
possible to log just a fraction of the queries meeting the configured
threshold, to reduce the amount of logging.

Author: Craig Ringer and Julien Rouhaud
Review: Petr Jelinek
2016-03-11 15:08:34 +01:00
Tom Lane 364a9f47ab Refactor pull_var_clause's API to make it less tedious to extend.
In commit 1d97c19a0f and later c1d9579dd8, we extended
pull_var_clause's API by adding enum-type arguments.  That's sort of a pain
to maintain, though, because it means every time we add a new behavior we
must touch every last one of the call sites, even if there's a reasonable
default behavior that most of them could use.  Let's switch over to using a
bitmask of flags, instead; that seems more maintainable and might save a
nanosecond or two as well.  This commit changes no behavior in itself,
though I'm going to follow it up with one that does add a new behavior.

In passing, remove flatten_tlist(), which has not been used since 9.1
and would otherwise need the same API changes.

Removing these enums means that optimizer/tlist.h no longer needs to
depend on optimizer/var.h.  Changing that caused a number of C files to
need addition of #include "optimizer/var.h" (probably we can thank old
runs of pgrminclude for that); but on balance it seems like a good change
anyway.
2016-03-10 15:53:07 -05:00
Andres Freund 1d4a0ab19a Avoid unlikely data-loss scenarios due to rename() without fsync.
Renaming a file using rename(2) is not guaranteed to be durable in face
of crashes. Use the previously added durable_rename()/durable_link_or_rename()
in various places where we previously just renamed files.

Most of the changed call sites are arguably not critical, but it seems
better to err on the side of too much durability.  The most prominent
known case where the previously missing fsyncs could cause data loss is
crashes at the end of a checkpoint. After the actual checkpoint has been
performed, old WAL files are recycled. When they're filled, their
contents are fdatasynced, but we did not fsync the containing
directory. An OS/hardware crash in an unfortunate moment could then end
up leaving that file with its old name, but new content; WAL replay
would thus not replay it.

Reported-By: Tomas Vondra
Author: Michael Paquier, Tomas Vondra, Andres Freund
Discussion: 56583BDD.9060302@2ndquadrant.com
Backpatch: All supported branches
2016-03-09 18:53:53 -08:00
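
A generic POSIX-level sketch of the idea behind durable_rename() -- simplified, and not the fd.c implementation, which also fsyncs the old file beforehand and handles error reporting and Windows:

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    static int
    durable_rename_sketch(const char *oldpath, const char *newpath,
                          const char *dirpath)
    {
        int     fd;

        if (rename(oldpath, newpath) < 0)
            return -1;

        /* fsync the containing directory so the new name itself is durable */
        fd = open(dirpath, O_RDONLY);
        if (fd < 0)
            return -1;
        if (fsync(fd) < 0)
        {
            close(fd);
            return -1;
        }
        close(fd);
        return 0;
    }
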
Alvaro Herrera 188f359d39 pgcrypto: support changing S2K iteration count
pgcrypto already supports key-stretching during symmetric encryption,
including the salted-and-iterated method; but the number of iterations
was not configurable.  This commit implements a new s2k-count parameter
to pgp_sym_encrypt() which permits selecting a larger number of
iterations.

Author: Jeff Janes
2016-03-09 14:31:07 -03:00
Robert Haas aa09cd242f postgres_fdw: Consider foreign joining and foreign sorting together.
Commit ccd8f97922 gave us the ability to
request that the remote side sort the data, and, later, commit
e4106b2528 gave us the ability to
request that the remote side perform the join for us rather than doing
it locally.  But we could not do both things at the same time: a
remote SQL query that had an ORDER BY clause would never be a join.
This commit adds that capability.

Ashutosh Bapat, reviewed by me.
2016-03-09 10:51:49 -05:00
Andres Freund 7a1d4a2448 ltree: Zero padding bytes when allocating memory for externally visible data.
The ltree/ltree_gist/ltxtquery headers store data at MAXALIGN alignment,
requiring some padding bytes. So far we left these uninitialized; zero
them by using palloc0.

Author: Andres Freund
Reported-By: Andres Freund / valgrind / buildfarm animal skink
Backpatch: 9.1-
2016-03-08 14:59:29 -08:00
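
A sketch of the pattern the fix applies (the size computation here is illustrative, not the actual ltree code):

    #include "postgres.h"

    /*
     * When a MAXALIGN'd externally visible structure leaves padding bytes,
     * allocate with palloc0 so the padding is zeroed instead of containing
     * leftover memory.
     */
    static void *
    alloc_external_struct(Size datalen)
    {
        return palloc0(MAXALIGN(datalen));  /* plain palloc() leaves padding uninitialized */
    }
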
Robert Haas d29b153f18 Fix reversed argument to bms_is_subset.
Ashutosh Bapat
2016-03-08 13:59:11 -05:00
Robert Haas ba0a198fb1 Add pg_visibility contrib module.
This lets you examine the visibility map as well as page-level
visibility information.  I initially wrote it as a debugging aid,
but was encouraged to polish it for commit.

Patch by me, reviewed by Masahiko Sawada.

Discussion: 56D77803.6080503@BlueTreble.com
2016-03-08 08:42:01 -05:00
Andres Freund c8f621c43a logical decoding: Fix handling of large old tuples with replica identity full.
When decoding the old version of an UPDATE or DELETE change, and if that
tuple was bigger than MaxHeapTupleSize, we either Assert'ed out, or
failed in more subtle ways in non-assert builds.  Normally individual
tuples aren't bigger than MaxHeapTupleSize, with big datums toasted.
But that's not the case for the old version of a tuple for logical
decoding; the replica identity is logged as one piece. With the default
replica identity, btree limits that to small tuples, but that's not the
case for FULL.

Change the tuple buffer infrastructure to separate allocate over-large
tuples, instead of always going through the slab cache.

This unfortunately requires changing the ReorderBufferTupleBuf
definition, we need to store the allocated size someplace. To avoid
requiring output plugins to recompile, don't store HeapTupleHeaderData
directly after HeapTupleData, but point to it via t_data; that leaves
rooms for the allocated size.  As there's no reason for an output plugin
to look at ReorderBufferTupleBuf->t_data.header, remove the field. It
was just a minor convenience having it directly accessible.

Reported-By: Adam Dratwiński
Discussion: CAKg6ypLd7773AOX4DiOGRwQk1TVOQKhNwjYiVjJnpq8Wo+i62Q@mail.gmail.com
2016-03-05 18:02:20 -08:00
Andres Freund 0bda14d54c logical decoding: old/newtuple in spooled UPDATE changes was switched around.
Somehow I managed to flip the order of restoring old & new tuples when
de-spooling a change in a large transaction from disk. This happens to
only take effect when a change is spooled to disk which has old/new
versions of the tuple. That is only the case for UPDATEs where the
primary key changed or where the replica identity is changed to FULL.

The tests didn't catch this because either spooled updates, or updates
that changed primary keys, were tested; not both at the same time.

Found while adding tests for the following commit.

Backpatch: 9.4, where logical decoding was added
2016-03-05 18:02:20 -08:00
Andres Freund d9e903f3cb logical decoding: Tell reorderbuffer about all xids.
Logical decoding's reorderbuffer keeps transactions in an LSN ordered
list for efficiency. To make that efficiently possible, upper-level
xids are forced to be logged before nested subtransaction xids.  That
only works, though, if all such records are looked at; unfortunately we
didn't do so for e.g. row-level locks, which are otherwise uninteresting
for logical decoding.

This could lead to errors like:
"ERROR: subxact logged without previous toplevel record".

It's not sufficient to just look at row locking records, the xid could
appear first due to a lot of other types of records (which will trigger
the transaction to be marked logged with MarkCurrentTransactionIdLoggedIfAny).
So invent infrastructure to tell reorderbuffer about xids seen, when
they'd otherwise not pass through reorderbuffer.c.

Reported-By: Jarred Ward
Bug: #13844
Discussion: 20160105033249.1087.66040@wrigleys.postgresql.org
Backpatch: 9.4, where logical decoding was added
2016-03-05 18:02:20 -08:00
Robert Haas 3bea3f88d5 postgres_fdw: When sending ORDER BY, always include NULLS FIRST/LAST.
Previously, we included NULLS FIRST when appropriate but relied on the
default behavior to be NULLS LAST.  This is, however, not true for a
sort in descending order and seems like a fragile assumption anyway.

Report by Rajkumar Raghuwanshi.  Patch by Ashutosh Bapat.  Review
comments from Michael Paquier and Tom Lane.
2016-03-04 11:37:42 -05:00
Andres Freund 1986c3c440 Force synchronous_commit=on in test_decoding's concurrent_ddl_dml.spec.
Otherwise running installcheck-force on a server with
synchronous_commit=off will result in the tests failing. All the other
tests already do so...

Backpatch: 9.4, where logical decoding was added
2016-03-03 17:22:25 -08:00
Andres Freund 7c17aac69d logical decoding: fix decoding of a commit's commit time.
When adding replication origins in 5aa235042, I somehow managed to set
the timestamp of decoded transactions to InvalidXLogRecptr when decoding
one made without a replication origin. Fix that, and the wrong type of
the new commit_time variable.

This didn't trigger a regression test failure because we explicitly
don't show commit timestamps in the regression tests, as they obviously
are variable. Add a test that checks that a decoded commit's timestamp
is within minutes of NOW() from before the commit.

Reported-By: Weiping Qu
Diagnosed-By: Artur Zakirov
Discussion: 56D4197E.9050706@informatik.uni-kl.de,
    56D42918.1010108@postgrespro.ru
Backpatch: 9.5, where 5aa235042 originates.
2016-03-02 23:42:21 -08:00
Robert Haas a892234f83 Change the format of the VM fork to add a second bit per page.
The new bit indicates whether every tuple on the page is already frozen.
It is cleared only when the all-visible bit is cleared, and it can be
set only when we vacuum a page and find that every tuple on that page is
both visible to every transaction and in no need of any future
vacuuming.

A future commit will use this new bit to optimize away full-table scans
that would otherwise be triggered by XID wraparound considerations.  A
page which is merely all-visible must still be scanned in that case, but
a page which is all-frozen need not be.  This commit does not attempt
that optimization, although that optimization is the goal here.  It
seems better to get the basic infrastructure in place first.

Per discussion, it's very desirable for pg_upgrade to automatically
migrate existing VM forks from the old format to the new format.  That,
too, will be handled in a follow-on patch.

Masahiko Sawada, reviewed by Kyotaro Horiguchi, Fujii Masao, Amit
Kapila, Simon Riggs, Andres Freund, and others, and substantially
revised by me.
2016-03-01 21:49:41 -05:00
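
A sketch of the resulting visibility-map layout; the flag values and macro names below reflect my reading of the change and should be treated as assumptions, not the authoritative definitions:

    #include "postgres.h"

    #define VM_ALL_VISIBLE          0x01
    #define VM_ALL_FROZEN           0x02
    #define BITS_PER_HEAPBLOCK      2
    #define HEAPBLOCKS_PER_BYTE     (8 / BITS_PER_HEAPBLOCK)    /* was 8, now 4 */

    /* slot = heap block number % HEAPBLOCKS_PER_BYTE */
    static bool
    page_is_all_frozen(uint8 vmbyte, int slot)
    {
        return (vmbyte & (VM_ALL_FROZEN << (slot * BITS_PER_HEAPBLOCK))) != 0;
    }
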
Andrew Dunstan 87cc6b57a9 Respect TEMP_CONFIG when pg_regress_check and friends are called
This reverts commit 9117985b6b in favor of
a more general solution.
2016-02-27 12:28:21 -05:00
Robert Haas 35746bc348 Add new FDW API to test for parallel-safety.
This is basically a bug fix; the old code assumes that a ForeignScan
is always parallel-safe, but for postgres_fdw, for example, this is
definitely false.  It should be true for file_fdw, though, since a
worker can read a file from the filesystem just as well as any other
backend process.

Original patch by Thomas Munro.  Documentation, and changes to the
comments, by me.
2016-02-26 16:14:46 +05:30
Robert Haas 9117985b6b Respect TEMP_CONFIG when running contrib regression tests.
Thomas Munro
2016-02-26 12:38:21 +05:30
Robert Haas dd077ef832 postgres_fdw: Avoid sharing list substructure.
list_concat(list_concat(a, b), c) destructively changes both a and b;
to avoid such perils, copy lists of remote_conds before incorporating
them into larger lists via list_concat().

Ashutosh Bapat, per a report from Etsuro Fujita
2016-02-21 14:17:50 +05:30
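
A short sketch of the hazard and the fix (variable names are illustrative):

    #include "postgres.h"
    #include "nodes/pg_list.h"

    static List *
    combine_conditions(List *remote_conds, List *local_conds)
    {
        /*
         * list_concat(remote_conds, local_conds) would destructively extend
         * remote_conds and share local_conds' cells; copy before concatenating
         * when the inputs are also referenced elsewhere.
         */
        return list_concat(list_copy(remote_conds), list_copy(local_conds));
    }
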
Tom Lane 19a541143a Add an explicit representation of the output targetlist to Paths.
Up to now, there's been an assumption that all Paths for a given relation
compute the same output column set (targetlist).  However, there are good
reasons to remove that assumption.  For example, an indexscan on an
expression index might be able to return the value of an expensive function
"for free".  While we have the ability to generate such a plan today in
simple cases, we don't have a way to model that it's cheaper than a plan
that computes the function from scratch, nor a way to create such a plan
in join cases (where the function computation would normally happen at
the topmost join node).  Also, we need this so that we can have Paths
representing post-scan/join steps, where the targetlist may well change
from one step to the next.  Therefore, invent a "struct PathTarget"
representing the columns we expect a plan step to emit.  It's convenient
to include the output tuple width and tlist evaluation cost in this struct,
and there will likely be additional fields in future.

While Path nodes that actually do have custom outputs will need their own
PathTargets, it will still be true that most Paths for a given relation
will compute the same tlist.  To reduce the overhead added by this patch,
keep a "default PathTarget" in RelOptInfo, and allow Paths that compute
that column set to just point to their parent RelOptInfo's reltarget.
(In the patch as committed, actually every Path is like that, since we
do not yet have any cases of custom PathTargets.)

I took this opportunity to provide some more-honest costing of
PlaceHolderVar evaluation.  Up to now, the assumption that "scan/join
reltargetlists have cost zero" was applied not only to Vars, where it's
reasonable, but also PlaceHolderVars where it isn't.  Now, we add the eval
cost of a PlaceHolderVar's expression to the first plan level where it can
be computed, by including it in the PathTarget cost field and adding that
to the cost estimates for Paths.  This isn't perfect yet but it's much
better than before, and there is a way forward to improve it more.  This
costing change affects the join order chosen for a couple of the regression
tests, changing expected row ordering.
2016-02-18 20:02:03 -05:00
Tom Lane 48e6c943e5 Fix multiple bugs in contrib/pgstattuple's pgstatindex() function.
Dead or half-dead index leaf pages were incorrectly reported as live, as a
consequence of a code rearrangement I made (during a moment of severe brain
fade, evidently) in commit d287818eb5.

The index metapage was not counted in index_size, causing that result to
not agree with the actual index size on-disk.

Index root pages were not counted in internal_pages, which is inconsistent
compared to the case of a root that's also a leaf (one-page index), where
the root would be counted in leaf_pages.  Aside from that inconsistency,
this could lead to additional transient discrepancies between the reported
page counts and index_size, since it's possible for pgstatindex's scan to
see zero or multiple pages marked as BTP_ROOT, if the root moves due to
a split during the scan.  With these fixes, index_size will always be
exactly one page more than the sum of the displayed page counts.

Also, the index_size result was incorrectly documented as being measured in
pages; it's always been measured in bytes.  (While fixing that, I couldn't
resist doing some small additional wordsmithing on the pgstattuple docs.)

Including the metapage causes the reported index_size to not be zero for
an empty index.  To preserve the desired property that the pgstattuple
regression test results are platform-independent (ie, BLCKSZ configuration
independent), scale the index_size result in the regression tests.

The documentation issue was reported by Otsuka Kenji, and the inconsistent
root page counting by Peter Geoghegan; the other problems noted by me.
Back-patch to all supported branches, because this has been broken for
a long time.
2016-02-18 15:40:35 -05:00
Tom Lane 99a9d6d563 Add missing "static" qualifier.
Per buildfarm member pademelon.
2016-02-12 11:20:16 -05:00
Robert Haas 019e788137 postgres_fdw: Remove unnecessary variable.
It causes warnings in non-Assert-enabled builds.

Per report from Jeff Janes.
2016-02-10 08:17:43 -05:00
Robert Haas bb4df42e6a postgres_fdw: Remove unstable regression test.
Per Tom Lane and the buildfarm.
2016-02-09 15:42:20 -05:00
Robert Haas e4106b2528 postgres_fdw: Push down joins to remote servers.
If we've got a relatively straightforward join between two tables,
this pushes that join down to the remote server instead of fetching
the rows for each table and performing the join locally.  Some cases
are not handled yet, such as SEMI and ANTI joins.  Also, we don't
yet attempt to create presorted join paths or parameterized join
paths even though these options do get tried for a base relation
scan.  Nevertheless, this seems likely to be a very significant win
in many practical cases.

Shigeru Hanada and Ashutosh Bapat, reviewed by Robert Haas, with
additional review at various points by Tom Lane, Etsuro Fujita,
KaiGai Kohei, and Jeevan Chalke.
2016-02-09 14:00:50 -05:00
Tom Lane 63828969c8 Use %u not %d to print OIDs.
Oversight in commit 96198d94c.

Etsuro Fujita
2016-02-08 11:06:23 -05:00
Tom Lane 392998bc58 Add missing "static" qualifier.
Per buildfarm member pademelon.
2016-02-06 12:21:14 -05:00
Robert Haas d0cd7bda97 postgres_fdw: pgindent run.
In preparation for upcoming commits.
2016-02-04 22:30:08 -05:00
Robert Haas 37c84570b1 postgres_fdw: Avoid possible misbehavior when RETURNING tableoid column only.
deparseReturningList ended up adding RETURNING NULL to the deparsed query, but
code elsewhere saw an empty list of attributes and concluded that it
should not expect tuples from the remote side.

Etsuro Fujita and Robert Haas, reviewed by Thom Brown
2016-02-04 22:27:13 -05:00
Robert Haas c1772ad922 Change the way that LWLocks for extensions are allocated.
The previous RequestAddinLWLocks() method had several disadvantages.
First, the locks would be in the main tranche; we've recently decided
that it's useful for LWLocks used for separate purposes to have
separate tranche IDs.  Second, there wasn't any correlation between
what code called RequestAddinLWLocks() and what code called
LWLockAssign(); when multiple modules are in use, it could become
quite difficult to troubleshoot problems where LWLockAssign() ran out
of locks.  To fix, create a concept of named LWLock tranches which
can be used either by extension or by core code.

Amit Kapila and Robert Haas
2016-02-04 16:43:04 -05:00
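
A sketch of how an extension uses the named-tranche mechanism; "my_extension" is a placeholder, the shmem-startup hook wiring is omitted, and the function names reflect my understanding of the API introduced here:

    #include "postgres.h"
    #include "fmgr.h"
    #include "miscadmin.h"
    #include "storage/lwlock.h"

    PG_MODULE_MAGIC;

    static LWLock *my_lock;

    void _PG_init(void);

    void
    _PG_init(void)
    {
        if (!process_shared_preload_libraries_in_progress)
            return;
        /* request one lock in our own tranche, instead of LWLockAssign() */
        RequestNamedLWLockTranche("my_extension", 1);
    }

    /* called from the shmem_startup_hook (wiring not shown) */
    static void
    my_extension_shmem_startup(void)
    {
        my_lock = &(GetNamedLWLockTranche("my_extension"))->lock;
    }
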
Tom Lane 41d2c081ce Make hstore_to_jsonb_loose match hstore_to_json_loose on what's a number.
Commit e09996ff8d removed some ad-hoc code in hstore_to_json_loose
that determined whether an hstore value string looked like a number,
in favor of calling the JSON parser's is-it-a-number code.  However,
it neglected the fact that the exact same code appeared in
hstore_to_jsonb_loose.

This is not a bug, exactly, because the requirements on the two functions
are not the same: hstore_to_json_loose must accept only syntactically legal
JSON numbers as numbers, or it will produce invalid JSON output, as per bug
#12070 which spawned the prior commit.  But hstore_to_jsonb_loose could
accept anything that numeric_in will eat, other than Inf and NaN.

Nonetheless it seems surprising and arbitrary that the two functions don't
use the same rules for what is a number versus what is a string; especially
since they did use the same rules before the aforesaid commit.  For one
thing, that means that doing hstore_to_json_loose and then casting to jsonb
can produce results different from doing just hstore_to_jsonb_loose.

Hence, change hstore_to_jsonb_loose's logic to match hstore_to_json_loose,
ie, hstore values are treated as numbers when they match the JSON syntax
for numbers.

No back-patch, since this is more in the nature of a definitional change
than a bug fix.
2016-02-03 12:04:02 -05:00
Robert Haas 52b63649fc Code review for commit dc203dc3ac.
Remove duplicate assignment.  This part by Ashutosh Bapat.

Remove now-obsolete comment.  This part by me, although the pending
join pushdown patch does something similar, and for the same reason:
there's no reason to keep two lists of the things in the fdw_private
structure that have to be kept in sync with each other.
2016-02-03 11:53:46 -05:00
Robert Haas dc203dc3ac postgres_fdw: Allow fetch_size to be set per-table or per-server.
The default fetch size of 100 rows might not be right in every
environment, so allow users to configure it.

Corey Huinker, reviewed by Kyotaro Horiguchi, Andres Freund, and me.
2016-02-03 09:07:35 -05:00
Tom Lane e6ecc93a17 Fix IsValidJsonNumber() to notice trailing non-alphanumeric garbage.
Commit e09996ff8d was one brick shy of a load: it didn't insist
that the detected JSON number be the whole of the supplied string.
This allowed inputs such as "2016-01-01" to be misdetected as valid JSON
numbers.  Per bug #13906 from Dmitry Ryabov.

In passing, be more wary of zero-length input (I'm not sure this can
happen given current callers, but better safe than sorry), and do some
minor cosmetic cleanup.
2016-02-03 01:39:48 -05:00
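
A generic C sketch of the rule being enforced -- the whole string must parse as a number -- not the actual lexer-based IsValidJsonNumber (and unlike JSON, strtod tolerates leading whitespace and inf/nan):

    #include <stdbool.h>
    #include <stdlib.h>

    static bool
    looks_like_whole_number(const char *str)
    {
        char   *end;

        if (*str == '\0')
            return false;           /* be wary of zero-length input */

        (void) strtod(str, &end);
        /* reject "2016-01-01": parsing must consume every character */
        return end != str && *end == '\0';
    }
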
Magnus Hagander e51ab85cd9 Fix typos in comments
Author: Michael Paquier
2016-02-01 11:43:48 +01:00
Robert Haas cc592c48c5 postgres_fdw: More preliminary refactoring for upcoming join pushdown.
The code that generates a complete SQL query for a given foreign relation
was repeated in two places, and they didn't quite agree: the EXPLAIN case
left out the locking clause.  Centralize the code so we get the same
behavior everywhere, and adjust calling conventions and which functions
are static vs. extern accordingly.

Ashutosh Bapat, reviewed and slightly adjusted by me.
2016-01-30 10:32:38 -05:00
Tom Lane 7e22470471 Fix incorrect pattern-match processing in psql's \det command.
listForeignTables' invocation of processSQLNamePattern did not match up
with the other ones that handle potentially-schema-qualified names; it
failed to make use of pg_table_is_visible() and also passed the name
arguments in the wrong order.  Bug seems to have been aboriginal in commit
0d692a0dc9.  It accidentally sort of worked as long as you didn't
inquire too closely into the behavior, although the silliness was later
exposed by inconsistencies in the test queries added by 59efda3e50
(which I probably should have questioned at the time, but didn't).

Per bug #13899 from Reece Hart.  Patch by Reece Hart and Tom Lane.
Back-patch to all affected branches.
2016-01-29 10:28:02 +01:00
Robert Haas b88ef201d4 postgres_fdw: Refactor deparsing code for locking clauses.
The upcoming patch to allow join pushdown in postgres_fdw needs to use
this code multiple times, which requires moving it to deparse.c.  That
seems like a good idea anyway, so do that now both on general principle
and to simplify the future patch.

Inspired by a patch by Shigeru Hanada and Ashutosh Bapat, but I did
it a little differently than what that patch did.
2016-01-28 16:44:01 -05:00
Robert Haas 2f6b041f76 Add missing quotation mark.
This fix accidentally got left out of the previous commit.
2016-01-28 12:21:51 -05:00
Robert Haas 96198d94cb Avoid multiple foreign server connections when all use same user mapping.
Previously, postgres_fdw's connection cache was keyed by user OID and
server OID, but this can lead to multiple connections when it's not
really necessary.  In particular, if all relevant users are mapped to
the public user mapping, then their connection options are certainly
the same, so one connection can be used for all of them.

While we're cleaning things up here, drop the "server" argument to
GetConnection(), which isn't really needed.  This saves a few cycles
because callers no longer have to look this up; the function itself
does, but only when establishing a new connection, not when reusing
an existing one.

Ashutosh Bapat, with a few small changes by me.
2016-01-28 12:05:19 -05:00
Tom Lane a396144ac0 Remove new coupling between NAMEDATALEN and MAX_LEVENSHTEIN_STRLEN.
Commit e529cd4ffa introduced an Assert requiring NAMEDATALEN to be
less than MAX_LEVENSHTEIN_STRLEN, which has been 255 for a long time.
Since up to that instant we had always allowed NAMEDATALEN to be
substantially more than that, this was ill-advised.

It's debatable whether we need MAX_LEVENSHTEIN_STRLEN at all (versus
putting a CHECK_FOR_INTERRUPTS into the loop), or whether it has to be
so tight; but this patch takes the narrower approach of just not applying
the MAX_LEVENSHTEIN_STRLEN limit to calls from the parser.

Trusting the parser for this seems reasonable, first because the strings
are limited to NAMEDATALEN which is unlikely to be hugely more than 256,
and second because the maximum distance is tightly constrained by
MAX_FUZZY_DISTANCE (though we'd forgotten to make use of that limit in one
place).  That means the cost is not really O(mn) but more like O(max(m,n)).

Relaxing the limit for user-supplied calls is left for future research;
given the lack of complaints to date, it doesn't seem very high priority.

In passing, fix confusion between lengths-in-bytes and lengths-in-chars
in comments and error messages.

Per gripe from Kevin Day; solution suggested by Robert Haas.  Back-patch
to 9.5 where the unwanted restriction was introduced.
2016-01-22 11:53:06 -05:00
Tom Lane dbe2328959 Fix assorted inconsistencies in GIN opclass support function declarations.
GIN had some minor issues too, mostly using "internal" where something
else would be more appropriate.  I went with the same approach as in
9ff60273e3, namely preferring the opclass' indexed datatype for
arguments that receive an operator RHS value, even if that's not
necessarily what they really are.

Again, this is with an eye to having a uniform rule for ginvalidate()
to check support function signatures.
2016-01-19 22:32:22 -05:00
Tom Lane 9ff60273e3 Fix assorted inconsistencies in GiST opclass support function declarations.
The conventions specified by the GiST SGML documentation were widely
ignored.  For example, the strategy-number argument for "consistent" and
"distance" functions is specified to be a smallint, but most of the
built-in support functions declared it as an integer, and for that matter
the core code passed it using Int32GetDatum not Int16GetDatum.  None of
that makes any real difference at runtime, but it's quite confusing for
newcomers to the code, and it makes it very hard to write an amvalidate()
function that checks support function signatures.  So let's try to instill
some consistency here.

Another similar issue is that the "query" argument is not of a single
well-defined type, but could have different types depending on the strategy
(corresponding to search operators with different righthand-side argument
types).  Some of the functions threw up their hands and declared the query
argument as being of "internal" type, which surely isn't right ("any" would
have been more appropriate); but the majority position seemed to be to
declare it as being of the indexed data type, corresponding to a search
operator with both input types the same.  So I've specified a convention
that that's what to do always.

Also, the result of the "union" support function actually must be of the
index's storage type, but the documentation suggested declaring it to
return "internal", and some of the functions followed that.  Standardize
on telling the truth, instead.

Similarly, standardize on declaring the "same" function's inputs as
being of the storage type, not "internal".

Also, somebody had forgotten to add the "recheck" argument to both
the documentation of the "distance" support function and all of their
SQL declarations, even though the C code was happily using that argument.
Clean that up too.

Fix up some other omissions in the docs too, such as documenting that
union's second input argument is vestigial.

So far as the errors in core function declarations go, we can just fix
pg_proc.h and bump catversion.  Adjusting the erroneous declarations in
contrib modules is more debatable: in principle any change in those
scripts should involve an extension version bump, which is a pain.
However, since these changes are purely cosmetic and make no functional
difference, I think we can get away without doing that.
2016-01-19 12:04:36 -05:00
Tom Lane 65c5fcd353 Restructure index access method API to hide most of it at the C level.
This patch reduces pg_am to just two columns, a name and a handler
function.  All the data formerly obtained from pg_am is now provided
in a C struct returned by the handler function.  This is similar to
the designs we've adopted for FDWs and tablesample methods.  There
are multiple advantages.  For one, the index AM's support functions
are now simple C functions, making them faster to call and much less
error-prone, since the C compiler can now check function signatures.
For another, this will make it far more practical to define index access
methods in installable extensions.

A disadvantage is that SQL-level code can no longer see attributes
of index AMs; in particular, some of the crosschecks in the opr_sanity
regression test are no longer possible from SQL.  We've addressed that
by adding a facility for the index AM to perform such checks instead.
(Much more could be done in that line, but for now we're content if the
amvalidate functions more or less replace what opr_sanity used to do.)
We might also want to expose some sort of reporting functionality, but
this patch doesn't do that.

Alexander Korotkov, reviewed by Petr Jelínek, and rather heavily
editorialized on by me.
2016-01-17 19:36:59 -05:00
Tom Lane fb6fcbd33f Use LOAD not actual code execution to pull in plpython library.
Commit 866566a690 is insufficient to prevent dump/reload failures
when using transform modules in a database with both plpython2 and
plpython3 installed.  The reason is that the transform extension scripts
use DO blocks as a mechanism to pull in the libpython library before
creating the transform function.  It's necessary to preload the library
because the dynamic loader won't do it for us on every platform, leading
to "unresolved symbol" failures when the transform library is loaded.
But it's *not* necessary to execute Python code, and doing so will
provoke a multiple-Pythons-are-loaded error even after the preceding
commit.

To fix, use LOAD instead of a DO block.  That requires superuser privilege,
but creation of a C function does anyway.  It also embeds knowledge of
the underlying library name for each PL language; but that's wired into
the initdb-time contents of pg_pltemplate too, so that doesn't seem like
a large problem either.  Note that CREATE TRANSFORM as such doesn't call
the language module at all.

Per a report from Paul Jones.  Back-patch to 9.5 where transform modules
were introduced.
2016-01-11 20:06:36 -05:00
Tom Lane 3343ea9e8e Sort $(wildcard) output where needed for reproducible build output.
The order of inclusion of .o files makes a difference in linker output;
not a functional difference, but still a bitwise difference, which annoys
some packagers who would like reproducible builds.

Report and patch by Christoph Berg
2016-01-05 15:47:05 -05:00
Tom Lane 48c9f2889a Update copyright for 2016
On closer inspection, the reason copyright.pl was missing files is
that it is looking for 'Copyright (c)' and they had 'Copyright (C)'.
Fix that, and update a couple more that grepping for that revealed.
2016-01-02 14:19:48 -05:00
Bruce Momjian ee94300446 Update copyright for 2016
Backpatch certain files through 9.1
2016-01-02 13:33:40 -05:00
Tom Lane 5f36096b77 Add a comment noting that FDWs don't have to implement EXCEPT or LIMIT TO.
postgresImportForeignSchema pays attention to IMPORT's EXCEPT and LIMIT TO
options, but only as an efficiency hack, not for correctness' sake.  The
FDW documentation does explain that, but someone using postgres_fdw.c
as a coding guide might not remember it, so let's add a comment here.
Per question from Regina Obe.
2015-12-31 17:59:10 -05:00
Tom Lane 81ee726d87 Code and docs review for cube kNN support.
Commit 33bd250f6c could have done with
some more review:

Adjust coding so that compilers unfamiliar with elog/ereport don't complain
about uninitialized values.

Fix misuse of PG_GETARG_INT16 to retrieve arguments declared as "integer"
at the SQL level.  (This was evidently copied from cube_ll_coord and
cube_ur_coord, but those were wrong too.)

Fix non-style-guide-conforming error messages.

Fix underparenthesized if statements, which pgindent would have made a
hash of, and remove some unnecessary parens elsewhere.

Run pgindent over new code.

Revise documentation: repeated accretion of more operators without any
rethinking of the text already there had left things in a bit of a mess.
Merge all the cube operators into one table and adjust surrounding text
appropriately.

David Rowley and Tom Lane
2015-12-28 14:39:12 -05:00
Tom Lane 870df2b3b7 Fix omission of -X (--no-psqlrc) in some psql invocations.
As of commit d5563d7df, psql -c no longer implies -X, but not all of
our regression testing scripts had gotten that memo.

To ensure consistency of results across different developers, make
sure that *all* invocations of psql in all scripts in our tree
use -X, even where this is not what previously happened.

Michael Paquier and Tom Lane
2015-12-28 11:46:43 -05:00
Alvaro Herrera 743229a67e Add forgotten CHECK_FOR_INTERRUPT calls in pgcrypto's crypt()
Both Blowfish and DES implementations of crypt() can take arbitrarily
long time, depending on the number of rounds specified by the caller;
make sure they can be interrupted.

Author: Andreas Karlsson
Reviewer: Jeff Janes

Backpatch to 9.1.
2015-12-27 13:03:19 -03:00
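
A sketch of the pattern added by the commit above (not pgcrypto's actual loop):

    #include "postgres.h"
    #include "miscadmin.h"

    static void
    run_rounds(unsigned long rounds)
    {
        unsigned long i;

        for (i = 0; i < rounds; i++)
        {
            /* let a cancel/terminate request break an arbitrarily long loop */
            CHECK_FOR_INTERRUPTS();
            /* ... one round of the key-stretching work ... */
        }
    }
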
Teodor Sigaev 25bfa7efd0 Improve the gin index scan performance in pg_trgm.
The previous coding assumed an overly pessimistic upper bound on similarity
in GIN's consistent method.

Author: Fornaroli Christophe with comments by me.
2015-12-25 13:05:13 +03:00
Robert Haas ccd8f97922 postgres_fdw: Consider requesting sorted data so we can do a merge join.
When use_remote_estimate is enabled, consider adding ORDER BY to the
query we send to the remote server so that we can use that ordered
data for a merge join.  Commit f18c944b61
arranges to push down the query pathkeys, which seems like the case
most likely to be a win, but testing shows this can sometimes win,
too.

For a regular table, we know which indexes are present and therefore
test whether the ordering provided by each such index is useful.  Here,
we take the opposite approach: guess what orderings would be useful if
they could be generated cheaply, and then ask the remote side what those
will cost.

Ashutosh Bapat, with very substantial cosmetic revisions by me.  Also
reviewed by Rushabh Lathia.
2015-12-22 13:46:40 -05:00
Peter Eisentraut 529fd74c09 Fix whitespace 2015-12-19 11:46:38 -05:00
Teodor Sigaev 33bd250f6c Cube extension kNN support
Introduce distance operators over cubes:
<#> taxicab distance
<->  euclidean distance
<=> chebyshev distance

Also add kNN support of those distances in GiST opclass.

Author: Stas Kelvich
2015-12-18 14:38:27 +03:00
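
A stand-alone C sketch of the three point-to-point distances the new operators compute (the committed code operates on cube bounds and is more general):

    #include <math.h>

    static double
    dist_taxicab(const double *a, const double *b, int dim)     /* <#> */
    {
        double  sum = 0.0;
        for (int i = 0; i < dim; i++)
            sum += fabs(a[i] - b[i]);
        return sum;
    }

    static double
    dist_euclidean(const double *a, const double *b, int dim)   /* <-> */
    {
        double  sum = 0.0;
        for (int i = 0; i < dim; i++)
            sum += (a[i] - b[i]) * (a[i] - b[i]);
        return sqrt(sum);
    }

    static double
    dist_chebyshev(const double *a, const double *b, int dim)   /* <=> */
    {
        double  max = 0.0;
        for (int i = 0; i < dim; i++)
            if (fabs(a[i] - b[i]) > max)
                max = fabs(a[i] - b[i]);
        return max;
    }
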
Robert Haas 385f337c9f Allow foreign and custom joins to handle EvalPlanQual rechecks.
Commit e7cb7ee145 provided basic
infrastructure for allowing a foreign data wrapper or custom scan
provider to replace a join of one or more tables with a scan.
However, this infrastructure failed to take into account the need
for possible EvalPlanQual rechecks, and ExecScanFetch would fail
an assertion (or just overwrite memory) if such a check was attempted
for a plan containing a pushed-down join.  To fix, adjust the EPQ
machinery to skip some processing steps when scanrelid == 0, making
those the responsibility of scan's recheck method, which also has
the responsibility in this case of correctly populating the relevant
slot.

To allow foreign scans to gain control in the right place to make
use of this new facility, add a new, optional RecheckForeignScan
method.  Also, allow a foreign scan to have a child plan, which can
be used to correctly populate the slot (or perhaps for something
else, but this is the only use currently envisioned).

KaiGai Kohei, reviewed by Robert Haas, Etsuro Fujita, and Kyotaro
Horiguchi.
2015-12-08 12:31:03 -05:00
Teodor Sigaev 0271e27c10 Add forgotten file in commit d6061f83a1 2015-11-25 16:59:07 +03:00
Teodor Sigaev d6061f83a1 Improve pageinspect module
Now pageinspect can show data stored in the heap tuple.

Nikolay Shaplov
2015-11-25 16:31:55 +03:00
Tom Lane 68c1d7d42e Dodge a macro-name conflict with Perl.
Some versions of Perl export a macro named HS_KEY.  This creates a
conflict in contrib/hstore_plperl against hstore's macro of the same
name.  The most future-proof solution seems to be to rename our macro;
I chose HSTORE_KEY.  For consistency, rename HS_VAL and related macros
similarly.

Back-patch to 9.5.  contrib/hstore_plperl doesn't exist before that
so there is no need to worry about the conflict in older releases.

Per reports from Marco Atzeri and Mike Blackwell.
2015-11-19 14:54:11 -05:00
Robert Haas e073490457 Make a comment more precise.
Remote expressions now also matter to make_foreignscan()

Noted by Etsuro Fujita.
2015-11-18 15:19:19 -05:00
Tom Lane b9f117d6cd Add regression tests for remote execution of extension operators/functions.
Rather than relying on other extensions to be available for installation,
let's just add some test objects to the postgres_fdw extension itself
within the regression script.
2015-11-04 12:03:30 -05:00
Tom Lane d894941663 Allow postgres_fdw to ship extension funcs/operators for remote execution.
The user can whitelist specified extension(s) in the foreign server's
options, whereupon we will treat immutable functions and operators of those
extensions as candidates to be sent for remote execution.

Whitelisting an extension in this way basically promises that the extension
exists on the remote server and behaves compatibly with the local instance.
We have no way to prove that formally, so we have to rely on the user to
get it right.  But this seems like something that people can usually get
right in practice.

We might in future allow functions and operators to be whitelisted
individually, but extension granularity is a very convenient special case,
so it got done first.

The patch as-committed lacks any regression tests, which is unfortunate,
but introducing dependencies on other extensions for testing purposes
would break "make installcheck" scenarios, which is worse.  I have some
ideas about klugy ways around that, but it seems like material for a
separate patch.  For the moment, leave the problem open.

Paul Ramsey, hacked up a bit more by me
2015-11-03 18:42:18 -05:00
Robert Haas f18c944b61 postgres_fdw: Add ORDER BY to some remote SQL queries.
If the join problem's entire ORDER BY clause can be pushed to the
remote server, consider a path that adds this ORDER BY clause.  If
use_remote_estimate is on, we cost this path using an additional
remote EXPLAIN.  If not, we just estimate that the path costs 20%
more, which is intended to be large enough that we won't request a
remote sort when it's not helpful, but small enough that we'll have
the remote side do the sort when in doubt.  In some cases, the remote
sort might actually be free, because the remote query plan might
happen to produce output that is ordered the way we need, but without
remote estimates we have no way of knowing that.

It might also be useful to request sorted output from the remote side
if it enables an efficient merge join, but this patch doesn't attempt
to handle that case.

Ashutosh Bapat with revisions by me.  Also reviewed by Fabrízio de Royes
Mello and Jeevan Chalke.
2015-11-03 13:04:42 -05:00
Peter Eisentraut a8d585c091 Message style improvements
Message style, plurals, quoting, spelling, consistency with similar
messages
2015-10-28 20:38:36 -04:00
Robert Haas 5fc4c26db5 Allow FDWs to push down quals without breaking EvalPlanQual rechecks.
This fixes a long-standing bug which was discovered while investigating
the interaction between the new join pushdown code and the EvalPlanQual
machinery: if a ForeignScan appears on the inner side of a parameterized
nestloop, an EPQ recheck would re-return the original tuple even if
it no longer satisfied the pushed-down quals due to changed parameter
values.

This fix adds a new member to ForeignScan and ForeignScanState and a
new argument to make_foreignscan, and requires changes to FDWs which
push down quals to populate that new argument with a list of quals they
have chosen to push down.  Therefore, I'm only back-patching to 9.5,
even though the bug is not new in 9.5.

Etsuro Fujita, reviewed by me and by Kyotaro Horiguchi.
2015-10-15 13:00:40 -04:00
Noah Misch dfa1cddc4c Avoid scan-build warning about uninitialized htonl() arguments.
Josh Kupershmidt
2015-10-11 20:42:26 -04:00
Noah Misch 5976097c0f Prevent stack overflow in query-type functions.
The tsquery, ltxtquery and query_int data types have a common ancestor.
Having acquired check_stack_depth() calls independently, each was
missing at least one call.  Back-patch to 9.0 (all supported versions).
2015-10-05 10:06:30 -04:00
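
A sketch of the guard these recursive parsers gained; the node type is hypothetical:

    #include "postgres.h"
    #include "miscadmin.h"

    typedef struct QueryNode
    {
        struct QueryNode *left;
        struct QueryNode *right;
    } QueryNode;

    static void
    walk_query(QueryNode *node)
    {
        /* error out cleanly before deeply nested input overflows the C stack */
        check_stack_depth();

        if (node == NULL)
            return;
        walk_query(node->left);
        walk_query(node->right);
    }
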
Noah Misch 1d812c8b05 pgcrypto: Detect and report too-short crypt() salts.
Certain short salts crashed the backend or disclosed a few bytes of
backend memory.  For existing salt-induced error conditions, emit a
message saying as much.  Back-patch to 9.0 (all supported versions).

Josh Kupershmidt

Security: CVE-2015-5288
2015-10-05 10:06:29 -04:00
Tom Lane 8bbe4cbd9b Improve contrib/pg_stat_statements' handling of garbage collection failure.
If we can't read the query texts file (whether because of out-of-memory, or
for some other reason), give up and reset the file to empty, discarding all
stored query texts, though not the statistics per se.  We used to leave
things alone and hope for better luck next time, but the problem is that
the file is only going to get bigger and even harder to slurp into memory.
Better to do something that will get us out of trouble.

Likewise reset the file to empty for any other failure within gc_qtexts().
The previous behavior after a write error was to discard query texts but
not do anything to truncate the file, which is just weird.

Also, increase the maximum supported file size from MaxAllocSize to
MaxAllocHugeSize; this makes it more likely we'll be able to do a garbage
collection successfully.

Also, fix recalculation of mean_query_len within entry_dealloc() to match
the calculation in gc_qtexts().  The previous coding overlooked the
possibility of dropped texts (query_len == -1) and would underestimate the
mean of the remaining entries in such cases, thus possibly causing excess
garbage collection cycles.

In passing, add some errdetail to the log entry that complains about
insufficient memory to read the query texts file, which after all was
Jim Nasby's original complaint.

Back-patch to 9.4 where the current handling of query texts was
introduced.

Peter Geoghegan, rather editorialized upon by me
2015-10-04 17:58:42 -04:00
Andres Freund 23fc0b485d Add missed CREATE EXTENSION ... CASCADE regression test adjustment. 2015-10-03 21:31:51 +02:00
Andres Freund b67aaf21e8 Add CASCADE support for CREATE EXTENSION.
Without CASCADE, if an extension has an unfulfilled dependency on
another extension, CREATE EXTENSION ERRORs out with "required extension
... is not installed". That is annoying, especially when that dependency
is an implementation detail of the extension, rather than something the
extension's user can make sense of.

In addition to CASCADE this also includes a small set of regression
tests around CREATE EXTENSION.

Author: Petr Jelinek, editorialized by Michael Paquier, Andres Freund
Reviewed-By: Michael Paquier, Andres Freund, Jeff Janes
Discussion: 557E0520.3040800@2ndquadrant.com
2015-10-03 18:23:40 +02:00
Andres Freund 920218cbc0 Improve errhint() about replication slot naming restrictions.
The existing hint talked about "may only contain letters", but the
actual requirement is more strict: only lower case letters are allowed.

Reported-By: Rushabh Lathia
Author: Rushabh Lathia
Discussion: AGPqQf2x50qcwbYOBKzb4x75sO_V3g81ZsA8+Ji9iN5t_khFhQ@mail.gmail.com
Backpatch: 9.4-, where replication slots were added
2015-10-03 15:29:08 +02:00
Tom Lane 76f965ff1f Improve handling of collations in contrib/postgres_fdw.
If we have a local Var of say varchar type with default collation, and
we apply a RelabelType to convert that to text with default collation, we
don't want to consider that as creating an FDW_COLLATE_UNSAFE situation.
It should be okay to compare that to a remote Var, so long as the remote
Var determines the comparison collation.  (When we actually ship such an
expression to the remote side, the local Var would become a Param with
default collation, meaning the remote Var would in fact control the
comparison collation, because non-default implicit collation overrides
default implicit collation in parse_collate.c.)  To fix, be more precise
about what FDW_COLLATE_NONE means: it applies either to a noncollatable
data type or to a collatable type with default collation, if that collation
can't be traced to a remote Var.  (When it can, FDW_COLLATE_SAFE is
appropriate.)  We were essentially using that interpretation already at
the Var/Const/Param level, but we weren't bubbling it up properly.

An alternative fix would be to introduce a separate FDW_COLLATE_DEFAULT
value to describe the second situation, but that would add more code
without changing the actual behavior, so it didn't seem worthwhile.

Also, since we're clarifying the rule to be that we care about whether
operator/function input collations match, there seems no need to fail
immediately upon seeing a Const/Param/non-foreign-Var with nondefault
collation.  We only have to reject if it appears in a collation-sensitive
context (for example, "var IS NOT NULL" is perfectly safe from a collation
standpoint, whatever collation the var has).  So just set the state to
UNSAFE rather than failing immediately.

Per report from Jeevan Chalke.  This essentially corrects some sloppy
thinking in commit ed3ddf918b, so back-patch
to 9.3 where that logic appeared.
2015-09-24 12:47:29 -04:00
Andres Freund eef34e5236 test_decoding: Protect against rare spurious test failures.
A bunch of tests missed specifying that empty transactions shouldn't be
displayed. That causes problems when e.g. autovacuum runs at an
unfortunate moment. The tests in question only run for a very short
time, making this quite unlikely.
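
The relevant option is passed when reading changes; a hedged example
(slot name hypothetical):

  SELECT data FROM pg_logical_slot_get_changes(
      'regression_slot', NULL, NULL, 'skip-empty-xacts', '1');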

Reported-By: Buildfarm member axolotl
Backpatch: 9.4, where logical decoding was introduced
2015-09-22 15:39:46 +02:00
Alvaro Herrera 665a00c9e2 Fix error message wording in previous sslinfo commit 2015-09-08 11:10:20 -03:00
Alvaro Herrera 49124613f1 contrib/sslinfo: add ssl_extension_info SRF
This new function provides information about SSL extensions present in
the X509 certificate used for the current connection.
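
A hedged usage sketch (requires an SSL connection; the result columns are
those documented for sslinfo 1.1):

  SELECT * FROM ssl_extension_info();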

Extension version updated to version 1.1.

Author: Дмитрий Воронин (Dmitry Voronin)
Reviewed by: Michael Paquier, Heikki Linnakangas, Álvaro Herrera
2015-09-07 21:24:17 -03:00
Alvaro Herrera d94c36a45a Add more sanity checks in contrib/sslinfo
We were missing a few return checks on OpenSSL calls.  Should be pretty
harmless, since we haven't seen any user reports about problems, and
this is not a high-traffic module anyway; still, a bug is a bug, so
backpatch this all the way back to 9.0.

Author: Michael Paquier, while reviewing another sslinfo patch
2015-09-07 19:18:29 -03:00
Joe Conway 03543afe15 Adjust sepgsql regression output for recent error context change
Recent commit 0426f349e changed the handling of error context reports
in such a way as to have a minor effect on the sepgsql regression
output. Adapt the expected output file to suit. Since that commit
was HEAD only, so is this one.
2015-09-06 11:25:36 -07:00
Tom Lane 0426f349ef Rearrange the handling of error context reports.
Remove the code in plpgsql that suppressed the innermost line of CONTEXT
for messages emitted by RAISE commands.  That was never more than a quick
backwards-compatibility hack, and it's pretty silly in cases where the
RAISE is nested in several levels of function.  What's more, it violated
our design theory that verbosity of error reports should be controlled
on the client side not the server side.

To alleviate the resulting noise increase, introduce a feature in libpq
and psql whereby the CONTEXT field of messages can be suppressed, either
always or only for non-error messages.  Printing CONTEXT for errors only
is now their default behavior.
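
In psql this is controlled by the SHOW_CONTEXT variable, whose values are
never, errors (the default), and always; a hedged example:

  \set SHOW_CONTEXT always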

The actual code changes here are pretty small, but the effects on the
regression test outputs are widespread.  I had to edit some of the
alternative expected outputs by hand; hopefully the buildfarm will soon
find anything I fat-fingered.

In passing, fix up (again) the output line counts in psql's various
help displays.  Add some commentary about how to verify them.

Pavel Stehule, reviewed by Petr Jelínek, Jeevan Chalke, and others
2015-09-05 11:58:33 -04:00
Heikki Linnakangas c80b5f66c6 Fix misc typos.
Oskari Saarenmaa. Backpatch to stable branches where applicable.
2015-09-05 11:35:49 +03:00
Teodor Sigaev 1bbd52cb9a Make unaccent handle all diacritics known to Unicode, and expand ligatures correctly
Add a Python script for building unaccent.rules from Unicode data. Don't
backpatch because unaccent changes may require a tsvector/index
rebuild.
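
A hedged example of the expanded behavior (after CREATE EXTENSION unaccent):

  SELECT unaccent('Hôtel Œuvre');  -- expected: 'Hotel OEuvre'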

Thomas Munro <thomas.munro@enterprisedb.com>
2015-09-04 12:51:53 +03:00
Joe Conway 794e2558be Fix sepgsql regression tests.
The regression tests for sepgsql were broken by changes in the
base distro as-shipped policies. Specifically, definition of
unconfined_t in the system default policy was changed to bypass
multi-category rules, which the regression test depended on.
Fix that by defining a custom privileged domain
(sepgsql_regtest_superuser_t) and using it instead of system's
unconfined_t domain. The new sepgsql_regtest_superuser_t domain
performs almost like the current unconfined_t, but restricted by
multi-category policy as the traditional unconfined_t was.

The custom policy module is a self-defined domain, and so should not
be affected by related future system policy changes. However, it still
uses the unconfined_u:unconfined_r pair for selinux-user and role.
Those definitions have not been changed for several years and seem
less risky to rely on than the unconfined_t domain. Additionally, if
we define custom user/role, they would need to be manually defined
at the operating system level, adding more complexity to an already
non-standard and complex regression test.

Back-patch to 9.3. The regression tests will need more work before
working correctly on 9.2. Starting with 9.2, sepgsql has had dependencies
on libselinux versions that are only available on newer distros with
the changed set of policies (e.g. RHEL 7.x). On 9.1 sepgsql works
fine with the older distros with original policy set (e.g. RHEL 6.x),
and on which the existing regression tests work fine. We might eventually
want to change the 9.1 sepgsql regression tests to be more independent
of the underlying OS policies; however, more work will be needed to
make that happen, and it is not clear that it is worth the effort.

Kohei KaiGai with review by Adam Brightwell and me, commentary by
Stephen, Alvaro, Tom, Robert, and others.
2015-08-30 11:09:05 -07:00
Peter Eisentraut 6103b3f368 Improve spelling 2015-08-22 21:54:35 -04:00
Andres Freund e95126cf04 Don't use function definitions looking like old-style ones.
This fixes a bunch of somewhat pedantic warnings with new
compilers. Since by far the majority of other functions definitions use
the (void) style it just seems to be consistent to do so as well in the
remaining few places.
2015-08-15 17:25:00 +02:00
Robert Haas 5243a35825 Remove bogus step from test_decoding isolation tests.
Commit 43b4a16817 made the isolation
tester reject duplicate step names, and it turns out that the
test_decoding module's concurrent_ddl_dml isolation test has a
duplicate name.  I think the second definition isn't actually getting
used, so just remove it.

Per buildfarm.
2015-08-14 22:40:55 -04:00
Alvaro Herrera 94d626ff5a Use materialize SRF mode in brin_page_items
This function was using the single-value-per-call mechanism, but the
code relied on a relcache entry that wasn't kept open across calls.
This manifested as weird errors in buildfarm during the short time that
the "brin-1" isolation test lived.

Backpatch to 9.5, where it was introduced.
2015-08-13 13:02:10 -03:00
Andres Freund 3f811c2d6f Add confirmed_flush column to pg_replication_slots.
There's no reason not to expose both restart_lsn and confirmed_flush
since they have rather distinct meanings. The former is the oldest WAL
still required and valid for both physical and logical slots, whereas
the latter is the location up to which a logical slot's consumer has
confirmed receiving data. Most of the time a slot will require older
WAL (i.e. restart_lsn) than the confirmed
position (i.e. confirmed_flush_lsn).
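
A hedged query against the updated view (using the confirmed_flush_lsn
spelling from the text above):

  SELECT slot_name, restart_lsn, confirmed_flush_lsn
  FROM pg_replication_slots;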

Author: Marko Tiikkaja, editorialized by me
Discussion: 559D110B.1020109@joh.to
2015-08-10 13:28:18 +02:00
Tom Lane fd7ed26c1a contrib/isn now needs a .gitignore file.
Oversight in commit cb3384a0cb.
Back-patch to 9.1, like that commit.
2015-08-02 23:57:32 -04:00
Tom Lane 09cecdf285 Fix a number of places that produced XX000 errors in the regression tests.
It's against project policy to use elog() for user-facing errors, or to
omit an errcode() selection for errors that aren't supposed to be "can't
happen" cases.  Fix all the violations of this policy that result in
ERRCODE_INTERNAL_ERROR log entries during the standard regression tests,
as errors that can reliably be triggered from SQL surely should be
considered user-facing.

I also looked through all the files touched by this commit and fixed
other nearby problems of the same ilk.  I do not claim to have fixed
all violations of the policy, just the ones in these files.

In a few places I also changed existing ERRCODE choices that didn't
seem particularly appropriate; mainly replacing ERRCODE_SYNTAX_ERROR
by something more specific.

Back-patch to 9.5, but no further; changing ERRCODE assignments in
stable branches doesn't seem like a good idea.
2015-08-02 23:49:19 -04:00
Heikki Linnakangas cb3384a0cb Fix output of ISBN-13 numbers beginning with 979.
EANs beginning with 979 (but not 9790 - those are ISMNs) are accepted
as ISBN numbers, but they cannot be represented in the old, 10-digit ISBN
format. They must be output in the new 13-digit ISBN-13 format. We printed
out an incorrect value for those.

Also add a regression test, to test this and some other basic functionality
of the module.

Patch by Fabien Coelho. This fixes bug #13442, reported by B.Z. Backpatch
to 9.1, where we started to recognize ISBN-13 numbers.
2015-08-02 22:12:33 +03:00
Tom Lane c879d51c59 Some platforms now need contrib/tsm_system_time to be linked with libm.
Buildfarm member hornet, at least, seems to want -lm in the link command.
Probably this is due to the just-added use of isnan().
2015-07-25 16:37:12 -04:00
Tom Lane dd7a8f66ed Redesign tablesample method API, and do extensive code review.
The original implementation of TABLESAMPLE modeled the tablesample method
API on index access methods, which wasn't a good choice because, without
specialized DDL commands, there's no way to build an extension that can
implement a TSM.  (Raw inserts into system catalogs are not an acceptable
thing to do, because we can't undo them during DROP EXTENSION, nor will
pg_upgrade behave sanely.)  Instead adopt an API more like procedural
language handlers or foreign data wrappers, wherein the only SQL-level
support object needed is a single handler function identified by having
a special return type.  This lets us get rid of the supporting catalog
altogether, so that no custom DDL support is needed for the feature.

Adjust the API so that it can support non-constant tablesample arguments
(the original coding assumed we could evaluate the argument expressions at
ExecInitSampleScan time, which is undesirable even if it weren't outright
unsafe), and discourage sampling methods from looking at invisible tuples.
Make sure that the BERNOULLI and SYSTEM methods are genuinely repeatable
within and across queries, as required by the SQL standard, and deal more
honestly with methods that can't support that requirement.
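
A hedged example of the repeatability guarantee (table name hypothetical):

  SELECT count(*) FROM big_table TABLESAMPLE BERNOULLI (10) REPEATABLE (42);
  -- the same REPEATABLE seed yields the same sample across queries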

Make a full code-review pass over the tablesample additions, and fix
assorted bugs, omissions, infelicities, and cosmetic issues (such as
failure to put the added code stanzas in a consistent ordering).
Improve EXPLAIN's output of tablesample plans, too.

Back-patch to 9.5 so that we don't have to support the original API
in production.
2015-07-25 14:39:00 -04:00
Heikki Linnakangas c6fbe6d6fb Add selectivity estimation functions for intarray operators.
Uriy Zhuravlev and Alexander Korotkov, reviewed by Jeff Janes, some cleanup
by me.
2015-07-21 20:59:24 +03:00
Teodor Sigaev 97f3014647 Support the triconsistent function for the pg_trgm GIN opclass
to make it faster to implement indexed queries where some keys are
common and some are rare.
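
A hedged sketch of the kind of query this helps (names hypothetical):

  CREATE INDEX docs_body_trgm ON docs USING gin (body gin_trgm_ops);
  SELECT * FROM docs WHERE body LIKE '%the%zyzzyva%';
  -- mixes a very common trigram set with a rare one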

Patch by Jeff Janes
2015-07-20 18:18:48 +03:00
Andrew Dunstan 00eff86cb8 Enable transforms modules to build and test on Cygwin.
This still doesn't work correctly with Python 3, but I am committing
this so we can get Cygwin buildfarm members building with Python 2.
2015-07-18 10:09:04 -04:00
Noah Misch 7193436744 AIX: Link TRANSFORM modules with their dependencies.
The result closely resembles linking of these modules for the "win32"
port.  Augment the $(exports_file) header so the file is also usable as
an import file.  Unfortunately, relocating an AIX installation will now
require adding $(pkglibdir) to LD_LIBRARY_PATH.  Back-patch to 9.5,
where the modules were introduced.
2015-07-15 21:00:26 -04:00
Noah Misch 736c1f238b MinGW: Link ltree_plpython with plpython.
The MSVC build system already did this, and building against Python 3
requires it.  Back-patch to 9.5, where the module was introduced.
2015-07-15 21:00:26 -04:00
Fujii Masao 705d397cd9 Prevent pgstattuple() from reporting BRIN as unknown index.
Also, this patch removes an obsolete comment.

Back-patch to 9.5 where BRIN index was added.
2015-07-14 22:36:51 +09:00
Noah Misch 0689cfc34b Link pg_stat_statements with libm.
The AIX 7.1 libm is static, and AIX postgres executables do not export
symbols acquired from libraries.  Back-patch to 9.5, where commit
cfe12763c3 added a sqrt() call.
2015-07-08 20:44:22 -04:00
Tom Lane 10fb48d66d Add an optional missing_ok argument to SQL function current_setting().
This allows convenient checking for existence of a GUC from SQL, which is
particularly useful when dealing with custom variables.
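
A hedged example (custom GUC name hypothetical):

  SELECT current_setting('myapp.tenant_id', true);
  -- returns NULL instead of erroring if the setting does not exist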

David Christensen, reviewed by Jeevan Chalke
2015-07-02 16:41:07 -04:00
Heikki Linnakangas f92d6a540a Use appendStringInfoString/Char et al where appropriate.
Patch by David Rowley. Backpatch to 9.5, as some of the calls were new in
9.5, and keeping the code in sync with master makes future backpatching
easier.
2015-07-02 12:36:03 +03:00
Fujii Masao fb174687f7 Make use of xlog_internal.h's macros in WAL-related utilities.
Commit 179cdd09 added macros to check if a filename is a WAL segment
or other such file. However there were still some instances of the
strlen + strspn combination to check for that in WAL-related utilities
like pg_archivecleanup. Those checks can be replaced with the macros.

This patch makes use of the macros in those utilities, which makes
the code a bit easier to read.

Back-patch to 9.5.

Michael Paquier
2015-07-02 10:35:38 +09:00
Andres Freund d47a1136e4 Fix test_decoding's handling of nonexistent columns in old tuple versions.
test_decoding used fastgetattr() to extract column values. That's wrong
when decoding updates and deletes if a table's replica identity is set
to FULL and new columns have been added since the old version of the
tuple was created. Due to the lack of a crosscheck against the datum's
natts value, an invalid value will be output, leading to errors or
worse.
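
A hedged sketch of the scenario (names hypothetical):

  CREATE TABLE t (id int PRIMARY KEY, v text);
  ALTER TABLE t REPLICA IDENTITY FULL;
  INSERT INTO t VALUES (1, 'old row');
  ALTER TABLE t ADD COLUMN extra int;
  UPDATE t SET v = 'new' WHERE id = 1;
  -- decoding the UPDATE sees an old tuple version that lacks "extra"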

Bug: #13470
Reported-By: Krzysztof Kotlarski
Discussion: 20150626100333.3874.90852@wrigleys.postgresql.org

Backpatch to 9.4, where the feature, including the bug, was added.
2015-06-27 19:00:45 +02:00
Peter Eisentraut 75f9d17638 Make Python tests more portable
Newer Python versions randomize the hash seed for dictionaries,
resulting in a random output order, which messes up the regression test
diffs.

Instead, use Python assert to compare the dictionaries with their
expected value.
2015-05-31 07:10:45 -04:00
Stephen Frost cde9cf170c Finish removing pg_audit 2015-05-28 12:48:25 -04:00
Stephen Frost e5f1a4f1e3 Remove pg_audit
This removes pg_audit, per discussion:

20150528082038.GU26667@tamriel.snowman.net
2015-05-28 12:41:26 -04:00
Tom Lane 2aa0476dc3 Manual cleanup of pgindent results.
Fix some places where pgindent did silly stuff, often because project
style wasn't followed to begin with.  (I've not touched the atomics
headers, though.)
2015-05-24 15:04:10 -04:00
Tom Lane 91e79260f6 Remove no-longer-required function declarations.
Remove a bunch of "extern Datum foo(PG_FUNCTION_ARGS);" declarations that
are no longer needed now that PG_FUNCTION_INFO_V1(foo) provides that.

Some of these were evidently missed in commit e7128e8dbb, but others
were cargo-culted in in code added since then.  Possibly that can be blamed
in part on the fact that we'd not fixed relevant documentation examples,
which I've now done.
2015-05-24 12:20:23 -04:00
Bruce Momjian 807b9e0dff pgindent run for 9.5 2015-05-23 21:35:49 -04:00
Heikki Linnakangas 4fc72cc7bb Collection of typo fixes.
Use "a" and "an" correctly, mostly in comments. Two error messages were
also fixed (they were just elogs, so no translation work required). Two
function comments in pg_proc.h were also fixed. Etsuro Fujita reported one
of these, but I found a lot more with grep.

Also fix a few other typos spotted while grepping for the a/an typos.
For example, "consists out of ..." -> "consists of ...". Plus a "though"/
"through" mixup reported by Euler Taveira.

Many of these typos were in old code, which would be nice to backpatch to
make future backpatching easier. But much of the code was new, and I didn't
feel like crafting separate patches for each branch. So no backpatching.
2015-05-20 16:56:22 +03:00
Andres Freund 0740cbd759 Refactor ON CONFLICT index inference parse tree representation.
Defer lookup of the opfamily and input type of a user-specified opclass
until the optimizer selects among available unique indexes; and store
the opclass in the parse analyzed tree instead.  The primary reason for
doing this is that for rule deparsing it's easier to use the opclass
than the previous representation.

While at it, also rename a variable in the inference code to better fit
its purpose.

This is separate from the actual fixes for deparsing to make review
easier.
2015-05-19 21:21:27 +02:00
Peter Eisentraut 0779f2ba2d Fix parse tree of DROP TRANSFORM and COMMENT ON TRANSFORM
The plain C string language name needs to be wrapped in makeString() so
that the parse tree is copyable.  This is detectable by
-DCOPY_PARSE_PLAN_TREES.  Add a test case for the COMMENT case.

Also make the quoting in the error messages more consistent.

discovered by Tom Lane
2015-05-18 22:55:14 -04:00
Noah Misch 85270ac7a2 pgcrypto: Report errant decryption as "Wrong key or corrupt data".
This has been the predominant outcome.  When the output of decrypting
with a wrong key coincidentally resembled an OpenPGP packet header,
pgcrypto could instead report "Corrupt data", "Not text data" or
"Unsupported compression algorithm".  The distinct "Corrupt data"
message added no value.  The latter two error messages misled when the
decrypted payload also exhibited fundamental integrity problems.  Worse,
error message variance in other systems has enabled cryptologic attacks;
see RFC 4880 section "14. Security Considerations".  Whether these
pgcrypto behaviors are likewise exploitable is unknown.
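
A hedged example of the now-uniform error (key strings hypothetical):

  SELECT pgp_sym_decrypt(pgp_sym_encrypt('secret', 'right key'), 'wrong key');
  -- ERROR:  Wrong key or corrupt data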

In passing, document that pgcrypto does not resist side-channel attacks.
Back-patch to 9.0 (all supported versions).

Security: CVE-2015-3167
2015-05-18 10:02:31 -04:00
Tom Lane b14cf229f4 Use += not = to set makefile variables after including base makefiles.
The previous coding in hstore_plpython and ltree_plpython wiped out any
values set by the base makefiles.  This at least had the effect of running
the tests in "regression" not "contrib_regression" as expected.  These
being pretty new modules, there might be other bad effects we'd not
noticed yet.
2015-05-17 20:04:42 -04:00
Stephen Frost a936743b33 pg_audit Makefile, REINDEX changes
Clean up the Makefile, per Michael Paquier.

Classify REINDEX as we do in core, use '1.0' for the version, per Fujii.
2015-05-17 09:56:57 -04:00
Magnus Hagander 3b075e9d7b Fix typos in comments
Dmitriy Olshevskiy
2015-05-17 14:58:04 +02:00
Peter Eisentraut fab6ca23ea hstore_plpython: Fix regression tests under Python 3 2015-05-16 23:35:29 -04:00
Peter Eisentraut e6dc503445 Fix whitespace 2015-05-16 20:43:32 -04:00
Andres Freund f3d3118532 Support GROUPING SETS, CUBE and ROLLUP.
This SQL standard functionality allows data to be aggregated by several
different GROUP BY clauses at once. Each grouping set returns rows in which
the columns grouped by only in other sets are set to NULL.
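
A hedged example (table and column names hypothetical):

  SELECT brand, size, sum(sales)
  FROM items_sold
  GROUP BY GROUPING SETS ((brand), (size), ());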

This could previously be achieved by doing each grouping as a separate
query, conjoined by UNION ALLs. Besides being considerably more concise,
grouping sets will in many cases be faster, requiring only one scan over
the underlying data.

The current implementation of grouping sets only supports using sorting
for input. Individual sets that share a sort order are computed in one
pass. If there are sets that don't share a sort order, additional sort &
aggregation steps are performed. These additional passes are sourced by
the previous sort step; thus avoiding repeated scans of the source data.

The code is structured in a way that adding support for purely using
hash aggregation or a mix of hashing and sorting is possible. Sorting
was chosen to be supported first, as it is the most generic method of
implementation.

Instead of, as in earlier versions of the patch, representing the
chain of sort and aggregation steps as full-blown planner and executor
nodes, all but the first sort are performed inside the aggregation node
itself. This avoids the need to do some unusual gymnastics to handle
having to return aggregated and non-aggregated tuples from underlying
nodes, as well as having to shut down underlying nodes early to limit
memory usage.  The optimizer still builds Sort/Agg nodes to describe each
phase, but they're not part of the plan tree; instead they are additional
data for the aggregation node. They're a convenient and preexisting way
to describe aggregation and sorting.  The first (and possibly only) sort
step is still performed as a separate execution step. That retains
similarity with existing group by plans, makes rescans fairly simple,
avoids very deep plans (leading to slow explains), and easily allows the
sorting step to be skipped if the underlying data is sorted by other means.

A somewhat ugly side of this patch is having to deal with a grammar
ambiguity between the new CUBE keyword and the cube extension/functions
named cube (and rollup). To avoid breaking existing deployments of the
cube extension it has not been renamed, neither has cube been made a
reserved keyword. Instead precedence hacking is used to make GROUP BY
cube(..) refer to the CUBE grouping sets feature, and not the function
cube(). To actually group by a function cube(), unlikely as that might
be, the function name has to be quoted.
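
A hedged contrast of the two spellings (columns hypothetical; the second
query assumes the cube extension is installed and a, b are float8):

  SELECT a, b, sum(x) FROM t GROUP BY CUBE (a, b);           -- grouping sets
  SELECT "cube"(a, b), sum(x) FROM t GROUP BY "cube"(a, b);  -- cube function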

Needs a catversion bump because stored rules may change.

Author: Andrew Gierth and Atri Sharma, with contributions from Andres Freund
Reviewed-By: Andres Freund, Noah Misch, Tom Lane, Svenne Krap, Tomas
    Vondra, Erik Rijkers, Marti Raudsepp, Pavel Stehule
Discussion: CAOeZVidmVRe2jU6aMk_5qkxnB7dfmPROzM7Ur8JPW5j8Y5X-Lw@mail.gmail.com
2015-05-16 03:46:31 +02:00
Alvaro Herrera 26df7066cc Move strategy numbers to include/access/stratnum.h
For upcoming BRIN opclasses, it's convenient to have strategy numbers
defined in a single place.  Since there's nothing appropriate, create
it.  The StrategyNumber typedef now lives there, as well as existing
strategy numbers for B-trees (from skey.h) and R-tree-and-friends (from
gist.h).  skey.h is forced to include stratnum.h because of the
StrategyNumber typedef, but gist.h is not; extensions that currently
rely on gist.h for rtree strategy numbers might need to add a new
include of stratnum.h.

A few .c files can stop including skey.h and/or gist.h, which is a nice
side benefit.

Per discussion:
https://www.postgresql.org/message-id/20150514232132.GZ2523@alvh.no-ip.org

Authored by Emre Hasegeli and Álvaro.

(It's not clear to me why bootscanner.l has any #include lines at all.)
2015-05-15 17:03:16 -03:00
Simon Riggs df259759fb Add to contrib/Makefile 2015-05-15 15:33:37 -04:00
Simon Riggs 56e121a508 contrib/tsm_system_time 2015-05-15 15:31:50 -04:00
Simon Riggs 4d40494b11 contrib/tsm_system_rows 2015-05-15 15:31:14 -04:00
Simon Riggs f6d208d6e5 TABLESAMPLE, SQL Standard and extensible
Add a TABLESAMPLE clause to SELECT statements that allows the
user to specify random BERNOULLI sampling or block-level
SYSTEM sampling. The implementation allows extensible
sampling functions to be written, using a standard API.
The basic version follows the SQL standard exactly. Usable
concrete use cases for the sampling API follow in later
commits.

Petr Jelinek

Reviewed by Michael Paquier and Simon Riggs
2015-05-15 14:37:10 -04:00
Stephen Frost aff27e3379 Remove useless pg_audit.conf
No need to have pg_audit.conf any longer since the regression tests are
just loading the module at the start of each session (to simulate being
in shared_preload_libraries, which isn't something we can actually make
happen on the buildfarm itself, it seems).

Pointed out by Tom
2015-05-15 10:41:53 -04:00
Simon Riggs 83e176ec18 Separate block sampling functions
Refactoring ahead of tablesample patch

Requested and reviewed by Michael Paquier

Petr Jelinek
2015-05-15 04:02:54 +02:00
Stephen Frost b22b770683 Make repeated 'make installcheck' runs work
In pg_audit, set client_min_messages up to warning, then reset the role
attributes, to completely reset the session while not making the
regression tests depend on being run by any particular user.
2015-05-14 15:41:39 -04:00
Stephen Frost ed6ea8e815 Improve pg_audit regression tests
Instead of creating a new superuser role, extract out what the current
user is and use that user instead.  Further, clean up and drop all
objects created by the regression test.

Pointed out by Tom.
2015-05-14 15:16:27 -04:00
Tom Lane 35a1e1d159 Fix portability issue in pg_audit.
"%ld" is not a portable way to print int64's.  This may explain the
buildfarm crashes we're seeing --- it seems to make dromedary happy,
at least.
2015-05-14 13:19:26 -04:00
Tom Lane 6c9e93d3ff Suppress uninitialized-variable warning. 2015-05-14 12:16:06 -04:00
Stephen Frost 8a2e1edd2b Further fixes for the buildfarm for pg_audit
Also, use a function to load the extension ahead of all other calls,
simulating load from shared_preload_libraries, to make sure the
hooks are in place before logging start.
2015-05-14 11:55:36 -04:00
Stephen Frost c703b1e689 Further fixes for the buildfarm for pg_audit
The database built by the buildfarm is specific to the extension, so use
\connect - instead.
2015-05-14 11:44:16 -04:00
Stephen Frost dfb7624a13 Fix buildfarm with regard to pg_audit
Remove the check that pg_audit be installed by
shared_preload_libraries as that's not going to work when running the
regression tests in the buildfarm.  That check was primarily a
nice-to-have and isn't required anyway.
2015-05-14 10:57:12 -04:00
Stephen Frost ac52bb0442 Add pg_audit, an auditing extension
This extension provides detailed logging classes, ability to control
logging at a per-object level, and includes fully-qualified object
names for logged statements (DML and DDL) in independent fields of the
log output.

Authors: Ian Barwick, Abhijit Menon-Sen, David Steele
Reviews by: Robert Haas, Tatsuo Ishii, Sawada Masahiko, Fujii Masao,
Simon Riggs

Discussion with: Josh Berkus, Jaime Casanova, Peter Eisentraut,
David Fetter, Yeb Havinga, Alvaro Herrera, Petr Jelinek, Tom Lane,
MauMau, Bruce Momjian, Jim Nasby, Michael Paquier,
Fabrízio de Royes Mello, Neil Tiffin
2015-05-14 10:36:16 -04:00
Tom Lane 0bb8528b5c Fix postgres_fdw to return the right ctid value in EvalPlanQual cases.
If a postgres_fdw foreign table is a non-locked source relation in an
UPDATE, DELETE, or SELECT FOR UPDATE/SHARE, and the query selects its
ctid column, the wrong value would be returned if an EvalPlanQual
recheck occurred.  This happened because the foreign table's result row
was copied via the ROW_MARK_COPY code path, and EvalPlanQualFetchRowMarks
just unconditionally set the reconstructed tuple's t_self to "invalid".

To fix that, we can have EvalPlanQualFetchRowMarks copy the composite
datum's t_ctid field, and be sure to initialize that along with t_self
when postgres_fdw constructs a tuple to return.

If we just did that much then EvalPlanQualFetchRowMarks would start
returning "(0,0)" as ctid for all other ROW_MARK_COPY cases, which perhaps
does not matter much, but then again maybe it might.  The cause of that is
that heap_form_tuple, which is the ultimate source of all composite datums,
simply leaves t_ctid as zeroes in newly constructed tuples.  That seems
like a bad idea on general principles: a field that's really not been
initialized shouldn't appear to have a valid value.  So let's eat the
trivial additional overhead of doing "ItemPointerSetInvalid(&(td->t_ctid))"
in heap_form_tuple.

This closes out our handling of Etsuro Fujita's report that tableoid and
ctid weren't correctly set in postgres_fdw EvalPlanQual cases.  Along the
way we did a great deal of work to improve FDWs' ability to control row
locking behavior; which was not wasted effort by any means, but it didn't
end up being a fix for this problem because that feature would be too
expensive for postgres_fdw to use all the time.

Although the fix for the tableoid misbehavior was back-patched, I'm
hesitant to do so here; it seems far less likely that people would care
about remote ctid than tableoid, and even such a minor behavioral change
as this in heap_form_tuple is perhaps best not back-patched.  So commit
to HEAD only, at least for the moment.

Etsuro Fujita, with some adjustments by me
2015-05-13 14:05:29 -04:00
Andres Freund 5850b20f58 Add pgstattuple_approx() to the pgstattuple extension.
The new function allows bloat and other table-level statistics to be
estimated in a faster, but approximate, way. It does so by using information
from the free space map for pages marked as all-visible in the visibility
map. The rest of the table is actually read and free space/bloat is
measured accurately.  In many cases that allows bloat information to be
obtained much more quickly, causing less I/O.
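
A hedged usage example (relation name hypothetical):

  SELECT * FROM pgstattuple_approx('some_table');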

Author: Abhijit Menon-Sen
Reviewed-By: Andres Freund, Amit Kapila and Tomas Vondra
Discussion: 20140402214144.GA28681@kea.toroid.org
2015-05-13 07:35:06 +02:00
Peter Eisentraut d02f16470f Replace some appendStringInfo* calls with more appropriate variants
Author: David Rowley <dgrowleyml@gmail.com>
2015-05-11 20:38:55 -04:00
Tom Lane 1a8a4e5cde Code review for foreign/custom join pushdown patch.
Commit e7cb7ee145 included some design
decisions that seem pretty questionable to me, and there was quite a lot
of stuff not to like about the documentation and comments.  Clean up
as follows:

* Consider foreign joins only between foreign tables on the same server,
rather than between any two foreign tables with the same underlying FDW
handler function.  In most if not all cases, the FDW would simply have had
to apply the same-server restriction itself (far more expensively, both for
lack of caching and because it would be repeated for each combination of
input sub-joins), or else risk nasty bugs.  Anyone who's really intent on
doing something outside this restriction can always use the
set_join_pathlist_hook.

* Rename fdw_ps_tlist/custom_ps_tlist to fdw_scan_tlist/custom_scan_tlist
to better reflect what they're for, and allow these custom scan tlists
to be used even for base relations.

* Change make_foreignscan() API to include passing the fdw_scan_tlist
value, since the FDW is required to set that.  Backwards compatibility
doesn't seem like an adequate reason to expect FDWs to set it in some
ad-hoc extra step, and anyway existing FDWs can just pass NIL.

* Change the API of path-generating subroutines of add_paths_to_joinrel,
and in particular that of GetForeignJoinPaths and set_join_pathlist_hook,
so that various less-used parameters are passed in a struct rather than
as separate parameter-list entries.  The objective here is to reduce the
probability that future additions to those parameter lists will result in
source-level API breaks for users of these hooks.  It's possible that this
is even a small win for the core code, since most CPU architectures can't
pass more than half a dozen parameters efficiently anyway.  I kept root,
joinrel, outerrel, innerrel, and jointype as separate parameters to reduce
code churn in joinpath.c --- in particular, putting jointype into the
struct would have been problematic because of the subroutines' habit of
changing their local copies of that variable.

* Avoid ad-hocery in ExecAssignScanProjectionInfo.  It was probably all
right for it to know about IndexOnlyScan, but if the list is to grow
we should refactor the knowledge out to the callers.

* Restore nodeForeignscan.c's previous use of the relcache to avoid
extra GetFdwRoutine lookups for base-relation scans.

* Lots of cleanup of documentation and missed comments.  Re-order some
code additions into more logical places.
2015-05-10 14:36:36 -04:00
Andrew Dunstan 0c90f6769d Add new OID alias type regrole
The new type has the scope of the whole database cluster, so with respect
to object dependencies it doesn't behave the same as the existing OID alias
types, which have database scope. To avoid confusion, constants of the new
type are prohibited from appearing where dependencies involving it would
be created.
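
A hedged example (role name hypothetical):

  SELECT 'alice'::regrole AS role_oid;  -- yields the role's OID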

Also, add a note to the docs about possible MVCC violation and
optimization issues, which apply to all the reg* types.

Kyotaro Horiguchi
2015-05-09 13:06:49 -04:00
Andres Freund 581f4f9657 Remove dependency on ordering in logical decoding upsert test.
Buildfarm member magpie sorted the output differently than intended by
Peter. "Resolve" the problem by simply not aggregating, it's not that
many lines.
2015-05-08 06:06:03 +02:00
Andres Freund 168d5805e4 Add support for INSERT ... ON CONFLICT DO NOTHING/UPDATE.
The newly added ON CONFLICT clause allows specifying an alternative to
raising a unique or exclusion constraint violation error when inserting.
ON CONFLICT refers to constraints that can either be specified using an
inference clause (by specifying the columns of a unique constraint) or
by naming a unique or exclusion constraint.  DO NOTHING avoids the
constraint violation, without touching the pre-existing row.  DO UPDATE
SET ... [WHERE ...] updates the pre-existing tuple, and has access to
both the tuple proposed for insertion and the existing tuple; the
optional WHERE clause can be used to prevent an update from being
executed.  The UPDATE SET and WHERE clauses have access to the tuple
proposed for insertion using the "magic" EXCLUDED alias, and to the
pre-existing tuple using the table name or its alias.
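
A hedged upsert example (table and column names hypothetical):

  INSERT INTO counters AS c (key, value) VALUES ('hits', 1)
  ON CONFLICT (key) DO UPDATE SET value = c.value + EXCLUDED.value;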

This feature is often referred to as upsert.

This is implemented using a new infrastructure called "speculative
insertion". It is an optimistic variant of regular insertion that first
does a pre-check for existing tuples and then attempts an insert.  If a
violating tuple was inserted concurrently, the speculatively inserted
tuple is deleted and a new attempt is made.  If the pre-check finds a
matching tuple the alternative DO NOTHING or DO UPDATE action is taken.
If the insertion succeeds without detecting a conflict, the tuple is
deemed inserted.

To handle the possible ambiguity between the excluded alias and a table
named excluded, and for convenience with long relation names, INSERT
INTO now can alias its target table.

Bumps catversion as stored rules change.

Author: Peter Geoghegan, with significant contributions from Heikki
    Linnakangas and Andres Freund. Testing infrastructure by Jeff Janes.
Reviewed-By: Heikki Linnakangas, Andres Freund, Robert Haas, Simon Riggs,
    Dean Rasheed, Stephen Frost and many others.
2015-05-08 05:43:10 +02:00
Andres Freund 2c8f4836db Represent columns requiring insert and update privileges independently.
Previously, relation range table entries used a single Bitmapset field
representing which columns required either UPDATE or INSERT privileges,
despite the fact that INSERT and UPDATE privileges are separately
cataloged, and may be independently held.  As statements so far required
either insert or update privileges but never both, that was
sufficient. The required permission could be inferred from the top level
statement run.

The upcoming INSERT ... ON CONFLICT UPDATE feature needs to
independently check for both privileges in one statement though, so that
is not sufficient anymore.

Bumps catversion as stored rules change.

Author: Peter Geoghegan
Reviewed-By: Andres Freund
2015-05-08 00:20:46 +02:00
Alvaro Herrera db5f98ab4f Improve BRIN infra, minmax opclass and regression test
The minmax opclass was using the wrong support functions when
cross-datatype queries were run.  Instead of trying to fix the
pg_amproc definitions (which apparently is not possible), use the
already correct pg_amop entries instead.  This requires jumping through
more hoops (read: extra syscache lookups) to obtain the underlying
functions to execute, but it is necessary for correctness.

Author: Emre Hasegeli, tweaked by Álvaro
Review: Andreas Karlsson

Also change BrinOpcInfo to record each stored type's typecache entry
instead of just the OID.  Turns out that the full type cache is
necessary in brin_deform_tuple: the original code used the indexed
type's byval and typlen properties to extract the stored tuple, which is
correct in Minmax; but in other implementations that want to store
something different, that's wrong.  The realization that this is a bug
comes from Emre also, but I did not use his patch.

I also adopted Emre's regression test code (with smallish changes),
which is more complete.
2015-05-07 13:02:22 -03:00
Tom Lane b22527f29d Fix incorrect declaration of citext's regexp_matches() functions.
These functions should return SETOF TEXT[], like the core functions they
are wrappers for; but they were incorrectly declared as returning just
TEXT[].  This mistake had two results: first, if there was no match you got
a scalar null result, whereas what you should get is an empty set (zero
rows).  Second, the 'g' flag was effectively ignored, since you would get
only one result array even if there were multiple matches, as reported by
Jeff Certain.

While ignoring 'g' is a clear bug, the behavior for no matches might well
have been thought to be the intended behavior by people who hadn't compared
it carefully to the core regexp_matches() functions.  So we should tread
carefully about introducing this change in the back branches.  Still, it
clearly is a bug and so providing some fix is desirable.

After discussion, the conclusion was to introduce the change in a 1.1
version of the citext extension (as we would need to do anyway); 1.0 still
contains the incorrect behavior.  1.1 is the default and only available
version in HEAD, but it is optional in the back branches, where 1.0 remains
the default version.  People wishing to adopt the fix in back branches will
need to explicitly do ALTER EXTENSION citext UPDATE TO '1.1'.  (I also
provided a downgrade script in the back branches, so people could go back
to 1.0 if necessary.)
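
A hedged sketch of adopting the fix in a back branch and of the corrected
set-returning behavior:

  ALTER EXTENSION citext UPDATE TO '1.1';
  SELECT regexp_matches('fooBAR'::citext, 'o', 'g');  -- now two rows, not one array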

This should be called out as an incompatible change in the 9.5 release
notes, although we'll also document it in the next set of back-branch
release notes.  The notes should mention that any views or rules that use
citext's regexp_matches() functions will need to be dropped before
upgrading to 1.1, and then recreated again afterwards.

Back-patch to 9.1.  The bug goes all the way back to citext's introduction
in 8.4, but pre-9.1 there is no extension mechanism with which to manage
the change.  Given the lack of previous complaints it seems unnecessary to
change this behavior in 9.0, anyway.
2015-05-05 15:51:22 -04:00
Peter Eisentraut c0574cd5aa hstore_plpython: Support tests on Python 2.3
Python 2.3 does not have the sorted() function, so do it the long way.
2015-05-04 22:30:21 -04:00
Andrew Dunstan f802c6ddba Enable transforms modules to build and run with Mingw builds.
These modules were all missing essential Windows scaffolding, including
resource files and descriptions, and links to the relevant library
import files. This latter item means that the modules can't be built
with pgxs on Windows, as we don't install the import files. If we ever
decide to install them this restriction could probably be removed.

Also, as with plperl we need to make sure that perl's CORE directory is
last on the include list, as on Windows it appears to contain some
headers with names that clash with names of some headers we include.
2015-05-03 09:10:47 -04:00
Peter Eisentraut e30a864963 hstore_plperl: Move port-specific parts to later in the makefile
PORTNAME isn't set until the global makefiles have been included.
2015-05-02 08:03:47 -04:00
Peter Eisentraut 0fd764647a Make hstore_plperl's build even more like plperl's
Combine the two places that set CPPFLAGS into one.  Also, some settings
should be restricted to Windows only.  More precisely, -Wno-comment is
a GCC-only option, but Windows in a makefile implies GCC at the moment.

Also, since -Wno-comment is more properly a preprocessor option, move it
to CPPFLAGS to simplify things a bit.
2015-05-01 22:16:58 -04:00
Andrew Dunstan 77477e745b Make hstore_plperl's build more like plperl's
This involves moving perl's CORE library to the end of the include list,
and adding other compilation settings that plperl uses. This won't
completely fix the breakage currently being seen by gcc builds on
Windows, but it will let the build get further, and should be wholly
benign, if not beneficial, on *nix.
2015-05-01 15:36:44 -04:00
Robert Haas 924bcf4f16 Create an infrastructure for parallel computation in PostgreSQL.
This does four basic things.  First, it provides convenience routines
to coordinate the startup and shutdown of parallel workers.  Second,
it synchronizes various pieces of state (e.g. GUCs, combo CID
mappings, transaction snapshot) from the parallel group leader to the
worker processes.  Third, it prohibits various operations that would
result in unsafe changes to that state while parallelism is active.
Finally, it propagates events that would result in an ErrorResponse,
NoticeResponse, or NotifyResponse message being sent to the client
from the parallel workers back to the master, from which they can then
be sent on to the client.

Robert Haas, Amit Kapila, Noah Misch, Rushabh Lathia, Jeevan Chalke.
Suggestions and review from Andres Freund, Heikki Linnakangas, Noah
Misch, Simon Riggs, Euler Taveira, and Jim Nasby.
2015-04-30 15:02:14 -04:00
Peter Eisentraut dbf2ec1a1c Fix parallel make risk with new check temp-install setup
The "check" target no longer needs to depend on "all", because it now
runs "install" directly, which in turn depends on "all".  Doing both
will cause problems with parallel make, because two builds will run next
to each other.

Also remove the redirection of the temp-install output into a log file.
This was appropriate when this was done from within pg_regress, but now
it's just a regular make run, and especially with the above changes this
will now take the place of running the "all" target before the test
suites.

problem report by Jeff Janes, patch in part by Michael Paquier
2015-04-29 20:34:22 -04:00
Andres Freund 5aa2350426 Introduce replication progress tracking infrastructure.
When implementing a replication solution on top of logical decoding, two
related problems exist:
* How to safely keep track of replication progress
* How to change replication behavior, based on the origin of a row;
  e.g. to avoid loops in bi-directional replication setups

The solution to these problems, as implemented here, consists of
three parts:

1) 'replication origins', which identify nodes in a replication setup.
2) 'replication progress tracking', which remembers, for each
   replication origin, how far replay has progressed in an efficient and
   crash-safe manner.
3) The ability to filter out changes performed at the behest of a
   replication origin during logical decoding; this allows complex
   replication topologies. E.g. by filtering all replayed changes out.

Most of this could also be implemented in "userspace", e.g. by inserting
additional rows containing origin information, but that ends up being much
less efficient and more complicated.  We don't want to require various
replication solutions to reimplement logic for this independently. The
infrastructure is intended to be generic enough to be reusable.

This infrastructure also replaces the 'nodeid' infrastructure of commit
timestamps. It is intended to provide all the former capabilities,
except that there's only 2^16 different origins; but now they integrate
with logical decoding. Additionally more functionality is accessible via
SQL.  Since the commit timestamp infrastructure has also been introduced
in 9.5 (commit 73c986add) changing the API is not a problem.
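
A hedged sketch of the SQL-level interface added here (origin name hypothetical):

  SELECT pg_replication_origin_create('node_a');
  SELECT pg_replication_origin_session_setup('node_a');
  -- changes made in this session are now tagged with origin node_a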

For now the number of origins for which the replication progress can be
tracked simultaneously is determined by the max_replication_slots
GUC. That GUC is not a perfect match to configure this, but there
doesn't seem to be sufficient reason to introduce a separate new one.

Bumps both catversion and wal page magic.

Author: Andres Freund, with contributions from Petr Jelinek and Craig Ringer
Reviewed-By: Heikki Linnakangas, Petr Jelinek, Robert Haas, Steve Singer
Discussion: 20150216002155.GI15326@awork2.anarazel.de,
    20140923182422.GA15776@alap3.anarazel.de,
    20131114172632.GE7522@alap2.anarazel.de
2015-04-29 19:30:53 +02:00
Peter Eisentraut f95425478a Fix hstore_plperl regression tests on some platforms
On some platforms, plperl and plperlu cannot be loaded at the same
time.  So split the test into two separate test files.
2015-04-26 16:13:58 -04:00
Peter Eisentraut cac7658205 Add transforms feature
This provides a mechanism for specifying conversions between SQL data
types and procedural languages.  As examples, there are transforms
for hstore and ltree for PL/Perl and PL/Python.

reviews by Pavel Stěhule and Andres Freund
2015-04-26 10:33:14 -04:00
Tom Lane f320cbb615 Fix typo in linux startup script.
Missed a "$" in what was meant to be a variable substitution.  Careless
mistake in commit f23425fa95.
2015-04-26 09:43:15 -04:00
Peter Eisentraut dcae5facca Improve speed of make check-world
Before, make check-world would create a new temporary installation for
each test suite, which is slow and wasteful.  Instead, we now create one
test installation that is used by all test suites that are part of a
make run.

The management of the temporary installation is removed from pg_regress
and handled in the makefiles.  This allows for better control, and
unifies the code with that of test suites not run through pg_regress.

review and msvc support by Michael Paquier <michael.paquier@gmail.com>

more review by Fabien Coelho <coelho@cri.ensmp.fr>
2015-04-23 08:59:52 -04:00
Stephen Frost 4ccc5bd28e Pull in tableoid for inheritance with rowMarks
As noted by Etsuro Fujita [1] and Dean Rasheed [2],
cb1ca4d800 changed ExecBuildAuxRowMark()
to always look for the tableoid in the target list, but didn't also
change preprocess_targetlist() to always include the tableoid.  This
resulted in errors with soon-to-be-added RLS with inheritance tests,
and errors when using inheritance with foreign tables.

Authors: Etsuro Fujita and Dean Rasheed (independently)

Minor word-smithing on the comments by me.

[1] 552CF0B6.8010006@lab.ntt.co.jp
[2] CAEZATCVmFUfUOwwhnBTcgi6AquyjQ0-1fyKd0T3xBWJvn+xsFA@mail.gmail.com
2015-04-22 11:29:35 -04:00
Andres Freund cef939c347 Rename pg_replication_slot's new active_in to active_pid.
In d811c037ce active_in was added but discussion since showed that
active_pid is preferred as a name.
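
A hedged query against the renamed column:

  SELECT slot_name, active, active_pid FROM pg_replication_slots;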

Discussion: CAMsr+YFKgZca5_7_ouaMWxA5PneJC9LNViPzpDHusaPhU9pA7g@mail.gmail.com
2015-04-22 09:43:40 +02:00
Peter Eisentraut b0a738f428 Move pg_xlogdump from contrib/ to src/bin/
Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
2015-04-21 19:03:49 -04:00
Andres Freund d811c037ce Add 'active_in' column to pg_replication_slots.
Right now it is visible whether a replication slot is active in any
session, but not in which.  Adding the active_in column, containing the
pid of the backend having acquired the slot, makes it much easier to
associate pg_replication_slots entries with the corresponding
pg_stat_replication/pg_stat_activity row.

This should have been done from the start, but I (Andres) dropped the
ball there somehow.

Author: Craig Ringer, revised by me
Discussion: CAMsr+YFKgZca5_7_ouaMWxA5PneJC9LNViPzpDHusaPhU9pA7g@mail.gmail.com
2015-04-21 11:51:06 +02:00
Peter Eisentraut 528c2e44ab Move pg_test_timing from contrib/ to src/bin/
Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
2015-04-20 21:30:12 -04:00
Peter Eisentraut 00882d9e5c Move pg_test_fsync from contrib/ to src/bin/
Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
2015-04-19 22:20:49 -04:00
Peter Eisentraut 9fa8b0ee90 Move pg_upgrade from contrib/ to src/bin/
Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
2015-04-14 19:26:38 -04:00
Peter Eisentraut 30982be4e5 Integrate pg_upgrade_support module into backend
Previously, these functions were created in a schema "binary_upgrade",
which was deleted after pg_upgrade was finished.  Because we don't want
to keep that schema around permanently, move them to pg_catalog but
rename them with a binary_upgrade_... prefix.

The provided functions are only small wrappers around global variables
that were added specifically for pg_upgrade use, so keeping the module
separate does not create any modularity.

The functions still check that they are only called in binary upgrade
mode, so it is not possible to call these during normal operation.

Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
2015-04-14 19:26:37 -04:00
Heikki Linnakangas 4f700bcd20 Reorganize our CRC source files again.
Now that we use CRC-32C in WAL and the control file, the "traditional" and
"legacy" CRC-32 variants are not used in any frontend programs anymore.
Move the code for those back from src/common to src/backend/utils/hash.

Also move the slicing-by-8 implementation (back) to src/port. This is in
preparation for next patch that will add another implementation that uses
Intel SSE 4.2 instructions to calculate CRC-32C, where available.
2015-04-14 17:03:42 +03:00
Peter Eisentraut 81134af3ec Move pgbench from contrib/ to src/bin/
Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
2015-04-13 13:07:16 -04:00
Peter Eisentraut 83aca89f7c Move pg_archivecleanup from contrib/ to src/bin/
Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
2015-04-11 23:29:18 -04:00