Commit Graph

2756 Commits

Tom Lane
cb1ca4d800 Allow foreign tables to participate in inheritance.
Foreign tables can now be inheritance children, or parents.  Much of the
system was already ready for this, but we had to fix a few things of
course, mostly in the area of planner and executor handling of row locks.

As side effects of this, allow foreign tables to have NOT VALID CHECK
constraints (and hence to accept ALTER ... VALIDATE CONSTRAINT), and to
accept ALTER SET STORAGE and ALTER SET WITH/WITHOUT OIDS.  Continuing to
disallow these things would've required bizarre and inconsistent special
cases in inheritance behavior.  Since foreign tables don't enforce CHECK
constraints anyway, a NOT VALID one is a complete no-op, but that doesn't
mean we shouldn't allow it.  And it's possible that some FDWs might have
use for SET STORAGE or SET WITH OIDS, though doubtless they will be no-ops
for most.

An additional change in support of this is that when a ModifyTable node
has multiple target tables, they will all now be explicitly identified
in EXPLAIN output, for example:

 Update on pt1  (cost=0.00..321.05 rows=3541 width=46)
   Update on pt1
   Foreign Update on ft1
   Foreign Update on ft2
   Update on child3
   ->  Seq Scan on pt1  (cost=0.00..0.00 rows=1 width=46)
   ->  Foreign Scan on ft1  (cost=100.00..148.03 rows=1170 width=46)
   ->  Foreign Scan on ft2  (cost=100.00..148.03 rows=1170 width=46)
   ->  Seq Scan on child3  (cost=0.00..25.00 rows=1200 width=46)

This was done mainly to provide an unambiguous place to attach "Remote SQL"
fields, but it is useful for inherited updates even when no foreign tables
are involved.

Shigeru Hanada and Etsuro Fujita, reviewed by Ashutosh Bapat and Kyotaro
Horiguchi, some additional hacking by me
2015-03-22 13:53:21 -04:00
Tom Lane
8d1f239003 Replace insertion sort in contrib/intarray with qsort().
It's all very well to claim that a simplistic sort is fast in easy
cases, but O(N^2) in the worst case is not good ... especially if the
worst case is as easy to hit as "descending order input".  Replace that
bit with our standard qsort.

Per bug #12866 from Maksym Boguk.  Back-patch to all active branches.
2015-03-15 23:22:03 -04:00
Tom Lane
7b8b8a4331 Improve representation of PlanRowMark.
This patch fixes two inadequacies of the PlanRowMark representation.

First, that the original LockingClauseStrength isn't stored (and cannot be
inferred for foreign tables, which always get ROW_MARK_COPY).  Since some
PlanRowMarks are created out of whole cloth and don't actually have an
ancestral RowMarkClause, this requires adding a dummy LCS_NONE value to
enum LockingClauseStrength, which is fairly annoying but the alternatives
seem worse.  This fix allows getting rid of the use of get_parse_rowmark()
in FDWs (as per the discussion around commits 462bd95705 and
8ec8760fc8), and it simplifies some things elsewhere.

Second, that the representation assumed that all child tables in an
inheritance hierarchy would use the same RowMarkType.  That's true today
but will soon not be true.  We add an "allMarkTypes" field that identifies
the union of mark types used in all a parent table's children, and use
that where appropriate (currently, only in preprocess_targetlist()).

In passing fix a couple of minor infelicities left over from the SKIP
LOCKED patch, notably that _outPlanRowMark still thought waitPolicy
is a bool.

Catversion bump is required because the numeric values of enum
LockingClauseStrength can appear in on-disk rules.

Extracted from a much larger patch to support foreign table inheritance;
it seemed worth breaking this out, since it's a separable concern.

Shigeru Hanada and Etsuro Fujita, somewhat modified by me
2015-03-15 18:41:47 -04:00
Robert Haas
e96b7c6b9f sepgsql: Improve error message when unsupported object type is labeled.
KaiGai Kohei, reviewed by Álvaro Herrera and myself
2015-03-11 12:12:10 -04:00
Andres Freund
bbfd7edae5 Add macros wrapping all usage of gcc's __attribute__.
Until now __attribute__() was defined to be empty for all compilers but
gcc. That's problematic because it prevents using it in other compilers,
which is necessary e.g. for atomics portability.  It's also just
generally dubious to do so in a header as widely included as c.h.

Instead add pg_attribute_format_arg, pg_attribute_printf,
pg_attribute_noreturn macros which are implemented in the compilers that
understand them. Also add pg_attribute_aligned and pg_attribute_packed,
but don't provide fallbacks, since they can affect functionality.

This means that external code that, possibly unwittingly, relied on
__attribute__ defined to be empty on !gcc compilers may now run into
warnings or errors on those compilers. But there shouldn't be many
occurrences of that and it's hard to work around...

Discussion: 54B58BA3.8040302@ohmu.fi
Author: Oskari Saarenmaa, with some minor changes by me.
2015-03-11 14:30:01 +01:00
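
A sketch of the macro-wrapping approach described in the commit above (the
real definitions live in c.h; the conditionals and the example function here
are illustrative only):

    /* Expand to __attribute__ only where the compiler understands it. */
    #if defined(__GNUC__) || defined(__clang__)
    #define pg_attribute_noreturn()     __attribute__((noreturn))
    #define pg_attribute_printf(f, a)   __attribute__((format(printf, f, a)))
    #define pg_attribute_format_arg(a)  __attribute__((format_arg(a)))
    #else
    /* harmless hints, so defining them away is safe on other compilers */
    #define pg_attribute_noreturn()
    #define pg_attribute_printf(f, a)
    #define pg_attribute_format_arg(a)
    #endif

    /* No fallbacks for these: packing and alignment change struct layout. */
    #ifdef __GNUC__
    #define pg_attribute_packed()       __attribute__((packed))
    #define pg_attribute_aligned(a)     __attribute__((aligned(a)))
    #endif

    /* hypothetical usage */
    extern void fatal_error(const char *fmt, ...)
                pg_attribute_printf(1, 2) pg_attribute_noreturn();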
Fujii Masao
57aa5b2bb1 Add GUC to enable compression of full page images stored in WAL.
When the newly-added GUC parameter wal_compression is on, the PostgreSQL server
compresses a full page image written to WAL when full_page_writes is on or
during a base backup. A compressed page image will be decompressed during WAL
replay. Turning this parameter on can reduce the WAL volume without increasing
the risk of unrecoverable data corruption, but at the cost of some extra CPU
spent on the compression during WAL logging and on the decompression during
WAL replay.

This commit changes the WAL format (bumping the WAL version number) so that
the one-byte flag indicating whether a full page image is compressed or not is
included in its header information. This means that the commit increases the
WAL volume by one byte per full page image even if WAL compression is not used
at all. We could save that byte by borrowing a bit from an existing header
field such as hole_offset and using it as the flag, but that would reduce the
code's readability and the feature's extensibility. Per discussion, it's not
worth paying those prices to save only one byte, so we decided to add the
one-byte flag to the header.

This commit doesn't introduce any new compression algorithm like lz4.
Currently a full page image is compressed using the existing PGLZ algorithm.
Per discussion, we decided to use it at least in the first version of the
feature because there were no performance reports showing that its compression
ratio is unacceptably lower than that of other algorithms. Of course, in the
future, it's worth considering support for other compression algorithms to get
better compression.

Rahila Syed and Michael Paquier, reviewed in various versions by myself,
Andres Freund, Robert Haas, Abhijit Menon-Sen and many others.
2015-03-11 15:52:24 +09:00
Alvaro Herrera
e491bd2ee3 Move BRIN page type to page's last two bytes
... which is the usual convention among AMs, so that pg_filedump and
similar utilities can tell apart pages of different AMs.  It was also
the intent of the original code, but I failed to realize that alignment
considerations would move the whole thing to the previous-to-last word
in the page.

The new definition of the associated macro makes surrounding code a bit
leaner, too.

Per note from Heikki at
http://www.postgresql.org/message-id/546A16EF.9070005@vmware.com
2015-03-10 12:27:15 -03:00
Heikki Linnakangas
f1fd515b39 Move WAL-related definitions from dbcommands.h to separate header file.
This makes it easier to write frontend programs that need to understand
the WAL record format of CREATE/DROP DATABASE. dbcommands.h cannot easily
be #included in a frontend program, because it pulls in other header files
that need backend stuff, but the new dbcommands_xlog.h header file has
fewer dependencies.
2015-03-09 15:50:49 +02:00
Alvaro Herrera
c6ee39bc85 Fix contrib/file_fdw's expected file
I forgot to update it on yesterday's cf34e373fc.
2015-03-06 11:47:09 -03:00
Robert Haas
e5f3690249 pgbench: Fix mistakes in Makefile.
My commit 878fdcb843 was not quite
right.  Tom Lane pointed out one of the mistakes fixed here, and I
noticed the other myself while reviewing what I'd committed.
2015-03-03 10:48:16 -05:00
Robert Haas
878fdcb843 pgbench: Add a real expression syntax to \set
Previously, you could do \set variable operand1 operator operand2, but
nothing more complicated.  Now, you can \set variable expression, which
makes it much simpler to do multi-step calculations here.  This also
adds support for the modulo operator (%), with the same semantics as in
C.

Robert Haas and Fabien Coelho, reviewed by Álvaro Herrera and
Stephen Frost
2015-03-02 14:21:41 -05:00
Tom Lane
2e211211a7 Use FLEXIBLE_ARRAY_MEMBER in a number of other places.
I think we're about done with this...
2015-02-21 16:12:14 -05:00
Tom Lane
e1a11d9311 Use FLEXIBLE_ARRAY_MEMBER for HeapTupleHeaderData.t_bits[].
This requires changing quite a few places that were depending on
sizeof(HeapTupleHeaderData), but it seems for the best.

Michael Paquier, some adjustments by me
2015-02-21 15:13:06 -05:00
Tom Lane
c110eff132 Use FLEXIBLE_ARRAY_MEMBER in struct RecordIOData.
I (tgl) fixed this last night in rowtypes.c, but I missed that the
code had been copied into a couple of other places.

Michael Paquier
2015-02-20 17:03:12 -05:00
Tom Lane
09d8d110a6 Use FLEXIBLE_ARRAY_MEMBER in a bunch more places.
Replace some bogus "x[1]" declarations with "x[FLEXIBLE_ARRAY_MEMBER]".
Aside from being more self-documenting, this should help prevent bogus
warnings from static code analyzers and perhaps compiler misoptimizations.

This patch is just a down payment on eliminating the whole problem, but
it gets rid of a lot of easy-to-fix cases.

Note that the main problem with doing this is that one must no longer rely
on computing sizeof(the containing struct), since the result would be
compiler-dependent.  Instead use offsetof(struct, lastfield).  Autoconf
also warns against spelling that as offsetof(struct, lastfield[0]).

Michael Paquier, review and additional fixes by me.
2015-02-20 00:11:42 -05:00
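
A minimal sketch of the pattern these FLEXIBLE_ARRAY_MEMBER commits apply
(VarData and make_vardata are invented names; PostgreSQL's configure chooses
the real expansion of FLEXIBLE_ARRAY_MEMBER):

    #include <stddef.h>
    #include <stdlib.h>

    #define FLEXIBLE_ARRAY_MEMBER   /* empty on compilers with true C99
                                     * flexible-array-member support */

    typedef struct VarData
    {
        int     nitems;
        int     items[FLEXIBLE_ARRAY_MEMBER];   /* was: int items[1]; */
    } VarData;

    static VarData *
    make_vardata(int nitems)
    {
        /*
         * Size the allocation with offsetof(), not sizeof(): sizeof() of a
         * struct containing a flexible array member is compiler-dependent.
         */
        VarData *vd = malloc(offsetof(VarData, items) + nitems * sizeof(int));

        vd->nitems = nitems;
        return vd;
    }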
Kevin Grittner
c923e82a23 Eliminate unnecessary NULL checks in picksplit method of intarray.
Where these checks were being done there was no code path which
could leave them NULL.

Michael Paquier per Coverity
2015-02-16 15:26:23 -06:00
Tom Lane
80986e85aa Avoid returning undefined bytes in chkpass_in().
We can't really fix the problem that the result is defined to depend on
random(), so it is still going to fail the "unstable input conversion"
test in parse_type.c.  However, we can at least satisfy valgrind.  (It
looks like this code used to be valgrind-clean, actually, until somebody
did a careless s/strncpy/strlcpy/g on it.)

In passing, let's just make real sure that chkpass_out doesn't overrun
its output buffer.

No need for backpatch, I think, since this is just to satisfy debugging
tools.

Asif Naeem
2015-02-14 12:20:56 -05:00
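
The parenthetical point above (the careless s/strncpy/strlcpy/g) comes down to
padding semantics; a hedged sketch with invented names:

    #include <string.h>

    #define RESULT_LEN 16           /* hypothetical fixed-width output field */

    static void
    fill_result(char *result, const char *src)
    {
        /*
         * strlcpy(result, src, RESULT_LEN) NUL-terminates but leaves the bytes
         * after the terminator uninitialized; if all RESULT_LEN bytes are later
         * emitted, Valgrind reports undefined data.  strncpy() zero-pads the
         * remainder, so every byte of the field is defined (it just doesn't
         * guarantee NUL termination on overlength input, hence the last line).
         */
        strncpy(result, src, RESULT_LEN - 1);
        result[RESULT_LEN - 1] = '\0';
    }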
Bruce Momjian
dc01efa5cc pg_upgrade: improve checksum mismatch error message
Patch by Greg Sabino Mullane, slight adjustments by me
2015-02-11 22:22:26 -05:00
Bruce Momjian
056764b102 pg_upgrade: quote directory names in delete_old_cluster script
This allows the delete script to properly function when special
characters appear in directory paths, e.g. spaces.

Backpatch through 9.0
2015-02-11 22:06:04 -05:00
Heikki Linnakangas
c619c2351f Move pg_crc.c to src/common, and remove pg_crc_tables.h
To get CRC functionality in a client program, you now need to link with
libpgcommon instead of libpgport. The CRC code has nothing to do with
portability, so libpgcommon is a better home. (libpgcommon didn't exist
when pg_crc.c was originally moved to src/port.)

Remove the possibility to get CRC functionality by just #including
pg_crc_tables.h. I'm not aware of any extensions that actually did that and
couldn't simply link with libpgcommon.

This also moves the pg_crc.h header file from src/include/utils to
src/include/common, which will require changes to any external programs
that currently do #include "utils/pg_crc.h". That seems acceptable, as
include/common is clearly the right home for it now, and the change needed
to any such programs is trivial.
2015-02-09 11:17:56 +02:00
Robert Haas
370b3a4618 pgcrypto: Code cleanup for decrypt_internal.
Remove some unnecessary null-tests, and replace a goto-label construct
with an "if" block.

Michael Paquier, reviewed by me.
2015-02-04 08:46:32 -05:00
Heikki Linnakangas
4eaafa0453 Remove dead code.
Commit 13629df changed the metaphone() function to return an empty string on
empty input, but it left the old error message in place. It's now dead code.

Michael Paquier, per Coverity warning.
2015-02-03 09:43:44 +02:00
Noah Misch
59b919822a Prevent Valgrind Memcheck errors around px_acquire_system_randomness().
This function uses uninitialized stack and heap buffers as supplementary
entropy sources.  Mark them so Memcheck will not complain.  Back-patch
to 9.4, where Valgrind Memcheck cooperation first appeared.

Marko Tiikkaja
2015-02-02 10:00:45 -05:00
Noah Misch
8b59672d8d Cherry-pick security-relevant fixes from upstream imath library.
This covers alterations to buffer sizing and zeroing made between imath
1.3 and imath 1.20.  Valgrind Memcheck identified the buffer overruns
and reliance on uninitialized data; their exploit potential is unknown.
Builds specifying --with-openssl are unaffected, because they use the
OpenSSL BIGNUM facility instead of imath.  Back-patch to 9.0 (all
supported versions).

Security: CVE-2015-0243
2015-02-02 10:00:45 -05:00
Noah Misch
1dc7551586 Fix buffer overrun after incomplete read in pullf_read_max().
Most callers pass a stack buffer.  The ensuing stack smash can crash the
server, and we have not ruled out the viability of attacks that lead to
privilege escalation.  Back-patch to 9.0 (all supported versions).

Marko Tiikkaja

Security: CVE-2015-0243
2015-02-02 10:00:45 -05:00
Tom Lane
a59ee88197 Fix Coverity warning about contrib/pgcrypto's mdc_finish().
Coverity points out that mdc_finish returns a pointer to a local buffer
(which of course is gone as soon as the function returns), leaving open
a risk of misbehaviors possibly as bad as a stack overwrite.

In reality, the only possible call site is in process_data_packets()
which does not examine the returned pointer at all.  So there's no
live bug, but nonetheless the code is confusing and risky.  Refactor
to avoid the issue by letting process_data_packets() call mdc_finish()
directly instead of going through the pullf_read() API.

Although this is only cosmetic, it seems good to back-patch so that
the logic in pgp-decrypt.c stays in sync across all branches.

Marko Kreen
2015-01-30 13:05:30 -05:00
Tom Lane
37507962c3 Handle unexpected query results, especially NULLs, safely in connectby().
connectby() didn't adequately check that the constructed SQL query returns
what it's expected to; in fact, since commit 08c33c426b it wasn't
checking that at all.  This could result in a null-pointer-dereference
crash if the constructed query returns only one column instead of the
expected two.  Less excitingly, it could also result in surprising data
conversion failures if the constructed query returned values that were
not I/O-conversion-compatible with the types specified by the query
calling connectby().

In all branches, insist that the query return at least two columns;
this seems like a minimal sanity check that can't break any reasonable
use-cases.

In HEAD, insist that the constructed query return the types specified by
the outer query, including checking for typmod incompatibility, which the
code never did even before it got broken.  This is to hide the fact that
the implementation does a conversion to text and back; someday we might
want to improve that.

In back branches, leave that alone, since adding a type check in a minor
release is more likely to break things than make people happy.  Type
inconsistencies will continue to work so long as the actual type and
declared type are I/O representation compatible, and otherwise will fail
the same way they used to.

Also, in all branches, be on guard for NULL results from the constructed
query, which formerly would cause null-pointer dereference crashes.
We now print the row with the NULL but don't recurse down from it.

In passing, get rid of the rather pointless idea that
build_tuplestore_recursively() should return the same tuplestore that's
passed to it.

Michael Paquier, adjusted somewhat by me
2015-01-29 20:18:33 -05:00
Andres Freund
ed127002d8 Align buffer descriptors to cache line boundaries.
Benchmarks have shown that aligning the buffer descriptor array to
cache lines is important for scalability, especially on bigger,
multi-socket machines.

Currently the array sometimes happens to be aligned already, depending
on how large previous shared memory allocations
were. That can lead to wildly varying performance results after minor
configuration changes.

In addition to aligning the start of the descriptor array, also force the
size of individual descriptors to be of a common cache line size (64
bytes). That happens to already be the case on 64bit platforms, but
this way we can change the struct BufferDesc more easily.

As the alignment primarily matters in highly concurrent workloads
which probably all are 64bit these days, and the space wastage of
element alignment would be a bit more noticeable on 32bit systems, we
don't force the stride to be cacheline sized on 32bit platforms for
now. If somebody does actual performance testing, we can reevaluate
that decision by changing the definition of BUFFERDESC_PADDED_SIZE.

Discussion: 20140202151319.GD32123@awork2.anarazel.de

Per discussion with Bruce Momjian, Tom Lane, Robert Haas, and Peter
Geoghegan.
2015-01-29 22:48:45 +01:00
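
A hedged sketch of forcing a cache-line stride via a padded union (the real
BufferDesc/BufferDescPadded definitions live in the buffer-manager headers;
names and fields here are placeholders):

    #define CACHE_LINE_SIZE 64

    typedef struct BufferDescSketch
    {
        int     buf_id;
        /* ... remaining buffer state fields ... */
    } BufferDescSketch;

    /* The union makes every array element occupy exactly one cache line,
     * provided sizeof(BufferDescSketch) <= CACHE_LINE_SIZE. */
    typedef union BufferDescPaddedSketch
    {
        BufferDescSketch bufferdesc;
        char             pad[CACHE_LINE_SIZE];
    } BufferDescPaddedSketch;

    /* With the array start itself allocated on a cache-line boundary,
     * descriptors[i].bufferdesc then never straddles two cache lines. */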
Heikki Linnakangas
670bf71f65 Remove dead NULL-pointer checks in GiST code.
gist_poly_compress() and gist_circle_compress() checked for a NULL-pointer
key argument, but that was dead code; the gist code never passes a
NULL-pointer to the "compress" method.

This commit also removes a documentation note added in commit a0a3883,
about doing NULL-pointer checks in the "compress" method. It was added
based on the fact that some implementations were doing NULL-pointer
checks, but those checks were unnecessary in the first place.

The NULL-pointer check in gbt_var_same() function was also unnecessary.
The arguments to the "same" method come from the "compress", "union", or
"picksplit" methods, but none of them return a NULL pointer.

None of this is to be confused with SQL NULL values. Those are dealt with
by the gist machinery, and are never passed to the GiST opclass methods.

Michael Paquier
2015-01-28 10:03:58 +02:00
Tom Lane
dabda64152 Fix volatile-safety issue in dblink's materializeQueryResult().
Some fields of the sinfo struct are modified within PG_TRY and then
referenced within PG_CATCH, so as with recent patch to async.c, "volatile"
is necessary for strict POSIX compliance; and that propagates to a couple
of subroutines as well as materializeQueryResult() itself.  I think the
risk of actual issues here is probably higher than in async.c, because
storeQueryResult() is likely to get inlined into materializeQueryResult(),
leaving the compiler free to conclude that its stores into sinfo fields are
dead code.
2015-01-26 15:17:33 -05:00
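
The rule being applied above, sketched generically for backend code using the
elog.h error-handling macros (do_work and cleanup_partial are invented names):

    static void
    run_query_safely(void)
    {
        /* Modified inside PG_TRY and read in PG_CATCH: must be volatile, or
         * the compiler may keep it in a register or discard the stores as
         * dead after the longjmp(). */
        volatile int    rows_stored = 0;

        PG_TRY();
        {
            rows_stored = do_work();        /* may elog(ERROR) partway through */
        }
        PG_CATCH();
        {
            cleanup_partial(rows_stored);   /* needs the up-to-date value */
            PG_RE_THROW();
        }
        PG_END_TRY();
    }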
Tom Lane
586dd5d6a5 Replace a bunch more uses of strncpy() with safer coding.
strncpy() has a well-deserved reputation for being unsafe, so make an
effort to get rid of nearly all occurrences in HEAD.

A large fraction of the remaining uses were passing length less than or
equal to the known strlen() of the source, in which case no null-padding
can occur and the behavior is equivalent to memcpy(), though doubtless
slower and certainly harder to reason about.  So just use memcpy() in
these cases.

In other cases, use either StrNCpy() or strlcpy() as appropriate (depending
on whether padding to the full length of the destination buffer seems
useful).

I left a few strncpy() calls alone in the src/timezone/ code, to keep it
in sync with upstream (the IANA tzcode distribution).  There are also a
few such calls in ecpg that could possibly do with more analysis.

AFAICT, none of these changes are more than cosmetic, except for the four
occurrences in fe-secure-openssl.c, which are in fact buggy: an overlength
source leads to a non-null-terminated destination buffer and ensuing
misbehavior.  These don't seem like security issues, first because no stack
clobber is possible and second because if your values of sslcert etc are
coming from untrusted sources then you've got problems way worse than this.
Still, it's undesirable to have unpredictable behavior for overlength
inputs, so back-patch those four changes to all active branches.
2015-01-24 13:05:42 -05:00
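
A sketch of the three replacement patterns described above, assuming
PostgreSQL's headers (StrNCpy comes from c.h, and strlcpy from libpgport on
platforms that lack it); the function and buffer are illustrative:

    #include <string.h>

    static void
    copy_examples(const char *src)
    {
        char    dst[64];
        size_t  known_len = strnlen(src, sizeof(dst) - 1);

        /* 1. Length <= strlen(src): no NUL-padding would occur anyway, so
         *    memcpy() states the intent directly. */
        memcpy(dst, src, known_len);
        dst[known_len] = '\0';

        /* 2. Truncate to fit and NUL-terminate, without padding the tail. */
        strlcpy(dst, src, sizeof(dst));

        /* 3. Same, but also zero-pad the rest of the destination buffer;
         *    StrNCpy() wraps strncpy() and forces NUL termination. */
        StrNCpy(dst, src, sizeof(dst));
    }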
Tom Lane
eb213acfe2 Prevent duplicate escape-string warnings when using pg_stat_statements.
contrib/pg_stat_statements will sometimes run the core lexer a second time
on submitted statements.  Formerly, if you had standard_conforming_strings
turned off, this led to sometimes getting two copies of any warnings
enabled by escape_string_warning.  While this is probably no longer a big
deal in the field, it's a pain for regression testing.

To fix, change the lexer so it doesn't consult the escape_string_warning
GUC variable directly, but looks at a copy in the core_yy_extra_type state
struct.  Then, pg_stat_statements can change that copy to disable warnings
while it's redoing the lexing.

It seemed like a good idea to make this happen for all three of the GUCs
consulted by the lexer, not just escape_string_warning.  There's not an
immediate use-case for callers to adjust the other two AFAIK, but making
it possible is easy enough and seems like good future-proofing.

Arguably this is a bug fix, but there doesn't seem to be enough interest to
justify a back-patch.  We'd not be able to back-patch exactly as-is anyway,
for fear of breaking ABI compatibility of the struct.  (We could perhaps
back-patch the addition of only escape_string_warning by adding it at the
end of the struct, where there's currently alignment padding space.)
2015-01-22 18:11:00 -05:00
Tom Lane
8e166e164c Rearrange explain.c's API so callers need not embed sizeof(ExplainState).
The folly of the previous arrangement was just demonstrated: there's no
convenient way to add fields to ExplainState without breaking ABI, even
if callers have no need to touch those fields.  Since we might well need
to do that again someday in back branches, let's change things so that
only explain.c has to have sizeof(ExplainState) compiled into it.  This
costs one extra palloc() per EXPLAIN operation, which is surely pretty
negligible.
2015-01-15 13:39:33 -05:00
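
A sketch of the pattern the commit above describes: let explain.c allocate the
struct so callers never compile in sizeof(ExplainState). The details follow
the commit's description and should be read as illustrative:

    /* in explain.h: callers get a pointer and never place the struct on
     * their own stack */
    extern ExplainState *NewExplainState(void);

    /* in explain.c: the only translation unit needing sizeof(ExplainState) */
    ExplainState *
    NewExplainState(void)
    {
        ExplainState *es = (ExplainState *) palloc0(sizeof(ExplainState));

        es->costs = true;           /* defaults formerly set up by callers */
        es->str = makeStringInfo();
        return es;
    }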
Robert Haas
0b49642b99 pg_standby: Avoid writing one byte beyond the end of the buffer.
Previously, read() might have returned a length equal to the buffer
length, and then the subsequent store to buf[len] would write a
zero-byte one byte past the end.  This doesn't seem likely to be
a security issue, but there's some chance it could result in
pg_standby misbehaving.

Spotted by Coverity; patch by Michael Paquier, reviewed by me.
2015-01-15 09:26:03 -05:00
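
The fix pattern, sketched generically (helper name invented): never pass the
full buffer size to read() when a terminator will be stored at buf[len].

    #include <unistd.h>

    static void
    read_into(int fd, char *buf, size_t bufsize)
    {
        ssize_t len = read(fd, buf, bufsize - 1);   /* was: read(fd, buf, bufsize) */

        if (len < 0)
            len = 0;
        buf[len] = '\0';        /* now writes at most buf[bufsize - 1] */
    }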
Robert Haas
4a0a5f21fa vacuumlo: Avoid unlikely memory leak.
Spotted by Coverity.  This isn't likely to matter in practice, but
there's no harm in fixing it.

Michael Paquier
2015-01-14 15:14:20 -05:00
Heikki Linnakangas
e37d474f91 Silence Coverity warnings about unused return values from pushJsonbValue()
Similar warnings from the backend were silenced earlier by commit c8315930,
but there were a few more in contrib/hstore.

Michael Paquier
2015-01-13 14:33:05 +02:00
Bruce Momjian
ac7009abd2 pg_upgrade: fix one-byte per empty db memory leak
Report by Tatsuo Ishii, Coverity
2015-01-09 12:12:30 -05:00
Bruce Momjian
4baaf863ec Update copyright for 2015
Backpatch certain files through 9.0
2015-01-06 11:43:47 -05:00
Andres Freund
8cadeb792c Correctly handle test durations of more than 2147s in pg_test_timing.
Previously the computation of the total test duration, measured in
microseconds, overflowed due to the accidental use of signed 32bit
arithmetic.  As the only consequence is that pg_test_timing invocations
with such overly large durations never finished, the practical
consequences of this bug are minor.

Pointed out by Coverity.

Backpatch to 9.2 where pg_test_timing was added.
2015-01-04 15:44:49 +01:00
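
A self-contained illustration of the overflow described above (2147 seconds is
roughly where a signed 32-bit microsecond count runs out):

    #include <stdint.h>
    #include <stdio.h>

    int
    main(void)
    {
        int32_t duration_secs = 2148;   /* just past the ~2147-second limit */

        /* Signed 32-bit arithmetic overflows: 2148 * 1000000 > INT32_MAX. */
        int32_t usecs_broken = duration_secs * 1000000;

        /* Widening one operand first keeps the whole computation in 64 bits. */
        int64_t usecs_fixed = (int64_t) duration_secs * 1000000;

        printf("broken: %d, fixed: %lld\n", usecs_broken, (long long) usecs_fixed);
        return 0;
    }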
Andres Freund
d1c575230d Fix off-by-one in pg_xlogdump's fuzzy_open_file().
In the unlikely case of stdin (fd 0) being closed, the off-by-one
would lead to pg_xlogdump failing to open files.

Spotted by Coverity.

Backpatch to 9.3 where pg_xlogdump was introduced.
2015-01-04 15:35:46 +01:00
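
A generic sketch of why descriptor 0 matters here (the function name is
invented, and the exact form of the original off-by-one may differ):

    #include <fcntl.h>

    static int
    open_segment(const char *path)
    {
        int     fd = open(path, O_RDONLY, 0);

        /*
         * 0 is a valid file descriptor (it is stdin only by convention), and
         * open() can return it once stdin has been closed.  Only negative
         * values mean failure, so the check must be "fd < 0", not "fd <= 0".
         */
        if (fd < 0)
            return -1;
        return fd;
    }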
Andres Freund
58bc4747be Add missing va_end() call to an early exit in dmetaphone.c's StringAt().
Pointed out by Coverity.

Backpatch to all supported branches, the code has been that way for a
long while.
2015-01-04 15:35:46 +01:00
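
The general rule, in a self-contained sketch: every va_start() needs a
matching va_end() on every return path, including early exits.

    #include <stdarg.h>

    static int
    first_positive(int count, ...)
    {
        va_list args;
        int     i;

        va_start(args, count);
        for (i = 0; i < count; i++)
        {
            int     v = va_arg(args, int);

            if (v > 0)
            {
                va_end(args);   /* the call an early exit can easily forget */
                return v;
            }
        }
        va_end(args);
        return -1;
    }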
Tatsuo Ishii
3b5a89c482 Fix resource leak pointed out by Coverity.
2014-12-30 20:33:01 +09:00
Bruce Momjian
83bcc70459 pgbench: remove odd trailing period in init progress output
2014-12-24 09:21:09 -05:00
Heikki Linnakangas
7f0dccaed6 Turn much of the btree_gin macros into real functions.
This makes the functions much nicer to read and edit, and also makes
debugging easier.
2014-12-22 17:11:53 +02:00
Tom Lane
4a14f13a0a Improve hash_create's API for selecting simple-binary-key hash functions.
Previously, if you wanted anything besides C-string hash keys, you had to
specify a custom hashing function to hash_create().  Nearly all such
callers were specifying tag_hash or oid_hash; which is tedious, and rather
error-prone, since a caller could easily miss the opportunity to optimize
by using hash_uint32 when appropriate.  Replace this with a design whereby
callers using simple binary-data keys just specify HASH_BLOBS and don't
need to mess with specific support functions.  hash_create() itself will
take care of optimizing when the key size is four bytes.

This nets out saving a few hundred bytes of code space, and offers
a measurable performance improvement in tidbitmap.c (which was not
exploiting the opportunity to use hash_uint32 for its 4-byte keys).
There might be some wins elsewhere too, I didn't analyze closely.

In future we could look into offering a similar optimized hashing function
for 8-byte keys.  Under this design that could be done in a centralized
and machine-independent fashion, whereas getting it right for keys of
platform-dependent sizes would've been notationally painful before.

For the moment, the old way still works fine, so as not to break source
code compatibility for loadable modules.  Eventually we might want to
remove tag_hash and friends from the exported API altogether, since there's
no real need for them to be explicitly referenced from outside dynahash.c.

Teodor Sigaev and Tom Lane
2014-12-18 13:36:36 -05:00
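
A sketch of the new calling convention (MyEntry is an invented entry type; the
Oid-keyed case is typical of the callers the commit mentions):

    typedef struct MyEntry
    {
        Oid     key;                /* hash key must be the first field */
        int     count;
    } MyEntry;

    static HTAB *
    make_oid_cache(void)
    {
        HASHCTL     ctl;

        memset(&ctl, 0, sizeof(ctl));
        ctl.keysize = sizeof(Oid);
        ctl.entrysize = sizeof(MyEntry);

        /* Old way: set ctl.hash = oid_hash and pass HASH_ELEM | HASH_FUNCTION. */

        /* New way: declare the key as plain binary data and let hash_create()
         * pick hash_uint32 (4-byte keys) or tag_hash itself. */
        return hash_create("my cache", 128, &ctl, HASH_ELEM | HASH_BLOBS);
    }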
Noah Misch
f6dc6dd5ba Lock down regression testing temporary clusters on Windows.
Use SSPI authentication to allow connections exclusively from the OS
user that launched the test suite.  This closes on Windows the
vulnerability that commit be76a6d39e
closed on other platforms.  Users of "make installcheck" or custom test
harnesses can run "pg_regress --config-auth=DATADIR" to activate the
same authentication configuration that "make check" would use.
Back-patch to 9.0 (all supported versions).

Security: CVE-2014-0067
2014-12-17 22:48:40 -05:00
Tom Lane
fc2ac1fb41 Allow CHECK constraints to be placed on foreign tables.
As with NOT NULL constraints, we consider that such constraints are merely
reports of constraints that are being enforced by the remote server (or
other underlying storage mechanism).  Their only real use is to allow
planner optimizations, for example in constraint-exclusion checks.  Thus,
the code changes here amount to little more than removal of the error that
was formerly thrown for applying CHECK to a foreign table.

(In passing, do a bit of cleanup of the ALTER FOREIGN TABLE reference page,
which had accumulated some weird decisions about ordering etc.)

Shigeru Hanada and Etsuro Fujita, reviewed by Kyotaro Horiguchi and
Ashutosh Bapat.
2014-12-17 17:00:53 -05:00
Magnus Hagander
cef0ae498c Update .gitignore for pg_upgrade
Add Windows versions of generated scripts, and make sure we only
ignore the scripts in the root directory.

Michael Paquier
2014-12-17 11:55:22 +01:00
Tom Lane
de8e46f5f5 Suppress bogus statistics when pgbench failed to complete any transactions.
Code added in 9.4 would attempt to divide by zero in such cases.
Noted while testing fix for missing-pclose problem.
2014-12-16 14:53:55 -05:00
Tom Lane
d38e8d30ce Fix file descriptor leak after failure of a \setshell command in pgbench.
If the called command fails to return data, runShellCommand forgot to
pclose() the pipe before returning.  This is fairly harmless in the current
code, because pgbench would then abandon further processing of that client
thread; so no more than nclients descriptors could be leaked this way.  But
it's not hard to imagine future improvements whereby that wouldn't be true.
In any case, it's sloppy coding, so patch all branches.  Found by Coverity.
2014-12-16 13:31:42 -05:00
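
A hedged sketch of the fix pattern (names invented): close the pipe on the
early-return path as well as on the normal one.

    #include <stdbool.h>
    #include <stdio.h>

    static bool
    run_shell_command(const char *command, char *result, size_t resultlen)
    {
        FILE   *fp = popen(command, "r");

        if (fp == NULL)
            return false;

        if (fgets(result, (int) resultlen, fp) == NULL)
        {
            pclose(fp);         /* previously missing on this early return */
            return false;
        }

        pclose(fp);
        return true;
    }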