Commit Graph

39264 Commits

Author SHA1 Message Date
Tom Lane
2acb682f68 Fix startup so that log prefix %h works for the log_connections message.
We entirely randomly chose to initialize port->remote_host just after
printing the log_connections message, when we could perfectly well do it
just before, allowing %h and %r to work for that message.  Per gripe from
Artem Tomyuk.
2016-01-26 15:38:33 -05:00
Tatsuo Ishii
e8267aba5f Revert "Fix broken multibyte regression tests."
This reverts commit 479cb1e420.

The commit was plain wrong as pointed out in:
http://www.postgresql.org/message-id/27771.1448736909@sss.pgh.pa.us
2016-01-26 09:00:33 +09:00
Alvaro Herrera
1e910cf5ba pg_dump: Fix quoting of domain constraint names
The original code was adding double quotes to an already-quoted
identifier, leading to nonsensical results.  Remove the quoting call.

I introduced the broken code in 7eca575d1c of 9.5 era, so backpatch to
9.5.

Report and patch by Elvis Pranskevichus
Reviewed by Michael Paquier
2016-01-22 20:04:35 -03:00
Tom Lane
042a3e2353 Improve levenshtein() docs.
Fix chars-vs-bytes confusion here too.  Improve poor grammar and
markup.
2016-01-22 12:29:22 -05:00
Tom Lane
47acf3add3 Remove new coupling between NAMEDATALEN and MAX_LEVENSHTEIN_STRLEN.
Commit e529cd4ffa introduced an Assert requiring NAMEDATALEN to be
less than MAX_LEVENSHTEIN_STRLEN, which has been 255 for a long time.
Since up to that instant we had always allowed NAMEDATALEN to be
substantially more than that, this was ill-advised.

It's debatable whether we need MAX_LEVENSHTEIN_STRLEN at all (versus
putting a CHECK_FOR_INTERRUPTS into the loop), or whether it has to be
so tight; but this patch takes the narrower approach of just not applying
the MAX_LEVENSHTEIN_STRLEN limit to calls from the parser.

Trusting the parser for this seems reasonable, first because the strings
are limited to NAMEDATALEN which is unlikely to be hugely more than 256,
and second because the maximum distance is tightly constrained by
MAX_FUZZY_DISTANCE (though we'd forgotten to make use of that limit in one
place).  That means the cost is not really O(mn) but more like O(max(m,n)).

Relaxing the limit for user-supplied calls is left for future research;
given the lack of complaints to date, it doesn't seem very high priority.

In passing, fix confusion between lengths-in-bytes and lengths-in-chars
in comments and error messages.

Per gripe from Kevin Day; solution suggested by Robert Haas.  Back-patch
to 9.5 where the unwanted restriction was introduced.
2016-01-22 11:53:06 -05:00
Tom Lane
e80c85e4e8 Add defenses against putting expanded objects into Const nodes.
Putting a reference to an expanded-format value into a Const node would be
a bad idea for a couple of reasons.  It'd be possible for the supposedly
immutable Const to change value, if something modified the referenced
variable ... in fact, if the Const's reference were R/W, any function that
has the Const as argument might itself change it at runtime.  Also, because
datumIsEqual() is pretty simplistic, the Const might fail to compare equal
to other Consts that it should compare equal to, notably including copies
of itself.  This could lead to unexpected planner behavior, such as "could
not find pathkey item to sort" errors or inferior plans.

I have not been able to find any way to get an expanded value into a Const
within the existing core code; but Paul Ramsey was able to trigger the
problem by writing a datatype input function that returns an expanded
value.

The best fix seems to be to establish a rule that varlena values being
placed into Const nodes should be passed through pg_detoast_datum().
That will do nothing (and cost little) in normal cases, but it will flatten
expanded values and thereby avoid the above problems.  Also, it will
convert short-header or compressed values into canonical format, which will
avoid possible unexpected lack-of-equality issues for those cases too.
And it provides a last-ditch defense against putting a toasted value into
a Const, which we already knew was dangerous, cf commit 2b0c86b665.
(In the light of this discussion, I'm no longer sure that that commit
provided 100% protection against such cases, but this fix should do it.)

The test added in commit 65c3d05e18 to catch datatype input functions
with unstable results would fail for functions that returned expanded
values; but it seems a bit uncharitable to deem a result unstable just
because it's expressed in expanded form, so revise the coding so that we
check for bitwise equality only after applying pg_detoast_datum().  That's
a sufficient condition anyway given the new rule about detoasting when
forming a Const.

Back-patch to 9.5 where the expanded-object facility was added.  It's
possible that this should go back further; but in the absence of clear
evidence that there's any live bug in older branches, I'll refrain for now.
2016-01-21 12:55:59 -05:00
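
A minimal C sketch of the detoasting rule the commit above establishes; the helper and argument names are illustrative, not the actual core code:

```c
#include "postgres.h"
#include "fmgr.h"
#include "nodes/makefuncs.h"

/*
 * Flatten a varlena datum before wrapping it in a Const, so that expanded,
 * toasted, compressed, or short-header values end up in canonical format.
 */
static Const *
make_varlena_const(Oid typid, int32 typmod, Oid collation,
                   Datum value, bool isnull)
{
    if (!isnull)
        value = PointerGetDatum(PG_DETOAST_DATUM(value));

    /* constlen -1, constbyval false: this helper covers varlena types only */
    return makeConst(typid, typmod, collation, -1, value, isnull, false);
}
```

The detoast call is cheap for ordinary in-line values, so applying it unconditionally when forming a Const costs little while closing the equality and mutability hazards described above.
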
Bruce Momjian
34bda20ae5 Properly install dynloader.h on MSVC builds
This will enable PL/Java to be cleanly compiled, as dynloader.h is a
requirement.

Report by Chapman Flack

Patch by Michael Paquier

Backpatch through 9.1
2016-01-19 23:30:29 -05:00
Tatsuo Ishii
77a1863c94 Fix typo.
Reported by KOIZUMI Satoru.
2016-01-18 21:28:10 +09:00
Tom Lane
6b3b3a502e Remove dead code in pg_dump.
Coverity quite reasonably complained that this check for fout==NULL
occurred after we'd already dereferenced fout.  However, the check
is just dead code since there is no code path by which CreateArchive
can return a null pointer.  Errors such as can't-open-that-file are
reported down inside CreateArchive, and control doesn't return.
So let's silence the warning by removing the dead code, rather than
continuing to pretend it does something.

Coverity didn't complain about this before 5b5fea2a1, so back-patch
to 9.5 like that patch.
2016-01-17 11:38:55 -05:00
Robert Haas
0760225ccf Fix spelling mistake.
Same patch submitted independently by David Rowley and Peter Geoghegan.
2016-01-14 23:15:04 -05:00
Magnus Hagander
3276ca303d Properly close token in SSPI authentication
We can never leak more than one token, but we shouldn't do that. We
don't bother closing it in the error paths since the process will
exit shortly anyway.

Christian Ullrich
2016-01-14 13:07:20 +01:00
Tom Lane
c42df2d46c Handle extension members when first setting object dump flags in pg_dump.
pg_dump's original approach to handling extension member objects was to
run around and clear (or set) their dump flags rather late in its data
collection process.  Unfortunately, quite a lot of code expects those flags
to be valid before that; which was an entirely reasonable expectation
before we added extensions.  In particular, this explains Karsten Hilbert's
recent report of pg_upgrade failing on a database in which an extension
has been installed into the pg_catalog schema.  Its objects are initially
marked as not-to-be-dumped on the strength of their schema, and later we
change them to must-dump because we're doing a binary upgrade of their
extension; but we've already skipped essential tasks like making associated
DO_SHELL_TYPE objects.

To fix, collect extension membership data first, and incorporate it in the
initial setting of the dump flags, so that those are once again correct
from the get-go.  This has the undesirable side effect of slightly
lengthening the time taken before pg_dump acquires table locks, but testing
suggests that the increase in that window is not very much.

Along the way, get rid of ugly special-case logic for deciding whether
to dump procedural languages, FDWs, and foreign servers; dump decisions
for those are now correct up-front, too.

In 9.3 and up, this also fixes erroneous logic about when to dump event
triggers (basically, they were *always* dumped before).  In 9.5 and up,
transform objects had that problem too.

Since this problem came in with extensions, back-patch to all supported
versions.
2016-01-13 18:55:27 -05:00
Tom Lane
6adba13e42 Access pg_dump's options structs through Archive struct, not directly.
Rather than passing around DumpOptions and RestoreOptions as separate
arguments, add fields to struct Archive to carry pointers to these objects,
and access them through those fields when needed.  There already was a
RestoreOptions pointer in Archive, though for no obvious reason it was part
of the "private" struct rather than out where pg_dump.c could see it.

Doing this allows reversion of quite a lot of parameter-addition changes
made in commit 0eea8047bf, which is a good thing IMO because this will
reduce the code delta between 9.4 and 9.5, probably easing a few future
back-patch efforts.  Moreover, the previous commit only added a DumpOptions
argument to functions that had to have it at the time, which means we could
anticipate still more code churn (and more back-patch hazard) as the
requirement spread further.  I'd hit exactly that problem in my upcoming
patch to fix extension membership marking, which is what motivated me to
do this.
2016-01-13 17:48:33 -05:00
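
A rough C sketch of the arrangement this commit describes; the field names are assumptions for illustration, not a copy of the real pg_backup.h definitions:

```c
typedef struct DumpOptions DumpOptions;         /* left opaque here */
typedef struct RestoreOptions RestoreOptions;   /* left opaque here */

typedef struct Archive
{
    int             verbose;    /* example of a pre-existing public field */
    DumpOptions    *dopt;       /* dump-time options, reached via the handle */
    RestoreOptions *ropt;       /* restore-time options, likewise */
} Archive;
```

Code that already receives an Archive pointer can then consult fout->dopt or fout->ropt directly instead of growing extra parameters.
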
Tom Lane
5ef26b8de3 Use LOAD not actual code execution to pull in plpython library.
Commit 866566a690 is insufficient to prevent dump/reload failures
when using transform modules in a database with both plpython2 and
plpython3 installed.  The reason is that the transform extension scripts
use DO blocks as a mechanism to pull in the libpython library before
creating the transform function.  It's necessary to preload the library
because the dynamic loader won't do it for us on every platform, leading
to "unresolved symbol" failures when the transform library is loaded.
But it's *not* necessary to execute Python code, and doing so will
provoke a multiple-Pythons-are-loaded error even after the preceding
commit.

To fix, use LOAD instead of a DO block.  That requires superuser privilege,
but creation of a C function does anyway.  It also embeds knowledge of
the underlying library name for each PL language; but that's wired into
the initdb-time contents of pg_pltemplate too, so that doesn't seem like
a large problem either.  Note that CREATE TRANSFORM as such doesn't call
the language module at all.

Per a report from Paul Jones.  Back-patch to 9.5 where transform modules
were introduced.
2016-01-11 20:06:47 -05:00
Tom Lane
db8fa56d6a Avoid dump/reload problems when using both plpython2 and plpython3.
Commit 803716013d installed a safeguard against loading plpython2
and plpython3 at the same time, but asserted that both could still be
used in the same database, just not in the same session.  However, that's
not actually all that practical because dumping and reloading will fail
(since both libraries necessarily get loaded into the restoring session).
pg_upgrade is even worse, because it checks for missing libraries by
loading every .so library mentioned in the entire installation into one
session, so that you can have only one across the whole cluster.

We can improve matters by not throwing the error immediately in _PG_init,
but only when and if we're asked to do something that requires calling
into libpython.  This ameliorates both of the above situations, since
while execution of CREATE LANGUAGE, CREATE FUNCTION, etc will result in
loading plpython, it isn't asked to do anything interesting (at least
not if check_function_bodies is off, as it will be during a restore).

It's possible that this opens some corner-case holes in which a crash
could be provoked with sufficient effort.  However, since plpython
only exists as an untrusted language, any such crash would require
superuser privileges, making it "don't do that" not a security issue.
To reduce the hazards in this area, the error is still FATAL when it
does get thrown.

Per a report from Paul Jones.  Back-patch to 9.2, which is as far back
as the patch applies without work.  (It could be made to work in 9.1,
but given the lack of previous complaints, I'm disinclined to expend
effort so far back.  We've been pretty desultory about support for
Python 3 in 9.1 anyway.)
2016-01-11 19:55:39 -05:00
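
A hedged C sketch of the "defer the error out of _PG_init()" pattern described above; the rendezvous-variable name and both function names are illustrative, not the real plpython code:

```c
#include "postgres.h"
#include "fmgr.h"

PG_MODULE_MAGIC;

static bool other_python_loaded = false;

void        _PG_init(void);

void
_PG_init(void)
{
    /* illustrative conflict check via a shared rendezvous variable */
    void      **rv = find_rendezvous_variable("illustrative_python_flag");

    if (*rv != NULL)
        other_python_loaded = true; /* remember the conflict, don't ereport() yet */
    else
        *rv = (void *) 1;           /* record our own presence */
}

/* called only at the point where the module actually has to enter libpython */
static void
ensure_libpython_usable(void)
{
    if (other_python_loaded)
        ereport(FATAL,
                (errmsg("cannot use both Python major versions in one session")));
}
```

Because CREATE LANGUAGE, CREATE FUNCTION, and a restore with check_function_bodies off never reach the deferred check, dump/reload and pg_upgrade can load both libraries without failing.
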
Peter Eisentraut
c43129e714 doc: Fix typo in logical decoding documentation
From: Petr Jelinek <petr@2ndquadrant.com>
2016-01-10 20:13:59 -05:00
Tom Lane
57da416933 Remove a useless PG_GETARG_DATUM() call from jsonb_build_array.
This loop uselessly fetched the argument after the one it's currently
looking at.  No real harm is done since we couldn't possibly fetch off
the end of memory, but it's confusing to the reader.

Also remove a duplicate (and therefore confusing) PG_ARGISNULL check in
jsonb_build_object.

I happened to notice these things while trolling for missed null-arg
checks earlier today.  Back-patch to 9.5, not because there is any
real bug, but just because 9.5 and HEAD are still in sync in this
file and we might as well keep them so.

In passing, re-pgindent.
2016-01-09 17:39:45 -05:00
Tom Lane
aa7e04cb56 Clean up some lack-of-STRICT issues in the core code, too.
A scan for missed proisstrict markings in the core code turned up
these functions:

brin_summarize_new_values
pg_stat_reset_single_table_counters
pg_stat_reset_single_function_counters
pg_create_logical_replication_slot
pg_create_physical_replication_slot
pg_drop_replication_slot

The first three of these take OID, so a null argument will normally look
like a zero to them, resulting in "ERROR: could not open relation with OID
0" for brin_summarize_new_values, and no action for the pg_stat_reset_XXX
functions.  The other three will dump core on a null argument, though this
is mitigated by the fact that they won't do so until after checking that
the caller is superuser or has rolreplication privilege.

In addition, the pg_logical_slot_get/peek[_binary]_changes family was
intentionally marked nonstrict, but failed to make nullness checks on all
the arguments; so again a null-pointer-dereference crash is possible but
only for superusers and rolreplication users.

Add the missing ARGISNULL checks to the latter functions, and mark the
former functions as strict in pg_proc.  Make that change in the back
branches too, even though we can't force initdb there, just so that
installations initdb'd in future won't have the issue.  Since none of these
bugs rise to the level of security issues (and indeed the pg_stat_reset_XXX
functions hardly misbehave at all), it seems sufficient to do this.

In addition, fix some order-of-operations oddities in the slot_get_changes
family, mostly cosmetic, but not the part that moves the function's last
few operations into the PG_TRY block.  As it stood, there was significant
risk for an error to exit without clearing historical information from
the system caches.

The slot_get_changes bugs go back to 9.4 where that code was introduced.
Back-patch appropriate subsets of the pg_proc changes into all active
branches, as well.
2016-01-09 16:58:32 -05:00
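
A hypothetical C-language function (not one of the core functions named above) showing the rule at issue: a function that is not marked STRICT must test PG_ARGISNULL() for every argument before fetching it:

```c
#include "postgres.h"
#include "fmgr.h"

PG_MODULE_MAGIC;

PG_FUNCTION_INFO_V1(octet_length_nonstrict);

Datum
octet_length_nonstrict(PG_FUNCTION_ARGS)
{
    text       *t;

    /* without this check, a NULL input becomes a null-pointer dereference */
    if (PG_ARGISNULL(0))
        PG_RETURN_NULL();

    t = PG_GETARG_TEXT_PP(0);
    PG_RETURN_INT32(VARSIZE_ANY_EXHDR(t));
}
```

The alternative fix taken for the first six functions above is simply to declare them STRICT in pg_proc, in which case the executor never calls them with a NULL argument at all.
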
Tom Lane
605afeca34 Clean up code for widget_in() and widget_out().
Given syntactically wrong input, widget_in() could call atof() with an
indeterminate pointer argument, typically leading to a crash; or if it
didn't do that, it might return a NULL pointer, which again would lead
to a crash since old-style C functions aren't supposed to do things
that way.  Fix that by correcting the off-by-one syntax test and
throwing a proper error rather than just returning NULL.

Also, since widget_in and widget_out have been marked STRICT for a
long time, their tests for null inputs are just dead code; remove 'em.
In the oldest branches, also improve widget_out to use snprintf not
sprintf, just to be sure.

In passing, get rid of a long-since-useless sprintf into a local buffer
that nothing further is done with, and make some other minor coding
style cleanups.

In the intended regression-testing usage of these functions, none of
this is very significant; but if the regression test database were
left around in a production installation, these bugs could amount
to a minor security hazard.

Piotr Stefaniak, Michael Paquier, and Tom Lane
2016-01-09 13:44:27 -05:00
Tom Lane
4e1ccc7e44 Add STRICT to some C functions created by the regression tests.
These functions readily crash when passed a NULL input value.  The tests
themselves do not pass NULL values to them; but when the regression
database is used as a basis for fuzz testing, they cause a lot of noise.
Also, if someone were to leave a regression database lying about in a
production installation, these would create a minor security hazard.

Andreas Seltenreich
2016-01-09 13:03:10 -05:00
Tom Lane
a66c1fcdd7 PL/Python: Make tests pass with Python 3.5
The error message wording for AttributeError has changed in Python 3.5.
For the plpython_error test, add a new expected file.  In the
plpython_subtransaction test, we didn't really care what the exception
is, only that it is something coming from Python.  So use a generic
exception instead, which has a message that doesn't vary across
versions.

Back-patch of commit f16d52269a, which
was previously back-patched into 9.2-9.4, but missed 9.5.
2016-01-08 11:39:28 -05:00
Magnus Hagander
6a1a44037b Fix typo in comment
Tatsuro Yamada
2016-01-08 08:56:03 +01:00
Tom Lane
506ef1d07f Fix unobvious interaction between -X switch and subdirectory creation.
Turns out the only reason initdb -X worked is that pg_mkdir_p won't
whine if you point it at something that's a symlink to a directory.
Otherwise, the attempt to create pg_xlog/ just like all the other
subdirectories would have failed.  Let's be a little more explicit
about what's happening.  Oversight in my patch for bug #13853
(mea culpa for not testing -X ...)
2016-01-07 18:20:57 -05:00
Tom Lane
32e7e7fa77 Use plain mkdir() not pg_mkdir_p() to create subdirectories of PGDATA.
When we're creating subdirectories of PGDATA during initdb, we know darn
well that the parent directory exists (or should exist) and that the new
subdirectory doesn't (or shouldn't).  There is therefore no need to use
anything more complicated than mkdir().  Using pg_mkdir_p() just opens us
up to unexpected failure modes, such as the one exhibited in bug #13853
from Nuri Boardman.  It's not very clear why pg_mkdir_p() went wrong there,
but it is clear that we didn't need to be trying to create parent
directories in the first place.  We're not even saving any code, as proven
by the fact that this patch nets out at minus five lines.

Since this is a response to a field bug report, back-patch to all branches.
2016-01-07 15:22:01 -05:00
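
A generic POSIX sketch of the simpler approach, not the actual initdb code:

```c
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <sys/stat.h>
#include <sys/types.h>

/*
 * The parent directory is known to exist, so a plain mkdir() suffices;
 * recursing up the path as pg_mkdir_p() does only adds failure modes.
 */
static int
make_subdirectory(const char *path)
{
    if (mkdir(path, S_IRWXU) < 0)
    {
        fprintf(stderr, "could not create directory \"%s\": %s\n",
                path, strerror(errno));
        return -1;
    }
    return 0;
}
```
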
Alvaro Herrera
744d01c9a1 Windows: Make pg_ctl reliably detect service status
pg_ctl is using isatty() to verify whether the process is running in a
terminal, and if not it sends its output to Windows' Event Log ... which
does the wrong thing when the output has been redirected to a pipe, as
reported in bug #13592.

To fix, make pg_ctl use the code we already have to detect service-ness:
in the master branch, move src/backend/port/win32/security.c to src/port
(with suitable tweaks so that it runs properly in backend and frontend
environments); pg_ctl already has access to pgport so it Just Works.  In
older branches, that's likely to cause trouble, so instead duplicate the
required code in pg_ctl.c.

Author: Michael Paquier
Bug report and diagnosis: Egon Kocjan
Backpatch: all supported branches
2016-01-07 11:59:08 -03:00
Tatsuo Ishii
9919f4fef1 Fix typo in create_transform.sgml. 2016-01-06 08:01:40 +09:00
Tom Lane
8805af0886 Sort $(wildcard) output where needed for reproducible build output.
The order of inclusion of .o files makes a difference in linker output;
not a functional difference, but still a bitwise difference, which annoys
some packagers who would like reproducible builds.

Report and patch by Christoph Berg
2016-01-05 15:47:05 -05:00
Alvaro Herrera
7ef311eb42 Make pg_receivexlog silent with 9.3 and older servers
A pointless and confusing error message is shown to the user when
attempting to identify a 9.3 or older remote server with a 9.5/9.6
pg_receivexlog, because the return signature of IDENTIFY_SYSTEM was
changed in 9.4.  There's no good reason for the warning message, so
shuffle code around to keep it quiet.

(pg_recvlogical is also affected by this commit, but since it obviously
cannot work with 9.3 that doesn't actually matter much.)

Backpatch to 9.5.

Reported by Marco Nenciarini, who also wrote the initial patch.  Further
tweaked by Robert Haas and Fujii Masao; reviewed by Michael Paquier and
Craig Ringer.
2016-01-05 17:25:12 -03:00
Tom Lane
41abf18502 Add to_regnamespace() and to_regrole() to the documentation.
Commits cb9fa802b3 and 0c90f6769d added these functions,
but did not bother with documentation.
2016-01-05 12:35:18 -05:00
Tom Lane
cdd4ed5449 Stamp 9.5.0. 2016-01-04 16:29:34 -05:00
Tom Lane
d878b115c3 Docs: provide a concrete discussion and example for RLS race conditions.
Commit 43cd468cf0 added some wording to create_policy.sgml purporting
to warn users against a race condition of the sort that had been noted some
time ago by Peter Geoghegan.  However, that warning was far too vague to be
useful (or at least, I completely failed to grasp what it was on about).
Since the problem case occurs with a security design pattern that lots of
people are likely to try to use, we need to be as clear as possible about
it.  Provide a concrete example in the main-line docs in place of the
original warning.
2016-01-04 15:11:44 -05:00
Tom Lane
6a77404f5c Adjust behavior of row_security GUC to match the docs.
Some time back we agreed that row_security=off should not be a way to
bypass RLS entirely, but only a way to get an error if it was being
applied.  However, the code failed to act that way for table owners.
Per discussion, this is a must-fix bug for 9.5.0.

Adjust the logic in rls.c to behave as expected; also, modify the
error message to be more consistent with the new interpretation.
The regression tests need minor corrections as well.  Also update
the comments about row_security in ddl.sgml to be correct.  (The
official description of the GUC in config.sgml is already correct.)

I failed to resist the temptation to do some other very minor
cleanup as well, such as getting rid of a duplicate extern declaration.
2016-01-04 12:21:43 -05:00
Robert Haas
fa39e891b0 Fix typo in comment.
Masahiko Sawada
2016-01-04 10:13:04 -05:00
Peter Eisentraut
00dfd5c94c Translation updates
Source-Git-URL: git://git.postgresql.org/git/pgtranslation/messages.git
Source-Git-Hash: 3b0ccc27cf917446ea0a6c680b70534cfcaba81e
2016-01-04 08:18:48 -05:00
Tom Lane
de93252386 Fix regrole and regnamespace output functions to do quoting, too.
We discussed this but somehow failed to implement it...
2016-01-04 01:53:36 -05:00
Tom Lane
fa038f830b Fix regrole and regnamespace types to honor quoting like other reg* types.
Aside from any consistency arguments, this is logically necessary because
the I/O functions for these types also handle numeric OID values.  Without
a quoting rule it is impossible to distinguish numeric OIDs from role or
namespace names that happen to contain only digits.

Also change the to_regrole and to_regnamespace functions to dequote their
arguments.  While not logically essential, this seems like a good idea
since the other to_reg* functions do it.  Anyone who really wants raw
lookup of an uninterpreted name can fall back on the time-honored solution
of (SELECT oid FROM pg_namespace WHERE nspname = whatever).

Report and patch by Jim Nasby, reviewed by Michael Paquier
2016-01-04 01:03:53 -05:00
Tom Lane
c244a511ba Fix bogus lock release in RemovePolicyById and RemoveRoleFromObjectPolicy.
Can't release the AccessExclusiveLock on the target table until commit.
Otherwise there is a race condition whereby other backends might service
our cache invalidation signals before they can actually see the updated
catalog rows.

Just to add insult to injury, RemovePolicyById was closing the rel (with
incorrect lock drop) and then passing the now-dangling rel pointer to
CacheInvalidateRelcache.  Probably the reason this doesn't fall over on
CLOBBER_CACHE buildfarm members is that some outer level of the DROP logic
is still holding the rel open ... but it'd have bit us on the arse
eventually, no doubt.
2016-01-03 20:53:41 -05:00
Tom Lane
35adf6e44c Do some copy-editing on the docs for row-level security.
Clarifications, markup improvements, corrections of misleading or
outright wrong statements.
2016-01-03 20:04:11 -05:00
Tom Lane
ab1f08a3a4 Guard against null arguments in binary_upgrade_create_empty_extension().
The CHECK_IS_BINARY_UPGRADE macro is not sufficient security protection
if we're going to dereference pass-by-reference arguments before it.

But in any case we really need to explicitly check PG_ARGISNULL for all
the arguments of a non-strict function, not only the ones we expect null
values for.

Oversight in commits 30982be4e5 and
f92fc4c95d.  Found by Andreas Seltenreich.
(The other usages in pg_upgrade_support.c seem safe.)
2016-01-03 16:26:38 -05:00
Tom Lane
2e5c9284f6 Do some copy-editing on the docs for replication origins.
Minor grammar and markup improvements.
2016-01-03 16:03:42 -05:00
Tom Lane
78d0e582ab Do a final round of copy-editing on the 9.5 release notes. 2016-01-03 15:33:12 -05:00
Tom Lane
29692bdbb1 Fix treatment of *lpNumberOfBytesRecvd == 0: that's a completion condition.
pgwin32_recv() has treated a non-error return of zero bytes from WSARecv()
as being a reason to block ever since the current implementation was
introduced in commit a4c40f140d.  However, so far as one can tell
from Microsoft's documentation, that is just wrong: what it means is
graceful connection closure (in stream protocols) or receipt of a
zero-length message (in message protocols), and neither case should result
in blocking here.  The only reason the code worked at all was that control
then fell into the retry loop, which did *not* treat zero bytes specially,
so we'd get out after only wasting some cycles.  But as of 9.5 we do not
normally reach the retry loop and so the bug is exposed, as reported by
Shay Rojansky and diagnosed by Andres Freund.

Remove the unnecessary test on the byte count, and rearrange the code
in the retry loop so that it looks identical to the initial sequence.

Back-patch to 9.5.  The code is wrong all the way back, AFAICS, but
since it's relatively harmless in earlier branches we'll leave it alone.
2016-01-03 13:56:29 -05:00
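
A generic POSIX sketch (not the actual pgwin32_recv() code) of the rule the commit above applies: zero bytes with no error is EOF and must be returned, not treated as a reason to keep waiting:

```c
#include <errno.h>
#include <poll.h>
#include <sys/socket.h>
#include <sys/types.h>

static ssize_t
read_some(int sock, char *buf, size_t len)
{
    for (;;)
    {
        ssize_t     n = recv(sock, buf, len, 0);

        if (n >= 0)
            return n;           /* n == 0 is graceful EOF: hand it back */
        if (errno == EINTR)
            continue;           /* interrupted by a signal: just retry */
        if (errno == EAGAIN || errno == EWOULDBLOCK)
        {
            struct pollfd pfd = {sock, POLLIN, 0};

            (void) poll(&pfd, 1, -1);   /* wait until readable, then retry */
            continue;
        }
        return -1;              /* genuine error */
    }
}
```
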
Tom Lane
b01828e97d Teach pg_dump to quote reloption values safely.
Commit c7e27becd2 fixed this on the backend side, but we neglected
the fact that several code paths in pg_dump were printing reloptions
values that had not gotten massaged by ruleutils.  Apply essentially the
same quoting logic in those places, too.
2016-01-02 19:04:45 -05:00
Tom Lane
7013035900 Fix overly-strict assertions in spgtextproc.c.
spg_text_inner_consistent is capable of reconstructing an empty string
to pass down to the next index level; this happens if we have an empty
string coming in, no prefix, and a dummy node label.  (In practice, what
is needed to trigger that is insertion of a whole bunch of empty-string
values.)  Then, we will arrive at the next level with in->level == 0
and a non-NULL (but zero length) in->reconstructedValue, which is valid
but the Assert tests weren't expecting it.

Per report from Andreas Seltenreich.  This has no impact in non-Assert
builds, so should not be a problem in production, but back-patch to
all affected branches anyway.

In passing, remove a couple of useless variable initializations and
shorten the code by not duplicating DatumGetPointer() calls.
2016-01-02 16:24:59 -05:00
Tom Lane
9200e56644 Adjust back-branch release note description of commits a2a718b22 et al.
As pointed out by Michael Paquier, recovery_min_apply_delay didn't exist
in 9.0-9.3, making the release note text not very useful.  Instead make it
talk about recovery_target_xid, which did exist then.

9.0 is already out of support, but we can fix the text in the newer
branches' copies of its release notes.
2016-01-02 15:29:03 -05:00
Bruce Momjian
d47bc474b3 Update copyright for 2016
Backpatch certain files through 9.1
2016-01-02 13:33:39 -05:00
Tom Lane
404c45bac6 Teach flatten_reloptions() to quote option values safely.
flatten_reloptions() supposed that it didn't really need to do anything
beyond inserting commas between reloption array elements.  However, in
principle the value of a reloption could be nearly anything, since the
grammar allows a quoted string there.  Any restrictions on it would come
from validity checking appropriate to the particular option, if any.

A reloption value that isn't a simple identifier or number could thus lead
to dump/reload failures due to syntax errors in CREATE statements issued
by pg_dump.  We've gotten away with not worrying about this so far with
the core-supported reloptions, but extensions might allow reloption values
that cause trouble, as in bug #13840 from Kouhei Sutou.

To fix, split the reloption array elements explicitly, and then convert
any value that doesn't look like a safe identifier to a string literal.
(The details of the quoting rule could be debated, but this way is safe
and requires little code.)  While we're at it, also quote reloption names
if they're not safe identifiers; that may not be a likely problem in the
field, but we might as well try to be bulletproof here.

It's been like this for a long time, so back-patch to all supported
branches.

Kouhei Sutou, adjusted some by me
2016-01-01 15:27:53 -05:00
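
A C sketch of the quoting rule described above; the helper name and the exact "safe identifier" test are illustrative rather than the real flatten_reloptions() code:

```c
#include "postgres.h"
#include "lib/stringinfo.h"

#include <ctype.h>

static void
append_reloption_value(StringInfo buf, const char *value)
{
    const char *p;
    bool        safe = (*value != '\0');

    /* emit unchanged only if it looks like a plain identifier or number */
    for (p = value; *p; p++)
    {
        if (!isalnum((unsigned char) *p) && *p != '_' && *p != '.')
        {
            safe = false;
            break;
        }
    }

    if (safe)
        appendStringInfoString(buf, value);
    else
    {
        /* otherwise emit a single-quoted literal, doubling embedded quotes */
        appendStringInfoChar(buf, '\'');
        for (p = value; *p; p++)
        {
            if (*p == '\'')
                appendStringInfoChar(buf, '\'');
            appendStringInfoChar(buf, *p);
        }
        appendStringInfoChar(buf, '\'');
    }
}
```
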
Tom Lane
d932391fd8 Add some more defenses against silly estimates to gincostestimate().
A report from Andy Colson showed that gincostestimate() was not being
nearly paranoid enough about whether to believe the statistics it finds in
the index metapage.  The problem is that the metapage stats (other than the
pending-pages count) are only updated by VACUUM, and in the worst case
could still reflect the index's original empty state even when it has grown
to many entries.  We attempted to deal with that by scaling up the stats to
match the current index size, but if nEntries is zero then scaling it up
still gives zero.  Moreover, the proportion of pages that are entry pages
vs. data pages vs. pending pages is unlikely to be estimated very well by
scaling if the index is now orders of magnitude larger than before.

We can improve matters by expanding the use of the rule-of-thumb estimates
I introduced in commit 7fb008c5ee: if the index has grown by more
than a cutoff amount (here set at 4X growth) since VACUUM, then use the
rule-of-thumb numbers instead of scaling.  This might not be exactly right
but it seems much less likely to produce insane estimates.

I also improved both the scaling estimate and the rule-of-thumb estimate
to account for numPendingPages, since it's reasonable to expect that that
is accurate in any case, and certainly pages that are in the pending list
are not either entry or data pages.

As a somewhat separate issue, adjust the estimation equations that are
concerned with extra fetches for partial-match searches.  These equations
suppose that a fraction partialEntries / numEntries of the entry and data
pages will be visited as a consequence of a partial-match search.  Now,
it's physically impossible for that fraction to exceed one, but our
estimate of partialEntries is mostly bunk, and our estimate of numEntries
isn't exactly gospel either, so we could arrive at a silly value.  In the
example presented by Andy we were coming out with a value of 100, leading
to insane cost estimates.  Clamp the fraction to one to avoid that.

Like the previous patch, back-patch to all supported branches; this
problem can be demonstrated in one form or another in all of them.
2016-01-01 13:42:30 -05:00
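
A small C sketch of the clamp described in the last paragraph; the variable names are illustrative, not the gincostestimate() code:

```c
#include "postgres.h"

/*
 * The fraction of entry/data pages visited by a partial-match search is
 * built from two shaky estimates, so force it into the physically
 * possible range before it feeds into the cost arithmetic.
 */
static double
partial_match_page_fraction(double partialEntries, double numEntries)
{
    double      frac;

    if (numEntries < 1.0)
        numEntries = 1.0;       /* avoid division by zero */

    frac = partialEntries / numEntries;

    return Min(frac, 1.0);      /* cannot visit more than all the pages */
}
```
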
Tom Lane
2d774aaf18 Split out pg_operator.h function declarations to new file pg_operator_fn.h.
Commit a2e35b53c3 added an #include of catalog/objectaddress.h to
pg_operator.h, making it impossible for client-side code to #include
pg_operator.h.  It's not entirely clear whether any client-side code needs
to include pg_operator.h, but it seems prudent to assume that there is some
such code somewhere.  Therefore, split off the function definitions into a
new file pg_operator_fn.h, similarly to what we've done for some other
catalog header files.

Back-patch of part of commit 0dab5ef39b.
2016-01-01 13:00:13 -05:00
Tom Lane
69892d58c9 Add a comment noting that FDWs don't have to implement EXCEPT or LIMIT TO.
postgresImportForeignSchema pays attention to IMPORT's EXCEPT and LIMIT TO
options, but only as an efficiency hack, not for correctness' sake.  The
FDW documentation does explain that, but someone using postgres_fdw.c
as a coding guide might not remember it, so let's add a comment here.
Per question from Regina Obe.
2015-12-31 17:59:16 -05:00