Commit Graph

27548 Commits

Author SHA1 Message Date
Tom Lane ff402ae11b Improve handling of password reuse in src/bin/scripts programs.
This reverts most of commit 83dec5a71 in favor of having connectDatabase()
store the possibly-reusable password in a static variable, similar to the
coding we've had for a long time in pg_dump's version of that function.
To avoid possible problems with unwanted password reuse, make callers
specify whether it's reasonable to attempt to re-use the password.
This is a wash for cases where re-use isn't needed, but it is far simpler
for callers that do want that.  Functionally there should be no difference.
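
For illustration only (not code from the commit), a minimal sketch of the
static-variable pattern, using a hypothetical connect_database() wrapper in
place of the real connectDatabase():

    #include <stdbool.h>
    #include <stddef.h>
    #include <libpq-fe.h>

    /* Password that worked last time, kept for possible reuse. */
    static char *saved_password = NULL;

    PGconn *
    connect_database(const char *conninfo, bool allow_password_reuse)
    {
        const char *keywords[] = {"dbname", "password", NULL};
        const char *values[] = {conninfo,
                                allow_password_reuse ? saved_password : NULL,
                                NULL};
        PGconn     *conn = PQconnectdbParams(keywords, values, 1);

        /* On an authentication failure the caller would prompt the user,
         * store the answer in saved_password, and retry, as described
         * above. */
        return conn;
    }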

Even though we're past RC1, it seems like a good idea to back-patch this
into 9.5, like the prior commit.  Otherwise, if there are any third-party
users of connectDatabase(), they'll have to deal with an API change in
9.5 and then another one in 9.6.

Michael Paquier
2015-12-23 15:45:43 -05:00
Tom Lane 1aa41e3eae In pg_dump, remember connection passwords no matter how we got them.
When pg_dump prompts the user for a password, it remembers the password
for possible re-use by parallel worker processes.  However, libpq might
have extracted the password from a connection string originally passed
as "dbname".  Since we don't record the original form of dbname but
break it down to host/port/etc, the password gets lost.  Fix that by
retrieving the actual password from the PGconn.
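
As an illustrative sketch (not the commit's code), libpq's PQpass() is what
lets a caller read back whichever password the connection actually used:

    #include <stdlib.h>
    #include <string.h>
    #include <libpq-fe.h>

    /* Remember whatever password libpq ended up using, even if it arrived
     * embedded in a "dbname" connection string rather than via a prompt. */
    static char *
    remember_password(const PGconn *conn)
    {
        const char *pw = PQpass(conn);

        return (pw && pw[0] != '\0') ? strdup(pw) : NULL;
    }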

(It strikes me that this whole approach is rather broken, as it will also
lose other information such as options that might have been present in
the connection string.  But we'll leave that problem for another day.)

In passing, get rid of rather silly use of malloc() for small fixed-size
arrays.

Back-patch to 9.3 where parallel pg_dump was introduced.

Report and fix by Zeus Kronion, adjusted a bit by Michael Paquier and me
2015-12-23 14:25:53 -05:00
Robert Haas bc7fcab5e3 Read from the same worker repeatedly until it returns no tuple.
The original coding read tuples from workers in round-robin fashion,
but performance testing shows that it works much better to read enough
to empty one queue before moving on to the next.  I believe the
reason for this is that, with the old approach, we could easily wake
up a worker repeatedly to write only one new tuple into the shm_mq
each time.  With this approach, by the time the process gets scheduled,
it has a decent chance of being able to fill the entire buffer in
one go.
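
A minimal model of the new reading policy (Reader and try_read_tuple() are
invented stand-ins here, not the executor's actual types):

    #include <stddef.h>

    typedef struct Reader Reader;

    /* Returns the next tuple from one worker's queue, or NULL if that
     * queue is currently empty.  (Hypothetical helper.) */
    extern void *try_read_tuple(Reader *reader);

    /* Drain the current queue before advancing, instead of taking one
     * tuple per queue in round-robin order. */
    void *
    next_tuple(Reader **readers, int nreaders, int *current)
    {
        for (int tried = 0; tried < nreaders; tried++)
        {
            void *tuple = try_read_tuple(readers[*current]);

            if (tuple != NULL)
                return tuple;       /* keep reading this worker next time */
            *current = (*current + 1) % nreaders;
        }
        return NULL;                /* nothing available from any worker */
    }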

Patch by me.  Dilip Kumar helped with performance testing.
2015-12-23 14:06:52 -05:00
Robert Haas 51d152f18e Change Gather not to use a physical tlist.
This should have been part of the original commit, but was missed.
Pushing data between processes is expensive, so we definitely want
to project away unneeded columns here, just as we do for other nodes
like Sort and Hash that care about the volume of data.
2015-12-23 13:41:06 -05:00
Peter Eisentraut 30c0c4bf12 Remove unnecessary escaping in C character literals
'\"' is more commonly written simply as '"'.
2015-12-22 22:43:46 -05:00
Tom Lane 6efbded6e4 Allow omitting one or both boundaries in an array slice specifier.
Omitted boundaries represent the upper or lower limit of the corresponding
array subscript.  This allows simpler specification of many common
use-cases.

(Revised version of commit 9246af6799)

YUriy Zhuravlev
2015-12-22 21:05:29 -05:00
Robert Haas 0ba3f3bc65 Comment improvements for abbreviated keys.
Peter Geoghegan and Robert Haas
2015-12-22 13:57:18 -05:00
Robert Haas ccd8f97922 postgres_fdw: Consider requesting sorted data so we can do a merge join.
When use_remote_estimate is enabled, consider adding ORDER BY to the
query we send to the remote server so that we can use that ordered
data for a merge join.  Commit f18c944b61
arranges to push down the query pathkeys, which seems like the case
most likely to be a win, but testing shows this can sometimes win,
too.

For a regular table, we know which indexes are present and therefore
test whether the ordering provided by each such index is useful.  Here,
we take the opposite approach: guess what orderings would be useful if
they could be generated cheaply, and then ask the remote side what those
will cost.

Ashutosh Bapat, with very substantial cosmetic revisions by me.  Also
reviewed by Rushabh Lathia.
2015-12-22 13:46:40 -05:00
Tom Lane f5a4370aea Fix calculation of space needed for parsed words in tab completion.
Yesterday in commit d854118c8, I had a serious brain fade leading me to
underestimate the number of words that the tab-completion logic could
divide a line into.  On input such as "(((((", each character will get
seen as a separate word, which means we do indeed sometimes need more
space for the words than for the original line.  Fix that.
2015-12-21 15:08:56 -05:00
Stephen Frost 6f8cb1e234 Make viewquery a copy in rewriteTargetView()
Rather than expect the Query returned by get_view_query() to be
read-only and then copy bits and pieces of it out, simply copy the
entire structure when we get it.  This addresses an issue where
AcquireRewriteLocks, which is called by acquireLocksOnSubLinks(),
scribbles on the parsetree passed in, which was actually an entry
in relcache, leading to segfaults with certain view definitions.
This also future-proofs us a bit for anyone adding more code to this
path.

The acquireLocksOnSubLinks() call was added in commit c3e0ddd40.

Back-patch to 9.3 as that commit was.
2015-12-21 10:34:14 -05:00
Tom Lane 99ccb23092 Remove silly completion for "DELETE FROM tabname ...".
psql offered USING, WHERE, and SET in this context, but SET is not a valid
possibility here.  Seems to have been a thinko in commit f5ab0a14ea
which added DELETE's USING option.
2015-12-20 18:29:51 -05:00
Tom Lane d854118c8d Teach psql's tab completion to consider the entire input string.
Up to now, the tab completion logic has only examined the last few words
of the current input line; "last few" being originally as few as four
words, but lately up to nine words.  Furthermore, it only looked at what
libreadline considers the current line of input, which made it rather
myopic if you split your command across lines.  This was tolerable,
sort of, so long as the match patterns were only designed to consider the
last few words of input; but with the recent addition of HeadMatches()
and Matches() matching rules, we really have to do better if we want
those to behave sanely.

Hence, change the code to break the entire line down into words, and to
include any previous lines in the command buffer along with the active
readline input buffer.

This will be a little bit slower than the previous coding, but some
measurements say that even a query of several thousand characters can be
parsed in a hundred or so microseconds on modern machines; so it's really
not going to be significant for interactive tab completion.  To reduce
the cost some, I arranged to avoid the per-word malloc calls that used
to occur: all the words are now kept in one malloc'd buffer.
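
A standalone sketch of the one-buffer splitting idea (heavily simplified:
real psql also has to handle quoting and parentheses, and the names here
are made up):

    #include <stdlib.h>
    #include <string.h>

    /* Split "line" into whitespace-separated words.  All word text lives
     * in one malloc'd buffer (*buf), so there is no per-word allocation. */
    char **
    split_words(const char *line, char **buf, int *nwords)
    {
        char   *copy = strdup(line);
        /* A line of length L can hold at most (L + 1) / 2 words. */
        char  **words = malloc((strlen(line) / 2 + 2) * sizeof(char *));
        int     n = 0;

        for (char *w = strtok(copy, " \t\n"); w; w = strtok(NULL, " \t\n"))
            words[n++] = w;

        *buf = copy;
        *nwords = n;
        return words;
    }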
2015-12-20 13:28:18 -05:00
Peter Eisentraut 69e7c44fc6 psql: Review of new help output strings 2015-12-20 11:50:04 -05:00
Tom Lane 654218138b Add missing COSTS OFF to EXPLAIN commands in rowsecurity.sql.
Commit e5e11c8cc added a bunch of EXPLAIN statements without COSTS OFF
to the regression tests.  This is contrary to project policy since it
results in unnecessary platform dependencies in the output (it's just
luck that we didn't get buildfarm failures from it).  Per gripe from
Mike Wilson.
2015-12-19 16:55:14 -05:00
Tom Lane d37b816dc9 Adopt a more compact, less error-prone notation for tab completion code.
Replace tests like

    else if (pg_strcasecmp(prev4_wd, "CREATE") == 0 &&
             pg_strcasecmp(prev3_wd, "TRIGGER") == 0 &&
             (pg_strcasecmp(prev_wd, "BEFORE") == 0 ||
              pg_strcasecmp(prev_wd, "AFTER") == 0))

with new notation like this:

    else if (TailMatches4("CREATE", "TRIGGER", MatchAny, "BEFORE|AFTER"))

In addition, provide some macros COMPLETE_WITH_LISTn() to reduce the amount
of clutter needed to specify a small number of predetermined completion
alternatives.
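
As a rough, standalone illustration of how the matching behind such notation
can work (word_matches() is a stand-in, not the actual tab-complete.c code):

    #include <stdbool.h>
    #include <string.h>
    #include <strings.h>

    #define MatchAny NULL

    /* Match one previous word against a pattern: NULL matches anything,
     * and "A|B" patterns accept any of the listed alternatives,
     * case-insensitively. */
    static bool
    word_matches(const char *pattern, const char *word)
    {
        if (pattern == NULL)
            return true;
        for (const char *alt = pattern; alt != NULL;)
        {
            const char *bar = strchr(alt, '|');
            size_t      len = bar ? (size_t) (bar - alt) : strlen(alt);

            if (strlen(word) == len && strncasecmp(alt, word, len) == 0)
                return true;
            alt = bar ? bar + 1 : NULL;
        }
        return false;
    }

TailMatches4("CREATE", "TRIGGER", MatchAny, "BEFORE|AFTER") then reduces to
four such word_matches() calls against the last four words of the input.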

This makes the code substantially more compact: tab-complete.c gets over a
thousand lines shorter in this patch, despite the addition of a couple of
hundred lines of infrastructure for the new notations.  The new way of
specifying match rules seems a whole lot more readable and less
error-prone, too.

There's a lot more that could be done now to make matching faster and more
reliable; for example I suspect that most of the TailMatches() rules should
now be Matches() rules.  That would allow them to be skipped after a single
integer comparison if there aren't the right number of words on the line,
and it would reduce the risk of unintended matches.  But for now, (mostly)
refrain from reworking any match rules in favor of just converting what
we've got into the new notation.

Thomas Munro, reviewed by Michael Paquier, some adjustments by me
2015-12-19 16:03:14 -05:00
Andres Freund 130d94a7b8 Fix tab completion for ALTER ... TABLESPACE ... OWNED BY.
Previously the completion used the wrong word to match 'BY'. This was
introduced brokenly, in b2de2a. While at it, also add completion of
IN TABLESPACE ... OWNED BY and fix comments referencing nonexistent
syntax.

Reported-By: Michael Paquier
Author: Michael Paquier and Andres Freund
Discussion: CAB7nPqSHDdSwsJqX0d2XzjqOHr==HdWiubCi4L=Zs7YFTUne8w@mail.gmail.com
Backpatch: 9.4, like the commit introducing the bug
2015-12-19 17:37:11 +01:00
Teodor Sigaev bbbd807097 Revert 9246af6799 because
I missed too much in it.  The patch is returned to the commitfest process.
2015-12-18 21:35:22 +03:00
Robert Haas 3c7042a7d7 pgbench: Change terminology from "threshold" to "parameter".
Per a recommendation from Tomas Vondra, it's more helpful to refer to
the value that determines how skewed a Gaussian or exponential
distribution is as a parameter rather than a threshold.

Since it's not quite too late to get this right in 9.5, where it was
introduced, back-patch this.  Most of the patch changes only comments
and documentation, but a few pgbench messages are altered to match.

Fabien Coelho, reviewed by Michael Paquier and by me.
2015-12-18 13:24:51 -05:00
Robert Haas 6e7b335930 Remove duplicate word.
Kyotaro Horiguchi
2015-12-18 12:43:52 -05:00
Robert Haas 2bdfcb52c5 Fix TupleQueueReaderNext not to ignore its nowait argument.
This was a silly goof on my (rhaas's) part.

Report and fix by Rushabh Lathia.
2015-12-18 12:37:43 -05:00
Robert Haas 4496226782 Fix copy-and-paste error in logical decoding callback.
This could result in the error context misidentifying where the error
actually occurred.

Craig Ringer
2015-12-18 12:17:35 -05:00
Robert Haas 9a51698bae Fix typo in comment.
Amit Langote
2015-12-18 12:03:15 -05:00
Teodor Sigaev 9246af6799 Allow to omit boundaries in array subscript
Allow omitting the lower or upper boundary, or both, in an array subscript
when selecting a slice of an array.

Author: YUriy Zhuravlev
2015-12-18 15:18:58 +03:00
Tom Lane 3d0c50ffa0 Remove unreferenced function declarations.
datapagemap_create() and datapagemap_destroy() were declared extern,
but they don't actually exist anywhere.  Per YUriy Zhuravlev and
Michael Paquier.
2015-12-17 20:21:42 -05:00
Tom Lane c4a8812cf6 Use just one standalone-backend session for initdb's post-bootstrap steps.
Previously, each subroutine in initdb fired up its own standalone backend
session.  Over time we'd grown as many as fifteen of these sessions,
and the cumulative startup and shutdown work for them was getting pretty
noticeable.  Combining things so that all these steps share a single
backend session cuts a good 10% off the total runtime of initdb, more
if you're not fsync'ing.

The main stumbling block to doing this before was that some of the sessions
were run with -j and some not.  The improved definition of -j mode
implemented by my previous commit makes it possible to fix that by running
all the post-bootstrap steps with -j; we just have to use double instead of
single newlines to end command strings.  (This is only absolutely necessary
around the VACUUM and CREATE DATABASE steps, since those can't be run in a
transaction block.  But it seems best to make them all use double newlines
so that the commands remain separate for error-reporting purposes.)

A minor disadvantage is that since initdb can't tell how much of its
output the backend has executed, we can no longer have the per-step
progress reporting initdb used to print.  But things are fast enough
nowadays that that's not really all that useful anyway.

In passing, add more const decoration to some of the static arrays in
initdb.c.
2015-12-17 19:38:21 -05:00
Tom Lane 66d947b9d3 Adjust behavior of single-user -j mode for better initdb error reporting.
Previously, -j caused the entire input file to be read in and executed as
a single command string.  That's undesirable, not least because any error
causes the entire file to be regurgitated as the "failing query".  Some
experimentation suggests a better rule: end the command string when we see
a semicolon immediately followed by two newlines, ie, an empty line after
a query.  This serves nicely to break up the existing examples such as
information_schema.sql and system_views.sql.  A limitation is that it's
no longer possible to write such a sequence within a string literal or
multiline comment in a file meant to be read with -j; but there are no
instances of such a problem within the data currently used by initdb.
(If someone does make such a mistake in future, it'll be obvious because
they'll get an unterminated-literal or unterminated-comment syntax error.)
Other than that, there shouldn't be any negative consequences; you're not
forced to end statements that way, it's just a better idea in most cases.
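
For illustration, a tiny standalone scanner embodying that rule (not the
backend's actual parsing code):

    #include <stddef.h>
    #include <string.h>

    /* Return a pointer just past the first command terminator in "buf":
     * a semicolon immediately followed by two newlines (an empty line
     * after the query).  NULL means no complete command yet. */
    static const char *
    end_of_command(const char *buf)
    {
        const char *p = strstr(buf, ";\n\n");

        return p ? p + 3 : NULL;
    }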

In passing, remove src/include/tcop/tcopdebug.h, which is dead code
because it's not included anywhere, and hasn't been for more than
ten years.  One of the debug-support symbols it purported to describe
has been unreferenced for at least the same amount of time, and the
other is removed by this commit on the grounds that it was useless:
forcing -j mode all the time would have broken initdb.  The lack of
complaints about that, or about the missing inclusion, shows that
no one has tried to use TCOP_DONTUSENEWLINE in many years.
2015-12-17 19:34:15 -05:00
Tom Lane aee7705be5 Fix improper initialization order for readline.
Turns out we must set rl_basic_word_break_characters *before* we call
rl_initialize() the first time, because it will quietly copy that value
elsewhere --- but only on the first call.  (Love these undocumented
dependencies.)  I broke this yesterday in commit 2ec477dc8108339d;
like that commit, back-patch to all active branches.  Per report from
Pavel Stehule.
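
Illustrative ordering only (not psql's code; the break-character set shown
is just an example):

    #include <stdio.h>
    #include <readline/readline.h>

    static void
    init_readline(void)
    {
        /* Must be assigned *before* the first rl_initialize() call, since
         * readline copies the value elsewhere only on that first call. */
        rl_basic_word_break_characters = " \t\n\"'`@$><=;|&{(";

        rl_initialize();
    }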
2015-12-17 16:55:23 -05:00
Alvaro Herrera 756e7b4c9d Rework internals of changing a type's ownership
This is necessary so that REASSIGN OWNED does the right thing with
composite types, to wit, that it also alters ownership of the type's
pg_class entry -- previously, the pg_class entry remained owned by the
original user, which later caused other failures such as the new owner's
inability to use ALTER TYPE to rename an attribute of the affected
composite.  Also, if the original owner is later dropped, the pg_class
entry becomes owned by a non-existent user, which is bogus.

To fix, create a new routine AlterTypeOwner_oid which knows whether to
pass the request to ATExecChangeOwner or deal with it directly, and use
that in shdepReassignOwner rather than calling AlterTypeOwnerInternal
directly.  AlterTypeOwnerInternal is now simpler in that it only
modifies the pg_type entry and recurses to handle a possible array type;
higher-level tasks are handled by either AlterTypeOwner directly or
AlterTypeOwner_oid.

I took the opportunity to add a few more objects to the test rig for
REASSIGN OWNED, so that more cases are exercised.  Additional ones could
be added for superuser-only-ownable objects (such as FDWs and event
triggers) but I didn't want to push my luck by adding a new superuser to
the tests on a backpatchable bug fix.

Per bug #13666 reported by Chris Pacejo.

Backpatch to 9.5.

(I would back-patch this all the way back, except that it doesn't apply
cleanly in 9.4 and earlier because 59367fdf9 wasn't backpatched.  If we
decide that we need this in earlier branches too, we should backpatch
both.)
2015-12-17 14:25:41 -03:00
Tom Lane 2ec477dc81 Cope with Readline's failure to track SIGWINCH events outside of input.
It emerges that libreadline doesn't notice terminal window size change
events unless they occur while collecting input.  This is easy to stumble
over if you resize the window while using a pager to look at query output,
but it can be demonstrated without any pager involvement.  The symptom is
that queries exceeding one line are misdisplayed during subsequent input
cycles, because libreadline has the wrong idea of the screen dimensions.

The safest, simplest way to fix this is to call rl_reset_screen_size()
just before calling readline().  That causes an extra ioctl(TIOCGWINSZ)
for every command; but since it only happens when reading from a tty, the
performance impact should be negligible.  A more valid objection is that
this still leaves a tiny window during entry to readline() wherein delivery
of SIGWINCH will be missed; but the practical consequences of that are
probably negligible.  In any case, there doesn't seem to be any good way to
avoid the race, since readline exposes no functions that seem safe to call
from a generic signal handler --- rl_reset_screen_size() certainly isn't.

It turns out that we also need an explicit rl_initialize() call, else
rl_reset_screen_size() dumps core when called before the first readline()
call.

rl_reset_screen_size() is not present in old versions of libreadline,
so we need a configure test for that.  (rl_initialize() is present at
least back to readline 4.0, so we won't bother with a test for it.)
We would need a configure test anyway since libedit's emulation of
libreadline doesn't currently include such a function.  Fortunately,
libedit seems not to have any corresponding bug.
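
A sketch of the resulting call pattern (assuming a configure-provided
HAVE_RL_RESET_SCREEN_SIZE symbol, and that rl_initialize() has already run
once):

    #include <stdio.h>
    #include <readline/readline.h>

    static char *
    get_input_line(const char *prompt)
    {
    #ifdef HAVE_RL_RESET_SCREEN_SIZE
        /* Pick up any window resize that happened while we were not
         * collecting input, e.g. while a pager was displaying results. */
        rl_reset_screen_size();
    #endif
        return readline(prompt);
    }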

Merlin Moncure, adjusted a bit by me
2015-12-16 16:59:35 -05:00
Robert Haas b648b70342 Speed up CREATE INDEX CONCURRENTLY's TID sort.
Encode TIDs as 64-bit integers to speed up comparisons.  This seems to
speed things up on all platforms, but is even more beneficial when
8-byte integers are passed by value.
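
For illustration, one way to pack a (block, offset) TID into a single
64-bit key so that sorting can use plain integer comparisons (a sketch,
not necessarily the patch's exact encoding):

    #include <stdint.h>

    /* Offset numbers fit in 16 bits, so block-major order is preserved. */
    static inline uint64_t
    tid_encode(uint32_t block, uint16_t offset)
    {
        return ((uint64_t) block << 16) | offset;
    }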

Peter Geoghegan.  Design suggestions and review by Tom Lane.  Review
also by Simon Riggs and by me.
2015-12-16 15:23:45 -05:00
Robert Haas f27a6b15e6 Mark CHECK constraints declared NOT VALID valid if created with table.
FOREIGN KEY constraints have behaved this way for a long time, but for
some reason the behavior of CHECK constraints has been inconsistent up
until now.

Amit Langote and Amul Sul, with assorted tweaks by me.
2015-12-16 07:43:56 -05:00
Robert Haas 049469e7e7 Teach mdnblocks() not to create zero-length files.
It's entirely surprising that mdnblocks() has the side effect of
creating new files on disk, so let's make it not do that.  One
consequence of the old behavior is that, if running on a damaged
cluster that is missing a file, mdnblocks() can recreate the file
and allow a subsequent _mdfd_getseg() for a higher segment to succeed.
This happens because, while mdnblocks() stops when it finds a segment
that is shorter than 1GB, _mdfd_getseg() has no such check, and thus
the empty file created by mdnblocks() can allow it to continue its
traversal and find higher-numbered segments which remain.

It might be a good idea for _mdfd_getseg() to actually verify that
each segment it finds is exactly 1GB before proceeding to the next
one, but that would involve some additional system calls, so for
now I'm just doing this much.

Patch by me, per off-list analysis by Kevin Grittner and Rahila Syed.
Review by Andres Freund.
2015-12-15 13:57:45 -05:00
Robert Haas 6150a1b08a Move buffer I/O and content LWLocks out of the main tranche.
Move the content lock directly into the BufferDesc, so that locking and
pinning a buffer touches only one cache line rather than two.  Adjust
the definition of BufferDesc slightly so that this doesn't make the
BufferDesc any larger than one cache line (at least on platforms where
a spinlock is only 1 or 2 bytes).

We can't fit the I/O locks into the BufferDesc and stay within one
cache line, so move those to a completely separate tranche.  This
leaves a relatively limited number of LWLocks in the main tranche, so
increase the padding of those remaining locks to a full cache line,
rather than allowing adjacent locks to share a cache line, hopefully
reducing false sharing.
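
A generic sketch of the cache-line padding technique (illustrative only;
the real BufferDesc and LWLock layouts differ):

    #include <stdint.h>

    #define CACHE_LINE_SIZE 64

    typedef struct LockState
    {
        uint32_t    state;          /* pin count, flags, ... */
        uint32_t    lock;           /* stand-in for the embedded lock */
    } LockState;

    /* Pad each element out to a full cache line so that adjacent elements
     * never share one, which avoids false sharing between CPUs. */
    typedef union PaddedLockState
    {
        LockState   data;
        char        pad[CACHE_LINE_SIZE];
    } PaddedLockState;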

Performance testing shows that these changes make little difference
on laptop-class machines, but help significantly on larger servers,
especially those with more than 2 sockets.

Andres Freund, originally based on an earlier patch by Simon Riggs.
Review and cosmetic adjustments (including heavy rewriting of the
comments) by me.
2015-12-15 13:32:54 -05:00
Robert Haas 3fed417452 Provide a way to predefine LWLock tranche IDs.
It's a bit cumbersome to use LWLockNewTrancheId(), because the returned
value needs to be shared between backends so that each backend can call
LWLockRegisterTranche() with the correct ID.  So, for built-in tranches,
use a hard-coded value instead.
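
The idea, roughly (the enum values and names below are invented for
illustration):

    /* Built-in tranches get fixed IDs that every backend knows at compile
     * time, so no LWLockNewTrancheId() handshake is needed; dynamically
     * registered tranches keep allocating IDs above the built-in ones. */
    typedef enum BuiltinTrancheIds
    {
        LWTRANCHE_MAIN = 0,
        LWTRANCHE_BUFFER_CONTENT,
        LWTRANCHE_BUFFER_IO,
        LWTRANCHE_FIRST_DYNAMIC    /* LWLockNewTrancheId() starts here */
    } BuiltinTrancheIds;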

This is motivated by an upcoming patch adding further built-in tranches.

Andres Freund and Robert Haas
2015-12-15 11:48:19 -05:00
Stephen Frost e5e11c8cca Collect the global OR of hasRowSecurity flags for plancache
We carry around information about whether a given query has row security,
so that the plancache can use that information to invalidate a planned
query in the event that the environment changes.

Previously, the flag of one of the subqueries was simply being copied
into place to indicate whether the query overall included RLS components.
That's wrong, as we need the global OR of all subqueries.  Fix by
changing the code to match how fireRIRules works, which results in
OR'ing all of the flags.
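
In other words (a trivial standalone sketch, not the rewriter's code): the
query has row security if any subquery does.

    #include <stdbool.h>

    static bool
    any_has_row_security(const bool *subquery_flags, int nsubqueries)
    {
        bool        result = false;

        for (int i = 0; i < nsubqueries; i++)
            result |= subquery_flags[i];    /* OR, not overwrite */
        return result;
    }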

Noted by Tom.

Back-patch to 9.5 where RLS was introduced.
2015-12-14 20:05:43 -05:00
Tom Lane db81329eed Add missing cleanup logic in pg_rewind/t/005_same_timeline.pl test.
Per Michael Paquier
2015-12-14 19:22:50 -05:00
Alvaro Herrera 0d8f3d5d11 Add missing CHECK_FOR_INTERRUPTS in lseg_inside_poly
Apparently, there are bugs in this code that cause it to loop endlessly.
Those bugs still need more research, but in the meantime it's clear that
the loop is missing a check for interrupts so that it can be cancelled
in a timely manner.
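
The shape of the fix, for illustration (loop body elided; in backend code
CHECK_FOR_INTERRUPTS() comes from miscadmin.h):

    for (;;)
    {
        CHECK_FOR_INTERRUPTS();     /* lets a cancel request break the loop */

        /* ... the geometric iteration that may fail to converge ... */
    }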

Backpatch to 9.1 -- this has been missing since 49475aab8d.
2015-12-14 16:44:40 -03:00
Kevin Grittner e2f1765ce0 Remove xmlparse(document '') test
This one test was behaving differently between the ubuntu fix for
CVE-2015-7499 and the base "expected" file.  It's not worth having
yet another version of the expected file for this test, so drop it.
Perhaps at some point when all distros have settled down to the
same behavior on this test, it can be restored.

Problem found by me on libxml2 (2.9.1+dfsg1-3ubuntu4.6).
Solution suggested by Tom Lane.
Backpatch to 9.5, where the test was added.
2015-12-14 11:37:26 -06:00
Heikki Linnakangas 7b96bf445a Fix out-of-memory error handling in ParameterDescription message processing.
If libpq ran out of memory while constructing the result set, it would hang,
waiting for more data from the server, which might never arrive. To fix,
distinguish between out-of-memory error and not-enough-data cases, and give
a proper error message back to the client on OOM.

There are still similar issues in handling COPY start messages, but let's
handle that as a separate patch.

Michael Paquier, Amit Kapila and me. Backpatch to all supported versions.
2015-12-14 18:19:10 +02:00
Andres Freund cca705a5d9 Fix bug in SetOffsetVacuumLimit() triggered by find_multixact_start() failure.
Previously, if find_multixact_start() failed, SetOffsetVacuumLimit() would
install 0 into MultiXactState->offsetStopLimit if it previously succeeded.
Luckily, there are no known cases where find_multixact_start() will return
an error in 9.5 and above. But if it were to happen, for example due to
filesystem permission issues, it'd be somewhat bad: GetNewMultiXactId()
could continue allocating mxids even if close to a wraparound, or it could
erroneously stop allocating mxids, even if no wraparound is looming.  The
wrong value would be corrected the next time SetOffsetVacuumLimit() is
called, or by a restart.

Reported-By: Noah Misch, although this is not his preferred fix
Discussion: 20151210140450.GA22278@alap3.anarazel.de
Backpatch: 9.5, where the bug was introduced as part of 4f627f
2015-12-14 11:34:16 +01:00
Andres Freund 2a3544960e Correct statement to actually be the intended assert statement.
e3f4cfc7 introduced a LWLockHeldByMe() call, without the corresponding
Assert() surrounding it.

Spotted by Coverity.

Backpatch: 9.1+, like the previous commit
2015-12-14 11:23:24 +01:00
Tom Lane fcbbf82d2b Code and docs review for multiple -c and -f options in psql.
Commit d5563d7df9 drew complaints from Coverity, which quite
correctly complained that one copy of each -c or -f string was being
leaked.  What's more, simple_action_list_append was allocating enough space
for still a third copy of each string as part of the SimpleActionListCell,
even though that coding method had been superseded by a separate strdup
operation.  There were some other minor coding infelicities too.  The
documentation needed more work as well, eg it forgot to explain that -c
causes psql not to accept any interactive input.
2015-12-13 14:52:07 -05:00
Magnus Hagander a91bdf67c4 Consistently set all fields in pg_stat_replication to null instead of 0
Previously the "sent" field would be set to 0 and all other xlog
pointers would be set to NULL if there were no valid values (such as in
a walsender that is only sending a backup).
2015-12-13 16:53:38 +01:00
Magnus Hagander 263c19572b Properly initialize write, flush and replay locations in walsender slots
These would leak random xlog positions if a walsender used for a backup
would use a walsender slot previously used by a replication walsender.

In passing also fix a couple of cases where the xlog pointer is directly
compared to zero instead of using XLogRecPtrIsInvalid, noted by
Michael Paquier.
2015-12-13 16:46:56 +01:00
Andres Freund f54d0629ec Fix ALTER TABLE ... SET TABLESPACE for unlogged relations.
Changing the tablespace of an unlogged relation did not WAL log the
creation and content of the init fork.  Thus, after a standby is
promoted, the unlogged relation cannot be accessed anymore, with errors
like:
ERROR:  58P01: could not open file "pg_tblspc/...": No such file or directory
Additionally, the init fork was not synced to disk, independently of the
configured wal_level -- a relatively small durability risk.

Investigation of that problem also brought to light that, even for
permanent relations, the creation of !main forks was not WAL logged,
i.e. no XLOG_SMGR_CREATE records were emitted.  That mostly turns out not
to be a problem, because these files are created when the actual
relation data is copied; nonexistent files are not treated as an error
condition during replay.  But that doesn't work for empty files, and
generally feels a bit haphazard.  Luckily, outside the init and main forks,
empty forks either don't occur often or are not a problem.

Add the required WAL logging and syncing to disk.
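
Roughly the kind of calls involved (a sketch using the backend's
storage-manager API, not the commit's exact code):

    /* WAL-log the init fork's creation and force it to disk, so that a
     * promoted standby finds the fork regardless of wal_level. */
    log_smgrcreate(&rel->rd_node, INIT_FORKNUM);
    smgrimmedsync(rel->rd_smgr, INIT_FORKNUM);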

Reported-By: Michael Paquier
Author: Michael Paquier and Andres Freund
Discussion: 20151210163230.GA11331@alap3.anarazel.de
Backpatch: 9.1, where unlogged relations were introduced
2015-12-12 14:17:39 +01:00
Tom Lane 085423e3e3 Add an expected-file to match behavior of latest libxml2.
Recent releases of libxml2 do not provide error context reports for errors
detected at the very end of the input string.  This appears to be a bug, or
at least an infelicity, introduced by the fix for libxml2's CVE-2015-7499.
We can hope that this behavioral change will get undone before too long;
but the security patch is likely to spread a lot faster/further than any
follow-on cleanup, which means this behavior is likely to be present in the
wild for some time to come.  As a stopgap, add a variant regression test
expected-file that matches what you get with a libxml2 that acts this way.
2015-12-11 19:09:04 -05:00
Peter Eisentraut 6b34e55638 pg_rewind: Don't error if the two clusters are already on the same timeline
This previously resulted in an error and a nonzero exit status, but
after discussion this should rather be a no-op with a zero exit status.
2015-12-11 18:32:03 -05:00
Alvaro Herrera 8c1615531f Fix REASSIGN OWNED for foreign user mappings
As reported in bug #13809 by Alexander Ashurkov, the code for REASSIGN
OWNED hadn't gotten word about user mappings.  Deal with them in the
same way default ACLs do, which is to ignore them altogether; they are
handled just fine by DROP OWNED.  The other foreign object cases are
already handled correctly by both commands.

Also add a REASSIGN OWNED statement to foreign_data test to exercise the
foreign data objects.  (The changes are just before the "cleanup" phase,
so it shouldn't remove any existing live test.)

Reported by Alexander Ashurkov, then independently by Jaime Casanova.
2015-12-11 18:39:09 -03:00
Stephen Frost 833728d4c8 Handle policies during DROP OWNED BY
DROP OWNED BY handled GRANT-based ACLs but was not removing roles from
policies.  Fix that by having DROP OWNED BY remove the role specified
from the list of roles the policy (or policies) apply to, or the entire
policy (or policies) if it only applied to the role specified.

As with ACLs, the DROP OWNED BY caller must have permission to modify
the policy or a WARNING is thrown and no change is made to the policy.
2015-12-11 16:12:25 -05:00
Tom Lane 4fcf48450d Get rid of the planner's LateralJoinInfo data structure.
I originally modeled this data structure on SpecialJoinInfo, but after
commit acfcd45cac that looks like a pretty poor decision.
All we really need is relid sets identifying laterally-referenced rels;
and most of the time, what we want to know about includes indirect lateral
references, a case the LateralJoinInfo data was unsuited to compute with
any efficiency.  The previous commit redefined RelOptInfo.lateral_relids
as the transitive closure of lateral references, so that it easily supports
checking indirect references.  For the places where we really do want just
direct references, add a new RelOptInfo field direct_lateral_relids, which
is easily set up as a copy of lateral_relids before we perform the
transitive closure calculation.  Then we can just drop lateral_info_list
and LateralJoinInfo and the supporting code.  This makes the planner's
handling of lateral references noticeably more efficient, and shorter too.

Such a change can't be back-patched into stable branches for fear of
breaking extensions that might be looking at the planner's data structures;
but it seems not too late to push it into 9.5, so I've done so.
2015-12-11 15:52:38 -05:00