Commit Graph

15212 Commits

Andres Freund 18e8613564 Address points made in post-commit review of replication origins.
Amit reviewed the replication origins patch and made some good
points. Address them. This fixes typos in error messages, docs and
comments and adds a missing error check (although in a
should-never-happen scenario).

Discussion: CAA4eK1JqUBVeWWKwUmBPryFaje4190ug0y-OAUHWQ6tD83V4xg@mail.gmail.com
Backpatch: 9.5, where replication origins were introduced.
2015-08-07 15:09:05 +02:00
Tom Lane bab163e121 Fix old oversight in join removal logic.
Commit 9e7e29c75a introduced an Assert that
join removal didn't reduce the eval_at set of any PlaceHolderVar to empty.
At first glance it looks like join_is_removable ensures that's true --- but
actually, the loop in join_is_removable skips PlaceHolderVars that are not
referenced above the join due to be removed.  So, if we don't want any
empty eval_at sets, the right thing to do is to delete any now-unreferenced
PlaceHolderVars from the data structure entirely.

Per fuzz testing by Andreas Seltenreich.  Back-patch to 9.3 where the
aforesaid Assert was added.
2015-08-06 22:14:27 -04:00
Tom Lane cde35cf4ae Fix eclass_useful_for_merging to give valid results for appendrel children.
Formerly, this function would always return "true" for an appendrel child
relation, because it would think that the appendrel parent was a potential
join target for the child.  In principle that should only lead to some
inefficiency in planning, but fuzz testing by Andreas Seltenreich disclosed
that it could lead to "could not find pathkey item to sort" planner errors
in odd corner cases.  Specifically, we would think that all columns of a
child table's multicolumn index were interesting pathkeys, causing us to
generate a MergeAppend path that sorts by all the columns.  However, if any
of those columns weren't actually used above the level of the appendrel,
they would not get added to that rel's targetlist, which would result in
being unable to resolve the MergeAppend's sort keys against its targetlist
in createplan.c.

Backpatch to 9.3.  In older versions, columns of an appendrel get added
to its targetlist even if they're not mentioned above the scan level,
so that the failure doesn't occur.  It might be worth back-patching this
fix to older versions anyway, but I'll refrain for the moment.
2015-08-06 20:14:53 -04:00
Tom Lane 8703059c6b Further fixes for degenerate outer join clauses.
Further testing revealed that commit f69b4b9495 was still a few
bricks shy of a load: minor tweaking of the previous test cases resulted
in the same wrong-outer-join-order problem coming back.  After study
I concluded that my previous changes in make_outerjoininfo() were just
accidentally masking the problem, and should be reverted in favor of
forcing syntactic join order whenever an upper outer join's predicate
doesn't mention a lower outer join's LHS.  This still allows the
chained-outer-joins style that is the normally optimizable case.

I also tightened things up some more in join_is_legal().  It seems to me
on review that what's really happening in the exception case where we
ignore a mismatched special join is that we're allowing the proposed join
to associate into the RHS of the outer join we're comparing it to.  As
such, we should *always* insist that the proposed join be a left join,
which eliminates a bunch of rather dubious argumentation.  The case where
we weren't enforcing that was the one that was already known buggy anyway
(it had a violatable Assert before the aforesaid commit) so it hardly
deserves a lot of deference.

Back-patch to all active branches, like the previous patch.  The added
regression test case failed in all branches back to 9.1, and I think it's
only an unrelated change in costing calculations that kept 9.0 from
choosing a broken plan.
2015-08-06 15:35:46 -04:00
Robert Haas df0a67f754 Fix incorrect calculation in shm_mq_receive.
If some, but not all, of the length word has already been read, and the
next attempt to read sees exactly the number of bytes needed to complete
the length word, or fewer, then we'll incorrectly read less than all of
the available data.

Antonin Houska
2015-08-06 13:25:45 -04:00
Robert Haas 0e141c0fbb Reduce ProcArrayLock contention by removing backends in batches.
When a write transaction commits, it must clear its XID advertised via
the ProcArray, which requires that we hold ProcArrayLock in exclusive
mode in order to prevent concurrent processes running GetSnapshotData
from seeing inconsistent results.  When many processes try to commit
at once, ProcArrayLock must change hands repeatedly, with each
concurrent process trying to commit waking up to acquire the lock in
turn.  To make things more efficient, when more than one backend is
trying to commit a write transaction at the same time, have just one
of them acquire ProcArrayLock in exclusive mode and clear the XIDs of
all processes in the group.  Benchmarking reveals that this is much
more efficient at very high client counts.

Amit Kapila, heavily revised by me, with some review also from Pavan
Deolasee.
2015-08-06 12:02:12 -04:00
Noah Misch b8fe12a836 Reconcile nodes/*funcs.c with recent work.
A few of the discrepancies had semantic significance, but I did not
track down the resulting user-visible bugs, if any.  Back-patch to 9.5,
where all but one discrepancy appeared.  The _equalCreateEventTrigStmt()
situation dates to 9.3 but does not affect semantics.

catversion bump due to readfuncs.c field order changes.
2015-08-05 20:44:27 -04:00
Alvaro Herrera 2834855cb9 Fix BRIN to use SnapshotAny during summarization
For correctness of summarization results, it is critical that the
snapshot used during the summarization scan is able to see all tuples
that are live to all transactions -- including tuples inserted or
deleted by in-progress transactions.  Otherwise, it would be possible
for a transaction to insert a tuple, then idle for a long time while a
concurrent transaction executes summarization of the range: this would
result in the inserted value not being considered in the summary.
Previously we were trying to use an MVCC snapshot in conjunction with
adding a "placeholder" tuple in the index: the snapshot would see all
committed tuples, and the placeholder tuple would catch insertions by
any new inserters.  The hole is that prior insertions by transactions
that were still in progress at the time the MVCC snapshot was taken were
ignored.

Kevin Grittner reported this as a bogus error message during vacuum with
default transaction isolation mode set to repeatable read (because the
error report mentioned a function name not being invoked during vacuum),
but the problem is larger than that.

To fix, tweak IndexBuildHeapRangeScan to have a new mode that behaves
the way we need using SnapshotAny visibility rules.  This change
simplifies the BRIN code a bit, mainly by removing large comments that
were mistaken.  Instead, rely on the SnapshotAny semantics to provide
what it needs.  (The business about a placeholder tuple needs to remain:
that covers the case that a transaction inserts a tuple in a page that
summarization already scanned.)

Discussion: https://www.postgresql.org/message-id/20150731175700.GX2441@postgresql.org

In passing, remove a couple of unused declarations from brin.h and
reword a comment to be proper English.  This part submitted by Kevin
Grittner.

Backpatch to 9.5, where BRIN was introduced.
2015-08-05 16:20:50 -03:00
Tom Lane 6af9ee4c8c Make real sure we don't reassociate joins into or out of SEMI/ANTI joins.
Per the discussion in optimizer/README, it's unsafe to reassociate anything
into or out of the RHS of a SEMI or ANTI join.  An example from Piotr
Stefaniak showed that join_is_legal() wasn't sufficiently enforcing this
rule, so lock it down a little harder.

I couldn't find a reasonably simple example of the optimizer trying to
do this, so no new regression test.  (Piotr's example involved the random
search in GEQO accidentally trying an invalid case and triggering a sanity
check way downstream in clause selectivity estimation, which did not seem
like a sequence of events that would be useful to memorialize in a
regression test as-is.)

Back-patch to all active branches.
2015-08-05 14:39:29 -04:00
Andres Freund de6fd1c898 Rely on inline functions even if that causes warnings in older compilers.
So far we have worked around the fact that some very old compilers do
not support 'inline' functions by only using inline functions
conditionally (or not at all). Since such compilers are very rare by
now, we have decided to rely on inline functions from 9.6 onwards.

To avoid breaking these old compilers, inline is defined away when not
supported. That'll cause "function x defined but not used" type warnings,
but since nobody develops on such compilers anymore, that's OK.

This change in policy will allow us to more easily employ inline
functions.

I chose to remove code previously conditional on PG_USE_INLINE as it
seemed confusing to have code dependent on a define that's always
defined.

Blacklisting of compilers, like in c53f73879f, now has to be done
differently. A platform template can define PG_FORCE_DISABLE_INLINE to
force inline to be defined empty.

Discussion: 20150701161447.GB30708@awork2.anarazel.de
2015-08-05 18:19:52 +02:00
Andres Freund a855118be3 Fix debug message output when connecting to a logical slot.
Previously the message erroneously printed the same LSN twice as the
assignment to the start_lsn variable was before the message. Correct
that.

Reported-By: Marko Tiikkaja
Author: Marko Tiikkaja
Backpatch: 9.5, where logical decoding was introduced
2015-08-05 13:26:01 +02:00
Tom Lane 8ea3e7a75c Fix bogus "out of memory" reports in tuplestore.c.
The tuplesort/tuplestore memory management logic assumed that the chunk
allocation overhead for its memtuples array could not increase when
increasing the array size.  This is and always was true for tuplesort,
but we (I, I think) blindly copied that logic into tuplestore.c without
noticing that the assumption failed to hold for the much smaller array
elements used by tuplestore.  Given rather small work_mem, this could
result in an improper complaint about "unexpected out-of-memory situation",
as reported by Brent DeSpain in bug #13530.

The easiest way to fix this is just to increase tuplestore's initial
array size so that the assumption holds.  Rather than relying on magic
constants, though, let's export a #define from aset.c that represents
the safe allocation threshold, and make tuplestore's calculation depend
on that.

Do the same in tuplesort.c to keep the logic looking parallel, even though
tuplesort.c isn't actually at risk at present.  This will keep us from
breaking it if we ever muck with the allocation parameters in aset.c.

Back-patch to all supported versions.  The error message doesn't occur
pre-9.3, not so much because the problem can't happen as because the
pre-9.3 tuplestore code neglected to check for it.  (The chance of
trouble is a great deal larger as of 9.3, though, due to changes in the
array-size-increasing strategy.)  However, allowing LACKMEM() to become
true unexpectedly could still result in less-than-desirable behavior,
so let's patch it all the way back.
2015-08-04 18:18:46 -04:00
Tom Lane 85e5e222b1 Fix a PlaceHolderVar-related oversight in star-schema planning patch.
In commit b514a7460d, I changed the planner
so that it would allow nestloop paths to remain partially parameterized,
ie the inner relation might need parameters from both the current outer
relation and some upper-level outer relation.  That's fine so long as we're
talking about distinct parameters; but the patch also allowed creation of
nestloop paths for cases where the inner relation's parameter was a
PlaceHolderVar whose eval_at set included the current outer relation and
some upper-level one.  That does *not* work.

In principle we could allow such a PlaceHolderVar to be evaluated at the
lower join node using values passed down from the upper relation along with
values from the join's own outer relation.  However, nodeNestloop.c only
supports simple Vars not arbitrary expressions as nestloop parameters.
createplan.c is also a few bricks shy of being able to handle such cases;
it misplaces the PlaceHolderVar parameters in the plan tree, which is why
the visible symptoms of this bug are "plan should not reference subplan's
variable" and "failed to assign all NestLoopParams to plan nodes" planner
errors.

Adding the necessary complexity to make this work doesn't seem like it
would be repaid in significantly better plans, because in cases where such
a PHV exists, there is probably a corresponding join order constraint that
would allow a good plan to be found without using the star-schema exception.
Furthermore, adding complexity to nodeNestloop.c would create a run-time
penalty even for plans where this whole consideration is irrelevant.
So let's just reject such paths instead.

Per fuzz testing by Andreas Seltenreich; the added regression test is based
on his example query.  Back-patch to 9.2, like the previous patch.
2015-08-04 14:55:50 -04:00
Robert Haas 369342cf70 Cap wal_buffers to avoid a server crash when it's set very large.
It must be possible to multiply wal_buffers by XLOG_BLCKSZ without
overflowing int, or calculations in StartupXLOG will go badly wrong
and crash the server.  Avoid that by imposing a maximum value on
wal_buffers.  This will be just under 2GB, assuming the usual value
for XLOG_BLCKSZ.
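
For illustration, assuming the default 8 kB XLOG_BLCKSZ, that cap works out
to INT_MAX / 8192 = 262143 buffers, i.e. one WAL block short of 2 GB.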

Josh Berkus, per an analysis by Andrew Gierth.
2015-08-04 12:58:54 -04:00
Robert Haas a6a2357820 Update comment to match behavior of latest code.
Peter Geoghegan
2015-08-04 11:45:29 -04:00
Heikki Linnakangas 804163bc25 Share transition state between different aggregates when possible.
If there are two different aggregates in the query with the same inputs, and
the aggregates have the same initial condition and transition function,
only calculate the state value once, and only call the final functions
separately. For example, AVG(x) and SUM(x) aggregates have the same
transition function, which accumulates the sum and number of input tuples.
For a query like "SELECT AVG(x), SUM(x) FROM x", we can therefore
maintain the transition state only once, which gives a nice speedup.
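
As an illustration (table and column names are hypothetical), both aggregates
in a query such as the following can now share a single transition state:

    -- AVG(x) and SUM(x) share the same transition function, so the running
    -- state is maintained once and only the final functions are called
    -- separately.
    SELECT AVG(x), SUM(x) FROM measurements;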

David Rowley, reviewed and edited by me.
2015-08-04 17:53:10 +03:00
Stephen Frost dee0200f02 RLS: Keep deny policy when only restrictive exist
Only remove the default deny policy when a permissive policy exists
(either from the hook or defined by the user).  If only restrictive
policies exist then no rows will be visible, as restrictive policies
shouldn't make rows visible.  To address this requirement, a single
"USING (true)" permissive policy can be created.

Update the test_rls_hooks regression tests to create the necessary
"USING (true)" permissive policy.

Back-patch to 9.5 where RLS was added.

Per discussion with Dean.
2015-08-03 15:32:49 -04:00
Fujii Masao dd85acf0c4 Make recovery rename tablespace_map to *.old if backup_label is not present.
If a tablespace_map file is present without a backup_label file, the map
file serves no purpose.  There is no harm in retaining it, but it is better
to get rid of it so that we don't keep a redundant file in the data
directory and cause any confusion.  It seems prudent, though, to just
rename the file out of the way rather than delete it completely; we also
ignore any error that occurs during the rename, since even if the map file
is present without a backup_label file, it is harmless.

Back-patch to 9.5 where tablespace_map file was introduced.

Amit Kapila, reviewed by Robert Haas, Alvaro Herrera and me.
2015-08-03 23:04:41 +09:00
Tom Lane 09cecdf285 Fix a number of places that produced XX000 errors in the regression tests.
It's against project policy to use elog() for user-facing errors, or to
omit an errcode() selection for errors that aren't supposed to be "can't
happen" cases.  Fix all the violations of this policy that result in
ERRCODE_INTERNAL_ERROR log entries during the standard regression tests,
as errors that can reliably be triggered from SQL surely should be
considered user-facing.

I also looked through all the files touched by this commit and fixed
other nearby problems of the same ilk.  I do not claim to have fixed
all violations of the policy, just the ones in these files.

In a few places I also changed existing ERRCODE choices that didn't
seem particularly appropriate; mainly replacing ERRCODE_SYNTAX_ERROR
by something more specific.

Back-patch to 9.5, but no further; changing ERRCODE assignments in
stable branches doesn't seem like a good idea.
2015-08-02 23:49:19 -04:00
Tom Lane 13bba02271 Avoid calling memcpy() with a NULL source pointer and count == 0.
As in commit 0a52d378b0, avoid doing something that has undefined
results according to the C standard, even though in practice there does
not seem to be any problem with it.

This fixes two places in numeric.c that demonstrably could call memcpy()
with such arguments.  I looked through that file and didn't see any other
places with similar hazards; this is not to claim that there are not such
places in other files.

Per report from Piotr Stefaniak.  Back-patch to 9.5 which is where the
previous commit was added.  We're more or less setting a precedent that
we will not worry about this type of issue in pre-9.5 branches unless
someone demonstrates a problem in the field.
2015-08-02 15:48:31 -04:00
Tom Lane d73d14c271 Fix incorrect order of lock file removal and failure to close() sockets.
Commit c9b0cbe98b accidentally broke the
order of operations during postmaster shutdown: it resulted in removing
the per-socket lockfiles after, not before, postmaster.pid.  This creates
a race-condition hazard for a new postmaster that's started immediately
after observing that postmaster.pid has disappeared; if it sees the
socket lockfile still present, it will quite properly refuse to start.
This error appears to be the explanation for at least some of the
intermittent buildfarm failures we've seen in the pg_upgrade test.

Another problem, which has been there all along, is that the postmaster
has never bothered to close() its listen sockets, but has just allowed them
to close at process death.  This creates a different race condition for an
incoming postmaster: it might be unable to bind to the desired listen
address because the old postmaster is still incumbent.  This might explain
some odd failures we've seen in the past, too.  (Note: this is not related
to the fact that individual backends don't close their client communication
sockets.  That behavior is intentional and is not changed by this patch.)

Fix by adding an on_proc_exit function that closes the postmaster's ports
explicitly, and (in 9.3 and up) reshuffling the responsibility for where
to unlink the Unix socket files.  Lock file unlinking can stay where it
is, but teach it to unlink the lock files in reverse order of creation.
2015-08-02 14:55:03 -04:00
Heikki Linnakangas 358cde320b Fix race condition that led to WALInsertLock deadlock with commit_delay.
If a call to WaitXLogInsertionsToFinish() returned a value in the middle
of a page, and another backend then started to insert a record to the same
page, and then you called WaitXLogInsertionsToFinish() again, the second
call might return a smaller value than the first call. The problem was in
GetXLogBuffer(), which always updated the insertingAt value to the
beginning of the requested page, not the actual requested location. Because
of that, the second call might return an xlog pointer to the beginning of
the page, while the first one returned a later position on the same page.
XLogFlush() performs two calls to WaitXLogInsertionsToFinish() in
succession, and holds WALWriteLock on the second call, which can deadlock
if the second call to WaitXLogInsertionsToFinish() blocks.

Reported by Spiros Ioannou. Backpatch to 9.4, where the more scalable
WALInsertLock mechanism, and this bug, was introduced.
2015-08-02 20:08:10 +03:00
Andres Freund a4b09af3e9 Micro optimize LWLockAttemptLock() a bit.
LWLockAttemptLock pointlessly read the lock's state in every loop
iteration, even though pg_atomic_compare_exchange_u32() returns the old
value. Instead do that only once before the loop iteration.

Additionally there's no need to have the expected_state variable;
old_state mostly had the same value anyway.

Noticed-By: Heikki Linnakangas
Backpatch: 9.5, no reason to let the branches diverge at this point
2015-08-02 18:41:23 +02:00
Andres Freund 7039760114 Fix issues around the "variable" support in the lwlock infrastructure.
The lwlock scalability work introduced two race conditions into the
lwlock variable support provided for xlog.c. First, and harmlessly on
most platforms, it set/read the variable without the spinlock in some
places. Secondly, due to the removal of the spinlock, it was possible
that a backend missed changes to the variable's state if it changed at
the wrong moment, because checking the lock's state, the variable's state
and the queuing are not protected by a single spinlock acquisition
anymore.

To fix, first move resetting of the variable's value from LWLockAcquireWithVar
to WALInsertLockRelease, via a new function LWLockReleaseClearVar. That
prevents issues around waiting for a variable's value to change when a
new locker has acquired the lock but not yet set the value. Secondly,
re-check that the variable hasn't changed after enqueueing; that prevents
the issue that the lock has been released and already re-acquired by the
time the woken-up backend checks the lock's state.

Reported-By: Jeff Janes
Analyzed-By: Heikki Linnakangas
Reviewed-By: Heikki Linnakangas
Discussion: 5592DB35.2060401@iki.fi
Backpatch: 9.5, where the lwlock scalability went in
2015-08-02 18:41:23 +02:00
Tom Lane f69b4b9495 Fix some planner issues with degenerate outer join clauses.
An outer join clause that didn't actually reference the RHS (perhaps only
after constant-folding) could confuse the join order enforcement logic,
leading to wrong query results.  Also, nested occurrences of such things
could trigger an Assertion that on reflection seems incorrect.

Per fuzz testing by Andreas Seltenreich.  The practical use of such cases
seems thin enough that it's not too surprising we've not heard field
reports about it.

This has been broken for a long time, so back-patch to all active branches.
2015-08-01 20:57:41 -04:00
Tom Lane dea1491ffb Teach predtest.c that "foo" implies "foo IS NOT NULL".
Per complaint from Peter Holzer.  It's useful to cover this special case,
since for a boolean variable "foo", earlier parts of the planner will have
reduced variants like "foo = true" to just "foo", and thus we may fail
to recognize the applicability of a partial index with predicate
"foo IS NOT NULL".

Back-patch to 9.5, but not further; given the lack of previous complaints
this doesn't seem like behavior to change in stable branches.
2015-08-01 14:31:46 -04:00
Tom Lane a6492ff897 Fix an oversight in checking whether a join with LATERAL refs is legal.
In many cases, we can implement a semijoin as a plain innerjoin by first
passing the righthand-side relation through a unique-ification step.
However, one of the cases where this does NOT work is where the RHS has
a LATERAL reference to the LHS; that makes the RHS dependent on the LHS
so that unique-ification is meaningless.  joinpath.c understood this,
and so would not generate any join paths of this kind ... but join_is_legal
neglected to check for the case, so it would think that we could do it.
The upshot would be a "could not devise a query plan for the given query"
failure once we had failed to generate any join paths at all for the bogus
join pair.
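
A sketch of the two query shapes involved (names hypothetical):

    -- An ordinary semijoin: the RHS can be unique-ified and then inner-joined.
    SELECT * FROM lhs WHERE x IN (SELECT x FROM rhs);
    -- If the RHS instead has a LATERAL reference to lhs, e.g.
    --   ... WHERE EXISTS (SELECT 1 FROM generate_series(1, lhs.x) g)
    -- the RHS depends on the LHS, so unique-ification is not an option.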

Back-patch to 9.3 where LATERAL was added.
2015-07-31 19:26:33 -04:00
Alvaro Herrera c81276241b Fix broken assertion in BRIN code
The code was assuming that any NULL value in scan keys was due to IS
NULL or IS NOT NULL, but it turns out to be possible to get them with
other operators too, if they are used in contrived-enough ways.  Easiest
way out of the problem seems to be to check explicitly for the IS NOT NULL
flag, instead of assuming it must be set if the IS NULL flag is not set,
when a null scan key is found; if neither flag is set, follow the lead
of other index AMs and assume that all indexable operators must be
strict, and thus the query is never satisfiable.

Also, add a comment to try and lure some future hacker into improving
analysis of scan keys in brin.

Per report from Andreas Seltenreich; diagnosis by Tom Lane.
Backpatch to 9.5.

Discussion: http://www.postgresql.org/message-id/20646.1437919632@sss.pgh.pa.us
2015-07-30 15:07:19 -03:00
Joe Conway 1e15b21229 Use appropriate command type when retrieving relation's policies.
When retrieving policies, if not working on the root target relation,
we actually want the relation's SELECT policies, regardless of
the top level query command type. For example in UPDATE t1...FROM t2
we need to apply t1's UPDATE policies and t2's SELECT policies.
Previously top level query command type was applied to all relations,
which was wrong. Add some regression coverage to ensure we don't
violate this principle in the future.
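
For example (table names hypothetical):

    -- t1's UPDATE policies and t2's SELECT policies apply here, even though
    -- the top-level command is an UPDATE.
    UPDATE t1 SET val = t2.val FROM t2 WHERE t1.id = t2.id;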

Report and patch by Dean Rasheed. Cherry picked from larger refactoring
patch and tweaked by me. Back-patched to 9.5 where RLS was introduced.
2015-07-30 09:38:15 -07:00
Tom Lane 8693ebe37d Avoid some zero-divide hazards in the planner.
Although I think on all modern machines floating division by zero
results in Infinity not SIGFPE, we still don't want infinities
running around in the planner's costing estimates; too much risk
of that leading to insane behavior.

grouping_planner() failed to consider the possibility that final_rel
might be known dummy and hence have zero rowcount.  (I wonder if it
would be better to set a rows estimate of 1 for dummy relations?
But at least in the back branches, changing this convention seems
like a bad idea, so I'll leave that for another day.)

Make certain that get_variable_numdistinct() produces a nonzero result.
The case that can be shown to be broken is with stadistinct < 0.0 and
small ntuples; we did not prevent the result from rounding to zero.
For good luck I applied clamp_row_est() to all the nonconstant return
values.

In ExecChooseHashTableSize(), Assert that we compute positive nbuckets
and nbatch.  I know of no reason to think this isn't the case, but it
seems like a good safety check.

Per reports from Piotr Stefaniak.  Back-patch to all active branches.
2015-07-30 12:11:23 -04:00
Andrew Dunstan 2cd40adb85 Add IF NOT EXISTS processing to ALTER TABLE ADD COLUMN
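For example (names hypothetical):

    -- No error is raised if column c already exists.
    ALTER TABLE t ADD COLUMN IF NOT EXISTS c integer;
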
Fabrízio de Royes Mello, reviewed by Payal Singh, Alvaro Herrera and
Michael Paquier.
2015-07-29 21:30:00 -04:00
Joe Conway 632cd9f892 Create new ParseExprKind for use by policy expressions.
Policy USING and WITH CHECK expressions were using EXPR_KIND_WHERE for
parse analysis, which results in inappropriate ERROR messages when
the expression contains unsupported constructs such as aggregates.
Create a new ParseExprKind called EXPR_KIND_POLICY and tailor the
related messages to fit.
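
As a hedged example of the kind of construct affected (names hypothetical),
an aggregate in a policy expression is now rejected with a policy-specific
message rather than one referring to a WHERE clause:

    -- Expected to fail: aggregates are not allowed in policy expressions.
    CREATE POLICY p ON t USING (count(*) > 0);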

Reported by Noah Misch. Reviewed by Dean Rasheed, Alvaro Herrera,
and Robert Haas. Back-patch to 9.5 where RLS was introduced.
2015-07-29 15:40:24 -07:00
Robert Haas f04ce31475 Fix incorrect comment.
Amit Langote
2015-07-29 16:47:12 -04:00
Joe Conway efe72a82aa Add missing post create and alter hooks to policy objects.
AlterPolicy() and CreatePolicy() lacked their respective hook invocations.
Noted by Noah Misch, review by Dean Rasheed. Back-patch to 9.5 where
RLS was introduced.
2015-07-29 09:47:49 -07:00
Andres Freund 3bc9356ddd Remove outdated comment in LWLockDequeueSelf's header.
Noticed-By: Robert Haas
Backpatch: 9.5, where the function was added
2015-07-29 10:13:10 +02:00
Heikki Linnakangas a309ebd6b9 Fix typo in comment.
Amit Langote
2015-07-29 10:55:43 +03:00
Tom Lane 2c698f438a Suppress "variable may be used uninitialized" warning.
Also re-pgindent, just because I'm a neatnik.
2015-07-28 19:55:59 -04:00
Joe Conway d824e2800f Disallow converting a table to a view if row security is present.
When DefineQueryRewrite() is about to convert a table to a view, it checks
the table for features unavailable to views.  For example, it rejects tables
having triggers.  It omits to reject tables having relrowsecurity or a
pg_policy record. Fix that. To facilitate the repair, invent
relation_has_policies() which indicates the presence of policies on a
relation even when row security is disabled for that relation.

Reported by Noah Misch. Patch by me, review by Stephen Frost. Back-patch
to 9.5 where RLS was introduced.
2015-07-28 16:24:01 -07:00
Joe Conway f781a0f1d8 Create a pg_shdepend entry for each role in TO clause of policies.
CreatePolicy() and AlterPolicy() omit to create a pg_shdepend entry for
each role in the TO clause. Fix this by creating a new shared dependency
type called SHARED_DEPENDENCY_POLICY and assigning it to each role.

Reported by Noah Misch. Patch by me, reviewed by Alvaro Herrera.
Back-patch to 9.5 where RLS was introduced.
2015-07-28 16:01:53 -07:00
Andrew Dunstan 6d10f4e9d7 Only adjust negative indexes in json_get up to the length of the path.
The previous code resulted in memory access beyond the path bounds. The
cure is to move it into a code branch that checks that the value of
lex_level is within the correct bounds.

Bug reported and diagnosed by Piotr Stefaniak.
2015-07-28 17:54:13 -04:00
Tom Lane d8f15c95be Reduce chatter from signaling of autovacuum workers.
Don't print a WARNING if we get ESRCH from a kill() that's attempting
to cancel an autovacuum worker.  It's possible (and has been seen in the
buildfarm) that the worker is already gone by the time we are able to
execute the kill, in which case the failure is harmless.  About the only
plausible reason for reporting such cases would be to help debug corrupted
lock table contents, but this is hardly likely to be the most important
symptom if that happens.  Moreover issuing a WARNING might scare users
more than is warranted.

Also, since sending a signal to an autovacuum worker is now entirely a
routine thing, and the worker will log the query cancel on its end anyway,
reduce the message saying we're doing that from LOG to DEBUG1 level.

Very minor cosmetic cleanup as well.

Since the main practical reason for doing this is to avoid unnecessary
buildfarm failures, back-patch to all active branches.
2015-07-28 17:34:23 -04:00
Joe Conway 7b4bfc87d5 Plug RLS related information leak in pg_stats view.
The pg_stats view is supposed to be restricted to only show rows
about tables the user can read. However, it sometimes can leak
information which could not otherwise be seen when row level security
is enabled. Fix that by not showing pg_stats rows to users that would
be subject to RLS on the table the row is related to. This is done
by creating/using the newly introduced SQL visible function,
row_security_active().

Along the way, clean up three call sites of check_enable_rls(). The second
argument of that function should only be specified as other than
InvalidOid when we are checking as a different user than the current one,
as in when querying through a view. These sites were passing GetUserId()
instead of InvalidOid, which can cause the function to return incorrect
results if the current user has the BYPASSRLS privilege and row_security
has been set to OFF.

Additionally fix a bug causing RI Trigger error messages to unintentionally
leak information when RLS is enabled, and other minor cleanup and
improvements. Also add WITH (security_barrier) to the definition of pg_stats.

Bumped CATVERSION due to new SQL functions and pg_stats view definition.

Back-patch to 9.5 where RLS was introduced. Reported by Yaroslav.
Patch by Joe Conway and Dean Rasheed with review and input by
Michael Paquier and Stephen Frost.
2015-07-28 13:21:22 -07:00
Andres Freund 426746b930 Remove ssl renegotiation support.
While postgres' use of SSL renegotiation is a good idea in theory, it
turned out to not work well in practice. The specification and openssl's
implementation of it have lead to several security issues. Postgres' use
of renegotiation also had its share of bugs.

Additionally OpenSSL has a bunch of bugs around renegotiation, reported
and open for years, that regularly lead to connections breaking with
obscure error messages. We tried increasingly complex workarounds to get
around these bugs, but we didn't find anything complete.

Since these connection breakages often lead to hard to debug problems,
e.g. spuriously failing base backups and significant latency spikes when
synchronous replication is used, we have decided to change the default
setting for ssl renegotiation to 0 (disabled) in the released
backbranches and remove it entirely in 9.5 and master.

Author: Andres Freund
Discussion: 20150624144148.GQ4797@alap3.anarazel.de
Backpatch: 9.5 and master, 9.0-9.4 get a different patch
2015-07-28 22:06:31 +02:00
Robert Haas 6f2871f12e Centralize decision-making about where to get a backend's PGPROC.
This code was originally written as part of parallel query effort, but
it seems to have independent value, because if we make one decision
about where to get a PGPROC when we allocate and then put it back on a
different list at backend-exit time, bad things happen.  This isn't
just a theoretical risk; we fixed an actual problem of this type in
commit e280c630a8.
2015-07-28 14:51:57 -04:00
Tom Lane 95f4e59c32 Remove an unsafe Assert, and explain join_clause_is_movable_into() better.
join_clause_is_movable_into() is approximate, in the sense that it might
sometimes return "false" when actually it would be valid to push the given
join clause down to the specified level.  This is okay ... but there was
an Assert in get_joinrel_parampathinfo() that's only safe if the answers
are always exact.  Comment out the Assert, and add a bunch of commentary
to clarify what's going on.

Per fuzz testing by Andreas Seltenreich.  The added regression test is
a pretty silly query, but it's based on his crasher example.

Back-patch to 9.2 where the faulty logic was introduced.
2015-07-28 13:20:39 -04:00
Heikki Linnakangas 5e65f45c6e Another attempt at fixing memory leak in xlogreader.
max_block_id is also reset between reading records.

Michael Paquier
2015-07-28 09:09:36 +03:00
Stephen Frost 3d5cb31c9a Improve RLS handling in copy.c
To avoid a race condition where the relation being COPY'd could be
changed into a view or otherwise modified, keep the original lock
on the relation.  Further, fully qualify the relation when building
the query up.

Also remove the poorly thought-out Assert() and check the entire
relationOids list as, post-RLS, there can certainly be multiple
relations involved and the planner does not guarantee their ordering.

Per discussion with Noah and Andres.

Back-patch to 9.5 where RLS was introduced.
2015-07-27 16:48:26 -04:00
Tom Lane 4c8f8ffaca Further code review for pg_stat_ssl patch.
Fix additional bogosity in commit 9029f4b374.  Include the
BackendSslStatusBuffer in the BackendStatusShmemSize calculation,
avoid ugly and error-prone casts to char* and back, put related
code stanzas into a consistent order (and fix a couple of previous
instances of that sin).  All cosmetic except for the size oversight.
2015-07-27 16:29:14 -04:00
Tom Lane 7d791ed49b Fix pointer-arithmetic thinko in pg_stat_ssl patch.
Nasty memory-stomp bug in commit 9029f4b374.  It's not apparent how
this survived even cursory testing :-(.  Per report from Peter Holzer.
2015-07-27 15:58:46 -04:00
Heikki Linnakangas 820d1ced1b Don't assume that PageIsEmpty() returns true on an all-zeros page.
It does currently, and I don't see us changing that any time soon, but we
don't make that assumption anywhere else.

Per Tom Lane's suggestion. Backpatch to 9.2, like the previous patch that
added this assumption.
2015-07-27 18:54:09 +03:00
Heikki Linnakangas 61a65c53bd Fix memory leak in xlogreader facility.
XLogReaderFree failed to free the per-block data buffers, when they
happened to not be used by the latest read WAL record.

Michael Paquier. Backpatch to 9.5, where the per-block buffers were added.
2015-07-27 18:29:31 +03:00
Heikki Linnakangas 334445179c Reuse all-zero pages in GIN.
In GIN, an all-zeros page would be leaked forever, and never reused. Just
add them to the FSM in vacuum, and they will be reinitialized when grabbed
from the FSM. On master and 9.5, attempting to access the page's opaque
struct also caused an assertion failure, although that was otherwise
harmless.

Reported by Jeff Janes. Backpatch to all supported versions.
2015-07-27 12:30:26 +03:00
Heikki Linnakangas 023430abf7 Fix handling of all-zero pages in SP-GiST vacuum.
SP-GiST initialized an all-zeros page at vacuum, but that was not
WAL-logged, which is not safe. You might get a torn page write, when it gets
flushed to disk, and end up with a half-initialized index page. To fix,
leave it in the all-zeros state, and add it to the FSM. It will be
initialized when reused. Also don't set the page-deleted flag when recycling
an empty page. That was also not WAL-logged, and a torn write of that would
cause the page to have an invalid checksum.

Backpatch to 9.2, where SP-GiST indexes were added.
2015-07-27 12:28:21 +03:00
Heikki Linnakangas 65c384c5ab Avoid calling PageGetSpecialPointer() on an all-zeros page.
That was otherwise harmless, but tripped the new assertion in
PageGetSpecialPointer().

Reported by Amit Langote. Backpatch to 9.5, where the assertion was added.
2015-07-27 12:24:27 +03:00
Heikki Linnakangas e3a9a194b7 Remove false comment about speculative insertion.
There is no full discussion of speculative insertions in the executor
README. There is a high-level explanation in execIndexing.c, but it doesn't
seem necessary to refer to it from here.

Peter Geoghegan
2015-07-27 11:46:11 +03:00
Tom Lane fca8e59c1c Fix oversight in flattening of subqueries with empty FROM.
I missed a restriction that commit f4abd0241d
should have enforced: we can't pull up an empty-FROM subquery if it's under
an outer join, because then we'd need to wrap its output columns in
PlaceHolderVars.  As the code currently stands, the PHVs end up with empty
relid sets, which doesn't work (and is correctly caught by an Assert).

It's possible that this could be fixed by assigning the PHVs the relid
sets of the parent FromExpr/JoinExpr, but getting that to work is more
complication than I care to add right now; indeed it's likely that
we'll never bother, since pulling up empty-FROM subqueries is a rather
marginal optimization anyway.

Per report from Andreas Seltenreich.  Back-patch to 9.5 where the faulty
code was added.
2015-07-26 17:44:27 -04:00
Tom Lane 358eaa01bf Make entirely-dummy appendrels get marked as such in set_append_rel_size.
The planner generally expects that the estimated rowcount of any relation
is at least one row, *unless* it has been proven empty by constraint
exclusion or similar mechanisms, which is marked by installing a dummy path
as the rel's cheapest path (cf. IS_DUMMY_REL).  When I split up
allpaths.c's processing of base rels into separate set_base_rel_sizes and
set_base_rel_pathlists steps, the intention was that dummy rels would get
marked as such during the "set size" step; this is what justifies an Assert
in indxpath.c's get_loop_count that other relations should either be dummy
or have positive rowcount.  Unfortunately I didn't get that quite right
for append relations: if all the child rels have been proven empty then
set_append_rel_size would come up with a rowcount of zero, which is
correct, but it didn't then do set_dummy_rel_pathlist.  (We would have
ended up with the right state after set_append_rel_pathlist, but that's
too late, if we generate indexpaths for some other rel first.)

In addition to fixing the actual bug, I installed an Assert enforcing this
convention in set_rel_size; that then allows simplification of a couple
of now-redundant tests for zero rowcount in set_append_rel_size.

Also, to cover the possibility that third-party FDWs have been careless
about not returning a zero rowcount estimate, apply clamp_row_est to
whatever an FDW comes up with as the rows estimate.

Per report from Andreas Seltenreich.  Back-patch to 9.2.  Earlier branches
did not have the separation between set_base_rel_sizes and
set_base_rel_pathlists steps, so there was no intermediate state where an
appendrel would have had inconsistent rowcount and pathlist.  It's possible
that adding the Assert to set_rel_size would be a good idea in older
branches too; but since they're not under development any more, it's likely
not worth the trouble.
2015-07-26 16:19:08 -04:00
Andres Freund 159cff58cf Check the relevant index element in ON CONFLICT unique index inference.
ON CONFLICT unique index inference had a thinko that could affect cases
where the user-supplied inference clause required that an attribute
match a particular (user specified) collation and/or opclass.

infer_collation_opclass_match() has to check for opclass and/or
collation matches and that the attribute is in the list of attributes or
expressions known to be in the definition of the index under
consideration. The bug was that these two conditions weren't necessarily
evaluated for the same index attribute.

Author: Peter Geoghegan
Discussion: CAM3SWZR4uug=WvmGk7UgsqHn2MkEzy9YU-+8jKGO4JPhesyeWg@mail.gmail.com
Backpatch: 9.5, where ON CONFLICT was introduced
2015-07-26 18:20:41 +02:00
Andres Freund faab14ecb8 Fix flattening of nested grouping sets.
Previously nested grouping set specifications accidentally weren't
flattened, but instead contained the nested specification as an element
in the outer list.

Fix this by, as actually documented in comments, concatenating the
nested set specification into the outer one. Also add tests to prevent
this from breaking again.

Author: Andrew Gierth, with tests from Jeevan Chalke
Reported-By: Jeevan Chalke
Discussion: CAM2+6=V5YvuxB+EyN4iH=GbD-XTA435TCNvnDFSD--YvXs+pww@mail.gmail.com
Backpatch: 9.5, where grouping sets were introduced
2015-07-26 16:50:29 +02:00
Andres Freund 61444bfb80 Allow to push down clauses from HAVING to WHERE when grouping sets are used.
Previously we disallowed pushing down quals to WHERE in the presence of
grouping sets. That's overly restrictive.

We now instead copy quals to WHERE if applicable, leaving the
one in HAVING in place. That's because, at that stage of the planning
process, it's nontrivial to determine if it's safe to remove the one in
HAVING.

Author: Andrew Gierth
Discussion: 874mkt3l59.fsf@news-spur.riddles.org.uk
Backpatch: 9.5, where grouping sets were introduced. This isn't exactly
    a bugfix, but it seems better to keep the branches in sync at this point.
2015-07-26 16:50:20 +02:00
Andres Freund e6d8cb77c0 Recognize GROUPING() as an aggregate expression.
Previously GROUPING() was not recognized as an aggregate expression,
erroneously allowing the planner to move it from HAVING to WHERE.
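
For example (names hypothetical), GROUPING() belongs in HAVING and must not
be pushed into WHERE:

    -- GROUPING(a) = 0 keeps only the rows actually grouped by a, filtering
    -- out the rollup total row; like an aggregate, it is evaluated in HAVING.
    SELECT a, count(*) FROM t GROUP BY ROLLUP (a) HAVING GROUPING(a) = 0;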

Author: Jeevan Chalke
Reviewed-By: Andrew Gierth
Discussion: CAM2+6=WG9omG5rFOMAYBweJxmpTaapvVp5pCeMrE6BfpCwr4Og@mail.gmail.com
Backpatch: 9.5, where grouping sets were introduced
2015-07-26 16:50:02 +02:00
Andres Freund 144666f65b Build column mapping for grouping sets in all required cases.
The previous coding frequently failed to fail because, for one, it's
unusual to have rollup clauses with only one column, and for another,
the wrong mapping sometimes didn't cause obvious problems.

Author: Jeevan Chalke
Reviewed-By: Andrew Gierth
Discussion: CAM2+6=W=9=hQOipH0HAPbkun3Z3TFWij_EiHue0_6UX=oR=1kw@mail.gmail.com
Backpatch: 9.5, where grouping sets were introduced
2015-07-26 16:46:27 +02:00
Tom Lane d9476b8380 Dodge portability issue (apparent compiler bug) in new tablesample code.
Some of the older OS X critters in the buildfarm are failing regression,
with symptoms showing that a request for 100% sampling in BERNOULLI or
SYSTEM methods actually gets only around 50% of the table.  gdb revealed
that the computation of the "cutoff" number was producing 0x7FFFFFFF
rather than the expected 0x100000000.  Inspecting the assembly code,
it looks like gcc is trying to use lrint() instead of rint() and then
fumbling the conversion from long double to uint64.  This seems like a
clear compiler bug, but assigning the intermediate result into a plain
double variable works around it, so let's just do that.  (Another idea
would be to give up one bit of hash width so that we don't need to use
a uint64 cutoff, but let's see if this is enough.)
2015-07-25 19:42:32 -04:00
Tom Lane dd7a8f66ed Redesign tablesample method API, and do extensive code review.
The original implementation of TABLESAMPLE modeled the tablesample method
API on index access methods, which wasn't a good choice because, without
specialized DDL commands, there's no way to build an extension that can
implement a TSM.  (Raw inserts into system catalogs are not an acceptable
thing to do, because we can't undo them during DROP EXTENSION, nor will
pg_upgrade behave sanely.)  Instead adopt an API more like procedural
language handlers or foreign data wrappers, wherein the only SQL-level
support object needed is a single handler function identified by having
a special return type.  This lets us get rid of the supporting catalog
altogether, so that no custom DDL support is needed for the feature.

Adjust the API so that it can support non-constant tablesample arguments
(the original coding assumed we could evaluate the argument expressions at
ExecInitSampleScan time, which is undesirable even if it weren't outright
unsafe), and discourage sampling methods from looking at invisible tuples.
Make sure that the BERNOULLI and SYSTEM methods are genuinely repeatable
within and across queries, as required by the SQL standard, and deal more
honestly with methods that can't support that requirement.
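
For example (table name hypothetical), a seeded sample is now repeatable
within and across queries:

    SELECT count(*) FROM big_table TABLESAMPLE BERNOULLI (10) REPEATABLE (42);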

Make a full code-review pass over the tablesample additions, and fix
assorted bugs, omissions, infelicities, and cosmetic issues (such as
failure to put the added code stanzas in a consistent ordering).
Improve EXPLAIN's output of tablesample plans, too.

Back-patch to 9.5 so that we don't have to support the original API
in production.
2015-07-25 14:39:00 -04:00
Joe Conway b26e3d660d Make RLS work with UPDATE ... WHERE CURRENT OF
UPDATE ... WHERE CURRENT OF would not work in conjunction with
RLS. Arrange to allow the CURRENT OF expression to be pushed down.
Issue noted by Peter Geoghegan. Patch by Dean Rasheed. Back patch
to 9.5 where RLS was introduced.
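
A sketch of the now-working pattern (names hypothetical):

    BEGIN;
    DECLARE c CURSOR FOR SELECT * FROM t WHERE owner = current_user FOR UPDATE;
    FETCH c;
    -- Works even with row level security enabled on t.
    UPDATE t SET val = 0 WHERE CURRENT OF c;
    COMMIT;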
2015-07-24 12:55:30 -07:00
Andrew Dunstan d9a356ff2e Fix treatment of nulls in jsonb_agg and jsonb_object_agg
The wrong is_null flag was being passed to datum_to_json. Also, null
object key values are not permitted, and this was not being checked
for. Add regression tests covering these cases, and also add those tests
to the json set, even though it was doing the right thing.
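
A hedged sketch of the behavior being tested:

    SELECT jsonb_agg(x) FROM (VALUES (1), (NULL)) AS v(x);   -- [1, null]
    -- A null object key is rejected rather than silently accepted:
    SELECT jsonb_object_agg(k, v) FROM (VALUES (NULL::text, 1)) AS kv(k, v);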

Fixes bug #13514, initially diagnosed by Tom Lane.
2015-07-24 09:40:46 -04:00
Andres Freund c1ca3a19df Fix bug around assignment expressions containing indirections.
Handling of assigned-to expressions with indirection (e.g. set f1[1] =
3) was broken for ON CONFLICT DO UPDATE.  The problem was that
ParseState was consulted to determine if an INSERT-appropriate or
UPDATE-appropriate behavior should be used when transforming expressions
with indirections. When the wrong path was taken the old row was
substituted with NULL, leading to wrong results.
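
A sketch of the affected statement shape (names hypothetical; assumes a
unique index on k):

    INSERT INTO t (k, arr) VALUES (1, '{0,0}')
        ON CONFLICT (k) DO UPDATE SET arr[1] = 3;
    -- The assignment must see the existing row's arr, not NULL.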

To fix, remove p_is_update and only use p_is_insert to decide how to
transform the assignment expression, and set p_is_insert while parsing
the ON CONFLICT statement. This isn't particularly pretty, but it's not
any worse than before.

Author: Peter Geoghegan, slightly edited by me
Discussion: CAM3SWZS8RPvA=KFxADZWw3wAHnnbxMxDzkEC6fNaFc7zSm411w@mail.gmail.com
Backpatch: 9.5, where the feature was introduced
2015-07-24 11:52:07 +02:00
Heikki Linnakangas 766dcfb16c Fix off-by-one error in calculating subtrans/multixact truncation point.
If there were no subtransactions (or multixacts) active, we would calculate
the oldestxid == next xid. That's correct, but if next XID happens to be
on the next pg_subtrans (pg_multixact) page, the page does not exist yet,
and SimpleLruTruncate will produce an "apparent wraparound" warning. The
warning is harmless in this case, but looks very alarming to users.

Backpatch to all supported versions. Patch and analysis by Thomas Munro.
2015-07-23 01:29:59 +03:00
Tom Lane 46d0a9bfac Fix add_rte_to_flat_rtable() for recent feature additions.
The TABLESAMPLE and row security patches each overlooked this function,
though their errors of omission were opposite: RLS failed to zero out the
securityQuals field, leading to wasteful copying of useless expression
trees in finished plans, while TABLESAMPLE neglected to add a comment
saying that it intentionally *isn't* deleting the tablesample subtree.
There probably should be a similar comment about ctename, too.

Back-patch as appropriate.
2015-07-21 20:03:58 -04:00
Tom Lane 434873806a Fix some oversights in BRIN patch.
Remove HeapScanDescData.rs_initblock, which wasn't being used for anything
in the final version of the patch.

Fix IndexBuildHeapScan so that it supports syncscan again; the patch
broke synchronous scanning for index builds by forcing rs_startblk
to zero even when the caller did not care about that and had asked
for syncscan.

Add some commentary and usage defenses to heap_setscanlimits().

Fix heapam so that asking for rs_numblocks == 0 does what you would
reasonably expect.  As coded it amounted to requesting a whole-table
scan, because those "--x <= 0" tests on an unsigned variable would
behave surprisingly.
2015-07-21 13:38:24 -04:00
Alvaro Herrera 149b1dd840 Fix omission of OCLASS_TRANSFORM in object_classes[]
This was forgotten in cac7658205 (and its fixup ad89a5d115).  Since it
seems way too easy to miss this, this commit also introduces a mechanism
to enforce that the array is consistent with the enum.

Problem reported independently by Robert Haas and Jaimin Pan.
Patches proposed by Jaimin Pan, Jim Nasby, Michael Paquier and myself,
though I didn't use any of these and instead went with a cleaner
approach suggested by Tom Lane.

Backpatch to 9.5.

Discussion:
https://www.postgresql.org/message-id/CA+Tgmoa6SgDaxW_n_7SEhwBAc=mniYga+obUj5fmw4rU9_mLvA@mail.gmail.com
https://www.postgresql.org/message-id/29788.1437411581@sss.pgh.pa.us
2015-07-21 13:20:53 +02:00
Heikki Linnakangas eb11de8ff5 Sanity-check that a page zeroed by redo routine is marked with WILL_INIT.
There was already a sanity-check in the other direction: if a page was
marked with WILL_INIT, it had to be initialized by the redo routine. It's
not strictly necessary for correctness that a page is marked with WILL_INIT
if it's going to be initialized at redo, but it's a missed optimization if
nothing else.

Fix a few instances of this issue in SP-GiST, where a block in WAL record
was not marked with WILL_INIT, but was in fact always initialized at redo.
We were creating a full-page image of the page unnecessarily in those
cases.

Backpatch to 9.5, where the new WILL_INIT flag was added.
2015-07-20 22:34:01 +03:00
Alvaro Herrera e52b690cf5 Don't handle PUBLIC/NONE separately
Since those role specifiers are checked in the grammar, there's no need
for the old checks to remain in place after 31eae6028e.  Remove them.

Backpatch to 9.5.

Noted and patch by Jeevan Chalke
2015-07-20 18:47:15 +02:00
Alvaro Herrera 8d90736924 Improve BRIN documentation somewhat
This removes some info about support procedures being used, which was
obsoleted by commit db5f98ab4f, as well as add some more documentation
on how to create new opclasses using the Minmax infrastructure.
(Hopefully we can get something similar for Inclusion as well.)

In passing, fix some obsolete mentions of "mmtuples" in source code
comments.

Backpatch to 9.5, where BRIN was introduced.
2015-07-20 12:16:40 +02:00
Andrew Dunstan 9aa663463b Remove dead code.
Defect noticed by Coverity.
2015-07-19 13:19:38 -04:00
Tom Lane 576a95b3a1 Make WaitLatchOrSocket's timeout detection more robust.
In the previous coding, timeout would be noticed and reported only when
poll() or select() returned zero (or the equivalent behavior on Windows).
Ordinarily that should work well enough, but it seems conceivable that we
could get into a state where poll() always returns a nonzero value --- for
example, if it is noticing a condition on one of the file descriptors that
we do not think is reason to exit the loop.  If that happened, we'd be in a
busy-wait loop that would fail to terminate even when the timeout expires.

We can make this more robust at essentially no cost, by deciding to exit
of our own accord if we compute a zero or negative time-remaining-to-wait.
Previously the code noted this but just clamped the time-remaining to zero,
expecting that we'd detect timeout on the next loop iteration.

Back-patch to 9.2.  While 9.1 had a version of WaitLatchOrSocket, it was
primitive compared to later versions, and did not guarantee reliable
detection of timeouts anyway.  (Essentially, this is a refinement of
commit 3e7fdcffd6, which was back-patched only as far as 9.2.)
2015-07-18 11:47:13 -04:00
Andrew Dunstan e02d44b8a7 Support JSON negative array subscripts everywhere
Previously, there was an inconsistency across json/jsonb operators that
operate on datums containing JSON arrays -- only some operators
supported negative array count-from-the-end subscripting.  Specifically,
only a new-to-9.5 jsonb deletion operator had support (the new "jsonb -
integer" operator).  This inconsistency seemed likely to be
counter-intuitive to users.  To fix, allow all places where the user can
supply an integer subscript to accept a negative subscript value,
including path-orientated operators and functions, as well as other
extraction operators.  This will need to be called out as an
incompatibility in the 9.5 release notes, since it's possible that users
are relying on certain established extraction operators changed here
yielding NULL in the event of a negative subscript.
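
For example:

    SELECT '["a","b","c"]'::jsonb -> -1;   -- "c"
    SELECT '["a","b","c"]'::json  -> -1;   -- "c" (previously NULL)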

For the json type, this requires adding a way of cheaply getting the
total JSON array element count ahead of time when parsing arrays with a
negative subscript involved, necessitating an ad-hoc lex and parse.
This is followed by a "conversion" from a negative subscript to its
equivalent positive-wise value using the count.  From there on, it's as
if a positive-wise value was originally provided.

Note that there is still a minor inconsistency here across jsonb
deletion operators.  Unlike the aforementioned new "-" deletion operator
that accepts an integer on its right hand side, the new "#-" path
orientated deletion variant does not throw an error when it appears like
an array subscript (input that could be recognized as an integer
literal) is being used on an object, which is wrong-headed.  The reason
for not being stricter is that it could be the case that an object pair
happens to have a key value that looks like an integer; in general,
these two possibilities are impossible to differentiate with rhs path
text[] argument elements.  However, we still don't allow the "#-"
path-orientated deletion operator to perform array-style subscripting.
Rather, we just return the original left operand value in the event of a
negative subscript (which seems analogous to how the established
"jsonb/json #> text[]" path-orientated operator may yield NULL in the
event of an invalid subscript).

In passing, make SetArrayPath() stricter about not accepting cases where
there are trailing non-numeric garbage bytes rather than a clean NUL
byte.  This means, for example, that strings like "10e10" are now not
accepted as an array subscript of 10 by some new-to-9.5 path-orientated
jsonb operators (e.g. the new #- operator).  Finally, remove dead code
for jsonb subscript deletion; arguably, this should have been done in
commit b81c7b409.

Peter Geoghegan and Andrew Dunstan
2015-07-17 21:13:47 -04:00
Robert Haas a04bb65f70 Add new function pg_notification_queue_usage.
This tells you what fraction of NOTIFY's queue is currently filled.

Brendan Jurd, reviewed by Merlin Moncure and Gurjeet Singh.  A few
further tweaks by me.
2015-07-17 09:12:03 -04:00
Tom Lane 9d6077abf9 Fix a low-probability crash in our qsort implementation.
It's standard for quicksort implementations, after having partitioned the
input into two subgroups, to recurse to process the smaller partition and
then handle the larger partition by iterating.  This method guarantees
that no more than log2(N) levels of recursion can be needed.  However,
Bentley and McIlroy argued that checking to see which partition is smaller
isn't worth the cycles, and so their code doesn't do that but just always
recurses on the left partition.  In most cases that's fine; but with
worst-case input we might need O(N) levels of recursion, and that means
that qsort could be driven to stack overflow.  Such an overflow seems to
be the only explanation for today's report from Yiqing Jin of a SIGSEGV
in med3_tuple while creating an index of a couple billion entries with a
very large maintenance_work_mem setting.  Therefore, let's spend the few
additional cycles and lines of code needed to choose the smaller partition
for recursion.

Also, fix up the qsort code so that it properly uses size_t not int for
some intermediate values representing numbers of items.  This would only
be a live risk when sorting more than INT_MAX bytes (in qsort/qsort_arg)
or tuples (in qsort_tuple), which I believe would never happen with any
caller in the current core code --- but perhaps it could happen with
call sites in third-party modules?  In any case, this is trouble waiting
to happen, and the corrected code is probably if anything shorter and
faster than before, since it removes sign-extension steps that had to
happen when converting between int and size_t.

In passing, move a couple of CHECK_FOR_INTERRUPTS() calls so that it's
not necessary to preserve the value of "r" across them, and prettify
the output of gen_qsort_tuple.pl a little.

Back-patch to all supported branches.  The odds of hitting this issue
are probably higher in 9.4 and up than before, due to the new ability
to allocate sort workspaces exceeding 1GB, but there's no good reason
to believe that it's impossible to crash older branches this way.
2015-07-16 22:57:46 -04:00
Magnus Hagander 828df727a6 Fix spelling error
David Rowley
2015-07-16 10:31:58 +03:00
Magnus Hagander 64c9d8a6c8 Fix copy/paste error in comment
David Christensen
2015-07-16 10:28:44 +03:00
Noah Misch bcd7c41206 AIX: Link the postgres executable with -Wl,-brtllib.
This allows PostgreSQL modules and their dependencies to have undefined
symbols, resolved at runtime.  Perl module shared objects rely on that
in Perl 5.8.0 and later.  This fixes the crash when PL/PerlU loads such
modules, as the hstore_plperl test suite does.  Module authors can link
using -Wl,-G to permit undefined symbols; by default, linking will fail
as it always has.  Back-patch to 9.0 (all supported versions).
2015-07-15 21:00:26 -04:00
Heikki Linnakangas d5c0495cd4 Fix event trigger support for the new ALTER OPERATOR command.
Also, the lock on pg_operator should not be released until end of
transaction.
2015-07-14 19:50:18 +03:00
Heikki Linnakangas 321eed5f0f Add ALTER OPERATOR command, for changing selectivity estimator functions.
Other options cannot be changed, as it's not totally clear if cached plans
would need to be invalidated if one of the other options changes. Selectivity
estimator functions only change plan costs, not correctness of plans, so
those should be safe.

Original patch by Uriy Zhuravlev, heavily edited by me.
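
A rough sketch of the new syntax; the operator and estimator function names
are illustrative, not taken from the patch:

    -- change only the selectivity estimators of an existing operator
    ALTER OPERATOR === (boolean, boolean)
        SET (RESTRICT = eqsel, JOIN = eqjoinsel);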
2015-07-14 18:17:55 +03:00
Heikki Linnakangas e42375fc81 Retain comments on indexes and constraints at ALTER TABLE ... TYPE ...
When a column's datatype is changed, ATExecAlterColumnType() rebuilds all
the affected indexes and constraints, and the comments from the old
indexes/constraints were not carried over.

To fix, create a synthetic COMMENT ON command in the work queue, to re-add
any comments on constraints. For indexes, there's a comment field in
IndexStmt that is used.

This fixes bug #13126, reported by Kirill Simonov. Original patch by
Michael Paquier, reviewed by Petr Jelinek and me. This bug is present in
all versions, but only backpatch to 9.5. Given how minor the issue is, it
doesn't seem worth the work and risk to backpatch further than that.
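
A rough illustration of the fixed behavior (object names are hypothetical):

    CREATE TABLE t (id int PRIMARY KEY);
    COMMENT ON CONSTRAINT t_pkey ON t IS 'surrogate key';
    ALTER TABLE t ALTER COLUMN id TYPE bigint;
    -- the comment should survive the rebuild of the constraint
    SELECT obj_description(oid, 'pg_constraint')
      FROM pg_constraint WHERE conname = 't_pkey';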
2015-07-14 11:40:22 +03:00
Heikki Linnakangas 1ab9faaecb Reformat code in ATPostAlterTypeParse.
The code in ATPostAlterTypeParse was very deeply indented, mostly because
there were two nested switch-case statements, which add a lot of
indentation. Use if-else blocks instead, to make the code less indented
and more readable.

This is in preparation for the next patch, which makes some actual changes to
the function. These cosmetic parts have been separated to make it easier
to see the real changes in the other patch.
2015-07-14 11:38:08 +03:00
Andres Freund 3ed26e5f87 For consistency add a pfree to ON CONFLICT set_plan_refs code.
Backpatch to 9.5 where ON CONFLICT was introduced.

Author: Peter Geoghegan
2015-07-12 22:18:57 +02:00
Tom Lane 0a0fe2ff6e Add now-required #include.
Fixes compiler warning induced by 808ea8fc7b.
2015-07-11 23:34:41 -04:00
Joe Conway 808ea8fc7b Add assign_expr_collations() to CreatePolicy() and AlterPolicy().
As noted by Noah Misch, CreatePolicy() and AlterPolicy() omit to call
assign_expr_collations() on the node trees. Fix the omission and add
his test case to the rowsecurity regression test.
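
For context, a policy of roughly this shape (names hypothetical) is the kind
of statement whose USING expression contains collation-sensitive nodes that
assign_expr_collations() now processes:

    CREATE TABLE docs (owner text, body text);
    ALTER TABLE docs ENABLE ROW LEVEL SECURITY;
    CREATE POLICY docs_policy ON docs
        USING (owner < 'm' COLLATE "C");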
2015-07-11 14:19:31 -07:00
Tom Lane 45811be94e Fix postmaster's handling of a startup-process crash.
Ordinarily, a failure (unexpected exit status) of the startup subprocess
should be considered fatal, so the postmaster should just close up shop
and quit.  However, if we sent the startup process a SIGQUIT or SIGKILL
signal, the failure is hardly "unexpected", and we should attempt restart;
this is necessary for recovery from ordinary backend crashes in hot-standby
scenarios.  I attempted to implement the latter rule with a two-line patch
in commit 442231d7f7, but it now emerges that
that patch was a few bricks shy of a load: it failed to distinguish the
case of a signaled startup process from the case where the new startup
process crashes before reaching database consistency.  That resulted in
infinitely respawning a new startup process only to have it crash again.

To handle this properly, we really must track whether we have sent the
*current* startup process a kill signal.  Rather than add yet another
ad-hoc boolean to the postmaster's state, I chose to unify this with the
existing RecoveryError flag into an enum tracking the startup process's
state.  That seems more consistent with the postmaster's general state
machine design.

Back-patch to 9.0, like the previous patch.
2015-07-09 13:22:22 -04:00
Fujii Masao c2e5f4d1c1 Make wal_compression PGC_SUSET rather than PGC_USERSET.
When wal_compression is enabled, there is a risk of leaking data, similar to
the BREACH and CRIME attacks on SSL, where the compression ratio of
a full-page image gives a hint of the existing data on that page.
This vulnerability is quite cumbersome to exploit in practice, but doable.

So this patch makes wal_compression PGC_SUSET in order to prevent
non-superusers from enabling it and exploiting the vulnerability in cases
where the DBA takes the risk seriously and has disabled it in postgresql.conf.

Back-patch to 9.5 where wal_compression was introduced.
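
With the parameter now PGC_SUSET, the expected behavior is roughly this
(role name hypothetical):

    SET wal_compression = on;     -- still allowed for superusers
    SET ROLE app_user;            -- an ordinary, non-superuser role
    SET wal_compression = on;     -- now fails with something like:
    --   ERROR:  permission denied to set parameter "wal_compression"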
2015-07-09 22:30:52 +09:00
Noah Misch bfb4cf12ab Add .gitignore entries for AIX-specific intermediate build artifacts. 2015-07-08 20:44:22 -04:00
Noah Misch be8b06c364 Revoke support for strxfrm() implementations that write past the specified array length.
This formalizes a decision implicit in commit
4ea51cdfe8 and adds clean detection of
affected systems.  Vendor updates are available for each such known bug.
Back-patch to 9.5, where the aforementioned commit first appeared.
2015-07-08 20:44:21 -04:00
Andres Freund b2f6f749c7 Fix logical decoding bug leading to inefficient reopening of files.
When spilling transaction data to disk a simple typo caused the output
file to be closed and reopened for every serialized change. That happens
to not have a huge impact on linux, which is why it probably wasn't
noticed so far, but on windows that appears to trigger actual disk
writes after every change. Not fun.

The bug fortunately does not have any impact besides speed. A change
could end up being in the wrong segment (last instead of next), but
since we read all files to the end, that's just ugly, not really
problematic. It's not a problem to upgrade, since transaction spill
files do not persist across restarts.

Bug: #13484
Reported-By: Olivier Gosseaume
Discussion: 20150703090217.1190.63940@wrigleys.postgresql.org

Backpatch to 9.4, where logical decoding was added.
2015-07-07 13:12:46 +02:00
Joe Conway 02eac01f91 Make RLS related error messages more consistent and compliant.
Also updated regression expected output to match. Noted and patch by Daniele Varrazzo.
2015-07-06 19:16:53 -07:00
Heikki Linnakangas 8e33fc1784 Call getsockopt() on the correct socket.
We're interested in the buffer size of the socket that's connected to the
client, not the one that's listening for new connections. It happened to
work, as the default buffer size is the same on both, but it was clearly
not right.

Spotted by Tom Lane
2015-07-06 16:36:48 +03:00
Heikki Linnakangas 4f33621f3f Don't set SO_SNDBUF on recent Windows versions that have a bigger default.
It's unnecessary to set it if the default is higher in the first place.
Furthermore, setting SO_SNDBUF disables the so-called "dynamic send
buffering" feature, which hurts performance further. This can be seen
especially when the network between the client and the server has high
latency.

Chen Huajun
2015-07-06 16:10:58 +03:00
Tom Lane ac50f84866 Fix misuse of TextDatumGetCString().
"TextDatumGetCString(PG_GETARG_TEXT_P(x))" is formally wrong: a text*
is not a Datum.  Although this coding will accidentally fail to fail on
all known platforms, it risks leaking memory if a detoast step is needed,
unlike "TextDatumGetCString(PG_GETARG_DATUM(x))" which is what's used
elsewhere.  Make pg_get_object_address() fall in line with other uses.

Noted while reviewing two-arg current_setting() patch.
2015-07-02 17:02:08 -04:00
Tom Lane 10fb48d66d Add an optional missing_ok argument to SQL function current_setting().
This allows convenient checking for existence of a GUC from SQL, which is
particularly useful when dealing with custom variables.

David Christensen, reviewed by Jeevan Chalke
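
A quick sketch of the new two-argument form (the GUC name is made up):

    SELECT current_setting('myapp.batch_size', true);  -- NULL if not defined
    SELECT current_setting('myapp.batch_size');        -- would raise an error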
2015-07-02 16:41:07 -04:00
Heikki Linnakangas 7261172430 Remove obsolete heap_formtuple/modifytuple/deformtuple functions.
These variants used the old-style 'n'/' ' NULL indicators. The new-style
functions have been available since version 8.1. That should be long enough
that any old external code still using these functions can simply switch
to the new functions without worrying about backwards compatibility.

Peter Geoghegan
2015-07-02 21:21:23 +03:00