Commit Graph

16012 Commits

Author SHA1 Message Date
Tom Lane 8383486f10 Force idle_in_transaction_session_timeout off in pg_dump and autovacuum.
We disable statement_timeout and lock_timeout during dump and restore, to
prevent any global settings that might exist from breaking routine backups.
Commit c6dda1f48 should have added idle_in_transaction_session_timeout to
that list, but failed to.

Another place where these timeouts get turned off is autovacuum.  While
I doubt an idle timeout could fire there, it seems better to be safe than
sorry.

pg_dump issue noted by Bernd Helmle, the other one found by grepping.

Report: <352F9B77DB5D3082578D17BB@eje.land.credativ.lan>
2016-06-15 10:53:03 -04:00
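
For reference, the session setup that pg_dump now performs amounts to the
following SQL (a sketch; the exact command set depends on server version):

    SET statement_timeout = 0;
    SET lock_timeout = 0;
    SET idle_in_transaction_session_timeout = 0;

Setting each timeout to 0 disables it for the session.
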
Tom Lane 783cb6e48b Fix multiple minor infelicities in aclchk.c error reports.
pg_type_aclmask reported the wrong type's OID when complaining that
it could not find a type's typelem.  It also failed to provide a
suitable errcode when the initially given OID doesn't exist (which
is a user-facing error, since that OID can be user-specified).
pg_foreign_data_wrapper_aclmask and pg_foreign_server_aclmask likewise
lacked errcode specifications.  Trivial cosmetic adjustments too.

The wrong-type-OID problem was reported by Petru-Florin Mihancea in
bug #14186; the other issues noted by me while reading the code.
These errors all seem to be aboriginal in the respective routines, so
back-patch as necessary.

Report: <20160613163159.5798.52928@wrigleys.postgresql.org>
2016-06-13 13:53:10 -04:00
Tom Lane 89d53515e5 In planner.c, avoid assuming that all PathTargets have sortgrouprefs.
The struct definition for PathTarget specifies that a NULL sortgrouprefs
pointer means no sortgroupref labels.  While it's likely that there
should always be at least one labeled column in the places that were
unconditionally fetching through the pointer, it seems wiser to adhere to
the data structure specification and test first.  Add a macro to make this
convenient.  Per experimentation with running the regression tests with a
very small parallelization threshold --- the crash I observed may well
represent a bug elsewhere, but still this coding was not very robust.

Report: <20756.1465834072@sss.pgh.pa.us>
2016-06-13 12:59:25 -04:00
Noah Misch 3be0a62ffe Finish pgindent run for 9.6: Perl files. 2016-06-12 04:19:56 -04:00
Andres Freund 4bc0f165cb Change default of backend_flush_after GUC to 0 (disabled).
While backend_flush_after is beneficial, for both throughput and
average/worst-case latency, in a significant number of workloads, there
are other workloads in which it can cause significant performance
regressions in comparison to pre-9.6 releases. The regression is most
likely when the hot data set is bigger than shared buffers but
significantly smaller than the operating system's page cache.

I personally think that the benefit of enabling backend flush control is
considerably bigger than the potential downsides, but a fair argument
can be made that not regressing is more important than improving
performance/latency. As the latter is the consensus, change the default
to 0.

The other settings introduced in 428b1d6b2 do not have the same
potential for regressions, so leave them enabled.

Benchmarks leading up to changing the default have been performed by
Mithun Cy, Ashutosh Sharma and Robert Haas.

Discussion: CAD__OuhPmc6XH=wYRm_+Q657yQE88DakN4=Ybh2oveFasHkoeA@mail.gmail.com
2016-06-10 15:31:11 -07:00
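
For workloads where flush control does help, the setting can simply be
turned back on; a hedged example (the value shown is illustrative, not a
recommendation):

    -- re-enable backend flush control for this session
    SET backend_flush_after = '256kB';
    -- or persistently, for the whole cluster
    ALTER SYSTEM SET backend_flush_after = '256kB';
    SELECT pg_reload_conf();
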
Tom Lane 3303ea1a32 Remove reltarget_has_non_vars flag.
Commit b12fd41c6 added a "reltarget_has_non_vars" field to RelOptInfo,
but failed to maintain it accurately.  Since its only purpose was to skip
calls to has_parallel_hazard() in the simple case where a rel's targetlist
is all Vars, and that call is really pretty cheap in that case anyway, it
seems like this is just a case of premature optimization.  Let's drop the
flag and do the calls unconditionally until it's proven that we need more
smarts here.
2016-06-10 16:20:03 -04:00
Tom Lane 2f153ddfdd Refactor to reduce code duplication for function property checking.
As noted by Andres Freund, we'd accumulated quite a few similar functions
in clauses.c that examine all functions in an expression tree to see if
they satisfy some boolean test.  Reduce the duplication by inventing a
function check_functions_in_node() that applies a simple callback function
to each SQL function OID appearing in a given expression node.  This also
fixes some arguable oversights; for example, contain_mutable_functions()
did not check aggregate or window functions for mutability.  I doubt that
that represents a live bug at the moment, because we don't really consider
mutability for aggregates; but it might someday be one.

I chose to put check_functions_in_node() in nodeFuncs.c because it seemed
like other modules might wish to use it in future.  That in turn forced
moving set_opfuncid() et al into nodeFuncs.c, as the alternative was for
nodeFuncs.c to depend on optimizer/setrefs.c which didn't seem very clean.

In passing, teach contain_leaked_vars_walker() about a few more expression
node types it can safely look through, and improve the rather messy and
undercommented code in has_parallel_hazard_walker().

Discussion: <20160527185853.ziol2os2zskahl7v@alap3.anarazel.de>
2016-06-10 16:03:46 -04:00
Kevin Grittner 13761bccb1 Rename local variable for consistency.
Pointed out by Robert Haas.
2016-06-10 11:24:01 -05:00
Kevin Grittner bf9a60ee33 Fix interaction between CREATE INDEX and "snapshot too old".
Since indexes are created without valid LSNs, an index created
while a snapshot older than old_snapshot_threshold existed could
cause queries to return incorrect results when those old snapshots
were used, if any relevant rows had been subject to early pruning
before the index was built.  Prevent usage of a newly created index
until all such snapshots are released, for relations where this can
happen.

Questions about the interaction of "snapshot too old" with index
creation were initially raised by Andres Freund.

Reviewed by Robert Haas.
2016-06-10 09:25:31 -05:00
Tom Lane cae1c788b9 Improve the situation for parallel query versus temp relations.
Transmit the leader's temp-namespace state to workers.  This is important
because without it, the workers do not really have the same search path
as the leader.  For example, there is no good reason (and no extant code
either) to prevent a worker from executing a temp function that the
leader created previously; but as things stood it would fail to find the
temp function, and then either fail or execute the wrong function entirely.

We still prohibit a worker from creating a temp namespace on its own.
In effect, a worker can only see the session's temp namespace if the leader
had created it before starting the worker, which seems like the right
semantics.

Also, transmit the leader's BackendId to workers, and arrange for workers
to use that when determining the physical file path of a temp relation
belonging to their session.  While the original intent was to prevent such
accesses entirely, there were a number of holes in that, notably in places
like dbsize.c which assume they can safely access temp rels of other
sessions anyway.  We might as well get this right, as a small down payment
on someday allowing workers to access the leader's temp tables.  (With
this change, directly using "MyBackendId" as a relation or buffer backend
ID is deprecated; you should use BackendIdForTempRelations() instead.
I left a couple of such uses alone though, as they're not going to be
reachable in parallel workers until we do something about localbuf.c.)

Move the thou-shalt-not-access-thy-leader's-temp-tables prohibition down
into localbuf.c, which is where it actually matters, instead of having it
in relation_open().  This amounts to recognizing that access to temp
tables' catalog entries is perfectly safe in a worker, it's only the data
in local buffers that is problematic.

Having done all that, we can get rid of the test in has_parallel_hazard()
that says that use of a temp table's rowtype is unsafe in parallel workers.
That test was unduly expensive, and if we really did need such a
prohibition, that was not even close to being a bulletproof guard for it.
(For example, any user-defined function executed in a parallel worker
might have attempted such access.)
2016-06-09 20:16:11 -04:00
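
A sketch of the kind of usage this enables, assuming a hypothetical table
big_table and that the planner chooses a parallel plan at all:

    -- leader creates a temp function before any workers start
    CREATE FUNCTION pg_temp.plus_one(i int) RETURNS int
        LANGUAGE sql PARALLEL SAFE AS 'SELECT i + 1';
    SET max_parallel_workers_per_gather = 2;
    -- workers now share the leader's temp-namespace state,
    -- so they can resolve pg_temp.plus_one
    SELECT pg_temp.plus_one(x) FROM big_table;
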
Robert Haas 2410a2543e Repair a bit of pgindent damage.
Inserting line-breaks into the middle of a URL is, to put it mildly, not
very helpful, so persuade pgindent to leave it alone.
2016-06-09 18:09:17 -04:00
Robert Haas 4bc424b968 pgindent run for 9.6 2016-06-09 18:02:36 -04:00
Robert Haas b12fd41c69 Don't generate parallel paths for rels with parallel-restricted outputs.
Such paths are unsafe.  To make it cheaper to detect when this case
applies, track whether a relation's default PathTarget contains any
non-Vars.  In most cases, the answer will be no, which enables us to
determine cheaply that the target list for a proposed path is
parallel-safe.  However, subquery pull-up can create cases that
require us to inspect the target list more carefully.

Amit Kapila, reviewed by me.
2016-06-09 12:43:36 -04:00
Tom Lane e4158319f3 Mop-up for parallel degree-ectomy.
Fix a couple of overlooked uses of "degree" terminology.  Make the parallel
worker count selection logic in create_plain_partial_paths more robust (in
particular, it failed with max_parallel_workers_per_gather set to zero).
2016-06-09 11:16:26 -04:00
Robert Haas c9ce4a1c61 Eliminate "parallel degree" terminology.
This terminology provoked widespread complaints.  So, instead, rename
the GUC max_parallel_degree to max_parallel_workers_per_gather
(leaving room for a possible future GUC max_parallel_workers that acts
as a system-wide limit), and rename the parallel_degree reloption to
parallel_workers.  Rename structure members to match.

These changes create a dump/restore hazard for users of PostgreSQL
9.6beta1 who have set the reloption (or applied the GUC using ALTER
USER or ALTER DATABASE).
2016-06-09 10:00:26 -04:00
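
Under the new names, the same controls look like this (table name
hypothetical):

    -- per-Gather-node worker limit, formerly max_parallel_degree
    SET max_parallel_workers_per_gather = 4;
    -- per-table override, formerly the parallel_degree reloption
    ALTER TABLE my_table SET (parallel_workers = 4);
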
Alvaro Herrera 4f04b66f97 Fix loose ends for SQL ACCESS METHOD objects
COMMENT ON ACCESS METHOD was missing; add it, along with psql
tab-completion support for it.

psql was also missing a way to list existing access methods; the new \dA
command does that.

Also add tab-completion support for DROP ACCESS METHOD.

Author: Michael Paquier
Discussion: https://www.postgresql.org/message-id/CAB7nPqTzdZdu8J7EF8SXr_R2U5bSUUYNOT3oAWBZdEoggnwhGA@mail.gmail.com
2016-06-07 17:59:34 -04:00
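
Taken together with the pre-existing CREATE ACCESS METHOD, the new pieces
fit like this (the handler and comment text are illustrative only):

    CREATE ACCESS METHOD myam TYPE INDEX HANDLER blhandler;
    COMMENT ON ACCESS METHOD myam IS 'bloom-style index access method';
    -- in psql, \dA now lists the available access methods
    DROP ACCESS METHOD myam;
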
Tom Lane 77ba610805 Revert "Use Foreign Key relationships to infer multi-column join selectivity".
This commit reverts 137805f89 as well as the associated commits 015e88942,
5306df283, and 68d704edb.  We found multiple bugs in this feature, and
there was concern about possible planner slowdown (though to be fair,
exhibiting a very large slowdown proved difficult).  The way forward
requires a considerable rewrite, which may or may not be possible to
accomplish in time for beta2.  In my judgment reviewing the rewrite will
be easier to accomplish starting from a clean slate, so let's temporarily
revert what's there now.  This also leaves us in a safe state if it turns
out to be necessary to postpone the rewrite to the next development cycle.

Discussion: <20160429102531.GA13701@huehner.biz>
2016-06-07 17:21:17 -04:00
Peter Eisentraut 5c6d2a5e7c Message style and wording fixes 2016-06-07 14:18:55 -04:00
Simon Riggs 1f74a90888 Correct phrasing in dsm.c comments 2016-06-07 17:34:33 +01:00
Stephen Frost 40fc457520 Minor typos / copy-editing for snapmgr.c
Noticed while reviewing snapshot management.
2016-06-07 11:14:48 -04:00
Tom Lane f64340e743 Don't reset changes_since_analyze after a selective-columns ANALYZE.
If we ANALYZE only selected columns of a table, we should not postpone
auto-analyze because of that; other columns may well still need stats
updates.  As committed, the counter is left alone if a column list is
given, whether or not it includes all analyzable columns of the table.
Per complaint from Tomasz Ostrowski.

It's been like this a long time, so back-patch to all supported branches.

Report: <ef99c1bd-ff60-5f32-2733-c7b504eb960c@ato.waw.pl>
2016-06-06 17:44:17 -04:00
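
The distinction, in SQL (table and column names hypothetical):

    -- per-column ANALYZE: updates stats for col1 only, and with this
    -- fix no longer resets the table's changes_since_analyze counter
    ANALYZE my_table (col1);
    -- a full-table ANALYZE still resets the counter
    ANALYZE my_table;
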
Robert Haas c6dbf1fe79 Stop the executor if no more tuples can be sent from worker to leader.
If a Gather node has read as many tuples as it needs (for example, due
to Limit) it may detach the queue connecting it to the worker before
reading all of the worker's tuples.  Rather than let the worker
continue to generate and send all of the results, have it stop after
sending the next tuple.

More could be done here to stop the worker even quicker, but this is
about as well as we can hope to do for 9.6.

This is in response to a problem report from Andreas Seltenreich.
Commit 44339b892a should actually be
sufficient to fix that example even without this change, but it seems
better to do this, too, since we might otherwise waste quite a large
amount of effort in one or more workers.

Discussion: CAA4eK1KOKGqmz9bGu+Z42qhRwMbm4R5rfnqsLCNqFs9j14jzEA@mail.gmail.com

Amit Kapila
2016-06-06 14:52:58 -04:00
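
The affected pattern is a Gather feeding a Limit; a sketch (names
hypothetical, and whether the plan is parallel depends on costing):

    SET max_parallel_workers_per_gather = 2;
    -- once the leader has its 10 rows it detaches the queues;
    -- workers now notice that and stop instead of producing the rest
    SELECT * FROM big_table WHERE some_col > 0 LIMIT 10;
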
Robert Haas 44339b892a shm_mq: After a send fails with SHM_MQ_DETACHED, later ones should too.
Prior to this patch, it was occasionally possible, after shm_mq_sendv
had previously returned SHM_MQ_DETACHED, for a later shm_mq_sendv
operation to fail an assertion instead of just again returning
SHM_MQ_DETACHED.  From the shm_mq code's point of view, it was
expecting to be called again with the same arguments, since the
previous operation had only partially completed.  However, a caller
who isn't using non-blocking mode won't be prepared to repeat the call
with the same arguments, and this code shouldn't expect that they
will.  Repair in such a way that we'll be OK whether the next call
uses the same arguments or not.

Found by Andreas Seltenreich.  Analysis and sketch of fix by Amit
Kapila.  Patch by me, reviewed by Amit Kapila.
2016-06-06 14:35:30 -04:00
Robert Haas 932b97a011 Fix typo.
Jim Nasby
2016-06-06 07:58:50 -04:00
Peter Eisentraut 6201a8ef3a Fix whitespace 2016-06-05 17:02:56 -04:00
Tom Lane 8a859691d5 Properly initialize SortSupport for ORDER BY rechecks in nodeIndexscan.c.
Fix still another bug in commit 35fcb1b3d: it failed to fully initialize
the SortSupport states it introduced to allow the executor to re-check
ORDER BY expressions containing distance operators.  That led to a null
pointer dereference if the sortsupport code tried to use ssup_cxt.  The
problem only manifests in narrow cases, explaining the lack of previous
field reports.  It requires a GiST-indexable distance operator that lacks
SortSupport and is on a pass-by-ref data type, which among core+contrib
seems to be only btree_gist's interval opclass; and it requires the scan
to be done as an IndexScan not an IndexOnlyScan, which explains how
btree_gist's regression test didn't catch it.  Per bug #14134 from
Jihyun Yu.

Peter Geoghegan

Report: <20160511154904.2603.43889@wrigleys.postgresql.org>
2016-06-05 11:53:06 -04:00
Tom Lane 05104f6936 Fix grammar's AND/OR flattening to work with operator_precedence_warning.
It'd be good for "(x AND y) AND z" to produce a three-child AND node
whether or not operator_precedence_warning is on, but that failed to
happen when it's on because makeAndExpr() didn't look through the added
AEXPR_PAREN node.  This has no effect on generated plans because prepqual.c
would flatten the AND nest anyway; but it does affect the number of parens
printed in ruleutils.c, for example.  I'd already fixed some similar
hazards in parse_expr.c in commit abb164655, but didn't think to search
gram.y for problems of this ilk.  Per gripe from Jean-Pierre Pelletier.

Report: <fa0535ec6d6428cfec40c7e8a6d11156@mail.gmail.com>
2016-06-03 19:12:29 -04:00
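
A minimal way to exercise the fixed path (assuming a table t with boolean
columns x, y, z):

    SET operator_precedence_warning = on;
    -- now parses to a single three-child AND node, matching the
    -- result with the setting off
    SELECT * FROM t WHERE (x AND y) AND z;
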
Tom Lane d50183c578 Inline the easy cases in MakeExpandedObjectReadOnly().
This attempts to buy back some of whatever performance we lost from fixing
bug #14174 by inlining the initial checks in MakeExpandedObjectReadOnly()
into the callers.  We can do that in a macro without creating multiple-
evaluation hazards, so it's pretty much free notationally; and the amount
of code added to callers should be minimal as well.  (Testing a value can't
take many more instructions than passing it to a subroutine.)

Might as well inline DatumIsReadWriteExpandedObject() while we're at it.

This is an ABI break for callers, so it doesn't seem safe to put into 9.5,
but I see no reason not to do it in HEAD.
2016-06-03 18:34:05 -04:00
Tom Lane 9eaf5be506 Mark read/write expanded values as read-only in ValuesNext(), too.
Further thought about bug #14174 motivated me to try the case of a
R/W datum being returned from a VALUES list, and sure enough it was
broken.  Fix that.

Also add a regression test case exercising the same scenario for
FunctionScan.  That's not broken right now, because the function's
result will get shoved into a tuplestore between generation and use;
but it could easily become broken whenever we get around to optimizing
FunctionScan better.

There don't seem to be any other places where we put the result of
expression evaluation into a virtual tuple slot that could then be
the source for Vars of further expression evaluation, so I think
this is the end of this bug.
2016-06-03 18:07:14 -04:00
Tom Lane 69f526aa49 Mark read/write expanded values as read-only in ExecProject().
If a plan node output expression returns an "expanded" datum, and that
output column is referenced in more than one place in upper-level plan
nodes, we need to ensure that what is returned is a read-only reference
not a read/write reference.  Otherwise one of the referencing sites could
scribble on or even delete the expanded datum before we have evaluated the
others.  Commit 1dc5ebc907, which introduced this feature, supposed
that it'd be sufficient to make SubqueryScan nodes force their output
columns to read-only state.  The folly of that was revealed by bug #14174
from Andrew Gierth, and really should have been immediately obvious
considering that the planner will happily optimize SubqueryScan nodes
out of the plan without any regard for this issue.

The safest fix seems to be to make ExecProject() force its results into
read-only state; that will cover every case where a plan node returns
expression results.  Actually we can delegate this to ExecTargetList()
since we can recursively assume that plain Vars will not reference
read-write datums.  That should keep the extra overhead down to something
minimal.  We no longer need ExecMakeSlotContentsReadOnly(), which was
introduced only in support of the idea that just a few plan node types
would need to do this.

In the future it would be nice to have the planner account for this problem
and inject force-to-read-only expression evaluation nodes into only the
places where there's a risk of multiple evaluation.  That's not a suitable
solution for 9.5 or even 9.6 at this point, though.

Report: <20160603124628.9932.41279@wrigleys.postgresql.org>
2016-06-03 15:14:50 -04:00
Robert Haas 04ae11f62e Remove bogus code to apply PathTargets to partial paths.
The partial paths that get modified may already have been used as
part of a GatherPath which appears in the path list, so modifying
them is not a good idea at this stage - especially because this
code has no check that the PathTarget is in fact parallel-safe.

When partial aggregation is being performed, this is actually
harmless because we'll end up replacing the pathtargets here with
the correct ones within create_grouping_paths().  But if we've got
a query tree containing only scan/join operations then this can
result in incorrectly pushing down parallel-restricted target
list entries.  If those are, for example, references to subqueries,
that can crash the server; but it's wrong in any event.

Amit Kapila
2016-06-03 14:27:33 -04:00
Kevin Grittner 370a46fc01 Add new snapshot fields to serialize/deserialize functions.
The "snapshot too old" condition was not being recognized when
using a copied snapshot, since the original timestamp and lsn were
not being passed along.  Noticed when testing the combination of
"snapshot too old" with parallel query execution.
2016-06-03 11:13:28 -05:00
Robert Haas 6436a853f1 Fix comment to be more accurate.
Now that we skip vacuuming all-frozen pages, this comment needs
updating.

Masahiko Sawada
2016-06-03 11:56:57 -04:00
Greg Stark e1623c3959 Fix various common misspellings.
Mostly these are just comments but there are a few in documentation
and a handful in code and tests. Hopefully this doesn't cause too much
unnecessary pain for backpatching. I relented from some of the most
common like "thru" for that reason. The rest don't seem numerous
enough to cause problems.

Thanks to Kevin Lyda's tool https://pypi.python.org/pypi/misspellings
2016-06-03 16:08:45 +01:00
Robert Haas fdfaccfa79 Cosmetic improvements to freeze map code.
Per post-commit review comments from Andres Freund, improve variable
names, comments, and in one place, slightly improve the code structure.

Masahiko Sawada
2016-06-03 08:43:41 -04:00
Greg Stark a3b30763cc Be conservative about alignment requirements of struct epoll_event.
Use MAXALIGN size/alignment to guarantee that later uses of memory are
aligned correctly.  E.g. epoll_event might need 8-byte alignment on some
platforms, but earlier allocations like WaitEventSet and WaitEvent might
not be sized to guarantee that when purely using sizeof().

Found by myself while testing on a Sun Ultra 5 (Sparc IIi), with some
editorializing by Andres Freund.

In passing, fix a couple of typos in the area.
2016-06-02 19:38:52 +01:00
Kevin Grittner 7392eed7c2 Fix btree mark/restore bug.
Commit 2ed5b87f96 introduced a bug in
mark/restore, in an attempt to optimize repeated restores to the
same page.  This caused an assertion failure during a merge join
which fed directly from an index scan, although the impact would
not be limited to that case.  Revert the bad chunk of code from
that commit.

While investigating this bug it was discovered that a particular
"paranoia" set of the mark position field would not prevent bad
behavior; it would just make it harder to diagnose.  Change that
into an assertion, which will draw attention to any future problem
in that area more directly.

Backpatch to 9.5, where the bug was introduced.

Bug #14169 reported by Shinta Koyanagi.
Preliminary analysis by Tom Lane identified which commit caused
the bug.
2016-06-02 12:23:01 -05:00
Tom Lane 22b27b4c9e Avoid useless closely-spaced writes of statistics files.
The original intent in the stats collector was that we should not write out
stats data oftener than every PGSTAT_STAT_INTERVAL msec.  Backends will not
make requests at all if they see the existing data is newer than that, and
the stats collector is supposed to disregard requests having a cutoff_time
older than its most recently written data, so that close-together requests
don't result in multiple writes.  But the latter part of that got broken
in commit 187492b6c2, so that if two backends concurrently decide
the existing stats are too old, the collector would write the data twice.
(In principle the collector's logic would still merge requests as long as
the second one arrives before we've actually written data ... but since
the message collection loop would write data immediately after processing
a single inquiry message, that never happened in practice, and in any case
the window in which it might work would be much shorter than
PGSTAT_STAT_INTERVAL.)

To fix, improve pgstat_recv_inquiry so that it checks whether the cutoff
time is too old, and doesn't add a request to the queue if so.  This means
that we do not need DBWriteRequest.request_time, because the decision is
taken before making a queue entry.  And that means that we don't really
need the DBWriteRequest data structure at all; an OID list of database
OIDs will serve and allow removal of some rather verbose and crufty code.

In passing, improve the comments in this area, which have been rather
neglected.  Also change backend_read_statsfile so that it's not silently
relying on MyDatabaseId to have some particular value in the autovacuum
launcher process.  It accidentally worked as desired because MyDatabaseId
is zero in that process; but that does not seem like a dependency we want,
especially with no documentation about it.

Although this patch is mine, it turns out I'd rediscovered a known bug,
for which Tomas Vondra had already submitted a patch that's functionally
equivalent to the non-cosmetic aspects of this patch.  Thanks to Tomas
for reviewing this version.

Back-patch to 9.3 where the bug was introduced.

Prior-Discussion: <1718942738eb65c8407fcd864883f4c8@fuzzy.cz>
Patch: <4625.1464202586@sss.pgh.pa.us>
2016-05-31 15:55:15 -04:00
Noah Misch 2195c5afaa Mirror struct Aggref field order in _copyAggref().
This is cosmetic, and no supported release has the affected fields.
2016-05-31 00:01:03 -04:00
Alvaro Herrera 975ad4e602 Fix PageAddItem BRIN bug
BRIN was relying on the ability to remove a tuple from an index page,
then putting another tuple in the same line pointer.  But PageAddItem
refuses to add a tuple beyond the first free item past the last used
item, and in particular, it rejects an attempt to add an item to an
empty page anywhere other than the first line pointer.  PageAddItem
issues a WARNING and indicates to the caller that it failed, which in
turn causes the BRIN calling code to issue a PANIC, so the whole
sequence looks like this:
	WARNING:  specified item offset is too large
	PANIC:  failed to add BRIN tuple

To fix, create a new function PageAddItemExtended which is like
PageAddItem except that the two boolean arguments become a flags bitmap;
the "overwrite" and "is_heap" boolean flags in PageAddItem become
PAI_OVERWRITE and PAI_IS_HEAP flags in the new function, and a new flag
PAI_ALLOW_FAR_OFFSET enables the behavior required by BRIN.
PageAddItem() retains its original signature, for compatibility with
third-party modules (other callers in core code are not modified,
either).

Also, in the belt-and-suspenders spirit, I added a new sanity check in
brinGetTupleForHeapBlock to raise an error if a TID found in the revmap
is not marked as live by the page header.  This causes it to react with
"ERROR: corrupted BRIN index" to the bug at hand, rather than a hard
crash.

Backpatch to 9.5.

Bug reported by Andreas Seltenreich as detected by his handy sqlsmith
fuzzer.
Discussion: https://www.postgresql.org/message-id/87mvni77jh.fsf@elite.ansel.ydns.eu
2016-05-30 14:47:22 -04:00
Tom Lane 83dbde94f7 Fix DROP ACCESS METHOD IF EXISTS.
The IF EXISTS option was documented, and implemented in the grammar, but
it didn't actually work for lack of support in does_not_exist_skipping().
Per bug #14160.

Report and patch by Kouhei Sutou

Report: <20160527070433.19424.81712@wrigleys.postgresql.org>
2016-05-27 11:03:18 -04:00
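
With the fix, the documented form behaves as expected (name hypothetical):

    -- now issues a NOTICE and succeeds when the access method is
    -- absent, instead of failing outright
    DROP ACCESS METHOD IF EXISTS no_such_am;
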
Tom Lane 9dd4178cec Be more predictable about reporting "lock timeout" vs "statement timeout".
If both timeout indicators are set when we arrive at ProcessInterrupts,
we've historically just reported "lock timeout".  However, some buildfarm
members have been observed to fail isolationtester's timeouts test by
reporting "lock timeout" when the statement timeout was expected to fire
first.  The cause seems to be that the process is allowed to sleep longer
than expected (probably due to heavy machine load) so that the lock
timeout happens before we reach the point of reporting the error, and
then this arbitrary tiebreak rule does the wrong thing.  We can improve
matters by comparing the scheduled timeout times to decide which error
to report.

I had originally proposed greatly reducing the 1-second window between
the two timeouts in the test cases.  On reflection that is a bad idea,
at least for the case where the lock timeout is expected to fire first,
because that would assume that it takes negligible time to get from
statement start to the beginning of the lock wait.  Thus, this patch
doesn't completely remove the risk of test failures on slow machines.
Empirically, however, the case this handles is the one we are seeing
in the buildfarm.  The explanation may be that the other case requires
the scheduler to take the CPU away from a busy process, whereas the
case fixed here only requires the scheduler to not give the CPU back
right away to a process that has been woken from a multi-second sleep
(and, perhaps, has been swapped out meanwhile).

Back-patch to 9.3 where the isolationtester timeouts test was added.

Discussion: <8693.1464314819@sss.pgh.pa.us>
2016-05-27 10:40:20 -04:00
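
The tested scenario, roughly (values illustrative; the real isolation
spec differs in detail):

    SET statement_timeout = '100ms';
    SET lock_timeout = '1100ms';
    -- if the process oversleeps past both deadlines, the error reported
    -- is now the timeout that was scheduled to fire first
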
Tom Lane aeb9ae6457 Disable physical tlist if any Var would need multiple sortgroupref labels.
As part of upper planner pathification (commit 3fc6e2d7f5) I redid
createplan.c's approach to the physical-tlist optimization, in which scan
nodes are allowed to return exactly the underlying table's columns so as
to save doing a projection step at runtime.  The logic was intentionally
more aggressive than before about applying the optimization, which is
generally a good thing, but Andres Freund found a case in which it got
too aggressive.  Namely, if any column is referenced more than once in
the parent plan node's sorting or grouping column list, we can't optimize
because then that column would need to have more than one ressortgroupref
label, and we only have space for one.

Add logic to detect this situation in use_physical_tlist(), and also add
some error checking in apply_pathtarget_labeling_to_tlist(), which this
example proves was being overly cavalier about whether what it was doing
made any sense.

The added test case exposes the problem only because we do not eliminate
duplicate grouping keys.  That might be something to fix someday, but it
doesn't seem like appropriate post-beta work.

Report: <20160526021235.w4nq7k3gnheg7vit@alap3.anarazel.de>
2016-05-26 14:52:30 -04:00
Tom Lane b898eb6367 Remove option to write USING before opclass name in CREATE INDEX.
Dating back to commit f10b63923, our grammar has allowed "USING" to
optionally appear before an opclass name in CREATE INDEX (and, lately,
some related places such as ON CONFLICT specifications).  Nikolay Shaplov
noticed that this syntax existed but wasn't documented, and proposed
documenting it.  But what seems like a better idea is to remove the
production, thereby making the code match the docs not vice versa.
This isn't our usual modus operandi for such cases, but there are a
couple of good reasons to proceed this way:

* So far as I can find, this syntax has never been documented anywhere.
It isn't relied on by any of our own code or test cases, and there seems
little reason to suppose that it's been used in the wild either.

* Documenting it would mean that there would be two separate uses of
USING in the CREATE INDEX syntax, the other being "USING access_method".
That can lead to nothing but confusion.

So, let's just remove it.  On the off chance that somebody somewhere
is using it, this isn't something to back-patch, but we can fix it
in HEAD.

Discussion: <1593237.l7oKHRpxSe@nataraj-amd64>
2016-05-25 19:11:00 -04:00
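
Concretely (table name hypothetical, opclass shown for illustration):

    -- documented syntax, unchanged:
    CREATE INDEX ON my_table (col text_pattern_ops);
    -- formerly accepted but never documented, now a syntax error:
    -- CREATE INDEX ON my_table (col USING text_pattern_ops);
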
Tom Lane 52e8fc3e2e Ensure that backends see up-to-date statistics for shared catalogs.
Ever since we split the statistics collector's reports into per-database
files (commit 187492b6c2), backends have been seeing stale statistics
for shared catalogs.  This is because the inquiry message only prompts the
collector to write the per-database file for the requesting backend's own
database.  Stats for shared catalogs are in a separate file for "DB 0",
which didn't get updated.

In normal operation this was partially masked by the fact that the
autovacuum launcher would send an inquiry message at least once per
autovacuum_naptime that asked for "DB 0"; so the shared-catalog stats would
never be more than a minute out of date.  However the problem becomes very
obvious with autovacuum disabled, as reported by Peter Eisentraut.

To fix, redefine the semantics of inquiry messages so that both the
specified DB and DB 0 will be dumped.  (This might seem a bit inefficient,
but we have no good way to know whether a backend's transaction will look
at shared-catalog stats, so we have to read both groups of stats whenever
we request stats.  Sending two inquiry messages would definitely not be
better.)

Back-patch to 9.3 where the bug was introduced.

Report: <56AD41AC.1030509@gmx.net>
2016-05-25 17:48:15 -04:00
Tom Lane 2d2e40e3be Fetch XIDs atomically during vac_truncate_clog().
Because vac_update_datfrozenxid() updates datfrozenxid and datminmxid
in-place, it's unsafe to assume that successive reads of those values will
give consistent results.  Fetch each one just once to ensure sane behavior
in the minimum calculation.  Noted while reviewing Alexander Korotkov's
patch in the same area.

Discussion: <8564.1464116473@sss.pgh.pa.us>
2016-05-24 15:47:51 -04:00
Tom Lane 996d273978 Avoid consuming an XID during vac_truncate_clog().
vac_truncate_clog() uses its own transaction ID as the comparison point in
a sanity check that no database's datfrozenxid has already wrapped around
"into the future".  That was probably fine when written, but in a lazy
vacuum we won't have assigned an XID, so calling GetCurrentTransactionId()
causes an XID to be assigned when otherwise one would not be.  Most of the
time that's not a big problem ... but if we are hard up against the
wraparound limit, consuming XIDs during antiwraparound vacuums is a very
bad thing.

Instead, use ReadNewTransactionId(), which not only avoids this problem
but is in itself a better comparison point to test whether wraparound
has already occurred.

Report and patch by Alexander Korotkov.  Back-patch to all versions.

Report: <CAPpHfdspOkmiQsxh-UZw2chM6dRMwXAJGEmmbmqYR=yvM7-s6A@mail.gmail.com>
2016-05-24 15:20:36 -04:00
Alvaro Herrera 0c7cd45b6d Fix range check for effective_io_concurrency
Commit 1aba62ec moved the range check of that option from guc.c into
bufmgr.c, but introduced a bug by changing a >= 0.0 to > 0.0, which made
the value 0 no longer accepted.  Put it back.

Reported by Jeff Janes, diagnosed by Tom Lane
2016-05-24 14:55:34 -04:00
Tom Lane 1e0d6512e5 Fix BTREE_BUILD_STATS build.
Commit 65c5fcd353 broke this by removing a
header include directive that is conditionally required.  Add that back
to nbtree.c, with annotation to keep pgrminclude from re-breaking it.

Peter Geoghegan

Report: <CAM3SWZTNjHFYW_UG8bu0BnogqQ2HfsTgkzXLueuUhfTcYbu5HA@mail.gmail.com>
2016-05-23 19:41:11 -04:00
Tom Lane eae1ad9b64 Support IndexElem in raw_expression_tree_walker().
Needed for cases in which INSERT ... ON CONFLICT appears inside a
recursive CTE item.  Per bug #14153 from Thomas Alton.

Patch by Peter Geoghegan, slightly adjusted by me

Report: <20160521232802.22598.13537@wrigleys.postgresql.org>
2016-05-23 19:23:36 -04:00
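
A query of roughly the shape that failed before this fix (schema
hypothetical); the IndexElem that the walker must now handle comes from
the ON CONFLICT (k) specification inside the WITH RECURSIVE list:

    WITH RECURSIVE ins AS (
        INSERT INTO targets (k) VALUES (1)
        ON CONFLICT (k) DO NOTHING
        RETURNING k
    ), r AS (
        SELECT k FROM ins
        UNION ALL
        SELECT k + 1 FROM r WHERE k < 3
    )
    SELECT * FROM r;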