Commit Graph

28568 Commits

Tom Lane 8cf739de85 Fix building of large (bigger than shared_buffers) hash indexes.
When the index is predicted to need more than NBuffers buckets,
CREATE INDEX attempts to sort the index entries by hash key before
insertion, so as to reduce thrashing.  This code path got broken by
commit 9f03ca9151, which overlooked that _hash_form_tuple() is not
just an alias for index_form_tuple().  The index got built anyway, but
with garbage data, so that searches for pre-existing tuples always
failed.  Fix by refactoring to separate construction of the indexable
data from calling index_form_tuple().

Per bug #14210 from Daniel Newman.  Back-patch to 9.5 where the
bug was introduced.

Report: <20160623162507.17237.39471@wrigleys.postgresql.org>
2016-06-24 16:57:36 -04:00
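
A minimal SQL sketch of the affected scenario (table name and row count
illustrative; the sorted build path triggers only when the estimated bucket
count exceeds shared_buffers):

    CREATE TABLE hash_big AS
        SELECT g AS id FROM generate_series(1, 10000000) g;
    CREATE INDEX hash_big_idx ON hash_big USING hash (id);
    -- Before this fix, index searches for pre-existing rows found nothing:
    SELECT count(*) FROM hash_big WHERE id = 12345;
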
Peter Eisentraut bd406af168 psql: Improve \crosstabview error messages 2016-06-24 01:08:08 -04:00
Andrew Dunstan 562e449724 Add tab completion for pager_min_lines to psql.
This was inadvertently omitted from commit
7655f4ccea. Mea culpa.

Backpatched to 9.5 where pager_min_lines was introduced.
2016-06-23 16:10:15 -04:00
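
For context, pager_min_lines is a \pset variable; a hedged usage sketch
(value illustrative):

    \pset pager_min_lines 100
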
Tom Lane bd1693e89e Fix small memory leak in partial-aggregate deserialization functions.
A deserialize function's result is short-lived data during partial
aggregation, since we're just going to pass it to the combine function
and then it's of no use anymore.  However, the built-in deserialize
functions allocated their results in the aggregate state context,
resulting in a query-lifespan memory leak.  It's probably not possible for
this to amount to anything much at present, since the number of leaked
results would only be the number of worker processes.  But it might become
a problem in future.  To fix, don't use the same convenience subroutine for
setting up results that the aggregate transition functions use.

David Rowley

Report: <10050.1466637736@sss.pgh.pa.us>
2016-06-23 10:55:59 -04:00
Tom Lane 63ae052367 Update oidjoins regression test for 9.6.
Looks like we had some more catalog drift recently.
2016-06-22 17:12:55 -04:00
Tom Lane f8ace5477e Fix type-safety problem with parallel aggregate serial/deserialization.
The original specification for this called for the deserialization function
to have signature "deserialize(serialtype) returns transtype", which is a
security violation if transtype is INTERNAL (which it always would be in
practice) and serialtype is not (which ditto).  The patch blithely overrode
the opr_sanity check for that, which was sloppy-enough work in itself,
but the indisputable reason this cannot be allowed to stand is that CREATE
FUNCTION will reject such a signature and thus it'd be impossible for
extensions to create parallelizable aggregates.

The minimum fix to make the signature type-safe is to add a second, dummy
argument of type INTERNAL.  But to lock it down a bit more and make misuse
of INTERNAL-accepting functions less likely, let's get rid of the ability
to specify a "serialtype" for an aggregate and just say that the only
useful serialtype is BYTEA --- which, in practice, is the only interesting
value anyway, due to the usefulness of the send/recv infrastructure for
this purpose.  That means we only have to allow "serialize(internal)
returns bytea" and "deserialize(bytea, internal) returns internal" as
the signatures for these support functions.

In passing fix bogus signature of int4_avg_combine, which I found thanks
to adding an opr_sanity check on combinefunc signatures.

catversion bump due to removing pg_aggregate.aggserialtype and adjusting
signatures of assorted built-in functions.

David Rowley and Tom Lane

Discussion: <27247.1466185504@sss.pgh.pa.us>
2016-06-22 16:52:41 -04:00
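
A sketch of the only signatures now accepted, in CREATE AGGREGATE terms; the
function, module, and aggregate names here are hypothetical, and the support
functions are assumed to exist in a C extension:

    -- serialize(internal) returns bytea
    CREATE FUNCTION my_serialize(internal) RETURNS bytea
        AS 'my_agg_module' LANGUAGE C STRICT;
    -- deserialize(bytea, internal) returns internal
    CREATE FUNCTION my_deserialize(bytea, internal) RETURNS internal
        AS 'my_agg_module' LANGUAGE C STRICT;
    CREATE AGGREGATE my_agg(numeric) (
        SFUNC        = my_transfn,
        STYPE        = internal,
        FINALFUNC    = my_finalfn,
        COMBINEFUNC  = my_combinefn,
        SERIALFUNC   = my_serialize,
        DESERIALFUNC = my_deserialize,
        PARALLEL     = SAFE
    );
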
Tom Lane e45e990e4b Make "postgres -C guc" print "" not "(null)" for null-valued GUCs.
Commit 0b0baf262 et al made this case print "(null)" on the grounds that
that's what happened on platforms that didn't crash.  But neither behavior
was actually intentional.  What we should print is just an empty string,
for compatibility with the behavior of SHOW and other ways of examining
string GUCs.  Those code paths don't distinguish NULL from empty strings,
so we should not here either.  Per gripe from Alain Radix.

Like the previous patch, back-patch to 9.2 where -C option was introduced.

Discussion: <CA+YdpwxPUADrmxSD7+Td=uOshMB1KkDN7G7cf+FGmNjjxMhjbw@mail.gmail.com>
2016-06-22 11:55:18 -04:00
Peter Eisentraut 6a9c51810f Improve cleanup in rolenames test
Drop test_schema at the end, because that otherwise interferes with the
collate.linux.utf8 test.
2016-06-21 21:52:35 -04:00
Bruce Momjian 3e9df858f0 Update comment about allowing GUCs to change scanning.
Reported-by: David G. Johnston

Discussion: CAKFQuwZZvnxwSq9tNtvL+uyuDKGgV91zR_agtPxQHRWMWQRP8g@mail.gmail.com
2016-06-21 20:23:31 -04:00
Tom Lane 8b9d323cb9 Refactor planning of projection steps that don't need a Result plan node.
The original upper-planner-pathification design (commit 3fc6e2d7f5)
assumed that we could always determine during Path formation whether or not
we would need a Result plan node to perform projection of a targetlist.
That turns out not to work very well, though, because createplan.c still
has some responsibilities for choosing the specific target list associated
with sorting/grouping nodes (in particular it might choose to add resjunk
columns for sorting).  We might not ever refactor that --- doing so would
push more work into Path formation, which isn't attractive --- and we
certainly won't do so for 9.6.  So, while create_projection_path and
apply_projection_to_path can tell for sure what will happen if the subpath
is projection-capable, they can't tell for sure when it isn't.  This is at
least a latent bug in apply_projection_to_path, which might think it can
apply a target to a non-projecting node when the node will end up computing
something different.

Also, I'd tied the creation of a ProjectionPath node to whether or not a
Result is needed, but it turns out that we sometimes need a ProjectionPath
node anyway to avoid modifying a possibly-shared subpath node.  Callers had
to use create_projection_path for such cases, and we added code to them
that knew about the potential omission of a Result node and attempted to
adjust the cost estimates for that.  That was of uncertain correctness and
definitely ugly/unmaintainable.

To fix, have create_projection_path explicitly check whether a Result
is needed and adjust its cost estimate accordingly, though it creates
a ProjectionPath in either case.  apply_projection_to_path is now mostly
just an optimized version that can avoid creating an extra Path node when
the input is known to not be shared with any other live path.  (There
is one case that create_projection_path doesn't handle, which is pushing
parallel-safe expressions below a Gather node.  We could make it do that
by duplicating the GatherPath, but there seems no need as yet.)

create_projection_plan still has to recheck the tlist-match condition,
which means that if the matching situation does get changed by createplan.c
then we'll have made a slightly incorrect cost estimate.  But there seems
no help for that in the near term, and I doubt it occurs often enough,
let alone would change planning decisions often enough, to be worth
stressing about.

I added a "dummypp" field to ProjectionPath to track whether
create_projection_path thinks a Result is needed.  This is not really
necessary as-committed because create_projection_plan doesn't look at the
flag; but it seems like a good idea to remember what we thought when
forming the cost estimate, if only for debugging purposes.

In passing, get rid of the target_parallel parameter added to
apply_projection_to_path by commit 54f5c5150.  I don't think that's a good
idea because it involves callers in what should be an internal decision,
and opens us up to missing optimization opportunities if callers think they
don't need to provide a valid flag, as most don't.  For the moment, this
just costs us an extra has_parallel_hazard call when planning a Gather.
If that starts to look expensive, I think a better solution would be to
teach PathTarget to carry/cache knowledge of parallel-safety of its
contents.
2016-06-21 18:38:20 -04:00
Tom Lane 936b62ddf2 Stamp 9.6beta2. 2016-06-20 16:23:47 -04:00
Tom Lane 1fe1204e87 Add missing check for malloc failure in plpgsql_extra_checks_check_hook().
Per report from Andreas Seltenreich.  Back-patch to affected versions.

Report: <874m8nn0hv.fsf@elite.ansel.ydns.eu>
2016-06-20 15:36:54 -04:00
Peter Eisentraut 47981a4665 Translation updates
Source-Git-URL: git://git.postgresql.org/git/pgtranslation/messages.git
Source-Git-Hash: 0c374f8d25ed31833a10d24252bc928d41438838
2016-06-20 09:48:08 -04:00
Tom Lane 9bc3332372 Improve error message annotation for GRANT/REVOKE on untrusted PLs.
The annotation for "ERROR: language "foo" is not trusted" used to say
"HINT: Only superusers can use untrusted languages", which was fairly
poorly thought out.  For one thing, it's not a hint about what to do,
but a statement of fact, which makes it errdetail.  But also, this
fails to clarify things much, because there's a missing step in the
chain of reasoning.  I think it's more useful to say "GRANT and REVOKE
are not allowed on untrusted languages, because only superusers can use
untrusted languages".

It's been like this for a long time, but given the lack of previous
complaints, I don't think this is worth back-patching.

Discussion: <1417.1466289901@sss.pgh.pa.us>
2016-06-18 19:38:59 -04:00
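
Illustratively (role and language names arbitrary), the exchange now reads:

    GRANT USAGE ON LANGUAGE plpythonu TO alice;
    -- ERROR:  language "plpythonu" is not trusted
    -- DETAIL:  GRANT and REVOKE are not allowed on untrusted languages,
    --          because only superusers can use untrusted languages.
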
Tom Lane 100340e2dc Restore foreign-key-aware estimation of join relation sizes.
This patch provides a new implementation of the logic added by commit
137805f89 and later removed by 77ba61080.  It differs from the original
primarily in expending much less effort per joinrel in large queries,
which it accomplishes by doing most of the matching work once per query not
once per joinrel.  Hopefully, it's also less buggy and better commented.
The never-documented enable_fkey_estimates GUC remains gone.

There remains work to be done to make the selectivity estimates account
for nulls in FK referencing columns; but that was true of the original
patch as well.  We may be able to address this point later in beta.
In the meantime, any error should be in the direction of overestimating
rather than underestimating joinrel sizes, which seems like the direction
we want to err in.

Tomas Vondra and Tom Lane

Discussion: <31041.1465069446@sss.pgh.pa.us>
2016-06-18 15:22:34 -04:00
Tom Lane 598aa194af Still another try at fixing scanjoin_target insertion into parallel plans.
The previous code neglected the fact that the scanjoin_target might
carry sortgroupref labelings that we need to absorb.  Instead, do
create_projection_path() unconditionally, and tweak the path's cost
estimate after the fact.  (I'm now convinced that we ought to refactor
the way we account for sometimes not needing a separate projection step,
but right now is not the time for that sort of cleanup.)

Problem identified by Amit Kapila, patch by me.
2016-06-18 00:28:51 -04:00
Tom Lane 915b703e16 Fix handling of argument and result datatypes for partial aggregation.
When doing partial aggregation, the args list of the upper (combining)
Aggref node is replaced by a Var representing the output of the partial
aggregation steps, which has either the aggregate's transition data type
or a serialized representation of that.  However, nodeAgg.c blindly
continued to use the args list as an indication of the user-level argument
types.  This broke resolution of polymorphic transition datatypes at
executor startup (though it accidentally failed to fail for the ANYARRAY
case, which is likely the only one anyone had tested).  Moreover, the
constructed FuncExpr passed to the finalfunc contained completely wrong
information, which would have led to bogus answers or crashes for any case
where the finalfunc examined that information (which is only likely to be
with polymorphic aggregates using a non-polymorphic transition type).

As an independent bug, apply_partialaggref_adjustment neglected to resolve
a polymorphic transition datatype before assigning it as the output type
of the lower-level Aggref node.  This again accidentally failed to fail
for ANYARRAY but would be unlikely to work in other cases.

To fix the first problem, record the user-level argument types in a
separate OID-list field of Aggref, and look to that rather than the args
list when asking what the argument types were.  (It turns out to be
convenient to include any "direct" arguments in this list too, although
those are not currently subject to being overwritten.)

Rather than adding yet another resolve_aggregate_transtype() call to fix
the second problem, add an aggtranstype field to Aggref, and store the
resolved transition type OID there when the planner first computes it.
(By doing this in the planner and not the parser, we can allow the
aggregate's transition type to change from time to time, although no DDL
support yet exists for that.)  This saves nothing of consequence for
simple non-polymorphic aggregates, but for polymorphic transition types
we save a catalog lookup during executor startup as well as several
planner lookups that are new in 9.6 due to parallel query planning.

In passing, fix an error that was introduced into count_agg_clauses_walker
some time ago: it was applying exprTypmod() to something that wasn't an
expression node at all, but a TargetEntry.  exprTypmod silently returned
-1 so that there was not an obvious failure, but this broke the intended
sensitivity of aggregate space consumption estimates to the typmod of
varchar and similar data types.  This part needs to be back-patched.

Catversion bump due to change of stored Aggref nodes.

Discussion: <8229.1466109074@sss.pgh.pa.us>
2016-06-17 21:44:37 -04:00
Alvaro Herrera 1ca4a1b5d2 Finish up XLOG_HINT renaming
Commit b8fd1a09f3 renamed XLOG_HINT to XLOG_FPI, but neglected two
places.

Backpatch to 9.3, like that commit.
2016-06-17 18:05:55 -04:00
Robert Haas 71d05a2c7b pg_visibility: Add pg_truncate_visibility_map function.
This requires some core changes as well so that we can properly
WAL-log the truncation.  Specifically, it changes the format of the
XLOG_SMGR_TRUNCATE WAL record, so bump XLOG_PAGE_MAGIC.

Patch by me, reviewed but not fully endorsed by Andres Freund.
2016-06-17 17:37:30 -04:00
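
A hedged usage sketch (table name illustrative; the regclass argument type
is an assumption based on the other pg_visibility functions):

    CREATE EXTENSION pg_visibility;
    SELECT pg_truncate_visibility_map('my_table'::regclass);
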
Robert Haas 54f5c5150f Try again to fix the way the scanjoin_target is used with partial paths.
Commit 04ae11f62e removed some broken
code to apply the scan/join target to partial paths, but its theory
that this processing step is totally unnecessary turns out to be wrong.
Put similar code back again, but this time, check for parallel-safety
and avoid in-place modifications to paths that may already have been
used as part of some other path.

(This is not an entirely elegant solution to this problem; it might
be better, for example, to postpone generate_gather_paths for the
topmost scan/join rel until after the scan/join target has been
applied.  But this is not the time for such redesign work.)

Amit Kapila and Robert Haas
2016-06-17 16:29:07 -04:00
Robert Haas ede62e56fb Add VACUUM (DISABLE_PAGE_SKIPPING) for emergencies.
If you really want to vacuum every single page in the relation,
regardless of apparent visibility status or anything else, you can use
this option.  In previous releases, this behavior could be achieved
using VACUUM (FREEZE), but because we can now recognize all-frozen
pages as not needing to be frozen again, that no longer works.  There
should be no need for routine use of this option, but maybe bugs or
disaster recovery will necessitate its use.

Patch by me, reviewed by Andres Freund.
2016-06-17 15:48:57 -04:00
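
Usage, with an illustrative table name:

    -- Visit every page, ignoring the visibility map entirely:
    VACUUM (DISABLE_PAGE_SKIPPING) my_table;
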
Robert Haas 9c188a8454 Fix typo.
Thomas Munro
2016-06-17 12:55:30 -04:00
Robert Haas 292794f82b Remove PID from 'parallel worker' context message.
Discussion: <bfd204ab-ab1a-792a-b345-0274a09a4b5f@2ndquadrant.com>
2016-06-17 09:26:17 -04:00
Robert Haas 103512cee9 Attempt to fix broken regression test.
In commit 8c1d9d56e9, I attempted to
add a regression test that would fail if the target list was pushed
into a parallel worker, but due to brain fade on my part, it just
randomly fails whether anything bad happens or not, because the error check
inside the parallel_restricted() function tests whether there is
*any process in the system* that is not connected to a client, not
whether the process running the query is not connected to a client.

A little experimentation has left me pessimistic about the
prospects of doing better here in a short amount of time, so let's
just fall back to checking that the plan is as we expect and leave
the execution-time check for another day.
2016-06-17 08:35:47 -04:00
Tom Lane 4c56f3269a Fix validation of overly-long IPv6 addresses.
The inet/cidr types sometimes failed to reject IPv6 inputs with too many
colon-separated fields, instead translating them to '::/0'.  This is the
result of a thinko in the original ISC code that seems to be as yet
unreported elsewhere.  Per bug #14198 from Stefan Kaltenbrunner.

Report: <20160616182222.5798.959@wrigleys.postgresql.org>
2016-06-16 17:16:32 -04:00
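
An illustrative over-long input (nine colon-separated fields), which is now
rejected instead of being read as '::/0':

    SELECT '1:2:3:4:5:6:7:8:9'::cidr;
    -- now fails with an invalid-input error rather than returning ::/0
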
Tom Lane bfb937427b Fix fuzzy thinking in ReinitializeParallelDSM().
The fact that no workers were successfully launched in the previous
iteration does not excuse us from setting up properly to try again.
This appears to explain crashes I saw in parallel regression testing
due to error_mqh being NULL when it shouldn't be.

Minor other cosmetic fixes too.
2016-06-16 15:20:29 -04:00
Tom Lane 75be66464c Invent min_parallel_relation_size GUC to replace a hard-wired constant.
The main point of doing this is to allow the cutoff to be set very small,
even zero, to allow parallel-query behavior to be tested on relatively
small tables such as we typically use in the regression tests.  But it
might be of use to users too.  The number-of-workers scaling behavior in
create_plain_partial_paths() is pretty ad-hoc and subject to change, so
we won't expose anything about that, but the notion of not considering
parallel query at all for tables below size X seems reasonably stable.

Amit Kapila, per a suggestion from me

Discussion: <17170.1465830165@sss.pgh.pa.us>
2016-06-16 13:47:20 -04:00
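
A sketch of the testing recipe this enables (values and table name
illustrative):

    SET min_parallel_relation_size = 0;
    SET max_parallel_workers_per_gather = 2;
    EXPLAIN SELECT count(*) FROM small_table;  -- now eligible for a parallel plan
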
Alvaro Herrera 3b5a2a8856 Reword bogus comment 2016-06-16 12:43:35 -04:00
Tom Lane 0b0baf2621 Avoid crash in "postgres -C guc" for a GUC with a null string value.
Emit "(null)" instead, which was the behavior all along on platforms
that don't crash, eg OS X.  Per report from Jehan-Guillaume de Rorthais.
Back-patch to 9.2 where -C option was introduced.

Michael Paquier

Report: <20160615204036.2d35d86a@firost>
2016-06-16 12:17:38 -04:00
Alvaro Herrera b000afea65 Remove unused prototype
Commit 6f56b41ac0 removed function get_pg_database_relfilenode but left
its prototype in place.  Remove it.
2016-06-16 12:06:51 -04:00
Robert Haas 8c1d9d56e9 Add regression test for 04ae11f62e.
The code in this area needs further revision, and it would be best
not to re-break the things we've already fixed.

Per a gripe from Tom Lane.
2016-06-16 12:00:55 -04:00
Tom Lane 9901d8ac2e Use strftime("%c") to format timestamps in psql's \watch command.
This allows the timestamps to follow local conventions (in particular,
they respond to the LC_TIME environment setting).  In C locale you get
the same results as before.  It seems like a good idea to do this now rather
than later, since we already changed the format of \watch headers for 9.6.

Also, increase the buffer sizes a tad to ensure there's enough space for
translated strings.

Discussion: <20160612145532.GA22965@postgresql.kr>
2016-06-15 19:31:13 -04:00
Robert Haas 12f862099d Fix regression test for force_parallel_mode=on.
Commit 14a254fb52 managed not to
exercise the code it was intended to test, and the comment explaining
why no "parallel worker" line showed up in the context wasn't right.

Amit Kapila, tweaked by me per Amit's analysis.
2016-06-15 14:59:07 -04:00
Robert Haas e472ce9624 Add integrity-checking functions to pg_visibility.
The new pg_check_visible() and pg_check_frozen() functions can be used to
verify that the visibility map bits for a relation's data pages match the
actual state of the tuples on those pages.

Amit Kapila and Robert Haas, reviewed (in earlier versions) by Andres
Freund.  Additional testing help by Thomas Munro.
2016-06-15 14:33:58 -04:00
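
A hedged usage sketch (table name illustrative); each function returns the
TIDs whose visibility-map state disagrees with the heap, so an empty result
is the healthy outcome:

    CREATE EXTENSION pg_visibility;
    SELECT * FROM pg_check_visible('my_table'::regclass);
    SELECT * FROM pg_check_frozen('my_table'::regclass);
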
Robert Haas 38e9f90a22 Fix lazy_scan_heap so that it won't mark pages all-frozen too soon.
Commit a892234f83 added a new bit per
page to the visibility map fork indicating whether the page is
all-frozen, but incorrectly assumed that if lazy_scan_heap chose to
freeze a tuple then that tuple would not need to later be frozen
again. This turns out to be false, because xmin and xmax (and
conceivably xvac, if dealing with tuples from very old releases) could
be frozen at separate times.

Thanks to Andres Freund for help in uncovering and tracking down this
issue.
2016-06-15 14:30:06 -04:00
Robert Haas c7a25c242f Mark some functions parallel-unsafe.
currtid() and currtid2() call GetLatestSnapshot(), which fails in
parallel mode.  pg_export_snapshot() calls ExportSnapshot() which
attempts to assign an XID for the current transaction if it does not
already have one; that, too, will fail in parallel mode.

Andreas Seltenreich
2016-06-15 11:40:07 -04:00
Tom Lane 8383486f10 Force idle_in_transaction_session_timeout off in pg_dump and autovacuum.
We disable statement_timeout and lock_timeout during dump and restore, to
prevent any global settings that might exist from breaking routine backups.
Commit c6dda1f48 should have added idle_in_transaction_session_timeout to
that list, but failed to.

Another place where these timeouts get turned off is autovacuum.  While
I doubt an idle timeout could fire there, it seems better to be safe than
sorry.

pg_dump issue noted by Bernd Helmle, the other one found by grepping.

Report: <352F9B77DB5D3082578D17BB@eje.land.credativ.lan>
2016-06-15 10:53:03 -04:00
Peter Eisentraut f0688d6e6c PL/Python: Clean up extended error reporting docs and tests
Format the example and test code more to Python style standards.
Improve whitespace.  Improve documentation formatting.
2016-06-15 10:34:11 -04:00
Tom Lane 783cb6e48b Fix multiple minor infelicities in aclchk.c error reports.
pg_type_aclmask reported the wrong type's OID when complaining that
it could not find a type's typelem.  It also failed to provide a
suitable errcode when the initially given OID doesn't exist (which
is a user-facing error, since that OID can be user-specified).
pg_foreign_data_wrapper_aclmask and pg_foreign_server_aclmask likewise
lacked errcode specifications.  Trivial cosmetic adjustments too.

The wrong-type-OID problem was reported by Petru-Florin Mihancea in
bug #14186; the other issues noted by me while reading the code.
These errors all seem to be aboriginal in the respective routines, so
back-patch as necessary.

Report: <20160613163159.5798.52928@wrigleys.postgresql.org>
2016-06-13 13:53:10 -04:00
Tom Lane 89d53515e5 In planner.c, avoid assuming that all PathTargets have sortgrouprefs.
The struct definition for PathTarget specifies that a NULL sortgrouprefs
pointer means no sortgroupref labels.  While it's likely that there
should always be at least one labeled column in the places that were
unconditionally fetching through the pointer, it seems wiser to adhere to
the data structure specification and test first.  Add a macro to make this
convenient.  Per experimentation with running the regression tests with a
very small parallelization threshold --- the crash I observed may well
represent a bug elsewhere, but still this coding was not very robust.

Report: <20756.1465834072@sss.pgh.pa.us>
2016-06-13 12:59:25 -04:00
Tom Lane cd9b4f24ce Remove extraneous leading whitespace in Windows build script.
Apparently, at least some versions of Microsoft's shell fail on variable
assignments that have leading whitespace.  This instance, introduced in
commit 680513ab7, managed to escape notice for awhile because it's only
invoked if building with OpenSSL.  Per bug #14185 from Torben Dannhauer.

Report: <20160613140119.5798.78501@wrigleys.postgresql.org>
2016-06-13 11:50:27 -04:00
Noah Misch 3be0a62ffe Finish pgindent run for 9.6: Perl files. 2016-06-12 04:19:56 -04:00
Noah Misch b098abf905 Document the authoritative version of perltidy.
Every whole-tree perltidy run has used this version, firmly establishing
it as the de facto standard.
2016-06-12 04:19:44 -04:00
Peter Eisentraut 020140d84d PL/Python: Rename new keyword arguments of plpy.error() etc.
Rename schema -> schema_name etc. to remain consistent with C API and
PL/pgSQL.
2016-06-11 19:27:49 -04:00
Andres Freund 4bc0f165cb Change default of backend_flush_after GUC to 0 (disabled).
While beneficial for both throughput and average/worst-case latency in a
significant number of workloads, there are other workloads in which
backend_flush_after can cause significant performance regressions in
comparison to pre-9.6 releases.  The regression is most likely when the hot
data set is bigger than shared buffers, but significantly smaller than
the operating system's page cache.

I personally think that the benefit of enabling backend flush control is
considerably bigger than the potential downsides, but a fair argument
can be made that not regressing is more important than improving
performance/latency. As the latter is the consensus, change the default
to 0.

The other settings introduced in 428b1d6b2 do not have the same
potential for regressions, so leave them enabled.

Benchmarks leading up to changing the default have been performed by
Mithun Cy, Ashutosh Sharma and Robert Haas.

Discussion: CAD__OuhPmc6XH=wYRm_+Q657yQE88DakN4=Ybh2oveFasHkoeA@mail.gmail.com
2016-06-10 15:31:11 -07:00
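
To opt back in after this change, something like the following (value
illustrative; the setting is expressed in pages):

    ALTER SYSTEM SET backend_flush_after = 64;  -- 0 disables (the new default)
    SELECT pg_reload_conf();
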
Tom Lane 3303ea1a32 Remove reltarget_has_non_vars flag.
Commit b12fd41c6 added a "reltarget_has_non_vars" field to RelOptInfo,
but failed to maintain it accurately.  Since its only purpose was to skip
calls to has_parallel_hazard() in the simple case where a rel's targetlist
is all Vars, and that call is really pretty cheap in that case anyway, it
seems like this is just a case of premature optimization.  Let's drop the
flag and do the calls unconditionally until it's proven that we need more
smarts here.
2016-06-10 16:20:03 -04:00
Tom Lane 2f153ddfdd Refactor to reduce code duplication for function property checking.
As noted by Andres Freund, we'd accumulated quite a few similar functions
in clauses.c that examine all functions in an expression tree to see if
they satisfy some boolean test.  Reduce the duplication by inventing a
function check_functions_in_node() that applies a simple callback function
to each SQL function OID appearing in a given expression node.  This also
fixes some arguable oversights; for example, contain_mutable_functions()
did not check aggregate or window functions for mutability.  I doubt that
that represents a live bug at the moment, because we don't really consider
mutability for aggregates; but it might someday be one.

I chose to put check_functions_in_node() in nodeFuncs.c because it seemed
like other modules might wish to use it in future.  That in turn forced
moving set_opfuncid() et al into nodeFuncs.c, as the alternative was for
nodeFuncs.c to depend on optimizer/setrefs.c which didn't seem very clean.

In passing, teach contain_leaked_vars_walker() about a few more expression
node types it can safely look through, and improve the rather messy and
undercommented code in has_parallel_hazard_walker().

Discussion: <20160527185853.ziol2os2zskahl7v@alap3.anarazel.de>
2016-06-10 16:03:46 -04:00
Kevin Grittner 13761bccb1 Rename local variable for consistency.
Pointed out by Robert Haas.
2016-06-10 11:24:01 -05:00
Kevin Grittner bf9a60ee33 Fix interaction between CREATE INDEX and "snapshot too old".
Since indexes are created without valid LSNs, an index created
while a snapshot older than old_snapshot_threshold existed could
cause queries to return incorrect results when those old snapshots
were used, if any relevant rows had been subject to early pruning
before the index was built.  Prevent usage of a newly created index
until all such snapshots are released, for relations where this can
happen.

Questions about the interaction of "snapshot too old" with index
creation were initially raised by Andres Freund.

Reviewed by Robert Haas.
2016-06-10 09:25:31 -05:00
Tom Lane cae1c788b9 Improve the situation for parallel query versus temp relations.
Transmit the leader's temp-namespace state to workers.  This is important
because without it, the workers do not really have the same search path
as the leader.  For example, there is no good reason (and no extant code
either) to prevent a worker from executing a temp function that the
leader created previously; but as things stood it would fail to find the
temp function, and then either fail or execute the wrong function entirely.

We still prohibit a worker from creating a temp namespace on its own.
In effect, a worker can only see the session's temp namespace if the leader
had created it before starting the worker, which seems like the right
semantics.

Also, transmit the leader's BackendId to workers, and arrange for workers
to use that when determining the physical file path of a temp relation
belonging to their session.  While the original intent was to prevent such
accesses entirely, there were a number of holes in that, notably in places
like dbsize.c which assume they can safely access temp rels of other
sessions anyway.  We might as well get this right, as a small down payment
on someday allowing workers to access the leader's temp tables.  (With
this change, directly using "MyBackendId" as a relation or buffer backend
ID is deprecated; you should use BackendIdForTempRelations() instead.
I left a couple of such uses alone though, as they're not going to be
reachable in parallel workers until we do something about localbuf.c.)

Move the thou-shalt-not-access-thy-leader's-temp-tables prohibition down
into localbuf.c, which is where it actually matters, instead of having it
in relation_open().  This amounts to recognizing that access to temp
tables' catalog entries is perfectly safe in a worker, it's only the data
in local buffers that is problematic.

Having done all that, we can get rid of the test in has_parallel_hazard()
that says that use of a temp table's rowtype is unsafe in parallel workers.
That test was unduly expensive, and if we really did need such a
prohibition, that was not even close to being a bulletproof guard for it.
(For example, any user-defined function executed in a parallel worker
might have attempted such access.)
2016-06-09 20:16:11 -04:00
Robert Haas 2410a2543e Repair a bit of pgindent damage.
Inserting line-breaks into the middle of a URL is, to put it mildly, not
very helpful, so persuade pgindent to leave it alone.
2016-06-09 18:09:17 -04:00
Robert Haas 4bc424b968 pgindent run for 9.6 2016-06-09 18:02:36 -04:00
Robert Haas b12fd41c69 Don't generate parallel paths for rels with parallel-restricted outputs.
Such paths are unsafe.  To make it cheaper to detect when this case
applies, track whether a relation's default PathTarget contains any
non-Vars.  In most cases, the answer will be no, which enables us to
determine cheaply that the target list for a proposed path is
parallel-safe.  However, subquery pull-up can create cases that
require us to inspect the target list more carefully.

Amit Kapila, reviewed by me.
2016-06-09 12:43:36 -04:00
Robert Haas e7bcd983f5 Yet again update typedefs.list file in preparation for pgindent run
Because the run was delayed, the file had a chance to get out of date.
2016-06-09 12:16:17 -04:00
Tom Lane 7feb60c1bb Clarify documentation of ceil/ceiling/floor functions.
Document these as "nearest integer >= argument" and "nearest integer <=
argument", which will hopefully be less confusing than the old formulation.
New wording is from Matlab via Dean Rasheed.

I changed the pg_description entries as well as the SGML docs.  In the
back branches, this will only affect installations initdb'd in the future,
but it should be harmless otherwise.

Discussion: <CAEZATCW3yzJo-NMSiQs5jXNFbTsCEftZS-Og8=FvFdiU+kYuSA@mail.gmail.com>
2016-06-09 11:58:00 -04:00
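
The distinction matters for negative arguments, for example:

    SELECT ceil(-42.8) AS nearest_int_gte,   -- returns -42
           floor(-42.8) AS nearest_int_lte;  -- returns -43
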
Tom Lane e4158319f3 Mop-up for parallel degree-ectomy.
Fix a couple of overlooked uses of "degree" terminology.  Make the parallel
worker count selection logic in create_plain_partial_paths more robust (in
particular, it failed with max_parallel_workers_per_gather set to zero).
2016-06-09 11:16:26 -04:00
Robert Haas c9ce4a1c61 Eliminate "parallel degree" terminology.
This terminology provoked widespread complaints.  So, instead, rename
the GUC max_parallel_degree to max_parallel_workers_per_gather
(leaving room for a possible future GUC max_parallel_workers that acts
as a system-wide limit), and rename the parallel_degree reloption to
parallel_workers.  Rename structure members to match.

These changes create a dump/restore hazard for users of PostgreSQL
9.6beta1 who have set the reloption (or applied the GUC using ALTER
USER or ALTER DATABASE).
2016-06-09 10:00:26 -04:00
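
The post-rename spellings (values and table name illustrative):

    SET max_parallel_workers_per_gather = 4;           -- was max_parallel_degree
    ALTER TABLE big_table SET (parallel_workers = 4);  -- was parallel_degree
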
Noah Misch 14a254fb52 Test parallel query essentials in "make check".
Clément Prévost and Peter Eisentraut
2016-06-07 23:36:22 -04:00
Alvaro Herrera c588df9971 Make psql_crosstab plans more stable
To achieve this, ANALYZE the data table before querying it, as suggested
by Tom Lane.  On my system, this enables the test to pass with 128 kB of
work_mem (a value with which other tests fail -- so it seems good
enough).

Reported by Michaël Paquier.
2016-06-07 19:18:31 -04:00
Alvaro Herrera 736c95ca16 nls-global.mk: search build dir for source files, too
In VPATH builds, the build directory was not being searched for files in
GETTEXT_FILES, leading to failure to construct the .pot files.  This has
bit me all along, but never hard enough to get it fixed; I suppose not a
lot of people uses VPATH and NLS-enabled builds, and those that do,
don't do "make update-po" often.

This is a longstanding problem, so backpatch all the way back.
2016-06-07 18:55:18 -04:00
Alvaro Herrera 8b3746208c Add missing translate_columns array entry
This omission caused an assertion error in \dA+.
2016-06-07 18:03:31 -04:00
Alvaro Herrera 4f04b66f97 Fix loose ends for SQL ACCESS METHOD objects
COMMENT ON ACCESS METHOD was missing; add it, along psql tab-completion
support for it.

psql was also missing a way to list existing access methods; the new \dA
command does that.

Also add tab-completion support for DROP ACCESS METHOD.

Author: Michael Paquier
Discussion: https://www.postgresql.org/message-id/CAB7nPqTzdZdu8J7EF8SXr_R2U5bSUUYNOT3oAWBZdEoggnwhGA@mail.gmail.com
2016-06-07 17:59:34 -04:00
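
Illustrative usage of the additions (comment text arbitrary):

    COMMENT ON ACCESS METHOD brin IS 'block range index';
    \dA    -- list access methods
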
Tom Lane 77ba610805 Revert "Use Foreign Key relationships to infer multi-column join selectivity".
This commit reverts 137805f89 as well as the associated commits 015e88942,
5306df283, and 68d704edb.  We found multiple bugs in this feature, and
there was concern about possible planner slowdown (though to be fair,
exhibiting a very large slowdown proved difficult).  The way forward
requires a considerable rewrite, which may or may not be possible to
accomplish in time for beta2.  In my judgment reviewing the rewrite will
be easier to accomplish starting from a clean slate, so let's temporarily
revert what's there now.  This also leaves us in a safe state if it turns
out to be necessary to postpone the rewrite to the next development cycle.

Discussion: <20160429102531.GA13701@huehner.biz>
2016-06-07 17:21:17 -04:00
Peter Eisentraut 5c6d2a5e7c Message style and wording fixes 2016-06-07 14:18:55 -04:00
Simon Riggs 1f74a90888 Correct phrasing in dsm.c comments 2016-06-07 17:34:33 +01:00
Stephen Frost 40fc457520 Minor typos / copy-editing for snapmgr.c
Noticed while reviewing snapshot management.
2016-06-07 11:14:48 -04:00
Peter Eisentraut d8c2dccfdb psql: Add missing file to nls.mk
crosstabview.c was not added to nls.mk when it was added.  Also remove
redundant gettext markers, since psql_error() is already registered as a
gettext keyword.
2016-06-07 10:58:46 -04:00
Peter Eisentraut 298706de26 pgbench: Fix order in --help output
The new option --progress-timestamp was just added at the end.  Put it
in alphabetical order.
2016-06-07 10:41:20 -04:00
Stephen Frost 562f06f3f0 pg_dump only selected components of ACCESS METHODs
dumpAccessMethod() didn't get the memo that we now have a bitfield for
the components which should be dumped instead of a simple boolean.

Correct that by checking if the relevant bit is set for each component
being dumped out (and not dumping it out if it isn't set).

This corrects an issue where CREATE ACCESS METHOD commands were being
included in non-binary-upgrades when an extension included an access
method (as the bloom extensions does).

Also add a regression test to make sure that we only dump out the
ACCESS METHOD commands, when they are part of an extension, when doing
a binary upgrade.

Pointed out by Thom Brown.
2016-06-07 09:56:02 -04:00
Peter Eisentraut 8359077124 PL/Python: Move ereport wrapper test cases to separate file
In commit 5c3c3cd0a3, the new tests were
apparently just dumped into the first convenient file.  Move them to a
separate file dedicated to testing that functionality and leave the
plpython_test test to test basic functionality, as it did before.
2016-06-07 09:39:11 -04:00
Tom Lane f64340e743 Don't reset changes_since_analyze after a selective-columns ANALYZE.
If we ANALYZE only selected columns of a table, we should not postpone
auto-analyze because of that; other columns may well still need stats
updates.  As committed, the counter is left alone if a column list is
given, whether or not it includes all analyzable columns of the table.
Per complaint from Tomasz Ostrowski.

It's been like this a long time, so back-patch to all supported branches.

Report: <ef99c1bd-ff60-5f32-2733-c7b504eb960c@ato.waw.pl>
2016-06-06 17:44:17 -04:00
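
For clarity, the case in question (table and column names illustrative):

    -- Updates statistics for two columns only; after this fix it no longer
    -- resets the table's changes_since_analyze counter:
    ANALYZE my_table (col1, col2);
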
Robert Haas c6dbf1fe79 Stop the executor if no more tuples can be sent from worker to leader.
If a Gather node has read as many tuples as it needs (for example, due
to Limit) it may detach the queue connecting it to the worker before
reading all of the worker's tuples.  Rather than let the worker
continue to generate and send all of the results, have it stop after
sending the next tuple.

More could be done here to stop the worker even quicker, but this is
about as well as we can hope to do for 9.6.

This is in response to a problem report from Andreas Seltenreich.
Commit 44339b892a should be actually be
sufficient to fix that example even without this change, but it seems
better to do this, too, since we might otherwise waste quite a large
amount of effort in one or more workers.

Discussion: CAA4eK1KOKGqmz9bGu+Z42qhRwMbm4R5rfnqsLCNqFs9j14jzEA@mail.gmail.com

Amit Kapila
2016-06-06 14:52:58 -04:00
Robert Haas 44339b892a shm_mq: After a send fails with SHM_MQ_DETACHED, later ones should too.
Prior to this patch, it was occasionally possible, after shm_mq_sendv
had previously returned SHM_MQ_DETACHED, for a later shm_mq_sendv
operation to fail an assertion instead of just again returning
SHM_MQ_DETACHED.  From the shm_mq code's point of view, it was
expecting to be called again with the same arguments, since the
previous operation had only partially completed.  However, a caller
who isn't using non-blocking mode won't be prepared to repeat the call
with the same arguments, and this code shouldn't expect that they
will.  Repair in such a way that we'll be OK whether the next call
uses the same arguments or not.

Found by Andreas Seltenreich.  Analysis and sketch of fix by Amit
Kapila.  Patch by me, reviewed by Amit Kapila.
2016-06-06 14:35:30 -04:00
Robert Haas e191a69005 pg_upgrade: Don't overwrite existing files.
For historical reasons, copyFile and rewriteVisibilityMap took a force
argument which was always passed as true, meaning that any existing
file should be overwritten.  However, it seems much safer to instead
fail if a file we need to write already exists.

While we're at it, remove the "force" argument altogether, since it was
never passed as anything other than true (and now we would never pass
it as anything other than false, if we kept it).

Noted by Andres Freund during post-commit review of the patch that added
rewriteVisibilityMap, commit 7087166a88,
but this also changes the behavior when copying files without rewriting
them.

Patch by Masahiko Sawada.
2016-06-06 09:51:56 -04:00
Robert Haas 932b97a011 Fix typo.
Jim Nasby
2016-06-06 07:58:50 -04:00
Robert Haas aba8943082 pg_upgrade: Improve error checking in rewriteVisibilityMap.
In the old logic, if read() were to return an error, we'd silently stop
rewriting the visibility map at that point in the file.  That's safe,
but reporting the error is better, so do that instead.

Report by Andres Freund.  Patch by Masahiko Sawada, with one correction
by me.
2016-06-06 06:17:10 -04:00
Peter Eisentraut 6201a8ef3a Fix whitespace 2016-06-05 17:02:56 -04:00
Tom Lane 8a859691d5 Properly initialize SortSupport for ORDER BY rechecks in nodeIndexscan.c.
Fix still another bug in commit 35fcb1b3d: it failed to fully initialize
the SortSupport states it introduced to allow the executor to re-check
ORDER BY expressions containing distance operators.  That led to a null
pointer dereference if the sortsupport code tried to use ssup_cxt.  The
problem only manifests in narrow cases, explaining the lack of previous
field reports.  It requires a GiST-indexable distance operator that lacks
SortSupport and is on a pass-by-ref data type, which among core+contrib
seems to be only btree_gist's interval opclass; and it requires the scan
to be done as an IndexScan not an IndexOnlyScan, which explains how
btree_gist's regression test didn't catch it.  Per bug #14134 from
Jihyun Yu.

Peter Geoghegan

Report: <20160511154904.2603.43889@wrigleys.postgresql.org>
2016-06-05 11:53:06 -04:00
Tom Lane 05104f6936 Fix grammar's AND/OR flattening to work with operator_precedence_warning.
It'd be good for "(x AND y) AND z" to produce a three-child AND node
whether or not operator_precedence_warning is on, but that failed to
happen when it's on because makeAndExpr() didn't look through the added
AEXPR_PAREN node.  This has no effect on generated plans because prepqual.c
would flatten the AND nest anyway; but it does affect the number of parens
printed in ruleutils.c, for example.  I'd already fixed some similar
hazards in parse_expr.c in commit abb164655, but didn't think to search
gram.y for problems of this ilk.  Per gripe from Jean-Pierre Pelletier.

Report: <fa0535ec6d6428cfec40c7e8a6d11156@mail.gmail.com>
2016-06-03 19:12:29 -04:00
Tom Lane d50183c578 Inline the easy cases in MakeExpandedObjectReadOnly().
This attempts to buy back some of whatever performance we lost from fixing
bug #14174 by inlining the initial checks in MakeExpandedObjectReadOnly()
into the callers.  We can do that in a macro without creating multiple-
evaluation hazards, so it's pretty much free notationally; and the amount
of code added to callers should be minimal as well.  (Testing a value can't
take many more instructions than passing it to a subroutine.)

Might as well inline DatumIsReadWriteExpandedObject() while we're at it.

This is an ABI break for callers, so it doesn't seem safe to put into 9.5,
but I see no reason not to do it in HEAD.
2016-06-03 18:34:05 -04:00
Tom Lane 9eaf5be506 Mark read/write expanded values as read-only in ValuesNext(), too.
Further thought about bug #14174 motivated me to try the case of a
R/W datum being returned from a VALUES list, and sure enough it was
broken.  Fix that.

Also add a regression test case exercising the same scenario for
FunctionScan.  That's not broken right now, because the function's
result will get shoved into a tuplestore between generation and use;
but it could easily become broken whenever we get around to optimizing
FunctionScan better.

There don't seem to be any other places where we put the result of
expression evaluation into a virtual tuple slot that could then be
the source for Vars of further expression evaluation, so I think
this is the end of this bug.
2016-06-03 18:07:14 -04:00
Tom Lane 69f526aa49 Mark read/write expanded values as read-only in ExecProject().
If a plan node output expression returns an "expanded" datum, and that
output column is referenced in more than one place in upper-level plan
nodes, we need to ensure that what is returned is a read-only reference
not a read/write reference.  Otherwise one of the referencing sites could
scribble on or even delete the expanded datum before we have evaluated the
others.  Commit 1dc5ebc907, which introduced this feature, supposed
that it'd be sufficient to make SubqueryScan nodes force their output
columns to read-only state.  The folly of that was revealed by bug #14174
from Andrew Gierth, and really should have been immediately obvious
considering that the planner will happily optimize SubqueryScan nodes
out of the plan without any regard for this issue.

The safest fix seems to be to make ExecProject() force its results into
read-only state; that will cover every case where a plan node returns
expression results.  Actually we can delegate this to ExecTargetList()
since we can recursively assume that plain Vars will not reference
read-write datums.  That should keep the extra overhead down to something
minimal.  We no longer need ExecMakeSlotContentsReadOnly(), which was
introduced only in support of the idea that just a few plan node types
would need to do this.

In the future it would be nice to have the planner account for this problem
and inject force-to-read-only expression evaluation nodes into only the
places where there's a risk of multiple evaluation.  That's not a suitable
solution for 9.5 or even 9.6 at this point, though.

Report: <20160603124628.9932.41279@wrigleys.postgresql.org>
2016-06-03 15:14:50 -04:00
Robert Haas 04ae11f62e Remove bogus code to apply PathTargets to partial paths.
The partial paths that get modified may already have been used as
part of a GatherPath which appears in the path list, so modifying
them is not a good idea at this stage - especially because this
code has no check that the PathTarget is in fact parallel-safe.

When partial aggregation is being performed, this is actually
harmless because we'll end up replacing the pathtargets here with
the correct ones within create_grouping_paths().  But if we've got
a query tree containing only scan/join operations then this can
result in incorrectly pushing down parallel-restricted target
list entries.  If those are, for example, references to subqueries,
that can crash the server; but it's wrong in any event.

Amit Kapila
2016-06-03 14:27:33 -04:00
Robert Haas cac8321970 Mark PostmasterPid as PGDLLIMPORT.
This is so that extensions can use it.

Michael Paquier
2016-06-03 14:06:35 -04:00
Kevin Grittner 370a46fc01 Add new snapshot fields to serialize/deserialize functions.
The "snapshot too old" condition was not being recognized when
using a copied snapshot, since the original timestamp and lsn were
not being passed along.  Noticed when testing the combination of
"snapshot too old" with parallel query execution.
2016-06-03 11:13:28 -05:00
Robert Haas 6436a853f1 Fix comment to be more accurate.
Now that we skip vacuuming all-frozen pages, this comment needs
updating.

Masahiko Sawada
2016-06-03 11:56:57 -04:00
Tom Lane 6c72a28e5c Suppress -Wunused-result warnings about write(), again.
Adopt the same solution as in commit aa90e148ca, but this time
let's put the ugliness inside the write_stderr() macro, instead of
expecting each call site to deal with it.  Back-port that decision
into psql/common.c where I got the macro from in the first place.

Per gripe from Peter Eisentraut.
2016-06-03 11:29:38 -04:00
Greg Stark e1623c3959 Fix various common misspellings.
Mostly these are just comments but there are a few in documentation
and a handful in code and tests. Hopefully this doesn't cause too much
unnecessary pain for backpatching.  I left some of the most common,
like "thru", alone for that reason.  The rest don't seem numerous
enough to cause problems.

Thanks to Kevin Lyda's tool https://pypi.python.org/pypi/misspellings
2016-06-03 16:08:45 +01:00
Robert Haas fdfaccfa79 Cosmetic improvements to freeze map code.
Per post-commit review comments from Andres Freund, improve variable
names, comments, and in one place, slightly improve the code structure.

Masahiko Sawada
2016-06-03 08:43:41 -04:00
Greg Stark a3b30763cc Be conservative about alignment requirements of struct epoll_event.
Use MAXALIGN size/alignment to guarantee that later uses of memory are
aligned correctly. E.g. epoll_event might need 8 byte alignment on some
platforms, but earlier allocations like WaitEventSet and WaitEvent might
not be sized to guarantee that when purely using sizeof().

Found while testing on a Sun Ultra 5 (Sparc IIi), with some
editorializing by Andres Freund.

In passing, fix a couple of typos in the area.
2016-06-02 19:38:52 +01:00
Kevin Grittner 4edb7bd2fd C comment improvement & typo fix.
Thomas Munro
2016-06-02 12:52:41 -05:00
Tom Lane e652273e07 Redesign handling of SIGTERM/control-C in parallel pg_dump/pg_restore.
Formerly, Unix builds of pg_dump/pg_restore would trap SIGINT and similar
signals and set a flag that was tested in various data-transfer loops.
This was prone to errors of omission (cf commit 3c8aa6654); and even if
the client-side response was prompt, we did nothing that would cause
long-running SQL commands (e.g. CREATE INDEX) to terminate early.
Also, the master process would effectively do nothing at all upon receipt
of SIGINT; the only reason it seemed to work was that in typical scenarios
the signal would also be delivered to the child processes.  We should
support termination when a signal is delivered only to the master process,
though.

Windows builds had no console interrupt handler, so they would just fall
over immediately at control-C, again leaving long-running SQL commands to
finish unmolested.

To fix, remove the flag-checking approach altogether.  Instead, allow the
Unix signal handler to send a cancel request directly and then exit(1).
In the master process, also have it forward the signal to the children.
On Windows, add a console interrupt handler that behaves approximately
the same.  The main difference is that a single execution of the Windows
handler can send all the cancel requests since all the info is available
in one process, whereas on Unix each process sends a cancel only for its
own database connection.

In passing, fix an old problem that DisconnectDatabase tends to send a
cancel request before exiting a parallel worker, even if nothing went
wrong.  This is at least a waste of cycles, and could lead to unexpected
log messages, or maybe even data loss if it happened in pg_restore (though
in the current code the problem seems to affect only pg_dump).  The cause
was that after a COPY step, pg_dump was leaving libpq in PGASYNC_BUSY
state, causing PQtransactionStatus() to report PQTRANS_ACTIVE.  That's
normally harmless because the next PQexec() will silently clear the
PGASYNC_BUSY state; but in a parallel worker we might exit without any
additional SQL commands after a COPY step.  So add an extra PQgetResult()
call after a COPY to allow libpq to return to PGASYNC_IDLE state.

This is a bug fix, IMO, so back-patch to 9.3 where parallel dump/restore
were introduced.

Thanks to Kyotaro Horiguchi for Windows testing and code suggestions.

Original-Patch: <7005.1464657274@sss.pgh.pa.us>
Discussion: <20160602.174941.256342236.horiguchi.kyotaro@lab.ntt.co.jp>
2016-06-02 13:28:17 -04:00
Kevin Grittner 7392eed7c2 Fix btree mark/restore bug.
Commit 2ed5b87f96 introduced a bug in
mark/restore, in an attempt to optimize repeated restores to the
same page.  This caused an assertion failure during a merge join
which fed directly from an index scan, although the impact would
not be limited to that case.  Revert the bad chunk of code from
that commit.

While investigating this bug it was discovered that a particular
"paranoia" set of the mark position field would not prevent bad
behavior; it would just make it harder to diagnose.  Change that
into an assertion, which will draw attention to any future problem
in that area more directly.

Backpatch to 9.5, where the bug was introduced.

Bug #14169 reported by Shinta Koyanagi.
Preliminary analysis by Tom Lane identified which commit caused
the bug.
2016-06-02 12:23:01 -05:00
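
A hedged sketch of the failing plan shape (names illustrative): mark/restore
is exercised when a merge join must rescan duplicate inner rows.

    SET enable_hashjoin = off;
    SET enable_nestloop = off;
    EXPLAIN SELECT * FROM t1 JOIN t2 USING (id);  -- merge join fed by index scans
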
Tom Lane 763eec6b6d Clean up some minor inefficiencies in parallel dump/restore.
Parallel dump did a totally pointless query to find out the name of each
table to be dumped, which it already knows.  Parallel restore runs issued
lots of redundant SET commands because _doSetFixedOutputState() was invoked
once per TOC item rather than just once at connection start.  While the
extra queries are insignificant if you're dumping or restoring large
tables, it still seems worth getting rid of them.

Also, give the responsibility for selecting the right client_encoding for
a parallel dump worker to setup_connection() where it naturally belongs,
instead of having ad-hoc code for that in CloneArchive().  And fix some
minor bugs like use of strdup() where pg_strdup() would be safer.

Back-patch to 9.3, mostly to keep the branches in sync in an area that
we're still finding bugs in.

Discussion: <5086.1464793073@sss.pgh.pa.us>
2016-06-01 16:14:21 -04:00
Tom Lane 22b27b4c9e Avoid useless closely-spaced writes of statistics files.
The original intent in the stats collector was that we should not write out
stats data oftener than every PGSTAT_STAT_INTERVAL msec.  Backends will not
make requests at all if they see the existing data is newer than that, and
the stats collector is supposed to disregard requests having a cutoff_time
older than its most recently written data, so that close-together requests
don't result in multiple writes.  But the latter part of that got broken
in commit 187492b6c2, so that if two backends concurrently decide
the existing stats are too old, the collector would write the data twice.
(In principle the collector's logic would still merge requests as long as
the second one arrives before we've actually written data ... but since
the message collection loop would write data immediately after processing
a single inquiry message, that never happened in practice, and in any case
the window in which it might work would be much shorter than
PGSTAT_STAT_INTERVAL.)

To fix, improve pgstat_recv_inquiry so that it checks whether the cutoff
time is too old, and doesn't add a request to the queue if so.  This means
that we do not need DBWriteRequest.request_time, because the decision is
taken before making a queue entry.  And that means that we don't really
need the DBWriteRequest data structure at all; an OID list of database
OIDs will serve and allow removal of some rather verbose and crufty code.

In passing, improve the comments in this area, which have been rather
neglected.  Also change backend_read_statsfile so that it's not silently
relying on MyDatabaseId to have some particular value in the autovacuum
launcher process.  It accidentally worked as desired because MyDatabaseId
is zero in that process; but that does not seem like a dependency we want,
especially with no documentation about it.

Although this patch is mine, it turns out I'd rediscovered a known bug,
for which Tomas Vondra had already submitted a patch that's functionally
equivalent to the non-cosmetic aspects of this patch.  Thanks to Tomas
for reviewing this version.

Back-patch to 9.3 where the bug was introduced.

Prior-Discussion: <1718942738eb65c8407fcd864883f4c8@fuzzy.cz>
Patch: <4625.1464202586@sss.pgh.pa.us>
2016-05-31 15:55:15 -04:00
Noah Misch 2195c5afaa Mirror struct Aggref field order in _copyAggref().
This is cosmetic, and no supported release has the affected fields.
2016-05-31 00:01:03 -04:00
Andres Freund 87a3023c60 Move memory barrier in UnlockBufHdr to before releasing the lock.
This bug appears to have been introduced late in the development of
48354581a4 ("Allow Pin/UnpinBuffer to operate in a lockfree
manner.").

Found while debugging a bug which turned out to be independent of the
commit mentioned above.

Backpatch: -
2016-05-30 15:35:53 -07:00
Alvaro Herrera 975ad4e602 Fix PageAddItem BRIN bug
BRIN was relying on the ability to remove a tuple from an index page,
then putting another tuple in the same line pointer.  But PageAddItem
refuses to add a tuple beyond the first free item past the last used
item, and in particular, it rejects an attempt to add an item to an
empty page anywhere other than the first line pointer.  PageAddItem
issues a WARNING and indicates to the caller that it failed, which in
turn causes the BRIN calling code to issue a PANIC, so the whole
sequence looks like this:
	WARNING:  specified item offset is too large
	PANIC:  failed to add BRIN tuple

To fix, create a new function PageAddItemExtended which is like
PageAddItem except that the two boolean arguments become a flags bitmap;
the "overwrite" and "is_heap" boolean flags in PageAddItem become
PAI_OVERWRITE and PAI_IS_HEAP flags in the new function, and a new flag
PAI_ALLOW_FAR_OFFSET enables the behavior required by BRIN.
PageAddItem() retains its original signature, for compatibility with
third-party modules (other callers in core code are not modified,
either).
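
A sketch of the boolean-to-bitmap mapping (illustrative flag values and
names; the real definitions live in the server headers):

	#include <stdbool.h>

	#define PAI_OVERWRITE        (1 << 0)
	#define PAI_IS_HEAP          (1 << 1)
	#define PAI_ALLOW_FAR_OFFSET (1 << 2)

	/* Sketch: a compatibility wrapper builds the flags bitmap from
	 * the old boolean arguments and hands it to the extended
	 * function, so existing callers keep their signature. */
	static int
	build_pai_flags_sketch(bool overwrite, bool is_heap, bool allow_far)
	{
	    int     flags = 0;

	    if (overwrite)
	        flags |= PAI_OVERWRITE;
	    if (is_heap)
	        flags |= PAI_IS_HEAP;
	    if (allow_far)
	        flags |= PAI_ALLOW_FAR_OFFSET;
	    return flags;
	}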

Also, in the belt-and-suspenders spirit, I added a new sanity check in
brinGetTupleForHeapBlock to raise an error if a TID found in the revmap
is not marked as live by the page header.  This causes it to react with
"ERROR: corrupted BRIN index" to the bug at hand, rather than a hard
crash.

Backpatch to 9.5.

Bug reported by Andreas Seltenreich as detected by his handy sqlsmith
fuzzer.
Discussion: https://www.postgresql.org/message-id/87mvni77jh.fsf@elite.ansel.ydns.eu
2016-05-30 14:47:22 -04:00
Tom Lane 3c8aa6654a Fix missing abort checks in pg_backup_directory.c.
Parallel restore from directory format failed to respond to control-C
in a timely manner, because there were no checkAborting() calls in the
code path that reads data from a file and sends it to the backend.
If any worker was in the midst of restoring data for a large table,
you'd just have to wait.

This fix doesn't do anything for the problem of aborting a long-running
server-side command, but at least it fixes things for data transfers.

Back-patch to 9.3 where parallel restore was introduced.
2016-05-29 13:18:48 -04:00
Tom Lane 210981a4a9 Remove pg_dump/parallel.c's useless "aborting" flag.
This was effectively dead code, since the places that tested it could not
be reached after we entered the on-exit-cleanup routine that would set it.
It seems to have been a leftover from a design in which error abort would
try to send fresh commands to the workers --- a design which could never
have worked reliably, of course.  Since the flag is not cross-platform, it
complicates reasoning about the code's behavior, which we could do without.

Although this is effectively just cosmetic, back-patch anyway, because
there are some actual bugs in the vicinity of this behavior.

Discussion: <15583.1464462418@sss.pgh.pa.us>
2016-05-29 13:00:09 -04:00
Tom Lane 6b3094c26f Lots of comment-fixing, and minor cosmetic cleanup, in pg_dump/parallel.c.
The commentary in this file was in extremely sad shape.  The author(s)
had clearly never heard of the project convention that a function header
comment should provide an API spec of some sort for that function.  Much
of it was flat out wrong, too --- maybe it was accurate when written, but
if so it had not been updated to track subsequent code revisions.  Rewrite
and rearrange to try to bring it up to speed, and annotate some of the
places where more work is needed.  (I've refrained from actually fixing
anything of substance ... yet.)

Also, rename a couple of functions for more clarity as to what they do,
do some very minor code rearrangement, remove some pointless Asserts,
fix an incorrect Assert in readMessageFromPipe, and add a missing socket
close in one error exit from pgpipe().  The last would be a bug if we
tried to continue after pgpipe() failure, but since we don't, it's just
cosmetic at present.

Although this is only cosmetic, back-patch to 9.3 where parallel.c was
added.  It's sufficiently invasive that it'll pose a hazard for future
back-patching if we don't.

Discussion: <25239.1464386067@sss.pgh.pa.us>
2016-05-28 14:02:11 -04:00
Tom Lane 807b45375b Clean up thread management in parallel pg_dump for Windows.
Since we start the worker threads with _beginthreadex(), we should use
_endthreadex() to terminate them.  We got this right in the normal-exit
code path, but not so much during an error exit from a worker.
In addition, be sure to apply CloseHandle to the thread handle after
each thread exits.
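
A Windows-only sketch of that pairing (hypothetical function names, not
pg_dump's actual code):

	#include <windows.h>
	#include <process.h>
	#include <stdint.h>

	static unsigned __stdcall
	worker_sketch(void *arg)
	{
	    /* ... worker body ... */
	    _endthreadex(0);            /* matches _beginthreadex() */
	    return 0;                   /* not reached */
	}

	static void
	run_one_worker_sketch(void *arg)
	{
	    uintptr_t   th = _beginthreadex(NULL, 0, worker_sketch,
	                                    arg, 0, NULL);

	    WaitForSingleObject((HANDLE) th, INFINITE);
	    CloseHandle((HANDLE) th);   /* close handle once thread is done */
	}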

It's not clear that these oversights cause any user-visible problems,
since the pg_dump run is about to terminate anyway.  Still, it's clearly
better to follow Microsoft's API specifications than ignore them.

Also a few cosmetic cleanups in WaitForTerminatingWorkers(), including
being a bit less random about where to cast between uintptr_t and HANDLE,
and being sure to clear the worker identity field for each dead worker
(not that false matches should be possible later, but let's be careful).

Original observation and patch by Armin Schöffmann, cosmetic improvements
by Michael Paquier and me.  (Armin's patch also included closing sockets
in ShutdownWorkersHard(), but that's been dealt with already in commit
df8d2d8c4.)  Back-patch to 9.3 where parallel pg_dump was introduced.

Discussion: <zarafa.570306bd.3418.074bf1420d8f2ba2@root.aegaeon.de>
2016-05-27 12:02:09 -04:00
Tom Lane 83dbde94f7 Fix DROP ACCESS METHOD IF EXISTS.
The IF EXISTS option was documented, and implemented in the grammar, but
it didn't actually work for lack of support in does_not_exist_skipping().
Per bug #14160.

Report and patch by Kouhei Sutou

Report: <20160527070433.19424.81712@wrigleys.postgresql.org>
2016-05-27 11:03:18 -04:00
Tom Lane 9dd4178cec Be more predictable about reporting "lock timeout" vs "statement timeout".
If both timeout indicators are set when we arrive at ProcessInterrupts,
we've historically just reported "lock timeout".  However, some buildfarm
members have been observed to fail isolationtester's timeouts test by
reporting "lock timeout" when the statement timeout was expected to fire
first.  The cause seems to be that the process is allowed to sleep longer
than expected (probably due to heavy machine load) so that the lock
timeout happens before we reach the point of reporting the error, and
then this arbitrary tiebreak rule does the wrong thing.  We can improve
matters by comparing the scheduled timeout times to decide which error
to report.
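
A sketch of the tiebreak, with simplified types (the committed code
consults the timeout machinery's scheduled fire times):

	#include <stdint.h>

	typedef int64_t TimestampTz;    /* stand-in for the real typedef */

	/* When both indicators are set, report whichever timeout was
	 * scheduled to fire first, not a fixed favorite. */
	static const char *
	pick_timeout_error_sketch(TimestampTz lock_fire_time,
	                          TimestampTz stmt_fire_time)
	{
	    if (lock_fire_time < stmt_fire_time)
	        return "canceling statement due to lock timeout";
	    return "canceling statement due to statement timeout";
	}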

I had originally proposed greatly reducing the 1-second window between
the two timeouts in the test cases.  On reflection that is a bad idea,
at least for the case where the lock timeout is expected to fire first,
because that would assume that it takes negligible time to get from
statement start to the beginning of the lock wait.  Thus, this patch
doesn't completely remove the risk of test failures on slow machines.
Empirically, however, the case this handles is the one we are seeing
in the buildfarm.  The explanation may be that the other case requires
the scheduler to take the CPU away from a busy process, whereas the
case fixed here only requires the scheduler to not give the CPU back
right away to a process that has been woken from a multi-second sleep
(and, perhaps, has been swapped out meanwhile).

Back-patch to 9.3 where the isolationtester timeouts test was added.

Discussion: <8693.1464314819@sss.pgh.pa.us>
2016-05-27 10:40:20 -04:00
Magnus Hagander d74048defc Make pg_dump error cleanly with -j against hot standby
Getting a synchronized snapshot is not supported on a hot standby node,
yet one is taken by default when using -j with multiple sessions.  Trying
to do so did fail, but with a raw server error that also ended up in the
server log.  Instead, properly detect this case and give a better error
message.
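
For illustration, detecting the case from the client might look like
this libpq sketch (hypothetical helper; not necessarily how pg_dump
implements the check):

	#include <stdio.h>
	#include <stdlib.h>
	#include <string.h>
	#include <libpq-fe.h>

	/* Sketch: refuse to proceed when the server is a hot standby. */
	static void
	fail_if_standby_sketch(PGconn *conn)
	{
	    PGresult   *res;

	    res = PQexec(conn, "SELECT pg_catalog.pg_is_in_recovery()");
	    if (PQresultStatus(res) == PGRES_TUPLES_OK &&
	        strcmp(PQgetvalue(res, 0, 0), "t") == 0)
	    {
	        fprintf(stderr, "synchronized snapshots are not supported "
	                "on standby servers\n");
	        exit(1);
	    }
	    PQclear(res);
	}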
2016-05-26 22:14:23 +02:00
Tom Lane aeb9ae6457 Disable physical tlist if any Var would need multiple sortgroupref labels.
As part of upper planner pathification (commit 3fc6e2d7f5) I redid
createplan.c's approach to the physical-tlist optimization, in which scan
nodes are allowed to return exactly the underlying table's columns so as
to save doing a projection step at runtime.  The logic was intentionally
more aggressive than before about applying the optimization, which is
generally a good thing, but Andres Freund found a case in which it got
too aggressive.  Namely, if any column is referenced more than once in
the parent plan node's sorting or grouping column list, we can't optimize
because then that column would need to have more than one ressortgroupref
label, and we only have space for one.

Add logic to detect this situation in use_physical_tlist(), and also add
some error checking in apply_pathtarget_labeling_to_tlist(), which this
example proves was being overly cavalier about whether what it was doing
made any sense.

The added test case exposes the problem only because we do not eliminate
duplicate grouping keys.  That might be something to fix someday, but it
doesn't seem like appropriate post-beta work.

Report: <20160526021235.w4nq7k3gnheg7vit@alap3.anarazel.de>
2016-05-26 14:52:30 -04:00
Tom Lane cae2bb1986 Make pg_dump behave more sanely when built without HAVE_LIBZ.
For some reason the code to emit a warning and switch to uncompressed
output was placed down in the guts of pg_backup_archiver.c.  This is
definitely too late in the case of parallel operation (and I rather
wonder if it wasn't too late for other purposes as well).  Put it in
pg_dump.c's option-processing logic, which seems a much saner place.

Also, the default behavior with custom or directory output format was
to emit the warning telling you the output would be uncompressed.  This
seems unhelpful, so silence that case.

Back-patch to 9.3 where parallel dump was introduced.

Kyotaro Horiguchi, adjusted a bit by me

Report: <20160526.185551.242041780.horiguchi.kyotaro@lab.ntt.co.jp>
2016-05-26 11:51:04 -04:00
Tom Lane df8d2d8c42 In Windows pg_dump, ensure idle workers will shut down during error exit.
The Windows coding of ShutdownWorkersHard() thought that setting termEvent
was sufficient to make workers exit after an error.  But that only helps
if a worker is busy and passes through checkAborting().  An idle worker
will just sit, resulting in pg_dump failing to exit until the user gives up
and hits control-C.  We should close the write end of the command pipe
so that idle workers will see socket EOF and exit, as the Unix coding was
already doing.

Back-patch to 9.3 where parallel pg_dump was introduced.

Kyotaro Horiguchi
2016-05-26 10:50:30 -04:00
Tom Lane b898eb6367 Remove option to write USING before opclass name in CREATE INDEX.
Dating back to commit f10b63923, our grammar has allowed "USING" to
optionally appear before an opclass name in CREATE INDEX (and, lately,
some related places such as ON CONFLICT specifications).  Nikolay Shaplov
noticed that this syntax existed but wasn't documented, and proposed
documenting it.  But what seems like a better idea is to remove the
production, thereby making the code match the docs not vice versa.
This isn't our usual modus operandi for such cases, but there are a
couple of good reasons to proceed this way:

* So far as I can find, this syntax has never been documented anywhere.
It isn't relied on by any of our own code or test cases, and there seems
little reason to suppose that it's been used in the wild either.

* Documenting it would mean that there would be two separate uses of
USING in the CREATE INDEX syntax, the other being "USING access_method".
That can lead to nothing but confusion.

So, let's just remove it.  On the off chance that somebody somewhere
is using it, this isn't something to back-patch, but we can fix it
in HEAD.

Discussion: <1593237.l7oKHRpxSe@nataraj-amd64>
2016-05-25 19:11:00 -04:00
Tom Lane 52e8fc3e2e Ensure that backends see up-to-date statistics for shared catalogs.
Ever since we split the statistics collector's reports into per-database
files (commit 187492b6c2), backends have been seeing stale statistics
for shared catalogs.  This is because the inquiry message only prompts the
collector to write the per-database file for the requesting backend's own
database.  Stats for shared catalogs are in a separate file for "DB 0",
which didn't get updated.

In normal operation this was partially masked by the fact that the
autovacuum launcher would send an inquiry message at least once per
autovacuum_naptime that asked for "DB 0"; so the shared-catalog stats would
never be more than a minute out of date.  However the problem becomes very
obvious with autovacuum disabled, as reported by Peter Eisentraut.

To fix, redefine the semantics of inquiry messages so that both the
specified DB and DB 0 will be dumped.  (This might seem a bit inefficient,
but we have no good way to know whether a backend's transaction will look
at shared-catalog stats, so we have to read both groups of stats whenever
we request stats.  Sending two inquiry messages would definitely not be
better.)
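
A sketch of the redefined semantics (hypothetical names):

	typedef unsigned int Oid;       /* stand-ins for the real types */
	#define InvalidOid ((Oid) 0)

	extern void write_db_statsfile_sketch(Oid dbid);

	/* Sketch: any inquiry refreshes the shared-catalog stats ("DB 0")
	 * as well as the requested database's own file. */
	static void
	process_inquiry_sketch(Oid dbid)
	{
	    write_db_statsfile_sketch(InvalidOid);  /* shared catalogs */
	    if (dbid != InvalidOid)
	        write_db_statsfile_sketch(dbid);    /* requested database */
	}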

Back-patch to 9.3 where the bug was introduced.

Report: <56AD41AC.1030509@gmx.net>
2016-05-25 17:48:15 -04:00
Tom Lane 9abd64ec99 Fix broken error handling in parallel pg_dump/pg_restore.
In the original design for parallel dump, worker processes reported errors
by sending them up to the master process, which would print the messages.
This is unworkably fragile for a couple of reasons: it risks deadlock if a
worker sends an error at an unexpected time, and if the master has already
died for some reason, the user will never get to see the error at all.
Revert that idea and go back to just always printing messages to stderr.
This approach means that if all the workers fail for similar reasons (eg,
bad password or server shutdown), the user will see N copies of that
message, not only one as before.  While that's slightly annoying, it's
certainly better than not seeing any message; not to mention that we
shouldn't assume that only the first failure is interesting.

An additional problem in the same area was that the master failed to
disable SIGPIPE (at least until much too late), which meant that sending a
command to an already-dead worker would cause the master to crash silently.
That was bad enough in itself but was made worse by the total reliance on
the master to print errors: even if the worker had reported an error, you
would probably not see it, depending on timing.  Instead disable SIGPIPE
right after we've forked the workers, before attempting to send them
anything.
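
A minimal Unix-side sketch of the idea (actual placement and error
handling in pg_dump differ):

	#include <signal.h>

	/* Once workers exist, the master must survive writes to a dead
	 * worker's pipe: ignoring SIGPIPE makes such writes fail with
	 * EPIPE instead of silently killing the process. */
	static void
	ignore_sigpipe_sketch(void)
	{
	    signal(SIGPIPE, SIG_IGN);
	}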

Additionally, the master relies on seeing socket EOF to realize that a
worker has exited prematurely --- but on Windows, there would be no EOF
since the socket is attached to the process that includes both the master
and worker threads, so it remains open.  Make archive_close_connection()
close the worker end of the sockets so that this acts more like the Unix
case.  It's not perfect, because if a worker thread exits without going
through exit_nicely() the closures won't happen; but that's not really
supposed to happen.

This has been wrong all along, so back-patch to 9.3 where parallel dump
was introduced.

Report: <2458.1450894615@sss.pgh.pa.us>
2016-05-25 12:40:12 -04:00
Stephen Frost 018eb027f1 Do not DROP default roles in pg_dumpall -c
When pulling the list of roles to drop, exclude roles whose names
begin with "pg_" (as we do when we are dumping the roles out to
recreate them).

Also add regression tests to cover pg_dumpall -c and this specific
issue.

Noticed by Rushabh Lathia.  Patch by me.
2016-05-24 23:31:55 -04:00
Tom Lane f5e7b2f910 Mark wal_level as PGDLLIMPORT.
Per buildfarm, this is needed to allow extensions to use XLogIsNeeded()
in Windows builds.
2016-05-24 22:48:47 -04:00
Stephen Frost 2e8b4bf804 Qualify table usage in dumpTable() and use regclass
All of the other tables used in the query in dumpTable(), which is
collecting column-level ACLs, are qualified, so we should also qualify
pg_init_privs, the related sub-select against pg_class, and the other
queries added by the pg_dump catalog-ACL work.

Also, use ::regclass (or ::pg_catalog.regclass, where appropriate)
instead of using a poorly constructed query to get the OID for various
catalog tables.

Issues identified by Noah and Alvaro, patch by me.
2016-05-24 20:10:16 -04:00
Tom Lane 2d2e40e3be Fetch XIDs atomically during vac_truncate_clog().
Because vac_update_datfrozenxid() updates datfrozenxid and datminmxid
in-place, it's unsafe to assume that successive reads of those values will
give consistent results.  Fetch each one just once to ensure sane behavior
in the minimum calculation.  Noted while reviewing Alexander Korotkov's
patch in the same area.
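
Illustrated with stand-in types (the real code compares XIDs with
TransactionIdPrecedes() and friends, not plain "<"):

	#include <stdint.h>

	typedef uint32_t TransactionId; /* stand-ins for the real typedefs */
	typedef uint32_t MultiXactId;

	struct db_entry
	{
	    TransactionId datfrozenxid;
	    MultiXactId datminmxid;
	};

	/* Sketch: copy each in-place-updated field into a local exactly
	 * once, then compute the minimum from the local copies only. */
	static void
	consider_db_sketch(volatile struct db_entry *db,
	                   TransactionId *minfrozen, MultiXactId *minmulti)
	{
	    TransactionId frozenxid = db->datfrozenxid; /* single fetch */
	    MultiXactId minmxid = db->datminmxid;       /* single fetch */

	    if (frozenxid < *minfrozen)
	        *minfrozen = frozenxid;
	    if (minmxid < *minmulti)
	        *minmulti = minmxid;
	}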

Discussion: <8564.1464116473@sss.pgh.pa.us>
2016-05-24 15:47:51 -04:00
Tom Lane 996d273978 Avoid consuming an XID during vac_truncate_clog().
vac_truncate_clog() uses its own transaction ID as the comparison point in
a sanity check that no database's datfrozenxid has already wrapped around
"into the future".  That was probably fine when written, but in a lazy
vacuum we won't have assigned an XID, so calling GetCurrentTransactionId()
causes an XID to be assigned when otherwise one would not be.  Most of the
time that's not a big problem ... but if we are hard up against the
wraparound limit, consuming XIDs during antiwraparound vacuums is a very
bad thing.

Instead, use ReadNewTransactionId(), which not only avoids this problem
but is in itself a better comparison point to test whether wraparound
has already occurred.

Report and patch by Alexander Korotkov.  Back-patch to all versions.

Report: <CAPpHfdspOkmiQsxh-UZw2chM6dRMwXAJGEmmbmqYR=yvM7-s6A@mail.gmail.com>
2016-05-24 15:20:36 -04:00
Alvaro Herrera 0c7cd45b6d Fix range check for effective_io_concurrency
Commit 1aba62ec moved the range check of that option from guc.c into
bufmgr.c, but introduced a bug by changing a >= 0.0 to > 0.0, which made
the value 0 no longer accepted.  Put it back.

Reported by Jeff Janes, diagnosed by Tom Lane
2016-05-24 14:55:34 -04:00
Tom Lane 1087aa2314 Fix typo in TAP test identification string.
Michael Paquier
2016-05-23 20:04:27 -04:00
Tom Lane 1e0d6512e5 Fix BTREE_BUILD_STATS build.
Commit 65c5fcd353 broke this by removing a
header include directive that is conditionally required.  Add that back
to nbtree.c, with annotation to keep pgrminclude from re-breaking it.

Peter Geoghegan

Report: <CAM3SWZTNjHFYW_UG8bu0BnogqQ2HfsTgkzXLueuUhfTcYbu5HA@mail.gmail.com>
2016-05-23 19:41:11 -04:00
Tom Lane eae1ad9b64 Support IndexElem in raw_expression_tree_walker().
Needed for cases in which INSERT ... ON CONFLICT appears inside a
recursive CTE item.  Per bug #14153 from Thomas Alton.

Patch by Peter Geoghegan, slightly adjusted by me

Report: <20160521232802.22598.13537@wrigleys.postgresql.org>
2016-05-23 19:23:36 -04:00
Tom Lane 465e09da63 Add support for more extensive testing of raw_expression_tree_walker().
If RAW_EXPRESSION_COVERAGE_TEST is defined, do a no-op tree walk over
every basic DML statement submitted to parse analysis.  If we'd had this
in place earlier, bug #14153 would have been caught by buildfarm testing.
The difficulty is that raw_expression_tree_walker() is only used in
limited cases involving CTEs (particularly recursive ones), so it's
very easy for an oversight in it to not be noticed during testing of a
seemingly-unrelated feature.

The type of error we can expect to catch with this is complete omission
of a node type from raw_expression_tree_walker(), and perhaps also
recursion into a field that doesn't contain a node tree, though that
would be an unlikely mistake.  It won't catch failure to add new fields
that need to be recursed into, unfortunately.

I'll go enable this on one or two of my own buildfarm animals once
bug #14153 is dealt with.

Discussion: <27861.1464040417@sss.pgh.pa.us>
2016-05-23 19:08:26 -04:00
Tom Lane 8a4930e3fa Fix latent crash in do_text_output_multiline().
do_text_output_multiline() would fail (typically with a null pointer
dereference crash) if its input string did not end with a newline.  Such
cases do not arise in our current sources; but it certainly could happen
in future, or in extension code's usage of the function, so we should fix
it.  To fix, replace "eol += len" with "eol = text + len".
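
A standalone sketch of the corrected scan (simplified; the real
function emits each line as a result row rather than printing it):

	#include <stdio.h>
	#include <string.h>

	/* Sketch: when the input lacks a trailing newline, "eol" must be
	 * recomputed as txt + len; the buggy "eol += len" added len to
	 * the NULL pointer returned by strchr(). */
	static void
	emit_lines_sketch(const char *txt)
	{
	    while (*txt)
	    {
	        const char *eol = strchr(txt, '\n');
	        size_t      len;

	        if (eol)
	        {
	            len = (size_t) (eol - txt);
	            eol++;              /* step over the newline */
	        }
	        else
	        {
	            len = strlen(txt);
	            eol = txt + len;    /* the fix */
	        }
	        printf("%.*s\n", (int) len, txt);
	        txt = eol;
	    }
	}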

While at it, make two cosmetic improvements: mark the input string const,
and rename the argument from "text" to "txt" to dodge pgindent strangeness
(since "text" is a typedef name).

Even though this problem is only latent at present, it seems like a good
idea to back-patch the fix, since it's a very simple/safe patch and it's
not out of the realm of possibility that we might in future back-patch
something that expects sane behavior from do_text_output_multiline().

Per report from Hao Lee.

Report: <CAGoxFiFPAGyPAJLcFxTB5cGhTW2yOVBDYeqDugYwV4dEd1L_Ag@mail.gmail.com>
2016-05-23 14:16:40 -04:00
Peter Eisentraut a50b605aa4 psql: Message style improvements 2016-05-21 22:17:00 -04:00
Tom Lane 16ea51a263 Pin the built-in index access methods.
This was overlooked in commit 473b93287, which introduced DROP ACCESS
METHOD.  Although that command is restricted to superusers, we don't want
even superusers dropping the built-in methods; "DROP ACCESS METHOD btree"
in particular is unrecoverable from.  Pin these objects in the same way
that other initdb-created objects are pinned.

I chose to bump catversion for this fix.  That's not absolutely necessary
perhaps, but it will ensure that no 9.6 production systems are missing
the pin entries.
2016-05-19 14:40:02 -04:00
Teodor Sigaev 7c979c95a3 Allocate all page images at once in generic wal interface
This reduces the number of allocations.

Per gripe from Michael Paquier and a suggestion from Tom Lane.
2016-05-17 22:09:22 +03:00
Teodor Sigaev 7c8345f67f Correctly align page images in generic WAL API
Page images should be MAXALIGN'ed, because existing code may align
pointers within the page directly instead of aligning offsets from the
beginning of the page.
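
A sketch of the rounding involved (MAXIMUM_ALIGNOF is
platform-dependent; 8 is for illustration only):

	#include <stdint.h>

	#define MAXIMUM_ALIGNOF 8

	/* Sketch: round a pointer up to the next maximum-alignment
	 * boundary; if each page image starts on such a boundary, pointers
	 * derived from in-page offsets stay properly aligned too. */
	static char *
	maxalign_ptr_sketch(char *p)
	{
	    uintptr_t   v = (uintptr_t) p;

	    v = (v + MAXIMUM_ALIGNOF - 1) & ~((uintptr_t) (MAXIMUM_ALIGNOF - 1));
	    return (char *) v;
	}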

Found while experimenting with indexes as an extension, by Alexander Korotkov and me
2016-05-17 00:01:35 +03:00
Tom Lane b7a9347c11 Fix comment.
Reference to getThreadLocalPQExpBuffer here seems inappropriate, since
we aren't necessarily using that instantiation of getLocalPQExpBuffer.
2016-05-15 17:04:01 -04:00
Peter Eisentraut 9b7bfc3a88 sql_features: Fix typos
This makes the feature names match the SQL standard.

From: Alexander Law <exclusion@gmail.com>
2016-05-13 21:24:54 -04:00
Peter Eisentraut 5251f2fc4d Update release instructions for translation updates
We don't tag the translations repository any more, because the commits
into postgresql contain the git hashes, and that's authoritative.
2016-05-13 21:21:47 -04:00
Alvaro Herrera cca2a27860 Fix bogus comments
Some comments mentioned XLogReplayBuffer, but there's no such function:
that was an interim name for a function that got renamed to
XLogReadBufferForRedo, before commit 2c03216d83 was pushed.
2016-05-12 16:07:07 -03:00
Alvaro Herrera bdb9e3dc1d Fix obsolete comment 2016-05-12 15:36:51 -03:00
Tom Lane 8a13d5e6d1 Fix infer_arbiter_indexes() to not barf on system columns.
While it could be argued that rejecting system column mentions in the
ON CONFLICT list is an unsupported feature, falling over altogether
just because the table has a unique index on OID is indubitably a bug.

As far as I can tell, fixing infer_arbiter_indexes() is sufficient to
make ON CONFLICT (oid) actually work, though making a regression test
for that case is problematic because of the impossibility of setting
the OID counter to a known value.

Minor cosmetic cleanups along with the bug fix.
2016-05-11 17:06:53 -04:00
Tom Lane 26e66184d6 Fix assorted missing infrastructure for ON CONFLICT.
subquery_planner() failed to apply expression preprocessing to the
arbiterElems and arbiterWhere fields of an OnConflictExpr.  No doubt the
theory was that this wasn't necessary because we don't actually try to
execute those expressions; but that's wrong, because it results in failure
to match to index expressions or index predicates that are changed at all
by preprocessing.  Per bug #14132 from Reynold Smith.

Also add pullup_replace_vars processing for onConflictWhere.  Perhaps
it's impossible to have a subquery reference there, but I'm not exactly
convinced; and even if true today it's a failure waiting to happen.

Also add some comments to other places where one or another field of
OnConflictExpr is intentionally ignored, with explanation as to why it's
okay to do so.

Also, catalog/dependency.c failed to record any dependency on the named
constraint in ON CONFLICT ON CONSTRAINT, allowing such a constraint to
be dropped while rules exist that depend on it, and allowing pg_dump to
dump such a rule before the constraint it refers to.  The normal execution
path managed to error out reasonably for a dangling constraint reference,
but ruleutils.c dumped core; so in addition to fixing the omission, add
a protective check in ruleutils.c, since we can't retroactively add a
dependency in existing databases.

Back-patch to 9.5 where this code was introduced.

Report: <20160510190350.2608.48667@wrigleys.postgresql.org>
2016-05-11 16:20:23 -04:00
Alvaro Herrera 15739393e4 Fix autovacuum for shared relations
The table-skipping logic in autovacuum would fail to consider that
multiple workers could be processing the same shared catalog in
different databases.  This normally wouldn't be a problem: firstly
because non-wraparound autovacuum workers simply skip tables on which
they cannot acquire lock, and secondly because most of the time
these tables are small enough that even if multiple for-wraparound
workers are stuck in the same catalog, they would be over pretty
quickly.  But in cases where the catalogs are severely bloated it could
become a problem.

Backpatch all the way back, because the problem has been there since the
beginning.

Reported by Ondřej Světlík

Discussion: https://www.postgresql.org/message-id/572B63B1.3030603%40flexibee.eu
	https://www.postgresql.org/message-id/572A1072.5080308%40flexibee.eu
2016-05-10 16:23:54 -03:00
Tom Lane 8ee29a19d6 Stamp 9.6beta1. 2016-05-09 16:47:49 -04:00
Peter Eisentraut 48aaba4acf Translation updates
Source-Git-URL: git://git.postgresql.org/git/pgtranslation/messages.git
Source-Git-Hash: 17bf3e8564abf600274789fcc90e72532d5e7c05
2016-05-09 10:04:41 -04:00
Stephen Frost 6f69b96390 Wording quibbles regarding initdb username
Use disallowed instead of reserved, cannot instead of can not, and
double quotes instead of single quotes.

Also add a test to cover the bug which started this discussion.

Per discussion with Tom.
2016-05-08 12:58:21 -04:00
Stephen Frost 7df974ee0b Disallow superuser names starting with 'pg_' in initdb
As with CREATE ROLE, disallow users from specifying initial
superuser names which begin with 'pg_' in initdb.

Per discussion with Tom.
2016-05-08 11:55:44 -04:00
Tom Lane 9eb7a0ac6b Fix poorly-worded log message.
Euler Taveira
2016-05-08 01:37:07 -04:00
Tom Lane b818088408 In new pg_dump TAP tests, remove trailing "$" from regexps using /m.
It emerges that some Perl versions before 5.8.9 have a bug with regexps
that use the /m flag and contain "$".  This is the reason why jacana
is still failing on HEAD, and I was able to duplicate the failure on
prairiedog's host.  There's no real need for "$" in these patterns,
since they are already matching through the statement-terminating
semicolons (or matching an explicit \n in some cases).  So just
remove it.

Note: the reason jacana hasn't actually reported any failures in the
last little while is that the way the pg_dump TAP tests are set up, any
failure of this sort results in echoing the entire pg_dump dump output
to stderr.  Since there were about a hundred such failures, that resulted
in a 30MB log file which choked the buildfarm upload script.  There is
room for improvement here :-(.

Per off-list discussion with Andrew and Stephen.
2016-05-07 16:36:50 -04:00
Tom Lane 74a73b1722 Clean up after pg_dump test runs.
The tmp_check directory needs to be removed by "make clean",
and also ignored by .gitignore.
2016-05-06 22:28:01 -04:00
Tom Lane 1a2c17f8e2 Fix pg_upgrade to not fail when new-cluster TOAST rules differ from old.
This patch essentially reverts commit 4c6780fd17, in favor of a much
simpler solution for the case where the new cluster would choose to create
a TOAST table but the old cluster doesn't have one: just don't create a
TOAST table.

The existing code failed in at least two different ways if the situation
arose: (1) ALTER TABLE RESET didn't grab an exclusive lock, so that the
lock sanity check in create_toast_table failed; (2) pg_upgrade did not
provide a pg_type OID for the new toast table, so that the crosscheck in
TypeCreate failed.  While both these problems were introduced by later
patches, they show that the hack being used to cause TOAST table creation
is overwhelmingly fragile (and untested).  I also note that before the
TypeCreate crosscheck was added, the code would have resulted in assigning
an indeterminate pg_type OID to the toast table, possibly causing a later
OID conflict in that catalog; so that it didn't really work even when
committed.

If we simply don't create a TOAST table, there will only be a problem if
the code tries to store a tuple that's wider than a page, and field
compression isn't sufficient to get it under a page.  Given that the TOAST
creation threshold is intended to be about a quarter of a page, it's very
hard to believe that cross-version differences in the do-we-need-a-toast-
table heuristic could result in an observable problem.  So let's just
follow the old version's conclusion about whether a TOAST table is needed.

(If we ever do change needs_toast_table() so much that this conclusion
doesn't apply, we can devise a solution at that time, and hopefully do
it in a less klugy way than 4c6780fd17 did.)

Back-patch to 9.3, like the previous patch.

Discussion: <8110.1462291671@sss.pgh.pa.us>
2016-05-06 22:05:56 -04:00
Stephen Frost 0f97c722bb Disable BLOB test in pg_dump TAP tests
Buildfarm member jacana appears to have an issue with running this
test.  It's not entirely clear to me why, but rather than try to
fight with it, just disable it for now.

None of the other tests try to write out from psql directly as
this test does, so it seems likely that the rest of the tests will
be fine (as they have been on numerous other systems).
2016-05-06 21:24:31 -04:00
Kevin Grittner 7e3da1c473 Mitigate "snapshot too old" performance regression on NUMA
Limit maintenance of the time-to-xid mapping to once per minute.  At
least in the tested case this brings performance within 5% of when
the feature is off, compared to several times slower without this
patch.

While there, fix comments and whitespace.

Ants Aasma, with cosmetic adjustments suggested by Andres Freund
Reviewed by Kevin Grittner and Andres Freund
2016-05-06 20:05:29 -05:00
Stephen Frost 6e243c43c9 Add test_pg_dump to @contrib_excludes
The test_pg_dump extension doesn't have a C component, so exclude it
to keep the MSVC build system from trying to figure out how to build
it.

Also add a "MODULES" line to the Makefile, as test_extensions has.
Might not be necessary, but seems good to keep things consistent.

Lastly, remove the 'installcheck' line from test_pg_dump, as that
was causing redefinition errors, at least on my box.  This also
makes test_pg_dump consistent with how commit_ts is set up.
2016-05-06 16:39:56 -04:00
Stephen Frost c778e27e13 Correct query in pg_dumpall:dumpRoles
We need a new version branch in the query, due to the 9.5 addition of
bypassrls, when adding the clause that excludes pg_* roles from being
dumped by pg_dumpall.

Pointed out by Noah, patch by me.
2016-05-06 16:15:52 -04:00
Stephen Frost eccfeeb631 Remove MODULES_big from test_pg_dump
The Makefile for test_pg_dump shouldn't have a MODULES_big line
because there's no actual compiled bit for that extension.  Hopefully
this will fix the Windows buildfarm members which were complaining.

In passing, also add the 'prove_installcheck' bit to the pg_dump and
test_pg_dump Makefiles, to get the buildfarm members to actually run
those tests.
2016-05-06 15:26:57 -04:00
Robert Haas 68d704edbf Minimal fix for crash bug in quals_match_foreign_key.
Discussion is still underway as to whether to revert the entire patch
that added this function, but that discussion may not conclude before
beta1.  So, in the meantime, let's do at least this much.

David Rowley
2016-05-06 15:00:55 -04:00
Robert Haas c7ea68ff8d Limit maximum parallel degree to 1024.
This new limit affects both the max_parallel_degree GUC and the
parallel_degree reloption.  There may some day be a use case for using
more than 1024 CPUs for a single query, but that's surely not the case
right now.  Not only do not very many people have that many CPUs, but
the code hasn't been tested at that kind of scale and is very unlikely
to perform well, or even work at all, without a lot more work.  The
issue addressed by commit 06bd458cb8 is
probably just one problem of many.

The idea of a more reasonable limit here was suggested by Tom Lane;
the value of 1024 was suggested by Amit Kapila.
2016-05-06 14:50:54 -04:00
Tom Lane 73b9952e82 Improve pg_upgrade's report about failure to match up old and new tables.
Ordinarily, pg_upgrade shouldn't have any difficulty in matching up all
the relations it sees in the old and new databases.  If it does, however,
it just goes belly-up with a pretty unhelpful error message.  That seemed
fine as long as we expected the case never to occur in the wild, but
Alvaro reported that it had been seen in a database whose pg_largeobject
table had somehow acquired a TOAST table.  That doesn't quite seem like
a case that pg_upgrade actually needs to handle, but it would be good if
the report were more diagnosable.  Hence, extend the logic to print out
as much information as we can about the mismatch(es) before we quit.

In passing, improve the readability of get_rel_infos()'s data collection
query, which had suffered seriously from lets-not-bother-to-update-comments
syndrome, and generally was unnecessarily disrespectful to readers.

It could be argued that this is a bug fix, but given that we have so few
reports, I don't feel a need to back-patch; at least not before this has
baked awhile in HEAD.
2016-05-06 14:45:01 -04:00