Commit Graph

32793 Commits

Peter Eisentraut 35584fd05f Make spelling of "acknowledgment" consistent
I used the preferred U.S. spelling, as we do in other cases.
2018-10-15 10:06:45 +02:00
Tom Lane b403ea43e4 Make some subquery-using test cases a bit more robust.
These test cases could be adversely affected by an upcoming change
to allow pullup of FROM-less subqueries.  Tweak them to ensure that
they'll continue to test what they did before.

Discussion: https://postgr.es/m/5395.1539275668@sss.pgh.pa.us
2018-10-14 14:02:59 -04:00
Tom Lane 7d4a10e260 Use PlaceHolderVars within the quals of a FULL JOIN.
This prevents failures in cases where we pull up a constant or var-free
expression from a subquery and put it into a full join's qual.  That can
result in not recognizing the qual as containing a mergejoin-able or
hashjoin-able condition.  A PHV prevents the problem because it is still
recognized as belonging to the side of the join the subquery is in.
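
As a rough illustration, the affected query shape looks something like
this (relation and column names are hypothetical); before this change,
planning such a query could fail because the pulled-up constant hides
the mergejoinable/hashjoinable nature of the qual:

    SELECT *
    FROM t1
    FULL JOIN (SELECT 1 AS one FROM t2) ss ON t1.x = ss.one;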

I'm not very sure about the net effect of this change on plan quality.
In "typical" cases where the join keys are Vars, nothing changes.
In an affected case, the PHV-wrapped expression is less likely to be seen
as equal to PHV-less instances below the join, but more likely to be seen
as equal to similar expressions above the join, so it may end up being a
wash.  In the one existing case where there's any visible change in a
regression-test plan, it amounts to referencing a lower computation of a
COALESCE result instead of recomputing it, which seems like a win.

Given my uncertainty about that and the lack of field complaints,
no back-patch, even though this is a very ancient problem.

Discussion: https://postgr.es/m/32090.1539378124@sss.pgh.pa.us
2018-10-14 13:07:29 -04:00
Tom Lane e9f42d529f Clean up/tighten up coercibility checks in opr_sanity regression test.
With the removal of the old abstime type, there are no longer any cases
in this test where we need to use the weaker castcontext-ignoring form
of binary coercibility check.  (The other major source of such headaches,
apparently-incompatible hash functions, is now hashvalidate()'s problem
not this test script's problem.)  Hence, just use binary_coercible()
everywhere, and remove the comments explaining why we don't do so ---
which were broken anyway by cda6a8d01.

I left physically_coercible() in place but renamed it to better
match what it's actually testing, and added some comments.

Also, in test queries that have an assumption about the maximum number
of function arguments they need to handle, add a clause to make them fail
if someday there's a relevant function with more arguments.  Otherwise
we're likely not to notice that we need to extend the queries.

Discussion: https://postgr.es/m/27637.1539388060@sss.pgh.pa.us
2018-10-14 12:11:16 -04:00
Michael Paquier 1df21ddb19 Avoid duplicate XIDs at recovery when building initial snapshot
On a primary, sets of XLOG_RUNNING_XACTS records are generated on a
periodic basis to allow recovery to build the initial state of
transactions for a hot standby.  The set of transaction IDs is created
by scanning all the entries in ProcArray.  However, this logic never
accounted for the fact that a two-phase transaction finishing its
prepare step can leave ProcArray in a state with two entries carrying
the same transaction ID: one for the initial transaction, which gets
cleared when the prepare finishes, and a second, dummy entry to track
that the transaction is still running after the prepare.  This keeps
the transaction continuously visible, so that callers of, for example,
TransactionIdIsInProgress() always see it as alive.

So, if an XLOG_RUNNING_XACTS record takes a standby snapshot while a
two-phase transaction is finishing its prepare, the record can end up
with duplicated XIDs, which is a state expected by design.  If such a
record gets applied on a standby to initialize its recovery state, the
standby would simply fail; fortunately, the odds of facing this failure
are very low in practice.  It would be tempting to change the generation
of XLOG_RUNNING_XACTS so that duplicates are removed at the source, but
that would require holding ProcArrayLock for longer, which would impact
all workloads, particularly those making heavy use of two-phase
transactions.

Since XLOG_RUNNING_XACTS is actually used only to initialize the standby
state at recovery, the solution taken here is instead to discard
duplicates when applying the initial snapshot.

Diagnosed-by: Konstantin Knizhnik
Author: Michael Paquier
Discussion: https://postgr.es/m/0c96b653-4696-d4b4-6b5d-78143175d113@postgrespro.ru
Backpatch-through: 9.3
2018-10-14 22:23:21 +09:00
Tom Lane 240cd6bc83 Another round of portability hacking on ECPG regression tests.
Removing the separate Windows expected-files in commit f1885386f
turns out to have been too optimistic: on most (but not all!) of our
Windows buildfarm members, the tests still print floats with three
exponent digits, because they're invoking the native printf()
not snprintf.c.

But rather than put back the extra expected-files, let's hack
the three tests in question so that they adjust float formatting
the same way snprintf.c does.

Discussion: https://postgr.es/m/18890.1539374107@sss.pgh.pa.us
2018-10-12 18:08:47 -04:00
Tom Lane 13cd7209f7 Simplify use of AllocSetContextCreate() wrapper macro.
We can allow this macro to accept either abbreviated or non-abbreviated
allocation parameters by making use of __VA_ARGS__.  As noted by Andres
Freund, it's unlikely that any compiler would have __builtin_constant_p
but not __VA_ARGS__, so this gives up little or no error checking, and
it avoids a minor but annoying API break for extensions.

With this change, there is no reason for anybody to call
AllocSetContextCreateExtended directly, so in HEAD I renamed it to
AllocSetContextCreateInternal.  It's probably too late for an ABI
break like that in 11, though.

Discussion: https://postgr.es/m/20181012170355.bhxi273skjt6sag4@alap3.anarazel.de
2018-10-12 14:26:56 -04:00
Tom Lane 24a2c436a5 Remove dead reference to ecpg resultmap file.
I missed this in my prior commit because it doesn't matter in non-VPATH
builds.

Per buildfarm.
2018-10-12 11:42:28 -04:00
Alvaro Herrera c7d43c4d8a Correct attach/detach logic for FKs in partitions
There was no code to handle foreign key constraints on partitioned
tables in the case of ALTER TABLE DETACH; and if you happened to ATTACH
a partition that already had an equivalent constraint, that one was
ignored and a new constraint was created.  Adding this to the fact that
foreign key cloning reuses the constraint name on the partition instead
of generating a new name (as it probably should, to cater to SQL
standard rules about constraint naming within schemas), the result was a
pretty poor user experience -- the most visible failure was that just
detaching a partition and re-attaching it failed with an error such as

  ERROR:  duplicate key value violates unique constraint "pg_constraint_conrelid_contypid_conname_index"
  DETAIL:  Key (conrelid, contypid, conname)=(26702, 0, test_result_asset_id_fkey) already exists.

because it would try to create an identically-named constraint in the
partition.  To make matters worse, if you tried to drop the constraint
in the now-independent partition, that would fail because the constraint
was still seen as dependent on the constraint in its former parent
partitioned table:
  ERROR:  cannot drop inherited constraint "test_result_asset_id_fkey" of relation "test_result_cbsystem_0001_0050_monthly_2018_09"

This fix attacks the problem from two angles: first, when the partition
is detached, the constraint is also marked as independent, so the drop
now works.  Second, when the partition is re-attached, we scan existing
constraints searching for one matching the FK in the parent, and if one
exists, we link that one to the parent constraint.  So we don't end up
with a duplicate -- and better yet, we don't need to scan the referenced
table to verify that the constraint holds.
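
As an illustration, the round trip described above now behaves sanely
(relation and constraint names here are hypothetical):

    ALTER TABLE parent DETACH PARTITION part_2018_09;
    ALTER TABLE parent ATTACH PARTITION part_2018_09
        FOR VALUES FROM ('2018-09-01') TO ('2018-10-01');  -- reuses the matching FK instead of failing
    ALTER TABLE parent DETACH PARTITION part_2018_09;
    ALTER TABLE part_2018_09 DROP CONSTRAINT part_2018_09_asset_id_fkey;  -- now allowed after detach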

To implement this I made a small change to previously planner-only
struct ForeignKeyCacheInfo to contain the constraint OID; also relcache
now maintains the list of FKs for partitioned tables too.

Backpatch to 11.

Reported-by: Michael Vitale (bug #15425)
Discussion: https://postgr.es/m/15425-2dbc9d2aa999f816@postgresql.org
2018-10-12 12:37:37 -03:00
Tom Lane f1885386f6 Make float exponent output on Windows look the same as elsewhere.
Windows, alone among our supported platforms, likes to emit three-digit
exponent fields even when two digits would do.  Adjust such results to
look like the way everyone else does it.  Eliminate a bunch of variant
expected-output files that were needed only because of this quirk.

Discussion: https://postgr.es/m/2934.1539122454@sss.pgh.pa.us
2018-10-12 11:14:27 -04:00
Michael Paquier b34e84f160 Add TAP tests for pg_verify_checksums
All options available in the utility get coverage:
- Tests with disabled page checksums.
- Tests with enabled checksums.
- Emulation of corruption and broken checksums with a full scan and
single relfilenode scan.

This patch has been contributed mainly by Michael Banck and Magnus
Hagander with things presented on various threads, and I have gathered
all the contents into a single patch.

Author: Michael Banck, Magnus Hagander, Michael Paquier
Reviewed-by: Peter Eisentraut
Discussion: https://postgr.es/m/20181005012645.GE1629@paquier.xyz
2018-10-12 09:12:31 +09:00
Andres Freund cda6a8d01d Remove deprecated abstime, reltime, tinterval datatypes.
These types have been deprecated for a *long* time.

Catversion bump, for obvious reasons.

Author: Andres Freund
Discussion:
    https://postgr.es/m/20181009192237.34wjp3nmw7oynmmr@alap3.anarazel.de
    https://postgr.es/m/20171213080506.cwjkpcz3bkk6yz2u@alap3.anarazel.de
    https://postgr.es/m/25615.1513115237@sss.pgh.pa.us
2018-10-11 11:59:15 -07:00
Andres Freund 2d10defa77 Remove timetravel extension.
The extension depended on old types which are about to be removed. As
the code additionally was pretty crufty and didn't provide much in the
way of functionality, removing the extension seems to be the best way
forward.  It's fairly trivial to write functionality in plpgsql that
more than covers what timetravel did.

Author: Andres Freund
Discussion:
    https://postgr.es/m/20171213080506.cwjkpcz3bkk6yz2u@alap3.anarazel.de
    https://postgr.es/m/25615.1513115237@sss.pgh.pa.us
2018-10-11 11:43:56 -07:00
Andres Freund 86896be60e Move timeofday() implementation out of nabstime.c.
nabstime.c is about to be removed, but timeofday() isn't related to
the rest of the functionality therein, and some find it useful. Move
to timestamp.c.

Discussion:
    https://postgr.es/m/20181009192237.34wjp3nmw7oynmmr@alap3.anarazel.de
    https://postgr.es/m/20180928223240.kgwc4czzzekrpsid%40alap3.anarazel.de
2018-10-11 11:30:59 -07:00
Andres Freund e9edc1ba0b Fix logical decoding error when system table w/ toast is repeatedly rewritten.
Repeatedly rewriting a mapped catalog table with VACUUM FULL or
CLUSTER could cause logical decoding to fail with:
ERROR, "could not map filenode \"%s\" to relation OID"

To trigger the problem the rewritten catalog had to have live tuples
with toasted columns.

The problem was triggered because, during catalog table rewrites, the
heap_insert() check that prevents logical decoding information from
being emitted for system catalogs failed to treat the new heap's toast
table as a system catalog (the new heap is not recognized as a catalog
table via RelationIsLogicallyLogged()).  The relmapper, in contrast to
the normal catalog contents, does not contain historical information.
After a single rewrite of a mapped table the new relation is known to
the relmapper, but if the table is rewritten twice before logical
decoding occurs, the relfilenode cannot be mapped to a relation
anymore, which then leads us to error out.  This only happens for
toast tables, because the main table contents aren't re-inserted with
heap_insert().

The fix is simple, add a new heap_insert() flag that prevents logical
decoding information from being emitted, and accept during decoding
that there might not be tuple data for toast tables.

Unfortunately that does not fix pre-existing logical decoding
errors. Doing so would require not throwing an error when a filenode
cannot be mapped to a relation during decoding, and that seems too
likely to hide bugs.  If it's crucial to fix decoding for an existing
slot, temporarily changing the ERROR in ReorderBufferCommit() to a
WARNING appears to be the best fix.

Author: Andres Freund
Discussion: https://postgr.es/m/20180914021046.oi7dm4ra3ot2g2kt@alap3.anarazel.de
Backpatch: 9.4-, where logical decoding was introduced
2018-10-10 13:53:02 -07:00
Peter Eisentraut f82d4d666f Slightly correct context check for event triggers
The previous check for a "complete query" omitted the new
PROCESS_UTILITY_QUERY_NONATOMIC value.  This didn't actually make a
difference in practice, because only CALL and SET from PL/pgSQL run in
this state, but it's more correct to include it anyway.

Discussion: https://www.postgresql.org/message-id/4566041d-2567-74d2-d135-19ff6a20fe51%402ndquadrant.com
2018-10-10 22:41:12 +02:00
Peter Eisentraut ae307861d8 Test that event triggers work in functions and procedures
This ensures that we have coverage of all the ProcessUtilityContext
variants.
2018-10-10 22:41:12 +02:00
Peter Eisentraut f8c10f616f Turn transaction_isolation into GUC enum
It was previously a string setting that was converted into an enum by
custom code, but using the GUC enum facility seems much simpler and
doesn't change any functionality, except that

    set transaction_isolation='default';

no longer works, but that was never documented and doesn't work with
any other transaction characteristics.  (Note that this is not the
same as RESET or SET TO DEFAULT, which still work.)
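
A small sketch of the resulting behavior:

    SET transaction_isolation = 'repeatable read';  -- still works
    SET transaction_isolation TO DEFAULT;           -- still works
    RESET transaction_isolation;                    -- still works
    SET transaction_isolation = 'default';          -- now rejected as an invalid value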

Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
Discussion: https://www.postgresql.org/message-id/457db615-e84c-4838-310e-43841eb806e5@iki.fi
2018-10-09 21:26:00 +02:00
Tom Lane b6b297d20d Make src/common/exec.c's error logging less ugly.
This code used elog where it really ought to use ereport, mainly so that
it can report a SQLSTATE different from ERRCODE_INTERNAL_ERROR.  There
were some other random deviations from typical error report practice too.

In addition, we can make some cleanups that were impractical six months
ago:

* Use one variadic macro, instead of several with different numbers
of arguments, reducing the temptation to force-fit messages into
particular numbers of arguments;

* Use %m, even in the frontend case, simplifying the code.

Discussion: https://postgr.es/m/6025.1527351693@sss.pgh.pa.us
2018-10-09 13:36:24 -04:00
Tom Lane 6eb4378d53 Remove no-longer-needed variant expected regression result files.
numerology_1.out and float8-small-is-zero_1.out differ from their
base files only in showing plain zero rather than minus zero for
some results.  I believe that in the wake of commit 6eb3eb577,
we will print minus zero as such on all IEEE-float platforms
(and non-IEEE floats are going to cause many more regression diffs
than this, anyway).  Hence we should be able to remove these and
eliminate a bit of maintenance pain.  Let's see if the buildfarm
agrees.

Discussion: https://postgr.es/m/29037.1539021687@sss.pgh.pa.us
2018-10-09 11:31:30 -04:00
Tom Lane aed9fa0bd8 Select appropriate PG_PRINTF_ATTRIBUTE for recent NetBSD.
NetBSD-current generates a large number of warnings about "%m" not
being appropriate to use with *printf functions.  While that's true
for their native printf, it's surely not true for snprintf.c, so I
think they have misunderstood gcc's definition of the "gnu_printf"
archetype.  Nonetheless, choosing "__syslog__" instead silences the
warnings; so teach configure about that.

Since this is only a cosmetic warning issue (and anyway it depends
on previous hacking to be self-consistent), no back-patch.

Discussion: https://postgr.es/m/16785.1539046036@sss.pgh.pa.us
2018-10-09 11:10:07 -04:00
Michael Paquier c481016201 Add pg_ls_archive_statusdir function
This function lists the contents of the WAL archive status directory,
and is intended to be used by monitoring tools.  Unlike pg_ls_dir(),
access to it can be granted to non-superusers so that those monitoring
tools can observe the principle of least privilege.  Access is also
given by default to members of pg_monitor.
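
Usage sketch ("monitor_user" is a hypothetical role):

    SELECT * FROM pg_ls_archive_statusdir();
    GRANT EXECUTE ON FUNCTION pg_ls_archive_statusdir() TO monitor_user;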

Author:  Christoph Moench-Tegeder
Reviewed-by: Aya Iwata
Discussion: https://postgr.es/m/20180930205920.GA64534@elch.exwg.net
2018-10-09 22:29:09 +09:00
Thomas Munro 212fab9926 Relax transactional restrictions on ALTER TYPE ... ADD VALUE (redux).
Originally committed as 15bc038f (plus some follow-ups), this was
reverted in 28e07270 due to a problem discovered in parallel
workers.  This new version corrects that problem by sending the
list of uncommitted enum values to parallel workers.

Here follows the original commit message describing the change:

To prevent possibly breaking indexes on enum columns, we must keep
uncommitted enum values from getting stored in tables, unless we
can be sure that any such column is new in the current transaction.

Formerly, we enforced this by disallowing ALTER TYPE ... ADD VALUE
from being executed at all in a transaction block, unless the target
enum type had been created in the current transaction.  This patch
removes that restriction, and instead insists that an uncommitted enum
value can't be referenced unless it belongs to an enum type created
in the same transaction as the value.  Per discussion, this should be
a bit less onerous.  It does require each function that could possibly
return a new enum value to SQL operations to check this restriction,
but there aren't so many of those that this seems unmaintainable.
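
A sketch of the resulting rule, using hypothetical enum types:

    BEGIN;
    ALTER TYPE mood ADD VALUE 'meh';    -- now allowed inside a transaction block
    SELECT 'meh'::mood;                 -- still an error: uncommitted value of a pre-existing type
    ROLLBACK;

    BEGIN;
    CREATE TYPE grade AS ENUM ('pass');
    ALTER TYPE grade ADD VALUE 'fail';
    SELECT 'fail'::grade;               -- OK: the enum type was created in this transaction
    COMMIT;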

Author: Andrew Dunstan and Tom Lane, with parallel query fix by Thomas Munro
Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/CAEepm%3D0Ei7g6PaNTbcmAh9tCRahQrk%3Dr5ZWLD-jr7hXweYX3yg%40mail.gmail.com
Discussion: https://postgr.es/m/4075.1459088427%40sss.pgh.pa.us
2018-10-09 12:51:01 +13:00
Tom Lane 7767aadd94 Fix omissions in snprintf.c's coverage of standard *printf functions.
A warning on a NetBSD box revealed to me that pg_waldump/compat.c
is using vprintf(), which snprintf.c did not provide coverage for.
This is not good if we want to have uniform *printf behavior, and
it's pretty silly to omit when it's a one-line function.

I also noted that snprintf.c has pg_vsprintf() but for some reason
it was not exposed to the outside world, creating another way in
which code might accidentally invoke the platform *printf family.

Let's just make sure that we replace all eight of the POSIX-standard
printf family.

Also, upgrade plperl.h and plpython.h to make sure that they do
their undefine/redefine rain dance for all eight, not some random
maybe-sufficient subset thereof.
2018-10-08 19:15:55 -04:00
Tom Lane 82ff0cc91d Advance transaction timestamp for intra-procedure transactions.
Per discussion, this behavior seems less astonishing than not doing so.
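
A minimal illustration (procedure name is hypothetical):

    CREATE PROCEDURE show_ts() LANGUAGE plpgsql AS $$
    BEGIN
        RAISE NOTICE '%', transaction_timestamp();
        COMMIT;
        RAISE NOTICE '%', transaction_timestamp();  -- now a later value, for the new transaction
    END $$;
    CALL show_ts();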

Peter Eisentraut and Tom Lane

Discussion: https://postgr.es/m/20180920234040.GC29981@momjian.us
2018-10-08 16:16:36 -04:00
Tom Lane 6eb3eb577d Improve snprintf.c's handling of NaN, Infinity, and minus zero.
Up to now, float4out/float8out handled NaN and Infinity cases explicitly,
and invoked psprintf only for ordinary float values.  This was done because
platform implementations of snprintf produce varying representations of
these special cases.  But now that we use snprintf.c always, it's better
to give it the responsibility to produce a uniform representation of
these cases, so that we have uniformity across the board not only in
float4out/float8out.  Hence, move that work into fmtfloat().

Also, teach fmtfloat() to recognize IEEE minus zero and handle it
correctly.  The previous coding worked only accidentally, and would
fail for e.g. "%+f" format (it'd print "+-0.00000").  Now that we're
using snprintf.c everywhere, it's not acceptable for it to do weird
things in corner cases.  (This incidentally avoids a portability
problem we've seen on some really ancient platforms, that native
sprintf does the wrong thing with minus zero.)

Also, introduce a new entry point in snprintf.c to allow float[48]out
to bypass the work of interpreting a well-known format spec, as well
as bypassing the overhead of the psprintf layer.  I modeled this API
loosely on strfromd().  In my testing, this brings float[48]out back
to approximately the same speed they had when using native snprintf,
fixing one of the main performance issues caused by using snprintf.c.

(There is some talk of more aggressive work to improve the speed of
floating-point output conversion, but these changes seem to provide
a better starting point for such work anyway.)

Getting rid of the previous ad-hoc hack for Infinity/NaN in fmtfloat()
allows removing <ctype.h> from snprintf.c's #includes.  I also removed
a few other #includes that I think are historical, though the buildfarm
may expose that as wrong.

Discussion: https://postgr.es/m/13178.1538794717@sss.pgh.pa.us
2018-10-08 12:19:20 -04:00
Tom Lane f9eb7c14b0 Avoid O(N^2) cost in ExecFindRowMark().
If there are many ExecRowMark structs, we spent O(N^2) time in
ExecFindRowMark during executor startup.  Once upon a time this was
not of great concern, but the addition of native partitioning has
squeezed out enough other costs that this can become the dominant
overhead in some use-cases for tables with many partitions.

To fix, simply replace that List data structure with an array.

This adds a little bit of cost to execCurrentOf(), but not much,
and anyway that code path is neither of large importance nor very
efficient now.  If we ever decide it is a bottleneck, constructing a
hash table for lookup-by-tableoid would likely be the thing to do.

Per complaint from Amit Langote, though this is different from
his fix proposal.

Discussion: https://postgr.es/m/468c85d9-540e-66a2-1dde-fec2b741e688@lab.ntt.co.jp
2018-10-08 10:41:34 -04:00
Alvaro Herrera eee01d606e Silence compiler warning in Assert()
gcc 6.3 does not whine about this mistake I made in 39808e8868 but
evidently lots of other compilers do, according to Michael Paquier,
Peter Eisentraut, Arthur Zakirov, Tomas Vondra.

Discussion: too many to list
2018-10-08 10:37:21 -03:00
Peter Eisentraut 634b4b79cb Track procedure calls in pg_stat_user_functions
This was forgotten when procedures were implemented.
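
A sketch of how to observe the new tracking (procedure name is
hypothetical; requires track_functions to be enabled):

    SET track_functions = 'pl';
    CALL my_procedure();
    SELECT funcname, calls FROM pg_stat_user_functions;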

Reported-by: Lukas Fittl <lukas@fittl.com>
2018-10-08 11:22:53 +02:00
Michael Paquier 9c2a970d1f Improve two error messages related to foreign keys on partitioned tables
Error messages for creating a foreign key on a partitioned table using
ONLY or NOT VALID referred to the wrong objects.  Along the way, this
commit adds some regression tests that were missing for those cases.

Author: Laurenz Albe
Reviewed-by: Michael Paquier
Discussion: https://postgr.es/m/c11c05810a9ed65e9b2c817a9ef442275a32fe80.camel@cybertec.at
2018-10-08 17:56:13 +09:00
Magnus Hagander a9da329be0 Fix speling error
Reported by Alexander Lakhin in bug #15423
2018-10-08 08:57:24 +02:00
Tom Lane 52ed730d51 Remove some unnecessary fields from Plan trees.
In the wake of commit f2343653f, we no longer need some fields that
were used before to control executor lock acquisitions:

* PlannedStmt.nonleafResultRelations can go away entirely.

* partitioned_rels can go away from Append, MergeAppend, and ModifyTable.
However, ModifyTable still needs to know the RT index of the partition
root table if any, which was formerly kept in the first entry of that
list.  Add a new field "rootRelation" to remember that.  rootRelation is
partly redundant with nominalRelation, in that if it's set it will have
the same value as nominalRelation.  However, the latter field has a
different purpose so it seems best to keep them distinct.

Amit Langote, reviewed by David Rowley and Jesper Pedersen,
and whacked around a bit more by me

Discussion: https://postgr.es/m/468c85d9-540e-66a2-1dde-fec2b741e688@lab.ntt.co.jp
2018-10-07 14:33:17 -04:00
Alvaro Herrera 39808e8868 Fix catalog insertion order for ATTACH PARTITION
Commit 2fbdf1b38b changed the order in which we inserted catalog rows
when creating partitions, so that we could remove an unsightly hack
required for untimely relcache invalidations.  However, that commit only
changed the ordering for CREATE TABLE PARTITION OF, and left ALTER TABLE
ATTACH PARTITION unchanged, so the latter can be affected when catalog
invalidations occur, for instance when the partition key involves an SQL
function.

Reported-by: Rajkumar Raghuwanshi
Author: Amit Langote
Reviewed-by: Michaël Paquier
Discussion: https://postgr.es/m/CAKcux6=nTz9KSfTr_6Z2mpzLJ_09JN-rK6=dWic6gGyTSWueyQ@mail.gmail.com
2018-10-06 22:13:19 -03:00
Alvaro Herrera ad08006ba0 Fix event triggers for partitioned tables
Index DDL cascading on partitioned tables introduced a way for ALTER
TABLE to be called reentrantly.  This caused an important deficiency
in event trigger support to be exposed: on exiting the reentrant call,
the alter table state object was clobbered, causing a crash when the
outer alter table tries to finalize its processing.  Fix the crash by
creating a stack of event trigger state objects.  There are still ways
to cause things to misbehave (and probably other crashers) with more
elaborate tricks, but at least it now doesn't crash in the obvious
scenario.

Backpatch to 9.5, where DDL deparsing of event triggers was introduced.

Reported-by: Marco Slot
Authors: Michaël Paquier, Álvaro Herrera
Discussion: https://postgr.es/m/CANNhMLCpi+HQ7M36uPfGbJZEQLyTy7XvX=5EFkpR-b1bo0uJew@mail.gmail.com
2018-10-06 19:17:46 -03:00
Tom Lane 29ef2b310d Restore sane locking behavior during parallel query.
Commit 9a3cebeaa changed things so that parallel workers didn't obtain
any lock of their own on tables they access.  That was clearly a bad
idea, but I'd mistakenly supposed that it was the intended end result
of the series of patches for simplifying the executor's lock management.
Undo that change in relation_open(), and adjust ExecOpenScanRelation()
so that it gets the correct lock if inside a parallel worker.

In passing, clean up some more obsolete comments about when locks
are acquired.

Discussion: https://postgr.es/m/468c85d9-540e-66a2-1dde-fec2b741e688@lab.ntt.co.jp
2018-10-06 15:49:37 -04:00
Tom Lane f2343653f5 Remove more redundant relation locking during executor startup.
We already have appropriate locks on every relation listed in the
query's rangetable before we reach the executor.  Take the next step
in exploiting that knowledge by removing code that worries about
taking locks on non-leaf result relations in a partitioned table.

In particular, get rid of ExecLockNonLeafAppendTables and a stanza in
InitPlan that asserts we already have locks on certain such tables.

In passing, clean up some now-obsolete comments in InitPlan.

Amit Langote, reviewed by David Rowley and Jesper Pedersen,
and whacked around a bit more by me

Discussion: https://postgr.es/m/468c85d9-540e-66a2-1dde-fec2b741e688@lab.ntt.co.jp
2018-10-06 15:12:51 -04:00
Tom Lane 0209f0285d Don't use is_infinite() where isinf() will do.
Places that aren't testing for sign should not use the more expensive
function; it's just wasteful, not to mention being a cognitive load
for readers who may know what isinf() is but not is_infinite().

As things stand, we actually don't need is_infinite() anyplace except
float4out/float8out, which means it could potentially go away altogether
after the changes I proposed in <13178.1538794717@sss.pgh.pa.us>.
2018-10-06 13:18:38 -04:00
Tom Lane 07ee62ce9e Propagate xactStartTimestamp and stmtStartTimestamp to parallel workers.
Previously, a worker process would establish values for these based on
its own start time.  In v10 and up, this can trivially be shown to cause
misbehavior of transaction_timestamp(), timestamp_in(), and related
functions which are (perhaps unwisely?) marked parallel-safe.  It seems
likely that other behaviors might diverge from what happens in the parent
as well.

It's not as trivial to demonstrate problems in 9.6 or 9.5, but I'm sure
it's still possible, so back-patch to all branches containing parallel
worker infrastructure.

In HEAD only, mark now() and statement_timestamp() as parallel-safe
(other affected functions already were).  While in theory we could
still squeeze that change into v11, it doesn't seem important enough
to force a last-minute catversion bump.

Konstantin Knizhnik, whacked around a bit by me

Discussion: https://postgr.es/m/6406dbd2-5d37-4cb6-6eb2-9c44172c7e7c@postgrespro.ru
2018-10-06 12:00:09 -04:00
Dean Rasheed e954a727f0 Improve the accuracy of floating point statistical aggregates.
When computing statistical aggregates like variance, the common
schoolbook algorithm which computes the sum of the squares of the
values and subtracts the square of the mean can lead to a large loss
of precision when using floating point arithmetic, because the
difference between the two terms is often very small relative to the
terms themselves.

To avoid this, re-work these aggregates to use the Youngs-Cramer
algorithm, which is a proven, numerically stable algorithm that
directly aggregates the sum of the squares of the differences of the
values from the mean in a single pass over the data.
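
For example, aggregating values that are large relative to their spread
is the sensitive case (numbers purely illustrative); the schoolbook
formula can cancel away most of the significant digits here, while the
single-pass algorithm keeps the result accurate:

    SELECT var_samp(x), stddev_samp(x)
    FROM (VALUES (1000000.01::float8), (1000000.02), (1000000.03)) AS t(x);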

While at it, improve the test coverage to test the aggregate combine
functions used during parallel aggregation.

Per report and suggested algorithm from Erich Schubert.

Patch by me, reviewed by Madeleine Thompson.

Discussion: https://postgr.es/m/153313051300.1397.9594490737341194671@wrigleys.postgresql.org
2018-10-06 11:20:09 +01:00
Michael Paquier 38921d1416 Assign constraint name when cloning FK definition for partitions
This is used, for example, when attaching a partition to a partitioned
table that includes foreign keys; in this case the constraint name was
missing from the cloned data.  This could lead to hard crashes, as the
constraint name is always expected when validating the foreign key
constraint.  In particular, when using log_min_messages >= DEBUG1, a
log message would be generated with this unassigned constraint name,
leading to an assertion failure on HEAD.

While at it, rename a variable in ATExecAttachPartition which was
declared twice with the same name.

Author: Michael Paquier
Reviewed-by: Álvaro Herrera
Discussion: https://postgr.es/m/20181005042236.GG1629@paquier.xyz
Backpatch-through: 11
2018-10-06 14:59:36 +09:00
Tom Lane c87cb5f7a6 Allow btree comparison functions to return INT_MIN.
Historically we forbade datatype-specific comparison functions from
returning INT_MIN, so that it would be safe to invert the sort order
just by negating the comparison result.  However, this was never
really safe for comparison functions that directly return the result
of memcmp(), strcmp(), etc, as POSIX doesn't place any such restriction
on those library functions.  Buildfarm results show that at least on
recent Linux on s390x, memcmp() actually does return INT_MIN sometimes,
causing sort failures.

The agreed-on answer is to remove this restriction and fix relevant
call sites to not make such an assumption; code such as "res = -res"
should be replaced by "INVERT_COMPARE_RESULT(res)".  The same is needed
in a few places that just directly negated the result of memcmp or
strcmp.

To help find places having this problem, I've also added a compile option
to nbtcompare.c that causes some of the commonly used comparators to
return INT_MIN/INT_MAX instead of their usual -1/+1.  It'd likely be
a good idea to have at least one buildfarm member running with
"-DSTRESS_SORT_INT_MIN".  That's far from a complete test of course,
but it should help to prevent fresh introductions of such bugs.

This is a longstanding portability hazard, so back-patch to all supported
branches.

Discussion: https://postgr.es/m/20180928185215.ffoq2xrq5d3pafna@alap3.anarazel.de
2018-10-05 16:01:29 -04:00
Tom Lane 113a659914 Ensure that PLPGSQL_DTYPE_ROW variables have valid refname fields.
Without this, the syntax-tree-dumping functions in pl_funcs.c crash,
and there are other places that might be at risk too.  Per report
from Pavel Stehule.

Looks like I broke this in commit f9263006d, so back-patch to v11.

Discussion: https://postgr.es/m/CAFj8pRA+3f5n4642q2g8BXCKjbTd7yU9JMYAgDyHgozk6cQ-VA@mail.gmail.com
2018-10-05 12:45:37 -04:00
Michael Paquier 9cd92d1a33 Add pg_ls_tmpdir function
This lists the contents of the temporary directory associated with a
given tablespace, which is useful for getting information about on-disk
consumption caused by temporary files used by a session's queries.  By
default, pg_default is scanned, and a tablespace can be specified as an
argument.

This function is intended to be used by monitoring tools, and, unlike
pg_ls_dir(), access to it can be granted to non-superusers so that
those monitoring tools can observe the principle of least privilege.
Access is also given by default to members of pg_monitor.
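
Usage sketch (the one-argument form takes a tablespace OID; the OID and
role name below are hypothetical):

    SELECT * FROM pg_ls_tmpdir();        -- temporary files for pg_default
    SELECT oid FROM pg_tablespace WHERE spcname = 'fast_temp';
    SELECT * FROM pg_ls_tmpdir(16385);   -- OID obtained from the lookup above
    GRANT EXECUTE ON FUNCTION pg_ls_tmpdir() TO monitor_user;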

Author: Nathan Bossart
Reviewed-by: Laurenz Albe
Discussion: https://postgr.es/m/92F458A2-6459-44B8-A7F2-2ADD3225046A@amazon.com
2018-10-05 09:21:48 +09:00
Tom Lane d73f4c74dd In the executor, use an array of pointers to access the rangetable.
Instead of doing a lot of list_nth() accesses to es_range_table,
create a flattened pointer array during executor startup and index
into that to get at individual RangeTblEntrys.

This eliminates one source of O(N^2) behavior with lots of partitions.
(I'm not exactly convinced that it's the most important source, but
it's an easy one to fix.)

Amit Langote and David Rowley

Discussion: https://postgr.es/m/468c85d9-540e-66a2-1dde-fec2b741e688@lab.ntt.co.jp
2018-10-04 15:48:17 -04:00
Tom Lane 9ddef36278 Centralize executor's opening/closing of Relations for rangetable entries.
Create an array estate->es_relations[] paralleling the es_range_table,
and store references to Relations (relcache entries) there, so that any
given RT entry is opened and closed just once per executor run.  Scan
nodes typically still call ExecOpenScanRelation, but ExecCloseScanRelation
is no more; relation closing is now done centrally in ExecEndPlan.

This is slightly more complex than one would expect because of the
interactions with relcache references held in ResultRelInfo nodes.
The general convention is now that ResultRelInfo->ri_RelationDesc does
not represent a separate relcache reference and so does not need to be
explicitly closed; but there is an exception for ResultRelInfos in the
es_trig_target_relations list, which are manufactured by
ExecGetTriggerResultRel and have to be cleaned up by
ExecCleanUpTriggerState.  (That much was true all along, but these
ResultRelInfos are now more different from others than they used to be.)

To allow the partition pruning logic to make use of es_relations[] rather
than having its own relcache references, adjust PartitionedRelPruneInfo
to store an RT index rather than a relation OID.

Amit Langote, reviewed by David Rowley and Jesper Pedersen,
some mods by me

Discussion: https://postgr.es/m/468c85d9-540e-66a2-1dde-fec2b741e688@lab.ntt.co.jp
2018-10-04 14:03:42 -04:00
Alvaro Herrera fb9e93a2c5 Fix duplicate primary keys in partitions
When using the CREATE TABLE .. PARTITION OF syntax, it's possible to
cause a partition to get two primary keys if the parent already has one.
Tighten the check to disallow that.
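
A sketch of the kind of command now rejected (names hypothetical):

    CREATE TABLE parent (a int PRIMARY KEY) PARTITION BY RANGE (a);
    CREATE TABLE part0 PARTITION OF parent (a PRIMARY KEY)  -- now fails: parent already provides the PK
        FOR VALUES FROM (0) TO (1000);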

Reported-by: Rajkumar Raghuwanshi
Author: Amul Sul
Discussion: https://postgr.es/m/CAKcux6=OnSV3-qd8Gb6W=KPPwcCz6Fe_O_MQYjTa24__Xn8XxA@mail.gmail.com
2018-10-04 11:40:36 -03:00
Michael Paquier 09921f397b Refactor user-facing SQL functions signalling backends
This moves the system administration functions for signalling backends
from backend/utils/adt/misc.c into a separate file dedicated to backend
signalling.  No new functionality is introduced in this commit.

Author: Daniel Gustafsson
Reviewed-by: Michael Paquier, Álvaro Herrera
Discussion: https://postgr.es/m/C2C7C3EC-CC5F-44B6-9C78-637C88BD7D14@yesql.se
2018-10-04 18:27:25 +09:00
Michael Paquier 803b1301e8 Add option SKIP_LOCKED to VACUUM and ANALYZE
When specified, this option allows VACUUM to skip the work on a relation
if there is a conflicting lock on it when trying to open it at the
beginning of its processing.

Similarly to autovacuum, this comes with a couple of limitations while
the relation is processed which can cause the process to still block:
- when opening the relation indexes.
- when acquiring row samples for table inheritance trees, partition trees
or certain types of foreign tables, if a lock is taken on some leaves of
such trees.
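
Syntax sketch ("mytable" is a hypothetical relation):

    VACUUM (SKIP_LOCKED) mytable;
    VACUUM (SKIP_LOCKED, ANALYZE, VERBOSE) mytable;
    ANALYZE (SKIP_LOCKED) mytable;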

Author: Nathan Bossart
Reviewed-by: Michael Paquier, Andres Freund, Masahiko Sawada
Discussion: https://postgr.es/m/9EF7EBE4-720D-4CF1-9D0E-4403D7E92990@amazon.com
Discussion: https://postgr.es/m/20171201160907.27110.74730@wrigleys.postgresql.org
2018-10-04 09:00:33 +09:00
Andres Freund d173652797 Replace uint64 use introduced in 4868e44685 in light of 595a0eab7f.
Reported-By: Tom Lane
Discussion: https://postgr.es/m/527.1538598263@sss.pgh.pa.us
2018-10-03 13:28:14 -07:00
Andres Freund 4868e44685 Ensure that snprintf.c's fmtint() doesn't overflow when printing INT64_MIN.
This isn't actually a live bug, as the output happens to be the
same.  But it upsets tools like UBSan, which makes it worthwhile to
fix.

As it's an issue without practical consequences, don't backpatch.

Author: Andres Freund
Discussion: https://postgr.es/m/20180928001121.hhx5n6dsygqxr5wu@alap3.anarazel.de
2018-10-03 13:13:50 -07:00
Tom Lane 9a3cebeaa7 Change executor to just Assert that table locks were already obtained.
Instead of locking tables during executor startup, just Assert that
suitable locks were obtained already during the parse/plan pipeline
(or re-obtained by the plan cache).  This must be so, else we have a
hazard that concurrent DDL has invalidated the plan.

This is pretty inefficient as well as undercommented, but it's all going
to go away shortly, so I didn't try hard.  This commit is just another
attempt to use the buildfarm to see if we've missed anything in the plan
to simplify the executor's table management.

Note that the change needed here in relation_open() exposes that
parallel workers now really are accessing tables without holding any
lock of their own, whereas they were not doing that before this commit.
This does not give me a warm fuzzy feeling about that aspect of parallel
query; it does not seem like a good design, and we now know that it's
had exactly no actual testing.  I think that we should modify parallel
query so that that change can be reverted.

Discussion: https://postgr.es/m/468c85d9-540e-66a2-1dde-fec2b741e688@lab.ntt.co.jp
2018-10-03 16:05:12 -04:00
Andres Freund c03c1449c0 Fix issues around EXPLAIN with JIT.
I (Andres) was more than a bit hasty in committing 33001fd7a7
after last minute changes, leading to a number of problems (jit output
was only shown for JIT in parallel workers, and just EXPLAIN without
ANALYZE didn't work).  Lukas luckily found these issues quickly.

Instead of combining instrumentation in standard_ExecutorEnd(), do
so on demand in the new ExplainPrintJITSummary().

Also update a documentation example of the JIT output, changed in
52050ad8eb.

Author: Lukas Fittl, with minor changes by me
Discussion: https://postgr.es/m/CAP53PkxmgJht69pabxBXJBM+0oc6kf3KHMborLP7H2ouJ0CCtQ@mail.gmail.com
Backpatch: 11, where JIT compilation was introduced
2018-10-03 12:48:37 -07:00
Tom Lane 595a0eab7f Rationalize snprintf.c's handling of "ll" formats.
Although all known platforms define "long long" as 64 bits, it still feels
a bit shaky to be using "va_arg(args, int64)" to pull out an argument that
the caller thought was declared "long long".  The reason it was coded like
this, way back in commit 3311c7669, was to work around the possibility that
the compiler had no type named "long long" --- and, at the time, that it
maybe didn't have 64-bit ints at all.  Now that we're requiring compilers
to support C99, those concerns are moot.  Let's make the code clearer and
more bulletproof by writing "long long" where we mean "long long".

This does introduce a hazard that we'd inefficiently use 128-bit arithmetic
to convert plain old integers.  The way to tackle that would be to provide
two versions of fmtint(), one for "long long" and one for narrower types.
Since, as of today, no platforms require that, we won't bother with the
extra code for now.

Discussion: https://postgr.es/m/1680.1538587115@sss.pgh.pa.us
2018-10-03 14:33:13 -04:00
Tom Lane 6d842be6c1 Provide fast path in snprintf.c for conversion specs that are just "%s".
This case occurs often enough (around 45% of conversion specs executed
in our regression tests are just "%s") that it's worth an extra test
per conversion spec to allow skipping all the logic associated with
field widths and padding when it happens.

Discussion: https://postgr.es/m/26193.1538582367@sss.pgh.pa.us
2018-10-03 13:05:01 -04:00
Tom Lane abd9ca377d Make assorted performance improvements in snprintf.c.
In combination, these changes make our version of snprintf as fast
or faster than most platforms' native snprintf, except for cases
involving floating-point conversion (which we still delegate to
the native sprintf).  The speed penalty for a float conversion
is down to around 10% though, much better than before.

Notable changes:

* Rather than always parsing the format twice to see if it contains
instances of %n$, do the extra scan only if we actually find a $.
This obviously wins for non-localized formats, and even when there
is use of %n$, we can avoid scanning text before the first % twice.

* Use strchrnul() if available to find the next %, and emit the
literal text between % escapes as strings rather than char-by-char.

* Create a bespoke function (dopr_outchmulti) for the common case
of emitting N copies of the same character, in place of writing
loops around dopr_outch.

* Simplify construction of the format string for invocations of sprintf
for floats.

* Const-ify some internal functions, and avoid unnecessary use of
pass-by-reference arguments.

Patch by me, reviewed by Andres Freund

Discussion: https://postgr.es/m/11787.1534530779@sss.pgh.pa.us
2018-10-03 10:18:15 -04:00
Amit Kapila 9bc9f72b28 MAXALIGN the target address where we store flattened value.
The API (EOH_flatten_into) that flattens the expanded value representation
expects the target address to be maxaligned.  All its usages adhere to that
principle except when serializing datums for parallel query.  Fix that
usage.

Diagnosed-by: Tom Lane
Author: Tom Lane and Amit Kapila
Backpatch-through: 9.6
Discussion: https://postgr.es/m/11629.1536550032@sss.pgh.pa.us
2018-10-03 09:15:03 +05:30
Andrew Dunstan a33245a853 Don't build static libraries on Cygwin
Cygwin has been building and linking against static libraries.
Although a bug, this was relatively harmless until now, when it started
causing errors due to changes in the way we build certain libraries.
So this patch makes things work the way we always intended, namely that
we link against the dynamic libraries (cygpq.dll etc.) and just don't
build the static libraries.  The downstream packagers have been doing
this for some time, so this just aligns with their practice.

Extracted from a patch by Marco Atzeri, with a suggestion from Tom Lane.

Discussion: https://postgr.es/m/1056.1538235347@sss.pgh.pa.us
2018-10-02 16:46:57 -04:00
Tom Lane 6e35939feb Change rewriter/planner/executor/plancache to depend on RTE rellockmode.
Instead of recomputing the required lock levels in all these places,
just use what commit fdba460a2 made the parser store in the RTE fields.
This already simplifies the code measurably in these places, and
follow-on changes will remove a bunch of no-longer-needed infrastructure.

In a few cases, this change causes us to acquire a higher lock level
than we did before.  This is OK primarily because said higher lock level
should've been acquired already at query parse time; thus, we're saving
a useless extra trip through the shared lock manager to acquire a lesser
lock alongside the original lock.  The only known exception to this is
that re-execution of a previously planned SELECT FOR UPDATE/SHARE query,
for a table that uses ROW_MARK_REFERENCE or ROW_MARK_COPY methods, might
have gotten only AccessShareLock before.  Now it will get RowShareLock
like the first execution did, which seems fine.

While there's more to do, push it in this state anyway, to let the
buildfarm help verify that nothing bad happened.

Amit Langote, reviewed by David Rowley and Jesper Pedersen,
and whacked around a bit more by me

Discussion: https://postgr.es/m/468c85d9-540e-66a2-1dde-fec2b741e688@lab.ntt.co.jp
2018-10-02 14:43:09 -04:00
Andres Freund cc2905e963 Use slots more widely in tuple mapping code and make naming more consistent.
It's inefficient to use a single slot for mapping between tuple
descriptors for multiple tuples, as previously done when using
ConvertPartitionTupleSlot(), as that means the slot's tuple descriptors
change for every tuple.

Previously we also, via ConvertPartitionTupleSlot(), built new tuples
after the mapping even in cases where we, immediately afterwards,
access individual columns again.

Refactor the code so one slot, on demand, is used for each
partition. That avoids having to change the descriptor (and allows the use
of the more efficient "fixed" tuple slots). Then use slot->slot
mapping, to avoid unnecessarily forming a tuple.

As the naming between the tuple and slot mapping functions wasn't
consistent, rename them to execute_attr_map_{tuple,slot}.  It's likely
that we'll also rename convert_tuples_by_* to denote that these
functions "only" build a map, but that's left for later.

Author: Amit Khandekar and Amit Langote, editorialized by me
Reviewed-By: Amit Langote, Amit Khandekar, Andres Freund
Discussion:
    https://postgr.es/m/CAJ3gD9fR0wRNeAE8VqffNTyONS_UfFPRpqxhnD9Q42vZB+Jvpg@mail.gmail.com
    https://postgr.es/m/e4f9d743-cd4b-efb0-7574-da21d86a7f36%40lab.ntt.co.jp
Backpatch: -
2018-10-02 11:14:26 -07:00
Tom Lane 625b38ea0e Set snprintf.c's maximum number of NL arguments to be 31.
Previously, we used the platform's NL_ARGMAX if any, otherwise 16.
The trouble with this is that the platform value is hugely variable,
ranging from the POSIX-minimum 9 to as much as 64K on recent FreeBSD.
Values of more than a dozen or two have no practical use and slow down
the initialization of the argtypes array.  Worse, they cause snprintf.c
to consume far more stack space than was the design intention, possibly
resulting in stack-overflow crashes.

Standardize on 31, which is comfortably more than we need (it looks like
no existing translatable message has more than about 10 parameters).
I chose that, not 32, to make the array sizes powers of 2, for some
possible small gain in speed of the memset.

The lack of reported crashes suggests that the set of platforms we
use snprintf.c on (in released branches) may have no overlap with
the set where NL_ARGMAX has unreasonably large values.  But that's
not entirely clear, so back-patch to all supported branches.

Per report from Mateusz Guzik (via Thomas Munro).

Discussion: https://postgr.es/m/CAEepm=3VF=PUp2f8gU8fgZB22yPE_KBS0+e1AHAtQ=09schTHg@mail.gmail.com
2018-10-02 12:41:28 -04:00
Tom Lane 3d0f68dd30 Fix corner-case failures in has_foo_privilege() family of functions.
The variants of these functions that take numeric inputs (OIDs or
column numbers) are supposed to return NULL rather than failing
on bad input; this rule reduces problems with snapshot skew when
queries apply the functions to all rows of a catalog.

has_column_privilege() had careless handling of the case where the
table OID didn't exist.  You might get something like this:
	select has_column_privilege(9999,'nosuchcol','select');
	ERROR:  column "nosuchcol" of relation "(null)" does not exist
or you might get a crash, depending on the platform's printf's response
to a null string pointer.

In addition, while applying the column-number variant to a dropped
column returned NULL as desired, applying the column-name variant
did not:
	select has_column_privilege('mytable','........pg.dropped.2........','select');
	ERROR:  column "........pg.dropped.2........" of relation "mytable" does not exist
It seems better to make this case return NULL as well.

Also, the OID-accepting variants of has_foreign_data_wrapper_privilege,
has_server_privilege, and has_tablespace_privilege didn't follow the
principle of returning NULL for nonexistent OIDs.  Superusers got TRUE,
everybody else got an error.
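
After the fix, a sketch of the intended behavior for the cases above:

    select has_column_privilege(9999, 'nosuchcol', 'select');                          -- NULL, not an error
    select has_column_privilege('mytable', '........pg.dropped.2........', 'select');  -- NULL
    select has_tablespace_privilege(9999, 'create');                                   -- NULL, even for superusers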

Per investigation of Jaime Casanova's report of a new crash in HEAD.
These behaviors have been like this for a long time, so back-patch to
all supported branches.

Patch by me; thanks to Stephen Frost for discussion and review

Discussion: https://postgr.es/m/CAJGNTeP=-6Gyqq5TN9OvYEydi7Fv1oGyYj650LGTnW44oAzYCg@mail.gmail.com
2018-10-02 11:54:12 -04:00
Amit Kapila 0fd6a8a7d0 Test passing expanded-value representations to workers.
Currently, we don't have an explicit test to pass expanded-value
representations to workers, so we don't know whether it works on all kind
of platforms.  We suspect that the current code won't work on
alignment-sensitive hardware.  This commit will test that aspect and can
lead to failure on some of the buildfarm machines which we will fix in the
later commit.

Author: Tom Lane and Amit Kapila
Discussion: https://postgr.es/m/11629.1536550032@sss.pgh.pa.us
2018-10-02 11:01:33 +05:30
Michael Paquier e3a25ab9ea Refactor relation opening for VACUUM and ANALYZE
VACUUM and ANALYZE share similar logic when it comes to opening a
relation to work on: how the relation is opened, in which order locks
are tried, and how logs should be generated when something does not
work as expected.

This commit refactors things so that both use the same code path to
handle the way a relation is opened, making the integration of new
options easier.

Author: Michael Paquier
Reviewed-by: Nathan Bossart
Discussion: https://postgr.es/m/20180927075152.GT1659@paquier.xyz
2018-10-02 08:53:38 +09:00
Peter Eisentraut cf3dfea45b Change PROCEDURE to FUNCTION in CREATE EVENT TRIGGER syntax
This was claimed to have been done in
0a63f996e0, but that actually only
changed the documentation and not the grammar.  (That commit did fully
change it for CREATE TRIGGER.)
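
A sketch of the now-accepted spelling (names hypothetical):

    CREATE EVENT TRIGGER log_ddl ON ddl_command_start
        EXECUTE FUNCTION log_ddl_info();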
2018-10-01 23:02:55 +02:00
Tom Lane b04aeb0a05 Add assertions that we hold some relevant lock during relation open.
Opening a relation with no lock at all is unsafe; there's no guarantee
that we'll see a consistent state of the relevant catalog entries.
While use of MVCC scans to read the catalogs partially addresses that
complaint, it's still possible to switch to a new catalog snapshot
partway through loading the relcache entry.  Moreover, whether or not
you trust the reasoning behind sometimes using less than
AccessExclusiveLock for ALTER TABLE, that reasoning is certainly not
valid if concurrent users of the table don't hold a lock corresponding
to the operation they want to perform.

Hence, add some assertion-build-only checks that require any caller
of relation_open(x, NoLock) to hold at least AccessShareLock.  This
isn't a full solution, since we can't verify that the lock level is
semantically appropriate for the action --- but it's definitely of
some use, because it's already caught two bugs.

We can also assert that callers of addRangeTableEntryForRelation()
hold at least the lock level specified for the new RTE.

Amit Langote and Tom Lane

Discussion: https://postgr.es/m/16565.1538327894@sss.pgh.pa.us
2018-10-01 12:43:21 -04:00
Tom Lane e27453bd83 Fix ALTER COLUMN TYPE to not open a relation without any lock.
If the column being modified is referenced by a foreign key constraint
of another table, ALTER TABLE would open the other table (to re-parse
the constraint's definition) without having first obtained a lock on it.
This was evidently intentional, but that doesn't mean it's really safe.
It's especially not safe in 9.3, which pre-dates use of MVCC scans for
catalog reads, but even in current releases it doesn't seem like a good
idea.

We know we'll need AccessExclusiveLock shortly to drop the obsoleted
constraint, so just get that a little sooner to close the hole.

Per testing with a patch that complains if we open a relation without
holding any lock on it.  I don't plan to back-patch that patch, but we
should close the holes it identifies in all supported branches.

Discussion: https://postgr.es/m/2038.1538335244@sss.pgh.pa.us
2018-10-01 11:39:13 -04:00
Tom Lane fdba460a26 Create an RTE field to record the query's lock mode for each relation.
Add RangeTblEntry.rellockmode, which records the appropriate lock mode for
each RTE_RELATION rangetable entry (either AccessShareLock, RowShareLock,
or RowExclusiveLock depending on the RTE's role in the query).

This patch creates the field and makes all creators of RTE nodes fill it
in reasonably, but for the moment nothing much is done with it.  The plan
is to replace assorted post-parser logic that re-determines the right
lockmode to use with simple uses of rte->rellockmode.  For now, just add
Asserts in each of those places that the rellockmode matches what they are
computing today.  (In some cases the match isn't perfect, so the Asserts
are weaker than you might expect; but this seems OK, as per discussion.)

This passes check-world for me, but it seems worth pushing in this state
to see if the buildfarm finds any problems in cases I failed to test.

catversion bump due to change of stored rules.

Amit Langote, reviewed by David Rowley and Jesper Pedersen,
and whacked around a bit more by me

Discussion: https://postgr.es/m/468c85d9-540e-66a2-1dde-fec2b741e688@lab.ntt.co.jp
2018-09-30 13:55:51 -04:00
Stephen Frost 8bddc86400 Add application_name to connection authorized msg
The connection authorized message has quite a bit of useful information
in it, but didn't include the application_name (when provided), so let's
add that as it can be very useful.

Note that at the point where we're emitting the connection authorized
message, we haven't processed GUCs, so it's not possible to get this by
using log_line_prefix (which pulls from the GUC).  There's also
something to be said for having this included in the connection
authorized message and then not needing to repeat it for every line, as
having it in log_line_prefix would do.

The GUC cleans the application name to pure ASCII, so do that here too,
but pull out the logic for cleaning up a string into its own function
in src/common and re-use it from those places as well as from
check_cluster_name, which was doing the same thing.

Author: Don Seiler <don@seiler.us>
Discussion: https://postgr.es/m/CAHJZqBB_Pxv8HRfoh%2BAB4KxSQQuPVvtYCzMg7woNR3r7dfmopw%40mail.gmail.com
2018-09-28 19:04:50 -04:00
Tom Lane 2b04dfc472 Improve error reporting for unsupported effective_io_concurrency setting.
Give a specific error complaining about lack of posix_fadvise() when
someone tries to set effective_io_concurrency > 0 on platforms
without that.
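
For example (a sketch; the exact wording of the message may differ):

    SET effective_io_concurrency = 2;
    -- on a platform lacking posix_fadvise(), this now fails with an error
    -- that specifically mentions the missing posix_fadvise() support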

This probably isn't worth extensive back-patching, but I (tgl) felt
cramming it into v11 was reasonable.

James Robinson

Discussion: https://postgr.es/m/153771876450.14994.560017943128223619@wrigleys.postgresql.org
Discussion: https://postgr.es/m/A3942987-5BC7-4F05-B54D-2A0EC2914B33@jlr-photo.com
2018-09-28 16:12:13 -04:00
Tom Lane 61f14cc8c8 Tweak MSVC build system to match changes in 7143b3e82.
Looks like we need to pull in $libpgcommon in a couple more
places than before.

Per buildfarm.
2018-09-28 15:49:05 -04:00
Tom Lane 97c6852ff7 Tweak MSVC build system to match changes in 7143b3e82.
Also try to make the comment suggesting that this might be needed
more intelligible.

Per buildfarm.
2018-09-28 15:17:07 -04:00
Tom Lane 7143b3e821 Build src/common files as a library with -fPIC.
Build a third version of libpgcommon.a, with -fPIC and -DFRONTEND,
as commit ea53100d5 did for src/port.  Use that in libpq to avoid
symlinking+rebuilding source files retail.

Also adjust ecpg to use the new src/port and src/common libraries.

Arrange to install these libraries, too, to simplify out-of-tree
builds of shared libraries that need any of these modules.

Discussion: https://postgr.es/m/13022.1538003440@sss.pgh.pa.us
Discussion: https://postgr.es/m/E1g5Y8r-0006vs-QA@gemulon.postgresql.org
2018-09-28 14:28:19 -04:00
Tom Lane f7ab802855 Remove pqsignal() from libpq's official exports list.
Client applications should get this function, if they need it, from
libpgport.

The fact that it's exported from libpq is a hack left over from before
we set up libpgport.  It's never been documented, and there's no good
reason for non-PG code to be calling it anyway, so hopefully this won't
cause any problems.  Moreover, with the previous setup it was not really
clear whether our clients that use the function were getting it from
libpgport or libpq, so this might actually prevent problems.

The reason for changing it now is that in the wake of commit ea53100d5,
some linkers won't export the symbol, apparently because it's coming from
a .a library instead of a .o file.  We could get around that by continuing
to symlink pqsignal.c into libpq as before; but unless somebody complains
very hard, I don't want to adopt such a kluge.

Discussion: https://postgr.es/m/13022.1538003440@sss.pgh.pa.us
Discussion: https://postgr.es/m/E1g5Y8r-0006vs-QA@gemulon.postgresql.org
2018-09-28 12:38:10 -04:00
Amit Kapila a86bf6057e Fix assertion failure when updating full_page_writes for checkpointer.
When the checkpointer receives a SIGHUP signal to update its configuration,
it may need to update the shared memory for full_page_writes and to write
a WAL record for it.  However, it is quite possible that the XLOG machinery
has not been initialized by that time, which leads to an assertion failure
while doing that.  The fix is to allow the initialization of the XLOG
machinery outside the critical section.

This bug was introduced by commit 2c03216d83, which added the XLOG
machinery initialization to the RecoveryInProgress() code path.

Reported-by: Dilip Kumar
Author: Dilip Kumar
Reviewed-by: Michael Paquier and Amit Kapila
Backpatch-through: 9.5
Discussion: https://postgr.es/m/CAFiTN-u4BA8KXcQUWDPNgaKAjDXC=C2whnzBM8TAcv=stckYUw@mail.gmail.com
2018-09-28 16:40:04 +05:30
Andres Freund 92a0342a90 Correct overflow handling in pgbench.
This patch attempts, although it's quite possible there are a few
holes, to properly detect and report signed integer overflows in
pgbench.
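
A hedged, generic illustration of the kind of check this requires (the
helper is made up, not the patch's code): signed 64-bit arithmetic has to
be tested before it is performed, since signed overflow is undefined
behavior in C.

    #include <stdbool.h>
    #include <stdint.h>

    /* Return true (leaving *result untouched) if a + b would overflow. */
    static bool
    add_would_overflow(int64_t a, int64_t b, int64_t *result)
    {
        if ((b > 0 && a > INT64_MAX - b) ||
            (b < 0 && a < INT64_MIN - b))
            return true;
        *result = a + b;
        return false;
    }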

Author: Fabien Coelho
Reviewed-By: Andres Freund
Discussion: https://postgr.es/m/20171212052943.k2hlckfkeft3eiio@alap3.anarazel.de
2018-09-27 21:50:57 -07:00
Michael Paquier 78ea8b5daa Fix WAL recycling on standbys depending on archive_mode
A restart point or a checkpoint recycling WAL segments treats segments
marked with neither ".done" (archiving is done) nor ".ready" (segment is
ready to be archived) in archive_status the same way whether archive_mode
is "on" or "always".  While this is fine for a primary, a standby running
a restart point with archive_mode = on would try to mark such a segment as
ready for archiving, which is something that will never happen except
after the standby is promoted.

Note that this problem applies only to WAL segments coming from the
local pg_wal the first time archive recovery is run.  Segments part of a
self-contained base backup are the most common case where this could
happen; however, even in this case the .done markers would most likely be
part of the backup.  Segments recovered from an archive are
marked as .ready or .done by the startup process, and segments finished
streaming are marked as such by the WAL receiver, so they are handled
already.

Reported-by: Haruka Takatsuka
Author: Michael Paquier
Discussion: https://postgr.es/m/15402-a453c90ed4cf88b2@postgresql.org
Backpatch-through: 9.5, where archive_mode = always has been added.
2018-09-28 11:54:38 +09:00
Tom Lane aaf10f32a3 Fix assorted bugs in pg_get_partition_constraintdef().
It failed if passed a nonexistent relation OID, or one that was a non-heap
relation, because of blindly applying heap_open to a user-supplied OID.
This is not OK behavior for a SQL-exposed function; we have a project
policy that we should return NULL in such cases.  Moreover, since
pg_get_partition_constraintdef ought now to work on indexes, restricting
it to heaps is flat wrong anyway.

The underlying function generate_partition_qual() wasn't on board with
indexes having partition quals either, nor for that matter with rels
having relispartition set but yet null relpartbound.  (One wonders
whether the person who wrote the function comment blocks claiming that
these functions allow a missing relpartbound had ever tested it.)

Fix by testing relispartition before opening the rel, and by using
relation_open not heap_open.  (If any other relkinds ever grow the
ability to have relispartition set, the code will work with them
automatically.)  Also, don't reject null relpartbound in
generate_partition_qual.

Back-patch to v11, and all but the null-relpartbound change to v10.
(It's not really necessary to change generate_partition_qual at all
in v10, but I thought s/heap_open/relation_open/ would be a good
idea anyway just to keep the code in sync with later branches.)

Per report from Justin Pryzby.

Discussion: https://postgr.es/m/20180927200020.GJ776@telsasoft.com
2018-09-27 18:15:17 -04:00
Alexander Korotkov 4ec90f53f1 Minor formatting cleanup for 2a6368343f 2018-09-27 23:29:50 +03:00
Alexander Korotkov 0f64595894 Remove extra usage of BoxPGetDatum() macro
Author: Mark Dilger
Discussion: https://postgr.es/m/B2AEFCD0-836D-4654-9D59-3DF616E0A6F3%40gmail.com
2018-09-27 23:25:22 +03:00
Andres Freund 27e082b0c6 Clean up in the wake of TupleDescGetSlot() removal / 10763358c3.
The previous commit wasn't careful enough to remove all traces of
TupleDescGetSlot().

Besides fixing the oversight of not removing TupleDescGetSlot()'s
declaration, this also removes FuncCallContext->slot. That was
documented to be for use in combination with TupleDescGetSlot(), a
cursory search over extensions finds no users, and there doesn't seem
to be convincing reasons to keep it around. If we later in the v12
release cycle find users, we can re-consider this part of the commit.

Reported-By: Michael Paquier
Discussion: https://postgr.es/m/20180926000413.GC1659@paquier.xyz
2018-09-27 11:38:11 -07:00
Tom Lane ea53100d56 Build src/port files as a library with -fPIC, and use that in libpq.
libpq and ecpg need shared-library-friendly versions of assorted src/port/
and src/common/ modules.  Up to now, they got those by symlinking the
individual source files and compiling them locally.  That's baroque, and a
pain to maintain, and it results in some amount of duplicated compile work.
It might've made sense when only a couple of files were needed, but the
list has grown and grown and grown :-(

It makes more sense to have the originating directory build a third variant
of libpgport.a/libpgcommon.a containing modules built with $(CFLAGS_SL),
and just link that into the shared library.  Unused files won't get linked,
so the end result should be the same.

This patch makes a down payment on that idea by having src/port/ build
such a library and making libpq use it.  If the buildfarm doesn't expose
fatal problems with the approach, I'll extend it to the other cases.

Discussion: https://postgr.es/m/13022.1538003440@sss.pgh.pa.us
2018-09-27 11:23:43 -04:00
Tom Lane ce4887bd02 Fix another portability issue from commit 758ce9b77.
strerror.c now requires strlcpy() in some cases, and a couple of the
ecpg libraries did not have that at hand.  Pull it in from src/port/
following the usual recipe.  Per buildfarm.
2018-09-26 19:03:40 -04:00
Michael Paquier ba16aade33 Switch flags tracking pending interrupts to sig_atomic_t
These previously used bool, which should be safe on any modern
platform; however, the C standard is clear that it is better to use
sig_atomic_t for variables manipulated in signal handlers.  This commit
also adds PGDLLIMPORT to ClientConnectionLost.
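
A minimal sketch of the pattern in question (flag and handler names are
illustrative, not the actual variables converted here): the flag is
declared volatile sig_atomic_t, the handler only sets it, and the main
loop consumes it.

    #include <signal.h>

    static volatile sig_atomic_t interrupt_pending = 0;

    static void
    handle_sigint(int signum)
    {
        (void) signum;
        interrupt_pending = 1;      /* the only write done in handler context */
    }

    /* main-loop side, outside the handler:
     *   if (interrupt_pending) { interrupt_pending = 0; ... handle it ... }
     */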

Author: Michael Paquier
Reviewed-by: Tom Lane, Chris Travers, Andres Freund
Discussion: https://postgr.es/m/20180925011311.GD1354@paquier.xyz
2018-09-27 07:47:20 +09:00
Tom Lane 751f532b97 Try another way to detect the result type of strerror_r().
The method we've traditionally used, of redeclaring strerror_r() to
see if the compiler complains of inconsistent declarations, turns out
not to work reliably because some compilers only report a warning,
not an error.  Amazingly, this has gone undetected for years, even
though it certainly breaks our detection of whether strerror_r
succeeded.

Let's instead test whether the compiler will take the result of
strerror_r() as a switch() argument.  It's possible this won't
work universally either, but it's the best idea I could come up with
on the spur of the moment.
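
A hedged sketch of such a probe (not necessarily the committed configure
test): switching on the function's result compiles only when strerror_r()
returns int, because a pointer-typed switch expression is a constraint
violation rather than a mere warning.

    #include <string.h>

    int
    main(void)
    {
        char        buf[100];

        /* accepted with the POSIX int-returning variant, rejected with GNU char * */
        switch (strerror_r(1, buf, sizeof(buf)))
        {
            case 0:
                break;
        }
        return 0;
    }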

We should probably back-patch this once the dust settles, but
first let's see what the buildfarm thinks of it.

Discussion: https://postgr.es/m/10877.1537993279@sss.pgh.pa.us
2018-09-26 18:23:13 -04:00
Tom Lane 8b91d25884 Clean up *printf macros to avoid conflict with format archetypes.
We must define the macro "printf" with arguments, else it can mess
up format archetype attributes in builds where PG_PRINTF_ATTRIBUTE
is just "printf".  Fortunately, that's easy to do now that we're
requiring C99; we can use __VA_ARGS__.

On the other hand, it's better not to use __VA_ARGS__ for the rest
of the *printf crew, so that one can take the addresses of those
functions without surprises.
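
A hedged sketch of the distinction (macro bodies simplified from what
port.h would actually need):

    /*
     * Function-like: expands only when followed by "(", so a bare "printf"
     * inside __attribute__((format(printf, ...))) is left untouched.
     */
    #define printf(...)     pg_printf(__VA_ARGS__)

    /*
     * Object-like: &snprintf still expands, so taking the address yields
     * the replacement function rather than silently reverting to the
     * platform's version.
     */
    #define snprintf        pg_snprintf
    #define sprintf         pg_sprintf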

I'd proposed doing this some time ago, but forgot to make it happen;
buildfarm failures subsequent to 96bf88d52 reminded me.

Discussion: https://postgr.es/m/22709.1535135640@sss.pgh.pa.us
Discussion: https://postgr.es/m/20180926190934.ea4xvzhkayuw7gkx@alap3.anarazel.de
2018-09-26 17:35:01 -04:00
Tom Lane a6b88d682c Fix link failures due to snprintf/strerror changes.
snprintf.c requires isnan(), which requires -lm on some platforms.
libpq never bothered with -lm before, but now it needs it.

strerror.c tries to translate a string or two, which requires -lintl.
We'd managed never to need that anywhere in ecpg/pgtypeslib/ before,
but now we do.

Per buildfarm and a report from Peter Eisentraut.

Discussion: https://postgr.es/m/20180926190934.ea4xvzhkayuw7gkx@alap3.anarazel.de
Discussion: https://postgr.es/m/f67b5008-9f01-057f-2bff-558cb53af851@2ndquadrant.com
2018-09-26 16:47:44 -04:00
Peter Eisentraut 0320ddaf3c Recurse to sequences on ownership change for all relkinds
When a table's ownership is changed, we must apply that also to any owned
sequences.  (Otherwise, it would result in a situation that cannot be
restored, because linked sequences must have the same owner as the
table.)  However, this was previously applied only to regular tables and
materialized views, even though it should also apply to at least foreign
tables.  This patch removes the relkind check altogether, because it
doesn't save very much and just introduces the possibility of similar
omissions.

Bug: #15238
Reported-by: Christoph Berg <christoph.berg@credativ.de>
2018-09-26 20:19:15 +02:00
Tom Lane d6c55de1f9 Implement %m in src/port/snprintf.c, and teach elog.c to rely on that.
I started out with the idea that we needed to detect use of %m format specs
in contexts other than elog/ereport calls, because we couldn't rely on that
working in *printf calls.  But a better answer is to fix things so that it
does work.  Now that we're using snprintf.c all the time, we can implement
%m in that and we've fixed the problem.

This requires also adjusting our various printf-wrapping functions so that
they ensure "errno" is preserved when they call snprintf.c.

Remove elog.c's handmade implementation of %m, and let it rely on
snprintf to support the feature.  That should provide some performance
gain, though I've not attempted to measure it.

There are a lot of places where we could now simplify 'printf("%s",
strerror(errno))' into 'printf("%m")', but I'm not in any big hurry
to make that happen.
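
A hedged before/after illustration of that simplification (the wrapper
function is hypothetical):

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>

    static void
    report_open_failure(const char *path)
    {
        /* old spelling, works with any printf */
        fprintf(stderr, "could not open file \"%s\": %s\n",
                path, strerror(errno));

        /* new spelling, relying on %m support in src/port/snprintf.c */
        fprintf(stderr, "could not open file \"%s\": %m\n", path);
    }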

Patch by me, reviewed by Michael Paquier

Discussion: https://postgr.es/m/2975.1526862605@sss.pgh.pa.us
2018-09-26 13:31:56 -04:00
Tom Lane 96bf88d527 Always use our own versions of *printf().
We've spent an awful lot of effort over the years in coping with
platform-specific vagaries of the *printf family of functions.  Let's just
forget all that mess and standardize on always using src/port/snprintf.c.
This gets rid of a lot of configure logic, and it will allow a saner
approach to dealing with %m (though actually changing that is left for
a follow-on patch).

Preliminary performance testing suggests that as it stands, snprintf.c is
faster than the native printf functions for some tasks on some platforms,
and slower for other cases.  A pending patch will improve that, though
cases with floating-point conversions will doubtless remain slower unless
we want to put a *lot* of effort into that.  Still, we've not observed
that *printf is really a performance bottleneck for most workloads, so
I doubt this matters much.

Patch by me, reviewed by Michael Paquier

Discussion: https://postgr.es/m/2975.1526862605@sss.pgh.pa.us
2018-09-26 13:13:57 -04:00
Tom Lane 758ce9b779 Incorporate strerror_r() into src/port/snprintf.c, too.
This provides the features that used to exist in useful_strerror()
for users of strerror_r(), too.  Also, standardize on the GNU convention
that strerror_r returns a char pointer that may not be NULL.

I notice that libpq's win32.c contains a variant version of strerror_r
that probably ought to be folded into strerror.c.  But lacking a
Windows environment, I should leave that to somebody else.

Discussion: https://postgr.es/m/2975.1526862605@sss.pgh.pa.us
2018-09-26 12:35:57 -04:00
Tom Lane 26e9d4d4ef Convert elog.c's useful_strerror() into a globally-used strerror wrapper.
elog.c has long had a private strerror wrapper that handles assorted
possible failures or deficiencies of the platform's strerror.  On Windows,
it also knows how to translate Winsock error codes, which the native
strerror does not.  Move all this code into src/port/strerror.c and
define strerror() as a macro that invokes it, so that both our frontend
and backend code will have all of this behavior.

I believe this constitutes an actual bug fix on Windows, since AFAICS
our frontend code did not report Winsock error codes properly before this.
However, the main point is to lay the groundwork for implementing %m
in src/port/snprintf.c: the behavior we want %m to have is this one,
not the native strerror's.

Note that this throws away the prior use of src/port/strerror.c,
which was to implement strerror() on platforms lacking it.  That's
been dead code for nigh twenty years now, since strerror() was
already required by C89.

We should likewise cause strerror_r to use this behavior, but
I'll tackle that separately.

Patch by me, reviewed by Michael Paquier

Discussion: https://postgr.es/m/2975.1526862605@sss.pgh.pa.us
2018-09-26 11:06:42 -04:00
Peter Eisentraut a49ceda6a0 Update dummy CREATE ASSERTION grammar
While we are probably still far away from fully implementing
assertions, all patch proposals appear to take issue with the existing
dummy grammar CREATE/DROP ASSERTION productions, so update those a
little bit.  Rename the rule, use any_name instead of name, and remove
some unused code.  Also remove the production for DROP ASSERTION,
since that would most likely be handled via the generic DROP support.

extracted from a patch by Joe Wildish
2018-09-26 13:26:24 +02:00
Tomas Vondra a3d2844852 Improve test coverage of geometric types
This commit significantly increases test coverage of geo_ops.c, adding
tests for various issues addressed by 2e2a392de3 (which went undetected
for a long time, at least partially due to not being covered).

This also removes alternative results expecting -0 on some platforms.
Instead, the functions should now return the same results everywhere,
transforming -0 to 0 if needed.

The tests are added to the geometric.sql file, sorted by the left-hand side
of the operators.  There are many cross-datatype operators, so this seems
like the best solution.

Author: Emre Hasegeli
Reviewed-by: Tomas Vondra

Discussion: https://www.postgresql.org/message-id/CAE2gYzxF7-5djV6-cEvqQu-fNsnt%3DEqbOURx7ZDg%2BVv6ZMTWbg%40mail.gmail.com
2018-09-26 10:45:21 +02:00
Tomas Vondra 2e2a392de3 Fix problems in handling the line data type
According to the source history, the internal format of the line data type
has changed, but various functions working with it were not updated
and thus were producing wrong results.

This patch addresses various such issues, in particular:

* Reject invalid specification A=B=0 on receive
* Reject same points on line_construct_pp()
* Fix perpendicular operator when negative values are involved
* Avoid division by zero on perpendicular operator
* Fix intersection and distance operators when neither A nor B are 1
* Return NULL for closest point when objects are parallel
* Check whether closest point of line segments is the intersection point
* Fix closest point of line segments being on the wrong segment

Aside from handling those issues, the patch also aims to make operators
more symmetric and less sensitive to precision loss.  The EPSILON interferes
with even minor changes, but the least we can do is apply it to both
sides of the operators equally.

Author: Emre Hasegeli
Reviewed-by: Tomas Vondra

Discussion: https://www.postgresql.org/message-id/CAE2gYzxF7-5djV6-cEvqQu-fNsnt%3DEqbOURx7ZDg%2BVv6ZMTWbg%40mail.gmail.com
2018-09-26 10:25:24 +02:00
Michael Paquier f535d5f0c1 Add basic regression tests for default monitoring roles
The following default roles gain some coverage:
- pg_read_all_stats
- pg_read_all_settings

Author: Alexandra Ryzhevich
Discussion: https://postgr.es/m/CAOt4E5S5WJmDc9YpS1BfyAMQ5C1NEmiYynD6nUz42qVxphqkpA@mail.gmail.com
2018-09-26 15:26:45 +09:00
Michael Paquier 8d28bf500f Rework activation of commit timestamps during recovery
The activation and deactivation of commit timestamp tracking has not
been handled consistently for a primary or standbys at recovery.  The
facility can be activated at three different moments of recovery:
- The beginning, where a primary would use the GUC value for the
decision-making, and where a standby relies on the contents of the
control file.
- When replaying a XLOG_PARAMETER_CHANGE record at redo.
- The end, where both primary and standby rely on the GUC value.

Using the GUC value for a primary at the beginning of recovery causes
problems with commit timestamp access when doing crash recovery.
Particularly, when replaying transaction commits, it could be possible
that an attempt to read commit timestamps is done for a transaction
which committed at a moment when track_commit_timestamp was disabled.

A test case is added to reproduce the failure.  The test works down to
v11 as it takes advantage of transaction commits within procedures.

Reported-by: Hailong Li
Author: Masahiko Sawasa, Michael Paquier
Reviewed-by: Kyotaro Horiguchi
Discussion: https://postgr.es/m/11224478-a782-203b-1f17-e4797b39bdf0@qunar.com
Backpatch-through: 9.5, where commit timestamps have been introduced.
2018-09-26 10:25:54 +09:00
Andres Freund 10763358c3 Remove obsolete function TupleDescGetSlot().
TupleDescGetSlot() was kept around for backward compatibility for
user-written SRFs. With the TupleTableSlot abstraction work, that code
will need to be version specific anyway, so there's no point in
keeping the function around any longer.

Author: Ashutosh Bapat
Reviewed-By: Andres Freund
Discussion: https://postgr.es/m/20180220224318.gw4oe5jadhpmcdnm@alap3.anarazel.de
2018-09-25 16:28:57 -07:00
Andres Freund 29c94e03c7 Split ExecStoreTuple into ExecStoreHeapTuple and ExecStoreBufferHeapTuple.
Upcoming changes introduce further types of tuple table slots, in
preparation of making table storage pluggable. New storage methods
will have different representation of tuples, therefore the slot
accessor should refer explicitly to heap tuples.

Instead of just renaming the function, split it into one function
that accepts heap tuples not residing in buffers, and one accepting
ones in buffers.  Previously one function was used for both, but that
was a bit awkward already, and splitting will allow us to represent
slot types for tuples in buffers and normal memory separately.

This is split out from the patch introducing abstract slots, as this
largely consists of mechanical changes.

Author: Ashutosh Bapat
Reviewed-By: Andres Freund
Discussion: https://postgr.es/m/20180220224318.gw4oe5jadhpmcdnm@alap3.anarazel.de
2018-09-25 16:27:48 -07:00
Andres Freund bbdfbb9154 Remove function list from prologue of execTuples.c.
That section is never in sync with the actual routines available and
their functionality.

Author: Ashutosh Bapat
Discussion: https://postgr.es/m/20180220224318.gw4oe5jadhpmcdnm@alap3.anarazel.de
2018-09-25 16:27:48 -07:00
Andres Freund a598708ffa Change TupleTableSlot->tts_nvalid to type AttrNumber.
Previously it was an int / 4 bytes. The maximum number of attributes
in a tuple is restricted by the maximum value Var->varattno, which is
an AttrNumber/int16. Hence use the same data type for
TupleTableSlot->tts_nvalid.

Author: Ashutosh Bapat
Discussion: https://postgr.es/m/20180220224318.gw4oe5jadhpmcdnm@alap3.anarazel.de
2018-09-25 15:59:46 -07:00
Alvaro Herrera 5913b9bbf3 Remove obsolete comment
The documented shortcoming was actually fixed in 4c728f3829,
so the comment is no longer true.
2018-09-25 17:55:22 -03:00
Alvaro Herrera 62e533d3f1 Remove fmgr.h inclusion from partition.h
It's not needed anymore.
2018-09-25 17:52:07 -03:00
Andres Freund 33001fd7a7 Collect JIT instrumentation from workers.
Previously, when using parallel query, EXPLAIN (ANALYZE)'s JIT
compilation timings did not include the overhead from doing so on the
workers.  Fix that.

We do so by simply aggregating the cost of doing JIT compilation on
workers and the leader together. Arguably that's not quite accurate,
because the total time spent doing so is spent in parallel - but it's
hard to do much better.  For additional detail, when VERBOSE is
specified, the stats for workers are displayed separately.

Author: Amit Khandekar and Andres Freund
Discussion: https://postgr.es/m/CAJ3gD9eLrz51RK_gTkod+71iDcjpB_N8eC6vU2AW-VicsAERpQ@mail.gmail.com
Backpatch: 11-
2018-09-25 13:12:44 -07:00
Tom Lane 5e22171310 Make some fixes to allow building Postgres on macOS 10.14 ("Mojave").
Apple's latest rearrangements of the system-supplied headers have broken
building of PL/Perl and PL/Tcl.  The only practical way to fix PL/Tcl is to
start using the "-isysroot" compiler flag to point to SDK-supplied headers,
as Apple expects.  We must also start distinguishing where to find Perl's
headers from where to find its shared library; but that seems like good
cleanup anyway.

Extensions that formerly did something like -I$(perl_archlibexp)/CORE
should now do -I$(perl_includedir)/CORE instead.  perl_archlibexp
is still the place to look for libperl.so, though.

If for some reason you don't like the default -isysroot setting, you can
override that by setting PG_SYSROOT in configure's arguments.  I don't
currently think people would need to do so, unless maybe for cross-version
build purposes.

In addition, teach configure where to find tclConfig.sh.  Our traditional
method of searching $auto_path hasn't worked for the last couple of macOS
releases, and it now seems clear that Apple's not going to change that.
The workaround of manually specifying --with-tclconfig was annoying
already, but Mojave's made it a lot more so because the sysroot path now
has to be included as well.  Let's just wire the knowledge into configure
instead.  To avoid breaking builds against non-default Tcl installations
(e.g. MacPorts) wherein the $auto_path method probably still works,
arrange to try the additional case only after all else has failed.

Back-patch to all supported versions, since at least the buildfarm
cares about that.  The changes are set up to not do anything on macOS
releases that are old enough to not have functional sysroot trees.
2018-09-25 13:23:29 -04:00
Tom Lane 5b7e036707 Avoid unnecessary precision loss for pgbench's --rate target.
It's fairly silly to truncate the throttle_delay to integer when the only
math we ever do with it requires converting back to double.  Furthermore,
given that people are starting to complain about restrictions like only
supporting 1K client connections, I don't think we're very far away from
situations where the precision loss matters.  As the code stood, for
example, there's no difference between --rate 100001 and --rate 111111;
both get converted to throttle_delay = 9.  Somebody trying to run 100
threads and have each one dispatch around 1K TPS would find this lack of
precision rather surprising, especially since the required per-thread
delays are around 1ms, well within the timing precision of modern systems.
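
A hedged illustration of the truncation, assuming throttle_delay is the
whole-run inter-transaction interval in microseconds:

    /* truncating to int makes these two rates indistinguishable */
    int     old_delay = (int) (1000000.0 / 100001.0);   /* 9, same as for --rate 111111 */
    double  new_delay = 1000000.0 / 100001.0;           /* ~9.99990, vs ~9.00001 for 111111 */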
2018-09-25 11:09:18 -04:00
Thomas Munro 64171b3206 Constify dsa_size_class_map and use a better type.
Author: Mark G
Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/CAEeOP_Zy_FvVwcAU0UX9nkOhnoR5KN%3D0B6LWX_kv0ZuSc4wbGw%40mail.gmail.com
2018-09-25 14:58:41 +12:00
Michael Paquier 08c9917e24 Ignore publication tables when --no-publications is used
96e1cb4 has added support for --no-publications in pg_dump, pg_dumpall
and pg_restore, but forgot the fact that publication tables also need to
be ignored when this option is used.

Author: Gilles Darold
Reviewed-by: Michael Paquier
Discussion: https://postgr.es/m/3f48e812-b0fa-388e-2043-9a176bdee27e@dalibo.com
Backpatch-through: 10, where publications have been added.
2018-09-25 11:03:56 +09:00
Tom Lane fd582317e1 Sync our Snowball stemmer dictionaries with current upstream.
We haven't touched these since text search functionality landed in core
in 2007 :-(.  While the upstream project isn't a beehive of activity,
they do make additions and bug fixes from time to time.  Update our
copies of these files.

Also update our documentation about how to keep things in sync, since
they're not making distribution tarballs these days.  Fortunately,
their source code turns out to be a breeze to build.

Notable changes:

* The non-UTF8 version of the hungarian stemmer now works in LATIN2
not LATIN1.

* New stemmers have appeared for arabic, indonesian, irish, lithuanian,
nepali, and tamil.  These all work in UTF8, and the indonesian and
irish ones also work in LATIN1.

(There are some new stemmers that I did not incorporate, mainly because
their names don't match the underlying languages, suggesting that they're
not to be considered mainstream.)

Worth noting: the upstream Nepali dictionary was contributed by
Arthur Zakirov.

initdb forced because the contents of snowball_create.sql have
changed.

Still TODO: see about updating the stopword lists.

Arthur Zakirov, minor mods and doc work by me

Discussion: https://postgr.es/m/20180626122025.GA12647@zakirov.localdomain
Discussion: https://postgr.es/m/20180219140849.GA9050@zakirov.localdomain
2018-09-24 17:29:38 -04:00
Andres Freund 52050ad8eb Make EXPLAIN output for JIT compilation more dense.
A discussion about also reporting JIT compilation overhead on workers
brought unhappiness with the verbosity of the current explain format
to light.  Make the text format more dense, and restructure the
structured output to mirror that more closely.

As we're re-jiggering the output format anyway: The denser format
allows us to report all flags for JIT compilation (now also reporting
PGJIT_EXPR and PGJIT_DEFORM), and report the total time in addition to
the individual times.

Per complaint from Tom Lane.

Author: Andres Freund
Discussion: https://postgr.es/m/27812.1537221015@sss.pgh.pa.us
Backpatch: 11-, where JIT compilation was introduced
2018-09-24 13:35:45 -07:00
Andrew Dunstan 7636e5c60f Fast default trigger and expand_tuple fixes
Ensure that triggers get properly filled in tuples for the OLD value.
Also fix the logic of detecting missing null values. The previous logic
failed to detect a missing null column before the first missing column
with a default. Fixing this has simplified the logic a bit.

Regression tests are added to test changes. This should ensure better
coverage of expand_tuple().

Original bug reports, and some code and test scripts from Tomas Vondra

Backpatch to release 11.
2018-09-24 16:11:24 -04:00
Tom Lane 60e612b602 Use ppoll(2), if available, to wait for input in pgbench.
Previously, pgbench always used select(2) for this purpose, but that's
problematic for very high client counts, because select() can't deal
with file descriptor numbers larger than FD_SETSIZE.  It's pretty common
for that to be only 1024 or so, whereas modern OSes can allow many more
open files than that.  Using poll(2) would surmount that problem, but it
creates another one: poll()'s timeout resolution is only 1ms, which is
poor enough to cause problems with --rate specifications approaching or
exceeding 1K TPS.

On platforms that have ppoll(2), which includes Linux and recent
FreeBSD, we can use that to avoid the FD_SETSIZE problem without any
loss of timeout resolution.  Hence, add configure logic to test for
ppoll(), and use it if available.
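
A hedged sketch of the kind of wait this makes possible (not pgbench's
actual abstraction layer): a quarter-millisecond timeout, which poll()'s
whole-millisecond argument cannot express.

    #define _GNU_SOURCE             /* ppoll() is a non-POSIX extension on Linux */
    #include <poll.h>
    #include <time.h>

    static int
    wait_for_socket(int sock)
    {
        struct pollfd   fds[1];
        struct timespec timeout;

        fds[0].fd = sock;
        fds[0].events = POLLIN;

        timeout.tv_sec = 0;
        timeout.tv_nsec = 250 * 1000;           /* 250 microseconds */

        return ppoll(fds, 1, &timeout, NULL);   /* NULL: leave the signal mask alone */
    }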

This patch introduces an abstraction layer into pgbench that could
be extended to support other kernel event-wait APIs such as kevents.
But actually adding such support is a matter for some future patch.

Doug Rady, reviewed by Robert Haas and Fabien Coelho, and whacked around
a good bit more by me

Discussion: https://postgr.es/m/23D017C9-81B7-484D-8490-FD94DEC4DF59@amazon.com
2018-09-24 14:40:58 -04:00
Tom Lane 87d9bbca13 Fix over-allocation of space for array_out()'s result string.
array_out overestimated the space needed for its output, possibly by
a very substantial amount if the array is multi-dimensional, because
of wrong order of operations in the loop that counts the number of
curly-brace pairs needed.  While the output string is normally
short-lived, this could still cause problems in extreme cases.

An additional minor error was that it counted one more delimiter than
is actually needed.

Repair those errors, add an Assert that the space is now correctly
calculated, and make some minor improvements in the comments.

I also failed to resist the temptation to get rid of an integer
modulus operation per array element; a simple comparison is sufficient.

This bug dates clear back to Berkeley days, so back-patch to all
supported versions.

Keiichi Hirobe, minor additional work by me

Discussion: https://postgr.es/m/CAH=EFxE9W0tRvQkixR2XJRRCToUYUEDkJZk6tnADXugPBRdcdg@mail.gmail.com
2018-09-24 11:30:59 -04:00
Joe Conway c62dd80cdf Document aclitem functions and operators
aclitem functions and operators have been heretofore undocumented.
Fix that. While at it, ensure the non-operator aclitem functions have
pg_description strings.

Does not seem worthwhile to back-patch.

Author: Fabien Coelho, with pg_description from John Naylor, and significant
refactoring and editorialization by me.
Reviewed by: Tom Lane
Discussion: https://postgr.es/m/flat/alpine.DEB.2.21.1808010825490.18204%40lancre
2018-09-24 10:14:57 -04:00
Noah Misch d18f6674bd Initialize random() in bootstrap/stand-alone postgres and in initdb.
This removes a difference between the standard IsUnderPostmaster
execution environment and that of --boot and --single.  In a stand-alone
backend, "SELECT random()" always started at the same seed.

On a system capable of using posix shared memory, initdb could still
conclude "selecting dynamic shared memory implementation ... sysv".
Crashed --boot or --single postgres processes orphaned shared memory
objects having names that collided with the not-actually-random names
that initdb probed.  The sysv fallback appeared after ten crashes of
--boot or --single postgres.  Since --boot and --single are rare in
production use, systems used for PostgreSQL development are the
principal candidate to notice this symptom.

Back-patch to 9.3 (all supported versions).  PostgreSQL 9.4 introduced
dynamic shared memory, but 9.3 does share the "SELECT random()" problem.

Reviewed by Tom Lane and Kyotaro HORIGUCHI.

Discussion: https://postgr.es/m/20180915221546.GA3159382@rfd.leadboat.com
2018-09-23 22:56:39 -07:00
Tom Lane 89b280e139 Fix failure in WHERE CURRENT OF after rewinding the referenced cursor.
In a case where we have multiple relation-scan nodes in a cursor plan,
such as a scan of an inheritance tree, it's possible to fetch from a
given scan node, then rewind the cursor and fetch some row from an
earlier scan node.  In such a case, execCurrent.c mistakenly thought
that the later scan node was still active, because ExecReScan hadn't
done anything to make it look not-active.  We'd get some sort of
failure in the case of a SeqScan node, because the node's scan tuple
slot would be pointing at a HeapTuple whose t_self gets reset to
invalid by heapam.c.  But it seems possible that for other relation
scan node types we'd actually return a valid tuple TID to the caller,
resulting in updating or deleting a tuple that shouldn't have been
considered current.  To fix, forcibly clear the ScanTupleSlot in
ExecScanReScan.

Another issue here, which seems only latent at the moment but could
easily become a live bug in future, is that rewinding a cursor does
not necessarily lead to *immediately* applying ExecReScan to every
scan-level node in the plan tree.  Upper-level nodes will think that
they can postpone that call if their child node is already marked
with chgParam flags.  I don't see a way for that to happen today in
a plan tree that's simple enough for execCurrent.c's search_plan_tree
to understand, but that's one heck of a fragile assumption.  So, add
some logic in search_plan_tree to detect chgParam flags being set on
nodes that it descended to/through, and assume that that means we
should consider lower scan nodes to be logically reset even if their
ReScan call hasn't actually happened yet.

Per bug #15395 from Matvey Arye.  This has been broken for a long time,
so back-patch to all supported branches.

Discussion: https://postgr.es/m/153764171023.14986.280404050547008575@wrigleys.postgresql.org
2018-09-23 16:05:45 -04:00
Alexander Korotkov 2f39106a20 Replace CAS loop with single TAS in ProcArrayGroupClearXid()
A single pg_atomic_exchange_u32() is expected to be faster than a loop of
pg_atomic_compare_exchange_u32() calls.  Also, it would be consistent with
the clog group update code.
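
A hedged, generic illustration of the pattern change (not the
ProcArrayGroupClearXid code itself), using the existing atomics API:

    #include "postgres.h"
    #include "port/atomics.h"

    /*
     * Claim the head of an atomic list by installing a sentinel value:
     * one unconditional swap replaces a pg_atomic_compare_exchange_u32()
     * retry loop, and each concurrent caller still receives a distinct
     * previous head.
     */
    static uint32
    claim_list_head(pg_atomic_uint32 *head, uint32 sentinel)
    {
        return pg_atomic_exchange_u32(head, sentinel);
    }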

Discussion: https://postgr.es/m/CAPpHfdtxLsC-bqfxFcHswZ91OxXcZVNDBBVfg9tAWU0jvn1tQA%40mail.gmail.com
Reviewed-by: Amit Kapila
2018-09-22 16:22:30 +03:00
Michael Paquier db361db2fc Make GUC wal_sender_timeout user-settable
Being able to use a value that can be changed on a per-connection basis is
useful with geographically distributed clusters, and makes failure
detection more flexible.  A note is added in the documentation about the
use of "options" in primary_conninfo, which can be hard for newcomers to
grasp because of the need for two single quotes when listing a set of
parameters.

Author: Tsunakawa Takayuki
Reviewed-by: Masahiko Sawada, Michael Paquier
Discussion: https://postgr.es/m/0A3221C70F24FB45833433255569204D1FAAD3AE@G01JPEXMBYT05
2018-09-22 15:23:59 +09:00
Tom Lane 4f3b38fe2b Get rid of explicit argument-count markings in tab-complete.c.
This replaces the "TailMatchesN" macros with just "TailMatches",
and likewise "HeadMatchesN" becomes "HeadMatches" and "MatchesN"
becomes "Matches".  The various COMPLETE_WITH_LISTn macros are
reduced to COMPLETE_WITH, and the single-item COMPLETE_WITH_CONST
also gets folded into that.  This eliminates a lot of minor
annoyance in writing tab-completion rules.  Usefully, the compiled
code also gets a bit smaller (10% or so, on my machine).

The implementation depends on variadic macros, so we couldn't have
done this before we required C99.
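
A hedged, generic sketch of the C99 mechanism being relied on (the macros
and the complete_with_words() helper here are illustrative, not the actual
tab-complete.c code): a variadic macro can forward its own argument count,
so call sites no longer encode the arity in the macro name.

    /* count up to five arguments; extend the number lists to allow more */
    #define VA_NARGS(...)   VA_NARGS_(__VA_ARGS__, 5, 4, 3, 2, 1)
    #define VA_NARGS_(_1, _2, _3, _4, _5, N, ...)   N

    /* one macro regardless of how many completions are supplied */
    #define COMPLETE_WITH(...) \
        complete_with_words(VA_NARGS(__VA_ARGS__), __VA_ARGS__)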

Andres Freund and Thomas Munro; some cosmetic cleanup by me.

Discussion: https://postgr.es/m/d8jo9djvm7h.fsf@dalvik.ping.uio.no
2018-09-21 20:50:41 -04:00
Tom Lane e8fe426baa Fix bogus tab-completion rule for CREATE PUBLICATION.
You can't use "FOR TABLE" as a single Matches argument, because readline
will consider that input to be two words not one.  It's necessary to make
the pattern contain two arguments.

The case accidentally worked anyway because the words_after_create
test fired ... but only for the first such table name.

Noted by Edmund Horner, though this isn't exactly his proposed fix.
Backpatch to v10 where the faulty code came in.

Discussion: https://postgr.es/m/CAMyN-kDe=gBmHgxWwUUaXuwK+p+7g1vChR7foPHRDLE592nJPQ@mail.gmail.com
2018-09-21 15:58:37 -04:00
Tom Lane 121213d9d8 Improve tab completion for ANALYZE, EXPLAIN, and VACUUM.
Previously, we made no attempt to provide tab completion in these
statements' optional parenthesized options lists.  This patch teaches
psql to do so.

To prevent the option completions from being offered after we've already
seen a complete parenthesized option list, it's necessary to improve
word_matches() so that it allows a wildcard '*' in the middle of an
alternative, not only at the end as formerly.  That requires only a
little more code than before, and it allows us to test for "incomplete
parenthesized options" with a test like

    else if (HeadMatches2("EXPLAIN", "(*") &&
             !HeadMatches2("EXPLAIN", "(*)"))

In addition, add some logic to offer column names in the context of
"ANALYZE tablename ( ...", and likewise for VACUUM.  This isn't really
complete; it won't offer column names again after a comma.  But it's
better than before, and it doesn't take much code.

Justin Pryzby, reviewed at various times by Álvaro Herrera, Arthur
Zakirov, and Edmund Horner; some additional fixups by me

Discussion: https://postgr.es/m/20180529000623.GA21896@telsasoft.com
2018-09-21 15:22:26 -04:00
Tom Lane e3b7a6d165 Rationalize Query_for_list_of_[relations] query names in tab-complete.c.
The previous convention was to use names based on the set of relkinds being
selected for, which was not at all helpful for maintenance, especially
since people had been quite inconsistent about whether to change the names
when they changed the relkinds being selected for.  Instead, use names
based on the functionality we need the relation to have, following the
model established by Query_for_list_of_updatables.

While at it, sort the list of Query constants a bit better; it had the
distinct air of code-assembled-by-dartboard before.

Discussion: https://postgr.es/m/14830.1537481254@sss.pgh.pa.us
2018-09-21 12:41:00 -04:00
Thomas Munro f025bd2ddd Use size_t consistently in dsa.{ch}.
Takeshi Ideriha complained that there is a mixture of Size and size_t
in dsa.c and corresponding header.  Let's use size_t.  Back-patch to 10
where dsa.c landed, to make future back-patching easy.

Discussion: https://postgr.es/m/4E72940DA2BF16479384A86D54D0988A6F19ABD9%40G01JPEXMBKW04
2018-09-22 00:40:13 +12:00
Michael Paquier 925673f27b Remove special handling for open() in initdb for Windows
40cfe86 enforces the translation mode to text for all frontends, so this
special handling in initdb is not needed anymore.
2018-09-21 06:41:44 +09:00
Tom Lane c9a8a401f1 Fix psql's tab completion for TABLE.
This should offer the same relation types that SELECT ... FROM would.
You can't select from an index for instance, so offering it here is
unhelpful.  Noted while testing ilmari's recent patch.
2018-09-20 17:21:14 -04:00
Tom Lane a7c4dad1a7 Fix psql's tab completion for ALTER DATABASE ... SET TABLESPACE.
We have the infrastructure to offer a list of tablespace names, but
it wasn't being used here; instead you got "FROM", "CURRENT", and "TO"
which aren't actually legal in this syntax.

Dagfinn Ilmari Mannsåker, reviewed by Arthur Zakirov

Discussion: https://postgr.es/m/d8jo9djvm7h.fsf@dalvik.ping.uio.no
2018-09-20 17:16:06 -04:00
Tom Lane 1dba1b61c2 Add a "return" statement to pacify perlcritic.
Per buildfarm member crake.
2018-09-20 16:10:23 -04:00
Tom Lane b09a64d602 Add missing pg_description strings for pg_type entries.
I noticed that all non-composite, non-array entries in pg_type.dat
had descr strings, except for "json" and the pseudo-types.  The
lack for json seems certainly an oversight, and there's surely
little reason to not have entries for the pseudo-types either.
So add some.

"make reformat-dat-files" turned up some formatting issues in
pg_amop.dat, too, so fix those in passing.

No catversion bump since the backend doesn't care too much what is
in pg_description.
2018-09-20 16:06:18 -04:00
Tom Lane 3dc820c43e Teach genbki.pl to auto-generate pg_type entries for array types.
This eliminates some more tedium in adding new catalog entries,
specifically the need to set up an array type when adding a new
built-in data type.  Now it's sufficient to assign an OID for the
array type and write it in an "array_type_oid" metadata field.
You don't have to fill the base type's typarray link explicitly, either.

No catversion bump since the contents of pg_type aren't changed.
(Well, their order might be different, but that doesn't matter.)

John Naylor, reviewed and whacked around a bit by
Dagfinn Ilmari Mannsåker, and some more by me.

Discussion: https://postgr.es/m/CAJVSVGVTb6m9pJF49b3SuA8J+T-THO9c0hxOmoyv-yGKh-FbNg@mail.gmail.com
2018-09-20 15:14:46 -04:00
Alexander Korotkov 09e99ce86e Fix handling of format string text characters in to_timestamp()/to_date()
cf984672 introduced an improvement in the handling of spaces and separators
in the to_timestamp()/to_date() functions.  In particular, we now skip spaces
both before and after fields.  That may cause a format-string text character
to consume part of a field in situations where that didn't happen before
cf984672.  This commit makes a format-string text character consume input
characters only when the number of input characters skipped since the
previous field (or the beginning of the string) is not greater than the
number of corresponding format-string characters (that is, we didn't skip
any extra characters in the input string).
2018-09-20 15:48:04 +03:00
Thomas Munro 38763d6778 Fix segment_bins corruption in dsa.c.
If a segment has been freed by dsa.c because it is entirely empty, other
backends must make sure to unmap it before following links to new
segments that might happen to have the same index number, or they could
finish up looking at a defunct segment and then corrupt the segment_bins
lists.  The correct protocol requires checking freed_segment_counter
after acquiring the area lock and before resolving any index number to a
segment.  Add the missing checks and an assertion.

Back-patch to 10, where dsa.c first arrived.

Author: Thomas Munro
Reported-by: Tomas Vondra
Discussion: https://postgr.es/m/CAEepm%3D0thg%2Bja5zGVa7jBy-uqyHrTqTm8HGhEOtMmigGrAqTbw%40mail.gmail.com
2018-09-20 15:52:39 +12:00
Thomas Munro 6c3c9d4189 Defer restoration of libraries in parallel workers.
Several users of extensions complained of crashes in parallel workers
that turned out to be due to syscache access from their _PG_init()
functions.  Reorder the initialization of parallel workers so that
libraries are restored after the caches are initialized, and inside a
transaction.

This was reported in bug #15350 and elsewhere.  We don't consider it
to be a bug: extensions shouldn't do that, because then they can't be
used in shared_preload_libraries.  However, it's a fairly obscure
hazard and these extensions worked in practice before parallel query
came along.  So let's make it work.  Later commits might add a warning
message and eventually an error.

Back-patch to 9.6, where parallel query landed.

Author: Thomas Munro
Reviewed-by: Amit Kapila
Reported-by: Kieran McCusker, Jimmy
Discussion: https://postgr.es/m/153512195228.1489.8545997741965926448%40wrigleys.postgresql.org
2018-09-20 14:21:18 +12:00
Michael Paquier 40cfe86068 Enforce translation mode for Windows frontends to text with open/fopen
Allowing frontends to use concurrent-safe open() and fopen() via 0ba06e0
has the side effect of switching the default translation mode from text
to binary, so the switch can cause breakage for frontend tools when the
caller of those new versions specifies neither binary nor text.  This
commit makes sure to maintain strict compatibility with past versions,
so that no frontend should see a difference when upgrading.

Author: Laurenz Albe
Reviewed-by: Michael Paquier, Tom Lane
Discussion: https://postgr.es/m/20180917140202.GF31460@paquier.xyz
2018-09-20 08:54:37 +09:00
Tom Lane 0d38e4ebb7 Fix minor error message style guide violation.
No periods at the ends of primary error messages, please.

Daniel Gustafsson

Discussion: https://postgr.es/m/43E004C0-18C6-42B4-A313-003B43EB0571@yesql.se
2018-09-19 17:06:40 -04:00
Tom Lane 8f0de712c3 Don't ignore locktable-full failures in StandbyAcquireAccessExclusiveLock.
Commit 37c54863c removed the code in StandbyAcquireAccessExclusiveLock
that checked the return value of LockAcquireExtended.  That created a
bug, because it's still passing reportMemoryError = false to
LockAcquireExtended, meaning that LOCKACQUIRE_NOT_AVAIL will be returned
if we're out of shared memory for the lock table.

In such a situation, the startup process would believe it had acquired an
exclusive lock even though it hadn't, with potentially dire consequences.

To fix, just drop the use of reportMemoryError = false, which allows us
to simplify the call into a plain LockAcquire().  It's unclear that the
locktable-full situation arises often enough that it's worth having a
better recovery method than crash-and-restart.  (I strongly suspect that
the only reason the code path existed at all was that it was relatively
simple to do in the pre-37c54863c implementation.  But now it's not.)

LockAcquireExtended's reportMemoryError parameter is now dead code and
could be removed.  I refrained from doing so, however, because there
was some interest in resurrecting the behavior if we do get reports of
locktable-full failures in the field.  Also, it seems unwise to remove
the parameter concurrently with shipping commit f868a8143, which added a
parameter; if there are any third-party callers of LockAcquireExtended,
we want them to get a wrong-number-of-parameters compile error rather
than a possibly-silent misinterpretation of its last parameter.

Back-patch to 9.6 where the bug was introduced.

Discussion: https://postgr.es/m/6202.1536359835@sss.pgh.pa.us
2018-09-19 12:43:51 -04:00
Alexander Korotkov 2a6368343f Add support for nearest-neighbor (KNN) searches to SP-GiST
Previously, KNN searches were supported only by GiST, but SP-GiST is also
capable of supporting them.  This commit implements that support.  The
SP-GiST scan stack is replaced with a queue, which serves as a stack if no
ordering is specified.  KNN support is provided for three SP-GiST opclasses:
quad_point_ops, kd_point_ops and poly_ops (catversion is bumped).  Some
common parts between the GiST and SP-GiST KNN implementations are extracted
into separate functions.

Discussion: https://postgr.es/m/570825e8-47d0-4732-2bf6-88d67d2d51c8%40postgrespro.ru
Author: Nikita Glukhov, Alexander Korotkov based on GSoC work by Vlad Sterzhanov
Review: Andrey Borodin, Alexander Korotkov
2018-09-19 01:54:10 +03:00
Tom Lane d0cfc3d6a4 Add a debugging option to stress-test outfuncs.c and readfuncs.c.
In the normal course of operation, query trees will be serialized only if
they are stored as views or rules; and plan trees will be serialized only
if they get passed to parallel-query workers.  This leaves an awful lot of
opportunity for bugs/oversights to not get detected, as indeed we've just
been reminded of the hard way.

To improve matters, this patch adds a new compile option
WRITE_READ_PARSE_PLAN_TREES, which is modeled on the longstanding option
COPY_PARSE_PLAN_TREES; but instead of passing all parse and plan trees
through copyObject, it passes them through nodeToString + stringToNode.
Enabling this option in a buildfarm animal or two will catch problems
at least for cases that are exercised by the regression tests.
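
A hedged, simplified sketch of the round trip this option exercises (the
helper name is made up, and the real code also has to cope with the
location-field issue described just below):

    #ifdef WRITE_READ_PARSE_PLAN_TREES
    /* warn if a parse or plan tree does not survive the text round trip */
    static void
    check_outfuncs_readfuncs(Node *tree)
    {
        char       *str = nodeToString(tree);
        Node       *roundtripped = stringToNode(str);

        if (!equal(tree, roundtripped))
            elog(WARNING, "outfuncs/readfuncs round trip changed the tree");
        pfree(str);
    }
    #endif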

A small problem with this idea is that readfuncs.c historically has
discarded location fields, on the reasonable grounds that parse
locations in a retrieved view are not relevant to the current query.
But doing that in WRITE_READ_PARSE_PLAN_TREES breaks pg_stat_statements,
and it could cause problems for future improvements that might try to
report error locations at runtime.  To fix that, provide a variant
behavior in readfuncs.c that makes it restore location fields when
told to.

In passing, const-ify the string arguments of stringToNode and its
subsidiary functions, just because it annoyed me that they weren't
const already.

Discussion: https://postgr.es/m/17114.1537138992@sss.pgh.pa.us
2018-09-18 17:11:54 -04:00
Tom Lane db1071d4ee Fix some minor issues exposed by outfuncs/readfuncs testing.
A test patch to pass parse and plan trees through outfuncs + readfuncs
exposed several issues that need to be fixed to get clean matches:

Query.withCheckOptions failed to get copied; it's intentionally ignored
by outfuncs/readfuncs on the grounds that it'd always be NIL anyway in
stored rules.  This seems less than future-proof, and it's not even
saving very much, so just undo the decision and treat the field like
all others.

Several places that convert a view RTE into a subquery RTE, or similar
manipulations, failed to clear out fields that were specific to the
original RTE type and should be zero in a subquery RTE.  Since readfuncs.c
will leave such fields as zero, equalfuncs.c thinks the nodes are different
leading to a reported mismatch.  It seems like a good idea to clear out the
no-longer-needed fields, even though in principle nothing should look at
them; the node ought to be indistinguishable from how it would look if
we'd built a new node instead of scribbling on the old one.

BuildOnConflictExcludedTargetlist randomly set the resname of some
TargetEntries to "" not NULL.  outfuncs/readfuncs don't distinguish those
cases, and so the string will read back in as NULL ... but equalfuncs.c
does distinguish.  Perhaps we ought to try to make things more consistent
in this area --- but it's just useless extra code space for
BuildOnConflictExcludedTargetlist to not use NULL here, so I fixed it for
now by making it do that.

catversion bumped because the change in handling of Query.withCheckOptions
affects stored rules.

Discussion: https://postgr.es/m/17114.1537138992@sss.pgh.pa.us
2018-09-18 15:08:28 -04:00
Tom Lane 09991e5a47 Fix some probably-minor oversights in readfuncs.c.
The system expects TABLEFUNC RTEs to have coltypes, coltypmods, and
colcollations lists, but outfuncs doesn't dump them and readfuncs doesn't
restore them.  This doesn't cause obvious failures, because the only things
that look at those fields are expandRTE() and get_rte_attribute_type(),
which are mostly used during parse analysis, before anything would've
passed the parsetree through outfuncs/readfuncs.  But expandRTE() is used
in build_physical_tlist(), which means that that function will return a
wrong answer for a TABLEFUNC RTE that came from a view.  Very accidentally,
this doesn't cause serious problems, because what it will return is NIL
which callers will interpret as "couldn't build a physical tlist because
of dropped columns".  So you still get a plan that works, though it's
marginally less efficient than it could be.  There are also some other
expandRTE() calls associated with transformation of whole-row Vars in
the planner.  I have been unable to exhibit misbehavior from that, and
it may be unreachable in any case that anyone would care about ... but
I'm not entirely convinced, so this seems like something we should back-
patch a fix for.  Fortunately, we can fix it without forcing a change
of stored rules and a catversion bump, because we can just copy these
lists from the subsidiary TableFunc object.

readfuncs.c was also missing support for NamedTuplestoreScan plan nodes.
This accidentally fails to break parallel query because a query using
a named tuplestore would never be considered parallel-safe anyway.
However, project policy since parallel query came in is that all plan
node types should have outfuncs/readfuncs support, so this is clearly
an oversight that should be repaired.

Noted while fooling around with a patch to test outfuncs/readfuncs more
thoroughly.  That exposed some other issues too, but these are the only
ones that seem worth back-patching.

Back-patch to v10 where both of these features came in.

Discussion: https://postgr.es/m/17114.1537138992@sss.pgh.pa.us
2018-09-18 13:02:27 -04:00
Thomas Munro 422952ee78 Allow DSM allocation to be interrupted.
Chris Travers reported that the startup process can repeatedly try to
cancel a backend that is in a posix_fallocate()/EINTR loop and cause it
to loop forever.  Teach the retry loop to give up if an interrupt is
pending.  Don't actually check for interrupts in that loop though,
because a non-local exit would skip some clean-up code in the caller.

Back-patch to 9.4 where DSM was added (and posix_fallocate() was later
back-patched).

Author: Chris Travers
Reviewed-by: Ildar Musin, Murat Kabilov, Oleksii Kliukin
Tested-by: Oleksii Kliukin
Discussion: https://postgr.es/m/CAN-RpxB-oeZve_J3SM_6%3DHXPmvEG%3DHX%2B9V9pi8g2YR7YW0rBBg%40mail.gmail.com
2018-09-18 22:56:36 +12:00
Michael Paquier 1d6fbc38d9 Refactor routines for subscription and publication lookups
Those routines gain a missing_ok argument, allowing a caller to get a
NULL result instead of an error if set to true.  This is part of a
larger refactoring effort for objectaddress.c where trying to check for
non-existing objects does not result in cache lookup failures.

Author: Michael Paquier
Reviewed-by: Aleksander Alekseev, Álvaro Herrera
Discussion: https://postgr.es/m/CAB7nPqSZxrSmdHK-rny7z8mi=EAFXJ5J-0RbzDw6aus=wB5azQ@mail.gmail.com
2018-09-18 12:00:18 +09:00
Tom Lane 07a3af0ff8 Fix parsetree representation of XMLTABLE(XMLNAMESPACES(DEFAULT ...)).
The original coding for XMLTABLE thought it could represent a default
namespace by a T_String Value node with a null string pointer.  That's
not okay, though; in particular outfuncs.c/readfuncs.c are not on board
with such a representation, meaning you'll get a null pointer crash
if you try to store a view or rule containing this construct.

To fix, change the parsetree representation so that we have a NULL
list element, instead of a bogus Value node.

This isn't really a functional limitation since default XML namespaces
aren't yet implemented in the executor; you'd just get "DEFAULT
namespace is not supported" anyway.  But crashes are not nice, so
back-patch to v10 where this syntax was added.  Ordinarily we'd consider
a parsetree representation change to be un-backpatchable; but since
existing releases would crash on the way to storing such constructs,
there can't be any existing views/rules to be incompatible with.

Per report from Andrey Lepikhov.

Discussion: https://postgr.es/m/3690074f-abd2-56a9-144a-aa5545d7a291@postgrespro.ru
2018-09-17 13:16:32 -04:00
Tom Lane 789ba5029a Remove dead code from pop_next_work_item().
The pref_non_data heuristic has been dead code for nearly ten years,
and as far as I can tell was dead code even when it was first committed.
I'm tired of silencing Coverity complaints about it, so get rid of it.
If anyone is ever interested in pursuing the concept, they can get the
code out of our git history.
2018-09-17 12:43:07 -04:00
Tom Lane db37ab2c60 Fix pgbench lexer's "continuation" rule to cope with Windows newlines.
Our general practice in frontend code is to accept input with either
Unix-style newlines (\n) or DOS-style (\r\n).  pgbench was mostly down
with that, but its rule for line continuations (backslash-newline) was
not.  This had been masked on Windows buildfarm machines before commit
0ba06e0bf by use of Windows text mode to read files.  We could have fixed
it by forcing text mode again, but it's better to fix the parsing code
so that Windows-style text files on Unix systems don't cause problems.

Back-patch to v10 where pgbench grew line continuations.

Discussion: https://postgr.es/m/17194.1537191697@sss.pgh.pa.us
2018-09-17 12:11:43 -04:00
Andrew Gierth 60f6756f92 Fix out-of-tree build for transform modules.
Neither plperl nor plpython installed sufficient header files to
permit transform modules to be built out-of-tree using PGXS. Fix that
by installing all plperl and plpython header files (other than those
with special purposes such as generated data tables), and also install
plpython's special .mk file for mangling regression tests.

(This commit does not fix the windows install, which does not
currently install _any_ plperl or plpython headers.)

Also fix the existing transform modules for hstore and ltree so that
their cross-module #include directives work as anticipated by commit
df163230b9 et seq. This allows them to serve as working examples of
how to reference other modules when doing separate out-of-tree builds.

Discussion: https://postgr.es/m/87o9ej8bgl.fsf%40news-spur.riddles.org.uk
2018-09-16 18:46:45 +01:00
Tom Lane 3844adbf3c Add outfuncs.c support for RawStmt nodes.
I noticed while poking at a report from Andrey Lepikhov that the
recent addition of RawStmt nodes at the top of raw parse trees
makes it impossible to print any raw parse trees whatsoever,
because outfuncs.c doesn't know RawStmt and hence fails to descend
into it.

While we generally lack outfuncs.c support for utility statements,
there is reasonably complete support for what you can find in a
raw SELECT statement.  It was not my intention to make that all
dead code ... so let's add support for RawStmt.

Back-patch to v10 where RawStmt appeared.
2018-09-16 13:02:47 -04:00
Tom Lane 8f32bacc00 In v11, disable JIT by default (it's still enabled by default in HEAD).
Per discussion, JIT isn't quite mature enough to ship enabled-by-default.

I failed to resist the temptation to do a bunch of copy-editing on the
related documentation.  Also, clean up some inconsistencies in which
section of config.sgml the JIT GUCs are documented in vs. what guc.c
and postgresql.config.sample had.

Discussion: https://postgr.es/m/20180914222657.mw25esrzbcnu6qlu@alap3.anarazel.de
2018-09-15 17:24:35 -04:00
Tom Lane 1f4a920b73 Fix failure with initplans used conditionally during EvalPlanQual rechecks.
The EvalPlanQual machinery assumes that any initplans (that is,
uncorrelated sub-selects) used during an EPQ recheck would have already
been evaluated during the main query; this is implicit in the fact that
execPlan pointers are not copied into the EPQ estate's es_param_exec_vals.
But it's possible for that assumption to fail, if the initplan is only
reached conditionally.  For example, a sub-select inside a CASE expression
could be reached during a recheck when it had not been previously, if the
CASE test depends on a column that was just updated.

This bug is old, appearing to date back to my rewrite of EvalPlanQual in
commit 9f2ee8f28, but was not detected until Kyle Samson reported a case.

To fix, force all not-yet-evaluated initplans used within the EPQ plan
subtree to be evaluated at the start of the recheck, before entering the
EPQ environment.  This could be inefficient, if such an initplan is
expensive and goes unused again during the recheck --- but that's piling
one layer of improbability atop another.  It doesn't seem worth adding
more complexity to prevent that, at least not in the back branches.

It was convenient to use the new-in-v11 ExecEvalParamExecParams function
to implement this, but I didn't like either its name or the specifics of
its API, so revise that.

Back-patch all the way.  Rather than rewrite the patch to avoid depending
on bms_next_member() in the oldest branches, I chose to back-patch that
function into 9.4 and 9.3.  (This isn't the first time back-patches have
needed that, and it exhausted my patience.)  I also chose to back-patch
some test cases added by commits 71404af2a and 342a1ffa2 into 9.4 and 9.3,
so that the 9.x versions of eval-plan-qual.spec are all the same.

Andrew Gierth diagnosed the problem and contributed the added test cases,
though the actual code changes are by me.

Discussion: https://postgr.es/m/A033A40A-B234-4324-BE37-272279F7B627@tripadvisor.com
2018-09-15 13:42:33 -04:00
Alvaro Herrera 6b78231d91 Move PartitionDispatchData struct definition to execPartition.c
There's no reason to expose the struct definition, so don't.

Author: Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>
Discussion: https://postgr.es/m/d3fa24c1-bc65-7133-81df-6474387ccc4f@lab.ntt.co.jp
2018-09-14 19:06:57 -03:00
Tom Lane 548e50976c Improve parallel scheduling logic in pg_dump/pg_restore.
Previously, the way this worked was that a parallel pg_dump would
re-order the TABLE_DATA items in the dump's TOC into decreasing size
order, and separately re-order (some of) the INDEX items into decreasing
size order.  Then pg_dump would dump the items in that order.  Later,
parallel pg_restore just followed the TOC order.  This method had lots
of deficiencies:

* TOC ordering randomly differed between parallel and non-parallel
dumps, and was hard to predict in the former case, causing problems
for building stable pg_dump test cases.

* Parallel restore only followed a well-chosen order if the dump had
been done in parallel; in particular, this never happened for restore
from custom-format dumps.

* The best order for restore isn't necessarily the same as for dump,
and it's not really static either because of locking considerations.

* TABLE_DATA and INDEX items aren't the only things that might take a lot
of work during restore.  Scheduling was particularly stupid for the BLOBS
item, which might require lots of work during dump as well as restore,
but was left to the end in either case.

This patch removes the logic that changed the TOC order, fixing the
test instability problem.  Instead, we sort the parallelizable items
just before processing them during a parallel dump.  Independently
of that, parallel restore prioritizes the ready-to-execute tasks
based on the size of the underlying table.  In the case of dependent
tasks such as index, constraint, or foreign key creation, the largest
relevant table is used as the metric for estimating the task length.
(This is pretty crude, but it should be enough to avoid the case we
want to avoid, which is ending the run with just a few large tasks
such that we can't make use of all N workers.)

Patch by me, responding to a complaint from Peter Eisentraut,
who also reviewed the patch.

Discussion: https://postgr.es/m/5137fe12-d0a2-4971-61b6-eb4e7e8875f8@2ndquadrant.com
2018-09-14 17:31:51 -04:00
Alvaro Herrera 20bef2c311 Fix ALTER/TYPE on columns referenced by FKs in partitioned tables
When ALTER TABLE ... SET DATA TYPE affects a column referenced by
constraints and indexes, it drop those constraints and indexes and
recreates them afterwards, so that the definitions match the new data
type.  The original code did this by dropping one object at a time
(commit 077db40fa1 of May 2004), which worked fine because the
dependencies between the objects were pretty straightforward, and
ordering the objects in a specific way was enough to make this work.
However, when there are foreign key constraints in partitioned tables,
the dependencies are no longer so straightforward, and we were getting
errors when this was attempted:
  ERROR:  cache lookup failed for constraint 16398

This can be fixed by doing all the drops in one pass instead, using
performMultipleDeletions (introduced by df18c51f29 of Aug 2006).  With
this change we can also remove the code to carefully order the list of
objects to be deleted.

Reported-by: Rajkumar Raghuwanshi <rajkumar.raghuwanshi@enterprisedb.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/CAKcux6nWS_m+s=1Udk_U9B+QY7pA-Ac58qR5BdUfOyrwnWHDew@mail.gmail.com
2018-09-14 13:41:20 -03:00
Andrew Gierth 728202b63c Order active window clauses for greater reuse of Sort nodes.
By sorting the active window list lexicographically by the sort clause
list but putting longer clauses before shorter prefixes, we generate
more chances to elide Sort nodes when building the path.

Author: Daniel Gustafsson (with some editorialization by me)
Reviewed-by: Alexander Kuzmenkov, Masahiko Sawada, Tom Lane
Discussion: https://postgr.es/m/124A7F69-84CD-435B-BA0E-2695BE21E5C2%40yesql.se
2018-09-14 17:35:42 +01:00
Amit Kapila 75f9c4ca5a Don't allow LIMIT/OFFSET clause within sub-selects to be pushed to workers.
Allowing sub-selects containing LIMIT/OFFSET in workers can lead to
inconsistent results at the top-level as there is no guarantee that the
row order will be fully deterministic.  The fix is to prohibit pushing
LIMIT/OFFSET within sub-selects to workers.

Reported-by: Andrew Fletcher
Bug: 15324
Author: Amit Kapila
Reviewed-by: Dilip Kumar
Backpatch-through: 9.6
Discussion: https://postgr.es/m/153417684333.10284.11356259990921828616@wrigleys.postgresql.org
2018-09-14 09:36:30 +05:30
Michael Paquier 0ba06e0bfb Allow concurrent-safe open() and fopen() in frontend code for Windows
PostgreSQL uses a custom wrapper for open() and fopen() which is
concurrent-safe, allowing multiple processes to open and work on the
same file.  This has a couple of advantages:
- pg_test_fsync does not handle O_DSYNC correctly otherwise, leading to
false claims that disks are unsafe.
- TAP tests can run into race conditions when a postmaster and pg_ctl
open postmaster.pid, fixing some random failures in the buildfarm.

pg_upgrade is one frontend tool using workarounds to bypass file locking
issues with the log files it generates, however the interactions with
pg_ctl are proving to be tedious to get rid of, so this is left for
later.

Author: Laurenz Albe
Reviewed-by: Michael Paquier, Kuntal Ghosh
Discussion: https://postgr.es/m/1527846213.2475.31.camel@cybertec.at
Discussion: https://postgr.es/m/16922.1520722108@sss.pgh.pa.us
2018-09-14 10:04:14 +09:00
Michael Paquier 28a8fa984c Improve autovacuum logging for aggressive and anti-wraparound runs
A log message was being generated when log_min_duration is reached for
autovacuum on a given relation to indicate whether it was an aggressive
run, but it failed to mention whether the run was also an anti-wraparound
one.  The log message is improved so that one, both, or neither of these
extra details is added, depending on the kind of run.

Author: Sergei Kornilov
Reviewed-by: Masahiko Sawada, Michael Paquier
Discussion: https://postgr.es/m/11587951532155118@sas1-19a94364928d.qloud-c.yandex.net
2018-09-14 07:35:39 +09:00
Peter Eisentraut f48fa2bc8b Message style improvements
Fix one untranslatable string concatenation in pg_rewind.

Fix one message in pg_verify_checksums to use a style used elsewhere
and avoid plural issues.

Fix one gratuitous abbreviation in psql.
2018-09-13 23:35:43 +02:00
Tom Lane 23bd3cec6e Attempt to identify system timezone by reading /etc/localtime symlink.
On many modern platforms, /etc/localtime is a symlink to a file within the
IANA database.  Reading the symlink lets us find out the name of the system
timezone directly, without going through the brute-force search embodied in
scan_available_timezones().  This shortens the runtime of initdb by some
tens of ms, which is helpful for the buildfarm, and it also allows us to
reliably select the same zone name the system was actually configured for,
rather than possibly choosing one of IANA's many zone aliases.  (For
example, in a system configured for "Asia/Tokyo", the brute-force search
would not choose that name but its alias "Japan", on the grounds of the
latter string being shorter.  More surprisingly, "Navajo" is preferred
to either "America/Denver" or "US/Mountain", as seen in an old complaint
from Josh Berkus.)

If /etc/localtime doesn't exist, or isn't a symlink, or we can't make
sense of its contents, or the contents match a zone we know but that
zone doesn't match the observed behavior of localtime(), fall back to
the brute-force search.

Also, tweak initdb so that it prints the zone name it selected.

In passing, replace the last few references to the "Olson" database in
code comments with "IANA", as that's been our preferred term since
commit b2cbced9e.

Patch by me, per a suggestion from Robert Haas; review by Michael Paquier

Discussion: https://postgr.es/m/7408.1525812528@sss.pgh.pa.us
2018-09-13 12:36:21 -04:00
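
The symlink-reading idea boils down to resolving /etc/localtime and looking
for a ".../zoneinfo/" component in the target.  A minimal sketch follows
(assuming a typical IANA layout; the real code also cross-checks the zone
against observed localtime() behavior and falls back to the brute-force scan):

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int
    main(void)
    {
        char        target[1024];
        ssize_t     len;
        const char *zonedir;

        len = readlink("/etc/localtime", target, sizeof(target) - 1);
        if (len < 0)
            return 1;               /* not a symlink, or unreadable */
        target[len] = '\0';

        zonedir = strstr(target, "/zoneinfo/");
        if (zonedir != NULL)
            printf("system time zone appears to be \"%s\"\n",
                   zonedir + strlen("/zoneinfo/"));
        return 0;
    }
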
Amit Kapila bc153c941d Attach FPI to the first record after full_page_writes is turned on.
XLogInsert fails to attach a required FPI to the first record after
full_page_writes is turned on by the last checkpoint.  This bug got
introduced in 9.5 due to code rearrangement in commits 2c03216d83 and
2076db2aea.  Fix it by ensuring that XLogInsertRecord performs a
recomputation when the given record is generated with FPW as off but
found that the flag has been turned on while actually inserting the
record.

Reported-by: Kyotaro Horiguchi
Author: Kyotaro Horiguchi
Reviewed-by: Amit Kapila
Backpatch-through: 9.5 where this problem was introduced
Discussion: https://postgr.es/m/20180420.151043.74298611.horiguchi.kyotaro@lab.ntt.co.jp
2018-09-13 15:32:50 +05:30
Michael Paquier 514a731ddc Simplify static function in extension.c
An extra argument for the filename defining the extension script
location was present, aimed at being used for error reporting, but has
never been used.  This has been around since extensions were added in
d9572c4.

Author: Yugo Nagata
Reviewed-by: Tatsuo Ishii
Discussion: https://postgr.es/m/20180907180504.1ff19e1675bb44a67e9c7ab1@sraoss.co.jp
2018-09-13 16:56:57 +09:00
Peter Eisentraut e5f1bb92cf Simplify index tuple descriptor initialization
We have two code paths for initializing the tuple descriptor for a new
index: For a normal index, we copy the tuple descriptor from the table
and reset a number of fields that are not applicable to indexes.  For an
expression index, we make a blank tuple descriptor and fill in the
needed fields based on the provided expressions.  As pg_attribute has
grown over time, the number of fields that we need to reset in the first
case is now bigger than the number of fields we actually want to copy,
so it's sensible to do it the other way around: Make a blank descriptor
and copy just the fields we need.  This also allows more code sharing
between the two branches, and it avoids having to touch this code for
almost every unrelated change to the pg_attribute structure.

Reviewed-by: Arthur Zakirov <a.zakirov@postgrespro.ru>
2018-09-13 08:22:03 +02:00
Tom Lane 7046d30246 Minor fixes for psql tab completion.
* Include partitioned tables in what's offered after ANALYZE.

* Include toast_tuple_target in what's offered after ALTER TABLE ... SET|RESET.

* Include HASH in what's offered after PARTITION BY.

This is extracted from a larger patch; these bits seem like
uncontroversial bug fixes for v11 features, so back-patch them into v11.

Justin Pryzby

Discussion: https://postgr.es/m/20180529000623.GA21896@telsasoft.com
2018-09-12 15:25:12 -04:00
Andrew Gierth b7f6bcbffc Repair bug in regexp split performance improvements.
Commit c8ea87e4b introduced a temporary conversion buffer for
substrings extracted during regexp splits. Unfortunately the code that
sized it was failing to ignore the effects of ignored degenerate
regexp matches, so for regexp_split_* calls it could under-size the
buffer in such cases.

Fix, and add some regression test cases (though those will only catch
the bug if run in a multibyte encoding).

Backpatch to 9.3 as the faulty code was.

Thanks to the PostGIS project, Regina Obe and Paul Ramsey for the
report (via IRC) and assistance in analysis. Patch by me.
2018-09-12 19:31:06 +01:00
Peter Eisentraut ba37349cff ecpg: Change --version output to common style
When we removed the ecpg-specific versions, we also removed the
"(PostgreSQL)" from the --version output, which we show in other
programs.

Reported-by: Ioseph Kim <pgsql-kr@postgresql.kr>
2018-09-12 14:33:15 +02:00
Tom Lane 2970afa6cf Add PQresultMemorySize function to report allocated size of a PGresult.
This number can be useful for application memory management, and the
overhead to track it seems pretty trivial.

Lars Kanis, reviewed by Pavel Stehule, some mods by me

Discussion: https://postgr.es/m/fa16a288-9685-14f2-97c8-b8ac84365a4f@greiz-reinsdorf.de
2018-09-11 18:45:12 -04:00
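
To illustrate the new call, here is a minimal sketch of client code that
reports how much memory a query result occupies; the connection settings and
table name are placeholders, and error handling is abbreviated:

    #include <stdio.h>
    #include <libpq-fe.h>

    int
    main(void)
    {
        PGconn     *conn = PQconnectdb("");     /* rely on environment defaults */
        PGresult   *res;

        if (PQstatus(conn) != CONNECTION_OK)
        {
            fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
            return 1;
        }
        res = PQexec(conn, "SELECT * FROM some_table");    /* placeholder query */
        if (PQresultStatus(res) == PGRES_TUPLES_OK)
            printf("result uses %zu bytes\n", PQresultMemorySize(res));
        PQclear(res);
        PQfinish(conn);
        return 0;
    }
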
Michael Paquier e7a2217978 Parse more strictly integer parameters from connection strings in libpq
The following parameters have been parsed in lossy ways when specified
in a connection string processed by libpq:
- connect_timeout
- keepalives
- keepalives_count
- keepalives_idle
- keepalives_interval
- port

Overflowing values or the presence of incorrect characters were not
properly checked, leading to libpq trying to use such values and failing
with unhelpful error messages.  This commit hardens the parsing of those
parameters so that incorrect values are detected easily.

Author: Fabien Coelho
Reviewed-by: Peter Eisentraut, Michael Paquier
Discussion: https://postgr.es/m/alpine.DEB.2.21.1808171206180.20841@lancre
2018-09-12 06:46:01 +09:00
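
The hardened parsing described above amounts to rejecting empty strings,
trailing garbage, and overflow.  A rough sketch of that style of check (not
the actual libpq code) looks like this:

    #include <errno.h>
    #include <limits.h>
    #include <stdbool.h>
    #include <stdlib.h>

    /* Return true only if "value" is a clean base-10 integer fitting in an int. */
    static bool
    parse_int_param(const char *value, int *result)
    {
        char   *end;
        long    val;

        errno = 0;
        val = strtol(value, &end, 10);
        if (end == value || *end != '\0')
            return false;               /* empty string or trailing junk */
        if (errno == ERANGE || val < INT_MIN || val > INT_MAX)
            return false;               /* out of range */
        *result = (int) val;
        return true;
    }
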
Tom Lane fedc97cdfd Remove ruleutils.c's special case for BIT [VARYING] literals.
Up to now, get_const_expr() insisted on prefixing BIT and VARBIT
literals with 'B'.  That's not really necessary, because we always
append explicit-cast syntax to identify the constant's type.
Moreover, it's subtly wrong for VARBIT, because the parser will
interpret B'...' as '...'::"bit"; see make_const() which explicitly
assigns type BITOID for a T_BitString literal.  So what had been
a simple VARBIT literal is reconstructed as ('...'::"bit")::varbit,
which is not the same thing, at least not before constant folding.
This results in odd differences after dump/restore, as complained
of by the patch submitter, and it could result in actual failures in
partitioning or inheritance DDL operations (see commit 542320c2b,
which repaired similar misbehaviors for some other data types).

Fixing it is pretty easy: just remove the special case and let the
default code path handle these types.  We could have kept the special
case for BIT only, but there seems little point in that.

Like the previous patch, I judge that back-patching this into stable
branches wouldn't be a good idea.  However, it seems not quite too
late for v11, so let's fix it there.

Paul Guo, reviewed by Davy Machado and John Naylor, minor adjustments
by me

Discussion: https://postgr.es/m/CABQrizdTra=2JEqA6+Ms1D1k1Kqw+aiBBhC9TreuZRX2JzxLAA@mail.gmail.com
2018-09-11 16:32:25 -04:00
Andrew Gierth 500d49794f Repair double-free in SP-GIST rescan (bug #15378)
spgrescan would first reset traversalCxt, and then traverse a
potentially non-empty stack containing pointers to traversalValues
which had been allocated in those contexts, freeing them a second
time. This bug originates in commit ccd6eb49a where traversalValue was
introduced.

Repair by traversing the stack before the context reset; this isn't
ideal, since it means doing retail pfree in a context that's about to
be reset, but the freeing of a stack entry is also done in other
places in the code during the scan so it's not worth trying to
refactor it further. Regression test added.

Backpatch to 9.6 where the problem was introduced.

Per bug #15378; analysis and patch by me, originally from a report on
IRC by user velix; see also PostGIS ticket #4174; review by Alexander
Korotkov.

Discussion: https://postgr.es/m/153663176628.23136.11901365223750051490@wrigleys.postgresql.org
2018-09-11 18:14:19 +01:00
Tom Lane 4fa3741d1c Use -Bsymbolic for shared libraries on HP-UX and Solaris.
These platforms are also subject to the mis-linking problem addressed
in commit e3d77ea6b.  It's not clear whether we could solve it with
a solution equivalent to GNU ld's version scripts, but -Bsymbolic
appears to fix it, so let's use that.

Like the previous commit, back-patch as far as v10.

Discussion: https://postgr.es/m/153626613985.23143.4743626885618266803@wrigleys.postgresql.org
2018-09-10 22:22:12 -04:00
Tom Lane 14ea365203 Hide a static inline from FRONTEND code.
For some reason pg_waldump is including tuptable.h, and the recent
addition of a static inline function to it is causing problems on
older buildfarm members that fail to optimize such functions away
completely.  I wonder if this situation doesn't mean that some header
refactoring is called for ... but as a band-aid, wrap the static
function in "#ifndef FRONTEND".

Discussion: https://postgr.es/m/20180824154237.mabsv6fsz5q37bma@alap3.anarazel.de
2018-09-10 12:47:02 -04:00
Tom Lane e3d77ea6b4 Prevent mis-linking of src/port and src/common functions on *BSD.
On ELF-based platforms (and maybe others?) it's possible for a shared
library, when dynamically loaded into the backend, to call the backend
versions of src/port and src/common functions rather than the frontend
versions that are actually linked into the shlib.  This is the cause
of bug #15367 from Jeremy Evans, and is likely to lead to more problems
in future; it's accidental that we've failed to notice any bad effects
up to now.

The recommended way to fix this on ELF-based platforms is to use a
linker "version script" that makes the shlib's versions of the functions
local.  (Apparently, -Bsymbolic would fix it as well, but with other
side effects that we don't want.)  Doing so has the additional benefit
that we can make sure the shlib only exposes the symbols that are meant
to be part of its API, and not ones that are just for cross-file
references within the shlib.  So we'd already been using a version
script for libpq on popular platforms, but it's now apparent that it's
necessary for correctness on every ELF-based platform.

Hence, add appropriate logic to the openbsd, freebsd, and netbsd stanzas
of Makefile.shlib; this is just a copy-and-paste from the linux stanza.
There may be additional work to do if commit ed0cdf0e0 reveals that the
problem exists elsewhere, but this is all that is known to be needed
right now.

Back-patch to v10 where SCRAM support came in.  The problem is ancient,
but analysis suggests that there were no really severe consequences
in older branches.  Hence, I won't take the risk of such a large change
in the build process for older branches.

In passing, remove a rather opaque comment about -Bsymbolic; I don't
think it's very on-point about why we don't use that, if indeed that's
what it's talking about at all.

Patch by me; thanks to Andrew Gierth for helping to diagnose the problem,
and for additional testing.

Discussion: https://postgr.es/m/153626613985.23143.4743626885618266803@wrigleys.postgresql.org
2018-09-09 15:17:01 -04:00
Alexander Korotkov cf98467242 Improve behavior of to_timestamp()/to_date() functions
to_timestamp()/to_date() functions were introduced mainly for Oracle
compatibility, and became very popular among PostgreSQL users.  However, some
behavior of these functions is both incompatible with Oracle
and confusing for our users.  This behavior is related to the handling of spaces
and separators in non-FX (fixed format) mode.  This commit reworks that behavior
to make it less confusing, better documented, and more compatible with Oracle.

Nevertheless, the following incompatibilities with Oracle remain.
1) We don't insist that every format string pattern is matched by the
   input string.
2) In FX mode we don't insist that spaces and separators in the format
   string exactly match the input string.
3) When format string patterns are divided by a mix of spaces and separators,
   we don't distinguish them, while Oracle takes into account only the last
   group of spaces/separators.

Discussion: https://postgr.es/m/1873520224.1784572.1465833145330.JavaMail.yahoo%40mail.yahoo.com
Author: Artur Zakirov, Alexander Korotkov, Liudmila Mantrova
Review: Amul Sul, Robert Haas, Tom Lane, Dmitry Dolgov, David G. Johnston
2018-09-09 21:19:51 +03:00
Alexander Korotkov 5f08accdad Fix past pd_upper write in ginRedoRecompress()
ginRedoRecompress() replays actions over compressed segments of a posting list
in-place.  However, this could write past pd_upper, because the intermediate
state while replaying the changes can take more space than both the original
and the final state.  This commit fixes that by giving up on in-place
modification.  Instead, the page tail is copied once modification starts, and
that copy is then used as the source of the original segments.  Backpatch to
9.4 where posting list
compression was introduced.

Reported-by: Sivasubramanian Ramasubramanian
Discussion: https://postgr.es/m/1536091151804.6588%40amazon.com
Author: Alexander Korotkov based on patch from and ideas by Sivasubramanian Ramasubramanian
Review: Sivasubramanian Ramasubramanian
Backpatch-through: 9.4
2018-09-09 21:19:29 +03:00
Tom Lane ff47d4bf1f Work around stdbool problem in dfmgr.c.
Commit 842cb9fa6 refactored things so that dfmgr.c includes <dlfcn.h>,
which before that had only been directly included in platform-specific
stub files.  It turns out that on macOS, <dlfcn.h> includes <stdbool.h>,
and that causes problems on platforms where _Bool is not char-sized ...
which happens to include the PPC versions of macOS.  Work around it
much as we have in plperl.h, by #undef'ing bool after including the
problematic file, but only if we're not using stdbool-style booleans.

Discussion: https://postgr.es/m/E1fxqjl-0003YS-NS@gemulon.postgresql.org
2018-09-09 12:41:27 -04:00
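
The workaround pattern amounts to including the problem header and then
undoing its notion of "bool" unless stdbool-style booleans are in use; a
rough sketch, with a hypothetical guard macro standing in for the real
configure symbol:

    #include <dlfcn.h>              /* may drag in <stdbool.h> on macOS */

    #ifndef USE_STDBOOL_BOOLEANS    /* hypothetical configure symbol */
    #undef bool                     /* fall back to the project's own bool */
    #endif
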
Tom Lane ed0cdf0e05 Install a check for mis-linking of src/port and src/common functions.
On ELF-based platforms (and maybe others?) it's possible for a shared
library, when dynamically loaded into the backend, to call the backend
versions of src/port and src/common functions rather than the frontend
versions that are actually linked into the shlib.  This is definitely
not what we want, because the frontend versions often behave slightly
differently.  Up to now it's been "slight" enough that nobody noticed;
but with the addition of SCRAM support functions in src/common, we're
observing crashes due to the difference between palloc and malloc
memory allocation rules, as reported in bug #15367 from Jeremy Evans.

The purpose of this patch is to create a direct test for this type of
mis-linking, so that we know whether any given platform requires extra
measures to prevent using the wrong functions.  If the test fails, it
will lead to connection failures in the contrib/postgres_fdw regression
test.  At the moment, *BSD platforms using ELF format are known to have
the problem and can be expected to fail; but we need to know whether
anything else does, and we need a reliable ongoing check for future
platforms.

Actually fixing the problem will be the subject of later commit(s).

Discussion: https://postgr.es/m/153626613985.23143.4743626885618266803@wrigleys.postgresql.org
2018-09-09 12:23:23 -04:00
Noah Misch c85ad9cc63 Allow ENOENT in check_mode_recursive().
Buildfarm member tern failed src/bin/pg_ctl/t/001_start_stop.pl when a
check_mode_recursive() call overlapped a server's startup-time deletion
of pg_stat/global.stat.  Just warn.  Also, include errno in the message.
Back-patch to v11, where check_mode_recursive() first appeared.
2018-09-08 18:26:10 -07:00
Noah Misch 076a3c2112 Fix logical subscriber wait in test.
Buildfarm members sungazer and tern revealed this deficit.  Back-patch
to v10, like commit 4f10e7ea7b, which
introduced the test.
2018-09-08 16:20:50 -07:00
Tom Lane f47f314801 Minor cleanup/future-proofing for pg_saslprep().
Ensure that pg_saslprep() initializes its output argument to NULL in
all failure paths, and then remove the redundant initialization that
some (not all) of its callers did.  This does not fix any live bug,
but it reduces the odds of future bugs of omission.

Also add a comment about why the existing failure-path coding is
adequate.

Back-patch so as to keep the function's API consistent across branches,
again to forestall future bug introduction.

Patch by me, reviewed by Michael Paquier

Discussion: https://postgr.es/m/16558.1536407783@sss.pgh.pa.us
2018-09-08 18:20:36 -04:00
Michael Paquier 9226a3b89b Remove duplicated words split across lines in comments
This has been detected using some interesting tricks with sed, and the
method used is described in detail in the discussion below.

Author: Justin Pryzby
Discussion: https://postgr.es/m/20180908013109.GB15350@telsasoft.com
2018-09-08 12:24:19 -07:00
Tom Lane 361844fe56 Save/restore SPI's global variables in SPI_connect() and SPI_finish().
This patch removes two sources of interference between nominally
independent functions when one SPI-using function calls another,
perhaps without knowing that it does so.

Chapman Flack pointed out that xml.c's query_to_xml_internal() expects
SPI_tuptable and SPI_processed to stay valid across datatype output
function calls; but it's possible that such a call could involve
re-entrant use of SPI.  It seems likely that there are similar hazards
elsewhere, if not in the core code then in third-party SPI users.
Previously SPI_finish() reset SPI's API globals to zeroes/nulls, which
would typically make for a crash in such a situation.  Restoring them
to the values they had at SPI_connect() seems like a considerably more
useful behavior, and it still meets the design goal of not leaving any
dangling pointers to tuple tables of the function being exited.

Also, cause SPI_connect() to reset these variables to zeroes/nulls after
saving them.  This prevents interference in the opposite direction: it's
possible that a SPI-using function that's only ever been tested standalone
contains assumptions that these variables start out as zeroes.  That was
the case as long as you were the outermost SPI user, but not so much for
an inner user.  Now it's consistent.

Report and fix suggestion by Chapman Flack, actual patch by me.
Back-patch to all supported branches.

Discussion: https://postgr.es/m/9fa25bef-2e4f-1c32-22a4-3ad0723c4a17@anastigmatix.net
2018-09-07 20:09:57 -04:00
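
The save/restore idea is simple enough to sketch with hypothetical stand-ins
for the SPI globals (this is not the committed code): the connect routine
stashes the caller's values and clears them, and the finish routine puts them
back, so nested users cannot clobber each other.

    #include <stddef.h>
    #include <stdint.h>

    /* hypothetical stand-ins for globals such as SPI_processed / SPI_tuptable */
    static uint64_t api_processed = 0;
    static void    *api_tuptable = NULL;

    typedef struct SavedApiState
    {
        uint64_t    processed;
        void       *tuptable;
    } SavedApiState;

    static void
    api_connect(SavedApiState *saved)
    {
        /* remember the outer caller's values ... */
        saved->processed = api_processed;
        saved->tuptable = api_tuptable;
        /* ... and start the inner user with a clean slate */
        api_processed = 0;
        api_tuptable = NULL;
    }

    static void
    api_finish(const SavedApiState *saved)
    {
        /* restore whatever the outer caller was relying on */
        api_processed = saved->processed;
        api_tuptable = saved->tuptable;
    }
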
Tom Lane f510412df3 Limit depth of forced recursion for CLOBBER_CACHE_RECURSIVELY.
It's somewhat surprising that we got away with this before.  (Actually,
since nobody tests this routinely AFAIK, it might've been broken for
awhile.  But it's definitely broken in the wake of commit f868a8143.)
It seems sufficient to limit the forced recursion to a small number
of levels.

Back-patch to all supported branches, like the preceding patch.

Discussion: https://postgr.es/m/12259.1532117714@sss.pgh.pa.us
2018-09-07 18:13:29 -04:00
Tom Lane f868a8143a Fix longstanding recursion hazard in sinval message processing.
LockRelationOid and sibling routines supposed that, if our session already
holds the lock they were asked to acquire, they could skip calling
AcceptInvalidationMessages on the grounds that we must have already read
any remote sinval messages issued against the relation being locked.
This is normally true, but there's a critical special case where it's not:
processing inside AcceptInvalidationMessages might attempt to access system
relations, resulting in a recursive call to acquire a relation lock.

Hence, if the outer call had acquired that same system catalog lock, we'd
fall through, despite the possibility that there's an as-yet-unread sinval
message for that system catalog.  This could, for example, result in
failure to access a system catalog or index that had just been processed
by VACUUM FULL.  This is the explanation for buildfarm failures we've been
seeing intermittently for the past three months.  The bug is far older
than that, but commits a54e1f158 et al added a new recursion case within
AcceptInvalidationMessages that is apparently easier to hit than any
previous case.

To fix this, we must not skip calling AcceptInvalidationMessages until
we have *finished* a call to it since acquiring a relation lock, not
merely acquired the lock.  (There's already adequate logic inside
AcceptInvalidationMessages to deal with being called recursively.)
Fortunately, we can implement that at trivial cost, by adding a flag
to LOCALLOCK hashtable entries that tracks whether we know we have
completed such a call.

There is an API hazard added by this patch for external callers of
LockAcquire: if anything is testing for LOCKACQUIRE_ALREADY_HELD,
it might be fooled by the new return code LOCKACQUIRE_ALREADY_CLEAR
into thinking the lock wasn't already held.  This should be a fail-soft
condition, though, unless something very bizarre is being done in
response to the test.

Also, I added an additional output argument to LockAcquireExtended,
assuming that that probably isn't called by any outside code given
the very limited usefulness of its additional functionality.

Back-patch to all supported branches.

Discussion: https://postgr.es/m/12259.1532117714@sss.pgh.pa.us
2018-09-07 18:04:54 -04:00
Michael Paquier 8582b4d044 Improve handling of corrupted two-phase state files at recovery
When a corrupted two-phase state file is found by WAL replay, be it for
crash recovery or archive recovery, then the file is simply skipped and
a WARNING is logged to the user, causing the transaction to be silently
lost.  Facing an on-disk WAL file which is corrupted is as likely to
happen as what is stored in WAL records, but WAL records are already
able to fail hard if there is a CRC mismatch.  On-disk two-phase state
files, on the contrary, are simply ignored if corrupted.  Note that when
restoring the initial two-phase data state at recovery, files newer than
the horizon XID are discarded hence no files present in pg_twophase/
should be torned and have been made durable by a previous checkpoint, so
recovery should never see any corrupted two-phase state file by design.

The situation got better since 978b2f6 which has added two-phase state
information directly in WAL instead of using on-disk files, so the risk
is limited to two-phase transactions which live across at least one
checkpoint for long periods.  Backups having legit two-phase state files
on-disk could also silently lose transactions when restored if things
get corrupted.

This behavior exists since two-phase commit has been introduced, no
back-patch is done for now per the lack of complaints about this
problem.

Author: Michael Paquier
Discussion: https://postgr.es/m/20180709050309.GM1467@paquier.xyz
2018-09-07 11:00:16 -07:00
Andrew Gierth 7b6b167fa3 Refactor installation of extension headers.
Commit be54b3777 failed on gmake 3.80 due to a chained conditional,
which on closer examination could be removed entirely with some
refactoring elsewhere for a net simplification and more robustness
against empty expansions. Along the way, add some more comments.

Also make explicit in the documentation and comments that built
headers are not removed by 'make clean', since we don't typically want
that for headers generated by a separate ./configure step, and it's
much easier to add your own 'distclean' rule or use EXTRA_CLEAN than
to try and override a deletion rule in pgxs.mk.

Per buildfarm member prairiedog and comments by Michael Paquier, though
all the actual changes are my fault.
2018-09-07 14:19:14 +01:00
Peter Eisentraut 1fea1e3254 libpq: Change "options" dispchar to normal
libpq connection options as returned by PQconndefaults() have a
"dispchar" field that determines (among other things) whether an option
is a "debug" option, which shouldn't be shown by default to clients.
postgres_fdw makes use of that to control which connection options to
accept from a foreign server configuration.

Curiously, the "options" option, which allows passing configuration
settings to the backend server, was listed as a debug option, which
prevented it from being used by postgres_fdw.  Maybe it was once meant
for debugging, but it's clearly in general use nowadays.

So change the dispchar for it to be the normal non-debug case.  Also
remove the "debug" reference from its label field.

Reported-by: Shinoda, Noriyoshi <noriyoshi.shinoda@hpe.com>
2018-09-07 15:01:25 +02:00
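
The dispchar field can be inspected directly through PQconndefaults(); in the
documented convention, an empty dispchar marks a normal option, "*" a
password-style option, and "D" a debug option.  A small sketch that dumps the
table:

    #include <stdio.h>
    #include <libpq-fe.h>

    int
    main(void)
    {
        PQconninfoOption *opts = PQconndefaults();
        PQconninfoOption *opt;

        if (opts == NULL)
            return 1;
        for (opt = opts; opt->keyword != NULL; opt++)
            printf("%-24s dispchar=\"%s\"\n",
                   opt->keyword, opt->dispchar ? opt->dispchar : "");
        PQconninfoFree(opts);
        return 0;
    }
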
Peter Eisentraut 98afa68d93 Use C99 designated initializers for some structs
These are just a few particularly egregious cases that were hard to read
and write, and error prone because of many similar adjacent types.

Discussion: https://www.postgresql.org/message-id/flat/4c9f01be-9245-2148-b569-61a8562ef190%402ndquadrant.com
2018-09-07 11:40:03 +02:00
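
For readers unfamiliar with the idiom, here is a small made-up example of the
pattern being adopted: each field is named explicitly, so adjacent fields of
the same type cannot be swapped silently, and omitted fields are
zero-initialized.

    #include <stdbool.h>

    typedef struct ExampleOptions
    {
        const char *name;
        int         max_workers;
        int         timeout_ms;
        bool        verbose;
    } ExampleOptions;

    /* designated initializers: order-independent and self-documenting */
    static const ExampleOptions default_options = {
        .name = "default",
        .max_workers = 4,
        .timeout_ms = 30000,
        /* .verbose is implicitly false */
    };
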
Tom Lane 75f7855369 Fix inconsistent argument naming.
Typo in commit 842cb9fa6.
2018-09-06 11:14:22 -04:00
Peter Eisentraut 842cb9fa62 Refactor dlopen() support
Nowadays, all platforms except Windows and older HP-UX have standard
dlopen() support.  So having a separate implementation per platform
under src/backend/port/dynloader/ is a bit excessive.  Instead, treat
dlopen() like other library functions that happen to be missing
sometimes and put a replacement implementation under src/port/.

Discussion: https://www.postgresql.org/message-id/flat/e11a49cb-570a-60b7-707d-7084c8de0e61%402ndquadrant.com#54e735ae37476a121abb4e33c2549b03
2018-09-06 11:33:04 +02:00
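
For context, the standard dlopen() interface that the refactoring treats as
the baseline works roughly as below; the library and symbol names here are
made up.

    #include <dlfcn.h>
    #include <stdio.h>

    int
    main(void)
    {
        void      *handle;
        void     (*entry_fn)(void);

        handle = dlopen("libexample.so", RTLD_NOW | RTLD_GLOBAL);
        if (handle == NULL)
        {
            fprintf(stderr, "dlopen failed: %s\n", dlerror());
            return 1;
        }
        entry_fn = (void (*)(void)) dlsym(handle, "example_entry_point");
        if (entry_fn != NULL)
            entry_fn();
        dlclose(handle);
        return 0;
    }
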
Amit Kapila ac27c74def Fix the overrun in hash index metapage for smaller block sizes.
The commit 620b49a1 changed the value of HASH_MAX_BITMAPS with the intent
to allow many non-unique values in hash indexes without worrying about
reaching the limit on the number of overflow pages.  At that time, it didn't
occur to us that this can overrun the block for smaller block sizes.

Choose the value of HASH_MAX_BITMAPS based on BLCKSZ such that it gives
the same answer as now for the cases where the overrun doesn't occur, and
some other suitable value for the cases where an overrun currently
does occur.  This allows us not to change the behavior in any case that
currently works, so there's really no reason for a HASH_VERSION bump.

Author: Dilip Kumar
Reviewed-by: Amit Kapila
Backpatch-through: 10
Discussion: https://postgr.es/m/CAA4eK1LtF4VmU4mx_+i72ff1MdNZ8XaJMGkt2HV8+uSWcn8t4A@mail.gmail.com
2018-09-06 09:27:19 +05:30
Andrew Gierth be54b3777f Allow extensions to install built as well as unbuilt headers.
Commit df163230b overlooked the case that an out-of-tree extension
might need to build its header files (e.g. via ./configure). If it is
also doing a VPATH build, the HEADERS_* rules in the original commit
would then fail to find the files, since they would be looking only
under $(srcdir) and not in the build directory.

Fix by adding HEADERS_built and HEADERS_built_$(MODULE) which behave
like DATA_built in that they look in the build dir rather than the
source dir (and also make the files dependencies of the "all" target).

No Windows support appears to be needed for this, since it is only
relevant to out-of-tree builds (no support exists in Mkvcbuild.pm to
build extension header files in any case).
2018-09-05 22:01:21 +01:00
Tom Lane 54b01b9293 Remove no-longer-used variable.
Oversight in 2fbdf1b38.  Per buildfarm.
2018-09-05 14:29:58 -04:00
Tom Lane ae5205c8a8 Make argument names of pg_get_object_address consistent, and fix docs.
pg_get_object_address and pg_identify_object_as_address are supposed
to be inverses, but they disagreed as to the names of the arguments
representing the textual form of an object address.  Moreover, the
documented argument names didn't agree with reality at all, either
for these functions or pg_identify_object.

In HEAD and v11, I think we can get away with renaming the input
arguments of pg_get_object_address to match the outputs of
pg_identify_object_as_address.  In theory that might break queries
using named-argument notation to call pg_get_object_address, but
it seems really unlikely that anybody is doing that, or that they'd
have much trouble adjusting if they were.  In older branches, we'll
just live with the lack of consistency.

Aside from fixing the documentation of these functions to match reality,
I couldn't resist the temptation to do some copy-editing.

Per complaint from Jean-Pierre Pelletier.  Back-patch to 9.5 where these
functions were introduced.  (Before v11, this is a documentation change
only.)

Discussion: https://postgr.es/m/CANGqjDnWH8wsTY_GzDUxbt4i=y-85SJreZin4Hm8uOqv1vzRQA@mail.gmail.com
2018-09-05 13:47:28 -04:00
Alvaro Herrera 2fbdf1b38b Simplify partitioned table creation vs. relcache
In the original code, we were storing the pg_inherits row for a
partitioned table too early: enough that we had a hack for relcache to
avoid falling flat on its face while reading such a partial entry.  If
we finish the pg_class creation first and *then* store the pg_inherits
entry, we don't need that hack.

Also recognize that pg_class.relpartbound is not marked NOT NULL and
therefore it's entirely possible to read null values, so having only
Assert() protection isn't enough.  Change those to if/elog tests
instead.  This qualifies as a robustness fix, so backpatch to pg11.

In passing, remove one access that wasn't actually needed, and reword
one message to be like all the others that check for the same thing.

Reviewed-by: Amit Langote
Discussion: https://postgr.es/m/20180903213916.hh6wasnrdg6xv2ud@alvherre.pgsql
2018-09-05 14:36:13 -03:00
Peter Eisentraut f5a6509bb1 PL/Python: Remove use of simple slicing API
The simple slicing API (sq_slice, sq_ass_slice) has been deprecated
since Python 2.0 and has been removed altogether in Python 3, so remove
those functions from the PLyResult class.  Instead, the non-slice
mapping functions mp_subscript and mp_ass_subscript can take slice
objects as an index.  Since we just pass the index through to the
underlying list object, we already support that.  Test coverage was
already in place.
2018-09-05 16:32:38 +02:00
Michael Paquier d6e98ebe37 Improve some error message strings and errcodes
This makes a bit less work for translators, by unifying error strings a
bit more with what the rest of the code does, this time for three error
strings in autoprewarm and one in base backup code.

After some code review of slot.c, some file-access failures turn out to be
reported with an incorrect internal-error errcode, where a data-corruption
errcode makes more sense, similarly to the previous work done in e41d0a1.
Also, after calling rmtree(), a WARNING gets reported, which duplicates what
the internal call already reports, so make the code more consistent with all
other code paths calling this function.

Author: Michael Paquier
Discussion: https://postgr.es/m/20180902200747.GC1343@paquier.xyz
2018-09-04 11:06:04 -07:00
Tom Lane 17b7c302b5 Fully enforce uniqueness of constraint names.
It's been true for a long time that we expect names of table and domain
constraints to be unique among the constraints of that table or domain.
However, the enforcement of that has been pretty haphazard, and it missed
some corner cases such as creating a CHECK constraint and then an index
constraint of the same name (as per recent report from André Hänsel).
Also, due to the lack of an actual unique index enforcing this, duplicates
could be created through race conditions.

Moreover, the code that searches pg_constraint has been quite inconsistent
about how to handle duplicate names if one did occur: some places checked
and threw errors if there was more than one match, while others just
processed the first match they came to.

To fix, create a unique index on (conrelid, contypid, conname).  Since
either conrelid or contypid is zero, this will separately enforce
uniqueness of constraint names among constraints of any one table and any
one domain.  (If we ever implement SQL assertions, and put them into this
catalog, more thought might be needed.  But it'd be at least as reasonable
to put them into a new catalog; having overloaded this one catalog with
two kinds of constraints was a mistake already IMO.)  This index can replace
the existing non-unique index on conrelid, though we need to keep the one
on contypid for query performance reasons.

Having done that, we can simplify the logic in various places that either
coped with duplicates or neglected to, as well as potentially improve
lookup performance when searching for a constraint by name.

Also, as per our usual practice, install a preliminary check so that you
get something more friendly than a unique-index violation report in the
case complained of by André.  And teach ChooseIndexName to avoid choosing
autogenerated names that would draw such a failure.

While it's not possible to make such a change in the back branches,
it doesn't seem quite too late to put this into v11, so do so.

Discussion: https://postgr.es/m/0c1001d4428f$0942b430$1bc81c90$@webkr.de
2018-09-04 13:45:35 -04:00
Amit Kapila 14e9b2a752 Prohibit pushing subqueries containing window function calculation to
workers.

Allowing window function calculation in workers leads to inconsistent
results because if the input row ordering is not fully deterministic, the
output of window functions might vary across workers.  The fix is to treat
them as parallel-restricted.

In passing, improve the coding pattern in max_parallel_hazard_walker
so that it has a chain of mutually-exclusive if ... else if ... else if
... else if ... IsA tests.

Reported-by: Marko Tiikkaja
Bug: 15324
Author: Amit Kapila
Reviewed-by: Tom Lane
Backpatch-through: 9.6
Discussion: https://postgr.es/m/CAL9smLAnfPJCDUUG4ckX2iznj53V7VSMsYefzZieN93YxTNOcw@mail.gmail.com
2018-09-04 10:28:08 +05:30
Amit Kapila 7c9e19ca9a During the split, set checksum on an empty hash index page.
On a split, we allocate a new splitpoint's worth of bucket pages wherein
we initialize the last page with zeros which is fine, but we forgot to set
the checksum for that last page.

We decided to back-patch this fix only as far as 10 because we don't have an easy
way to test it in prior versions.  Another reason is that the hash-index
code is changed heavily in 10, so it is not advisable to push the fix
without testing it in prior versions.

Author: Amit Kapila
Reviewed-by: Yugo Nagata
Backpatch-through: 10
Discussion: https://postgr.es/m/5d03686d-727c-dbf8-0064-bf8b97ffe850@2ndquadrant.com
2018-09-04 08:35:42 +05:30
Alvaro Herrera c076f3d74a Remove pg_constraint.conincluding
This column was added in commit 8224de4f42 ("Indexes with INCLUDE
columns and their support in B-tree") to ease writing the ruleutils.c
supporting code for that feature, but it turns out to be unnecessary --
we can do the same thing with just one more syscache lookup.

Even the documentation for the new column being removed in this commit
is awkward.

Discussion: https://postgr.es/m/20180902165018.33otxftp3olgtu4t@alvherre.pgsql
2018-09-03 12:59:26 -03:00
Tomas Vondra 4ddd8f5f55 Fix memory leak in TRUNCATE decoding
When decoding a TRUNCATE record, the relids array was being allocated in
the main ReorderBuffer memory context, but not released with the change
resulting in a memory leak.

The array was also ignored when serializing/deserializing the change,
assuming all the information is stored in the change itself.  So when
spilling the change to disk, we serialized only the
pointer to the relids array.  Because the array was never released,
that pointer happened to remain valid even after loading the change back
into memory, preventing an actual crash.

This fixes both the memory leak and (de)serialization.  The relids array
is still allocated in the main ReorderBuffer memory context (none of the
existing ones seems like a good match, and adding an extra context seems
like an overkill).  The allocation is wrapped in a new ReorderBuffer API
functions, to keep the details within reorderbuffer.c, just like the
other ReorderBufferGet methods do.

Author: Tomas Vondra
Discussion: https://www.postgresql.org/message-id/flat/66175a41-9342-2845-652f-1bd4c3ee50aa%402ndquadrant.com
Backpatch: 11, where decoding of TRUNCATE was introduced
2018-09-03 02:10:24 +02:00
Michael Paquier caa0c6ceba Fix initial sync of slot parent directory when restoring status
At the beginning of recovery, information from replication slots is
recovered from disk to memory.  In order to ensure the durability of the
information, the status file as well as its parent directory are
synced.  It happens that the sync on the parent directory was done
directly using the status file path, which is logically incorrect, and
the current code has been doing a sync on the same object twice in a
row.

Reported-by: Konstantin Knizhnik
Diagnosed-by: Konstantin Knizhnik
Author: Michael Paquier
Discussion: https://postgr.es/m/9eb1a6d5-b66f-2640-598d-c5ea46b8f68a@postgrespro.ru
Backpatch-through: 9.4
2018-09-02 12:40:30 -07:00
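
The underlying pattern is to sync the file and then separately open and sync
the directory that contains it.  A rough POSIX sketch follows (the real code
goes through PostgreSQL's fsync_fname() machinery, and error handling is
abbreviated here):

    #include <fcntl.h>
    #include <libgen.h>
    #include <string.h>
    #include <unistd.h>

    static int
    sync_file_and_parent(const char *path)
    {
        char    pathcopy[1024];
        int     fd, dirfd;

        /* first make the file itself durable */
        fd = open(path, O_RDWR);
        if (fd < 0)
            return -1;
        if (fsync(fd) != 0)
        {
            close(fd);
            return -1;
        }
        close(fd);

        /* then sync its parent directory, not the file path again */
        strncpy(pathcopy, path, sizeof(pathcopy) - 1);
        pathcopy[sizeof(pathcopy) - 1] = '\0';
        dirfd = open(dirname(pathcopy), O_RDONLY);
        if (dirfd < 0)
            return -1;
        if (fsync(dirfd) != 0)
        {
            close(dirfd);
            return -1;
        }
        close(dirfd);
        return 0;
    }
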
Tom Lane 44cac93464 Avoid using potentially-under-aligned page buffers.
There's a project policy against using plain "char buf[BLCKSZ]" local
or static variables as page buffers; preferred style is to palloc or
malloc each buffer to ensure it is MAXALIGN'd.  However, that policy's
been ignored in an increasing number of places.  We've apparently got
away with it so far, probably because (a) relatively few people use
platforms on which misalignment causes core dumps and/or (b) the
variables chance to be sufficiently aligned anyway.  But this is not
something to rely on.  Moreover, even if we don't get a core dump,
we might be paying a lot of cycles for misaligned accesses.

To fix, invent new union types PGAlignedBlock and PGAlignedXLogBlock
that the compiler must allocate with sufficient alignment, and use
those in place of plain char arrays.

I used these types even for variables where there's no risk of a
misaligned access, since ensuring proper alignment should make
kernel data transfers faster.  I also changed some places where
we had been palloc'ing short-lived buffers, for coding style
uniformity and to save palloc/pfree overhead.

Since this seems to be a live portability hazard (despite the lack
of field reports), back-patch to all supported versions.

Patch by me; thanks to Michael Paquier for review.

Discussion: https://postgr.es/m/1535618100.1286.3.camel@credativ.de
2018-09-01 15:27:17 -04:00
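
The union trick described above can be sketched roughly as follows (field
names are illustrative, not necessarily the committed ones): including a
double and a 64-bit integer member forces the whole union to carry at least
their alignment, so the char array inside it is safe to use as a page buffer.

    #include <stdint.h>

    #define EXAMPLE_BLCKSZ 8192         /* stand-in for PostgreSQL's BLCKSZ */

    typedef union ExampleAlignedBlock
    {
        char    data[EXAMPLE_BLCKSZ];   /* the page buffer itself */
        double  force_align_d;          /* forces suitable alignment */
        int64_t force_align_i64;
    } ExampleAlignedBlock;

    /* usage: ExampleAlignedBlock buf; then read or write buf.data as a page */
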