Commit Graph

18972 Commits

Tom Lane 1c53c4dec3 Finish reverting "recheck_on_update" patch.
This reverts commit c203d6cf8 and some follow-on fixes, completing the
task begun in commit 5d28c9bd7.  If that feature is ever resurrected,
the code will look quite a bit different from this, so it seems best
to start from a clean slate.

The v11 branch is not touched; in that branch, the recheck_on_update
storage option remains present, but nonfunctional and undocumented.

Discussion: https://postgr.es/m/20190114223409.3tcvejfhlvbucrv5@alap3.anarazel.de
2019-01-15 12:07:10 -05:00
Andres Freund 0944ec54de Don't include genam.h from execnodes.h and relscan.h anymore.
This is the genam.h equivalent of 4c850ecec6 (which removed
heapam.h from a lot of other headers).  There are still a few header
includes of genam.h, but not from central headers anymore.

As a few headers are not indirectly included anymore, execnodes.h and
relscan.h need a few additional includes.  Some of the depended-on
types were replaceable by using the underlying structs, but e.g. for
Snapshot in execnodes.h that'd have gotten more invasive than
reasonable in this commit.

Like the aforementioned commit 4c850ecec6, this requires adding new
genam.h includes to a number of backend files, which likely is also
required in a few external projects.

Author: Andres Freund
Discussion: https://postgr.es/m/20190114000701.y4ttcb74jpskkcfb@alap3.anarazel.de
2019-01-14 17:02:12 -08:00
Andres Freund 774a975c9a Make naming of tupdesc related structs more consistent with the rest of PG.
We usually keep the struct name itself and the name of the typedef the
same.  Additionally, structs that are usually accessed via a typedef
that hides their being a pointer are commonly suffixed Data.  Change the
tupdesc code to follow those conventions.

This is triggered by a future patch that intends to forward declare
TupleDescData in another header - keeping with the naming scheme makes
that easier to understand.

Author: Andres Freund
Discussion: https://postgr.es/m/20190114000701.y4ttcb74jpskkcfb@alap3.anarazel.de
2019-01-14 16:25:50 -08:00
Andres Freund 4c850ecec6 Don't include heapam.h from other headers.
heapam.h previously was included in a number of widely used
headers (e.g. execnodes.h, indirectly in executor.h, ...). That's
problematic on its own, as heapam.h contains a lot of low-level
details that don't need to be exposed that widely, but becomes more
problematic with the upcoming introduction of pluggable table storage
- it seems inappropriate for heapam.h to be included that widely
afterwards.

heapam.h was largely only included in other headers to get the
HeapScanDesc typedef (which was defined in heapam.h, even though
HeapScanDescData is defined in relscan.h). The better solution here
seems to be to just use the underlying struct (forward declared where
necessary). Similar for BulkInsertState.

Another problem was that LockTupleMode was used in executor.h - parts
of the file tried to cope without heapam.h, but due to the fact that
it indirectly included it, several subsequent violations of that goal
were not noticed.  We could just reuse the approach of declaring
parameters as int, but it seems nicer to move LockTupleMode to
lockoptions.h - that's not a perfect location, but also doesn't seem
bad.

As a number of files relied on implicitly included heapam.h, a
significant number of files grew an explicit include. It's quite
probable that a few external projects will need to do the same.

Author: Andres Freund
Reviewed-By: Alvaro Herrera
Discussion: https://postgr.es/m/20190114000701.y4ttcb74jpskkcfb@alap3.anarazel.de
2019-01-14 16:24:41 -08:00
Michael Paquier 42e2a58071 Fix typos in documentation and for one wait event
These have been found while cross-checking for the use of unique words
in the documentation, and a wait event was not getting generated in a way
consistent with what the documentation provided.

Author: Alexander Lakhin
Discussion: https://postgr.es/m/9b5a3a85-899a-ae62-dbab-1e7943aa5ab1@gmail.com
2019-01-15 08:47:01 +09:00
Andres Freund de66987adb Re-add default_with_oids GUC to avoid breaking old dump files.
After 578b229718 / the removal of WITH OIDS support, older dump files
containing
    SET default_with_oids = false;
either report unnecessary errors (as the subsequent tables have no
oids) or even fail to restore entirely (when using transaction mode).
To avoid that, re-add the GUC, but don't allow setting it to true.
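
For illustration, a rough sketch of the restored behavior (the error
wording shown is illustrative, not verbatim):

    -- Accepted again, as emitted by pre-v12 pg_dump; it is a no-op now
    -- that tables cannot have OIDs:
    SET default_with_oids = false;

    -- Rejected, because WITH OIDS support itself is gone:
    SET default_with_oids = true;   -- ERROR (wording illustrative)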

Per complaint from Tom Lane.

Author: Amit Khandekar, editorialized by me
Discussion: https://postgr.es/m/CAJ3gD9dZyxrtL0rJfoNoOj6v7fJSDaXBngi9wy5XU8m-ioXhAA@mail.gmail.com
2019-01-14 15:30:24 -08:00
Alvaro Herrera 0ad41cf537 Fix unique INCLUDE indexes on partitioned tables
We were considering the INCLUDE columns as part of the key, allowing
uniqueness-violating rows to be inserted into different partitions.

Concurrent development conflict in eb7ed3f306 and 8224de4f42.
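
A minimal sketch of the hole being closed (table and column names are
hypothetical):

    CREATE TABLE t (a int, b int) PARTITION BY RANGE (b);

    -- A unique constraint on a partitioned table must contain all
    -- partition key columns as key columns.  Before the fix, this was
    -- accepted because b appeared among the INCLUDE columns, so rows
    -- with duplicate values of a could land in different partitions.
    -- It is now rejected.
    CREATE UNIQUE INDEX ON t (a) INCLUDE (b);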

Reported-by: Justin Pryzby
Discussion: https://postgr.es/m/20190109065109.GA4285@telsasoft.com
2019-01-14 19:28:10 -03:00
Michael Paquier 9f527a6e9a Fix error message for logical replication targets
This fixes an oversight from 373bda6.

Noted by Erik Rijkers.
2019-01-13 22:36:23 +09:00
Peter Eisentraut 0acb3bc33a Change default of recovery_target_timeline to 'latest'
This is what one usually wants for recovery and almost always wants
for a standby.

Discussion: https://www.postgresql.org/message-id/flat/6dd2c23a-4162-8469-410f-bfe146e28c0c@2ndquadrant.com/
Reviewed-by: David Steele <david@pgmasters.net>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
2019-01-13 10:01:05 +01:00
Michael Paquier 373bda61d2 Improve error messages for incorrect types of logical replication targets
When trying to use something other than a plain table as a logical
replication target, a rather generic error message gets used to report
the problem.  This can be confusing when it comes to foreign tables and
partitioned tables, so use more specific messages in these cases.

Author: Amit Langote
Reviewed-by: Peter Eisentraut, Magnus Hagander, Michael Paquier
Discussion: https://postgr.es/m/41799bee-40eb-7bb5-80b1-325ce17518bc@lab.ntt.co.jp
2019-01-13 16:39:49 +09:00
Andres Freund 1845ca2cfd Remove heapam.h include made superfluous by b60c397599.
Noticed this while working on another patch.

Author: Andres Freund
2019-01-12 22:27:35 -08:00
Andrew Dunstan e33884d412 Free pre-modification HeapTuple in ALTER TABLE ... TYPE ...
This was an oversight in commit 3b174b1a3.

Per offline gripe from Alvaro Herrera

Backpatch to release 11.
2019-01-11 17:12:54 -05:00
Tom Lane 1db5667bac Avoid sharing PARAM_EXEC slots between different levels of NestLoop.
Up to now, createplan.c attempted to share PARAM_EXEC slots for
NestLoopParams across different plan levels, if the same underlying Var
was being fed down to different righthand-side subplan trees by different
NestLoops.  This was, I think, more of an artifact of using subselect.c's
PlannerParamItem infrastructure than an explicit design goal, but anyway
that was the end result.

This works well enough as long as the plan tree is executing synchronously,
but the feature whereby Gather can execute the parallelized subplan locally
breaks it.  An upper NestLoop node might execute for a row retrieved from
a parallel worker, and assign a value for a PARAM_EXEC slot from that row,
while the leader's copy of the parallelized subplan is suspended with a
different active value of the row the Var comes from.  When control
eventually returns to the leader's subplan, it gets the wrong answers if
the same PARAM_EXEC slot is being used within the subplan, as reported
in bug #15577 from Bartosz Polnik.

This is pretty reminiscent of the problem fixed in commit 46c508fbc, and
the proper fix seems to be the same: don't try to share PARAM_EXEC slots
across different levels of controlling NestLoop nodes.

This requires decoupling NestLoopParam handling from PlannerParamItem
handling, although the logic remains somewhat similar.  To avoid bizarre
division of labor between subselect.c and createplan.c, I decided to move
all the param-slot-assignment logic for both cases out of those files
and put it into a new file paramassign.c.  Hopefully it's a bit better
documented now, too.

A regression test case for this might be nice, but we don't know a
test case that triggers the problem with a suitably small amount
of data.

Back-patch to 9.6 where we added Gather nodes.  It's conceivable that
related problems exist in older branches; but without some evidence
for that, I'll leave the older branches alone.

Discussion: https://postgr.es/m/15577-ca61ab18904af852@postgresql.org
2019-01-11 15:54:06 -05:00
Peter Eisentraut ff85306055 Add value 'current' for recovery_target_timeline
This value represents the default behavior of using the current
timeline.  Previously, this was represented by an empty string.

(Before the removal of recovery.conf, this setting could not be chosen
explicitly but was used when recovery_target_timeline was not
mentioned at all.)
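
As a sketch of the settings involved (these could equally be set in
postgresql.conf; a restart is required):

    -- Recover along the timeline that was current when the base backup
    -- was taken; previously spelled as an empty string:
    ALTER SYSTEM SET recovery_target_timeline = 'current';

    -- Follow the newest timeline found in the archive (the new default,
    -- per the change above):
    ALTER SYSTEM SET recovery_target_timeline = 'latest';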

Discussion: https://www.postgresql.org/message-id/flat/6dd2c23a-4162-8469-410f-bfe146e28c0c@2ndquadrant.com/
Reviewed-by: David Steele <david@pgmasters.net>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
2019-01-11 11:02:03 +01:00
Andrew Dunstan 3b174b1a35 Fix missing values when doing ALTER TABLE ALTER COLUMN TYPE
This was an oversight in commit 16828d5c. If the table is going to be
rewritten, we simply clear all the missing values from all the table's
attributes, since there will no longer be any rows with the attributes
missing. Otherwise, we repackage the missing value in an array
constructed with the new type specifications.
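
A minimal sketch of the non-rewrite case that was losing values (names
hypothetical):

    CREATE TABLE t (a int);
    INSERT INTO t VALUES (1);

    -- A constant default is stored as a "missing value" rather than by
    -- rewriting the table:
    ALTER TABLE t ADD COLUMN b varchar(10) DEFAULT 'hello';

    -- Growing the varchar length does not rewrite the table either, so
    -- the missing value must be repackaged with the new type; before the
    -- fix the pre-existing row could come back with b = NULL:
    ALTER TABLE t ALTER COLUMN b TYPE varchar(20);

    SELECT a, b FROM t;   -- expected: (1, 'hello')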

Backpatch to release 11.

This fixes bug #15446, reported by Dmitry Molotkov

Reviewed by Dean Rasheed
2019-01-10 15:53:45 -05:00
Tom Lane eaf0380ecc Fix C++ compile failures in headers.
Avoid using "typeid" as a parameter name in header files, since that
is a C++ keyword.  These cases were introduced recently, in 04fe805a1
and 586b98fdf.

Since I'm an incurable neatnik, also rename these parameters in the
underlying function definitions.  That's not really necessary per
project rules, but I don't like function declarations that don't
quite agree with the underlying definitions.

Per src/tools/pginclude/cpluspluscheck.
2019-01-10 14:07:01 -05:00
Alvaro Herrera b60c397599 Move inheritance expansion code into its own file
This commit moves expand_inherited_tables and its underlings from
optimizer/prep/prepunion.c to optimizer/utils/inherit.c.
Also, all of the AppendRelInfo-based expression manipulation routines
are moved to optimizer/utils/appendinfo.c.

No functional code changes.  One exception is the introduction of
make_append_rel_info, but that's still just moving around code.

Also, stop including <limits.h> in prepunion.c, which no longer needs
it since 3fc6e2d7f5.  I (Álvaro) noticed this because Amit was copying
that to inherit.c, which likewise doesn't need it.

Author: Amit Langote
Discussion: https://postgr.es/m/3be67028-a00a-502c-199a-da00eec8fb6e@lab.ntt.co.jp
2019-01-10 14:54:31 -03:00
Michael Paquier 5d59a6c5ea Fix grammar mistakes in md.c
Author: Kirk Jamison
Discussion: https://postgr.es/m/D09B13F772D2274BB348A310EE3027C640AC54@g01jpexmbkw24
2019-01-10 09:36:25 +09:00
Tom Lane 8ff5f824dc Reduce the size of the fmgr_builtin_oid_index[] array.
This index array was originally defined to have 10000 entries (ranging
up to FirstGenbkiObjectId), but we really only need entries up to the
last existing builtin function OID, currently 6121.  That saves close
to 8K of never-accessed space in the server executable, at the small
price of one more fetch in fmgr_isbuiltin().

We could reduce the array size still further by renumbering a few of
the highest-numbered builtin functions; but there's a small risk of
breaking clients that have chosen to hardwire those function OIDs,
so it's not clear if it'd be worth the trouble.  (We should, however,
discourage future patches from choosing function OIDs above 6K as long
as there's still lots of space below that.)

Discussion: https://postgr.es/m/12359.1547063064@sss.pgh.pa.us
2019-01-09 15:22:43 -05:00
Tom Lane afb0d0712f Replace the data structure used for keyword lookup.
Previously, ScanKeywordLookup was passed an array of string pointers.
This had some performance deficiencies: the strings themselves might
be scattered all over the place depending on the compiler (and some
quick checking shows that at least with gcc-on-Linux, they indeed
weren't reliably close together).  That led to very cache-unfriendly
behavior as the binary search touched strings in many different pages.
Also, depending on the platform, the string pointers might need to
be adjusted at program start, so that they couldn't be simple constant
data.  And the ScanKeyword struct had been designed with an eye to
32-bit machines originally; on 64-bit it requires 16 bytes per
keyword, making it even more cache-unfriendly.

Redesign so that the keyword strings themselves are allocated
consecutively (as part of one big char-string constant), thereby
eliminating the touch-lots-of-unrelated-pages syndrome.  And get
rid of the ScanKeyword array in favor of three separate arrays:
uint16 offsets into the keyword array, uint16 token codes, and
uint8 keyword categories.  That reduces the overhead per keyword
to 5 bytes instead of 16 (even less in programs that only need
one of the token codes and categories); moreover, the binary search
only touches the offsets array, further reducing its cache footprint.
This also lets us put the token codes somewhere else than the
keyword strings are, which avoids some unpleasant build dependencies.

While we're at it, wrap the data used by ScanKeywordLookup into
a struct that can be treated as an opaque type by most callers.
That doesn't change things much right now, but it will make it
less painful to switch to a hash-based lookup method, as is being
discussed in the mailing list thread.

Most of the change here is associated with adding a generator
script that can build the new data structure from the same
list-of-PG_KEYWORD header representation we used before.
The PG_KEYWORD lists that plpgsql and ecpg used to embed in
their scanner .c files have to be moved into headers, and the
Makefiles have to be taught to invoke the generator script.
This work is also necessary if we're to consider hash-based lookup,
since the generator script is what would be responsible for
constructing a hash table.

Aside from saving a few kilobytes in each program that includes
the keyword table, this seems to speed up raw parsing (flex+bison)
by a few percent.  So it's worth doing even as it stands, though
we think we can gain even more with a follow-on patch to switch
to hash-based lookup.

John Naylor, with further hacking by me

Discussion: https://postgr.es/m/CAJVSVGXdFVU2sgym89XPL=Lv1zOS5=EHHQ8XWNzFL=mTXkKMLw@mail.gmail.com
2019-01-06 17:02:57 -05:00
Alvaro Herrera 807ae415c5 Don't create relfilenode for relations without storage
Some relation kinds had relfilenode set to some non-zero value, but
apparently the actual files did not really exist because creation was
prevented elsewhere.  Get rid of the phony pg_class.relfilenode values.
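
A quick way to observe the change (query sketch):

    -- Relation kinds without on-disk storage (views, foreign tables,
    -- partitioned tables and indexes, composite types) now show
    -- relfilenode = 0:
    SELECT relname, relkind, relfilenode
    FROM pg_class
    WHERE relkind IN ('v', 'f', 'p', 'I', 'c')
    ORDER BY relkind, relname;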

Catversion bumped, but only because the sanity_test check will fail if
run in a system initdb'd with the previous version.

Reviewed-by: Kyotaro HORIGUCHI, Michael Paquier
Discussion: https://postgr.es/m/20181206215552.fm2ypuxq6nhpwjuc@alvherre.pgsql
2019-01-04 14:51:17 -03:00
Alvaro Herrera df5be63763 Rename macro to RELKIND_HAS_STORAGE
The original name was an unfortunate choice.

Discussion: https://postgr.es/m/20181218.145600.172055615.horiguchi.kyotaro@lab.ntt.co.jp
2019-01-04 14:34:18 -03:00
Tom Lane d33faa285b Move the built-in conversions into the initial catalog data.
Instead of running a SQL script to create the standard conversion
functions and pg_conversion entries, put those entries into the
initial data in postgres.bki.

This shaves a few percent off the runtime of initdb, and also allows
accurate comments to be attached to the conversion functions; the
previous script labeled them with machine-generated comments that
were not quite right for multi-purpose conversion functions.
Also, we can get rid of the duplicative Makefile and MSVC perl
implementations of the generation code for that SQL script.

A functional change is that these pg_proc and pg_conversion entries
are now "pinned" by initdb.  Leaving them unpinned was perhaps a
good thing back while the conversions feature was under development,
but there seems no valid reason for it now.

Also, the conversion functions are now marked as immutable, where
before they were volatile by virtue of lacking any explicit
specification.  That seems like it was just an oversight.

To avoid using magic constants in pg_conversion.dat, extend
genbki.pl to allow encoding names to be converted, much as it
does for language, access method, etc names.

John Naylor

Discussion: https://postgr.es/m/CAJVSVGWtUqxpfAaxS88vEGvi+jKzWZb2EStu5io-UPc4p9rSJg@mail.gmail.com
2019-01-03 19:47:53 -05:00
Tom Lane 814c9019aa Use symbolic references for pg_language OIDs in the bootstrap data.
This patch teaches genbki.pl to replace pg_language names by OIDs
in much the same way as it already does for pg_am names etc, and
converts pg_proc.dat to use such symbolic references in the prolang
column.

Aside from getting rid of a few more magic numbers in the initial
catalog data, this means that Gen_fmgrtab.pl no longer needs to read
pg_language.dat, since it doesn't have to know the OID of the "internal"
language; now it's just looking for the string "internal".

No need for a catversion bump, since the contents of postgres.bki
don't actually change at all.

John Naylor

Discussion: https://postgr.es/m/CAJVSVGWtUqxpfAaxS88vEGvi+jKzWZb2EStu5io-UPc4p9rSJg@mail.gmail.com
2019-01-03 18:38:49 -05:00
Tom Lane 7170268efd Improve ANALYZE's handling of concurrent-update scenarios.
This patch changes the rule for whether or not a tuple seen by ANALYZE
should be included in its sample.

When we last touched this logic, in commit 51e1445f1, we weren't
thinking very hard about tuples being UPDATEd by a long-running
concurrent transaction.  In such a case, we might see the pre-image as
either LIVE or DELETE_IN_PROGRESS depending on timing; and we might see
the post-image not at all, or as INSERT_IN_PROGRESS.  Since the existing
code will not sample either DELETE_IN_PROGRESS or INSERT_IN_PROGRESS
tuples, this leads to concurrently-updated rows being omitted from the
sample entirely.  That's not very helpful, and it's especially the wrong
thing if the concurrent transaction ends up rolling back.

The right thing seems to be to sample DELETE_IN_PROGRESS rows just as if
they were live.  This makes the "sample it" and "count it" decisions the
same, which seems good for consistency.  It's clearly the right thing
if the concurrent transaction ends up rolling back; in effect, we are
sampling as though IN_PROGRESS transactions haven't happened yet.
Also, this combination of choices ensures maximum robustness against
the different combinations of whether and in which state we might see the
pre- and post-images of an update.

It's slightly annoying that we end up recording immediately-out-of-date
stats in the case where the transaction does commit, but on the other
hand the stats are fine for columns that didn't change in the update.
And the alternative of sampling INSERT_IN_PROGRESS rows instead seems
like a bad idea, because then the sampling would be inconsistent with
the way rows are counted for the stats report.

Per report from Mark Chambers; thanks to Jeff Janes for diagnosing
what was happening.  Back-patch to all supported versions.

Discussion: https://postgr.es/m/CAFh58O_Myr6G3tcH3gcGrF-=OExB08PJdWZcSBcEcovaiPsrHA@mail.gmail.com
2019-01-03 17:00:08 -05:00
Tom Lane 68a13f28be Don't believe MinMaxExpr is leakproof without checking.
MinMaxExpr invokes the btree comparison function for its input datatype,
so it's only leakproof if that function is.  Many such functions are
indeed leakproof, but others are not, and we should not just assume that
they are.  Hence, adjust contain_leaked_vars to verify the leakproofness
of the referenced function explicitly.

I didn't add a regression test because it would need to depend on
some particular comparison function being leaky, and that's a moving
target, per discussion.

This has been wrong all along, so back-patch to supported branches.

Discussion: https://postgr.es/m/31042.1546194242@sss.pgh.pa.us
2019-01-02 16:34:04 -05:00
Tom Lane 69ae9dcb44 Ensure link commands list *.o files before LDFLAGS.
It's important for link commands to list *.o input files before -l
switches for libraries, as library code may not get pulled into the link
unless referenced by an earlier command-line entry.  This is certainly
necessary for static libraries (.a style).  Apparently on some platforms
it is also necessary for shared libraries, as reported by Donald Dong.

We often put -l switches for within-tree libraries into LDFLAGS, meaning
that link commands that list *.o files after LDFLAGS are hazardous.
Most of our link commands got this right, but a few did not.  In
particular, places that relied on gmake's default implicit link rule
failed, because that puts LDFLAGS first.  Fix that by overriding the
built-in rule with our own.  The implicit link rules in
src/makefiles/Makefile.* for single-.o-file shared libraries mostly
got this wrong too, so fix them.  I also changed the link rules for the
backend and a couple of other places for consistency, even though they
are not (currently) at risk because they aren't adding any -l switches
to LDFLAGS.

Arguably, the real problem here is that we're abusing LDFLAGS by
putting -l switches in it and we should stop doing that.  But changing
that would be quite invasive, so I'm not eager to do so.

Perhaps this is a candidate for back-patching, but so far it seems
that problems can only be exhibited in test code we don't normally
build, and at least some of the problems are new in HEAD anyway.
So I'll refrain for now.

Donald Dong and Tom Lane

Discussion: https://postgr.es/m/CAKABAquXn-BF-vBeRZxhzvPyfMqgGuc74p8BmQZyCFDpyROBJQ@mail.gmail.com
2019-01-02 13:57:54 -05:00
Bruce Momjian 97c39498e5 Update copyright for 2019
Backpatch-through: certain files through 9.4
2019-01-02 12:44:25 -05:00
Michael Paquier 1707a0d2aa Remove configure switch --disable-strong-random
This removes a portion of infrastructure introduced by fe0a0b5 to allow
compilation of Postgres in environments where no strong random source is
available, meaning that there is no linking to OpenSSL and no
/dev/urandom (Windows having its own CryptoAPI).  No systems shipped
this century lack /dev/urandom, and the buildfarm is actually not
testing this switch at all, so just remove it.  This simplifies
particularly some backend code which included a fallback implementation
using shared memory, and removes a set of alternate regression output
files from pgcrypto.

Author: Michael Paquier
Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/20181230063219.GG608@paquier.xyz
2019-01-01 20:05:51 +09:00
Michael Paquier 8d3b389ec3 Improve comments and logs in do_pg_stop/start_backup
The function name pg_stop_backup() has been included for ages in some
log messages when stopping the backup, which is confusing for base
backups taken with the replication protocol because this function is
never called.  Some other comments and messages in this area are
improved while at it.

The new wording is based on input and suggestions from several people,
all listed below.

Author: Michael Paquier
Reviewed-by: Peter Eisentraut, Álvaro Herrera, Tom Lane
Discussion: https://postgr.es/m/20181221040510.GA12599@paquier.xyz
2019-01-01 08:53:02 +09:00
Alvaro Herrera e439c6f0c3 Remove some useless code
In commit 8b08f7d482 I added member relationId to IndexStmt struct.
I'm now not sure why; DefineIndex doesn't need it, since the relation
OID is passed as a separate argument anyway.  Remove it.

Also remove a redundant assignment to the relationId argument (it wasn't
redundant when added by commit e093dcdd28, but should have been removed
in commit 5f173040e3), and use relationId instead of stmt->relation when
locking the relation in the second phase of CREATE INDEX CONCURRENTLY;
using stmt->relation there is not only confusing, it also means we
resolve the name twice for no reason.
2018-12-31 14:50:48 -03:00
Tom Lane b2edbbd02d Fix oversight in commit b5415e3c21.
While rearranging code in tidpath.c, I overlooked the fact that we ought
to check restriction_is_securely_promotable when trying to use a join
clause as a TID qual.  Since tideq itself is leakproof, this wouldn't
really allow any interesting leak AFAICT, but it still seems like we
had better check it.

For consistency with the corresponding logic in indxpath.c, also
check rinfo->pseudoconstant.  I'm not sure right now that it's
possible for that to be set in a join clause, but if it were,
a match couldn't be made anyway.
2018-12-31 12:39:15 -05:00
Peter Eisentraut 60d99797bf Change "checkpoint starting" message to use "wal"
This catches up with the recent renaming of all user-facing mentions
of "xlog" to "wal".

Discussion: https://www.postgresql.org/message-id/flat/20181129084708.GA9562%40msg.credativ.de
2018-12-30 22:23:01 +01:00
Tom Lane 0a6ea4001a Add a hash opclass for type "tid".
Up to now we've not worried much about joins where the join key is a
relation's CTID column, reasoning that storing a table's CTIDs in some
other table would be pretty useless.  However, there are use-cases for
this sort of query involving self-joins, so that argument doesn't really
hold water.

With larger relations, a merge or hash join is desirable.  We had a btree
opclass for type "tid", allowing merge joins on CTID, but no hash opclass
so that hash joins weren't possible.  Add the missing infrastructure.

This also potentially enables hash aggregation on "tid", though the
use-cases for that aren't too clear.
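
A sketch of a query that can now be planned as a hash join (table name
hypothetical):

    -- A self-join on ctid; previously only merge join or nestloop were
    -- available for this join condition:
    EXPLAIN (COSTS OFF)
    SELECT count(*)
    FROM big_table a
    JOIN big_table b ON a.ctid = b.ctid;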

Discussion: https://postgr.es/m/1853.1545453106@sss.pgh.pa.us
2018-12-30 15:40:04 -05:00
Tom Lane b5415e3c21 Support parameterized TidPaths.
Up to now we've not worried much about joins where the join key is a
relation's CTID column, reasoning that storing a table's CTIDs in some
other table would be pretty useless.  However, there are use-cases for
this sort of query involving self-joins, so that argument doesn't really
hold water.

This patch allows generating plans for joins on CTID that use a nestloop
with inner TidScan, similar to what we might do with an index on the join
column.  This is the most efficient way to join when the outer side of
the nestloop is expected to yield relatively few rows.
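
A sketch of the plan shape this enables (table names hypothetical):

    -- When the outer side produces few rows, the join on ctid can now be
    -- a nestloop with an inner TID scan on big_table, much like using an
    -- index on the join column:
    EXPLAIN (COSTS OFF)
    SELECT t.*
    FROM saved_ctids s
    JOIN big_table t ON t.ctid = s.ctid
    WHERE s.batch_id = 42;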

This change requires upgrading tidpath.c and the generated TidPaths
to work with RestrictInfos instead of bare qual clauses, but that's
long-postponed technical debt anyway.

Discussion: https://postgr.es/m/17443.1545435266@sss.pgh.pa.us
2018-12-30 15:24:28 -05:00
Tom Lane 6f19a8c41f Teach eval_const_expressions to constant-fold LEAST/GREATEST expressions.
Doing this requires an assumption that the invoked btree comparison
function is immutable.  We could check that explicitly, but in other
places such as contain_mutable_functions we just assume that it's true,
so we may as well do likewise here.  (If the comparison function's
behavior isn't immutable, the sort order in indexes built with it would
be unstable, so it seems certainly wrong for it not to be so.)
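
A small illustration of the folding:

    -- With constant arguments, the planner can now reduce these
    -- expressions to constants instead of evaluating them at run time;
    -- EXPLAIN VERBOSE should show the outputs as 3 and 'a':
    EXPLAIN (VERBOSE, COSTS OFF)
    SELECT GREATEST(1, 2, 3) AS g, LEAST('a'::text, 'b') AS l;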

Vik Fearing

Discussion: https://postgr.es/m/c6e8504c-4c43-35fa-6c8f-3c0b80a912cc@2ndquadrant.com
2018-12-30 13:42:04 -05:00
Tom Lane 4203842a1c Use pg_strong_random() to select each server process's random seed.
Previously we just set the seed based on process ID and start timestamp.
Both those values are directly available within the session, and can
be found out or guessed by other users too, making the session's series
of random(3) values fairly predictable.  Up to now, our backend-internal
uses of random(3) haven't seemed security-critical, but commit 88bdbd3f7
added one that potentially is: when using log_statement_sample_rate, a
user might be able to predict which of his SQL statements will get logged.

To improve this situation, upgrade the per-process seed initialization
method to use pg_strong_random() if available, greatly reducing the
predictability of the initial seed value.  This adds a few tens of
microseconds to process start time, but since backend startup time is
at least a couple of milliseconds, that seems an acceptable price.

This means that pg_strong_random() needs to be able to run without
reliance on any backend infrastructure, since it will be invoked
before any of that is up.  It was safe for that already, but adjust
comments and #include commands to make it clearer.

Discussion: https://postgr.es/m/3859.1545849900@sss.pgh.pa.us
2018-12-29 17:56:06 -05:00
Tom Lane 6645ad6bdd Use a separate random seed for SQL random()/setseed() functions.
Previously, the SQL random() function depended on libc's random(3),
and setseed() invoked srandom(3).  This results in interference between
these functions and backend-internal uses of random(3).  We'd never paid
too much mind to that, but in the wake of commit 88bdbd3f7 which added
log_statement_sample_rate, the interference arguably has a security
consequence: if log_statement_sample_rate is active then an unprivileged
user could probably control which if any of his SQL commands get logged,
by issuing setseed() at the right times.  That seems bad.

To fix this reliably, we need random() and setseed() to use their own
private random state variable.  Standard random(3) isn't amenable to such
usage, so let's switch to pg_erand48().  It's hard to say whether that's
more or less "random" than any particular platform's version of random(3),
but it does have a wider seed value and a longer period than are required
by POSIX, so we can hope that this isn't a big downgrade.  Also, we should
now have uniform behavior of random() across platforms, which is worth
something.

While at it, upgrade the per-process seed initialization method to use
pg_strong_random() if available, greatly reducing the predictability
of the initial seed value.  (I'll separately do something similar for
the internal uses of random().)

In addition to forestalling the possible security problem, this has a
benefit in the other direction, which is that we can now document
setseed() as guaranteeing a reproducible sequence of random() values.
Previously, because of the possibility of internal calls of random(3),
we could not promise any such thing.
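
A brief example of the now-documented guarantee:

    -- After the same setseed() call, random() yields the same sequence,
    -- regardless of backend-internal uses of the random number generator:
    SELECT setseed(0.5);
    SELECT random() FROM generate_series(1, 3);
    SELECT setseed(0.5);
    SELECT random() FROM generate_series(1, 3);   -- same three values again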

Discussion: https://postgr.es/m/3859.1545849900@sss.pgh.pa.us
2018-12-29 17:33:27 -05:00
Alvaro Herrera 4ed6c071b8 Fix thinko in previous commit 2018-12-28 15:18:00 -03:00
Alvaro Herrera e8b0e6b82d Rewrite ExecPartitionCheckEmitError for clarity
The original was hard to follow and failed to comply with the DRY principle.

Discussion: https://postgr.es/m/20181206222221.g5witbsklvqthjll@alvherre.pgsql
2018-12-28 14:47:05 -03:00
Peter Eisentraut ae4472c619 Remove obsolete IndexIs* macros
Remove IndexIsValid(), IndexIsReady(), IndexIsLive() in favor of
accessing the index structure directly.  These macros haven't been
used consistently, and the original reason of maintaining source
compatibility with PostgreSQL 9.2 is gone.

Discussion: https://www.postgresql.org/message-id/flat/d419147c-09d4-6196-5d9d-0234b230880a%402ndquadrant.com
2018-12-27 10:07:46 +01:00
Alexander Korotkov b450abd255 Remove entry tree root conflict checking from GIN predicate locking
According to the README we acquire predicate locks on entry tree leaves and
posting tree roots.  However, when ginFindLeafPage() is going to lock a leaf
in exclusive mode, it checks the root for conflicts regardless of whether it
is an entry or a posting tree.  Assuming that we never place a predicate lock
on an entry tree root (excluding the corner case where the root is a leaf),
this check is redundant.  This commit removes it; root conflict checking is
now controlled by a separate argument of ginFindLeafPage().

Discussion: https://postgr.es/m/CAPpHfdv7rrDyy%3DMgsaK-L9kk0AH7az0B-mdC3w3p0FSb9uoyEg%40mail.gmail.com
Author: Alexander Korotkov
Backpatch-through: 11
2018-12-27 04:24:20 +03:00
Michael Paquier 1e504f01da Ignore inherited temp relations from other sessions when truncating
Inheritance trees can include temporary tables if the parent is
permanent, which makes it possible to have multiple temporary children
belonging to different sessions.  Trying to issue a TRUNCATE on the
parent in this scenario causes a failure, so, as with any other query,
just ignore such children, which makes TRUNCATE work
transparently.
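
A minimal sketch of the scenario (two sessions, names hypothetical):

    -- Session 1: a permanent parent with a temporary child
    CREATE TABLE parent (a int);
    CREATE TEMP TABLE child_s1 () INHERITS (parent);

    -- Session 2: this used to fail because session 1's temporary child
    -- cannot be accessed here; that child is now simply skipped:
    TRUNCATE parent;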

This makes truncation behave like any other DML query working on the
parent table that also needs to work on the children.  A set of
isolation tests is added to cover basic cases.

Reported-by: Zhou Digoal
Author: Amit Langote, Michael Paquier
Discussion: https://postgr.es/m/15565-ce67a48d0244436a@postgresql.org
Backpatch-through: 9.4
2018-12-27 10:16:19 +09:00
Tom Lane 8528e3d849 Fix failure to check for open() or fsync() failures.
While it seems OK to not be concerned about fsync() failure for a
pre-existing signal file, it's not OK to not even check for open()
failure.  This at least causes complaints from static analyzers,
and I think on some platforms passing -1 to fsync() or close() might
trigger assertion-type failures.  Also add (void) casts to make clear
that we're ignoring fsync's result intentionally.

Oversights in commit 2dedf4d9a, noted by Coverity.
2018-12-26 16:08:17 -05:00
Michael Paquier b981df4cc0 Prioritize history files when archiving
At the end of recovery for the post-promotion process, a new history
file is created followed by the last partial segment of the previous
timeline.  Based on the timing, the archiver would first try to archive
the last partial segment and then the history file.  This can delay the
detection of the new timeline, particularly when the last partial
segment takes a long time to transfer, since that delays the moment the
history file of the new timeline gets archived.  If multiple instances
look at archives in the same location, this can cause a promoted standby
to pick a timeline that has already been taken.

This commit changes the order of archiving so that history files are
archived before other file types, which reduces the likelihood of the
same timeline being taken twice (without reducing the window to zero),
and it makes the archiver behave more consistently with the startup
process doing its post-promotion business.

Author: David Steele
Reviewed-by: Michael Paquier, Kyotaro Horiguchi
Discussion: https://postgr.es/m/929068cf-69e1-bba2-9dc0-e05986aed471@pgmasters.net
Backpatch-through: 9.5
2018-12-24 20:24:16 +09:00
Michael Paquier bf491a9073 Disable WAL-skipping optimization for COPY on views and foreign tables
COPY can skip writing WAL when loading data into a table which has been
created in the same transaction as the one loading the data; however,
this cannot work for views or foreign tables, as it would result in
trying to flush relation files which do not exist.  So disable the
optimization, so that the command works the same way with any
configuration of wal_level.

Tests are added to cover the different cases, which need to have
wal_level set to minimal to allow the problem to show up, and that is
not the default configuration.

Reported-by: Luis M. Carril, Etsuro Fujita
Author: Amit Langote, Michael Paquier
Reviewed-by: Etsuro Fujita
Discussion: https://postgr.es/m/15552-c64aa14c5c22f63c@postgresql.org
Backpatch-through: 10, where support for COPY on views has been added,
while v11 has added support for COPY on foreign tables.
2018-12-23 16:42:22 +09:00
Peter Eisentraut 66ca44084d Add WRITE_*_ARRAY macros
Add WRITE_ATTRNUMBER_ARRAY, WRITE_OID_ARRAY, WRITE_INT_ARRAY,
WRITE_BOOL_ARRAY macros to outfuncs.c, mirroring the existing
READ_*_ARRAY macros in readfuncs.c.

Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://www.postgresql.org/message-id/flat/8f2ebc67-e75f-9478-f5a5-bbbf090b1f8d%402ndquadrant.com
2018-12-22 07:45:13 +01:00
Peter Eisentraut 323eaf9825 Add some const decorations
These mainly help understanding the function signatures better.
2018-12-22 07:45:09 +01:00
Alexander Korotkov c952eae52a Check for conflicting queries during replay of gistvacuumpage()
013ebc0a7b implements so-called GiST microvacuum: gistgettuple() marks index
tuples as dead when kill_prior_tuple is set, and later, when a new tuple
insertion claims page space, those dead index tuples are physically deleted
from the page.  When this deletion is replayed on a standby, it might conflict
with read-only queries, but 013ebc0a7b doesn't handle this.  That may lead to
the disappearance of some tuples from read-only snapshots on the standby.

This commit implements resolution of conflicts between replay of GiST
microvacuum and standby queries.  On the master we introduce a new WAL record
type, XLOG_GIST_DELETE, which carries the necessary information.  On stable
releases we have to be careful to keep WAL compatibility: the information
required for conflict processing is just appended to the data of the
XLOG_GIST_PAGE_UPDATE record, so a PostgreSQL version that doesn't know about
conflict processing will simply ignore it.

Reported-by: Andres Freund
Diagnosed-by: Andres Freund
Discussion: https://postgr.es/m/20181212224524.scafnlyjindmrbe6%40alap3.anarazel.de
Author: Alexander Korotkov
Backpatch-through: 9.6
2018-12-21 02:37:37 +03:00
Tom Lane 7c15cef86d Base information_schema.sql_identifier domain on name, not varchar.
The SQL spec says that sql_identifier is a domain over varchar,
but it also says that that domain is supposed to represent the set
of valid identifiers for the implementation, in particular applying
a length limit matching the implementation's identifier length limit.
We were declaring sql_identifier as just "character varying", thus
duplicating what the spec says about the base type, but entirely failing
at the rest of it.

Instead, let's declare sql_identifier as a domain over type "name".
(We can drop the COLLATE "C" added by commit 6b0faf723, since that's
now implicit in "name".)  With the recent improvements to name's
comparison support, there's not a lot of functional difference between
name and varchar.  So although in principle this is a spec deviation,
it's a pretty minor one.  And correctly enforcing PG's name length limit
is a good thing; on balance this seems closer to the intent of the spec
than what we had.

But that's all just language-lawyering.  The *real* reason to do this is
that it makes sql_identifier columns exposed by information_schema views
be just direct representations of the underlying "name" catalog columns,
eliminating a semantic mismatch that was disastrous for performance of
typical queries on the information_schema.  In combination with the
recent change to allow dropping no-op CoerceToDomain nodes, this allows
(for example) queries such as

    select ... from information_schema.tables where table_name = 'foo';

to produce an indexscan rather than a seqscan on pg_class.

Discussion: https://postgr.es/m/CAFj8pRBUCX4LZ2rA2BbEkdD6NN59mgx+BLo1gO08Wod4RLtcTg@mail.gmail.com
2018-12-20 16:21:59 -05:00