Commit Graph

19002 Commits

Author SHA1 Message Date
Michael Paquier 8fb569e978 Introduce new extended routines for FDW and foreign server lookups
The cache lookup routines for foreign-data wrappers and foreign servers
are extended with an extra argument to handle a set of flags.  The only
value which can be used now indicates whether a missing object should
result in an error or not; the flags are designed to be extensible as needed.
Those new routines are added to the existing set of user-visible
FDW APIs and documented accordingly.  They will be used by future
patches to improve the SQL interface for object addresses.

Author: Michael Paquier
Reviewed-by: Álvaro Herrera
Discussion: https://postgr.es/m/CAB7nPqSZxrSmdHK-rny7z8mi=EAFXJ5J-0RbzDw6aus=wB5azQ@mail.gmail.com
2018-12-14 08:59:35 +09:00
Andres Freund 09568ec3d3 Create a separate oid range for oids assigned by genbki.pl.
The changes I made in 578b229718 assigned oids below
FirstBootstrapObjectId to objects in include/catalog/*.dat files that
did not have an oid assigned, starting at the max oid explicitly
assigned.  Tom criticized that for mainly two reasons:
1) It's not clear which values are manually and which automatically
   assigned.
2) The space below FirstBootstrapObjectId gets pretty crowded, and
   some PostgreSQL forks have used oids >= 9000 for their own objects,
   to avoid conflicting.

Thus create a new range for objects not assigned explicit oids, but
assigned by genbki.pl. For now 1-9999 is for explicitly assigned oids,
FirstGenbkiObjectId (10000) to FirstBootstrapObjectId (12000) - 1 is for
genbki.pl assigned oids, and the rest below FirstNormalObjectId (16384) is for oids
assigned during bootstrap.  It's possible that we'll have to adjust
these boundaries, but there's some headroom for now.

Add a note suggesting that oids in forks should be assigned in the
9000-9999 range.

Catversion bump for obvious reasons.

Per complaint from Tom Lane.

Author: Andres Freund
Discussion: https://postgr.es/m/16845.1544393682@sss.pgh.pa.us
2018-12-13 14:50:57 -08:00
Tom Lane 84d514887f Fix bogus logic for skipping unnecessary partcollation dependencies.
The idea here is to not call recordDependencyOn for the default collation,
since we know that's pinned.  But what the code actually did was to record
the partition key's dependency on the opclass twice, instead.

Evidently introduced by sloppy coding in commit 2186b608b.  Back-patch
to v10 where that came in.
2018-12-13 15:11:09 -05:00
Tom Lane 04fe805a17 Drop no-op CoerceToDomain nodes from expressions at planning time.
If a domain has no constraints, then CoerceToDomain doesn't really do
anything and can be simplified to a RelabelType.  This not only
eliminates cycles at execution, but allows the planner to optimize better
(for instance, match the coerced expression to an index on the underlying
column).  However, we do have to support invalidating the plan later if
a constraint gets added to the domain.  That's comparable to the case of
a change to a SQL function that had been inlined into a plan, so all the
necessary logic already exists for plans depending on functions.  We
need only duplicate or share that logic for domains.
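
As a hedged illustration of the kind of case this helps (hypothetical
domain, table, and index names, not taken from the regression tests):

    CREATE DOMAIN loose_text AS text;      -- no constraints
    CREATE TABLE docs (body text);
    CREATE INDEX ON docs (body);
    -- The cast to the constraint-free domain is now reduced to a RelabelType,
    -- so the planner can still match the comparison to the index on "body":
    EXPLAIN SELECT * FROM docs WHERE body = 'x'::loose_text;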

ALTER DOMAIN ADD/DROP CONSTRAINT need to be taught to send out sinval
messages for the domain's pg_type entry, since those operations don't
update that row.  (ALTER DOMAIN SET/DROP NOT NULL do update that row,
so no code change is needed for them.)

Testing this revealed what's really a pre-existing bug in plpgsql:
it caches the SQL-expression-tree expansion of type coercions and
had no provision for invalidating entries in that cache.  Up to now
that was only a problem if such an expression had inlined a SQL
function that got changed, which is unlikely though not impossible.
But failing to track changes of domain constraints breaks an existing
regression test case and would likely cause practical problems too.

We could fix that locally in plpgsql, but what seems like a better
idea is to build some generic infrastructure in plancache.c to store
standalone expressions and track invalidation events for them.
(It's tempting to wonder whether plpgsql's "simple expression" stuff
could use this code with lower overhead than its current use of the
heavyweight plancache APIs.  But I've left that idea for later.)

Other stuff fixed in passing:

* Allow estimate_expression_value() to drop CoerceToDomain
unconditionally, effectively assuming that the coercion will succeed.
This will improve planner selectivity estimates for cases involving
estimatable expressions that are coerced to domains.  We could have
done this independently of everything else here, but there wasn't
previously any need for eval_const_expressions_mutator to know about
CoerceToDomain at all.

* Use a dlist for plancache.c's list of cached plans, rather than a
manually threaded singly-linked list.  That eliminates a potential
performance problem in DropCachedPlan.

* Fix a couple of inconsistencies in typecmds.c about whether
operations on domains drop RowExclusiveLock on pg_type.  Our common
practice is that DDL operations do drop catalog locks, so standardize
on that choice.

Discussion: https://postgr.es/m/19958.1544122124@sss.pgh.pa.us
2018-12-13 13:24:43 -05:00
Alexander Korotkov 52ac6cd2d0 Prevent GIN deleted pages from being reclaimed too early
When GIN vacuum deletes a posting tree page, it assumes that no concurrent
searchers can access it, thanks to ginStepRight() locking two pages at once.
However, since 9.4, searches can skip parts of posting trees while descending
from the root.  That leads to the risk that a page is deleted and reclaimed
before a concurrent search can access it.

This commit prevents that by waiting for every transaction that might still
need to reference the page to finish.  Due to binary compatibility we can't
change GinPageOpaqueData to store the corresponding transaction id; instead
we reuse the page header's pd_prune_xid field, which is unused in index pages.

Discussion: https://postgr.es/m/31a702a.14dd.166c1366ac1.Coremail.chjischj%40163.com
Author: Andrey Borodin, Alexander Korotkov
Reviewed-by: Alexander Korotkov
Backpatch-through: 9.4
2018-12-13 06:55:34 +03:00
Alexander Korotkov c6ade7a8cd Prevent deadlock in ginRedoDeletePage()
On standby, ginRedoDeletePage() can run concurrently with read-only queries.
Those queries can traverse the posting tree in two ways.
1) Using rightlinks by ginStepRight(), which locks the next page before
   unlocking its left sibling.
2) Using downlinks by ginFindLeafPage(), which locks at most one page at a time.

The original lock order was: page, parent, left sibling.  That lock order can
deadlock with ginStepRight().  In order to prevent the deadlock, this commit
changes the lock order to: left sibling, page, parent.  Note that the position
of the parent in the locking order seems insignificant, because we only lock
one page at a time while traversing downlinks.

Reported-by: Chen Huajun
Diagnosed-by: Chen Huajun, Peter Geoghegan, Andrey Borodin
Discussion: https://postgr.es/m/31a702a.14dd.166c1366ac1.Coremail.chjischj%40163.com
Author: Alexander Korotkov
Backpatch-through: 9.4
2018-12-13 06:55:34 +03:00
Alexander Korotkov fd83c83d09 Fix deadlock in GIN vacuum introduced by 218f51584d
Before 218f51584d, if a posting tree page was about to be deleted, the whole
posting tree was locked by LockBufferForCleanup() on the root, preventing all
concurrent inserts.  218f51584d reduced the locking to the subtree containing
the page to be deleted.  However, due to concurrent parent splits, an inserter
doesn't always hold pins on all the pages constituting the path from the root
to the target leaf page.  That could cause a deadlock between the GIN vacuum
process and a GIN inserter, and we didn't find a non-invasive way to fix this.

This commit reverts the VACUUM behavior to lock the whole posting tree before
deleting any page.  However, we keep another useful change by 218f51584d5: the
tree is locked only if there are pages to be deleted.

Reported-by: Chen Huajun
Diagnosed-by: Chen Huajun, Andrey Borodin, Peter Geoghegan
Discussion: https://postgr.es/m/31a702a.14dd.166c1366ac1.Coremail.chjischj%40163.com
Author: Alexander Korotkov, based on ideas from Andrey Borodin and Peter Geoghegan
Reviewed-by: Andrey Borodin
Backpatch-through: 10
2018-12-13 06:55:34 +03:00
Tom Lane 77d4d88afb Repair bogus EPQ plans generated for postgres_fdw foreign joins.
postgres_fdw's postgresGetForeignPlan() assumes without checking that the
outer_plan it's given for a join relation must have a NestLoop, MergeJoin,
or HashJoin node at the top.  That's been wrong at least since commit
4bbf6edfb (which could cause insertion of a Sort node on top) and it seems
like a pretty unsafe thing to Just Assume even without that.

Through blind good fortune, this doesn't seem to have any worse
consequences today than strange EXPLAIN output, but it's clearly trouble
waiting to happen.

To fix, test the node type explicitly before touching Join-specific
fields, and avoid jamming the new tlist into a node type that can't
do projection.  Export a new support function from createplan.c
to avoid building low-level knowledge about the latter into FDWs.

Back-patch to 9.6 where the faulty coding was added.  Note that the
associated regression test cases don't show any changes before v11,
apparently because the tests back-patched with 4bbf6edfb don't actually
exercise the problem case before then (there's no top-level Sort
in those plans).

Discussion: https://postgr.es/m/8946.1544644803@sss.pgh.pa.us
2018-12-12 16:08:30 -05:00
Tom Lane 0f7ec8d9c3 Repair bogus handling of multi-assignment Params in upper plan levels.
Our support for multiple-set-clauses in UPDATE assumes that the Params
referencing a MULTIEXPR_SUBLINK SubPlan will appear before that SubPlan
in the targetlist of the plan node that calculates the updated row.
(Yeah, it's a hack...)  In some PG branches it's possible that a Result
node gets inserted between the primary calculation of the update tlist
and the ModifyTable node.  setrefs.c did the wrong thing in this case
and left the upper-level Params as Params, causing a crash at runtime.
What it should do is replace them with "outer" Vars referencing the child
plan node's output.  That's a result of careless ordering of operations
in fix_upper_expr_mutator, so we can fix it just by reordering the code.
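
For reference, the statement shape involved is a multiple-column SET with a
sub-SELECT source (hypothetical table; the crash additionally required a plan
shape with a Result node above the update targetlist):

    CREATE TABLE t (a int, b int, c int);
    UPDATE t SET (b, c) = (SELECT a + 1, a + 2) WHERE a > 0;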

Fix fix_join_expr_mutator similarly for consistency, even though join
nodes could never appear in such a context.  (In general, it seems
likely to be a bit cheaper to use Vars than Params in such situations
anyway, so this patch might offer a tiny performance improvement.)

The hazard extends back to 9.5 where the MULTIEXPR_SUBLINK stuff
was introduced, so back-patch that far.  However, this may be a live
bug only in 9.6.x and 10.x, as the other branches don't seem to want
to calculate the final tlist below the Result node.  (That plan shape
change between branches might be a mini-bug in itself, but I'm not
really interested in digging into the reasons for that right now.
Still, add a regression test memorializing what we expect there,
so we'll notice if it changes again.)

Per bug report from Eduards Bezverhijs.

Discussion: https://postgr.es/m/b6cd572a-3e44-8785-75e9-c512a5a17a73@tieto.com
2018-12-12 13:49:41 -05:00
Michael Paquier cc53123bcc Tweak pg_partition_tree for undefined relations and unsupported relkinds
This fixes a crash which happened when calling the function directly
with a relation OID referring to a non-existing object, and changes the
behavior so that NULL is returned for unsupported relkinds instead of
generating an error.  This puts the new function in line with many other
system functions, and eases actions like full scans of pg_class.
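
A hedged sketch of the adjusted behavior (hypothetical object name; the
output for unsupported relkinds is simply NULL, as described above):

    CREATE SEQUENCE some_seq;
    SELECT * FROM pg_partition_tree('some_seq'::regclass);  -- NULL, no longer an error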

Author: Michael Paquier
Reviewed-by: Amit Langote, Stephen Frost
Discussion: https://postgr.es/m/20181207010406.GO2407@paquier.xyz
2018-12-12 09:49:39 +09:00
Tom Lane 001bb9f3ed Add stack depth checks to key recursive functions in backend/nodes/*.c.
Although copyfuncs.c has a check_stack_depth call in its recursion,
equalfuncs.c, outfuncs.c, and readfuncs.c lacked one.  This seems
unwise.

Likewise fix planstate_tree_walker(), in branches where that exists.

Discussion: https://postgr.es/m/30253.1544286631@sss.pgh.pa.us
2018-12-10 11:12:43 -05:00
Tom Lane b7a29695f7 Make TupleDescInitBuiltinEntry throw error for unsupported types.
Previously, it would just pass back a partially-uninitialized tupdesc,
which doesn't seem like a safe or useful behavior.

Backpatch to v10 where this code came in.

Discussion: https://postgr.es/m/30830.1544384975@sss.pgh.pa.us
2018-12-10 10:38:48 -05:00
Stephen Frost 96c702c1ed Remove dead code in toast_fetch_datum_slice
In toast_fetch_datum_slice(), we Assert() that what is passed in isn't
compressed, but we then later had a check computing what the length would be
if what was passed in were compressed.  That later check is rather confusing
since toast_fetch_datum_slice() is only ever called with non-compressed datums
and the earlier Assert() makes it clear that one shouldn't be passing in
compressed datums.

Add a comment to make it clear that toast_fetch_datum_slice() is just
for non-compressed datums, and remove the dead code.
2018-12-10 09:31:38 -05:00
Michael Paquier 6d8727f95e Ensure cleanup of orphan archive status files
When a WAL segment is recycled, its ".ready" and ".done" status files are
also removed automatically; however, this is not done in a durable
manner.  Hence, after a crash, it is possible that a ".ready" status file
is still around with its corresponding segment already gone.

If the backend reaches such a state, the archive command would most
likely complain about a non-existing segment and keep retrying, causing
WAL segments to bloat pg_wal/ and potentially making Postgres crash
hard when running out of space.

As status files are removed after each individual segment, using
durable_unlink() does not completely close the window either, as a crash
could happen between the moment the WAL segment is recycled and the
moment its status files are removed.  This also has some performance
impact due to the additional fsync() calls needed to make the removal
durable.  Doing the cleanup at recovery is not cost-free either, as this
makes crash recovery potentially take longer than necessary.

So, instead, as per an idea of Stephen Frost, make the archiver aware of
orphan status files and remove them on-the-fly if the corresponding
segment goes missing.  Removal failures follow a model close to what
happens for WAL segments, where multiple attempts are done before giving
up temporarily, and where a successful orphan removal makes the archiver
move immediately to the next WAL segment thought to be ready to be
archived.

Author: Michael Paquier
Reviewed-by: Nathan Bossart, Andres Freund, Stephen Frost, Kyotaro
Horiguchi
Discussion: https://postgr.es/m/20180928032827.GF1500@paquier.xyz
2018-12-10 15:00:59 +09:00
Michael Paquier 7fee252f6f Add timestamp of last received message from standby to pg_stat_replication
The timestamp generated by the standby at message transmission has been
included in the protocol since its introduction for both the status
update message and hot standby feedback message, but it has never
appeared in pg_stat_replication.  Seeing this timestamp does not matter
much on a cluster with a lot of activity, but on a mostly-idle cluster,
this lets monitoring react faster than the configured timeouts.

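Assuming the new column is exposed as reply_time (an assumed name; check
the view definition in the documentation), monitoring could poll it like
this:

    SELECT application_name, state, reply_time
      FROM pg_stat_replication;
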
Author: MyungKyu LIM
Reviewed-by: Michael Paquier, Masahiko Sawada
Discussion: https://postgr.es/m/1657809367.407321.1533027417725.JavaMail.jboss@ep2ml404
2018-12-09 16:35:06 +09:00
Tom Lane 5deadfef28 Fix misapplication of pgstat_count_truncate to wrong relation.
The stanza of ExecuteTruncate[Guts] that truncates a target table's toast
relation re-used the loop local variable "rel" to reference the toast rel.
This was safe enough when written, but commit d42358efb added code below
that point which assumed "rel" still pointed to the parent table.  Therefore,
the stats counter update was applied to the wrong relcache entry (the
toast rel not the user rel); and if we were unlucky and that relcache
entry had been flushed during reindex_relation, very bad things could
ensue.

(I'm surprised that CLOBBER_CACHE_ALWAYS testing hasn't found this.
I'm even more surprised that the problem wasn't detected during the
development of d42358efb; it must not have been tested in any case
with a toast table, as the incorrect stats counts are very obvious.)

To fix, replace use of "rel" in that code branch with a more local
variable.  Adjust test cases added by d42358efb so that some of them
use tables with toast tables.

Per bug #15540 from Pan Bian.  Back-patch to 9.5 where d42358efb came in.

Discussion: https://postgr.es/m/15540-01078812338195c0@postgresql.org
2018-12-07 12:11:59 -05:00
Tom Lane 9286ef8e91 Clean up sloppy coding in publicationcmds.c's OpenTableList().
Remove dead code (which would be incorrect if it weren't dead),
per report from Pan Bian.  Add a CHECK_FOR_INTERRUPTS in the
inner loop over child relations, because there's little point
in having one in the outer loop if there's not one here too.
Minor stylistic adjustments and comment improvements.

Seems to be aboriginal to this code (cf commit 665d1fad9).
Back-patch to v10 where that came in, not because any of this
is significant, but just to keep the branches looking similar.

Discussion: https://postgr.es/m/15539-06d00ef6b1e2e1bb@postgresql.org
2018-12-07 11:02:39 -05:00
Michael Paquier 730422afcd Fix some errhint and errdetail strings missing a period
As per the error message style guide of the documentation, those should
be full sentences.

Author: Daniel Gustafsson
Reviewed-by: Michael Paquier, Álvaro Herrera
Discussion: https://postgr.es/m/1E8D49B4-16BC-4420-B4ED-58501D9E076B@yesql.se
2018-12-07 07:47:42 +09:00
Tom Lane d2b0b60e71 Improve our response to invalid format strings, and detect more cases.
Places that are testing for *printf failure ought to include the format
string in their error reports, since bad-format-string is one of the
more likely causes of such failure.  This both makes it easier to find
and repair the mistake, and provides at least some useful info to the
user who stumbles across such a problem.

Also, tighten snprintf.c to report EINVAL for an invalid flag or
final character in a format %-spec (including the case where the
%-spec is missing a final character altogether).  This seems like
better project policy, and it also allows removing an instruction
or two from the hot code path.

Back-patch the error reporting change in pvsnprintf, since it should be
harmless and may be helpful; but not the snprintf.c change.

Per discussion of bug #15511 from Ertuğrul Kahveci, which reported an
invalid translated format string.  These changes don't fix that error,
but they should improve matters next time we make such a mistake.

Discussion: https://postgr.es/m/15511-1d8b6a0bc874112f@postgresql.org
2018-12-06 15:08:44 -05:00
Stephen Frost a243c55326 Cleanup comments in xlog compression
Skipping over the "hole" in full page images in the XLOG code was
described as being a form of compression, but this got a bit confusing
now that we also have PGLZ-based compression happening.  So adjust the
wording to talk about "removing" the "hole", and keep the talk about
compression to where PGLZ-based compression of the full page images is
actually used.

Reviewed-By: Kyotaro Horiguchi
Discussion: https://postgr.es/m/20181127234341.GM3415@tamriel.snowman.net
2018-12-06 11:05:39 -05:00
Alvaro Herrera 71a05b2232 Don't mark partitioned indexes invalid unnecessarily
When an index is created on a partitioned table using ONLY (don't
recurse to partitions), it gets marked invalid until index partitions
are attached for each table partition.  But there's no reason to do this
if there are no partitions ... and moreover, there's no way to get the
index to become valid afterwards, because all partitions that get
created/attached get their own index partition already attached to the
parent index, so there's no chance to do ALTER INDEX ... ATTACH PARTITION
that would make the parent index valid.

Fix by not marking the index as invalid to begin with.
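
A hedged sketch of the case this changes (hypothetical names): with no
partitions yet, there is nothing left to attach, so the index can be valid
from the start.

    CREATE TABLE parted (a int) PARTITION BY RANGE (a);
    CREATE INDEX parted_a_idx ON ONLY parted (a);
    SELECT indisvalid FROM pg_index
     WHERE indexrelid = 'parted_a_idx'::regclass;   -- now expected to be true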

This is very similar to 9139aa1942, but the pg_dump aspect does not
appear to be relevant until we add FKs that can point to PKs on
partitioned tables.  (I tried to cause the pg_upgrade test to break by
leaving some of these bogus tables around, but wasn't able to.)

Making this change means that an index that was supposed to be invalid
in the insert_conflict regression test is no longer invalid; reorder the
DDL so that the test continues to verify the behavior we want it to.

Author: Álvaro Herrera
Reviewed-by: Amit Langote
Discussion: https://postgr.es/m/20181203225019.2vvdef2ybnkxt364@alvherre.pgsql
2018-12-05 13:31:51 -03:00
Stephen Frost f502fc88b3 Fix typo
Backends don't typically exist uncleanly, but they can certainly exit
uncleanly, and it's exiting uncleanly that's being discussed here.
2018-12-04 11:04:54 -05:00
Michael Paquier ee2b37ae04 Add some missing schema qualifications
This does not improve the security and reliability of the touched areas,
but it makes the style more consistent.

Author: Michael Paquier
Reviewed-by: Noah Misch
Discussion: https://postgr.es/m/20180309075538.GD9376@paquier.xyz
2018-12-03 14:21:52 +09:00
Alvaro Herrera 9dc1225855 Silence compiler warning
My original coding was questionable anyway.

Reported-by: Sergei Kornilov
Discussion: https://postgr.es/m/9645101543575886@myt6-27270b78ac4f.qloud-c.yandex.net
2018-11-30 10:20:49 -03:00
Amit Kapila dcfdf56e89 Fix typo. 2018-11-30 11:50:43 +05:30
Michael Paquier 5c99513975 Fix various checksum check problems for pg_verify_checksums and base backups
Three issues are fixed in this patch:
- Base backups forgot to ignore files specific to EXEC_BACKEND, leading
to spurious warnings when checksums are enabled, per analysis from me.
- pg_verify_checksums forgot about files specific to EXEC_BACKEND,
leading to failures of the tool on any such build, particularly Windows.
This error was originally found by newly-introduced TAP tests in various
buildfarm members using EXEC_BACKEND.
- pg_verify_checksums forgot to account for temporary files and temporary
paths, which can be valid relation files but carry no checksums, per
report from Andres Freund.  More tests are added to cover this case.

A new test case which emulates corruption for a file in a different
tablespace is added, coming from Michael Banck, while I have written the
main code and refactored the test code.

Author: Michael Banck, Michael Paquier
Reviewed-by: Stephen Frost, David Steele
Discussion: https://postgr.es/m/20181021134206.GA14282@paquier.xyz
2018-11-30 10:34:45 +09:00
Alvaro Herrera 88bdbd3f74 Add log_statement_sample_rate parameter
This allows setting a lower log_min_duration_statement value without
incurring excessive log traffic (which reduces performance).  This can
be useful for analyzing workloads with lots of short queries.

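A hedged usage sketch (settings applied in a superuser session; exact log
output not shown):

    SET log_min_duration_statement = '100ms';
    SET log_statement_sample_rate = 0.1;   -- log roughly 10% of statements exceeding the threshold
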
Author: Adrien Nayrat
Reviewed-by: David Rowley, Vik Fearing
Discussion: https://postgr.es/m/c30ee535-ee1e-db9f-fa97-146b9f62caed@anayrat.info
2018-11-29 18:42:53 -03:00
Thomas Munro 2ac180c286 Fix minor typo in dsa.c.
Author: Takeshi Ideriha
Discussion: https://postgr.es/m/4E72940DA2BF16479384A86D54D0988A6F3BF22D%40G01JPEXMBKW04
2018-11-29 14:14:26 +13:00
Michael Paquier 4c703369af Fix handling of synchronous replication for stopping WAL senders
This fixes an oversight from c6c3334 which forgot that if a subset of
WAL senders are stopping and in a sync state, other WAL senders could
still be waiting for a WAL position to be synced while committing a
transaction.  However the subset of stopping senders would not release
waiters, potentially breaking synchronous replication guarantees.  This
commit makes sure that stopping WAL senders are still able to release
waiters and are tracked properly.

On 9.4, this can also trigger an assertion failure when, for example,
max_wal_senders is set to 1 and a WAL sender is not able to find itself
in a synchronous state when the instance stops.

Reported-by: Paul Guo
Author: Paul Guo, Michael Paquier
Discussion: https://postgr.es/m/CAEET0ZEv8VFqT3C-cQm6byOB4r4VYWcef1J21dOX-gcVhCSpmA@mail.gmail.com
Backpatch-through: 9.4
2018-11-29 09:12:19 +09:00
Peter Geoghegan 1a990b207b Have BufFileSize() ereport() on FileSize() failure.
Move the responsibility for checking for and reporting a failure from
the only current BufFileSize() caller, logtape.c, to BufFileSize()
itself.  Code within buffile.c is generally responsible for interfacing
with fd.c to report irrecoverable failures.  This seems like a
convention that's worth sticking to.

Reorganizing things this way makes it easy to make the error message
raised in the event of BufFileSize() failure descriptive of the
underlying problem.  We're now clear on the distinction between
temporary file name and BufFile name, and can show errno, confident that
its value actually relates to the error being reported.  In passing, an
existing, similar buffile.c ereport() + errcode_for_file_access() site
is changed to follow the same conventions.

The API of the function BufFileSize() is changed by this commit, despite
already being in a stable release (Postgres 11).  This seems acceptable,
since the BufFileSize() ABI was changed by commit aa55183042, which
hasn't made it into a point release yet.  Besides, it's difficult to
imagine a third party BufFileSize() caller not just raising an error
anyway, since BufFile state should be considered corrupt when
BufFileSize() fails.

Per complaint from Tom Lane.

Discussion: https://postgr.es/m/26974.1540826748@sss.pgh.pa.us
Backpatch: 11-, where shared BufFiles were introduced.
2018-11-28 14:42:54 -08:00
Peter Eisentraut f2cbffc7a6 Only allow one recovery target setting
The previous recovery.conf regime accepted multiple recovery_target*
settings and used the last one.  This does not translate well to the
general GUC system.  Specifically, under EXEC_BACKEND, the settings
are written out not in any particular order, so the order in which
they were originally set is not available to new processes.

Rather than redesign the GUC system, it was decided to abandon the old
behavior and only allow one recovery target setting.  A second setting
will cause an error.  However, it is allowed to set the same parameter
multiple times or unset a parameter and set a different one.

Discussion: https://www.postgresql.org/message-id/flat/27802171543235530%40iva2-6ec8f0a6115e.qloud-c.yandex.net#701a59c837ad0bf8c244344aaf3ef5a4
2018-11-28 13:55:54 +01:00
Thomas Munro 0f9cdd7dca Don't set PAM_RHOST for Unix sockets.
Since commit 2f1d2b7a we have set PAM_RHOST to "[local]" for Unix
sockets.  This caused Linux PAM's libaudit integration to make DNS
requests for that name.  It's not exactly clear what value PAM_RHOST
should have in that case, but it seems clear that we shouldn't set it
to an unresolvable name, so don't do that.

Back-patch to 9.6.  Bug #15520.

Author: Thomas Munro
Reviewed-by: Peter Eisentraut
Reported-by: Albert Schabhuetl
Discussion: https://postgr.es/m/15520-4c266f986998e1c5%40postgresql.org
2018-11-28 14:12:30 +13:00
Tomas Vondra f69c959df0 Do not decode TOAST data for table rewrites
During table rewrites (VACUUM FULL and CLUSTER), the main heap is logged
using XLOG / FPI records, and thus (correctly) ignored in decoding.
But the associated TOAST table is WAL-logged as plain INSERT records,
and so was logically decoded and passed to reorder buffer.

That has severe consequences with TOAST tables of non-trivial size.
Firstly, reorder buffer has to keep all those changes, possibly spilling
them to a file, incurring I/O costs and consuming disk space.

Secondly, ReorderBufferCommit() was stashing all those TOAST chunks into
a hash table, which got discarded only after processing the row from the
main heap.  But as the main heap is not decoded for rewrites, this never
happened, so all the TOAST data accumulated in memory, resulting either
in excessive memory consumption or OOM.

The fix is simple, as commit e9edc1ba already introduced infrastructure
(namely HEAP_INSERT_NO_LOGICAL flag) to skip logical decoding of TOAST
tables, but it only applied it to system tables.  So simply use it for
all TOAST data in raw_heap_insert().
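
One hedged way to observe the effect is with the test_decoding module
(requires wal_level = logical and a free replication slot; names are
illustrative):

    SELECT pg_create_logical_replication_slot('s1', 'test_decoding');
    CREATE TABLE big (id int, payload text);
    INSERT INTO big SELECT g, repeat('x', 1000000) FROM generate_series(1, 50) g;
    VACUUM FULL big;   -- rewrite: its TOAST traffic is no longer queued by the decoder
    SELECT count(*) FROM pg_logical_slot_get_changes('s1', NULL, NULL);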

That would however solve only the memory consumption issue - the TOAST
changes would still be decoded and added to the reorder buffer, and
spilled to disk (although without TOAST tuple data, so much smaller).
But we can solve that by tweaking DecodeInsert() to just ignore such
INSERT records altogether, using XLH_INSERT_CONTAINS_NEW_TUPLE flag,
instead of skipping them later in ReorderBufferCommit().

Review: Masahiko Sawada
Discussion: https://www.postgresql.org/message-id/flat/1a17c643-e9af-3dba-486b-fbe31bc1823a%402ndquadrant.com
Backpatch: 9.4-, where logical decoding was introduced
2018-11-28 01:43:08 +01:00
Thomas Munro d67dae036b Don't count zero-filled buffers as 'read' in EXPLAIN.
If you extend a relation, it should count as a block written, not
read (we write a zero-filled block).  If you ask for a zero-filled
buffer, it shouldn't be counted as read or written.

Later we might consider counting zero-filled buffers with a separate
counter, if they become more common due to future work.

Author: Thomas Munro
Reviewed-by: Haribabu Kommi, Kyotaro Horiguchi, David Rowley
Discussion: https://postgr.es/m/CAEepm%3D3JytB3KPpvSwXzkY%2Bdwc5zC8P8Lk7Nedkoci81_0E9rA%40mail.gmail.com
2018-11-28 11:58:10 +13:00
Andres Freund b238527664 Fix jit compilation bug on wide tables.
The function generated to perform JIT compiled tuple deforming failed
when HeapTupleHeader's t_hoff was bigger than a signed int8. I'd
failed to realize that LLVM's getelementptr would treat an int8 index
argument as signed, rather than unsigned.  That means that a hoff
larger than 127 would result in a negative offset being applied.  Fix
that by widening the index to 32bit.

Add a testcase with a wide table. Don't drop it, as it seems useful to
verify other tools deal properly with wide tables.

Thanks to Justin Pryzby for both reporting a bug and then reducing it
to a reproducible testcase!

Reported-By: Justin Pryzby
Author: Andres Freund
Discussion: https://postgr.es/m/20181115223959.GB10913@telsasoft.com
Backpatch: 11, just as jit compilation was
2018-11-27 10:07:03 -08:00
Peter Eisentraut 2dedf4d9a8 Integrate recovery.conf into postgresql.conf
recovery.conf settings are now set in postgresql.conf (or other GUC
sources).  Currently, all the affected settings are PGC_POSTMASTER;
this could be refined in the future case by case.

Recovery is now initiated by a file recovery.signal.  Standby mode is
initiated by a file standby.signal.  The standby_mode setting is
gone.  If a recovery.conf file is found, an error is issued.

The trigger_file setting has been renamed to promote_trigger_file as
part of the move.

The documentation chapter "Recovery Configuration" has been integrated
into "Server Configuration".

pg_basebackup -R now appends settings to postgresql.auto.conf and
creates a standby.signal file.

Author: Fujii Masao <masao.fujii@gmail.com>
Author: Simon Riggs <simon@2ndquadrant.com>
Author: Abhijit Menon-Sen <ams@2ndquadrant.com>
Author: Sergei Kornilov <sk@zsrv.org>
Discussion: https://www.postgresql.org/message-id/flat/607741529606767@web3g.yandex.ru/
2018-11-25 16:33:40 +01:00
Thomas Munro ab69ea9fee Fix assertion failure for SSL connections.
Commit cfdf4dc4 added an assertion that every WaitLatch() or similar
handles postmaster death.  One place did not, but was missed in
review and testing due to the need for an SSL connection.  Fix, by
asking for WL_EXIT_ON_PM_DEATH.

Reported-by: Christoph Berg
Discussion: https://postgr.es/m/20181124143845.GA15039%40msg.df7cb.de
2018-11-25 18:34:58 +13:00
Tom Lane cbdb8b4c01 Fix float-to-integer coercions to handle edge cases correctly.
ftoi4 and its sibling coercion functions did their overflow checks in
a way that looked superficially plausible, but actually depended on an
assumption that the MIN and MAX comparison constants can be represented
exactly in the float4 or float8 domain.  That fails in ftoi4, ftoi8,
and dtoi8, resulting in a possibility that values near the MAX limit will
be wrongly converted (to negative values) when they need to be rejected.

Also, because we compared before rounding off the fractional part,
the other three functions threw errors for values that really ought
to get rounded to the min or max integer value.

Fix by doing rint() first (requiring an assumption that it handles
NaN and Inf correctly; but dtoi8 and ftoi8 were assuming that already),
and by comparing to values that should coerce to float exactly, namely
INTxx_MIN and -INTxx_MIN.  Also remove some random cosmetic discrepancies
between these six functions.
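
A hedged sketch of the edge cases involved (exact error wording not shown):

    SELECT 2147483647::float4::int4;     -- float4 rounds this up to 2^31; now raises an
                                         -- out-of-range error instead of wrapping to a negative value
    SELECT 2147483647.4::float8::int4;   -- now rounds to 2147483647 instead of being rejected
    SELECT 'NaN'::float8::int4;          -- NaN is rejected with an error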

Per bug #15519 from Victor Petrovykh.  This should get back-patched,
but first let's see what the buildfarm thinks of it --- I'm not too
sure about portability of some of the regression test cases.

Patch by me; thanks to Andrew Gierth for analysis and discussion.

Discussion: https://postgr.es/m/15519-4fc785b483201ff1@postgresql.org
2018-11-23 20:57:11 -05:00
Tom Lane a314c34079 Clamp semijoin selectivity to be not more than inner-join selectivity.
We should never estimate the output of a semijoin to be more rows than
we estimate for an inner join with the same input rels and join condition;
it's obviously impossible for that to happen.  However, given the
relatively poor quality of our semijoin selectivity estimates ---
particularly, but not only, in cases where we punt and return a default
estimate --- we did often deliver such estimates.  To improve matters,
calculate both estimates inside eqjoinsel() and take the smaller one.

The bulk of this patch is just mechanical refactoring to avoid repetitive
information lookup when we call both eqjoinsel_semi and eqjoinsel_inner.
The actual new behavior is just

	selec = Min(selec, inner_rel->rows * selec_inner);

which looks a bit odd but is correct because of our different definitions
for inner and semi join selectivity.

There is one ensuing plan change in the regression tests, but it looks
reasonable enough (and checking the actual row counts shows that the
estimate moved closer to reality, not further away).

Per bug #15160 from Alexey Ermakov.  Although this is arguably a bug fix,
I won't risk destabilizing plan choices in stable branches by
back-patching.

Tom Lane, reviewed by Melanie Plageman

Discussion: https://postgr.es/m/152395805004.19366.3107109716821067806@wrigleys.postgresql.org
2018-11-23 12:48:49 -05:00
Alvaro Herrera 3be5fe2b10 Silence compiler warnings
Commit cfdf4dc4fc left a few unnecessary assignments, one of which
caused compiler warnings, as reported by Erik Rijkers.  Remove them all.

Discussion: https://postgr.es/m/df0dcca2025b3d90d946ecc508ca9678@xs4all.nl
2018-11-23 13:01:05 -03:00
Alvaro Herrera de38ce1b83 Don't allow partitioned indexes in pg_global tablespace
Missing in dfa6081419.
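
A hedged example of what is now rejected (hypothetical table name; exact
error text not shown):

    CREATE TABLE parted (a int) PARTITION BY LIST (a);
    CREATE INDEX ON parted (a) TABLESPACE pg_global;   -- now fails; pg_global is reserved for shared catalogs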

Author: David Rowley
Discussion: https://postgr.es/m/CAKJS1f-M3NMTCpv=vDfkoqHbMPFf=3-Z1ud=+1DHH00tC+zLaQ@mail.gmail.com
2018-11-23 08:48:20 -03:00
Thomas Munro cfdf4dc4fc Add WL_EXIT_ON_PM_DEATH pseudo-event.
Users of the WaitEventSet and WaitLatch() APIs can now choose between
asking for WL_POSTMASTER_DEATH and then handling it explicitly, or asking
for WL_EXIT_ON_PM_DEATH to trigger immediate exit on postmaster death.
This reduces code duplication, since almost all callers want the latter.

Repair all code that was previously ignoring postmaster death completely,
or requesting the event but ignoring it, or requesting the event but then
doing an unconditional PostmasterIsAlive() call every time through its
event loop (which is an expensive syscall on platforms for which we don't
have USE_POSTMASTER_DEATH_SIGNAL support).

Assert that callers of WaitLatchXXX() under the postmaster remember to
ask for either WL_POSTMASTER_DEATH or WL_EXIT_ON_PM_DEATH, to prevent
future bugs.

The only process that doesn't handle postmaster death is syslogger.  It
waits until all backends holding the write end of the syslog pipe
(including the postmaster) have closed it by exiting, to be sure to
capture any parting messages.  By using the WaitEventSet API directly
it avoids the new assertion, and as a by-product it may be slightly
more efficient on platforms that have epoll().

Author: Thomas Munro
Reviewed-by: Kyotaro Horiguchi, Heikki Linnakangas, Tom Lane
Discussion: https://postgr.es/m/CAEepm%3D1TCviRykkUb69ppWLr_V697rzd1j3eZsRMmbXvETfqbQ%40mail.gmail.com,
            https://postgr.es/m/CAEepm=2LqHzizbe7muD7-2yHUbTOoF7Q+qkSD5Q41kuhttRTwA@mail.gmail.com
2018-11-23 20:46:34 +13:00
Tom Lane eba2ce1712 Fix another crash in json{b}_populate_recordset and json{b}_to_recordset.
populate_recordset_worker() failed to consider the possibility that the
supplied JSON data contains no rows, so that update_cached_tupdesc never
got called.  This led to a null-pointer dereference since commit 9a5e8ed28;
before that it led to a bogus "set-valued function called in context that
cannot accept a set" error.  Fix by forcing the update to happen.
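
The previously crashing shape, for reference (zero-row JSON input with a
hypothetical column definition list):

    SELECT * FROM json_to_recordset('[]') AS x(a int, b text);          -- returns zero rows
    SELECT * FROM jsonb_to_recordset('[]'::jsonb) AS x(a int, b text);  -- likewise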

Per bug #15514.  Back-patch to v11 as 9a5e8ed28 was.  (If we were excited
about the bogus error, we could perhaps go back further, but it'd take more
work to figure out how to fix it in older branches.  Given the lack of
field complaints about that aspect, I'm not excited.)

Discussion: https://postgr.es/m/15514-59d5b4c4065b178b@postgresql.org
2018-11-22 15:14:01 -05:00
Michael Paquier 25c026c284 Fix typo in description of ExecFindPartition
Author: Amit Langote
Discussion: https://postgr.es/m/CA+HiwqHg0=UL+Dhh3gpiwYNA=ufk9Lb7GQ2c=5rs=ZmVTP7xAw@mail.gmail.com
2018-11-22 13:23:54 +09:00
Alvaro Herrera 03e10b962f Fix typo in commit 6f7d02aa60
Per pink buildfarm.
2018-11-21 15:35:40 -03:00
Alvaro Herrera ee07e38c14 Fix PartitionDispatchData vertical whitespace
Per David Rowley
Discussion: https://postgr.es/m/CAKJS1f-MstvBWdkOzACsOHyBgj2oXcBM8kfv+NhVe-Ux-wq9Sg@mail.gmail.com
2018-11-21 15:04:25 -03:00
Alvaro Herrera 6f7d02aa60 instr_time.h: add INSTR_TIME_SET_CURRENT_LAZY
Sets the timestamp to current if not already set.  Will acquire more
callers momentarily.

Author: Fabien Coelho
Discussion: https://postgr.es/m/alpine.DEB.2.21.1808111104320.1705@lancre
2018-11-21 15:04:25 -03:00
Andres Freund 578b229718 Remove WITH OIDS support, change oid catalog column visibility.
Previously tables declared WITH OIDS, including a significant fraction
of the catalog tables, stored the oid column not as a normal column,
but as part of the tuple header.

This special column was not shown by default, which was somewhat odd,
as it's often (consider e.g. pg_class.oid) one of the more important
parts of a row.  Neither pg_dump nor COPY included the contents of the
oid column by default.

The fact that the oid column was not an ordinary column necessitated a
significant amount of special case code to support oid columns.  That
was already painful for the existing code, but the upcoming work aiming
to make table storage pluggable would have required expanding and
duplicating that "specialness" significantly.

WITH OIDS has been deprecated since 2005 (commit ff02d0a05280e0).
Remove it.

Removing includes:
- CREATE TABLE and ALTER TABLE syntax for declaring the table to be
  WITH OIDS has been removed (WITH (oids[ = true]) will error out)
- pg_dump does not support dumping tables declared WITH OIDS and will
  issue a warning when dumping one (and ignore the oid column).
- restoring a pg_dump archive with pg_restore will warn when
  restoring a table with oid contents (and ignore the oid column)
- COPY will refuse to load binary dump that includes oids.
- pg_upgrade will error out when encountering tables declared WITH
  OIDS, they have to be altered to remove the oid column first.
- Functionality to access the oid of the last inserted row (like
  plpgsql's RESULT_OID, spi's SPI_lastoid, ...) has been removed.

The syntax for declaring a table WITHOUT OIDS (or WITH (oids = false)
for CREATE TABLE) is still supported. While that requires a bit of
support code, it seems unnecessary to break applications / dumps that
do not use oids, and are explicit about not using them.

The biggest user of WITH OID columns was postgres' catalog. This
commit changes all 'magic' oid columns to be columns that are normally
declared and stored. To reduce unnecessary query breakage all the
newly added columns are still named 'oid', even if a table's column
naming scheme would indicate 'reloid' or such.  This obviously
requires adapting a lot of code, mostly replacing oid access via
HeapTupleGetOid() with access to the underlying Form_pg_*->oid column.

The bootstrap process now assigns oids for all oid columns in
genbki.pl that do not have an explicit value (starting at the largest
oid previously used); only oids assigned later will be above
FirstBootstrapObjectId.  As the oid column now is a normal column, the
special bootstrap syntax for oids has been removed.

Oids are not automatically assigned during insertion anymore; all
backend code explicitly assigns oids with GetNewOidWithIndex().  For
the rare case that insertions into the catalog via SQL are called for,
the new pg_nextoid() function can be used (which only works on catalog
tables).

The fact that oid columns on system tables are now normal columns
means that they will be included in the set of columns expanded
by * (i.e. SELECT * FROM pg_class will now include the table's oid,
previously it did not).  It'd not technically be hard to hide the oid
column by default, but that'd mean confusing behavior would either
have to be carried forward forever, or it'd cause breakage down the
line.
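
A hedged sketch of the user-visible consequences (hypothetical table name):

    CREATE TABLE t (a int) WITH (oids = true);    -- now raises an error
    CREATE TABLE t (a int) WITH (oids = false);   -- still accepted
    SELECT oid, relname FROM pg_class LIMIT 1;    -- catalog oid is an ordinary column now
    SELECT * FROM pg_class LIMIT 1;               -- ... and is included in the * expansion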

While it's not unlikely that further adjustments are needed, the
scope/invasiveness of the patch makes it worthwhile to merge this
now.  It's painful to maintain externally, too complicated to commit
after the code freeze, and a dependency of a number of other
patches.

Catversion bump, for obvious reasons.

Author: Andres Freund, with contributions by John Naylor
Discussion: https://postgr.es/m/20180930034810.ywp2c7awz7opzcfr@alap3.anarazel.de
2018-11-20 16:00:17 -08:00
Peter Eisentraut ea8bc349bd Make detection of SSL_CTX_set_min_proto_version more portable
As already explained in configure.in, using the OpenSSL version number
to detect presence of functions doesn't work, because LibreSSL reports
incompatible version numbers.  Fortunately, the functions we need here
are actually macros, so we can just test for them directly.
2018-11-20 22:59:36 +01:00
Peter Eisentraut e73e67c719 Add settings to control SSL/TLS protocol version
For example:

    ssl_min_protocol_version = 'TLSv1.1'
    ssl_max_protocol_version = 'TLSv1.2'

Reviewed-by: Steve Singer <steve@ssinger.info>
Discussion: https://www.postgresql.org/message-id/flat/1822da87-b862-041a-9fc2-d0310c3da173@2ndquadrant.com
2018-11-20 22:12:10 +01:00
Peter Eisentraut 2d9140ed26 Make WAL description output more consistent
The output for record types XLOG_DBASE_CREATE and XLOG_DBASE_DROP used
the order dbid/tablespaceid, whereas elsewhere the order is
tablespaceid/dbid[/relfilenodeid].  Flip the order for those two types
to make it consistent.

Author: Jean-Christophe Arnu <jcarnu@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/CAHZmTm18Ln62KW-G8NYvO1wbBL3QU1E76Zep=DuHmg-zS2XFAg@mail.gmail.com/
2018-11-20 13:30:01 +01:00
Peter Eisentraut a568cadaff Refine some guc.c help texts
These settings apply to communication with the sending server, which
is not necessarily a primary.

Author: Sergei Kornilov <sk@zsrv.org>
2018-11-20 06:38:34 +01:00
Tom Lane cb09903fe6 Add needed #include.
Per POSIX, WIFSIGNALED and related macros are provided by <sys/wait.h>.
Apparently on Linux they're also pulled in by some other inclusions,
but BSD-ish systems are pickier.  Fixes portability issue in ffa4cbd62.

Per buildfarm.
2018-11-19 17:28:04 -05:00
Tom Lane ffa4cbd623 Handle EPIPE more sanely when we close a pipe reading from a program.
Previously, any program launched by COPY TO/FROM PROGRAM inherited the
server's setting of SIGPIPE handling, i.e. SIG_IGN.  Hence, if we were
doing COPY FROM PROGRAM and closed the pipe early, the child process
would see EPIPE on its output file and typically would treat that as
a fatal error, in turn causing the COPY to report an error.  Similarly,
one could get a failure report from a query that didn't read all of
the output from a contrib/file_fdw foreign table that uses file_fdw's
PROGRAM option.
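
One hedged way to reach the early-close case through file_fdw (hypothetical
names; assumes 'seq' is available on the server host):

    CREATE EXTENSION file_fdw;
    CREATE SERVER prog_srv FOREIGN DATA WRAPPER file_fdw;
    CREATE FOREIGN TABLE many_lines (n int)
      SERVER prog_srv OPTIONS (program 'seq 1 1000000');
    SELECT * FROM many_lines LIMIT 5;   -- stops reading long before EOF; no longer reports an error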

To fix, ensure that child programs inherit SIG_DFL not SIG_IGN processing
of SIGPIPE.  This seems like an all-around better situation since if
the called program wants some non-default treatment of SIGPIPE, it would
expect to have to set that up for itself.  Then in COPY, if it's COPY
FROM PROGRAM and we stop reading short of detecting EOF, treat a SIGPIPE
exit from the called program as a non-error condition.  This still allows
us to report an error for any case where the called program gets SIGPIPE
on some other file descriptor.

As coded, we won't report a SIGPIPE if we stop reading as a result of
seeing an in-band EOF marker (e.g. COPY BINARY EOF marker).  It's
somewhat debatable whether we should complain if the called program
continues to transmit data after an EOF marker.  However, it seems like
we should avoid throwing error in any questionable cases, especially in a
back-patched fix, and anyway it would take additional code to make such
an error get reported consistently.

Back-patch to v10.  We could go further back, since COPY FROM PROGRAM
has been around awhile, but AFAICS the only way to reach this situation
using core or contrib is via file_fdw, which has only supported PROGRAM
sources since v10.  The COPY statement per se has no feature whereby
it'd stop reading without having hit EOF or an error already.  Therefore,
I don't see any upside to back-patching further that'd outweigh the
risk of complaints about behavioral change.

Per bug #15449 from Eric Cyr.

Patch by me, review by Etsuro Fujita and Kyotaro Horiguchi

Discussion: https://postgr.es/m/15449-1cf737dd5929450e@postgresql.org
2018-11-19 17:02:39 -05:00
Robert Haas 7ee5f88e65 Reduce unnecessary list construction in RelationBuildPartitionDesc.
The 'partoids' list which was constructed by the previous version
of this code was necessarily identical to 'inhoids'.  There's no
point to duplicating the list, so avoid that.  Instead, construct
the array representation directly from the original 'inhoids' list.

Also, use an array rather than a list for 'boundspecs'.  We know
exactly how many items we need to store, so there's really no
reason to use a list.  Using an array instead reduces the number
of memory allocations we perform.

Patch by me, reviewed by Michael Paquier and Amit Langote, the
latter of whom also helped with rebasing.
2018-11-19 12:10:41 -05:00
Alvaro Herrera 5c9a5513a3 Disallow COPY FREEZE on partitioned tables
This didn't actually work: COPY would fail to flush the right files, and
instead would try to flush a non-existing file, causing the whole
transaction to fail.

Cope by raising an error as soon as the command is sent instead, to
avoid a nasty later surprise.  Of course, it would be much better to
make it work, but we don't have a patch for that yet, and we don't know
if we'll want to backpatch one when we do.
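
For reference, a hedged sketch of the now-rejected case (hypothetical
names; exact error text not shown):

    CREATE TABLE pt (a int) PARTITION BY LIST (a);
    CREATE TABLE pt1 PARTITION OF pt FOR VALUES IN (1);
    BEGIN;
    TRUNCATE pt;
    COPY pt FROM STDIN WITH (FREEZE);   -- now errors up front for a partitioned target
    ROLLBACK;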

Reported-by: Tomas Vondra
Author: David Rowley
Reviewed-by: Amit Langote, Steve Singer, Tomas Vondra
2018-11-19 11:16:28 -03:00
Thomas Munro 9ccdd7f66e PANIC on fsync() failure.
On some operating systems, it doesn't make sense to retry fsync(),
because dirty data cached by the kernel may have been dropped on
write-back failure.  In that case the only remaining copy of the
data is in the WAL.  A subsequent fsync() could appear to succeed,
but not have flushed the data.  That means that a future checkpoint
could apparently complete successfully but have lost data.

Therefore, violently prevent any future checkpoint attempts by
panicking on the first fsync() failure.  Note that we already
did the same for WAL data; this change extends that behavior to
non-temporary data files.

Provide a GUC data_sync_retry to control this new behavior, for
users of operating systems that don't eject dirty data, and possibly
forensic/testing uses.  If it is set to on and the write-back error
was transient, a later checkpoint might genuinely succeed (on a
system that does not throw away buffers on failure); if the error is
permanent, later checkpoints will continue to fail.  The GUC defaults
to off, meaning that we panic.
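
A hedged sketch of enabling the escape hatch (the GUC presumably requires
a server restart to take effect):

    ALTER SYSTEM SET data_sync_retry = on;
    -- then restart the server; SHOW data_sync_retry should report 'on'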

Back-patch to all supported releases.

There is still a narrow window for error-loss on some operating
systems: if the file is closed and later reopened and a write-back
error occurs in the intervening time, but the inode has the bad
luck to be evicted due to memory pressure before we reopen, we could
miss the error.  A later patch will address that with a scheme
for keeping files with dirty data open at all times, but we judge
that to be too complicated to back-patch.

Author: Craig Ringer, with some adjustments by Thomas Munro
Reported-by: Craig Ringer
Reviewed-by: Robert Haas, Thomas Munro, Andres Freund
Discussion: https://postgr.es/m/20180427222842.in2e4mibx45zdth5%40alap3.anarazel.de
2018-11-19 17:41:26 +13:00
Thomas Munro 1556cb2fc5 Don't forget about failed fsync() requests.
If fsync() fails, md.c must keep the request in its bitmap, so that
future attempts will try again.

Back-patch to all supported releases.

Author: Thomas Munro
Reviewed-by: Amit Kapila
Reported-by: Andrew Gierth
Discussion: https://postgr.es/m/87y3i1ia4w.fsf%40news-spur.riddles.org.uk
2018-11-19 17:41:26 +13:00
Michael Paquier 285bd0ac4a Remove unnecessary memcpy when reading WAL record fitting on page
When reading a WAL record, its contents are copied into an intermediate
buffer.  However, doing so is not necessary if the record fits fully
into the current page, saving one memcpy for each such record.  The
allocation handling of the intermediate buffer is also now done only
when a record crosses a page boundary, shaving some extra cycles when
reading a WAL record.

Author: Andrey Lepikhov
Reviewed-by: Kyotaro Horiguchi, Heikki Linnakangas
Discussion: https://postgr.es/m/c2ea54dd-a1d3-80eb-ddbf-7e6f258e615e@postgrespro.ru
2018-11-19 10:25:48 +09:00
Tom Lane 37afc079ab Avoid defining SIGTTIN/SIGTTOU on Windows.
Setting them to SIG_IGN seems unlikely to have any beneficial effect
on that platform, and given the signal numbering collision with SIGABRT,
it could easily have bad effects.

Given the lack of field complaints that can be traced to this, I don't
presently feel a need to back-patch.

Discussion: https://postgr.es/m/5627.1542477392@sss.pgh.pa.us
2018-11-17 16:31:16 -05:00
Tom Lane 125f551c8b Leave SIGTTIN/SIGTTOU signal handling alone in postmaster child processes.
For reasons lost in the mists of time, most postmaster child processes
reset SIGTTIN/SIGTTOU signal handling to SIG_DFL, with the major exception
that backend sessions do not.  It seems like a pretty bad idea for any
postmaster children to do that: if stderr is connected to the terminal,
and the user has put the postmaster in background, any log output would
result in the child process freezing up.  Hence, switch them all to
doing what backends do, ie, nothing.  This allows them to inherit the
postmaster's SIG_IGN setting.  On the other hand, manually-launched
processes such as standalone backends will have default processing,
which seems fine.

In passing, also remove useless resets of SIGCONT and SIGWINCH signal
processing.  Perhaps the postmaster once changed those to something
besides SIG_DFL, but it doesn't now, so these are just wasted (and
confusing) syscalls.

Basically, this propagates the changes made in commit 8e2998d8a from
backends to other postmaster children.  Probably the only reason these
calls now exist elsewhere is that I missed changing pgstat.c along with
postgres.c at the time.

Given the lack of field complaints that can be traced to this, I don't
presently feel a need to back-patch.

Discussion: https://postgr.es/m/5627.1542477392@sss.pgh.pa.us
2018-11-17 16:31:16 -05:00
Andres Freund 73616126b4 Fix some spurious new compiler warnings in MSVC.
Per buildfarm animal bowerbird.

Discussion: https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=bowerbird&dt=2018-11-17%2002%3A30%3A20
2018-11-17 11:41:14 -08:00
Andres Freund 4da597edf1 Make TupleTableSlots extensible, finish split of existing slot type.
This commit completes the work prepared in 1a0586de36, splitting the
old TupleTableSlot implementation (which could store buffer, heap,
minimal and virtual slots) into four different slot types.  As
described in the aforementioned commit, this is done with the goal of
making tuple table slots extensible, to allow for pluggable table
access methods.

To achieve runtime extensibility for TupleTableSlots, operations on
slots that can differ between types of slots are performed using the
TupleTableSlotOps struct provided at slot creation time.  That
includes everything from the size of the TupleTableSlot struct to be
allocated, to initialization, deforming, etc.  See the struct's definition
for more detailed information about the TupleTableSlotOps callbacks.

I decided to rename TTSOpsBufferTuple to TTSOpsBufferHeapTuple and
ExecCopySlotTuple to ExecCopySlotHeapTuple, as that seems more
consistent with other naming introduced in recent patches.

There's plenty of optimization potential in the slot implementation, but
according to benchmarking the state after this commit has similar
performance characteristics to before this set of changes, which seems
sufficient.

There are a few changes in execReplication.c that currently need to poke
through the slot abstraction; that'll be repaired once the pluggable
storage patchset provides the necessary infrastructure.

Author: Andres Freund and  Ashutosh Bapat, with changes by Amit Khandekar
Discussion: https://postgr.es/m/20181105210039.hh4vvi4vwoq5ba2q@alap3.anarazel.de
2018-11-16 16:35:15 -08:00
Alvaro Herrera 0201d79a55 Avoid re-typedef'ing PartitionTupleRouting
Apparently, gcc on macOS (?) doesn't like it.

Per buildfarm.
2018-11-16 16:55:53 -03:00
Andres Freund a7aa608e0f Inline hot path of slot_getsomeattrs().
This yields a minor speedup, which roughly balances the loss from the
upcoming introduction of callbacks to do some operations on slots.

Author: Andres Freund
Discussion: https://postgr.es/m/20181105210039.hh4vvi4vwoq5ba2q@alap3.anarazel.de
2018-11-16 10:29:01 -08:00
Alvaro Herrera 3f2393edef Redesign initialization of partition routing structures
This speeds up write operations (INSERT, UPDATE, DELETE, COPY, as well
as the future MERGE) on partitioned tables.

This changes the setup for tuple routing so that it does far less work
during the initial setup and pushes more work out to when partitions
receive tuples.  PartitionDispatchData structs for sub-partitioned
tables are only created when a tuple gets routed through them.  The
possibly large arrays in the PartitionTupleRouting struct have largely
been removed.  The partitions[] array remains but now never contains any
NULL gaps.  Previously the NULLs had to be skipped during
ExecCleanupTupleRouting(), which could add a large overhead to the
cleanup when the number of partitions was large.  The partitions[] array
is allocated small to start with and only enlarged when we route tuples
to enough partitions that it runs out of space. This allows us to keep
simple single-row partition INSERTs running quickly.

The arrays in PartitionTupleRouting which stored the tuple translation maps
have now been removed.  These have been moved out into a
PartitionRoutingInfo struct which is an additional field in ResultRelInfo.

The find_all_inheritors() call still remains by far the slowest part of
ExecSetupPartitionTupleRouting(). This commit just removes the other slow
parts.

In passing also rename the tuple translation maps from being ParentToChild
and ChildToParent to being RootToPartition and PartitionToRoot. The old
names mislead you into thinking that a partition of some sub-partitioned
table would translate to the rowtype of the sub-partitioned table rather
than the root partitioned table.

Authors: David Rowley and Amit Langote, heavily revised by Álvaro Herrera
Testing help from Jesper Pedersen and Kato Sho.
Discussion: https://postgr.es/m/CAKJS1f_1RJyFquuCKRFHTdcXqoPX-PYqAd7nz=GVBwvGh4a6xA@mail.gmail.com
2018-11-16 15:01:05 -03:00
Andres Freund a387a3dff9 Fix slot type assumptions for nodeGather[Merge].
The assumption made in 1a0586de36 was wrong, as evidenced by
buildfarm failure on locust, which runs with
force_parallel_mode=regress.  The tuples accessed in either node are
in the outer slot, and we can't trivially rely on the slot type being
known because the leader might execute the subsidiary node directly,
or via the tuple queue on a worker. In the latter case the tuple will
always be a heaptuple slot, but in the former, it'll be whatever the
subsidiary node returns.
2018-11-15 23:16:41 -08:00
Andres Freund 7ef04e4d2c Don't generate tuple deforming functions for virtual slots.
Virtual tuple table slots never need tuple deforming. Therefore, if we
know at expression compilation time, that a certain slot will always
be virtual, there's no need to create a tuple deforming routine for
it.

Author: Andres Freund
Discussion: https://postgr.es/m/20181105210039.hh4vvi4vwoq5ba2q@alap3.anarazel.de
2018-11-15 22:00:30 -08:00
Andres Freund 15d8f83128 Verify that expected slot types match returned slot types.
This is important so JIT compilation knows what kind of tuple slot the
deforming routine can expect. There's also optimization potential for
expression initialization without JIT compilation. It e.g. seems
plausible to elide EEOP_*_FETCHSOME ops entirely when dealing with
virtual slots.

Author: Andres Freund
Discussion: https://postgr.es/m/20181105210039.hh4vvi4vwoq5ba2q@alap3.anarazel.de
2018-11-15 22:00:30 -08:00
Andres Freund 675af5c01e Compute information about EEOP_*_FETCHSOME at expression init time.
Previously this information was computed when JIT compiling an
expression.  But the information is useful in the non-JIT case too
(for assertions), therefore it makes sense to move it.

This will, in a followup commit, allow treating different slot types
differently.  E.g., for virtual slots there's no need to generate a JIT
function to deform the slot.

Author: Andres Freund
Discussion: https://postgr.es/m/20181105210039.hh4vvi4vwoq5ba2q@alap3.anarazel.de
2018-11-15 22:00:30 -08:00
Andres Freund 1a0586de36 Introduce notion of different types of slots (without implementing them).
Upcoming work intends to allow pluggable ways to introduce new ways of
storing table data. Accessing those table access methods from the
executor requires TupleTableSlots to carry tuples in the native
format of such storage methods; otherwise there'll be a significant
conversion overhead.

Different access methods will require different data to store tuples
efficiently (just like virtual, minimal, heap already require fields
in TupleTableSlot). To allow that without requiring additional pointer
indirections, we want to have different structs (embedding
TupleTableSlot) for different types of slots.  Thus different types of
slots are needed, which requires adapting creators of slots.

The slot that can most efficiently represent a type of tuple in an
executor node will often depend on the type of slot a child node
uses.  Therefore we need to track the type of slot returned by
nodes, so parent nodes can create their slots based on that.

Relatedly, JIT compilation of tuple deforming needs to know which type
of slot a certain expression refers to, so it can create an
appropriate deforming function for the type of tuple in the slot.

But not all nodes will only return one type of slot, e.g. an append
node will potentially return different types of slots for each of its
subplans.

Therefore add a function that allows querying the type of a node's
result slot, and whether it'll always be the same type (whether it's
fixed).  This can be queried using ExecGetResultSlotOps().

The scan, result, inner and outer slot types are automatically
inferred from ExecInitScanTupleSlot(), ExecInitResultSlot(), and the
left/right subtrees, respectively.  If that's not correct for a node,
it can be overridden using new fields in PlanState.

This commit does not introduce the actually abstracted implementation
of different kinds of TupleTableSlots; that will be left for a followup
commit.  The different types of slots introduced will, for now, still
use the same backing implementation.

While this already partially invalidates the big comment in
tuptable.h, it seems to make more sense to update it later, when the
different TupleTableSlot implementations actually exist.

Author: Ashutosh Bapat and Andres Freund, with changes by Amit Khandekar
Discussion: https://postgr.es/m/20181105210039.hh4vvi4vwoq5ba2q@alap3.anarazel.de
2018-11-15 22:00:30 -08:00
Andres Freund 763f2edd92 Rejigger materializing and fetching a HeapTuple from a slot.
Previously materializing a slot always returned a HeapTuple. As
current work aims to reduce the reliance on HeapTuples (so other
storage systems can work efficiently), that needs to change. Thus
split the tasks of materializing a slot (i.e. making it independent
from the underlying storage / other memory contexts) from fetching a
HeapTuple from the slot.  For brevity, allow fetching a HeapTuple from
a slot and materializing the slot at the same time, controlled by a
parameter.

For now some callers of ExecFetchSlotHeapTuple, with materialize =
true, expect that changes to the heap tuple will be reflected in the
underlying slot.  Those places will be adapted in due course, so while
not pretty, that's OK for now.

Also rename ExecFetchSlotTuple to ExecFetchSlotHeapTuple and
ExecFetchSlotTupleDatum to ExecFetchSlotHeapTupleDatum, as it's likely
that future storage methods will need similar methods. There already
is ExecFetchSlotMinimalTuple, so the new names make the naming scheme
more coherent.

Author: Ashutosh Bapat and Andres Freund, with changes by Amit Khandekar
Discussion: https://postgr.es/m/20181105210039.hh4vvi4vwoq5ba2q@alap3.anarazel.de
2018-11-15 14:31:12 -08:00
Peter Eisentraut 86a4819f69 Update executor documentation for run-time partition pruning
With run-time partition pruning, there is no longer necessarily an
executor node for each corresponding plan node.

Author: David Rowley <david.rowley@2ndquadrant.com>
2018-11-15 23:02:21 +01:00
Andres Freund c058fc2a2b Rationalize expression context reset in ExecModifyTable().
The current pattern of resetting expressions both in
ExecProcessReturning() and ExecOnConflictUpdate() makes it harder than
necessary to reason about memory lifetimes.  It also requires
materializing slots unnecessarily, although this patch doesn't take
advantage of the fact that that's not necessary anymore.

Instead reset the expression context once for each input tuple.

Author: Ashutosh Bapat
Discussion: https://postgr.es/m/20181105210039.hh4vvi4vwoq5ba2q@alap3.anarazel.de
2018-11-15 11:40:50 -08:00
Tom Lane 34c9e455d0 Improve performance of partition pruning remapping a little.
ExecFindInitialMatchingSubPlans has to update the PartitionPruneState's
subplan mapping data to account for the removal of subplans it prunes.
However, that's only necessary if run-time pruning will also occur,
so we can skip it when that won't happen, which should result in not
needing to do the remapping in many cases.  (We now need it only when
some partitions are potentially startup-time prunable and others are
potentially run-time prunable, which seems like an unusual case.)

Also make some marginal performance improvements in the remapping
itself.  These will mainly win if most partitions got pruned by
the startup-time pruning, which is perhaps a debatable assumption
in this context.

Also fix some bogus comments, and rearrange code to marginally
reduce space consumption in the executor's query-lifespan context.

David Rowley, reviewed by Yoshikazu Imai

Discussion: https://postgr.es/m/CAKJS1f9+m6-di-zyy4B4AGn0y1B9F8UKDRigtBbNviXOkuyOpw@mail.gmail.com
2018-11-15 13:34:16 -05:00
Alvaro Herrera 74514bd4a5 geo_ops.c: Clarify comments and function arguments
These functions were not crystal clear about what their respective APIs
are.  Make an effort to improve that.

Emre's patch was correct AFAICT, but I (Álvaro) felt the need to improve
a few comments a bit more.  Any resulting errors are my own.

Per complaint from Coverity, Ning Yu, and Tom Lane.

Author: Emre Hasegeli, Álvaro Herrera
Reviewed-by: Tomas Vondra, Álvaro Herrera
Discussion: https://postgr.es/m/26769.1533090136@sss.pgh.pa.us
2018-11-15 12:08:03 -03:00
Thomas Munro ab8984f52d Further adjustment to random() seed initialization.
Per complaint from Tom Lane, don't chomp the timestamp at 32 bits, so we
can shift in some of its higher bits.

Discussion: https://postgr.es/m/14712.1542253115%40sss.pgh.pa.us
2018-11-15 17:38:55 +13:00
Thomas Munro 5b0ce3ec33 Increase the number of possible random seeds per time period.
Commit 197e4af9 refactored the initialization of the libc random()
seed, but reduced the number of possible seed values that could be
chosen in a given time period.  This negation of the effects of
commit 98c50656c was unintentional.  Replace with code that
shifts the fast moving timestamp bits left, similar to the original
algorithm (though not the previous float-tolerating coding, which
is no longer necessary).

Author: Thomas Munro
Reported-by: Noah Misch
Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/20181112083358.GA1073796%40rfd.leadboat.com
2018-11-15 16:25:30 +13:00
Thomas Munro aa55183042 Use 64 bit type for BufFileSize().
BufFileSize() can't use off_t, because it's only 32 bits wide on
some systems.  BufFile objects can have many 1GB segments so the
total size can exceed 2^31.  The only known client of the function
is parallel CREATE INDEX, which was reported to fail when building
large indexes on Windows.

Though this is technically an ABI break on platforms with a 32 bit
off_t and we might normally avoid back-patching it, the function is
brand new and thus unlikely to have been discovered by extension
authors yet, and it's fairly thoroughly broken on those platforms
anyway, so just fix it.

Defect in 9da0cc35.  Bug #15460.  Back-patch to 11, where this
function landed.

Author: Thomas Munro
Reported-by: Paul van der Linden, Pavel Oskin
Reviewed-by: Peter Geoghegan
Discussion: https://postgr.es/m/15460-b6db80de822fa0ad%40postgresql.org
Discussion: https://postgr.es/m/CAHDGBJP_GsESbTt4P3FZA8kMUKuYxjg57XHF7NRBoKnR%3DCAR-g%40mail.gmail.com
2018-11-15 13:13:57 +13:00
Tom Lane 600b04d6b5 Add a timezone-specific variant of date_trunc().
date_trunc(field, timestamptz, zone_name) performs truncation using
the named time zone as reference, rather than working in the session
time zone as is the default behavior.  It's equivalent to

date_trunc(field, timestamptz at time zone zone_name) at time zone zone_name

but it's faster, easier to type, and arguably easier to understand.

Vik Fearing and Tom Lane

Discussion: https://postgr.es/m/6249ffc4-2b22-4c1b-4e7d-7af84fedd7c6@2ndquadrant.com
2018-11-14 15:41:07 -05:00
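A minimal SQL sketch of the new three-argument form and its longhand
equivalent (the timestamp literal and zone name are only illustrative):

    -- truncate to the start of the day as observed in Europe/Paris
    SELECT date_trunc('day', timestamptz '2018-11-14 23:30:00+00', 'Europe/Paris');

    -- the equivalent, wordier spelling without the new variant
    SELECT date_trunc('day',
                      timestamptz '2018-11-14 23:30:00+00' AT TIME ZONE 'Europe/Paris')
           AT TIME ZONE 'Europe/Paris';
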
Peter Eisentraut 1b5d797cd4 Lower lock level for renaming indexes
Change lock level for renaming index (either ALTER INDEX or implicitly
via some other commands) from AccessExclusiveLock to
ShareUpdateExclusiveLock.

One reason we need a strong lock for relation renaming is that the
name change causes a rebuild of the relcache entry.  Concurrent
sessions that have the relation open might not be able to handle the
relcache entry changing underneath them.  Therefore, we need to lock
the relation in a way that no one can have the relation open
concurrently.  But for indexes, the relcache handles reloads specially
in RelationReloadIndexInfo() in a way that keeps changes in the
relcache entry to a minimum.  As long as no one keeps pointers to
rd_amcache and rd_options around across possible relcache flushes,
which is the case, this ought to be safe.

We also want to use a self-exclusive lock for correctness, so that
concurrent DDL doesn't overwrite the rename if they start updating
while still seeing the old version.  Therefore, we use
ShareUpdateExclusiveLock, which is already used by other DDL commands
that want to operate in a concurrent manner.

The reason this is interesting at all is that renaming an index is a
typical part of a concurrent reindexing workflow (CREATE INDEX
CONCURRENTLY new + DROP INDEX CONCURRENTLY old + rename back).  And
indeed a future built-in REINDEX CONCURRENTLY might rely on the ability
to do concurrent renames as well.

Reviewed-by: Andrey Klychkov <aaklychkov@mail.ru>
Reviewed-by: Fabrízio de Royes Mello <fabriziomello@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/1531767486.432607658@f357.i.mail.ru
2018-11-14 17:09:54 +01:00
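A rough sketch of the concurrent-reindexing workflow mentioned above
(table and index names are hypothetical); the final rename is the step
that now only needs ShareUpdateExclusiveLock:

    -- rebuild an index without blocking writers, then swap the names
    CREATE INDEX CONCURRENTLY orders_customer_idx_new ON orders (customer_id);
    DROP INDEX CONCURRENTLY orders_customer_idx;
    ALTER INDEX orders_customer_idx_new RENAME TO orders_customer_idx;
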
Michael Paquier b4721f3950 Initialize TransactionState and user ID consistently at transaction start
If a failure happens when a transaction is starting between the moment
the transaction status is changed from TRANS_DEFAULT to TRANS_START and
the moment the current user ID and security context flags are fetched
via GetUserIdAndSecContext(), or before initializing its basic fields,
then those may get reset to incorrect values when the transaction
aborts, leaving the session in an inconsistent state.

One problem reported is that failing a starting transaction at the first
query of a session could cause several kinds of system crashes on the
follow-up queries.

In order to solve that, move the initialization of the transaction state
fields and the call of GetUserIdAndSecContext() in charge of fetching
the current user ID close to the point where the transaction status is
switched to TRANS_START, where there cannot be any error triggered
in-between, per an idea of Tom Lane.  This properly ensures that the
current user ID, the security context flags and that the basic fields of
TransactionState remain consistent even if the transaction fails while
starting.

Reported-by: Richard Guo
Diagnosed-By: Richard Guo
Author: Michael Paquier
Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/CAN_9JTxECSb=pEPcb0a8d+6J+bDcOZ4=DgRo_B7Y5gRHJUM=Rw@mail.gmail.com
Backpatch-through: 9.4
2018-11-14 16:46:53 +09:00
Michael Paquier 3be97b97ed Add flag values in WAL description to all heap records
Hexadecimal is used consistently as the format so as not to bloat the
output too much while keeping it readable.  This information is useful
mainly for debugging purposes, for example with pg_waldump.

Author: Michael Paquier
Reviewed-by: Nathan Bossart, Dmitry Dolgov, Andres Freund, Álvaro
Herrera
Discussion: https://postgr.es/m/20180413034734.GE1552@paquier.xyz
2018-11-14 10:33:10 +09:00
Michael Paquier b52b7dc250 Refactor code creating PartitionBoundInfo
The code building PartitionBoundInfo based on the constituent partition
data read from catalogs has been located in partcache.c, with a specific
set of routines dedicated to bound types, like sorting or bound data
creation.  All this logic is moved to partbounds.c, relocating all the
bound-specific logic into it, with partition_bounds_create() as the
principal entry point.

Author: Amit Langote
Reviewed-by: Michael Paquier, Álvaro Herrera
Discussion: https://postgr.es/m/3f289da8-6d10-75fe-814a-635e8b191d43@lab.ntt.co.jp
2018-11-14 10:01:49 +09:00
Tom Lane 965a3d6be0 Fix realfailN lexer rules to not make assumptions about input format.
The realfail1 and realfail2 backup-prevention rules always returned
token type FCONST, ignoring the possibility that what we've scanned
is more appropriately described as ICONST.  I think that at the
time that code was added, it might actually have been safe to not
distinguish; but since we started allowing AS-less aliases in SELECT
target lists, it's definitely legal to have a number immediately
followed by an identifier.

In the SELECT case, it seems there's no visible consequence because
make_const() will change the type back to integer anyway.  But I'm
worried that there are other contexts, or will be in future, where
it's more important to get the constant's type right.

Hence, use process_integer_literal to correctly determine which
token type to return.

Arguably this is a bug fix, but given the lack of evidence of
user-visible problems, I'll refrain from back-patching.

Discussion: https://postgr.es/m/21364.1542136808@sss.pgh.pa.us
2018-11-13 14:54:41 -05:00
Tom Lane ec937d0805 Align ECPG lexer more closely with the core and psql lexers.
Make a bunch of basically-cosmetic changes to reduce the diffs between
the flex rules in scan.l, psqlscan.l, and pgc.l.  Reorder some code,
adjust a lot of whitespace, sync some comments, make use of flex start
condition scopes to do that.

There are a few non-cosmetic changes in the ECPG lexer:

* Bring over the decimalfail rule (and support function
process_integer_literal) so that ECPG will lex "1..10" into
the same tokens as the backend would.  I'm not sure this makes any
visible difference to users, but I'm not sure it doesn't, either.

* <xdc><<EOF>> gets its own rule so as to produce a more on-point
error message.

* Remove duplicate <SQL>{xdstart} rule.

John Naylor, with a few additional changes by me

Discussion: https://postgr.es/m/CAJVSVGWGqY9YBs2EwtRUkbNv=hXkN8yRPOoD1wxE6COgvvrz5g@mail.gmail.com
2018-11-13 12:57:52 -05:00
Thomas Munro b59d4d6c36 Fix const correctness warning.
Per buildfarm.
2018-11-13 19:03:02 +13:00
Amit Kapila a53bc135fb Fix the initialization of atomic variables introduced by the
group clearing mechanism.

Commits 0e141c0fbb and baaf272ac9 introduced initialization of atomic
variables in InitProcess which means that it's not safe to look at those
for backends that aren't currently in use.  Fix that by initializing them
during postmaster startup.

Reported-by: Andres Freund
Author: Amit Kapila
Backpatch-through: 9.6
Discussion: https://postgr.es/m/20181027104138.qmbbelopvy7cw2qv@alap3.anarazel.de
2018-11-13 10:52:40 +05:30
Thomas Munro 257ef3cd4f Fix handling of HBA ldapserver with multiple hostnames.
Commit 35c0754f failed to handle space-separated lists of alternative
hostnames in ldapserver, when building a URI for ldap_initialize()
(OpenLDAP).  Such lists need to be expanded to space-separated URIs.

Repair.  Back-patch to 11, to fix bug report #15495.

Author: Thomas Munro
Reported-by: Renaud Navarro
Discussion: https://postgr.es/m/15495-2c39fc196c95cd72%40postgresql.org
2018-11-13 17:46:28 +13:00
Thomas Munro 6a3dcd2856 Fix possible buffer overrun in hba.c.
Coverity reports a possible buffer overrun in the code that populates the
pg_hba_file_rules view.  It may not be a live bug due to restrictions
on options that can be used together, but let's increase MAX_HBA_OPTIONS
and correct a nearby misleading comment.

Back-patch to 10 where this code arrived.

Reported-by: Julian Hsiao
Discussion: https://postgr.es/m/CADnGQpzbkWdKS2YHNifwAvX5VEsJ5gW49U4o-7UL5pzyTv4vTg%40mail.gmail.com
2018-11-13 16:27:13 +13:00
Michael Paquier 52b70b1c7d Remove CommandCounterIncrement() after processing ON COMMIT DELETE
This comes from f9b5b41, which is part of one of the original commits that
implemented ON COMMIT actions.  Looking at the truncation code, any
CCI needed happens locally when rebuilding indexes, so it looks safe to
just remove this final increment.

Author: Michael Paquier
Reviewed-by: Álvaro Herrera
Discussion: https://postgr.es/m/20181109024731.GF2652@paquier.xyz
2018-11-13 08:59:41 +09:00
Tom Lane 3de491482b Simplify null-element handling in extension_config_remove().
There's no point in asking deconstruct_array() for a null-flags
array when we already checked the array has no nulls, and aren't
going to examine the output anyhow.  Not asking for this output
should make the code marginally faster, and it's also more
robust since if there somehow were nulls, deconstruct_array()
would throw an error.

Daniel Gustafsson

Discussion: https://postgr.es/m/289FFB8B-7AAB-48B5-A497-6E0D41D7BA47@yesql.se
2018-11-12 11:50:28 -05:00
Tom Lane e3f005d974 Limit the number of index clauses considered in choose_bitmap_and().
classify_index_clause_usage() is O(N^2) in the number of distinct index
qual clauses it considers, because of its use of a simple search list to
store them.  For nearly all queries, that's fine because only a few clauses
will be considered.  But Alexander Kuzmenkov reported a machine-generated
query with 80000 (!) index qual clauses, which caused this code to take
forever.  Somewhat remarkably, this is the only O(N^2) behavior we now
have for such a query, so let's fix it.

We can get rid of the O(N^2) runtime for cases like this without much
damage to the functionality of choose_bitmap_and() by separating out
paths with "too many" qual or pred clauses, and deeming them to always
be nonredundant with other paths.  Then their clauses needn't go into
the search list, so it doesn't get too long, but we don't lose the
ability to consider bitmap AND plans altogether.  I set the threshold
for "too many" to be 100 clauses per path, which should be plenty to
ensure no change in planning behavior for normal queries.

There are other things we could do to make this go faster, but it's not
clear that it's worth any additional effort.  80000 qual clauses require
a whole lot of work in many other places, too.

The code's been like this for a long time, so back-patch to all supported
branches.  The troublesome query only works back to 9.5 (in 9.4 it fails
with stack overflow in the parser); so I'm not sure that fixing this in
9.4 has any real-world benefit, but perhaps it does.

Discussion: https://postgr.es/m/90c5bdfa-d633-dabe-9889-3cf3e1acd443@postgrespro.ru
2018-11-12 11:19:04 -05:00
Andres Freund 450c7defa6 Remove volatiles from {procarray,varsup}.c and fix memory ordering issue.
The use of volatiles in procarray.c largely originated from the time
when postgres did not have reliable compiler and memory
barriers. That's not the case anymore, so we can do better.

Several of the functions in procarray.c can be bottlenecks, and
removal of volatile yields mildly better code.

The new state, with explicit memory barriers, is also more
correct. The previous use of volatile did not actually deliver
sufficient guarantees on weakly ordered machines, in particular the
logic in GetNewTransactionId() does not look safe.  It seems unlikely
to be a problem in practice, but worth fixing.

Thomas and I independently wrote a patch for this.

Reported-By: Andres Freund and Thomas Munro
Author: Andres Freund, with cherrypicked changes from a patch by Thomas Munro
Discussion:
    https://postgr.es/m/20181005172955.wyjb4fzcdzqtaxjq@alap3.anarazel.de
    https://postgr.es/m/CAEepm=1nff0x=7i3YQO16jLA2qw-F9O39YmUew4oq-xcBQBs0g@mail.gmail.com
2018-11-10 16:11:57 -08:00
Peter Eisentraut 69ee2ff930 Apply RI trigger skipping tests also for DELETE
The tests added in cfa0f4255b to skip
firing an RI trigger if any old key value is NULL can also be applied
for DELETE.  This should give a performance gain in those cases, and it
also saves a lot of duplicate code in the actual RI triggers.  (That
code was already dead code for the UPDATE cases.)

Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
2018-11-10 16:14:51 +01:00
Peter Eisentraut 34479d9a36 Remove dead foreign key optimization code
The ri_KeysEqual() calls in the foreign-key trigger functions to
optimize away some updates are useless because since
adfeef55cb those triggers are not enqueued
at all.  (It's also not useful to keep these checks as some kind of
backstop, since it's also semantically correct to just run the full
check even with equal keys.)

Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
2018-11-10 16:14:51 +01:00
Andres Freund 5fde047f2b Combine two flag tests in GetSnapshotData().
Previously the code checked PROC_IN_LOGICAL_DECODING and
PROC_IN_VACUUM separately. As the relevant variable is marked as
volatile, the compiler cannot combine the two tests.  As
GetSnapshotData() is pretty hot in a number of workloads, it's
worthwhile to fix that.

It'd also be a good idea to get rid of the volatiles altogether. But
for one that's a larger patch, and for another, the code after this
change still seems at least as easy to read as before.

Author: Andres Freund
Discussion: https://postgr.es/m/20181005172955.wyjb4fzcdzqtaxjq@alap3.anarazel.de
2018-11-09 20:43:56 -08:00
Tom Lane fa2952d8eb Fix missing role dependencies for some schema and type ACLs.
This patch fixes several related cases in which pg_shdepend entries were
never made, or were lost, for references to roles appearing in the ACLs of
schemas and/or types.  While that did no immediate harm, if a referenced
role were later dropped, the drop would be allowed and would leave a
dangling reference in the object's ACL.  That still wasn't a big problem
for normal database usage, but it would cause obscure failures in
subsequent dump/reload or pg_upgrade attempts, taking the form of
attempts to grant privileges to all-numeric role names.  (I think I've
seen field reports matching that symptom, but can't find any right now.)

Several cases are fixed here:

1. ALTER DOMAIN SET/DROP DEFAULT would lose the dependencies for any
existing ACL entries for the domain.  This case is ancient, dating
back as far as we've had pg_shdepend tracking at all.

2. If a default type privilege applies, CREATE TYPE recorded the
ACL properly but forgot to install dependency entries for it.
This dates to the addition of default privileges for types in 9.2.

3. If a default schema privilege applies, CREATE SCHEMA recorded the
ACL properly but forgot to install dependency entries for it.
This dates to the addition of default privileges for schemas in v10
(commit ab89e465c).

Another somewhat-related problem is that when creating a relation
rowtype or implicit array type, TypeCreate would apply any available
default type privileges to that type, which we don't really want
since such an object isn't supposed to have privileges of its own.
(You can't, for example, drop such privileges once they've been added
to an array type.)

ab89e465c is also to blame for a race condition in the regression tests:
privileges.sql transiently installed globally-applicable default
privileges on schemas, which sometimes got absorbed into the ACLs of
schemas created by concurrent test scripts.  This should have resulted
in failures when privileges.sql tried to drop the role holding such
privileges; but thanks to the bug fixed here, it instead led to dangling
ACLs in the final state of the regression database.  We'd managed not to
notice that, but it became obvious in the wake of commit da906766c, which
allowed the race condition to occur in pg_upgrade tests.

To fix, add a function recordDependencyOnNewAcl to encapsulate what
callers of get_user_default_acl need to do; while the original call
sites got that right via ad-hoc code, none of the later-added ones
have.  Also change GenerateTypeDependencies to generate these
dependencies, which requires adding the typacl to its parameter list.
(That might be annoying if there are any extensions calling that
function directly; but if there are, they're most likely buggy in the
same way as the core callers were, so they need work anyway.)  While
I was at it, I changed GenerateTypeDependencies to accept most of its
parameters in the form of a Form_pg_type pointer, making its parameter
list a bit less unwieldy and mistake-prone.

The test race condition is fixed just by wrapping the addition and
removal of default privileges into a single transaction, so that that
state is never visible externally.  We might eventually prefer to
separate out tests of default privileges into a script that runs by
itself, but that would be a bigger change and would make the tests
run slower overall.

Back-patch relevant parts to all supported branches.

Discussion: https://postgr.es/m/15719.1541725287@sss.pgh.pa.us
2018-11-09 20:42:14 -05:00
Andres Freund c670d0faac Remove ineffective check against dropped columns from slot_getattr().
Before this commit slot_getattr() checked for dropped
columns (returning NULL in that case), but only after checking for
previously deformed columns. As slot_deform_tuple() does not contain
such a check, the check in slot_getattr() would often not have been
reached, depending on previous use of the slot.

These days locking and plan invalidation ought to ensure that dropped
columns are not accessed in query plans. Therefore this commit just
drops the insufficient check in slot_getattr().  It's possible that
we'll find some holes against use of dropped columns, but if so, those
need to be addressed independently of slot_getattr(), as most accesses
don't go through that function anyway.

Author: Andres Freund
Discussion: https://postgr.es/m/20181107174403.zai7fedgcjoqx44p@alap3.anarazel.de
2018-11-09 17:40:40 -08:00
Andres Freund 1ef6bd2954 Don't require return slots for nodes without projection.
In a lot of nodes the return slot is not required. That can either be
because the node doesn't do any projection (say an Append node), or
because the node does perform projections but the projection is
optimized away because the projection would yield an identical row.

Slots aren't that small, especially for wide rows, so it's worthwhile
to avoid creating them.  It's not possible to just skip creating the
slot - it's currently used to determine the tuple descriptor returned
by ExecGetResultType().  So separate the determination of the result
type from the slot creation.  The work previously done internally
ExecInitResultTupleSlotTL() can now also be done separately with
ExecInitResultTypeTL() and ExecInitResultSlot().  That way nodes that
aren't guaranteed to need a result slot, can use
ExecInitResultTypeTL() to determine the result type of the node, and
ExecAssignScanProjectionInfo() (via
ExecConditionalAssignProjectionInfo()) determines that a result slot
is needed, it is created with ExecInitResultSlot().

Besides the advantage of avoiding the creation of slots that then go
unused, this is necessary preparation for later patches around tuple
table slot abstraction. In particular separating the return descriptor
and slot is a prerequisite to allow JITing of tuple deforming with
knowledge of the underlying tuple format, and to avoid unnecessarily
creating JITed tuple deforming for virtual slots.

This commit removes a redundant argument from
ExecInitResultTupleSlotTL(). While this commit touches a lot of the
relevant lines anyway, it'd normally still not be worthwhile to cause
breakage, except that the aforementioned later commits will touch *all*
ExecInitResultTupleSlotTL() callers anyway (where the change would fit
worse thematically).

Author: Andres Freund
Discussion: https://postgr.es/m/20181105210039.hh4vvi4vwoq5ba2q@alap3.anarazel.de
2018-11-09 17:19:39 -08:00
Michael Paquier 319a810180 Fix dependency handling of partitions and inheritance for ON COMMIT
This commit fixes a set of issues with ON COMMIT actions when used on
partitioned tables and tables with inheritance children:
- Applying ON COMMIT DROP on a partitioned table with partitions or on a
table with inheritance children caused a failure at commit time, with
complaints about the children being already dropped, as all relations
were dropped one at a time.
- Applying ON COMMIT DELETE ROWS on a partition relying on a partitioned
table which uses ON COMMIT DROP would cause the partition truncation to
fail as the parent is removed first.

The solution to the first problem is to handle the removal of all the
dependencies in one go instead of dropping relations one-by-one, based
on a suggestion from Álvaro Herrera.  So instead all the relation OIDs
to remove are gathered and then processed in one round of multiple
deletions.

The solution to the second problem is to reorder the actions, with
truncation happening first and relation drop done after.  Even if it
means that a partition could be first truncated, then immediately
dropped if its partitioned table is dropped, this has the merit of keeping
the code simple, as there is no need to do existence checks on the
relations to drop.

Contrary to a manual TRUNCATE on a partitioned table, ON COMMIT DELETE
ROWS does not cascade to its partitions.  The ON COMMIT action defined on
each partition takes priority.

Author: Michael Paquier
Reviewed-by: Amit Langote, Álvaro Herrera, Robert Haas
Discussion: https://postgr.es/m/68f17907-ec98-1192-f99f-8011400517f5@lab.ntt.co.jp
Backpatch-through: 10
2018-11-09 10:03:22 +09:00
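A small sketch of the kind of mixed ON COMMIT setup this commit is
about (the temporary table names are assumed; the comments paraphrase
the ordering described above):

    BEGIN;
    CREATE TEMP TABLE tp (id int) PARTITION BY RANGE (id) ON COMMIT DROP;
    CREATE TEMP TABLE tp_1 PARTITION OF tp
        FOR VALUES FROM (0) TO (10) ON COMMIT DELETE ROWS;
    INSERT INTO tp VALUES (1);
    -- at commit, the partition is truncated first, then both relations
    -- are dropped in a single round of deletions
    COMMIT;
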
Tom Lane 3d360e20c9 Disallow setting client_min_messages higher than ERROR.
Previously it was possible to set client_min_messages to FATAL or PANIC,
which had the effect of suppressing transmission of regular ERROR messages
to the client.  Perhaps that seemed like a useful option in the past, but
the trouble with it is that it breaks guarantees that are explicitly made
in our FE/BE protocol spec about how a query cycle can end.  While libpq
and psql manage to cope with the omission, that's mostly because they
are not very bright; client libraries that have more semantic knowledge
are likely to get confused.  Notably, pgODBC doesn't behave very sanely.
Let's fix this by getting rid of the ability to set client_min_messages
above ERROR.

In HEAD, just remove the FATAL and PANIC options from the set of allowed
enum values for client_min_messages.  (This change also affects
trace_recovery_messages, but that's OK since these aren't useful values
for that variable either.)

In the back branches, there was concern that rejecting these values might
break applications that are explicitly setting things that way.  I'm
pretty skeptical of that argument, but accommodate it by accepting these
values and then internally setting the variable to ERROR anyway.

In all branches, this allows a couple of tiny simplifications in the
logic in elog.c, so do that.

Also respond to the point that was made that client_min_messages has
exactly nothing to do with the server's logging behavior, and therefore
does not belong in the "When To Log" subsection of the documentation.
The "Statement Behavior" subsection is a better match, so move it there.

Jonah Harris and Tom Lane

Discussion: https://postgr.es/m/7809.1541521180@sss.pgh.pa.us
Discussion: https://postgr.es/m/15479-ef0f4cc2fd995ca2@postgresql.org
2018-11-08 17:33:43 -05:00
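For illustration, a short sketch of how the setting behaves after this
change (exact error wording omitted):

    SET client_min_messages = warning;  -- unaffected
    SET client_min_messages = error;    -- now the most suppressive accepted value
    SET client_min_messages = fatal;    -- rejected on HEAD; in the back branches
                                        -- accepted but internally treated as ERROR
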
Alvaro Herrera 705d433fd5 Revise attribute handling code on partition creation
The original code to propagate NOT NULL and default expressions
specified when creating a partition was mostly copy-pasted from
typed-tables creation, but not being a great match it contained some
duplication, inefficiency and bugs.

This commit fixes the bug that NOT NULL constraints declared in the
parent table would not be honored in the partition.  One reported issue
that is not fixed is that a DEFAULT declared in the child is not used
when inserting through the parent.  That would amount to a behavioral
change that's better not back-patched.

This rewrite makes the code simpler:

1. instead of checking for duplicate column names in its own block,
reuse the original one that already did that;

2. instead of concatenating the list of columns from parent and the one
declared in the partition and scanning the result to (incorrectly)
propagate defaults and not-null constraints, just scan the latter
searching the former for a match, and merging sensibly.  This works
because we know the list in the parent is already correct and there can
only be one parent.

This rewrite makes ColumnDef->is_from_parent unused, so it's removed
on branch master; on released branches, it's kept as an unused field in
order not to cause ABI incompatibilities.

This commit also adds a test case for creating partitions with
collations mismatching that on the parent table, something that is
closely related to the code being patched.  No code change is introduced
though, since that'd be a behavior change that could break some (broken)
working applications.

Amit Langote wrote a less invasive fix for the original
NOT NULL/defaults bug, but while I kept the tests he added, I ended up
not using his original code.  Ashutosh Bapat reviewed Amit's fix.  Amit
reviewed mine.

Author: Álvaro Herrera, Amit Langote
Reviewed-by: Ashutosh Bapat, Amit Langote
Reported-by: Jürgen Strobel (bug #15212)
Discussion: https://postgr.es/m/152746742177.1291.9847032632907407358@wrigleys.postgresql.org
2018-11-08 16:22:09 -03:00
Michael Paquier 170dccc69d Fix incorrect routine name reference in partprune.c
Author: Yuzuko Hosoya
Discussion: https://postgr.es/m/00ac01d4774c$7feac860$7fc05920$@lab.ntt.co.jp
2018-11-08 20:14:16 +09:00
Andres Freund c14a8ff27e Fixup for b84a6dafbf triggering assert failure in LLVM debug builds.
Author: Andres Freund
2018-11-07 14:00:14 -08:00
Andres Freund b84a6dafbf Move EEOP_*_SYSVAR evaluation out of line.
This mainly de-duplicates code. As evaluating a system variable isn't
the hottest path and the current inline implementation ends up calling
out to an external function anyway, this is OK from a performance POV.

The main motivation for de-duplicating is the upcoming slot
abstraction work, after which there's not guaranteed to be a HeapTuple
backing the slot.

Author: Andres Freund, Amit Khandekar
Discussion: https://postgr.es/m/20181105210039.hh4vvi4vwoq5ba2q@alap3.anarazel.de
2018-11-07 11:08:45 -08:00
Andres Freund 5f32b29c18 Build HashState's hashkeys expression with the correct parent.
Previously the expressions were built with the HashJoinState as a
parent. That's incorrect.

Currently this does not appear to be harmful, but for the upcoming
'slot abstraction' work this proves to be problematic, as the
underlying slot types can differ between Hash and HashJoin.  It's
possible that this already causes a problem, but I've not been able to
come up with a scenario.  Therefore don't backpatch at this point.

Author: Andres Freund
Discussion: https://postgr.es/m/20180220224318.gw4oe5jadhpmcdnm@alap3.anarazel.de
2018-11-07 09:25:54 -08:00
Tom Lane c6e4133fae Postpone calculating total_table_pages until after pruning/exclusion.
The planner doesn't make any use of root->total_table_pages until it
estimates costs of indexscans, so we don't need to compute it as
early as that's currently done.  By doing the calculation between
set_base_rel_sizes and set_base_rel_pathlists, we can omit relations
that get removed from the query by partition pruning or constraint
exclusion, which seems like a more accurate basis for costing.

(Historical note: I think at the time this code was written, there
was not a separation between the "set sizes" and "set pathlists"
steps, so that this approach would have been impossible at the time.
But now that we have that separation, this is clearly the better way
to do things.)

David Rowley, reviewed by Edmund Horner

Discussion: https://postgr.es/m/CAKJS1f-NG1mRM0VOtkAG7=ZLQWihoqees9R4ki3CKBB0-fRfCA@mail.gmail.com
2018-11-07 12:12:56 -05:00
Tom Lane 5d28c9bd73 Disable recheck_on_update optimization to avoid crashes.
The code added by commit c203d6cf8 causes a crash in at least one case,
where a potentially-optimizable expression index has a storage type
different from the input data type.  A cursory code review turned up
numerous other problems that seem impractical to fix on short notice.

Andres argued for revert of that patch some time ago, and if additional
senior committers had been paying attention, that's likely what would
have happened, but we were not :-(

At this point we can't just revert, at least not in v11, because that would
mean an ABI break for code touching relcache entries.  And we should not
remove the (also buggy) support for the recheck_on_update index reloption,
since it might already be used in some databases in the field.  So this
patch just does the as-little-invasive-as-possible measure of disabling
the feature as though recheck_on_update were forced off for all indexes.
I also removed the related regression tests (which would otherwise fail)
and the user-facing documentation of the reloption.

We should undertake a more thorough code cleanup if the patch can't be
fixed, but not under the extreme time pressure of being already overdue
for 11.1 release.

Per report from Ondřej Bouda and subsequent private discussion among
pgsql-release.

Discussion: https://postgr.es/m/20181106185255.776mstcyehnc63ty@alvherre.pgsql
2018-11-06 18:33:28 -05:00
Thomas Munro c4f0876fb8 Remove set-but-unused variable.
Clean-up for commit c24dcd0c.

Reported-by: Andrew Dunstan
Discussion: https://postgr.es/m/2d52ff4a-5440-f6f1-7806-423b0e6370cb%402ndQuadrant.com
2018-11-07 12:06:43 +13:00
Andrew Gierth 5613da4cc7 Optimize nested ConvertRowtypeExpr nodes.
A ConvertRowtypeExpr is used to translate a whole-row reference of a
child to that of a parent. The planner produces nested
ConvertRowtypeExpr while translating whole-row reference of a leaf
partition in a multi-level partition hierarchy. Executor then
translates the whole-row reference from the leaf partition into all
the intermediate parent's whole-row references before arriving at the
final whole-row reference. It could instead translate the whole-row
reference from the leaf partition directly to the top-most parent's
whole-row reference skipping any intermediate translations.

Ashutosh Bapat, with tests by Kyotaro Horiguchi and some
editorialization by me. Reviewed by Andres Freund, Pavel Stehule,
Kyotaro Horiguchi, Dmitry Dolgov, Tom Lane.
2018-11-06 21:10:10 +00:00
Thomas Munro c24dcd0cfd Use pg_pread() and pg_pwrite() for data files and WAL.
Cut down on system calls by doing random I/O using offset-based OS
routines where available.  Remove the code for tracking the 'virtual'
seek position.  The only reason left to call FileSeek() was to get
the file's size, so provide a new function FileSize() instead.

Author: Oskari Saarenmaa, Thomas Munro
Reviewed-by: Thomas Munro, Jesper Pedersen, Tom Lane, Alvaro Herrera
Discussion: https://postgr.es/m/CAEepm=02rapCpPR3ZGF2vW=SBHSdFYO_bz_f-wwWJonmA3APgw@mail.gmail.com
Discussion: https://postgr.es/m/b8748d39-0b19-0514-a1b9-4e5a28e6a208%40gmail.com
Discussion: https://postgr.es/m/a86bd200-ebbe-d829-e3ca-0c4474b2fcb7%40ohmu.fi
2018-11-07 09:51:50 +13:00
Bruce Momjian b43df566b3 GUC: adjust effective_cache_size SQL descriptions
Follow on patch for commit 3e0f1a4741.

Reported-by: Peter Eisentraut

Discussion: https://postgr.es/m/369ec766-b947-51bd-4dad-6fb9e026439f@2ndquadrant.com

Backpatch-through: 9.4
2018-11-06 13:40:03 -05:00
Tom Lane 003c68a3b4 Rename rbtree.c functions to use "rbt" prefix not "rb" prefix.
The "rb" prefix is used by Ruby, so that our existing code results
in name collisions that break plruby.  We discussed ways to prevent
that by adjusting dynamic linker options, but it seems that at best
we'd move the pain to other cases.  Renaming to avoid the collision
is the only portable fix anyway.  Fortunately, our rbtree code is
not (yet?) widely used --- in core, there's only a single usage
in GIN --- so it seems likely that we can get away with a rename.

I chose to do this basically as s/rb/rbt/g, except for places where
there already was a "t" after "rb".  The patch could have been made
smaller by only touching linker-visible symbols, but it would have
resulted in oddly inconsistent-looking code.  Better to make it look
like "rbt" was the plan all along.

Back-patch to v10.  The rbtree.c code exists back to 9.5, but
rb_iterate() which is the actual immediate source of pain was added
in v10, so it seems like changing the names before that would have
more risk than benefit.

Per report from Pavel Raiskup.

Discussion: https://postgr.es/m/4738198.8KVIIDhgEB@nb.usersys.redhat.com
2018-11-06 13:25:24 -05:00
Thomas Munro 9e12fb02b7 Remove some remaining traces of dsm_resize().
A couple of obsolete comments and unreachable blocks remained after
commit 3c60d0fa.

Discussion: https://postgr.es/m/CAA4eK1%2B%3DyAFUvpFoHXFi_gm8YqmXN-TtkFH%2BVYjvDLS6-SFq-Q%40mail.gmail.com
2018-11-06 21:40:08 +13:00
Michael Paquier 8f045e242b Switch pg_promote to be parallel-safe
pg_promote relies on no global state, so it is fine to mark it as
parallel-safe, a conclusion based on a detailed analysis from Robert
Haas.  This also fixes an inconsistency where pg_proc.dat neglected to mark
the function with its previous value for proparallel, an update which does
not matter now that the default is used.

Based on a discussion between multiple folks: Laurenz Albe, Robert Haas,
Amit Kapila, Tom Lane and myself.

Discussion: https://postgr.es/m/20181029082530.GL14242@paquier.xyz
2018-11-06 14:11:21 +09:00
Thomas Munro 3c60d0fa23 Remove dsm_resize() and dsm_remap().
These interfaces were never used in core, didn't handle failure of
posix_fallocate() correctly and weren't supported on all platforms.
We agreed to remove them in 12.

Author: Thomas Munro
Reported-by: Andres Freund
Discussion: https://postgr.es/m/CAA4eK1%2B%3DyAFUvpFoHXFi_gm8YqmXN-TtkFH%2BVYjvDLS6-SFq-Q%40mail.gmail.com
2018-11-06 16:11:12 +13:00
Andres Freund a3fb382e9c Fix copy-paste error in errhint() introduced in 691d79a079.
Reported-By: Petr Jelinek
Discussion: https://postgr.es/m/c95a620b-34f0-7930-aeb5-f7ab804f26cb@2ndquadrant.com
Backpatch: 9.4-, like the previous commit
2018-11-05 12:05:38 -08:00
Michael Paquier dc3e436b19 Block creation of partitions with open references to its parent
When a partition is created as part of trigger processing, it is
possible that the newly created partition changes the properties of
the table that the executor of the ongoing command relies on, causing
a subsequent crash.  This has been found to be possible when, for
example, a BEFORE INSERT trigger creates a new partition for the
partitioned table being inserted into.

Any attempt to do so is blocked when working on a partition, with
regression tests added for both CREATE TABLE PARTITION OF and ALTER
TABLE ATTACH PARTITION.

Reported-by: Dmitry Shalashov
Author: Amit Langote
Reviewed-by: Michael Paquier, Tom Lane
Discussion: https://postgr.es/m/15437-3fe01ee66bd1bae1@postgresql.org
Backpatch-through: 10
2018-11-05 11:04:02 +09:00
Michael Paquier 4bc772e2af Ignore partitioned tables when processing ON COMMIT DELETE ROWS
Those tables have no physical storage, making this option unusable with
partition trees, as an actual truncation was attempted at commit time.
There are still issues with the way ON COMMIT actions are handled when
mixing several action types; however, this also impacts inheritance
trees, so this issue will be dealt with later.

Reported-by: Rajkumar Raghuwanshi
Author: Amit Langote
Reviewed-by: Michael Paquier, Tom Lane
Discussion: https://postgr.es/m/CAKcux6mhgcjSiB_egqEAEFgX462QZtncU8QCAJ2HZwM-wWGVew@mail.gmail.com
2018-11-05 09:14:33 +09:00
Tom Lane 9b6fb9fbb4 Fix ExecuteCallStmt to not scribble on the passed-in parse tree.
Modifying the parse tree at execution time is, or at least ought to be,
verboten.  It seems quite difficult to actually cause a crash this way
in v11 (although you can exhibit it pretty easily in HEAD by messing
with plan_cache_mode).  Nonetheless, it's risky, so fix and back-patch.

Discussion: https://postgr.es/m/13789.1541359611@sss.pgh.pa.us
2018-11-04 14:50:55 -05:00
Tom Lane 3e0b05a756 Fix unused-variable warning.
Discussion: https://postgr.es/m/CAMkU=1xTHkS6d0iptCWykHc1Xrh3LBic_gZDo3JzDYru815fLQ@mail.gmail.com
2018-11-04 11:20:59 -05:00
Andres Freund 793beab37e Prevent generating EEOP_AGG_STRICT_INPUT_CHECK operations when nargs == 0.
This only became a problem with 4c640f4f38, which didn't synchronize
the value agg_strict_input_check.nargs is set to, with the guard
condition for emitting the operation.

Besides such instructions being unnecessary overhead, the LLVM JIT
provider currently doesn't support them.  It seems more sensible to
avoid generating such instructions than to support them.  Add assertions
to make it easier to debug a potential further occurrence.

Discussion: https://postgr.es/m/2a505161-2727-2473-7c46-591ed108ac52@email.cz
Backpatch: 11-, like 4c640f4f38.
2018-11-03 15:55:23 -07:00
Andres Freund 4c640f4f38 Fix STRICT check for strict aggregates with NULL ORDER BY columns.
I (Andres) broke this unintentionally in 69c3936a14, by checking
strictness for all input expressions computed for an aggregate, rather
than just the input for the aggregate transition function.

Reported-By: Ondřej Bouda
Bisected-By: Tom Lane
Diagnosed-By: Andrew Gierth
Discussion: https://postgr.es/m/2a505161-2727-2473-7c46-591ed108ac52@email.cz
Backpatch: 11-, like 69c3936a14
2018-11-03 14:48:42 -07:00
Tom Lane 981dc2baa8 Make ts_locale.c's character-type functions cope with UTF-16.
On Windows, in UTF8 database encoding, what char2wchar() produces is
UTF16 not UTF32, ie, characters above U+FFFF will be represented by
surrogate pairs.  t_isdigit() and siblings did not account for this
and failed to provide a large enough result buffer.  That in turn
led to bogus "invalid multibyte character for locale" errors, because
contrary to what you might think from char2wchar()'s documentation,
its Windows code path doesn't cope sanely with buffer overflow.

The solution for t_isdigit() and siblings is pretty clear: provide
a 3-wchar_t result buffer not 2.

char2wchar() also needs some work to provide more consistent, and more
accurately documented, buffer overrun behavior.  But that's a bigger job
and it doesn't actually have any immediate payoff, so leave it for later.

Per bug #15476 from Kenji Uno, who deserves credit for identifying the
cause of the problem.  Back-patch to all active branches.

Discussion: https://postgr.es/m/15476-4314f480acf0f114@postgresql.org
2018-11-03 13:56:10 -04:00
Alvaro Herrera dfa6081419 Fix tablespace handling for partitioned indexes
When creating partitioned indexes, the tablespace was not being saved
for the parent index. This meant that subsequently created partitions
would not use the right tablespace for their indexes.

ALTER INDEX SET TABLESPACE and ALTER INDEX ALL IN TABLESPACE raised
errors when tried; fix them too.  This requires bespoke code for
ATExecCmd() that applies to the special case when the tablespace move is
just a catalog change.

Discussion: https://postgr.es/m/20181102003138.uxpaca6qfxzskepi@alvherre.pgsql
2018-11-03 13:25:19 -03:00
Thomas Munro 1ce4a807e2 Fix NULL handling in multi-batch Parallel Hash Left Join.
NULL keys in left joins were skipped when building batch files.
Repair, by making the keep_nulls argument to ExecHashGetHashValue()
depend on whether this is a left outer join, as we do in other
paths.

Bug #15475.  Thinko in 1804284042.  Back-patch to 11.

Reported-by: Paul Schaap
Diagnosed-by: Andrew Gierth
Discussion: https://postgr.es/m/15475-11a7a783fed72a36%40postgresql.org
2018-11-03 11:05:35 +13:00
Bruce Momjian 3e0f1a4741 GUC: adjust effective_cache_size docs and SQL description
Clarify that effective_cache_size is both kernel buffers and shared
buffers.

Reported-by: nat@makarevitch.org

Discussion: https://postgr.es/m/153685164808.22334.15432535018443165207@wrigleys.postgresql.org

Backpatch-through: 9.3
2018-11-02 09:11:00 -04:00
Magnus Hagander fbec7459aa Fix spelling errors and typos in comments
Author: Daniel Gustafsson <daniel@yesql.se>
2018-11-02 13:56:52 +01:00
Michael Paquier 6286efb524 Lower error level from PANIC to FATAL when restoring slots at startup
When restoring slot information from disk at startup and filling in
shared memory information, the startup process would issue a PANIC
message if more slots were found than max_replication_slots allows,
and then Postgres would generate a core dump, recommending to increase
max_replication_slots.  This gave users a way to crash Postgres at
will: create slots, lower the configuration so it no longer allows them,
and then restart.

Making Postgres crash hard in this case is overdoing it just to give a
recommendation to users.  So instead use a FATAL, which makes Postgres
fail to start without crashing, still giving the recommendation.  This
is more consistent with what happens for prepared transactions for
example.

Author: Michael Paquier
Reviewed-by: Andres Freund
Discussion: https://postgr.es/m/20181030025109.GD1644@paquier.xyz
2018-11-02 07:59:24 +09:00
Peter Eisentraut 96b00c433c Remove obsolete pg_constraint.consrc column
This has been deprecated and effectively unused for a long time.

Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
2018-11-01 20:36:05 +01:00
Peter Eisentraut fe5038236c Remove obsolete pg_attrdef.adsrc column
This has been deprecated and effectively unused for a long time.

Reviewed-by: Alvaro Herrera <alvherre@2ndquadrant.com>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
2018-11-01 20:35:42 +01:00
Andres Freund 8a99f8a827 Fix error message typo introduced 691d79a079.
Reported-By: Michael Paquier
Discussion: https://postgr.es/m/20181101003405.GB1727@paquier.xyz
Backpatch: 9.4-, like the previous commit
2018-11-01 10:44:29 -07:00
Peter Geoghegan cb6f8a9a72 Adjust trace_sort log messages.
The project message style guide dictates: "When citing the name of an
object, state what kind of object it is".  The parallel CREATE INDEX
patch added a worker number to most of the trace_sort messages within
tuplesort.c without specifying the object type.  Bring these messages
into compliance with the style guide.

We're still treating a leader or serial Tuplesortstate as having worker
number -1.  trace_sort is a developer option, and these two cases are
highly comparable, so this seems appropriate.

Per complaint from Tom Lane.

Discussion: https://postgr.es/m/8330.1540831863@sss.pgh.pa.us
Backpatch: 11-, where parallel CREATE INDEX was introduced.
2018-11-01 09:18:57 -07:00
Andres Freund 691d79a079 Disallow starting server with insufficient wal_level for existing slot.
Previously it was possible to create a slot, change wal_level, and
restart, even if the new wal_level was insufficient for the
slot. That's a problem for both logical and physical slots, because
the necessary WAL records are not generated.

This removes a few tests in newer versions that, somewhat
inexplicably, tested whether restarting with a too low wal_level worked (a
buggy behaviour!).

Reported-By: Joshua D. Drake
Author: Andres Freund
Discussion: https://postgr.es/m/20181029191304.lbsmhshkyymhw22w@alap3.anarazel.de
Backpatch: 9.4-, where replication slots were introduced
2018-10-31 15:46:39 -07:00
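A sketch of the sequence the server now refuses (slot name, plugin and
settings are illustrative):

    -- with wal_level = logical in postgresql.conf
    SELECT pg_create_logical_replication_slot('myslot', 'test_decoding');
    -- then lower wal_level to replica in postgresql.conf and restart:
    -- startup now fails because of the existing slot, instead of running
    -- with WAL that is insufficient for it
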
Tom Lane 696b0c5fd0 Fix memory leak in repeated SPGIST index scans.
spgendscan neglected to pfree all the memory allocated by spgbeginscan.
It's possible to get away with that in most normal queries, since the
memory is allocated in the executor's per-query context which is about
to get deleted anyway; but it causes severe memory leakage during
creation or filling of large exclusion-constraint indexes.

Also, document that amendscan is supposed to free what ambeginscan
allocates.  The docs' lack of clarity on that point probably caused this
bug to begin with.  (There is discussion of changing that API spec going
forward, but I don't think it'd be appropriate for the back branches.)

Per report from Bruno Wolff.  It's been like this since the beginning,
so back-patch to all active branches.

In HEAD, also fix an independent leak caused by commit 2a6368343
(allocating memory during spgrescan instead of spgbeginscan, which
might be all right if it got cleaned up, but it didn't).  And do a bit
of code beautification on that commit, too.

Discussion: https://postgr.es/m/20181024012314.GA27428@wolff.to
2018-10-31 17:05:03 -04:00
Andres Freund c4ab62f9ac Fix typo in xlog.c.
Author: Daniel Gustafsson
Discussion: https://postgr.es/m/A6817958-949E-4A5B-895D-FA421B6640C2@yesql.se
2018-10-31 07:50:32 -07:00
Tom Lane 14a158f9bf Fix interaction of CASE and ArrayCoerceExpr.
An array-type coercion appearing within a CASE that has a constant
(after const-folding) test expression was mangled by the planner, causing
all the elements of the resulting array to be equal to the coerced value
of the CASE's test expression.  This is my oversight in commit c12d570fa:
that changed ArrayCoerceExpr to use a subexpression involving a
CaseTestExpr, and I didn't notice that eval_const_expressions needed an
adjustment to keep from folding such a CaseTestExpr to a constant when
it's inside a suitable CASE.

This is another in what's getting to be a depressingly long line of bugs
associated with misidentification of the referent of a CaseTestExpr.
We're overdue to redesign that mechanism; but any such fix is unlikely
to be back-patchable into v11.  As a stopgap, fix eval_const_expressions
to do what it must here.  Also add a bunch of comments pointing out the
restrictions and assumptions that are needed to make this work at all.

Also fix a related oversight: contain_context_dependent_node() was not
aware of the relationship of ArrayCoerceExpr to CaseTestExpr.  That was
somewhat fail-soft, in that the outcome of a wrong answer would be to
prevent optimizations that could have been made, but let's fix it while
we're at it.

Per bug #15471 from Matt Williams.  Back-patch to v11 where the faulty
logic came in.

Discussion: https://postgr.es/m/15471-1117f49271989bad@postgresql.org
2018-10-30 15:26:11 -04:00
Michael Paquier d5eec4eefd Add pg_partition_tree to display information about partitions
This new function is useful to display the full tree of partitions under
a given partitioned table, and avoids the need for any complex
WITH RECURSIVE query when looking at partition trees which are
multiple levels deep.

It returns a set of records, one for each partition, containing the
partition's name, its immediate parent's name, a boolean telling
whether the relation is a leaf in the tree, and an integer giving its
level in the partition tree with the given table as root; the level
begins at zero for the root and increments by one each time the scan
goes one level down.
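
For example, a hypothetical usage with a made-up partitioned table
named "measurement":

    SELECT * FROM pg_partition_tree('measurement');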

Author: Amit Langote
Reviewed-by: Jesper Pedersen, Michael Paquier, Robert Haas
Discussion: https://postgr.es/m/8d00e51a-9a51-ad02-d53e-ba6bf50b2e52@lab.ntt.co.jp
2018-10-30 10:25:06 +09:00
Thomas Munro 051a1494bd Remove incorrect comment in dshash.c.
Back-patch to 11.

Author: Antonin Houska
Discussion: https://postgr.es/m/8726.1540553521%40localhost
2018-10-29 12:57:55 +13:00
Michael Paquier 10074651e3 Add pg_promote function
This new SQL-callable function makes it possible to promote a standby.
Execution access can be granted to non-superusers so that failover
tools can observe the principle of least privilege.
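
A rough usage sketch (failover_user is a made-up role; the
wait/wait_seconds argument types are assumed from the function's
documentation):

    -- trigger promotion of the standby from SQL
    SELECT pg_promote();
    -- let a dedicated failover role call it
    GRANT EXECUTE ON FUNCTION pg_promote(boolean, integer) TO failover_user;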

Catalog version is bumped.

Author: Laurenz Albe
Reviewed-by: Michael Paquier, Masahiko Sawada
Discussion: https://postgr.es/m/6e7c79b3ec916cf49742fb8849ed17cd87aed620.camel@cybertec.at
2018-10-25 09:46:00 +09:00
Peter Eisentraut 0a8590b2a0 Apply unconstify() in more places
Discussion: https://www.postgresql.org/message-id/08adbe4e-38f8-2c73-55f0-591392371687%402ndquadrant.com
2018-10-25 01:02:46 +01:00
Andrew Dunstan 040a1df614 Correctly set t_self for heap tuples in expand_tuple
Commit 16828d5c0 incorrectly set t_self to an invalid pointer for heap
tuples. This patch copies it correctly from the source tuple, and
includes a regression test that relies on it being set correctly.

Backpatch to release 11.

Fixes bug #15448 reported by Tillmann Schulz

Diagnosis and test case by Amit Langote
2018-10-24 10:56:27 -04:00
Michael Paquier 5ef037cf0b List wait events in alphabetical order
This changes the documentation and the related structures so that
everything is consistent.

Some wait events have not been listed alphabetically since their
introduction; others have been added rather randomly.  Keeping all
those entries in order helps with maintenance, and helps users looking
at the documentation.

Author: Michael Paquier, Kuntal Ghosh
Discussion: https://postgr.es/m/20181024002539.GI1658@paquier.xyz
Backpatch-through: 10, only for the documentation part to avoid an ABI
breakage.
2018-10-24 17:02:37 +09:00
Peter Eisentraut 5d7c703a44 Remove get_attidentity()
All existing uses can get this information more easily from the
relation descriptor, so the detour through the syscache is not
necessary.

Reviewed-by: Michael Paquier <michael@paquier.xyz>
2018-10-23 14:47:14 +02:00
Peter Eisentraut c903bb7b1c Remove get_atttypmod()
This has been unused since 2004.  get_atttypetypmodcoll() is often a
better alternative.

Reviewed-by: Michael Paquier <michael@paquier.xyz>
2018-10-23 14:47:14 +02:00
Peter Eisentraut e6f5d1accd Drop const cast from dlsym() calls
This workaround might be obsolete.  We'll see if those "older
platforms" mentioned in the comment are still around.

Discussion: https://www.postgresql.org/message-id/08adbe4e-38f8-2c73-55f0-591392371687%402ndquadrant.com
2018-10-23 14:35:59 +02:00
Peter Eisentraut 807e4bc828 Sprinkle some const decorations
These mainly help in understanding the function signatures better.
2018-10-23 12:25:17 +02:00
Michael Paquier 17f206fbc8 Set pg_class.relhassubclass for partitioned indexes
Like for relations, this flag is set optimistically, being turned on
each time a partitioned index gains a partition.  So seeing this
parameter set to true means that the partitioned index has, or has had,
partitions.  The flag cannot yet be reset for partitioned indexes, but
that is not an obvious thing to want anyway, as partitioned relations
exist to have partitions.

This makes it more convenient to track partition trees for indexes,
and will come in use with an upcoming patch that helps in listing
partition trees with an SQL-callable function.
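
For illustration (parent_idx is a made-up partitioned index name):

    -- true once the partitioned index has, or has had, partitions
    SELECT relhassubclass
    FROM pg_class
    WHERE relname = 'parent_idx' AND relkind = 'I';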

Author: Amit Langote
Reviewed-by: Michael Paquier
Discussion: https://postgr.es/m/80306490-b5fc-ea34-4427-f29c52156052@lab.ntt.co.jp
2018-10-22 11:04:48 +09:00
Tom Lane 2ddb9149d1 Server-side fix for delayed NOTIFY and SIGTERM processing.
Commit 4f85fde8e introduced some code that was meant to ensure that we'd
process cancel, die, sinval catchup, and notify interrupts while waiting
for client input.  But there was a flaw: it supposed that the process
latch would be set upon arrival at secure_read() if any such interrupt
was pending.  In reality, we might well have cleared the process latch
at some earlier point while those flags remained set -- particularly
notifyInterruptPending, which can't be handled as long as we're within
a transaction.

To fix the NOTIFY case, also attempt to process signals (except
ProcDiePending) before trying to read.

Also, if we see that ProcDiePending is set before we read, forcibly set the
process latch to ensure that we will handle that signal promptly if no data
is available.  I also made it set the process latch on the way out, in case
there is similar logic elsewhere.  (It remains true that we won't service
ProcDiePending here unless we need to wait for input.)

The code for handling ProcDiePending during a write needs those changes,
too.

Also be a little more careful about when to reset whereToSendOutput,
and improve related comments.

Back-patch to 9.5 where this code was added.  I'm not entirely convinced
that older branches don't have similar issues, but the complaint at hand
is just about the >= 9.5 code.

Jeff Janes and Tom Lane

Discussion: https://postgr.es/m/CAOYf6ec-TmRYjKBXLLaGaB-jrd=mjG1Hzn1a1wufUAR39PQYhw@mail.gmail.com
2018-10-19 21:39:21 -04:00