Commit Graph

8512 Commits

Andres Freund 86eaf208ea Hand code string to integer conversion for performance.
As benchmarks show, using libc's string-to-integer conversion is
pretty slow. At least part of the reason for that is that strtol[l]
have to be more generic than what largely is required inside pg.

This patch considerably speeds up int2/int4 input (int8 was already
using hand-rolled code).

Most of the existing pg_atoi callers have been converted. But as one
requires pg_atoi's custom delimiter functionality, and as it seems
likely that there are external pg_atoi users, it seems sensible to just
keep pg_atoi around.

Author: Andres Freund
Reviewed-By: Robert Haas
Discussion: https://postgr.es/m/20171208214437.qgn6zdltyq5hmjpk@alap3.anarazel.de
2018-07-22 14:58:23 -07:00
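
A minimal sketch of the kind of hand-rolled base-10 conversion this commit
is about, with a simple overflow check against the int16 range; the function
name and error handling are illustrative, not the actual code added here:

    #include <ctype.h>
    #include <stdbool.h>
    #include <stdint.h>

    /* Illustrative hand-rolled int16 parser; returns false on bad input. */
    static bool
    parse_int16(const char *s, int16_t *result)
    {
        int32_t     val = 0;
        bool        neg = false;

        while (isspace((unsigned char) *s))
            s++;
        if (*s == '-')
        {
            neg = true;
            s++;
        }
        else if (*s == '+')
            s++;

        if (!isdigit((unsigned char) *s))
            return false;           /* no digits at all */

        while (isdigit((unsigned char) *s))
        {
            val = val * 10 + (*s++ - '0');
            if (val > (int32_t) INT16_MAX + 1)
                return false;       /* overflow (allow -32768) */
        }

        while (isspace((unsigned char) *s))
            s++;
        if (*s != '\0')
            return false;           /* trailing junk */

        if (neg)
            *result = (int16_t) (-val);
        else
        {
            if (val > INT16_MAX)
                return false;
            *result = (int16_t) val;
        }
        return true;
    }
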
Michael Paquier f2b1316a94 Bump catalog version for recent toast table additions
This was forgotten in 96cdeae.
2018-07-20 09:28:19 +09:00
Michael Paquier 96cdeae07f Add toast tables to most system catalogs
It has been project policy to create toast tables only for those catalogs
that might reasonably need one.  Since this judgment call can change over
time, just create one for every catalog, as this can be useful when
creating rather-long entries in catalogs, with recent examples being in
the shape of policy expressions or custom-formatted SCRAM verifiers.

To prevent circular dependencies and to avoid adding complexity to VACUUM
FULL logic, exclude pg_class, pg_attribute, and pg_index.  Also, to
prevent pg_upgrade from seeing a non-empty new cluster, exclude
pg_largeobject and pg_largeobject_metadata from the set as large object
data is handled as user data.  Those relations have no reason to use a
toast table anyway.

Author: Joe Conway, John Naylor
Reviewed-by: Michael Paquier, Tom Lane
Discussion: https://postgr.es/m/84ddff04-f122-784b-b6c5-3536804495f8@joeconway.com
2018-07-20 07:43:41 +09:00
Heikki Linnakangas 5220bb7533 Expand run-time partition pruning to work with MergeAppend
This expands the support for run-time partition pruning that was added
for Append in 499be013de so that unneeded subnodes of a MergeAppend can
also be removed.

Author: David Rowley
Discussion: https://www.postgresql.org/message-id/CAKJS1f_F_V8D7Wu-HVdnH7zCUxhoGK8XhLLtd%3DCu85qDZzXrgg%40mail.gmail.com
2018-07-19 13:49:43 +03:00
Tom Lane 3cb646264e Use a ResourceOwner to track buffer pins in all cases.
Historically, we've allowed auxiliary processes to take buffer pins without
tracking them in a ResourceOwner.  However, that creates problems for error
recovery.  In particular, we've seen multiple reports of assertion crashes
in the startup process when it gets an error while holding a buffer pin,
as for example if it gets ENOSPC during a write.  In a non-assert build,
the process would simply exit without releasing the pin at all.  We've
gotten away with that so far just because a failure exit of the startup
process translates to a database crash anyhow; but any similar behavior
in other aux processes could result in stuck pins and subsequent problems
in vacuum.

To improve this, institute a policy that we must *always* have a resowner
backing any attempt to pin a buffer, which we can enforce just by removing
the previous special-case code in resowner.c.  Add infrastructure to make
it easy to create a process-lifespan AuxProcessResourceOwner and clear
out its contents at appropriate times.  Replace existing ad-hoc resowner
management in bgwriter.c and other aux processes with that.  (Thus, while
the startup process gains a resowner where it had none at all before, some
other aux process types are replacing an ad-hoc resowner with this code.)
Also use the AuxProcessResourceOwner to manage buffer pins taken during
StartupXLOG and ShutdownXLOG, even when those are being run in a bootstrap
process or a standalone backend rather than a true auxiliary process.

In passing, remove some other ad-hoc resource owner creations that had
gotten cargo-culted into various other places.  As far as I can tell
that was all unnecessary, and if it had been necessary it was incomplete,
due to lacking any provision for clearing those resowners later.
(Also worth noting in this connection is that a process that hasn't called
InitBufferPoolBackend has no business accessing buffers; so there's more
to do than just add the resowner if we want to touch buffers in processes
not covered by this patch.)

Although this fixes a very old bug, no back-patch, because there's no
evidence of any significant problem in non-assert builds.

Patch by me, pursuant to a report from Justin Pryzby.  Thanks to
Robert Haas and Kyotaro Horiguchi for reviews.

Discussion: https://postgr.es/m/20180627233939.GA10276@telsasoft.com
2018-07-18 12:15:16 -04:00
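
A sketch of the process-lifespan resowner pattern described above, written
against the generic ResourceOwner API; the helper names here are made up and
the real AuxProcessResourceOwner infrastructure differs in detail:

    #include "postgres.h"
    #include "utils/resowner.h"

    static ResourceOwner AuxProcessResourceOwner = NULL;

    /* Give an auxiliary process a process-lifespan resowner, so any buffer
     * pins it takes are tracked and can be released on error. */
    static void
    create_aux_process_resowner(void)
    {
        Assert(AuxProcessResourceOwner == NULL);
        AuxProcessResourceOwner = ResourceOwnerCreate(NULL, "AuxiliaryProcess");
        CurrentResourceOwner = AuxProcessResourceOwner;
    }

    /* Release everything the resowner tracks, e.g. at process exit or after
     * error recovery, going through the usual three release phases. */
    static void
    release_aux_process_resources(bool isCommit)
    {
        ResourceOwnerRelease(AuxProcessResourceOwner,
                             RESOURCE_RELEASE_BEFORE_LOCKS, isCommit, true);
        ResourceOwnerRelease(AuxProcessResourceOwner,
                             RESOURCE_RELEASE_LOCKS, isCommit, true);
        ResourceOwnerRelease(AuxProcessResourceOwner,
                             RESOURCE_RELEASE_AFTER_LOCKS, isCommit, true);
    }
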
Heikki Linnakangas 6b387179ba Fix misc typos, mostly in comments.
A collection of typos I happened to spot while reading code, as well as
grepping for common mistakes.

Backpatch to all supported versions, as applicable, to avoid conflicts
when backporting other commits in the future.
2018-07-18 16:17:32 +03:00
Robert Haas 32df1c9afa Add subtransaction handling for table synchronization workers.
Since the old logic was completely unaware of subtransactions, a
change made in a subsequently-aborted subtransaction would still cause
workers to be stopped at toplevel transaction commit.  Fix that by
managing a stack of worker lists rather than just one.

Amit Khandekar and Robert Haas

Discussion: http://postgr.es/m/CAJ3gD9eaG_mWqiOTA2LfAug-VRNn1hrhf50Xi1YroxL37QkZNg@mail.gmail.com
2018-07-16 17:33:22 -04:00
Peter Eisentraut f7cb2842bf Add plan_cache_mode setting
This allows overriding the choice of custom or generic plan.

Author: Pavel Stehule <pavel.stehule@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/CAFj8pRAGLaiEm8ur5DWEBo7qHRWTk9HxkuUAz00CZZtJj-LkCA%40mail.gmail.com
2018-07-16 13:35:41 +02:00
Michael Paquier ce89ad0fa0 Fix argument of pg_create_logical_replication_slot for slot name
All attributes and arguments using a slot name map to the data type
"name", but this function has been using "text".  This is cosmetic, as
even if text is used then the slot name would be truncated to 64
characters anyway and stored as such.  The documentation already said
so and the function already assumed that the argument was of this type
when fetching its value.

Bump catalog version.

Author: Sawada Masahiko
Discussion: https://postgr.es/m/CAD21AoADYz_-eAqH5AVFaCaojcRgwpo9PW=u8kgTMys63oB8Cw@mail.gmail.com
2018-07-13 09:32:12 +09:00
Amit Kapila 40ca70ebcc Allow using the updated tuple while moving it to a different partition.
An update that causes the tuple to be moved to a different partition was
missing out on re-constructing the to-be-updated tuple based on the latest
tuple in the update chain.  Instead, it simply deleted the latest tuple
and inserted a new tuple in the new partition based on the old tuple.
Commit 2f17844104 didn't consider this case, so some of the updates were
getting lost.

In passing, change the argument order for the output parameter in ExecDelete
and add some commentary about it.

Reported-by: Pavan Deolasee
Author: Amit Khandekar, with minor changes by me
Reviewed-by: Dilip Kumar, Amit Kapila and Alvaro Herrera
Backpatch-through: 11
Discussion: https://postgr.es/m/CAJ3gD9fRbEzDqdeDq1jxqZUb47kJn+tQ7=Bcgjc8quqKsDViKQ@mail.gmail.com
2018-07-12 12:51:39 +05:30
Michael Paquier edc6b41bd4 Rename VACOPT_NOWAIT to VACOPT_SKIP_LOCKED
When it comes to SELECT ... FOR or LOCK, NOWAIT means to not wait for
something to happen, and to issue an error.  SKIP LOCKED means to not wait
for something to happen but to move on without issuing an error.  The
internal option used by autovacuum and autoanalyze, applied only when
wraparound is not involved, was named NOWAIT but behaves like SKIP
LOCKED, which is confusing.

Author: Nathan Bossart
Discussion: https://postgr.es/m/20180307050345.GA3095@paquier.xyz
2018-07-12 14:28:28 +09:00
Tom Lane 39a96512b3 Mark built-in btree comparison functions as leakproof where it's safe.
Generally, if the comparison operators for a datatype or pair of datatypes
are leakproof, the corresponding btree comparison support function can be
considered so as well.  But we had not originally worried about marking
support functions as leakproof, reasoning that they'd not likely be used in
queries so the marking wouldn't matter.  It turns out there's at least one
place where it does matter: calc_arraycontsel() finds the target datatype's
default btree comparison function and tries to use that to estimate
selectivity, but it will be blocked in some cases if the function isn't
leakproof.  This leads to unnecessarily poor selectivity estimates and bad
plans, as seen in bug #15251.

Hence, run around and apply proleakproof markings where the corresponding
btree comparison operators are leakproof.  (I did eyeball each function
to verify that it wasn't doing anything surprising, too.)

This isn't a full solution to bug #15251, and it's not back-patchable
because of the need for a catversion bump.  A more useful response probably
is to consider whether we can check permissions on the parent table instead
of the child.  However, this change will help in some cases where that
won't, and it's easy enough to do in HEAD, so let's do so.

Discussion: https://postgr.es/m/3876.1531261875@sss.pgh.pa.us
2018-07-11 18:47:31 -04:00
Tom Lane ff4f889164 Fix bugs with degenerate window ORDER BY clauses in GROUPS/RANGE mode.
nodeWindowAgg.c failed to cope with the possibility that no ordering
columns are defined in the window frame for GROUPS mode or RANGE OFFSET
mode, leading to assertion failures or odd errors, as reported by Masahiko
Sawada and Lukas Eder.  In RANGE OFFSET mode, an ordering column is really
required, so add an Assert about that.  In GROUPS mode, the code would
work, except that the node initialization code wasn't in sync with the
execution code about when to set up tuplestore read pointers and spare
slots.  Fix the latter for consistency's sake (even though I think the
changes described below make the out-of-sync cases unreachable for now).

Per SQL spec, a single ordering column is required for RANGE OFFSET mode,
and at least one ordering column is required for GROUPS mode.  The parser
enforced the former but not the latter; add a check for that.

We were able to reach the no-ordering-column cases even with fully spec
compliant queries, though, because the planner would drop partitioning
and ordering columns from the generated plan if they were redundant with
earlier columns according to the redundant-pathkey logic, for instance
"PARTITION BY x ORDER BY y" in the presence of a "WHERE x=y" qual.
While in principle that's an optimization that could save some pointless
comparisons at runtime, it seems unlikely to be meaningful in the real
world.  I think this behavior was not so much an intentional optimization
as a side-effect of an ancient decision to construct the plan node's
ordering-column info by reverse-engineering the PathKeys of the input
path.  If we give up redundant-column removal then it takes very little
code to generate the plan node info directly from the WindowClause,
ensuring that we have the expected number of ordering columns in all
cases.  (If anyone does complain about this, the planner could perhaps
be taught to remove redundant columns only when it's safe to do so,
ie *not* in RANGE OFFSET mode.  But I doubt anyone ever will.)

With these changes, the WindowAggPath.winpathkeys field is not used for
anything anymore, so remove it.

The test cases added here are not actually very interesting given the
removal of the redundant-column-removal logic, but they would represent
important corner cases if anyone ever tries to put that back.

Tom Lane and Masahiko Sawada.  Back-patch to v11 where RANGE OFFSET
and GROUPS modes were added.

Discussion: https://postgr.es/m/CAD21AoDrWqycq-w_+Bx1cjc+YUhZ11XTj9rfxNiNDojjBx8Fjw@mail.gmail.com
Discussion: https://postgr.es/m/153086788677.17476.8002640580496698831@wrigleys.postgresql.org
2018-07-11 12:07:20 -04:00
Alexander Korotkov edf59c40dd Fix more wrong paths in header comments
It appears that there are more files whose header comment paths are
wrong, so fix those paths.  No backpatching, per proposal of Tom Lane.

Discussion: https://postgr.es/m/CAPpHfdsJyYbOj59MOQL%2B4XxdcomLSLfLqBtAvwR%2BpsCqj3ELdQ%40mail.gmail.com
2018-07-11 17:57:04 +03:00
Alvaro Herrera f2c587067a Rethink how to get float.h in old Windows API for isnan/isinf
We include <float.h> in every place that needs isnan(), because MSVC
used to require it.  However, since MSVC 2013 that's no longer necessary
(cf. commit cec8394b5c), so we can retire the inclusion to a
version-specific stanza in win32_port.h, where it doesn't need to
pollute random .c files.  The header is of course still needed in a few
places for other reasons.

I (Álvaro) removed float.h from a few more files than in Emre's original
patch.  This doesn't break the build in my system, but we'll see what
the buildfarm has to say about it all.

Author: Emre Hasegeli
Discussion: https://postgr.es/m/CAE2gYzyc0+5uG+Cd9-BSL7NKC8LSHLNg1Aq2=8ubjnUwut4_iw@mail.gmail.com
2018-07-11 09:11:48 -04:00
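
The version-specific stanza could look roughly like this (a sketch; MSVC
2013 is _MSC_VER 1800, and the exact contents of the real win32_port.h
stanza may differ):

    /* Old MSVC lacks C99 isnan()/isinf(); pull in <float.h> only there,
     * instead of including it from every .c file that needs isnan(). */
    #if defined(_MSC_VER) && _MSC_VER < 1800
    #include <float.h>
    #define isnan(x) _isnan(x)
    #define isinf(x) (!_finite(x) && !_isnan(x))
    #endif
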
Thomas Munro f98b8476cd Use signals for postmaster death on FreeBSD.
Use FreeBSD 11.2's new support for detecting parent process death to
make PostmasterIsAlive() very cheap, as was done for Linux in an
earlier commit.

Author: Thomas Munro
Discussion: https://postgr.es/m/7261eb39-0369-f2f4-1bb5-62f3b6083b5e@iki.fi
2018-07-11 13:14:07 +12:00
Thomas Munro 9f09529952 Use signals for postmaster death on Linux.
Linux provides a way to ask for a signal when your parent process dies.
Use that to make PostmasterIsAlive() very cheap.

Based on a suggestion from Andres Freund.

Author: Thomas Munro, Heikki Linnakangas
Reviewed-By: Michael Paquier
Discussion: https://postgr.es/m/7261eb39-0369-f2f4-1bb5-62f3b6083b5e%40iki.fi
Discussion: https://postgr.es/m/20180411002643.6buofht4ranhei7k%40alap3.anarazel.de
2018-07-11 12:47:06 +12:00
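
The mechanism behind these two commits, in code form: ask the kernel for a
signal when the parent dies, then let PostmasterIsAlive() reduce to a flag
check. This is a sketch; the chosen signal and the handler wiring are
assumptions, not the actual implementation:

    #include <signal.h>

    #if defined(__linux__)
    #include <sys/prctl.h>
    #elif defined(__FreeBSD__)
    #include <sys/procctl.h>
    #endif

    static volatile sig_atomic_t postmaster_possibly_dead = 0;

    static void
    postmaster_death_handler(int signo)
    {
        postmaster_possibly_dead = 1;
    }

    static void
    request_postmaster_death_signal(void)
    {
        struct sigaction sa;

        sa.sa_handler = postmaster_death_handler;
        sigemptyset(&sa.sa_mask);
        sa.sa_flags = 0;
        sigaction(SIGUSR2, &sa, NULL);

    #if defined(__linux__)
        /* Linux: deliver SIGUSR2 to this process when its parent dies. */
        prctl(PR_SET_PDEATHSIG, SIGUSR2);
    #elif defined(__FreeBSD__)
        /* FreeBSD 11.2+: same idea via procctl(). */
        {
            int         signum = SIGUSR2;

            procctl(P_PID, 0, PROC_PDEATHSIG_CTL, &signum);
        }
    #endif
    }

    /* The expensive per-call check can now become a cheap flag test
     * (re-verifying via getppid() when the flag is set is omitted here). */
    static int
    postmaster_is_alive(void)
    {
        return !postmaster_possibly_dead;
    }
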
Peter Eisentraut bcbd940806 Remove dynamic_shared_memory_type=none
PostgreSQL nowadays offers some kind of dynamic shared memory feature on
all supported platforms.  Having the choice of "none" prevents us from
relying on DSM in core features.  So this patch removes the choice of
"none".

Author: Kyotaro Horiguchi <horiguchi.kyotaro@lab.ntt.co.jp>
2018-07-10 18:35:24 +02:00
Peter Eisentraut 1486f7f981 Fix typos 2018-07-10 11:14:53 +02:00
Alvaro Herrera a22445ff0b Flip argument order in XLogSegNoOffsetToRecPtr
Commit fc49e24fa6 added an input argument after the existing output
argument.  Flip those.

Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/20180708182345.imdgovmkffgtihhk@alvherre.pgsql
2018-07-09 14:33:38 -04:00
Peter Eisentraut ec67b89816 Add UtilityReturnsTuples() support for CALL
This ensures that prepared statements for CALL can return tuples.
2018-07-09 13:58:08 +02:00
Peter Eisentraut f61988d160 Fix typo 2018-07-05 08:31:40 +02:00
Fujii Masao b41669118c Improve the performance of relation deletes during recovery.
When multiple relations are deleted in the same transaction,
the files of those relations are deleted by one call to smgrdounlinkall(),
which scans the whole shared_buffers only once.  OTOH,
previously, during recovery, smgrdounlink() (not smgrdounlinkall()) was
called for each file to delete, which led to scanning shared_buffers
multiple times.  Obviously this could increase the WAL replay
time considerably, especially when shared_buffers was huge.

To alleviate this situation, this commit changes the recovery so that
it also calls smgrdounlinkall() only once to delete multiple
relation files.

This is just a fix for an oversight in commit 279628a0a7, not a new feature.
So, per discussion on pgsql-hackers, we concluded to backpatch this
to all supported versions.

Author: Fujii Masao
Reviewed-by: Michael Paquier, Andres Freund, Thomas Munro, Kyotaro Horiguchi, Takayuki Tsunakawa
Discussion: https://postgr.es/m/CAHGQGwHVQkdfDqtvGVkty+19cQakAydXn1etGND3X0PHbZ3+6w@mail.gmail.com
2018-07-05 02:23:46 +09:00
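
A sketch of the batching idea using the smgr API of this era; the real
change lives in the commit/abort redo paths and differs in detail:

    #include "postgres.h"
    #include "storage/smgr.h"

    /* Drop many relation files during replay with a single scan of
     * shared_buffers, instead of one smgrdounlink() call per relation. */
    static void
    unlink_relations_at_redo(RelFileNode *delrels, int ndelrels)
    {
        SMgrRelation *srels = palloc(ndelrels * sizeof(SMgrRelation));
        int         i;

        for (i = 0; i < ndelrels; i++)
            srels[i] = smgropen(delrels[i], InvalidBackendId);

        smgrdounlinkall(srels, ndelrels, true);     /* isRedo */

        for (i = 0; i < ndelrels; i++)
            smgrclose(srels[i]);

        pfree(srels);
    }
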
Michael Paquier c55de5e512 Add wait event for fsync of WAL segments
This was visibly a forgotten spot in the first implementation of
wait events for I/O added by 249cf07; what was missing is a wait event for
the fsync call on WAL segments, a wrapper that reacts to the value of the
GUC wal_sync_method.

Reported-by: Konstantin Knizhnik
Author: Konstantin Knizhnik
Reviewed-by: Craig Ringer, Michael Paquier
Discussion: https://postgr.es/m/4a243897-0ad8-f471-aa40-242591f2476e@postgrespro.ru
2018-07-02 22:19:46 +09:00
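
A sketch of what such a wrapper looks like with wait-event instrumentation;
the wait event name and error text here are assumptions rather than the
exact ones added by this commit:

    #include "postgres.h"
    #include "pgstat.h"
    #include "storage/fd.h"

    /* Report a wait event around the WAL-segment flush so monitoring views
     * can show time spent waiting on wal_sync_method's fsync. */
    static void
    wal_file_sync(int fd, const char *path)
    {
        pgstat_report_wait_start(WAIT_EVENT_WAL_SYNC);
        if (pg_fsync(fd) != 0)
            ereport(PANIC,
                    (errcode_for_file_access(),
                     errmsg("could not fsync file \"%s\": %m", path)));
        pgstat_report_wait_end();
    }
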
Andrew Dunstan feced1387f Stamp HEAD as 12devel
Let the hacking begin ...
2018-06-30 12:47:59 -04:00
Andrew Dunstan 1e9c858090 pgindent run prior to branching 2018-06-30 12:25:49 -04:00
Alvaro Herrera f5545287dc Fix typo in comment
Author: Amit Langote
Discussion: https://postgr.es/m/b23dc88b-df41-ef07-22c5-12f77cf73b57@lab.ntt.co.jp
2018-06-27 15:40:28 -04:00
Andres Freund 42121790ca Change pqformat.h's integer handling functions to take unsigned integers.
As added in 1de09ad8eb, the new functions
all accept signed integers as parameters.  That's not great, because
it's perfectly reasonable to call them with unsigned parameters.
Unfortunately, unsigned-to-signed conversion is not well defined when
exceeding the range of the signed value.  That's presently not a
practical issue in postgres (among other reasons because we force
gcc's hand with -fwrapv).  But it's clearly not quite right.

Thus change the signatures to accept unsigned integers instead;
signed-to-unsigned conversion is always well defined.  Also change the
backward-compat pq_sendint() - while it's deprecated, it seems better
to be consistent.

Per discussion between Andrew Gierth, Michael Paquier, Alvaro Herrera
and Tom Lane.

Reported-By: Andrew Gierth
Author: Andres Freund
Discussion: https://postgr.es/m/87r2m10zm2.fsf@news-spur.riddles.org.uk
2018-06-26 23:40:32 -07:00
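
A sketch of the new signature shape, using the byte-swap helpers from
port/pg_bswap.h; the real functions are static inlines in pqformat.h:

    #include "postgres.h"
    #include "lib/stringinfo.h"
    #include "port/pg_bswap.h"

    /* Take uint32: callers with unsigned values need no conversion, and
     * signed callers rely on signed-to-unsigned conversion, which is
     * always well defined. */
    static inline void
    sketch_pq_sendint32(StringInfo buf, uint32 i)
    {
        uint32      ni = pg_hton32(i);

        appendBinaryStringInfo(buf, (char *) &ni, sizeof(ni));
    }
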
Amit Kapila 8121ab88e7 Cosmetic improvements for faster column addition.
Changed the names of a few structure members for the sake of clarity and
removed spurious whitespace.

Reported-by: Amit Kapila
Author: Amit Kapila, based on suggestion by Andrew Dunstan
Reviewed-by: Alvaro Herrera
Discussion: https://postgr.es/m/CAA4eK1K2znsFpC+NQ9A4vxT4uDxADN4RmvHX0L6Y=aHVo9gB4Q@mail.gmail.com
2018-06-27 08:16:13 +05:30
Alvaro Herrera f49a80c481 Fix "base" snapshot handling in logical decoding
Two closely related bugs are fixed.  First, xmin of logical slots was
advanced too early.  During xl_running_xacts processing, xmin of the
slot was set to the oldest running xid in the record, but that's wrong:
actually, snapshots which will be used for not-yet-replayed transactions
might consider older txns as running too, so we need to keep xmin back
for them.  The problem wasn't noticed earlier because DDL that deletes a
tuple (sets xmax) while some other not-yet-committed transaction is still
looking at it is pretty rare, if not unique: e.g. all forms of
ALTER TABLE which change schema acquire an ACCESS EXCLUSIVE lock
conflicting with any inserts. The included test case (test_decoding's
oldest_xmin) uses ALTER of a composite type, which doesn't have such
interlocking.

To deal with this, we must be able to quickly retrieve oldest xmin
(oldest running xid among all assigned snapshots) from ReorderBuffer. To
fix, add another list of ReorderBufferTXNs to the reorderbuffer, where
transactions are sorted by base-snapshot-LSN.  This is slightly
different from the existing (sorted by first-LSN) list, because a
transaction can have an earlier LSN but a later Xmin, if its first
record does not obtain an xmin (eg. xl_xact_assignment).  Note this new
list doesn't fully replace the existing txn list: we still need that one
to prevent WAL recycling.

The second issue concerns SnapBuilder snapshots and subtransactions.
SnapBuildDistributeNewCatalogSnapshot never assigned a snapshot to a
transaction that is known to be a subtxn, which is good in the common
case that the top-level transaction already has one (no point in doing
so), but a bug otherwise.  To fix, arrange to transfer the snapshot from
the subtxn to its top-level txn as soon as the kinship gets known.
test_decoding's snapshot_transfer verifies this.

Also, fix a minor memory leak: refcount of toplevel's old base snapshot
was not decremented when the snapshot is transferred from child.

Liberally sprinkle code comments, and rewrite a few existing ones.  This
part is my (Álvaro's) contribution to this commit, as I had to write all
those comments in order to understand the existing code and Arseny's
patch.

Reported-by: Arseny Sher <a.sher@postgrespro.ru>
Diagnosed-by: Arseny Sher <a.sher@postgrespro.ru>
Co-authored-by: Arseny Sher <a.sher@postgrespro.ru>
Co-authored-by: Álvaro Herrera <alvherre@alvh.no-ip.org>
Reviewed-by: Antonin Houska <ah@cybertec.at>
Discussion: https://postgr.es/m/87lgdyz1wj.fsf@ars-thinkpad
2018-06-26 16:48:10 -04:00
Alvaro Herrera 7d872c91a3 Allow direct lookups of AppendRelInfo by child relid
find_appinfos_by_relids had quite a large overhead when the number of
items in the append_rel_list was high, as it had to trawl through the
append_rel_list looking for AppendRelInfos belonging to the given
childrelids.  Since there can only be a single AppendRelInfo for each
child rel, it seems much better to store an array in PlannerInfo which
indexes these by child relid, making the function O(1) rather than O(N).
This function was only called once inside the planner, so just replace
that call with a lookup to the new array.  find_childrel_appendrelinfo
is now unused and thus removed.

This fixes a planner performance regression new to v11 reported by
Thomas Reiss.

Author: David Rowley
Reported-by: Thomas Reiss
Reviewed-by: Ashutosh Bapat
Reviewed-by: Álvaro Herrera
Discussion: https://postgr.es/m/94dd7a4b-5e50-0712-911d-2278e055c622@dalibo.com
2018-06-26 10:35:26 -04:00
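
The O(1) lookup this enables looks roughly like the sketch below, assuming
a PlannerInfo array indexed by child relid as described above; the function
name and error wording are made up:

    #include "postgres.h"
    #include "nodes/relation.h"

    /* Fetch the AppendRelInfo for a child rel directly, instead of walking
     * root->append_rel_list. */
    static AppendRelInfo *
    lookup_appinfo_by_child_relid(PlannerInfo *root, Index child_relid)
    {
        AppendRelInfo *appinfo = root->append_rel_array[child_relid];

        if (appinfo == NULL)
            elog(ERROR, "child rel %u not found in append_rel_array",
                 child_relid);
        return appinfo;
    }
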
Alvaro Herrera 1d4e5edc1d Stamp 11beta2. 2018-06-25 11:09:49 -04:00
Andrew Dunstan 123efbccea Mark binary_upgrade_set_missing_value as parallel_unsafe
per buildfarm.

Bump catalog version again although in practice nobody is going to use
this in a parallel query.
2018-06-23 08:43:05 -04:00
Andrew Dunstan 2448adf29c Allow for pg_upgrade of attributes with missing values
Commit 16828d5c02 neglected to do this, so upgraded databases would
silently get null instead of the specified default in rows without the
attribute defined.

A new binary upgrade function is provided to perform this and pg_dump is
adjusted to output a call to the function if required in binary upgrade
mode.

Also included is code to drop missing attribute values for dropped
columns. That way if the type is later dropped the missing value won't
have a dangling reference to the type.

Finally the regression tests are adjusted to ensure that there is a row
with a missing value so that this code is exercised in upgrade testing.

Catalog version unfortunately bumped.

Regression test changes from Tom Lane.
Remainder from me, reviewed by Tom Lane, Andres Freund, Alvaro Herrera

Discussion: https://postgr.es/m/19987.1529420110@sss.pgh.pa.us
2018-06-22 08:42:36 -04:00
Alvaro Herrera 8f97af60d1 Consistently use the term 'partitioned rel' in partprune comments
We were using 'partition rel' in a few places, which is quite confusing.

Author: Amit Langote
Reviewed-by: David Rowley
Reviewed-by: Michaël Paquier
Discussion: https://postgr.es/m/fd256561-31a2-4b7e-cd84-d8241e7ebc3f@lab.ntt.co.jp
2018-06-20 11:43:01 -04:00
Michael Paquier bde64eb610 Track new configure flags introduced for version 11 in pg_config.h.win32
The following set of flags mainly matters when building Postgres code
with MSVC, and they have been forgotten in recent developments:
- HAVE_LDAP_INITIALIZE, added by 35c0754f, and marked as disabled.
ldap_initialize() is a non-standard extension that provides a way to use
"ldaps" with OpenLDAP, but it is not supported on Windows, and instead
the non-standard ldap_sslinit() is used if WIN32 is defined.  Per input
from Thomas Munro.
- HAVE_X509_GET_SIGNATURE_NID, added by 054e8c6c, which is used by
SCRAM's channel binding tls-server-end-point.  Having this flag disabled
would cause this channel binding type to be unsupported for Windows
builds.
- HAVE_SSL_CLEAR_OPTIONS, added recently as of a364dfa4 to disable SSL
compression.
- HAVE_ASN1_STRING_GET0_DATA, added by 5c6df67, which is used to track
a new compatibility with OpenSSL 1.1.0.  This was missing from
pg_config.h.win32 and is not enabled by default.  HAVE_BIO_GET_DATA,
HAVE_OPENSSL_INIT_SSL and HAVE_BIO_METH_NEW gain the same treatment.

The second and third flags are enabled with this commit, which raises
the bar of OpenSSL support to 1.0.2 on Windows as a minimum.  As this
is the LTS (long-term support) version from the OpenSSL community, and
knowing that all recent installers referred to by OpenSSL upstream no
longer ship 1.0.1 or older, we can live with that requirement.  In order
to allow the code to compile with OpenSSL 1.1.0, all the flags mentioned
above need to be enabled in pg_config.h.win32.

Author: Michael Paquier
Reviewed-by: Andrew Dunstan
Discussion: https://postgr.es/m/20180529211559.GF6632@paquier.xyz
2018-06-19 09:00:33 +09:00
Simon Riggs 15378c1a15 Remove AELs from subxids correctly on standby
Issues relate only to subtransactions that hold AccessExclusiveLocks
when replayed on standby.

Prior to PG10, aborting subtransactions that held an
AccessExclusiveLock failed to release the lock until top level commit or
abort. 49bff5300d fixed that.

However, 49bff5300d also introduced a similar bug where subtransaction
commit would fail to release an AccessExclusiveLock, leaving the lock to
be removed sometimes early and sometimes late. This commit fixes
that bug also. Backpatch to PG10 needed.

Tested by observation. Note need for multi-node isolationtester to improve
test coverage for this and other HS cases.

Reported-by: Simon Riggs
Author: Simon Riggs
2018-06-16 14:03:29 +01:00
Tom Lane 19832753f1 Fix some ill-chosen names for globally-visible partition support functions.
"compute_hash_value" is particularly gratuitously generic, but IMO
all of these ought to have names clearly related to partitioning.
2018-06-13 13:18:02 -04:00
Tom Lane e23bae82cf Fix up run-time partition pruning's use of relcache's partition data.
The previous coding saved pointers into the partitioned table's relcache
entry, but then closed the relcache entry, causing those pointers to
nominally become dangling.  Actual trouble would be seen in the field
only if a relcache flush occurred mid-query, but that's hardly out of
the question.

While we could fix this by copying all the data in question at query
start, it seems better to just hold the relcache entry open for the
whole query.

While at it, improve the handling of support-function lookups: do that
once per query not once per pruning test.  There's still something to be
desired here, in that we fail to exploit the possibility of caching data
across queries in the fn_extra fields of the relcache's FmgrInfo structs,
which could happen if we just used those structs in-place rather than
copying them.  However, combining that with the possibility of per-query
lookups of cross-type comparison functions seems to require changes in the
APIs of a lot of the pruning support functions, so it's too invasive to
consider as part of this patch.  A win would ensue only for complex
partition key data types (e.g. arrays), so it may not be worth the
trouble.

David Rowley and Tom Lane

Discussion: https://postgr.es/m/17850.1528755844@sss.pgh.pa.us
2018-06-13 12:03:26 -04:00
Andres Freund a54e1f1587 Fix bugs in vacuum of shared rels, by keeping their relcache entries current.
When vacuum processes a relation it uses the corresponding relcache
entry's relfrozenxid / relminmxid as a cutoff for when to remove
tuples etc. Unfortunately for nailed relations (i.e. critical system
catalogs) bugs could frequently lead to the corresponding relcache
entry being stale.

This set of bugs could cause actual data corruption as vacuum would
potentially not remove the correct row versions, potentially reviving
them at a later point.  After 699bf7d05c some corruptions in this vein
were prevented, but the additional error checks could also trigger
spuriously. Examples of such errors are:
  ERROR: found xmin ... from before relfrozenxid ...
and
  ERROR: found multixact ... from before relminmxid ...
For these errors to be caused by this bug, they have to occur on system
catalog tables.

The two bugs are:

1) Invalidations for nailed relations were ignored, based on the
   theory that the relcache entry for such tables doesn't
   change. Which is largely true, except for fields like relfrozenxid
   etc.  This means that changes to relations vacuumed in other
   sessions weren't picked up by already existing sessions.  Luckily
   autovacuum doesn't have particularly long-running sessions.

2) For shared *and* nailed relations, the shared relcache init file
   was never invalidated while running.  That means that for such
   tables (e.g. pg_authid, pg_database) it's not just already existing
   sessions that are affected, but even new connections are as well.
   That explains why the reports usually were about pg_authid et al.

To fix 1), revalidate the rd_rel portion of a relcache entry when
invalid. This implies a bit of extra complexity to deal with
bootstrapping, but it's not too bad.  The fix for 2) is simpler,
simply always remove both the shared and local init files.

Author: Andres Freund
Reviewed-By: Alvaro Herrera
Discussion:
    https://postgr.es/m/20180525203736.crkbg36muzxrjj5e@alap3.anarazel.de
    https://postgr.es/m/CAMa1XUhKSJd98JW4o9StWPrfS=11bPgG+_GDMxe25TvUY4Sugg@mail.gmail.com
    https://postgr.es/m/CAKMFJucqbuoDRfxPDX39WhA3vJyxweRg_zDVXzncr6+5wOguWA@mail.gmail.com
    https://postgr.es/m/CAGewt-ujGpMLQ09gXcUFMZaZsGJC98VXHEFbF-tpPB0fB13K+A@mail.gmail.com
Backpatch: 9.3-
2018-06-12 11:13:21 -07:00
Tom Lane 4e23236403 Improve commentary about run-time partition pruning data structures.
No code changes except for a couple of new Asserts.

David Rowley and Tom Lane

Discussion: https://postgr.es/m/CAKJS1f-6GODRNgEtdPxCnAPme2h2hTztB6LmtfdmcYAAOE0kQg@mail.gmail.com
2018-06-11 17:35:53 -04:00
Alvaro Herrera 5b0c7e2f75 Don't needlessly check the partition constraint twice
Starting with commit f0e44751d7, ExecConstraints was in charge of
running the partition constraint; commit 19c47e7c82 modified that so
that caller could request to skip that checking depending on some
conditions, but that commit and 15ce775faa together introduced a small
bug there which caused ExecInsert to request skipping the constraint
check but have this not be honored -- in effect doing the check twice.
This could have been fixed in a very small patch, but on further
analysis of the involved function and its callsites, it turns out to be
simpler to give the responsibility of checking the partition constraint
fully to the caller, and return ExecConstraints to its original
(pre-partitioning) shape where it only checked tuple descriptor-related
constraints.  Each caller must do partition constraint checking on its
own schedule, which is more convenient after commit 2f17844104 anyway.

Reported-by: David Rowley
Author: David Rowley, Álvaro Herrera
Reviewed-by: Amit Langote, Amit Khandekar, Simon Riggs
Discussion: https://postgr.es/m/CAKJS1f8w8+awsxgea8wt7_UX8qzOQ=Tm1LD+U1fHqBAkXxkW2w@mail.gmail.com
2018-06-11 17:12:16 -04:00
Tom Lane 321f648a31 Assorted cosmetic cleanup of run-time-partition-pruning code.
Use "subplan" rather than "subnode" to refer to the child plans of
a partitioning Append; this seems a bit more specific and hence
clearer.  Improve assorted comments.  No non-cosmetic changes.

David Rowley and Tom Lane

Discussion: https://postgr.es/m/CAFj8pRBjrufA3ocDm8o4LPGNye9Y+pm1b9kCwode4X04CULG3g@mail.gmail.com
2018-06-10 18:24:34 -04:00
Tom Lane 939449de0e Relocate partition pruning structs to a saner place.
These struct definitions were originally dropped into primnodes.h,
which is a poor choice since that's mainly intended for primitive
expression node types; these are not in that category.  What they
are is auxiliary info in Plan trees, so move them to plannodes.h.

For consistency, also relocate some related code that was apparently
placed with the aid of a dartboard.

There's no interesting code changes in this commit, just reshuffling.

David Rowley and Tom Lane

Discussion: https://postgr.es/m/CAFj8pRBjrufA3ocDm8o4LPGNye9Y+pm1b9kCwode4X04CULG3g@mail.gmail.com
2018-06-10 16:30:14 -04:00
Tom Lane 73b7f48f78 Improve run-time partition pruning to handle any stable expression.
The initial coding of the run-time-pruning feature only coped with cases
where the partition key(s) are compared to Params.  That is a bit silly;
we can allow it to work with any non-Var-containing stable expression, as
long as we take special care with expressions containing PARAM_EXEC Params.
The code is hardly any longer this way, and it's considerably clearer
(IMO at least).  Per gripe from Pavel Stehule.

David Rowley, whacked around a bit by me

Discussion: https://postgr.es/m/CAFj8pRBjrufA3ocDm8o4LPGNye9Y+pm1b9kCwode4X04CULG3g@mail.gmail.com
2018-06-10 15:22:32 -04:00
Michael Paquier 9e149c847f Fix and document lock handling for in-memory replication slot data
While debugging issues on HEAD for the new slot forwarding feature of
Postgres 11, some monitoring of the code surrounding in-memory slot data
has proved that the lock handling may cause inconsistent data to be read
by read-only callers of slot functions, particularly
pg_get_replication_slots() which fetches data for the system view
pg_replication_slots, or modules looking directly at slot information.

The code paths involved in those problems concern logical decoding
initialization (down to 9.4) and WAL reservation for slots (new as of
10).

A set of comments documenting all the lock handling, particularly the
dependency on LWLocks for slots and the in_use flag as well as the
internal mutex lock, is added, based on a suggestion by Simon Riggs.

Some of the fixed code exists down to 9.4 where WAL decoding was
introduced, but as those race conditions are really unlikely to happen,
since they concern code paths for slot and decoding creation, just
fix the problem on HEAD.

Author: Michael Paquier

Discussion: https://postgr.es/m/20180528085747.GA27845@paquier.xyz
2018-06-10 19:39:26 +09:00
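
The read pattern the new comments document looks roughly like this sketch:
read-only callers copy the fields they need while holding the slot's
spinlock instead of reading shared memory unlocked (the field choice is
illustrative):

    #include "postgres.h"
    #include "replication/slot.h"
    #include "storage/spin.h"

    /* Copy mutable slot fields under the per-slot mutex, then work on the
     * local copy without holding any lock. */
    static XLogRecPtr
    read_slot_restart_lsn(ReplicationSlot *slot)
    {
        XLogRecPtr  restart_lsn;

        SpinLockAcquire(&slot->mutex);
        restart_lsn = slot->data.restart_lsn;
        SpinLockRelease(&slot->mutex);

        return restart_lsn;
    }
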
Alvaro Herrera 0c8910a0ca Teach SHOW ALL to honor pg_read_all_settings membership
Also, fix the pg_settings view to display source filename and line
number when invoked by a pg_read_all_settings member.  This addition by
me (Álvaro).

Also, fix wording of the comment in GetConfigOption regarding the
restriction it implements, renaming the parameter for extra clarity.
Noted by Michaël.

These were all oversights in commit 25fff40798fc; backpatch to pg10,
where that commit first appeared.

Author: Laurenz Albe
Reviewed-by: Michaël Paquier, Álvaro Herrera
Discussion: https://postgr.es/m/1519917758.6586.8.camel@cybertec.at
2018-06-08 16:19:05 -04:00
Teodor Sigaev 08186dc05b Move _bt_upgrademetapage() into critical section.
Any changes on a page should be done in a critical section, so move
_bt_upgrademetapage into the critical section.  Improve a comment.  Found
by Amit Kapila during post-commit review of 857f9c36.

Author: Amit Kapila
2018-05-30 19:45:39 +03:00
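
The shape of the fixed call site is roughly the sketch below: the metapage
change sits between START_CRIT_SECTION() and END_CRIT_SECTION(), next to
marking the buffer dirty and the (omitted) WAL insertion; this is not the
actual nbtree code:

    #include "postgres.h"
    #include "access/nbtree.h"
    #include "miscadmin.h"
    #include "storage/bufmgr.h"

    /* Upgrade the btree metapage inside a critical section so a failure
     * mid-change turns into a PANIC rather than a torn page. */
    static void
    upgrade_metapage_safely(Buffer metabuf)
    {
        Page        metapg = BufferGetPage(metabuf);

        START_CRIT_SECTION();
        _bt_upgrademetapage(metapg);
        MarkBufferDirty(metabuf);
        /* the corresponding XLogInsert() for the change is omitted here */
        END_CRIT_SECTION();
    }
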
Andrew Dunstan 3a7cc727c7 Don't fall off the end of perl functions
This complies with the perlcritic policy
Subroutines::RequireFinalReturn, which is a severity 4 policy. Since we
only currently check at severity level 5, the policy is raised to that
level until we move to level 4 or lower, so that any new infringements
will be caught.

A small cosmetic piece of tidying of the pgperlcritic script is
included.

Mike Blackwell

Discussion: https://postgr.es/m/CAESHdJpfFm_9wQnQ3koY3c91FoRQsO-fh02za9R3OEMndOn84A@mail.gmail.com
2018-05-27 09:08:42 -04:00
Tom Lane 71b349aef4 Update a couple of long-obsolete comments in pg_type.h. 2018-05-26 13:47:26 -04:00
Tom Lane b929614f5e Remove configure's check for nonstandard "long long" printf modifiers.
We used to claim to support platforms using 'q' or 'I64' as the printf
length modifier for long long int, by dint of replacing snprintf with
our own code which uses the C99 standard 'll' modifier.  But that is
only adequate if we use INT64_MODIFIER only in snprintf-based calls,
not directly with the platform's native printf or fprintf.  Which
hasn't been the case for years.  We had not noticed, partially because
of inadequate test coverage, and partially because the buildfarm is
almost completely bare of machines that won't take 'll'.  The last
one seems to have been frogmouth, which was adjusted recently so that
it will take 'll'.  We might as well just give up on the pretense
that anything else works, and save ourselves some configure cycles.

Discussion: https://postgr.es/m/13103.1526749980@sss.pgh.pa.us
Discussion: https://postgr.es/m/24769.1526772680@sss.pgh.pa.us
2018-05-23 14:19:04 -04:00
Tom Lane 1d96c1b91a Fix incorrect ordering of operations in pg_resetwal and pg_rewind.
Commit c37b3d08c dropped its added GetDataDirectoryCreatePerm call into
the wrong place in pg_resetwal.c, namely after the chdir to DataDir.
That broke invocations using a relative path, as reported by Tushar Ahuja.
We could have left it where it was and changed the argument to be ".",
but that'd result in a rather confusing error message in event of a
failure, so re-ordering seems like a better solution.

Similarly reorder operations in pg_rewind.c.  The issue there is that
it doesn't seem like a good idea to do any actual operations before the
not-root check (on Unix) or the restricted token acquisition (on Windows).
I don't know that this is an actual bug, but I'm definitely not convinced
that it isn't, either.

Assorted other code review for c37b3d08c and da9b580d8: fix some
misspelled or otherwise badly worded comments, put the #include for
<sys/stat.h> where it actually belongs, etc.

Discussion: https://postgr.es/m/aeb9c3a7-3c3f-a57f-1a18-c8d4fcdc2a1f@enterprisedb.com
2018-05-23 10:59:55 -04:00
Tom Lane 586e4e6df5 Stamp 11beta1. 2018-05-21 17:08:10 -04:00
Tom Lane f755a152d4 Improve spelling of new FINALFUNC_MODIFY aggregate attribute.
I'd used SHARABLE as a value originally, but Peter Eisentraut points out
that dictionaries agree that SHAREABLE is the preferred spelling.
Run around and change that before it's too late.

Discussion: https://postgr.es/m/d2e1afd4-659c-50d6-1b20-7cfd3675e909@2ndquadrant.com
2018-05-21 11:41:42 -04:00
Tom Lane e7a808f947 Assorted minor cleanups for bootstrap-data Perl scripts.
FindDefinedSymbol was intended to take an array of possible include
paths, but it never actually worked correctly for any but the first
array element.  Since there's no use-case for more than one path
anyway, let's just simplify this code and its callers by redefining
it as taking only one include path.

Minor other code-beautification without functional effects, except
that in one place we format the output as pgindent would do.

John Naylor

Discussion: https://postgr.es/m/CAJVSVGXM_n32hTTkircW4_K1LQFsJNb6xjs0pAP4QC0ZpyJfPQ@mail.gmail.com
2018-05-19 16:04:47 -04:00
Tom Lane 06f66cff9e Support platforms where strtoll/strtoull are spelled __strtoll/__strtoull.
Ancient HPUX, for one, does this.  We hadn't noticed due to the lack
of regression tests that required a working strtoll.

(I was slightly tempted to remove the other historical spelling,
strto[u]q, since it seems we have no buildfarm members testing that case.
But I refrained.)

Discussion: https://postgr.es/m/151935568942.1461.14623890240535309745@wrigleys.postgresql.org
2018-05-19 14:22:18 -04:00
Tom Lane a6228128fc Arrange to supply declarations for strtoll/strtoull if needed.
Buildfarm member dromedary is still unhappy about the recently-added
ecpg "long long" tests.  The reason turns out to be that it includes
"-ansi" in its CFLAGS, and in their infinite wisdom Apple have decided
to hide the declarations of strtoll/strtoull in C89-compliant builds.
(I find it pretty curious that they hide those function declarations
when you can nonetheless declare a "long long" variable, but anyway
that is their behavior, both on dromedary's obsolete macOS version and
the newest and shiniest.)  As a result, gcc assumes these functions
return "int", leading naturally to wrong results.

(Looking at dromedary's past build results, it's evident that this
problem also breaks pg_strtouint64() on 32-bit platforms; but we
evidently have no regression tests that exercise that function with
values above 32 bits.)

To fix, supply declarations for these functions when the platform
provides the functions but not the declarations, using the same type
of mechanism as we use for some other similar cases.

Discussion: https://postgr.es/m/151935568942.1461.14623890240535309745@wrigleys.postgresql.org
2018-05-18 22:42:10 -04:00
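
The mechanism amounts to a stanza of roughly this shape (a sketch; the
configure-set macros follow the usual AC_CHECK_DECLS pattern and may not
match the committed code exactly):

    /* If the platform has strtoll()/strtoull() but hides their declarations
     * (e.g. Apple's headers under -ansi), declare them ourselves so the
     * compiler doesn't assume an int return type. */
    #if defined(HAVE_STRTOLL) && !HAVE_DECL_STRTOLL
    extern long long strtoll(const char *str, char **endptr, int base);
    #endif
    #if defined(HAVE_STRTOULL) && !HAVE_DECL_STRTOULL
    extern unsigned long long strtoull(const char *str, char **endptr, int base);
    #endif
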
Tom Lane f586f86389 Recognize that MSVC can support strtoll() and strtoull().
This is needed for full support of "long long" variables in ecpg, but
the previous patch for bug #15080 (commits 51057feaa et al) missed it.
In MSVC versions where the functions don't exist under those names,
we can nonetheless use _strtoi64() and _strtoui64().

Like the previous patch, back-patch all the way.

Dang Minh Huong

Discussion: https://postgr.es/m/151935568942.1461.14623890240535309745@wrigleys.postgresql.org
2018-05-18 12:52:28 -04:00
Tom Lane 2efc924180 Detoast plpgsql variables if they might live across a transaction boundary.
Up to now, it's been safe for plpgsql to store TOAST pointers in its
variables because the ActiveSnapshot for whatever query called the plpgsql
function will surely protect such TOAST values from being vacuumed away,
even if the owning table rows are committed dead.  With the introduction of
procedures, that assumption is no longer good in "non atomic" executions
of plpgsql code.  We adopt the slightly brute-force solution of detoasting
all TOAST pointers at the time they are stored into variables, if we're in
a non-atomic context, just in case the owning row goes away.

Some care is needed to avoid long-term memory leaks, since plpgsql tends
to run with CurrentMemoryContext pointing to its call-lifespan context,
but we shouldn't assume that no memory is leaked by heap_tuple_fetch_attr.
In plpgsql proper, we can do the detoasting work in the "eval_mcontext".

Most of the code thrashing here is due to the need to add this capability
to expandedrecord.c as well as plpgsql proper.  In expandedrecord.c,
we can't assume that the caller's context is short-lived, so make use of
the short-term sub-context that was already invented for checking domain
constraints.  In view of this repurposing, it seems good to rename that
variable and associated code from "domain_check_cxt" to "short_term_cxt".

Peter Eisentraut and Tom Lane

Discussion: https://postgr.es/m/5AC06865.9050005@anastigmatix.net
2018-05-16 14:56:52 -04:00
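
A sketch of the storage-time check, assuming we know whether execution is
atomic and which memory context the variable's value should live in; the
function and parameter names are made up:

    #include "postgres.h"
    #include "access/tuptoaster.h"
    #include "utils/memutils.h"

    /* Before storing a varlena value in a variable, detoast any external
     * TOAST pointer if we're in a non-atomic (procedure) context, so the
     * value survives an intervening transaction end. */
    static Datum
    maybe_detoast_for_storage(Datum value, int16 typlen, bool atomic_context,
                              MemoryContext target_cxt)
    {
        if (!atomic_context && typlen == -1 &&
            VARATT_IS_EXTERNAL(DatumGetPointer(value)))
        {
            MemoryContext oldcxt = MemoryContextSwitchTo(target_cxt);
            struct varlena *detoasted =
                heap_tuple_fetch_attr((struct varlena *) DatumGetPointer(value));

            MemoryContextSwitchTo(oldcxt);
            return PointerGetDatum(detoasted);
        }
        return value;
    }
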
Alvaro Herrera d1e2cac5ff Make gen_partprune_steps static
There's no need to export this function, so don't.  Michaël didn't
actually write the patch, but we list him as first author because with a
trivial one like this, intellectual authorship is as important as (if not
more than) bit shovelling.

Author: Michaël Paquier, Amit Langote
Discussion: https://postgr.es/m/c91299c4-199b-0f16-339b-a29d6d2a39ee@lab.ntt.co.jp
2018-05-09 10:40:25 -03:00
Heikki Linnakangas 0668719801 Fix scenario where streaming standby gets stuck at a continuation record.
If a continuation record is split so that its first half has already been
removed from the master, and is only present in pg_wal, and there is a
recycled WAL segment in the standby server that looks like it would
contain the second half, recovery would get stuck. The code in
XLogPageRead() incorrectly started streaming at the beginning of the
WAL record, even if we had already read the first page.

Backpatch to 9.4. In principle, older versions have the same problem, but
without replication slots, there was no straightforward mechanism to
prevent the master from recycling old WAL that was still needed by the standby.
Without such a mechanism, I think it's reasonable to assume that there's
enough slack in how many old segments are kept around to not run into this,
or you have a WAL archive.

Reported by Jonathon Nelson. Analysis and patch by Kyotaro HORIGUCHI, with
some extra comments by me.

Discussion: https://www.postgresql.org/message-id/CACJqAM3xVz0JY1XFDKPP%2BJoJAjoGx%3DGNuOAshEDWCext7BFvCQ%40mail.gmail.com
2018-05-05 01:34:53 +03:00
Teodor Sigaev 0bef1c0678 Re-think predicate locking on GIN indexes.
The principle behind the locking was not very well thought-out, and not
documented. Add a section in the README to explain how it's supposed to
work, and change the code so that it actually works that way.

This fixes two bugs:

1. If fast update was turned on concurrently, subsequent inserts to the
   pending list would not conflict with predicate locks that were acquired
   earlier, on entry pages. The included 'predicate-gin-fastupdate' test
   demonstrates that. To fix, make all scans acquire a predicate lock on
   the metapage. That lock represents a scan of the pending list, whether
   or not there is a pending list at the moment. Forget about the
   optimization to skip locking/checking for locks, when fastupdate=off.
2. If a scan finds no match, it still needs to lock the entry page. The
   point of predicate locks is to lock the gaps between values, whether
   or not there is a match. The included 'predicate-gin-nomatch' test
   tests that case.

In addition to those two bug fixes, this removes some unnecessary locking,
following the principle laid out in the README. Because all items in
a posting tree have the same key value, a lock on the posting tree root is
enough to cover all the items. (With a very large posting tree, it would
possibly be better to lock the posting tree leaf pages instead, so that
with a "skip scan" of a query like "A & B" you could avoid an unnecessary
conflict if a new tuple is inserted with A but !B. But let's keep this simple.)

Also, some spelling fixes.

Author: Heikki Linnakangas with some editorization by me
Review: Andrey Borodin, Alexander Korotkov
Discussion: https://www.postgresql.org/message-id/0b3ad2c2-2692-62a9-3a04-5724f2af9114@iki.fi
2018-05-04 11:27:50 +03:00
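
A sketch of the rule added to the README: every GIN scan takes a predicate
lock on the metapage up front, representing a scan of the pending list
whether or not one exists. The wrapper below is made up; the real call
sites are in the GIN scan code:

    #include "postgres.h"
    #include "access/gin_private.h"
    #include "storage/predicate.h"

    /* Serializable scans lock the metapage to represent "reading the
     * pending list", even when fastupdate is off. */
    static void
    gin_scan_lock_pending_list(Relation index, Snapshot snapshot)
    {
        PredicateLockPage(index, GIN_METAPAGE_BLKNO, snapshot);
    }
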
Tom Lane 9bf28f96c7 Rearrange makefile rules for running Gen_fmgrtab.pl.
Make these rules look more like the ones associated with genbki.pl,
to wit:

* Use a stamp file to record when we last ran the script, instead of
relying on the timestamps of the individual output files.

* Take the knowledge out of backend/Makefile and put it in utils/Makefile
where it belongs.  I moved down the handling of errcodes.h and probes.h
too, although those continue to be built by separate processes.

In itself, this is just much-needed cleanup with little practical effect.
However, by decoupling these makefile rules from the timestamps of the
generated header files, we open the door to not advancing those timestamps
unnecessarily, which will be taken advantage of by the next commit.

msvc/Solution.pm should be taught to do things similarly, but I'll leave
that for another commit.

Discussion: https://postgr.es/m/16925.1525376229@sss.pgh.pa.us
2018-05-03 17:54:18 -04:00
Peter Eisentraut 30c66e77be Fix SPI error cleanup and memory leak
Since the SPI stack has been moved from TopTransactionContext to
TopMemoryContext, setting _SPI_stack to NULL in AtEOXact_SPI() leaks
memory.  In fact, we don't need to do that anymore: We just leave the
allocated stack around for the next SPI use.

Also, refactor the SPI cleanup so that it is run both at transaction end
and when returning to the main loop on an exception.  The latter is
necessary when a procedure calls a COMMIT or ROLLBACK command that
itself causes an error.
2018-05-03 08:39:15 -04:00
Tom Lane 1c72ec6f49 Improve our method for probing the availability of ARM CRC instructions.
Instead of depending on glibc's getauxval() function, just try to execute
the CRC code, and trap SIGILL if that happens.

Thomas Munro

Discussion: https://postgr.es/m/HE1PR0801MB1323D171938EABC04FFE7FA9E3110@HE1PR0801MB1323.eurprd08.prod.outlook.com
2018-05-02 18:06:43 -04:00
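
A sketch of the probe, assuming pg_comp_crc32c_armv8() is the hardware
implementation to exercise; the real chooser differs in details such as how
the signal handler is installed and restored:

    #include "c.h"

    #include <setjmp.h>
    #include <signal.h>

    #include "port/pg_crc32c.h"

    static sigjmp_buf illegal_instruction_jump;

    static void
    illegal_instruction_handler(int signo)
    {
        siglongjmp(illegal_instruction_jump, 1);
    }

    /* Execute one hardware CRC computation under a SIGILL guard; if the CPU
     * lacks the instructions we land in the else branch and report false. */
    static bool
    crc32c_hardware_available(void)
    {
        bool        ok;
        struct sigaction sa,
                    oldsa;

        sa.sa_handler = illegal_instruction_handler;
        sigemptyset(&sa.sa_mask);
        sa.sa_flags = 0;
        sigaction(SIGILL, &sa, &oldsa);

        if (sigsetjmp(illegal_instruction_jump, 1) == 0)
        {
            (void) pg_comp_crc32c_armv8(0, "x", 1);
            ok = true;
        }
        else
            ok = false;

        sigaction(SIGILL, &oldsa, NULL);
        return ok;
    }
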
Heikki Linnakangas 445e31bdc7 Fix some sloppiness in the new BufFileSize() and BufFileAppend() functions.
There were three related issues:

* BufFileAppend() incorrectly reset the seek position on the 'source' file.
  As a result, if you had called BufFileRead() on the file before calling
  BufFileAppend(), it got confused, and subsequent calls would read/write
  at wrong position.

* BufFileSize() did not work with files opened with BufFileOpenShared().

* FileGetSize() only worked on temporary files.

To fix, change the way BufFileSize() works so that it works on shared
files. Remove FileGetSize() altogether, as it's no longer needed. Remove
buffilesize from TapeShare struct, as the leader process can simply call
BufFileSize() to get the tape's size, there's no need to pass it through
shared memory anymore.

Discussion: https://www.postgresql.org/message-id/CAH2-WznEDYe_NZXxmnOfsoV54oFkTdMy7YLE2NPBLuttO96vTQ@mail.gmail.com
2018-05-02 17:23:13 +03:00
Tom Lane 6fe25c1358 Change SIZEOF_BOOL to 1 for Windows.
For some reason it was previously defined as 0, which is silly.  The only
effect was to disable use of <stdbool.h>, which commit b2328bf62 intended
to make possible.

Thomas Munro

Discussion: https://postgr.es/m/CAEepm%3D3%3DTDYEXUEcHpEx%2BTwc31wo7PA0oBAiNt6sWmq93MW02A%40mail.gmail.com
2018-05-02 00:21:21 -04:00
Tom Lane b2328bf62b Fix some assorted compiler warnings on Windows.
Don't overflow the result type of constant expressions.  Don't negate
unsigned types.  Define HAVE_STDBOOL_H for Visual C++ 2013 and later.

Thomas Munro
Reviewed-By: Michael Paquier and Tom Lane

Discussion: https://postgr.es/m/CAEepm%3D3%3DTDYEXUEcHpEx%2BTwc31wo7PA0oBAiNt6sWmq93MW02A%40mail.gmail.com
2018-05-01 19:38:26 -04:00
Andres Freund 1667148a4d Improve representation of 'moved partitions' indicator on deleted tuples.
Previously, a tuple that had been moved to a different partition (see
f16241bef7) had the block number on the old tuple set to an invalid
value to indicate that fact, but the tuple offset was left
untouched.  That turned out to trigger a wal_consistency_checking
failure, as reported by Peter Geoghegan, because the offset wasn't
always overwritten during WAL replay.

Heikki observed that we're wasting valuable data by not putting
information also in the offset. Thus set that to
MovedPartitionsOffsetNumber when a tuple indicates it has moved.

We continue to set the block number to MovedPartitionsBlockNumber, as
that seems more likely to cause problems for code not updated to know
about moved tuples.

As t_ctid's offset number is now always set, this refinement also
fixes the wal_consistency_checking issue.

This technically is a minor disk format break, with previously created
moved tuples not being recognized anymore. But since there has not even
been a beta release since f16241bef7c...

Reported-By: Peter Geoghegan
Author: Heikki Linnakangas, Amul Sul
Discussion: https://postgr.es/m/CAH2-Wzm9ty+1BX7-GMNJ=xPRg67oJTVeDNdA9LSyJJtMgRiCMA@mail.gmail.com
2018-05-01 13:30:12 -07:00
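
A sketch of the marker using the names mentioned above
(MovedPartitionsBlockNumber / MovedPartitionsOffsetNumber); the real code
wraps this in HeapTupleHeader helper macros:

    #include "postgres.h"
    #include "access/htup_details.h"

    /* Mark the old tuple's t_ctid as "moved to another partition" by setting
     * both the block number and the offset number to their special values. */
    static void
    mark_tuple_moved_to_other_partition(HeapTupleHeader tup)
    {
        ItemPointerSet(&tup->t_ctid,
                       MovedPartitionsBlockNumber,
                       MovedPartitionsOffsetNumber);
    }
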
Tom Lane 85475afdb6 Cosmetic improvement: use BKI_DEFAULT and BKI_LOOKUP in pg_language.
The point of this is not really to remove redundancy in pg_language.dat;
with only three entries, it's hardly worth it.  Rather, it is to get
to a point where there are exactly zero hard-coded numeric pg_proc OID
references in the catalog .dat files.  The lanvalidator column was the
only remaining location of such references, and it seems like a good
thing for future-proofing reasons to make it not be a special case.

There are still a few places in the .dat files with numeric OID references
to other catalogs, but after review I don't see any that seem worth
changing at present.  In each case there are just too few entries to make
it worth the trouble to create lookup infrastructure.

This doesn't change the emitted postgres.bki file, so no catversion bump.
2018-04-29 13:26:26 -04:00
Tom Lane 9cb7db3f0c In AtEOXact_Files, complain if any files remain unclosed at commit.
This change makes this module act more like most of our other low-level
resource management modules.  It's a caller error if something is not
explicitly closed by the end of a successful transaction, so issue
a WARNING about it.  This would not actually have caught the file leak
bug fixed in commit 231bcd080, because that was in a transaction-abort
path; but it still seems like a good, and pretty cheap, cross-check.

Discussion: https://postgr.es/m/152056616579.4966.583293218357089052@wrigleys.postgresql.org
2018-04-28 17:45:02 -04:00
Tom Lane 84549ebd4c Tweak reformat_dat_file.pl to make it more easily hand-invokable.
Use the same code we already applied in duplicate_oids and unused_oids
to let this script find Catalog.pm without help.  This removes the need
to supply a -I switch in most cases.

Also, mark the script executable, again to follow the precedent of
duplicate_oids and unused_oids.  Now you can just do
"./reformat_dat_file.pl pg_proc.dat"
if you want to reformat only one or a few .dat files rather than all.

It'd be possible to remove the -I switches in the Makefile's convenience
targets, but I chose to leave them: they don't hurt anything, and it's
possible that in weird VPATH situations they might be of value.
2018-04-28 16:09:03 -04:00
Tom Lane 45c6d75f8c Clarify handling of special-case values in bootstrap catalog data.
I (tgl) originally coded the special case for pg_proc.pronargs as
though it were a kind of default value.  It seems better though to
treat computable columns as an independent concern: this makes the
code clearer, and probably a bit faster too since we needn't do
work inside the per-column loop.

Improve related comments, as well, in the expectation that there
might be more cases like this in future.

John Naylor, some additional comment-hacking by me

Discussion: https://postgr.es/m/CAJVSVGW-D7OobzU=dybVT2JqZAx-4X1yvBJdavBmqQL05Q6CLw@mail.gmail.com
2018-04-28 15:27:16 -04:00
Tom Lane 4094031dd3 Assorted minor doc/comment fixes.
Identify pg_replication_origin as a shared catalog in catalogs.sgml,
using the same boilerplate wording used for most other shared catalogs
(and tweak another place where someone had randomly deviated from
that boilerplate).

Make an example in mmgr/README more consistent with surrounding text.

Update an obsolete cross-reference in a comment in storage/block.h.

Zhuo Ql

Discussion: https://postgr.es/m/44296255.1819230.1524889719001@mail.yahoo.com
2018-04-28 11:46:15 -04:00
Bruce Momjian 7551d9bc40 C comment: add description of root_tuple_slot
Reported-by: Amit Langote

Discussion: https://postgr.es/m/d2e6674c-c5c6-fe89-1d0b-3534b9db0476@lab.ntt.co.jp
2018-04-26 14:55:03 -04:00
Tom Lane bdf46af748 Post-feature-freeze pgindent run.
Discussion: https://postgr.es/m/15719.1523984266@sss.pgh.pa.us
2018-04-26 14:47:16 -04:00
Tom Lane f83bf385c1 Preliminary work for pgindent run.
Update typedefs.list from current buildfarm results.  Adjust pgindent's
typedef blacklist to block some more unfortunate typedef names that have
snuck in since last time.  Manually tweak a few places where I didn't
like the initial results of pgindent'ing.
2018-04-26 14:45:04 -04:00
Tom Lane dd4cc9d706 Fix duplicate_oids and unused_oids so user needn't cd to catalog dir.
Previously, you had to cd into src/include/catalog before running either
of these scripts.  That's a bit tedious, so let's make the scripts do it
for you.

In passing, improve the initial comments in both scripts.  Also remove
unused_oids' code to complain about duplicate oids.  That was added in
yesterday's commit 5602265f7, but on second thought we shouldn't be
randomly redefining the script's behavior that way.

John Naylor and Tom Lane

Discussion: https://postgr.es/m/37D774E4-FE1F-437E-B3D2-593F314B7505@postgrespro.ru
2018-04-26 11:20:01 -04:00
Tom Lane 5602265f77 Convert unused_oids and duplicate_oids to use Catalog.pm infrastructure.
unused_oids was previously a shell script, which of course didn't work at
all on Windows.  Also, commit 372728b0d introduced some other portability
problems, as complained of by Stas Kelvich.  We can improve matters by
converting it to Perl.

While we're at it, let's future-proof both this script and duplicate_oids
to use Catalog.pm rather than having a bunch of ad-hoc logic for parsing
catalog headers and .dat files.  These scripts are thereby a bit slower,
which doesn't seem like a problem for typical manual use.  It is a little
annoying for buildfarm purposes, but we should be able to fix that case
by having genbki.pl make the check instead of parsing the headers twice.
(That's not done in this commit, though.)

Stas Kelvich, adjusted a bit by me

Discussion: https://postgr.es/m/37D774E4-FE1F-437E-B3D2-593F314B7505@postgrespro.ru
2018-04-25 16:01:58 -04:00
Tom Lane 1eb3a09e93 Make Catalog.pm's representation of toast and index decls more abstract.
Instead of immediately constructing the string we need to emit into the
.BKI file, preserve the items we extracted from the header file in a hash.
This eases using the info for other purposes.

John Naylor (with cosmetic adjustments by me)

Discussion: https://postgr.es/m/37D774E4-FE1F-437E-B3D2-593F314B7505@postgrespro.ru
2018-04-25 16:01:58 -04:00
Tom Lane f04d4ac919 Reindent Perl files with perltidy version 20170521.
Discussion: https://postgr.es/m/CABUevEzK3cNiHZQ18f5tK0guoT+cN_jWeVzhYYxY=r+1Q3SmoA@mail.gmail.com
2018-04-25 14:00:19 -04:00
Alvaro Herrera 1957f8dabf Initialize ExprStates once in run-time partition pruning
Instead of doing ExecInitExpr every time a Param needs to be evaluated
in run-time partition pruning, do it once during run-time pruning
set-up and cache the exprstate in PartitionPruneContext, saving a lot of
work.

Author: David Rowley
Reviewed-by: Amit Langote, Álvaro Herrera
Discussion: https://postgr.es/m/CAKJS1f8-x+q-90QAPDu_okhQBV4DPEtPz8CJ=m0940GyT4DA4w@mail.gmail.com
2018-04-24 14:03:10 -03:00
Alvaro Herrera 055fb8d33d Add GUC enable_partition_pruning
This controls both plan-time and execution-time new-style partition
pruning.  While finer-grained control is possible (maybe using an enum GUC
instead of a boolean), there doesn't seem to be much need for that.

This new parameter controls partition pruning for all queries:
trivially, SELECT queries that affect partitioned tables are naturally
under its control since they are using the new technology.  However,
while UPDATE/DELETE queries do not use the new code, we make the new GUC
control their behavior also (stealing control from
constraint_exclusion), because it is more natural, and it leads to a
more natural transition to the future in which those queries will also
use the new pruning code.

Constraint exclusion still controls pruning for regular inheritance
situations (those not involving partitioned tables).
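
As a rough illustration (table and partition names here are invented for the
example), the new GUC can be exercised like this:

  CREATE TABLE measurement (logdate date, value int)
      PARTITION BY RANGE (logdate);
  CREATE TABLE measurement_y2017 PARTITION OF measurement
      FOR VALUES FROM ('2017-01-01') TO ('2018-01-01');
  CREATE TABLE measurement_y2018 PARTITION OF measurement
      FOR VALUES FROM ('2018-01-01') TO ('2019-01-01');

  SET enable_partition_pruning = on;   -- the default
  EXPLAIN SELECT * FROM measurement WHERE logdate = DATE '2018-06-01';
  -- only the matching partition should appear in the plan

  SET enable_partition_pruning = off;
  -- the plan should now scan every partition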

Author: David Rowley
Review: Amit Langote, Ashutosh Bapat, Justin Pryzby, David G. Johnston
Discussion: https://postgr.es/m/CAKJS1f_0HwsxJG9m+nzU+CizxSdGtfe6iF_ykPYBiYft302DCw@mail.gmail.com
2018-04-23 17:57:43 -03:00
Teodor Sigaev 6db4b49986 Fix wrong validation of top-parent pointer during page deletion in Btree.
After introducing the use of t_tid in inner tuples and page high keys for storing
the number of tuple attributes, validating a tuple's ItemPointer with
ItemPointerIsValid became incorrect; only the block number of the ItemPointer
needs to be validated.  Missing this caused an incorrect page deletion, so fix
that.  A test is added.

BTW, current contrib/amcheck doesn't fail on an index corrupted in this way.

Also introduce the BTreeTupleGetTopParent/BTreeTupleSetTopParent macros to improve
code readability and to avoid possible confusion with the page high key: the high
key is used to store the top-parent link for the branch to remove.

Bug found by Michael Paquier, but the bug doesn't exist in previous versions
because t_tid was set to P_HIKEY.

Author: Teodor Sigaev
Reviewer: Peter Geoghegan
Discussion: https://www.postgresql.org/message-id/flat/20180419052436.GA16000%40paquier.xyz
2018-04-23 15:55:10 +03:00
Heikki Linnakangas 6a7b2ce2bd Make PGJIT_* macros safer.
Author: David Rowley
Discussion: https://www.postgresql.org/message-id/CAKJS1f8Ge2y0sDs6RQEJFH-vjb-bWhs86rCX4Fp4FZ+TmxtRkw@mail.gmail.com
2018-04-23 04:48:08 -04:00
Magnus Hagander 43cc4ee634 Add comment explaining BGWORKER_BYPASS_ALLOWCONN
Suggested by Michael Paquier
2018-04-23 10:31:22 +02:00
Tom Lane ec38dcd363 Tweak a couple of planner APIs to save recalculating join relids.
Discussion: https://postgr.es/m/f8128b11-c5bf-3539-48cd-234178b2314d@proxel.se
2018-04-20 16:00:47 -04:00
Tom Lane c792c7db41 Change more places to be less trusting of RestrictInfo.is_pushed_down.
On further reflection, commit e5d83995e didn't go far enough: pretty much
everywhere in the planner that examines a clause's is_pushed_down flag
ought to be changed to use the more complicated behavior where we also
check the clause's required_relids.  Otherwise we could make incorrect
decisions about whether, say, a clause is safe to use as a hash clause.

Some (many?) of these places are safe as-is, either because they are
never reached while considering a parameterized path, or because there
are additional checks that would reject a pushed-down clause anyway.
However, it seems smarter to just code them all the same way rather
than rely on easily-broken reasoning of that sort.

In support of that, invent a new macro RINFO_IS_PUSHED_DOWN that should
be used in place of direct tests on the is_pushed_down flag.

Like the previous patch, back-patch to all supported branches.

Discussion: https://postgr.es/m/f8128b11-c5bf-3539-48cd-234178b2314d@proxel.se
2018-04-20 15:19:16 -04:00
Tom Lane 68c23cba34 Improve consistency of comments in system catalog headers.
Use the term "system catalog" rather than "system relation" in assorted
places where it's clearly referring to a table rather than, say, an
index.  Use more natural word order in the header boilerplate, improve
some of the one-liner catalog descriptions, and fix assorted random
deviations from the normal boilerplate.  All purely neatnik-ism, but
why not.

John Naylor, some additional cleanup by me

Discussion: https://postgr.es/m/CAJVSVGUeJmFB3h-NJ18P32NPa+kzC165nm7GSoGHfPaN80Wxcw@mail.gmail.com
2018-04-19 17:14:09 -04:00
Tom Lane e5d83995e9 Fix incorrect handling of join clauses pushed into parameterized paths.
In some cases a clause attached to an outer join can be pushed down into
the outer join's RHS even though the clause is not degenerate --- this
can happen if we choose to make a parameterized path for the RHS.  If
the clause ends up attached to a lower outer join, we'd misclassify it
as being a "join filter" not a plain "filter" condition at that node,
leading to wrong query results.

To fix, teach extract_actual_join_clauses to examine each join clause's
required_relids, not just its is_pushed_down flag.  (The latter now
seems vestigial, or at least in need of rethinking, but we won't do
anything so invasive as redefining it in a bug-fix patch.)

This has been wrong since we introduced parameterized paths in 9.2,
though it's evidently hard to hit given the lack of previous reports.
The test case used here involves a lateral function call, and I think
that a lateral reference may be required to get the planner to select
a broken plan; though I wouldn't swear to that.  In any case, even if
LATERAL is needed to trigger the bug, it still affects all supported
branches, so back-patch to all.

Per report from Andreas Karlsson.  Thanks to Andrew Gierth for
preliminary investigation.

Discussion: https://postgr.es/m/f8128b11-c5bf-3539-48cd-234178b2314d@proxel.se
2018-04-19 15:49:30 -04:00
Teodor Sigaev ff4943042f Fix datatype for number of heap tuples during last cleanup
It appears that new fields introduced in 857f9c36 have inconsistent datatypes:
BTMetaPageData.btm_last_cleanup_num_heap_tuples is of float4 type,
while xl_btree_metadata.last_cleanup_num_heap_tuples is of double type.
IndexVacuumInfo.num_heap_tuples, which is the source of values for
both former fields, is of double type.  So, make both those fields in
BTMetaPageData and xl_btree_metadata use float8 type in order to match the
precision of the source.  That shouldn't be double type, because we always
use types with explicit width in WAL.

The patch introduces an on-disk format incompatibility relative to commit
857f9c36, but that version was never released, so just bump the catalog version
to avoid possible confusion.

Author: Alexander Korotkov
2018-04-19 11:28:03 +03:00
Teodor Sigaev 075aade436 Adjust INCLUDE index truncation comments and code.
Add several assertions that ensure that we're dealing with a pivot tuple
without non-key attributes where that's expected.  Also, remove the
assertion within _bt_isequal(), restoring the v10 function signature.  A
similar check will be performed for the page highkey within
_bt_moveright() in most cases.  Also avoid dropping all objects within
regression tests, to increase pg_dump test coverage for INCLUDE indexes.

Rather than using infrastructure that's generally intended to be used
with reference counted heap tuple descriptors during truncation, use the
same function that was introduced to store flat TupleDescs in shared
memory (we use a temp palloc'd buffer).  This isn't strictly necessary,
but seems more future-proof than the old approach.  It also lets us
avoid including rel.h within indextuple.c, which was arguably a
modularity violation.  Also, we now call index_deform_tuple() with the
truncated TupleDesc, not the source TupleDesc, since that's more robust,
and saves a few cycles.

In passing, fix a memory leak by pfree'ing truncated pivot tuple memory
during CREATE INDEX.  Also pfree during a page split, just to be
consistent.

Refactor _bt_check_natts() to be more readable.

Author: Peter Geoghegan with some editorization by me
Reviewed by: Alexander Korotkov, Teodor Sigaev
Discussion: https://www.postgresql.org/message-id/CAH2-Wz%3DkCWuXeMrBCopC-tFs3FbiVxQNjjgNKdG2sHxZ5k2y3w%40mail.gmail.com
2018-04-19 08:45:58 +03:00
Tom Lane 55d26ff638 Rationalize handling of single and double quotes in bootstrap data.
Change things around so that proper quoting of values interpolated into
the BKI data by initdb is the responsibility of initdb, not something
we half-heartedly handle by putting double quotes into the raw BKI data.
(Note: experimentation shows that it still doesn't work to put a double
quote into the initial superuser username, but that's the fault of
inadequate quoting while interpolating the name into SQL scripts;
the BKI aspect of it works fine now.)

Having done that, we can remove the special-case handling of values
that look like "something" from genbki.pl, and instead teach it to
escape double --- and single --- quotes properly.  This removes the
nowhere-documented need to treat those specially in the BKI source
data; whatever you write will be passed through unchanged into the
inserted data value, modulo Perl's rules about single-quoted strings.

Add documentation explaining the (pre-existing) handling of backslashes
in the BKI data.

Per an earlier discussion with John Naylor.

Discussion: https://postgr.es/m/CAJVSVGUNao=-Q2-vAN3PYcdF5tnL5JAHwGwzZGuYHtq+Mk_9ng@mail.gmail.com
2018-04-17 19:53:50 -04:00
Tom Lane 9ffcccdb95 Rationalize handling of array type names in bootstrap data.
Formerly, Catalog.pm turned a C array type declaration in the catalog
header files into a SQL type, e.g., 'foo[]'.  Along the way, genbki.pl
turned this into '_foo' for the purpose of type lookups, but wrote 'foo[]'
to postgres.bki.  During bootstrap, bootscanner.l had to have a special
case rule to tokenize this, and then MapArrayTypeName() would turn 'foo[]'
into '_foo' one more time.

This seems unnecessarily complicated, especially since nobody cares that
much about the readability of postgres.bki.  Instead, make Catalog.pm
convert the C declaration into '_foo' to start with, and preserve that
representation of the type name throughout bootstrap data processing.
Then rip out the special-case code in bootscanner.l and bootstrap.c.

This changes postgres.bki to the extent that array fields are now
declared like
  proconfig = _text ,
rather than
  proconfig = text[] ,

No documentation update, since the SGML docs didn't mention any of this
in the first place, and it's all pretty transparent to writers of
catalog header files anyway.

John Naylor

Discussion: https://postgr.es/m/CAJVSVGUNao=-Q2-vAN3PYcdF5tnL5JAHwGwzZGuYHtq+Mk_9ng@mail.gmail.com
2018-04-17 18:29:11 -04:00
Heikki Linnakangas cf5a189059 Fix confusion on the padding of GIDs in on commit and abort records.
Review of commit 1eb6d652: It's pointless to add padding to the GID fields,
when the code that follows assumes that there is no alignment, and uses
memcpy(). Remove the pointless padding.

Update comments to note the new fields in the WAL records.

Reviewed-by: Michael Paquier
Discussion: https://www.postgresql.org/message-id/33b787bf-dc20-1161-54e9-3f3b607bf59d%40iki.fi
2018-04-17 16:10:42 -04:00
Alvaro Herrera b7e2cbc5b4 Update Append's idea of first_partial_plan
It turns out that after runtime partition pruning, Append's
first_partial_plan does not accurately represent partial plans to run,
if any of those got pruned.  This could limit participation of workers
in some partial subplans, if other subplans got pruned.  Fix it by
keeping an index of the first valid partial subplan in the state node,
determined at execnode Init time.

Author: David Rowley, with cosmetic changes by me.
Discussion: https://postgr.es/m/CAKJS1f8o2Yd=rOP=Et3A0FWgF+gSAOkFSU6eNhnGzTPV7nN8sQ@mail.gmail.com
2018-04-17 16:25:02 -03:00
Alvaro Herrera 4d64abc2fe List src/include/partitioning in src/include/Makefile
This omission prevented partitioning header files from being installed.

Per buildfarm member crake.
2018-04-14 21:33:32 -03:00
Alvaro Herrera da6f3e45dd Reorganize partitioning code
There's been a massive addition of partitioning code in PostgreSQL 11,
with little oversight on its placement, resulting in a
catalog/partition.c with poorly defined boundaries and responsibilities.
This commit tries to set a couple of distinct modules to separate things
a little bit.  There are no code changes here, only code movement.

There are three new files:
  src/backend/utils/cache/partcache.c
  src/include/partitioning/partdefs.h
  src/include/utils/partcache.h

The previous arrangement of #including catalog/partition.h almost
everywhere is no more.

Authors: Amit Langote and Álvaro Herrera
Discussion: https://postgr.es/m/98e8d509-790a-128c-be7f-e48a5b2d8d97@lab.ntt.co.jp
	https://postgr.es/m/11aa0c50-316b-18bb-722d-c23814f39059@lab.ntt.co.jp
	https://postgr.es/m/143ed9a4-6038-76d4-9a55-502035815e68@lab.ntt.co.jp
	https://postgr.es/m/20180413193503.nynq7bnmgh6vs5vm@alvherre.pgsql
2018-04-14 21:12:14 -03:00
Tom Lane 2a67d6440d Add commentary explaining why MaxIndexTuplesPerPage calculation is safe.
MaxIndexTuplesPerPage ignores the fact that btree indexes sometimes
store tuples with no data payload.  But it also ignores the possibility
of "special space" on index pages, which offsets that, so that the
result isn't an underestimate.  This all seems worth documenting, though.

In passing, remove #define MinIndexTupleSize, which was added by
commit 2c03216d8 but not used in that commit nor later ones.

Comment text by me; issue noticed by Peter Geoghegan.

Discussion: https://postgr.es/m/CAH2-WzkQmb54Kbx-YHXstRKXcNc+_87jwV3DRb54xcybLR7Oig@mail.gmail.com
2018-04-14 12:33:15 -04:00
Peter Eisentraut a8677e3ff6 Support named and default arguments in CALL
We need to call expand_function_arguments() to expand named and default
arguments.

In PL/pgSQL, we also need to deal with named and default INOUT arguments
when receiving the output values into variables.
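
A minimal sketch of what this enables (procedure and table names are invented
for the example):

  CREATE TABLE accounts (id int, balance numeric);
  CREATE PROCEDURE add_account(acct_id int, initial numeric DEFAULT 0)
  LANGUAGE SQL
  AS $$ INSERT INTO accounts VALUES (acct_id, initial); $$;

  CALL add_account(1, 100.0);                    -- positional
  CALL add_account(acct_id => 2);                -- named; "initial" uses its default
  CALL add_account(initial => 5, acct_id => 3);  -- named, in any order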

Author: Pavel Stehule <pavel.stehule@gmail.com>
2018-04-14 09:13:53 -04:00
Simon Riggs 08ea7a2291 Revert MERGE patch
This reverts commits d204ef6377,
83454e3c2b and a few more commits thereafter
(complete list at the end) related to the MERGE feature.

While the feature was fully functional, with sufficient test coverage and
necessary documentation, it was felt that some parts of the executor and
parse-analyzer could use a different design, and it wasn't possible to do that
in the available time.  So it was decided to revert the patch for PG11 and
retry in the future.

Thanks again to all reviewers and bug reporters.

List of commits reverted, in reverse chronological order:

 f1464c5380 Improve parse representation for MERGE
 ddb4158579 MERGE syntax diagram correction
 530e69e59b Allow cpluspluscheck to pass by renaming variable
 01b88b4df5 MERGE minor errata
 3af7b2b0d4 MERGE fix variable warning in non-assert builds
 a5d86181ec MERGE INSERT allows only one VALUES clause
 4b2d44031f MERGE post-commit review
 4923550c20 Tab completion for MERGE
 aa3faa3c7a WITH support in MERGE
 83454e3c2b New files for MERGE
 d204ef6377 MERGE SQL Command following SQL:2016

Author: Pavan Deolasee
Reviewed-by: Michael Paquier
2018-04-12 11:22:56 +01:00
Teodor Sigaev c9c875a28f Rename IndexInfo.ii_KeyAttrNumbers array
Rename ii_KeyAttrNumbers to ii_IndexAttrNumbers to prevent confusion with
ii_NumIndexAttrs/ii_NumIndexKeyAttrs.  ii_IndexAttrNumbers contains
all attributes, including "including" columns, not only key attributes.

Discussion: https://www.postgresql.org/message-id/13123421-1d52-d0e4-c95c-6d69011e0595%40sigaev.ru
2018-04-12 13:02:45 +03:00
Alvaro Herrera 15a8f8caad Fix IndexOnlyScan counter for heap fetches in parallel mode
The HeapFetches counter was using a simple value in IndexOnlyScanState,
which fails to propagate values from parallel workers; so the counts are
wrong when IndexOnlyScan runs in parallel.  Move it to Instrumentation,
like all the other counters.

While at it, change INSERT ON CONFLICT conflicting tuple counter to use
the new ntuples2 instead of nfiltered2, which is a blatant misuse.

Discussion: https://postgr.es/m/20180409215851.idwc75ct2bzi6tea@alvherre.pgsql
2018-04-10 15:56:15 -03:00
Alvaro Herrera 468abb8f7a Fix incorrect logic for choosing the next Parallel Append subplan
In 499be013de support for pruning unneeded Append subnodes was added.
The logic in that commit was not correctly checking if the next subplan
was in fact a valid subplan. This could cause parallel worker processes
to be given a subplan to work on which didn't require any work.

Per code review following an otherwise unexplained regression failure in
buildfarm member Pademelon.  (We haven't been able to reproduce the
failure, so this is a bit of a blind fix in terms of whether it'll
actually fix it; but it is a clear bug nonetheless).

In passing, also add a comment to explain what first_partial_plan means.

Author: David Rowley
Discussion: https://postgr.es/m/CAKJS1f_E5r05hHUVG3UmCQJ49DGKKHtN=SHybD44LdzBn+CJng@mail.gmail.com
2018-04-09 17:23:49 -03:00
Tom Lane 2cdf359fc4 Make reformat_dat_file.pl preserve all blank lines.
In its original form, reformat_dat_file.pl smashed consecutive blank
lines to a single blank line, which was helpful for mopping up excess
whitespace during the bootstrap data format conversion.  But going
forward, there seems little reason to do that; if developers want to
put in multiple blank lines, let 'em.  This makes it conform to the
documentation I (tgl) wrote, too.

In passing, clean up some sloppy markup choices in bki.sgml.

John Naylor

Discussion: https://postgr.es/m/28827.1523039259@sss.pgh.pa.us
2018-04-09 14:58:39 -04:00
Tom Lane af1a949109 Further cleanup of client dependencies on src/include/catalog headers.
In commit 9c0a0de4c, I'd failed to notice that catalog/catalog.h
should also be considered a frontend-unsafe header, because it includes
(and needs) the full form of pg_class.h, not to mention relcache.h.
However, various frontend code was depending on it to get
TABLESPACE_VERSION_DIRECTORY, so refactoring of some sort is called for.

The cleanest answer seems to be to move TABLESPACE_VERSION_DIRECTORY,
as well as the OIDCHARS symbol, to common/relpath.h.  Do that, and mop up
inclusions as necessary.  (I found that quite a few current users of
catalog/catalog.h don't seem to need it at all anymore, apparently as a
result of the refactorings that created common/relpath.[hc].  And
initdb.c needed it only as a route to pg_class_d.h.)

Discussion: https://postgr.es/m/6629.1523294509@sss.pgh.pa.us
2018-04-09 14:39:58 -04:00
Magnus Hagander f5543d47bc catversion bump for online-checksums revert
Lack thereof pointed out by Tom Lane.
2018-04-09 19:26:58 +02:00
Magnus Hagander a228cc13ae Revert "Allow on-line enabling and disabling of data checksums"
This reverts the backend sides of commit 1fde38beaa.
I have, at least for now, left the pg_verify_checksums tool in place, as
this tool can be very valuable without the rest of the patch as well,
and since it's a read-only tool that only runs when the cluster is down
it should be a lot safer.
2018-04-09 19:03:42 +02:00
Alvaro Herrera d7a95f06a1 Minor comment updates
Fix a couple of typos, and update a comment about why we set a BMS to
NULL.

Author: David Rowley
Discussion: http://postgr.es/m/CAKJS1f-tux=KdUz6ENJ9GHM_V2qgxysadYiOyQS9Ko9PTteVhQ@mail.gmail.com
2018-04-09 11:17:35 -03:00
Tom Lane 4f85f66469 Cosmetic cleanups in initial catalog data.
Write ',' and ';' for typdelim values instead of the obscurantist
ASCII octal equivalents.  Not sure why anybody ever thought the
latter were better; maybe it had something to do with lack of
a better quoting convention, twenty-plus years ago?

Reassign a couple of high-numbered OIDs that were left in during
yesterday's mad rush to commit stuff of uncertain internal
temperature.

The latter requires a catversion bump, though the former wouldn't
since the end-result catalog data is unchanged.
2018-04-08 15:55:49 -04:00
Tom Lane cca563f384 Reduce worst-case shell command line length during "make install".
Addition of the catalog/pg_foo_d.h headers seems to have pushed us over
the brink of the maximum command line length for some older platforms
during "make install" for our header files.  The main culprit here is
repetition of the target directory path, which could be long.
Rearrange so that we don't repeat that once per file, but only once
per subdirectory.

Per buildfarm.

Discussion: https://postgr.es/m/E1f5Dwm-0004n5-7O@gemulon.postgresql.org
2018-04-08 15:08:32 -04:00
Tom Lane cefa387153 Merge catalog/pg_foo_fn.h headers back into pg_foo.h headers.
Traditionally, include/catalog/pg_foo.h contains extern declarations
for functions in backend/catalog/pg_foo.c, in addition to its function
as the authoritative definition of the pg_foo catalog's rowtype.
In some cases, we'd been forced to split out those extern declarations
into separate pg_foo_fn.h headers so that the catalog definitions
could be #include'd by frontend code.  That problem is gone as of
commit 9c0a0de4c, so let's undo the splits to make things less
confusing.

Discussion: https://postgr.es/m/23690.1523031777@sss.pgh.pa.us
2018-04-08 14:35:29 -04:00
Tom Lane 372728b0d4 Replace our traditional initial-catalog-data format with a better design.
Historically, the initial catalog data to be installed during bootstrap
has been written in DATA() lines in the catalog header files.  This had
lots of disadvantages: the format was badly underdocumented, it was
very difficult to edit the data in any mechanized way, and due to the
lack of any abstraction the data was verbose, hard to read/understand,
and easy to get wrong.

Hence, move this data into separate ".dat" files and represent it in a way
that can easily be read and rewritten by Perl scripts.  The new format is
essentially "key => value" for each column; while it's a bit repetitive,
explicit labeling of each value makes the data far more readable and less
error-prone.  Provide a way to abbreviate entries by omitting field values
that match a specified default value for their column.  This allows removal
of a large amount of repetitive boilerplate and also lowers the barrier to
adding new columns.

Also teach genbki.pl how to translate symbolic OID references into
numeric OIDs for more cases than just "regproc"-like pg_proc references.
It can now do that for regprocedure-like references (thus solving the
problem that regproc is ambiguous for overloaded functions), operators,
types, opfamilies, opclasses, and access methods.  Use this to turn
nearly all OID cross-references in the initial data into symbolic form.
This represents a very large step forward in readability and error
resistance of the initial catalog data.  It should also reduce the
difficulty of renumbering OID assignments in uncommitted patches.

Also, solve the longstanding problem that frontend code that would like to
use OID macros and other information from the catalog headers often had
difficulty with backend-only code in the headers.  To do this, arrange for
all generated macros, plus such other declarations as we deem fit, to be
placed in "derived" header files that are safe for frontend inclusion.
(Once clients migrate to using these pg_*_d.h headers, it will be possible
to get rid of the pg_*_fn.h headers, which only exist to quarantine code
away from clients.  That is left for follow-on patches, however.)

The now-automatically-generated macros include the Anum_xxx and Natts_xxx
constants that we used to have to update by hand when adding or removing
catalog columns.

Replace the former manual method of generating OID macros for pg_type
entries with an automatic method, ensuring that all built-in types have
OID macros.  (But note that this patch does not change the way that
OID macros for pg_proc entries are built and used.  It's not clear that
making that match the other catalogs would be worth extra code churn.)

Add SGML documentation explaining what the new data format is and how to
work with it.

Despite being a very large change in the catalog headers, there is no
catversion bump here, because postgres.bki and related output files
haven't changed at all.

John Naylor, based on ideas from various people; review and minor
additional coding by me; previous review by Alvaro Herrera

Discussion: https://postgr.es/m/CAJVSVGWO48JbbwXkJz_yBFyGYW-M9YWxnPdxJBUosDC9ou_F0Q@mail.gmail.com
2018-04-08 13:17:27 -04:00
Andrew Gierth 49b0e300f7 Support index INCLUDE in the AM properties interface.
This rectifies an oversight in commit 8224de4f4, by adding a new
property 'can_include' for pg_indexam_has_property, and adjusting the
results of pg_index_column_has_property to give more appropriate
results for INCLUDEd columns.
2018-04-08 06:02:05 +01:00
Andres Freund d234602c28 Remove overzealous assertions in pg_atomic_flag code.
The atomics code asserts proper alignment in various places. That's
mainly because the alignment of 64bit integers is not sufficient for
atomic operations on all platforms. Some ABIs only have four byte
alignment, but don't have atomic behavior when crossing page
boundaries.

The flags code isn't affected by that, however, as the type's alignment
is always sufficient for atomic operations.  Nevertheless the code
asserted alignment requirements.  Before 8c3debbb it was only broken on
hppa; after it, the problem probably affects further platforms.

Thus remove the assertions for pg_atomic_flag operators.

Per buildfarm animal pademelon.

Discussion: https://postgr.es/m/7223.1523124425@sss.pgh.pa.us
Backpatch: 9.5-
2018-04-07 18:30:15 -07:00
Stephen Frost c37b3d08ca Allow group access on PGDATA
Allow the cluster to be optionally init'd with read access for the
group.

This means a relatively non-privileged user can perform a backup of the
cluster without requiring write privileges, which enhances security.

The mode of PGDATA is used to determine whether group permissions are
enabled for directory and file creates.  This method was chosen as it's
simple and works well for the various utilities that write into PGDATA.

Changing the mode of PGDATA manually will not automatically change the
mode of all the files contained therein.  If the user would like to
enable group access on an existing cluster then changing the mode of all
the existing files will be required.  Note that pg_upgrade will
automatically change the mode of all migrated files if the new cluster
is init'd with the -g option.

Tests are included for the backend and all the utilities which operate
on the PG data directory to ensure that the correct mode is set based on
the data directory permissions.

Author: David Steele <david@pgmasters.net>
Reviewed-By: Michael Paquier, with discussion amongst many others.
Discussion: https://postgr.es/m/ad346fe6-b23e-59f1-ecb7-0e08390ad629%40pgmasters.net
2018-04-07 17:45:39 -04:00
Stephen Frost da9b580d89 Refactor dir/file permissions
Consolidate directory and file create permissions for tools which work
with the PG data directory by adding a new module (common/file_perm.c)
that contains variables (pg_file_create_mode, pg_dir_create_mode) and
constants to initialize them (0600 for files and 0700 for directories).

Convert mkdir() calls in the backend to MakePGDirectory() if the
original call used default permissions (always the case for regular PG
directories).

Add tests to make sure permissions in PGDATA are set correctly by the
tools which modify the PG data directory.

Authors: David Steele <david@pgmasters.net>,
         Adam Brightwell <adam.brightwell@crunchydata.com>
Reviewed-By: Michael Paquier, with discussion amongst many others.
Discussion: https://postgr.es/m/ad346fe6-b23e-59f1-ecb7-0e08390ad629%40pgmasters.net
2018-04-07 17:45:39 -04:00
Alvaro Herrera 499be013de Support partition pruning at execution time
Existing partition pruning is only able to work at plan time, for query
quals that appear in the parsed query.  This is good but limiting, as
there can be parameters that appear later that can be usefully used to
further prune partitions.

This commit adds support for pruning subnodes of Append which cannot
possibly contain any matching tuples, during execution, by evaluating
Params to determine the minimum set of subnodes that can possibly match.
We support more than just simple Params in WHERE clauses. Support
additionally includes:

1. Parameterized Nested Loop Joins: The parameter from the outer side of the
   join can be used to determine the minimum set of inner side partitions to
   scan.

2. Initplans: Once an initplan has been executed we can then determine which
   partitions match the value from the initplan.

Partition pruning is performed in two ways.  When Params external to the plan
are found to match the partition key we attempt to prune away unneeded Append
subplans during the initialization of the executor.  This allows us to bypass
the initialization of non-matching subplans meaning they won't appear in the
EXPLAIN or EXPLAIN ANALYZE output.

For parameters whose values are only known during actual execution,
the pruning of these subplans must wait.  Subplans which are
eliminated during this stage of pruning are still visible in the EXPLAIN
output.  To determine whether pruning has actually taken place, the
EXPLAIN ANALYZE output must be viewed.  If a certain Append subplan was never
executed due to the elimination of the partition, the execution
timing area will state "(never executed)".  In the case of parameterized
nested loops, for example, the number of loops stated in
the EXPLAIN ANALYZE output for certain subplans may appear lower than for
others because those subplans were scanned fewer times.  This is because
the list of matching subnodes has to be re-evaluated whenever a
parameter which was found to match the partition key changes.

This commit required some additional infrastructure that permits the
building of a data structure which is able to perform the translation of
the matching partition IDs, as returned by get_matching_partitions, into
the list index of a subpaths list, as exist in node types such as
Append, MergeAppend and ModifyTable.  This allows us to translate a list
of clauses into a Bitmapset of all the subpath indexes which must be
included to satisfy the clause list.
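
As an illustrative sketch (table names invented), a value coming from an
initplan is only known at execution time, so the pruning happens then and the
skipped subplans should be reported as "(never executed)":

  CREATE TABLE measurement (logdate date, value int)
      PARTITION BY RANGE (logdate);
  CREATE TABLE measurement_y2017 PARTITION OF measurement
      FOR VALUES FROM ('2017-01-01') TO ('2018-01-01');
  CREATE TABLE measurement_y2018 PARTITION OF measurement
      FOR VALUES FROM ('2018-01-01') TO ('2019-01-01');
  INSERT INTO measurement VALUES ('2018-06-01', 1);

  EXPLAIN ANALYZE
  SELECT * FROM measurement
  WHERE logdate = (SELECT max(logdate) FROM measurement_y2018);
  -- the scan of measurement_y2017 should show "(never executed)"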

Author: David Rowley, based on an earlier effort by Beena Emerson
Reviewers: Amit Langote, Robert Haas, Amul Sul, Rajkumar Raghuwanshi,
Jesper Pedersen
Discussion: https://postgr.es/m/CAOG9ApE16ac-_VVZVvv0gePSgkg_BwYEV1NBqZFqDR2bBE0X0A@mail.gmail.com
2018-04-07 17:54:39 -03:00
Alvaro Herrera 5c0675215e Add bms_prev_member function
This works very much like the existing bms_last_member function, only it
traverses through the Bitmapset in the opposite direction from the most
significant bit down to the least significant bit.  A special prevbit value of
-1 may be used to have the function determine the most significant bit.  This
is useful for starting a loop.  When there are no members less than prevbit,
the function returns -2 to indicate there are no more members.

Author: David Rowley
Discussion: https://postgr.es/m/CAKJS1f-K=3d5MDASNYFJpUpc20xcBnAwNC1-AOeunhn0OtkWbQ@mail.gmail.com
2018-04-07 17:54:39 -03:00
Andres Freund f16241bef7 Raise error when affecting tuple moved into different partition.
When an update moves a row between partitions (supported since
2f17844104), our normal logic for following update chains in READ
COMMITTED mode doesn't work anymore.  Cross-partition updates are
modeled as a delete from the old partition and an insert into the new
one.  No ctid chain exists across partitions, and there's no
convenient space to introduce that link.

Not throwing an error in a partitioned context when one would have
been thrown without partitioning is obviously problematic.  This commit
introduces infrastructure to detect when a tuple has been moved, not
just plainly deleted.  That allows us to throw an error when encountering
a deletion that's actually a move while attempting to follow a
ctid chain.

The row deleted as part of a cross-partition update is marked by
pointing its t_ctid to an invalid block, instead of to itself as a normal
update would.  That was deemed to be the least invasive and most
future-proof way to represent the knowledge, given how few infomask
bits there are to be recycled (there are also some locking issues with
using infomask bits).

External code following ctid chains should be updated to check for
moved tuples. The most likely consequence of not doing so is a missed
error.

Author: Amul Sul, editorialized by me
Reviewed-By: Amit Kapila, Pavan Deolasee, Andres Freund, Robert Haas
Discussion: http://postgr.es/m/CAAJ_b95PkwojoYfz0bzXU8OokcTVGzN6vYGCNVUukeUDrnF3dw@mail.gmail.com
2018-04-07 13:24:27 -07:00
Teodor Sigaev 8224de4f42 Indexes with INCLUDE columns and their support in B-tree
This patch introduces INCLUDE clause to index definition.  This clause
specifies a list of columns which will be included as a non-key part in
the index.  The INCLUDE columns exist solely to allow more queries to
benefit from index-only scans.  Also, such columns don't need to have
appropriate operator classes.  Expressions are not supported as INCLUDE
columns since they cannot be used in index-only scans.

Index access methods supporting INCLUDE are indicated by amcaninclude flag
in IndexAmRoutine.  For now, only B-tree indexes support INCLUDE clause.

In B-tree indexes INCLUDE columns are truncated from pivot index tuples
(tuples located in non-leaf pages and high keys).  Therefore, B-tree indexes
now might have a variable number of attributes.  This patch also provides a
generic facility to support that: pivot tuples contain the number of their
attributes in t_tid.ip_posid.  The free 13th bit of t_info is used to indicate
that.  This facility will simplify further support of index suffix truncation.
The above changes are backward-compatible; pg_upgrade doesn't need special
handling of B-tree indexes for that.
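
A brief usage sketch (table and index names are illustrative):

  CREATE TABLE orders (id int, customer int, total numeric);
  CREATE UNIQUE INDEX orders_id_total_idx ON orders (id) INCLUDE (total);

  -- "total" is carried in the index as payload only, so this query can be
  -- answered by an index-only scan without visiting the heap
  SELECT id, total FROM orders WHERE id = 42;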

Bump catalog version

Author: Anastasia Lubennikova with contribution by Alexander Korotkov and me
Reviewed by: Peter Geoghegan, Tomas Vondra, Antonin Houska, Jeff Janes,
			 David Rowley, Alexander Korotkov
Discussion: https://www.postgresql.org/message-id/flat/56168952.4010101@postgrespro.ru
2018-04-07 23:00:39 +03:00
Teodor Sigaev 1c1791e000 Add json(b)_to_tsvector function
Jsonb has a complex nature, so there isn't a best-for-everything way to convert
it to a tsvector for full text search.  The current to_tsvector(json(b)) converts
only string values, but it's possible to index keys, numeric and even boolean
values.  To solve that, json(b)_to_tsvector has a second required argument
containing a list of the desired types of json fields.  The second argument is
a jsonb scalar or array right now, with the possibility of adding new options in
the future.
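
For instance (the document here is made up; the filter is a jsonb array of the
wanted field types):

  SELECT jsonb_to_tsvector('english',
                           '{"title": "Fat Cats", "views": 123, "draft": true}',
                           '["string", "numeric"]');
  -- indexes the string and numeric values but skips the boolean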

Bump catalog version

Author: Dmitry Dolgov with some editorization by me
Reviewed by: Teodor Sigaev
Discussion: https://www.postgresql.org/message-id/CA+q6zcXJQbS1b4kJ_HeAOoOc=unfnOrUEL=KGgE32QKDww7d8g@mail.gmail.com
2018-04-07 20:58:03 +03:00
Peter Eisentraut 039eb6e92f Logical replication support for TRUNCATE
Update the built-in logical replication system to make use of the
previously added logical decoding for TRUNCATE support.  Add the
required truncate callback to pgoutput and a new logical replication
protocol message.

Publications get a new attribute to determine whether to replicate
truncate actions.  When updating a publication via pg_dump from an older
version, this is not set, thus preserving the previous behavior.
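
On the publisher side this is controlled through the publication's "publish"
attribute, roughly like this (publication and table names invented):

  CREATE PUBLICATION mypub FOR TABLE orders
      WITH (publish = 'insert, update, delete, truncate');

  -- or turn it on later for an existing publication
  ALTER PUBLICATION mypub SET (publish = 'insert, update, delete, truncate');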

Author: Simon Riggs <simon@2ndquadrant.com>
Author: Marco Nenciarini <marco.nenciarini@2ndquadrant.it>
Author: Peter Eisentraut <peter.eisentraut@2ndquadrant.com>
Reviewed-by: Petr Jelinek <petr.jelinek@2ndquadrant.com>
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Alvaro Herrera <alvherre@alvh.no-ip.org>
2018-04-07 11:34:11 -04:00
Peter Eisentraut 5dfd1e5a66 Logical decoding of TRUNCATE
Add a new WAL record type for TRUNCATE, which is only used when
wal_level >= logical.  (For physical replication, TRUNCATE is already
replicated via SMGR records.)  Add new callback for logical decoding
output plugins to receive TRUNCATE actions.

Author: Simon Riggs <simon@2ndquadrant.com>
Author: Marco Nenciarini <marco.nenciarini@2ndquadrant.it>
Author: Peter Eisentraut <peter.eisentraut@2ndquadrant.com>
Reviewed-by: Petr Jelinek <petr.jelinek@2ndquadrant.com>
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Alvaro Herrera <alvherre@alvh.no-ip.org>
2018-04-07 11:34:10 -04:00
Andres Freund 8c3debbbf6 Fix and improve pg_atomic_flag fallback implementation.
The atomics fallback implementation for pg_atomic_flag was broken,
returning the inverted value from pg_atomic_test_set_flag().  This was
unnoticed because a) atomic flags were unused until recently, and b) the
test code wasn't run when the fallback implementation was in
use (because it didn't allow testing some edge cases).

Fix the bug, and improve the fallback so it has the same behaviour as
the non-fallback implementation in the problematic edge cases. That
breaks ABI compatibility in the back branches when fallbacks are in
use, but given they were broken until now...

Author: Andres Freund
Reported-by: Daniel Gustafsson
Discussion:
    https://postgr.es/m/FB948276-7B32-4B77-83E6-D00167F8EEB4@yesql.se
    https://postgr.es/m/20180406233854.uni2h3mbnveczl32@alap3.anarazel.de
Backpatch: 9.5-, where the atomics abstraction was introduced.
2018-04-06 19:55:32 -07:00
Robert Haas 3d956d9562 Allow insert and update tuple routing and COPY for foreign tables.
Also enable this for postgres_fdw.
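
A hedged sketch of the user-visible effect (the postgres_fdw server
"remote_srv", its user mapping, and a matching remote table are assumed to
exist already; all names are invented):

  CREATE TABLE events (region text, payload text) PARTITION BY LIST (region);
  CREATE TABLE events_local PARTITION OF events FOR VALUES IN ('local');
  CREATE FOREIGN TABLE events_remote PARTITION OF events
      FOR VALUES IN ('remote') SERVER remote_srv;

  -- both of these can now route rows into the foreign partition
  INSERT INTO events VALUES ('remote', 'hello');
  COPY events FROM '/tmp/events.csv' WITH (FORMAT csv);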

Etsuro Fujita, based on an earlier patch by Amit Langote. The larger
patch series of which this is a part has been reviewed by Amit
Langote, David Fetter, Maksim Milyutin, Álvaro Herrera, Stephen Frost,
and me.  Minor documentation changes to the final version by me.

Discussion: http://postgr.es/m/29906a26-da12-8c86-4fb9-d8f88442f2b9@lab.ntt.co.jp
2018-04-06 19:22:03 -04:00
Tom Lane cb1ff1e5af Remove some unnecessary quote marks from catalog DATA lines.
This has no functional impact whatsoever.  However, it causes
these unnecessary quote marks to disappear from the generated
postgres.bki file, making it easier to verify that the upcoming
bootstrap data conversion patch doesn't change the generated file.
2018-04-06 18:58:38 -04:00
Alvaro Herrera 9fdb675fc5 Faster partition pruning
Add a new module backend/partitioning/partprune.c, implementing a more
sophisticated algorithm for partition pruning.  The new module uses each
partition's "boundinfo" for pruning instead of constraint exclusion,
based on an idea proposed by Robert Haas of a "pruning program": a list
of steps generated from the query quals which are run iteratively to
obtain a list of partitions that must be scanned in order to satisfy
those quals.

At present, this targets planner-time partition pruning, but there exist
further patches to apply partition pruning at execution time as well.

This commit also moves some definitions from include/catalog/partition.h
to a new file include/partitioning/partbounds.h, in an attempt to
rationalize partitioning related code.

Authors: Amit Langote, David Rowley, Dilip Kumar
Reviewers: Robert Haas, Kyotaro Horiguchi, Ashutosh Bapat, Jesper Pedersen.
Discussion: https://postgr.es/m/098b9c71-1915-1a2a-8d52-1a7a50ce79e8@lab.ntt.co.jp
2018-04-06 16:44:05 -03:00
Stephen Frost 11523e860f Support new default roles with adminpack
This provides a newer version of adminpack which works with the newly
added default roles to support GRANT'ing non-superusers access to
read and write files, along with related functions (unlinking files,
getting file length, renaming/removing files, scanning the log file
directory) which are supported through adminpack.

Note that new versions of the functions are required because an
environment might have an updated version of the library but still have
the old adminpack 1.0 catalog definitions (where EXECUTE is GRANT'd to
PUBLIC for the functions).

This patch also removes the long-deprecated alternative names for
functions that adminpack used to include and which are now included in
the backend, in adminpack v1.1.  Applications using the deprecated names
should be updated to use the backend functions instead.  Existing
installations which continue to use adminpack v1.0 should continue to
function until/unless adminpack is upgraded.

Reviewed-By: Michael Paquier
Discussion: https://postgr.es/m/20171231191939.GR2416%40tamriel.snowman.net
2018-04-06 14:47:10 -04:00
Stephen Frost 0fdc8495bf Add default roles for file/program access
This patch adds new default roles named 'pg_read_server_files',
'pg_write_server_files', 'pg_execute_server_program' which
allow an administrator to GRANT to a non-superuser role the ability to
access server-side files or run programs through PostgreSQL (as the user
the database is running as).  Having one of these roles allows a
non-superuser to use server-side COPY to read, write, or with a program,
and to use file_fdw (if installed by a superuser and GRANT'd USAGE on
it) to read from files or run a program.

The existing misc file functions are also changed to allow a user with
the 'pg_read_server_files' default role to read any files on the
filesystem, matching the privileges given to that role through COPY and
file_fdw from above.
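
A short sketch of the intended usage (role, table, and file names invented):

  -- as a superuser, hand the new capabilities to ordinary roles
  GRANT pg_read_server_files TO etl_reader;
  GRANT pg_execute_server_program TO etl_runner;

  -- etl_reader can now run a server-side COPY from a file
  COPY staging FROM '/srv/data/load.csv' WITH (FORMAT csv);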

Reviewed-By: Michael Paquier
Discussion: https://postgr.es/m/20171231191939.GR2416%40tamriel.snowman.net
2018-04-06 14:47:10 -04:00
Peter Eisentraut bbca77623f Rename MemoryContextCopySetIdentifier() for clarity
MemoryContextCopySetIdentifier -> MemoryContextCopyAndSetIdentifier

Discussion: https://www.postgresql.org/message-id/6421.1522194949@sss.pgh.pa.us
2018-04-06 12:37:54 -04:00
Peter Eisentraut bcf79b5bb6 Split the SetSubscriptionRelState function into two
We don't actually need the insert-or-update logic, so it's clearer to
have separate functions for the inserting and updating.

Author: Petr Jelinek <petr.jelinek@2ndquadrant.com>
Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
2018-04-06 10:00:26 -04:00
Simon Riggs f1464c5380 Improve parse representation for MERGE
Separation of parser data structures from executor, as
requested by Tom Lane. Further improvements possible.

While there, implement error for multiple VALUES clauses via parser
to allow line number of error, as requested by Andres Freund.

Author: Pavan Deolasee

Discussion: https://www.postgresql.org/message-id/CABOikdPpqjectFchg0FyTOpsGXyPoqwgC==OLKWuxgBOsrDDZw@mail.gmail.com
2018-04-06 09:38:59 +01:00
Magnus Hagander 1fde38beaa Allow on-line enabling and disabling of data checksums
This makes it possible to turn checksums on in a live cluster, without
the previous need for dump/reload or logical replication (and to turn it
off).

Enabling checksums starts a background process in the form of a
launcher/worker combination that goes through the entire database and
recalculates checksums on each and every page.  Only when all pages have
been checksummed are they fully enabled in the cluster.  Any failure of
the process will revert to checksums off and the process has to be
restarted.

This adds a new WAL record that indicates the state of checksums, so
the process works across replicated clusters.

Authors: Magnus Hagander and Daniel Gustafsson
Review: Tomas Vondra, Michael Banck, Heikki Linnakangas, Andrey Borodin
2018-04-05 22:04:48 +02:00
Simon Riggs 530e69e59b Allow cpluspluscheck to pass by renaming variable
Use of a C++ keyword as a function name caused problems

Reported-by: Álvaro Herrera
2018-04-05 20:06:02 +01:00
Magnus Hagander eed1ce72e1 Allow background workers to bypass datallowconn
This adds a "flags" argument to BackgroundWorkerInitializeConnection()
and BackgroundWorkerInitializeConnectionByOid().  For now only one flag,
BGWORKER_BYPASS_ALLOWCONN, is defined, which allows the worker to ignore
datallowconn.
2018-04-05 19:02:45 +02:00
Teodor Sigaev 1664ae1978 Add websearch_to_tsquery
This is an error-tolerant conversion function with web-like syntax for search
queries; it simplifies constraining searches with an interface close to what
users are accustomed to from web search engines.
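
For example (search strings chosen arbitrarily):

  SELECT websearch_to_tsquery('english', '"sad cat" or "fat rat"');
  -- roughly: 'sad' <-> 'cat' | 'fat' <-> 'rat'

  SELECT websearch_to_tsquery('english', 'signal -segfault');
  -- roughly: 'signal' & !'segfault'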

Bump catalog version

Authors: Victor Drobny, Dmitry Ivanov with editorization by me
Reviewed by: Aleksander Alekseev, Tomas Vondra, Thomas Munro, Aleksandr Parfenov
Discussion: https://www.postgresql.org/message-id/flat/fe931111ff7e9ad79196486ada79e268@postgrespro.ru
2018-04-05 19:55:11 +03:00
Alvaro Herrera fbc27330b8 Add missing include
Newly added prototype broke cpluspluscheck.

Minor buglet in commit 8694cc96b5.
2018-04-05 12:20:17 -03:00
Simon Riggs 4b2d44031f MERGE post-commit review
Review comments from Andres Freund

* Consolidate code into AfterTriggerGetTransitionTable()
* Rename nodeMerge.c to execMerge.c
* Rename nodeMerge.h to execMerge.h
* Move MERGE handling in ExecInitModifyTable()
  into a execMerge.c ExecInitMerge()
* Move mt_merge_subcommands flags into execMerge.h
* Rename opt_and_condition to opt_merge_when_and_condition
* Wordsmith various comments

Author: Pavan Deolasee
Reviewer: Simon Riggs
2018-04-05 09:54:07 +01:00
Tom Lane 1383e2a1a9 Improve FSM management for BRIN indexes.
BRIN indexes like to propagate additions of free space into the upper pages
of their free space maps as soon as the new space is known, even when it's
just on one individual index page.  Previously this required calling
FreeSpaceMapVacuum, which is quite an expensive thing if the map is large.
Use the FreeSpaceMapVacuumRange function recently added by commit c79f6df75
to reduce the amount of work done for this purpose.

Fix a couple of places that neglected to do the upper-page vacuuming at all
after recording new free space.  If the policy is to be that BRIN should do
that, it should do it everywhere.

Do RecordPageWithFreeSpace unconditionally in brin_page_cleanup, and do
FreeSpaceMapVacuum unconditionally in brin_vacuum_scan.  Because of the
FSM's imprecise storage of free space, the old complications here seldom
bought anything, they just slowed things down.  This approach also
provides a predictable path for FSM corruption to be repaired.

Remove premature RecordPageWithFreeSpace call in brin_getinsertbuffer
where it's about to return an extended page to the caller.  The caller
should do that, instead, after it's inserted its new tuple.  Fix the
one caller that forgot to do so.

Simplify logic in brin_doupdate's same-page-update case by postponing
brin_initialize_empty_new_buffer to after the critical section; I see
little point in doing it before.

Avoid repeat calls of RelationGetNumberOfBlocks in brin_vacuum_scan.
Avoid duplicate BufferGetBlockNumber and BufferGetPage calls in
a couple of places where we already had the right values.

Move a BRIN_elog debug logging call out of a critical section; that's
pretty unsafe and I don't think it buys us anything to not wait till
after the critical section.

Move the "*extended = false" step in brin_getinsertbuffer into the
routine's main loop.  There's no actual bug there, since the loop can't
iterate with *extended still true, but it doesn't seem very future-proof
as coded; and it's certainly not documented as a loop invariant.

This is all from follow-on investigation inspired by commit c79f6df75.

Discussion: https://postgr.es/m/5801.1522429460@sss.pgh.pa.us
2018-04-04 14:26:04 -04:00
Alvaro Herrera 3de241dba8 Foreign keys on partitioned tables
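
In short, a partitioned table can now declare foreign keys that reference
another table, along these lines (names invented):

  CREATE TABLE customers (id int PRIMARY KEY);
  CREATE TABLE orders (
      order_id    int,
      customer_id int REFERENCES customers (id),
      order_date  date
  ) PARTITION BY RANGE (order_date);
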
Author: Álvaro Herrera
Discussion: https://postgr.es/m/20171231194359.cvojcour423ulha4@alvherre.pgsql
Reviewed-by: Peter Eisentraut
2018-04-04 14:02:49 -03:00
Teodor Sigaev 857f9c36cd Skip full index scan during cleanup of B-tree indexes when possible
Vacuuming an index consists of two stages: multiple (zero or more) ambulkdelete
calls and one amvacuumcleanup call.  When the workload on a particular table
is append-only, autovacuum isn't intended to touch that table.  However, a
user may run vacuum manually in order to fill the visibility map and get the
benefit of index-only scans.  Then ambulkdelete wouldn't be called for the
indexes of such a table (because no heap tuples were deleted); only
amvacuumcleanup would be called.  In this case, amvacuumcleanup would perform a
full index scan for two objectives: putting recyclable pages into the free
space map and updating index statistics.

This patch allows btvacuumcleanup to skip the full index scan when two
conditions are satisfied: no pages are going to be put into the free space map,
and the index statistics aren't stale.  To check the first condition, we store
the oldest btpo_xact in the meta-page; when it precedes RecentGlobalXmin, there
are some recyclable pages.  To check the second condition, we store the number
of heap tuples observed during the previous full index scan by cleanup.  If the
fraction of newly inserted tuples is less than
vacuum_cleanup_index_scale_factor, the statistics aren't considered stale.
vacuum_cleanup_index_scale_factor can be set both as a reloption and as a GUC
(default).

This patch bumps the B-tree meta-page version.  The meta-page is upgraded
"on the fly": during VACUUM the meta-page is rewritten with the new version.  No
special handling in pg_upgrade is required.
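
The threshold can be tuned either globally or per index, roughly like this
(index and table names invented):

  -- globally, for all B-tree indexes
  SET vacuum_cleanup_index_scale_factor = 0.2;

  -- or as a storage parameter on an individual index
  CREATE INDEX orders_id_idx ON orders (id)
      WITH (vacuum_cleanup_index_scale_factor = 0.2);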

Author: Masahiko Sawada, Alexander Korotkov
Review by: Peter Geoghegan, Kyotaro Horiguchi, Alexander Korotkov, Yura Sokolov
Discussion: https://www.postgresql.org/message-id/flat/CAD21AoAX+d2oD_nrd9O2YkpzHaFr=uQeGr9s1rKC3O4ENc568g@mail.gmail.com
2018-04-04 19:29:00 +03:00
Heikki Linnakangas f044d71e33 Use ARMv8 CRC instructions where available.
ARMv8 introduced special CPU instructions for calculating CRC-32C. Use
them, when available, for speed.

Like with the similar Intel CRC instructions, several factors affect
whether the instructions can be used. The compiler intrinsics for them must
be supported by the compiler, and the instructions must be supported by the
target architecture. If the compilation target architecture does not
support the instructions, but adding "-march=armv8-a+crc" makes them
available, then we compile the code with a runtime check to determine if
the host we're running on supports them or not.

For the runtime check, use glibc getauxval() function. Unfortunately,
that's not very portable, but I couldn't find any more portable way to do
it. If getauxval() is not available, the CRC instructions will still be
used if the target architecture supports them without any additional
compiler flags, but the runtime check will not be available.

Original patch by Yuqi Gu, heavily modified by me. Reviewed by Andres
Freund, Thomas Munro.

Discussion: https://www.postgresql.org/message-id/HE1PR0801MB1323D171938EABC04FFE7FA9E3110%40HE1PR0801MB1323.eurprd08.prod.outlook.com
2018-04-04 12:22:45 +03:00
Heikki Linnakangas 638a199fa9 Also fix the descriptions in pg_config.h.win32.
I missed pg_config.h.win32 in the previous commit that fixed these in
pg_config.h.in.
2018-04-04 11:33:39 +03:00
Heikki Linnakangas 8989f52b1b Fix incorrect description of USE_SLICING_BY_8_CRC32C.
And a typo in the description of USE_SSE42_CRC32C_WITH_RUNTIME_CHECK,
spotted by Daniel Gustafsson.
2018-04-04 11:20:53 +03:00
Teodor Sigaev 710d90da1f Add prefix operator for TEXT type.
The prefix operator, along with SP-GiST indexes, can be used as an alternative
to LIKE 'word%' queries, and it doesn't have the string/prefix length
limitation that B-tree has.
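
For example, using the new ^@ ("starts with") operator (table and index names
invented; the default SP-GiST opclass for text is assumed to support the
operator):

  SELECT 'alphabet' ^@ 'alph';                  -- true

  CREATE TABLE words (word text);
  CREATE INDEX words_word_spgist ON words USING spgist (word);
  SELECT * FROM words WHERE word ^@ 'alph';     -- can use the SP-GiST index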

Bump catalog version

Author: Ildus Kurbangaliev with some editorization by me
Review by: Arthur Zakirov, Alexander Korotkov, and me
Discussion: https://www.postgresql.org/message-id/flat/20180202180327.222b04b3@wp.localdomain
2018-04-03 19:46:45 +03:00
Simon Riggs aa3faa3c7a WITH support in MERGE
Author: Peter Geoghegan
Recursive support removed, no tests
Docs added by me
2018-04-03 12:13:59 +01:00
Simon Riggs 83454e3c2b New files for MERGE 2018-04-03 10:22:21 +01:00
Simon Riggs d204ef6377 MERGE SQL Command following SQL:2016
MERGE performs actions that modify rows in the target table
using a source table or query. MERGE provides a single SQL
statement that can conditionally INSERT/UPDATE/DELETE rows,
a task that would otherwise require multiple PL statements.
e.g.

MERGE INTO target AS t
USING source AS s
ON t.tid = s.sid
WHEN MATCHED AND t.balance > s.delta THEN
  UPDATE SET balance = t.balance - s.delta
WHEN MATCHED THEN
  DELETE
WHEN NOT MATCHED AND s.delta > 0 THEN
  INSERT VALUES (s.sid, s.delta)
WHEN NOT MATCHED THEN
  DO NOTHING;

MERGE works with regular and partitioned tables, including
column and row security enforcement, as well as support for
row, statement and transition triggers.

MERGE is optimized for OLTP and is parameterizable, though
also useful for large scale ETL/ELT. MERGE is not intended
to be used in preference to existing single SQL commands
for INSERT, UPDATE or DELETE since there is some overhead.
MERGE can be used statically from PL/pgSQL.

MERGE does not yet support inheritance, write rules,
RETURNING clauses, updatable views or foreign tables.
MERGE follows SQL Standard per the most recent SQL:2016.

Includes full tests and documentation, including full
isolation tests to demonstrate the concurrent behavior.

This version written from scratch in 2017 by Simon Riggs,
using docs and tests originally written in 2009. Later work
from Pavan Deolasee has been both complex and deep, leaving
the lead author credit now in his hands.
Extensive discussion of concurrency from Peter Geoghegan,
with thanks for the time and effort contributed.

Various issues reported via sqlsmith by Andreas Seltenreich

Authors: Pavan Deolasee, Simon Riggs
Reviewer: Peter Geoghegan, Amit Langote, Tomas Vondra, Simon Riggs

Discussion:
https://postgr.es/m/CANP8+jKitBSrB7oTgT9CY2i1ObfOt36z0XMraQc+Xrz8QB0nXA@mail.gmail.com
https://postgr.es/m/CAH2-WzkJdBuxj9PO=2QaO9-3h3xGbQPZ34kJH=HukRekwM-GZg@mail.gmail.com
2018-04-03 09:28:16 +01:00
Simon Riggs aa5877bb26 Revert "MERGE SQL Command following SQL:2016"
This reverts commit e6597dc353.
2018-04-02 21:36:38 +01:00
Simon Riggs 7cf8a5c302 Revert "Modified files for MERGE"
This reverts commit 354f13855e.
2018-04-02 21:34:15 +01:00
Simon Riggs 354f13855e Modified files for MERGE 2018-04-02 21:12:47 +01:00
Simon Riggs e6597dc353 MERGE SQL Command following SQL:2016
MERGE performs actions that modify rows in the target table
using a source table or query. MERGE provides a single SQL
statement that can conditionally INSERT/UPDATE/DELETE rows,
a task that would otherwise require multiple PL statements.
e.g.

MERGE INTO target AS t
USING source AS s
ON t.tid = s.sid
WHEN MATCHED AND t.balance > s.delta THEN
  UPDATE SET balance = t.balance - s.delta
WHEN MATCHED THEN
  DELETE
WHEN NOT MATCHED AND s.delta > 0 THEN
  INSERT VALUES (s.sid, s.delta)
WHEN NOT MATCHED THEN
  DO NOTHING;

MERGE works with regular and partitioned tables, including
column and row security enforcement, as well as support for
row, statement and transition triggers.

MERGE is optimized for OLTP and is parameterizable, though
also useful for large scale ETL/ELT. MERGE is not intended
to be used in preference to existing single SQL commands
for INSERT, UPDATE or DELETE since there is some overhead.
MERGE can be used statically from PL/pgSQL.

MERGE does not yet support inheritance, write rules,
RETURNING clauses, updatable views or foreign tables.
MERGE follows SQL Standard per the most recent SQL:2016.

Includes full tests and documentation, including full
isolation tests to demonstrate the concurrent behavior.

This version written from scratch in 2017 by Simon Riggs,
using docs and tests originally written in 2009. Later work
from Pavan Deolasee has been both complex and deep, leaving
the lead author credit now in his hands.
Extensive discussion of concurrency from Peter Geoghegan,
with thanks for the time and effort contributed.

Various issues reported via sqlsmith by Andreas Seltenreich

Authors: Pavan Deolasee, Simon Riggs
Reviewers: Peter Geoghegan, Amit Langote, Tomas Vondra, Simon Riggs

Discussion:
https://postgr.es/m/CANP8+jKitBSrB7oTgT9CY2i1ObfOt36z0XMraQc+Xrz8QB0nXA@mail.gmail.com
https://postgr.es/m/CAH2-WzkJdBuxj9PO=2QaO9-3h3xGbQPZ34kJH=HukRekwM-GZg@mail.gmail.com
2018-04-02 21:04:35 +01:00
Peter Eisentraut 2764d5dcfa Make be-secure-common.c more consistent for future SSL implementations
Recent commit 8a3d9425 introduced be-secure-common.c, which is aimed
at collecting backend-side APIs that can be used by any SSL
implementation.  The purpose is similar to that of fe-secure-common.c
for the frontend-side APIs.

However, that commit forgot to move check_ssl_key_file_permissions()
as well, which causes a double dependency between be-secure.c and
be-secure-openssl.c.

Refactor the code in a more logical way.  This also brings to light an
API usable by future SSL implementations for checking permissions on SSL
key files.

Author: Michael Paquier <michael@paquier.xyz>
2018-04-02 11:37:40 -04:00
Robert Haas 7e0d64c7a5 postgres_fdw: Push down partition-wise aggregation.
Since commit 7012b132d0, postgres_fdw
has been able to push down the toplevel aggregation operation to the
remote server.  Commit e2f1eb0ee3 made
it possible to break down the toplevel aggregation into one
aggregate per partition.  This commit lets postgres_fdw push down
aggregation in that case just as it does at the top level.

In order to make this work, this commit adds an additional argument
to the GetForeignUpperPaths FDW API.  A matching argument is added
to the signature for create_upper_paths_hook.  Third-party code using
either of these will need to be updated.

Also adjust create_foreignscan_plan() so that it picks up the correct
set of relids in this case.

Jeevan Chalke, reviewed by Ashutosh Bapat and me, with some
adjustments by me.  The larger patch series of which this patch is a
part was also reviewed and tested by Antonin Houska, Rajkumar
Raghuwanshi, David Rowley, Dilip Kumar, Konstantin Knizhnik, Pascal
Legrand, and Rafia Sabih.

Discussion: http://postgr.es/m/CAM2+6=V64_xhstVHie0Rz=KPEQnLJMZt_e314P0jaT_oJ9MR8A@mail.gmail.com
Discussion: http://postgr.es/m/CAM2+6=XPWujjmj5zUaBTGDoB38CemwcPmjkRy0qOcsQj_V+2sQ@mail.gmail.com
2018-04-02 10:51:50 -04:00
Tom Lane 0b11a674fb Fix a boatload of typos in C comments.
Justin Pryzby

Discussion: https://postgr.es/m/20180331105640.GK28454@telsasoft.com
2018-04-01 15:01:28 -04:00
Andres Freund 51bc271790 Add Bloom filter implementation.
A Bloom filter is a space-efficient, probabilistic data structure that
can be used to test set membership.  Callers will sometimes incur false
positives, but never false negatives.  The rate of false positives is a
function of the total number of elements and the amount of memory
available for the Bloom filter.

Two classic applications of Bloom filters are cache filtering, and data
synchronization testing.  Any user of Bloom filters must accept the
possibility of false positives as a cost worth paying for the benefit in
space efficiency.

This commit adds a test harness extension module, test_bloomfilter.  It
can be used to get a sense of how the Bloom filter implementation
performs under varying conditions.

This is infrastructure for the upcoming "heapallindexed" amcheck patch,
which verifies the consistency of a heap relation against one of its
indexes.

Author: Peter Geoghegan
Reviewed-By: Andrey Borodin, Michael Paquier, Thomas Munro, Andres Freund
Discussion: https://postgr.es/m/CAH2-Wzm5VmG7cu1N-H=nnS57wZThoSDQU+F5dewx3o84M+jY=g@mail.gmail.com
2018-03-31 17:49:41 -07:00
Tom Lane b0c90c85fc Portability fix for commit 9a895462d.
So far as I can find, NI_MAXHOST isn't actually required anywhere by
POSIX.  Nonetheless, commit 9a895462d supposed that it could rely on
having that symbol without any ceremony at all.  We do have a hack
for providing it if the platform doesn't, in getaddrinfo.h, so fix
the problem by #including that file.  Per buildfarm.
2018-03-30 20:52:13 -04:00
Andres Freund 3e256e5506 Add SKIP_LOCKED option to RangeVarGetRelidExtended().
This will be used for VACUUM (SKIP LOCKED).

Author: Nathan Bossart
Reviewed-By: Michael Paquier and Andres Freund
Discussion: https://postgr.es/m/20180306005349.b65whmvj7z6hbe2y@alap3.anarazel.de
2018-03-30 17:05:16 -07:00
Andres Freund d87510a524 Combine options for RangeVarGetRelidExtended() into a flags argument.
A followup patch will add a SKIP_LOCKED option. To avoid introducing
ever more arguments, breaking existing callers each time, introduce a
flags argument. This'll no doubt break a few external users...

Also change the MISSING_OK behaviour so a DEBUG1 debug message is
emitted when a relation is not found.

Author: Nathan Bossart
Reviewed-By: Michael Paquier and Andres Freund
Discussion: https://postgr.es/m/20180306005349.b65whmvj7z6hbe2y@alap3.anarazel.de
2018-03-30 17:05:16 -07:00
Fujii Masao 9a895462d9 Enhance pg_stat_wal_receiver view to display host and port of sender server.
Previously there was no way on the standby side to find out the host and port
of the sender server that the walreceiver was currently connected to when
multiple hosts and ports were specified in primary_conninfo. For that purpose,
this patch adds sender_host and sender_port columns into pg_stat_wal_receiver
view. They report the host and port that the active replication connection
currently uses.
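
For illustration, on a standby one could now run a query along these
lines (column list abbreviated; sender_host and sender_port are the
columns added here):

SELECT status, sender_host, sender_port
FROM pg_stat_wal_receiver;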

Bump catalog version.

Author: Haribabu Kommi
Reviewed-by: Michael Paquier and me

Discussion: https://postgr.es/m/CAJrrPGcV_aq8=cdqkFhVDJKEnDQ70yRTTdY9RODzMnXNrCz2Ow@mail.gmail.com
2018-03-31 07:51:22 +09:00
Tom Lane 11002f8afa Fix bogus provolatile/proparallel markings on a few built-in functions.
Richard Yen reported that pg_upgrade failed if the target cluster had
force_parallel_mode = on, because binary_upgrade_create_empty_extension()
is marked parallel restricted, allowing it to be executed in parallel
mode, which complains because it tries to acquire an XID.

In general, no function that might try to modify database data should
be considered parallel safe or restricted, since execution of it might
force XID acquisition.  We found several other examples of this mistake.

Furthermore, functions that execute user-supplied SQL queries or query
fragments, or pull data from user-supplied cursors, had better be marked
both volatile and parallel unsafe, because we don't know what the supplied
query or cursor might try to do.  There were several tsquery and XML
functions that had the wrong proparallel marking for this, and some of
them were even mislabeled as to volatility.

All these bugs are old, dating back to 9.6 for the proparallel mistakes
and much further for the provolatile mistakes.  We can't force a
catversion bump in the back branches, but we can at least ensure that
installations initdb'd in future have the right values.

Thomas Munro and Tom Lane

Discussion: https://postgr.es/m/CAEepm=2sNDScSLTfyMYu32Q=ob98ZGW-vM_2oLxinzSABGQ6VA@mail.gmail.com
2018-03-30 18:14:51 -04:00
Teodor Sigaev 43d1ed60fd Predicate locking in GIN index
Predicate locks are used on a per-page basis only if fastupdate = off; in
the opposite case a predicate lock on the pending list would effectively lock
the whole index, so to reduce locking overhead we just lock the relation.
Entry and posting trees are essentially B-trees, so locks are acquired on
leaf pages only.
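
A rough sketch of the user-visible effect, assuming a hypothetical
"docs" table with a text[] column "tags":

CREATE INDEX docs_tags_idx ON docs USING gin (tags) WITH (fastupdate = off);

BEGIN ISOLATION LEVEL SERIALIZABLE;
-- with fastupdate = off, predicate locks are taken on the visited
-- GIN leaf pages rather than on the whole relation
SELECT count(*) FROM docs WHERE tags @> ARRAY['postgres'];
COMMIT;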

Author: Shubham Barai with some editing by me and Dmitry Ivanov
Review by: Alexander Korotkov, Dmitry Ivanov, Fedor Sigaev
Discussion: https://www.postgresql.org/message-id/flat/CALxAEPt5sWW+EwTaKUGFL5_XFcZ0MuGBcyJ70oqbWqr42YKR8Q@mail.gmail.com
2018-03-30 14:23:17 +03:00
Robert Haas c1de1a3a8b Remove 'target' from GroupPathExtraData.
It's not needed.

Jeevan Chalke

Discussion: http://postgr.es/m/CAM2+6=XPWujjmj5zUaBTGDoB38CemwcPmjkRy0qOcsQj_V+2sQ@mail.gmail.com
2018-03-29 16:17:18 -04:00
Tom Lane a063baaced Remove UpdateFreeSpaceMap(), use FreeSpaceMapVacuumRange() instead.
FreeSpaceMapVacuumRange has the same effect, is more efficient if many
pages are involved, and makes fewer assumptions about how it's used.
Notably, Claudio Freire pointed out that UpdateFreeSpaceMap could fail
if the specified freespace value isn't the maximum possible.  This isn't
a problem for the single existing user, but the function represents an
attractive nuisance IMO, because it's named as though it were a
general-purpose update function and its limitations are undocumented.
In any case we don't need multiple ways to get the same result.

In passing, do some code review and cleanup in RelationAddExtraBlocks.
In particular, I see no excuse for it to omit the PageIsNew safety check
that's done in the mainline extension path in RelationGetBufferForTuple.

Discussion: https://postgr.es/m/CAGTBQpYR0uJCNTt3M5GOzBRHo+-GccNO1nCaQ8yEJmZKSW5q1A@mail.gmail.com
2018-03-29 12:22:44 -04:00
Tom Lane 851a26e266 While vacuuming a large table, update upper-level FSM data every so often.
VACUUM updates leaf-level FSM entries immediately after cleaning the
corresponding heap blocks.  fsmpage.c updates the intra-page search trees
on the leaf-level FSM pages when this happens, but it does not touch the
upper-level FSM pages, so that the released space might not actually be
findable by searchers.  Previously, updating the upper-level pages happened
only at the conclusion of the VACUUM run, in a single FreeSpaceMapVacuum()
call.  This is bad because the VACUUM might get canceled before ever
reaching that point, so that from the point of view of searchers no space
has been freed at all, leading to table bloat.

We can improve matters by updating the upper pages immediately after each
cycle of index-cleaning and heap-cleaning, processing just the FSM pages
corresponding to the range of heap blocks we have now fully cleaned.
This adds a small amount of extra work, since the FSM pages leading down
to each range boundary will be touched twice, but it's pretty negligible
compared to everything else going on in a large VACUUM.

If there are no indexes, VACUUM doesn't work in cycles but just cleans
each heap page on first visit.  In that case we just arbitrarily update
upper FSM pages after each 8GB of heap.  That maintains the goal of not
letting all this work slide until the very end, and it doesn't seem worth
expending extra complexity on a case that so seldom occurs in practice.

In either case, the FSM is fully up to date before any attempt is made
to truncate the relation, so that the most likely scenario for VACUUM
cancellation no longer results in out-of-date upper FSM pages.  When
we do successfully truncate, adjusting the FSM to reflect that is now
fully handled within FreeSpaceMapTruncateRel.

Claudio Freire, reviewed by Masahiko Sawada and Jing Wang, some additional
tweaks by me

Discussion: https://postgr.es/m/CAGTBQpYR0uJCNTt3M5GOzBRHo+-GccNO1nCaQ8yEJmZKSW5q1A@mail.gmail.com
2018-03-29 11:29:54 -04:00
Teodor Sigaev c0cbe00fee Add casts from jsonb
Add explicit casts from scalar jsonb to all numeric and bool types. It would be
better to have a cast from scalar jsonb to text too, but there is already a cast
from jsonb to text that simply returns the text representation of the json, and
there is no way to have two different casts for the same pair of types.
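
A quick sketch of the new casts (the exact list of target types is as
per this commit):

SELECT '42'::jsonb::int4      AS i,
       '3.14'::jsonb::numeric AS n,
       'true'::jsonb::boolean AS b;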

Bump catalog version

Author: Anastasia Lubennikova with editing by Nikita Glukhov and me
Review by: Aleksander Alekseev, Nikita Glukhov, Darafei Praliaskouski
Discussion: https://www.postgresql.org/message-id/flat/0154d35a-24ae-f063-5273-9ffcdf1c7f2e@postgrespro.ru
2018-03-29 16:33:56 +03:00
Peter Eisentraut 056a5a3f63 Allow committing inside cursor loop
Previously, committing or aborting inside a cursor loop was prohibited
because that would close and remove the cursor.  To allow that,
automatically convert such cursors to holdable cursors so they survive
commits or rollbacks.  Portals now have a new state "auto-held", which
means they have been converted automatically from pinned.  An auto-held
portal is kept on transaction commit or rollback, but is still removed
when returning to the main loop on error.
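
A minimal sketch, assuming a hypothetical work_items table and a DO
block run outside an explicit transaction block:

DO $$
DECLARE
    r record;
BEGIN
    FOR r IN SELECT id FROM work_items LOOP
        UPDATE work_items SET done = true WHERE id = r.id;
        COMMIT;  -- the loop's cursor is auto-converted to a holdable one
    END LOOP;
END
$$;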

This supports all languages that have cursor loop constructs: PL/pgSQL,
PL/Python, PL/Perl.

Reviewed-by: Ildus Kurbangaliev <i.kurbangaliev@postgrespro.ru>
2018-03-28 19:03:26 -04:00
Andres Freund 1f0c6a9e7d Add EXPLAIN support for JIT.
This just shows a few details about JITing, e.g. how many functions
have been JITed, and how long that took.  To avoid noise in regression
tests with functions sometimes being JITed in --with-llvm builds,
disable display when COSTS OFF is specified.
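
For example, in a build with JIT support (jit and jit_above_cost are
the GUCs from the surrounding JIT commits; settings chosen purely for
demonstration):

SET jit = on;
SET jit_above_cost = 0;  -- force JIT even for a cheap query
EXPLAIN (ANALYZE) SELECT sum(i) FROM generate_series(1, 1000) AS s(i);
-- the plan output gains a JIT section with function counts and timings,
-- which is omitted when COSTS OFF is specified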

Author: Andres Freund
Discussion: https://postgr.es/m/20170901064131.tazjxwus3k2w3ybh@alap3.anarazel.de
2018-03-28 13:26:51 -07:00
Andres Freund 9370462e9a Add inlining support to LLVM JIT provider.
This provides infrastructure to allow JITed code to inline code
implemented in C; this can be, e.g., postgres internal functions or
extension code.

This already speeds up long-running queries by allowing the LLVM
optimizer to optimize across function boundaries. The optimization
currently doesn't reach its full potential because LLVM cannot
optimize the FunctionCallInfoData argument fully away, as it is
allocated on the heap rather than the stack. Fixing that is
beyond what's realistic for v11.

To be able to do that, use CLANG to convert C code to LLVM bitcode,
and have LLVM build a summary for it. That bitcode can then be used
to inline functions at runtime. For that the bitcode needs to be
installed. Postgres bitcode goes into $pkglibdir/bitcode/postgres,
extensions go into equivalent directories.  PGXS has been modified so
that happens automatically if postgres has been compiled with LLVM
support.

Currently this isn't the fastest inlining implementation; modules are
reloaded from disk during inlining. That's to work around an apparent
LLVM bug triggering a seemingly spurious error in LLVM assertion-enabled
builds.  Once that is resolved we can remove the superfluous
read from disk.

Docs will follow in a later commit containing docs for the whole JIT
feature.

Author: Andres Freund
Discussion: https://postgr.es/m/20170901064131.tazjxwus3k2w3ybh@alap3.anarazel.de
2018-03-28 13:19:08 -07:00
Andres Freund 8a934d6778 Use isinf builtin for clang, for performance.
When compiling with clang, glibc's definition of isinf() ends up
leading to an external libc function call. That's because there was a
bug in the builtin in an old gcc version, and clang claims
compatibility with that older version.  This causes clang to be
measurably slower than gcc for floating-point-heavy workloads.

To fix, simply redirect isinf() to the builtin when compiling with
clang and clang confirms it has __builtin_isinf().
2018-03-28 13:12:15 -07:00
Peter Eisentraut d92bc83c48 PL/pgSQL: Nested CALL with transactions
So far, a nested CALL or DO in PL/pgSQL would not establish a context
where transaction control statements were allowed.  This fixes that by
handling CALL and DO specially in PL/pgSQL, passing the atomic/nonatomic
execution context through and doing the required management around
transaction boundaries.
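
A minimal sketch of what this enables (procedure names are hypothetical,
and the outermost CALL must itself run in a non-atomic context):

CREATE PROCEDURE do_commit() LANGUAGE plpgsql AS $$
BEGIN
    COMMIT;  -- now allowed even though we are inside a nested CALL
END $$;

CREATE PROCEDURE outer_proc() LANGUAGE plpgsql AS $$
BEGIN
    CALL do_commit();
END $$;

CALL outer_proc();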

Reviewed-by: Tomas Vondra <tomas.vondra@2ndquadrant.com>
2018-03-28 13:31:27 -04:00
Simon Riggs 1eb6d6527a Store 2PC GID in commit/abort WAL recs for logical decoding
Store the GID of 2PC in commit/abort WAL records when wal_level = logical.
This allows logical decoding to send the same GID to subscribers
across restarts of logical replication.

Track replica origin replay progress for 2PC.

(Edited from patch 0003 in the logical decoding 2PC series.)

Authors: Nikhil Sontakke, Stas Kelvich
Reviewed-by: Simon Riggs, Andres Freund
2018-03-28 17:42:50 +01:00
Andres Freund f4f5845b31 Quick adaption of JIT tuple deforming to the fast default patch.
Instead of using memset to set tts_isnull, call the new
slot_getmissingattrs().

Also fix a bug (= instead of >=) in the code generation. Normally = is
correct, but when repeatedly deforming fields not in a
tuple (e.g. deform up to natts + 1 and then natts + 2) >= is needed.

Discussion: https://postgr.es/m/20180328010053.i2qvsuuusst4lgmc@alap3.anarazel.de
2018-03-27 21:03:10 -07:00
Andres Freund b4013b8e4a Add catversion bump missed in 16828d5c0.
Given that pg_attribute changed its layout...
2018-03-27 19:07:39 -07:00
Andrew Dunstan 16828d5c02 Fast ALTER TABLE ADD COLUMN with a non-NULL default
Currently adding a column to a table with a non-NULL default results in
a rewrite of the table. For large tables this can be both expensive and
disruptive. This patch removes the need for the rewrite as long as the
default value is not volatile. The default expression is evaluated at
the time of the ALTER TABLE and the result stored in a new column
(attmissingval) in pg_attribute, and a new column (atthasmissing) is set
to true. Any existing row when fetched will be supplied with the
attmissingval. New rows will have the supplied value or the default and
so will never need the attmissingval.
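
For instance, assuming a pre-existing, hypothetical big_table:

-- No table rewrite: the non-volatile default is evaluated once at
-- ALTER time and stored as attmissingval for existing rows.
ALTER TABLE big_table ADD COLUMN flags integer NOT NULL DEFAULT 0;

-- Still rewrites: clock_timestamp() is volatile, so each existing row
-- needs its own value.
ALTER TABLE big_table ADD COLUMN first_seen timestamptz DEFAULT clock_timestamp();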

Any time the table is rewritten all the atthasmissing and attmissingval
settings for the attributes are cleared, as they are no longer needed.

The most visible code change from this is in heap_attisnull, which
acquires a third TupleDesc argument, allowing it to detect a missing
value if there is one. In many cases where it is known that there will
not be any missing values (e.g. catalog relations), NULL can be passed
for this argument.

Andrew Dunstan, heavily modified from an original patch from Serge
Rielau.
Reviewed by Tom Lane, Andres Freund, Tomas Vondra and David Rowley.

Discussion: https://postgr.es/m/31e2e921-7002-4c27-59f5-51f08404c858@2ndQuadrant.com
2018-03-28 10:43:52 +10:30
Tom Lane 442accc3fe Allow memory contexts to have both fixed and variable ident strings.
Originally, we treated memory context names as potentially variable in
all cases, and therefore always copied them into the context header.
Commit 9fa6f00b1 rethought this a little bit and invented a distinction
between fixed and variable names, skipping the copy step for the former.
But we can make things both simpler and more useful by instead allowing
there to be two parts to a context's identification, a fixed "name" and
an optional, variable "ident".  The name supplied in the context create
call is now required to be a compile-time-constant string in all cases,
as it is never copied but just pointed to.  The "ident" string, if
wanted, is supplied later.  This is needed because typically we want
the ident to be stored inside the context so that it's cleaned up
automatically on context deletion; that means it has to be copied into
the context before we can set the pointer.

The cost of this approach is basically just an additional pointer field
in struct MemoryContextData, which isn't much overhead, and is bought
back entirely in the AllocSet case by not needing a headerSize field
anymore, since we no longer have to cope with variable header length.
In addition, we can simplify the internal interfaces for memory context
creation still further, saving a few cycles there.  And it's no longer
true that a custom identifier disqualifies a context from participating
in aset.c's freelist scheme, so possibly there's some win on that end.

All the places that were using non-compile-time-constant context names
are adjusted to put the variable info into the "ident" instead.  This
allows more effective identification of those contexts in many cases;
for example, subsidiary contexts of relcache entries are now identified
by both type (e.g. "index info") and relname, where before you got only
one or the other.  Contexts associated with PL function cache entries
are now identified more fully and uniformly, too.

I also arranged for plancache contexts to use the query source string
as their identifier.  This is basically free for CachedPlanSources, as
they contained a copy of that string already.  We pay an extra pstrdup
to do it for CachedPlans.  That could perhaps be avoided, but it would
make things more fragile (since the CachedPlanSource is sometimes
destroyed first).  I suspect future improvements in error reporting will
require CachedPlans to have a copy of that string anyway, so it's not
clear that it's worth moving mountains to avoid it now.

This also changes the APIs for context statistics routines so that the
context-specific routines no longer assume that output goes straight
to stderr, nor do they know all details of the output format.  This
is useful immediately to reduce code duplication, and it also allows
for external code to do something with stats output that's different
from printing to stderr.

The reason for pushing this now rather than waiting for v12 is that
it rethinks some of the API changes made by commit 9fa6f00b1.  Seems
better for extension authors to endure just one round of API changes
not two.

Discussion: https://postgr.es/m/CAB=Je-FdtmFZ9y9REHD7VsSrnCkiBhsA4mdsLKSPauwXtQBeNA@mail.gmail.com
2018-03-27 16:46:51 -04:00
Simon Riggs c203d6cf81 Allow HOT updates for some expression indexes
If the value of an index expression is unchanged after UPDATE,
allow HOT updates where previously we disallowed them, giving
a significant performance boost in those cases.

Particularly useful for indexes such as JSON->>field where the
JSON value changes but the indexed value does not.

Submitted as the "surjective indexes" patch, now enabled by use
of the new "recheck_on_update" parameter.
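
A rough sketch (table and column names are hypothetical; the reloption
is the one introduced by this patch):

CREATE INDEX ON docs ((payload ->> 'title')) WITH (recheck_on_update = on);

-- Only payload->>'views' changes here, payload->>'title' does not, so
-- the update can now be HOT despite touching the indexed column.
UPDATE docs SET payload = jsonb_set(payload, '{views}', '42') WHERE id = 1;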

Author: Konstantin Knizhnik
Reviewer: Simon Riggs, with much wordsmithing and some cleanup
2018-03-27 19:57:02 +01:00
Teodor Sigaev 920a5e500a Skip temp tables from basebackup.
Do not store temp tables in the base backup; they will not be visible anyway,
so there is no reason to store them.

Author: David Steele
Reviewed by: me
Discussion: https://www.postgresql.org/message-id/flat/5ea4d26a-a453-c1b7-eff9-5a3ef8f8aceb@pgmasters.net
2018-03-27 16:14:40 +03:00
Andres Freund 32af96b2b1 JIT tuple deforming in LLVM JIT provider.
Performing JIT compilation for deforming gains performance benefits
over unJITed deforming thanks to compile-time knowledge of the tuple
descriptor. Fixed column widths, NOT NULL-ness, etc. can be taken
advantage of.

Right now the JITed deforming is only used when deforming tuples as
part of expression evaluation (and obviously only if the descriptor is
known). It's likely to be beneficial in other cases, too.

By default tuple deforming is JITed whenever an expression is JIT
compiled. There's a separate boolean GUC controlling it, but that's
expected to be primarily useful for development and benchmarking.

Docs will follow in a later commit containing docs for the whole JIT
feature.

Author: Andres Freund
Discussion: https://postgr.es/m/20170901064131.tazjxwus3k2w3ybh@alap3.anarazel.de
2018-03-26 12:57:19 -07:00
Alvaro Herrera 555ee77a96 Handle INSERT .. ON CONFLICT with partitioned tables
Commit eb7ed3f306 enabled unique constraints on partitioned tables,
but one thing that was not working properly is INSERT/ON CONFLICT.
This commit introduces a new node that keeps state related to the ON CONFLICT
clause per partition, and fills it in when that partition is about to be
used for tuple routing.
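
For instance, with a hypothetical hash-partitioned table:

CREATE TABLE kv (k int PRIMARY KEY, v text) PARTITION BY HASH (k);
CREATE TABLE kv_0 PARTITION OF kv FOR VALUES WITH (MODULUS 2, REMAINDER 0);
CREATE TABLE kv_1 PARTITION OF kv FOR VALUES WITH (MODULUS 2, REMAINDER 1);

INSERT INTO kv VALUES (1, 'first');
INSERT INTO kv VALUES (1, 'second')
    ON CONFLICT (k) DO UPDATE SET v = EXCLUDED.v;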

Author: Amit Langote, Álvaro Herrera
Reviewed-by: Etsuro Fujita, Pavan Deolasee
Discussion: https://postgr.es/m/20180228004602.cwdyralmg5ejdqkq@alvherre.pgsql
2018-03-26 10:43:54 -03:00
Teodor Sigaev 8694cc96b5 Exclude unlogged tables from base backups
Exclude unlogged tables from the base backup entirely, except for the init
fork, which marks a created unlogged table.  The next step is to not back up
temp tables, but that's a story for a separate patch.

Author: David Steele
Review by: Adam Brightwell, Masahiko Sawada
Discussion: https://www.postgresql.org/message-id/flat/04791bab-cb04-ba43-e9c0-664a4c1ffb2c@pgmasters.net
2018-03-23 19:14:12 +03:00
Peter Eisentraut 7ba7986fb4 Fix interaction of Perl and stdbool.h
Revert the PL/Perl-specific change in
9a95a77d9d.  We must not prevent Perl from
using stdbool.h when it has been built to do so, even if it uses an
incompatible size.  Otherwise, we would be imposing our bool on Perl,
which will lead to crashes because of the size mismatch.

Instead, we undef bool after including the Perl headers, as we did
previously, but now only if we are not using stdbool.h ourselves.
Record that choice in c.h as USE_STDBOOL.  This will also make it easier
to apply that coding pattern elsewhere if necessary.
2018-03-23 10:31:10 -04:00
Alvaro Herrera 86f575948c Allow FOR EACH ROW triggers on partitioned tables
Previously, FOR EACH ROW triggers were not allowed in partitioned
tables.  Now we allow AFTER triggers on them, and on trigger creation we
cascade to create an identical trigger in each partition.  We also clone
the triggers to each partition that is created or attached later.

This means that deferred unique keys are allowed on partitioned tables,
too.
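
A small sketch of the new behavior (all names are hypothetical):

CREATE TABLE measurements (ts timestamptz, val numeric) PARTITION BY RANGE (ts);
CREATE TABLE measurements_2018 PARTITION OF measurements
    FOR VALUES FROM ('2018-01-01') TO ('2019-01-01');

CREATE FUNCTION note_insert() RETURNS trigger LANGUAGE plpgsql AS $$
BEGIN
    RAISE NOTICE 'inserted into %', TG_TABLE_NAME;
    RETURN NULL;  -- return value is ignored for AFTER row triggers
END $$;

-- The trigger is cloned onto measurements_2018 and onto any partition
-- created or attached later.
CREATE TRIGGER note_measurements AFTER INSERT ON measurements
    FOR EACH ROW EXECUTE PROCEDURE note_insert();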

Author: Álvaro Herrera
Reviewed-by: Peter Eisentraut, Simon Riggs, Amit Langote, Robert Haas,
	Thomas Munro
Discussion: https://postgr.es/m/20171229225319.ajltgss2ojkfd3kp@alvherre.pgsql
2018-03-23 10:48:22 -03:00
Andres Freund 2111a48a0c Adapt expression JIT to stdbool.h introduction.
The LLVM JIT provider uses clang to synchronize types between normal C
code and runtime generated code. Clang represents stdbool.h style
booleans in return values & parameters differently from booleans
stored in variables.

Thus the expression compilation code from 2a0faed9d needs to be
adapted to 9a95a77d9. Instead of hardcoding i8 as the type for
booleans (which already was wrong on some edge case platforms!), use
postgres' notion of a boolean as used for storage and for parameters.

Per buildfarm animal xenodermus.

Author: Andres Freund
2018-03-22 22:15:51 -07:00
Peter Eisentraut 9a95a77d9d Use stdbool.h if suitable
Using the standard bool type provided by C allows some recent compilers
and debuggers to give better diagnostics.  Also, some extension code and
third-party headers are increasingly pulling in stdbool.h, so it's
probably saner if everyone uses the same definition.

But PostgreSQL code is not prepared to handle bool of a size other than
1, so we keep our own old definition if we encounter a stdbool.h with a
bool of a different size.  (Among current build farm members, this only
applies to old macOS versions on PowerPC.)

To check that the used bool is of the right size, add a static
assertion about the size of GinTernaryValue vs. bool.  This is currently the
only place that assumes that bool and char are of the same size.

Discussion: https://www.postgresql.org/message-id/flat/3a0fe7e1-5ed1-414b-9230-53bbc0ed1f49@2ndquadrant.com
2018-03-22 20:42:25 -04:00
Andres Freund 2a0faed9d7 Add expression compilation support to LLVM JIT provider.
In addition to the interpretation of expressions (which backs
evaluation of WHERE clauses, target list projection, aggregate
transition values, etc.), support compiling expressions to native code,
using the infrastructure added in earlier commits.

To avoid duplicating a lot of code, only support emitting code for
cases that are likely to be performance critical. For expression steps
not deemed performance critical, use the existing interpreter.

The generated code isn't great - some architectural changes are
required to address that. But this already yields a significant
speedup for some analytics queries, particularly with WHERE clauses
filtering a lot, or computing multiple aggregates.

Author: Andres Freund
Tested-By: Thomas Munro
Discussion: https://postgr.es/m/20170901064131.tazjxwus3k2w3ybh@alap3.anarazel.de

Disable JITing for VALUES() nodes.

VALUES() nodes are only ever executed once. This is primarily helpful
for debugging, when forcing JITing even for cheap queries.

Author: Andres Freund
Discussion: https://postgr.es/m/20170901064131.tazjxwus3k2w3ybh@alap3.anarazel.de
2018-03-22 14:45:59 -07:00
Andres Freund 7ced1d1247 Add FIELDNO_* macro designating offset into structs required for JIT.
For any interesting JIT target, fields inside structs need to be
accessed. b96d550e contains infrastructure for syncing the definition
of types between postgres C code and runtime code generation with
LLVM. But that doesn't sync the number or names of fields inside
structs, just the types (including padding etc).

One option would be to hardcode the offset numbers in the JIT code,
but that'd be hard to keep in sync. Instead add macros indicating the
field offset to the fields that need to be accessed. Not pretty, but
manageable.

Author: Andres Freund
Discussion: https://postgr.es/m/20170901064131.tazjxwus3k2w3ybh@alap3.anarazel.de
2018-03-22 14:45:59 -07:00
Andres Freund fb46ac26fe Expand list of synchronized types and functions in LLVM JIT provider.
Author: Andres Freund
Discussion: https://postgr.es/m/20170901064131.tazjxwus3k2w3ybh@alap3.anarazel.de
2018-03-22 14:45:59 -07:00
Andres Freund cc415a56d0 Basic planner and executor integration for JIT.
This adds a simple cost-based plan-time decision about whether JIT
should be performed. jit_above_cost and jit_optimize_above_cost are
compared with the total cost of a plan, and if the cost is above them
JIT / optimization is performed, respectively.

For that PlannedStmt and EState have a jitFlags (es_jit_flags) field
that stores information about what JIT operations should be performed.

EState now also has a new es_jit field, which can store a
JitContext. When there are no errors the context is released in
standard_ExecutorEnd().

It is likely that the default values for jit_[optimize_]above_cost
will need to be adapted further, but in my tests these values seem to
work reasonably well.

Author: Andres Freund, with feedback by Peter Eisentraut
Discussion: https://postgr.es/m/20170901064131.tazjxwus3k2w3ybh@alap3.anarazel.de
2018-03-22 11:51:58 -07:00
Andres Freund 7ec0d80c05 Add helpers for emitting LLVM IR.
These basically just help to make the code a bit more concise and
pgindent-proof.

Author: Andres Freund
Discussion: https://postgr.es/m/20170901064131.tazjxwus3k2w3ybh@alap3.anarazel.de
2018-03-22 11:51:58 -07:00
Andres Freund 250bca7fc1 Debugging and profiling support for LLVM JIT provider.
This currently requires patches to the LLVM codebase to be
effective (submitted upstream); the GUCs are available even without those
patches, however.

Author: Andres Freund
Discussion: https://postgr.es/m/20170901064131.tazjxwus3k2w3ybh@alap3.anarazel.de
2018-03-22 11:07:55 -07:00
Andres Freund b96d550eb0 Support for optimizing and emitting code in LLVM JIT provider.
This commit introduces the ability to actually generate code using
LLVM. In particular, this adds:

- Ability to emit code both in heavily optimized and largely
  unoptimized fashion
- Batching facility to allow functions to be defined in small
  increments, but optimized and emitted in executable form in larger
  batches (for performance and memory efficiency)
- Type and function declaration synchronization between runtime
  generated code and normal postgres code. This is critical to be able
  to access struct fields etc.
- Developer oriented jit_dump_bitcode GUC, for inspecting / debugging
  the generated code.
- per JitContext statistics of number of functions, time spent
  generating code, optimizing, and emitting it.  This will later be
  employed for EXPLAIN support.

This commit doesn't yet contain any code actually generating
functions. That'll follow in later commits.

Documentation for GUCs added, and for JIT in general, will be added in
later commits.

Author: Andres Freund, with contributions by Pierre Ducroquet
Testing-By: Thomas Munro, Peter Eisentraut
Discussion: https://postgr.es/m/20170901064131.tazjxwus3k2w3ybh@alap3.anarazel.de
2018-03-22 11:05:22 -07:00
Robert Haas e2f1eb0ee3 Implement partition-wise grouping/aggregation.
If the partition keys of the input relation are part of the GROUP BY
clause, all the rows belonging to a given group come from a single
partition.  This allows aggregation/grouping over a partitioned
relation to be broken down into aggregation/grouping on each
partition.  This should be no worse, and often better, than the normal
approach.

If the GROUP BY clause does not contain all the partition keys, we can
still perform partial aggregation for each partition and then finalize
aggregation after appending the partial results.  This is less certain
to be a win, but it's still useful.
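
For example, assuming a hypothetical "sales" table partitioned by
LIST (region); the GUC is the one added by this patch and defaults to
off:

SET enable_partitionwise_aggregate = on;

-- Grouping by the partition key lets each partition be aggregated
-- independently; grouping by other columns still allows partial
-- per-partition aggregation that is finalized afterwards.
EXPLAIN SELECT region, sum(amount) FROM sales GROUP BY region;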

Jeevan Chalke, Ashutosh Bapat, Robert Haas.  The larger patch series
of which this patch is a part was also reviewed and tested by Antonin
Houska, Rajkumar Raghuwanshi, David Rowley, Dilip Kumar, Konstantin
Knizhnik, Pascal Legrand, and Rafia Sabih.

Discussion: http://postgr.es/m/CAM2+6=V64_xhstVHie0Rz=KPEQnLJMZt_e314P0jaT_oJ9MR8A@mail.gmail.com
2018-03-22 12:49:48 -04:00
Teodor Sigaev f67b113ac6 Add \if support to pgbench
This patch adds \if to pgbench, as was done for psql. The implementation shares
the condition stack code with psql, so this code is moved to the fe_utils
directory.
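
A small sketch of a custom script using the new meta-command (relying
on pgbench's expression comparisons):

\set r random(1, 100)
\if :r <= 10
    SELECT pg_sleep(0.01);
\elif :r <= 50
    SELECT 1;
\else
    SELECT 2;
\endif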

Author: Fabien COELHO with minor editing by me
Review by: Vik Fearing, Fedor Sigaev
Discussion: https://www.postgresql.org/message-id/flat/alpine.DEB.2.20.1711252200190.28523@lancre
2018-03-22 17:42:03 +03:00
Andres Freund 31bc604e0b Add file containing extensions of the LLVM C API.
Author: Andres Freund
Discussion: https://postgr.es/m/20170901064131.tazjxwus3k2w3ybh@alap3.anarazel.de
2018-03-21 19:44:17 -07:00
Andres Freund 432bb9e04d Basic JIT provider and error handling infrastructure.
This commit introduces:

1) JIT provider abstraction, which allows JIT functionality to be
   implemented in separate shared libraries. That's desirable because
   it allows installing JIT support as a separate package, and because
   it allows experimentation with different forms of JITing.
2) JITContexts, which can be used, via functions introduced in follow-up
   commits, to emit JITed functions and have them be cleaned up
   on error.
3) The outline of a LLVM JIT provider, which will be fleshed out in
   subsequent commits.

Documentation for GUCs added, and for JIT in general, will be added in
later commits.

Author: Andres Freund, with architectural input from Jeff Davis
Discussion: https://postgr.es/m/20170901064131.tazjxwus3k2w3ybh@alap3.anarazel.de
2018-03-21 19:28:28 -07:00
Tom Lane 742869946f Fix mishandling of quoted-list GUC values in pg_dump and ruleutils.c.
Code that prints out the contents of setconfig or proconfig arrays in
SQL format needs to handle GUC_LIST_QUOTE variables differently from
other ones, because for those variables, flatten_set_variable_args()
already applied a layer of quoting.  The value can therefore safely
be printed as-is, and indeed must be, or flatten_set_variable_args()
will muck it up completely on reload.  For all other GUC variables,
it's necessary and sufficient to quote the value as a SQL literal.

We'd recognized the need for this long ago, but mis-analyzed the
need slightly, thinking that all GUC_LIST_INPUT variables needed
the special treatment.  That's actually wrong, since a valid value
of a LIST variable might include characters that need quoting,
although no existing variables accept such values.

More to the point, we hadn't made any particular effort to keep the
various places that deal with this up-to-date with the set of variables
that actually need special treatment, meaning that we'd do the wrong
thing with, for example, temp_tablespaces values.  This affects dumping
of SET clauses attached to functions, as well as ALTER DATABASE/ROLE SET
commands.
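
For example (temp_tablespaces is one of the affected GUC_LIST_QUOTE
variables; the role, function, and tablespace names are hypothetical):

ALTER ROLE app_user SET temp_tablespaces = ts_a, ts_b;
ALTER FUNCTION report() SET temp_tablespaces = ts_a, ts_b;
-- pg_dump and ruleutils.c now print the stored list as-is instead of
-- wrapping it in an extra layer of quoting.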

In ruleutils.c we can fix it reasonably honestly by exporting a guc.c
function that allows discovering the flags for a given GUC variable.
But pg_dump doesn't have easy access to that, so continue the old method
of having a hard-wired list of affected variable names.  At least we can
fix it to have just one list not two, and update the list to match
current reality.

A remaining problem with this is that it only works for built-in
GUC variables.  pg_dump's list obviously knows nothing of third-party
extensions, and even the "ask guc.c" method isn't bulletproof since
the relevant extension might not be loaded.  There's no obvious
solution to that, so for now, we'll just have to discourage extension
authors from inventing custom GUCs that need GUC_LIST_QUOTE.

This has been busted for a long time, so back-patch to all supported
branches.

Michael Paquier and Tom Lane, reviewed by Kyotaro Horiguchi and
Pavel Stehule

Discussion: https://postgr.es/m/20180111064900.GA51030@paquier.xyz
2018-03-21 20:03:28 -04:00
Tom Lane 27ba260c73 Change oddly-chosen OID allocation.
I noticed while fooling with John Naylor's bootstrap-data patch that we had
one high-numbered manually assigned OID, 8888, which evidently came from a
submission that the committer didn't bother to bring into line with usual
OID allocation practices before committing.  That's a bad idea, because it
creates a hazard for other patches that may be temporarily using high OID
numbers.  Change it to something more in line with what we usually do.

This evidently dates to commit abb173392.  It's too late to change it
in released branches, but we can fix it in HEAD.
2018-03-21 14:57:12 -04:00
Peter Eisentraut 325f2ec555 Handle heap rewrites even better in logical decoding
Logical decoding should not publish anything about tables created as
part of a heap rewrite during DDL.  Those tables don't exist externally,
so consumers of logical decoding cannot do anything sensible with that
information.  In ab28feae2b, we worked
around this for built-in logical replication, but that was a hack.

This is a more proper fix: We mark such transient heaps using the new
field pg_class.relwrite, linking to the original relation OID.  By
default, we ignore them in logical decoding before they get to the
output plugin.  Optionally, a plugin can register its interest in
getting such changes, if it handles DDL specially, in which case the
new field will help them get information about the actual table.
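
For example, such transient rewrite heaps can be spotted through the
new catalog column (they are skipped by decoding by default):

SELECT oid, relname, relwrite
FROM pg_class
WHERE relwrite <> 0;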

Reviewed-by: Craig Ringer <craig@2ndquadrant.com>
2018-03-21 09:15:04 -04:00