In particular, warn about the hazards of mishandling the backup_label
file. Also adjust a couple of server messages to be clearer about
the hazards associated with removing backup_label files.
David Steele and Robert Haas, reviewed by Laurenz Albe, Martín
Marqués, Peter Eisentraut, and Magnus Hagander.
Discussion: http://postgr.es/m/7d85c387-000e-16f0-e00b-50bf83c22127@pgmasters.net
This adds the CONCURRENTLY option to the REINDEX command. A REINDEX
CONCURRENTLY on a specific index creates a new index (like CREATE
INDEX CONCURRENTLY), then renames the old index away and the new index
into place, adjusts the dependencies, and then drops the old
index (like DROP INDEX CONCURRENTLY). The REINDEX command also has
the capability to run its other variants (TABLE, DATABASE) with the
CONCURRENTLY option (but not SYSTEM).
The reindexdb command gets the --concurrently option.
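For illustration, minimal usage sketches (relation names are hypothetical):
    REINDEX INDEX CONCURRENTLY my_index;
    REINDEX TABLE CONCURRENTLY my_table;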
Author: Michael Paquier, Andreas Karlsson, Peter Eisentraut
Reviewed-by: Andres Freund, Fujii Masao, Jim Nasby, Sergei Kornilov
Discussion: https://www.postgresql.org/message-id/flat/60052986-956b-4478-45ed-8bd119e9b9cf%402ndquadrant.com#74948a1044c56c5e817a5050f554ddee
To support building indexes over tables of different AMs, the scans to
do so need to be routed through the table AM. While moving a fair
amount of code, nearly all the changes are just moving code to below a
callback.
Currently the range-based interface wouldn't make much sense for
non-block-based table AMs, but that seems acceptable for now.
Author: Andres Freund
Discussion: https://postgr.es/m/20180703070645.wchpu5muyto5n647@alap3.anarazel.de
The makefile rule for the (rarely used) plain-text output postgres.txt
was still written to use lynx, but in
96b8b8b6f9, where the INSTALL file was
switched to pandoc, the rest of the makefile support for lynx was
removed, so this was broken. Rewrite the rule to also use pandoc for
postgres.txt.
Introduce a third extended statistic type, supported by the CREATE
STATISTICS command - MCV lists, a generalization of the statistic
already built and used for individual columns.
Compared to the already supported types (n-distinct coefficients and
functional dependencies), MCV lists are more complex, include column
values and allow estimation of a much wider range of common clauses
(equality and inequality conditions, IS NULL, IS NOT NULL etc.).
Similarly to the other types, a new pseudo-type (pg_mcv_list) is used.
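As an illustration, a minimal sketch (table and column names are hypothetical):
    CREATE STATISTICS stts_mcv (mcv) ON city, zip FROM addresses;
    ANALYZE addresses;
The MCV list on (city, zip) then improves selectivity estimates for clauses
such as WHERE city = 'X' AND zip = 'Y' on correlated columns.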
Author: Tomas Vondra
Reviewed-by: Dean Rasheed, David Rowley, Mark Dilger, Alvaro Herrera
Discussion: https://postgr.es/m/dfdac334-9cf2-2597-fb27-f0fb3753f435@2ndquadrant.com
Column references are not allowed in default expressions and partition
bound expressions, and are restricted as such once the transformation of
their expressions is done. However, trying to use more complex column
references can lead to confusing error messages. For example, trying to
use a two-field column reference name for default expressions and
partition bounds leads to "missing FROM-clause entry for table", which
makes no sense in their respective context.
In order to make the errors generated more useful, this commit adds more
verbose messages when transforming column references depending on the
context. This has one small consequence though: for example, an
expression using an aggregate with a column reference as argument now
causes an error to be reported for the column reference, whereas before
this commit the aggregate was the problem reported, because column
references get transformed first.
The confusion has existed for default expressions for a long time, while
the problem is new as of v12 for partition bounds. Still, given the lack
of complaints on the matter, no backpatch is done.
The patch has been written by Amit Langote and me, and Tom Lane has
provided improvements to the documentation for default expressions on
the CREATE TABLE page.
Author: Amit Langote, Michael Paquier
Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/20190326020853.GM2558@paquier.xyz
Partial revert of commit 6260cc550b, "pgbench: add \cset and \gset
commands".
While \gset is widely considered a useful and necessary tool for user-
defined benchmarks, \cset does not have as much value, and its
implementation was considered "not to be up to project standards"
(though I, Álvaro, can't quite understand exactly how). Therefore,
remove \cset.
Author: Fabien Coelho
Discussion: https://postgr.es/m/alpine.DEB.2.21.1903230716030.18811@lancre
Discussion: https://postgr.es/m/201901101900.mv7zduch6sad@alvherre.pgsql
This uses the same progress reporting infrastructure added in commit
c16dc1aca5 and extends it to these
additional cases. We lack the ability to track the internal progress
of sorts and index builds so the information reported is
coarse-grained for some parts of the operation, but it still seems
like a significant improvement over having nothing at all.
Tatsuro Yamada, reviewed by Thomas Munro, Masahiko Sawada, Michael
Paquier, Jeff Janes, Alvaro Herrera, Rafia Sabih, and by me. A fair
amount of polishing also by me.
Discussion: http://postgr.es/m/59A77072.3090401@lab.ntt.co.jp
Add command variants COMMIT AND CHAIN and ROLLBACK AND CHAIN, which
start new transactions with the same transaction characteristics as the
just finished one, per SQL standard.
Support for transaction chaining in PL/pgSQL is also added. This
functionality is especially useful when running COMMIT in a loop in
PL/pgSQL.
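A minimal illustration (isolation level chosen arbitrarily):
    BEGIN ISOLATION LEVEL REPEATABLE READ;
    -- ... do some work ...
    COMMIT AND CHAIN;    -- new transaction starts with REPEATABLE READ again
    -- ... more work ...
    ROLLBACK AND CHAIN;  -- likewise, discarding the second batch of work
    COMMIT;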
Reviewed-by: Fabien COELHO <coelho@cri.ensmp.fr>
Discussion: https://www.postgresql.org/message-id/flat/28536681-324b-10dc-ade8-ab46f7645a5a@2ndquadrant.com
Previously we were using the SQL:2003 definition, which doesn't allow
this, but that creates a serious dump/restore gotcha: there is no
setting of xmloption that will allow all valid XML data. Hence,
switch to the 2006 definition.
Since libxml doesn't accept <!DOCTYPE> directives in the mode we
use for CONTENT parsing, the implementation is to detect <!DOCTYPE>
in the input and switch to DOCUMENT parsing mode. This should not
cost much, because <!DOCTYPE> should be close to the front of the
input if it's there at all. It's possible that this causes the
error messages for malformed input to be slightly different than
they were before, if said input includes <!DOCTYPE>; but that does
not seem like a big problem.
In passing, buy back a few cycles in parsing of large XML documents
by not doing strlen() of the whole input in parse_xml_decl().
Back-patch because dump/restore failures are not nice. This change
shouldn't break any cases that worked before, so it seems safe to
back-patch.
Chapman Flack (revised a bit by me)
Discussion: https://postgr.es/m/CAN-V+g-6JqUQEQZ55Q3toXEN6d5Ez5uvzL4VR+8KtvJKj31taw@mail.gmail.com
This option is consistent with what pg_dump, pg_rewind and pg_basebackup
provide, and is useful for limiting the I/O effort when testing things;
it is not intended for use in a production environment.
Author: Michael Paquier
Reviewed-by: Michael Banck, Fabien Coelho, Sergei Kornilov
Discussion: https://postgr.es/m/20181221201616.GD4974@nighthawk.caipicrew.dd-dns.de
An offline cluster can now work with more modes in pg_checksums:
- --enable enables checksums in a cluster, updating all blocks with a
correct checksum, and updating the control file at the end.
- --disable disables checksums in a cluster, updating only the control
file.
- --check is an extra option able to verify checksums for a cluster, and
the default used if no mode is specified.
When running --enable or --disable, the data folder gets fsync'd for
durability, and then it is followed by a control file update and flush
to keep the operation consistent should the tool be interrupted, killed
or the host unplugged. If no mode is specified in the options, then
--check is used for compatibility with older versions of pg_checksums
(named pg_verify_checksums in v11 where it was introduced).
Author: Michael Banck, Michael Paquier
Reviewed-by: Fabien Coelho, Magnus Hagander, Sergei Kornilov
Discussion: https://postgr.es/m/20181221201616.GD4974@nighthawk.caipicrew.dd-dns.de
This adds a flag "deterministic" to collations. If that is false,
such a collation disables various optimizations that assume that
strings are equal only if they are byte-wise equal. That then allows
use cases such as case-insensitive or accent-insensitive comparisons
or handling of strings with different Unicode normal forms.
This functionality is only supported with the ICU provider. At least
glibc doesn't appear to have any locales that work in a
nondeterministic way, so it's not worth supporting this for the libc
provider.
The term "deterministic comparison" in this context is from Unicode
Technical Standard #10
(https://unicode.org/reports/tr10/#Deterministic_Comparison).
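For illustration, a nondeterministic, case-insensitive ICU collation could be
created and used roughly like this (collation name and locale are just an
example):
    CREATE COLLATION case_insensitive (
        provider = icu, locale = 'und-u-ks-level2', deterministic = false
    );
    SELECT 'ABC' = 'abc' COLLATE case_insensitive;   -- true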
This patch makes changes in three areas:
- CREATE COLLATION DDL changes and system catalog changes to support
this new flag.
- Many executor nodes and auxiliary code are extended to track
collations. Previously, this code would just throw away collation
information, because the eventually-called user-defined functions
didn't use it since they only cared about equality, which didn't
need collation information.
- String data type functions that do equality comparisons and hashing
are changed to take the (non-)deterministic flag into account. For
comparison, this just means skipping various shortcuts and tie
breakers that use byte-wise comparison. For hashing, we first need
to convert the input string to a canonical "sort key" using the ICU
analogue of strxfrm().
Reviewed-by: Daniel Verite <daniel@manitou-mail.org>
Reviewed-by: Peter Geoghegan <pg@bowt.ie>
Discussion: https://www.postgresql.org/message-id/flat/1ccc668f-4cbc-0bef-af67-450b47cdfee7@2ndquadrant.com
LDAP servers can be advertised on a network with RFC 2782 DNS SRV
records. The OpenLDAP command-line tools automatically try to find
servers that way, if no server name is provided by the user. Teach
PostgreSQL to do the same using OpenLDAP's support functions, when
building with OpenLDAP.
For now, we assume that HAVE_LDAP_INITIALIZE (an OpenLDAP extension
available since OpenLDAP 2.0 and also present in Apple LDAP) implies
that you also have ldap_domain2hostlist() (which arrived in the same
OpenLDAP version and is also present in Apple LDAP).
Author: Thomas Munro
Reviewed-by: Daniel Gustafsson
Discussion: https://postgr.es/m/CAEepm=2hAnSfhdsd6vXsM6VZVN0br-FbAZ-O+Swk18S5HkCP=A@mail.gmail.com
Teach contrib/amcheck's bt_index_parent_check() function to take
advantage of the uniqueness property of heapkeyspace indexes in support
of a new verification option: non-pivot tuples (non-highkey tuples on
the leaf level) can optionally be re-found using a new search for each,
that starts from the root page. If a tuple cannot be re-found, report
that the index is corrupt.
The new "rootdescend" verification option is exhaustive, and can
therefore make a call to bt_index_parent_check() take a lot longer.
Re-finding tuples during verification is mostly intended as an option
for backend developers, since the corruption scenarios that it alone is
uniquely capable of detecting seem fairly far-fetched.
For example, "rootdescend" verification is much more likely to detect
corruption of the least significant byte of a key from a pivot tuple in
the root page of a B-Tree that already has at least three levels.
Typically, only a few tuples on a cousin leaf page are at risk of
"getting overlooked" by index scans in this scenario. The corrupt key
in the root page is only slightly corrupt: corrupt enough to give wrong
answers to some queries, and yet not corrupt enough to allow the problem
to be detected without verifying agreement between the leaf page and the
root page, skipping at least one internal page level. The existing
bt_index_parent_check() checks never cross more than a single level.
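For illustration, a sketch of invoking the new option after CREATE EXTENSION
amcheck (the index name is hypothetical; heapallindexed is a pre-existing,
independent option):
    SELECT bt_index_parent_check('my_unique_idx'::regclass,
                                 heapallindexed => true,
                                 rootdescend => true);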
Author: Peter Geoghegan
Reviewed-By: Heikki Linnakangas
Discussion: https://postgr.es/m/CAH2-Wz=yTWnVu+HeHGKb2AGiADL9eprn-cKYAto4MkKOuiGtRQ@mail.gmail.com
Make nbtree treat all index tuples as having a heap TID attribute.
Index searches can distinguish duplicates by heap TID, since heap TID is
always guaranteed to be unique. This general approach has numerous
benefits for performance, and is prerequisite to teaching VACUUM to
perform "retail index tuple deletion".
Naively adding a new attribute to every pivot tuple has unacceptable
overhead (it bloats internal pages), so suffix truncation of pivot
tuples is added. This will usually truncate away the "extra" heap TID
attribute from pivot tuples during a leaf page split, and may also
truncate away additional user attributes. This can increase fan-out,
especially in a multi-column index. Truncation can only occur at the
attribute granularity, which isn't particularly effective, but works
well enough for now. A future patch may add support for truncating
"within" text attributes by generating truncated key values using new
opclass infrastructure.
Only new indexes (BTREE_VERSION 4 indexes) will have insertions that
treat heap TID as a tiebreaker attribute, or will have pivot tuples
undergo suffix truncation during a leaf page split (on-disk
compatibility with versions 2 and 3 is preserved). Upgrades to version
4 cannot be performed on-the-fly, unlike upgrades from version 2 to
version 3. contrib/amcheck continues to work with version 2 and 3
indexes, while also enforcing stricter invariants when verifying version
4 indexes. These stricter invariants are the same invariants described
by "3.1.12 Sequencing" from the Lehman and Yao paper.
A later patch will enhance the logic used by nbtree to pick a split
point. This patch is likely to negatively impact performance without
smarter choices around the precise point to split leaf pages at. Making
these two mostly-distinct sets of enhancements into distinct commits
seems like it might clarify their design, even though neither commit is
particularly useful on its own.
The maximum allowed size of new tuples is reduced by an amount equal to
the space required to store an extra MAXALIGN()'d TID in a new high key
during leaf page splits. The user-facing definition of the "1/3 of a
page" restriction is already imprecise, and so does not need to be
revised. However, there should be a compatibility note in the v12
release notes.
Author: Peter Geoghegan
Reviewed-By: Heikki Linnakangas, Alexander Korotkov
Discussion: https://postgr.es/m/CAH2-WzkVb0Kom=R+88fDFb=JSxZMFvbHVC6Mn9LJ2n=X=kS-Uw@mail.gmail.com
Aggregates have acquired a dozen or so optional attributes in recent
years for things like parallel query and moving-aggregate mode; the
lack of an OR REPLACE option to add or change these for an existing
agg makes extension upgrades gratuitously hard. Rectify.
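For illustration, a sketch of adding parallel-query attributes to an existing
aggregate without dropping it (names are hypothetical):
    CREATE AGGREGATE my_sum (int) (sfunc = int4pl, stype = int);
    CREATE OR REPLACE AGGREGATE my_sum (int) (
        sfunc = int4pl, stype = int,
        combinefunc = int4pl, parallel = safe
    );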
Add support for numeric error suppression to jsonpath, as required by the
standard. This commit doesn't use PG_TRY()/PG_CATCH() to implement that.
Instead, it provides internal versions of the numeric functions used, which
support error suppression.
Discussion: https://postgr.es/m/fcc6fc6a-b497-f39a-923d-aa34d0c588e8%402ndQuadrant.com
Author: Alexander Korotkov, Nikita Glukhov
Reviewed-by: Tomas Vondra
The SQL:2016 standard, among other things, contains a set of SQL/JSON features
for JSON processing inside a relational database. The core of SQL/JSON is the
JSON path language, which allows accessing parts of JSON documents and making
computations over them. This commit implements partial support for the JSON
path language as a separate datatype called "jsonpath". The implementation is
partial because it's lacking datetime support and suppression of numeric
errors. Missing features will be added later by separate commits.
Support of SQL/JSON features requires implementation of separate nodes, which
will be considered in subsequent patches. This commit includes the following
set of plain functions, allowing execution of jsonpath over jsonb values:
* jsonb_path_exists(jsonb, jsonpath[, jsonb, bool]),
* jsonb_path_match(jsonb, jsonpath[, jsonb, bool]),
* jsonb_path_query(jsonb, jsonpath[, jsonb, bool]),
* jsonb_path_query_array(jsonb, jsonpath[, jsonb, bool]),
* jsonb_path_query_first(jsonb, jsonpath[, jsonb, bool]).
This commit also implements "jsonb @? jsonpath" and "jsonb @@ jsonpath", which
are wrappers over jsonb_path_exists(jsonb, jsonpath) and jsonb_path_match(jsonb,
jsonpath) respectively. These operators will have index support
(implemented in subsequent patches).
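For illustration, a few minimal examples of the new functions and operators:
    SELECT jsonb_path_query('{"a": [1, 2, 3, 4]}', '$.a[*] ? (@ > 2)');
    -- returns one row per match: 3, then 4
    SELECT '{"a": [1, 2]}'::jsonb @? '$.a[*] ? (@ > 1)';   -- true
    SELECT '{"a": 5}'::jsonb @@ '$.a > 3';                 -- true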
Catversion bumped, to add new functions and operators.
Code was written by Nikita Glukhov and Teodor Sigaev, revised by me.
Documentation was written by Oleg Bartunov and Liudmila Mantrova. The work
was inspired by Oleg Bartunov.
Discussion: https://postgr.es/m/fcc6fc6a-b497-f39a-923d-aa34d0c588e8%402ndQuadrant.com
Author: Nikita Glukhov, Teodor Sigaev, Alexander Korotkov, Oleg Bartunov, Liudmila Mantrova
Reviewed-by: Tomas Vondra, Andrew Dunstan, Pavel Stehule, Alexander Korotkov
Previously, the SERIALIZABLE isolation level prevented parallel query
from being used. Allow the two features to be used together by
sharing the leader's SERIALIZABLEXACT with parallel workers.
An extra per-SERIALIZABLEXACT LWLock is introduced to make it safe to
share, and new logic is introduced to coordinate the early release
of the SERIALIZABLEXACT required for the SXACT_FLAG_RO_SAFE
optimization, as follows:
The first backend to observe the SXACT_FLAG_RO_SAFE flag (set by
some other transaction) will 'partially release' the SERIALIZABLEXACT,
meaning that the conflicts and locks it holds are released, but the
SERIALIZABLEXACT itself will remain active because other backends
might still have a pointer to it.
Whenever any backend notices the SXACT_FLAG_RO_SAFE flag, it clears
its own MySerializableXact variable and frees local resources so that
it can skip SSI checks for the rest of the transaction. In the
special case of the leader process, it transfers the SERIALIZABLEXACT
to a new variable SavedSerializableXact, so that it can be completely
released at the end of the transaction after all workers have exited.
Remove the serializable_okay flag added to CreateParallelContext() by
commit 9da0cc35, because it's now redundant.
Author: Thomas Munro
Reviewed-by: Haribabu Kommi, Robert Haas, Masahiko Sawada, Kevin Grittner
Discussion: https://postgr.es/m/CAEepm=0gXGYhtrVDWOTHS8SQQy_=S9xo+8oCxGLWZAOoeJ=yzQ@mail.gmail.com
If a heap on the old cluster has 4 pages or fewer, and the old cluster
was PG v11 or earlier, don't copy or link the FSM. This will shrink
space usage for installations with large numbers of small tables.
This will allow pg_upgrade to take advantage of commit b0eaa4c51b where
we have avoided creation of the free space map for small heap relations.
Author: John Naylor
Reviewed-by: Amit Kapila
Discussion: https://postgr.es/m/CACPNZCu4cOdm3uGnNEGXivy7Gz8UWyQjynDpdkPGabQ18_zK6g%40mail.gmail.com
Commit a6417078c missed updating some comments in transam.h about
reservation of high OIDs for development purposes. Also tamp down
an over-optimistic comment there about how easy it'd be to change
FirstNormalObjectId.
Earlier, commit 09568ec3d failed to update bki.sgml for the split
between genbki.pl-assigned OIDs and those assigned during initdb.
Also fix genbki.pl so that it will complain if it overruns
that split. It's possible that doing so would have no very bad
consequences, but that's no excuse for not detecting it.
If existing CHECK or NOT NULL constraints preclude the presence
of nulls, we need not look to see whether any are present.
Sergei Kornilov, reviewed by Stephen Frost, Ildar Musin, David Rowley,
and by me.
Discussion: http://postgr.es/m/81911511895540@web58j.yandex.ru
The current tool name is too restrictive and focuses only on verifying
checksums. As more options to control checksums for an offline cluster
are planned to be added, switch to a more generic name. Documentation
as well as all past references to the tool are updated.
Author: Michael Paquier
Reviewed-by: Michael Banck, Fabien Coelho, Sergei Kornilov
Discussion: https://postgr.es/m/20181221201616.GD4974@nighthawk.caipicrew.dd-dns.de
The SQL:2016 standard adds support for the hyperbolic functions
sinh(), cosh(), and tanh(). POSIX has long required libm to
provide those functions as well as their inverses asinh(),
acosh(), atanh(). Hence, let's just expose the libm functions
to the SQL level. As with the trig functions, we only implement
versions for float8, not numeric.
For the moment, we'll assume that all platforms actually do have
these functions; if experience teaches otherwise, some autoconf
effort may be needed.
SQL:2016 also adds support for base-10 logarithm, but with the
function name log10(), whereas the name we've long used is log().
Add aliases named log10() for the float8 and numeric versions.
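For illustration:
    SELECT sinh(1), cosh(1), tanh(1), asinh(1), acosh(1), atanh(0.5);
    SELECT log10(100.0);   -- 2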
Lætitia Avrot
Discussion: https://postgr.es/m/CAB_COdguG22LO=rnxDQ2DW1uzv8aQoUzyDQNJjrR4k00XSgm5w@mail.gmail.com
In the v11-era commits that taught genbki.pl to resolve symbolic
OID references in the initial catalog data, we didn't bother to
make every last reference symbolic; some of the catalogs have so
few initial rows that it didn't seem worthwhile.
However, the new project policy that OIDs assigned by new patches
should be automatically renumberable changes this calculus.
A patch that wants to add a row in one of these catalogs would have
a problem when the OID it assigns gets renumbered. Hence, do the
mop-up work needed to make all OID references in initial data be
symbolic, and establish an associated project policy that we'll
never again write a hard-wired OID reference there.
No catversion bump since the contents of postgres.bki aren't
actually changed by this commit.
Discussion: https://postgr.es/m/CAH2-WzmMTGMcPuph4OvsO7Ykut0AOCF_i-=eaochT0dd2BN9CQ@mail.gmail.com
This commit adds a Perl script renumber_oids.pl, which can reassign a
range of manually-assigned OIDs to someplace else by modifying OID
fields of the catalog *.dat files and OID-assigning macros in the
catalog *.h files.
Up to now, we've encouraged new patches that need manually-assigned
OIDs to use OIDs just above the range of existing OIDs. Predictably,
this leads to patches stepping on each others' toes, as whichever
one gets committed first creates an OID conflict that other patch
author(s) have to resolve manually. With the availability of
renumber_oids.pl, we can eliminate a lot of this hassle.
The new project policy, therefore, is:
* Encourage new patches to use high OIDs (the documentation suggests
choosing a block of OIDs at random in 8000..9999).
* After feature freeze in each development cycle, run renumber_oids.pl
to move all such OIDs down to lower numbers, thus freeing the high OID
range for the next development cycle.
This plan should greatly reduce the risk of OID collisions between
concurrently-developed patches. Also, if such a collision happens
anyway, we have the option to resolve it without much effort by doing
an off-schedule OID renumbering to get the first-committed patch out
of the way. Or a patch author could use renumber_oids.pl to change
their patch's assignments without much pain.
This approach does put a premium on not hard-wiring any OID values
in places where renumber_oids.pl and genbki.pl can't fix them.
Project practice in that respect seems to be pretty good already,
but a follow-on patch will sand down some rough edges.
John Naylor and Tom Lane, per an idea of Peter Geoghegan's
Discussion: https://postgr.es/m/CAH2-WzmMTGMcPuph4OvsO7Ykut0AOCF_i-=eaochT0dd2BN9CQ@mail.gmail.com
Historically guc.c has just refused examples like set work_mem = '30.1GB',
but it seems more useful for it to take that and round off the value to
some reasonable approximation of what the user said. Just rounding to
the parameter's native unit would work, but it would lead to rather
silly-looking settings, such as 31562138kB for this example. Instead
let's round to the nearest multiple of the next smaller unit (if any),
producing 30822MB.
Also, do the units conversion math in floating point and round to integer
(if needed) only at the end. This produces saner results for inputs that
aren't exact multiples of the parameter's native unit, and removes another
difference in the behavior for integer vs. float parameters.
In passing, document the ability to use hex or octal input where it
ought to be documented.
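For illustration, using the example above:
    SET work_mem = '30.1GB';
    SHOW work_mem;   -- 30822MB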
Discussion: https://postgr.es/m/1798.1552165479@sss.pgh.pa.us
COALESCE, GREATEST and LEAST all look like functions taking variable
numbers of arguments, but in fact they are not functions, and so
VARIADIC array arguments don't work with them. Add a note to the docs
explaining this fact.
The consensus is not to try to make this work, but just to document the
limitation.
Discussion: https://postgr.es/m/CAFj8pRCaAtuXuRtvXf5GmPbAVriUQrNMo7-=TXUFN025S31R_w@mail.gmail.com
pgbench's arbitrary limit of 10 arguments for SQL statements or
metacommands is far too low. Increase it to 256.
This results in a very modest increase in memory usage, not enough to
worry about.
The maximum includes the SQL statement or metacommand. This is reflected
in the comments and revised TAP tests.
Simon Riggs and Dagfinn Ilmari Mannsåker with some light editing by me.
Reviewed by: David Rowley and Fabien Coelho
Discussion: https://postgr.es/m/CANP8+jJiMJOAf-dLoHuR-8GENiK+eHTY=Omw38Qx7j2g0NDTXA@mail.gmail.com
1. The PARTITION OF clause of CREATE FOREIGN TABLE was not explained in
the CREATE FOREIGN TABLE reference page. Add it.
(Postgres 10 onwards)
2. The limitation that tuple routing cannot target partitions that are
foreign tables was not documented clearly enough. Improve wording.
(Postgres 10 onwards)
3. The UPDATE tuple re-routing concurrency behavior was explained in
the DDL chapter, which doesn't seem the right place. Move it to the
UPDATE reference page instead. (Postgres 11 onwards).
Authors: Amit Langote, David Rowley.
Reviewed-by: Etsuro Fujita.
Reported-by: Derek Hans
Discussion: https://postgr.es/m/CAGrP7a3Xc1Qy_B2WJcgAD8uQTS_NDcJn06O5mtS_Ne1nYhBsyw@mail.gmail.com
This reverts commit bd09503e63.
Per discussion, it seems like what we should do instead is to
reduce the default value of autovacuum_vacuum_cost_delay by the
same factor. That's functionally equivalent as long as the
platform can accurately service the smaller delay request, which
should be true on anything released in the last 10 years or more.
And smaller, more-closely-spaced delays are better in terms of
providing a steady I/O load.
Discussion: https://postgr.es/m/28720.1552101086@sss.pgh.pa.us
This change makes it possible to specify sub-millisecond delays,
which work well on most modern platforms, though that was not true
when the cost-delay feature was designed.
To support this without breaking existing configuration entries,
improve guc.c to allow floating-point GUCs to have units. Also,
allow "us" (microseconds) as an input/output unit for time-unit GUCs.
(It's not allowed as a base unit, at least not yet.)
Likewise change the autovacuum_vacuum_cost_delay reloption to be
floating-point; this forces a catversion bump because the layout of
StdRdOptions changes.
This patch doesn't in itself change the default values or allowed
ranges for these parameters, and it should not affect the behavior
for any already-allowed setting for them.
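For illustration (the table name is hypothetical; the displayed form of the
value may vary):
    SET vacuum_cost_delay = '500us';                                -- i.e. 0.5ms
    ALTER TABLE my_table SET (autovacuum_vacuum_cost_delay = 0.5);  -- in ms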
Discussion: https://postgr.es/m/1798.1552165479@sss.pgh.pa.us
Similarly to B-tree, GiST index access method gets support of INCLUDE
attributes. These attributes aren't used for tree navigation and aren't
present in non-leaf pages. But they are present in leaf pages and can be
fetched during index-only scan.
The point of having INCLUDE attributes in GiST indexes is slightly different
from the point of having them in B-tree. The main point of INCLUDE attributes
in B-tree is to define a UNIQUE constraint over only part of the attributes
while still enabling index-only scans. In GiST the main point of INCLUDE
attributes is to allow index-only scans on attributes whose data types don't
have GiST opclasses.
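For illustration, a minimal sketch (table and column names are hypothetical):
    CREATE TABLE pts (id int8, p point);
    CREATE INDEX pts_p_gist ON pts USING gist (p) INCLUDE (id);
    -- id can now be returned by an index-only scan even though int8 has
    -- no GiST opclass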
Discussion: https://postgr.es/m/73A1A452-AD5F-40D4-BD61-978622FF75C1%40yandex-team.ru
Author: Andrey Borodin, with small changes by me
Reviewed-by: Andreas Karlsson
This allows a login to require both that the cn of the certificate
matches (like authentication type cert) *and* that another
authentication method (such as password or kerberos) succeeds as well.
The old value of clientcert=1 maps to the new clientcert=verify-ca,
clientcert=0 maps to the new clientcert=no-verify, and the new option
verify-full will add the validation of the CN.
Author: Julian Markwort, Marius Timmer
Reviewed by: Magnus Hagander, Thomas Munro
This adds a column that counts how many checksum failures have occurred
on files belonging to a specific database. Both checksum failures
during normal backend processing and those created when a base backup
detects a checksum failure are counted.
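For illustration, the new counter can be read like this:
    SELECT datname, checksum_failures FROM pg_stat_database;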
Author: Magnus Hagander
Reviewed by: Julien Rouhaud
tuple_data_split() lacked the type of the first argument, and
heap_page_item_attrs() had the first and second arguments reversed,
with the bytea argument using an incorrect name.
Author: Laurenz Albe
Discussion: https://postgr.es/m/8f9ab7b16daf623e87eeef5203a4ffc0dece8dfd.camel@cybertec.at
Backpatch-through: 9.6
Correctly process nodes of more types than previously. In some cases,
nodes were being ignored (nothing was output); in other cases, trying to
return them resulted in errors about unrecognized nodes. In yet other
cases, necessary escaping (of XML special characters) was not being
done. Fix all those (as far as the authors could find) and add
regression test cases verifying the new behavior.
I (Álvaro) was of two minds about backpatching these changes. They do
seem bugfixes that would benefit most users of the affected functions;
but on the other hand it would change established behavior in minor
releases, so it seems prudent not to.
Authors: Pavel Stehule, Markus Winand, Chapman Flack
Discussion: https://postgr.es/m/CAFj8pRA6J25CtAZ2TuRvxK3gat7-bBUYh0rfE2yM7Hj9GD14Dg@mail.gmail.com
Discussion: https://postgr.es/m/8BDB0627-2105-4564-AA76-7849F028B96E@winand.at
The elephant in the room, as pointed out by Chapman Flack and not fixed in
this commit, is that we still have XMLTABLE operating on XPath 1.0
instead of the standard-mandated XQuery (or even its subset XPath 2.0).
Fixing that is a major undertaking, however.
We still require AccessExclusiveLock on the partition itself, because
otherwise an insert that violates the newly-imposed partition
constraint could be in progress at the same time that we're changing
that constraint; only the lock level on the parent relation is
weakened.
To make this safe, we have to cope with (at least) three separate
problems. First, relevant DDL might commit while we're in the process
of building a PartitionDesc. If so, find_inheritance_children() might
see a new partition while the RELOID system cache still has the old
partition bound cached, and even before invalidation messages have
been queued. To fix that, if we see that the pg_class tuple seems to
be missing or to have a null relpartbound, refetch the value directly
from the table. We can't get the wrong value, because DETACH PARTITION
still requires AccessExclusiveLock throughout; if we ever want to
change that, this will need more thought. In testing, I found it quite
difficult to hit even the null-relpartbound case; the race condition
is extremely tight, but the theoretical risk is there.
Second, successive calls to RelationGetPartitionDesc might not return
the same answer. The query planner will get confused if looking up the
PartitionDesc for a particular relation does not return a consistent
answer for the entire duration of query planning. Likewise, query
execution will get confused if the same relation seems to have a
different PartitionDesc at different times. Invent a new
PartitionDirectory concept and use it to ensure consistency. This
ensures that a single invocation of either the planner or the executor
sees the same view of the PartitionDesc from beginning to end, but it
does not guarantee that the planner and the executor see the same
view. Since this allows pointers to old PartitionDesc entries to
survive even after a relcache rebuild, also postpone removing the old
PartitionDesc entry until we're certain no one is using it.
For the most part, it seems to be OK for the planner and executor to
have different views of the PartitionDesc, because the executor will
just ignore any concurrently added partitions which were unknown at
plan time; those partitions won't be part of the inheritance
expansion, but invalidation messages will trigger replanning at some
point. Normally, this happens by the time the very next command is
executed, but if the next command acquires no locks and executes a
prepared query, it can manage not to notice until a new transaction is
started. We might want to tighten that up, but it's material for a
separate patch. There would still be a small window where a query
that started just after an ATTACH PARTITION command committed might
fail to notice its results -- but only if the command starts before
the commit has been acknowledged to the user. All in all, the warts
here around serializability seem small enough to be worth accepting
for the considerable advantage of being able to add partitions without
a full table lock.
Although in general the consequences of new partitions showing up
between planning and execution are limited to the query not noticing
the new partitions, run-time partition pruning will get confused in
that case, so that's the third problem that this patch fixes.
Run-time partition pruning assumes that indexes into the PartitionDesc
are stable between planning and execution. So, add code so that if
new partitions are added between plan time and execution time, the
indexes stored in the subplan_map[] and subpart_map[] arrays within
the plan's PartitionedRelPruneInfo get adjusted accordingly. There
does not seem to be a simple way to generalize this scheme to cope
with partitions that are removed, mostly because they could then get
added back again with different bounds, but it works OK for added
partitions.
This code does not try to ensure that every backend participating in
a parallel query sees the same view of the PartitionDesc. That
currently doesn't matter, because we never pass PartitionDesc
indexes between backends. Each backend will ignore the concurrently
added partitions which it notices, and it doesn't matter if different
backends are ignoring different sets of concurrently added partitions.
If in the future that matters, for example because we allow writes in
parallel query and want all participants to do tuple routing to the same
set of partitions, the PartitionDirectory concept could be improved to
share PartitionDescs across backends. There is a draft patch to
serialize and restore PartitionDescs on the thread where this patch
was discussed, which may be a useful place to start.
Patch by me. Thanks to Alvaro Herrera, David Rowley, Simon Riggs,
Amit Langote, and Michael Paquier for discussion, and to Alvaro
Herrera for some review.
Discussion: http://postgr.es/m/CA+Tgmobt2upbSocvvDej3yzokd7AkiT+PvgFH+a9-5VV1oJNSQ@mail.gmail.com
Discussion: http://postgr.es/m/CA+TgmoZE0r9-cyA-aY6f8WFEROaDLLL7Vf81kZ8MtFCkxpeQSw@mail.gmail.com
Discussion: http://postgr.es/m/CA+TgmoY13KQZF-=HNTrt9UYWYx3_oYOQpu9ioNT49jGgiDpUEA@mail.gmail.com
This clause is used to indicate the passing mode of an XML document, but
we were doing it wrong: we accepted BY REF and ignored it, and rejected
BY VALUE as a syntax error. The reality, however, is that documents are
always passed BY VALUE, so rejecting that clause was silly. Change
things so that we accept BY VALUE.
BY REF continues to be accepted, and continues to be ignored.
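For illustration, a minimal sketch of the now-accepted clause:
    SELECT * FROM XMLTABLE('/items/item'
                           PASSING BY VALUE '<items><item>42</item></items>'::xml
                           COLUMNS val int PATH '.');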
Author: Chapman Flack
Reviewed-by: Pavel Stehule
Discussion: https://postgr.es/m/5C297BB7.9070509@anastigmatix.net
This introduces the concept of table access methods, i.e. CREATE
ACCESS METHOD ... TYPE TABLE and
CREATE TABLE ... USING (storage-engine).
No table access functionality is delegated to table AMs as of this
commit, that'll be done in following commits.
Subsequent commits will incrementally abstract table access
functionality to be routed through table access methods. That change
is too large to be reviewed & committed at once, so it'll be done
incrementally.
Docs will be updated at the end, as adding them incrementally would
likely make them less coherent, and definitely is a lot more work,
without a lot of benefit.
Table access methods are specified similar to index access methods,
i.e. pg_am.amhandler returns, as INTERNAL, a pointer to a struct with
callbacks. In contrast to index AMs, that struct needs to live as long
as the backend; typically that's achieved by just returning a pointer to
a constant struct.
Psql's \d+ now displays a table's access method. That can be disabled
with HIDE_TABLEAM=true, which is mainly useful so regression tests can
be run against different AMs. It's quite possible that this behaviour
still needs to be fine-tuned.
For now it's not allowed to set a table AM for a partitioned table, as
we've not resolved how partitions would inherit that. Disallowing
allows us to introduce, if we decide that's the way forward, such a
behaviour without a compatibility break.
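For illustration (the custom AM and its handler are hypothetical; heap is the
built-in table AM):
    CREATE TABLE t (a int) USING heap;
    -- a custom AM would first be registered with its C handler function:
    -- CREATE ACCESS METHOD my_tableam TYPE TABLE HANDLER my_tableam_handler;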
Catversion bumped, to add the heap table AM and references to it.
Author: Haribabu Kommi, Andres Freund, Alvaro Herrera, Dmitry Dolgov and others
Discussion: https://postgr.es/m/20180703070645.wchpu5muyto5n647@alap3.anarazel.de
Discussion: https://postgr.es/m/20160812231527.GA690404@alvherre.pgsql
Discussion: https://postgr.es/m/20190107235616.6lur25ph22u5u5av@alap3.anarazel.de
Discussion: https://postgr.es/m/20190304234700.w5tmhducs5wxgzls@alap3.anarazel.de
The original 200 default value was set back in f425b605f4 when the cost
delay settings were first added. Hardware has improved quite a bit since
then and we've also made improvements such as sorting buffers during
checkpoints (9cd00c457e) which should result in less random writes.
This low default value was reportedly causing problems with badly
configured servers and in the absence of a native method to remove
excessive bloat from tables without incurring an AccessExclusiveLock, this
often made cleaning up the damage caused by badly configured auto-vacuums
difficult.
It seems more likely that someone will notice that auto-vacuum is running
too quickly than too slowly, so let's go all out and multiply the default
value for the setting by 10. With the default vacuum_cost_page_dirty and
autovacuum_vacuum_cost_delay (assuming a page size of 8192 bytes), this
allows autovacuum a theoretical maximum dirty write rate of around 39MB/s
instead of just 3.9MB/s.
Author: David Rowley
Discussion: https://postgr.es/m/CAKJS1f_YbXC2qTMPyCbmsPiKvZYwpuQNQMohiRXLj1r=8_rYvw@mail.gmail.com
For the upcoming pluggable table access methods it's quite
inconvenient to store tuples as HeapTuples, as that'd require
converting tuples from their native format into HeapTuples. Instead
use slots to manage EPQ tuples.
To fit into that scheme, change the foreign data wrapper callback
RefetchForeignRow, to store the tuple in a slot. Insist on using the
caller provided slot, so it conveniently can be stored in the
corresponding EPQ slot. As there is no in core user of
RefetchForeignRow, that change was done blindly, but we plan to test
that soon.
To avoid duplicating that work for row locks, move row locks to just
directly use the EPQ slots - it previously temporarily stored tuples
in LockRowsState.lr_curtuples, but that doesn't seem beneficial, given
we'd possibly end up with a significant number of additional slots.
The behaviour of es_epqTupleSet[rti -1] is now checked by
es_epqTupleSlot[rti -1] != NULL, as that is distinguishable from a
slot containing an empty tuple.
Author: Andres Freund, Haribabu Kommi, Ashutosh Bapat
Discussion: https://postgr.es/m/20180703070645.wchpu5muyto5n647@alap3.anarazel.de
This option functions similarly to pg_dump's --exclude-table option, but
for database names. The option can be given once, and the argument can
be a pattern including wildcard characters.
Author: Andrew Dunstan.
Reviewed-by: Fabien Coelho and Michael Paquier
Discussion: https://postgr.es/m/43a54a47-4aa7-c70e-9ca6-648f436dd6e6@2ndQuadrant.com
Reflecting an updated parameter value requires a server restart, which
was not mentioned in the documentation and in postgresql.conf.sample.
Reported-by: Thomas Poty
Discussion: https://postgr.es/m/15659-0cd812f13027a2d8@postgresql.org
By default, the fallback_application_name for a physical walreceiver
is "walreceiver". This means that multiple standbys cannot be
distinguished easily on a primary, for example in pg_stat_activity or
synchronous_standby_names.
If cluster_name is set, use that for fallback_application_name in the
walreceiver. (If it's not set, it remains "walreceiver".) If someone
set cluster_name to identify their instance, we might as well use that
by default to identify the node remotely as well. It's still possible
to specify another application_name in primary_conninfo explicitly.
Reviewed-by: Euler Taveira <euler@timbira.com.br>
Discussion: https://www.postgresql.org/message-id/flat/1257eaee-4874-e791-e83a-46720c72cac7@2ndquadrant.com
Remove some unnecessary, legacy-looking use of the PROCEDURAL keyword
before LANGUAGE. We mostly don't use this anymore, so some of these
look a bit old.
There is still some use in pg_dump, which is harder to remove because
it's baked into the archive format, so I'm not touching that.
Discussion: https://www.postgresql.org/message-id/2330919b-62d9-29ac-8de3-58c024fdcb96@2ndquadrant.com
In previous releases, the input file of dbtoepub was postgres.xml, and
dbtoepub knows to derive the output file name postgres.epub from that
automatically. But now the input file is postgres.sgml (since
postgres.sgml is itself an XML file and we no longer need the
intermediate postgres.xml file), but dbtoepub doesn't know how to deal
with the .sgml suffix, so the automatically derived output file name
becomes postgres.sgml.epub. Fix by adding an explicit -o option.
In commit b0eaa4c51b, we have avoided the creation of FSM for small tables.
So the functions that use FSM to compute the free space can return zero for
such tables. This was previously also possible for cases where vacuum
has not been triggered to update the FSM.
This commit updates the comments in code and documentation to reflect this
behavior.
Author: John Naylor
Reviewed-by: Amit Kapila
Discussion: https://postgr.es/m/CACPNZCtba-3m1q3A8gxA_vxg=T7gQf7gMbpR4Ciy5LntY-j+0Q@mail.gmail.com
Changes made by commit 02ddd49 mean that dumps made against pre version
12 instances are no longer comparable with those made against version 12
or later instances. This makes cross-version upgrade testing fail in the
buildfarm. Experimentation has shown that the error is cured if the
dumps are made when extra_float_digits is set to 0. Hence this patch
allows for it to be explicitly set rather than relying on pg_dump's
builtin default (3 in almost all cases). This feature might have other
uses, but should not normally be used.
Discussion: https://postgr.es/m/c76f7051-8fd3-ec10-7579-1f8842305b85@2ndQuadrant.com
So far ECPG programs had to treat binary data for a bytea column as 'char'
type. But this meant converting from/to the escaped format with
PQunescapeBytea()/PQescapeBytea(), forcing users to add unnecessary code and
pay the conversion cost at runtime. By adding a dedicated datatype for bytea,
most of this special handling is no longer needed.
Author: Matsumura-san ("Matsumura, Ryo" <matsumura.ryo@jp.fujitsu.com>)
Discussion: https://postgr.es/m/flat/03040DFF97E6E54E88D3BFEE5F5480F737A141F9@G01JPEXMBYT04
The dblink documentation claims that an empty string is returned if there
has been no error, however OK is actually returned in that case. Also,
clarify that an async error may not be seen unless dblink_is_busy() or
dblink_get_result() have been called first.
Backpatch to all supported branches.
Reported-by: realyota
Backpatch-through: 9.4
Discussion: https://postgr.es/m/153371978486.1298.2091761143788088262@wrigleys.postgresql.org
Since this was also the variant used as an example in the docs, update
the docs to use float4-misrounded-input as an example instead, since
that is now the only remaining variant file.
Historically we've always materialized the full output of a CTE query,
treating WITH as an optimization fence (so that, for example, restrictions
from the outer query cannot be pushed into it). This is appropriate when
the CTE query is INSERT/UPDATE/DELETE, or is recursive; but when the CTE
query is non-recursive and side-effect-free, there's no hazard of changing
the query results by pushing restrictions down.
Another argument for materialization is that it can avoid duplicate
computation of an expensive WITH query --- but that only applies if
the WITH query is called more than once in the outer query. Even then
it could still be a net loss, if each call has restrictions that
would allow just a small part of the WITH query to be computed.
Hence, let's change the behavior for WITH queries that are non-recursive
and side-effect-free. By default, we will inline them into the outer
query (removing the optimization fence) if they are called just once.
If they are called more than once, we will keep the old behavior by
default, but the user can override this and force inlining by specifying
NOT MATERIALIZED. Lastly, the user can force the old behavior by
specifying MATERIALIZED; this would mainly be useful when the query had
deliberately been employing WITH as an optimization fence to prevent a
poor choice of plan.
Andreas Karlsson, Andrew Gierth, David Fetter
Discussion: https://postgr.es/m/87sh48ffhb.fsf@news-spur.riddles.org.uk
DECLARE STATEMENT is a statement that lets users declare an identifier
pointing at a connection. This identifier will be used in other embedded
dynamic SQL statements such as PREPARE, EXECUTE, DECLARE CURSOR and so on.
When connecting to a non-default connection, the AT clause can be used in
a DECLARE STATEMENT once and is no longer needed in every dynamic SQL
statement. This makes ECPG applications simpler and more efficient. Moreover,
writing code without designating a connection explicitly improves portability.
Authors: Ideriha-san ("Ideriha, Takeshi" <ideriha.takeshi@jp.fujitsu.com>)
Kuroda-san ("Kuroda, Hayato" <kuroda.hayato@jp.fujitsu.com>)
Discussion: https://postgr.es/m/4E72940DA2BF16479384A86D54D0988A565669DF@G01JPEXMBKW04
The guideline to "not use text with <ulink> so the URL appears in
printed output" is obsolete, since the URL appears as a footnote in
printed output in that case.
Reported-by: Chapman Flack <chap@anastigmatix.net>
Discussion: https://www.postgresql.org/message-id/5C4B1F2F.2000106@anastigmatix.net
Previously, floating-point output was done by rounding to a specific
decimal precision; by default, to 6 or 15 decimal digits (losing
information) or as requested using extra_float_digits. Drivers that
wanted exact float values, and applications like pg_dump that must
preserve values exactly, set extra_float_digits=3 (or sometimes 2 for
historical reasons, though this isn't enough for float4).
Unfortunately, decimal rounded output is slow enough to become a
noticeable bottleneck when dealing with large result sets or COPY of
large tables when many floating-point values are involved.
Floating-point output can be done much faster when the output is not
rounded to a specific decimal length, but rather is chosen as the
shortest decimal representation that is closer to the original float
value than to any other value representable in the same precision. The
recently published Ryu algorithm by Ulf Adams is both relatively
simple and remarkably fast.
Accordingly, change float4out/float8out to output shortest decimal
representations if extra_float_digits is greater than 0, and make that
the new default. Applications that need rounded output can set
extra_float_digits back to 0 or below, and take the resulting
performance hit.
We make one concession to portability for systems with buggy
floating-point input: we do not output decimal values that fall
exactly halfway between adjacent representable binary values (which
would rely on the reader doing round-to-nearest-even correctly). This
is known to be a problem at least for VS2013 on Windows.
Our version of the Ryu code originates from
https://github.com/ulfjack/ryu/ at commit c9c3fb1979, but with the
following (significant) modifications:
- Output format is changed to use fixed-point notation for small
exponents, as printf would, and also to use lowercase 'e', a
minimum of 2 exponent digits, and a mandatory sign on the exponent,
to keep the formatting as close as possible to previous output.
- The output of exact midpoint values is disabled as noted above.
- The integer fast-path code is changed somewhat (since we have
fixed-point output and the upstream did not).
- Our project style has been largely applied to the code with the
exception of C99 declaration-after-statement, which has been
retained as an exception to our present policy.
- Most of upstream's debugging and conditionals are removed, and we
use our own configure tests to determine things like uint128
availability.
Changing the float output format obviously affects a number of
regression tests. This patch uses an explicit setting of
extra_float_digits=0 for test output that is not expected to be
exactly reproducible (e.g. due to numerical instability or differing
algorithms for transcendental functions).
Conversions from floats to numeric are unchanged by this patch. These
may appear in index expressions and it is not yet clear whether any
change should be made, so that can be left for another day.
This patch assumes that the only supported floating point format is
now IEEE format, and the documentation is updated to reflect that.
Code by me, adapting the work of Ulf Adams and other contributors.
References:
https://dl.acm.org/citation.cfm?id=3192369
Reviewed-by: Tom Lane, Andres Freund, Donald Dong
Discussion: https://postgr.es/m/87r2el1bx6.fsf@news-spur.riddles.org.uk
The current wording can confuse the reader into thinking that constraint
exclusion is available at query execution, but that only applies to
partition pruning.
Reported-by: Shouyu Luo
Author: David Rowley
Reviewed-by: Chapman Flack, Amit Langote
Discussion: https://postgr.es/m/15629-2ef8b22e61f8333f@postgresql.org
For a long time, indxpath.c has had the ability to extract derived (lossy)
index conditions from certain operators such as LIKE. For just as long,
it's been obvious that we really ought to make that capability available
to extensions. This commit finally accomplishes that, by adding another
API for planner support functions that lets them create derived index
conditions for their functions. As proof of concept, the hardwired
"special index operator" code formerly present in indxpath.c is pushed
out to planner support functions attached to LIKE and other relevant
operators.
A weak spot in this design is that an extension needs to know OIDs for
the operators, datatypes, and opfamilies involved in the transformation
it wants to make. The core-code prototypes use hard-wired OID references
but extensions don't have that option for their own operators etc. It's
usually possible to look up the required info, but that may be slow and
inconvenient. However, improving that situation is a separate task.
I want to do some additional refactorization around selfuncs.c, but
that also seems like a separate task.
Discussion: https://postgr.es/m/15193.1548028093@sss.pgh.pa.us
Since its introduction, max_wal_senders is counted as part of
max_connections when it comes to define how many connection slots can be
used for replication connections with a WAL sender context. This can
lead to confusion for some users, as it could be possible to block a
base backup or replication from happening because other backend sessions
are already taken for other purposes by an application, and
superuser-only connection slots are not a correct solution to handle
that case.
This commit makes max_wal_senders independent of max_connections for its
handling of PGPROC entries in ProcGlobal, meaning that connection slots
for WAL senders are handled using their own free queue, like autovacuum
workers and bgworkers.
One compatibility issue that this change creates is that a standby now
requires a value of max_wal_senders at least equal to that of its
primary. So, if a standby's configuration enforces a lower value of
max_wal_senders than its primary's, this could break failovers.
Normally this should not be an issue though, as any settings of a
standby are inherited from its primary as postgresql.conf gets normally
copied as part of a base backup, so parameters would be consistent.
Author: Alexander Kukushkin
Reviewed-by: Kyotaro Horiguchi, Petr Jelínek, Masahiko Sawada, Oleksii
Kliukin
Discussion: https://postgr.es/m/CAFh8B=nBzHQeYAu0b8fjK-AF1X4+_p6GRtwG+cCgs6Vci2uRuQ@mail.gmail.com
The original setup for dependencies of partitioned objects had
serious problems:
1. It did not verify that a drop cascading to a partition-child object
also cascaded to at least one of the object's partition parents. Now,
normally a child object would share all its dependencies with one or
another parent (e.g. a child index's opclass dependencies would be shared
with the parent index), so that this oversight is usually harmless.
But if some dependency failed to fit this pattern, the child could be
dropped while all its parents remain, creating a logically broken
situation. (It's easy to construct artificial cases that break it,
such as attaching an unrelated extension dependency to the child object
and then dropping the extension. I'm not sure if any less-artificial
cases exist.)
2. Management of partition dependencies during ATTACH/DETACH PARTITION
was complicated and buggy; for example, after detaching a partition
table it was possible to create cases where a formerly-child index
should be dropped and was not, because the correct set of dependencies
had not been reconstructed.
Less seriously, because multiple partition relationships were
represented identically in pg_depend, there was an order-of-traversal
dependency on which partition parent was cited in error messages.
We also had some pre-existing order-of-traversal hazards for error
messages related to internal and extension dependencies. This is
cosmetic to users but causes testing problems.
To fix #1, add a check at the end of the partition tree traversal
to ensure that at least one partition parent got deleted. To fix #2,
establish a new policy that partition dependencies are in addition to,
not instead of, a child object's usual dependencies; in this way
ATTACH/DETACH PARTITION need not cope with adding or removing the
usual dependencies.
To fix the cosmetic problem, distinguish between primary and secondary
partition dependency entries in pg_depend, by giving them different
deptypes. (They behave identically except for having different
priorities for being cited in error messages.) This means that the
former 'I' dependency type is replaced with new 'P' and 'S' types.
This also fixes a longstanding bug that after handling an internal
dependency by recursing to the owning object, findDependentObjects
did not verify that the current target was now scheduled for deletion,
and did not apply the current recursion level's objflags to it.
Perhaps that should be back-patched; but in the back branches it
would only matter if some concurrent transaction had removed the
internal-linkage pg_depend entry before the recursive call found it,
or the recursive call somehow failed to find it, both of which seem
unlikely.
Catversion bump because the contents of pg_depend change for
partitioning relationships.
Patch HEAD only. It's annoying that we're not fixing #2 in v11,
but there seems no practical way to do so given that the problem
is exactly a poor choice of what entries to put in pg_depend.
We can't really fix that while staying compatible with what's
in pg_depend in existing v11 installations.
Discussion: https://postgr.es/m/CAH2-Wzkypv1R+teZrr71U23J578NnTBt2X8+Y=Odr4pOdW1rXg@mail.gmail.com
Add support function requests for estimating the selectivity, cost,
and number of result rows (if a SRF) of the target function.
The lack of a way to estimate selectivity of a boolean-returning
function in WHERE has been a recognized deficiency of the planner
since Berkeley days. This commit finally fixes it.
In addition, non-constant estimates of cost and number of output
rows are now possible. We still fall back to looking at procost
and prorows if the support function doesn't service the request,
of course.
To make concrete use of the possibility of estimating output rowcount
for SRFs, this commit adds support functions for array_unnest(anyarray)
and the integer variants of generate_series; the lack of plausible
rowcount estimates for those, even when it's obvious to a human,
has been a repeated subject of complaints. Obviously, much more
could now be done in this line, but I'm mostly just trying to get
the infrastructure in place.
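As a rough illustration (hedged; exact plan output varies by version and
settings), the planner can now derive an SRF's rowcount from constant
arguments:
    EXPLAIN SELECT * FROM generate_series(1, 20);
    -- the row estimate is now 20, rather than the generic prorows default of 1000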
Discussion: https://postgr.es/m/15193.1548028093@sss.pgh.pa.us
Rename/repurpose pg_proc.protransform as "prosupport". The idea is
still that it names an internal function that provides knowledge to
the planner about the behavior of the function it's attached to;
but redesign the API specification so that it's not limited to doing
just one thing, but can support an extensible set of requests.
The original purpose of simplifying a function call is handled by
the first request type to be invented, SupportRequestSimplify.
Adjust all the existing transform functions to handle this API,
and rename them from "xxx_transform" to "xxx_support" to reflect
the potential generalization of what they do. (Since we never
previously provided any way for extensions to add transform functions,
this change doesn't create an API break for them.)
Also add DDL and pg_dump support for attaching a support function to a
user-defined function. Unfortunately, DDL access has to be restricted
to superusers, at least for now; but seeing that support functions
will pretty much have to be written in C, that limitation is just
theoretical. (This support is untested in this patch, but a follow-on
patch will add cases that exercise it.)
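For reference, a minimal hedged sketch of the new DDL; the function names
are hypothetical, and the support function must be a C function taking and
returning internal:
    CREATE FUNCTION my_func_support(internal) RETURNS internal
        AS 'MODULE_PATHNAME', 'my_func_support' LANGUAGE C STRICT;
    CREATE FUNCTION my_func(int) RETURNS int
        AS 'MODULE_PATHNAME', 'my_func' LANGUAGE C STRICT
        SUPPORT my_func_support;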
Discussion: https://postgr.es/m/15193.1548028093@sss.pgh.pa.us
The previous ordering of these steps satisfied the nominal requirement
that set_rel_pathlist_hook could editorialize on the whole set of Paths
constructed for a base relation. In practice, though, trying to change
the set of partial paths was impossible. Adding one didn't work because
(a) it was too late to be included in Gather paths made by the core code,
and (b) calling add_partial_path after generate_gather_paths is unsafe,
because it might try to delete a path it thinks is dominated, but that
is already embedded in some Gather path(s). Nor could the hook safely
remove partial paths, for the same reason that they might already be
embedded in Gathers.
Better to call extensions first, let them add partial paths as desired,
and then gather. In v11 and up, we already doubled down on that ordering
by postponing gathering even further for single-relation queries; so even
if the hook wished to editorialize on Gather path construction, it could
not.
Report and patch by KaiGai Kohei. Back-patch to 9.6 where Gather paths
were added.
Discussion: https://postgr.es/m/CAOP8fzahwpKJRTVVTqo2AE=mDTz_efVzV6Get_0=U3SO+-ha1A@mail.gmail.com
This is useful when looking at partition trees with multiple layers, and
combined with pg_partition_tree, it makes it possible to display an
entire tree just by knowing one member at any level.
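Assuming the function added here is pg_partition_root(), a hedged usage
sketch with a hypothetical leaf partition:
    SELECT relid, parentrelid, isleaf, level
      FROM pg_partition_tree(pg_partition_root('sales_2019_01'));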
Author: Michael Paquier
Reviewed-by: Álvaro Herrera, Amit Langote
Discussion: https://postgr.es/m/20181207014015.GP2407@paquier.xyz
Up to now postgres_fdw has been using create_foreignscan_path() to
generate not only base-relation paths, but also paths for foreign joins
and foreign upperrels. This is wrong, because create_foreignscan_path()
calls get_baserel_parampathinfo() which will only do the right thing for
baserels. It accidentally fails to fail for unparameterized paths, which
are the only ones postgres_fdw (thought it) was handling, but we really
need different APIs for the baserel and join cases.
In HEAD, the best thing to do seems to be to split up the baserel,
joinrel, and upperrel cases into three functions so that they can
have different APIs. I haven't actually given create_foreign_join_path
a different API in this commit: we should spend a bit of time thinking
about just what we want to do there, since perhaps FDWs would want to
do something different from the build-up-a-join-pairwise approach that
get_joinrel_parampathinfo expects. In the meantime, since postgres_fdw
isn't prepared to generate parameterized joins anyway, just give it a
defense against trying to plan joins with lateral refs.
In addition (and this is what triggered this whole mess) fix bug #15613
from Srinivasan S A, by teaching file_fdw and postgres_fdw that plain
baserel foreign paths still have outer refs if the relation has
lateral_relids. Add some assertions in relnode.c to catch future
occurrences of the same error --- in particular, to catch other FDWs
doing that, but also as backstop against core-code mistakes like the
one fixed by commit bdd9a99aa.
Bug #15613 also needs to be fixed in the back branches, but the
appropriate fix will look quite a bit different there, since we don't
want to assume that existing FDWs get the word right away.
Discussion: https://postgr.es/m/15613-092be1be9576c728@postgresql.org
In commit f16241bef7, we have changed the behavior for concurrent updates
that move row to a different partition, but forgot to update the docs.
Previously, when an UPDATE command caused a row to move from one partition
to another, there was a chance that another concurrent UPDATE or DELETE
would miss this row. However, we now raise a serialization failure error
in such a case.
Reported-by: David Rowley
Author: David Rowley and Amit Kapila
Backpatch-through: 11 where it was introduced
Discussion: https://postgr.es/m/CAKJS1f-iVhGD4-givQWpSROaYvO3c730W8yoRMTF9Gc3craY3w@mail.gmail.com
Historically we've had each release branch include all prior branches'
notes, including minor-release changes, back to the beginning of the
project. That's basically an O(N^2) proposition, and it was starting to
catch up with us: as of HEAD the back-branch release notes alone accounted
for nearly 30% of the documentation. While there's certainly some value
in easy access to back-branch notes, this is getting out of hand.
Hence, switch over to the rule that each branch contains only its own
release notes. So as to not make older notes too hard to find, each
branch will provide URLs for the immediately preceding branches'
release notes on the project website.
There might be value in providing aggregated notes across all branches
somewhere on the website, but that's a task for another day.
Discussion: https://postgr.es/m/cbd4aeb5-2d9c-8b84-e968-9e09393d4c83@postgresql.org
Previously, all heaps had FSMs. For very small tables, this meant that the
FSM took up more space than the heap did. This is wasteful, so now we
refrain from creating the FSM for heaps with 4 pages or fewer. If the last
known target block has insufficient space, we still try to insert into some
other page before giving up and extending the relation, since doing
otherwise leads to table bloat. Testing showed that trying every page
penalized performance slightly, so we compromise and try every other page.
This way, we visit at most two pages. Any pages with wasted free space
become visible at next relation extension, so we still control table bloat.
As a bonus, directly attempting one or two pages can even be faster than
consulting the FSM would have been.
Once the FSM is created for a heap we don't remove it even if somebody
deletes all the rows from the corresponding relation. We don't think it is
a useful optimization, as it is quite likely that the relation will again grow
to the same size.
Author: John Naylor, Amit Kapila
Reviewed-by: Amit Kapila
Tested-by: Mithun C Y
Discussion: https://www.postgresql.org/message-id/CAJVSVGWvB13PzpbLEecFuGFc5V2fsO736BsdTakPiPAcdMM5tQ@mail.gmail.com
Add PG_CFLAGS, PG_CXXFLAGS, and PG_LDFLAGS variables to pgxs.mk which
will be appended or prepended to the corresponding make variables.
Notably, there was previously no way to pass custom CXXFLAGS to third
party extension module builds, COPT and PROFILE supporting only CFLAGS
and LDFLAGS.
Backpatch all the way down to ease integration with existing
extensions.
Author: Christoph Berg
Reviewed-by: Andres Freund, Tom Lane, Michael Paquier
Discussion: https://postgr.es/m/20181113104005.GA32154@msg.credativ.de
Backpatch-through: 9.4
The documentation includes sections describing how to initialize and start
Postgres via a couple of commands. Some of its recommendations involve
using "postgres" directly, which is inconsistent with the recommendation
given by initdb. At the same time make some other command calls more
consistent with the rest, by using an absolute path when creating a
database.
Author: Andreas Scherbaum
Reviewed-by: Michael Banck, Ryan Lambert
These two new options can be used to further restrict the set of
relations to vacuum or analyze based on the age of, respectively, their
transaction ID or multixact ID, making it possible to prioritize tables
so as to prevent wraparound of one or the other.
Combined with --table, it is possible to target a subset of tables as
potential processing targets.
Author: Nathan Bossart
Reviewed-by: Michael Paquier, Masahiko Sawada
Discussion: https://postgr.es/m/FFE5373C-E26A-495B-B5C8-911EC4A41C5E@amazon.com
Previously, all heaps had FSMs. For very small tables, this meant that the
FSM took up more space than the heap did. This is wasteful, so now we
refrain from creating the FSM for heaps with 4 pages or fewer. If the last
known target block has insufficient space, we still try to insert into some
other page before giving up and extending the relation, since doing
otherwise leads to table bloat. Testing showed that trying every page
penalized performance slightly, so we compromise and try every other page.
This way, we visit at most two pages. Any pages with wasted free space
become visible at next relation extension, so we still control table bloat.
As a bonus, directly attempting one or two pages can even be faster than
consulting the FSM would have been.
Once the FSM is created for a heap we don't remove it even if somebody
deletes all the rows from the corresponding relation. We don't think it is
a useful optimization, as it is quite likely that the relation will again grow
to the same size.
Author: John Naylor with design inputs and some code contribution by Amit Kapila
Reviewed-by: Amit Kapila
Tested-by: Mithun C Y
Discussion: https://www.postgresql.org/message-id/CAJVSVGWvB13PzpbLEecFuGFc5V2fsO736BsdTakPiPAcdMM5tQ@mail.gmail.com
Before this change FunctionCallInfoData, the struct arguments etc for
V1 function calls are stored in, always had space for FUNC_MAX_ARGS
(currently 100) arguments, storing datums and their nullness in two
arrays. For nearly every function call 100 arguments is far more than
needed, therefore wasting memory. Arg and argnull being two separate
arrays also guarantees that to access a single argument, two
cachelines have to be touched.
Change the layout so there's a single variable-length array with pairs
of value / isnull. That drastically reduces memory consumption for
most function calls (on x86-64 a two-argument function now uses
64 bytes, previously 936 bytes), and makes it very likely that argument
value and its nullness are on the same cacheline.
Arguments are stored in a new NullableDatum struct, which, due to
padding, needs more memory per argument than before. But as usually
far fewer arguments are stored, and individual arguments are cheaper
to access, that's still a clear win. It's likely that there's other
places where conversion to NullableDatum arrays would make sense,
e.g. TupleTableSlots, but that's for another commit.
Because the function call information is now variable-length,
allocations have to take the number of arguments into account. For
heap allocations that can be done with SizeForFunctionCallInfoData(),
for on-stack allocations there's a new LOCAL_FCINFO(name, nargs) macro
that helps to allocate an appropriately sized and aligned variable.
Some places that stack-allocate function call information don't know
the number of arguments at compile time, and currently variably sized
stack allocations aren't allowed in postgres. Therefore allow for
FUNC_MAX_ARGS space in these cases. They're not that common, so for
now that seems acceptable.
Because of the need to allocate FunctionCallInfo of the appropriate
size, older extensions may need to update their code. To avoid subtle
breakages, the FunctionCallInfoData struct has been renamed to
FunctionCallInfoBaseData. Most code only references FunctionCallInfo,
so that shouldn't cause much collateral damage.
This change is also a prerequisite for more efficient expression JIT
compilation (by allocating the function call information on the stack,
allowing LLVM to optimize it away); previously the size of the call
information caused problems inside LLVM's optimizer.
Author: Andres Freund
Reviewed-By: Tom Lane
Discussion: https://postgr.es/m/20180605172952.x34m5uz6ju6enaem@alap3.anarazel.de
Previously, \g would successfully execute the COPY command, but
the target specification, if any, was ignored, so that the data was
always dumped to the regular query output target. This seems like
a clear bug, so let's not just fix it but back-patch it.
While at it, adjust the documentation for \copy to recommend
"COPY ... TO STDOUT \g foo" as a plausible alternative.
Back-patch to 9.5. The problem exists much further back, but the
code associated with \g was refactored enough in 9.5 that we'd
need a significantly different patch for 9.4, and it doesn't
seem worth the trouble.
Daniel Vérité, reviewed by Fabien Coelho
Discussion: https://postgr.es/m/15dadc39-e050-4d46-956b-dcc4ed098753@manitou-mail.org
Since LISTEN is (still) disallowed, UNLISTEN must be a no-op in a
hot-standby session, and so there's no harm in allowing it. This
change allows client code to not worry about whether it's connected
to a primary or standby server when performing session-state-reset
type activities. (Note that DISCARD ALL, which includes UNLISTEN,
was already allowed, making it inconsistent to reject UNLISTEN.)
Per discussion, back-patch to all supported versions.
Shay Rojansky, reviewed by Mi Tar
Discussion: https://postgr.es/m/CADT4RqCf2gA_TJtPAjnGzkC3ZiexfBZiLmA-mV66e4UyuVv8bA@mail.gmail.com
Previously, only literals were allowed. This change allows general
expressions, including functions calls, which are evaluated at the
time the DDL command is executed.
Besides offering some more functionality, it simplifies the parser
structures and removes some inconsistencies in how the literals were
handled.
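A hedged sketch with hypothetical table names; the bound expressions are
evaluated once, when the CREATE TABLE is executed:
    CREATE TABLE events (id int, created date) PARTITION BY RANGE (created);
    CREATE TABLE events_2019 PARTITION OF events
        FOR VALUES FROM (make_date(2019, 1, 1)) TO (make_date(2020, 1, 1));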
Author: Kyotaro Horiguchi, Tom Lane, Amit Langote
Reviewed-by: Peter Eisentraut <peter.eisentraut@2ndquadrant.com>
Discussion: https://www.postgresql.org/message-id/flat/9f88b5e0-6da2-5227-20d0-0d7012beaa1c@lab.ntt.co.jp/
The pgbench regression test supposed that srandom() with a specific value
would result in deterministic output from random(), as required by POSIX.
It emerges however that OpenBSD is too smart to be constrained by mere
standards, so their random() emits nondeterministic output anyway.
While a workaround does exist, what seems like a better fix is to stop
relying on the platform's srandom()/random() altogether, so that what
you get from --random-seed=N is not merely deterministic but platform
independent. Hence, use a separate pg_jrand48() random sequence in
place of random().
Also adjust the regression test case that's supposed to detect
nondeterminism so that it's more likely to detect it; the original
choice of random_zipfian parameter tended to produce the same output
all the time even if the underlying behavior wasn't deterministic.
In passing, improve pgbench's docs about random_zipfian().
Back-patch to v11 where this code was introduced.
Fabien Coelho and Tom Lane
Discussion: https://postgr.es/m/4615.1547792324@sss.pgh.pa.us
Extends the COPY FROM command with a WHERE condition, which allows doing
various types of filtering while importing the data (random sampling,
condition on a data column, etc.). Until now such filtering required
either preprocessing of the input data, or importing all data and then
filtering in the database. COPY FROM ... WHERE is an easy-to-use and
low-overhead alternative for most simple cases.
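A hedged example with hypothetical table and column names:
    COPY measurements FROM '/tmp/measurements.csv' WITH (FORMAT csv)
        WHERE logdate >= DATE '2019-01-01';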
Author: Surafel Temesgen
Reviewed-by: Tomas Vondra, Masahiko Sawada, Lim Myungkyu
Discussion: https://www.postgresql.org/message-id/flat/CALAY4q_DdpWDuB5-Zyi-oTtO2uSk8pmy+dupiRe3AvAc++1imA@mail.gmail.com
Commit c0d0e54084 replaced the ones in the documentation, but missed out
on the ones in the code. Replace those as well, but unlike c0d0e54084,
don't backpatch the code changes to avoid breaking translations.
Attempting to use a temporary table within a two-phase transaction has
been forbidden for ages. However, a couple of other object types and
commands that work on temporary objects with two-phase commit remained
uncovered. In short, trying to create, lock or drop
an object on a temporary schema should not be authorized within a
two-phase transaction, as it would cause its state to create
dependencies with other sessions, causing all sorts of side effects with
the existing session or other sessions spawned later on trying to use
the same temporary schema name.
Regression tests are added to cover all the cases found. The original
report mentioned function creation, but closer monitoring showed many
other patterns involving LOCK, DROP or CREATE EXTENSION.
One of the symptoms of combining the two is that the session
which used the temporary schema is not able to shut down completely,
waiting to be able to drop the temporary schema, something that it
cannot complete because of the two-phase transaction involved with
temporary objects. In this case the client is able to disconnect but
the session remains alive on the backend-side, potentially blocking
connection backend slots from being used. Other problems reported could
also involve server crashes.
This is back-patched down to v10, which is where 9b013dc has introduced
MyXactFlags, something that this patch relies on.
Reported-by: Alexey Bashtanov
Author: Michael Paquier
Reviewed-by: Masahiko Sawada
Discussion: https://postgr.es/m/5d910e2e-0db8-ec06-dd5f-baec420513c3@imap.cc
Backpatch-through: 10
These have been found while cross-checking for the use of unique words
in the documentation, and a wait event was not getting generated in a way
consistent with what the documentation provided.
Author: Alexander Lakhin
Discussion: https://postgr.es/m/9b5a3a85-899a-ae62-dbab-1e7943aa5ab1@gmail.com
With the previous rule, if pandoc was missing, a zero-length output
file would be created without an error from make. To improve that,
write the rule as two separate commands without a pipe.
Reported-by: Tom Lane <tgl@sss.pgh.pa.us>
Since approximately PostgreSQL 10, it is no longer required that
environment variables at installation time such as PERL, PYTHON, TCLSH
be "full path names", so change that phrasing in the installation
instructions. (The exact time of change appears to differ for PERL
and the others, but it works consistently in PostgreSQL 10.)
Also, while we're here, document the defaults for PERL and PYTHON, but
since the search list for TCLSH is so long, let's leave that out so we
don't need to maintain a copy of that list in the installation
instructions.
This value represents the default behavior of using the current
timeline. Previously, this was represented by an empty string.
(Before the removal of recovery.conf, this setting could not be chosen
explicitly but was used when recovery_target_timeline was not
mentioned at all.)
Discussion: https://www.postgresql.org/message-id/flat/6dd2c23a-4162-8469-410f-bfe146e28c0c@2ndquadrant.com/
Reviewed-by: David Steele <david@pgmasters.net>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
pg_stat_statements_reset() can now discard statistics gathered for a
particular user/db/query.
The function pg_stat_statements_reset() is extended to accept userid, dbid,
and queryid as input parameters. Now, it can discard the statistics
gathered so far by pg_stat_statements corresponding to the specified
userid, dbid, and queryid. If no parameter is specified or all the
specified parameters have default value aka 0, it will discard all
statistics as per the old behavior.
The new behavior is useful to get the fresh statistics for a specific
user/database/query without resetting all the existing statistics.
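A hedged usage sketch (role, database, and queryid are hypothetical):
    SELECT pg_stat_statements_reset(
        (SELECT oid FROM pg_roles WHERE rolname = 'app_user'),
        (SELECT oid FROM pg_database WHERE datname = 'app_db'),
        1196440665617805826);
    -- with no arguments (or all zeroes), everything is reset, as before:
    SELECT pg_stat_statements_reset();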
Author: Haribabu Kommi, with few additional changes by me
Reviewed-by: Michael Paquier, Amit Kapila and Fujii Masao
Discussion: https://postgr.es/m/CAJrrPGcyh-gkFswyc6C661K6cknL0XkNqVT0sQt2mFNMR4HRKA@mail.gmail.com
These commands allow assignment of values produced by queries to pgbench
variables, where they can be used by further commands. \gset terminates
a command sequence (just like a bare semicolon); \cset separates
multiple queries in a compound command, like an escaped semicolon (\;).
A prefix can be provided to the \-command and is prepended to the name
of each output column to produce the final variable name.
This feature allows pgbench scripts to react meaningfully to the actual
database contents, allowing more powerful benchmarks to be written.
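A hedged pgbench script fragment using the standard pgbench_accounts table;
the variable name follows the output column alias:
    SELECT max(aid) AS max_aid FROM pgbench_accounts \gset
    SELECT abalance FROM pgbench_accounts WHERE aid = :max_aid;
With a prefix, "\gset p_" would set :p_max_aid instead.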
Authors: Fabien Coelho, Álvaro Herrera
Reviewed-by: Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>
Reviewed-by: Stephen Frost <sfrost@snowman.net>
Reviewed-by: Pavel Stehule <pavel.stehule@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Tatsuo Ishii <ishii@sraoss.co.jp>
Reviewed-by: Rafia Sabih <rafia.sabih@enterprisedb.com>
Discussion: https://postgr.es/m/alpine.DEB.2.20.1607091005330.3412@sto
For a long time, plpgsql has allowed trigger functions to parse
references to OLD and NEW even if the current trigger event type didn't
assign a value to one or the other variable; but actually executing such
a reference would fail. The v11 changes to use "expanded records" for
DTYPE_REC variables changed the behavior so that the unassigned variable
now reads as a null composite value. While this behavioral change was
more or less unintentional, it seems that leaving it like this is better
than adding code and complexity to be bug-compatible with the old way.
The change doesn't break any code that worked before, and it eliminates
a gotcha that often required extra code to work around.
Hence, update the docs to say that these variables are "null" not
"unassigned" when not relevant to the event type. And add a regression
test covering the behavior, so that we'll notice if we ever break it
again.
Per report from Kristjan Tammekivi.
Discussion: https://postgr.es/m/CAABK7uL-uC9ZxKBXzo_68pKt7cECfNRv+c35CXZpjq6jCAzYYA@mail.gmail.com
runtime.sgml said that you couldn't change SysV IPC parameters on OpenBSD
except by rebuilding the kernel. That's definitely wrong in OpenBSD 6.x,
and excavation in their man pages says it changed in OpenBSD 3.3.
Update NetBSD and OpenBSD sections to recommend adjustment of the SEMMNI
and SEMMNS settings, which are painfully small by default on those
platforms. (The discussion thread contemplated recommending that
people select POSIX semaphores instead, but the performance consequences
of that aren't really clear, so I'll refrain.)
Remove pointless discussion of SEMMNU and SEMMAP from the FreeBSD
section. Minor other wordsmithing.
Discussion: https://postgr.es/m/27582.1546928073@sss.pgh.pa.us
DISABLE_PAGE_SKIPPING is available since v9.6, and SKIP_LOCKED since
v12. They lacked equivalents for vacuumdb, so this closes the gap.
Author: Nathan Bossart
Reviewed-by: Michael Paquier, Masahiko Sawada
Discussion: https://postgr.es/m/FFE5373C-E26A-495B-B5C8-911EC4A41C5E@amazon.com
Instead of running a SQL script to create the standard conversion
functions and pg_conversion entries, put those entries into the
initial data in postgres.bki.
This shaves a few percent off the runtime of initdb, and also allows
accurate comments to be attached to the conversion functions; the
previous script labeled them with machine-generated comments that
were not quite right for multi-purpose conversion functions.
Also, we can get rid of the duplicative Makefile and MSVC perl
implementations of the generation code for that SQL script.
A functional change is that these pg_proc and pg_conversion entries
are now "pinned" by initdb. Leaving them unpinned was perhaps a
good thing back while the conversions feature was under development,
but there seems no valid reason for it now.
Also, the conversion functions are now marked as immutable, where
before they were volatile by virtue of lacking any explicit
specification. That seems like it was just an oversight.
To avoid using magic constants in pg_conversion.dat, extend
genbki.pl to allow encoding names to be converted, much as it
does for language, access method, etc names.
John Naylor
Discussion: https://postgr.es/m/CAJVSVGWtUqxpfAaxS88vEGvi+jKzWZb2EStu5io-UPc4p9rSJg@mail.gmail.com
This patch teaches genbki.pl to replace pg_language names by OIDs
in much the same way as it already does for pg_am names etc, and
converts pg_proc.dat to use such symbolic references in the prolang
column.
Aside from getting rid of a few more magic numbers in the initial
catalog data, this means that Gen_fmgrtab.pl no longer needs to read
pg_language.dat, since it doesn't have to know the OID of the "internal"
language; now it's just looking for the string "internal".
No need for a catversion bump, since the contents of postgres.bki
don't actually change at all.
John Naylor
Discussion: https://postgr.es/m/CAJVSVGWtUqxpfAaxS88vEGvi+jKzWZb2EStu5io-UPc4p9rSJg@mail.gmail.com
This removes a portion of infrastructure introduced by fe0a0b5 to allow
compilation of Postgres in environments where no strong random source is
available, meaning that there is no linking to OpenSSL and no
/dev/urandom (Windows having its own CryptoAPI). No systems shipped
this century lack /dev/urandom, and the buildfarm is actually not
testing this switch at all, so just remove it. This particularly
simplifies some backend code which included a fallback implementation
using shared memory, and removes a set of alternate regression output
files from pgcrypto.
Author: Michael Paquier
Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/20181230063219.GG608@paquier.xyz
Previously, the SQL random() function depended on libc's random(3),
and setseed() invoked srandom(3). This results in interference between
these functions and backend-internal uses of random(3). We'd never paid
too much mind to that, but in the wake of commit 88bdbd3f7 which added
log_statement_sample_rate, the interference arguably has a security
consequence: if log_statement_sample_rate is active then an unprivileged
user could probably control which if any of his SQL commands get logged,
by issuing setseed() at the right times. That seems bad.
To fix this reliably, we need random() and setseed() to use their own
private random state variable. Standard random(3) isn't amenable to such
usage, so let's switch to pg_erand48(). It's hard to say whether that's
more or less "random" than any particular platform's version of random(3),
but it does have a wider seed value and a longer period than are required
by POSIX, so we can hope that this isn't a big downgrade. Also, we should
now have uniform behavior of random() across platforms, which is worth
something.
While at it, upgrade the per-process seed initialization method to use
pg_strong_random() if available, greatly reducing the predictability
of the initial seed value. (I'll separately do something similar for
the internal uses of random().)
In addition to forestalling the possible security problem, this has a
benefit in the other direction, which is that we can now document
setseed() as guaranteeing a reproducible sequence of random() values.
Previously, because of the possibility of internal calls of random(3),
we could not promise any such thing.
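A hedged illustration of the now-documented guarantee:
    SELECT setseed(0.5);
    SELECT random();   -- this sequence is now reproducible across runs and platforms
    SELECT random();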
Discussion: https://postgr.es/m/3859.1545849900@sss.pgh.pa.us
The documentation of ON DELETE and ON UPDATE uses the term "action",
which is also used on the ALTER TABLE page for other purposes. This
commit renames the term to "referential_action", which is more
consistent with the SQL specification. The new term is now used on the
documentation of both CREATE TABLE and ALTER TABLE for consistency.
Reported-by: Brigitte Blanc-Lafay
Author: Lætitia Avrot
Reviewed-by: Álvaro Herrera
Discussion: https://postgr.es/m/CAB_COdiHEVVs0uB+uYCjjYUwQ4YFFekppq+Xqv6qAM8+cd42gA@mail.gmail.com
"$user" in a search_path string is replaced by CURRENT_USER not
SESSION_USER. (It actually was SESSION_USER in the initial implementation,
but we changed it shortly later, and evidently forgot to fix the docs to
match.)
Noted by antonov@stdpr.ru
Discussion: https://postgr.es/m/159151fb45d490c8d31ea9707e9ba99d@stdpr.ru
The "name" comparison operators now all support collations, making them
functionally equivalent to "text" comparisons, except for the different
physical representation of the datatype. They do, in fact, mostly share
the varstr_cmp and varstr_sortsupport infrastructure, which has been
slightly enlarged to handle the case.
To avoid changes in the default behavior of the datatype, set name's
typcollation to C_COLLATION_OID not DEFAULT_COLLATION_OID, so that
by default comparisons to a name value will continue to use strcmp
semantics. (This would have been the case for system catalog columns
anyway, because of commit 6b0faf723, but doing this makes it true for
user-created name columns as well. In particular, this avoids
locale-dependent changes in our regression test results.)
In consequence, tweak a couple of places that made assumptions about
collatable base types always having typcollation DEFAULT_COLLATION_OID.
I have not, however, attempted to relax the restriction that user-
defined collatable types must have that. Hence, "name" doesn't
behave quite like a user-defined type; it acts more like a domain
with COLLATE "C". (Conceivably, if we ever get rid of the need for
catalog name columns to be fixed-length, "name" could actually become
such a domain over text. But that'd be a pretty massive undertaking,
and I'm not volunteering.)
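A hedged example, assuming a collation named "en_US" is installed; by
default name comparisons keep strcmp semantics, but an explicit collation
is now honored:
    SELECT relname FROM pg_class ORDER BY relname;                   -- C ordering, as before
    SELECT relname FROM pg_class ORDER BY relname COLLATE "en_US";   -- locale-aware ordering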
Discussion: https://postgr.es/m/15938.1544377821@sss.pgh.pa.us
When partitioned tables were introduced, we failed to realize that by
copying the tablespace handling for other relation kinds with no
physical storage we were causing the secondary effect that their
partitions would not automatically inherit the tablespace setting. This
is surprising and unhelpful, so change it to adopt the behavior
introduced in pg11 (commit 33e6c34c32) for partitioned indexes: the
parent relation remembers the tablespace specification, which is then
used for any new partitions that don't declare one.
Because this commit changes behavior of the TABLESPACE clause for
partitioned tables (it's no longer a no-op), it is not backpatched.
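A hedged sketch, assuming a tablespace named fastspace exists:
    CREATE TABLE orders (id int, created date) PARTITION BY RANGE (created)
        TABLESPACE fastspace;
    CREATE TABLE orders_2019 PARTITION OF orders
        FOR VALUES FROM ('2019-01-01') TO ('2020-01-01');
    -- orders_2019 is placed in fastspace, inherited from the parent, because
    -- it does not specify a TABLESPACE of its own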
Author: David Rowley, Álvaro Herrera
Reviewed-by: Michael Paquier
Discussion: https://postgr.es/m/CAKJS1f9SxVzqDrGD1teosFd6jBMM0UEaa14_8mRvcWE19Tu0hA@mail.gmail.com
When we first put in collations support, we basically punted on teaching
pg_statistic, ANALYZE, and the planner selectivity functions about that.
They've just used DEFAULT_COLLATION_OID independently of the actual
collation of the data. It's time to improve that, so:
* Add columns to pg_statistic that record the specific collation associated
with each statistics slot.
* Teach ANALYZE to use the column's actual collation when comparing values
for statistical purposes, and record this in the appropriate slot. (Note
that type-specific typanalyze functions are now expected to fill
stats->stacoll with the appropriate collation, too.)
* Teach assorted selectivity functions to use the actual collation of
the stats they are looking at, instead of just assuming it's
DEFAULT_COLLATION_OID.
This should give noticeably better results in selectivity estimates for
columns with nondefault collations, at least for query clauses that use
that same collation (which would be the default behavior in most cases).
It's still true that comparisons with explicit COLLATE clauses different
from the stored data's collation won't be well-estimated, but that's no
worse than before. Also, this patch does make the first step towards
doing better with that, which is that it's now theoretically possible to
collect stats for a collation other than the column's own collation.
Patch by me; thanks to Peter Eisentraut for review.
Discussion: https://postgr.es/m/14706.1544630227@sss.pgh.pa.us
The cache lookup routines for foreign-data wrappers and foreign servers
are extended with an extra argument to handle a set of flags. The only
value which can be used now indicates whether a missing object should
result in an error or not; the flags are designed to be extensible as needed.
Those new routines are added into the existing set of user-visible
FDW APIs and documented in consequence. They will be used for future
patches to improve the SQL interface for object addresses.
Author: Michael Paquier
Reviewed-by: Álvaro Herrera
Discussion: https://postgr.es/m/CAB7nPqSZxrSmdHK-rny7z8mi=EAFXJ5J-0RbzDw6aus=wB5azQ@mail.gmail.com
The timestamp generated by the standby at message transmission has been
included in the protocol since its introduction for both the status
update message and hot standby feedback message, but it has never
appeared in pg_stat_replication. Seeing this timestamp does not matter
much with a cluster which has a lot of activity, but on a mostly-idle
cluster, it allows monitoring to react faster than the configured
timeouts.
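Assuming the new column is exposed as reply_time, a hedged monitoring query:
    SELECT application_name, state, reply_time FROM pg_stat_replication;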
Author: MyungKyu LIM
Reviewed-by: Michael Paquier, Masahiko Sawada
Discussion: https://postgr.es/m/1657809367.407321.1533027417725.JavaMail.jboss@ep2ml404
This restriction is implicit in the check-only-once implementation we use
for table and domain constraints, but it wasn't spelled out anywhere, nor
was there any advice about how to alter a constraint's behavior safely.
Improve that.
I was also dissatisfied with the documentation of ALTER DOMAIN VALIDATE
CONSTRAINT, which entirely failed to explain the use of that feature; and
thence decided that ALTER TABLE VALIDATE CONSTRAINT could be documented
better as well.
Perhaps we should back-patch this, along with the related commit 36d442a25,
but for now I refrained.
Discussion: https://postgr.es/m/12539.1544107316@sss.pgh.pa.us
Expand section 5.6 "Privileges" to include the full definition of
each privilege type, and an explanation of aclitem privilege displays,
along with some helpful summary tables. Most of this material came
out of the GRANT reference page, although some of it is new.
Adjust a bunch of links that were pointing to GRANT to point to 5.6.
Fabien Coelho and Tom Lane, reviewed by Bradley DeJong
Discussion: https://postgr.es/m/alpine.DEB.2.21.1807311735200.20743@lancre
The following options are added for extensions:
- TAP_TESTS, to allow an extension to run TAP tests, which are the ones
present in t/*.pl. A subset of tests can always be run with the
existing PROVE_TESTS for developers.
- ISOLATION, to define a list of isolation tests.
- ISOLATION_OPTS, to pass custom options to isolation_tester.
A couple of custom Makefile rules have been accumulated across the tree
to cover the lack of facility in PGXS for a couple of releases when
using those test suites, which are all now replaced with the new flags,
without reducing the test coverage. Note that tests of contrib/bloom/
are not enabled yet, as those are proving unstable in the buildfarm.
Author: Michael Paquier
Reviewed-by: Adam Berlin, Álvaro Herrera, Tom Lane, Nikolay Shaplov,
Arthur Zakirov
Discussion: https://postgr.es/m/20180906014849.GG2726@paquier.xyz
This allows control of the directory in which the postmaster sockets
are created for the temporary postmasters started by pg_upgrade.
The default location remains the current working directory, which is
typically fine, but if it is deeply nested then its pathname might
be too long to be a socket name.
In passing, clean up some messiness in pg_upgrade's option handling,
particularly the confusing and undocumented way that configuration-only
datadirs were handled. And fix check_required_directory's substantially
under-baked cleanup of directory pathnames.
Daniel Gustafsson, reviewed by Hironobu Suzuki, some code cleanup by me
Discussion: https://postgr.es/m/E72DD5C3-2268-48A5-A907-ED4B34BEC223@yesql.se
The source code comments documented this, but the user-facing docs, not
so much. Add a section to Appendix B that discusses it.
In passing, improve a couple other things in Appendix B --- notably,
a long-obsolete claim that time zone abbreviations are looked up in
a fixed table.
Per bug #15527 from Michael Davidson.
Discussion: https://postgr.es/m/15527-f1be0b4dc99ebbe7@postgresql.org
This allows setting a lower log_min_duration_statement value without
incurring excessive log traffic (which reduces performance). This can
be useful to analyze workloads with lots of short queries.
Author: Adrien Nayrat
Reviewed-by: David Rowley, Vik Fearing
Discussion: https://postgr.es/m/c30ee535-ee1e-db9f-fa97-146b9f62caed@anayrat.info
The previous recovery.conf regime accepted multiple recovery_target*
settings and used the last one. This does not translate well to the
general GUC system. Specifically, under EXEC_BACKEND, the settings
are written out not in any particular order, so the order in which
they were originally set is not available to new processes.
Rather than redesign the GUC system, it was decided to abandon the old
behavior and only allow one recovery target setting. A second setting
will cause an error. However, it is allowed to set the same parameter
multiple times or unset a parameter and set a different one.
Discussion: https://www.postgresql.org/message-id/flat/27802171543235530%40iva2-6ec8f0a6115e.qloud-c.yandex.net#701a59c837ad0bf8c244344aaf3ef5a4
"\pset format csv", or --csv, selects comma-separated values table format.
This is compliant with RFC 4180, except that we aren't too picky about
whether the record separator is LF or CRLF; also, the user may choose a
field separator other than comma.
This output format is directly compatible with the server's COPY CSV
format, and will also be useful as input to other programs. It's
considerably safer for that purpose than the old recommendation to
use "unaligned" format, since the latter couldn't handle data
containing the field separator character.
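Hedged examples of both ways to select the format:
    \pset format csv
    SELECT 1 AS a, 'hello, world' AS b;
From the shell, the same is available as "psql --csv".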
Daniel Vérité, reviewed by Fabien Coelho and David Fetter, some
tweaking by me
Discussion: https://postgr.es/m/a8de371e-006f-4f92-ab72-2bbe3ee78f03@manitou-mail.org
A set of failures in buildfarm machines is proving that this is not
quite ready yet because of another set of issues:
- MSVC scripts assume that REGRESS_OPTS can only use top_builddir. Some
test suites actually finish by using top_srcdir, like pg_stat_statements,
which causes the regression tests to never run.
- Trying to enforce top_builddir does not work either when using VPATH
as this is not recognized properly.
- TAP tests of bloom are unstable on various platforms, causing various
failures.
The following options are added for extensions:
- TAP_TESTS, to allow an extension to run TAP tests, which are the ones
present in t/*.pl. A subset of tests can always be run with the
existing PROVE_TESTS for developers.
- ISOLATION, to define a list of isolation tests.
- ISOLATION_OPTS, to pass custom options to isolation_tester.
A couple of custom Makefile targets have been accumulated across the
tree to cover the lack of facility in PGXS for a couple of releases when
using those test suites, which are all now replaced with the new flags,
without reducing the test coverage. This also fixes an issue with
contrib/bloom/, which had a custom target of its own to trigger its TAP
tests, not run as part of the main check runs.
Author: Michael Paquier
Reviewed-by: Adam Berlin, Álvaro Herrera, Tom Lane, Nikolay Shaplov,
Arthur Zakirov
Discussion: https://postgr.es/m/20180906014849.GG2726@paquier.xyz
recovery.conf settings are now set in postgresql.conf (or other GUC
sources). Currently, all the affected settings are PGC_POSTMASTER;
this could be refined in the future case by case.
Recovery is now initiated by a file recovery.signal. Standby mode is
initiated by a file standby.signal. The standby_mode setting is
gone. If a recovery.conf file is found, an error is issued.
The trigger_file setting has been renamed to promote_trigger_file as
part of the move.
The documentation chapter "Recovery Configuration" has been integrated
into "Server Configuration".
pg_basebackup -R now appends settings to postgresql.auto.conf and
creates a standby.signal file.
Author: Fujii Masao <masao.fujii@gmail.com>
Author: Simon Riggs <simon@2ndquadrant.com>
Author: Abhijit Menon-Sen <ams@2ndquadrant.com>
Author: Sergei Kornilov <sk@zsrv.org>
Discussion: https://www.postgresql.org/message-id/flat/607741529606767@web3g.yandex.ru/
The documentation of CREATE/ALTER ROLE has been missing two things
related to PASSWORD:
- The password value provided needs to be quoted; some places of the
documentation marked the field with quotes, but not others, which led to
confusion.
- PASSWORD NULL was not mentioned consistently, with ENCRYPTED not being
compatible with it.
Reported-by: Steven Winfield
Author: Michael Paquier
Reviewed-by: David G. Johnston
Discussion: https://postgr.es/m/154282901979.1316.7418475422120496802@wrigleys.postgresql.org
Documenting INCLUDE in the section about unique indexes is confusing,
as complained of by Emilio Platzer. Furthermore, it entirely failed
to explain why you might want to use the feature. The section about
index-only scans is really the right place; it already talked about
making such things the hard way. Rewrite that text to describe INCLUDE
as the normal way to make a covering index.
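A hedged example with hypothetical table and column names:
    CREATE INDEX tab_x_include_y ON tab (x) INCLUDE (y);
    -- a query like SELECT y FROM tab WHERE x = 42 can now use an index-only
    -- scan without y being part of the index's search key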
Also, move that section up a couple of places, as it now seems more
important than some of the stuff we had before it. It still has to
be after expression and partial indexes, since otherwise some of it
would involve forward references.
Discussion: https://postgr.es/m/154031939560.30897.14677735588262722042@wrigleys.postgresql.org
Previously tables declared WITH OIDS, including a significant fraction
of the catalog tables, stored the oid column not as a normal column,
but as part of the tuple header.
This special column was not shown by default, which was somewhat odd,
as it's often (consider e.g. pg_class.oid) one of the more important
parts of a row. Neither pg_dump nor COPY included the contents of the
oid column by default.
The fact that the oid column was not an ordinary column necessitated a
significant amount of special case code to support oid columns. That
already was painful for the existing code, but the upcoming work aiming
to make table storage pluggable would have required expanding and duplicating
that "specialness" significantly.
WITH OIDS has been deprecated since 2005 (commit ff02d0a05280e0).
Remove it.
Removing includes:
- CREATE TABLE and ALTER TABLE syntax for declaring the table to be
WITH OIDS has been removed (WITH (oids[ = true]) will error out)
- pg_dump does not support dumping tables declared WITH OIDS and will
issue a warning when dumping one (and ignore the oid column).
- restoring a pg_dump archive with pg_restore will warn when
restoring a table with oid contents (and ignore the oid column)
- COPY will refuse to load a binary dump that includes oids.
- pg_upgrade will error out when encountering tables declared WITH
OIDS, they have to be altered to remove the oid column first.
- Functionality to access the oid of the last inserted row (like
plpgsql's RESULT_OID, spi's SPI_lastoid, ...) has been removed.
The syntax for declaring a table WITHOUT OIDS (or WITH (oids = false)
for CREATE TABLE) is still supported. While that requires a bit of
support code, it seems unnecessary to break applications / dumps that
do not use oids, and are explicit about not using them.
The biggest user of WITH OID columns was postgres' catalog. This
commit changes all 'magic' oid columns to be columns that are normally
declared and stored. To reduce unnecessary query breakage all the
newly added columns are still named 'oid', even if a table's column
naming scheme would indicate 'reloid' or such. This obviously
requires adapting a lot of code, mostly replacing oid access via
HeapTupleGetOid() with access to the underlying Form_pg_*->oid column.
The bootstrap process now assigns oids for all oid columns in
genbki.pl that do not have an explicit value (starting at the largest
oid previously used); only oids assigned later will be above
FirstBootstrapObjectId. As the oid column now is a normal column the
special bootstrap syntax for oids has been removed.
Oids are not automatically assigned during insertion anymore, all
backend code explicitly assigns oids with GetNewOidWithIndex(). For
the rare case that insertions into the catalog via SQL are called for,
the new pg_nextoid() function can be used (which only works on catalog
tables).
The fact that oid columns on system tables are now normal columns
means that they will be included in the set of columns expanded
by * (i.e. SELECT * FROM pg_class will now include the table's oid,
previously it did not). It'd not technically be hard to hide the oid
column by default, but that'd mean confusing behavior would either
have to be carried forward forever, or it'd cause breakage down the
line.
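For instance (a hedged illustration of the visible change):
    SELECT * FROM pg_namespace LIMIT 1;
    -- the output now starts with the oid column; previously oid had to be
    -- selected explicitly, e.g. SELECT oid, nspname FROM pg_namespace;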
While it's not unlikely that further adjustments are needed, the
scope/invasiveness of the patch makes it worthwhile to merge this
now. It's painful to maintain externally, too complicated to commit
after the code freeze, and a dependency of a number of other
patches.
Catversion bump, for obvious reasons.
Author: Andres Freund, with contributions by John Naylor
Discussion: https://postgr.es/m/20180930034810.ywp2c7awz7opzcfr@alap3.anarazel.de
The documentation claimed that an enum type requires "one or more"
labels, but since 1fd9883ff4, zero labels are also allowed.
Reported-by: Lukas Eder <lukas.eder@gmail.com>
Bug: #15356
When hostaddr is given, the actual IP address that psql is connected to
can be totally unexpected for the given host. The more verbose output
we now generate makes things clearer. Since the "host" and "hostaddr"
parts of the conninfo could come from different sources (say, one of
them is in the service specification or a URI-style conninfo and the
other is not), this is not as silly as it may first appear. This is
also definitely useful if the hostname resolves to multiple addresses.
Author: Fabien Coelho
Reviewed-by: Pavel Stehule, Arthur Zakirov
Discussion: https://postgr.es/m/alpine.DEB.2.21.1810261532380.27686@lancre
Discussion: https://postgr.es/m/alpine.DEB.2.21.1808201323020.13832@lancre
This didn't actually work: COPY would fail to flush the right files, and
instead would try to flush a non-existing file, causing the whole
transaction to fail.
Cope by raising an error as soon as the command is sent instead, to
avoid a nasty later surprise. Of course, it would be much better to
make it work, but we don't have a patch for that yet, and we don't know
if we'll want to backpatch one when we do.
Reported-by: Tomas Vondra
Author: David Rowley
Reviewed-by: Amit Langote, Steve Singer, Tomas Vondra
On some operating systems, it doesn't make sense to retry fsync(),
because dirty data cached by the kernel may have been dropped on
write-back failure. In that case the only remaining copy of the
data is in the WAL. A subsequent fsync() could appear to succeed,
but not have flushed the data. That means that a future checkpoint
could apparently complete successfully but have lost data.
Therefore, violently prevent any future checkpoint attempts by
panicking on the first fsync() failure. Note that we already
did the same for WAL data; this change extends that behavior to
non-temporary data files.
Provide a GUC data_sync_retry to control this new behavior, for
users of operating systems that don't eject dirty data, and possibly
forensic/testing uses. If it is set to on and the write-back error
was transient, a later checkpoint might genuinely succeed (on a
system that does not throw away buffers on failure); if the error is
permanent, later checkpoints will continue to fail. The GUC defaults
to off, meaning that we panic.
Back-patch to all supported releases.
There is still a narrow window for error-loss on some operating
systems: if the file is closed and later reopened and a write-back
error occurs in the intervening time, but the inode has the bad
luck to be evicted due to memory pressure before we reopen, we could
miss the error. A later patch will address that with a scheme
for keeping files with dirty data open at all times, but we judge
that to be too complicated to back-patch.
Author: Craig Ringer, with some adjustments by Thomas Munro
Reported-by: Craig Ringer
Reviewed-by: Robert Haas, Thomas Munro, Andres Freund
Discussion: https://postgr.es/m/20180427222842.in2e4mibx45zdth5%40alap3.anarazel.de
This hasn't been correct since 9.3 added "latex-longtable".
I left the phraseology "Unique abbreviations are allowed" alone.
It's correct as far as it goes, and we are studiously refraining
from specifying exactly what happens if you give a non-unique
abbreviation. (The answer in the back branches is "you get a
backwards-compatible choice", and the answer in HEAD will shortly
be "you get an error", but there seems no need to mention such
details here.)
Daniel Vérité
Discussion: https://postgr.es/m/cb7e1caf-3ea6-450d-af28-f524903a030c@manitou-mail.org
date_trunc(field, timestamptz, zone_name) performs truncation using
the named time zone as reference, rather than working in the session
time zone as is the default behavior. It's equivalent to
date_trunc(field, timestamptz at time zone zone_name) at time zone zone_name
but it's faster, easier to type, and arguably easier to understand.
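A hedged example (the time zone name is assumed to be installed):
    SELECT date_trunc('day', timestamptz '2019-02-14 20:38:40+00', 'Europe/Paris');
    -- equivalent to, but faster than:
    --   date_trunc('day', timestamptz '2019-02-14 20:38:40+00'
    --              AT TIME ZONE 'Europe/Paris') AT TIME ZONE 'Europe/Paris'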
Vik Fearing and Tom Lane
Discussion: https://postgr.es/m/6249ffc4-2b22-4c1b-4e7d-7af84fedd7c6@2ndquadrant.com
Change lock level for renaming index (either ALTER INDEX or implicitly
via some other commands) from AccessExclusiveLock to
ShareUpdateExclusiveLock.
One reason we need a strong lock for relation renaming is that the
name change causes a rebuild of the relcache entry. Concurrent
sessions that have the relation open might not be able to handle the
relcache entry changing underneath them. Therefore, we need to lock
the relation in a way that no one can have the relation open
concurrently. But for indexes, the relcache handles reloads specially
in RelationReloadIndexInfo() in a way that keeps changes in the
relcache entry to a minimum. As long as no one keeps pointers to
rd_amcache and rd_options around across possible relcache flushes,
which is the case, this ought to be safe.
We also want to use a self-exclusive lock for correctness, so that
concurrent DDL doesn't overwrite the rename if they start updating
while still seeing the old version. Therefore, we use
ShareUpdateExclusiveLock, which is already used by other DDL commands
that want to operate in a concurrent manner.
The reason this is interesting at all is that renaming an index is a
typical part of a concurrent reindexing workflow (CREATE INDEX
CONCURRENTLY new + DROP INDEX CONCURRENTLY old + rename back). And
indeed a future built-in REINDEX CONCURRENTLY might rely on the ability
to do concurrent renames as well.
Reviewed-by: Andrey Klychkov <aaklychkov@mail.ru>
Reviewed-by: Fabrízio de Royes Mello <fabriziomello@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/1531767486.432607658@f357.i.mail.ru
A note in ddl.sgml used to mention that run-time pruning was only
implemented for Append. When we got MergeAppend support, this was
updated to mention that MergeAppend is supported too. This is
slightly weird as it's not all that obvious what exactly isn't
supported when we mention:
<para>
Both of these behaviors are likely to be changed in a future release
of <productname>PostgreSQL</productname>.
</para>
This patch updates this to mention that ModifyTable is unsupported,
which makes the above fragment make sense again.
Author: David Rowley <david.rowley@2ndquadrant.com>
This commit fixes a set of issues with ON COMMIT actions when used on
partitioned tables and tables with inheritance children:
- Applying ON COMMIT DROP on a partitioned table with partitions or on a
table with inheritance children caused a failure at commit time, with
complaints about the children being already dropped, as all relations are
dropped one at a time.
- Applying ON COMMIT DELETE on a partition relying on a partitioned
table which uses ON COMMIT DROP would cause the partition truncation to
fail as the parent is removed first.
The solution to the first problem is to handle the removal of all the
dependencies in one go instead of dropping relations one-by-one, based
on a suggestion from Álvaro Herrera. So instead all the relation OIDs
to remove are gathered and then processed in one round of multiple
deletions.
The solution to the second problem is to reorder the actions, with
truncation happening first and relation drop done after. Even if it
means that a partition could be first truncated, then immediately
dropped if its partitioned table is dropped, this has the merit to keep
the code simple as there is no need to do existence checks on the
relations to drop.
Contrary to a manual TRUNCATE on a partitioned table, ON COMMIT DELETE
does not cascade to its partitions. The ON COMMIT action defined on
each partition gets the priority.
Author: Michael Paquier
Reviewed-by: Amit Langote, Álvaro Herrera, Robert Haas
Discussion: https://postgr.es/m/68f17907-ec98-1192-f99f-8011400517f5@lab.ntt.co.jp
Backpatch-through: 10
Previously it was possible to set client_min_messages to FATAL or PANIC,
which had the effect of suppressing transmission of regular ERROR messages
to the client. Perhaps that seemed like a useful option in the past, but
the trouble with it is that it breaks guarantees that are explicitly made
in our FE/BE protocol spec about how a query cycle can end. While libpq
and psql manage to cope with the omission, that's mostly because they
are not very bright; client libraries that have more semantic knowledge
are likely to get confused. Notably, pgODBC doesn't behave very sanely.
Let's fix this by getting rid of the ability to set client_min_messages
above ERROR.
In HEAD, just remove the FATAL and PANIC options from the set of allowed
enum values for client_min_messages. (This change also affects
trace_recovery_messages, but that's OK since these aren't useful values
for that variable either.)
In the back branches, there was concern that rejecting these values might
break applications that are explicitly setting things that way. I'm
pretty skeptical of that argument, but accommodate it by accepting these
values and then internally setting the variable to ERROR anyway.
In all branches, this allows a couple of tiny simplifications in the
logic in elog.c, so do that.
Also respond to the point that was made that client_min_messages has
exactly nothing to do with the server's logging behavior, and therefore
does not belong in the "When To Log" subsection of the documentation.
The "Statement Behavior" subsection is a better match, so move it there.
Jonah Harris and Tom Lane
Discussion: https://postgr.es/m/7809.1541521180@sss.pgh.pa.us
Discussion: https://postgr.es/m/15479-ef0f4cc2fd995ca2@postgresql.org
Add another transfer mode --clone to pg_upgrade (besides the existing
--link and the default copy), using special file cloning calls. This
makes the file transfer faster and more space efficient, achieving
speed similar to --link mode without the associated drawbacks.
On Linux, file cloning is supported on Btrfs and XFS (if formatted with
reflink support). On macOS, file cloning is supported on APFS.
Reviewed-by: Michael Paquier <michael@paquier.xyz>
The code added by commit c203d6cf8 causes a crash in at least one case,
where a potentially-optimizable expression index has a storage type
different from the input data type. A cursory code review turned up
numerous other problems that seem impractical to fix on short notice.
Andres argued for revert of that patch some time ago, and if additional
senior committers had been paying attention, that's likely what would
have happened, but we were not :-(
At this point we can't just revert, at least not in v11, because that would
mean an ABI break for code touching relcache entries. And we should not
remove the (also buggy) support for the recheck_on_update index reloption,
since it might already be used in some databases in the field. So this
patch just does the as-little-invasive-as-possible measure of disabling
the feature as though recheck_on_update were forced off for all indexes.
I also removed the related regression tests (which would otherwise fail)
and the user-facing documentation of the reloption.
We should undertake a more thorough code cleanup if the patch can't be
fixed, but not under the extreme time pressure of being already overdue
for 11.1 release.
Per report from Ondřej Bouda and subsequent private discussion among
pgsql-release.
Discussion: https://postgr.es/m/20181106185255.776mstcyehnc63ty@alvherre.pgsql
I removed the item about the pg_stat_statements change from
release-11.sgml, as part of a sweep to delete items already committed
in 11.0; but actually we'd best keep it to ensure that people who've
pg_upgraded their databases will take the requisite action. Also make
said action more visible by making it into its own para. Noted by
Jonathan Katz.
exec_stmt_call() tried to extract information out of a CALL statement's
argument list without using expand_function_arguments(), apparently in
the hope of saving a few nanoseconds by not processing defaulted
arguments. It got that quite wrong though, leading to crashes with
named arguments, as well as failure to enforce writability of the
argument for a defaulted INOUT parameter. Fix and simplify the logic
by using expand_function_arguments() before examining the list.
Also, move the argument-examination to just after producing the CALL
command's plan, before invoking the called procedure. This ensures
that we'll track possible changes in the procedure's argument list
correctly, and avoids a hazard of the plan cache being flushed while
the procedure executes.
Also fix assorted falsehoods and omissions in associated documentation.
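As a hypothetical sketch of the kind of call that misbehaved before the
fix (procedure, parameter and variable names are made up for
illustration):
CREATE PROCEDURE p(INOUT total int, amount int DEFAULT 1)
LANGUAGE plpgsql AS $$
BEGIN
    total := total + amount;
END $$;

DO $$
DECLARE
    t int := 0;
BEGIN
    CALL p(total => t);          -- named notation plus a defaulted argument
    RAISE NOTICE 'total = %', t;
END $$;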
Per bug #15477 from Alexey Stepanov.
Patch by me, with some help from Pavel Stehule. Back-patch to v11.
Discussion: https://postgr.es/m/15477-86075b1d1d319e0a@postgresql.org
Discussion: https://postgr.es/m/CAFj8pRA6UsujpTs9Sdwmk-R6yQykPx46wgjj+YZ7zxm4onrDyw@mail.gmail.com
As usual, the release notes for other branches will be made by cutting
these down, but put them up for community review first. Note that a
fair percentage of the entries apply only to prior branches because
their issue was already fixed in 11.0.
The solution arrived at in commit e74dd00f5 presumes that the compiler
has a suitable default -isysroot setting ... but further experience
shows that in many combinations of macOS version, XCode version, Xcode
command line tools version, and phase of the moon, Apple's compiler
will *not* supply a default -isysroot value.
We could potentially go back to the approach used in commit 68fc227dd,
but I don't have a lot of faith in the reliability or life expectancy of
that either. Let's just revert to the approach already shipped in 11.0,
namely specifying an -isysroot switch globally. As a partial response to
the concerns raised by Jakob Egger, adjust the contents of Makefile.global
to look like
CPPFLAGS = -isysroot $(PG_SYSROOT) ...
PG_SYSROOT = /path/to/sysroot
This allows overriding the sysroot path at build time in a relatively
painless way.
Add documentation to installation.sgml about how to use the PG_SYSROOT
option. I also took the opportunity to document how to work around
macOS's "System Integrity Protection" feature.
As before, back-patch to all supported versions.
Discussion: https://postgr.es/m/20840.1537850987@sss.pgh.pa.us
This has been deprecated and effectively unused for a long time.
Reviewed-by: Alvaro Herrera <alvherre@2ndquadrant.com>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
spgendscan neglected to pfree all the memory allocated by spgbeginscan.
It's possible to get away with that in most normal queries, since the
memory is allocated in the executor's per-query context which is about
to get deleted anyway; but it causes severe memory leakage during
creation or filling of large exclusion-constraint indexes.
Also, document that amendscan is supposed to free what ambeginscan
allocates. The docs' lack of clarity on that point probably caused this
bug to begin with. (There is discussion of changing that API spec going
forward, but I don't think it'd be appropriate for the back branches.)
Per report from Bruno Wolff. It's been like this since the beginning,
so back-patch to all active branches.
In HEAD, also fix an independent leak caused by commit 2a6368343
(allocating memory during spgrescan instead of spgbeginscan, which
might be all right if it got cleaned up, but it didn't). And do a bit
of code beautification on that commit, too.
Discussion: https://postgr.es/m/20181024012314.GA27428@wolff.to
This new function is useful to display the full tree of partitions under
a given partitioned table, and avoids the need for complex
WITH RECURSIVE queries when looking at partition trees that are
multiple levels deep.
It returns a set of records, one for each partition, containing the
partition's name, its immediate parent's name, a boolean value telling
whether the relation is a leaf in the tree, and an integer telling its
level in the partition tree with the given table considered as root,
beginning at zero for the root and incrementing by one each time the
scan goes one level down.
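A minimal usage sketch, assuming a partitioned table named measurement
exists (the output column names follow the description above):
SELECT relid, parentrelid, isleaf, level
  FROM pg_partition_tree('measurement');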
Author: Amit Langote
Reviewed-by: Jesper Pedersen, Michael Paquier, Robert Haas
Discussion: https://postgr.es/m/8d00e51a-9a51-ad02-d53e-ba6bf50b2e52@lab.ntt.co.jp
The reference to pg_attribute is switched to a link, which is more
useful for the html documentation. The conditions under which a default
value is defined for a given column are made more general.
Author: Daniel Gustafsson
Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/0E8748E3-8B7D-445E-9ABA-09DA5C7345CC@yesql.se
This adds a new SQL-callable function able to promote a standby.
Execution access can be granted to non-superusers so that failover
tools can observe the principle of least privilege.
Catalog version is bumped.
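A rough sketch of how a failover tool's role might be set up and use it
(the boolean/integer signature shown in the GRANT is an assumption, not
spelled out above):
-- as a superuser, on the standby
GRANT EXECUTE ON FUNCTION pg_promote(boolean, integer) TO failover_role;
-- then, connected as failover_role
SELECT pg_promote();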
Author: Laurenz Albe
Reviewed-by: Michael Paquier, Masahiko Sawada
Discussion: https://postgr.es/m/6e7c79b3ec916cf49742fb8849ed17cd87aed620.camel@cybertec.at
This changes the documentation and the related structures so that
everything is consistent.
Some wait events had not been listed alphabetically since their
introduction, and others were added rather randomly. Keeping all those
entries in order helps with maintenance, and helps users looking at the
documentation.
Author: Michael Paquier, Kuntal Ghosh
Discussion: https://postgr.es/m/20181024002539.GI1658@paquier.xyz
Backpatch-through: 10, only for the documentation part to avoid an ABI
breakage.
PQnotifies() is defined to just process already-read data, not try to read
any more from the socket. (This is a debatable decision, perhaps, but I'm
hesitant to change longstanding library behavior.) The documentation has
long recommended calling PQconsumeInput() before PQnotifies() to ensure
that any already-arrived message would get absorbed and processed.
However, psql did not get that memo, which explains why it's not very
reliable about reporting notifications promptly.
Also, most (not quite all) callers called PQconsumeInput() just once before
a PQnotifies() loop. Taking this recommendation seriously implies that we
should do PQconsumeInput() before each call. This is more important now
that we have "payload" strings in notification messages than it was before;
that increases the probability of having more than one packet's worth
of notify messages. Hence, adjust code as well as documentation examples
to do it like that.
Back-patch to 9.5 to match related server fixes. In principle we could
probably go back further with these changes, but given lack of field
complaints I doubt it's worthwhile.
Discussion: https://postgr.es/m/CAOYf6ec-TmRYjKBXLLaGaB-jrd=mjG1Hzn1a1wufUAR39PQYhw@mail.gmail.com
When an error occurs during a benchmark run, exit with a nonzero exit
code and write a message at the end. Previously, it would just print
the error message when it happened but then proceed to print the run
summary normally and exit with status 0. To still allow
distinguishing setup from run-time errors, we use exit status 2 for
the new state, whereas existing errors during pgbench initialization
use exit status 1.
Reviewed-by: Fabien COELHO <coelho@cri.ensmp.fr>
With the PostgreSQL 11 release notes acknowledgments list, FOP reported
WARNING: Glyph "?" (0x144, nacute) not available in font "Times-Roman".
WARNING: Glyph "?" (0x15e, Scedilla) not available in font "Times-Roman".
WARNING: Glyph "?" (0x15f, scedilla) not available in font "Times-Roman".
WARNING: Glyph "?" (0x131, dotlessi) not available in font "Times-Roman".
This is because we have some new contributors whose names use letters
that we haven't used before, and apparently FOP can't handle them out
of the box.
For now, just fix this by "unaccenting" those names. In the future,
maybe this can be fixed better with a different font configuration.
There is also another warning
WARNING: Glyph "?" (0x3c0, pi) not available in font "Times-Roman".
but that existed in previous releases and is not touched here.
Set the release date. Do a bunch of copy-editing and markup improvement,
rearrange some stuff into what seemed a more sensible order, move some
things that did not seem to be in the right section.
This function lists the contents of the WAL archive status directory,
and is intended to be used by monitoring tools. Unlike pg_ls_dir(),
access to it can be granted to non-superusers so that those monitoring
tools can observe the principle of least privilege. Access is also
given by default to members of pg_monitor.
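A minimal sketch of how a monitoring role might call it (the output
column names are an assumption, modeled on the other pg_ls_* functions):
GRANT EXECUTE ON FUNCTION pg_ls_archive_statusdir() TO monitoring_role;
SELECT name, size, modification FROM pg_ls_archive_statusdir();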
Author: Christoph Moench-Tegeder
Reviewed-by: Aya Iwata
Discussion: https://postgr.es/m/20180930205920.GA64534@elch.exwg.net
Originally committed as 15bc038f (plus some follow-ups), this was
reverted in 28e07270 due to a problem discovered in parallel
workers. This new version corrects that problem by sending the
list of uncommitted enum values to parallel workers.
Here follows the original commit message describing the change:
To prevent possibly breaking indexes on enum columns, we must keep
uncommitted enum values from getting stored in tables, unless we
can be sure that any such column is new in the current transaction.
Formerly, we enforced this by disallowing ALTER TYPE ... ADD VALUE
from being executed at all in a transaction block, unless the target
enum type had been created in the current transaction. This patch
removes that restriction, and instead insists that an uncommitted enum
value can't be referenced unless it belongs to an enum type created
in the same transaction as the value. Per discussion, this should be
a bit less onerous. It does require each function that could possibly
return a new enum value to SQL operations to check this restriction,
but there aren't so many of those that this seems unmaintainable.
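A rough sketch of what the relaxed rule allows and still forbids (the
type and value names are made up):
BEGIN;
CREATE TYPE mood AS ENUM ('happy');
ALTER TYPE mood ADD VALUE 'sad';
SELECT 'sad'::mood;                 -- OK: the type was created in this transaction
COMMIT;

BEGIN;
ALTER TYPE mood ADD VALUE 'bored';  -- now allowed inside a transaction block...
SELECT 'bored'::mood;               -- ...but referencing the uncommitted value still fails
ROLLBACK;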
Author: Andrew Dunstan and Tom Lane, with parallel query fix by Thomas Munro
Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/CAEepm%3D0Ei7g6PaNTbcmAh9tCRahQrk%3Dr5ZWLD-jr7hXweYX3yg%40mail.gmail.com
Discussion: https://postgr.es/m/4075.1459088427%40sss.pgh.pa.us
Historically we forbade datatype-specific comparison functions from
returning INT_MIN, so that it would be safe to invert the sort order
just by negating the comparison result. However, this was never
really safe for comparison functions that directly return the result
of memcmp(), strcmp(), etc, as POSIX doesn't place any such restriction
on those library functions. Buildfarm results show that at least on
recent Linux on s390x, memcmp() actually does return INT_MIN sometimes,
causing sort failures.
The agreed-on answer is to remove this restriction and fix relevant
call sites to not make such an assumption; code such as "res = -res"
should be replaced by "INVERT_COMPARE_RESULT(res)". The same is needed
in a few places that just directly negated the result of memcmp or
strcmp.
To help find places having this problem, I've also added a compile option
to nbtcompare.c that causes some of the commonly used comparators to
return INT_MIN/INT_MAX instead of their usual -1/+1. It'd likely be
a good idea to have at least one buildfarm member running with
"-DSTRESS_SORT_INT_MIN". That's far from a complete test of course,
but it should help to prevent fresh introductions of such bugs.
This is a longstanding portability hazard, so back-patch to all supported
branches.
Discussion: https://postgr.es/m/20180928185215.ffoq2xrq5d3pafna@alap3.anarazel.de
This lists the contents of the temporary directory associated with a
given tablespace, which is useful to get information about the on-disk
consumption caused by temporary files used by a session's queries. By
default, pg_default is scanned, and a tablespace can be specified as
argument.
This function is intended to be used by monitoring tools, and, unlike
pg_ls_dir(), access to it can be granted to non-superusers so that
those monitoring tools can observe the principle of least privilege.
Access is also given by default to members of pg_monitor.
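A minimal sketch (the output column names are an assumption, modeled on
the other pg_ls_* functions; the argument is a tablespace OID):
SELECT name, size, modification FROM pg_ls_tmpdir();   -- pg_default by default
SELECT name, size, modification
  FROM pg_ls_tmpdir((SELECT oid FROM pg_tablespace WHERE spcname = 'temp_ts'));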
Author: Nathan Bossart
Reviewed-by: Laurenz Albe
Discussion: https://postgr.es/m/92F458A2-6459-44B8-A7F2-2ADD3225046A@amazon.com
When specified, this option allows VACUUM to skip the work on a relation
if there is a conflicting lock on it when trying to open it at the
beginning of its processing, as sketched in the example below.
Similarly to autovacuum, this comes with a couple of limitations while
the relation is processed which can cause the process to still block:
- when opening the relation's indexes.
- when acquiring row samples for table inheritance trees, partition trees
or certain types of foreign tables, if a lock is taken on some
leaves of such trees.
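As shipped, the option in question is the parenthesized SKIP_LOCKED
option; a rough usage sketch (the table name is made up):
VACUUM (SKIP_LOCKED) my_table;
VACUUM (ANALYZE, SKIP_LOCKED) my_table;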
Author: Nathan Bossart
Reviewed-by: Michael Paquier, Andres Freund, Masahiko Sawada
Discussion: https://postgr.es/m/9EF7EBE4-720D-4CF1-9D0E-4403D7E92990@amazon.com
Discussion: https://postgr.es/m/20171201160907.27110.74730@wrigleys.postgresql.org
I (Andres) was more than a bit hasty in committing 33001fd7a7
after last minute changes, leading to a number of problems (jit output
was only shown for JIT in parallel workers, and just EXPLAIN without
ANALYZE didn't work). Lukas luckily found these issues quickly.
Instead of combining instrumentation in standard_ExecutorEnd(), do
so on demand in the new ExplainPrintJITSummary().
Also update a documentation example of the JIT output, changed in
52050ad8eb.
Author: Lukas Fittl, with minor changes by me
Discussion: https://postgr.es/m/CAP53PkxmgJht69pabxBXJBM+0oc6kf3KHMborLP7H2ouJ0CCtQ@mail.gmail.com
Backpatch: 11, where JIT compilation was introduced
The previous commit wasn't careful enough to remove all traces of
TupleDescGetSlot().
Besides fixing the oversight of not removing TupleDescGetSlot()'s
declaration, this also removes FuncCallContext->slot. That was
documented to be for use in combination with TupleDescGetSlot(), a
cursory search over extensions finds no users, and there doesn't seem
to be convincing reasons to keep it around. If we later in the v12
release cycle find users, we can re-consider this part of the commit.
Reported-By: Michael Paquier
Discussion: https://postgr.es/m/20180926000413.GC1659@paquier.xyz
We haven't touched these since text search functionality landed in core
in 2007 :-(. While the upstream project isn't a beehive of activity,
they do make additions and bug fixes from time to time. Update our
copies of these files.
Also update our documentation about how to keep things in sync, since
they're not making distribution tarballs these days. Fortunately,
their source code turns out to be a breeze to build.
Notable changes:
* The non-UTF8 version of the hungarian stemmer now works in LATIN2
not LATIN1.
* New stemmers have appeared for arabic, indonesian, irish, lithuanian,
nepali, and tamil. These all work in UTF8, and the indonesian and
irish ones also work in LATIN1.
(There are some new stemmers that I did not incorporate, mainly because
their names don't match the underlying languages, suggesting that they're
not to be considered mainstream.)
Worth noting: the upstream Nepali dictionary was contributed by
Arthur Zakirov.
initdb forced because the contents of snowball_create.sql have
changed.
Still TODO: see about updating the stopword lists.
Arthur Zakirov, minor mods and doc work by me
Discussion: https://postgr.es/m/20180626122025.GA12647@zakirov.localdomain
Discussion: https://postgr.es/m/20180219140849.GA9050@zakirov.localdomain
aclitem functions and operators have been heretofore undocumented.
Fix that. While at it, ensure the non-operator aclitem functions have
pg_description strings.
Does not seem worthwhile to back-patch.
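For instance, aclexplode() is one of the functions in this area; a small
sketch of using it to expand a table's ACL (the table name is made up):
SELECT a.grantee, a.privilege_type, a.is_grantable
  FROM pg_class, aclexplode(relacl) AS a
 WHERE relname = 'my_table';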
Author: Fabien Coelho, with pg_description from John Naylor, and significant
refactoring and editorialization by me.
Reviewed by: Tom Lane
Discussion: https://postgr.es/m/flat/alpine.DEB.2.21.1808010825490.18204%40lancre
Being able to use a value that can be changed on a connection basis is
useful with clusters distributed geographically, and makes failure
detection more flexible. A note is added in the documentation about the
use of "options" in primary_conninfo, which can be hard to grasp for
newcomers given the need for two single quotes when listing a set of
parameters.
Author: Tsunakawa Takayuki
Reviewed-by: Masahiko Sawada, Michael Paquier
Discussion: https://postgr.es/m/0A3221C70F24FB45833433255569204D1FAAD3AE@G01JPEXMBYT05
JIT was disabled by default in PG 11 in a separate commit that will
normally not appear in the PG 12 git logs. Therefore, create a PG 12
document and mention the fact that JIT is enabled by default in this
release. (A similar change in parallelism was missed in a prior
release.)
Reported-by: Andres Freund
Discussion: https://postgr.es/m/20180922000554.qukbhhlagpnopvko@alap3.anarazel.de
Backpatch-through: head
This eliminates some more tedium in adding new catalog entries,
specifically the need to set up an array type when adding a new
built-in data type. Now it's sufficient to assign an OID for the
array type and write it in an "array_type_oid" metadata field.
You don't have to fill the base type's typarray link explicitly, either.
No catversion bump since the contents of pg_type aren't changed.
(Well, their order might be different, but that doesn't matter.)
John Naylor, reviewed and whacked around a bit by
Dagfinn Ilmari Mannsåker, and some more by me.
Discussion: https://postgr.es/m/CAJVSVGVTb6m9pJF49b3SuA8J+T-THO9c0hxOmoyv-yGKh-FbNg@mail.gmail.com
Previously, KNN searches were supported only by GiST. SP-GiST is also
capable of supporting them, and this commit implements that support. The
SP-GiST scan stack is replaced with a queue, which serves as a stack if no
ordering is specified. KNN support is provided for three SP-GiST opclasses:
quad_point_ops, kd_point_ops and poly_ops (catversion is bumped). Some
common parts between the GiST and SP-GiST KNN implementations are
extracted into separate functions.
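A minimal sketch of an ordered (KNN) scan that such an index can now
serve, using the default point opclass (table and data are made up):
CREATE TABLE pts (p point);
INSERT INTO pts SELECT point(random() * 100, random() * 100)
  FROM generate_series(1, 10000);
CREATE INDEX pts_spgist_idx ON pts USING spgist (p);
SELECT p FROM pts ORDER BY p <-> point '(50,50)' LIMIT 10;   -- distance-ordered scan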
Discussion: https://postgr.es/m/570825e8-47d0-4732-2bf6-88d67d2d51c8%40postgrespro.ru
Author: Nikita Glukhov, Alexander Korotkov based on GSoC work by Vlad Sterzhanov
Review: Andrey Borodin, Alexander Korotkov
Per discussion, JIT isn't quite mature enough to ship enabled-by-default.
I failed to resist the temptation to do a bunch of copy-editing on the
related documentation. Also, clean up some inconsistencies in which
section of config.sgml the JIT GUCs are documented in vs. what guc.c
and postgresql.conf.sample had.
Discussion: https://postgr.es/m/20180914222657.mw25esrzbcnu6qlu@alap3.anarazel.de
The to_timestamp()/to_date() functions were introduced mainly for Oracle
compatibility, and became very popular among PostgreSQL users. However, some
behavior of the to_timestamp()/to_date() functions is both incompatible with
Oracle and confusing for our users. This behavior is related to the handling
of spaces and separators in non-FX (fixed format) mode. This commit reworks
this behavior, making it less confusing, better documented and more
compatible with Oracle (see the sketch below).
Nevertheless, the following incompatibilities with Oracle remain:
1) We don't insist that there are no format string patterns unmatched to
the input string.
2) In FX mode we don't insist that spaces and separators in the format
string exactly match the input string.
3) When format string patterns are divided by a mix of spaces and separators,
we don't distinguish them, while Oracle takes into account only the last
group of spaces/separators.
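A rough sketch of the relaxed non-FX behavior (exact results depend on
the session's time zone and DateStyle):
SELECT to_timestamp('2000    JUN', 'YYYY MON');  -- extra input spaces are skipped
SELECT to_timestamp('2000-JUN', 'YYYY MON');     -- a separator may match a space in the format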
Discussion: https://postgr.es/m/1873520224.1784572.1465833145330.JavaMail.yahoo%40mail.yahoo.com
Author: Artur Zakirov, Alexander Korotkov, Liudmila Mantrova
Review: Amul Sul, Robert Haas, Tom Lane, Dmitry Dolgov, David G. Johnston
Commit be54b3777 failed on gmake 3.80 due to a chained conditional,
which on closer examination could be removed entirely with some
refactoring elsewhere for a net simplification and more robustness
against empty expansions. Along the way, add some more comments.
Also make explicit in the documentation and comments that built
headers are not removed by 'make clean', since we don't typically want
that for headers generated by a separate ./configure step, and it's
much easier to add your own 'distclean' rule or use EXTRA_CLEAN than
to try and override a deletion rule in pgxs.mk.
Per buildfarm member prairiedog and comments by Michael Paquier, though
all the actual changes are my fault.
Since the fixes for CVE-2018-1058, we've advised people to schema-qualify
function references in order to fix failures in code that executes under
a minimal search_path setting. However, that's insufficient to make the
single-argument form of unaccent() work, because it looks up the "unaccent"
text search dictionary using the search path.
The most expedient answer seems to be to remove the search_path dependency
by making it look in the same schema that the unaccent() function itself
is declared in. This will definitely work for the normal usage of this
function with the unaccent dictionary provided by the extension.
It's barely possible that there are people who were relying on the
search-path-dependent behavior to select other dictionaries with the same
name; but if there are any such people at all, they can still get that
behavior by writing unaccent('unaccent', ...), or possibly
unaccent('unaccent'::text::regdictionary, ...) if the lookup has to be
postponed to runtime.
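So, for code running under a minimal search_path, the safe spellings now
look roughly like this (assuming the extension is installed in schema
public):
SELECT public.unaccent('Hôtel');                      -- dictionary found in the function's own schema
SELECT public.unaccent('public.unaccent', 'Hôtel');   -- explicitly schema-qualified dictionary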
Per complaint from Gunnlaugur Thor Briem. Back-patch to all supported
branches.
Discussion: https://postgr.es/m/CAPs+M8LCex6d=DeneofdsoJVijaG59m9V0ggbb3pOH7hZO4+cQ@mail.gmail.com
Commit df163230b overlooked the case that an out-of-tree extension
might need to build its header files (e.g. via ./configure). If it is
also doing a VPATH build, the HEADERS_* rules in the original commit
would then fail to find the files, since they would be looking only
under $(srcdir) and not in the build directory.
Fix by adding HEADERS_built and HEADERS_built_$(MODULE) which behave
like DATA_built in that they look in the build dir rather than the
source dir (and also make the files dependencies of the "all" target).
No Windows support appears to be needed for this, since it is only
relevant to out-of-tree builds (no support exists in Mkvcbuild.pm to
build extension header files in any case).
pg_get_object_address and pg_identify_object_as_address are supposed
to be inverses, but they disagreed as to the names of the arguments
representing the textual form of an object address. Moreover, the
documented argument names didn't agree with reality at all, either
for these functions or pg_identify_object.
In HEAD and v11, I think we can get away with renaming the input
arguments of pg_get_object_address to match the outputs of
pg_identify_object_as_address. In theory that might break queries
using named-argument notation to call pg_get_object_address, but
it seems really unlikely that anybody is doing that, or that they'd
have much trouble adjusting if they were. In older branches, we'll
just live with the lack of consistency.
Aside from fixing the documentation of these functions to match reality,
I couldn't resist the temptation to do some copy-editing.
Per complaint from Jean-Pierre Pelletier. Back-patch to 9.5 where these
functions were introduced. (Before v11, this is a documentation change
only.)
Discussion: https://postgr.es/m/CANGqjDnWH8wsTY_GzDUxbt4i=y-85SJreZin4Hm8uOqv1vzRQA@mail.gmail.com
The previous description was unclear. Also add a third example, change
use of time zone acronyms to more verbose descriptions, and add a
mention that using 'time' with AT TIME ZONE uses the current time zone
rules.
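Illustrative queries in the spirit of the revised examples (not the
exact ones added to the docs), using full time zone names rather than
acronyms:
SELECT TIMESTAMP '2019-07-01 12:00' AT TIME ZONE 'America/Chicago';
SELECT TIMESTAMP WITH TIME ZONE '2019-07-01 12:00+00' AT TIME ZONE 'Asia/Tokyo';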
Backpatch-through: 9.3
It's been true for a long time that we expect names of table and domain
constraints to be unique among the constraints of that table or domain.
However, the enforcement of that has been pretty haphazard, and it missed
some corner cases such as creating a CHECK constraint and then an index
constraint of the same name (as per recent report from André Hänsel).
Also, due to the lack of an actual unique index enforcing this, duplicates
could be created through race conditions.
Moreover, the code that searches pg_constraint has been quite inconsistent
about how to handle duplicate names if one did occur: some places checked
and threw errors if there was more than one match, while others just
processed the first match they came to.
To fix, create a unique index on (conrelid, contypid, conname). Since
either conrelid or contypid is zero, this will separately enforce
uniqueness of constraint names among constraints of any one table and any
one domain. (If we ever implement SQL assertions, and put them into this
catalog, more thought might be needed. But it'd be at least as reasonable
to put them into a new catalog; having overloaded this one catalog with
two kinds of constraints was a mistake already IMO.) This index can replace
the existing non-unique index on conrelid, though we need to keep the one
on contypid for query performance reasons.
Having done that, we can simplify the logic in various places that either
coped with duplicates or neglected to, as well as potentially improve
lookup performance when searching for a constraint by name.
Also, as per our usual practice, install a preliminary check so that you
get something more friendly than a unique-index violation report in the
case complained of by André. And teach ChooseIndexName to avoid choosing
autogenerated names that would draw such a failure.
While it's not possible to make such a change in the back branches,
it doesn't seem quite too late to put this into v11, so do so.
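A sketch of the corner case that now draws a friendly error up front
(names are made up):
CREATE TABLE t (a int CONSTRAINT t_a_check CHECK (a > 0));
ALTER TABLE t ADD CONSTRAINT t_a_check UNIQUE (a);  -- now rejected: constraint name already in use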
Discussion: https://postgr.es/m/0c1001d4428f$0942b430$1bc81c90$@webkr.de
This column was added in commit 8224de4f42 ("Indexes with INCLUDE
columns and their support in B-tree") to ease writing the ruleutils.c
supporting code for that feature, but it turns out to be unnecessary --
we can do the same thing with just one more syscache lookup.
Even the documentation for the new column being removed in this commit
is awkward.
Discussion: https://postgr.es/m/20180902165018.33otxftp3olgtu4t@alvherre.pgsql
This table claimed that JOHAB could be used as a server encoding, which
was true originally but hasn't been true since 8.3. It also lacked
entries for EUC_JIS_2004 and SHIFT_JIS_2004.
JOHAB problem noted by Lars Kanis, the others by me.
Discussion: https://postgr.es/m/c0f514a1-b7a9-b9ea-1c02-c34aead56c06@greiz-reinsdorf.de
Currently there are two ways to trigger log rotation in the logging
collector process: call the pg_rotate_logfile() SQL function or send the
SIGUSR1 signal directly to the logging collector process. However, it's
nice to have a more suitable way for external tools to do that, one which
wouldn't require an SQL connection or knowledge of the logging collector's
pid. This commit implements triggering log rotation via the
"pg_ctl logrotate" command.
Discussion: https://postgr.es/m/20180416.115435.28153375.horiguchi.kyotaro%40lab.ntt.co.jp
Author: Kyotaro Horiguchi, Alexander Kuzmenkov, Alexander Korotkov
Using -d is odd, because we normally reserve that for a database
argument, so rename it to -v and add long version --verbose.
Also, reduce it to emit one line per file checked rather than one line
per block.
Per a complaint from Michael Banck.
Author: Yugo Nagata <nagata@sraoss.co.jp>
Reviewed-by: Michael Banck <michael.banck@credativ.de>
Discussion: https://postgr.es/m/20180827113411.GA22768@nighthawk.caipicrew.dd-dns.de
Like oid2name, vacuumlo has been lacking consistency with other
utilities for its options:
- Connection options gain long aliases.
- Document environment variables which could be used: PGHOST, PGPORT and
PGUSER.
Documentation and code are reordered to be more consistent. A basic set
of TAP tests has been added along the way.
Author: Tatsuro Yamada
Reviewed-by: Michael Paquier
Discussion: https://postgr.es/m/c7e7f25c-1747-cd0f-9335-390bc97b2db5@lab.ntt.co.jp
oid2name has made little effort to keep its interface consistent with
other binary utilities:
- -H was used instead of -h/--host. This option is now marked as
deprecated, but it is still accepted to remain backward-compatible.
- -P has been removed from the code, and was still documented.
- All options gain long aliases, making connection options more similar
to other binaries.
- Document environment variables which could be used: PGHOST, PGPORT and
PGUSER.
A basic set of TAP tests is added along the way, and the documentation
is cleaned up to be more consistent with other tools.
Author: Tatsuro Yamada
Reviewed-by: Michael Paquier
Discussion: https://postgr.es/m/c7e7f25c-1747-cd0f-9335-390bc97b2db5@lab.ntt.co.jp
Mention that "Latest checkpoint location" will not match in pg_upgrade
if the standby server is still running during the upgrade, which is
possible. "Match" text first appeared in PG 9.5.
Reported-by: Paul Bonaud
Discussion: https://postgr.es/m/c7268794-edb4-1772-3bfd-04c54585c24e@trainline.com
Backpatch-through: 9.5
In 86d78ef50e I enabled configure to check for C99 support, with the
goal of checking which platforms support C99. While there are a few
machines without C99 support among our buildfarm animals,
de-supporting them for v12 was deemed acceptable.
While not tested in the aforementioned commit, the biggest increase in
minimum compiler version comes from MSVC, which gained C99 support
fairly late. The subset in MSVC 2013 is sufficient for our needs, at
this point. While that is a significant increase in minimum version,
the existing Windows binaries are already built with a new enough
version.
Make configure error out if C99 support could not be detected. For
MSVC builds, increase the minimum version to 2013.
The increase to MSVC 2013 allows us to get rid of VCBuildProject.pm,
as that was only required for MSVC 2005/2008.
Author: Andres Freund
Discussion: https://postgr.es/m/97d4b165-192d-3605-749c-f614a0c4e783@2ndquadrant.com
Historically, we looked up the target hostname in connectDBStart, so that
PQconnectPoll did not need to do DNS name resolution. The patches that
added multiple-target-host support to libpq preserved this division of
labor; but it's really nonsensical now, because it means that if any one
of the target hosts fails to resolve in DNS, the connection fails. That
negates the no-single-point-of-failure goal of the feature. Additionally,
DNS lookups aren't exactly cheap, but the code did them all even if the
first connection attempt succeeds.
Hence, rearrange so that PQconnectPoll does the lookups, and only looks
up a hostname when it's time to try that host. This does mean that
PQconnectPoll could block on a DNS lookup --- but if you wanted to avoid
that, you should be using hostaddr, as the documentation has always
specified. It seems fairly unlikely that any applications would really
care whether the lookup occurs inside PQconnectStart or PQconnectPoll.
In addition to calling out that fact explicitly, do some other minor
wordsmithing in the docs around the multiple-target-host feature.
Since this seems like a bug in the multiple-target-host feature,
backpatch to v10 where that was introduced. In the back branches,
avoid moving any existing fields of struct pg_conn, just in case
any third-party code is looking into that struct.
Tom Lane, reviewed by Fabien Coelho
Discussion: https://postgr.es/m/4913.1533827102@sss.pgh.pa.us
Since procedures are now a different thing from functions, change the
CREATE TRIGGER and CREATE EVENT TRIGGER syntax to use FUNCTION in the
clause that specifies the function. PROCEDURE is still accepted for
compatibility.
pg_dump and ruleutils.c output is not changed yet, because that would
require a change in information_schema.sql and thus a catversion change.
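In the new spelling (trigger, table and function names are illustrative):
CREATE TRIGGER audit_trg
    AFTER INSERT ON my_table
    FOR EACH ROW
    EXECUTE FUNCTION audit_fn();   -- EXECUTE PROCEDURE remains accepted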
Reported-by: Peter Geoghegan <pg@bowt.ie>
Reviewed-by: Jonathan S. Katz <jonathan.katz@excoventures.com>
Since procedures are now a different thing from functions, change the
CREATE OPERATOR syntax to use FUNCTION in the clause that specifies the
function. PROCEDURE is still accepted for compatibility.
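In the new spelling (the operator shown is illustrative):
CREATE OPERATOR === (
    LEFTARG = integer,
    RIGHTARG = integer,
    FUNCTION = int4eq    -- PROCEDURE = int4eq remains accepted
);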
Reported-by: Peter Geoghegan <pg@bowt.ie>
Reviewed-by: Jonathan S. Katz <jonathan.katz@excoventures.com>
Historically, the term procedure was used as a synonym for function in
Postgres/PostgreSQL. Now we have procedures as separate objects from
functions, so we need to clean up the documentation to not mix those
terms.
In particular, mentions of "trigger procedures" are changed to "trigger
functions", and access method "support procedures" are changed to
"support functions". (The latter already used FUNCTION in the SQL
syntax anyway.) Also, the terminology in the SPI chapter has been
cleaned up.
A few tests, examples, and code comments are also adjusted to be
consistent with documentation changes, but not everything.
Reported-by: Peter Geoghegan <pg@bowt.ie>
Reviewed-by: Jonathan S. Katz <jonathan.katz@excoventures.com>
This para is obsolete, first because nobody is using Solaris 7 anymore,
and second because if someone was, configure should catch the snprintf
buffer overrun problem automatically (since commit 9bed827b1), and third
because this is incorrect advice about how to manually force use of
snprintf.c anyway, and has been so at least since commit 3bc6bdf32.
The lack of complaints about it reinforces the conclusion that Solaris 7
no longer exists in the wild; so I don't feel a need to insert correct
advice instead.
Commit 5f374fe7a attempted to turn the connect_timeout from an overall
maximum time limit into a per-host limit, but it didn't do a great job of
that. The timer would only get restarted if we actually detected timeout
within connectDBComplete(), not if we changed our attention to a new host
for some other reason. In that case the old timeout continued to run,
possibly causing a premature timeout failure for the new host.
Fix that, and also tweak the logic so that if we do get a timeout,
we advance to the next available IP address, not to the next host name.
There doesn't seem to be a good reason to assume that all the IP
addresses supplied for a given host name will necessarily fail the
same way as the current one. Moreover, this conforms better to the
admittedly-vague documentation statement that the timeout is "per
connection attempt". I changed that to "per host name or IP address"
to be clearer. (Note that reconnections to the same server, such as for
switching protocol version or SSL status, don't get their own separate
timeout; that was true before and remains so.)
Also clarify documentation about the interpretation of connect_timeout
values less than 2.
This seems like a bug, so back-patch to v10 where this logic came in.
Tom Lane, reviewed by Fabien Coelho
Discussion: https://postgr.es/m/5735.1533828184@sss.pgh.pa.us
5262f7a4fc introduced parallel index scan. In order to estimate the
number of parallel workers, it added an extra argument to the
amcostestimate() index access method API function. However, this extra
argument was missed in the documentation. This commit fixes that.
Discussion: https://postgr.es/m/4128fdb4-8b63-2e05-38f6-3125f8c27263%40lab.ntt.co.jp
Author: Tatsuro Yamada, Alexander Korotkov
Backpatch-through: 10
Since commit 3b8f6e75f, failure to do this would lead to
submake-generated-headers not doing anything, so that references to
generated or symlinked headers would fail. Previous to that, the
omission only led to temp-install not doing anything, which apparently
affects many fewer people (doesn't anybody use "make check" in their
build rules??). Hence, backpatch to v11 but not further.
Per complaints from Christoph Berg, Jakob Egger, and others.
A database owner running a database-level REINDEX has the possibility to
also do the operation on shared system catalogs without being an owner
of them, which allows them to block resources they should not have access
to. The same goes for a schema owner. For example, PostgreSQL would go
unresponsive and could even block authentication if a lock is waited for
on pg_authid. This commit makes sure that a user running REINDEX SYSTEM,
DATABASE or SCHEMA only works on a relation if one of the following holds:
- The user is a superuser.
- The user is the table owner.
- The user is the database/schema owner, and the relation worked on
is not shared.
Robert has worded most of the documentation changes, and I have coded the
core part.
Reported-by: Lloyd Albin, Jeremy Schneider
Author: Michael Paquier, Robert Haas
Reviewed by: Nathan Bossart, Kyotaro Horiguchi
Discussion: https://postgr.es/m/152512087100.19803.12733865831237526317@wrigleys.postgresql.org
Discussion: https://postgr.es/m/20180805211059.GA2185@paquier.xyz
Backpatch-through: 11- as the current behavior has been around for a
very long time and could be disruptive for already released branches.
The amcheck documentation incorrectly claimed that its example query
verifies every catalog index in the database. In fact, the query only
verifies the 10 largest indexes (as determined by pg_class.relpages).
Adjust the description accordingly.
Backpatch: 10-, where contrib/amcheck was introduced.
There are some problems with the tls-unique channel binding type. It's not
supported by all SSL libraries, and strictly speaking it's not defined for
TLS 1.3 at all, even though at least in OpenSSL, the functions used for it
still seem to work with TLS 1.3 connections. And since we had no
mechanism to negotiate what channel binding type to use, there would be
awkward interoperability issues if a server only supported some channel
binding types. tls-server-end-point seems feasible to support with any SSL
library, so let's just stick to that.
This removes the scram_channel_binding libpq option altogether, since there
is now only one supported channel binding type.
This also removes all the channel binding tests from the SSL test suite.
They were really just testing the scram_channel_binding option, which
is now gone. Channel binding is used if both client and server support it,
so it is used in the existing tests. It would be good to have some tests
specifically for channel binding, to make sure it really is used, and for
the different combinations of a client and a server that do or do not
support it. The current set of settings we have makes it hard to write such
tests, but I did test those things manually, by disabling
HAVE_BE_TLS_GET_CERTIFICATE_HASH and/or
HAVE_PGTLS_GET_PEER_CERTIFICATE_HASH.
I also removed the SCRAM_CHANNEL_BINDING_TLS_END_POINT constant. This is a
matter of taste, but IMO it's more readable to just use the
"tls-server-end-point" string.
Refactor the checks on whether the SSL library supports the functions
needed for tls-server-end-point channel binding. Now the server won't
advertise, and the client won't choose, the SCRAM-SHA-256-PLUS variant, if
compiled with an OpenSSL version too old to support it.
In passing, add some sanity checks to verify that the chosen SASL
mechanism, SCRAM-SHA-256 or SCRAM-SHA-256-PLUS, matches whether the SCRAM
exchange used channel binding or not. For example, the client might select
the non-channel-binding variant SCRAM-SHA-256, but then use channel binding
in the SCRAM message anyway. It's harmless from a security point of view,
I believe, and I'm not sure if there are some other conditions that would
cause the connection to fail, but it seems better to be strict about these
things and check explicitly.
Discussion: https://www.postgresql.org/message-id/ec787074-2305-c6f4-86aa-6902f98485a4%40iki.fi
Remove description of commit 1944cdc98, which has now been back-patched
so it's not relevant to v11 any longer. Add descriptions of other
recent commits that seemed worth mentioning.
I marked the update as stopping at 2018-07-30, because it's unclear
whether d06eebce5 will be allowed to stay in v11, and I didn't feel like
putting effort into writing a description of it yet. If it does stay,
I think it will deserve mention in the Source Code section.
The example output of pg_replication_slots is wrong. Correct it and make
the output stable by explicitly listing columns to output.
Author: Kyotaro Horiguchi <horiguchi.kyotaro@lab.ntt.co.jp>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/20180731.190909.42582169.horiguchi.kyotaro@lab.ntt.co.jp
Before v10, we always searched ~/.pgpass using the host parameter,
and nothing else, to match to the "hostname" field of ~/.pgpass.
(However, null host or host matching DEFAULT_PGSOCKET_DIR was replaced by
"localhost".) In v10, this got broken by commit 274bb2b38, repaired by
commit bdac9836d, and broken again by commit 7b02ba62e; in the code
actually shipped, we'd search with hostaddr if both that and host were
specified --- though oddly, *not* if only hostaddr were specified.
Since this is directly contrary to the documentation, and not
backwards-compatible, it's clearly a bug.
However, the change wasn't totally without justification, even though it
wasn't done quite right, because the pre-v10 behavior has arguably been
buggy since we added hostaddr. If hostaddr is specified and host isn't,
the pre-v10 code will search ~/.pgpass for "localhost", and ship that
password off to a server that most likely isn't local at all. That's
unhelpful at best, and could be a security breach at worst.
Therefore, rather than just revert to that old behavior, let's define
the behavior as "search with host if provided, else with hostaddr if
provided, else search for localhost". (As before, a host name matching
DEFAULT_PGSOCKET_DIR is replaced by localhost.) This matches the
behavior of the actual connection code, so that we don't pick up an
inappropriate password; and it allows useful searches to happen when
only hostaddr is given.
While we're messing around here, ensure that empty elements within a
host or hostaddr list select the same behavior as a totally-empty
field would; for instance "host=a,,b" is equivalent to "host=a,/tmp,b"
if DEFAULT_PGSOCKET_DIR is /tmp. Things worked that way in some cases
already, but not consistently so, which contributed to the confusion
about what key ~/.pgpass would get searched with.
Update documentation accordingly, and also clarify some nearby text.
Back-patch to v10 where the host/hostaddr list functionality was
introduced.
Discussion: https://postgr.es/m/30805.1532749137@sss.pgh.pa.us
This allows out-of-tree PLs and similar code to get access to
definitions needed to work with extension data types.
The following existing modules now install headers: contrib/cube,
contrib/hstore, contrib/isn, contrib/ltree, contrib/seg.
Discussion: https://postgr.es/m/87y3euomjh.fsf%40news-spur.riddles.org.uk
Move out of the concurrency control chapter, where mostly only user
table locks are discussed, and move to CREATE COLLATION reference page.
Author: Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>
Author: Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>
As written, this policy constrained only the post-image not the pre-image
of rows, meaning that users could delete other users' rows or take
ownership of such rows, contrary to what the docs claimed would happen.
We need two separate policies to achieve the documented effect.
While at it, try to explain what's happening a bit more fully.
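The documented effect can be obtained with two policies along these lines
(a sketch assuming a users table with a user_name column; the exact
example in the docs may differ):
-- everyone may read all rows
CREATE POLICY user_sel_policy ON users
    FOR SELECT USING (true);
-- but a user may only insert, update or delete his or her own row
CREATE POLICY user_mod_policy ON users
    USING (user_name = current_user);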
Per report from Олег Самойлов. Back-patch to 9.5 where this was added.
Thanks to Stephen Frost for off-list discussion.
Discussion: https://postgr.es/m/3298321532002010@sas1-2b3c3045b736.qloud-c.yandex.net
This allows querying the SSL implementation used on the server side.
It's analogous to using PQsslAttribute(conn, "library") in libpq.
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Commit 5770172cb0 documented secure schema
usage, and that advice suffices for using unqualified names securely.
Document, in typeconv-func primarily, the additional issues that arise
with qualified names. Back-patch to 9.3 (all supported versions).
Reviewed by Jonathan S. Katz.
Discussion: https://postgr.es/m/20180721012446.GA1840594@rfd.leadboat.com
Until now shadowed_variables was the only plpgsql check supported by
plpgsql.extra_warnings and plpgsql.extra_errors. This patch introduces
two new checks - strict_multi_assignment and too_many_rows. Unlike
shadowed_variables, these new checks are enforced at run-time.
strict_multi_assignment checks that commands allowing multi-assignment
(for example SELECT INTO) have the same number of sources and targets.
too_many_rows checks that queries with an INTO clause return one row
exactly.
These checks are aimed at cases that are technically valid and allowed,
but are often a sign of a bug. Therefore those checks are expected to
be enabled primarily in development and testing environments.
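A rough sketch of enabling the new checks in a session (they can also be
configured in postgresql.conf):
SET plpgsql.extra_warnings TO 'strict_multi_assignment';
SET plpgsql.extra_errors   TO 'too_many_rows';

DO $$
DECLARE
    x int;
    y int;
BEGIN
    SELECT 1 INTO x, y;                              -- warns: source and target counts differ
    SELECT g INTO x FROM generate_series(1, 2) g;    -- errors: more than one row returned
END $$;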
Author: Pavel Stehule
Reviewed-by: Stephen Frost, Tomas Vondra
Discussion: https://www.postgresql.org/message-id/flat/CAFj8pRA2kKRDKpUNwLY0GeG1OqOp+tLS2yQA1V41gzuSz-hCng@mail.gmail.com
As benchmarks show, using libc's string-to-integer conversion is
pretty slow. At least part of the reason for that is that strtol[l]
have to be more generic than what largely is required inside pg.
This patch considerably speeds up int2/int4 input (int8 was already
using hand-rolled code).
Most of the existing pg_atoi callers have been converted. But as one
requires pg_atoi's custom delimiter functionality, and as it seems
likely that there's external pg_atoi users, it seems sensible to just
keep pg_atoi around.
Author: Andres Freund
Reviewed-By: Robert Haas
Discussion: https://postgr.es/m/20171208214437.qgn6zdltyq5hmjpk@alap3.anarazel.de
Previously a lot of the error messages referenced the type in the
error message itself. That requires the message to be translated
separately for each type.
Note that currently a few smallint cases continue to reference the
integer, rather than smallint, type. A later patch will create a
separate routine for 16bit input.
Author: Andres Freund
Discussion: https://postgr.es/m/20180707200158.wpqkd7rjr4jxq5g7@alap3.anarazel.de
The initial version of the included-index-column feature stated that
included columns couldn't be the same as any key column of the index.
While it'd be pretty silly to do that, since the included column would be
entirely redundant, we've never prohibited redundant index columns before
so it's not very consistent to do so here. Moreover, the prohibition
was itself badly implemented, so that it failed to reject columns that
were effectively identical but not spelled quite alike, as reported by
Aditya Toshniwal.
(Moreover, it's not hard to imagine that for some non-btree index types,
such cases would be non-silly anyhow: the index might use a lossy
representation for key columns but be able to support retrieval of the
original form of included columns.)
Hence, let's just drop the prohibition.
In passing, do some copy-editing on the documentation for the
included-column feature.
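For example, an index like the following is no longer rejected, redundant
though it may be (table and index names are made up):
CREATE TABLE t (a int, b int);
CREATE INDEX t_a_incl_idx ON t (a) INCLUDE (a, b);  -- a appears both as key and included column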
Yugo Nagata; documentation and test corrections by me
Discussion: https://postgr.es/m/CAM9w-_mhBCys4fQNfaiQKTRrVWtoFrZ-wXmDuE9Nj5y-Y7aDKQ@mail.gmail.com
A collection of typos I happened to spot while reading code, as well as
grepping for common mistakes.
Backpatch to all supported versions, as applicable, to avoid conflicts
when backporting other commits in the future.
It's more logical this way, since the new ordering matches the way the
tables are created; but in any case, the previous location of PARTITION OF
did not appear carefully chosen anyway (since it didn't match the
location in which it appears in the synopsis either, which is what we
normally do.)
In the PARTITION BY stanza, add a link to the partitioning section in
the DDL chapter, too.
Suggested-by: David G. Johnston
Discussion: https://postgr.es/m/CAKFQuwY4Ld7ecxL_KAmaxwt0FUu5VcPPN2L4dh+3BeYbrdBa5g@mail.gmail.com
Update links that resulted in redirects. Most are changes from http to
https, but there are also some other minor edits. (There are still some
redirects where the target URL looks less elegant than the one we
currently have. I have left those as is.)
lca_inner() wasn't prepared for the possibility of getting no inputs.
Fix that, and make some cosmetic improvements to the code while at it.
Also, I thought the documentation of this function as returning the
"longest common prefix" of the paths was entirely misleading; it really
returns a path one shorter than the longest common prefix, for the typical
definition of "prefix". Don't use that term in the docs, and adjust the
examples to clarify what really happens.
This has been broken since its beginning, so back-patch to all supported
branches.
Per report from Hailong Li. Thanks to Pierre Ducroquet for diagnosing
and for the initial patch, though I whacked it around some and added
test cases.
Discussion: https://postgr.es/m/5b0d8e4f-f2a3-1305-d612-e00e35a7be66@qunar.com
Now that the documentation sources are in XML rather than SGML, some of
the documentation about the editor, or more specifically Emacs, setup
needs updating. The updated instructions recommend using nxml-mode,
which works mostly out of the box, with some small tweaks in
emacs.samples and .dir-locals.el.
Also remove some obsolete stuff in .dir-locals.el. I did, however,
leave the sgml-mode settings in there so that someone using Emacs
without emacs.samples gets those settings when editing a *.sgml file.
Starting and aborting transactions in security definer procedures
doesn't work. StartTransaction() insists that the security context
stack is empty, so this would currently cause a crash, and
AbortTransaction() resets it. This could be made to work by
reorganizing the code, but right now we just prohibit it.
Reported-by: amul sul <sulamul@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/CAAJ_b96Gupt_LFL7uNyy3c50-wbhA68NUjiK5%3DrF6_w%3Dpq_T%3DQ%40mail.gmail.com
Explain that you can use any integer expression for the "count" in
pl/pgsql's versions of FETCH/MOVE, unlike the SQL versions which only
allow a constant.
Remove the duplicate version of this para under MOVE. I don't see
a good reason to maintain two identical paras when we just said that
MOVE works exactly like FETCH.
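A small sketch of the PL/pgSQL form with a non-constant count (cursor and
variable names are made up):
DO $$
DECLARE
    curs CURSOR FOR SELECT g FROM generate_series(1, 100) AS g;
    n int := 42;
    val int;
BEGIN
    OPEN curs;
    FETCH ABSOLUTE n FROM curs INTO val;   -- the count is an expression, not just a constant
    RAISE NOTICE 'row %: %', n, val;
    CLOSE curs;
END $$;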
Per Pavel Stehule, though I didn't use his text.
Discussion: https://postgr.es/m/CAFj8pRAcvSXcNdUGx43bOK1e3NNPbQny7neoTLN42af+8MYWEA@mail.gmail.com
Commit ff4f88916 adjusted the code to enforce the SQL spec's requirement
that a window using GROUPS mode must have an ORDER BY clause. But I missed
that the documentation explicitly said you didn't have to have one.
Also minor wordsmithing in the window-function section of select.sgml.
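For reference, a window specification in GROUPS mode now requires an
ORDER BY, along these lines (a made-up query):
SELECT x,
       sum(x) OVER (ORDER BY x GROUPS BETWEEN 1 PRECEDING AND 1 FOLLOWING)
  FROM generate_series(1, 5) AS t(x);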
Per Masahiko Sawada, though I didn't use his patch.
Such replication slots are physical slots freshly created without WAL
being reserved (the default behavior), and which have not yet been used
to retain WAL. This prevents advancing a slot to a position older than
any available WAL, which could falsify calculations for WAL segment
recycling.
This also cleans up the code a bit, as ReplicationSlotRelease() would be
called on ERROR, and improves error messages.
Reported-by: Kyotaro Horiguchi
Author: Michael Paquier
Reviewed-by: Andres Freund, Álvaro Herrera, Kyotaro Horiguchi
Discussion: https://postgr.es/m/20180626071305.GH31353@paquier.xyz
PostgreSQL nowadays offers some kind of dynamic shared memory feature on
all supported platforms. Having the choice of "none" prevents us from
relying on DSM in core features. So this patch removes the choice of
"none".
Author: Kyotaro Horiguchi <horiguchi.kyotaro@lab.ntt.co.jp>
This is an option consistent with what pg_dump and pg_basebackup provide,
which is useful for limiting the I/O effort when testing things; it is
not meant to be used in a production environment.
Author: Michael Paquier
Reviewed-by: Heikki Linnakangas
Discussion: https://postgr.es/m/20180325122607.GB3707@paquier.xyz
If a view references a foreign table, and the foreign table has a
BEFORE INSERT trigger, then it's possible for a tuple inserted or
updated through the view to be changed such that it violates the
view's WITH CHECK OPTION constraint.
Before this commit, postgres_fdw handled this case inconsistently. A
RETURNING clause on the INSERT or UPDATE statement targeting the view
would cause the finally-inserted tuple to be read back, and the WITH
CHECK OPTION violation would throw an error. But without a RETURNING
clause, postgres_fdw would not read the final tuple back, and WITH
CHECK OPTION would not throw an error for the violation (or may throw
an error when there is no real violation). AFTER ROW triggers on the
foreign table had a similar effect as a RETURNING clause on the INSERT
or UPDATE statement.
To fix, this commit retrieves the attributes needed to enforce the
WITH CHECK OPTION constraint along with the attributes needed for the
RETURNING clause (if any) from the remote side. Thus, the WITH CHECK
OPTION constraint is always evaluated against the final tuple after
any triggers on the remote side.
This fix may be considered inconsistent with CHECK constraints
declared on foreign tables, which are not enforced locally at all
(because the constraint is on a remote object). The discussion
concluded that this difference is reasonable, because the WITH CHECK
OPTION is a constraint on the local view (not any remote object);
therefore it only makes sense to enforce its WITH CHECK OPTION
constraint locally.
Author: Etsuro Fujita
Reviewed-by: Arthur Zakirov, Stephen Frost
Discussion: https://www.postgresql.org/message-id/7eb58fab-fd3b-781b-ac33-f7cfec96021f%40lab.ntt.co.jp
When performing pg_rewind, the presence of a read-only file which is not
accessible for writes will cause a failure while processing. This can
cause the control file of the target data folder to be truncated,
causing it to not be reusable with a subsequent run.
Also, when pg_rewind fails mid-flight, there is likely no way to recover
the target data folder anyway, in which case a new base backup is the
best option. A note about this is added in the documentation as well.
Reported-by: Christian H.
Author: Michael Paquier
Reviewed-by: Andrew Dunstan
Discussion: https://postgr.es/m/20180104200633.17004.16377%40wrigleys.postgresql.org
Prefer to use phrases like "child" instead of "partition" when
describing the legacy inheritance-based partitioning. The word
"partition" now has a fixed meaning for the built-in partitioning, so
keeping it out of the documentation of the old method makes things
clearer.
Author: Justin Pryzby <pryzby@telsasoft.com>
This section once started out small but has now grown quite a bit and
needs a bit of structure.
Rewrite as a list, add documentation of EXCLUDING, and improve the
documentation of INCLUDING ALL instead of just listing all the options
again.
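For instance, the rewritten section covers spellings along these lines
(table names are made up):
CREATE TABLE new_t (LIKE old_t INCLUDING ALL EXCLUDING INDEXES);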
Per report from Yugo Nagata that EXCLUDING was not documented; that part
was reviewed by Daniel Gustafsson. Most of the rewrite was by me.
This has visibly been a forgotten spot in the first implementation of
wait events for I/O added by 249cf07; what has been missing is an fsync
call for WAL segments, which is a wrapper reacting to the value of the
GUC wal_sync_method.
Reported-by: Konstantin Knizhnik
Author: Konstantin Knizhnik
Reviewed-by: Craig Ringer, Michael Paquier
Discussion: https://postgr.es/m/4a243897-0ad8-f471-aa40-242591f2476e@postgrespro.ru
When these programs call pg_catalog.set_config, they need to check for
PGRES_TUPLES_OK instead of PGRES_COMMAND_OK. Fix for
5770172cb0.
Reported-by: Ideriha, Takeshi <ideriha.takeshi@jp.fujitsu.com>