Investigation of 2d2d06b7e2 revealed that
identity values were not applied in some further cases, including
logical replication subscribers, VALUES RTEs, and ALTER TABLE ... ADD
COLUMN. To fix all that, apply the identity column expression in
build_column_default() instead of repeating the same logic at each call
site.
For ALTER TABLE ... ADD COLUMN ... IDENTITY, the previous coding
completely ignored that existing rows for the new column should have
values filled in from the identity sequence. The coding using
build_column_default() fails for this because the sequence ownership
isn't registered until after ALTER TABLE, and we can't do it before
because we don't have the column in the catalog yet. So we specially
remember in ColumnDef the sequence name that we decided on and build a
custom NextValueExpr using that.
Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
The old code always generated a constraint of the form
col = ANY(ARRAY[val1, val2, ...]), but that's invalid when col is an
array type. Instead, generate col = val when there's only one value,
col = val1 OR col = val2 OR ... when there are multiple values and
col is of array type, and the old form when there are multiple values
and col is not of an array type.
As a side benefit, this makes constraint exclusion able to prune
a list partition declared to accept a single Boolean value, which
didn't work before.
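For illustration, a sketch of the newly-prunable case (table names are
invented here, not taken from the patch):

CREATE TABLE flags (f bool) PARTITION BY LIST (f);
CREATE TABLE flags_true PARTITION OF flags FOR VALUES IN (true);
-- The implied partition constraint now uses the form "f = true"
-- rather than "f = ANY(ARRAY[true])", so constraint exclusion can
-- prune flags_true for a query such as:
SELECT * FROM flags WHERE f = false;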
Amit Langote, reviewed by Etsuro Fujita
Discussion: http://postgr.es/m/97267195-e235-89d1-a41a-c110198dfce9@lab.ntt.co.jp
The changes in b81b5a96f4 did not fully
address the issue, because the bit-mixing of the IV into the final
hash-key didn't prevent clustering in the input data from surviving in
the output data.
This didn't cause a lot of problems because of the additional growth
conditions added in d4c62a6b62. But as we
want to rein those in due to explosive growth in some edge cases, this
needs to be fixed.
Author: Andres Freund
Discussion: https://postgr.es/m/20171127185700.1470.20362@wrigleys.postgresql.org
Backpatch: 10, where simplehash was introduced
Commit 09529a70b changed nodeIndexscan.c and nodeIndexonlyscan.c to
postpone initialization of the indexscan proper until the first tuple
fetch. It overlooked the question of mark/restore behavior, which means
that if some caller attempts to mark the scan before the first tuple fetch,
you get a null pointer dereference.
The only existing user of mark/restore is nodeMergejoin.c, which (somewhat
accidentally) will never attempt to set a mark before the first inner tuple
unless the inner child node is a Material node. Hence the case can't arise
normally, so it seems sufficient to document the assumption at both ends.
However, during an EvalPlanQual recheck, ExecScanFetch doesn't call
IndexNext but just returns the jammed-in test tuple. Therefore, if we're
doing a recheck in a plan tree with a mergejoin with inner indexscan,
it's possible to reach ExecIndexMarkPos with iss_ScanDesc still null,
as reported by Guo Xiang Tan in bug #15032.
Really, when there's a test tuple supplied during an EPQ recheck, touching
the index at all is the wrong thing: rather, the behavior of mark/restore
ought to amount to saving and restoring the es_epqScanDone flag. We can
avoid finding a place to actually save the flag, for the moment, because
given the assumption that no caller will set a mark before fetching a
tuple, es_epqScanDone must always be set by the time we try to mark.
So the actual behavior change required is just to not reach the index
access if a test tuple is supplied.
The plan node types that need to consider this issue are those
that support EPQ test tuples (i.e., call ExecScan()) and also support
mark/restore; which is to say, IndexScan, IndexOnlyScan, and perhaps
CustomScan. It's tempting to try to fix the problem in one place by
teaching ExecMarkPos() itself about EPQ; but ExecMarkPos supports some
plan types that aren't Scans, and also it seems risky to make assumptions
about what a CustomScan wants to do here. Also, the most likely future
change here is to decide that we do need to support marks placed before
the first tuple, which would require additional work in IndexScan and
IndexOnlyScan in any case. Hence, fix the EPQ issue in nodeIndexscan.c
and nodeIndexonlyscan.c, accepting the small amount of code duplicated
thereby, and leave it to CustomScan providers to fix this bug if they
have it.
Back-patch to v10 where commit 09529a70b came in. In earlier branches,
the index_markpos() call is a waste of cycles when EPQ is active, but
no more than that, so it doesn't seem appropriate to back-patch further.
Discussion: https://postgr.es/m/20180126074932.3098.97815@wrigleys.postgresql.org
We have a lot of code in which option names, which from the user's
viewpoint are logically keywords, are passed through the grammar as plain
identifiers, and then matched to string literals during command execution.
This approach avoids making words into lexer keywords unnecessarily. Some
places matched these strings using plain strcmp, some using pg_strcasecmp.
But the latter should be unnecessary since identifiers would have been
downcased on their way through the parser. Aside from any efficiency
concerns (probably not a big factor), the lack of consistency in this area
creates a hazard of subtle bugs due to different places coming to different
conclusions about whether two option names are the same or different.
Hence, standardize on using strcmp() to match any option names that are
expected to have been fed through the parser.
This does create a user-visible behavioral change, which is that while
formerly all of these would work:
alter table foo set (fillfactor = 50);
alter table foo set (FillFactor = 50);
alter table foo set ("fillfactor" = 50);
alter table foo set ("FillFactor" = 50);
now the last case will fail because that double-quoted identifier is
different from the others. However, none of our documentation says that
you can use a quoted identifier in such contexts at all, and we should
discourage doing so since it would break if we ever decide to parse such
constructs as true lexer keywords rather than poor man's substitutes.
So this shouldn't create a significant compatibility issue for users.
Daniel Gustafsson, reviewed by Michael Paquier, small changes by me
Discussion: https://postgr.es/m/29405B24-564E-476B-98C0-677A29805B84@yesql.se
get_relation_info() was too optimistic about opening indexes in
partitioned tables, which would raise errors when any queries were
planned on such tables. Fix by ignoring any indexes of the partitioned
kind.
CLUSTER (and ALTER TABLE CLUSTER ON) had a similar problem. Fix by
disallowing these commands in partitioned tables.
Fallout from 8b08f7d482.
Connection strings can have items with spaces in them, wrapped in
quotes. The tests however ran a SELECT '$connstr' upon connection which
broke on the embedded quotes. Use dollar quotes on the connstr to
protect against this. This was hit during the development of the macOS
Secure Transport patch, but is independent of it.
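For instance (connection string contents invented for illustration),
a connstr containing a quoted value breaks when interpolated into a
single-quoted literal, but survives dollar quoting:

-- broken: the embedded quotes terminate the literal early
--   SELECT 'host=localhost application_name='my app'';
-- safe:
SELECT $$host=localhost application_name='my app'$$;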
Author: Daniel Gustafsson <daniel@yesql.se>
record_image_eq was covered a bit by the materialized view code that it
is meant to support, but record_image_cmp was not tested at all.
While we're here, add more tests to record_eq and record_cmp as well,
for symmetry.
Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
If we're inside a lateral subquery, there may be no unparameterized paths
for a particular child relation of an appendrel, in which case we *must*
be able to create similarly-parameterized paths for each other child
relation, else the planner will fail with "could not devise a query plan
for the given query". This means that there are situations where we'd
better be able to reparameterize at least one path for each child.
This calls into question the assumption in reparameterize_path() that
it can just punt if it feels like it. However, the only case that is
known broken right now is where the child is itself an appendrel so that
all its paths are AppendPaths. (I think possibly I disregarded that in
the original coding on the theory that nested appendrels would get folded
together --- but that only happens *after* reparameterize_path(), so it's
not excused from handling a child AppendPath.) Given that this code's been
like this since 9.3 when LATERAL was introduced, it seems likely we'd have
heard of other cases by now if there were a larger problem.
Per report from Elvis Pranskevichus. Back-patch to 9.3.
Discussion: https://postgr.es/m/5981018.zdth1YWmNy@hammer.magicstack.net
When an UPDATE causes a row to no longer match the partition
constraint, try to move it to a different partition where it does
match the partition constraint. In essence, the UPDATE is split into
a DELETE from the old partition and an INSERT into the new one. This
can lead to surprising behavior in concurrency scenarios because
EvalPlanQual rechecks won't work as they normally did; the known
problems are documented. (There is a pending patch to improve the
situation further, but it needs more review.)
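For example (schema invented for illustration):

CREATE TABLE t (a int) PARTITION BY LIST (a);
CREATE TABLE t1 PARTITION OF t FOR VALUES IN (1);
CREATE TABLE t2 PARTITION OF t FOR VALUES IN (2);
INSERT INTO t VALUES (1);
-- Previously this failed with a partition constraint violation;
-- now the row is moved from t1 to t2:
UPDATE t SET a = 2 WHERE a = 1;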
Amit Khandekar, reviewed and tested by Amit Langote, David Rowley,
Rajkumar Raghuwanshi, Dilip Kumar, Amul Sul, Thomas Munro, Álvaro
Herrera, Amit Kapila, and me. A few final revisions by me.
Discussion: http://postgr.es/m/CAJ3gD9do9o2ccQ7j7+tSgiE1REY65XRiMb=yJO3u3QhyP8EEPQ@mail.gmail.com
AclObjectKind was basically just another enumeration for object types,
and we already have a preferred one for that. It's only used in
aclcheck_error. By using ObjectType instead, we can also give some more
precise error messages, for example "index" instead of "relation".
Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
When CREATE INDEX is run on a partitioned table, create catalog entries
for an index on the partitioned table (which is just a placeholder since
the table proper has no data of its own), and recurse to create actual
indexes on the existing partitions; create them in future partitions
also.
As a convenience gadget, if the new index definition matches some
existing index in partitions, these are picked up and used instead of
creating new ones. Whichever way these indexes come about, they become
attached to the index on the parent table and are dropped alongside it,
and cannot be dropped in isolation unless they are detached first.
To support pg_dump'ing these indexes, add commands
CREATE INDEX ON ONLY <table>
(which creates the index on the parent partitioned table, without
recursing) and
ALTER INDEX ATTACH PARTITION
(which is used after the indexes have been created individually on each
partition, to attach them to the parent index). These reconstruct prior
database state exactly.
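Roughly, the dumped SQL now looks like this (names invented):

CREATE INDEX parent_idx ON ONLY parent (col);
CREATE INDEX part1_col_idx ON part1 (col);
ALTER INDEX parent_idx ATTACH PARTITION part1_col_idx;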
Reviewed-by: (in alphabetical order) Peter Eisentraut, Robert Haas, Amit
Langote, Jesper Pedersen, Simon Riggs, David Rowley
Discussion: https://postgr.es/m/20171113170646.gzweigyrgg6pwsg4@alvherre.pgsql
This was hardly tested at all. The trigger case was lightly tested by
the logical replication tests, but rules and event triggers were not
tested at all.
If a query against an inheritance tree runs concurrently with an ALTER
TABLE that's disinheriting one of the tree members, it's possible to get
a "could not find inherited attribute" error because after obtaining lock
on the removed member, make_inh_translation_list sees that its columns
have attinhcount=0 and decides they aren't the columns it's looking for.
An ideal fix, perhaps, would avoid including such a just-removed member
table in the query at all; but there seems no way to accomplish that
without adding expensive catalog rechecks or creating a likelihood of
deadlocks. Instead, let's just drop the check on attinhcount. In this
way, a query that's included a just-disinherited child will still
succeed, which is not a completely unreasonable behavior.
This problem has existed for a long time, so back-patch to all supported
branches. Also add an isolation test verifying related behaviors.
Patch by me; the new isolation test is based on Kyotaro Horiguchi's work.
Discussion: https://postgr.es/m/20170626.174612.23936762.horiguchi.kyotaro@lab.ntt.co.jp
If we flatten a subquery whose target list contains constants or
expressions, and those output columns are used in GROUPING SET columns,
the planner was capable of doing the wrong thing by merging a pulled-up
expression into the surrounding expression during const-simplification.
Then the late processing that attempts to match subexpressions to
grouping sets would fail to find a match, with the effect that those
subexpressions would not go to null when expected.
To fix, wrap such subquery outputs in PlaceHolderVars, ensuring that
they preserve their separate identity throughout the planner's expression
processing. This is a bit of a band-aid, because the wrapper defeats
const-simplification even in places where it would be safe to allow.
But a nicer fix would likely be too invasive to back-patch, and the
consequences of the missed optimizations probably aren't large in most
cases.
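A sketch of the kind of query affected (table and column names
invented):

SELECT a, c
  FROM (SELECT a, 'const'::text AS c FROM tbl) s
  GROUP BY GROUPING SETS (a, c);
-- In the result rows grouped only by a, column c must read as NULL;
-- before this fix the pulled-up constant could fail to go to null.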
Back-patch to 9.5 where grouping sets were introduced.
Heikki Linnakangas, with small mods and better test cases by me;
additional review by Andrew Gierth
Discussion: https://postgr.es/m/7dbdcf5c-b5a6-ef89-4958-da212fe10176@iki.fi
This was nearly the same code. Extend wait_for_catchup to allow waiting
for pg_current_wal_lsn() and use that in the subscription tests. Also
change one use in the pg_rewind tests to use this.
Also remove some broken code in wait_for_catchup and
wait_for_slot_catchup. The error message in case the waiting failed
wanted to show the current LSN, but the way it was written never
worked. So since nobody ever cared, just remove it.
Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
These are compatible with Oracle and required for the datetime template
language for jsonpath in an upcoming patch.
Nikita Glukhov and Andrew Dunstan, reviewed by Pavel Stehule.
Commit ab7271677 introduced code that attempts to order the child
scans of a Parallel Append node in a way that will minimize execution
time, based on total cost and startup cost. However, it failed to
think hard about what to do when estimated costs are exactly equal,
a case that's particularly likely to occur when comparing on startup
cost. In such a case the ordering of the child paths would be left
to the whims of qsort, an algorithm that isn't even stable.
We can improve matters by applying the rule used elsewhere in the
planner: if total costs are equal, sort on startup cost, and
vice versa. When both cost estimates are exactly equal, rather
than letting qsort do something unpredictable, sort based on the
child paths' relids, which should typically result in sorting in
inheritance order. (The latter provision requires inventing a
qsort-style comparator for bitmapsets, but maybe we'll have use
for that for other reasons in future.)
This results in a few plan changes in the select_parallel test,
but those all look more reasonable than before, when the actual
underlying cost numbers are taken into account.
Discussion: https://postgr.es/m/4944.1515446989@sss.pgh.pa.us
Add a function to TestLib that allows us to check pg_config.h and then
decide the expected test outcome based on that.
Author: Michael Paquier <michael.paquier@gmail.com>
This adds a second standard channel binding type for SCRAM. It is
mainly intended for third-party clients that cannot implement
tls-unique, for example JDBC.
Author: Michael Paquier <michael.paquier@gmail.com>
Add some infrastructure (mostly macros) to make it easier to write
typical cases for constant-expression simplification. Add simplification
processing for ArrayRef, RowExpr, and ScalarArrayOpExpr node types,
which formerly went unsimplified even if all their inputs were constants.
Also teach it to simplify FieldSelect from a composite constant.
Make use of the new infrastructure to reduce the amount of code needed
for the existing ArrayExpr and ArrayCoerceExpr cases.
One existing test case changes output as a result of the fact that
RowExpr can now be folded to a constant. All the new code is exercised
by existing test cases according to gcov, so I feel no need to add
additional tests.
Tom Lane, reviewed by Dmitry Dolgov
Discussion: https://postgr.es/m/3be3b82c-e29c-b674-2163-bf47d98817b1@iki.fi
While ldaptls=1 provides an RFC 4513 conforming way to do LDAP
authentication with TLS encryption, there was an earlier de facto
standard way to do LDAP over SSL called LDAPS. Even though it's not
enshrined in a standard, it's still widely used and sometimes required
by organizations' network policies. There seems to be no reason not to
support it when available in the client library. Therefore, add support
when using OpenLDAP 2.4+ or Windows. It can be configured with
ldapscheme=ldaps or ldapurl=ldaps://...
Add tests for both ways of requesting LDAPS and a test for the
pre-existing ldaptls=1. Modify the 001_auth.pl test for "diagnostic
messages", which was previously relying on the server rejecting
ldaptls=1.
Author: Thomas Munro
Reviewed-By: Peter Eisentraut
Discussion: https://postgr.es/m/CAEepm=1s+pA-LZUjQ-9GQz0Z4rX_eK=DFXAF1nBQ+ROPimuOYQ@mail.gmail.com
Multiple sessions doing CREATE INDEX CONCURRENTLY simultaneously are
supposed to be able to work in parallel, as evidenced by fixes in commit
c3d09b3bd2 specifically to support this case. In reality, one of the
sessions would be aborted by a mysterious "deadlock detected" error.
Jeff Janes diagnosed that this is because of leftover snapshots used for
system catalog scans -- this was broken by 8aa3e47510 keeping track of
(registering) the catalog snapshot. To fix the deadlocks, it's enough
to de-register that snapshot prior to waiting.
Backpatch to 9.4, which introduced MVCC catalog scans.
Include an isolationtester spec that 8 out of 10 times reproduces the
deadlock with the unpatched code for me (Álvaro).
Author: Jeff Janes
Diagnosed-by: Jeff Janes
Reported-by: Jeremy Finzel
Discussion: https://postgr.es/m/CAMa1XUhHjCv8Qkx0WOr1Mpm_R4qxN26EibwCrj0Oor2YBUFUTg%40mail.gmail.com
Otherwise, the test fails with "Timed out while waiting for standby to
catch up". This happened rarely, perhaps only when autovacuum wrote WAL
between our choosing the recovery target and choosing the LSN to await.
Commit b26f7fa6ae fixed one case of this.
Fix two more. Back-patch to 9.6, which introduced the affected test.
Discussion: https://postgr.es/m/20180101055227.GA2952815@rfd.leadboat.com
I noticed that our code coverage report showed considerable deficiency
in test coverage for PL/pgSQL control statements. Notably, both
exec_stmt_block and most of the loop control statements had very poor
coverage of handling of return/exit/continue result codes from their
child statements; and exec_stmt_fori was seriously lacking in feature
coverage, having no test that exercised its BY or REVERSE features,
nor verification that its overflow defenses work.
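For instance, the BY and REVERSE features of integer FOR loops can be
exercised with something like this (a sketch, not the exact new test):

DO $$
BEGIN
  FOR i IN REVERSE 10..1 BY 3 LOOP
    RAISE NOTICE 'i = %', i;  -- emits 10, 7, 4, 1
  END LOOP;
END
$$;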
Now that we have some infrastructure for plpgsql-specific test scripts,
the natural thing to do is make a new script rather than further extend
plpgsql.sql. So I created a new script plpgsql_control.sql with the
charter to test plpgsql control structures, and moved a few existing
tests there because they fell entirely under that charter. I then
added new test cases that exercise the bits of code complained of above.
Of the five kinds of loop statements, only exec_stmt_while's result code
handling is fully exercised by these tests. That would be a deficiency
as things stand, but a follow-on commit will merge the loop statements'
result code handling into one implementation. So testing each usage of
that implementation separately seems redundant.
In passing, also add a couple test cases to plpgsql.sql to more fully
exercise plpgsql's code related to expanded arrays --- I had thought
that area was sufficiently covered already, but the coverage report
showed a couple of un-executed code paths.
Discussion: https://postgr.es/m/26314.1514670401@sss.pgh.pa.us
The polygon opclass uses the compress-method feature of SP-GiST added
earlier; for now it is the only operator class using this feature.
SP-GiST actually indexes the bounding boxes of the input polygons, so
some of the supported operations are lossy. The opclass reuses most
methods of the corresponding SP-GiST opclass over boxes, treating
bounding boxes as points in 4D space.
Bump catalog version.
Authors: Nikita Glukhov, Alexander Korotkov, with minor editorialization by me
Reviewed-By: all authors + Darafei Praliaskouski
Discussion: https://www.postgresql.org/message-id/flat/54907069.1030506@sigaev.ru
Since 9.4, we've allowed the syntax "select union select" and variants
of that. However, the planner wasn't expecting a no-column set operation
and ended up treating the set operation as if it were UNION ALL.
Turns out it's trivial to fix in v10 and later; we just need to be careful
about not generating a Sort node with no sort keys. However, since a weird
corner case like this is never going to be exercised by developers, we'd
better have thorough regression tests if we want to consider it supported.
Per report from Victor Yegorov.
Discussion: https://postgr.es/m/CAGnEbojGJrRSOgJwNGM7JSJZpVAf8xXcVPbVrGdhbVEHZ-BUMw@mail.gmail.com
Various Perl scripts we use to generate files were in the habit of
printing things like "generated by $0" into their output files.
That looks like a fine idea at first glance, but it results in
non-reproducible output, because in VPATH builds $0 won't be just
the name of the script file, but a full path for it. We'd prefer
that you get identical results whether using VPATH or not, so this
is a bad thing.
Some of these places also printed their input file name(s), causing
an additional hazard of the same type.
Hence, establish a policy that thou shalt not print $0, nor input file
pathnames, into output files (they're still allowed in error messages,
though). Instead just write the script name verbatim. While we are at
it, we can make these annotations more useful by giving the script's
full relative path name within the PG source tree, eg instead of
Gen_fmgrtab.pl let's print src/backend/utils/Gen_fmgrtab.pl.
Not all of the changes made here actually affect any files shipped
in finished tarballs today, but it seems best to apply the policy
everyplace so that nobody copies unsafe code into places where it
could matter.
Christoph Berg and Tom Lane
Discussion: https://postgr.es/m/20171215102223.GB31812@msg.df7cb.de
Introduce parallel-aware hash joins that appear in EXPLAIN plans as Parallel
Hash Join with Parallel Hash. While hash joins could already appear in
parallel queries, they were previously always parallel-oblivious and had a
partial subplan only on the outer side, meaning that the work of the inner
subplan was duplicated in every worker.
After this commit, the planner will consider using a partial subplan on the
inner side too, using the Parallel Hash node to divide the work over the
available CPU cores and combine its results in shared memory. If the join
needs to be split into multiple batches in order to respect work_mem, then
workers process different batches as much as possible and then work together
on the remaining batches.
The advantages of a parallel-aware hash join over a parallel-oblivious hash
join used in a parallel query are that it:
* avoids wasting memory on duplicated hash tables
* avoids wasting disk space on duplicated batch files
* divides the work of building the hash table over the CPUs
One disadvantage is that there is some communication between the participating
CPUs which might outweigh the benefits of parallelism in the case of small
hash tables. This is avoided by the planner's existing reluctance to supply
partial plans for small scans, but it may be necessary to estimate
synchronization costs in future if that situation changes. Another is that
outer batch 0 must be written to disk if multiple batches are required.
A potential future advantage of parallel-aware hash joins is that right and
full outer joins could be supported, since there is a single set of matched
bits for each hashtable, but that is not yet implemented.
A new GUC enable_parallel_hash is defined to control the feature, defaulting
to on.
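A sketch of what such a plan can look like (relation names invented;
the exact shape depends on costing):

EXPLAIN (COSTS OFF)
SELECT count(*) FROM fact f JOIN dim d USING (id);
--  Finalize Aggregate
--    ->  Gather
--          Workers Planned: 2
--          ->  Partial Aggregate
--                ->  Parallel Hash Join
--                      Hash Cond: (f.id = d.id)
--                      ->  Parallel Seq Scan on fact f
--                      ->  Parallel Hash
--                            ->  Parallel Seq Scan on dim d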
Author: Thomas Munro
Reviewed-By: Andres Freund, Robert Haas
Tested-By: Rafia Sabih, Prabhat Sahu
Discussion:
https://postgr.es/m/CAEepm=2W=cOkiZxcg6qiFQP-dHUe09aqTrEMM7yJDrHMhDv_RA@mail.gmail.com
https://postgr.es/m/CAEepm=37HKyJ4U6XOLi=JgfSHM3o6B-GaeO-6hkOmneTDkH+Uw@mail.gmail.com
When a Gather or Gather Merge node is started and stopped multiple
times, accumulate instrumentation data only once, at the end, instead
of after each execution, to avoid recording inflated totals.
Commit 778e78ae9f, the previous attempt
at a fix, instead reset the state after every execution, which worked
for the general instrumentation data but had problems for the additional
instrumentation specific to Sort and Hash nodes.
Report by hubert depesz lubaczewski. Analysis and fix by Amit Kapila,
following a design proposal from Thomas Munro, with a comment tweak
by me.
Discussion: http://postgr.es/m/20171127175631.GA405@depesz.com
This parameter can be used to enforce the channel binding type used
during a SCRAM authentication. This can be useful to check code paths
where an invalid channel binding type is used by a client and will be
even more useful to allow testing other channel binding types when they
are added.
The default value is tls-unique, which is what RFC 5802 specifies.
Clients can optionally specify an empty value, which has the effect of
not using channel binding, with SCRAM-SHA-256 as the chosen SASL mechanism.
More tests for SCRAM and channel binding are added to the SSL test
suite.
Author: Michael Paquier <michael.paquier@gmail.com>
Don't write to stdin of a psql process that could have already exited
with an authentication failure. Buildfarm members crake and mandrill
have failed once by doing so. Ignore SIGPIPE in all TAP tests.
Back-patch to v10, where these tests were introduced.
Reviewed by Michael Paquier.
Discussion: https://postgr.es/m/20171209210203.GC3362632@rfd.leadboat.com
Previously it was possible that a tuple was not pruned during vacuum,
even though its update xmax (i.e. the updating xid in a multixact with
both key share lockers and an updater) was below the cutoff horizon.
As the freezing code assumed, rightly so, that that's not supposed to
happen, xmax would be preserved (as a member of a new multixact or
xmax directly). That causes two problems: For one the tuple is below
the xmin horizon, which can cause problems if the clog is truncated or
once there's an xid wraparound. The bigger problem is that that will
break HOT chains, which in turn can lead to two kinds of breakage: first,
failing index lookups, which in turn can e.g. lead to constraints being
violated; second, future hot prunes / vacuums can end up making
invisible tuples visible again. There are other harmful scenarios.
Fix the problem by recognizing that tuples can be DEAD instead of
RECENTLY_DEAD, even if the multixactid has alive members, if the
update_xid is below the xmin horizon. That's safe because newer
versions of the tuple will contain the locking xids.
A followup commit will harden the code somewhat against future similar
bugs and already corrupted data.
Author: Andres Freund, with changes by Alvaro Herrera
Reported-By: Daniel Wood
Analyzed-By: Andres Freund, Alvaro Herrera, Robert Haas, Peter
Geoghegan, Daniel Wood, Yi Wen Wong, Michael Paquier
Reviewed-By: Alvaro Herrera, Robert Haas, Michael Paquier
Discussion:
https://postgr.es/m/E5711E62-8FDF-4DCA-A888-C200BF6B5742@amazon.com
https://postgr.es/m/20171102112019.33wb7g5wp4zpjelu@alap3.anarazel.de
Backpatch: 9.3-
The plpgsql.sql test file in the main regression tests is now by far the
largest after numeric_big, making editing and managing the test cases
very cumbersome. The other PLs have their own test suites split up into
smaller files by topic. It would be nice to have that for plpgsql as
well. So, to get that started, set up test infrastructure in
src/pl/plpgsql/src/ and split out the recently added procedure test
cases into a new file there. That file now mirrors the test cases added
to the other PLs, making managing those matching tests a bit easier too.
msvc build system changes with help from Michael Paquier
The test added by commit 390d58135 turns out to have different output
in CLOBBER_CACHE_ALWAYS builds: there's an extra CONTEXT line in the
error message as a result of detecting the error at a different place.
Possibly we should do something to make that more consistent. But as
a stopgap measure to make the buildfarm green again, adjust the test
to suppress CONTEXT entirely. We can revert this if we do something
in the backend to eliminate the inconsistency.
Discussion: https://postgr.es/m/31545.1512924904@sss.pgh.pa.us
If one exits and re-enters a DECLARE ... BEGIN ... END block within a
single execution of a plpgsql function, perhaps due to a surrounding loop,
the declared variables are supposed to get re-initialized to null (or
whatever their initializer is). But this failed to happen for variables
of type "record", because while exec_stmt_block() expected such variables
to be included in the block's initvarnos list, plpgsql_add_initdatums()
only adds DTYPE_VAR variables to that list. This bug appears to have
been there since the aboriginal addition of plpgsql to our tree.
Fix by teaching plpgsql_add_initdatums() to include DTYPE_REC variables
as well. (We don't need to consider other DTYPEs because they don't
represent separately-stored values.) I failed to resist the temptation
to make some nearby cosmetic adjustments, too.
No back-patch, because there have not been field complaints, and it
seems possible that somewhere out there someone has code depending
on the incorrect behavior. In any case this change would have no
impact on correctly-written code.
Discussion: https://postgr.es/m/22994.1512800671@sss.pgh.pa.us
Those cases currently crash and supporting them is more work than
originally thought, so we'll just prohibit these scenarios for now.
Author: Michael Paquier <michael.paquier@gmail.com>
Reviewed-by: Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>
Reported-by: Мансур Галиев <gomer94@yandex.ru>
Bug: #14866
A COPY into a table should apply identity sequence values just like it
does for ordinary defaults. This was previously forgotten, leading to
null values being inserted, which in turn would fail because identity
columns have not-null constraints.
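For example (table name invented):

CREATE TABLE items (id int GENERATED BY DEFAULT AS IDENTITY, val text);
COPY items (val) FROM stdin;
one
two
\.
-- id is now filled from the identity sequence instead of being NULL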
Author: Michael Paquier <michael.paquier@gmail.com>
Reported-by: Steven Winfield <steven.winfield@cantabcapital.com>
Bug: #14952
I suppose it is a copy-and-paste error that this test doesn't actually
test the "Parallel Append with both partial and non-partial subplans"
case (EXPLAIN alone surely doesn't qualify as a test of executor
behavior). Fix that.
Also, add cosmetic aliases to make it possible to tell apart these
otherwise-identical test cases in log_statement output.
When we create an Append node, we can spread out the workers over the
subplans instead of piling on to each subplan one at a time, which
should typically be a bit more efficient, both because the startup
cost of any plan executed entirely by one worker is paid only once and
also because of reduced contention. We can also construct Append
plans using a mix of partial and non-partial subplans, which may allow
for parallelism in places that otherwise couldn't support it.
Unfortunately, this patch doesn't handle the important case of
parallelizing UNION ALL by running each branch in a separate worker;
the executor infrastructure is added here, but more planner work is
needed.
Amit Khandekar, Robert Haas, Amul Sul, reviewed and tested by
Ashutosh Bapat, Amit Langote, Rafia Sabih, Amit Kapila, and
Rajkumar Raghuwanshi.
Discussion: http://postgr.es/m/CAJ3gD9dy0K_E8r727heqXoBmWZ83HwLFwdcaSSmBQ1+S+vRuUQ@mail.gmail.com
When a Gather or Gather Merge node is started and stopped multiple
times, the old code wouldn't reset the shared state between executions,
potentially resulting in dramatically inflated instrumentation data
for nodes beneath it. (The per-worker instrumentation ended up OK,
I think, but the overall totals were inflated.)
Report by hubert depesz lubaczewski. Analysis and fix by Amit Kapila,
reviewed and tweaked a bit by me.
Discussion: http://postgr.es/m/20171127175631.GA405@depesz.com
If a hash join appears in a parallel query, there may be no hash table
available for explain.c to inspect even though a hash table may have
been built in other processes. This could happen either because
parallel_leader_participation was set to off or because the leader
happened to hit the end of the outer relation immediately (even though
the complete relation is not empty) and decided not to build the hash
table.
Commit bf11e7ee introduced a way for workers to exchange
instrumentation via the DSM segment for Sort nodes even though they
are not parallel-aware. This commit does the same for Hash nodes, so
that explain.c has a way to find instrumentation data from an
arbitrary participant that actually built the hash table.
Author: Thomas Munro
Reviewed-By: Andres Freund
Discussion: https://postgr.es/m/CAEepm%3D3DUQC2-z252N55eOcZBer6DPdM%3DFzrxH9dZc5vYLsjaA%40mail.gmail.com
Hopefully, the additional logging will help avoid confusion that
could otherwise result.
Nathan Bossart, reviewed by Michael Paquier, Fabrízio Mello, and me
Without this, when partdesc->nparts == 0, we end up calling
ExecBuildSlotPartitionKeyDescription without initializing values
and isnull.
Reported by Coverity via Michael Paquier. Patch by Michael Paquier,
reviewed and revised by Amit Langote.
Discussion: http://postgr.es/m/CAB7nPqQ3mwkdMoPY-ocgTpPnjd8TKOadMxdTtMLvEzF8480Zfg@mail.gmail.com
This adds a new object type "procedure" that is similar to a function
but does not have a return type and is invoked by the new CALL statement
instead of SELECT or similar. This implementation is aligned with the
SQL standard and compatible with or similar to other SQL implementations.
This commit adds new commands CALL, CREATE/ALTER/DROP PROCEDURE, as well
as ALTER/DROP ROUTINE that can refer to either a function or a
procedure (or an aggregate function, as an extension to SQL). There is
also support for procedures in various utility commands such as COMMENT
and GRANT, as well as support in pg_dump and psql. Support for defining
procedures is available in all the languages supplied by the core
distribution.
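A minimal sketch of the new syntax (the target table is hypothetical):

CREATE PROCEDURE insert_pair(a integer, b integer)
LANGUAGE SQL
AS $$
INSERT INTO tbl VALUES (a);  -- tbl is an invented table
INSERT INTO tbl VALUES (b);
$$;

CALL insert_pair(1, 2);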
While this commit is mainly syntax sugar around existing functionality,
future features will rely on having procedures as a separate object
type.
Reviewed-by: Andrew Dunstan <andrew.dunstan@2ndquadrant.com>
Invoking the Makefile without an explicit target was building every
possible target instead of just the "all" target. Back-patch to 9.3
(all supported versions).
Previously, this function estimated the selectivity as 1 minus eqjoinsel()
for the negator equality operator, regardless of join type (I think there
was an expectation that eqjoinsel would handle the join type). But
actually this is completely wrong for semijoin cases: the fraction of the
LHS that has a non-matching row is not one minus the fraction of the LHS
that has a matching row. In reality a semijoin with <> will nearly always
succeed: it can only fail when the RHS is empty, or it contains a single
distinct value that is equal to the particular LHS value, or the LHS value
is null. The only one of those things we should have much confidence in
estimating is the fraction of LHS values that are null, so let's just take
the selectivity as 1 minus outer nullfrac.
Per coding convention, antijoin should be estimated the same as semijoin.
Arguably this is a bug fix, but in view of the lack of field complaints
and the risk of destabilizing plans, no back-patch.
Thomas Munro, reviewed by Ashutosh Bapat
Discussion: https://postgr.es/m/CAEepm=270ze2hVxWkJw-5eKzc3AB4C9KpH3L2kih75R5pdSogg@mail.gmail.com
Although hash joins are already tested by many queries, these tests
systematically cover the four different states we can reach as part of
the strategy for respecting work_mem.
Author: Thomas Munro
Reviewed-By: Andres Freund
Currently, partition pruning happens via constraint exclusion, but
there are pending places to replace that with a different and
hopefully faster mechanism. To be sure that we don't change behavior
without realizing it, add extensive test coverage.
Note that not all of these behaviors are optimal; in some cases,
partitions are not pruned even though it would be safe to do so.
These tests therefore serve to memorialize the current state rather
than the ideal state. Patches that improve things can update the test
results as appropriate.
Amit Langote, adjusted by me. Review and testing of the larger patch
set of which this is a part by Ashutosh Bapat, David Rowley, Dilip
Kumar, Jesper Pedersen, Rajkumar Raghuwanshi, Beena Emerson, Amul Sul,
and Kyotaro Horiguchi.
Discussion: http://postgr.es/m/098b9c71-1915-1a2a-8d52-1a7a50ce79e8@lab.ntt.co.jp
Non-default range partitions have a constraint which includes null
tests, and both default and non-default list partitions also have a
constraint which includes null tests, but for some reason this was
missed for default range partitions. This could cause the partition
constraint to evaluate to false for rows that were (correctly) routed
to that partition by insert tuple routing, which could in turn
cause constraint exclusion to prune the default partition in cases
where it should not.
Amit Langote, reviewed by Kyotaro Horiguchi
Discussion: http://postgr.es/m/ba7aaeb1-4399-220e-70b4-62eade1522d0@lab.ntt.co.jp
Doing this suppresses Coverity warnings and might allow improved
code in some cases. The prospects of that are not so bright as
to warrant back-patching, though.
Michael Paquier, per Coverity
When nodeValuesscan.c was written, it was impossible to have a SubPlan in
VALUES --- any sub-SELECT there would have to be uncorrelated and thereby
would produce an InitPlan instead. We therefore took a shortcut in the
logic that throws away a ValuesScan's per-row expression evaluation data
structures. This was broken by the introduction of LATERAL however; a
sub-SELECT containing a lateral reference produces a correlated SubPlan.
The cleanest fix for this would be to give up the optimization of
discarding the expression eval state. But that still seems pretty
unappetizing for long VALUES lists. It seems to work to just prevent
the subexpressions from hooking into the ValuesScan node's subPlan
list, so let's do that and see how well it works. (If this breaks,
due to additional connections between the subexpressions and the outer
query structures, we might consider compromises like throwing away data
only for VALUES rows not containing SubPlans.)
Per bug #14924 from Christian Duta. Back-patch to 9.3 where LATERAL
was introduced.
Discussion: https://postgr.es/m/20171124120836.1463.5310@wrigleys.postgresql.org
The query didn't really have a preferred index, leading to platform-
specific choices of which one to use. Adjust it to make sure tenk1_hundred
is always chosen.
Per buildfarm.
When a strict aggregate combine function, as used in multi-stage/parallel
aggregation, returned NULL, we didn't check for that and invoked the
combine function with a NULL input on the next round, despite its being strict.
The equivalent code invoking normal transition functions has a check
for that situation, which did not get copied in a7de3dc5c3. Fix the
bug by adding the equivalent check.
Based on a quick look I could not find any strict combine functions in
core actually returning NULL, and it doesn't seem very likely external
users have done so. So this isn't likely to have caused issues in
practice.
Add tests verifying that transition / combine functions returning NULL
are handled correctly.
Reported-By: Andres Freund
Author: Andres Freund
Discussion: https://postgr.es/m/20171121033642.7xvmjqrl4jdaaat3@alap3.anarazel.de
Backpatch: 9.6, where parallel aggregation was introduced
This hack closes a race condition in "make -j check-world" and "make -j
installcheck-world". Back-patch to v10, before which these parallel
invocations had worse problems.
Discussion: https://postgr.es/m/20171106080752.GA1298146@rfd.leadboat.com
Fix the function header comment to describe the actual behavior.
Check that table OID, modulus, and remainder arguments are not NULL
before accessing them. Check that the modulus and remainder are
sensible. If the table OID doesn't exist, return NULL instead of
emitting an internal error, similar to what we do elsewhere. Check
that the actual argument types match, or at least are binary coercible
to, the expected argument types. Correctly handle invocation of this
function using the VARIADIC syntax. Add regression tests.
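After these fixes, a well-formed call looks roughly like this (the
table is hypothetical, assumed hash-partitioned on one text column):

SELECT satisfies_hash_partition('hashtbl'::regclass, 4, 0, 'foo'::text);
-- true if a row with this partition-key value belongs in the
-- partition with modulus 4, remainder 0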
Robert Haas and Amul Sul, per a report by Andreas Seltenreich and
subsequent followup investigation.
Discussion: http://postgr.es/m/871sl4sdrv.fsf@ansel.ydns.eu
This code evidently intended to treat backslash as an escape character
within double-quoted substrings, but it was sufficiently confused that
cases like ..."foo\\"... did not work right: the second backslash
managed to quote the double-quote after it, despite being quoted itself.
Rewrite to get that right, while preserving the existing behavior
outside double-quoted substrings, which is that backslash isn't special
except in the combination \".
Comparing to Oracle, it seems that their version of to_char() for
timestamps allows literal alphanumerics only within double quotes, while
non-alphanumerics are allowed outside quotes; backslashes aren't special
anywhere; there is no way at all to emit a literal double quote.
(Bizarrely, their to_char() for numbers is different; it doesn't allow
literal text at all AFAICT.) The fact that they don't treat backslash
as special justifies our existing behavior for backslash outside double
quotes. I considered making backslash inside double quotes act the same
way (ie, special only if before "), which in a green field would be a
more consistent behavior. But that would likely break more existing SQL
code than what this patch does.
Add some test cases illustrating this behavior. (Only the last new
case actually changes behavior in this commit.)
Little of this behavior was documented, either, so fix that.
Discussion: https://postgr.es/m/3626.1510949486@sss.pgh.pa.us
This is the basic feature set using OpenSSL to support the feature. In
order to allow the frontend and the backend to fetch the sent and
expected TLS Finished messages, a PG-like API is added to make the
interface pluggable for other SSL implementations.
This commit also adds infrastructure to facilitate the addition of
future channel binding types as well as libpq parameters to control the
SASL mechanism names and channel binding names. Those will be added by
upcoming commits.
Some tests are added to the SSL test suite to test SCRAM authentication
with channel binding.
Author: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Peter Eisentraut <peter.eisentraut@2ndquadrant.com>
Non-data template patterns would consume characters whether or not those
characters were what the pattern expected, for example
SELECT TO_NUMBER('1234', '9,999');
produced 134 because the '2' got eaten by the comma pattern. This seems
undesirable, not least because it doesn't happen in Oracle. For the ','
and 'G' template patterns, we can fix this by consuming characters only
if they match what the pattern would output. For non-data patterns such
as 'L' and 'TH', it seems impractical to tighten things up to the point of
consuming only exact matches to what the pattern would output; but we can
improve matters quite a lot by redefining the behavior as "consume only
characters that aren't digits, signs, decimal point, or comma".
Also, fix it so that the behavior is to consume the number of *characters*
the pattern would output, not the number of *bytes*. The old coding would
do surprising things with non-ASCII currency symbols, for example. (It
would be good to apply that rule for literal text as well, but this commit
only fixes it for non-data patterns.)
Oliver Ford, reviewed by Thomas Munro and Nathan Wagner, and whacked around
a bit more by me
Discussion: https://postgr.es/m/CAGMVOdvpbMqPf9XWNzOwBpzJfErkydr_fEGhmuDGa015z97mwg@mail.gmail.com
It appears that proargmodes should always be set for variadic
functions, but satisfies_hash_partition had it as NULL. In addition to
fixing the problem, add a regression test to guard against future
mistakes of this type.
If a PARAM_EXEC parameter is used below a Gather (Merge) but the InitPlan
that computes it is attached to or above the Gather (Merge), force the
value to be computed before starting parallelism and pass it down to all
workers. This allows us to use parallelism in cases where it previously
would have had to be rejected as unsafe. We do - in this case - lose the
optimization that the value is only computed if it's actually used. An
alternative strategy would be to have the first worker that needs the value
compute it, but one downside of that approach is that we'd then need to
select a parallel-safe path to compute the parameter value; it couldn't for
example contain a Gather (Merge) node. At some point in the future, we
might want to consider both approaches.
Independent of that consideration, there is a great deal more work that
could be done to make more kinds of PARAM_EXEC parameters parallel-safe.
This infrastructure could be used to allow a Gather (Merge) on the inner
side of a nested loop (although that's not a very appealing plan) and
cases where the InitPlan is attached below the Gather (Merge) could be
addressed as well using various techniques. But this is a good start.
Amit Kapila, reviewed and revised by me. Reviewing and testing from
Kuntal Ghosh, Haribabu Kommi, and Tushar Ahuja.
Discussion: http://postgr.es/m/CAA4eK1LV0Y1AUV4cUCdC+sYOx0Z0-8NAJ2Pd9=UKsbQ5Sr7+JQ@mail.gmail.com
It's become apparent during testing that there are problems with at
least the testing regime. I don't think we should have it without a
working test regime, and the difficulties might indicate implementation
problems anyway, so I'm backing out the whole thing until that's sorted
out.
This reverts commits 7459484, 9989f92 and cd8ce3a.
Move the sub-routine wrappers that check whether a connection to a
server works into the main test module. This is useful for other tests
that want to check connectivity to a server.
Author: Michael Paquier <michael@paquier.xyz>
The module requires a preloaded library and the defect can't be cured by
a LOAD instruction in the test script. To achieve this we override the
installcheck target in the module's Makefile, and exclude the module in
vcregress.pl.
Along the way, revert commit 9989f92aab.
Our initial work with int128 neglected alignment considerations, an
oversight that came back to bite us in bug #14897 from Vincent Lachenal.
It is unsurprising that int128 might have a 16-byte alignment requirement;
what's slightly more surprising is that even notoriously lax Intel chips
sometimes enforce that.
Raising MAXALIGN seems out of the question: the costs in wasted disk and
memory space would be significant, and there would also be an on-disk
compatibility break. Nor does it seem very practical to try to allow some
data structures to have more-than-MAXALIGN alignment requirement, as we'd
have to push knowledge of that throughout various code that copies data
structures around.
The only way out of the box is to make type int128 conform to the system's
alignment assumptions. Fortunately, gcc supports that via its
__attribute__(aligned()) pragma; and since we don't currently support
int128 on non-gcc-workalike compilers, we shouldn't be losing any platform
support this way.
Although we could have just done pg_attribute_aligned(MAXIMUM_ALIGNOF) and
called it a day, I did a little bit of extra work to make the code more
portable than that: it will also support int128 on compilers without
__attribute__(aligned()), if the native alignment of their 128-bit-int
type is no more than that of int64.
Add a regression test case that exercises the one known instance of the
problem, in parallel aggregation over a bigint column.
This will need to be back-patched, along with the preparatory commit
91aec93e6. But let's see what the buildfarm makes of it first.
Discussion: https://postgr.es/m/20171110185747.31519.28038@wrigleys.postgresql.org
When a value contained an XML declaration naming some other encoding,
this function interpreted UTF8 bytes as the named encoding, yielding
mojibake. xml_parse() already has similar logic. This would be
necessary but not sufficient for non-UTF8 databases, so preserve
behavior there until the xpath facility can support such databases
comprehensively. Back-patch to 9.3 (all supported versions).
Pavel Stehule and Noah Misch
Discussion: https://postgr.es/m/CAFj8pRC-dM=tT=QkGi+Achkm+gwPmjyOayGuUfXVumCxkDgYWg@mail.gmail.com
Hash partitioning is useful when you want to partition a growing data
set evenly. This can be useful to keep table sizes reasonable, which
makes maintenance operations such as VACUUM faster, or to enable
partition-wise join.
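The syntax looks like this (names invented):

CREATE TABLE orders (id int, data text) PARTITION BY HASH (id);
CREATE TABLE orders_p0 PARTITION OF orders
  FOR VALUES WITH (MODULUS 4, REMAINDER 0);
CREATE TABLE orders_p1 PARTITION OF orders
  FOR VALUES WITH (MODULUS 4, REMAINDER 1);
-- ... and likewise for remainders 2 and 3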
At present, we still depend on constraint exclusion for partitioning
pruning, and the shape of the partition constraints for hash
partitioning is such that that doesn't work. Work is underway to fix
that, which should both improve performance and make partitioning
pruning work with hash partitioning.
Amul Sul, reviewed and tested by Dilip Kumar, Ashutosh Bapat, Yugo
Nagata, Rajkumar Raghuwanshi, Jesper Pedersen, and by me. A few
final tweaks also by me.
Discussion: http://postgr.es/m/CAAJ_b96fhpJAP=ALbETmeLk1Uni_GFZD938zgenhF49qgDTjaQ@mail.gmail.com
While it's generally unwise to give permissions on these functions to
anyone but a superuser, we've been moving away from hard-wired permission
checks inside functions in favor of using the SQL permission system to
control access. Bring lo_import() and lo_export() into compliance with
that approach.
In particular, this removes the manual configuration option
ALLOW_DANGEROUS_LO_FUNCTIONS. That dates back to 1999 (commit 4cd4a54c8);
it's unlikely anyone has used it in many years. Moreover, if you really
want such behavior, now you can get it with GRANT ... TO PUBLIC instead.
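With the hard-wired checks gone, access is controlled like for any
other function (the role name here is hypothetical):

REVOKE EXECUTE ON FUNCTION lo_import(text) FROM PUBLIC;  -- the out-of-the-box state
GRANT EXECUTE ON FUNCTION lo_import(text) TO trusted_role;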
Michael Paquier
Discussion: https://postgr.es/m/CAB7nPqRHmNOYbETnc_2EjsuzSM00Z+BWKv9sy6tnvSd5gWT_JA@mail.gmail.com
The problem reported as CVE-2017-15098 was already resolved in HEAD by
commit 37a795a60, but let's add the relevant test cases anyway.
Michael Paquier and Tom Lane, per a report from David Rowley.
Security: CVE-2017-15098
The update path of an INSERT ... ON CONFLICT DO UPDATE requires SELECT
permission on the columns of the arbiter index, but it failed to check
for that in the case of an arbiter specified by constraint name.
In addition, for a table with row level security enabled, it failed to
check updated rows against the table's SELECT policies when the update
path was taken (regardless of how the arbiter index was specified).
Backpatch to 9.5 where ON CONFLICT DO UPDATE and RLS were introduced.
Security: CVE-2017-15099
Makefile.global assigns this prerequisite to every target named "check",
but similar targets must mention it explicitly. Affected targets
failed, tested $PATH binaries, or tested a stale temporary installation.
The src/test/modules examples worked properly when called as "make -C
src/test/modules/$FOO check", but "make -j" allowed the test to start
before the temporary installation was in place. Back-patch to 9.5,
where commit dcae5facca introduced the
shared temp-install.
When a publisher table has fewer columns than a subscriber, the update
of a row on the publisher should result in updating of only the columns
in common. The previous coding mistakenly reset the values of
additional columns on the subscriber to NULL because it failed to skip
updates of columns not found in the attribute map.
Author: Petr Jelinek <petr.jelinek@2ndquadrant.com>
This is only used in the pg_rewind tests, so only set it there. It's
better if other tests run closer to a default configuration.
Author: Michael Paquier <michael.paquier@gmail.com>
It turns out we misdiagnosed what the real problem was. Revert the
previous changes, because they may have worse consequences going
forward. A better fix is forthcoming.
The simplistic test case is kept, though disabled.
Discussion: https://postgr.es/m/20171102112019.33wb7g5wp4zpjelu@alap3.anarazel.de
If we don't have to return any columns from heap tuples, and there's
no need to recheck qual conditions, and the heap page is all-visible,
then we can skip fetching the heap page altogether.
Skip prefetching pages too, when possible, on the assumption that the
recheck flag will remain the same from one page to the next. While that
assumption is hardly bulletproof, it seems like a good bet most of the
time, and better than prefetching pages we don't need.
This commit installs the executor infrastructure, but doesn't change
any planner cost estimates, thus possibly causing bitmap scans to
not be chosen in cases where this change renders them the best choice.
I (tgl) am not entirely convinced that we need to account for this
behavior in the planner, because I think typically the bitmap scan would
get chosen anyway if it's the best bet. In any case the submitted patch
took way too many shortcuts, resulting in too many clearly-bad choices,
to be committable.
Alexander Kuzmenkov, reviewed by Alexey Chernyshov, and whacked around
rather heavily by me.
Discussion: https://postgr.es/m/239a8955-c0fc-f506-026d-c837e86c827b@postgrespro.ru
It's possible for dropping a column, or altering its type, to require
changes in domain CHECK constraint expressions; but the code was
previously only expecting to find dependent table CHECK constraints.
Make the necessary adjustments.
This is a fairly old oversight, but it's a lot easier to encounter
the problem in the context of domains over composite types than it
was before. Given the lack of field complaints, I'm not going to
bother with a back-patch, though I'd be willing to reconsider that
decision if someone does complain.
Patch by me, reviewed by Michael Paquier
Discussion: https://postgr.es/m/30656.1509128130@sss.pgh.pa.us
Without this fix, dropping a role can sometimes result in parallel
query failures in sessions that have used "SET ROLE" to assume the
dropped role, even if that setting isn't active any more.
Report by Pavan Deolasee. Patch by Amit Kapila, reviewed by me.
Discussion: http://postgr.es/m/CABOikdOomRcZsLsLK+Z+qENM1zxyaWnAvFh3MJZzZnnKiF+REg@mail.gmail.com
expandRTE() supposed that an RTE_SUBQUERY subquery must have exactly
as many non-junk tlist items as the RTE has column aliases for it.
This was true at the time the code was written, and is still true so
far as parse analysis is concerned --- but when the function is used
during planning, the subquery might have appeared through insertion
of a view that now has more columns than it did when the outer query
was parsed. This results in a core dump if, for instance, we have
to expand a whole-row Var that references the subquery.
To avoid crashing, we can either stop expanding the RTE when we run
out of aliases, or invent new aliases for the added columns. While
the latter might be more useful, the former is consistent with what
expandRTE() does for composite-returning functions in the RTE_FUNCTION
case, so it seems like we'd better do it that way.
Per bug #14876 from Samuel Horwitz. This has been busted since commit
ff1ea2173 allowed views to acquire more columns, so back-patch to all
supported branches.
Discussion: https://postgr.es/m/20171026184035.1471.82810@wrigleys.postgresql.org
This was always intended to work, but due to an oversight in
max_parallel_hazard_walker, it didn't. In testing, we missed the
fact that it was only working for custom plans, where the parameter
value has been substituted for the parameter itself early enough
that everything worked. In a generic plan, the Param node survives
and must be treated as parallel-safe. SerializeParamList provides
for the transmission of parameter values to workers.
Amit Kapila with help from Kuntal Ghosh. Some changes by me.
Discussion: http://postgr.es/m/CAA4eK1+_BuZrmVCeua5Eqnm4Co9DAXdM5HPAOE2J19ePbR912Q@mail.gmail.com
On closer investigation, commits f3ea3e3e8 et al were a few bricks
shy of a load. What we need is not so much to lock down the result
type of a FieldSelect, as to lock down the existence of the column
it's trying to extract. Otherwise, we can break it by dropping that
column. The dependency on the result type is then held indirectly
through the column, and doesn't need to be recorded explicitly.
Out of paranoia, I left in the code to record a dependency on the
result type, but it's used only if we can't identify the pg_class OID
for the column. That shouldn't ever happen right now, AFAICS, but
it seems possible that in future the input node could be marked as
being of type RECORD rather than some specific composite type.
Likewise for FieldStore.
Like the previous patch, back-patch to all supported branches.
Discussion: https://postgr.es/m/22571.1509064146@sss.pgh.pa.us
This is the last major omission in our domains feature: you can now
make a domain over anything that's not a pseudotype.
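For example (a minimal sketch):
create type pair as (a int, b int);
create domain ordered_pair as pair check ((value).a <= (value).b);
create table t (p ordered_pair);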
The major complication from an implementation standpoint is that places
that might be creating tuples of a domain type now need to be prepared
to apply domain_check(). It seems better that unprepared code fail
with an error like "<type> is not composite" than that it silently fail
to apply domain constraints. Therefore, relevant infrastructure like
get_func_result_type() and lookup_rowtype_tupdesc() has been adjusted
to treat domain-over-composite as a distinct case that unprepared code
won't recognize, rather than just transparently treating it the same
as plain composite. This isn't a 100% solution to the possibility of
overlooked domain checks, but it catches most places.
In passing, improve typcache.c's support for domains (it can now cache
the identity of a domain's base type), and rewrite the argument handling
logic in jsonfuncs.c's populate_record[set]_worker to reduce duplicative
per-call lookups.
I believe this is code-complete so far as the core and contrib code go.
The PLs need varying amounts of work, which will be tackled in followup
patches.
Discussion: https://postgr.es/m/4206.1499798337@sss.pgh.pa.us
Previously, we skipped using search_indexed_tlist_for_sortgroupref()
if the tlist expression being sought in the child plan node was merely
a Var. This is purely an optimization, based on the theory that
search_indexed_tlist_for_var() is faster, and one copy of a Var should
be as good as another. However, the GROUPING SETS patch broke the
latter assumption: grouping columns containing the "same" Var can
sometimes have different outputs, as shown in the test case added here.
So do it the hard way whenever a ressortgroupref marking exists.
(If this seems like a bottleneck, we could imagine building a tlist index
data structure for ressortgroupref values, as we do for Vars. But I'll
let that idea go until there's some evidence it's worthwhile.)
Back-patch to 9.6. The problem also exists in 9.5 where GROUPING SETS
came in, but this patch is insufficient to resolve the problem in 9.5:
there is some obscure dependency on the upper-planner-pathification
work that happened in 9.6. Given that this is such a weird corner case,
and no end users have complained about it, it doesn't seem worth the work
to develop a fix for 9.5.
Patch by me, per a report from Heikki Linnakangas. (This does not fix
Heikki's original complaint, just the follow-on one.)
Discussion: https://postgr.es/m/aefc657e-edb2-64d5-6df1-a0828f6e9104@iki.fi
json_build_object and json_build_array and the jsonb equivalents did not
correctly process explicit VARIADIC arguments. They are modified to use
the new extract_variadic_args() utility function which abstracts away
the details of the call method.
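For example (a minimal sketch):
select json_build_array(variadic array[1, 2, 3]);
-- now correctly yields [1, 2, 3] instead of misprocessing the
-- explicit VARIADIC argument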
Michael Paquier, reviewed by Tom Lane and Dmitry Dolgov.
Backpatch to 9.5 for the jsonb fixes and 9.4 for the json fixes, as
that's where they originated.
Although joinaliasvars lists coming out of the parser are quite simple,
those lists can contain arbitrarily complex expressions after subquery
pullup. We do not perform expression preprocessing on them, meaning that
expressions in those lists will not meet the expectations of later phases
of the planner (for example, that they do not contain SubLinks). This had
been thought pretty harmless, since we don't intentionally touch those
lists in later phases --- but Andreas Seltenreich found a case in which
adjust_appendrel_attrs() could recurse into a joinaliasvars list and then
die on its assertion that it never sees a SubLink. We considered a couple
of localized fixes to prevent that specific case from looking at the
joinaliasvars lists, but really this seems like a generic hazard for all
expression processing in the planner. Therefore, probably the best answer
is to delete the joinaliasvars lists from the parsetree at the end of
expression preprocessing, so that there are no reachable expressions that
haven't been through preprocessing.
The case Andreas found seems to be harmless in non-Assert builds, and so
far there are no field reports suggesting that there are user-visible
effects in other cases. I considered back-patching this anyway, but
it turns out that Andreas' test doesn't fail at all in 9.4-9.6, because
in those versions adjust_appendrel_attrs contains code (added in commit
842faa714 and removed again in commit 215b43cdc) to process SubLinks
rather than complain about them. Barring discovery of another path by
which unprocessed joinaliasvars lists can cause trouble, the most
prudent compromise seems to be to patch this into v10 but not further.
Patch by me, with thanks to Amit Langote for initial investigation
and review.
Discussion: https://postgr.es/m/87r2tvt9f1.fsf@ansel.ydns.eu
Like the similar logic for arrays and records, it's necessary to examine
the range's subtype to decide whether the range type can support hashing.
We can omit checking the subtype for btree-defined operations, though,
since range subtypes are required to have those operations. (Possibly
that simplification for btree cases led us to overlook that it does
not apply for hash cases.)
This is only an issue if the subtype lacks hash support, which is not
true of any built-in range type, but it's easy to demonstrate a problem
with a range type over, eg, money: you can get a "could not identify
a hash function" failure when the planner is misled into thinking that
hash join or aggregation would work.
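A sketch of that failure mode (assuming money has btree but not hash
support, per the above):
create type moneyrange as range (subtype = money);
-- a hash join or hash aggregation on moneyrange values could then fail
-- with "could not identify a hash function for type money"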
This was born broken, so back-patch to all supported branches.
This is preparation for a future patch to extensively change how
reloptions work.
Author: Nikolay Shaplov
Reviewed-by: Michael Paquier
Discussion: https://postgr.es/m/2615372.orqtEn8VGB@x200m
setTargetTable threw an error if the proposed target RangeVar's relname
matched any visible CTE or ENR. This breaks backwards compatibility in
the CTE case, since pre-v10 we never looked for a CTE here at all, so that
CTE names did not mask regular tables. It does seem like a good idea to
throw an error for the ENR case, though, thus causing ENRs to mask tables
for this purpose; ENRs are new in v10 so we're not breaking existing code,
and we may someday want to allow them to be the targets of DML.
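For example, a CTE sharing the target table's name no longer masks it
(illustrative sketch):
create table foo (a int);
with foo as (select 1 as a)
insert into foo values (1);   -- inserts into the real table foo, as pre-v10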
To fix that, replace use of getRTEForSpecialRelationTypes, which was
overkill anyway, with use of scanNameSpaceForENR.
A second problem was that the check neglected to verify null schemaname,
so that a CTE or ENR could incorrectly be thought to match a qualified
RangeVar. That happened because getRTEForSpecialRelationTypes relied
on its caller to have checked for null schemaname. Even though the one
remaining caller got it right, this is obviously bug-prone, so move
the check inside getRTEForSpecialRelationTypes.
Also, revert commit 18ce3a4ab's extremely poorly thought out decision to
add a NULL return case to parserOpenTable --- without either documenting
that or adjusting any of the callers to check for it. The current bug
seems to have arisen in part due to working around that bad idea.
In passing, remove the one-line shim functions transformCTEReference and
transformENRReference --- they don't seem to be adding any clarity or
functionality.
Per report from Hugo Mercier (via Julien Rouhaud). Back-patch to v10
where the bug was introduced.
Thomas Munro, with minor editing by me
Discussion: https://postgr.es/m/CAOBaU_YdPVH+PTtiKSSLOiiW3mVDYsnNUekK+XPbHXiP=wrFLA@mail.gmail.com
Flex generates a lot of functions that are not actually used. In order
to avoid coverage figures being ruined by that, mark up the part of the
.l files where the generated code appears by lcov exclusion markers.
That way, lcov will typically report coverage only for the .l file,
which is under our control, but not for the .c file.
Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
The built-in OSAs all share the same transition function, so they can
share transition state as long as the final functions cooperate to not
do the sort step more than once. To avoid running the tuplesort object
in randomAccess mode unnecessarily, add a bit of infrastructure to
nodeAgg.c to let the aggregate functions find out whether the transition
state is actually being shared or not.
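For example, two ordered-set aggregates over the same input can now share
a single sort (illustrative):
select percentile_cont(0.25) within group (order by x),
       percentile_cont(0.75) within group (order by x)
from tab;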
This doesn't work for the hypothetical aggregates, since those inject
a hypothetical row that isn't traceable to the shared input state.
So they remain marked aggfinalmodify = 'w'.
Discussion: https://postgr.es/m/CAB4ELO5RZhOamuT9Xsf72ozbenDLLXZKSk07FiSVsuJNZB861A@mail.gmail.com
An aggregate's input expression(s) are not supposed to be evaluated
at all for a row where its FILTER test fails ... but commit 8ed3f11bb
overlooked that requirement. Reshuffle so that aggregates having a
filter clause evaluate their arguments separately from those without.
This still gets the benefit of doing only one ExecProject in the
common case of multiple Aggrefs, none of which have filters.
While at it, arrange for filter clauses to be included in the common
ExecProject evaluation, thus perhaps buying a little bit even when
there are filters.
Back-patch to v10 where the bug was introduced.
Discussion: https://postgr.es/m/30065.1508161354@sss.pgh.pa.us
Up to now, there's been hard-wired assumptions that normal aggregates'
final functions never modify their transition states, while ordered-set
aggregates' final functions always do. This has always been a bit
limiting, and in particular it's getting in the way of improving the
built-in ordered-set aggregates to allow merging of transition states.
Therefore, let's introduce catalog and CREATE AGGREGATE infrastructure
that lets the finalfn's behavior be declared explicitly.
There are now three possibilities for the finalfn behavior: it's purely
read-only, it trashes the transition state irrecoverably, or it changes
the state in such a way that no more transfn calls are possible but the
state can still be passed to other, compatible finalfns. There are no
examples of this third case today, but we'll shortly make the built-in
OSAs act like that.
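A sketch of the resulting CREATE AGGREGATE syntax (the example reuses
built-in support functions; option spelling per this patch):
create aggregate my_avg (numeric) (
    sfunc = numeric_avg_accum,
    stype = internal,
    finalfunc = numeric_avg,
    finalfunc_modify = read_only   -- or read_write, or shareable
);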
This change allows user-defined aggregates to explicitly disclaim support
for use as window functions, and/or to prevent transition state merging,
if their implementations cannot handle that. While it was previously
possible to handle the window case with a run-time error check, there was
not any way to prevent transition state merging, which in retrospect is
something commit 804163bc2 should have provided for. But better late
than never.
In passing, split out pg_aggregate.c's extern function declarations into
a new header file pg_aggregate_fn.h, similarly to what we've done for
some other catalog headers, so that pg_aggregate.h itself can be safe
for frontend files to include. This lets pg_dump use the symbolic
names for relevant constants.
Discussion: https://postgr.es/m/4834.1507849699@sss.pgh.pa.us
A few command line options accepted by pg_regress were not being output
by help(), including --help itself. Add that one, as well as --version
and --bindir, and the corresponding short options for the first two.
We could consider this for backpatching, but it did not seem worthwhile
and no one else advocated for it, so apply only to master for now.
Author: Joe Conway
Reviewed-By: Tom Lane
Discussion: https://postgr.es/m/dd519469-06d7-2662-83ef-c926f6c4f0f1%40joeconway.com
This ought to work, but the built-in OSAs are not capable of coping,
because their final-functions destructively modify their transition
state (specifically, the contained tuplesort object). That was fine
when those functions were written, but commit 804163bc2 moved the
goalposts without telling orderedsetaggs.c.
We should fix the built-in OSAs to support this, but it will take
a little work, especially if we don't want to sacrifice performance
in the normal non-shared-state case. Given that it took a year after
9.6 release for anyone to notice this bug, we should not prioritize
sharable-state over nonsharable-state performance. And a proper fix
is likely to be more complicated than we'd want to back-patch, too.
Therefore, let's just put in this stop-gap patch to prevent nodeAgg.c
from choosing to use shared state for OSAs. We can revert it in HEAD
when we get a better fix.
Report from Lukas Eder, diagnosis by me, patch by David Rowley.
Back-patch to 9.6 where the problem was introduced.
Discussion: https://postgr.es/m/CAB4ELO5RZhOamuT9Xsf72ozbenDLLXZKSk07FiSVsuJNZB861A@mail.gmail.com
I noticed the tmp_check subdirectory wasn't getting cleaned up
after a check-world run. Apparently pgxs.mk will only do this
for you if you've defined REGRESS. The only other src/test/modules
Makefile that does not set that is snapshot_too_old, and it
does it like this.
If the operator is a strict btree equality operator, and X isn't volatile,
then the clause must yield true for any non-null value of X, or null if X
is null. At top level of a WHERE clause, we can ignore the distinction
between false and null results, so it's valid to simplify the clause to
"X IS NOT NULL". This is a useful improvement mainly because we'll get
a far better selectivity estimate in most cases.
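For example (a minimal sketch):
select * from tab where x = x;
-- is now effectively planned as
select * from tab where x is not null;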
Because such cases seldom arise in well-written queries, it is unappetizing
to expend a lot of planner cycles looking for them ... but it turns out
that there's a place we can shoehorn this in practically for free, because
equivclass.c already has to detect and reject candidate equivalences of the
form X = X. That doesn't catch every place that it would be valid to
simplify to X IS NOT NULL, but it catches the typical case. Working harder
doesn't seem justified.
Patch by me, reviewed by Petr Jelinek
Discussion: https://postgr.es/m/CAMjNa7cC4X9YR-vAJS-jSYCajhRDvJQnN7m2sLH1wLh-_Z2bsw@mail.gmail.com
The previous coding here trashed the line buffer as it scanned it,
making it impossible to print the source line in subsequent error
messages. With a few save/restore/strdup pushups we can improve
that situation.
In passing, move the free'ing of the various strings that are collected
while processing one set of tests down to the bottom of the loop.
That's simpler, less surprising, and should make valgrind less unhappy
about the strings that were previously leaked by the last iteration.
We have a very old rule that parallel_schedule should have no more
than twenty tests in any one parallel group, so as to provide a
bound on the number of concurrently running processes needed to
pass the tests. But people keep forgetting the rule, so let's add
a few lines of code to check it.
Discussion: https://postgr.es/m/a37e9c57-22d4-1b82-1270-4501cd2e984e@2ndquadrant.com
The partition_join test was added to a parallel group that was already
at the maximum of 20 concurrent tests. The hash_func test wasn't
added to serial_schedule at all. The identity and partition_join tests
were added to serial_schedule with the aid of a dartboard, rather than
maintaining consistency with parallel_schedule.
There are proposals afoot to make these sorts of errors harder to make,
but in the meantime let's fix the ones already in place.
Discussion: https://postgr.es/m/a37e9c57-22d4-1b82-1270-4501cd2e984e@2ndquadrant.com
Instead of joining two partitioned tables in their entirety we can, if
it is an equi-join on the partition keys, join the matching partitions
individually. This involves teaching the planner about "other join"
rels, which are related to regular join rels in the same way that
other member rels are related to baserels. This can use significantly
more CPU time and memory than regular join planning, because there may
now be a set of "other" rels not only for every base relation but also
for every join relation. In most practical cases, this probably
shouldn't be a problem, because (1) it's probably unusual to join many
tables each with many partitions using the partition keys for all
joins, and (2) if you do, you probably have a big
enough machine to handle the increased memory cost of planning and (3)
the resulting plan is highly likely to be better, so what you spend in
planning you'll make up on the execution side. All the same, for now,
turn this feature off by default.
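A minimal sketch (GUC name as committed here):
set enable_partition_wise_join = on;
select * from p1 join p2 on p1.key = p2.key;
-- with identical partitioning on "key", each pair of matching
-- partitions is joined separately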
Currently, we can only perform joins between two tables whose
partitioning schemes are absolutely identical. It would be nice to
cope with other scenarios, such as extra partitions on one side or the
other with no match on the other side, but that will have to wait for
a future patch.
Ashutosh Bapat, reviewed and tested by Rajkumar Raghuwanshi, Amit
Langote, Rafia Sabih, Thomas Munro, Dilip Kumar, Antonin Houska, Amit
Khandekar, and by me. A few final adjustments by me.
Discussion: http://postgr.es/m/CAFjFpRfQ8GrQvzp3jA2wnLqrHmaXna-urjm_UY9BqXj=EaDTSA@mail.gmail.com
Discussion: http://postgr.es/m/CAFjFpRcitjfrULr5jfuKWRPsGUX0LQ0k8-yG0Qw2+1LBGNpMdw@mail.gmail.com
If the table attached as a partition is itself partitioned, individual
partitions might have constraints strong enough to skip scanning the
table even if the table actually attached does not. This is pretty
cheap to check, and possibly a big win if it works out.
Amit Langote, with test case changes by me.
Discussion: http://postgr.es/m/1f08b844-0078-aa8d-452e-7af3bf77d05f@lab.ntt.co.jp
Haribabu Kommi, reviewed by Dilip Kumar and Rafia Sabih. Various
cosmetic changes by me to explain why this appears to be safe but
allowing inserts in parallel mode in general wouldn't be. Also, I
removed the REFRESH MATERIALIZED VIEW case from Haribabu's patch,
since I'm not convinced that case is OK, and hacked on the
documentation somewhat.
Discussion: http://postgr.es/m/CAJrrPGdo5bak6qnPWe8Kpi8g_jfQEs-G4SYmG9y+OFaw2-dPvA@mail.gmail.com
A lot of semi-internal code just prints out numeric SPI error codes,
which is not very helpful. We already have an API function to convert
the codes to a string, so let's make more use of that.
Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
Not much to say about this; does what it says on the tin.
However, formerly, if there was a column list then the ANALYZE action was
implied; now it must be specified, or you get an error. This is because
it would otherwise be a bit unclear what the user meant if some tables
have column lists and some don't.
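For example (error wording approximate):
vacuum analyze t1 (a), t2;
vacuum t1 (a);
-- ERROR: ANALYZE option must be specified when a column list is provided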
Nathan Bossart, reviewed by Michael Paquier and Masahiko Sawada, with some
editorialization by me
Discussion: https://postgr.es/m/E061A8E3-5E3D-494D-94F0-E8A9B312BBFC@amazon.com
Allowing arrays with a domain type as their element type was left un-done
in the original domain patch, but not for any very good reason. This
omission leads to such surprising results as array_agg() not working on
a domain column, because the parser can't identify a suitable output type
for the polymorphic aggregate.
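For example (a minimal sketch):
create domain posint as int check (value > 0);
create table t (d posint);
select array_agg(d) from t;   -- previously failed; now returns posint[]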
In order to fix this, first clean up the APIs of coerce_to_domain() and
some internal functions in parse_coerce.c so that we consistently pass
around a CoercionContext along with CoercionForm. Previously, we sometimes
passed an "isExplicit" boolean flag instead, which is strictly less
information; and coerce_to_domain() didn't even get that, but instead had
to reverse-engineer isExplicit from CoercionForm. That's contrary to the
documentation in primnodes.h that says that CoercionForm only affects
display and not semantics. I don't think this change fixes any live bugs,
but it makes things more consistent. The main reason for doing it though
is that now build_coercion_expression() receives ccontext, which it needs
in order to be able to recursively invoke coerce_to_target_type().
Next, reimplement ArrayCoerceExpr so that the node does not directly know
any details of what has to be done to the individual array elements while
performing the array coercion. Instead, the per-element processing is
represented by a sub-expression whose input is a source array element and
whose output is a target array element. This simplifies life in
parse_coerce.c, because it can build that sub-expression by a recursive
invocation of coerce_to_target_type(). The executor now handles the
per-element processing as a compiled expression instead of hard-wired code.
The main advantage of this is that we can use a single ArrayCoerceExpr to
handle as many as three successive steps per element: base type conversion,
typmod coercion, and domain constraint checking. The old code used two
stacked ArrayCoerceExprs to handle type + typmod coercion, which was pretty
inefficient, and adding yet another array deconstruction to do domain
constraint checking seemed very unappetizing.
In the case where we just need a single, very simple coercion function,
doing this straightforwardly leads to a noticeable increase in the
per-array-element runtime cost. Hence, add an additional shortcut evalfunc
in execExprInterp.c that skips unnecessary overhead for that specific form
of expression. The runtime speed of simple cases is within 1% or so of
where it was before, while cases that previously required two levels of
array processing are significantly faster.
Finally, create an implicit array type for every domain type, as we do for
base types, enums, etc. Everything except the array-coercion case seems
to just work without further effort.
Tom Lane, reviewed by Andrew Dunstan
Discussion: https://postgr.es/m/9852.1499791473@sss.pgh.pa.us
For \d sequencename, the psql code just did SELECT * FROM sequencename
to get the information to display, but this does not contain much
interesting information anymore in PostgreSQL 10, because the metadata
has been moved to a separate system catalog.
This patch creates a newly designed sequence display that is not merely
an extension of the general relation/table display as it was previously.
Example:
PostgreSQL 9.6:
=> \d foobar
Sequence "public.foobar"
Column | Type | Value
---------------+---------+---------------------
sequence_name | name | foobar
last_value | bigint | 1
start_value | bigint | 1
increment_by | bigint | 1
max_value | bigint | 9223372036854775807
min_value | bigint | 1
cache_value | bigint | 1
log_cnt | bigint | 0
is_cycled | boolean | f
is_called | boolean | f
PostgreSQL 10 before this change:
=> \d foobar
Sequence "public.foobar"
Column | Type | Value
------------+---------+-------
last_value | bigint | 1
log_cnt | bigint | 0
is_called | boolean | f
New:
=> \d foobar
Sequence "public.foobar"
Type | Start | Minimum | Maximum | Increment | Cycles? | Cache
--------+-------+---------+---------------------+-----------+---------+-------
bigint | 1 | 1 | 9223372036854775807 | 1 | no | 1
Reviewed-by: Fabien COELHO <coelho@cri.ensmp.fr>
Add bgw_type field to background worker structure. It is intended to be
set to the same value for all workers of the same type, so they can be
grouped in pg_stat_activity, for example.
The backend_type column in pg_stat_activity now shows bgw_type for a
background worker. The ps listing also no longer calls out that a
process is a background worker but just shows the bgw_type. That way,
being a background worker is now more of an implementation detail that
is not shown to the user. However, most log messages still refer to
'background worker "%s"'; otherwise constructing sensible and
translatable log messages would become tricky.
Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
At the time replacement_sort_tuples was introduced, there were still
cases where replacement selection sort noticeably outperformed using
quicksort even for the first run. However, those cases seem to have
evaporated as a result of further improvements made since that time
(and perhaps also advances in CPU technology). So remove replacement
selection and the controlling GUC entirely. This makes tuplesort.c
noticeably simpler and probably paves the way for further
optimizations someone might want to do later.
Peter Geoghegan, with review and testing by Tomas Vondra and me.
Discussion: https://postgr.es/m/CAH2-WzmmNjG_K0R9nqYwMq3zjyJJK+hCbiZYNGhAy-Zyjs64GQ@mail.gmail.com
Vacuum calls page-level HOT prune to remove dead HOT tuples before doing
liveness checks (HeapTupleSatisfiesVacuum) on the remaining tuples. But
concurrent transaction commit/abort may turn DEAD some of the HOT tuples
that survived the prune, before HeapTupleSatisfiesVacuum tests them.
This happens to activate the code that decides to freeze the tuple ...
which resuscitates it, duplicating data.
(This is especially bad if there are any unique constraints, because those
are now internally violated due to the duplicate entries, though you
won't know until you try to REINDEX or dump/restore the table.)
One possible fix would be to simply skip doing anything to the tuple,
and hope that the next HOT prune would remove it. But there is a
problem: if the tuple is older than freeze horizon, this would leave an
unfrozen XID behind, and if no HOT prune happens to clean it up before
the containing pg_clog segment is truncated away, it'd later cause an
error when the XID is looked up.
Fix the problem by having the tuple freezing routines cope with the
situation: don't freeze the tuple (and keep it dead). In the cases that
the XID is older than the freeze age, set the HEAP_XMAX_COMMITTED flag
so that there is no need to look up the XID in pg_clog later on.
An isolation test is included, authored by Michael Paquier, loosely
based on Daniel Wood's original reproducer. It only tests one
particular scenario, though, not all the possible ways for this problem
to surface; it would be good to have a more reliable way to test this more
fully, but it'd require more work.
In message https://postgr.es/m/20170911140103.5akxptyrwgpc25bw@alvherre.pgsql
I outlined another test case (more closely matching Dan Wood's) that
exposed a few more ways for the problem to occur.
Backpatch all the way back to 9.3, where this problem was introduced by
multixact juggling. In branches 9.3 and 9.4, this includes a backpatch
of commit e5ff9fefcd50 (of 9.5 era), since the original is not
correctable without matching the coding pattern in 9.5 up.
Reported-by: Daniel Wood
Diagnosed-by: Daniel Wood
Reviewed-by: Yi Wen Wong, Michaël Paquier
Discussion: https://postgr.es/m/E5711E62-8FDF-4DCA-A888-C200BF6B5742@amazon.com
float8_numeric() and float4_numeric() failed to consider the possibility
that the input is an IEEE infinity. The results depended on the
platform-specific behavior of sprintf(): on most platforms you'd get
something like
ERROR: invalid input syntax for type numeric: "inf"
but at least on Windows it's possible for the conversion to succeed and
deliver a finite value (typically 1), due to a nonstandard output format
from sprintf and lack of syntax error checking in these functions.
Since our numeric type lacks the concept of infinity, a suitable conversion
is impossible; the best thing to do is throw an explicit error before
letting sprintf do its thing.
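For example (error wording approximate):
select 'infinity'::float8::numeric;
-- ERROR: cannot convert infinity to numeric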
While at it, let's use snprintf not sprintf. Overrunning the buffer
should be impossible if sprintf does what it's supposed to, but this
is cheap insurance against a stack smash if it doesn't.
Problem reported by Taiki Kondo. Patch by me based on fix suggestion
from KaiGai Kohei. Back-patch to all supported branches.
Discussion: https://postgr.es/m/12A9442FBAE80D4E8953883E0B84E088C8C7A2@BPXM01GP.gisp.nec.co.jp
This reverts commit 15bc038f9, along with the followon commits 1635e80d3
and 984c92074 that tried to clean up the problems exposed by bug #14825.
The result was incomplete because it failed to address parallel-query
requirements. With 10.0 release so close upon us, now does not seem like
the time to be adding more code to fix that. I hope we can un-revert this
code and add the missing parallel query support during the v11 cycle.
Back-patch to v10.
Discussion: https://postgr.es/m/20170922185904.1448.16585@wrigleys.postgresql.org
The blacklist mechanism added by the preceding commit directly fixes
most of the practical cases that the same-transaction test was meant
to cover. What remains is use-cases like
begin;
create type e as enum('x');
alter type e add value 'y';
-- use 'y' somehow
commit;
However, because the same-transaction test is heuristic, it fails on
small variants of that, such as renaming the type or changing its
owner. Rather than try to explain the behavior to users, let's
remove it and just have a rule that the newly added value can't be
used before being committed, full stop. Perhaps later it will be
worth the implementation effort and overhead to have a more accurate
test for type-was-created-in-this-transaction. We'll wait for some
field experience with v10 before deciding to do that.
Back-patch to v10.
Discussion: https://postgr.es/m/20170922185904.1448.16585@wrigleys.postgresql.org
Commit 15bc038f9 allowed ALTER TYPE ADD VALUE to be executed inside
transaction blocks, by disallowing the use of the added value later
in the same transaction, except under limited circumstances. However,
the test for "limited circumstances" was heuristic and could reject
references to enum values that were created during CREATE TYPE AS ENUM,
not just later. This breaks the use-case of restoring pg_dump scripts
in a single transaction, as reported in bug #14825 from Balazs Szilfai.
We can improve this by keeping a "blacklist" table of enum value OIDs
created by ALTER TYPE ADD VALUE during the current transaction. Any
visible-but-uncommitted value whose OID is not in the blacklist must
have been created by CREATE TYPE AS ENUM, and can be used safely
because it could not have a lifespan shorter than its parent enum type.
This change also removes the restriction that a renamed enum value
can't be used before being committed (unless it was on the blacklist).
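For example (a minimal sketch):
begin;
create type e as enum ('x', 'y');
select 'x'::e;   -- now allowed: values made by CREATE TYPE AS ENUM are safe
commit;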
Andrew Dunstan, with cosmetic improvements by me.
Back-patch to v10.
Discussion: https://postgr.es/m/20170922185904.1448.16585@wrigleys.postgresql.org
A FOR ALL TABLES publication naturally considers all base tables to be a
candidate for replication. This includes transient heaps that are
created during a table rewrite during DDL. This causes failures on the
subscriber side because it will not have a table like pg_temp_16386 to
receive data (and if it did, it would be the wrong table).
To prevent this problem, we filter out any tables that match this
naming pattern and match an actual table from FOR ALL TABLES
publications. This is only a heuristic, meaning that user tables that
match that naming could accidentally be omitted. A more robust solution
might require an explicit marking of such tables in pg_class somehow.
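For example (an illustrative sketch):
create publication allpub for all tables;
alter table t alter column a type bigint;
-- the rewrite goes through a transient pg_temp_NNNNN heap, which is
-- now filtered out of the publication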
Reported-by: yxq <yxq@o2.pl>
Bug: #14785
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Petr Jelinek <petr.jelinek@2ndquadrant.com>
If construct_array() or construct_md_array() were given a dimension of
zero, they'd produce an array that contains no elements but has positive
dimension. This violates a general expectation that empty arrays should
have ndims = 0; in particular, while arrays like this print as empty,
they don't compare equal to other empty arrays.
Up to now we've expected callers to avoid making such calls and instead
be careful to call construct_empty_array() if there would be no elements.
But this has always been an easily missed case, and we've repeatedly had to
fix callers to do it right. In bug #14826, Erwin Brandstetter pointed out
yet another such oversight, in ts_lexize(); and a bit of examination of
other call sites found at least two more with similar issues. So let's
fix the problem centrally and permanently by changing these two functions
to construct a proper zero-D empty array whenever the array would be empty.
This renders a few explicit calls of construct_empty_array() redundant,
but the only such place I found that really seemed worth changing was in
ExecEvalArrayExpr().
Although this fixes some very old bugs, no back-patch: the problem is
pretty minor and the risk of changing behavior seems to outweigh the
benefit in stable branches.
Discussion: https://postgr.es/m/20170923125723.1448.39412@wrigleys.postgresql.org
Discussion: https://postgr.es/m/20570.1506198383@sss.pgh.pa.us
"\if :{?variable_name}" will be translated to "\if TRUE" if the variable
exists and "\if FALSE" otherwise. Thus it will be possible to execute code
conditionally on the existence of the variable, regardless of its value.
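For example (a minimal sketch):
\set foo 1
\if :{?foo}
  \echo foo is defined
\else
  \echo foo is not defined
\endif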
Fabien Coelho, with some review by Robins Tharakan and some light text
editing by me.
Discussion: https://postgr.es/m/alpine.DEB.2.20.1708260835520.3627@lancre
Previously, the code didn't think about this case and would just try to
analyze such a column twice. That would fail at the point of inserting
the second version of the pg_statistic row, with obscure error messages
like "duplicate key value violates unique constraint" or "tuple already
updated by self", depending on context and PG version. We could allow
the case by ignoring duplicate column specifications, but it seems better
to reject it explicitly.
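For example (error wording approximate):
analyze t (a, a);
-- ERROR: column "a" of relation "t" appears more than once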
The bogus error messages seem like arguably a bug, so back-patch to
all supported versions.
Nathan Bossart, per a report from Michael Paquier, and whacked
around a bit by me.
Discussion: https://postgr.es/m/E061A8E3-5E3D-494D-94F0-E8A9B312BBFC@amazon.com
005_encoding.pl neglected to wait for the subscriber's initial
synchronization to happen. While we have not seen this fail in
the buildfarm, it's pretty easy to demonstrate there's an issue
by hacking logicalrep_worker_launch() to fail most of the time.
Michael Paquier
Discussion: https://postgr.es/m/27032.1505749806@sss.pgh.pa.us
As it turns out, we can't rely on the script's monitoring session being
terminated with a proper error by the server, because the session
might be terminated while already trying to send data.
Also improve robustness and error reporting facilities of the test,
developed while debugging this issue.
Discussion: https://postgr.es/m/20170920020038.kllxgilo7xzwmtto@alap3.anarazel.de
The preceding patch allowed us to remove useless GiST support functions.
This patch actually does that for all the no-op cases in the core GiST
code. This buys us whatever performance gain is to be had, and more
importantly exercises the preceding patch.
There remain no-op functions in the contrib GiST opclasses, but those
will take more work to remove.
Discussion: https://postgr.es/m/CAJEAwVELVx9gYscpE=Be6iJxvdW5unZ_LkcAaVNSeOwvdwtD=A@mail.gmail.com
Add timeouts in case psql doesn't deliver the expected output, and try
to cause the monitoring psql to be fully connected to a backend. This
isn't necessarily everything needed, but at least the timeouts should
reduce the pain for buildfarm owners.
Author: Andres Freund
Reported-By: Tom Lane, BF animals prairiedog and calliphoridae
Discussion: https://postgr.es/m/E1du6ZT-00043I-91@gemulon.postgresql.org
When ALTER SUBSCRIPTION DISABLE is run in the same transaction before
DROP SUBSCRIPTION, the latter will hang because workers will still be
running, not having seen the DISABLE committed, and DROP SUBSCRIPTION
will wait until the workers have vacated the replication origin slots.
Previously, DROP SUBSCRIPTION killed the logical replication workers
immediately only if it was going to drop the replication slot, otherwise
it scheduled the worker killing for the end of the transaction, as a
result of 7e174fa793. This, however,
causes the present problem. To fix, kill the workers immediately in all
cases. This covers all cases: A subscription that doesn't have a
replication slot must be disabled. It was either disabled in the same
transaction, or it was already disabled before the current transaction,
but then there shouldn't be any workers left and this won't make a
difference.
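A sketch of the scenario (the slot must be dissociated first, since DROP
SUBSCRIPTION cannot otherwise run inside a transaction block):
begin;
alter subscription mysub disable;
alter subscription mysub set (slot_name = none);
drop subscription mysub;   -- previously could hang waiting for the workers
commit;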
Reported-by: Arseny Sher <a.sher@postgrespro.ru>
Discussion: https://www.postgresql.org/message-id/flat/87mv6av84w.fsf%40ars-thinkpad
Commit 0f79440fb introduced mechanism to keep AFTER STATEMENT triggers
from firing more than once per statement, which was formerly possible
if more than one FK enforcement action had to be applied to a given
table. Add a similar mechanism for BEFORE STATEMENT triggers, so that
we don't have the unexpected situation of firing BEFORE STATEMENT
triggers more often than AFTER STATEMENT.
As with the previous patch, back-patch to v10.
Discussion: https://postgr.es/m/22315.1505584992@sss.pgh.pa.us
The standard says that all changes of the same kind (insert, update, or
delete) caused in one table by a single SQL statement should be reported
in a single transition table; and by that, they mean to include foreign key
enforcement actions cascading from the statement's direct effects. It's
also reasonable to conclude that if the standard had wCTEs, they would say
that effects of wCTEs applying to the same table as each other or the outer
statement should be merged into one transition table. We weren't doing it
like that.
Hence, arrange to merge tuples from multiple update actions into a single
transition table as much as we can. There is a problem, which is that if
the firing of FK enforcement triggers and after-row triggers with
transition tables is interspersed, we might need to report more tuples
after some triggers have already seen the transition table. It seems like
a bad idea for the transition table to be mutable between trigger calls.
There's no good way around this without a major redesign of the FK logic,
so for now, resolve it by opening a new transition table each time this
happens.
Also, ensure that AFTER STATEMENT triggers fire just once per statement,
or once per transition table when we're forced to make more than one.
Previous versions of Postgres have allowed each FK enforcement query
to cause an additional firing of the AFTER STATEMENT triggers for the
referencing table, but that's certainly not per spec. (We're still
doing multiple firings of BEFORE STATEMENT triggers, though; is that
something worth changing?)
Also, forbid using transition tables with column-specific UPDATE triggers.
The spec requires such transition tables to show only the tuples for which
the UPDATE trigger would have fired, which means maintaining multiple
transition tables or else somehow filtering the contents at readout.
Maybe someday we'll bother to support that option, but it looks like a
lot of trouble for a marginal feature.
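For example, something like this is now rejected (sketch; error wording
approximate):
create trigger t_upd after update of price on orders
  referencing new table as new_rows
  for each statement execute procedure audit_orders();
-- ERROR: transition tables cannot be specified for triggers with column lists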
The transition tables are now managed by the AfterTriggers data structures,
rather than being directly the responsibility of ModifyTable nodes. This
removes a subtransaction-lifespan memory leak introduced by my previous
band-aid patch 3c4359521.
In passing, refactor the AfterTriggers data structures to reduce the
management overhead for them, by using arrays of structs rather than
several parallel arrays for per-query-level and per-subtransaction state.
I failed to resist the temptation to do some copy-editing on the SGML
docs about triggers, above and beyond merely documenting the effects
of this patch.
Back-patch to v10, because we don't want the semantics of transition
tables to change post-release.
Patch by me, with help and review from Thomas Munro.
Discussion: https://postgr.es/m/20170909064853.25630.12825@wrigleys.postgresql.org
Flattening the partitioning hierarchy at this stage makes various
desirable optimizations difficult. The original use case for this
patch was partition-wise join, which wants to match up the partitions
in one partitioning hierarchy with those in another such hierarchy.
However, it now seems that it will also be useful in making partition
pruning work using the PartitionDesc rather than constraint exclusion,
because with a flattened expansion, we have no easy way to figure out
which PartitionDescs apply to which leaf tables in a multi-level
partition hierarchy.
As it turns out, we end up creating both rte->inh and !rte->inh RTEs
for each intermediate partitioned table, just as we previously did for
the root table. This seems unnecessary since the partitioned tables
have no storage and are not scanned. We might want to go back and
rejigger things so that no partitioned tables (including the parent)
need !rte->inh RTEs, but that seems to require some adjustments not
related to the core purpose of this patch.
Ashutosh Bapat, reviewed by me and by Amit Langote. Some final
adjustments by me.
Discussion: http://postgr.es/m/CAFjFpRd=1venqLL7oGU=C1dEkuvk2DJgvF+7uKbnPHaum1mvHQ@mail.gmail.com
It's not necessary for such a small program, and it causes unnecessary
extra work to get the correct definition of bool, more so if we are
going to introduce stdbool.h later.
Reviewed-by: Thomas Munro <thomas.munro@enterprisedb.com>
Test queries added by commit 69835bc89 are giving unexpected results
on some smaller buildfarm critters. I think probably the seqscan
logic is kicking in to cause the scans to not start at the beginning
of the table. Add ORDER BY to make them be indexscans instead.
Per buildfarm member chipmunk.
This patch adds ERROR, SQLSTATE, and ROW_COUNT, which are updated after
every query, as well as LAST_ERROR_MESSAGE and LAST_ERROR_SQLSTATE,
which are updated only when a query fails. The expected usage of these
is for scripting.
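For example (a minimal sketch):
select 1/0;
\echo :ERROR :SQLSTATE
-- prints: true 22012
\echo :LAST_ERROR_MESSAGE
-- prints: division by zero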
Fabien Coelho, reviewed by Pavel Stehule
Discussion: https://postgr.es/m/alpine.DEB.2.20.1704042158020.12290@lancre
When copying from an active database tree, it's possible for files to be
deleted after we see them in a readdir() scan but before we can open them.
(Once we've got a file open, we don't expect any further errors from it
getting unlinked, though.) Tweak RecursiveCopy so it can cope with this
case, so as to avoid irreproducible test failures.
Back-patch to 9.6 where this code was added. In v10 and HEAD, also
remove unused "use RecursiveCopy" in one recovery test script.
Michael Paquier and Tom Lane
Discussion: https://postgr.es/m/24621.1504924323@sss.pgh.pa.us
AFTER triggers using transition tables crashed if they were fired due
to a foreign key ON CASCADE update. This is because ExecEndModifyTable
flushes the transition tables, on the assumption that any trigger that
could need them was already fired during ExecutorFinish. Normally
that's true, because we don't allow transition-table-using triggers
to be deferred. However, foreign key CASCADE updates force any
triggers on the referencing table to be deferred to the outer query
level, by means of the EXEC_FLAG_SKIP_TRIGGERS flag. I don't recall
all the details of why it's like that and am pretty loath to redesign
it right now. Instead, just teach ExecEndModifyTable to skip destroying
the TransitionCaptureState when that flag is set. This will allow the
transition table data to survive until end of the current subtransaction.
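A sketch of the crash scenario (names, including the trigger function, are
illustrative):
create table pk (id int primary key);
create table fk (id int references pk on update cascade);
create trigger fk_audit after update on fk
  referencing new table as new_rows
  for each statement execute procedure note_changes();
update pk set id = id + 1;   -- fires the trigger at the outer query level;
                             -- previously crashed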
This isn't a terribly satisfactory solution, because (1) we might be
leaking the transition tables for much longer than really necessary,
and (2) as things stand, an AFTER STATEMENT trigger will fire once per
RI updating query, ie once per row updated or deleted in the referenced
table. I suspect that is not per SQL spec. But redesigning this is a
research project that we're certainly not going to get done for v10.
So let's go with this hackish answer for now.
In passing, tweak AfterTriggerSaveEvent to not save the transition_capture
pointer into the event record for a deferrable trigger. This is not
necessary to fix the current bug, but it avoids letting dangling pointers
to long-gone transition tables persist in the trigger event queue. That's
at least a safety feature. It might also allow merging shared trigger
states in more cases than before.
I added a regression test that demonstrates the crash on unpatched code,
and also exposes the behavior of firing the AFTER STATEMENT triggers
once per row update.
Per bug #14808 from Philippe Beaudoin. Back-patch to v10.
Discussion: https://postgr.es/m/20170909064853.25630.12825@wrigleys.postgresql.org
This improves the regression tests' coverage of rbtree.c from pretty
awful (because some of the functions aren't used yet) to basically 100%.
Victor Drobny, reviewed by Aleksander Alekseev and myself
Discussion: https://postgr.es/m/c9d61310e16e75f8acaf6cb1c48b7b77@postgrespro.ru
* Remove no-such-user test case, output isn't stable, and we really
don't need to be testing such cases here anyway.
* Fix the process exit code test logic to match PostgresNode::psql
(but I didn't bother with looking at the "core" flag).
* Give up on inf/nan tests.
Per buildfarm.
Issuing a savepoint-related command in a Query message that contains
multiple SQL statements led to a FATAL exit with a complaint about
"unexpected state STARTED". This is a shortcoming of commit 4f896dac1,
which attempted to prevent such misbehaviors in multi-statement strings;
its quick hack of marking the individual statements as "not top-level"
does the wrong thing in this case, and isn't a very accurate description
of the situation anyway.
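A sketch of the failing case, sent as a single simple-protocol Query
message (as psql -c does):
select 1; savepoint s1;
-- previously: FATAL: unexpected state STARTED
-- now (wording approximate): ERROR: SAVEPOINT can only be used in
-- transaction blocks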
To fix, let's introduce into xact.c an explicit model of what happens for
multi-statement Query strings. This is an "implicit transaction block
in progress" state, which for many purposes works like the normal
TBLOCK_INPROGRESS state --- in particular, IsTransactionBlock returns true,
causing the desired result that PreventTransactionChain will throw error.
But in case of error abort it works like TBLOCK_STARTED, allowing the
transaction to be cancelled without need for an explicit ROLLBACK command.
Commit 4f896dac1 is reverted in toto, so that we go back to treating the
individual statements as "top level". We could have left it as-is, but
this allows sharpening the error message for PreventTransactionChain
calls inside functions.
Except for getting a normal error instead of a FATAL exit for savepoint
commands, this patch should result in no user-visible behavioral change
(other than that one error message rewording). There are some things
we might want to do in the line of changing the appearance or wording of
error and warning messages around this behavior, which would be much
simpler to do now that it's an explicitly modeled state. But I haven't
done them here.
Although this fixes a long-standing bug, no backpatch. The consequences
of the bug don't seem severe enough to justify the risk that this commit
itself creates some new issue.
Patch by me, but it owes something to previous investigation by
Takayuki Tsunakawa, who also reported the bug in the first place.
Also thanks to Michael Paquier for reviewing.
Discussion: https://postgr.es/m/0A3221C70F24FB45833433255569204D1F6BE40D@G01JPEXMBYT05
Index columns are referenced by ordinal number rather than name, e.g.
CREATE INDEX coord_idx ON measured (x, y, (z + t));
ALTER INDEX coord_idx ALTER COLUMN 3 SET STATISTICS 1000;
Incompatibility note for release notes:
\d+ for indexes now also displays Stats Target
Authors: Alexander Korotkov, with contribution by Adrien NAYRAT
Review: Adrien NAYRAT, Simon Riggs
Wordsmith: Simon Riggs
This command acts somewhat like \g, but instead of executing the query
buffer, it merely prints a description of the columns that the query
result would have. (Of course, this still requires parsing the query;
if parse analysis fails, you get an error anyway.) We accomplish this
using an unnamed prepared statement, which should be invisible to psql
users.
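For example (a minimal sketch, with \gdesc output as psql displays it):
=> select relname, relkind from pg_class \gdesc
 Column  | Type
---------+--------
 relname | name
 relkind | "char"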
Pavel Stehule, reviewed by Fabien Coelho
Discussion: https://postgr.es/m/CAFj8pRBhYVvO34FU=EKb=nAF5t3b++krKt1FneCmR0kuF5m-QA@mail.gmail.com
The parenthesized style has only been used in a few modules. Change
that to use the style that is predominant across the whole tree.
Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
Reviewed-by: Ryan Murphy <ryanfmurphy@gmail.com>
This moves the data directories from using temporary directories with
randomness in the directory name to a static name, to make it easier to
debug. The data directory will be retained if tests fail or the test
code dies/exits with failure, and is automatically removed on the next
make check.
If the environment variable PG_TEST_NOCLEAN is defined, the data
directories will be retained regardless of test or exit status.
Author: Daniel Gustafsson <daniel@yesql.se>
This will be useful for hash partitioning, which needs a way to seed
the hash functions to avoid problems such as a hash index on a hash
partitioned table clumping all values into a small portion of the
bucket space; it's also useful for anything that wants a 64-bit hash
value rather than a 32-bit hash value.
Just in case somebody wants a 64-bit hash value that is compatible
with the existing 32-bit hash values, make the low 32-bits of the
64-bit hash value match the 32-bit hash value when the seed is 0.
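For example (function naming per this patch; illustrative):
select hashint4(42) as h32, hashint4extended(42, 0) as h64;
-- the low 32 bits of h64 match h32 because the seed is 0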
Robert Haas and Amul Sul
Discussion: http://postgr.es/m/CA+Tgmoafx2yoJuhCQQOL5CocEi-w_uG4S2xT0EtgiJnPGcHW3g@mail.gmail.com
Previously, we expanded the inheritance hierarchy in the order in
which find_all_inheritors had locked the tables, but that turns out
to block quite a bit of useful optimization. For example, a
partition-wise join can't count on two tables with matching bounds
to get expanded in the same order.
Where possible, this change results in expanding partitioned tables in
*bound* order. Bound order isn't well-defined for a list-partitioned
table with a null-accepting partition or for a list-partitioned table
where the bounds for a single partition are interleaved with other
partitions. However, when expansion in bound order is possible, it
opens up further opportunities for optimization, such as
strength-reducing MergeAppend to Append when the expansion order
matches the desired sort order.
Patch by me, with cosmetic revisions by Ashutosh Bapat.
Discussion: http://postgr.es/m/CA+TgmoZrKj7kEzcMSum3aXV4eyvvbh9WD=c6m=002WMheDyE3A@mail.gmail.com
PG_MODULE_MAGIC has been around since 8.2, with 8.1 long since in EOL,
so remove the mention of #ifdef guards for compiling against pre-8.2
sources from the documentation.
Author: Daniel Gustafsson <daniel@yesql.se>
Up until now, when parallel query was used, no details about the
sort method or space used by the workers were available; details
were shown only for any sorting done by the leader. Fix that.
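For example (an illustrative sketch):
explain (analyze, costs off) select * from big_tab order by a;
-- the output now includes per-worker Sort Method and space-used
-- details, not just the leader's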
Commit 1177ab1dab forced the test case
added by commit 1f6d515a67 to run
without parallelism; now that we have this infrastructure, allow
that again, with a little tweaking to make it pass with and without
force_parallel_mode.
Robert Haas and Tom Lane
Discussion: http://postgr.es/m/CA+Tgmoa2VBZW6S8AAXfhpHczb=Rf6RqQ2br+zJvEgwJ0uoD_tQ@mail.gmail.com
If we only need, say, 10 tuples in total, then we certainly don't need
more than 10 tuples from any single process. Pushing down the limit
lets workers exit early when possible. For Gather Merge, there is
an additional benefit: a Sort immediately below the Gather Merge can
be done as a bounded sort if there is an applicable limit.
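For example (a sketch; the plan shape depends on configuration):
explain (costs off)
select * from big_tab order by a limit 10;
-- each worker below the Gather Merge can now do a bounded (top-N)
-- sort of just 10 tuples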
Robert Haas and Tom Lane
Discussion: http://postgr.es/m/CA+TgmoYa3QKKrLj5rX7UvGqhH73G1Li4B-EKxrmASaca2tFu9Q@mail.gmail.com
Fix thinko in commit 8be8510cf: it's okay to have dbid == 0 in normal
(non-pin) entries in pg_shdepend, because global objects such as
databases are entered that way. The test would pass so long as it
was run in a cluster containing no databases/tablespaces owned by,
or granted to, roles other than the bootstrap superuser. That's the
expected situation for "make check", but for "make installcheck", not
so much.
Reported by Ryan Murphy.
Discussion: https://postgr.es/m/CAHeEsBc6EQe0mxGBKDXAwJbntgfvoAd5MQC-5362SmC3Tng_6g@mail.gmail.com
Test that blessed records can be transferred through a TupleQueue and
correctly decoded by another backend. While touching the file, make
sure that force_parallel_mode settings only cover relevant tests.
Author: Thomas Munro, editorialized by Andres Freund
Reviewed-By: Andres Freund
Discussion: https://postgr.es/m/20170823054644.efuzftxjpfi6wwqs%40alap3.anarazel.de
The test case added by commit 1f6d515a6 fails on buildfarm members that
have force_parallel_mode turned on, because we currently don't report sort
performance details from worker processes back to the master. To fix that,
just make the test table be temp rather than regular; that's a good idea
anyway to forestall any possible interference from auto-analyze.
(The restriction that workers can't access temp tables might go away
someday, but almost certainly not before the other thing gets fixed.)
Also, improve the test so that we retain as much as possible of the
EXPLAIN ANALYZE output. This aids debugging failures, and might also
expose problems that the preceding version masked.
Discussion: http://postgr.es/m/CADE5jYLuugnEEUsyW6Q_4mZFYTxHxaVCQmGAsF0yiY8ZDggi-w@mail.gmail.com
This is a mechanical change in preparation for a later commit that
will change the layout of TupleDesc. Introducing a macro to abstract
the details of where attributes are stored will allow us to change
that in a separate step and revise it in the future.
Author: Thomas Munro, editorialized by Andres Freund
Reviewed-By: Andres Freund
Discussion: https://postgr.es/m/CAEepm=0ZtQ-SpsgCyzzYpsXS6e=kZWqk3g5Ygn3MDV7A8dabUA@mail.gmail.com
That code patch was good as far as it went, but the associated test case
has exposed fundamental brain damage in the parallel scan mechanism,
which is going to take nontrivial work to correct. In the interests of
getting the buildfarm back to green so that unrelated work can proceed,
let's temporarily remove the test case.
Perusal of the code coverage report shows that the existing regression
test cases for LIMIT/OFFSET don't exercise the nodeLimit code paths
involving backwards scan, empty results, or null values of LIMIT/OFFSET.
Improve the coverage.
Perusal of the code coverage report shows that the existing regression
test cases for INTERSECT and EXCEPT seemingly all prefer the SETOP_HASHED
implementation. Add some test cases in which we force use of the
SETOP_SORTED mode.
The previous message didn't mention the name of the table or the
bounds. Put the table name in the primary error message and the
bounds in the detail message.
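A hedged sketch of the kind of case affected (messages approximate):
create table p (a int) partition by range (a);
create table p1 partition of p for values from (1) to (10);
create table p2 partition of p for values from (5) to (15);
-- the overlap error now names the table in the primary message and
-- reports the conflicting bounds in the DETAIL line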
Amit Langote, changed slightly by me. Suggestions on the exact
phrasing from Tom Lane, David G. Johnston, and Dean Rasheed.
Discussion: http://postgr.es/m/CA+Tgmoae6bpwVa-1BMaVcwvCCeOoJ5B9Q9-RHWo-1gJxfPBZ5Q@mail.gmail.com
find_composite_type_dependencies correctly found columns that are of
the specified type, and columns that are of arrays of that type, but
not columns that are domains or ranges over the given type, its array
type, etc. The most general way to handle this seems to be to assume
that any type that is directly dependent on the specified type can be
treated as a container type, and processed recursively (allowing us
to handle nested cases such as ranges over domains over arrays ...).
Since a type's array type already has such a dependency, we can drop
the existing special case for the array type.
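A sketch of a previously missed case (names are illustrative):
create table base (x int);
create domain dbase as base;
create table t (col dbase);
alter table base add column y int;
-- now detected: t.col uses base's row type via the domain, so the
-- ALTER is rejected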
The very similar logic in get_rels_with_domain was likewise a few
bricks shy of a load, as it supposed that a directly dependent type
could *only* be a sub-domain. This is already wrong for ranges over
domains, and it'll someday be wrong for arrays over domains.
Add test cases illustrating the problems, and back-patch to all
supported branches.
Discussion: https://postgr.es/m/15268.1502309024@sss.pgh.pa.us
In commit 5c77690f6, we added polling in front of most of the
get_slot_xmins calls in 001_stream_rep.pl, but today's results from
buildfarm member nightjar show that at least one more poll loop
is needed.
Proactively add a poll loop before the next-to-last get_slot_xmins call
as well. It may be that there is no race condition there because the
standby_2 server is shut down at that point, but I'm quite tired of
fighting with this test script. The empirical evidence that it's safe,
from the buildfarm, is no stronger than the evidence for the other
call that nightjar just proved unsafe.
The only remaining get_slot_xmins calls without wait_slot_xmins
protection are the first two, which should be OK since nothing has
happened at that point. It's tempting to ignore that special case
and merge get_slot_xmins and wait_slot_xmins into a single function.
I didn't go that far though.
Discussion: https://postgr.es/m/18436.1502228036@sss.pgh.pa.us
This would lead to failures if local and remote tables have a different
column order. The tests previously didn't catch that because they only
tested the initial data copy. So add another test that exercises the
apply worker.
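For example (an illustrative sketch):
-- publisher:  create table t (a int, b text);
-- subscriber: create table t (b text, a int);
-- the apply worker now matches columns by name rather than by position,
-- so this layout difference is handled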
Author: Petr Jelinek <petr.jelinek@2ndquadrant.com>
lo_put() surely should require UPDATE permission, the same as lowrite(),
but it failed to check for that, as reported by Chapman Flack. Oversight
in commit c50b7c09d; backpatch to 9.4 where that was introduced.
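For example (an illustrative sketch, with :loid a psql variable holding a
large object's OID):
select lo_put(:loid, 0, 'new data');
-- now fails with a permission error unless the caller has UPDATE
-- permission on the large object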
Tom Lane and Michael Paquier
Security: CVE-2017-7548
Commit 3eefc51053 claimed to make
pg_user_mappings enforce the qualifications user_mapping_options had
been enforcing, but its removal of a longstanding restriction left them
distinct when the current user is the subject of a mapping yet has no
server privileges. user_mapping_options emits no rows for such a
mapping, but pg_user_mappings includes full umoptions. Change
pg_user_mappings to show null for umoptions. Back-patch to 9.2, like
the above commit.
Reviewed by Tom Lane. Reported by Jeff Janes.
Security: CVE-2017-7547
Some authentication methods allowed it, others did not. On the client side,
libpq does not even try to authenticate with an empty password, which makes
using empty passwords hazardous: an administrator might think that an
account with an empty password cannot be used to log in, because psql
doesn't allow it, and not realize that a different client would in fact
allow it. To clear that confusion and to be consistent, disallow empty
passwords in all authentication methods.
All the authentication methods that used plaintext authentication over the
wire, except for BSD authentication, already checked that the password
received from the user was not empty. To avoid forgetting it in the future
again, move the check to the recv_password_packet function. That only
forbids using an empty password with plaintext authentication, however.
MD5 and SCRAM need a different fix:
* In stable branches, check that the MD5 hash stored for the user does
not correspond to an empty string. This adds some overhead to MD5
authentication, because the server needs to compute an extra MD5 hash, but
it is not noticeable in practice.
* In HEAD, modify CREATE and ALTER ROLE to clear the password if an empty
string, or a password hash that corresponds to an empty string, is
specified. The user-visible behavior is the same as in the stable branches,
the user cannot log in, but it seems better to stop the empty password from
entering the system in the first place. Secondly, it is fairly expensive to
check that a SCRAM hash doesn't correspond to an empty string, because
computing a SCRAM hash is much more expensive than an MD5 hash by design,
so better avoid doing that on every authentication.
We could clear the password on CREATE/ALTER ROLE also in stable branches,
but we would still need to check at authentication time, because even if we
prevent empty passwords from being stored in pg_authid, there might be
existing ones there already.
Reported by Jeroen van der Ham, Ben de Graaff and Jelte Fennema.
Security: CVE-2017-7546
If you do ALTER COLUMN SET NOT NULL against an inheritance parent table,
it will recurse to mark all the child columns as NOT NULL as well. This
is necessary for consistency: if the column is labeled NOT NULL then
reading it should never produce nulls.
However, that didn't happen in the case where ALTER ... ADD PRIMARY KEY
marks a target column NOT NULL that wasn't before. That was questionable
from the beginning, and now Tushar Ahuja points out that it can lead to
dump/restore failures in some cases. So let's make that case recurse too.
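For example (sketch):

    CREATE TABLE parent (a int);
    CREATE TABLE child () INHERITS (parent);
    ALTER TABLE parent ADD PRIMARY KEY (a);
    -- child.a is now marked NOT NULL too, just as ALTER COLUMN SET NOT
    -- NULL would have done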
Although this is meant to fix a bug, it's enough of a behavioral change
that I'm pretty hesitant to back-patch, especially in view of the lack
of similar field complaints. It doesn't seem to be too late to put it
into v10 though.
Michael Paquier, editorialized on slightly by me
Discussion: https://postgr.es/m/b8794d6a-38f0-9d7c-ad4b-e85adf860fc9@enterprisedb.com
ALTER USER ... SET did not support all the syntax variants of ALTER ROLE
... SET. Fix that, and to avoid further deviations of this kind, unify
many of the grammar rules for ROLE/USER/GROUP commands.
Reported-by: Pavel Golub <pavel@microolap.com>
Otherwise, partitioned tables with RETURNING expressions or subject
to a WITH CHECK OPTION do not work properly.
Amit Langote, reviewed by Amit Khandekar and Etsuro Fujita. A few
comment changes by me.
Discussion: http://postgr.es/m/9a39df80-871e-6212-0684-f93c83be4097@lab.ntt.co.jp
The buildfarm is still showing at least three distinct behaviors for
a bad locale name in CREATE COLLATION. Although this test was helpful
for getting the error reporting code into some usable shape, it doesn't
seem worth carrying multiple expected-files in order to support the
test in perpetuity. So pull it back out.
Discussion: https://postgr.es/m/CAKKotZS-wcDcofXDCH=sidiuajE+nqHn2CGjLLX78anyDmi3gQ@mail.gmail.com
DROP CASCADE doesn't currently promise to visit dependent objects in
a fixed order, so when the regression tests use it, we typically need
to suppress the details of which objects get dropped in order to have
predictable test output. Traditionally we've done that by setting
client_min_messages higher than NOTICE, but there's a better way:
we can "\set VERBOSITY terse" in psql. That suppresses the DETAIL
message with the object list, but we still get the basic notice telling
how many objects were dropped. So at least the test case can verify
that the expected number of objects were dropped.
The VERBOSITY method was already in use in a few places, but go
around and use it wherever it makes sense.
Discussion: https://postgr.es/m/10766.1501608885@sss.pgh.pa.us
We were just printing errno, which is certainly not gonna work on
Windows. Now, it's not entirely clear from Microsoft's documentation
whether _create_locale() adheres to standard Windows error reporting
conventions, but let's assume it does and try to map the GetLastError
result to an errno. If this turns out not to work, probably the best
thing to do will be to assume the error is always ENOENT on Windows.
This is a longstanding bug, but given the lack of previous field
complaints, I'm not excited about back-patching it.
Per report from Murtuza Zabuawala.
Discussion: https://postgr.es/m/CAKKotZS-wcDcofXDCH=sidiuajE+nqHn2CGjLLX78anyDmi3gQ@mail.gmail.com
Most of our collations code has special handling for the locale names
"C" and "POSIX", allowing those collations to be used whether or not
the system libraries think those locale names are valid, or indeed
whether said libraries even have any locale support. But we missed
handling things that way in CREATE COLLATION. This meant you couldn't
clone the C/POSIX collations, nor explicitly define a new collation
using those locale names, unless the libraries allow it. That's pretty
pointless, as well as being a violation of pg_newlocale_from_collation's
API specification.
The practical effect of this change is quite limited: it allows creating
such collations even on platforms that don't HAVE_LOCALE_T, and it allows
making "POSIX" collation objects on Windows, which before this would only
let you make "C" collation objects. Hence, even though this is a bug fix
IMO, it doesn't seem worth the trouble to back-patch.
In passing, suppress the DROP CASCADE detail messages at the end of the
collation regression test. I'm surprised we've never been bit by
message ordering issues there.
Per report from Murtuza Zabuawala.
Discussion: https://postgr.es/m/CAKKotZS-wcDcofXDCH=sidiuajE+nqHn2CGjLLX78anyDmi3gQ@mail.gmail.com
When output of IPC::Run::run () is redirected to scalar references, in
certain circumstances the Msys perl does not correctly detect that the
end of file has been seen, making the test hang indefinitely. One such
circumstance is when the command is 'pg_ctl start', and such a change
was made in commit f13ea95f9e. The workaround, which only applies on
MSys, is to redirect the output to temporary files and then read them in
when the process has finished.
Patch by me, reviewed and tweaked by Tom Lane.
This module becomes much more useful if we allow it to be used as base
class for external projects. To achieve this, change the exported
get_new_node function into a class method instead, and use the standard
Perl idiom of accepting the class as first argument. This method works
as expected for subclasses. The standalone function is kept for
backwards compatibility, though it could be removed in pg11.
Author: Chap Flackman, based on an earlier patch from Craig Ringer
Discussion: https://postgr.es/m/CAMsr+YF8kO+4+K-_U4PtN==2FndJ+5Bn6A19XHhMiBykEwv0wA@mail.gmail.com
Various cases involving renaming of view columns are handled by having
make_viewdef pass down the view's current relation tupledesc to
get_query_def, which then takes care to use the column names from the
tupledesc for the output column names of the SELECT. For some reason
though, we'd missed teaching make_ruledef to do similarly when it is
printing an ON SELECT rule, even though this is exactly the same case.
The results from pg_get_ruledef would then be different and arguably wrong.
In particular, this breaks pre-v10 versions of pg_dump, which in some
situations would define views by means of emitting a CREATE RULE ... ON
SELECT command. Third-party tools might not be happy either.
In passing, clean up some crufty code in make_viewdef; we'd apparently
modernized the equivalent code in make_ruledef somewhere along the way,
and missed this copy.
Per report from Gilles Darold. Back-patch to all supported versions.
Discussion: https://postgr.es/m/ec05659a-40ff-4510-fc45-ca9d965d0838@dalibo.com
I believe this changed as a consequence of commit 54baa4813: trying to
clone the "C" collation now produces a true clone with collencoding -1,
hence the error message if it's duplicate no longer specifies an encoding.
Per buildfarm member crake, which apparently hadn't been running this
test for the last few weeks.
The order of partitions listed by \d+ is in general locale-dependent.
Rename the partitions in the test added by d363d42bb9 to force them to
be listed in a consistent order.
Previously, UNBOUNDED meant no lower bound when used in the FROM list,
and no upper bound when used in the TO list, which was OK for
single-column range partitioning, but problematic with multiple
columns. For example, an upper bound of (10.0, UNBOUNDED) would not be
collocated with a lower bound of (10.0, UNBOUNDED), thus making it
difficult or impossible to define contiguous multi-column range
partitions in some cases.
Fix this by using MINVALUE and MAXVALUE instead of UNBOUNDED to
represent a partition column that is unbounded below or above
respectively. This syntax removes any ambiguity, and ensures that if
one partition's lower bound equals another partition's upper bound,
then the partitions are contiguous.
Also drop the constraint prohibiting finite values after an unbounded
column, and just document the fact that any values after MINVALUE or
MAXVALUE are ignored. Previously it was necessary to repeat UNBOUNDED
multiple times, which was needlessly verbose.
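For example, under the new syntax two multi-column range partitions can
be made contiguous like this (sketch):

    CREATE TABLE rp (a numeric, b numeric) PARTITION BY RANGE (a, b);
    CREATE TABLE rp1 PARTITION OF rp
      FOR VALUES FROM (MINVALUE, MINVALUE) TO (10.0, MAXVALUE);
    CREATE TABLE rp2 PARTITION OF rp
      FOR VALUES FROM (10.0, MAXVALUE) TO (MAXVALUE, MAXVALUE);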
Note: Forces a post-PG 10 beta2 initdb.
Report by Amul Sul, original patch by Amit Langote with some
additional hacking by me.
Discussion: https://postgr.es/m/CAAJ_b947mowpLdxL3jo3YLKngRjrq9+Ej4ymduQTfYR+8=YAYQ@mail.gmail.com
Normally, a JoinExpr would have empty "quals" only if it came from CROSS
JOIN syntax. However, it's possible to get to this state by specifying
NATURAL JOIN between two tables with no common column names, and there
might be other ways too. The code previously printed no ON clause if
"quals" was empty; that's right for CROSS JOIN but syntactically invalid
if it's some type of outer join. Fix by printing ON TRUE in that case.
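For instance (sketch; names illustrative):

    CREATE TABLE t1 (a int);
    CREATE TABLE t2 (b int);
    CREATE VIEW v AS SELECT * FROM t1 NATURAL LEFT JOIN t2;
    -- with no common column names the join has empty quals; the view
    -- definition now deparses as: ... FROM t1 LEFT JOIN t2 ON TRUE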
This got broken by commit 2ffa740be, which stopped using NATURAL JOIN
syntax in ruleutils output due to its brittleness in the face of
column renamings. Back-patch to 9.3 where that commit appeared.
Per report from Tushar Ahuja.
Discussion: https://postgr.es/m/98b283cd-6dda-5d3f-f8ac-87db8c76a3da@enterprisedb.com
Before, we always used a dummy value of 1, but that's not right when
the partitioned table being modified is inside of a WITH clause
rather than part of the main query.
Amit Langote, reported and reviewed by Etsuro Fujita, with a comment
change by me.
Discussion: http://postgr.es/m/ee12f648-8907-77b5-afc0-2980bcb0aa37@lab.ntt.co.jp
It seems pretty confusing to have tests named both largeobject and
large_object. The latter is of very recent vintage (commit ff992c074),
so get rid of it in favor of merging into the former.
Also, enable the LO comment test that was added by commit 70ad7ed4e,
since the later commit added the then-missing pg_upgrade functionality.
The large_object.sql test case is almost completely redundant with that,
but not quite: it seems like creating a user-defined LO with an OID in
the system range might be an interesting case for pg_upgrade, so let's
keep it.
Like the earlier patch, back-patch to all supported branches.
Discussion: https://postgr.es/m/18665.1500306372@sss.pgh.pa.us
The grammar will only accept something syntactically similar to a function
call in a function-in-FROM expression. However, there are various ways
to input something that ruleutils.c won't deparse that way, potentially
leading to a view or rule that fails dump/reload. Fix by inserting a
dummy CAST around anything that isn't going to deparse as a function
(which is one of the ways to get something like that in there in the
first place).
In HEAD, also make use of the infrastructure added by this to avoid
emitting unnecessary parentheses in CREATE INDEX deparsing. I did
not change that in back branches, thinking that people might find it
to be unexpected/unnecessary behavioral change.
In HEAD, also fix incorrect logic for when to add extra parens to
partition key expressions. Somebody apparently thought they could
get away with simpler logic than pg_get_indexdef_worker has, but
they were wrong --- a counterexample is PARTITION BY LIST ((a[1])).
Ignoring the prettyprint flag for partition expressions isn't exactly
a nice solution anyway.
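For reference, the counterexample mentioned above (sketch):

    CREATE TABLE lp (a int[]) PARTITION BY LIST ((a[1]));
    -- deparsing must keep the inner parentheses, or the output won't
    -- re-parse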
This has been broken all along, so back-patch to all supported branches.
Discussion: https://postgr.es/m/10477.1499970459@sss.pgh.pa.us
Further investigation shows that ruleutils isn't quite up to speed either
for cases where we have a domain-over-array: it needs to be prepared to
look past a CoerceToDomain at the top level of field and element
assignments, else it decompiles them incorrectly. Potentially this would
result in failure to dump/reload a rule, if it looked like the one in the
new test case. (I also added a test for EXPLAIN; that output isn't broken,
but clearly we need more test coverage here.)
Like commit b1cb32fb6, this bug is reachable in cases we already support,
so back-patch all the way.
We allow INSERT and UPDATE commands to assign to the same column more than
once, as long as the assignments are to subfields or elements rather than
the whole column. However, this failed when the target column was a domain
over array rather than plain array. Fix by teaching process_matched_tle()
to look through CoerceToDomain nodes, and add relevant test cases.
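A sketch of the now-working case (names illustrative):

    CREATE DOMAIN intarr AS int[];
    CREATE TABLE dt (a intarr);
    INSERT INTO dt VALUES ('{1,2}');
    -- two assignments to elements of the same domain-over-array column:
    UPDATE dt SET a[1] = 10, a[2] = 20;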
Also add a group of test cases exercising domains over array of composite.
It's doubtless accidental that CREATE DOMAIN allows this case while not
allowing straight domain over composite; but it does, so we'd better make
sure we don't break it. (I could not find any documentation mentioning
either side of that, so no doc changes.)
It's been like this for a long time, so back-patch to all supported
branches.
Discussion: https://postgr.es/m/4206.1499798337@sss.pgh.pa.us
Commit c46c0e5202 failed to pass the
TransitionCaptureState object to ExecARInsertTriggers() in the case
where it's using heap_multi_insert and there are indexes. Repair.
Thomas Munro, from a report by David Fetter
Discussion: https://postgr.es/m/20170708084213.GA14720%40fetter.org
The previous logic, whilst not actually wrong, was overly complex and
involved doing two binary searches, where only one was really
necessary. This simplifies that logic and improves the comments.
One visible change is that if the new partition overlaps multiple
existing partitions, the error message now always reports the overlap
with the first existing partition (the one with the lowest
bounds). The old code would sometimes report the clash with the first
partition and sometimes with the last one.
Original patch idea from Amit Langote, substantially rewritten by me.
Discussion: https://postgr.es/m/CAAJ_b947mowpLdxL3jo3YLKngRjrq9+Ej4ymduQTfYR+8=YAYQ@mail.gmail.com
Buildfarm members hornet and sungazer have shown multiple instances of
"Failed test 'xmin of non-cascaded slot with hs feedback has changed'".
The reason seems to be that the test is checking the current xmin of the
master server's replication slot against a past xmin of the first slave
server's replication slot. Even though the latter slot is downstream of
the former, it's possible for its reported xmin to be ahead of the former's
reported xmin, because those numbers are updated whenever the respective
downstream walreceiver feels like it (see logic in WalReceiverMain).
Instrumenting this test shows that indeed the slave slot's xmin does often
advance before the master's does, especially if an autovacuum transaction
manages to occur during the relevant window. If we happen to capture such
an advanced xmin as $xmin, then the subsequent wait_slot_xmins call can
fall through before the master's xmin has advanced at all, and then if it
advances before the get_slot_xmins call, we can get the observed failure.
Yeah, that's a bit of a long chain of deduction, but it's hard to explain
any other way how the test can get past an "xmin <> '$xmin'" check only
to have the next query find that xmin does equal $xmin.
Fix by keeping separate images of the master and slave slots' xmins
and testing their has-xmin-advanced conditions independently.
Since reducing pg_ctl's reaction time in commit c61559ec3, some
slower buildfarm members have shown erratic failures in this test.
The reason turns out to be that the test assumes synchronous
replication (because it does not provide any lag time for a commit
to replicate before shutting down the servers), but it had only
enabled sync rep in one direction. The observed symptoms correspond
to failure to replicate the last committed transaction in the other
direction, which can be expected to happen if the shutdown command
is issued soon enough and we are providing no synchronous-commit
guarantees.
Fix that, and add a bit more paranoid state checking at the bottom
of the script.
Michael Paquier and myself
Discussion: https://postgr.es/m/908.1498965681@sss.pgh.pa.us
By default, Perl's split() function drops trailing empty fields,
which is not what we want here. Oversight in commit fb093e4cb.
We'd managed to miss it thus far thanks to the very limited usage
of this function.
Discussion: https://postgr.es/m/14837.1499029831@sss.pgh.pa.us
The original coding here was very confusing, because it named the
two servers it set up "master" and "slave" even though it swapped
their replication roles multiple times. At any given point in the
script it was very unobvious whether "$node_master" actually referred
to the server named "master" or the other one. Instead, pick arbitrary
names for the two servers --- I used "london" and "paris" --- and
distinguish those permanent names from the nonce references $cur_master
and $cur_slave. Add logging to help distinguish which is which at
any given point. Also, use distinct data and transaction names to
make all the prepared transactions easily distinguishable in the
postmaster logs. (There was one place where we intentionally tested
that the server could cope with re-use of a transaction name, but
it seems like one place is sufficient for that purpose.)
Also, add checks at the end to make sure that all the transactions
that were supposed to be committed did survive.
Discussion: https://postgr.es/m/28238.1499010855@sss.pgh.pa.us
Add an optional "expected" argument to override the default assumption
that we're waiting for the query to return "t". This allows replacing
a handwritten polling loop in recovery/t/007_sync_rep.pl with use of
poll_query_until(); AFAICS that's the only remaining ad-hoc polling
loop in our TAP tests.
Change poll_query_until() to probe ten times per second not once per
second. Like some similar changes I've been making recently, the
one-second interval seems to be rooted in ancient traditions rather
than the actual likely wait duration on modern machines. I'd consider
reducing it further if there were a convenient way to spawn just one
psql for the whole loop rather than one per probe attempt.
Discussion: https://postgr.es/m/12486.1498938782@sss.pgh.pa.us
Several callers of PostgresNode::poll_query_until() neglected to check
for failure; I do not think that's optional. Also, rewrite one place
that had reinvented poll_query_until() for no very good reason.
The point of this loop is to insert 1000 rows into the test table
and consume 1000 XIDs. I can't see any good reason why it's useful
to launch 1000 psqls and 1000 backend processes to accomplish that.
Pushing the looping into a plpgsql DO block shaves about 10 seconds
off the runtime of the src/test/recovery TAP tests on my machine;
that's over 10% of the runtime of that test suite.
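For reference, a sketch of such a DO block; the exception clause forces
a subtransaction, so each iteration still consumes an XID (table name
illustrative):

    DO $$
    BEGIN
      FOR i IN 1..1000 LOOP
        BEGIN
          INSERT INTO tab_int VALUES (i);
        EXCEPTION
          WHEN division_by_zero THEN NULL;  -- never reached
        END;
      END LOOP;
    END$$;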
It is, in fact, sufficiently more efficient that we now demonstrably
need wait_slot_xmins() afterwards, or the slaves' xmins may not have
moved yet.
We now disallow having triggers with both transition tables and ON
INSERT OR UPDATE (which was a PG extension to the spec anyway),
because in this case it's not at all clear how the transition tables
should work for an INSERT ... ON CONFLICT query. Separate ON INSERT
and ON UPDATE triggers with transition tables are allowed, and the
transition tables for these reflect only the inserted and only the
updated tuples respectively.
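For example (sketch):

    CREATE TABLE t (a int);
    CREATE FUNCTION trig_fn() RETURNS trigger LANGUAGE plpgsql AS
      $$ BEGIN RETURN NULL; END $$;
    -- now rejected:
    CREATE TRIGGER bad AFTER INSERT OR UPDATE ON t
      REFERENCING NEW TABLE AS new_rows
      FOR EACH STATEMENT EXECUTE PROCEDURE trig_fn();
    -- separate ON INSERT and ON UPDATE triggers, each with its own
    -- transition table, remain allowed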
Patch by Thomas Munro
Discussion: https://postgr.es/m/CAEepm%3D11KHQ0JmETJQihSvhZB5mUZL2xrqHeXbCeLhDiqQ39%3Dw%40mail.gmail.com
check_agg_arguments_walker threw an error upon seeing a SRF or window
function, but that is too aggressive: if the function is within a
sub-select then it's perfectly fine. I broke the SRF case in commit
0436f6bde by copying the logic for window functions ... but that was
broken too, and had been since commit eaccfded9.
Repair both cases in HEAD, and the window function case back to 9.3.
9.2 gets this right.
By default, wal_retrieve_retry_interval is five seconds, which is far
more than is needed in any of our TAP tests, leaving the test cases
just twiddling their thumbs for significant stretches. Moreover,
because it's so large, we get basically no testing of the retry-before-
master-is-ready code path. Hence, make PostgresNode::init set up
wal_retrieve_retry_interval = '500ms' as part of its customization of
test clusters' postgresql.conf. This shaves quite a few seconds off
the runtime of the recovery TAP tests.
Back-patch into 9.6. We have wal_retrieve_retry_interval in 9.5,
but the test infrastructure isn't there.
Discussion: https://postgr.es/m/31624.1498500416@sss.pgh.pa.us
Remove hard-wired sleep(2) delays in 001_stream_rep.pl in favor of using
poll_query_until to check for the desired state to appear. In addition,
add such a wait before the last test in the script, as it's possible
to demonstrate failures there after upcoming improvements in pg_ctl.
(We might end up adding polling before each of the get_slot_xmins calls in
this script, but I feel no great need to do that until shown necessary.)
In passing, clarify the description strings for some of the test cases.
Michael Paquier and Craig Ringer, pursuant to a complaint from me
Discussion: https://postgr.es/m/8962.1498425057@sss.pgh.pa.us
The comparison with the target rows on the subscriber side was done with
datumIsEqual(), which can have false negatives. For instance, it didn't
work reliably for text columns. So use the equality operator provided
by the type cache instead.
Also add more user documentation about replica identity requirements.
Reported-by: Tatsuo Ishii <ishii@sraoss.co.jp>
It's essential that initdb.c's setup_depend() scan each system catalog
that could contain objects that need to have "p" (pin) entries in pg_depend
or pg_shdepend. Forgetting to add that, either when a catalog is first
invented or when it first acquires DATA() entries, is an obvious bug
hazard. We can detect such omissions at reasonable cost by probing every
OID-containing system catalog to see whether the lowest-numbered OID in it
is pinned. If so, the catalog must have been properly accounted for in
setup_depend(). If the lowest OID is above FirstNormalObjectId then the
catalog must have been empty at the end of initdb, so it doesn't matter.
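A minimal sketch of that probe for a single catalog (the real test
presumably iterates over every OID-bearing catalog):

    SELECT EXISTS (
      SELECT 1 FROM pg_depend
      WHERE refclassid = 'pg_class'::regclass
        AND refobjid = (SELECT min(oid) FROM pg_class)
        AND deptype = 'p') AS lowest_oid_is_pinned;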
There are a small number of catalogs whose first entry is made later in
initdb than setup_depend(), resulting in nonempty expected output of the
test, but these can be manually inspected to see that they are OK. Any
future mistake of this ilk will manifest as a new entry in the test's
output.
Since pg_conversion is already in the test's output, add it to the set of
catalogs scanned by setup_depend(). That has no effect today (hence, no
catversion bump here) but it will protect us if we ever do add pin-worthy
conversions.
This test is very much like the catalog sanity checks embodied in
opr_sanity.sql and type_sanity.sql, but testing pg_depend doesn't seem to
fit naturally into either of those scripts' charters. Hence, invent a new
test script misc_sanity.sql, which can be a home for this as well as tests
on any other catalogs we might want in future.
Discussion: https://postgr.es/m/8068.1498155068@sss.pgh.pa.us
I misplaced the IF NOT EXISTS clause in commit 7b504eb282, before the
word STATISTICS. Put it where it belongs.
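That is, the accepted form is now (sketch):

    CREATE STATISTICS IF NOT EXISTS s1 ON a, b FROM t1;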
Patch written independently by Amit Langote and myself. I adopted his
submitted test case with a slight edit also.
Reported-by: Bruno Wolff III
Discussion: https://postgr.es/m/20170621004237.GB8337@wolff.to
Don't move parenthesized lines to the left, even if that means they
flow past the right margin.
By default, BSD indent lines up statement continuation lines that are
within parentheses so that they start just to the right of the preceding
left parenthesis. However, traditionally, if that resulted in the
continuation line extending to the right of the desired right margin,
then indent would push it left just far enough to not overrun the margin,
if it could do so without making the continuation line start to the left of
the current statement indent. That makes for a weird mix of indentations
unless one has been completely rigid about never violating the 80-column
limit.
This behavior has been pretty universally panned by Postgres developers.
Hence, disable it with indent's new -lpl switch, so that parenthesized
lines are always lined up with the preceding left paren.
This patch is much less interesting than the first round of indent
changes, but also bulkier, so I thought it best to separate the effects.
Discussion: https://postgr.es/m/E1dAmxK-0006EE-1r@gemulon.postgresql.org
Discussion: https://postgr.es/m/30527.1495162840@sss.pgh.pa.us
Change pg_bsd_indent to follow upstream rules for placement of comments
to the right of code, and remove pgindent hack that caused comments
following #endif to not obey the general rule.
Commit e3860ffa4d wasn't actually using
the published version of pg_bsd_indent, but a hacked-up version that
tried to minimize the amount of movement of comments to the right of
code. The situation of interest is where such a comment has to be
moved to the right of its default placement at column 33 because there's
code there. BSD indent has always moved right in units of tab stops
in such cases --- but in the previous incarnation, indent was working
in 8-space tab stops, while now it knows we use 4-space tabs. So the
net result is that in about half the cases, such comments are placed
one tab stop left of before. This is better all around: it leaves
more room on the line for comment text, and it means that in such
cases the comment uniformly starts at the next 4-space tab stop after
the code, rather than sometimes one and sometimes two tabs after.
Also, ensure that comments following #endif are indented the same
as comments following other preprocessor commands such as #else.
That inconsistency turns out to have been self-inflicted damage
from a poorly-thought-through post-indent "fixup" in pgindent.
This patch is much less interesting than the first round of indent
changes, but also bulkier, so I thought it best to separate the effects.
Discussion: https://postgr.es/m/E1dAmxK-0006EE-1r@gemulon.postgresql.org
Discussion: https://postgr.es/m/30527.1495162840@sss.pgh.pa.us
The new indent version includes numerous fixes thanks to Piotr Stefaniak.
The main changes visible in this commit are:
* Nicer formatting of function-pointer declarations.
* No longer unexpectedly removes spaces in expressions using casts,
sizeof, or offsetof.
* No longer wants to add a space in "struct structname *varname", as
well as some similar cases for const- or volatile-qualified pointers.
* Declarations using PG_USED_FOR_ASSERTS_ONLY are formatted more nicely.
* Fixes bug where comments following declarations were sometimes placed
with no space separating them from the code.
* Fixes some odd decisions for comments following case labels.
* Fixes some cases where comments following code were indented to less
than the expected column 33.
On the less good side, it now tends to put more whitespace around typedef
names that are not listed in typedefs.list. This might encourage us to
put more effort into typedef name collection; it's not really a bug in
indent itself.
There are more changes coming after this round, having to do with comment
indentation and alignment of lines appearing within parentheses. I wanted
to limit the size of the diffs to something that could be reviewed without
one's eyes completely glazing over, so it seemed better to split up the
changes as much as practical.
Discussion: https://postgr.es/m/E1dAmxK-0006EE-1r@gemulon.postgresql.org
Discussion: https://postgr.es/m/30527.1495162840@sss.pgh.pa.us
Currently, there are no built-in functions that are SECURITY DEFINER.
But we just found an instance where one was mistakenly marked that way,
so it seems prudent to add a test about it. If we ever grow some
functions that are intentionally SECURITY DEFINER, we can alter the
expected output of this test, or adjust the query to filter out functions
for which it's okay.
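The test is presumably along these lines:

    SELECT oid, proname FROM pg_proc WHERE prosecdef ORDER BY 1;
    -- expected to return no rows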
Per suggestion from Robert Haas.
Discussion: https://postgr.es/m/CA+TgmoYXg7McY33+jbWmG=rS-HNUur0S6W8Q8kVNFf7epFimVA@mail.gmail.com
David Rowley found that the "use the smallest per-column selectivity"
heuristic applied in some cases by get_foreign_key_join_selectivity()
was badly off if the FK columns are independent, producing estimates
much worse than we got before that code was added in 9.6.
One case where that heuristic was used was for LEFT and FULL outer joins
with the referenced rel on the outside of the join. But we should not
really need to special-case those here. eqjoinsel() never has had such a
special case; the correction is applied by calc_joinrel_size_estimate()
instead. Let's just estimate such cases like inner joins and rely on that
later adjustment. (I think there was something of a thinko here, in that
the comments seem to be thinking about the selectivity as defined for
semi/anti joins; but that shouldn't apply to left/full joins.) Add a
regression test exercising such a case to show that this is sane in
at least some cases.
The other case where we used that heuristic was for SEMI/ANTI outer joins,
either if the referenced rel was on the outside, or if it was on the inside
but was part of a join within the RHS. In either case, the FK doesn't give
us a lot of traction towards estimating the selectivity. To ensure that
we don't have regressions from what happened before 9.6, let's punt by
ignoring the FK in such cases and applying the traditional selectivity
calculation. (We might be able to improve on that later, but for now
I just want to be sure it's not worse than 9.5.)
Report and patch by David Rowley, simplified a bit by me. Back-patch
to 9.6 where this code was added.
Discussion: https://postgr.es/m/CAKJS1f8NO8oCDcxrteohG6O72uU1saEVT9qX=R8pENr5QWerXw@mail.gmail.com
When a new base type is created using the old-style procedure of first
creating the input/output functions with "opaque" in place of the base
type, the "opaque" argument/return type is changed to the final base type,
on CREATE TYPE. However, we did not create a pg_depend record when doing
that, so the functions were left not depending on the type.
Fixes bug #14706, reported by Karen Huddleston.
Discussion: https://www.postgresql.org/message-id/20170614232259.1424.82774@wrigleys.postgresql.org
Show "All tables" property in \dRp and \dRp+. Don't list tables for
such publications in \dRp+, since it's redundant and the list could be
very long.
Author: Masahiko Sawada <sawada.mshk@gmail.com>
Author: Jeff Janes <jeff.janes@gmail.com>
In a CHECK clause, a null result means true, whereas in a WHERE clause
it means false. predtest.c provided different functions depending on
which set of semantics applied to the predicate being proved, but had
no option to control what a null meant in the clauses provided as
axioms. Add one.
Use that in the partitioning code when figuring out whether the
validation scan on a new partition can be skipped. Rip out the
old logic that attempted (not very successfully) to compensate
for the absence of the necessary support in predtest.c.
Ashutosh Bapat and Robert Haas, reviewed by Amit Langote and
incorporating feedback from Tom Lane.
Discussion: http://postgr.es/m/CAFjFpReT_kq_uwU_B8aWDxR7jNGE=P0iELycdq5oupi=xSQTOw@mail.gmail.com
expression_returns_set() used to short-circuit its recursion upon
seeing certain node types, such as DistinctExpr, that it knew the
executor did not support set-valued arguments for. That was never
inherent, though, just a reflection of laziness in execQual.c.
With the new implementation of SRFs there is no reason to think
that any scalar-valued expression node could not have a set-valued
subexpression, except for AggRefs and WindowFuncs where we know there
is a parser check rejecting it. And indeed, the shortcut causes
unexpected failures for cases such as a SRF underneath DistinctExpr,
because the planner stops looking for SRFs too soon.
Discussion: https://postgr.es/m/5259.1497044025@sss.pgh.pa.us
Table partitioning, introduced in commit f0e44751d7, added a new
relkind - RELKIND_PARTITIONED_TABLE. Update a couple of places in
PL/pgSQL to handle it. Specifically plpgsql_parse_cwordtype() and
build_row_from_class() needed updating in order to make table%ROWTYPE
and table.col%TYPE work for partitioned tables.
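For example (sketch):

    CREATE TABLE pt (a int, b text) PARTITION BY RANGE (a);
    DO $$
    DECLARE
      r pt%ROWTYPE;    -- previously failed for partitioned tables
      x pt.a%TYPE;     -- likewise
    BEGIN
      r.a := 1;
      x := r.a;
    END$$;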
Dean Rasheed, reviewed by Amit Langote.
Discussion: https://postgr.es/m/CAEZATCUnNOKN8sLML9jUzxecALWpEXK3a3W7y0PgFR4%2Buhgc%3Dg%40mail.gmail.com
When we reimplemented SRFs in commit 69f4b9c85, our initial choice was
to allow the behavior to vary from historical practice in cases where a
SRF call appeared within a conditional-execution construct (currently,
only CASE or COALESCE). But that was controversial to begin with, and
subsequent discussion has resulted in a consensus that it's better to
throw an error instead of executing the query differently from before,
so long as we can provide a reasonably clear error message and a way to
rewrite the query.
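For instance (sketch; the exact message wording may differ):

    SELECT x, CASE WHEN x > 0 THEN generate_series(1, x) ELSE 0 END
    FROM (VALUES (1), (2)) AS v(x);
    -- now raises an error reporting that set-returning functions are
    -- not allowed in CASE, with a cursor pointing at the SRF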
Hence, add a parser mechanism to allow detection of such cases during
parse analysis. The mechanism just requires storing, in the ParseState,
a pointer to the set-returning FuncExpr or OpExpr most recently emitted
by parse analysis. Then the parsing functions for CASE and COALESCE can
detect the presence of a SRF in their arguments by noting whether this
pointer changes while analyzing their arguments. Furthermore, if it does,
it provides a suitable error cursor location for the complaint. (This
means that if there's more than one SRF in the arguments, the error will
point at the last one to be analyzed not the first. While connoisseurs of
parsing behavior might find that odd, it's unlikely the average user would
ever notice.)
While at it, we can also provide more specific error messages than before
about some pre-existing restrictions, such as no-SRFs-within-aggregates.
Also, reject at parse time cases where a NULLIF or IS DISTINCT FROM
construct would need to return a set. We've never supported that, but the
restriction is depended on in more subtle ways now, so it seems wise to
detect it at the start.
Also, provide some documentation about how to rewrite a SRF-within-CASE
query using a custom wrapper SRF.
It turns out that the information_schema.user_mapping_options view
contained an instance of exactly the behavior we're now forbidding; but
rewriting it makes it more clear and safer too.
initdb forced because of user_mapping_options change.
Patch by me, with error message suggestions from Alvaro Herrera and
Andres Freund, pursuant to a complaint from Regina Obe.
Discussion: https://postgr.es/m/000001d2d5de$d8d66170$8a832450$@pcorp.us
Table partitioning, introduced in commit f0e44751d7, added a new
relkind - RELKIND_PARTITIONED_TABLE. Update relation_is_updatable() to
handle it. Specifically, partitioned tables and simple views built on
top of them are updatable.
This affects the SQL-callable functions pg_relation_is_updatable() and
pg_column_is_updatable(), and the views information_schema.views and
information_schema.columns.
Dean Rasheed, reviewed by Ashutosh Bapat.
Discussion: https://postgr.es/m/CAEZATCXnbiFkMXgF4Ez1pmM2c-tS1z33bSq7OGbw7QQhHov%2B6Q%40mail.gmail.com
ExecInitModifyTable() thought there was a plan per partition, but no,
there's only one. The problem had escaped detection so far because there
would only be visible misbehavior if there were a SubPlan (not an InitPlan)
in the quals being duplicated for each partition. However, valgrind
detected a bogus memory access in test cases added by commit 4f7a95be2,
and investigation of that led to discovery of the bug. The additional
test case added here crashes without the patch.
Patch by Amit Langote, test case by me.
Discussion: https://postgr.es/m/10974.1497227727@sss.pgh.pa.us
The new partitioned table capability added a new relkind, namely
RELKIND_PARTITIONED_TABLE. Update fireRIRrules() to apply RLS
policies on RELKIND_PARTITIONED_TABLE as it does RELKIND_RELATION.
In addition, add RLS regression test coverage for partitioned tables.
Issue raised by Fakhroutdinov Evgenievich and patch by Mike Palmiotto.
Regression test editorializing by me.
Discussion: https://postgr.es/m/flat/20170601065959.1486.69906@wrigleys.postgresql.org
Since tuple-routing implicitly checks the partitioning constraints
at least for the levels of the partitioning hierarchy it traverses,
there's normally no need to revalidate the partitioning constraint
after performing tuple routing. However, if there's a BEFORE trigger
on the target partition, it could modify the tuple, causing the
partitioning constraint to be violated. Catch that case.
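For example (sketch):

    CREATE TABLE p (a int) PARTITION BY LIST (a);
    CREATE TABLE p1 PARTITION OF p FOR VALUES IN (1);
    CREATE FUNCTION p1_bump() RETURNS trigger LANGUAGE plpgsql AS
      $$ BEGIN NEW.a := 2; RETURN NEW; END $$;
    CREATE TRIGGER p1_bump BEFORE INSERT ON p1
      FOR EACH ROW EXECUTE PROCEDURE p1_bump();
    INSERT INTO p VALUES (1);  -- routed to p1, then the trigger breaks
                               -- the partition constraint: now an error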
Also, instead of checking the root table's partition constraint after
tuple-routing, check it beforehand. Otherwise, the rules for when
the partitioning constraint gets checked get too complicated, because
you sometimes have to check part of the constraint but not all of it.
This effectively reverts commit 39162b2030
in favor of a different approach altogether.
Report by me. Initial debugging by Jeevan Ladhe. Patch by Amit
Langote, reviewed by me.
Discussion: http://postgr.es/m/CA+Tgmoa9DTgeVOqopieV8d1QRpddmP65aCdxyjdYDoEO5pS5KA@mail.gmail.com
The larger part of this patch replaces usages of MyProc->procLatch
with MyLatch. The latter works even early during backend startup,
where MyProc->procLatch doesn't yet. While the affected code
shouldn't run in cases where it's not initialized, it might get copied
into places where it might. Using MyLatch is simpler and a bit faster
to boot, so there's little point to stick with the previous coding.
While doing so I noticed some weaknesses around newly introduced uses
of latches that could lead to missed events, and an omitted
CHECK_FOR_INTERRUPTS() call in worker_spi.
As all the actual bugs are in v10 code, there doesn't seem to be
sufficient reason to backpatch this.
Author: Andres Freund
Discussion: https://postgr.es/m/20170606195321.sjmenrfgl2nu6j63@alap3.anarazel.de
Discussion: https://postgr.es/m/20170606210405.sim3yl6vpudhmufo@alap3.anarazel.de
Backpatch: -
There was a grammar ambiguity between SET PUBLICATION name REFRESH and
SET PUBLICATION SKIP REFRESH, because SKIP is not a reserved word. To
resolve that, fold the refresh choice into the WITH options. Refreshing
is the default now.
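That is (sketch):

    ALTER SUBSCRIPTION sub SET PUBLICATION pub;                         -- refreshes
    ALTER SUBSCRIPTION sub SET PUBLICATION pub WITH (refresh = false);  -- old SKIP REFRESH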
Reported-by: tushar <tushar.ahuja@enterprisedb.com>
Given the possibility of race conditions and so on, it seems entirely
unsafe to just assume that shm_toc_lookup() always finds the key it's
looking for --- but that was exactly what all but one call site were
doing. To fix, add a "bool noError" argument, similarly to what we
have in many other functions, and throw an error on an unexpected
lookup failure. Remove now-redundant Asserts that a rather random
subset of call sites had.
I doubt this will throw any light on buildfarm member lorikeet's
recent failures, because if an unnoticed lookup failure were involved,
you'd kind of expect a null-pointer-dereference crash rather than the
observed symptom. But you never know ... and this is better coding
practice even if it never catches anything.
Discussion: https://postgr.es/m/9697.1496675981@sss.pgh.pa.us
If we allow this, whatever outer command has the table open will not know
about the new index and may fail to update it as needed, as shown in a
report from Laurenz Albe. We already had such a prohibition in place for
ALTER TABLE, but the CREATE INDEX syntax missed the check.
Fixing it requires an API change for DefineIndex(), which conceivably
would break third-party extensions if we were to back-patch it. Given
how long this problem has existed without being noticed, fixing it in
the back branches doesn't seem worth that risk.
Discussion: https://postgr.es/m/A737B7A37273E048B164557ADEF4A58B53A4DC9A@ntex2010i.host.magwien.gv.at
When costing a nestloop with stop-at-first-inner-match semantics, and a
non-indexscan inner path, final_cost_nestloop() wants to charge the full
scan cost of the inner rel at least once, with additional scans charged
at inner_rescan_run_cost which might be less. However the logic for
doing this effectively assumed that outer_matched_rows is at least 1.
If it's zero, which is not unlikely for a small outer rel, we ended up
charging inner_run_cost plus N times inner_rescan_run_cost, as much as
double the correct charge for an outer rel with only one row that
we're betting won't be matched. (Unless the inner rel is materialized,
in which case it has very small inner_rescan_run_cost and the cost
is not so far off what it should have been.)
The upshot of this was that the planner had a tendency to select plans
that failed to make effective use of the stop-at-first-inner-match
semantics, and that might have Materialize nodes in them even when the
predicted number of executions of the Materialize subplan was only 1.
This was not so obvious before commit 9c7f5229a, because the case only
arose in connection with semi/anti joins where there's not freedom to
reverse the join order. But with the addition of unique-inner joins,
it could result in some fairly bad planning choices, as reported by
Teodor Sigaev. Indeed, some of the test cases added by that commit
have plans that look dubious on closer inspection, and are changed
by this patch.
Fix the logic to ensure that we don't charge for too many inner scans.
I chose to adjust it so that the full-freight scan cost is associated
with an unmatched outer row if possible, not a matched one, since that
seems like a better model of what would happen at runtime.
This is a longstanding bug, but given the lesser impact in back branches,
and the lack of field complaints, I won't risk a back-patch.
Discussion: https://postgr.es/m/CAKJS1f-LzkUsFxdJ_-Luy38orQ+AdEXM5o+vANR+-pHAWPSecg@mail.gmail.com
Previously the changes to the "data" part of the sequence, i.e. the
part containing the current value, were not transactional, whereas changes
to the definition, including minimum and maximum values, were. That leads to
odd behaviour if a schema change is rolled back, with the potential
that out-of-bound sequence values can be returned.
To avoid the issue create a new relfilenode whenever ALTER
SEQUENCE is executed, similar to how TRUNCATE ... RESTART IDENTITY
is already handled.
This commit also makes ALTER SEQUENCE RESTART transactional, as it
seems to be too confusing to have some forms of ALTER SEQUENCE behave
transactionally, some forms not. This way setval() and nextval() are
not transactional, but DDL is, which seems to make sense.
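For example (sketch):

    CREATE SEQUENCE s;
    BEGIN;
    ALTER SEQUENCE s RESTART WITH 100;
    ROLLBACK;
    SELECT nextval('s');  -- returns 1; the rolled-back RESTART has no effect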
This commit also rolls back parts of the changes made in 3d092fe540
and f8dc1985f as they're now not needed anymore.
Author: Andres Freund
Discussion: https://postgr.es/m/20170522154227.nvafbsm62sjpbxvd@alap3.anarazel.de
Backpatch: Bug is in master/v10 only
Per our message style guidelines, error messages incorporating the
results of format_type_be() and its siblings should not add quotes
around those results, because those functions already add quotes
at need. Fix a few places that hadn't gotten that memo.
json_populate_record throws an error if asked to convert a JSON scalar
or array into a composite type. jsonb_populate_record was returning
a record full of NULL fields instead. It seems better to make it
throw an error for this case as well.
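For example (sketch; type name illustrative):

    CREATE TYPE jpop AS (a text, b int);
    SELECT * FROM jsonb_populate_record(NULL::jpop, '[1, 2]');
    -- now an error, matching json_populate_record, instead of a row of
    -- NULL fields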
Nikita Glukhov
Discussion: https://postgr.es/m/fbd1d566-bba0-a3de-d6d0-d3b1d7c24ff2@postgrespro.ru
Fix failure to check that we got a plain Const from const-simplification of
a coercion request. This is the cause of bug #14666 from Tian Bing: there
is an int4 to money cast, but it's only stable not immutable (because of
dependence on lc_monetary), resulting in a FuncExpr that the code was
miserably unequipped to deal with, or indeed even to notice that it was
failing to deal with. Add test cases around this coercion behavior.
In view of the above, sprinkle the code liberally with castNode() macros,
in hope of catching the next such bug a bit sooner. Also, change some
functions that were randomly declared to take Node* to take more specific
pointer types. And change some struct fields that were declared Node*
but could be given more specific types, allowing removal of assorted
explicit casts.
Place PARTITION_MAX_KEYS check a bit closer to the code it's protecting.
Likewise check only-one-key-for-list-partitioning restriction in a less
random place.
Avoid not-per-project-style usages like !strcmp(...).
Fix assorted failures to avoid scribbling on the input of parse
transformation. I'm not sure how necessary this is, but it's entirely
silly for these functions to be expending cycles to avoid that and not
getting it right.
Add guards against partitioning on system columns.
Put backend/nodes/ support code into an order that matches handling
of these node types elsewhere.
Annotate the fact that somebody added location fields to PartitionBoundSpec
and PartitionRangeDatum but forgot to handle them in
outfuncs.c/readfuncs.c. This is fairly harmless for production purposes
(since readfuncs.c would just substitute -1 anyway) but it's still bogus.
It's not worth forcing a post-beta1 initdb just to fix this, but if we
have another reason to force initdb before 10.0, we should go back and
clean this up.
Contrariwise, somebody added location fields to PartitionElem and
PartitionSpec but forgot to teach exprLocation() about them.
Consolidate duplicative code in transformPartitionBound().
Improve a couple of error messages.
Improve assorted commentary.
Re-pgindent the files touched by this patch; this affects a few comment
blocks that must have been added quite recently.
Report: https://postgr.es/m/20170524024550.29935.14396@wrigleys.postgresql.org
Use 'COLLATE "C"' to force locale-independent sorting of the iexit
view results in select_views.sql. We aren't particularly interested
in the exact sorting behavior here, and this doesn't change the shape
of the generated plan, so it seems like a wash as far as the goals
of this test go.
This is in response to bug #14637 from Tomasz Kontusz. It doesn't
fully resolve his problem, because he also saw some diffs in the
create_index test. But other people have had issues with select_views
too, and this fix lets us drop the select_views_1.out variant expected
file altogether, which is a nice win from a maintenance standpoint.
Emre Hasegeli
Discussion: https://postgr.es/m/20170501000609.24360.24248@wrigleys.postgresql.org
Commit 9aa3c782c added code to allow CREATE TABLE/CREATE TYPE to not fail
when the desired type name conflicts with an autogenerated array type, by
dint of renaming the array type out of the way. But I (tgl) overlooked
that the same case arises in ALTER TABLE/TYPE RENAME. Fix that too.
Back-patch to all supported branches.
Report and patch by Vik Fearing, modified a bit by me
Discussion: https://postgr.es/m/0f4ade49-4f0b-a9a3-c120-7589f01d1eb8@2ndquadrant.com
Logical replication supports replicating between tables with different
column order. But this failed for the initial table sync because of a
logic error in how the column list for the internal COPY command was
composed. Fix that and also add a test.
Also fix a minor omission in the column name mapping cache. When
creating the mapping list, it would not skip locally dropped columns.
So if a remote column had the same name as a locally dropped
column (...pg.dropped...), then the expected error would not occur.
The cash_div_intX functions applied rint() to the result of the division.
That's not merely useless (because the result is already an integer) but
it causes precision loss for values larger than 2^52 or so, because of
the forced conversion to float8.
On the other hand, the cash_mul_fltX functions neglected to apply rint() to
their multiplication results, thus possibly causing off-by-one outputs.
Per C standard, arithmetic between any integral value and a float value is
performed in float format. Thus, cash_mul_flt4 and cash_div_flt4 produced
answers good to only about six digits, even when the float value is exact.
We can improve matters noticeably by widening the float inputs to double.
(It's tempting to consider using "long double" arithmetic if available,
but that's probably too much of a stretch for a back-patched fix.)
Also, document that cash_div_intX operators truncate rather than round.
Per bug #14663 from Richard Pistole. Back-patch to all supported branches.
Discussion: https://postgr.es/m/22403.1495223615@sss.pgh.pa.us
Otherwise, set_plan_refs() can get applied to the same list
multiple times through different references, leading to chaos.
Amit Langote, Dilip Kumar, and Robert Haas, reviewed by Ashutosh
Bapat. Original report by Sveinn Sveinsson.
Discussion: http://postgr.es/m/20170517141151.1435.79890@wrigleys.postgresql.org
This seemed like a good idea originally because there's no way to mark
a range partition as accepting NULL, but that now seems more like a
current limitation than something we want to lock down for all time.
For example, there's a proposal to add the notion of a default
partition which accepts all rows not otherwise routed, which directly
conflicts with the idea that a range-partitioned table should never
allow nulls anywhere. So let's change this while we still can, by
putting the NOT NULL test into the partition constraint instead of
changing the column properties.
Amit Langote and Robert Haas, reviewed by Amit Kapila
Discussion: http://postgr.es/m/8e2dd63d-c6fb-bb74-3c2b-ed6d63629c9d@lab.ntt.co.jp
When creating a subscription with slot_name = NONE, we failed to check
that also create_slot = false and enabled = false were set. This
created an invalid subscription and could later lead to a crash if a
NULL slot name was accessed. Add more checks around that for
robustness.
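That is, the only valid combination is now (sketch; connection string
illustrative):

    CREATE SUBSCRIPTION sub CONNECTION 'dbname=src' PUBLICATION pub
      WITH (slot_name = NONE, create_slot = false, enabled = false);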
Reported-by: tushar <tushar.ahuja@enterprisedb.com>
Add some tests for parsing different option combinations. Fix some of
the resulting error messages for recent changes in option naming.
Author: Masahiko Sawada <sawada.mshk@gmail.com>
Modifying the permissions of a persistent file isn't really much nicer
than modifying its contents, even if git doesn't currently notice it.
Adjust the test script to make a copy and set the permissions of that
instead.
Michael Paquier, per a gripe from me. Back-patch to 9.5 where these
tests were introduced.
Discussion: https://postgr.es/m/14836.1494885946@sss.pgh.pa.us
The CommentStmt made by RebuildConstraintComment() has to pstrdup the
relation name, else it will contain a dangling pointer after that
relcache entry is flushed. (I'm less sure that pstrdup'ing conname
is necessary, but let's be safe.) Failure to do this leads to weird
errors or crashes, as reported by Marko Elezovic.
Bug introduced by commit e42375fc8, so back-patch to 9.5 as that was.
Fix by David Rowley, regression test by Michael Paquier
Discussion: https://postgr.es/m/DB6PR03MB30775D58E732D4EB0C13725B9AE00@DB6PR03MB3077.eurprd03.prod.outlook.com
In 1753b1b027, the pg_sequence system
catalog was introduced. This made sequence metadata changes
transactional, while the actual sequence values are still behaving
nontransactionally. This requires some refinement in how ALTER
SEQUENCE, which operates on both, locks the sequence and the catalog.
The main problems were:
- Concurrent ALTER SEQUENCE causes "tuple concurrently updated" error,
caused by updates to pg_sequence catalog.
- Sequence WAL writes and catalog updates are not protected by same
lock, which could lead to inconsistent recovery order.
- nextval() disregarding uncommitted ALTER SEQUENCE changes.
To fix, nextval() and friends now lock the sequence using
RowExclusiveLock instead of AccessShareLock. ALTER SEQUENCE locks the
sequence using ShareRowExclusiveLock. This means that nextval() and
ALTER SEQUENCE block each other, and ALTER SEQUENCE on the same sequence
blocks itself. (This was already the case previously for the OWNER TO,
RENAME, and SET SCHEMA variants.) Also, rearrange some code so that the
entire AlterSequence is protected by the lock on the sequence.
As an exception, use reduced locking for ALTER SEQUENCE ... RESTART.
Since that is basically a setval(), it does not require the full locking
of other ALTER SEQUENCE actions. So check whether we are only running a
RESTART and run with less locking if so.
Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
Reported-by: Jason Petersen <jason@citusdata.com>
Reported-by: Andres Freund <andres@anarazel.de>
pg_stat_get_snapshot_timestamp() returns the timestamp seen in the "global"
stats file. Because pgstat_write_statsfiles() writes per-DB stats files
before the global file (or at least before renaming it into place), there
is a window where the test backend can see all the stats updates that
wait_for_stats() was checking for (all of which come from the per-DB file)
but also see the same global stats file it had seen at the start of the
test script. This results in a failure in only the "snapshot_newer" query,
as reported by a couple of buildfarm members recently.
I suspect that this ought to be back-patched. Commit 4e37b3e15 has
evidently increased the probability of this window getting hit, but
it's not apparent why it could not have been hit before. I'll refrain
for the moment though.
Commit 60690a6fe attempted to fix the wait_for_stats() function in this
test so that it would wait properly if the tenk2 scans were done in
parallel workers instead of the main session (typically as a consequence of
force_parallel_mode being turned on). However, we made it test for whether
the main session's actions had been reported by looking for inserts on
'trunc_stats_test'. This is the Wrong Thing, because those aren't the last
updates we expect the main session to do. As shown by recent failures on
buildfarm member frogmouth, it's entirely likely that the trunc_stats_test
updates will be reported in a separate message from later updates, which
means there can be a window in which wait_for_stats() will exit but not all
the updates we are expecting to see will have arrived. We should test for
the last updates we're expecting, namely those on 'trunc_stats_test4'.
Unfortunately, I doubt that this explains frogmouth's failures, because
there's no reason to believe that it's running the tenk2 queries in
parallel. Still, the test is wrong on its own terms, so fix and back-patch
to 9.6 where parallel query came in.
ALTER COLUMN TYPE on a column used by a statistics object fails since
commit 928c4de30, because the relevant switch in ATExecAlterColumnType
is unprepared for columns to have dependencies from OCLASS_STATISTIC_EXT
objects.
Although the existing types of extended statistics don't actually need us
to do any work for a column type change, it seems completely indefensible
that that assumption is hidden behind the failure of an unrelated module
to contain any code for the case. Hence, create and call an API function
in statscmds.c where the assumption can be explained, and where we could
add code to deal with the problem when it inevitably becomes real.
Also, the reason this wasn't handled before, neither for extended stats
nor for the last half-dozen new OCLASS kinds :-(, is that the default:
in that switch suppresses compiler warnings, allowing people to miss the
need to consider it when adding an OCLASS. We don't really need a default
because surely getObjectClass should only return valid values of the enum;
so remove it, and add the missed OCLASS entries where they should be.
Discussion: https://postgr.es/m/20170512221010.nglatgt5azzdxjlj@alvherre.pgsql
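A minimal reproducer for the original failure, using the WITH-style
CREATE STATISTICS syntax of the time (table and statistics names
hypothetical):

    CREATE TABLE t1 (a int, b int);
    CREATE STATISTICS s1 WITH (dependencies) ON (a, b) FROM t1;
    ALTER TABLE t1 ALTER COLUMN a TYPE bigint;  -- errored before this fix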
Consistently refer to such an entry as a "statistics object", not just
"statistics" or "extended statistics". Previously we had a mishmash of
terms, accompanied by utter confusion as to whether the term was
singular or plural. That's not only grating (at least to the ear of
a native English speaker) but could be outright misleading, eg in error
messages that seemed to be referring to multiple objects where only one
could be meant.
This commit fixes the code and a lot of comments (though I may have
missed a few). I also renamed two new SQL functions,
pg_get_statisticsextdef -> pg_get_statisticsobjdef
pg_statistic_ext_is_visible -> pg_statistics_obj_is_visible
to conform better with this terminology.
I have not touched the SGML docs other than fixing those function
names; the docs certainly need work but it seems like a separable task.
Discussion: https://postgr.es/m/22676.1494557205@sss.pgh.pa.us
On faster machines, the overall runtime for running the core regression
tests is under twenty seconds these days, of which the hard-wired delays
in the stats test are a significant fraction. But on closer inspection,
it seems like we shouldn't need those.
The initial 2-second delay is there only to reduce the risk of the test's
stats messages not getting sent due to contention. But analysis of the
last ten years' worth of buildfarm runs shows no evidence that such
failures actually occur. (We do see failures that look like stats
messages not getting sent, particularly on Windows; but there is little
reason to believe that the initial delay reduces their frequency.)
The later 1-second delay is there to ensure that our session's stats
will have gotten sent. But we could also do that by starting a fresh
session, which takes well under 1 second even on very slow machines.
Hence, let's remove both delays and see what happens. The first delay
was the only test of pg_sleep_for() in the regression tests, but we can
move that responsibility into wait_for_stats().
Discussion: https://postgr.es/m/17795.1493869423@sss.pgh.pa.us
A stats object ought to have a dependency on each individual column
it reads, not the entire table. Doing this honestly lets us get rid
of the hard-wired logic in RemoveStatisticsExt, which seems to have
been misguidedly modeled on RemoveStatistics; and it will be far easier
to extend to multiple tables later.
Also, add overlooked dependency on owner, and make the dependency on
schema be NORMAL like every other such dependency.
There remains some unfinished work here, which is to allow statistics
objects to be extension members. That takes more effort than just
adding the dependency call, though, so I left it out for now.
initdb forced because this changes the set of pg_depend records that
should exist for a statistics object.
Discussion: https://postgr.es/m/22676.1494557205@sss.pgh.pa.us
Previously, we had the WITH clause in the middle of the command, where
you'd specify both generic options as well as statistic types. Few
people liked this, so this commit changes it to remove the WITH keyword
from that clause and makes it accept statistic types only. (We
currently don't have any generic options, but if we invent any in the
future, we will gain a new WITH clause, probably at the end of the
command.)
Also, the column list is now specified without parens, which makes the
whole command look more similar to a SELECT command. This change will
let us expand the command to support expressions (not just column
names) as well as multiple tables and their join conditions.
Tom added lots of code comments and fixed some parts of the CREATE
STATISTICS reference page, too; more changes in this area are
forthcoming. He also fixed a potential problem in the alter_generic
regression test, reducing verbosity on a cascaded drop to avoid
dependency on message ordering, as we do in other tests.
Tom also closed a security bug: we documented that table ownership was
required in order to create a statistics object on it, but didn't
actually implement it.
Implement tab-completion for statistics objects. This can stand some
more improvement.
Authors: Alvaro Herrera, with lots of cleanup by Tom Lane
Discussion: https://postgr.es/m/20170420212426.ltvgyhnefvhixm6i@alvherre.pgsql
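As a sketch of the syntax change (names hypothetical):

    -- old form:
    CREATE STATISTICS s1 WITH (dependencies) ON (a, b) FROM t1;
    -- new form: no WITH keyword, no parens around the column list:
    CREATE STATISTICS s1 (dependencies) ON a, b FROM t1;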
For CREATE/ALTER PUBLICATION/SUBSCRIPTION, use similar option style as
other statements that use a WITH clause for options.
Author: Petr Jelinek <petr.jelinek@2ndquadrant.com>
Certain recovery tests use the Perl IPC::Run module's start/kill_kill
method of processing. On at least some versions of perl this causes the
whole process and its caller to crash. If we ever find a better way of
doing these tests they can be re-enabled on this platform. This does not
affect Mingw or Cygwin builds, which use a different perl and a
different shell and so are not affected.
Per discussion, "location" is a rather vague term that could refer to
multiple concepts. "LSN" is an unambiguous term for WAL locations and
should be preferred. Some function names, view column names, and function
output argument names used "lsn" already, but others used "location",
as well as yet other terms such as "wal_position". Since we've already
renamed a lot of things in this area from "xlog" to "wal" for v10,
we may as well incur a bit more compatibility pain and make these names
all consistent.
David Rowley, minor additional docs hacking by me
Discussion: https://postgr.es/m/CAKJS1f8O0njDKe8ePFQ-LK5-EjwThsDws6ohJ-+c6nWK+oUxtg@mail.gmail.com
It turned out this approach had problems, because a DROP command should
not have any options other than CASCADE and RESTRICT. Instead, always
attempt to drop the slot if there is one configured, but also add an
ALTER SUBSCRIPTION action to set the slot to NONE.
Author: Petr Jelinek <petr.jelinek@2ndquadrant.com>
Reported-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://www.postgresql.org/message-id/29431.1493730652@sss.pgh.pa.us
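In the form this eventually took, dissociating a subscription from its
slot before dropping it looks roughly like this (subscription name
hypothetical):

    ALTER SUBSCRIPTION mysub DISABLE;
    ALTER SUBSCRIPTION mysub SET (slot_name = NONE);
    DROP SUBSCRIPTION mysub;  -- no remote slot is touched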
Previously it would allow an invalid connection string to be set.
Author: Petr Jelinek <petr.jelinek@2ndquadrant.com>
Reported-by: tushar <tushar.ahuja@enterprisedb.com>
Both views replace the umoptions field with NULL when the user does not
meet qualifications to see it. They used different qualifications, and
pg_user_mappings' documented qualifications did not match its implemented
qualifications. Make its documentation and implementation match those
of user_mapping_options. One might argue for stronger qualifications,
but these have long, documented tenure. pg_user_mappings has always
exhibited this problem, so back-patch to 9.2 (all supported versions).
Michael Paquier and Feike Steenbergen. Reviewed by Jeff Janes.
Reported by Andrew Wheelwright.
Security: CVE-2017-7486
Some selectivity estimation functions run user-supplied operators over
data obtained from pg_statistic without security checks, which allows
those operators to leak pg_statistic data without having privileges on
the underlying tables. Fix by checking that one of the following is
satisfied: (1) the user has table or column privileges on the table
underlying the pg_statistic data, or (2) the function implementing the
user-supplied operator is leak-proof. If neither is satisfied, planning
will proceed as if there are no statistics available.
At least one of these is satisfied in most cases in practice. The only
situations that are negatively impacted are user-defined or
not-leak-proof operators on a security-barrier view.
Reported-by: Robert Haas <robertmhaas@gmail.com>
Author: Peter Eisentraut <peter_e@gmx.net>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Security: CVE-2017-7484
Storing passwords in plaintext hasn't been a good idea for a very long
time, if ever. Now seems like a good time to finally forbid it, since we're
messing with this in PostgreSQL 10 anyway.
Remove the CREATE/ALTER USER UNENCRYPTED PASSWORD 'foo' syntax, since
storing passwords unencrypted is no longer supported. ENCRYPTED PASSWORD
'foo' is still accepted, but ENCRYPTED is now just a noise-word, it does
the same as just PASSWORD 'foo'.
Likewise, remove the --unencrypted option from createuser, but accept
--encrypted as a no-op for backward compatibility. AFAICS, --encrypted was
a no-op even before this patch, because createuser encrypted the password
before sending it to the server even if --encrypted was not specified. It
added the ENCRYPTED keyword to the SQL command, but since the password was
already in encrypted form, it didn't make any difference. The documentation
was not clear on whether that was intended or not, but it's moot now.
Also, while password_encryption='on' is still accepted as an alias for
'md5', it is now marked as hidden, so that it is not listed as an accepted
value in error hints, for example. That's not directly related to removing
'plain', but it seems better this way.
Reviewed by Michael Paquier
Discussion: https://www.postgresql.org/message-id/16e9b768-fd78-0b12-cfc1-7b6b7f238fde@iki.fi
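A sketch of the resulting syntax (role names hypothetical):

    CREATE ROLE alice ENCRYPTED PASSWORD 's3cret';    -- ENCRYPTED is a noise word
    CREATE ROLE bob PASSWORD 's3cret';                -- same effect
    CREATE ROLE carol UNENCRYPTED PASSWORD 's3cret';  -- now raises an error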
The salt is stored base64-encoded. With the old 10 bytes raw length, it was
always padded to 16 bytes after encoding. We might as well use 12 raw bytes
for the salt, and it's still encoded into 16 bytes.
Similarly for the random nonces, use a raw length that's divisible by 3, so
that there's no padding after base64 encoding. Make the nonces longer while
we're at it. 10 bytes was probably enough to prevent replay attacks, but
there's no reason to be skimpy here.
Per suggestion from Álvaro Hernández Tortosa.
Discussion: https://www.postgresql.org/message-id/df8c6e27-4d8e-5281-96e5-131a4e638fc8@8kdata.com
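For reference, the base64 size arithmetic at work here:

    encoded_len = 4 * ceil(raw_len / 3)
    raw_len = 10  ->  16 characters, ending in '==' padding
    raw_len = 12  ->  16 characters, no padding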
GiST's getNextNearest() function attempts to pfree the previously-returned
tuple if any (that is, scan->xs_hitup in HEAD, or scan->xs_itup in older
branches). However, if we are rescanning a plan node after ending a
previous scan early, those tuple pointers could be pointing to garbage,
because they would be pointing into the scan's pageDataCxt or queueCxt
which has been reset. In a debug build this reliably results in a crash,
although I think it might sometimes accidentally fail to fail in
production builds.
To fix, clear the pointer field anyplace we reset a context it might
be pointing into. This may be overkill --- I think probably only the
queueCxt case is involved in this bug, so that resetting in gistrescan()
would be sufficient --- but dangling pointers are generally bad news,
so let's avoid them.
Another plausible answer might be to just not bother with the pfree in
getNextNearest(). The reconstructed tuples would go away anyway in the
context resets, and I'm far from convinced that freeing them a bit earlier
really saves anything meaningful. I'll stick with the original logic in
this patch, but if we find more problems in the same area we should
consider that approach.
Per bug #14641 from Denis Smirnov. Back-patch to 9.5 where this
logic was introduced.
Discussion: https://postgr.es/m/20170504072034.24366.57688@wrigleys.postgresql.org
It only produced <row> elements but no wrapping <table> element.
By contrast, cursor_to_xmlschema produced a schema that is now correct
but did not previously match the XML data produced by cursor_to_xml.
In passing, also fix a minor misunderstanding about moving cursors in
the tests related to this.
Reported-by: filip@jirsak.org
Based-on-patch-by: Thomas Munro <thomas.munro@enterprisedb.com>
This removes a test case added by commit b69ec7cc9, which was intended
to exercise a corner case involving the rule used at that time that
materialized views were unpopulated iff they had physical size zero.
We got rid of that rule very shortly later, in commit 1d6c72a55, but
kept the test case. However, because the case now asks what VACUUM
will do to a zero-sized physical file, it would be pretty surprising
if the answer were ever anything but "nothing" ... and if things were
indeed that broken, surely we'd find it out from other tests. Since
the test involves a table that's fairly large by regression-test
standards (100K rows), it's quite slow to run. Dropping it should
save some buildfarm cycles, so let's do that.
Discussion: https://postgr.es/m/32386.1493831320@sss.pgh.pa.us
Due to a missing CommandCounterIncrement() call, parsing of a non-utility
command in an extension script would not see the effects of the immediately
preceding DDL command, unless that command's execution ends with
CommandCounterIncrement() internally ... which some do but many don't.
Report by Philippe Beaudoin, diagnosis by Julien Rouhaud.
Rather remarkably, this bug has evaded detection since extensions were
invented, so back-patch to all supported branches.
Discussion: https://postgr.es/m/2cf7941e-4e41-7714-3de8-37b1a8f74dff@free.fr
If the inner relation can be proven unique, that is it can have no more
than one matching row for any row of the outer query, then we might as
well implement the semijoin as a plain inner join, allowing substantially
more freedom to the planner. This is a form of outer join strength
reduction, but it can't be implemented in reduce_outer_joins() because
we don't have enough info about the individual relations at that stage.
Instead do it much like remove_useless_joins(): once we've built base
relations, we can make another pass over the SpecialJoinInfo list and
get rid of any entries representing reducible semijoins.
This is essentially a followon to the inner-unique patch (commit 9c7f5229a)
and makes use of the proof machinery that that patch created. We need only
minor refactoring of innerrel_is_unique's API to support this usage.
Per performance complaint from Teodor Sigaev.
Discussion: https://postgr.es/m/f994fc98-389f-4a46-d1bc-c42e05cb43ed@sigaev.ru
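A sketch of the kind of query this helps (tables hypothetical; t2.pk is
unique):

    EXPLAIN (COSTS OFF)
    SELECT * FROM t1 WHERE EXISTS (SELECT 1 FROM t2 WHERE t2.pk = t1.x);
    -- the provably-unique semijoin can now be planned as a plain inner join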
The inner-unique patch (commit 9c7f5229a) supposed that if we're
considering a JOIN_UNIQUE_INNER join path, we can always set inner_unique
for the join, because the inner path produced by create_unique_path should
be unique relative to the outer relation. However, that's true only if
we're considering joining to the whole outer relation --- otherwise we may
be applying only some of the join quals, and so the inner path might be
non-unique from the perspective of this join. Adjust the test to only
believe that we can set inner_unique if we have the whole semijoin LHS on
the outer side.
There is more that can be done in this area, but this commit is only
intended to provide the minimal fix needed to get correct plans.
Per report from Teodor Sigaev. Thanks to David Rowley for preliminary
investigation.
Discussion: https://postgr.es/m/f994fc98-389f-4a46-d1bc-c42e05cb43ed@sigaev.ru
Even though no actual tuples are ever inserted into a partitioned
table (the actual tuples are in the partitions, not the partitioned
table itself), we still need to have a ResultRelInfo for the
partitioned table, or per-statement triggers won't get fired.
Amit Langote, per a report from Rajkumar Raghuwanshi. Reviewed by me.
Discussion: http://postgr.es/m/CAKcux6%3DwYospCRY2J4XEFuVy0L41S%3Dfic7rmkbsU-GXhhSbmBg%40mail.gmail.com
Convert the binary_coercible() and physically_coercible() functions from
SQL to plpgsql. It's not that plpgsql is inherently better at doing
queries; if you simply convert the previous single SQL query into one
RETURN expression, it's no faster. The problem with the existing code
is that it fools the plancache into deciding that it's worth re-planning
the query every time, since constant-folding with a concrete value for $2
allows elimination of at least one sub-SELECT. In reality that's using the
planner to do the equivalent of a few runtime boolean tests, causing the
function to run much slower than it should. Splitting the AND/OR logic
into separate plpgsql statements allows each if-expression to acquire a
static plan.
Also, get rid of some uses of obj_description() in favor of explicitly
joining to pg_description, allowing the joins to be optimized better.
(Someday we might improve the SQL-function-inlining logic enough that
this happens automatically, but today is not that day.)
Together, these changes reduce the runtime of the opr_sanity regression
test by about a factor of two on one of my slower machines. They don't
seem to help as much on a fast machine, but this should at least benefit
the buildfarm.
Currently, trying to validate a NO INHERIT constraint on the parent will
search for the constraint in child tables (where it is not supposed to
exist), wrongly causing a "constraint does not exist" error.
Amit Langote, per a report from Hans Buschmann.
Discussion: http://postgr.es/m/20170421184012.24362.19@wrigleys.postgresql.org
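A minimal reproducer (names hypothetical):

    CREATE TABLE parent (a int);
    CREATE TABLE child () INHERITS (parent);
    ALTER TABLE parent ADD CONSTRAINT a_pos CHECK (a > 0) NO INHERIT NOT VALID;
    ALTER TABLE parent VALIDATE CONSTRAINT a_pos;  -- wrongly errored before the fix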
Before restarting a tablesync worker for the same relation, wait
wal_retrieve_retry_interval (currently 5s by default). This avoids
restarting failing workers in a tight loop.
We keep the last start times in a hash table last_start_times that is
separate from the table_states list, because that list is cleared out on
syscache invalidation, which happens whenever a table finishes syncing.
The hash table is kept until all tables have finished syncing.
A future project might be to unify these two and keep everything in one
data structure, but for now this is a less invasive change to accomplish
the original purpose.
For the test suite, set wal_retrieve_retry_interval to its minimum
value, to not increase the test suite run time.
Reviewed-by: Petr Jelinek <petr.jelinek@2ndquadrant.com>
Reported-by: Masahiko Sawada <sawada.mshk@gmail.com>
Declarative partitioning duplicated the TypedTableElement productions,
evidently to remove the need to specify WITH OPTIONS when creating
partitions. Instead, simply make WITH OPTIONS optional in the
TypedTableElement production and remove all of the duplicate
PartitionElement-related productions. This change simplifies the
syntax and makes WITH OPTIONS optional when adding defaults, constraints
or storage parameters to columns when creating either typed tables or
partitions.
Also update pg_dump to no longer include WITH OPTIONS, since it's not
necessary, and update the documentation to reflect that WITH OPTIONS is
now optional.
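A sketch of the relaxed syntax (names hypothetical):

    CREATE TABLE part_2017 PARTITION OF measurement (
        logdate WITH OPTIONS DEFAULT '2017-01-01',  -- previously required form
        peaktemp DEFAULT 0                          -- WITH OPTIONS now optional
    ) FOR VALUES FROM ('2017-01-01') TO ('2018-01-01');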
Our general rule for pg_get_X(oid) functions is to simply return NULL
when passed an invalid or inappropriate OID. Teach pg_get_partkeydef to
do this also, making it easier for users to use this function when
querying against tables with both partitions and non-partitions (such as
pg_class).
As a concrete example, this makes pg_dump's life a little easier.
Author: Amit Langote
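For example, a query like the following now returns NULL for
non-partitioned relations instead of raising an error:

    SELECT relname, pg_get_partkeydef(oid)
    FROM pg_class
    WHERE relkind IN ('r', 'p');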
There is no need to forbid ALTER TABLE ONLY on partitioned tables,
when no partitions exist yet. This can be handy for users who are
building up their partitioned table independently and will create actual
partitions later.
In addition, this is how pg_dump likes to operate in certain instances.
Author: Amit Langote, with some error message word-smithing by me
In quorum-based synchronous replication, all the standbys listed in
synchronous_standby_names have an equal chance of being chosen as
synchronous standbys, so they should all have the same priority.
However, previously, quorum standbys whose names appeared earlier in
the list were given higher priority values, even though the difference
in those values didn't affect the selection of synchronous standbys.
Users could see those "meaningless" priority values in
pg_stat_replication, which was confusing.
This commit gives all the quorum synchronous standbys the same
highest priority, i.e., 1, in order to remove such confusion.
Author: Fujii Masao
Reviewed-by: Masahiko Sawada, Kyotaro Horiguchi
Discussion: http://postgr.es/m/CAHGQGwEKOw=SmPLxJzkBsH6wwDBgOnVz46QjHbtsiZ-d-2RGUg@mail.gmail.com
Fix machine-dependent sorting of column numbers. (Odd behavior
would only materialize for column numbers above 255, but that's
certainly legal.)
Fix poor choice of SQLSTATE for some errors, and improve error message
wording. (Notably, "is not a scalar type" is a totally misleading way
to explain "does not have a default btree opclass".)
Avoid taking AccessExclusiveLock on the associated relation during DROP
STATISTICS. That's neither necessary nor desirable, and it could easily
have put us into situations where DROP fails (compare commit 68ea2b7f9).
Adjust/improve comments.
David Rowley and Tom Lane
Discussion: https://postgr.es/m/CAKJS1f-GmCfPvBbAEaM5xoVOaYdVgVN1gicALSoYQ77z-+vLbw@mail.gmail.com
archive_command and restore_command need to refer to Windows paths, not
Msys virtual file system paths, as postgres is completely unaware of the
latter, so prefix them with the Windows path to the virtual file system
root. Clean psql and pg_recvlogical output of carriage returns.
PostgresNode blithely ignored the exit status of pg_ctl, and in general
made no effort to be sure that the server was running when it should be.
This caused it to miss server crashes, which is a serious shortcoming
in a test scaffold. Make it complain if pg_ctl fails, and modify the
start and stop logic to complain if the server doesn't start, or doesn't
stop, when expected.
Also, have it turn off the "restart_after_crash" configuration parameter
in created clusters, as bitter experience has shown that leaving that on
can mask crashes too.
We might at some point need variant functions that allow for, eg,
server start failure to be expected. But no existing test case appears
to want that, and it surely shouldn't be the default behavior.
Note that this *will* break the buildfarm, as it will expose known
bugs that the previous testing failed to. I'm committing it despite
that, to verify that we get the expected failures in the buildfarm
not just in manual testing.
Back-patch into 9.6 where PostgresNode was introduced. (The 9.6
branch is not expected to show any failures.)
Discussion: https://postgr.es/m/21432.1492886428@sss.pgh.pa.us
Although the documentation for append_conf said clearly that it didn't
add a newline, many test authors seem to have forgotten that ... or maybe
they just consulted the example at the top of the POD documentation,
which clearly shows adding a config entry without bothering to add a
trailing newline. The worst part of that is that it works, as long as
you don't do it more than once, since the backend isn't picky about
whether config files end with newlines. So there's not a strong forcing
function reminding test authors not to do it like that. Upshot is that
this is a terribly fragile way to go about things, and there's at least
one existing test case that is demonstrably broken and not testing what
it thinks it is.
Let's just make append_conf append a newline, instead; that is clearly
way safer than the old definition.
I also cleaned up a few call sites that were unnecessarily ugly.
(I left things alone in places where it's plausible that additional
config lines would need to be added someday.)
Back-patch the change in append_conf itself to 9.6 where it was added,
as having a definitional inconsistency between branches would obviously
be pretty hazardous for back-patching TAP tests. The other changes are
just cosmetic and don't need to be back-patched.
Discussion: https://postgr.es/m/19751.1492892376@sss.pgh.pa.us
It doesn't make any immediate difference to PostgreSQL, but might as well
follow the standard, since one exists. (I looked at RFC 5803 earlier, but
didn't fully understand it back then.)
The new format uses Base64 instead of hex to encode StoredKey and
ServerKey, which makes the verifiers slightly smaller. Using the same
encoding for the salt and the keys also means that you only need one
encoder/decoder instead of two. Although we have code in the backend to
do both, we are talking about teaching libpq how to create SCRAM verifiers
for PQencodePassword(), and libpq doesn't currently have any code for hex
encoding.
Bump catversion, because this renders any existing SCRAM verifiers in
pg_authid invalid.
Discussion: https://www.postgresql.org/message-id/351ba574-85ea-d9b8-9689-8c928dd0955d@iki.fi
Give a more specific error message than "xyz is not a table".
Also document in CREATE PUBLICATION which kinds of relations are not
supported.
based on patch by Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>
is_parallel_safe() supposed that the only relevant property of a SubPlan
was the parallel safety of the referenced subplan tree. This is wrong:
the testexpr or args subtrees might contain parallel-unsafe stuff, as
demonstrated by the test case added here. However, just recursing into the
subtrees fails in a different way: we'll typically find PARAM_EXEC Params
representing the subplan's output columns in the testexpr. The previous
coding supposed that any Param must be treated as parallel-restricted, so
that a naive attempt at fixing this disabled parallel pushdown of SubPlans
altogether. We must instead determine, for any visited Param, whether it
is one that would be computed by a surrounding SubPlan node; if so, it's
safe to push down along with the SubPlan node.
We might later be able to extend this logic to cope with Params used for
correlated subplans and other cases; but that's a task for v11 or beyond.
Tom Lane and Amit Kapila
Discussion: https://postgr.es/m/7064.1492022469@sss.pgh.pa.us
Since it appears that v10 is going to move the goalposts by some amount
in terms of where you can and can't invoke set-returning functions,
arrange for the executor's "set-valued function called in context that
cannot accept a set" errors to include a syntax position if possible,
pointing to the specific SRF that can't be called where it's located.
The main bit of infrastructure needed for this is to make the query source
text accessible in the executor; but it turns out that commit 4c728f382
already did that. We just need a new function executor_errposition()
modeled on parser_errposition(), and we're ready to rock.
While experimenting with this, I noted that the error position wasn't
properly reported if it occurred in a plpgsql FOR-over-query loop,
which turned out to be because SPI_cursor_open_internal wasn't providing
an error context callback during PortalStart. Fix that.
There's a whole lot more that could be done with this infrastructure
now that it's there, but this is not the right time in the development
cycle for that sort of work. Hence, resist the temptation to plaster
executor_errposition() calls everywhere ... for the moment.
Discussion: https://postgr.es/m/5263.1492471571@sss.pgh.pa.us
Per discussion, plain "scram" is confusing because we actually implement
SCRAM-SHA-256 rather than the original SCRAM that uses SHA-1 as the hash
algorithm. If we add support for SCRAM-SHA-512 or some other mechanism in
the SCRAM family in the future, that would become even more confusing.
Most of the internal files and functions still use just "scram" as a
shorthand for SCRAM-SHA-256, but I did change PASSWORD_TYPE_SCRAM to
PASSWORD_TYPE_SCRAM_SHA_256, as that could potentially be used by 3rd
party extensions that hook into the password-check hook.
Michael Paquier did this in an earlier version of the SCRAM patch set
already, but I didn't include that in the version that was committed.
Discussion: https://www.postgresql.org/message-id/fde71ff1-5858-90c8-99a9-1c2427e7bafb@iki.fi
We were accepting creation of extended statistics only for regular
tables, but they can usefully be created for foreign tables, partitioned
tables, and materialized views, too. Allow those cases.
While at it, make sure all the rejected cases throw a consistent error
message, and add regression tests for the whole thing.
Author: David Rowley, Álvaro Herrera
Discussion: https://postgr.es/m/CAKJS1f-BmGo410bh5RSPZUvOO0LhmHL2NYmdrC_Jm8pk_FfyCA@mail.gmail.com
Either because of a previous ALTER TABLE .. SET STATISTICS 0 or because
of being invoked with a partial column list, ANALYZE could fail to
acquire sufficient data to build extended statistics. Previously, this
would draw an ERROR and fail to collect any statistics at all (extended
and regular). Change things so that we raise a WARNING instead, and
remove a hint that was wrong in half the cases.
Reported by: David Rowley
Discussion: https://postgr.es/m/CAKJS1f9Kk0NF6Fg7TA=JUXsjpS9kX6NVu27pb5QDCpOYAvb-Og@mail.gmail.com
Coverity complained because bgw.bgw_extra wasn't being filled in by
ApplyLauncherRegister(). The most future-proof fix is to memset the
whole BackgroundWorker struct to zeroes. While at it, let's apply the
same coding rule to other places that set up BackgroundWorker structs;
four out of five had the same or related issues.
validateCheckConstraint() shouldn't try to access the storage for
a partitioned table, because it no longer has any. Creating a
_RETURN table on a partitioned table shouldn't be allowed, both
because there's no value in it and because trying to do so would
involve a validation scan against its nonexistent storage.
Amit Langote, reviewed by Tom Lane. Regression test outputs
updated to pass by me.
Discussion: http://postgr.es/m/e5c3cbd3-1551-d6f8-c9e2-51777d632fd2@lab.ntt.co.jp
Commit 96a7128b made pg_dump and pg_dumpall sync their output by
default. However, there's no great need for that in testing, and it
could impose a performance penalty, so we add the --no-sync flag to most
of the test cases.
Michael Paquier
We decided in f1b4c771ea to pass the
original slot to ExecConstraints(), but that breaks when there are
BEFORE ROW triggers involved. So we need to reverse-map the tuples
back to the original descriptor instead, as Amit originally proposed.
Amit Langote, reviewed by Ashutosh Bapat. One overlooked comment
fixed by me.
Discussion: http://postgr.es/m/b3a17254-6849-e542-2353-bde4e880b6a4@lab.ntt.co.jp
Commit 4deb41381 modified isolationtester's query to see whether a
session is blocked to also check for waits occurring in GetSafeSnapshot.
However, it did that in a way that enormously increased the query's
runtime under CLOBBER_CACHE_ALWAYS, causing the buildfarm members
that use that to run about four times slower than before, and in some
cases fail entirely. To fix, push the entire logic into a dedicated
backend function. This should actually reduce the CLOBBER_CACHE_ALWAYS
runtime from what it was previously, though I've not checked that.
In passing, expose a SQL function to check for safe-snapshot blockage,
comparable to pg_blocking_pids. This is more or less free given the
infrastructure built to solve the other problem, so we might as well.
Thomas Munro
Discussion: https://postgr.es/m/20170407165749.pstcakbc637opkax@alap3.anarazel.de
If there can certainly be no more than one matching inner row for a given
outer row, then the executor can move on to the next outer row as soon as
it's found one match; there's no need to continue scanning the inner
relation for this outer row. This saves useless scanning in nestloop
and hash joins. In merge joins, it offers the opportunity to skip
mark/restore processing, because we know we have not advanced past the
first possible match for the next outer row.
Of course, the devil is in the details: the proof of uniqueness must
depend only on joinquals (not otherquals), and if we want to skip
mergejoin mark/restore then it must depend only on merge clauses.
To avoid adding more planning overhead than absolutely necessary,
the present patch errs in the conservative direction: there are cases
where inner_unique or skip_mark_restore processing could be used, but
it will not do so because it's not sure that the uniqueness proof
depended only on "safe" clauses. This could be improved later.
David Rowley, reviewed and rather heavily editorialized on by me
Discussion: https://postgr.es/m/CAApHDvqF6Sw-TK98bW48TdtFJ+3a7D2mFyZ7++=D-RyPsL76gw@mail.gmail.com
When adding atomics back in b64d92f1a, I added 64bit support as
optional; there wasn't yet a direct user in sight. That turned out to
be a bit short-sighted; it'd already have been useful a number of times.
Add a fallback implementation of 64bit atomics, just like the one we
have for 32bit atomics.
Additionally optimize reads/writes to 64bit on a number of platforms
where aligned writes of that size are atomic. This can now be tested
with PG_HAVE_8BYTE_SINGLE_COPY_ATOMICITY.
Author: Andres Freund
Reviewed-By: Amit Kapila
Discussion: https://postgr.es/m/20160330230914.GH13305@awork2.anarazel.de
As reported by Sean Johnston in bug #14614, since 9.6 the planner can fail
due to trying to look up the referent of a Var with varno 0. This happens
because we generate such Vars in generate_append_tlist, for lack of any
better way to describe the output of a SetOp node. In typical situations
nothing really cares about that, but given nested set-operation queries
we will call estimate_num_groups on the output of the subquery, and that
wants to know what a Var actually refers to. That logic used to look at
subquery->targetList, but in commit 3fc6e2d7f I'd switched it to look at
subroot->processed_tlist, ie the actual output of the subquery plan not the
parser's idea of the result. It seemed like a good idea at the time :-(.
As a band-aid fix, change it back.
Really we ought to have an honest way of naming the outputs of SetOp steps,
which suggests that it'd be a good idea for the parser to emit an RTE
corresponding to each one. But that's a task for another day, and it
certainly wouldn't yield a back-patchable fix.
Report: https://postgr.es/m/20170407115808.25934.51866@wrigleys.postgresql.org
An important step of SASLprep normalization is to convert the string to
Unicode normalization form NFKC. Unicode normalization requires a fairly
large table of character decompositions, which is generated from data
published by the Unicode consortium. The script to generate the table is
put in src/common/unicode, along with test code for the normalization.
A pre-generated version of the tables is included in src/include/common,
so you don't need the code in src/common/unicode to build PostgreSQL, only
if you wish to modify the normalization tables.
The SASLprep implementation depends on the UTF-8 functions from
src/backend/utils/mb/wchar.c. So to use it, you must also compile and link
that. That doesn't change anything for the current users of these
functions, the backend and libpq, as they both already link with wchar.o.
It would be good to move those functions into a separate file in
src/common, but I'll leave that for another day.
No documentation changes are included, because there are no details on the
SCRAM mechanism in the docs anyway. An overview on that in the protocol
specification would probably be good, even though SCRAM is documented in
detail in RFC5802. I'll write that as a separate patch. An important thing
to mention there is that we apply SASLprep even on invalid UTF-8 strings,
to support other encodings.
Patch by Michael Paquier and me.
Discussion: https://www.postgresql.org/message-id/CAB7nPqSByyEmAVLtEf1KxTRh=PWNKiWKEKQR=e1yGehz=wbymQ@mail.gmail.com
With this change array fields are populated from json(b) arrays, and
composite fields are populated from json(b) objects.
Along the way, some significant code refactoring is done to remove
redundancy in the way the populate_record[_set] and to_record[_set]
functions operate, and some significant efficiency gains are made by
caching tuple descriptors.
Nikita Glukhov, edited some by me.
Reviewed by Aleksander Alekseev and Tom Lane.
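A small sketch of the new behavior (type name hypothetical):

    CREATE TYPE rec AS (vals int[]);
    SELECT * FROM json_populate_record(NULL::rec, '{"vals": [1, 2, 3]}');
    -- vals comes back as the PostgreSQL array {1,2,3}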
tupconvert.c's functions formerly considered that an explicit tuple
conversion was necessary if the input and output tupdescs contained
different type OIDs. The point of that was to make sure that a composite
datum resulting from the conversion would contain the destination rowtype
OID in its composite-datum header. However, commit 3838074f8 entirely
misunderstood what that check was for, thinking that it had something to do
with presence or absence of an OID column within the tuple. Removal of the
check broke the no-op conversion path in ExecEvalConvertRowtype, as
reported by Ashutosh Bapat.
It turns out that of the dozen or so call sites for tupconvert.c functions,
ExecEvalConvertRowtype is the only one that cares about the composite-datum
header fields in the output tuple. In all the rest, we'd much rather avoid
an unnecessary conversion whenever the tuples are physically compatible.
Moreover, the comments in tupconvert.c only promise physical compatibility
not a metadata match. So, let's accept the removal of the guarantee about
the output tuple's rowtype marking, recognizing that this is an API change
that could conceivably break third-party callers of tupconvert.c. (So,
let's remember to mention it in the v10 release notes.)
However, commit 3838074f8 did have a bit of a point here, in that two
tuples mustn't be considered physically compatible if one has HEAP_HASOID
set and the other doesn't. (Some of the callers of tupconvert.c might not
really care about that, but we can't assume it in general.) The previous
check accidentally covered that issue, because no RECORD types ever have
OIDs, while if two tupdescs have the same named composite type OID then,
a fortiori, they have the same tdhasoid setting. If we're removing the
type OID match check then we'd better include tdhasoid match as part of
the physical compatibility check.
Without that hack in tupconvert.c, we need ExecEvalConvertRowtype to take
responsibility for inserting the correct rowtype OID label whenever
tupconvert.c decides it need not do anything. This is easily done with
heap_copy_tuple_as_datum, which will be considerably faster than a tuple
disassembly and reassembly anyway; so from a performance standpoint this
change is a win all around compared to what happened in earlier branches.
It just means a couple more lines of code in ExecEvalConvertRowtype.
Ashutosh Bapat and Tom Lane
Discussion: https://postgr.es/m/CAFjFpRfvHABV6+oVvGcshF8rHn+1LfRUhj7Jz1CDZ4gPUwehBg@mail.gmail.com
The original code was overly optimistic about the cost of scanning a
BRIN index, leading to BRIN indexes being selected when they'd be a
worse choice than some other index. This complete rewrite should be
more accurate.
Author: David Rowley, based on an earlier patch by Emre Hasegeli
Reviewed-by: Emre Hasegeli
Discussion: https://postgr.es/m/CAKJS1f9n-Wapop5Xz1dtGdpdqmzeGqQK4sV2MK-zZugfC14Xtw@mail.gmail.com
When sending a tuple attribute, the previous coding erroneously sent the
length byte before encoding conversion, which would lead to protocol
failures on the receiving side if the length did not match the following
string.
To fix that, use pq_sendcountedtext() for sending tuple attributes,
which takes care of all of that internally. To match the API of
pq_sendcountedtext(), send even text values without a trailing zero byte
and have the receiving end put it in place instead. This matches how
the standard FE/BE protocol behaves.
Reported-by: Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>
Also add an opr_sanity check that all preloaded immutable functions are
parallel safe. (Per discussion, this does not necessarily have to be
true for all possible such functions, but deviations would be unlikely
enough that maintaining such a test is reasonable.)
Reported-by: David Rowley <david.rowley@2ndquadrant.com>
Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
This is the SQL standard-conforming variant of PostgreSQL's serial
columns. It fixes a few usability issues that serial columns have:
- CREATE TABLE / LIKE copies default but refers to same sequence
- cannot add/drop serialness with ALTER TABLE
- dropping default does not drop sequence
- need to grant separate privileges to sequence
- other slight weirdnesses because serial is some kind of special macro
Reviewed-by: Vitaly Burovoy <vitaly.burovoy@gmail.com>
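A sketch of the new syntax (table name hypothetical):

    CREATE TABLE items (
        id int GENERATED BY DEFAULT AS IDENTITY PRIMARY KEY,
        payload text
    );
    -- unlike serial, identity can be dropped again:
    ALTER TABLE items ALTER COLUMN id DROP IDENTITY IF EXISTS;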
When using integer timestamps, the interval-comparison functions tried
to compute the overall magnitude of an interval as an int64 number of
microseconds. As reported by Frazer McLean, this overflows for intervals
exceeding about 296000 years, which is bad since we nominally allow
intervals many times larger than that. That results in wrong comparison
results, and possibly in corrupted btree indexes for columns containing
such large interval values.
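For scale, assuming the comparison code's convention of valuing a month
at 30 days:

    2^63 us ~= 9.22e18 us ~= 1.07e8 days
    1.07e8 days / 360 days per "year" of twelve 30-day months
        ~= 296,000 years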
To fix, compute the magnitude as int128 instead. Although some compilers
have native support for int128 calculations, many don't, so create our
own support functions that can do 128-bit addition and multiplication
if the compiler support isn't there. These support functions are designed
with an eye to allowing the int128 code paths in numeric.c to be rewritten
for use on all platforms, although this patch doesn't do that, or even
provide all the int128 primitives that will be needed for it.
Back-patch as far as 9.4. Earlier releases did not guard against overflow
of interval values at all (commit 146604ec4 fixed that), so it seems not
very exciting to worry about overly-large intervals for them.
Before 9.6, we did not assume that unreferenced "static inline" functions
would not draw compiler warnings, so omit functions not directly referenced
by timestamp.c, the only present consumer of int128.h. (We could have
omitted these functions in HEAD too, but since they were written and
debugged on the way to the present patch, and they look likely to be needed
by numeric.c, let's keep them in HEAD.) I did not bother to try to prevent
such warnings in a --disable-integer-datetimes build, though.
Before 9.5, configure will never define HAVE_INT128, so the part of
int128.h that exploits a native int128 implementation is dead code in the
9.4 branch. I didn't bother to remove it, thinking that keeping the file
looking similar in different branches is more useful.
In HEAD only, add a simple test harness for int128.h in src/tools/.
In back branches, this does not change the float-timestamps code path.
That's not subject to the same kind of overflow risk, since it computes
the interval magnitude as float8. (No doubt, when this code was originally
written, overflow was disregarded for exactly that reason.) There is a
precision hazard instead :-(, but we'll avert our eyes from that question,
since no complaints have been reported and that code's deprecated anyway.
Kyotaro Horiguchi and Tom Lane
Discussion: https://postgr.es/m/1490104629.422698.918452336.26FA96B7@webmail.messagingengine.com
Follow-on patch in the multivariate statistics patch series.
CREATE STATISTICS s1 WITH (dependencies) ON (a, b) FROM t;
ANALYZE;
will collect dependency stats on (a, b) and then use the measured
dependency in subsequent query planning.
Commit 7b504eb282 added
CREATE STATISTICS with n-distinct coefficients. These are now
specified using the mutually exclusive option WITH (ndistinct).
Author: Tomas Vondra, David Rowley
Reviewed-by: Kyotaro HORIGUCHI, Álvaro Herrera, Dean Rasheed, Robert Haas,
with comments and contributions from many others
Discussion: https://postgr.es/m/56f40b20-c464-fad2-ff39-06b668fac47c@2ndquadrant.com
This improves code coverage and lays a foundation for testing
similar issues in a distributed environment.
Author: Thomas Munro <thomas.munro@enterprisedb.com>
Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
When changing the type of a sequence, adjust the min/max values of the
sequence if it looks like the previous values were the default values.
Previously, it would leave the old values in place, requiring manual
adjustments even in the usual/default cases.
Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
Reviewed-by: Vitaly Burovoy <vitaly.burovoy@gmail.com>
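A sketch (sequence name hypothetical):

    CREATE SEQUENCE seq1 AS integer;   -- maxvalue defaults to 2147483647
    ALTER SEQUENCE seq1 AS smallint;   -- maxvalue is now adjusted to 32767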
Per discussion, the TAP tests are really more verbose than necessary, so
remove the --verbose flag from PROVE_FLAGS. Also add comments to let
folks know how they can enable it if they really wish to, as suggested
by Craig Ringer.
Author: Michael Paquier, additional comments by me.
Discussion: https://postgr.es/m/CAMsr%2BYGAzcMDOZ_BirnMCL6Sb%3DMUjP0FRE82YBDSbXcf6pm9Yg%40mail.gmail.com
In commit e984ef586 I (tgl) simplified the behavior of \p to just print
the current query buffer; but Daniel Vérité points out that this made it
inconsistent with the behavior of \g and \w. It should print the same
thing \g would execute. Fix that, and improve related comments.
Daniel Vérité
Discussion: https://postgr.es/m/9b4ea968-753f-4b5f-b46c-d7d3bf7c8f90@manitou-mail.org
When the BRIN summary tuple for a page range becomes too "wide" for the
values actually stored in the table (because the tuples that were
present originally are no longer present due to updates or deletes), it
can be useful to remove the outdated summary tuple, so that a future
summarization can install a tighter summary.
This commit introduces a SQL-callable interface to do so.
Author: Álvaro Herrera
Reviewed-by: Eiji Seki
Discussion: https://postgr.es/m/20170228045643.n2ri74ara4fhhfxf@alvherre.pgsql
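Assuming the function landed as brin_desummarize_range(regclass, bigint),
the call looks like this (index name hypothetical):

    SELECT brin_desummarize_range('brin_idx', 0);
    -- forget the summary for the page range containing block 0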
Previously, only VACUUM would cause a page range to get initially
summarized by BRIN indexes, which for some use cases means too long a
delay after the inserts occur. To avoid the delay, have brininsert request a
summarization run for the previous range as soon as the first tuple is
inserted into the first page of the next range. Autovacuum is in charge
of processing these requests, after doing all the regular vacuuming/
analyzing work on tables.
This doesn't impose any new tasks on autovacuum, because autovacuum was
already in charge of doing summarizations. The only actual effect is to
change the timing, i.e. that it occurs earlier. For this reason, we
don't go to any great lengths to record these requests very robustly; if
they are lost because of a server crash or restart, they will happen at
a later time anyway.
Most of the new code here is in autovacuum, which can now be told about
"work items" to process. This can be used for other things such as GIN
pending list cleaning, perhaps visibility map bit setting, both of which
are currently invoked during vacuum, but do not really depend on vacuum
taking place.
The requests are at the page range level, a granularity for which we did
not have SQL-level access; we only had index-level summarization
requests via brin_summarize_new_values(). It seems reasonable to add
SQL-level access to range-level summarization too, so add a function
brin_summarize_range() to do that.
Authors: Álvaro Herrera, based on sketch from Simon Riggs.
Reviewed-by: Thomas Munro.
Discussion: https://postgr.es/m/20170301045823.vneqdqkmsd4as4ds@alvherre.pgsql
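A sketch of both knobs, assuming the reloption and function names as
released (table and index names hypothetical):

    CREATE INDEX events_brin ON events USING brin (created_at)
        WITH (autosummarize = on);
    SELECT brin_summarize_range('events_brin', 0);
    -- summarize just the range containing block 0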
A QueryEnvironment concept is added, which allows new types of
objects to be passed into queries from parsing on through
execution. At this point, the only thing implemented is a
collection of EphemeralNamedRelation objects -- relations which
can be referenced by name in queries, but do not exist in the
catalogs. The only type of ENR implemented is NamedTuplestore, but
provision is made to add more types fairly easily.
An ENR can carry its own TupleDesc or reference a relation in the
catalogs by relid.
Although these features can be used without SPI, convenience
functions are added to SPI so that ENRs can easily be used by code
run through SPI.
The initial use of all this is going to be transition tables in
AFTER triggers, but that will be added to each PL as a separate
commit.
An incidental effect of this patch is to produce a more informative
error message if an attempt is made to modify the contents of a CTE
from a referencing DML statement. No tests previously covered that
possibility, so one is added.
Kevin Grittner and Thomas Munro
Reviewed by Heikki Linnakangas, David Fetter, and Thomas Munro
with valuable comments and suggestions from many others
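As a preview of that initial use, a statement-level AFTER trigger with
transition tables, as it later landed (trigger function hypothetical):

    CREATE TRIGGER items_audit
        AFTER UPDATE ON items
        REFERENCING OLD TABLE AS old_rows NEW TABLE AS new_rows
        FOR EACH STATEMENT
        EXECUTE PROCEDURE items_audit_fn();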
On EXEC_BACKEND builds, this can fail if ASLR is in use.
Backpatch to 9.5. On master, remove the bgw_main field completely,
since there is no situation in which it is safe for an
EXEC_BACKEND build. On 9.6 and 9.5, leave the field intact to avoid
breaking things for third-party code that doesn't care about working
under EXEC_BACKEND. Prior to 9.5, there are no in-core bgworker
entrypoints.
Petr Jelinek, reviewed by me.
Discussion: http://postgr.es/m/09d8ad33-4287-a09b-a77f-77f8761adb5e@2ndquadrant.com
Commit e306df7f9 added a test case that depends on "the" being a
stop word, which it is not in non-English locales. Since the
point of the test is to check stopword behavior, fix by forcibly
selecting the 'english' configuration.
Per buildfarm.
This reverts commit 8355a011a0, which
turns out to have been a misguided effort. We can't really support
this in a partitioning hierarchy after all for exactly the reasons
stated in the documentation removed by that commit. It's still
possible to use ON CONFLICT .. DO NOTHING (or for that matter ON
CONFLICT .. DO UPDATE) on individual partitions if desired, but
to allow this on a partitioned table implies that we have some
way of evaluating uniqueness across the whole partitioning
hierarchy, which is false.
Shinoda Noriyoshi noticed that the old code was crashing (which we
could fix, though not in a nice way) and Amit Langote realized
that this was indicative of a fundamental problem with the commit
being reverted here.
Discussion: http://postgr.es/m/ff3dc21d-7204-c09c-50ac-cf11a8c45c81@lab.ntt.co.jp
This patch adds nestable conditional blocks to psql. The control
structure feature per se is complete, but the boolean expressions
understood by \if and \elif are pretty primitive; basically, after
variable substitution and backtick expansion, the result has to be
"true" or "false" or one of the other standard spellings of a boolean
value. But that's enough for many purposes, since you can always
do the heavy lifting on the server side; and we can extend it later.
Along the way, pay down some of the technical debt that had built up
around psql/command.c:
* Refactor exec_command() into a function per command, instead of
being a 1500-line monstrosity. This makes the file noticeably longer
because of repetitive function header/trailer overhead, but it seems
much more readable.
* Teach psql_get_variable() and psqlscanslash.l to suppress variable
substitution and backtick expansion on the basis of the conditional
stack state, thereby allowing removal of the OT_NO_EVAL kluge.
* Fix the no-doubt-once-expedient hack of sometimes silently substituting
mainloop.c's previous_buf for query_buf when calling HandleSlashCmds.
(It's a bit remarkable that commands like \r worked at all with that.)
Recall of a previous query is now done explicitly in the slash commands
where that should happen.
Corey Huinker, reviewed by Fabien Coelho, further hacking by me
Discussion: https://postgr.es/m/CADkLM=c94OSRTnat=LX0ivNq4pxDNeoomFfYvBKM5N_xfmLtAA@mail.gmail.com
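A small sketch of the feature:

    SELECT current_setting('server_version_num')::int >= 100000 AS is_v10 \gset
    \if :is_v10
        \echo 'v10 or later'
    \else
        \echo 'older release'
    \endif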
The V0 convention is failure prone because we've so far assumed that a
function is V0 if PG_FUNCTION_INFO_V1 is missing, leading to crashes
if a function was coded against the V1 interface. V0 doesn't allow
proper NULL, SRF and toast handling. V0 doesn't offer features that
V1 doesn't.
Thus remove V0 support and obsolete fmgr README contents relating to
it.
Author: Andres Freund, with contributions by Peter Eisentraut & Craig Ringer
Reviewed-By: Peter Eisentraut, Craig Ringer
Discussion: https://postgr.es/m/20161208213441.k3mbno4twhg2qf7g@alap3.anarazel.de
Reduce noise from TAP tests by changing 'diag' to 'note', so output only
goes to the test's log file not stdout, unless in verbose mode. This
also removes the junk on screen when running the TAP tests in parallel.
Author: Craig Ringer <craig@2ndquadrant.com>
Formerly, a Var referencing an already-dropped column was allowed and would
always produce a NULL value. However, that behavior was implemented in
slot_getattr which the new expression code doesn't use; thus there is now a
risk of returning theoretically-deleted data. We had regression test cases
that purported to exercise this, but they failed to expose any problem,
apparently because plpgsql filters the dropped column and produces an
output tuple that has a NULL there already.
Ideally the DROP or ALTER attempt in these test cases would get rejected
due to dependency checks; but until that happens, let's modify the behavior
so that we fail the query during executor start. This was already true for
the related case of a column having changed type underneath us, and there's
no obvious reason why we need to be laxer for dropped columns.
In passing, adjust the error messages in CheckVarSlotCompatibility to
include the composite type name. In the cases shown in the regression
tests this is always just "record", but it should be more useful in
actual stale-plan cases, where the slot tupdesc would be a table's
tupdesc directly.
Discussion: https://postgr.es/m/16803.1490723570@sss.pgh.pa.us
Automatically drop all logical replication slots associated with a
database when the database is dropped. Previously we threw an ERROR
if a slot existed. Now we throw ERROR only if a slot is active in
the database being dropped.
Craig Ringer
As suggested by Tom Lane, avoid printing specific estimated cost values,
because they vary across architectures; instead, verify plan shapes (in
this case, HashAggregate vs. GroupAggregate), as we do in other planner
tests.
We can now remove expected/stats_ext_1.out.
Author: Tomas Vondra
The previous default 'pg_log' might have indicated by its "pg_" prefix
that it is an internal system directory. The new default is more in
line with the typical naming of directories with user-facing log files.
Together with the renaming of pg_clog and pg_xlog, this should clear up
that difference.
Author: Andreas Karlsson <andreas@proxel.se>
Fix all perlcritic warnings of severity level 5, except in
src/backend/utils/Gen_dummy_probes.pl, which is automatically generated.
Reviewed-by: Dagfinn Ilmari Mannsåker <ilmari@ilmari.org>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
There was a thinko whereby we tested the wrong tuple after fetching it
from cache; avoid that by using generate_relation_name instead, which is
simpler. Also, the statistics name was not qualified, so add that. (It
could be argued that qualification should be conditional on the schema
not being on search path. We can add that later, but at least this form
is correct.)
Author: David Rowley, Álvaro Herrera
Discussion: https://postgr.es/m/CAKJS1f8RjLeVZJ2+93pdQGuZJeBF-ifsHaFMR-q-6-Z0qxA8cA@mail.gmail.com
This extends the Aggregate node with two new features: HashAggregate
can now run multiple hashtables concurrently, and a new strategy
MixedAggregate populates hashtables while doing sorted grouping.
The planner will now attempt to save as many sorts as possible when
planning grouping sets queries, while not exceeding work_mem for the
estimated combined sizes of all hashtables used. No SQL-level changes
are required. There should be no user-visible impact other than the
new EXPLAIN output and possible changes to result ordering when ORDER
BY was not used (which affected a few regression tests). The
enable_hashagg option is respected.
Author: Andrew Gierth
Reviewers: Mark Dilger, Andres Freund
Discussion: https://postgr.es/m/87vatszyhj.fsf@news-spur.riddles.org.uk
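A sketch of a query that can now use the new machinery (table
hypothetical):

    EXPLAIN (COSTS OFF)
    SELECT a, b, count(*)
    FROM t1
    GROUP BY GROUPING SETS ((a), (b));
    -- may now show MixedAggregate, or HashAggregate over several sets at once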
Previously, auxiliary processes and background workers not connected
to a database (such as the logical replication launcher) weren't
shown. Include them, so that we can see the associated wait state
information. Add a new column to identify the process type, so that
people can filter them out easily using SQL if they wish.
Before this patch was written, there was discussion about whether we
should expose this information in a separate view, so as to avoid
contaminating pg_stat_activity with things people might not want to
see. But putting everything in pg_stat_activity was a more popular
choice, so that's what the patch does.
Kuntal Ghosh, reviewed by Amit Langote and Michael Paquier. Some
revisions and bug fixes by me.
Discussion: http://postgr.es/m/CA+TgmoYES5nhkEGw9nZXU8_FhA8XEm8NTm3-SO+3ML1B81Hkww@mail.gmail.com
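For example, assuming the new column landed as backend_type (as in the
released version):

    SELECT pid, backend_type, wait_event_type, wait_event
    FROM pg_stat_activity;
    -- filter with e.g. WHERE backend_type = 'client backend' to hide the rest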
This replaces the old, recursive tree-walk based evaluation, with
non-recursive, opcode dispatch based, expression evaluation.
Projection is now implemented as part of expression evaluation.
This both leads to significant performance improvements, and makes
future just-in-time compilation of expressions easier.
The speed gains primarily come from:
- non-recursive implementation reduces stack usage / overhead
- simple sub-expressions are implemented with a single jump, without
function calls
- sharing some state between different sub-expressions
- reduced amount of indirect/hard to predict memory accesses by laying
out operation metadata sequentially; including the avoidance of
nearly all of the previously used linked lists
- more code has been moved to expression initialization, avoiding
constant re-checks at evaluation time
Future just-in-time compilation (JIT) has become easier, as
demonstrated by released patches intended to be merged in a later
release, for primarily two reasons: Firstly, due to a stricter split
between expression initialization and evaluation, less code has to be
handled by the JIT. Secondly, due to the non-recursive nature of the
generated "instructions", less performance-critical code-paths can
easily be shared between interpreted and compiled evaluation.
The new framework allows for significant future optimizations. E.g.:
- basic infrastructure to later reduce the per-executor-startup
overhead of expression evaluation, by caching state in prepared
statements. That'd be helpful in OLTPish scenarios where
initialization overhead is measurable.
- optimizing the generated "code". A number of proposals for potential
work have already been made.
- optimizing the interpreter. Similarly a number of proposals have
been made here too.
The move of logic into the expression initialization step leads to some
backward-incompatible changes:
- Function permission checks are now done during expression
initialization, whereas previously they were done during
execution. In edge cases this can lead to errors being raised that
previously wouldn't have been; e.g., coercing a NULL array to a
different array type previously didn't perform any checks.
- The set of domain constraints to be checked is now evaluated once
during expression initialization; previously it was rebuilt every
time a domain check was evaluated. For normal queries this doesn't
change much, but e.g. for plpgsql functions, which cache ExprStates,
the old set could stick around longer. The behavior around this
might still change.
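A sketch of the first incompatibility (object names invented): the
EXECUTE permission check now fires at expression initialization, even
when the call could never be reached at run time.

    CREATE FUNCTION f() RETURNS int LANGUAGE sql AS 'SELECT 1';
    CREATE ROLE limited;                    -- illustrative role
    REVOKE EXECUTE ON FUNCTION f() FROM PUBLIC;

    SET ROLE limited;
    -- random() < 0 is never true, so f() is never executed; the
    -- permission error is nonetheless raised now, at initialization.
    SELECT CASE WHEN random() < 0 THEN f() ELSE 0 END;
    RESET ROLE;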
Author: Andres Freund, with significant changes by Tom Lane,
changes by Heikki Linnakangas
Reviewed-By: Tom Lane, Heikki Linnakangas
Discussion: https://postgr.es/m/20161206034955.bh33paeralxbtluv@alap3.anarazel.de
As explained at the head of parallel_schedule, we place an arbitrary limit
of 20 test cases per parallel group. Commit c7a9fa399 overlooked this.
Least messy solution seems to be to move the "comments" test to the next
group, since it doesn't really belong in a group of datatype tests anyway.
Commit e3920ac82 created "regress_subscription_user2" in subscription.sql,
but forgot to drop it, causing the regression tests to fail if run twice
without re-initdb'ing.
If the upstream walsender is using a physical replication slot, store the
catalog_xmin in the slot's catalog_xmin field. If the upstream doesn't use a
slot and has only a PGPROC entry, behaviour doesn't change, as we store the
combined xmin and catalog_xmin in the PGPROC entry.
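The effect is visible on the upstream side, for example:

    -- The physical slot used by the walsender now carries the
    -- downstream's catalog_xmin separately from its xmin:
    SELECT slot_name, xmin, catalog_xmin FROM pg_replication_slots;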
Author: Craig Ringer
These tests require the test database to be in UTF8 encoding. Until
there is a better solution, take them out of the default test set and
treat them like the existing collate.linux.utf8 test, meaning they have to
be selected manually.
Because tuple packing differs (owing to the MAXALIGN difference), the
expected cost of a seqscan differs as well.
The commonly used trick of eliding costs in EXPLAIN output (COSTS OFF)
would make the tests completely pointless. Instead, add an alternative
expected file.
Add support for explicitly declared statistic objects (CREATE
STATISTICS), allowing collection of statistics on more complex
combinations than individual table columns. Companion commands DROP
STATISTICS and ALTER STATISTICS ... OWNER TO / SET SCHEMA / RENAME are
added too. All this DDL has been designed so that more statistic types
can be added later on, such as multivariate most-common-values and
multivariate histograms between columns of a single table, leaving room
for permitting columns on multiple tables, too, as well as expressions.
This commit only adds support for collection of n-distinct coefficient
on user-specified sets of columns in a single table. This is useful to
estimate number of distinct groups in GROUP BY and DISTINCT clauses;
estimation errors there can cause over-allocation of memory in hashed
aggregates, for instance, so it's a worthwhile problem to solve. A new
special pseudo-type pg_ndistinct is used.
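A minimal sketch of the new DDL (names invented; the syntax shown is
that of the released version and may differ in detail from this
commit):

    -- zip determines city, so multiplying per-column ndistinct
    -- estimates would badly misestimate the number of groups:
    CREATE TABLE addresses (city text, zip text);

    CREATE STATISTICS addresses_nd (ndistinct) ON city, zip
        FROM addresses;
    ANALYZE addresses;

    -- The n-distinct coefficient now improves the group estimate for:
    SELECT city, zip, count(*) FROM addresses GROUP BY city, zip;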
(num-distinct estimation was deemed sufficiently useful by itself that
this is worthwhile even if no further statistic types are added
immediately; so much so that another version of essentially the same
functionality was submitted by Kyotaro Horiguchi:
https://postgr.es/m/20150828.173334.114731693.horiguchi.kyotaro@lab.ntt.co.jp
though this commit does not use that code.)
Author: Tomas Vondra. Some code rework by Álvaro.
Reviewed-by: Dean Rasheed, David Rowley, Kyotaro Horiguchi, Jeff Janes,
Ideriha Takeshi
Discussion: https://postgr.es/m/543AFA15.4080608@fuzzy.cz
Discussion: https://postgr.es/m/20170320190220.ixlaueanxegqd5gr@alvherre.pgsql
If your connection to the database server is lost while a COMMIT is
in progress, it may be difficult to figure out whether the COMMIT was
successful or not. This function will tell you, provided that you
don't wait too long to ask. It may be useful in other situations,
too.
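A sketch of the intended usage, assuming the function added here is
txid_status(), as in the released version:

    BEGIN;
    SELECT txid_current();      -- note the xid, e.g. 12345
    -- ... work happens, then the connection drops during COMMIT ...

    -- From a new connection (xid value illustrative):
    SELECT txid_status(12345);  -- 'committed', 'aborted' or 'in progress'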
Craig Ringer, reviewed by Simon Riggs and by me
Discussion: http://postgr.es/m/CAMsr+YHQiWNEi0daCTboS40T+V5s_+dst3PYv_8v2wNVH+Xx4g@mail.gmail.com
If a user has a SCRAM verifier in pg_authid.rolpassword, there's no reason
we cannot attempt to perform SCRAM authentication instead of MD5. The worst
that can happen is that the client doesn't support SCRAM, and the
authentication will fail. But previously, it would fail for sure, because
we would not even try. SCRAM is strictly more secure than MD5, so there's
no harm in trying it. This allows for a more graceful transition from MD5
passwords to SCRAM, as user passwords can be changed to SCRAM verifiers
incrementally, without changing pg_hba.conf.
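A sketch of the incremental migration this enables (role name invented;
the GUC value shown is the released spelling):

    -- pg_hba.conf keeps its existing line, e.g.:
    --   host  all  all  0.0.0.0/0  md5
    SET password_encryption = 'scram-sha-256';
    ALTER USER alice PASSWORD 'new-secret';  -- stores a SCRAM verifier
    -- alice is now offered SCRAM; users still on MD5 verifiers keep
    -- authenticating with MD5, with no pg_hba.conf change.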
Refactor the code in auth.c to support that better. Notably, we now have to
look up the user's pg_authid entry before sending the password challenge,
even when performing MD5 authentication. Also simplify the concept of a
"doomed" authentication. Previously, if a user had a password, but it had
expired, we still performed SCRAM authentication (but always returned error
at the end) using the salt and iteration count from the expired password.
Now we construct a fake salt, like we do when the user doesn't have a
password or doesn't exist at all. That simplifies get_role_password(), and
we no longer need to distinguish the "user has expired password" and
"user does not exist" cases in auth.c.
On second thoughts, also rename uaSASL to uaSCRAM. It refers to the
mechanism specified in pg_hba.conf, and while we use SASL for SCRAM
authentication at the protocol level, the mechanism should be called SCRAM,
not SASL. As a comparison, we have uaLDAP, even though it looks like the
plain 'password' authentication at the protocol level.
Discussion: https://www.postgresql.org/message-id/6425.1489506016@sss.pgh.pa.us
Reviewed-by: Michael Paquier
Add a column collprovider to pg_collation that determines which library
provides the collation data. The existing choices are default and libc,
and this adds an icu choice, which uses the ICU4C library.
The pg_locale_t type is changed to a union that contains the
provider-specific locale handles. Users of locale information are
changed to look into that struct for the appropriate handle to use.
Also add a collversion column that records the version of the collation
when it is created, and check at run time whether it is still the same.
This detects potentially incompatible library upgrades that can corrupt
indexes and other structures. This is currently only supported by
ICU-provided collations.
initdb initializes the default collation set as before from the `locale
-a` output, but also adds all available ICU locales with "-x-icu"
appended.
Currently, ICU-provided collations can only be explicitly named
collations. The global database locales are still always libc-provided.
ICU support is enabled by configure --with-icu.
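For example (table and column invented; 'i' is the collprovider value
used for ICU):

    -- List a few of the ICU collations added by initdb:
    SELECT collname FROM pg_collation
    WHERE collprovider = 'i' LIMIT 5;

    -- Use one explicitly; database-wide locales stay libc-provided:
    SELECT name FROM cities ORDER BY name COLLATE "de-x-icu";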
Reviewed-by: Thomas Munro <thomas.munro@enterprisedb.com>
Reviewed-by: Andreas Karlsson <andreas@proxel.se>
Adds write_lag, flush_lag and replay_lag columns to pg_stat_replication.
Implements a lag tracker module that reports the lag times based upon
measurements of the time taken for recent WAL to be written, flushed and
replayed and for the sender to hear about it. These times
represent the commit lag that was (or would have been) introduced by each
synchronous commit level, if the remote server was configured as a
synchronous standby. For an asynchronous standby, the replay_lag column
approximates the delay before recent transactions became visible to queries.
If the standby server has entirely caught up with the sending server and
there is no more WAL activity, the most recently measured lag times will
continue to be displayed for a short time and then show NULL.
Physical replication lag tracking is automatic. Logical replication tracking
is possible but is the responsibility of the logical decoding plugin.
Tracking is a private module operating within each walsender individually,
with values reported to shared memory. The module is not used outside of
the walsender.
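The new columns are queried like any others in the view:

    -- Lag per standby, as it would appear for each synchronous
    -- commit level:
    SELECT application_name, write_lag, flush_lag, replay_lag
    FROM pg_stat_replication;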
Design and code are now good enough to commit - kudos to the author.
In many ways a difficult topic, with important and subtle behaviour, so
this should be expected to generate discussion and multiple open items:
Test now!
Author: Thomas Munro, following designs by Fujii Masao and Simon Riggs
Review: Simon Riggs, Ian Barwick and Craig Ringer
Add functionality for a new subscription to copy the initial data in the
tables and then sync with the ongoing apply process.
For the copying, add a new internal COPY option to have the COPY source
data provided by a callback function. The initial data copy works on
the subscriber by receiving COPY data from the publisher and then
providing it locally into a COPY that writes to the destination table.
A WAL receiver can now execute full SQL commands. This is used here to
obtain information about tables and publications.
Several new options were added to CREATE and ALTER SUBSCRIPTION to
control whether and when initial table syncing happens.
Change pg_dump option --no-create-subscription-slots to
--no-subscription-connect and use the new CREATE SUBSCRIPTION
... NOCONNECT option for that.
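A sketch of the resulting behavior (names and connection string
invented; the option spelling follows the commit text and later changed
in the released syntax):

    -- By default the subscription connects, creates its slot, copies
    -- the initial table data, and then applies ongoing changes:
    CREATE SUBSCRIPTION mysub
        CONNECTION 'host=publisher dbname=postgres'
        PUBLICATION mypub;

    -- The NOCONNECT form skips all of that; pg_dump now emits it:
    CREATE SUBSCRIPTION mysub_dump
        CONNECTION 'host=publisher dbname=postgres'
        PUBLICATION mypub NOCONNECT;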
Author: Petr Jelinek <petr.jelinek@2ndquadrant.com>
Tested-by: Erik Rijkers <er@xs4all.nl>
Previously, the new owner had to be a superuser. The new rules are more
refined, similar to those for other objects.
Reviewed-by: Petr Jelinek <petr.jelinek@2ndquadrant.com>
Uses a page-based mechanism to ensure we're using the correct timeline.
Tests are included to exercise the functionality using a cold disk-level copy
of the master that's started up as a replica with slots intact, but the
intended use of the functionality is with later features.
Craig Ringer, reviewed by Simon Riggs and Andres Freund
Allows testing of logical decoding using SQL interface and/or pg_recvlogical
Most logical decoding tests are in contrib/test_decoding. This module
is for work that doesn't fit well there, like where server restarts
are required.
Craig Ringer
Partitioned tables do not contain any data; only their unpartitioned
descendants need to be scanned. However, the partitioned tables still
need to be locked, even though they're not scanned. To make that
work, Append and MergeAppend relations now need to carry a list of
(unscanned) partitioned relations that must be locked, and InitPlan
must lock all partitioned result relations.
Aside from the obvious advantage of avoiding some work at execution
time, this has two other advantages. First, it may improve the
planner's decision-making in some cases since the empty relation
might throw things off. Second, it paves the way to getting rid of
the storage for partitioned tables altogether.
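An illustrative sketch (names invented): the parent no longer appears
as a scanned relation in the plan; only its leaf partitions do.

    CREATE TABLE measurements (logdate date)
        PARTITION BY RANGE (logdate);
    CREATE TABLE measurements_2017 PARTITION OF measurements
        FOR VALUES FROM ('2017-01-01') TO ('2018-01-01');

    EXPLAIN (COSTS OFF) SELECT * FROM measurements;
    -- Append
    --   ->  Seq Scan on measurements_2017
    -- The parent 'measurements' is locked but never scanned.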
Amit Langote, reviewed by me.
Discussion: http://postgr.es/m/6837c359-45c4-8044-34d1-736756335a15@lab.ntt.co.jp
of SP-GiST.
The bug has existed since the initial commit of the box opclass for
SP-GiST, so backpatch to 9.6.
Author: Nikita Glukhov, with minor editorialization of tests by me
Reviewed-by: Kyotaro Horiguchi, Anastasia Lubennikova
https://commitfest.postgresql.org/13/981/