We do this via a subclass for any branch older than the minimum known
to be compatible with the main package (currently release 12).
This should be useful for constructing cross-version tests.
In theory this could be extended back any number of versions, with
varying degrees of compatibility.
Reviewed by Michael Paquier and Dagfinn Ilmari Mannsåker
Discussion: https://postgr.es/m/a3efd19a-d5c9-fdf2-6094-4cde056a2708@dunslane.net
COPY FROM supports the HEADER option to silently discard the header
line from a CSV or text file. It is possible to load by mistake a
file that matches the expected format, for example, if two text
columns have been swapped, resulting in garbage in the database.
This adds a new option value HEADER MATCH that checks the column names
in the header line against the actual column names and errors out if
they do not match.
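For illustration, a hedged example of the new option (table and file names
are hypothetical), assuming the CSV header line carries the table's column names:
COPY my_table FROM '/path/to/data.csv' WITH (FORMAT csv, HEADER MATCH);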
Author: Rémi Lapeyre <remi.lapeyre@lenstra.fr>
Reviewed-by: Daniel Verite <daniel@manitou-mail.org>
Reviewed-by: Peter Eisentraut <peter.eisentraut@enterprisedb.com>
Discussion: https://www.postgresql.org/message-id/flat/CAF1-J-0PtCWMeLtswwGV2M70U26n4g33gpe1rcKQqe6wVQDrFA@mail.gmail.com
The current logical replication behavior is to send every transaction to
the subscriber even if the transaction is empty. This can happen because the
transaction doesn't contain changes from the selected publications or because
all the changes got filtered out. It is a waste of CPU cycles and network
bandwidth to build/transmit these empty transactions.
This patch addresses the above problem by postponing the BEGIN message
until the first change is sent. While processing a COMMIT message, if
there was no other change for that transaction, do not send the COMMIT
message. This allows us to skip sending BEGIN/COMMIT messages for empty
transactions.
When skipping empty transactions in synchronous replication mode, we send
a keepalive message to avoid delaying such transactions.
Author: Ajin Cherian, Hou Zhijie, Euler Taveira
Reviewed-by: Peter Smith, Takamichi Osumi, Shi Yu, Masahiko Sawada, Greg Nancarrow, Vignesh C, Amit Kapila
Discussion: https://postgr.es/m/CAMkU=1yohp9-dv48FLoSPrMqYEyyS5ZWkaZGD41RJr10xiNo_Q@mail.gmail.com
Currently, some TAP tests that directly call the underlying function
PostgreSQL::Test::Utils::run_log() care about the return value, but
none of those that call it via PostgreSQL::Test::Cluster::run_log() care.
However, I'd like to add a test that will care, so adjust this function
to return whatever it gets back from the underlying function, just as
we do for a number of other functions in this module.
Discussion: http://postgr.es/m/CA+Tgmobj6u-nWF-j=FemygUhobhryLxf9h-wJN7W-2rSsseHNA@mail.gmail.com
This introduces the SQL/JSON functions for querying JSON data using
jsonpath expressions. The functions are:
JSON_EXISTS()
JSON_QUERY()
JSON_VALUE()
All of these functions only operate on jsonb. The workaround for now is
to cast the argument to jsonb.
JSON_EXISTS() tests if the jsonpath expression applied to the jsonb
value yields any values. JSON_VALUE() must return a single value, and an
error occurs if it tries to return multiple values. JSON_QUERY() must
return a json object or array, and there are various WRAPPER options for
handling scalar or multi-value results. Both these functions have
options for handling EMPTY and ERROR conditions.
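For illustration, a few hedged invocation sketches (values chosen purely
for the example):
SELECT JSON_EXISTS(jsonb '{"a": [1, 2, 3]}', '$.a[*] ? (@ > 2)');
SELECT JSON_VALUE(jsonb '{"a": 1}', '$.a');
SELECT JSON_QUERY(jsonb '{"a": [1, 2, 3]}', '$.a' WITH WRAPPER);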
Nikita Glukhov
Reviewers have included (in no particular order) Andres Freund, Alexander
Korotkov, Pavel Stehule, Andrew Alsup, Erik Rijkers, Zihong Yu,
Himanshu Upadhyaya, Daniel Gustafsson, Justin Pryzby.
Discussion: https://postgr.es/m/cd0bb935-0158-78a7-08b5-904886deac4b@postgrespro.ru
This view is similar to the pg_hba_file_rules view, except that it is
associated with the parsing of pg_ident.conf. Similarly to its cousin,
this view is useful to check via SQL if changes planned in pg_ident.conf
would work upon reload or restart, or to diagnose a previous failure.
Bumps catalog version.
Author: Julien Rouhaud
Reviewed-by: Aleksander Alekseev, Michael Paquier
Discussion: https://postgr.es/m/20220223045959.35ipdsvbxcstrhya@jrouhaud
The regression tests include a query to check the execution path of
pg_hba_file_rules, but it has never checked that a given cluster has
correct contents in pg_hba.conf. This commit extends the query of
pg_hba_file_rules to report any errors if anything bad is found. For
EXEC_BACKEND builds, any connection attempt would fail when loading
pg_hba.conf if any incorrect content is found when parsed, so a failure
would be detected before even running this query. However, this can
become handy for clusters where pg_hba.conf can be reloaded, where new
connection attempts are not subject to a fresh loading of pg_hba.conf.
Author: Julien Rouhaud, based on an idea from me
Discussion: https://postgr.es/m/YkFhpydhyeNNo3Xl@paquier.xyz
This patch introduces the SQL standard IS JSON predicate. It operates
on text and bytea values representing JSON as well as on the json and
jsonb types. Each test has an IS and IS NOT variant. The tests are:
IS JSON [VALUE]
IS JSON ARRAY
IS JSON OBJECT
IS JSON SCALAR
IS JSON WITH | WITHOUT UNIQUE KEYS
These are mostly self-explanatory, but note that IS JSON WITHOUT UNIQUE
KEYS is true whenever IS JSON is true, and IS JSON WITH UNIQUE KEYS is
true whenever IS JSON is true except when IS JSON OBJECT is true and there
are duplicate keys (which is never the case when applied to jsonb values).
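For example (values are illustrative only):
SELECT '{"a": 1, "a": 2}' IS JSON OBJECT;            -- true
SELECT '{"a": 1, "a": 2}' IS JSON WITH UNIQUE KEYS;  -- false, duplicate keys
SELECT '[1, 2, 3]' IS JSON ARRAY;                    -- true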
Nikita Glukhov
Reviewers have included (in no particular order) Andres Freund, Alexander
Korotkov, Pavel Stehule, Andrew Alsup, Erik Rijkers, Zihong Yu,
Himanshu Upadhyaya, Daniel Gustafsson, Justin Pryzby.
Discussion: https://postgr.es/m/cd0bb935-0158-78a7-08b5-904886deac4b@postgrespro.ru
Generally, if a role is granted membership in another role with NOINHERIT,
it must use SET ROLE to access the privileges of that role; however, with
predefined roles the membership and privilege are conflated. Fix that
by replacing is_member_of_role with has_privs_for_role for predefined
roles. The patch does not remove is_member_of_role from acl.h, but it does
add a warning not to use that function for privilege checking. Not
backpatched based on hackers list discussion.
Author: Joshua Brindle
Reviewed-by: Stephen Frost, Nathan Bossart, Joe Conway
Discussion: https://postgr.es/m/flat/CAGB+Vh4Zv_TvKt2tv3QNS6tUM_F_9icmuj0zjywwcgVi4PAhFA@mail.gmail.com
Commit f9fd176461 effectively gave
every role ADMIN OPTION on itself. However, this appears to be
something that happened accidentally as a result of refactoring
work rather than an intentional decision. Almost a decade later,
it was discovered that this was a security vulnerability. As a
result, commit fea164a72a restricted
this implicit ADMIN OPTION privilege to be exercisable only when
the role being administered is the same as the session user and
when no security-restricted operation is in progress. That
commit also documented the existence of this implicit privilege
for what seems to be the first time.
The effect of the privilege is to allow a login role to grant
the privileges of that role, and optionally ADMIN OPTION on it,
to some other role. That's an unusual thing to do, because generally
membership is granted in roles used as groups, rather than roles
used as users. Therefore, it does not seem likely that removing
the privilege will break things for many PostgreSQL users.
However, it will make it easier to reason about the permissions
system. This is the only case where a user who has not been given any
special permission (superuser, or ADMIN OPTION on some role) can
modify role membership, so removing it makes things more consistent.
For example, if a superuser sets up role A and B and grants A to B
but no other privileges to anyone, she can now be sure that no one
else will be able to revoke that grant. Without this change, that
would have been true only if A was a non-login role.
Patch by me. Reviewed by Tom Lane and Stephen Frost.
Discussion: http://postgr.es/m/CA+Tgmoawdt03kbA+dNyBcNWJpRxu0f4X=69Y3+DkXXZqmwMDLg@mail.gmail.com
MERGE performs actions that modify rows in the target table using a
source table or query. MERGE provides a single SQL statement that can
conditionally INSERT/UPDATE/DELETE rows -- a task that would otherwise
require multiple PL statements. For example,
MERGE INTO target AS t
USING source AS s
ON t.tid = s.sid
WHEN MATCHED AND t.balance > s.delta THEN
UPDATE SET balance = t.balance - s.delta
WHEN MATCHED THEN
DELETE
WHEN NOT MATCHED AND s.delta > 0 THEN
INSERT VALUES (s.sid, s.delta)
WHEN NOT MATCHED THEN
DO NOTHING;
MERGE works with regular tables, partitioned tables and inheritance
hierarchies, including column and row security enforcement, as well as
support for row and statement triggers and transition tables therein.
MERGE is optimized for OLTP and is parameterizable, though also useful
for large scale ETL/ELT. MERGE is not intended to be used in preference
to existing single SQL commands for INSERT, UPDATE or DELETE since there
is some overhead. MERGE can be used from PL/pgSQL.
MERGE does not support targeting updatable views or foreign tables, and
RETURNING clauses are not allowed either. These limitations are likely
fixable with sufficient effort. Rewrite rules are also not supported,
but it's not clear that we'd want to support them.
Author: Pavan Deolasee <pavan.deolasee@gmail.com>
Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
Author: Amit Langote <amitlangote09@gmail.com>
Author: Simon Riggs <simon.riggs@enterprisedb.com>
Reviewed-by: Peter Eisentraut <peter.eisentraut@enterprisedb.com>
Reviewed-by: Andres Freund <andres@anarazel.de> (earlier versions)
Reviewed-by: Peter Geoghegan <pg@bowt.ie> (earlier versions)
Reviewed-by: Robert Haas <robertmhaas@gmail.com> (earlier versions)
Reviewed-by: Japin Li <japinli@hotmail.com>
Reviewed-by: Justin Pryzby <pryzby@telsasoft.com>
Reviewed-by: Tomas Vondra <tomas.vondra@enterprisedb.com>
Reviewed-by: Zhihong Yu <zyu@yugabyte.com>
Discussion: https://postgr.es/m/CANP8+jKitBSrB7oTgT9CY2i1ObfOt36z0XMraQc+Xrz8QB0nXA@mail.gmail.com
Discussion: https://postgr.es/m/CAH2-WzkJdBuxj9PO=2QaO9-3h3xGbQPZ34kJH=HukRekwM-GZg@mail.gmail.com
Discussion: https://postgr.es/m/20201231134736.GA25392@alvherre.pgsql
Per ECMAScript standard (ECMA-262, referenced by SQL standard), the
syntax forms
.1
1.
should be allowed for decimal numeric literals, but the existing
implementation rejected them.
Also, by the same standard, reject trailing junk after numeric
literals.
Note that in respects like these, the ECMAScript standard for numeric
literals differs slightly from the JSON standard, which might be the
original cause of this discrepancy.
One behavioral change is that this kind of syntax is now rejected:
1.type()
This needs to be written as
(1).type()
This is correct; normal JavaScript also does not accept this syntax.
We also need to fix up the jsonpath output function for this case. We
put parentheses around numeric items if they are followed by another
path item.
Reviewed-by: Nikita Glukhov <n.gluhov@postgrespro.ru>
Discussion: https://www.postgresql.org/message-id/flat/50a828cc-0a00-7791-7883-2ed06dfb2dbb@enterprisedb.com
So far the first of the retries introduced in f28bf667f6 resolves the
issue. But I (Andres) am still suspicious that the start of the failures might
indicate a problem.
To reduce noise, stop reporting a failure if a retry resolves the problem. To
allow figuring out what causes the slow slot drop, add a few more debug
messages to ReplicationSlotDropPtr.
See also commit afdeff1052, fe0972ee5e and f28bf667f6.
Discussion: https://postgr.es/m/20220327213219.smdvfkq2fl74flow@alap3.anarazel.de
pg_stat_get_replication_slot() accidentally was marked as non-strict, crashing
when called with NULL input. As it's already released, introduce an explicit
NULL check in 14, fix the catalog in HEAD.
Bumps catversion in HEAD.
Discussion: https://postgr.es/m/20220326212432.s5n2maw6kugnpyxw@alap3.anarazel.de
Backpatch: 14-, where replication slot stats were introduced
This patch introduces the SQL/JSON standard constructors for JSON:
JSON()
JSON_ARRAY()
JSON_ARRAYAGG()
JSON_OBJECT()
JSON_OBJECTAGG()
For the most part these functions provide facilities that mimic
existing json/jsonb functions. However, they also offer some useful
additional functionality. In addition to text input, the JSON() function
accepts bytea input, which it will decode and construct a json value from.
The other functions provide useful options for handling duplicate keys
and null values.
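A couple of illustrative sketches (values chosen only for the example):
SELECT JSON_ARRAY(1, 2, 3);
SELECT JSON_OBJECT('a' VALUE 1, 'b' VALUE 2);
SELECT JSON_OBJECT('k' VALUE 1, 'k' VALUE 2 WITH UNIQUE KEYS);  -- error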
This series of patches will be followed by a consolidated documentation
patch.
Nikita Glukhov
Reviewers have included (in no particular order) Andres Freund, Alexander
Korotkov, Pavel Stehule, Andrew Alsup, Erik Rijkers, Zihong Yu,
Himanshu Upadhyaya, Daniel Gustafsson, Justin Pryzby.
Discussion: https://postgr.es/m/cd0bb935-0158-78a7-08b5-904886deac4b@postgrespro.ru
This is a follow-up to commit 7dac61402 which removed a set of unused
modules from the TAP test.
The Config references in the pg_ctl and pg_rewind tests were removed
in commit 1c6d46293. Fcntl ':mode' and File::stat in the pg_ctl test
were added in c37b3d08c, probably as a leftover from an earlier
version of the patch, as the function using these was added to another
module in that commit.
The Config reference in the ldap test was added in ee56c3b21, which in
turn uses $^O instead of interrogating Config.
Author: Dagfinn Ilmari Mannsåker <ilmari@ilmari.org>
Discussion: https://postgr.es/m/87lewyqk45.fsf@wibble.ilmari.org
One of the TAP tests added in 923def9a53 did not wait after creating a
subscription, and wait_for_catchup is not sufficient for this. So if the
tablesync workers happen not to complete quickly enough, the test won't
see the expected results.
This probably explains intermittent failures on a couple of buildfarm
animals (komodoensis, petalura and snapper).
Reported-by: Tom Lane
Discussion: https://postgr.es/m/170549.1648330634@sss.pgh.pa.us
The SSL TAP tests were tightly coupled to the OpenSSL implementation,
making it hard to add support for additional SSL/TLS backends. This
refactoring makes the tests avoid depending on specific implementations.
The SSLServer Perl module is renamed SSL::Server, which in turn uses
SSL::Backend::X where X is the backend pointed to by with_ssl. Each
backend will implement its own module responsible for setting up keys,
certs and to resolve sslkey values to their implementation specific
value (file paths or vault nicknames etc). Further, switch_server_cert
now takes a set of named parameters rather than a fixed set which used
defaults. The modules also come with POD documentation.
There are a few test cases which still use OpenSSL specifics, but it's
not entirely clear how to abstract those until we have another library
implemented.
Original patch by me, with lots of rework by Andrew Dunstan to turn it
into better Perl.
Discussion: https://postgr.es/m/AA18A362-CA65-4F9A-AF61-76AE318FE97C@yesql.se
This allows specifying an optional column list when adding a table to
logical replication. The column list may be specified after the table
name, enclosed in parentheses. Columns not included in this list are not
sent to the subscriber, allowing the schema on the subscriber to be a
subset of the publisher schema.
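For example (table and column names are hypothetical), only the listed
columns are replicated:
CREATE PUBLICATION users_pub FOR TABLE users (user_id, user_name);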
For UPDATE/DELETE publications, the column list needs to cover all
REPLICA IDENTITY columns. For INSERT publications, the column list is
arbitrary and may omit some REPLICA IDENTITY columns. Furthermore, if
the table uses REPLICA IDENTITY FULL, a column list is not allowed.
The column list can contain only simple column references. Complex
expressions, function calls etc. are not allowed. This restriction could
be relaxed in the future.
During the initial table synchronization, only columns included in the
column list are copied to the subscriber. If the subscription has
several publications, containing the same table with different column
lists, columns specified in any of the lists will be copied.
This means all columns are replicated if the table has no column list
at all (which is treated as a column list containing all columns), or
when one of the publications is defined as FOR ALL TABLES (possibly IN
SCHEMA matching the schema of the table).
For partitioned tables, publish_via_partition_root determines whether
the column list for the root or the leaf relation will be used. If the
parameter is 'false' (the default), the list defined for the leaf
relation is used. Otherwise, the column list for the root partition
will be used.
Psql commands \dRp+ and \d <table-name> now display any column lists.
Author: Tomas Vondra, Alvaro Herrera, Rahila Syed
Reviewed-by: Peter Eisentraut, Alvaro Herrera, Vignesh C, Ibrar Ahmed,
Amit Kapila, Hou zj, Peter Smith, Wang wei, Tang, Shi yu
Discussion: https://postgr.es/m/CAH2L28vddB_NFdRVpuyRBJEBWjz4BSyTB=_ektNRH8NJ1jf95g@mail.gmail.com
The previous method for doing that was to write zeroes into a
predetermined set of page locations. However, there's a roughly
1-in-64K chance that the existing checksum will match by chance,
and yesterday several buildfarm animals started to reproducibly
see that, resulting in test failures because no checksum mismatch
was reported.
Since the checksum includes the page LSN, test success depends on
the length of the installation's WAL history, which is affected by
(at least) the initial catalog contents, the set of locales installed
on the system, and the length of the pathname of the test directory.
Sooner or later we were going to hit a chance match, and today is
that day.
Harden these tests by specifically inverting the checksum field and
leaving all else alone, thereby guaranteeing that the checksum is
incorrect.
In passing, fix places that were using seek() to set up for syswrite(),
a combination that the Perl docs very explicitly warn against. We've
probably escaped problems because no regular buffered I/O is done on
these filehandles; but if it ever breaks, we wouldn't deserve or get
much sympathy.
Although we've only seen problems in HEAD, now that we recognize the
environmental dependencies it seems like it might be just a matter
of time until someone manages to hit this in back-branch testing.
Hence, back-patch to v11 where we started doing this kind of test.
Discussion: https://postgr.es/m/3192026.1648185780@sss.pgh.pa.us
Commit 75b1521dae added support for logical replication of sequences,
including grammar changes, but it did not update preprocess_pubobj_list
accordingly. This can cause segfaults with "continuations", i.e. when
the command specifies a list of objects:
CREATE PUBLICATION p FOR SEQUENCE s1, s2;
Reported by Amit Kapila, patch by me.
Reported-by: Amit Kapila
Discussion: https://postgr.es/m/CAA4eK1JxDNKGBSNTyN-Xj2JRjzFo+ziSqJbjH==vuO0YF_CQrg@mail.gmail.com
Crash recovery on standby may encounter missing directories when
replaying create database WAL records. Prior to this patch, the standby
would fail to recover in such a case. However, the directories could be
legitimately missing. Consider a sequence of WAL records as follows:
CREATE DATABASE
DROP DATABASE
DROP TABLESPACE
If, after replaying the last WAL record and removing the tablespace
directory, the standby crashes and has to replay the create database
record again, the crash recovery must be able to move on.
This patch adds a mechanism similar to invalid-page tracking, to keep a
tally of missing directories during crash recovery. If all the missing
directory references are matched with corresponding drop records at the
end of crash recovery, the standby can safely continue following the
primary.
Backpatch to 13, at least for now. The bug is older, but fixing it in
older branches requires more careful study of the interactions with
commit e6d8069522, which appeared in 13.
A new TAP test file is added to verify the condition. However, because
it depends on commit d6d317dbf6, it can only be added to branch
master. I (Álvaro) manually verified that the code behaves as expected
in branch 14. It's a bit nervous-making to leave the code uncovered by
tests in older branches, but leaving the bug unfixed is even worse.
Also, the main reason this fix took so long is precisely that we
couldn't agree on a good strategy to approach testing for the bug, so
perhaps this is the best we can do.
Diagnosed-by: Paul Guo <paulguo@gmail.com>
Author: Paul Guo <paulguo@gmail.com>
Author: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Author: Asim R Praveen <apraveen@pivotal.io>
Discussion: https://postgr.es/m/CAEET0ZGx9AvioViLf7nbR_8tH9-=27DN5xWJ2P9-ROH16e4JUA@mail.gmail.com
Move DLSUFFIX from makefiles into header files for all platforms.
Move the DLSUFFIX assignment from src/makefiles/ to src/templates/,
have configure read it, and then substitute it into Makefile.global
and pg_config.h. This avoids the need for all makefile rules that
need it to locally set CPPFLAGS. It also resolves an inconsistent
setup between the two Windows build systems.
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://www.postgresql.org/message-id/2f9861fb-8969-9005-7518-b8e60f2bead9@enterprisedb.com
Tom noticed evidence in the buildfarm suggesting the failures might just be
really slow process exits. To investigate further, instead of giving up after
seeing multiple walsender pids once, retry. For now continue to report test
failure if a retry succeeds.
See also commit afdeff1052 and fe0972ee5e.
Per suggestion from Tom Lane.
Discussion: https://postgr.es/m/3042597.1648148740@sss.pgh.pa.us
A TOAST table can normally have only one index, but there are corner
cases where it has more; for example, transiently during REINDEX
CONCURRENTLY. In such a case, the pg_statio_all_tables view produced
multiple rows for the owning table, one per TOAST index. Refactor the
view to avoid that, instead summing the stats across all the indexes,
as we do for regular table indexes.
While this has been wrong for a long time, back-patching seems unwise
due to the difficulty of putting a system view change into back
branches.
Andrei Zubkov, tweaked a bit by me
Discussion: https://postgr.es/m/acefef4189706971fc475f912c1afdab1c48d627.camel@moonset.ru
The Config and Cwd modules were no longer used, but remained imported,
in a number of tests. Remove them to keep the imports limited to the
modules actually used.
Reviewed-by: Andrew Dunstan <andrew@dunslane.net>
Discussion: https://postgr.es/m/A5A074CD-3198-492B-BE5E-7961EFC3733F@yesql.se
This commit adds support for decoding of sequences to the built-in
replication (the infrastructure was added by commit 0da92dc530).
The syntax and behavior mostly mimics handling of tables, i.e. a
publication may be defined as FOR ALL SEQUENCES (replicating all
sequences in a database), FOR ALL SEQUENCES IN SCHEMA (replicating
all sequences in a particular schema) or individual sequences.
To publish sequence modifications, the publication has to include
the 'sequence' action. The protocol is extended with a new message,
describing sequence increments.
A new system view pg_publication_sequences lists all the sequences
added to a publication, both directly and indirectly. Various psql
commands (\d and \dRp) are improved to also display publications
including a given sequence, or sequences included in a publication.
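For illustration of the grammar forms described above (publication and
sequence names are hypothetical):
CREATE PUBLICATION all_seq_pub FOR ALL SEQUENCES;
CREATE PUBLICATION one_seq_pub FOR SEQUENCE my_seq;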
Author: Tomas Vondra, Cary Huang
Reviewed-by: Peter Eisentraut, Amit Kapila, Hannu Krosing, Andres
Freund, Petr Jelinek
Discussion: https://postgr.es/m/d045f3c2-6cfb-06d3-5540-e63c320df8bc@enterprisedb.com
Discussion: https://postgr.es/m/1710ed7e13b.cd7177461430746.3372264562543607781@highgo.ca
Invalidate abortedRecPtr and missingContrecPtr after a missing
continuation record is successfully skipped on a standby. This fixes a
PANIC caused when a recently promoted standby attempts to write an
OVERWRITE_RECORD with an LSN of the previously read aborted record.
Backpatch to 10 (all stable versions).
Author: Sami Imseih <simseih@amazon.com>
Reviewed-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Reviewed-by: Álvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://postgr.es/m/44D259DE-7542-49C4-8A52-2AB01534DCA9@amazon.com
Commit 90efa2f556 caused some issues with EXEC_BACKEND builds and with
force_parallel_mode = regress setups. For the first issue we no longer
test if the module has been preloaded, and in fact we don't preload it,
but simply LOAD it in the test script. For the second issue we suppress
error messages emanating from parallel workers.
Mark Dilger
Discussion: https://postgr.es/m/7f6d54a1-4024-3b6e-e3ec-26cd394aac9e@dunslane.net
This includes tests of both the newly added name type object access
hooks and the older Oid type hooks, and provides a useful example
of how to use the hooks.
Mark Dilger, based on some code from Joshua Brindle.
Discussion: https://postgr.es/m/47F87A0E-C0E5-43A6-89F6-D403F2B45175@enterprisedb.com
A security invoker view checks permissions for accessing its
underlying base relations using the privileges of the user of the
view, rather than the privileges of the view owner. Additionally, if
any of the base relations are tables with RLS enabled, the policies of
the user of the view are applied, rather than those of the view owner.
This allows views to be defined without giving away additional
privileges on the underlying base relations, and matches a similar
feature available in other database systems.
It also allows views to operate more naturally with RLS, without
affecting the assignments of policies to users.
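A minimal sketch (object names are hypothetical):
CREATE VIEW active_customers WITH (security_invoker = true) AS
  SELECT * FROM customers WHERE active;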
Christoph Heiss, with some additional hacking by me. Reviewed by
Laurenz Albe and Wolfgang Walther.
Discussion: https://postgr.es/m/b66dd6d6-ad3e-c6f2-8b90-47be773da240%40cybertec.at
This issue is environment-sensitive, where the SSL tests could fail in
various ways by feeding on defaults provided by sslcert, sslkey,
sslrootkey, sslrootcert, sslcrl and sslcrldir coming from a local setup,
such as ~/.postgresql/ by default. Horiguchi-san has reported two
failures, but more advanced testing from me (aka inclusion of garbage
SSL configuration in ~/.postgresql/ for all the configuration
parameters) has showed dozens of failures that can be triggered in the
whole test suite.
History has shown that we are not good when it comes to addressing such
issues, fixing them locally like in dd87799, and such problems keep
appearing. This commit strengthens the entire test suite to put an end
to this set of problems by embedding invalid default values in all the
connection strings used in the tests. The invalid values are placed at
the start of each connection string, relying on the values that follow
in the connection string to override any invalid value previously set. Note
that two tests related to CRLs are required to fail with certain pre-set
configurations, but we can rely on enforcing an empty value instead
after the invalid set of values.
Reported-by: Kyotaro Horiguchi
Reviewed-by: Andrew Dunstan, Daniel Gustafsson, Kyotaro Horiguchi
Discussion: https://postgr.es/m/20220316.163658.1122740600489097632.horikyota.ntt@gmail.com
backpatch-through: 10
This feature allows skipping the transaction on subscriber nodes.
If an incoming change violates any constraint, logical replication stops
until it's resolved. Currently, users need to either manually resolve the
conflict by updating a subscriber-side database or use the function
pg_replication_origin_advance() to skip the conflicting transaction. This
commit introduces a simpler way to skip the conflicting transactions.
The user can specify the LSN with ALTER SUBSCRIPTION ... SKIP (lsn = XXX),
which allows the apply worker to skip the transaction finished at the
specified LSN. The apply worker skips all data modification changes within
the transaction.
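For example (subscription name and LSN are placeholders):
ALTER SUBSCRIPTION mysub SKIP (lsn = '0/14C0378');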
Author: Masahiko Sawada
Reviewed-by: Takamichi Osumi, Hou Zhijie, Peter Eisentraut, Amit Kapila, Shi Yu, Vignesh C, Greg Nancarrow, Haiying Tang, Euler Taveira
Discussion: https://postgr.es/m/CAD21AoDeScrsHhLyEPYqN3sydg6PxAPVBboK=30xJfUVihNZDA@mail.gmail.com
The planner needs to treat GroupingFunc like Aggref for many purposes,
in particular with respect to processing of the argument expressions,
which are not to be evaluated at runtime. A few places hadn't gotten
that memo, notably including subselect.c's processing of outer-level
aggregates. This resulted in assertion failures or wrong plans for
cases in which a GROUPING() construct references an outer aggregation
level.
Also fix missing special cases for GroupingFunc in cost_qual_eval
(resulting in wrong cost estimates for GROUPING(), although it's
not clear that that would affect plan shapes in practice) and in
ruleutils.c (resulting in excess parentheses in pretty-print mode).
Per bug #17088 from Yaoguang Chen. Back-patch to all supported
branches.
Richard Guo, Tom Lane
Discussion: https://postgr.es/m/17088-e33882b387de7f5c@postgresql.org
For GENERATED columns, we record all dependencies of the generation
expression as AUTO dependencies of the column itself. This means
that the generated column is silently dropped if any dependency
is removed, even if CASCADE wasn't specified. This is at least
a POLA violation, but I think it's actually based on a misreading
of the standard. The standard does say that you can't drop a
dependent GENERATED column in RESTRICT mode; but that's buried down
in a subparagraph, on a different page from some pseudocode that
makes it look like an AUTO drop is being suggested.
Change this to be more like the way that we handle regular default
expressions, ie record the dependencies as NORMAL dependencies of
the pg_attrdef entry. Also, make the pg_attrdef entry's dependency
on the column itself be INTERNAL not AUTO. That has two effects:
* the column will go away, not just lose its default, if any
dependency of the expression is dropped with CASCADE. So we
don't need any special mechanism to make that happen.
* it provides an additional cross-check preventing someone from
dropping the default expression without dropping the column.
catversion bump because of change in the contents of pg_depend
(which also requires a change in one information_schema view).
Per bug #17439 from Kevin Humphreys. Although this is a longstanding
bug, it seems impractical to back-patch because of the need for
catalog contents changes.
Discussion: https://postgr.es/m/17439-7df4421197e928f0@postgresql.org
DROP INDEX needs to lock the index's table before the index itself,
else it will deadlock against ordinary queries that acquire the
relation locks in that order. This is correctly mechanized for
plain indexes by RangeVarCallbackForDropRelation; but in the case of
a partitioned index, we neglected to lock the child tables in advance
of locking the child indexes. We can fix that by traversing the
inheritance tree and acquiring the needed locks in RemoveRelations,
after we have acquired our locks on the parent partitioned table and
index.
While at it, do some refactoring to eliminate confusion between
the actual and expected relkind in RangeVarCallbackForDropRelation.
We can save a couple of syscache lookups too, by having that function
pass back info that RemoveRelations will need.
Back-patch to v11 where partitioned indexes were added.
Jimmy Yih, Gaurab Dey, Tom Lane
Discussion: https://postgr.es/m/BYAPR05MB645402330042E17D91A70C12BD5F9@BYAPR05MB6454.namprd05.prod.outlook.com
When an update on a partitioned table referenced in foreign key
constraints causes a row to move from one partition to another,
the fact that the move is implemented as a delete followed by an insert
on the target partition causes the foreign key triggers to have
surprising behavior. For example, a given foreign key's delete trigger
which implements the ON DELETE CASCADE clause of that key will delete
any referencing rows when triggered for that internal DELETE, although
it should not, because the referenced row is simply being moved from one
partition of the referenced root partitioned table into another, not
being deleted from it.
This commit teaches trigger.c to skip queuing such delete trigger events
on the leaf partitions in favor of an UPDATE event fired on the root
target relation. Doing so is sensible because both the old and the new
tuple "logically" belong to the root relation.
The after trigger event queuing interface now allows passing the source
and the target partitions of a particular cross-partition update when
registering the update event for the root partitioned table. Along with
the two ctids of the old and the new tuple, the after trigger event now
also stores the OIDs of those partitions. The tuples fetched from the
source and the target partitions are converted into the root table
format, if necessary, before they are passed to the trigger function.
The implementation currently has a limitation that only the foreign keys
pointing into the query's target relation are considered, not those of
its sub-partitioned partitions. That seems like a reasonable
limitation, because it sounds rare to have distinct foreign keys
pointing to sub-partitioned partitions instead of to the root table.
This misbehavior stems from commit f56f8f8da6 (which added support for
foreign keys to reference partitioned tables) not paying sufficient
attention to commit 2f17844104 (which had introduced cross-partition
updates a year earlier). Even though the former commit goes back to
Postgres 12, we're not backpatching this fix at this time for fear of
destabilizing things too much, and because there are a few ABI breaks in
it that we'd have to work around in older branches. It also depends on
commit f4566345cf, which had its own share of backpatchability issues
as well.
Author: Amit Langote <amitlangote09@gmail.com>
Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
Reviewed-by: Álvaro Herrera <alvherre@alvh.no-ip.org>
Reported-by: Eduard Català <eduard.catala@gmail.com>
Discussion: https://postgr.es/m/CA+HiwqFvkBCmfwkQX_yBqv2Wz8ugUGiBDxum8=WvVbfU1TXaNg@mail.gmail.com
Discussion: https://postgr.es/m/CAL54xNZsLwEM1XCk5yW9EqaRzsZYHuWsHQkA2L5MOSKXAwviCQ@mail.gmail.com
createdb() didn't check the validity of collation attributes, which has
to be done explicitly on ICU < 54. It also forgot to close the ICU collator
opened during the check, which leaked some memory.
To fix both, add a new check_icu_locale() that does all the appropriate
verification and closes the ICU collator.
initdb also had a partial check for ICU < 54. To have consistent error
reporting across major ICU versions, and to get rid of the need to include
ucol.h, remove the partial check there. The backend will report an error if
needed during the post-bootstrap initialization phase.
Author: Julien Rouhaud <julien.rouhaud@free.fr>
Discussion: https://www.postgresql.org/message-id/20220319041459.qqqiqh335sga5ezj@jrouhaud
b048326 has added support for SET ACCESS METHOD in ALTER TABLE, but it
has missed a few things for materialized views:
- No documentation for this clause on the ALTER MATERIALIZED VIEW page.
- psql tab completion missing.
- No regression tests.
This commit closes the gap on all the points listed above.
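The clause in question looks like this (matview name is hypothetical):
ALTER MATERIALIZED VIEW my_matview SET ACCESS METHOD heap;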
Author: Yugo Nagata
Discussion: https://postgr.es/m/20220316133337.5dc9740abfa24c25ec9f67f5@sraoss.co.jp
The clauses SET TABLESPACE and ALL IN TABLESPACE have been supported in
ALTER MATERIALIZED VIEW for a long time, and they behave mostly like ALTER
TABLE by reusing the same code paths, but there were zero tests for
them. This commit closes the gap with new tests in tablespace.sql.
Author: Yugo Nagata
Discussion: https://postgr.es/m/20220316133337.5dc9740abfa24c25ec9f67f5@sraoss.co.jp
The output of table_to_xmlschema() and allied functions includes
a regex describing valid values for these types ... but the regex
was itself invalid, as it failed to escape a literal "+" sign.
Report and fix by Renan Soares Lopes. Back-patch to all
supported branches.
Discussion: https://postgr.es/m/7f6fabaa-3f8f-49ab-89ca-59fbfe633105@me.com
Otherwise, the database encoding varies depending on the user's
environment, and so the test might fail depending on whether ICU
likes the encoding. In particular, the test fails completely
if the prevailing locale is C.
In commit bf7ca1587, I had the bright idea that we could make the
result of a whole-row Var (that is, foo.*) track any column aliases
that had been applied to the FROM entry the Var refers to. However,
that's not terribly logically consistent, because now the output of
the Var is no longer of the named composite type that the Var claims
to emit. bf7ca1587 tried to handle that by changing the output
tuple values to be labeled with a blessed RECORD type, but that's
really pretty disastrous: we can wind up storing such tuples onto
disk, whereupon they're not readable by other sessions.
The only practical fix I can see is to give up on what bf7ca1587
tried to do, and say that the column names of tuples produced by
a whole-row Var are always those of the underlying named composite
type, query aliases or no. While this introduces some inconsistencies,
it removes others, so it's not that awful in the abstract. What *is*
kind of awful is to make such a behavioral change in a back-patched
bug fix. But corrupt data is worse, so back-patched it will be.
(A workaround available to anyone who's unhappy about this is to
introduce an extra level of sub-SELECT, so that the whole-row Var is
referring to the sub-SELECT's output and not to a named table type.
Then the Var is of type RECORD to begin with and there's no issue.)
Per report from Miles Delahunty. The faulty commit dates to 9.5,
so back-patch to all supported branches.
Discussion: https://postgr.es/m/2950001.1638729947@sss.pgh.pa.us
When publishing changes through a partition root, we should use the row
filter for the top-most ancestor. The relation may be added to multiple
publications, using different ancestors, and 52e4f0cd47 handled this
incorrectly. With c91f71b9dc we find the correct top-most ancestor, but
the code tried to fetch the row filter from all publications, including
those using a different ancestor etc. No row filter can be found for
such publications, which was treated as replicating all rows.
Similarly to c91f71b9dc, this seems to be a rare issue in practice. It
requires multiple publications including the same partitioned relation,
through different ancestors.
Fixed by only passing publications containing the top-most ancestor to
pgoutput_row_filter_init(), so that treating a missing row filter as
replicating all rows is correct.
Report and fix by me, test case by Hou zj. Reviews and improvements by
Amit Kapila.
Author: Tomas Vondra, Hou zj, Amit Kapila
Reviewed-by: Amit Kapila, Hou zj
Discussion: https://postgr.es/m/d26d24dd-2fab-3c48-0162-2b7f84a9c893%40enterprisedb.com
This adds the option to use ICU as the default locale provider for
either the whole cluster or a database. New options for initdb,
createdb, and CREATE DATABASE are used to select this.
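A sketch of the new options (locale names and paths are illustrative):
initdb --locale-provider=icu --icu-locale=en-US -D /path/to/data
CREATE DATABASE mydb LOCALE_PROVIDER icu ICU_LOCALE 'de-DE' TEMPLATE template0;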
Since some (legacy) code still uses the libc locale facilities
directly, we still need to set the libc global locale settings even if
ICU is otherwise selected. So pg_database now has three
locale-related fields: the existing datcollate and datctype, which are
always set, and a new daticulocale, which is only set if ICU is
selected. A similar change is made in pg_collation for consistency,
but in that case, only the libc-related fields or the ICU-related
field is set, never both.
Reviewed-by: Julien Rouhaud <rjuju123@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/5e756dd6-0e91-d778-96fd-b1bcb06c161a%402ndquadrant.com
Using this system function with an in-place tablespace (created when
allow_in_place_tablespaces is enabled by specifying an empty string as
location) caused a failure when using readlink(), as the tablespace is,
in this case, not a symbolic link in pg_tblspc/ but a directory.
Rather than failing, this commit changes pg_tablespace_location() so
that a path relative to the data directory is returned for in-place
tablespaces, distinguishing tablespaces created with
allow_in_place_tablespaces enabled from those created without it.
Getting a path rather than an empty string (which would match the CREATE
TABLESPACE command in this case) is more useful for tests that would like
to rely on this function.
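A rough illustration (tablespace name is hypothetical; requires the
developer option allow_in_place_tablespaces):
SET allow_in_place_tablespaces = on;
CREATE TABLESPACE ts_inplace LOCATION '';
SELECT pg_tablespace_location(oid) FROM pg_tablespace
  WHERE spcname = 'ts_inplace';  -- now returns pg_tblspc/<oid>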
While on it, a regression test is added for this case. This is simple
to add in the main regression test suite thanks to regexp_replace() to
mask the part of the tablespace location dependent on its OID.
Author: Michael Paquier
Reviewed-by: Kyotaro Horiguchi, Thomas Munro
Discussion: https://postgr.es/m/YiG1RleON1WBcLnX@paquier.xyz
Commit 83fd4532a7 allowed publishing of changes via ancestors, for
publications defined with publish_via_partition_root. But the way
the ancestor was determined in get_rel_sync_entry() was incorrect,
simply updating the same variable. So with multiple publications,
replicating different ancestors, the outcome depended on the order
of publications in the list - the value from the last loop was used,
even if it wasn't the top-most ancestor.
This is probably a rare situation, as in most cases publications do
not overlap, so each partition has exactly one candidate ancestor
to replicate as and there's no ambiguity.
Fixed by tracking the "ancestor level" for each publication, and
picking the top-most ancestor. Adds a test case, verifying the
correct ancestor is used for publishing the changes and that this
does not depend on order of publications in the list.
Older releases have another bug in this loop - once all actions are
replicated, the loop is terminated, on the assumption that inspecting
additional publications is unnecessary. But that misses the fact that
those additional publications may replicate different ancestors.
Fixed by removing this break condition. We might still terminate the
loop in some cases (e.g. when replicating all actions and the ancestor
is the partition root).
Backpatch to 13, where publish_via_partition_root was introduced.
Initial report and fix by me, test added by Hou zj. Reviews and
improvements by Amit Kapila.
Author: Tomas Vondra, Hou zj, Amit Kapila
Reviewed-by: Amit Kapila, Hou zj
Discussion: https://postgr.es/m/d26d24dd-2fab-3c48-0162-2b7f84a9c893%40enterprisedb.com
This system function was being triggered once in the main regression
test suite to check its SRF configuration, and more in other test
modules, but nothing checked the behavior of the options missing_ok and
include_dot_dirs. This commit adds some tests for both options, to
avoid mistakes if this code is manipulated in the future.
Extracted from a larger patch by the same author, with a few tweaks by
me.
Author: Justin Pryzby
Discussion: https://postgr.es/m/20191227170220.GE12890@telsasoft.com
The upper case versions "OF", "TZH", and "TZM" are already supported,
and all other format codes that are supported in upper case are also
supported in lower case, so we should support these as well for
consistency.
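For instance:
SELECT to_char(now(), 'HH24:MI:SS of');
SELECT to_char(now(), 'HH24:MI:SS tzh:tzm');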
Nitin Jadhav, with a tiny cosmetic change by me. Reviewed by Suraj
Kharage and David Zhang.
Discussion: http://postgr.es/m/CAMm1aWZ-oZyKd75+8D=VJ0sAoSwtdXWLP-MAWD4D8R1Dgandzw@mail.gmail.com
Logical replication apply workers for a subscription can easily get stuck
in an infinite loop of attempting to apply a change, triggering an error
(such as a constraint violation), exiting with the error written to the
subscription server log, and restarting.
To partially remedy the situation, this patch adds a new subscription
option named 'disable_on_error'. To be consistent with old behavior, this
option defaults to false. When true, both the tablesync worker and apply
worker catch any errors thrown and disable the subscription in order to
break the loop. The error is still also written in the logs.
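For example (subscription, connection and publication names are
placeholders):
CREATE SUBSCRIPTION mysub
  CONNECTION 'host=publisher dbname=postgres'
  PUBLICATION mypub
  WITH (disable_on_error = true);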
Once the subscription is disabled, users can either manually resolve the
conflict/error or skip the conflicting transaction by using
pg_replication_origin_advance() function. After resolving the conflict,
users need to enable the subscription to allow the apply process to proceed.
Author: Osumi Takamichi and Mark Dilger
Reviewed-by: Greg Nancarrow, Vignesh C, Amit Kapila, Wang wei, Tang Haiying, Peter Smith, Masahiko Sawada, Shi Yu
Discussion : https://postgr.es/m/DB35438F-9356-4841-89A0-412709EBD3AB%40enterprisedb.com
Starting with cc50080a82, the create_index test fails when run with
synchronous_commit=off. synchronous_commit=off delays when hint bits may be
set. Some plans change depending on the number of all-visible pages, which in
turn can be influenced by the delayed hint bits.
Force synchronous_commit to `on` in test_setup.sql. Not very satisfying, but
there's no obvious alternative.
Reported-By: Aleksander Alekseev <aleksander@timescale.com>
Author: Andres Freund <andres@anarazel.de>
Author: Aleksander Alekseev <aleksander@timescale.com>
Discussion: https://www.postgresql.org/message-id/flat/CAJ7c6TPJNof1Q+vJsy3QebgbPgXdu2ErPvYkBdhD6_Ckv5EZRg@mail.gmail.com
Commit 8b069ef5d changed this function to look at pg_constraint.conindid
rather than searching pg_depend. That was a good performance improvement,
but it failed to preserve the exact semantics. The old code would only
return an index that was "owned by" (internally dependent on) the
specified constraint, whereas the new code will also return indexes that
are just referenced by foreign key constraints. This confuses ALTER
TABLE, which was implicitly expecting the previous semantics, into
failing with errors like
ERROR: relation 146621 has multiple clustered indexes
or
ERROR: "pk_attbl" is not an index for table "atref"
We can fix this without reverting the performance improvement by adding
a contype check in get_constraint_index(). Another way could be to
make ALTER TABLE check it, but I'm worried that extension code could
also have subtle dependencies on the old semantics.
Tom Lane and Japin Li, per bug #17409 from Holly Roberts.
Back-patch to v14 where the error crept in.
Discussion: https://postgr.es/m/17409-52871dda8b5741cb@postgresql.org
Slow hosts may avoid load-induced, spurious failures by setting
environment variable PG_TEST_TIMEOUT_DEFAULT to some number of seconds
greater than 180. Developers may see faster failures by setting that
environment variable to some lesser number of seconds. In tests, write
$PostgreSQL::Test::Utils::timeout_default wherever the convention has
been to write 180. This change raises the default for some briefer
timeouts. Back-patch to v10 (all supported versions).
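As a hedged example (the exact invocation depends on the build setup),
a slow host might run a TAP suite with:
PG_TEST_TIMEOUT_DEFAULT=360 make -C src/test/recovery check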
Discussion: https://postgr.es/m/20220218052842.GA3627003@rfd.leadboat.com
pg_regress reported "Unix socket" as the default location whenever
HAVE_UNIX_SOCKETS is defined. However, that's not been accurate
on Windows since 8f3ec75de. Update this logic to match what libpq
actually does now.
This is just cosmetic, but still it's potentially misleading.
Back-patch to v13 where 8f3ec75de came in.
Discussion: https://postgr.es/m/3894060.1646415641@sss.pgh.pa.us
Commit 52e4f0cd47 didn't add tests for pg_dump support, so add a few tests
for it. Additionally, verify that catalogs are updated after a few
ALTER PUBLICATION commands that modify row filters by using \d.
Reported-by: Tomas Vondra
Author: Shi yu, based on initial by Tomas Vondra
Reviewed-by: Euler Taveira and Amit Kapila
Discussion: https://postgr.es/m/6bdbd7fc-e81a-9a77-d963-24adeb95f29e@enterprisedb.com
This code seems to have been written on the assumption that
"unsigned long" is 32 bits; or at any rate it ignored the
possibility of conversion overflow. Rewrite, borrowing some
logic from oidin().
Discussion: https://postgr.es/m/3441768.1646343914@sss.pgh.pa.us
It was decided (refer to the Discussion link below) that the stats
collector is not an appropriate place to store the error information of
subscription workers.
This patch changes the pg_stat_subscription_workers view (introduced by
commit 8d74fc96db) so that it stores only statistics counters:
apply_error_count and sync_error_count, and has one entry for
each subscription. The removed error information such as error-XID and
the error message would be stored in another way in the future which is
more reliable and persistent.
After removing these error details, there is no longer any relation
information, so the subscription statistics are now cluster-wide
statistics.
The patch also changes the view name to pg_stat_subscription_stats, since
the word "worker" reflects an implementation detail, namely that we use
one worker for each tablesync and one for applying changes.
Author: Masahiko Sawada, based on suggestions by Andres Freund
Reviewed-by: Peter Smith, Haiying Tang, Takamichi Osumi, Amit Kapila
Discussion: https://postgr.es/m/20220125063131.4cmvsxbz2tdg6g65@alap3.anarazel.de
justify_interval, justify_hours, and justify_days didn't check for
overflow when promoting hours to days or days to months; but that's
possible when the upper field's value is already large. Detect and
report any such overflow.
Also, we can avoid unnecessary overflow in some cases in justify_interval
by pre-justifying the days field. (Thanks to Nathan Bossart for this
idea.)
Joe Koshakow
Discussion: https://postgr.es/m/CAAvxfHeNqsJ2xYFbPUf_8nNQUiJqkag04NW6aBQQ0dbZsxfWHA@mail.gmail.com
See also afdeff1052. Failures after that commit provided a few more hints,
but not yet enough to understand what's going on.
In 019_replslot_limit.pl shut down nodes with fast instead of immediate mode
if we observe the failure mode. That should tell us whether the failures we're
observing are just a timing issue under high load. PGCTLTIMEOUT should prevent
buildfarm animals from hanging endlessly.
Also adds a bit more logging to replication slot drop and ShutdownPostgres().
Discussion: https://postgr.es/m/20220225192941.hqnvefgdzaro6gzg@alap3.anarazel.de
realloc() will return NULL on a failed reallocation, so the destination
pointer must be inspected to avoid a null pointer dereference. Further,
assigning the return value to the source pointer leaks the original
allocation in the case of reallocation failure. Fix by using pg_realloc()
instead, which has full error handling.
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/9FC7E603-9246-4C62-B466-A39CFAF454AE@yesql.se
In the Publisher-Subscriber setup, after performing a DML operation on the
publisher, we need to wait for it to be replayed on the subscriber before
querying the same data on the subscriber. One of the tests missed the wait
step.
As per buildfarm.
Author: Peter Smith
Discussion: https://postgr.es/m/CAHut+Pv=e9Qd1TSYo8Og6x6Abfz3b9_htwinLp4ENPgV45DACQ@mail.gmail.com
Refactor the recovery tests to not carry a local duplicated copy of
the pump_until function which pumps a process until a defined string
is seen on a stream. This reduces duplication, and is in preparation
for another patch which will also use this functionality.
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/YgynUafCyIu3jIhC@paquier.xyz
I have not been able to reproduce the occasional failures of
019_replslot_limit.pl we are seeing in the buildfarm, and not for lack of
trying. The additional logging and increased log level will hopefully help.
Will be reverted once the cause is identified.
Discussion: https://postgr.es/m/20220218231415.c4plkp4i3reqcwip@alap3.anarazel.de
This feature adds row filtering for publication tables. When a publication
is defined or modified, an optional WHERE clause can be specified. Rows
that don't satisfy this WHERE clause will be filtered out. This allows a
set of tables to be partially replicated. The row filter is per table. A
new row filter can be added simply by specifying a WHERE clause after the
table name. The WHERE clause must be enclosed by parentheses.
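For example (table and column names are hypothetical):
CREATE PUBLICATION active_rows_pub FOR TABLE departments WHERE (active IS TRUE);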
The row filter WHERE clause for a table added to a publication that
publishes UPDATE and/or DELETE operations must contain only columns that
are covered by REPLICA IDENTITY. The row filter WHERE clause for a table
added to a publication that publishes INSERT can use any column. If the
row filter evaluates to NULL, it is regarded as "false". The WHERE clause
only allows simple expressions that don't have user-defined functions,
user-defined operators, user-defined types, user-defined collations,
non-immutable built-in functions, or references to system columns. These
restrictions could be addressed in the future.
If you choose to do the initial table synchronization, only data that
satisfies the row filters is copied to the subscriber. If the subscription
has several publications in which a table has been published with
different WHERE clauses, rows that satisfy ANY of the expressions will be
copied. If a subscriber is a pre-15 version, the initial table
synchronization won't use row filters even if they are defined in the
publisher.
The row filters are applied before publishing the changes. If the
subscription has several publications in which the same table has been
published with different filters (for the same publish operation), those
expressions get OR'ed together so that rows satisfying any of the
expressions will be replicated.
This means all the other filters become redundant if (a) one of the
publications has no filter at all, (b) one of the publications was
created using FOR ALL TABLES, (c) one of the publications was created
using FOR ALL TABLES IN SCHEMA and the table belongs to that same schema.
If your publication contains a partitioned table, the publication
parameter publish_via_partition_root determines if it uses the partition's
row filter (if the parameter is false, the default) or the root
partitioned table's row filter.
Psql commands \dRp+ and \d <table-name> will display any row filters.
Author: Hou Zhijie, Euler Taveira, Peter Smith, Ajin Cherian
Reviewed-by: Greg Nancarrow, Haiying Tang, Amit Kapila, Tomas Vondra, Dilip Kumar, Vignesh C, Alvaro Herrera, Andres Freund, Wei Wang
Discussion: https://www.postgresql.org/message-id/flat/CAHE3wggb715X%2BmK_DitLXF25B%3DjE6xyNCH4YOwM860JR7HarGQ%40mail.gmail.com
Commit 75d22069e tried to throw a warning for setting a custom GUC whose
prefix belongs to a previously-loaded extension, if there is no such GUC
defined by the extension. But that caused unstable behavior with
parallel workers, because workers don't necessarily load extensions and
GUCs in the same order their leader did. To make that work safely, we
have to completely disallow the case. We now actually remove any such
GUCs at the time of initial extension load, and then throw an error not
just a warning if you try to add one later. While this might create a
compatibility issue for a few people, the improvement in error-detection
capability seems worth it; it's hard to believe that there's any good
use-case for choosing such GUC names.
This also un-reverts 5609cc01c (Rename EmitWarningsOnPlaceholders() to
MarkGUCPrefixReserved()), since that function's old name is now even
more of a misnomer.
Florin Irion and Tom Lane
Discussion: https://postgr.es/m/1902182.1640711215@sss.pgh.pa.us
When cleaning up temporary objects during process exit the cleanup could fail
with:
FATAL: cannot fetch toast data without an active snapshot
The bug is caused by RemoveTempRelationsCallback() not setting up a
snapshot. If an object with toasted catalog data needs to be cleaned up,
init_toast_snapshot() could fail with the above error.
Most of the time, however, the problem is masked due to cached catalog
snapshots being returned by GetOldestSnapshot(). But dropping an object can
cause catalog invalidations to be emitted. If no further catalog accesses are
necessary between the invalidation processing and the next toast datum
deletion, the bug becomes visible.
It's easy to miss this bug because it typically happens after clients
disconnect and the FATAL error just ends up in the log.
Luckily temporary table cleanup at the next use of the same temporary schema
or during DISCARD ALL does not have the same problem.
Fix the bug by pushing a snapshot in RemoveTempRelationsCallback(). Also add
isolation tests for temporary object cleanup, including objects with toasted
catalog data.
A future HEAD only commit will add an assertion trying to make this more
visible.
Reported-By: Miles Delahunty
Author: Andres Freund
Discussion: https://postgr.es/m/CAOFAq3BU5Mf2TTvu8D9n_ZOoFAeQswuzk7yziAb7xuw_qyw5gw@mail.gmail.com
Backpatch: 10-
Following migration of Windows buildfarm members running TAP tests to
use of ucrt64 perl for those tests, special processing for msys perl is
no longer necessary and so is removed.
Backpatch to release 10
Discussion: https://postgr.es/m/c65a8781-77ac-ea95-d185-6db291e1baeb@dunslane.net
Double the default setting for hash_mem_multiplier, from 1.0 to 2.0.
This setting makes hash-based executor nodes use twice the usual
work_mem limit.
The PostgreSQL 15 release notes should have a compatibility note about
this change.
Author: Peter Geoghegan <pg@bowt.ie>
Discussion: https://postgr.es/m/CAH2-Wzndc_ROk6CY-bC6p9O53q974Y0Ey4WX8jcPbuTZYM4Q3A@mail.gmail.com
After this, the PostgreSQL lexers no longer accept numeric literals
with trailing non-digits, such as 123abc, which would be scanned as
two tokens: 123 and abc. This is undocumented and surprising, and it
might also interfere with some extended numeric literal syntax being
contemplated for the future.
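For example, with the new lexer rules:

    SELECT 123abc;   -- previously scanned as 123 followed by abc (an implicit
                     -- column alias); now rejected with a syntax error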
Reviewed-by: John Naylor <john.naylor@enterprisedb.com>
Discussion: https://www.postgresql.org/message-id/flat/b239564c-cad0-b23e-c57e-166d883cb97d@enterprisedb.com
src/backend/utils/misc/check_guc is a script that cross-checks the
consistency of the GUCs with postgresql.conf.sample, making sure that
its format is in line with what guc.c has. It has never been run
automatically, and has rotten over the years, creating a lot of false
positives as per a report from Justin Pryzby.
d10e41d has introduced a SQL function to publish the most relevant flags
associated with a GUC, with tests added in the main regression test suite
to make sure that we avoid most of the inconsistencies in the GUC
settings, based on recent reports, but there was nothing able to
cross-check postgresql.conf.sample with the contents of guc.c.
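For instance, those flags can be queried along these lines (a sketch; the
NOT_IN_SAMPLE flag name is the one exposed by pg_settings_get_flags()):

    -- parameters expected to be absent from postgresql.conf.sample
    SELECT name
      FROM pg_settings
     WHERE 'NOT_IN_SAMPLE' = ANY (pg_settings_get_flags(name));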
This commit adds a TAP test that covers the remaining gap. It emulates
the most relevant checks that check_guc did, so that any format mistakes
in postgresql.conf.sample are detected at the development stage, with the
following checks:
- Check that parameters marked as NOT_IN_SAMPLE are not in the sample
file.
- Check that there are no dead entries in postgresql.conf.sample for
parameters not marked as NOT_IN_SAMPLE.
- Check that no parameters are missing from the sample file if listed in
guc.c without NOT_IN_SAMPLE.
The idea of building a list of the GUCs by parsing the sample file comes
from Justin, and he wrote the regex used in the patch to find all the
GUCs (this same formatting rule has basically applied for the last 20 years
or so). In order to test this patch, I have played with manual
modifications of postgresql.conf.sample and guc.c, making sure that we
detect problems with the GUC rules and the sample file format.
The test is located in src/test/modules/test_misc, which is the best
location I could think of for such sanity checks, rather than the main
regression test suite (src/test/regress), to avoid a new type of
dependency on the source tree.
The first attempt at this patch was b0a55f4, where the location of
postgresql.conf.sample was retrieved using pg_config --sharedir. This
has proven to be an issue for distributions (like Debian) that patch
pg_config to enforce installation paths at some preferred location,
which may not exist when the test is run, hence causing a failure.
Instead of that, as per a suggestion from Andres Freund, rely on the
fact that the test is always executed from its directory in the source
tree and use a relative path to find the sample file. This works for
the CI, VPATH builds and on Windows, and tests like the recovery one
added in f47ed79 rely on that already.
Reviewed-by: Justin Pryzby
Discussion: https://postgr.es/m/Yf9YGSwPiMu0c7fP@paquier.xyz
The 028_pitr_timelines.pl test would sometimes hang, waiting for a WAL
segment that was just filled up to be archived. It was because the
test used 'pg_stat_archiver.last_archived_wal' to check if a file was
archived, but the order that WAL files are archived when a standby is
promoted is not fully deterministic, and 'last_archived_wal' tracks
the last segment that was archived, not the highest-numbered WAL
segment. Because of that, if the archiver archived segment 3, and then
2, 'last_archived_wal' would say 2, and the test query would think that 3
has not been archived yet.
Normally, WAL files are marked ready for archival in order, and the
archiver process will process them in order, so that issue doesn't
arise. We have used the same query on 'last_archived_wal' in a few
other tests with no problem. But when a standby is promoted, things
are a bit chaotic. After promotion, the server will try to archive all
the WAL segments from the old timeline that are in pg_wal, as well as
the history file and any new WAL segments on the new timeline. The
end-of-recovery checkpoint will create the .ready files for all the
WAL files on the old timeline, but at the same time, the new timeline
is opened up for business. A file from the new timeline can therefore
be archived before the files from the old timeline have been marked as
ready for archival.
It turns out that we don't really need to wait for the archival in
this particular test, because the standby server is about to be
stopped, and stopping a server will wait for the end-of-recovery
checkpoint and all WAL archivals to finish, anyway. So we can just
remove it from the test.
Add a note to the docs on 'pg_stat_archiver' view that files can be
archived out of order.
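For reference, the kind of check the test relied on (column names per the
pg_stat_archiver view):

    -- the last segment handed to the archiver, which after a promotion is
    -- not necessarily the highest-numbered segment
    SELECT last_archived_wal, last_archived_time FROM pg_stat_archiver;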
Reviewed-by: Tom Lane
Discussion: https://www.postgresql.org/message-id/3186114.1644960507@sss.pgh.pa.us
checkViewTupleDesc() didn't get the memo that it should verify
same attcollation along with same type/typmod. (A quick scan
did not find other similar oversights.)
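A hedged example of the newly-detected case (exact error wording aside):

    CREATE VIEW v AS SELECT 'a'::text COLLATE "C" AS c;
    -- replacing the view with a different collation for column c now fails
    CREATE OR REPLACE VIEW v AS SELECT 'a'::text COLLATE "POSIX" AS c;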
Per bug #17404 from Pierre-Aurélien Georges. On another day
I might've back-patched this, but today I'm feeling paranoid
about unnecessary behavioral changes in back branches.
Discussion: https://postgr.es/m/17404-8a4a270ef30a6709@postgresql.org
The test has failed a couple of times on buildfarm member 'hoverfly'. It
gets stuck waiting for the standby to archive 000000020000000000000003
WAL segment. I don't understand why, but with DEBUG1, we will get messages
in the log whenever a segment is archived, which hopefully will give a
clue the next time it happens.
PostgreSQL currently accepts numeric literals with trailing
non-digits, such as 123abc where the abc is treated as the next token.
This may be a bit surprising. This commit adds test cases for this;
subsequent commits intend to change this behavior.
Reviewed-by: John Naylor <john.naylor@enterprisedb.com>
Discussion: https://www.postgresql.org/message-id/flat/b239564c-cad0-b23e-c57e-166d883cb97d@enterprisedb.com
While I was working on a patch to refactor things around xlog.c, I mixed
up EndOfLogTLI and replayTLI at the end of recovery. As a result, if you
recovered to a point with a lower-numbered timeline in a WAL segment
that has a higher TLI in the filename, the end-of-recovery WAL record
was created with invalid PrevTimeLineId. I noticed that while
self-reviewing, but no tests failed. So add a test to cover that corner
case.
Thanks to Amul Sul, who also submitted a test case for the same corner
case, although this patch is different from that.
Reviewed-by: Amul Sul, Michael Paquier
Discussion: https://www.postgresql.org/message-id/52bc9ccd-8591-431b-0086-15d9acf25a3f@iki.fi
Discussion: https://www.postgresql.org/message-id/CAAJ_b94Vjt5cXGza_1MkjLQWciNdEemsmiWuQj0d%3DM7JfjAa1g%40mail.gmail.com
This adds to database objects the same version tracking that collation
objects have. There is a new pg_database column datcollversion that
stores the version, a new function
pg_database_collation_actual_version() to get the version from the
operating system, and a new subcommand ALTER DATABASE ... REFRESH
COLLATION VERSION.
This was not originally added together with pg_collation.collversion,
since at the time version tracking was only supported for ICU, and ICU
at the database level is not currently supported. But we now have
version tracking for glibc (since PG13), FreeBSD (since PG14), and
Windows (since PG13), so this is useful to have now.
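Using the names introduced above, the stored and actual versions can be
compared and refreshed like this (the database name is illustrative):

    SELECT datcollversion,
           pg_database_collation_actual_version(oid) AS actual_version
      FROM pg_database
     WHERE datname = current_database();

    ALTER DATABASE mydb REFRESH COLLATION VERSION;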
Reviewed-by: Julien Rouhaud <rjuju123@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/f0ff3190-29a3-5b39-a179-fa32eee57db6%40enterprisedb.com