Commit Graph

6259 Commits

Author SHA1 Message Date
Tomas Vondra c91f71b9dc Fix publish_as_relid with multiple publications
Commit 83fd4532a7 allowed publishing of changes via ancestors, for
publications defined with publish_via_partition_root. But the way
the ancestor was determined in get_rel_sync_entry() was incorrect:
the same variable was simply overwritten on each loop iteration. So
with multiple publications replicating different ancestors, the outcome
depended on the order of publications in the list - the value from the
last iteration was used, even if it wasn't the top-most ancestor.

This is probably a rare situation, as in most cases publications do
not overlap, so each partition has exactly one candidate ancestor to
replicate as and there's no ambiguity.
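
For illustration, a sketch of an overlapping setup where the ambiguity
could appear (all object names here are made up for the example):

CREATE TABLE tbl (a int) PARTITION BY RANGE (a);
CREATE TABLE tbl_mid PARTITION OF tbl
    FOR VALUES FROM (0) TO (1000) PARTITION BY RANGE (a);
CREATE TABLE tbl_leaf PARTITION OF tbl_mid FOR VALUES FROM (0) TO (100);
-- Two overlapping publications replicating different ancestors of tbl_leaf.
CREATE PUBLICATION pub_mid FOR TABLE tbl_mid
    WITH (publish_via_partition_root = true);
CREATE PUBLICATION pub_root FOR TABLE tbl
    WITH (publish_via_partition_root = true);
-- Changes to tbl_leaf should be published via the top-most ancestor (tbl),
-- no matter in which order the publications appear in the list.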

Fixed by tracking the "ancestor level" for each publication, and
picking the top-most ancestor. Adds a test case, verifying that the
correct ancestor is used for publishing the changes and that this
does not depend on the order of publications in the list.

Older releases have another bug in this loop - once all actions are
replicated, the loop is terminated, on the assumption that inspecting
additional publications is unnecessary. But that misses the fact that
those additional publications may replicate different ancestors.

Fixed by removal of this break condition. We might still terminate the
loop in some cases (e.g. when replicating all actions and the ancestor
is the partition root).

Backpatch to 13, where publish_via_partition_root was introduced.

Initial report and fix by me, test added by Hou zj. Reviews and
improvements by Amit Kapila.

Author: Tomas Vondra, Hou zj, Amit Kapila
Reviewed-by: Amit Kapila, Hou zj
Discussion: https://postgr.es/m/d26d24dd-2fab-3c48-0162-2b7f84a9c893%40enterprisedb.com
2022-03-16 18:05:58 +01:00
Michael Paquier 6bdf1a1400 Fix collection of typos in the code and the documentation
Some words were duplicated while other places were grammatically
incorrect, including one variable name in the code.

Author: Otto Kekalainen, Justin Pryzby
Discussion: https://postgr.es/m/7DDBEFC5-09B6-4325-B942-B563D1A24BDC@amazon.com
2022-03-15 11:29:35 +09:00
Michael Paquier ff8b37ba80 Add more regression tests for pg_ls_dir()
This system function was being triggered once in the main regression
test suite to check its SRF configuration, and a few more times in other
test modules, but nothing checked the behavior of the options missing_ok and
include_dot_dirs.  This commit adds some tests for both options, to
avoid mistakes if this code is manipulated in the future.
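
For instance (illustrative invocations; the directory names are made up):

SELECT pg_ls_dir('does_not_exist', true, false);   -- missing_ok: no error, no rows
SELECT pg_ls_dir('does_not_exist', false, false);  -- raises an error
SELECT pg_ls_dir('.', false, true);                -- include_dot_dirs: lists '.' and '..' too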

Extracted from a larger patch by the same author, with a few tweaks by
me.

Author: Justin Pryzby
Discussion: https://postgr.es/m/20191227170220.GE12890@telsasoft.com
2022-03-15 10:52:19 +09:00
Robert Haas 9dde82899c Support "of", "tzh", and "tzm" format codes.
The upper case versions "OF", "TZH", and "TZM" are already supported,
and all other format codes that are supported in upper case are also
supported in lower case, so we should support these as well for
consistency.
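
For example (a small sketch; the output depends on the session time zone):

SELECT to_char(current_timestamp, 'HH24:MI:SS OF');
SELECT to_char(current_timestamp, 'HH24:MI:SS of');      -- now equivalent to the above
SELECT to_char(current_timestamp, 'HH24:MI:SS tzh:tzm');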

Nitin Jadhav, with a tiny cosmetic change by me. Reviewed by Suraj
Kharage and David Zhang.

Discussion: http://postgr.es/m/CAMm1aWZ-oZyKd75+8D=VJ0sAoSwtdXWLP-MAWD4D8R1Dgandzw@mail.gmail.com
2022-03-14 16:50:54 -04:00
Amit Kapila 705e20f855 Optionally disable subscriptions on error.
Logical replication apply workers for a subscription can easily get stuck
in an infinite loop of attempting to apply a change, triggering an error
(such as a constraint violation), exiting with the error written to the
subscription server log, and restarting.

To partially remedy the situation, this patch adds a new subscription
option named 'disable_on_error'. To be consistent with old behavior, this
option defaults to false. When true, both the tablesync worker and apply
worker catch any errors thrown and disable the subscription in order to
break the loop. The error is still also written in the logs.

Once the subscription is disabled, users can either manually resolve the
conflict/error or skip the conflicting transaction by using the
pg_replication_origin_advance() function. After resolving the conflict,
users need to re-enable the subscription to allow the apply process to proceed.
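
A sketch of the intended workflow (connection string, subscription and
publication names are placeholders):

CREATE SUBSCRIPTION mysub
    CONNECTION 'host=publisher dbname=postgres'
    PUBLICATION mypub
    WITH (disable_on_error = true);
-- ... an apply error occurs and the subscription gets disabled ...
-- After resolving the conflict, or skipping the transaction with
-- pg_replication_origin_advance(), re-enable the subscription:
ALTER SUBSCRIPTION mysub ENABLE;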

Author: Osumi Takamichi and Mark Dilger
Reviewed-by: Greg Nancarrow, Vignesh C, Amit Kapila, Wang wei, Tang Haiying, Peter Smith, Masahiko Sawada, Shi Yu
Discussion: https://postgr.es/m/DB35438F-9356-4841-89A0-412709EBD3AB%40enterprisedb.com
2022-03-14 09:32:40 +05:30
Andres Freund 7e12256b47 Force track_io_timing off in explain.sql to avoid failures when enabled.
Discussion: https://postgr.es/m/20201029231037.rkxo57ugnuchykpu@alap3.anarazel.de
2022-03-12 14:21:40 -08:00
Andres Freund 02fea8fdda Set synchronous_commit=on in test_setup.sql.
Starting with commit cc50080a82, the create_index test fails when run with
synchronous_commit=off. synchronous_commit=off delays when hint bits may be
set. Some plans change depending on the number of all-visible pages, which in
turn can be influenced by the delayed hint bits.

Force synchronous_commit to `on` in test_setup.sql. Not very satisfying, but
there's no obvious alternative.

Reported-By: Aleksander Alekseev <aleksander@timescale.com>
Author: Andres Freund <andres@anarazel.de>
Author: Aleksander Alekseev <aleksander@timescale.com>
Discussion: https://www.postgresql.org/message-id/flat/CAJ7c6TPJNof1Q+vJsy3QebgbPgXdu2ErPvYkBdhD6_Ckv5EZRg@mail.gmail.com
2022-03-12 14:15:25 -08:00
Tom Lane 641f3dffcd Restore the previous semantics of get_constraint_index().
Commit 8b069ef5d changed this function to look at pg_constraint.conindid
rather than searching pg_depend.  That was a good performance improvement,
but it failed to preserve the exact semantics.  The old code would only
return an index that was "owned by" (internally dependent on) the
specified constraint, whereas the new code will also return indexes that
are just referenced by foreign key constraints.  This confuses ALTER
TABLE, which was implicitly expecting the previous semantics, into
failing with errors like
    ERROR:  relation 146621 has multiple clustered indexes
or
    ERROR:  "pk_attbl" is not an index for table "atref"

We can fix this without reverting the performance improvement by adding
a contype check in get_constraint_index().  Another way could be to
make ALTER TABLE check it, but I'm worried that extension code could
also have subtle dependencies on the old semantics.

Tom Lane and Japin Li, per bug #17409 from Holly Roberts.
Back-patch to v14 where the error crept in.

Discussion: https://postgr.es/m/17409-52871dda8b5741cb@postgresql.org
2022-03-11 13:47:29 -05:00
Andres Freund 45fb0de4dc ldap tests: Add paths for openbsd.
Discussion: https://postgr.es/m/721828a7-3043-6803-a85b-da63538db3cc@enterprisedb.com
2022-03-09 09:46:21 -08:00
Andres Freund ee56c3b216 ldap tests: Don't run on unsupported operating systems.
The tests currently fail on unsupported operating systems, rather than getting
skipped. The only reason this doesn't cause problems is that the tests aren't
run by default.

Discussion: https://postgr.es/m/721828a7-3043-6803-a85b-da63538db3cc@enterprisedb.com
2022-03-09 09:31:02 -08:00
Peter Eisentraut ddf590b811 pycodestyle (PEP 8) cleanup in Python scripts
These are mainly whitespace changes.  I didn't fix "E501 line too
long", which would require more significant surgery.
2022-03-09 10:54:20 +01:00
Noah Misch 766075105c Use PG_TEST_TIMEOUT_DEFAULT for pg_regress suite non-elapsing timeouts.
Currently, only contrib/test_decoding has this property.  Use \getenv to
load the timeout value.

Discussion: https://postgr.es/m/20220218052842.GA3627003@rfd.leadboat.com
2022-03-04 18:53:13 -08:00
Noah Misch f2698ea02c Introduce PG_TEST_TIMEOUT_DEFAULT for TAP suite non-elapsing timeouts.
Slow hosts may avoid load-induced, spurious failures by setting
environment variable PG_TEST_TIMEOUT_DEFAULT to some number of seconds
greater than 180.  Developers may see faster failures by setting that
environment variable to some lesser number of seconds.  In tests, write
$PostgreSQL::Test::Utils::timeout_default wherever the convention has
been to write 180.  This change raises the default for some briefer
timeouts.  Back-patch to v10 (all supported versions).

Discussion: https://postgr.es/m/20220218052842.GA3627003@rfd.leadboat.com
2022-03-04 18:53:13 -08:00
Tom Lane 9240589798 Fix pg_regress to print the correct postmaster address on Windows.
pg_regress reported "Unix socket" as the default location whenever
HAVE_UNIX_SOCKETS is defined.  However, that's not been accurate
on Windows since 8f3ec75de.  Update this logic to match what libpq
actually does now.

This is just cosmetic, but still it's potentially misleading.
Back-patch to v13 where 8f3ec75de came in.

Discussion: https://postgr.es/m/3894060.1646415641@sss.pgh.pa.us
2022-03-04 13:23:58 -05:00
Amit Kapila ceb57afd3c Add some additional tests for row filters in logical replication.
Commit 52e4f0cd47 didn't add tests for pg_dump support, so add a few tests
for it. Additionally, use \d to verify that catalogs are updated after a
few ALTER PUBLICATION commands that modify row filters.

Reported-by: Tomas Vondra
Author: Shi yu, based on an initial version by Tomas Vondra
Reviewed-by: Euler Taveira and Amit Kapila
Discussion: https://postgr.es/m/6bdbd7fc-e81a-9a77-d963-24adeb95f29e@enterprisedb.com
2022-03-04 07:54:12 +05:30
Tom Lane f7ea240aa7 Tighten overflow checks in tidin().
This code seems to have been written on the assumption that
"unsigned long" is 32 bits; or at any rate it ignored the
possibility of conversion overflow.  Rewrite, borrowing some
logic from oidin().

Discussion: https://postgr.es/m/3441768.1646343914@sss.pgh.pa.us
2022-03-03 20:04:35 -05:00
Amit Kapila 7a85073290 Reconsider pg_stat_subscription_workers view.
It was decided (refer to the Discussion link below) that the stats
collector is not an appropriate place to store the error information of
subscription workers.

This patch changes the pg_stat_subscription_workers view (introduced by
commit 8d74fc96db) so that it stores only statistics counters:
apply_error_count and sync_error_count, and has one entry for
each subscription. The removed error information, such as the error XID
and the error message, will be stored in another, more reliable and
persistent way in the future.

After removing these error details, there is no longer any relation
information, so the subscription statistics are now cluster-wide
statistics.

The patch also changes the view name to pg_stat_subscription_stats, since
the word "worker" exposes the implementation detail that we use one worker
for each tablesync and one for apply.
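
A quick sketch of querying the reshaped view (column names as described
above):

SELECT subname, apply_error_count, sync_error_count
FROM pg_stat_subscription_stats;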

Author: Masahiko Sawada, based on suggestions by Andres Freund
Reviewed-by: Peter Smith, Haiying Tang, Takamichi Osumi, Amit Kapila
Discussion: https://postgr.es/m/20220125063131.4cmvsxbz2tdg6g65@alap3.anarazel.de
2022-03-01 06:17:52 +05:30
Tom Lane 54bd1e43ca Handle integer overflow in interval justification functions.
justify_interval, justify_hours, and justify_days didn't check for
overflow when promoting hours to days or days to months; but that's
possible when the upper field's value is already large.  Detect and
report any such overflow.

Also, we can avoid unnecessary overflow in some cases in justify_interval
by pre-justifying the days field.  (Thanks to Nathan Bossart for this
idea.)
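
For reference, the promotions these functions perform (example values
follow the documented behavior; extreme inputs now raise an overflow error
instead of producing wrong results):

SELECT justify_hours(interval '27 hours');          -- 1 day 03:00:00
SELECT justify_days(interval '35 days');            -- 1 mon 5 days
SELECT justify_interval(interval '1 mon -1 hour');  -- 29 days 23:00:00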

Joe Koshakow

Discussion: https://postgr.es/m/CAAvxfHeNqsJ2xYFbPUf_8nNQUiJqkag04NW6aBQQ0dbZsxfWHA@mail.gmail.com
2022-02-28 15:36:54 -05:00
Andres Freund fe0972ee5e Add further debug info to help debug 019_replslot_limit.pl failures.
See also afdeff1052. Failures after that commit provided a few more hints,
but not yet enough to understand what's going on.

In 019_replslot_limit.pl shut down nodes with fast instead of immediate mode
if we observe the failure mode. That should tell us whether the failures we're
observing are just a timing issue under high load. PGCTLTIMEOUT should prevent
buildfarm animals from hanging endlessly.

Also adds a bit more logging to replication slot drop and ShutdownPostgres().

Discussion: https://postgr.es/m/20220225192941.hqnvefgdzaro6gzg@alap3.anarazel.de
2022-02-25 17:04:39 -08:00
Daniel Gustafsson 31d8d4740f Guard against reallocation failure in pg_regress
realloc() will return NULL on a failed reallocation, so the destination
pointer must be inspected to avoid null pointer dereference.  Further,
assigning the return value to the source pointer leaks the allocation in
the case of reallocation failure.  Fix by using pg_realloc instead, which
has full error handling.

Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/9FC7E603-9246-4C62-B466-A39CFAF454AE@yesql.se
2022-02-24 20:58:18 +01:00
Amit Kapila cfb4e209ec Fix one of the tests introduced in commit 52e4f0cd47.
In the Publisher-Subscriber setup, after performing a DML operation on the
publisher, we need to wait for it to be replayed on the subscriber before
querying the same data on the subscriber. One of the tests missed the wait
step.

As per buildfarm.

Author: Peter Smith
Discussion: https://postgr.es/m/CAHut+Pv=e9Qd1TSYo8Og6x6Abfz3b9_htwinLp4ENPgV45DACQ@mail.gmail.com
2022-02-24 08:54:39 +05:30
Daniel Gustafsson c7d7e12039 Remove duplicated word in comment
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/B7C15416-BD61-4926-9843-5C557BCD7007@yesql.se
2022-02-23 14:23:50 +01:00
Daniel Gustafsson 6da65a3f9a Add function to pump IPC process until string match
Refactor the recovery tests to not carry a local duplicated copy of
the pump_until function which pumps a process until a defined string
is seen on a stream. This reduces duplication, and is in preparation
for another patch which will also use this functionality.

Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/YgynUafCyIu3jIhC@paquier.xyz
2022-02-23 14:22:16 +01:00
Andres Freund afdeff1052 Add temporary debug info to help debug 019_replslot_limit.pl failures.
I have not been able to reproduce the occasional failures of
019_replslot_limit.pl we are seeing in the buildfarm, and not for lack of
trying. The additional logging and increased log level will hopefully help.

Will be reverted once the cause is identified.

Discussion: https://postgr.es/m/20220218231415.c4plkp4i3reqcwip@alap3.anarazel.de
2022-02-22 18:02:34 -08:00
Peter Eisentraut 9467321649 Put typtype letters back into consistent order
2022-02-22 10:11:38 +01:00
Amit Kapila 52e4f0cd47 Allow specifying row filters for logical replication of tables.
This feature adds row filtering for publication tables. When a publication
is defined or modified, an optional WHERE clause can be specified. Rows
that don't satisfy this WHERE clause will be filtered out. This allows a
set of tables to be partially replicated. The row filter is per table. A
new row filter can be added simply by specifying a WHERE clause after the
table name. The WHERE clause must be enclosed by parentheses.
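
A minimal sketch of the syntax (table and predicate are illustrative):

CREATE PUBLICATION active_departments FOR TABLE departments WHERE (active IS TRUE);
ALTER PUBLICATION active_departments SET TABLE departments WHERE (id > 100);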

The row filter WHERE clause for a table added to a publication that
publishes UPDATE and/or DELETE operations must contain only columns that
are covered by REPLICA IDENTITY. The row filter WHERE clause for a table
added to a publication that publishes INSERT can use any column. If the
row filter evaluates to NULL, it is regarded as "false". The WHERE clause
only allows simple expressions that don't have user-defined functions,
user-defined operators, user-defined types, user-defined collations,
non-immutable built-in functions, or references to system columns. These
restrictions could be addressed in the future.

If you choose to do the initial table synchronization, only data that
satisfies the row filters is copied to the subscriber. If the subscription
has several publications in which a table has been published with
different WHERE clauses, rows that satisfy ANY of the expressions will be
copied. If a subscriber is a pre-15 version, the initial table
synchronization won't use row filters even if they are defined in the
publisher.

The row filters are applied before publishing the changes. If the
subscription has several publications in which the same table has been
published with different filters (for the same publish operation), those
expressions get OR'ed together so that rows satisfying any of the
expressions will be replicated.

This means all the other filters become redundant if (a) one of the
publications has no filter at all, (b) one of the publications was
created using FOR ALL TABLES, (c) one of the publications was created
using FOR ALL TABLES IN SCHEMA and the table belongs to that same schema.

If your publication contains a partitioned table, the publication
parameter publish_via_partition_root determines if it uses the partition's
row filter (if the parameter is false, the default) or the root
partitioned table's row filter.

Psql commands \dRp+ and \d <table-name> will display any row filters.

Author: Hou Zhijie, Euler Taveira, Peter Smith, Ajin Cherian
Reviewed-by: Greg Nancarrow, Haiying Tang, Amit Kapila, Tomas Vondra, Dilip Kumar, Vignesh C, Alvaro Herrera, Andres Freund, Wei Wang
Discussion: https://www.postgresql.org/message-id/flat/CAHE3wggb715X%2BmK_DitLXF25B%3DjE6xyNCH4YOwM860JR7HarGQ%40mail.gmail.com
2022-02-22 08:11:50 +05:30
Tom Lane 88103567cb Disallow setting bogus GUCs within an extension's reserved namespace.
Commit 75d22069e tried to throw a warning for setting a custom GUC whose
prefix belongs to a previously-loaded extension, if there is no such GUC
defined by the extension.  But that caused unstable behavior with
parallel workers, because workers don't necessarily load extensions and
GUCs in the same order their leader did.  To make that work safely, we
have to completely disallow the case.  We now actually remove any such
GUCs at the time of initial extension load, and then throw an error, not
just a warning, if you try to add one later.  While this might create a
compatibility issue for a few people, the improvement in error-detection
capability seems worth it; it's hard to believe that there's any good
use-case for choosing such GUC names.
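
For example (a sketch, using plpgsql as a stand-in for any extension that
reserves its prefix):

LOAD 'plpgsql';
SET plpgsql.variable_conflict = 'error';  -- a GUC the extension defines: OK
SET plpgsql.no_such_setting = 'on';       -- now an error, previously a warning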

This also un-reverts 5609cc01c (Rename EmitWarningsOnPlaceholders() to
MarkGUCPrefixReserved()), since that function's old name is now even
more of a misnomer.

Florin Irion and Tom Lane

Discussion: https://postgr.es/m/1902182.1640711215@sss.pgh.pa.us
2022-02-21 14:10:43 -05:00
Andres Freund 7c38ef2a5d Fix temporary object cleanup failing due to toast access without snapshot.
When cleaning up temporary objects during process exit the cleanup could fail
with:
  FATAL: cannot fetch toast data without an active snapshot

The bug is caused by RemoveTempRelationsCallback() not setting up a
snapshot. If an object with toasted catalog data needs to be cleaned up,
init_toast_snapshot() could fail with the above error.

Most of the time, however, the problem is masked due to cached catalog
snapshots being returned by GetOldestSnapshot(). But dropping an object can
cause catalog invalidations to be emitted. If no further catalog accesses are
necessary between the invalidation processing and the next toast datum
deletion, the bug becomes visible.

It's easy to miss this bug because it typically happens after clients
disconnect and the FATAL error just ends up in the log.

Luckily temporary table cleanup at the next use of the same temporary schema
or during DISCARD ALL does not have the same problem.

Fix the bug by pushing a snapshot in RemoveTempRelationsCallback(). Also add
isolation tests for temporary object cleanup, including objects with toasted
catalog data.

A future HEAD only commit will add an assertion trying to make this more
visible.

Reported-By: Miles Delahunty
Author: Andres Freund
Discussion: https://postgr.es/m/CAOFAq3BU5Mf2TTvu8D9n_ZOoFAeQswuzk7yziAb7xuw_qyw5gw@mail.gmail.com
Backpatch: 10-
2022-02-21 08:57:34 -08:00
Andrew Dunstan 1c6d462939 Remove most msys special processing in TAP tests
Following migration of Windows buildfarm members running TAP tests to
use of ucrt64 perl for those tests, special processing for msys perl is
no longer necessary and so is removed.

Backpatch to release 10

Discussion: https://postgr.es/m/c65a8781-77ac-ea95-d185-6db291e1baeb@dunslane.net
2022-02-20 11:51:45 -05:00
Andrew Dunstan 95d981338b Remove PostgreSQL::Test::Utils::perl2host completely
Commit f1ac4a74de disabled this processing, and as nothing has broken (as
expected), we now remove the routine and adjust all the call sites.

Backpatch to release 10

Discussion: https://postgr.es/m/0ba775a2-8aa0-0d56-d780-69427cf6f33d@dunslane.net
Discussion: https://postgr.es/m/20220125023609.5ohu3nslxgoygihl@alap3.anarazel.de
2022-02-20 11:51:45 -05:00
Andrew Dunstan f1ac4a74de Disable perl2host() processing in TAP tests
This is a preliminary step towards removing it altogether, but this lets
us double check that nothing breaks in the buildfarm before we do.

Discussion: https://postgr.es/m/0ba775a2-8aa0-0d56-d780-69427cf6f33d@dunslane.net
2022-02-17 09:59:59 -05:00
Peter Geoghegan 8f388f6f55 Increase hash_mem_multiplier default to 2.0.
Double the default setting for hash_mem_multiplier, from 1.0 to 2.0.
This setting makes hash-based executor nodes use twice the usual
work_mem limit.
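
Roughly, the memory budget of a hash-based node is now
work_mem * hash_mem_multiplier; for example (a sketch):

SET work_mem = '64MB';
SET hash_mem_multiplier = 2.0;  -- the new default: hash nodes may use ~128MB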

The PostgreSQL 15 release notes should have a compatibility note about
this change.

Author: Peter Geoghegan <pg@bowt.ie>
Discussion: https://postgr.es/m/CAH2-Wzndc_ROk6CY-bC6p9O53q974Y0Ey4WX8jcPbuTZYM4Q3A@mail.gmail.com
2022-02-16 18:41:52 -08:00
Peter Eisentraut 2549f0661b Reject trailing junk after numeric literals
After this, the PostgreSQL lexers no longer accept numeric literals
with trailing non-digits, such as 123abc, which would be scanned as
two tokens: 123 and abc.  This is undocumented and surprising, and it
might also interfere with some extended numeric literal syntax being
contemplated for the future.
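
For instance (a small sketch of the behavior change):

SELECT 123abc;  -- previously "SELECT 123 abc" (123 aliased as abc); now a syntax error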

Reviewed-by: John Naylor <john.naylor@enterprisedb.com>
Discussion: https://www.postgresql.org/message-id/flat/b239564c-cad0-b23e-c57e-166d883cb97d@enterprisedb.com
2022-02-16 10:37:31 +01:00
Michael Paquier 7265dbffad Add TAP test to automate the equivalent of check_guc, take two
src/backend/utils/misc/check_guc is a script that cross-checks the
consistency of the GUCs with postgresql.conf.sample, making sure that
its format is in line with what guc.c has.  It has never been run
automatically, and has rotted over the years, creating a lot of false
positives as per a report from Justin Pryzby.

d10e41d introduced a SQL function to publish the most relevant flags
associated with a GUC, with tests added in the main regression test suite
to make sure that we avoid most of the inconsistencies in the GUC
settings, based on recent reports, but there was nothing able to
cross-check postgresql.conf.sample with the contents of guc.c.

This commit adds a TAP test that covers the remaining gap.  It emulates
the most relevant checks that check_guc did, so that any format mistakes
in postgresql.conf.sample are detected at development stage, with the
following checks:
- Check that parameters marked as NOT_IN_SAMPLE are not in the sample
file.
- Check that there are no dead entries in postgresql.conf.sample for
parameters not marked as NOT_IN_SAMPLE.
- Check that no parameters are missing from the sample file if listed in
guc.c without NOT_IN_SAMPLE.

The idea of building a list of the GUCs by parsing the sample file comes
from Justin, and he wrote the regex used in the patch to find all the
GUCs (the same formatting rule has basically applied for the last 20 years
or so).  In order to test this patch, I have played with manual
modifications of postgresql.conf.sample and guc.c, making sure that we
detect problems with the GUC rules and the sample file format.

The test is located in src/test/modules/test_misc, which is the best
location I could think of for such sanity checks, rather than the
main regression test suite (src/test/regress) to avoid a new type of
dependency with the source tree.

The first attempt of this patch was b0a55f4, where the location of
postgresql.conf.sample was retrieved using pg_config --sharedir.  This
has proven to be an issue for distributions (like Debian) that patch
pg_config to enforce the installation paths at some chosen location,
which may not exist when the test is run, hence causing a failure.
Instead of that, as per a suggestion from Andres Freund, rely on the
fact that the test is always executed from its directory in the source
tree and use a relative path to find the sample file.  This works for
the CI, VPATH builds and on Windows, and tests like the recovery one
added in f47ed79 rely on that already.

Reviewed-by: Justin Pryzby
Discussion: https://postgr.es/m/Yf9YGSwPiMu0c7fP@paquier.xyz
2022-02-16 10:25:12 +09:00
Heikki Linnakangas 853c6400bf Fix race condition in 028_pitr_timelines.pl test, add note to docs.
The 028_pitr_timelines.pl test would sometimes hang, waiting for a WAL
segment that was just filled up to be archived. It was because the
test used 'pg_stat_archiver.last_archived_wal' to check if a file was
archived, but the order in which WAL files are archived when a standby is
promoted is not fully deterministic, and 'last_archived_wal' tracks
the last segment that was archived, not the highest-numbered WAL
segment. Because of that, if the archiver archived segment 3, and then
2, 'last_archived_wal' says 2, and the test query would think that 3
has not been archived yet.

Normally, WAL files are marked ready for archival in order, and the
archiver process will process them in order, so that issue doesn't
arise.  We have used the same query on 'last_archived_wal' in a few
other tests with no problem. But when a standby is promoted, things
are a bit chaotic. After promotion, the server will try to archive all
the WAL segments from the old timeline that are in pg_wal, as well as
the history file and any new WAL segments on the new timeline. The
end-of-recovery checkpoint will create the .ready files for all the
WAL files on the old timeline, but at the same time, the new timeline
is opened up for business. A file from the new timeline can therefore
be archived before the files from the old timeline have been marked as
ready for archival.

It turns out that we don't really need to wait for the archival in
this particular test, because the standby server is about to be
stopped, and stopping a server will wait for the end-of-recovery
checkpoint and all WAL archivals to finish, anyway. So we can just
remove it from the test.

Add a note to the docs on 'pg_stat_archiver' view that files can be
archived out of order.

Reviewed-by: Tom Lane
Discussion: https://www.postgresql.org/message-id/3186114.1644960507@sss.pgh.pa.us
2022-02-16 01:37:48 +02:00
Tom Lane 2523928b28 Reject change of output-column collation in CREATE OR REPLACE VIEW.
checkViewTupleDesc() didn't get the memo that it should verify
same attcollation along with same type/typmod.  (A quick scan
did not find other similar oversights.)
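
A sketch of the now-rejected case (view and column names are made up):

CREATE VIEW v AS SELECT 'x'::text COLLATE "C" AS c;
CREATE OR REPLACE VIEW v AS SELECT 'x'::text COLLATE "POSIX" AS c;  -- now rejected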

Per bug #17404 from Pierre-Aurélien Georges.  On another day
I might've back-patched this, but today I'm feeling paranoid
about unnecessary behavioral changes in back branches.

Discussion: https://postgr.es/m/17404-8a4a270ef30a6709@postgresql.org
2022-02-15 12:57:44 -05:00
Daniel Gustafsson 4d373e0528 Ensure that STDERR is empty in connect_ok tests
Connections performed via connect_ok() in TAP tests should not write
anything to STDERR.

Author: Jacob Champion <pchampion@vmware.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/9D4FFB61-392B-4A2C-B7E4-911797B4AC14@yesql.se
Discussion: https://postgr.es/m/ec146256e31afa0542f9fa970ec258c5f1a5f98.camel@vmware.com
2022-02-15 11:35:17 +01:00
Heikki Linnakangas 3279cef072 Add more logging to new 028_pitr_timelines.pl test.
The test has failed a couple of times on buildfarm member 'hoverfly'. It
gets stuck waiting for the standby to archive WAL segment
000000020000000000000003. I don't understand why, but with DEBUG1, we will
get messages
in the log whenever a segment is archived, which hopefully will give a
clue the next time it happens.
2022-02-15 11:55:52 +02:00
Peter Eisentraut 13d129333e Add test case for trailing junk after numeric literals
PostgreSQL currently accepts numeric literals with trailing
non-digits, such as 123abc, where the abc is treated as the next token.
This may be a bit surprising.  This commit adds test cases for this;
subsequent commits intend to change this behavior.

Reviewed-by: John Naylor <john.naylor@enterprisedb.com>
Discussion: https://www.postgresql.org/message-id/flat/b239564c-cad0-b23e-c57e-166d883cb97d@enterprisedb.com
2022-02-15 07:58:49 +01:00
Heikki Linnakangas 50e5bc582a Add test case for an archive recovery corner case.
While I was working on a patch to refactor things around xlog.c, I mixed
up EndOfLogTLI and replayTLI at the end of recovery. As a result, if you
recovered to a point with a lower-numbered timeline in a WAL segment
that has a higher TLI in the filename, the end-of-recovery WAL record
was created with invalid PrevTimeLineId. I noticed that while
self-reviewing, but no tests failed. So add a test to cover that corner
case.

Thanks to Amul Sul who also submitted a test case for the same corner
case, although this patch is different from that.

Reviewed-by: Amul Sul, Michael Paquier
Discussion: https://www.postgresql.org/message-id/52bc9ccd-8591-431b-0086-15d9acf25a3f@iki.fi
Discussion: https://www.postgresql.org/message-id/CAAJ_b94Vjt5cXGza_1MkjLQWciNdEemsmiWuQj0d%3DM7JfjAa1g%40mail.gmail.com
2022-02-14 11:33:57 +02:00
Peter Eisentraut 37851a8b83 Database-level collation version tracking
This adds to database objects the same version tracking that collation
objects have.  There is a new pg_database column datcollversion that
stores the version, a new function
pg_database_collation_actual_version() to get the version from the
operating system, and a new subcommand ALTER DATABASE ... REFRESH
COLLATION VERSION.
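
Illustrative usage (the database name is a placeholder):

SELECT datcollversion FROM pg_database WHERE datname = current_database();
SELECT pg_database_collation_actual_version(
    (SELECT oid FROM pg_database WHERE datname = current_database()));
ALTER DATABASE mydb REFRESH COLLATION VERSION;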

This was not originally added together with pg_collation.collversion,
since originally version tracking was only supported for ICU, and ICU
at the database level is not currently supported.  But we now have
version tracking for glibc (since PG13), FreeBSD (since PG14), and
Windows (since PG13), so this is useful to have now.

Reviewed-by: Julien Rouhaud <rjuju123@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/f0ff3190-29a3-5b39-a179-fa32eee57db6%40enterprisedb.com
2022-02-14 08:27:26 +01:00
Peter Eisentraut 9898c5e03c Improve correlation names in sanity tests
Some of the queries in the "sanity" tests in the regression test suite
(opr_sanity, type_sanity) are very confusing.  One main stumbling
block is that for some probably ancient reason many of the older
queries are written with correlation names p1, p2, etc. independent of
the name of the catalog. This one is a good example:

SELECT p1.oid, p1.oprname, p2.oid, p2.proname
FROM pg_operator AS p1, pg_proc AS p2          <-- HERE
WHERE p1.oprcode = p2.oid AND
    p1.oprkind = 'l' AND
    (p2.pronargs != 1
     OR NOT binary_coercible(p2.prorettype, p1.oprresult)
     OR NOT binary_coercible(p1.oprright, p2.proargtypes[0])
     OR p1.oprleft != 0);

This is better written as

SELECT o1.oid, o1.oprname, p1.oid, p1.proname
FROM pg_operator AS o1, pg_proc AS p1
WHERE o1.oprcode = p1.oid AND
    o1.oprkind = 'l' AND
    (p1.pronargs != 1
     OR NOT binary_coercible(p1.prorettype, o1.oprresult)
     OR NOT binary_coercible(o1.oprright, p1.proargtypes[0])
     OR o1.oprleft != 0);

This patch cleans up all the queries in this manner.

(As in the above case, I kept the digits like o1 and p1 even in cases
where only one of each letter is used in a query.  This is mainly to
keep the style consistent.)

Discussion: https://www.postgresql.org/message-id/flat/c538308b-319c-8784-e250-1284d12d5411%40enterprisedb.com
2022-02-14 07:11:51 +01:00
Alexander Korotkov 3f74daa8df Fix memory leak in IndexScan node with reordering
Fix ExecReScanIndexScan() to free the referenced tuples while emptying the
priority queue.  Backpatch to all supported versions.

Discussion: https://postgr.es/m/CAHqSB9gECMENBQmpbv5rvmT3HTaORmMK3Ukg73DsX5H7EJV7jw%40mail.gmail.com
Author: Aliaksandr Kalenik
Reviewed-by: Tom Lane, Alexander Korotkov
Backpatch-through: 10
2022-02-14 04:17:04 +03:00
Thomas Munro 7e6124ca7d Remove REGRESS_OUTPUTDIR environment variable.
Andres Freund points out that the tmp_check path is already available as
perl variable PostgreSQL::Test::Utils::tmp_check, so we can drop the new
environment variable introduced by commit f47ed79cc.

Discussion: https://postgr.es/m/20220213052955.dh7lheehit7bsemf%40alap3.anarazel.de
2022-02-14 12:52:57 +13:00
Michael Paquier 52377bb81a Revert "Add TAP test to automate the equivalent of check_guc"
This reverts commit b0a55f4, to remove for now the TAP test that did the
equivalent of check_guc.  The test has been using pg_config --sharedir
to find the location of postgresql.conf.sample.  While the buildfarm and
normal build environments rather liked that, this proves to be an issue
for Debian where pg_config is patched to not be relocatable, causing the
test to fail.

Rather than relying on pg_config, we'd better find the sample file based
on its location from the source directory.  However, this is also an
issue as a TAP test only offers the build directory as of TESTDIR in the
environment context, so this would fail with VPATH builds.  Perhaps the
source path could be provided additionally when running the TAP tests.
Or perhaps we may be able to get away with just switching to a SQL
approach, using PG_ABS_SRCDIR, but this is going to require some extra
loops to get the sample file from the correct path in src/backend/.  In
any case, this needs more thoughts, so just revert the test case until
something better is done about this relocation problem.

Reported-by: Christoph Berg
Discussion: https://postgr.es/m/YgYw25OXV5men8Fj@msg.df7cb.de
2022-02-12 12:53:59 +09:00
Peter Geoghegan 44fa84881f Simplify lazy_scan_heap's handling of scanned pages.
Redefine a scanned page as any heap page that actually gets pinned by
VACUUM's first pass over the heap, regardless of whether or not the page
was cleanup locked.  Although it's fundamentally impossible to prune a
heap page without a cleanup lock (since we cannot safely defragment the
page), we can do just about everything else.  The only notable further
exception is freezing tuples, though even that is arguably a consequence
of not being able to prune (not a separate issue).

VACUUM now does as much of the same processing as possible for pages
that could not be cleanup locked.  Any failure to do specific required
processing is treated as a special case exception, which will be rare in
practice.  We now collect any preexisting LP_DEAD items (left behind by
earlier opportunistic pruning) in the dead_items array for these heap
pages, and count their tuples in the usual way.  Steps used to decide if
we'll attempt relation truncation are performed in the usual way for
no-cleanup-lock scanned pages, too.

Although eliminating these special cases is intrinsically useful, it's
even more useful as an enabler of further simplifications.  The only
essential difference between aggressive and non-aggressive is that only
aggressive is _guaranteed_ to be able to advance relfrozenxid up to
FreezeLimit.  Advancing relfrozenxid is always useful, but before now
non-aggressive VACUUMs threw away the opportunity to do so whenever a
cleanup lock could not be acquired on any page, no matter what the
details were.  This was very pessimistic.

It isn't actually necessary to "behave aggressively" to maintain the
ability to advance relfrozenxid when a cleanup lock isn't immediately
available (most of the time).  The non-aggressive case will now make
sure that it isn't safe to advance relfrozenxid (without waiting) using
only a share lock.  It will usually notice that there are no tuples that
need to be frozen anyway, just like in the aggressive case -- and so it
no longer wastes an opportunity to advance relfrozenxid over nothing.
(The non-aggressive case still won't wait for a cleanup lock when there
really are tuples on the page that need to be frozen, since that really
would amount to "behaving aggressively".)

VACUUM currently has a tendency to set heap pages to all-visible in the
visibility map before it freezes all of the tuples on the page.  Only a
subsequent aggressive VACUUM will visit these pages to freeze their
tuples, usually only when the tuple XIDs are much older than the
vacuum_freeze_min_age GUC (FreezeLimit cutoff) is supposed to allow.
And so non-aggressive VACUUMs are still far less likely to be able to
advance relfrozenxid in practice, even with the enhancements from this
commit.  This remaining issue will be addressed by future work that
overhauls the criteria for freezing tuples.  Once that's in place,
almost every VACUUM operation will be able to advance relfrozenxid in
practice.

Author: Peter Geoghegan <pg@bowt.ie>
Reviewed-By: Andres Freund <andres@anarazel.de>
Reviewed-By: Masahiko Sawada <sawada.mshk@gmail.com>
Discussion: https://postgr.es/m/CAH2-Wznp=c=Opj8Z7RMR3G=ec3_JfGYMN_YvmCEjoPCHzWbx0g@mail.gmail.com
2022-02-11 14:32:17 -08:00
Tom Lane e5691cc917 Don't use_physical_tlist for an IOS with non-returnable columns.
createplan.c tries to save a runtime projection step by specifying
a scan plan node's output as being exactly the table's columns, or
index's columns in the case of an index-only scan, if there is not a
reason to do otherwise.  This logic did not previously pay attention
to whether an index's columns are returnable.  That worked, sort of
accidentally, until commit 9a3ddeb51 taught setrefs.c to reject plans
that try to read a non-returnable column.  I have no desire to loosen
setrefs.c's new check, so instead adjust use_physical_tlist() to not
try to optimize this way when there are non-returnable column(s).

Per report from Ryan Kelly.  Like the previous patch, back-patch
to all supported branches.

Discussion: https://postgr.es/m/CAHUie24ddN+pDNw7fkhNrjrwAX=fXXfGZZEHhRuofV_N_ftaSg@mail.gmail.com
2022-02-11 15:24:02 -05:00
Daniel Gustafsson 549ec201d6 Replace Test::More plans with done_testing
Rather than doing manual bookkeeping to plan the number of tests to run
in each TAP suite, conclude each run with done_testing() summing up the
number of tests that ran. This removes the need for maintaining and
updating the plan count, at the expense of an accurate count of remaining
tests during the test suite runtime.

This patch has been discussed a number of times, often in the context of
other patches which update tests, so a larger number of discussions can
be found in the archives.

Reviewed-by: Julien Rouhaud <rjuju123@gmail.com>
Reviewed-by: Dagfinn Ilmari Mannsåker <ilmari@ilmari.org>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Discussion: https://postgr.es/m/DD399313-3D56-4666-8079-88949DAC870F@yesql.se
2022-02-11 20:54:44 +01:00
Daniel Gustafsson 6d503d2a47 Set SNI ClientHello extension to localhost in tests
The connection strings in the SSL client tests were using the host
set up by Cluster.pm, which is a temporary pathname. When SNI is
enabled, we pass the host to OpenSSL in order to set the server name
indication ClientHello extension via SSL_set_tlsext_host_name.

OpenSSL doesn't validate the hostname apart from checking the max
length, but LibreSSL checks for RFC 5890 conformance, which results
in errors during testing as the pathname from Cluster.pm is not a
valid hostname.

Fix by setting the host explicitly to localhost, as that's closer
to the intent of the test.

Backpatch through 14 where SNI support came in.

Reported-by: Nazir Bilal Yavuz <byavuz81@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/17391-304f81bcf724b58b@postgresql.org
Backpatch-through: 14
2022-02-10 14:23:36 +01:00
Fujii Masao 400fc6b648 Add min() and max() aggregates for xid8.
Bump catalog version.
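
A quick illustration (the values are arbitrary):

SELECT min(v), max(v)
FROM (VALUES ('100'::xid8), ('200'::xid8)) AS t(v);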

Author: Ken Kato
Reviewed-by: Kyotaro Horiguchi, Fujii Masao
Discussion: https://postgr.es/m/47d77b18c44f87f8222c4c7a3e2dee6b@oss.nttdata.com
2022-02-10 12:33:41 +09:00