Commit Graph

22159 Commits

Tom Lane db2760a841 Disallow creating an ICU collation if the DB encoding won't support it.
Previously this was allowed, but the collation effectively vanished
into the ether because of the way lookup_collation() works: you could
not use the collation, nor even drop it.  Seems better to give an
error up front than to leave the user wondering why it doesn't work.
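
For example (a hypothetical session; the exact error text may differ):

    -- in a database created with ENCODING 'SQL_ASCII'
    CREATE COLLATION mycoll (provider = icu, locale = 'de-DE');
    -- now raises an error up front, instead of creating a collation
    -- that can neither be used nor dropped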

(Because this test is in DefineCollation not CreateCollation, it does
not prevent pg_import_system_collations from creating ICU collations,
regardless of the initially-chosen encoding.)

Per bug #17170 from Andrew Bille.  Back-patch to v10 where ICU support
was added.

Discussion: https://postgr.es/m/17170-95845cf3f0a9c36d@postgresql.org
2021-09-03 16:39:03 -04:00
Tom Lane c95ede41b8 In count_usable_fds(), duplicate stderr not stdin.
We had a complaint that the postmaster fails to start if the invoking
program closes stdin.  That happens because count_usable_fds expects
to be able to dup(0), and if it can't, we conclude there are no free
FDs and go belly-up.  So far as I can find, though, there is no other
place in the server that touches stdin, and it's not unreasonable to
expect that a daemon wouldn't use that file.

As a simple improvement, let's dup FD 2 (stderr) instead.  Unlike stdin,
it *is* reasonable for us to expect that stderr be open; even if we are
configured not to touch it, common libraries such as libc might try to
write error messages there.

Per gripe from Mario Emmenlauer.  Given the lack of previous complaints,
I'm not excited about pushing this into stable branches, but it seems
OK to squeeze it into v14.

Discussion: https://postgr.es/m/48bafc63-c30f-3962-2ded-f2e985d93e86@emmenlauer.de
2021-09-02 18:53:10 -04:00
Tom Lane ce773f230d Fix float4/float8 hash functions to produce uniform results for NaNs.
The IEEE 754 standard allows a wide variety of bit patterns for NaNs,
of which at least two ("NaN" and "-NaN") are pretty easy to produce
from SQL on most machines.  This is problematic because our btree
comparison functions deem all NaNs to be equal, but our float hash
functions know nothing about NaNs and will happily produce varying
hash codes for them.  That causes unexpected results from queries
that hash a column containing different NaN values.  It could also
produce unexpected lookup failures when using a hash index on a
float column, i.e. "WHERE x = 'NaN'" will not find all the rows
it should.
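
A quick way to see the problem from SQL (a hypothetical check;
hashfloat8() is the hash support function for float8):

    SELECT 'NaN'::float8 = '-NaN'::float8;   -- true, per btree semantics
    SELECT hashfloat8('NaN'::float8) = hashfloat8('-NaN'::float8);
    -- true with this fix; previously the hash codes could differ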

To fix, special-case NaN in the float hash functions, not too much
unlike the existing special case that forces zero and minus zero
to hash the same.  I arranged for the most vanilla sort of NaN
(that coming from the C99 NAN constant) to still have the same
hash code as before, to reduce the risk to existing hash indexes.

I dithered about whether to back-patch this into stable branches,
but ultimately decided to do so.  It's a clear improvement for
queries that hash internally.  If there is anybody who has -NaN
in a hash index, they'd be well advised to re-index after applying
this patch ... but the misbehavior if they don't will not be much
worse than the misbehavior they had before.

Per bug #17172 from Ma Liangzhu.

Discussion: https://postgr.es/m/17172-7505bea9e04e230f@postgresql.org
2021-09-02 17:24:41 -04:00
Fujii Masao e04267844a Enhance pg_stat_reset_single_table_counters function.
This commit allows pg_stat_reset_single_table_counters() to reset to
zero the statistics for a single relation that is shared across all
databases in the cluster.
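
A minimal usage sketch (hypothetical; pg_database is one such shared
relation):

    SELECT pg_stat_reset_single_table_counters('pg_database'::regclass);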

Bump catalog version.

Author: B Sadhu Prasad Patro
Reviewed-by: Mahendra Singh Thalor, Himanshu Upadhyaya, Dilip Kumar, Fujii Masao
Discussion: https://postgr.es/m/CAFF0-CGy7EHeF=AqqkGMF85cySPQBgDcvNk73G2O0vL94O5U5A@mail.gmail.com
2021-09-02 14:01:06 +09:00
Amit Kapila 31c389d8de Optimize fileset usage in apply worker.
Use one fileset for the entire worker lifetime instead of using
separate filesets for each streaming transaction. Now, the
changes/subxacts files for every streaming transaction will be
created under the same fileset and the files will be deleted
after the transaction is completed.

This patch extends the BufFileOpenFileSet and BufFileDeleteFileSet
APIs to allow users to specify whether to give an error on missing
files.

Author: Dilip Kumar, based on suggestion by Thomas Munro
Reviewed-by: Hou Zhijie, Masahiko Sawada, Amit Kapila
Discussion: https://postgr.es/m/E1mCC6U-0004Ik-Fs@gemulon.postgresql.org
2021-09-02 08:13:46 +05:30
Tomas Vondra 537ca68dbb Identify simple column references in extended statistics
Until now, when defining extended statistics, everything except a plain
column reference was treated as a complex expression. So for example
"a" was a column reference, but "(a)" would be an expression. In most
cases this does not matter much, but there were a couple of strange
consequences.
For example

    CREATE STATISTICS s ON a FROM t;

would fail, because extended stats require at least two columns. But

    CREATE STATISTICS s ON (a) FROM t;

would succeed, because that requirement does not apply to expressions.
Moreover, that statistics object is useless - the optimizer will always
use the regular statistics collected for attribute "a".

So do a bit more work to identify those expressions referencing a single
column, and translate them to a simple column reference. Backpatch to
14, where support for extended statistics on expressions was introduced.
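
With this change, the parenthesized form behaves like the plain column
reference (hypothetical session):

    CREATE STATISTICS s ON (a) FROM t;
    -- now fails like "ON a" does, since extended statistics
    -- require at least two columns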

Reported-by: Justin Pryzby
Backpatch-through: 14
Discussion: https://postgr.es/m/20210816013255.GS10479%40telsasoft.com
2021-09-01 17:41:56 +02:00
Peter Eisentraut 590ecd9823 Fix incorrect format placeholders 2021-09-01 10:49:13 +02:00
Peter Geoghegan b175b9cde7 VACUUM VERBOSE: Don't report "pages removed".
It doesn't make any sense to report this information, since VACUUM
VERBOSE reports on heap relation truncation directly.  This was an
oversight in commit 7ab96cf6, which made VACUUM VERBOSE output a little
more consistent with nearby autovacuum-specific log output.  Adjust
comments that describe how this is supposed to work in passing.

Also bring truncation-related VACUUM VERBOSE output in line with the
convention established for VACUUM VERBOSE output by commit f4f4a649.

Author: Peter Geoghegan <pg@bowt.ie>
Backpatch: 14-, where VACUUM VERBOSE's output changed.
2021-08-31 20:37:18 -07:00
Michael Paquier c4f7a6b87f Refactor one conversion of SQLSTATE to string in elog.c
unpack_sql_state() has been introduced in d46bc44 to refactor the
unpacking of a SQLSTATE into a string, but it forgot one code path when
sending error reports to clients that could make use of it.  This
changes the code to also use unpack_sql_state() there, simplifying the
code a bit.

Author: Peter Smith
Discussion: https://postgr.es/m/CAHut+PuYituuD1-VVZUNcmCQuc3ZzZMPoO57POgm8tnXOkwJAA@mail.gmail.com
2021-09-01 11:48:08 +09:00
Peter Geoghegan 0f6aa893cb Remove obsolete nbtree relation extension comment.
Commit 0d1fe9f7 improved the approach that vacuumlazy.c takes when it
encounters an empty heap page.  It no longer acquires the relation
extension lock.
2021-08-31 16:55:39 -07:00
Peter Geoghegan 6320806ac3 vacuumlazy.c: Correct prune state comment.
Oversight in commit 7ab96cf6b3.
2021-08-31 16:35:01 -07:00
Tomas Vondra 13380e1476 Don't print extra parens around expressions in extended stats
The code printing expressions for extended statistics doubled the
parens, producing results like ((a+1)), which is unnecessary and not
consistent with how we print expressions elsewhere.

Fixed by tweaking the code to produce just a single set of parens.
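
For instance (hypothetical; output abbreviated):

    CREATE STATISTICS s ON (a + 1), b FROM t;
    SELECT pg_get_statisticsobjdef(oid)
      FROM pg_statistic_ext WHERE stxname = 's';
    -- previously printed ... ON ((a + 1)), b FROM t
    -- now prints        ... ON (a + 1), b FROM t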

Reported by Mark Dilger, fix by me. Backpatch to 14, where support for
extended statistics on expressions was added.

Reported-by: Mark Dilger
Discussion: https://postgr.es/m/20210122040101.GF27167%40telsasoft.com
2021-09-01 00:43:22 +02:00
Peter Geoghegan 47029f775a Remove unneeded old_rel_pages VACUUM state field.
The field hasn't been used since commit 3d351d91, which redefined
pg_class.reltuples to be -1 before the first VACUUM or ANALYZE.

Also rename a local variable of the same name ("old_rel_pages"). This is
used by relation truncation to represent the original relation size at
the start of the ongoing VACUUM operation.  Rename it to orig_rel_pages,
since that's a lot clearer.  (This name matches similar nearby code.)
2021-08-31 14:59:52 -07:00
Tomas Vondra 5be8ce82e8 Fix lookup error in extended stats ownership check
When an ownership check on extended statistics object failed, the code
was calling aclcheck_error_type to report the failure, which is clearly
wrong, resulting in cache lookup errors. Fix by calling aclcheck_error.
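
A hypothetical reproduction (error texts approximate):

    -- as a role that does not own statistics object s
    ALTER STATISTICS s RENAME TO s2;
    -- previously: ERROR:  cache lookup failed ...
    -- now:        ERROR:  must be owner of statistics object s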

This issue exists since the introduction of extended statistics, so
backpatch all the way back to PostgreSQL 10. It went unnoticed because
there were no tests triggering the error, so add one.

Reported-by: Mark Dilger
Backpatch-through: 10, where extended stats were introduced
Discussion: https://postgr.es/m/1F238937-7CC2-4703-A1B1-6DC225B8978A%40enterprisedb.com
2021-08-31 18:33:38 +02:00
Tom Lane 589be6f6c7 Fix missed lock acquisition while inlining new-style SQL functions.
When starting to use a query parsetree loaded from the catalogs,
we must begin by applying AcquireRewriteLocks(), to obtain the same
relation locks that the parser would have gotten if the query were
entered interactively, and to do some other cleanup such as dealing
with later-dropped columns.  New-style SQL functions are just as
subject to this rule as other stored parsetrees; however, of the
places dealing with such functions, only init_sql_fcache had gotten
the memo.  In particular, if we successfully inlined a new-style
set-returning SQL function that contained any relation references,
we'd either get an assertion failure or attempt to use those
relation(s) sans locks.
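
The affected case involves functions with SQL-standard bodies, along
these lines (hypothetical):

    CREATE FUNCTION get_rows() RETURNS SETOF t
    LANGUAGE sql
    BEGIN ATOMIC
        SELECT * FROM t;
    END;
    SELECT * FROM get_rows();  -- inlining now acquires the needed locks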

I also added AcquireRewriteLocks calls to fmgr_sql_validator and
print_function_sqlbody.  Desultory experiments didn't demonstrate any
failures in those, but I suspect that I just didn't try hard enough.
Certainly we don't expect nearby code paths to operate without locks.

On the same logic of it-ought-to-have-the-same-effects-as-the-old-code,
call pg_rewrite_query() in fmgr_sql_validator, too.  It's possible
that neither code path there needs to bother with rewriting, but
doing the analysis to prove that is beyond my goals for today.

Per bug #17161 from Alexander Lakhin.

Discussion: https://postgr.es/m/17161-048a1cdff8422800@postgresql.org
2021-08-31 12:02:36 -04:00
Alvaro Herrera 961dd75657 Report tuple address in data-corruption error message
Most data-corruption reports mention the location of the problem, but
this one failed to.  Add it.

Backpatch all the way back.  In 12 and older, also assign the
ERRCODE_DATA_CORRUPTED error code as was done in commit fd6ec93bf8 for
13 and later.

Discussion: https://postgr.es/m/202108191637.oqyzrdtnheir@alvherre.pgsql
2021-08-30 16:29:12 -04:00
Amit Kapila bad6cef32c Fix incorrect error code in StartupReplicationOrigin().
ERRCODE_CONFIGURATION_LIMIT_EXCEEDED was used for checksum failure, use
ERRCODE_DATA_CORRUPTED instead.

Reported-by: Tatsuhito Kasahara
Author: Tatsuhito Kasahara
Backpatch-through: 9.6, where it was introduced
Discussion: https://postgr.es/m/CAP0=ZVLHtYffs8SOWcFJWrBGoRzT9QQbk+_aP+E5AHLNXiOorA@mail.gmail.com
2021-08-30 09:14:31 +05:30
Amit Kapila dcac5e7ac1 Refactor sharedfileset.c to separate out fileset implementation.
Move fileset related implementation out of sharedfileset.c to allow its
usage by backends that don't want to share filesets among different
processes. After this split, fileset infrastructure is used by both
sharedfileset.c and worker.c for the named temporary files that survive
across transactions.

Author: Dilip Kumar, based on suggestion by Andres Freund
Reviewed-by: Hou Zhijie, Masahiko Sawada, Amit Kapila
Discussion: https://postgr.es/m/E1mCC6U-0004Ik-Fs@gemulon.postgresql.org
2021-08-30 08:48:15 +05:30
Tom Lane 10d58228bb Doc: add a little about LACON execution to src/backend/regex/README.
I wrote this while thinking about a possible optimization, but it's
a useful description of the existing code regardless of whether the
optimization ever happens.  So push it separately.
2021-08-29 12:48:49 -04:00
Alvaro Herrera 375aed36ad Keep stats up to date for partitioned tables
In the long-going saga for analyze on partitioned tables, one thing I
missed while reverting 0827e8af70 is the maintenance of analyze count
and last analyze time for partitioned tables.  This is a mostly trivial
change that enables users to assess the need for invoking manual ANALYZE on
partitioned tables.
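
For example, one can now check (hypothetical table name):

    SELECT relname, last_analyze, analyze_count
      FROM pg_stat_user_tables
     WHERE relname = 'parted_tbl';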

This patch, posted by Justin and modified a bit by me (Álvaro), can be
mostly traced back to Hosoya-san, though any problems introduced with
the scissors are mine.

Backpatch to 14, in line with 6f8127b739.

Co-authored-by: Yuzuko Hosoya <yuzukohosoya@gmail.com>
Co-authored-by: Justin Pryzby <pryzby@telsasoft.com>
Co-authored-by: Álvaro Herrera <alvherre@alvh.no-ip.org>
Reported-by: Justin Pryzby <pryzby@telsasoft.com>
Discussion: https://postgr.es/m/20210816222810.GE10479@telsasoft.com
2021-08-28 15:58:23 -04:00
Noah Misch 97ddda8a82 Fix data loss in wal_level=minimal crash recovery of CREATE TABLESPACE.
If the system crashed between CREATE TABLESPACE and the next checkpoint,
the result could be some files in the tablespace unexpectedly containing
no rows.  Affected files would be those for which the system did not
write WAL; see the wal_skip_threshold documentation.  Before v13, a
different set of conditions governed the writing of WAL; see v12's
<sect2 id="populate-pitr">.  (The v12 conditions were broader in some
ways and narrower in others.)  Users may want to audit non-default
tablespaces for unexpected short files.  The bug could have truncated an
index without affecting the associated table, and reindexing the index
would fix that particular problem.

This fixes the bug by making create_tablespace_directories() more like
TablespaceCreateDbspace().  create_tablespace_directories() was
recursively removing tablespace contents, reasoning that WAL redo would
recreate everything removed that way.  That assumption holds for other
wal_level values.  Under wal_level=minimal, the old approach could
delete files for which no other copy existed.  Back-patch to 9.6 (all
supported versions).

Reviewed by Robert Haas and Prabhat Sahu.  Reported by Robert Haas.

Discussion: https://postgr.es/m/CA+TgmoaLO9ncuwvr2nN-J4VEP5XyAcy=zKiHxQzBbFRxxGxm0w@mail.gmail.com
2021-08-27 23:33:23 -07:00
Tom Lane 3778bcb39a Count SP-GiST index scans in pg_stat statistics.
Somehow, spgist overlooked the need to call pgstat_count_index_scan().
Hence, pg_stat_all_indexes.idx_scan and equivalent columns never
became nonzero for an SP-GiST index, although the related per-tuple
counters worked fine.
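
A hypothetical check, assuming an SP-GiST index named my_spgist_idx:

    SELECT idx_scan, idx_tup_fetch
      FROM pg_stat_all_indexes
     WHERE indexrelname = 'my_spgist_idx';
    -- idx_scan now advances for each index scan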

This fix works a bit differently from other index AMs, in that the
counter increment occurs in spgrescan not spggettuple/spggetbitmap.
It looks like this won't make the user-visible semantics noticeably
different, so I won't go to the trouble of introducing an is-this-
the-first-call flag just to make the counter bumps happen in the
same places.

Per bug #17163 from Christian Quest.  Back-patch to all supported
versions.

Discussion: https://postgr.es/m/17163-b8c5cc88322a5e92@postgresql.org
2021-08-27 19:53:05 -04:00
Stephen Frost ce42efaa26 Use maintenance_io_concurrency for ANALYZE prefetch
When prefetching pages for ANALYZE, we should be using
maintenance_io_concurrency (by calling
get_tablespace_maintenance_io_concurrency(), not
get_tablespace_io_concurrency()).
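
So a setting like this now governs ANALYZE prefetching (hypothetical
session):

    SET maintenance_io_concurrency = 32;
    ANALYZE big_table;  -- prefetch distance follows the setting above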

ANALYZE prefetching was introduced in c6fc50c, so back-patch to 14.

Backpatch-through: 14
Reported-By: Egor Rogov
Discussion: https://postgr.es/m/9beada99-34ce-8c95-fadb-451768d08c64%40postgrespro.ru
2021-08-27 19:23:14 -04:00
Peter Geoghegan bda822554b track_io_timing logging: Don't special case 0 ms.
Adjust track_io_timing related logging code added by commit 94d13d474d.
Make it consistent with other nearby autovacuum and autoanalyze logging
code by removing logic that suppressed zero millisecond outputs.

log_autovacuum_min_duration log output now reliably shows "read:" and
"write:" millisecond-based values in its report (when track_io_timing is
enabled).

Author: Peter Geoghegan <pg@bowt.ie>
Reviewed-By: Stephen Frost <sfrost@snowman.net>
Discussion: https://postgr.es/m/CAH2-WznW0FNxSVQMSRazAMYNfZ6DR_gr5WE78hc6E1CBkkJpzw@mail.gmail.com
Backpatch: 14-, where the track_io_timing logging was introduced.
2021-08-27 13:34:00 -07:00
Peter Geoghegan fdfbfa24fa Reorder log_autovacuum_min_duration log output.
This order seems more natural.  It starts with details that are
particular to heap and index data structures, and ends with system-level
costs incurred during the autovacuum worker's VACUUM/ANALYZE operation.

Author: Peter Geoghegan <pg@bowt.ie>
Discussion: https://postgr.es/m/CAH2-WzkzxK6ahA9xxsOftRtBX_R0swuHZsvo4QUbak1Bz7hb7Q@mail.gmail.com
Backpatch: 14-, which enhanced the log output in various ways.
2021-08-27 13:08:41 -07:00
Peter Geoghegan de5dcb0796 vacuumlazy.c: Remove unnecessary parentheses.
This was arguably a minor oversight in commit b4af70cb, which cleaned up
the function signatures of functions that modify IndexBulkDeleteResult
variables.
2021-08-27 09:47:16 -07:00
Tom Lane 8f72becd6b Handle interaction of regexp's makesearch and MATCHALL more honestly.
Second thoughts about commit 824bf7190: we apply makesearch() to
an NFA after having determined whether it is a MATCHALL pattern.
Prepending ".*" doesn't make it non-MATCHALL, but it does change the
maximum possible match length, and makesearch() failed to update that.
This has no ill effects given the stylized usage of search NFAs, but
it seems like it's better to keep the data structure consistent.  In
particular, fixing this allows more honest handling of the MATCHALL
check in matchuntil(): we can now assert that maxmatchall is infinity,
instead of lamely assuming that it should act that way.

In passing, improve the code in dump[c]nfa so that infinite maxmatchall
is printed as "inf" not a magic number.
2021-08-27 12:18:58 -04:00
Amit Kapila abc0910e2e Add logical change details to logical replication worker errcontext.
Previously, on the subscriber, we set the error context callback for the
tuple data conversion failures. This commit replaces the existing error
context callback with a comprehensive one so that it shows not only the
details of data conversion failures but also the details of logical change
being applied by the apply worker or table sync worker. The additional
information displayed will be the command, transaction id, and timestamp.

The error context is added to an error only when applying a change, but
not while doing other work such as receiving data.

This will help users in diagnosing the problems that occur during logical
replication. It also can be used for future work that allows skipping a
particular transaction on the subscriber.

Author: Masahiko Sawada
Reviewed-by: Hou Zhijie, Greg Nancarrow, Haiying Tang, Amit Kapila
Tested-by: Haiying Tang
Discussion: https://postgr.es/m/CAD21AoDeScrsHhLyEPYqN3sydg6PxAPVBboK=30xJfUVihNZDA@mail.gmail.com
2021-08-27 08:30:23 +05:30
Peter Eisentraut 0d906b2c0b Fix handling of partitioned index in RelationGetNumberOfBlocksInFork()
Since a partitioned index doesn't have storage, getting the number of
blocks from it will not give sensible results.  Existing callers
already check that they don't call it that way, so there doesn't
appear to be a live problem.  But for correctness, handle
RELKIND_PARTITIONED_INDEX together with the other non-storage
relkinds.

Reviewed-by: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Alvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://www.postgresql.org/message-id/1d3a5fbe-f48b-8bea-80da-9a5c4244aef9@enterprisedb.com
2021-08-26 08:59:32 +02:00
Tom Lane 373e08a9f7 Remove redundant test.
The condition "context_start < context_end" is strictly weaker
than "context_end - context_start >= 50", so we don't need both.
Oversight in commit ffd3944ab, noted by tanghy.fnst.

In passing, line-wrap a nearby test to make it more readable.

Discussion: https://postgr.es/m/OS0PR01MB61137C4054774F44E3A9DC89FBC69@OS0PR01MB6113.jpnprd01.prod.outlook.com
2021-08-25 11:06:34 -04:00
Robert Haas a780b2fcce Fix broken snapshot handling in parallel workers.
Pengchengliu reported an assertion failure in a parallel worker while
performing a parallel scan using an overflowed snapshot. The proximate
cause is that TransactionXmin was set to an incorrect value.  The
underlying cause is incorrect snapshot handling in parallel.c.

In particular, InitializeParallelDSM() was unconditionally calling
GetTransactionSnapshot(), because I (rhaas) mistakenly thought that
was always retrieving an existing snapshot whereas, at isolation
levels less than REPEATABLE READ, it's actually taking a new one. So
instead do this only at higher isolation levels where there actually
is a single snapshot for the whole transaction.

By itself, this is not a sufficient fix, because we still need to
guarantee that TransactionXmin gets set properly in the workers. The
easiest way to do that seems to be to install the leader's active
snapshot as the transaction snapshot if the leader did not serialize a
transaction snapshot. This doesn't affect the results of future
GetTransactionSnapshot() calls since those have to take a new snapshot
anyway; what we care about is the side effect of setting TransactionXmin.

Report by Pengchengliu. Patch by Greg Nancarrow, except for some comment
text which I supplied.

Discussion: https://postgr.es/m/002f01d748ac$eaa781a0$bff684e0$@tju.edu.cn
2021-08-25 08:32:04 -04:00
Amit Kapila 29b5905470 Fix toast rewrites in logical decoding.
Commit 325f2ec555 introduced pg_class.relwrite to skip operations on
tables created as part of a heap rewrite during DDL. It links such
transient heaps to the original relation OID via this new field in
pg_class but forgot to do anything about toast tables. So, logical
decoding was not able to skip operations on internally created toast
tables. This led to an error when decoding the WAL for the next
operation, which appeared to contain toast data when in fact it had
none.

To fix this, we set pg_class.relwrite for internally created toast
tables as well, which allows skipping operations on them during logical
decoding.
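
A rough reproduction sketch (hypothetical table t with a toastable
column; test_decoding is the in-core sample output plugin):

    SELECT pg_create_logical_replication_slot('s1', 'test_decoding');
    ALTER TABLE t ALTER COLUMN payload TYPE text;  -- rewrites heap and toast
    SELECT data FROM pg_logical_slot_get_changes('s1', NULL, NULL);
    -- previously this could error out on the rewrite's toast traffic;
    -- it now decodes cleanly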

Author: Bertrand Drouvot
Reviewed-by: David Zhang, Amit Kapila
Backpatch-through: 11, where it was introduced
Discussion: https://postgr.es/m/b5146fb1-ad9e-7d6e-f980-98ed68744a7c@amazon.com
2021-08-25 09:53:07 +05:30
Fujii Masao 170aec63cd Avoid using ambiguous word "positive" in error message.
There are two identical error messages about the valid value of the
modulus for hash partitions in the PostgreSQL source code. Commit
0e1275fb07 improved only one of them so that the ambiguous word
"positive" was avoided there, but forgot to improve the other. This
commit improves the other as well, which also reduces the translator
burden.
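
For example (hypothetical; the improved wording is approximate):

    CREATE TABLE p (a int) PARTITION BY HASH (a);
    CREATE TABLE p0 PARTITION OF p
      FOR VALUES WITH (MODULUS 0, REMAINDER 0);
    -- ERROR: modulus for hash partition must be an integer value
    --        greater than zero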

Back-patch to v11 where the error message exists.

Author: Kyotaro Horiguchi
Reviewed-by: Fujii Masao
Discussion: https://postgr.es/m/20210819.170315.1413060634876301811.horikyota.ntt@gmail.com
2021-08-25 11:46:25 +09:00
Fujii Masao 085400fee9 Improve error message about valid value for distance in phrase operator.
The distance in a phrase operator must be an integer value between zero
and MAXENTRYPOS inclusive.  But previously the error message about its
valid value included the information about its upper limit but not its
lower limit (i.e., zero).  This commit improves the error message so
that it also includes the information about its lower limit.
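
For example (hypothetical; MAXENTRYPOS is 16384, and the exact wording
may differ):

    SELECT to_tsquery('fat <20000> rat');
    -- ERROR: distance in phrase operator must be an integer value
    --        between zero and 16384 inclusive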

Back-patch to v9.6 where full-text phrase search was supported.

Author: Kyotaro Horiguchi
Reviewed-by: Fujii Masao
Discussion: https://postgr.es/m/20210819.170315.1413060634876301811.horikyota.ntt@gmail.com
2021-08-25 11:43:56 +09:00
Tom Lane 65dc30ced6 Fix regexp misbehavior with capturing parens inside "{0}".
Regexps like "(.){0}...\1" drew an "invalid backreference number".
That's not unreasonable on its face, since the capture group will
never be matched if it's iterated zero times.  However, other engines
such as Perl's don't complain about this, nor do we throw an error for
related cases such as "(.)|\1", even though that backref can never
succeed either.  Also, if the zero-iterations case happens at runtime
rather than compile time --- say, "(x)*...\1" when there's no "x" to
be found --- that's not an error, we just deem the backref to not
match.  Making this even less defensible, no error was thrown for
nested cases such as "((.)){0}...\2"; and to add insult to injury,
those cases could result in assertion failures instead.  (It seems
that nothing especially bad happened in non-assert builds, though.)

Let's just fix it so that no error is thrown and instead the backref
is deemed to never match, so that compile-time detection of no
iterations behaves the same as run-time detection.
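
For example (hypothetical session):

    SELECT 'xx' ~ '(.){0}\1';
    -- previously: ERROR:  invalid regular expression: invalid
    --             backreference number
    -- now: returns false, since the backref can never match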

Per report from Mark Dilger.  This appears to be an aboriginal error
in Spencer's library, so back-patch to all supported versions.

Pre-v14, it turns out to also be necessary to back-patch one aspect of
commits cb76fbd7e/00116dee5, namely to create capture-node subREs with
the begin/end states of their subexpressions, not the current lp/rp
of the outer parseqatom invocation.  Otherwise delsub complains that
we're trying to disconnect a state from itself.  This is a bit scary
but code examination shows that it's safe: in the pre-v14 code, if we
want to wrap iteration around the subexpression, the first thing we do
is overwrite the atom's begin/end fields with new states.  So the
bogus values didn't survive long enough to be used for anything, except
if no iteration is required, in which case it doesn't matter.

Discussion: https://postgr.es/m/A099E4A8-4377-4C64-A98C-3DEDDC075502@enterprisedb.com
2021-08-24 16:37:26 -04:00
Amit Kapila 1046a69b30 Fix Alter Subscription's Add/Drop Publication behavior.
The current refresh behavior tries to refresh just the added/dropped
publications, but that leads to removing the wrong tables from the
subscription.  We
can't refresh just the dropped publication because it is quite possible
that some of the tables are removed from publication by that time and now
those will remain as part of the subscription. Also, there is a chance
that the tables that were part of the publication being dropped are also
part of another publication, so we can't remove those.

So, we decided that by default, add/drop commands will also act like
REFRESH PUBLICATION which means they will refresh all the publications. We
can keep the old behavior for "add publication" but it is better to be
consistent with "drop publication".
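
So, by default (hypothetical names; the refresh can still be disabled):

    ALTER SUBSCRIPTION mysub DROP PUBLICATION mypub;
    -- now refreshes all remaining publications, as REFRESH PUBLICATION would
    ALTER SUBSCRIPTION mysub DROP PUBLICATION mypub WITH (refresh = false);
    -- skips the refresh entirely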

Author: Hou Zhijie
Reviewed-by: Masahiko Sawada, Amit Kapila
Backpatch-through: 14, where it was introduced
Discussion: https://postgr.es/m/OS0PR01MB5716935D4C2CC85A6143073F94EF9@OS0PR01MB5716.jpnprd01.prod.outlook.com
2021-08-24 08:25:21 +05:30
Tom Lane 9bbf6f7341 Prevent regexp back-refs from sometimes matching when they shouldn't.
The recursion in cdissect() was careless about clearing match data
for capturing parentheses after rejecting a partial match.  This
could allow a later back-reference to succeed when by rights it
should fail for lack of a defined referent.

To fix, think a little more rigorously about what the contract
between different levels of cdissect's recursion needs to be.
With the right spec, we can fix this using fewer rather than more
resets of the match data; the key decision being that a failed
sub-match is now explicitly responsible for clearing any matches
it may have set.

There are enough other cross-checks and optimizations in the code
that it's not especially easy to exhibit this problem; usually, the
match will fail as-expected.  Plus, regexps that are even potentially
vulnerable are most likely user errors, since there's just not much
point in writing a back-ref that doesn't always have a referent.
These facts perhaps explain why the issue hasn't been detected,
even though it's almost certainly a couple of decades old.

Discussion: https://postgr.es/m/151435.1629733387@sss.pgh.pa.us
2021-08-23 17:41:07 -04:00
Alvaro Herrera 515e3d84a0 Avoid creating archive status ".ready" files too early
WAL records may span multiple segments, but XLogWrite() does not
wait for the entire record to be written out to disk before
creating archive status files.  Instead, as soon as the last WAL page of
the segment is written, the archive status file is created, and the
archiver may process it.  If PostgreSQL crashes before it is able to
write and flush the rest of the record (in the next WAL segment), the
wrong version of the first segment file lingers in the archive, which
causes operations such as point-in-time restores to fail.

To fix this, keep track of records that span across segments and ensure
that segments are only marked ready-for-archival once such records have
been completely written to disk.

This has always been wrong, so backpatch all the way back.

Author: Nathan Bossart <bossartn@amazon.com>
Reviewed-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Reviewed-by: Ryo Matsumura <matsumura.ryo@fujitsu.com>
Reviewed-by: Andrey Borodin <x4mmm@yandex-team.ru>
Discussion: https://postgr.es/m/CBDDFA01-6E40-46BB-9F98-9340F4379505@amazon.com
2021-08-23 15:50:35 -04:00
Bruce Momjian f7bda63a48 Improve defaults shown in postgresql.conf.sample and pg_settings
Previously, these showed unlikely default values.  The new default value
128MB (since PG 10) is not always accurate, since initdb tries several
increasing values, but it is likely to be accurate.
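
The default as recorded by the server can be inspected like this
(a hypothetical query; shared_buffers is the setting whose default
initdb probes):

    SELECT name, boot_val, unit FROM pg_settings
     WHERE name = 'shared_buffers';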

Reported-by: Zhangjie <zhangjie2@fujitsu.com>

Discussion: https://postgr.es/m/TYWPR01MB7678772FD8640C404F1DC882F9079@TYWPR01MB7678.jpnprd01.prod.outlook.com

Author: Zhangjie

Backpatch-through: master
2021-08-23 12:33:38 -04:00
Michael Paquier a3fcbcda75 Fix backup manifests to generate correct WAL-Ranges across timelines
In a backup manifest, WAL-Ranges stores the range of WAL that is
required for the backup to be valid.  pg_verifybackup would then
internally use pg_waldump for the checks based on this data.

When the backup started on a timeline later than 1, so that a history
file was consulted when generating the manifest data, the calculation
of the WAL range for the first timeline to check was incorrect.  The
previous logic used as start LSN the start position of the first
timeline, but it needs to use the start LSN of the backup.  This would
cause failures with pg_verifybackup, or any tools making use of the
backup manifests.

This commit adds a test based on a self-promoted node, making it rather
cheap.

Author: Kyotaro Horiguchi
Discussion: https://postgr.es/m/20210818.143031.1867083699202617521.horikyota.ntt@gmail.com
Backpatch-through: 13
2021-08-23 11:09:33 +09:00
David Rowley 22c4e88ebf Allow parallel DISTINCT
We've supported parallel aggregation since e06a38965.  At the time, we
didn't quite get around to also adding parallel DISTINCT. So, let's do
that now.

This is implemented by introducing a two-phase DISTINCT.  Phase 1 is
performed by the parallel workers, where rows are made distinct either
by hashing or by sort/unique.  The results from the parallel workers are
combined and the final distinct phase is performed serially to get rid of
any duplicate rows that appear due to combining rows for each of the
parallel workers.
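
A resulting plan might look like this (hypothetical; sort-based variant
shown):

    SET max_parallel_workers_per_gather = 2;
    EXPLAIN (COSTS OFF) SELECT DISTINCT a FROM big_t;
    --  Unique
    --    ->  Gather Merge
    --          Workers Planned: 2
    --          ->  Unique
    --                ->  Sort
    --                      Sort Key: a
    --                      ->  Parallel Seq Scan on big_t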

Author: David Rowley
Reviewed-by: Zhihong Yu
Discussion: https://postgr.es/m/CAApHDvrjRxVKwQN0he79xS+9wyotFXL=RmoWqGGO2N45Farpgw@mail.gmail.com
2021-08-22 23:31:16 +12:00
Tom Lane 26ae660903 Improve error messages about misuse of SELECT INTO.
Improve two places in plpgsql, and one in spi.c, where an error
message would confusingly tell you that you couldn't use a SELECT
query, when what you had written *was* a SELECT query.  The actual
problem is that you can't use SELECT ... INTO in these contexts,
but the messages failed to make that apparent.  Special-case
SELECT INTO to make these errors more helpful.
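
A hypothetical case along these lines (error texts approximate):

    DO $$
    DECLARE r record; x int;
    BEGIN
      FOR r IN SELECT 1 AS a INTO x LOOP
        NULL;
      END LOOP;
    END $$;
    -- the error now points at the SELECT ... INTO, instead of
    -- claiming that a SELECT query cannot be used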

Also, fix the same spots in plpgsql, as well as several messages
in exec_eval_expr(), to not quote the entire complained-of query or
expression in the primary error message.  That behavior very easily
led to violating our message style guideline about keeping the primary
error message short and single-line.  Also, since the important part
of the message was after the inserted text, it could make the real
problem very hard to see.  We can report the query or expression as
the first line of errcontext instead.

Per complaint from Roger Mason.  Back-patch to v14, since (a) some
of these messages are new in v14 and (b) v14's translatable strings
are still somewhat in flux.  The problem's older than that of
course, but I'm hesitant to change the behavior further back.

Discussion: https://postgr.es/m/1914708.1629474624@sss.pgh.pa.us
2021-08-21 10:22:14 -04:00
Tom Lane facce1da91 Fix performance bug in regexp's citerdissect/creviterdissect.
After detecting a sub-match "dissect" failure (i.e., a backref match
failure) in the i'th sub-match of an iteration node, we should proceed
by adjusting the attempted length of the i'th submatch.  As coded,
though, these functions changed the attempted length of the *last*
sub-match, and only after exhausting all possibilities for that would
they back up to adjust the next-to-last sub-match, and then the
second-from-last, etc; all of which is wasted effort, since only
changing the start or length of the i'th sub-match can possibly make
it succeed.  This oversight creates the possibility for exponentially
bad performance.  Fortunately the problem is masked in most cases by
optimizations or constraints applied elsewhere; which explains why
we'd not noticed it before.  But it is possible to reach the problem
with fairly simple, if contrived, regexps.

Oversight in my commit 173e29aa5.  That's pretty ancient now,
so back-patch to all supported branches.

Discussion: https://postgr.es/m/1808998.1629412269@sss.pgh.pa.us
2021-08-20 14:19:04 -04:00
Tom Lane 8d2d6ec770 Avoid trying to lock OLD/NEW in a rule with FOR UPDATE.
transformLockingClause neglected to exclude the pseudo-RTEs for
OLD/NEW when processing a rule's query.  This led to odd errors
or even crashes later on.  This bug is very ancient, but it's
not terribly surprising that nobody noticed, since the use-case
for SELECT FOR UPDATE in a non-view rule is somewhere between
thin and non-existent.  Still, crashing is not OK.
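
A hypothetical reproduction (a non-view rule whose action uses
FOR UPDATE):

    CREATE TABLE t (a int);
    CREATE RULE r AS ON UPDATE TO t
      DO ALSO SELECT * FROM t FOR UPDATE;
    -- transformLockingClause now skips the OLD/NEW pseudo-RTEs
    -- instead of erroring oddly or crashing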

Per bug #17151 from Zhiyong Wu.  Thanks to Masahiko Sawada
for analysis of the problem.

Discussion: https://postgr.es/m/17151-c03a3e6e4ec9aadb@postgresql.org
2021-08-19 12:12:35 -04:00
Andres Freund bed5eac2d5 Unset MyBEEntry, making elog.c's call to pgstat_get_my_query_id() safe.
Previously, log messages issued late during shutdown could end up using
another backend's PgBackendStatus (multi user) or segfaulting (single
user), because pgstat_get_my_query_id()'s check for !MyBEEntry didn't
filter out use after pgstat_beshutdown_hook().

This became a bug in 4f0b0966c8, but was a bit fishy before.  Given that
there are no known problematic cases before 14, it doesn't seem worth
backpatching further.

Also fixes a wrong filename in a comment, introduced in e1025044.

Reported-By: Andres Freund <andres@anarazel.de>
Reviewed-By: Julien Rouhaud <rjuju123@gmail.com>
Discussion: https://postgr.es/m/Julien Rouhaud <rjuju123@gmail.com>
Backpatch: 14-
2021-08-19 05:07:53 -07:00
Amit Kapila 4cd7a18968 Rename LOGICAL_REP_MSG_STREAM_END to LOGICAL_REP_MSG_STREAM_STOP.
In the code, most places used the term "Stream Stop" for the logical
stream message. This commit improves consistency by renaming LogicalRepMsgType
"LOGICAL_REP_MSG_STREAM_END" to "LOGICAL_REP_MSG_STREAM_STOP".

Author: Masahiko Sawada
Reviewed-by: Hou Zhijie, Amit Kapila
Discussion: https://postgr.es/m/CAD21AoDeScrsHhLyEPYqN3sydg6PxAPVBboK=30xJfUVihNZDA@mail.gmail.com
2021-08-19 09:34:26 +05:30
Michael Paquier 2576dcfb76 Revert refactoring of hex code to src/common/
This is a combined revert of the following commits:
- c3826f8, a refactoring piece that moved the hex decoding code to
src/common/.  This code was cleaned up by aef8948, as it originally
included no overflow checks in the same way as the base64 routines in
src/common/ used by SCRAM, making it unsafe for its purpose.
- aef8948, a more advanced refactoring of the hex encoding/decoding code
to src/common/ that added sanity checks on the result buffer for hex
decoding and encoding.  As reported by Hans Buschmann, those overflow
checks are expensive, and it is possible to see a performance drop in
the decoding/encoding of bytea or LOs the longer they are.  Simple SQLs
working on large bytea values show a clear difference in perf profile.
- ccf4e27, a cleanup made possible by aef8948.

The reverts of all those commits bring the performance of hex decoding
and encoding back to what it was in ~13.  For now and
post-beta3, this is the simplest option.

Reported-by: Hans Buschmann
Discussion: https://postgr.es/m/1629039545467.80333@nidsa.net
Backpatch-through: 14
2021-08-19 09:20:13 +09:00
Tom Lane 2313dda9d4 Fix check_agg_arguments' examination of aggregate FILTER clauses.
Recursion into the FILTER clause was mis-implemented, such that a
relevant Var or Aggref at the very top of the FILTER clause would
be ignored.  (Of course, that'd have to be a plain boolean Var or
boolean-returning aggregate.)  The consequence would be
mis-identification of the correct semantic level of the aggregate,
which could lead to not-per-spec query behavior.  If the FILTER
expression is an aggregate, this could also lead to failure to issue
an expected "aggregate function calls cannot be nested" error, which
would likely result in a core dump later on, since the planner and
executor aren't expecting such cases to appear.

The root cause is that commit b560ec1b0 blindly copied some code
that assumed it's recursing into a List, and thus didn't examine the
top-level node.  To forestall questions about why this call doesn't
look like the others, as well as possible future copy-and-paste
mistakes, let's change all three check_agg_arguments_walker calls in
check_agg_arguments, even though only the one for the filter clause
is really broken.

Per bug #17152 from Zhiyong Wu.  This has been wrong since we
implemented FILTER, so back-patch to all supported versions.
(Testing suggests that pre-v11 branches manage to avoid crashing
in the bad-Aggref case, thanks to "redundant" checks in ExecInitAgg.
But I'm not sure how thorough that protection is, and anyway the
wrong-behavior issue remains, so fix 9.6 and 10 too.)

Discussion: https://postgr.es/m/17152-c7f906cc1a88e61b@postgresql.org
2021-08-18 18:12:51 -04:00
Tom Lane 6b71c925cb Prevent ALTER TYPE/DOMAIN/OPERATOR from changing extension membership.
If recordDependencyOnCurrentExtension is invoked on a pre-existing,
free-standing object during an extension update script, that object
will become owned by the extension.  In our current code this is
possible in three cases:

* Replacing a "shell" type or operator.
* CREATE OR REPLACE overwriting an existing object.
* ALTER TYPE SET, ALTER DOMAIN SET, and ALTER OPERATOR SET.

The first of these cases is intentional behavior, as noted by the
existing comments for GenerateTypeDependencies.  It seems like
appropriate behavior for CREATE OR REPLACE too; at least, the obvious
alternatives are not better.  However, the fact that it happens during
ALTER is an artifact of trying to share code (GenerateTypeDependencies
and makeOperatorDependencies) between the CREATE and ALTER cases.
Since an extension script would be unlikely to ALTER an object that
didn't already belong to the extension, this behavior is not very
troubling for the direct target object ... but ALTER TYPE SET will
recurse to dependent domains, and it is very uncool for those to
become owned by the extension if they were not already.

Let's fix this by redefining the ALTER cases to never change extension
membership, full stop.  We could minimize the behavioral change by
only changing the behavior when ALTER TYPE SET is recursing to a
domain, but that would complicate the code and it does not seem like
a better definition.
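
As a sketch (hypothetical extension update script ext--1.0--1.1.sql):

    -- t is a pre-existing, free-standing type; after this fix, the
    -- ALTER no longer makes it (or dependent domains) owned by ext
    ALTER TYPE public.t SET (STORAGE = plain);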

Per bug #17144 from Alex Kozhemyakin.  Back-patch to v13 where ALTER
TYPE SET was added.  (The other cases are older, but since they only
affect the directly-named object, there's not enough of a problem to
justify changing the behavior further back.)

Discussion: https://postgr.es/m/17144-e67d7a8f049de9af@postgresql.org
2021-08-17 14:29:22 -04:00
Tom Lane 78a843f119 Improve regex compiler's arc moving/copying logic.
The functions moveins(), copyins(), moveouts(), copyouts() are
required to preserve the invariant that there are no duplicate arcs in
the regex's NFA.  Spencer's original implementation of them was O(N^2)
since it checked separately for a match to each source arc.  In commit
579840ca0 I improved that by adding sort/merge logic to be used if
more than a few arcs are to be moved/copied.  However, I now realize
that that missed a bet.  At many call sites, the target state is newly
made and cannot have any existing in-arcs (respectively out-arcs)
that could be duplicates.  So spending any cycles at all on checking
for duplicates is wasted effort; in these cases we can just blindly
move/copy all the source arcs.  Add code paths to do that.

It turns out that for copyins()/copyouts(), *all* the call sites have
this property, making all the "improved" logic in them flat out
unreachable.  Perhaps we'll need the full capability again someday,
so I just #ifdef'd those paths out rather than removing them entirely.

In passing, add a few test cases to improve code coverage in this
area as well as in regc_locale.c/regc_pg_locale.c.

Discussion: https://postgr.es/m/810272.1629064063@sss.pgh.pa.us
2021-08-17 12:00:02 -04:00