Commit Graph

39410 Commits

Author SHA1 Message Date
Amit Kapila 26b3455afa Fix partition table's REPLICA IDENTITY checking on the subscriber.
In logical replication, we check whether the target table on the
subscriber is updatable by comparing the replica identity of the table on
the publisher with that of the table on the subscriber. When the target table is a
partitioned table, we only check its replica identity but not that of its
partitions. This leads to an assertion failure while applying changes
for update/delete, as we expect those to succeed only when the
corresponding partition table has a primary key or has a replica
identity defined.

Fix it by checking the replica identity of the partition table while
applying changes.

Reported-by: Shi Yu
Author: Shi Yu, Hou Zhijie
Reviewed-by: Amit Langote, Amit Kapila
Backpatch-through: 13, where it was introduced
Discussion: https://postgr.es/m/OSZPR01MB6310F46CD425A967E4AEF736FDA49@OSZPR01MB6310.jpnprd01.prod.outlook.com
2022-06-21 08:07:43 +05:30
Peter Eisentraut 660ee7bec2 Message and documentation refinements 2022-06-19 17:39:50 +02:00
Tom Lane 9cd43f6cbd Fix busted tab completion of extension versions.
In 02b8048ba I (tgl) got rid of the need for most tab-completion queries
to return pre-quoted identifiers.  But I over-hastily removed the
quote_ident call from Query_for_list_of_available_extension_versions*
too; those still need it, because what is returned isn't an identifier
at all and will (almost?) always need quoting.

Arguably we should use quote_literal here instead.  But quote_ident
works too and people may be used to that behavior, so stick with it.

In passing, fix inconsistent omission of schema-qualification in
Query_for_list_of_encodings.  That's not a security issue per our
current guidelines, but it ought to be like the rest.

Jeff Janes

Discussion: https://postgr.es/m/CAMkU=1yV+egSYrzWvbDY8VZ6bKEMrKbzxr-HTuiHi+wDgSUMgA@mail.gmail.com
2022-06-18 19:45:38 -04:00
Tomas Vondra e3fcca0d0d Revert changes in HOT handling of BRIN indexes
This reverts commits 5753d4ee32 and fe60b67250 that modified HOT to
ignore BRIN indexes. The commit message for 5753d4ee32 claims that:

    When determining whether an index update may be skipped by using
    HOT, we can ignore attributes indexed only by BRIN indexes. There
    are no index pointers to individual tuples in BRIN, and the page
    range summary will be updated anyway as it relies on visibility
    info.

This is partially incorrect - it's true BRIN indexes don't point to
individual tuples, so HOT chains are not an issue, but the visibility
info is not sufficient to keep the index up to date. This can easily
result in corrupted indexes, as demonstrated in the hackers thread.

This does not mean relaxing the HOT restrictions for BRIN is a lost
cause, but it needs to handle the two aspects (allowing HOT chains and
updating the page range summaries) separately. But that requires
major changes, and it's too late for that in the current dev cycle.

Reported-by: Tomas Vondra
Discussion: https://postgr.es/m/05ebcb44-f383-86e3-4f31-0a97a55634cf@enterprisedb.com
2022-06-16 15:02:49 +02:00
Michael Paquier 664da2a389 Fix comment in regression tests for large objects
The values assigned to INV_WRITE and INV_READ were reversed in the
tests, which would be confusing when writing tests specific to read or
write operations on LOs.

Author: Yugo Nagata
Discussion: https://postgr.es/m/20220527153028.61a4608f66abcd026fd3806f@sraoss.co.jp
2022-06-16 17:21:04 +09:00
Amit Kapila b7658c24c7 Fix data inconsistency between publisher and subscriber.
We were not updating the partition map cache on the subscriber even when
the corresponding remote rel was changed. Due to this, data was getting
incorrectly replicated for partition tables after the publisher had
changed the table schema.

Fix it by resetting the required entries in the partition map cache after
receiving a new relation mapping from the publisher.

Reported-by: Shi Yu
Author: Shi Yu, Hou Zhijie
Reviewed-by: Amit Langote, Amit Kapila
Backpatch-through: 13, where it was introduced
Discussion: https://postgr.es/m/OSZPR01MB6310F46CD425A967E4AEF736FDA49@OSZPR01MB6310.jpnprd01.prod.outlook.com
2022-06-16 08:45:07 +05:30
Michael Paquier a059e15cde Re-add locally-generated files in pg_upgrade's .gitignore and Makefile
This reverts the changes to pg_upgrade's Makefile and .gitignore done in
15b6d21.  The TAP tests run in isolation, executing pg_upgrade in
tmp_check/ in the build directory, so any files created in the
execution path (reindex_hash.sql and delete_old_cluster.{sh,bat}) are
never in the tree and entries are not necessary in this case.  However,
not having these entries impacts the cleanliness of the code tree when running
./pg_upgrade directly from src/bin/pg_upgrade/.

This commit adds back to .gitignore all the files generated in the
execution path, and the Makefile rule to clean them up if they exist.

Per gripe from Tom Lane.

Discussion: https://postgr.es/m/90595.1655227384@sss.pgh.pa.us
2022-06-15 15:36:16 +09:00
Amit Kapila 5a97b13254 Fix cache look-up failures while applying changes in logical replication.
While building a new attrmap which maps partition attribute numbers to
remoterel's, we incorrectly updated the map for dropped column attributes.
Later, this caused cache look-up failures when we tried to use the map to
fetch the information about attributes.

This also fixes the partition map cache invalidation, which was using the
wrong type cast to fetch the entry. We were using a stale partition map
entry after invalidation, which led to an assertion or cache look-up
failure.

Reported-by: Shi Yu
Author: Hou Zhijie, Shi Yu
Reviewed-by: Amit Langote, Amit Kapila
Backpatch-through: 13, where it was introduced
Discussion: https://postgr.es/m/OSZPR01MB6310F46CD425A967E4AEF736FDA49@OSZPR01MB6310.jpnprd01.prod.outlook.com
2022-06-15 09:52:12 +05:30
Michael Paquier a3ff08e0b0 Tweak behavior of psql --single-transaction depending on ON_ERROR_STOP
This commit, in completion of 157f873, forces a ROLLBACK for
--single-transaction only when ON_ERROR_STOP is set and one of the
steps defined by -f/-c fails.  Hence, COMMIT is always used when
ON_ERROR_STOP is not set, ignoring the status code of the last action
taken in the set of switches specified by -c/-f (previously ROLLBACK
would have been issued even without ON_ERROR_STOP if the last step
failed, while COMMIT was issued if a step in-between failed as long as
the last step succeeded, leading to more inconsistency).

While at it, this adds much more test coverage in this area when not
using ON_ERROR_STOP with multiple switch patterns involving -c and -f
for query files, single queries and slash commands.

The behavior of ON_ERROR_STOP is arguably a bug, but there was not much
support for a backpatch to force a ROLLBACK on a step failure, so this
change is done only on HEAD for now.

Per discussion with Tom Lane and Kyotaro Horiguchi.

Discussion: https://postgr.es/m/Yqbc8bAdwnP02na4@paquier.xyz
2022-06-15 11:24:52 +09:00
Tom Lane ba412c905a Avoid ecpglib core dump with out-of-order operations.
If an application executed operations like EXEC SQL PREPARE
without having first established a database connection, it could
get a core dump instead of the expected clean failure.  This
occurred because we did "pthread_getspecific(actual_connection_key)"
without ever having initialized the TSD key actual_connection_key.
The results of that are probably platform-specific, but at least
on Linux it often leads to a crash.

To fix, add calls to ecpg_pthreads_init() in the code paths that
might use actual_connection_key uninitialized.  It's harmless
(and hopefully inexpensive) to do that more than once.

Per bug #17514 from Okano Naoki.  The problem's ancient, so
back-patch to all supported branches.

Discussion: https://postgr.es/m/17514-edd4fad547c5692c@postgresql.org
2022-06-14 18:16:46 -04:00
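A minimal self-contained C sketch of the initialization pattern implied here (not ecpglib's actual code): the thread-specific-data key has to be created before pthread_getspecific() is used on it, and pthread_once() makes it harmless and cheap to trigger that initialization from every entry point:

    #include <pthread.h>
    #include <stddef.h>

    /* Hypothetical stand-ins for ecpglib's TSD key and its initializer. */
    static pthread_key_t actual_connection_key;
    static pthread_once_t key_once = PTHREAD_ONCE_INIT;

    static void
    key_init(void)
    {
        (void) pthread_key_create(&actual_connection_key, NULL);
    }

    static void *
    get_actual_connection(void)
    {
        /* Safe to call on every code path; the key is created only once. */
        pthread_once(&key_once, key_init);
        return pthread_getspecific(actual_connection_key);
    }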
Tom Lane 4e54d231ae pg_upgrade: further tweaking of make_outputdirs().
Use the same error message for all cases of pathname overrun,
since users aren't going to much care which one was too long.
Add missing newline to said error (as pg_upgrade's version
of pg_fatal requires that).
Add pathname overrun checks for the individual log files,
not just the directories.
Remove initial newline in log files; the new scheme here
guarantees that we'll never be appending to an old file.

Kyotaro Horiguchi and Tom Lane

Discussion: https://postgr.es/m/20220613.120551.729848632120189555.horikyota.ntt@gmail.com
2022-06-13 14:28:05 -04:00
Andrew Dunstan 19408aae7f Make subscription tests pass with log_error_verbosity=verbose
Recent additions to the subscription tests check for log entries, but
fail to account for the possible presence of an SQL error code, which
happens if log_error_verbosity is set to 'verbose'. Add this to the
regular expressions that are checked for.
2022-06-12 09:17:17 -04:00
Tom Lane 1218780cce Un-break whole-row Vars referencing domain-over-composite types.
In commit ec62cb0aa, I foolishly replaced ExecEvalWholeRowVar's
lookup_rowtype_tupdesc_domain call with just lookup_rowtype_tupdesc,
because I didn't see how a domain could be involved there, and
there were no regression test cases to jog my memory.  But the
existing code was correct, so revert that change and add a test
case showing why it's necessary.  (Note: per comment in struct
DatumTupleFields, it is correct to produce an output tuple that's
labeled with the base composite type, not the domain; hence just
blindly looking through the domain is correct here.)

Per bug #17515 from Dan Kubb.  Back-patch to v11 where domains over
composites became a thing.

Discussion: https://postgr.es/m/17515-a24737438363aca0@postgresql.org
2022-06-10 10:35:57 -04:00
Peter Eisentraut 2172455865 Fix collation of JSON_TABLE output columns
The output columns of JSON_TABLE should have the collations of their
data type.  The existing implementation sets the default collation if
the type is collatable.

Reviewed-by: Andrew Dunstan <andrew@dunslane.net>
Discussion: https://www.postgresql.org/message-id/flat/9d75ce67-0121-5050-5bec-bf5009db55ce%40enterprisedb.com
2022-06-10 06:05:08 +02:00
Etsuro Fujita 4a8a5dd7f5 Improve comments for trivial_subqueryscan().
This function can be called from mark_async_capable_plan(), a helper
function for create_append_plan(), before set_subqueryscan_references(),
to determine the triviality of a SubqueryScan that is a child of an
Append plan node, which is done before doing finalize_plan() on the
SubqueryScan (if necessary) and set_plan_references() on the subplan,
unlike when called from set_subqueryscan_references().  The reason why
this is safe wouldn't be that obvious, so add comments explaining this.

Follow-up for commit c2bb02bc2.

Reviewed by Zhihong Yu.

Discussion: https://postgr.es/m/CAPmGK17%2BGiJBthC6va7%2B9n6t75e-M1N0U18YB2G1B%2BE5OdrNTA%40mail.gmail.com
2022-06-09 19:30:00 +09:00
Peter Eisentraut e77de23fbb psql: Show notices immediately (again)
The new show-all-results feature in psql (7844c9918) went out of its
way to show notices next to the results of the statements (in a
multi-statement string) that caused them.  This also had the
consequence that notices for a single statement were not shown until
after the statement had executed, instead of right away.  After some
discussion, it seems very difficult to satisfy both of these goals, so
here we are giving up on the first goal and just show the notices as
we get them.  This restores the pre-7844c9918 behavior for notices.

Reported-by: Alastair McKinley <a.mckinley@analyticsengines.com>
Author: Fabien COELHO <coelho@cri.ensmp.fr>
Discussion: https://www.postgresql.org/message-id/flat/PAXPR02MB760039506C87A2083AD85575E3DA9%40PAXPR02MB7600.eurprd02.prod.outlook.com
2022-06-09 08:49:13 +02:00
Tom Lane 7ab5b4eb48 Be more careful about GucSource for internally-driven GUC settings.
The original advice for hard-wired SetConfigOption calls was to use
PGC_S_OVERRIDE, particularly for PGC_INTERNAL GUCs.  However,
that's really overkill for PGC_INTERNAL GUCs, since there is no
possibility that we need to override a user-provided setting.
Instead use PGC_S_DYNAMIC_DEFAULT in most places, so that the
value will appear with source = 'default' in pg_settings and thereby
not be shown by psql's new \dconfig command.  The one exception is
that when changing in_hot_standby in a hot-standby session, we still
use PGC_S_OVERRIDE, because people felt that seeing that in \dconfig
would be a good thing.

Similarly use PGC_S_DYNAMIC_DEFAULT for the auto-tune value of
wal_buffers (if possible, that is if wal_buffers wasn't explicitly
set to -1), and for the typical 2MB value of max_stack_depth.

In combination these changes remove four not-very-interesting
entries from the typical output of \dconfig, all of which people
fingered as "why is that showing up?" in the discussion thread.

Discussion: https://postgr.es/m/3118455.1649267333@sss.pgh.pa.us
2022-06-08 13:26:18 -04:00
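A short C sketch of the convention described above (the values and the wrapper function are illustrative, not taken from the commit):

    #include "postgres.h"
    #include "utils/guc.h"

    /* Illustrative only: record internally derived values with the right source. */
    static void
    set_internal_defaults(void)
    {
        /* Auto-tuned value: a dynamic default, so pg_settings reports source = 'default'. */
        SetConfigOption("wal_buffers", "512", PGC_POSTMASTER, PGC_S_DYNAMIC_DEFAULT);

        /* Hot-standby state: deliberately an override, so psql's \dconfig still shows it. */
        SetConfigOption("in_hot_standby", "true", PGC_INTERNAL, PGC_S_OVERRIDE);
    }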
Michael Paquier ca899d98b4 Fix portability issue in TAP tests of psql for locales
Some locales use a comma as decimal separator (like Czech or French),
and psql's 001_basic.pl for \timing was not able to handle that
properly.  This fixes the matching regexes to be able to handle both
comma and dot as possible decimal separators, as per a suggestion from
Andrew Dunstan.

psql tests were the only place with such a portability issue
(check-world passed here with a forced LANG/LANGUAGE).  These tests are
new as of c0280bc, so there is no need for a backpatch.

Reported-by: Pavel Stehule
Discussion: https://postgr.es/m/CAFj8pRBz8iQmd2aOaCLvO-rJY6vZr-h6Q0qvV0J+yb78J7uiaA@mail.gmail.com
2022-06-08 11:24:06 +09:00
Michael Paquier 4fff78f009 Restructure pg_upgrade output directories for better idempotence
38bfae3 has moved the contents written to files by pg_upgrade under a
new directory called pg_upgrade_output.d/ located in the new cluster's
data folder, and it used a simple structure made of two subdirectories
leading to a fixed structure: log/ and dump/.  This design made
pg_upgrade weaker on repeated calls, as we could get failures when
creating one or more of those directories, while potentially losing the
logs of a previous run (logs are retained automatically on failure, and
cleaned up on success unless --retain is specified).  So a user would
need to clean up pg_upgrade_output.d/ as an extra step for any repeated
calls of pg_upgrade.  The most common scenario here is --check followed
by the actual upgrade, but one could see a failure when specifying an
incorrect input argument value.  Removing entirely the logs would have
the disadvantage of removing all the past information, even if --retain
was specified at some past step.

This result is annoying for a lot of users and automated upgrade flows.
So, rather than requiring a manual removal of pg_upgrade_output.d/, this
redesigns the set of output directories in a more dynamic way, based on
a suggestion from Tom Lane and Daniel Gustafsson.  pg_upgrade_output.d/
is still the base path, but a second directory level is added, mostly
named after an ISO-8601-formatted timestamp (in short human-readable,
with milliseconds appended to the name to avoid any conflicts).  The
logs and dumps are saved within the same subdirectories as previously,
as of log/ and dump/, but these are located inside the subdirectory
named after the timestamp.

The logs of a given run are removed only after a successful run if
--retain is not used, and pg_upgrade_output.d/ is kept if there are any
logs from a previous run.  Note that previously, pg_upgrade would have
kept the logs even after a successful --check but that was inconsistent
compared to the case without --check when using --retain.  The code in
charge of the removal of the output directories is now refactored into a
single routine.

Two TAP tests are added with some --check commands (one failure case and
one success case), to look after the issue fixed here.  Note that the
tests had to be tweaked a bit to fit with the new directory structure so
that they can find any logs generated on failure.  This is still going to
require a change in the buildfarm client for the case where pg_upgrade
is tested without the TAP test, but I'll tackle that with a
separate patch where needed.

Reported-by: Tushar Ahuja
Author: Michael Paquier
Reviewed-by: Daniel Gustafsson, Justin Pryzby
Discussion: https://postgr.es/m/77e6ecaa-2785-97aa-f229-4b6e047cbd2b@enterprisedb.com
2022-06-08 10:53:01 +09:00
David Rowley fa5185b26c Harden Memoization code against broken data types
Bug #17512 highlighted that a suitably broken data type could cause the
backend to crash if either the hash function or equality function were in
some way non-deterministic based on their input values.  Such a data type
could cause a crash of the backend due to some code which assumes that
we'll always find a hash table entry corresponding to an item in the
Memoize LRU list.

Here we remove the assumption that we'll always find the entry
corresponding to the given LRU list item and add run-time checks to verify
we have found the given item in the cache.

This is not a fix for bug #17512, but it will turn the crash reported by
that bug report into an internal ERROR.

Reported-by: Ales Zeleny
Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/CAApHDvpxFSTwvoYWT7kmFVSZ9zLAeHb=S9vrz=RExMgSkQNWqw@mail.gmail.com
Backpatch-through: 14, where Memoize was added.
2022-06-08 12:39:09 +12:00
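The hardening amounts to replacing an assumption with a run-time check; a hedged C fragment with hypothetical names (mstate, key, and lookup_cache_entry are illustrative, not the actual Memoize code):

    /* Previously the code effectively assumed the entry must exist. */
    entry = lookup_cache_entry(mstate, key);    /* hypothetical lookup helper */
    if (entry == NULL)
        elog(ERROR, "could not find memoization table entry");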
Tom Lane bf4717b091 Fix off-by-one loop termination condition in pg_stat_get_subscription().
pg_stat_get_subscription scanned one more LogicalRepWorker array entry
than is really allocated.  In the worst case this could lead to SIGSEGV,
if the LogicalRepCtx data structure is near the end of shared memory.
That seems quite unlikely though (thanks to the ordering of calls in
CreateSharedMemoryAndSemaphores) and we've heard no field reports of it.
A more likely misbehavior is one row of garbage data in the function's
result, but even that is not really likely because of the check that the
pid field matches some live backend.

Report and fix by Kuntal Ghosh.  This bug is old, so back-patch
to all supported branches.

Discussion: https://postgr.es/m/CAGz5QCJykEDzW6jQK6Yz7Qh_PMtD=95de_7QoocbVR2Qy8hWZA@mail.gmail.com
2022-06-07 15:34:30 -04:00
Tom Lane 51da231597 Don't fail on libpq-generated error reports in pg_amcheck.
An error PGresult generated by libpq itself, such as a report of
connection loss, won't have broken-down error fields.
should_processing_continue() blithely assumed that
PG_DIAG_SEVERITY_NONLOCALIZED would always be present, and would
dump core if it wasn't.

Per grepping to see if 6d157e7cb's mistake was repeated elsewhere.
2022-06-06 11:26:57 -04:00
Tom Lane 6d157e7cb8 Don't fail on libpq-generated error reports in ecpg_raise_backend().
An error PGresult generated by libpq itself, such as a report of
connection loss, won't have broken-down error fields.
ecpg_raise_backend() blithely assumed that PG_DIAG_MESSAGE_PRIMARY
would always be present, and would end up passing a NULL string
pointer to snprintf when it isn't.  That would typically crash
before 3779ac62d, and it would fail to provide a useful error report
in any case.  Best practice is to substitute PQerrorMessage(conn)
in such cases, so do that.

Per bug #17421 from Masayuki Hirose.  Back-patch to all supported
branches.

Discussion: https://postgr.es/m/17421-790ff887e3188874@postgresql.org
2022-06-06 11:20:21 -04:00
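A minimal C sketch of the defensive pattern recommended here, usable by any libpq client (only the libpq calls are real; the wrapper is illustrative):

    #include <libpq-fe.h>

    static const char *
    result_error_text(PGconn *conn, const PGresult *res)
    {
        const char *msg = PQresultErrorField(res, PG_DIAG_MESSAGE_PRIMARY);

        /* libpq-generated errors (e.g. connection loss) carry no error fields. */
        if (msg == NULL || msg[0] == '\0')
            msg = PQerrorMessage(conn);
        return msg;
    }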
Michael Paquier 157f8739ad Fix psql's single transaction mode on client-side errors with -c/-f switches
psql --single-transaction is able to handle multiple -c and -f switches
in a single transaction since d5563d7d, but this had the surprising
behavior of forcing a transaction COMMIT even if psql failed with an
error in the client (for example incorrect path given to \copy), which
would generate an error, but still commit any changes that were already
applied in the backend.  This commit makes the behavior more consistent,
by enforcing a transaction ROLLBACK if any commands fail, both
client-side and backend-side, so that no changes are applied if an error
happens in any of them.

Some tests are added on HEAD to provide some coverage about all that.
Backend-side errors are unreliable as IPC::Run can complain on SIGPIPE
if psql quits before reading a query result, but that should work
properly in the case where any errors come from psql itself, which is
what the original report is about.

Reported-by: Christoph Berg
Author: Kyotaro Horiguchi, Michael Paquier
Discussion: https://postgr.es/m/17504-76b68018e130415e@postgresql.org
Backpatch-through: 10
2022-06-06 11:05:59 +09:00
Tom Lane f00a4f02ac Automatically count the number of output lines in psql/help.c.
The hard-wired PageOutput arguments in usage() and sibling functions
have been a perennial maintenance gotcha, and there's no reason to
think we'll ever get any better about that.  Let's get rid of those
magic constants by constructing the output in a buffer where we can
count the newlines before calling PageOutput.  (Perhaps this is
microscopically slower; but none of these functions are performance
critical, and anyway we might well be buying back all the cost by
avoiding having to pass most of the data through snprintf.c.  I could
not detect any speed difference in a desultory check.)  This also
gets rid of the need to assume that platform-specific variations in
the output are insignificant.

While at it, make the code shorter and more abstract by inventing
helper macros HELP0() and HELPN() to encapsulate the specific
output actions being invoked.

Discussion: https://postgr.es/m/365160.1654289490@sss.pgh.pa.us
2022-06-04 11:54:05 -04:00
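A self-contained C sketch of the counting idea (psql's actual buffer type and HELP0/HELPN macros differ): accumulate the text, then measure it instead of hard-wiring a line count:

    #include <stdio.h>

    /* Count newlines so the pager decision no longer relies on a magic constant. */
    static int
    count_lines(const char *buf)
    {
        int     nlines = 0;

        for (const char *p = buf; *p; p++)
            if (*p == '\n')
                nlines++;
        return nlines;
    }

    /* usage() would build its text into buf and then pass count_lines(buf)
     * (plus any trailing partial line) to PageOutput rather than a constant. */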
Michael Paquier 15b6d21553 Force run of pg_upgrade in the build directory in its TAP test
TAP tests are run from their own directory in the source tree, and in a
VPATH build the execution of the pg_upgrade command was leaving behind a
file in the source tree, which should be left untouched.  In order to
avoid this issue, the test moves to PostgreSQL::Test::Utils::tmp_check,
so that any files generated by pg_upgrade end up in the build tree rather
than the source tree.  This has the nice side-effect of making the
presence of such files in pg_upgrade's .gitignore and Makefile unnecessary.  This
strategy is similar to psql's test 010_tab_completion.pl, though the
reasons behind this choice are different.

In passing, fix one misleading test name that was added by 99f6f19.

Per discussion with Peter Eisentraut, Andrew Dunstan, Tom Lane, Andres
Freund and myself.

Discussion: https://postgr.es/m/f80ace33-11fb-1cd3-20f8-98f51d151088@enterprisedb.com
2022-06-04 12:16:52 +09:00
Tom Lane 1fbe94084f Improve psql \?'s description of large-object-related commands.
Provide a gloss of which command does what, as all other backslash
commands have.  Put the large-object command section into a more
considered spot in the list.

In passing, update the output-lines count in helpVariables()
(oversight in 7844c9918, looks like).

Thibaud Walkowiak, reviewed by Nathan Bossart and myself

Discussion: https://postgr.es/m/43f0439c-df3e-a045-ac99-af33523cc2d4@dalibo.com
2022-06-03 15:49:36 -04:00
Amit Kapila fd0b9dcebd Prohibit combining publications with different column lists.
Currently, we simply combine the column lists when publishing tables on
multiple publications and that can sometimes lead to unexpected behavior.
Say, if a column is published in any row-filtered publication, then the
values for that column are sent to the subscriber even for rows that don't
match the row filter, as long as the row matches the row filter for any
other publication, even if that other publication doesn't include the
column.

The main purpose of introducing a column list is to have statically
different shapes on publisher and subscriber or hide sensitive column
data. In both cases, it doesn't seem to make sense to combine column
lists.

So, we disallow the cases where the column list is different for the same
table when combining publications. It can be later extended to combine the
column lists for selective cases where required.

Reported-by: Alvaro Herrera
Author: Hou Zhijie
Reviewed-by: Amit Kapila
Discussion: https://postgr.es/m/202204251548.mudq7jbqnh7r@alvherre.pgsql
2022-06-02 08:31:50 +05:30
Michael Paquier 99f6f19799 Add missing test names in TAP tests of pg_upgrade
While at it, this removes the inclusion of getcwd(), as the test code
does not rely on it.

Author: Peter Eisentraut
Discussion: https://postgr.es/m/f80ace33-11fb-1cd3-20f8-98f51d151088@enterprisedb.com
2022-06-02 09:21:32 +09:00
Tom Lane dd1c8dd101 Silence compiler warnings from some older compilers.
Since a117cebd6, some older gcc versions issue "variable may be used
uninitialized in this function" complaints for brin_summarize_range.
Silence that using the same coding pattern as in bt_index_check_internal;
arguably, a117cebd6 had too narrow a view of which compilers might give
trouble.

Nathan Bossart and Tom Lane.  Back-patch as the previous commit was.

Discussion: https://postgr.es/m/20220601163537.GA2331988@nathanxps13
2022-06-01 17:21:45 -04:00
Tom Lane eb39610f82 Fix pl/perl test case so it will still work under Perl 5.36.
Perl 5.36 has reclassified the warning condition that this test
case used, so that the expected error fails to appear.  Tweak
the test so it instead exercises a case that's handled the same
way in all Perl versions of interest.

This appears to meet our standards for back-patching into
out-of-support branches: it changes no user-visible behavior
but enables testing of old branches with newer tools.
Hence, back-patch as far as 9.2.

Dagfinn Ilmari Mannsåker, per report from Jitka Plesníková.

Discussion: https://postgr.es/m/564579.1654093326@sss.pgh.pa.us
2022-06-01 16:15:47 -04:00
Alvaro Herrera e28bb88519
Revert changes to CONCURRENTLY that "sped up" Xmin advance
This reverts commit d9d076222f "VACUUM: ignore indexing operations
with CONCURRENTLY".

These changes caused indexes created with the CONCURRENTLY option to
miss heap tuples that were HOT-updated and HOT-pruned during the index
creation.  Before these changes, HOT pruning would have been prevented
by the Xmin of the transaction creating the index, but because this
change was precisely to allow the Xmin to move forward ignoring that
backend, now other backends scanning the table can prune them.  This is
not a problem for VACUUM (which requires a lock that conflicts with a
CREATE INDEX CONCURRENTLY operation), but HOT-prune can definitely
occur.  In other words, Xmin advancement was sped up, but at the cost of
corrupting the resulting index.

Regrettably, this means that the new feature in PG14 whereby RIC/CIC on
very large tables no longer forces VACUUM to retain very old tuples goes
away.  We might try to implement it again in a later release, but for
now the risk of indexes missing tuples is too high and there's no easy
fix.

Backpatch to 14, where this change appeared.

Reported-by: Peter Slavov <pet.slavov@gmail.com>
Diagnosed-by: Andrey Borodin <x4mmm@yandex-team.ru>
Diagnosed-by: Michael Paquier <michael@paquier.xyz>
Diagnosed-by: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/17485-396609c6925b982d%40postgresql.org
2022-05-31 21:24:59 +02:00
Tom Lane 16c80e7d0c Ensure ParseTzFile() closes the input file after failing.
We hadn't noticed this because (a) few people feed invalid
timezone abbreviation files to the server, and (b) in typical
scenarios guc.c would throw ereport(ERROR) and then transaction
abort handling would silently clean up the leaked file reference.
However, it was possible to observe file leakage warnings if one
breaks an already-active abbreviation file, because guc.c does
not throw ERROR when loading supposedly-validated settings during
session start or SIGHUP processing.

Report and fix by Kyotaro Horiguchi (cosmetic adjustments by me)

Discussion: https://postgr.es/m/20220530.173740.748502979257582392.horikyota.ntt@gmail.com
2022-05-31 14:47:44 -04:00
Robert Haas f5bfba5413 shm_mq_sendv: Fix flushing bug when receiver not yet attached.
With the old logic, when the receiver had not yet attached, we would
never call shm_mq_inc_bytes_written(), even if force_flush = true
was specified. That could result in a situation where data that the
sender believes it has sent is never received.

Along the way, remove a useless function prototype for a nonexistent
function from shm_mq.h.

Commit 46846433a0 introduced these
problems.

Pavan Deolasee, with a few changes by me.

Discussion: https://postgr.es/m/CABOikdPkwtLLCTnzzmpSMXo3QZa2yXq0J7Q61ssdLFAJYrOVvQ@mail.gmail.com
2022-05-31 08:46:54 -04:00
Amit Kapila 0a050ee000 Fix typo in hash README.
Author: Peter Smith
Discussion: https://postgr.es/m/CAHut+Pu-V22PiJF2ym9_NVZe-+qnycfyEX24dZm=7URWhDHJ3w@mail.gmail.com
2022-05-31 14:37:41 +05:30
Michael Paquier 0efa51357e Remove useless tests for TRUNCATE on foreign tables
foreign_data has kept around a set of tests for TRUNCATE to look after
the case of foreign tables, with[out] inheritance and with[out]
partitions, assuming that the command is not supported for this relkind.
However, TRUNCATE is supported on foreign tables if the FDW involved is
able to handle the command, like postgres_fdw.

Note that postgres_fdw includes tests to cover all the cases removed by
this commit (which had misleading comments), so these did not provide
any additional coverage anyway.

Author: Yugo Nagata
Discussion: https://postgr.es/m/20220527172543.0a2fdb469cf048b81c0967d3@sraoss.co.jp
2022-05-31 09:44:00 +09:00
Thomas Munro 12e28aac8e Add debugging help in OwnLatch().
Build farm animal gharial recently failed a few times in a parallel
worker's call to OwnLatch() with "ERROR:  latch already owned".  Let's
turn that into a PANIC and show the PID of the owner, to try to learn
more.

Discussion: https://postgr.es/m/CA%2BhUKGJ_0RGcr7oUNzcHdn7zHqHSB_wLSd3JyS2YC_DYB%2B-V%3Dg%40mail.gmail.com
2022-05-31 12:06:11 +12:00
Tom Lane 5f0adec253 Make STRING an unreserved_keyword.
Commit 1a36bc9db (SQL/JSON query functions) introduced STRING as a
type_func_name_keyword, thereby breaking applications that use
"string" as a table name, column name, function parameter name, etc.
That seems like a pretty bad thing, not least because the SQL spec
says that STRING is an unreserved keyword.

This is easy enough to fix so far as the core grammar is concerned.
However, doing so causes some ECPG test cases to fail, specifically
those that use "string" as a typedef name.  It turns out this is
because portions of the ECPG grammar allow type_func_name_keywords
but not unreserved_keywords as typedef names.  That's pretty horrid,
and it's mildly astonishing that we've not heard complaints about it
before.  We can fix two of those uses trivially, but the ones in the
var_type production are less easy.  As a stopgap, hard-code STRING as
an allowed alternative in var_type.

Per report from Alastair McKinley.

Discussion: https://postgr.es/m/3661437.1653855582@sss.pgh.pa.us
2022-05-30 14:05:20 -04:00
Peter Eisentraut a8cca6026e logging: Also add the command prefix to detail and hint messages
This makes the output line up better and allows filtering messages by
command.

Discussion: https://www.postgresql.org/message-id/ba6d4fac-9e33-91f9-94fb-1e4c144a48b9@enterprisedb.com
2022-05-30 07:26:06 +02:00
Heikki Linnakangas fc36ac52eb Fix COPY FROM when database encoding is SQL_ASCII.
In the code path where no encoding conversion is required, the check for
an incomplete character at the end of the input incorrectly used the server
encoding's maximum character length instead of the client's. Usually the
server and client encodings are the same when we're not performing
encoding conversion, but SQL_ASCII is an exception.

In passing, also fix some outdated comments that still talked about
the old COPY protocol. It was removed in v14.

Per bug #17501 from Vitaly Voronov. Backpatch to v14 where this was
introduced.

Discussion: https://www.postgresql.org/message-id/17501-128b1dd039362ae6@postgresql.org
2022-05-29 23:54:25 +03:00
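A hedged C fragment of the kind of check involved (file_encoding is a hypothetical local; the actual COPY code differs): the truncation test must use the client/file encoding's maximum character length, not the server's:

    #include "mb/pg_wchar.h"

    /* Wrong: pg_database_encoding_max_length(), which is 1 for SQL_ASCII.
     * Right: the maximum length in the encoding the incoming data uses. */
    int     mblen_limit = pg_encoding_max_length(file_encoding);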
Andres Freund 0107855b14 Align stats_fetch_consistency definition with guc.c default.
Somewhat embarrassing oversight in 98f897339b. Does not have a functional
impact, but is unnecessarily confusing.

Reported-By: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/Yo2351qVYqd/bJws@paquier.xyz
2022-05-28 13:11:59 -07:00
Michael Paquier b4529005fd Revert "Add single-item cache when looking at topmost XID of a subtrans XID"
This reverts commit 06f5295 as per issues with this approach, both in
terms of efficiency impact and stability.  First, contrary to the
single-item cache for transaction IDs in transam.c, the cache may end up
not being hit for a long time, and without an invalidation mechanism to
clear it, it would cause inconsistent results on wraparound, for
example.  Second, the use of SubTransGetTopmostTransaction() for the
caching has a limited impact on performance.  SubTransGetParent() could
have more impact, though the benefit of the single-item approach
still needs to be proven by benchmarking, particularly under conditions where SLRU
lookups are stressed in parallel with overflowed snapshots (aka more
than 64 subxids generated, for example).

After discussion with Andres Freund.

Discussion: https://postgr.es/m/20220524235250.gtt3uu5zktfkr4hv@alap3.anarazel.de
2022-05-28 15:02:08 +09:00
Michael Paquier f1431f3bff Handle NULL for short descriptions of custom GUC variables
If a short description is specified as NULL in one of the various
DefineCustomXXXVariable() functions available to external modules to
define a custom parameter, SHOW ALL would crash.  This change teaches
SHOW ALL to properly handle NULL short descriptions, as well as any code
paths that manipulate it, to gain in flexibility.  Note that
help_config.c was already able to do that, when describing a set of GUCs
for postgres --describe-config.

Author: Steve Chavez
Reviewed by: Nathan Bossart, Andres Freund, Michael Paquier, Tom Lane
Discussion: https://postgr.es/m/CAGRrpzY6hO-Kmykna_XvsTv8P2DshGiU6G3j8yGao4mk0CqjHA%40mail.gmail.com
Backpatch-through: 10
2022-05-28 12:12:40 +09:00
David Rowley 3e9abd2eb1 Teach remove_unused_subquery_outputs about window run conditions
9d9c02ccd added code to allow the executor to take shortcuts when quals
on monotonic window functions guaranteed that once the qual became false
it could never become true again.  When possible, baserestrictinfo quals
are converted to become these quals, which we call run conditions.

Unfortunately, in 9d9c02ccd, I forgot to update
remove_unused_subquery_outputs to teach it about these run conditions.
This could cause a WindowFunc column which was unused in the target list
but referenced by an upper-level WHERE clause to be removed from the
subquery when the qual in the WHERE clause was converted into a window run
condition.  Because of this, the entire WindowClause would be removed from
the query resulting in additional rows making it into the resultset when
they should have been filtered out by the WHERE clause.

Here we fix this by recording which target list items in the subquery have
run conditions. That gets passed along to remove_unused_subquery_outputs
to tell it not to remove these items from the target list.

Bug: #17495
Reported-by: Jeremy Evans
Reviewed-by: Richard Guo
Discussion: https://postgr.es/m/17495-7ffe2fa0b261b9fa@postgresql.org
2022-05-27 10:37:58 +12:00
Tom Lane 2b65de7fc2 Remove misguided SSL key file ownership check in libpq.
Commits a59c79564 et al. tried to sync libpq's SSL key file
permissions checks with what we've used for years in the backend.
We did not intend to create any new failure cases, but it turns out
we did: restricting the key file's ownership breaks cases where the
client is allowed to read a key file despite not having the identical
UID.  In particular a client running as root used to be able to read
someone else's key file; and having seen that I suspect that there are
other, less-dubious use cases that this restriction breaks on some
platforms.

We don't really need an ownership check, since if we can read the key
file despite its having restricted permissions, it must have the right
ownership --- under normal conditions anyway, and the point of this
patch is that any additional corner cases where that works should be
deemed allowable, as they have been historically.  Hence, just drop
the ownership check, and rearrange the permissions check to get rid
of its faulty assumption that geteuid() can't be zero.  (Note that the
comparable backend-side code doesn't have to cater for geteuid() == 0,
since the server rejects that very early on.)

This does have the end result that the permissions safety check used
for a root user's private key file is weaker than that used for
anyone else's.  While odd, root really ought to know what she's doing
with file permissions, so I think this is acceptable.

Per report from Yogendra Suralkar.  Like the previous patch,
back-patch to all supported branches.

Discussion: https://postgr.es/m/MW3PR15MB3931DF96896DC36D21AFD47CA3D39@MW3PR15MB3931.namprd15.prod.outlook.com
2022-05-26 14:14:05 -04:00
Tom Lane 6217053f4e Avoid ERRCODE_INTERNAL_ERROR in oracle_compat.c functions.
repeat() checked for integer overflow during its calculation of the
required output space, but it just passed the resulting integer to
palloc().  This meant that result sizes between 1GB and 2GB led to
ERRCODE_INTERNAL_ERROR, "invalid memory alloc request size" rather
than ERRCODE_PROGRAM_LIMIT_EXCEEDED, "requested length too large".
That seems like a bit of a wart, so add an explicit AllocSizeIsValid
check to make these error cases uniform.

Do likewise in the sibling functions lpad() etc.  While we're here,
also modernize their overflow checks to use pg_mul_s32_overflow() etc
instead of expensive divisions.

Per complaint from Japin Li.  This is basically cosmetic, so I don't
feel a need to back-patch.

Discussion: https://postgr.es/m/ME3P282MB16676ED32167189CB0462173B6D69@ME3P282MB1667.AUSP282.PROD.OUTLOOK.COM
2022-05-26 12:25:10 -04:00
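A sketch of the checks described above, as they might appear inside such a function (variable names are illustrative, not necessarily the committed code): detect overflow cheaply and report oversize requests with ERRCODE_PROGRAM_LIMIT_EXCEEDED:

    #include "postgres.h"
    #include "common/int.h"
    #include "utils/memutils.h"

    /* count = number of repetitions, slen = byte length of the input string */
    int32       tlen;

    if (pg_mul_s32_overflow(count, slen, &tlen) ||
        pg_add_s32_overflow(tlen, VARHDRSZ, &tlen) ||
        !AllocSizeIsValid(tlen))
        ereport(ERROR,
                (errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
                 errmsg("requested length too large")));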
Michael Paquier 0dc379de64 Add tab completion for table_rewrite's CREATE EVENT TRIGGER in psql
Author: Hou Zhijie
Discussion: https://postgr.es/m/OS0PR01MB5716DEFF787B925C4778228C94D69@OS0PR01MB5716.jpnprd01.prod.outlook.com
2022-05-25 14:21:05 +09:00
Andres Freund 98f897339b Fix stats_fetch_consistency default value indicated in postgresql.conf.sample.
Mistake in 5891c7a8ed, likely made when switching the default value from none
to fetch during development.

Reported-By: Nathan Bossart <nathandbossart@gmail.com>
Author: Nathan Bossart <nathandbossart@gmail.com>
Discussion: https://postgr.es/m/20220524220147.GA1298892@nathanxps13
2022-05-24 21:26:39 -07:00
Michael Paquier c9dfe2e83a Remove duplicated words in comments of pgstat.c and pgstat_internal.h
Author: Atsushi Torikoshi
Reviewed-by: Nathan Bossart
Discussion: https://postgr.es/m/d00ddbf29f9d09b3a471e64977560de1@oss.nttdata.com
2022-05-24 11:00:41 +09:00
Peter Eisentraut da1c0acd0a pg_upgrade: Tweak translatable strings
"\r" (for progress output) must not be inside a translatable string
(gettext gets upset).

In passing, move the minimum supported version number to a separate
argument, so that we don't have to retranslate this string every year
now.
2022-05-23 10:54:39 +02:00
Peter Eisentraut 9520f8d92a psql: Update \timing also in case of an error
The changes to show all query results (7844c9918) broke \timing output
in case of an error; it didn't update the timing result and showed
0.000 ms.

Fix by updating the timing result also in the error case.  Also, for
robustness, update the timing result any time a result is obtained,
not only for the last, so a sensible value is always available.

Reported-by: Tom Lane <tgl@sss.pgh.pa.us>
Author: Richard Guo <guofenglinux@gmail.com>
Author: Fabien COELHO <coelho@cri.ensmp.fr>
Discussion: https://www.postgresql.org/message-id/3813350.1652111765%40sss.pgh.pa.us
2022-05-23 10:07:36 +02:00
John Naylor 6e647ef0e7 Remove debug messages from tuplesort_sort_memtuples()
These were of value only during development.

Reported by Justin Pryzby
Discussion: https://www.postgresql.org/message-id/20220519201254.GU19626%40telsasoft.com
2022-05-23 13:11:43 +07:00
Andres Freund 7fdbdf2049 pgstat: fix stats.spec instability on slow machines.
On slow machines the modified test could end up switching the order in which
transactional stats are reported in one session and non-transactional stats in
another session. As stats handling of truncate is implemented as setting
live/dead rows to 0, the order in which a truncate's stats changes are applied,
relative to normal stats updates, matters. The handling of stats for truncate
hasn't changed due to shared memory stats; this is longstanding behavior.

We might want to improve truncate's stats handling in the future, but for now
just change the order of forced flushes to make the test stable.

Reported-By: Christoph Berg <myon@debian.org>
Discussion: https://postgr.es/m/YoZf7U/WmfmFYFEx@msg.df7cb.de
2022-05-22 15:25:13 -07:00
Tom Lane c7461fc255 Show 'AS "?column?"' explicitly when it's important.
ruleutils.c was coded to suppress the AS label for a SELECT output
expression if the column name is "?column?", which is the parser's
fallback if it can't think of something better.  This is fine, and
avoids ugly clutter, so long as (1) nothing further up in the parse
tree relies on that column name or (2) the same fallback would be
assigned when the rule or view definition is reloaded.  Unfortunately
(2) is far from certain, both because ruleutils.c might print the
expression in a different form from how it was originally written
and because FigureColname's rules might change in future releases.
So we shouldn't rely on that.

Detecting exactly whether there is any outer-level use of a SELECT
column name would be rather expensive.  This patch takes the simpler
approach of just passing down a flag indicating whether there *could*
be any outer use; for example, the output column names of a SubLink
are not referenceable, and we also do not care about the names exposed
by the right-hand side of a setop.  This is sufficient to suppress
unwanted clutter in all but one case in the regression tests.  That
seems like reasonable evidence that it won't be too much in users'
faces, while still fixing the cases we need to fix.

Per bug #17486 from Nicolas Lutic.  This issue is ancient, so
back-patch to all supported branches.

Discussion: https://postgr.es/m/17486-1ad6fd786728b8af@postgresql.org
2022-05-21 14:45:58 -04:00
Tom Lane e19272ef60 Remove unused-and-misspelled function extern declaration.
Commit c65507763 added "extern XLogRecPtr CalculateMaxmumSafeLSN(void)",
which bears no trace of connection to anything else in that patch
or anywhere else.  Remove it again.

Sergei Kornilov (also spotted by Bharath Rupireddy)

Discussion: https://postgr.es/m/706501646056870@vla3-6a5326aeb4ee.qloud-c.yandex.net
Discussion: https://postgr.es/m/CALj2ACVoQ7NEf43Xz0rfxsGOKYTN5r4VZp2DO2_5p+CMzsRPFw@mail.gmail.com
2022-05-21 13:26:08 -04:00
Tom Lane a916cb9d5a Avoid overflow hazard when clamping group counts to "long int".
Several places in the planner tried to clamp a double value to fit
in a "long" by doing
	(long) Min(x, (double) LONG_MAX);
This is subtly incorrect, because it casts LONG_MAX to double and
potentially back again.  If long is 64 bits then the double value
is inexact, and the platform might round it up to LONG_MAX+1
resulting in an overflow and an undesirably negative output.

While it's not hard to rewrite the expression into a safe form,
let's put it into a common function to reduce the risk of someone
doing it wrong in future.

In principle this is a bug fix, but since the problem could only
manifest with group count estimates exceeding 2^63, it seems unlikely
that anyone has actually hit this or will do so anytime soon.  We're
fixing it mainly to satisfy fuzzer-type tools.  That being the case,
a HEAD-only fix seems sufficient.

Andrey Lepikhov

Discussion: https://postgr.es/m/ebbc2efb-7ef9-bf2f-1ada-d6ec48f70e58@postgrespro.ru
2022-05-21 13:13:44 -04:00
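A self-contained C illustration of the hazard and of a safe form (the committed helper's name and exact shape may differ):

    #include <limits.h>
    #include <stdio.h>

    /* (long) Min(x, (double) LONG_MAX) can overflow: with 64-bit longs,
     * (double) LONG_MAX rounds up to 2^63, and casting that back to long
     * is undefined and typically wraps to a negative value.  Comparing
     * first and returning the long constant avoids the round trip. */
    static long
    clamp_row_est_to_long(double x)
    {
        if (x <= 0.0)
            return 0;
        if (x >= (double) LONG_MAX)
            return LONG_MAX;
        return (long) x;        /* here x is strictly below 2^63, so safe */
    }

    int
    main(void)
    {
        printf("%ld\n", clamp_row_est_to_long(1e30));   /* LONG_MAX, not negative */
        return 0;
    }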
Michael Paquier eaa5ebe046 Improve and fix some issues in the TAP tests of pg_upgrade
This is based on a set of suggestions from Noah, with the following
changes made:
- The databases created in the tests are now prefixed with
"regression" so as not to trigger warnings about name restrictions when
compiling the code with -DENFORCE_REGRESSION_TEST_NAME_RESTRICTIONS, and
now only the first name checks for the Windows case of double quotes
mixed with backslashes.
- Fix an issue with EXTRA_REGRESS_OPTS, which was not processed in a
way consistent with 027_stream_regress.pl (missing space between the
option string and pg_regress).  This got introduced in 7dd3ee5.
- Add a check on the exit code of the pg_regress command, to catch
failures after running the regression tests.

Reviewed-by: Noah Misch
Discussion: https://postgr.es/m/YoHhWD5vQzb2mmiF@paquier.xyz
2022-05-21 12:01:48 +09:00
Tom Lane 5e5fa32335 Remove portability hazard in unsafe_tests/sql/guc_privs.sql.
This new-in-v15 test case assumed it could set max_stack_depth as high
as 2MB.  You might think that'd be true on any modern platform but
you'd be wrong, as I found out while experimenting with NetBSD/hppa.

This test is about privileges not platform capabilities, so there seems
no need to use any value greater than the 100kB setting already used
in a couple of places in the core regression tests.  There's certainly
no call to expect people to raise their platform's default ulimit just
to run this test.
2022-05-20 13:42:02 -04:00
Alvaro Herrera 6029861916
Fix DDL deparse of CREATE OPERATOR CLASS
When an implicit operator family is created, it wasn't getting reported.
Make it do so.

This has always been missing.  Backpatch to 10.

Author: Masahiko Sawada <sawada.mshk@gmail.com>
Reported-by: Leslie LEMAIRE <leslie.lemaire@developpement-durable.gouv.fr>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/f74d69e151b22171e8829551b1159e77@developpement-durable.gouv.fr
2022-05-20 18:52:55 +02:00
Michael Paquier b39838889e Add pg_version() to PostgreSQL::Test::Cluster
_pg_version (version number based on PostgreSQL::Version) is a field
private to Cluster.pm but there was no helper routine to retrieve it
from a Cluster's node.  The same is done for install_path, for example,
and the version object becomes handy when writing tests that need
version-specific handling.

Reviewed-by: Andrew Dunstan, Daniel Gustafsson
Discussion: https://postgr.es/m/YoWfoJTc987tsxpV@paquier.xyz
2022-05-20 18:29:51 +09:00
Peter Eisentraut 25f915b31e pg_waldump: Improve option parsing error messages
I rephrased the error messages to be more in the style of
option_parse_int(), and also made use of the new "detail" message
facility.  I didn't actually use option_parse_int() (which could be
used for -n) because libpgfeutils wasn't used here yet and I wanted to
keep this just to string changes.  But it could be done in the future.
2022-05-20 09:26:21 +02:00
Alvaro Herrera 8d061acd12
Repurpose PROC_COPYABLE_FLAGS as PROC_XMIN_FLAGS
This is a slight, convenient semantics change from what commit
0f0cfb4940 ("Fix parallel operations that prevent oldest xmin from
advancing") introduced that lets us simplify the coding in the one place
where it is used.

Backpatch to 13.  This is related to commit 6fea65508a ("Tighten
ComputeXidHorizons' handling of walsenders") rewriting the code site
where this is used, which has not yet been backpatched, but it may well
be in the future.

Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
Discussion: https://postgr.es/m/202204191637.eldwa2exvguw@alvherre.pgsql
2022-05-19 16:20:32 +02:00
David Rowley c4a4e760f6 Fix incorrect comments for Memoize struct
Reported-by: Peter Eisentraut
Discussion: https://postgr.es/m/0635f5aa-4973-8dc2-4e4e-df9fd5778a65@enterprisedb.com
Backpatch-through: 14, where Memoize was added
2022-05-19 17:14:23 +12:00
Amit Kapila 0ff20288e1 Extend pg_publication_tables to display column list and row filter.
Commits 923def9a53 and 52e4f0cd47 allowed specifying column lists and row
filters for publication tables. This commit extends the
pg_publication_tables view and pg_get_publication_tables function to
display that information.

This information will be useful to users and we also need this for the
later commit that prohibits combining multiple publications with different
column lists for the same table.

Author: Hou Zhijie
Reviewed By: Amit Kapila, Alvaro Herrera, Shi Yu, Takamichi Osumi
Discussion: https://postgr.es/m/202204251548.mudq7jbqnh7r@alvherre.pgsql
2022-05-19 08:20:55 +05:30
Alvaro Herrera 62221ef187
Update xml_1.out and xml_2.out
Commit 0fbf011200 should have updated them but didn't.
2022-05-18 23:19:53 +02:00
Alvaro Herrera 12e423e21d
Fix EXPLAIN MERGE output when no tuples are processed
An 'else' clause was misplaced in commit 598ac10be1, making zero-rows
output look a bit silly.  Add a test case for it.

Pointed out by Tom Lane.

Discussion: https://postgr.es/m/21030.1652893083@sss.pgh.pa.us
2022-05-18 21:20:49 +02:00
Alvaro Herrera 0fbf011200
Check column list length in XMLTABLE/JSON_TABLE alias
We weren't checking the length of the column list in the alias clause of
an XMLTABLE or JSON_TABLE function (a "tablefunc" RTE), and it was
possible to make the server crash by passing an overly long one.  Fix it
by throwing an error in that case, like the other places that deal with
alias lists.

In passing, modify the equivalent test used for join RTEs to look like
the other ones, which was different for no apparent reason.

This bug came in when XMLTABLE was born in version 10; backpatch to all
stable versions.

Reported-by: Wang Ke <krking@zju.edu.cn>
Discussion: https://postgr.es/m/17480-1c9d73565bb28e90@postgresql.org
2022-05-18 20:28:31 +02:00
Alvaro Herrera 598ac10be1
Make EXPLAIN MERGE output format more compact
We can use a single line to print all tuple counts that MERGE processed,
for conciseness, and elide those that are zeroes.  Non-text formats
report all numbers, as is typical.

Per comment from Justin Pryzby <pryzby@telsasoft.com>

Discussion: https://postgr.es/m/20220511163350.GL19626@telsasoft.com
2022-05-18 18:33:04 +02:00
Michael Paquier 27f1366050 pgbench: Restore compatibility of --partitions=0
A value of 0 has been allowed for this option since its creation; it
maps to the default of having no partitions for pgbench_accounts, but
6f164e6 broke that by enforcing an error.  This commit restores the
original behavior.

Author: Amit Langote
Discussion: https://postgr.es/m/CA+HiwqGAGobiiHR8nH382HJxqm1mzZs8=3oKPXnXivWoFSZmNA@mail.gmail.com
2022-05-18 09:47:38 +09:00
Michael Paquier bbf7c2d9e9 Fix typo in walreceiver.c
s/primary_slotname/primary_slot_name/.

Author: Bharath Rupireddy
Discussion: https://postgr.es/m/CALj2ACX3=pHkCpoGG-z+O=7Gp5YZv70jmfTyGnNV7YF3SkK73g@mail.gmail.com
2022-05-18 09:06:22 +09:00
Peter Eisentraut 6a8a7b1ccb Translation updates
Source-Git-URL: https://git.postgresql.org/git/pgtranslation/messages.git
Source-Git-Hash: dde45df385dab9032155c1f867b677d55695310c
2022-05-16 11:12:42 +02:00
Peter Eisentraut cd690e07fd pg_upgrade: Add missing gettext triggers
Forgot to add it in one place in the previous commit.
2022-05-16 09:41:02 +02:00
Peter Eisentraut cd46d42a51 pg_upgrade: Add missing gettext triggers
prep_status_progress() is new.
2022-05-16 08:55:01 +02:00
David Rowley 1e731ed12a Fix incorrect row estimates used for Memoize costing
In order to estimate the cache hit ratio of a Memoize node, one of the
inputs we require is the estimated number of times the Memoize node will
be rescanned.  The higher this number, the larger the cache hit ratio is
likely to become.  Unfortunately, the value being passed as the number of
"calls" to the Memoize node was incorrectly using the Nested Loop's
outer_path->parent->rows instead of outer_path->rows.  This failed to
account for the fact that the outer_path might be parameterized by some
upper-level Nested Loop.

This problem could lead to Memoize plans appearing more favorable than
they might actually be.  It could also lead to extended executor startup
times when work_mem values were large due to the planner setting overly
large MemoizePath->est_entries resulting in the Memoize hash table being
initially made much larger than might be required.

Fix this simply by passing outer_path->rows rather than
outer_path->parent->rows.  Also, adjust the expected regression test
output for a plan change.

Reported-by: Pavel Stehule
Author: David Rowley
Discussion: https://postgr.es/m/CAFj8pRAMp%3DQsMi6sPQJ4W3hczoFJRvyXHJV3AZAZaMyTVM312Q%40mail.gmail.com
Backpatch-through: 14, where Memoize was introduced
2022-05-16 16:07:56 +12:00
Thomas Munro 93759c665d Fix slow animal timeouts in 032_relfilenode_reuse.pl.
Per BF animal chipmunk:  CREATE DATABASE could apparently fail due to an
AV process being in the template database and not quitting fast enough
for the 5 second timeout in CountOtherDBBackends().  The test script had
autovacuum_naptime=1s to encourage more activity and opening of fds, but
that wasn't strictly necessary for this test.  Take it out.

Per BF animal skink:  there was a 300s timeout for all tests in the
script, but apparently that was not enough under valgrind.  Let's use
the standard timeout $PostgreSQL::Test::Utils::timeout_default, but
restart it for each query we run.

Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Discussion: https://postgr.es/m/CA%2BhUKGKa8HNJaA24gqiiFoGy0ysndeVoJsHvX_q1-DVLFaGAmw%40mail.gmail.com
2022-05-14 11:58:10 +12:00
Michael Paquier fcab82a2d7 Fix comment in pg_proc.c
pgstat_create_function() creates stats for a function in a transactional
fashion, so the stats would be dropped if the transaction creating the
function is aborted, not committed.

Author: Amul Sul
Discussion: https://postgr.es/m/CAAJ_b97x1T3xgAMWNj4w7kSgN0nTuG-vLrQJ4NB-dsNr0Kudxw@mail.gmail.com
2022-05-14 08:27:59 +09:00
Alvaro Herrera c4f113e8fe
Clean up newlines following left parentheses
Like commit c9d2977519.
2022-05-13 23:52:35 +02:00
Tom Lane 3ab9a63cb6 Rename JsonIsPredicate.value_type, fix JSON backend/nodes/ infrastructure.
I started out with the intention to rename value_type to item_type to
avoid a collision with a typedef name that appears on some platforms.

Along the way, I noticed that the adjacent field "format" was not being
correctly handled by the backend/nodes/ infrastructure functions:
copyfuncs.c erroneously treated it as a scalar, while equalfuncs,
outfuncs, and readfuncs omitted handling it at all.  This looks like
it might be cosmetic at the moment because the field is always NULL
after parse analysis; but that's likely a bug in itself, and the code's
certainly not very future-proof.  Let's fix it while we can still do so
without forcing an initdb on beta testers.

Further study found a few other inconsistencies in the backend/nodes/
infrastructure for the recently-added JSON node types, so fix those too.

catversion bumped because of potential change in stored rules.

Discussion: https://postgr.es/m/526703.1652385613@sss.pgh.pa.us
2022-05-13 11:40:08 -04:00
Robert Haas 4f2400cb3f Add a new shmem_request_hook hook.
Currently, preloaded libraries are expected to request additional
shared memory and LWLocks in _PG_init().  However, it is not unusual
for such requests to depend on MaxBackends, which won't be
initialized at that time.  Such requests could also depend on GUCs
that other modules might change.  This introduces a new hook where
modules can safely use MaxBackends and GUCs to request additional
shared memory and LWLocks.

Furthermore, this change restricts requests for shared memory and
LWLocks to this hook.  Previously, libraries could make requests
until the size of the main shared memory segment was calculated.
Unlike before, we no longer silently ignore requests received at
invalid times.  Instead, we FATAL if someone tries to request
additional shared memory or LWLocks outside of the hook.

Nathan Bossart and Julien Rouhaud

Discussion: https://postgr.es/m/20220412210112.GA2065815%40nathanxps13
Discussion: https://postgr.es/m/Yn2jE/lmDhKtkUdr@paquier.xyz
2022-05-13 09:31:06 -04:00
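A sketch of how a preloaded library might use the new hook (the module name and size are hypothetical; header locations as I recall them, so treat this as an outline rather than the canonical recipe):

    #include "postgres.h"
    #include "fmgr.h"
    #include "storage/ipc.h"
    #include "storage/lwlock.h"
    #include "storage/shmem.h"

    PG_MODULE_MAGIC;

    static shmem_request_hook_type prev_shmem_request_hook = NULL;

    static void
    my_shmem_request(void)
    {
        if (prev_shmem_request_hook)
            prev_shmem_request_hook();
        /* MaxBackends and GUCs are safe to consult here, unlike in _PG_init(). */
        RequestAddinShmemSpace(MAXALIGN(1024 * 1024));      /* illustrative size */
        RequestNamedLWLockTranche("my_module", 1);
    }

    void
    _PG_init(void)
    {
        prev_shmem_request_hook = shmem_request_hook;
        shmem_request_hook = my_shmem_request;
    }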
Peter Eisentraut 30ed71e423 Indent C code in flex and bison files
In the style of pgindent, done semi-manually.

Discussion: https://www.postgresql.org/message-id/flat/7d062ecc-7444-23ec-a159-acd8adf9b586%40enterprisedb.com
2022-05-13 07:17:29 +02:00
Andres Freund 0cf16cb8ca Don't report stats in LogicalRepApplyLoop() when in xact.
pgstat_report_stat() is only supposed to be called outside of transactions. In
5891c7a8ed I added a pgstat_report_stat() call into LogicalRepApplyLoop()'s
timeout branch. While not commonly reached inside a transaction, it is
reachable (e.g. due to network bottlenecks or the sender being stalled / slow
for some reason).

To fix, add a !IsTransactionState() check.
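
Schematically, the fix looks like this (sketch only; the force flag shown is
illustrative):

  /* In the timeout branch: flushing stats is only safe outside a transaction */
  if (!IsTransactionState())
      pgstat_report_stat(false);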

No test added because there's no easy way to reproduce this case without
patching the code.

Reported-By: Erik Rijkers <er@xs4all.nl>
Discussion: https://postgr.es/m/b3463b8c-2328-dcac-0136-af95715493c1@xs4all.nl
2022-05-12 18:54:26 -07:00
Michael Paquier 8d33412665 Remove PGDLLIMPORT marker from __pg_log_level
Per discussion with Tom Lane and Andres Freund.  I have misunderstood
the intention behind the choice done in 9a374b7.

Discussion: https://postgr.es/m/20220512153737.6kbbcf4qyvwgq4s2@alap3.anarazel.de
2022-05-13 09:39:13 +09:00
Andres Freund 07d683b54a Remove function declaration for function in pg_proc.
The declaration is automatically generated. Noticed when experimenting with
adding PGDLLIMPORT markers for functions.

Discussion: https://postgr.es/m/20220512164513.vaheofqp2q24l65r@alap3.anarazel.de
2022-05-12 12:39:33 -07:00
Andres Freund 0699b1ae2d Add missing binary_upgrade.h includes.
A few places used binary_upgrade_* variables without including the header,
which worked without warnings because the variables are defined in those
places. However that can cause linker complaints with MSVC - except that we
don't see them right now, due to the use of a symbol export file.

Discussion: https://postgr.es/m/20220512164513.vaheofqp2q24l65r@alap3.anarazel.de
2022-05-12 12:39:33 -07:00
Andres Freund 09cd33f47b Add 'static' to file-local variables missing it.
Noticed when comparing the set of exported symbols without / with
-fvisibility=hidden after adding PGDLLIMPORT to intentionally exported
symbols.

Discussion: https://postgr.es/m/20220512164513.vaheofqp2q24l65r@alap3.anarazel.de
2022-05-12 12:39:33 -07:00
Andres Freund 905c020bef Add missing 'extern' to function prototypes.
Postgres style is to spell out extern. Noticed while scripting adding
PGDLLIMPORT markers to functions.

Discussion: https://postgr.es/m/20220512164513.vaheofqp2q24l65r@alap3.anarazel.de
2022-05-12 12:39:33 -07:00
Tom Lane c2f436151e Do pre-release housekeeping on catalog data.
Run renumber_oids.pl to move high-numbered OIDs down, as per pre-beta
tasks specified by RELEASE_CHANGES.  For reference, the command was

./renumber_oids.pl --first-mapped-oid 8000 --target-oid 6205
2022-05-12 15:35:15 -04:00
Tom Lane 23e7b38bfe Pre-beta mechanical code beautification.
Run pgindent, pgperltidy, and reformat-dat-files.
I manually fixed a couple of comments that pgindent uglified.
2022-05-12 15:17:30 -04:00
Tom Lane 93909599cd libpq: drop pending pipelined commands in pqDropConnection().
The original coding did this in pqDropServerData(), which seems
fairly backwards.  Pending commands are more like already-queued
output data, which is dropped in pqDropConnection().  Moving the
operation means that we clear the command queue immediately upon
detecting connection drop, which improves the sanity of subsequent
behavior.  In particular this eliminates duplicated error message
text as a consequence of code added in b15f25446, which supposed
that a nonempty command queue must mean the prior operation is
still active.

There might be an argument for backpatching this to v14; but as with
b15f25446, I'm unsure about interactions with 618c16707.  For now,
given the lack of complaints about v14's behavior, leave it alone.

Per report from Peter Eisentraut.

Discussion: https://postgr.es/m/de57761c-b99b-3435-b0a6-474c72b1149a@enterprisedb.com
2022-05-12 13:08:31 -04:00
Andres Freund b5f44225b8 Mark a few 'bbsink' related functions / variables static.
Discussion: https://postgr.es/m/20220506234924.6mxxotl3xl63db3l@alap3.anarazel.de
2022-05-12 09:11:31 -07:00
Tom Lane 79b58c6f68 Make pull_var_clause() handle GroupingFuncs exactly like Aggrefs.
This follows in the footsteps of commit 2591ee8ec by removing one more
ill-advised shortcut from planning of GroupingFuncs.  It's true that
we don't intend to execute the argument expression(s) at runtime, but
we still have to process any Vars appearing within them, or we risk
failure at setrefs.c time (or more fundamentally, in EXPLAIN trying
to print such an expression).  Vars in upper plan nodes have to have
referents in the next plan level, whether we ever execute 'em or not.

Per bug #17479 from Michael J. Sullivan.  Back-patch to all supported
branches.

Richard Guo

Discussion: https://postgr.es/m/17479-6260deceaf0ad304@postgresql.org
2022-05-12 11:31:46 -04:00
Michael Paquier 5edeb57428 Add some missing PGDLLIMPORT markings
Three variables in pqsignal.h (UnBlockSig, BlockSig and StartupBlockSig)
were not marked with PGDLLIMPORT, as they are declared in a way that
prevents mark_pgdllimport.pl from detecting them.  These variables are
redefined in a style more consistent with the other headers, allowing
the script to find and mark them.

PGDLLIMPORT was missing for __pg_log_level in logging.h, so add it
back.  The marking got accidentally removed in 9a374b77, just after its
addition in 8ec5694.

While on it, add a comment in mark_pgdllimport.pl explaining what
arguments the script needs (i.e., a list of header paths).

Reported-by: Andres Freund
Discussion: https://postgr.es/m/20220506234924.6mxxotl3xl63db3l@alap3.anarazel.de
2022-05-12 15:08:45 +09:00
John Naylor 7761b9faab Use correct datum macros in more tuplesort specialization functions.
Also clarify that ApplySignedSortComparator() is not built on 32-bit machines.

Follow-up to c90c16591

Reviewed-by: David Rowley
Discussion: https://www.postgresql.org/message-id/CAFBsxsFmt4_JUP8XgSJqwaAS9a9s8K8_PvMu%3Dj%3DDfwU%3D8QjNPw%40mail.gmail.com
2022-05-12 09:20:32 +07:00
Robert Haas ab02d702ef Remove non-functional code for unloading loadable modules.
The code for unloading a library has been commented-out for over 12
years, ever since commit 602a9ef5a7, and we're
no closer to supporting it now than we were back then.

Nathan Bossart, reviewed by Michael Paquier and by me.

Discussion: http://postgr.es/m/Ynsc9bRL1caUSBSE@paquier.xyz
2022-05-11 15:30:30 -04:00
Michael Paquier 45edde037e Fix typos and grammar in code and test comments
This fixes the grammar of some comments in a couple of tests (SQL and
TAP), and in some C files.

Author: Justin Pryzby
Discussion: https://postgr.es/m/20220511020334.GH19626@telsasoft.com
2022-05-11 15:38:55 +09:00
Thomas Munro 0d3431497d Add logging for excessive ProcSignalBarrier waits.
To enable diagnosis of systems that are not processing ProcSignalBarrier
requests promptly, add a LOG message every 5 seconds if we seem to be
wedged.  Although you could already see this state as a wait event in
pg_stat_activity, the log message also shows the PID of the process that
is preventing progress.

Also add DEBUG1 logging around the whole wait loop.

Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Discussion: https://postgr.es/m/CA%2BTgmoYJ03r5359gQutRGP9BtigYCg3_UskcmnVjBf-QO3-0pQ%40mail.gmail.com
2022-05-11 18:03:03 +12:00
Amit Kapila f95d53eded Fix the logical replication timeout during large transactions.
The problem is that we don't send keep-alive messages for a long time
while processing large transactions during logical replication where we
don't send any data of such transactions. This can happen when the table
modified in the transaction is not published or because all the changes
got filtered. We do try to send the keep_alive if necessary at the end of
the transaction (via WalSndWriteData()) but by that time the
subscriber-side can timeout and exit.

To fix this, we try to send the keepalive message, if required, after
processing a certain threshold of changes.

Reported-by: Fabrice Chapuis
Author: Wang wei and Amit Kapila
Reviewed By: Masahiko Sawada, Euler Taveira, Hou Zhijie, Hayato Kuroda
Backpatch-through: 10
Discussion: https://postgr.es/m/CAA5-nLARN7-3SLU_QUxfy510pmrYK6JJb=bk3hcgemAM_pAv+w@mail.gmail.com
2022-05-11 11:11:44 +05:30
Michael Paquier 8bbf8461a3 Silence extra logging when using "postgres -C" on runtime-computed GUCs
Presently, the server may emit a variety of log messages when inspecting
a runtime-computed GUC, mostly in the shape of one LOG message with the
default configuration, related to the startup sequence launched as such
GUCs require a load of the control file and of external shared
libraries.

For example, the server will always emit a "database system is shut
down" LOG (unless the user has set log_min_messages higher than LOG),
which is an annoying behavior as "postgres -C" is expected to only emit
in its output the parameter value we are looking for.  The parameter
value is sent to stdout, while the logs are sent to stderr so we could
recommend to use a redirection, but there was not much love for this
workaround either.

To avoid such extra log messages, per discussion, this change sets
log_min_messages to FATAL internally when -C is used on a
runtime-computed GUC (even if set to PANIC in postgresql.conf).  At
FATAL, the user will still receive messages explaining why a GUC value
cannot be inspected, and will know if the command is attempted on a
server already running, something not supported yet for a
runtime-computed GUC.

Reported-by: Magnus Hagander, Bruce Momjian
Author: Nathan Bossart, Michael Paquier
Discussion: https://postgr.es/m/Yni6ZHkGotUU+RSf@paquier.xyz
2022-05-11 14:21:06 +09:00
Peter Eisentraut 3aa7a3d2a3 Add missing source files to nls.mk 2022-05-11 06:16:56 +02:00
Michael Paquier 0826ac89ac Improve setup of environment values for commands in MSVC's vcregress.pl
The current setup assumes that commands for lz4, zstd and gzip always
exist by default if not enforced by a user's environment.  However,
vcpkg, as one example, installs libraries but no binaries, so this
default setup to assume that a command should always be present would
cause failures.  This commit improves the detection of such external
commands as follows:
* If an ENV value is available, trust the environment/user and use it.
* If an ENV value is not available, check whether the command can be executed
by looking in the current PATH, launching a simple "$command --version" (that
should be portable enough).
** On execution failure, ignore ENV{command}.
** On execution success, set ENV{command} = "$command".

Note that this new rule applies to gzip, lz4 and zstd but not tar that
we assume will always exist.  Those commands are set up in the
environment only when using bincheck and taptest.  The CI includes all
those commands and I have checked that their setup is correct there.  I
have also tested this change in a MSVC environment where we have none of
those commands.

While on it, remove the references to lz4 from the documentation and
vcregress.pl in ~v13.  --with-lz4 has been added in v14~, so there is no
point in having this information in these older branches.

Reported-by: Andrew Dunstan
Reviewed-by: Andrew Dunstan
Discussion: https://postgr.es/m/14402151-376b-a57a-6d0c-10ad12608e12@dunslane.net
Backpatch-through: 10
2022-05-11 10:21:52 +09:00
David Rowley c90c16591c Fix some incorrect preprocessor tests in tuplesort specializations
697492434 added 3 new quicksort specialization functions for common
datatypes.

That commit was not very consistent in how it would determine if we're
compiling for 32-bit or 64-bit machines.  It would sometimes use
USE_FLOAT8_BYVAL and at other times check if SIZEOF_DATUM == 8.  This
could cause theoretical problems due to the way USE_FLOAT8_BYVAL is now
defined based on SIZEOF_VOID_P >= 8.  If pointers for some reason were
ever larger than 8 bytes, then we'd end up doing 32-bit comparisons
mistakenly.  Let's just always check SIZEOF_DATUM >= 8.

It also seems that ssup_datum_signed_cmp is just never used on 32-bit
builds, so let's just ifdef that out to make sure we never accidentally
use that comparison function on such machines.  This also allows us to
ifdef out 1 of the 3 new specialization quicksort functions in 32-bit
builds which seems to shrink down the binary by over 4KB on my machine.

In passing, also add the missing DatumGetInt32() / DatumGetInt64() macros
in the comparison functions.
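
Roughly the resulting shape (a simplified sketch, not the exact committed
code):

  #if SIZEOF_DATUM >= 8
  int
  ssup_datum_signed_cmp(Datum x, Datum y, SortSupport ssup)
  {
      int64       xx = DatumGetInt64(x);  /* previously cast without the macro */
      int64       yy = DatumGetInt64(y);

      if (xx < yy)
          return -1;
      else if (xx > yy)
          return 1;
      else
          return 0;
  }
  #endif                          /* not built on 32-bit machines */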

Discussion: https://postgr.es/m/CAApHDvqcQExRhtRa9hJrJB_5egs3SUfOcutP3m+3HO8A+fZTPA@mail.gmail.com
Reviewed-by: John Naylor
2022-05-11 11:38:13 +12:00
Peter Eisentraut 9700b250c5 Formatting and punctuation improvements in sample configuration files 2022-05-10 21:15:56 +02:00
Peter Eisentraut 93e6892f67 Remove some tabs in SQL code in C string literals
This is not handled uniformly throughout the code, but at least nearby
code can be consistent.
2022-05-10 20:57:37 +02:00
Michael Paquier 7dd3ee5084 Fix several issues with the TAP tests of pg_upgrade
This commit addresses the following issues in the TAP tests of
pg_upgrade, introduced in 322becb:
- Remove --port and --host for commands that already rely on a node's
environment PGHOST and PGPORT.
- Switch from run_log() to command_ok(), as all the commands executed in
the tests should succeed.
- Change EXTRA_REGRESS_OPTS so that it counts as a shell fragment (fixing
s/OPT/OPTS along the way), to be compatible with the various Makefiles using
it as well as 027_stream_regress.pl in the recovery tests.  The command
built to execute pg_regress is reformatted, while on it, to match what the
recovery test doing the same thing uses (we should refactor and consolidate
that in the future, perhaps).
- Re-add the test for database names stressing the behavior of
backslashes with double quotes, mostly here for Windows.

Tests doable with the upgrade across different major versions still work
the same way.

Reported-by: Noah Misch
Discussion: https://postgr.es/m/20220502042718.GB1565149@rfd.leadboat.com
2022-05-10 11:31:31 +09:00
Tom Lane fe20afaee8 Fix core dump in transformValuesClause when there are no columns.
The parser code that transformed VALUES from row-oriented to
column-oriented lists failed if there were zero columns.
You can't write that straightforwardly (though probably you
should be able to), but the case can be reached by expanding
a "tab.*" reference to a zero-column table.

Per bug #17477 from Wang Ke.  Back-patch to all supported branches.

Discussion: https://postgr.es/m/17477-0af3c6ac6b0a6ae0@postgresql.org
2022-05-09 14:15:37 -04:00
Tom Lane 29904f5f2f Revert "Disallow infinite endpoints in generate_series() for timestamps."
This reverts commit eafdf9de06
and its back-branch counterparts.  Corey Huinker pointed out that
we'd discussed this exact change back in 2016 and rejected it,
on the grounds that there's at least one usage pattern with LIMIT
where an infinite endpoint can usefully be used.  Perhaps that
argument needs to be re-litigated, but there's no time left before
our back-branch releases.  To keep our options open, restore the
status quo ante; if we do end up deciding to change things, waiting
one more quarter won't hurt anything.

Rather than just doing a straight revert, I added a new test case
demonstrating the usage with LIMIT.  That'll at least remind us of
the issue if we forget again.

Discussion: https://postgr.es/m/3603504.1652068977@sss.pgh.pa.us
Discussion: https://postgr.es/m/CADkLM=dzw0Pvdqp5yWKxMd+VmNkAMhG=4ku7GnCZxebWnzmz3Q@mail.gmail.com
2022-05-09 11:40:40 -04:00
Noah Misch 0abc1a059e In REFRESH MATERIALIZED VIEW, set user ID before running user code.
It intended to, but did not, achieve this.  Adopt the new standard of
setting user ID just after locking the relation.  Back-patch to v10 (all
supported versions).
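
The standard pattern referred to looks roughly like this (simplified sketch;
the helper name is made up, and the real code also deals with GUC nesting and
restoring state on error):

  static void
  run_as_owner(Relation rel)
  {
      Oid         save_userid;
      int         save_sec_context;

      GetUserIdAndSecContext(&save_userid, &save_sec_context);
      SetUserIdAndSecContext(rel->rd_rel->relowner,
                             save_sec_context | SECURITY_RESTRICTED_OPERATION);

      /* ... do the work that may reach user-defined code ... */

      SetUserIdAndSecContext(save_userid, save_sec_context);
  }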

Reviewed by Simon Riggs.  Reported by Alvaro Herrera.

Security: CVE-2022-1552
2022-05-09 08:35:08 -07:00
Noah Misch a117cebd63 Make relation-enumerating operations be security-restricted operations.
When a feature enumerates relations and runs functions associated with
all found relations, the feature's user shall not need to trust every
user having permission to create objects.  BRIN-specific functionality
in autovacuum neglected to account for this, as did pg_amcheck and
CLUSTER.  An attacker having permission to create non-temp objects in at
least one schema could execute arbitrary SQL functions under the
identity of the bootstrap superuser.  CREATE INDEX (not a
relation-enumerating operation) and REINDEX protected themselves too
late.  This change extends to the non-enumerating amcheck interface.
Back-patch to v10 (all supported versions).

Sergey Shinderuk, reviewed (in earlier versions) by Alexander Lakhin.
Reported by Alexander Lakhin.

Security: CVE-2022-1552
2022-05-09 08:35:08 -07:00
Peter Eisentraut f45f8b7ff3 Add missing source files to nls.mk 2022-05-09 07:17:08 +02:00
Michael Paquier 7863ee4def Fix control file update done in restartpoints still running after promotion
If a cluster is promoted (aka the control file shows a state different
than DB_IN_ARCHIVE_RECOVERY) while CreateRestartPoint() is still
processing, this function could miss an update of the control file for
"checkPoint" and "checkPointCopy" but still do the recycling and/or
removal of the past WAL segments, assuming that the to-be-updated LSN
values should be used as reference points for the cleanup.  This causes
a follow-up restart attempting crash recovery to fail with a PANIC on a
missing checkpoint record if the end-of-recovery checkpoint triggered by
the promotion did not complete while the cluster abruptly stopped or
crashed before the completion of this checkpoint.  The PANIC would be
caused by the redo LSN referred in the control file as located in a
segment already gone, recycled by the previous restartpoint with
"checkPoint" out-of-sync in the control file.

This commit fixes the update of the control file during restartpoints so
that "checkPoint" and "checkPointCopy" are updated even if the cluster has
been promoted while a restartpoint is running, keeping them in line with the
set of WAL segments actually recycled at the end of CreateRestartPoint().

This problem exists in all the stable branches.  However, commit
7ff23c6, by removing the last call of CreateCheckPoint() from the
startup process, has made this bug much easier to reason about as
concurrent checkpoints are not possible anymore.  No backpatch is done
yet, mostly out of caution from me as a point release is close by, but
we need to think harder about the case of concurrent checkpoints at
promotion if the bgwriter is not considered as running by the startup
process in ~v14, so this change is done only on HEAD for the moment.

Reported-by: Fujii Masao, Rui Zhao
Author: Kyotaro Horiguchi
Reviewed-by: Nathan Bossart, Michael Paquier
Discussion: https://postgr.es/m/20220316.102444.2193181487576617583.horikyota.ntt@gmail.com
2022-05-09 08:39:59 +09:00
Thomas Munro a22652ebbc Fix race in 032_relfilenode_reuse.pl.
Add wait_for_catchup() call to the test added by commit e2f65f42.  Per
slow build farm animal grison.

Also fix a comment.

Discussion: https://postgr.es/m/CA%2BhUKGLJ2Vy8hVQmnYotmTaEKZK0%3D-GcXgNAgcHzArZvtS4L_g%40mail.gmail.com
2022-05-08 19:19:36 +12:00
Thomas Munro e2f65f4255 Fix old-fd issues using global barriers everywhere.
Commits 4eb21763 and b74e94dc introduced a way to force every backend to
close all relation files, to fix an ancient Windows-only bug.

This commit extends that behavior to all operating systems and adds
a couple of extra barrier points, to fix a totally different class of
bug: the reuse of relfilenodes in scenarios that have no other kind of
cache invalidation to prevent file descriptor mix-ups.

In all releases, data corruption could occur when you moved a database
to another tablespace and then back again.  Despite that, no back-patch
for now as the infrastructure required is too new and invasive.  In
master only, since commit aa010514, it could also happen when using
CREATE DATABASE with a user-supplied OID or via pg_upgrade.

Author: Andres Freund <andres@anarazel.de>
Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Reviewed-by: Thomas Munro <thomas.munro@gmail.com>
Discussion: https://postgr.es/m/20220209220004.kb3dgtn2x2k2gtdm%40alap3.anarazel.de
2022-05-07 16:47:29 +12:00
Thomas Munro b74e94dc27 Rethink PROCSIGNAL_BARRIER_SMGRRELEASE.
With sufficiently bad luck, it was possible for IssuePendingWritebacks()
to reopen a file after we'd processed PROCSIGNAL_BARRIER_SMGRRELEASE and
before the file was unlinked by some other backend.  That left a small
hole in commit 4eb21763's plan to fix all spurious errors from DROP
TABLESPACE and similar on Windows.

Fix by closing md.c's segments, instead of just closing fd.c's
descriptors, and then teaching smgrwriteback() not to open files that
aren't already open.

Reported-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Discussion: https://postgr.es/m/20220209220004.kb3dgtn2x2k2gtdm%40alap3.anarazel.de
2022-05-07 16:32:10 +12:00
Robert Haas 701d918a42 Fix misleading comments about background worker registration.
Since 6bc8ef0b7f, the maximum number
of backends can't change as background workers are registered, but
these comments still reflect the way things worked prior to that.

Also, per recent discussion, some modules call SetConfigOption()
from _PG_init(). It's not entirely clear to me whether we want to
regard that as a fully supported operation, but since we know it's
a thing that happens, it at least deserves a mention in the comments,
so add that.

Nathan Bossart, reviewed by Anton A. Melnikov

Discussion: http://postgr.es/m/20220419154658.GA2487941@nathanxps13
2022-05-06 09:24:06 -04:00
Daniel Gustafsson 17ec5fa502 Clear the OpenSSL error queue before cryptohash operations
Setting up an EVP context for ciphers banned under FIPS generates
two OpenSSL errors in the queue, and as we only consume one from
the queue, the other remains at the head for the next invocation:

  postgres=# select md5('foo');
  ERROR:  could not compute MD5 hash: unsupported
  postgres=# select md5('foo');
  ERROR:  could not compute MD5 hash: initialization error

Clearing the error queue when creating the context ensures that
we don't pull in an error from an earlier operation.
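
The pattern is roughly the following (sketch; the helper name is made up):

  #include <openssl/err.h>
  #include <openssl/evp.h>

  static EVP_MD_CTX *
  create_md5_context(void)
  {
      EVP_MD_CTX *ctx;

      /* Discard errors left over from earlier, unrelated operations ... */
      ERR_clear_error();

      ctx = EVP_MD_CTX_create();
      if (ctx == NULL || EVP_DigestInit_ex(ctx, EVP_md5(), NULL) != 1)
      {
          /* ... so that the error consumed here describes *this* failure */
          if (ctx)
              EVP_MD_CTX_destroy(ctx);
          return NULL;
      }
      return ctx;
  }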

Discussion: https://postgr.es/m/C89D932C-501E-4473-9750-638CFCD9095E@yesql.se
2022-05-06 14:41:31 +02:00
Michael Paquier 59a32f0093 Fix typo in origin.c
Introduced in 5aa2350.

Author: Peter Smith
Discussion: https://postgr.es/m/CAHut+PsuWz6_7aCmivNU5FahgQxDUTQtc3+__XnWkBzQcfn43w@mail.gmail.com
2022-05-06 20:01:15 +09:00
Peter Eisentraut 7e367924e3 Update SQL features
Update a few items that have become supported or mostly supported but
weren't updated at the time.
2022-05-06 09:17:38 +02:00
Tom Lane ab3479bf55 Update time zone data files to tzdata release 2022a.
DST law changes in Palestine.  Historical corrections for
Chile and Ukraine.
2022-05-05 14:54:53 -04:00
Andres Freund 9e6b7b45ca Fix timing issue in deadlock recovery conflict test.
Per buildfarm members longfin and skink.

Discussion: https://postgr.es/m/20220413002626.udl7lll7f3o7nre7@alap3.anarazel.de
Backpatch: 10-
2022-05-04 12:50:38 -07:00
Tom Lane c40ba5f318 Fix rowcount estimate for SubqueryScan that's under a Gather.
SubqueryScan was always getting labeled with a rowcount estimate
appropriate for non-parallel cases.  However, nodes that are
underneath a Gather should be treated as processing only one
worker's share of the rows, whether the particular node is explicitly
parallel-aware or not.  Most non-scan-level node types get this
right automatically because they base their rowcount estimate on
that of their input sub-Path(s).  But SubqueryScan didn't do that,
instead using the whole-relation rowcount estimate as if it were
a non-parallel-aware scan node.  If there is a parallel-aware node
below the SubqueryScan, this is wrong, and it results in inflating
the cost estimates for nodes above the SubqueryScan, which can cause
us to not choose a parallel plan, or choose a silly one --- as indeed
is visible in the one regression test whose results change with this
patch.  (Although that plan tree appears to contain no SubqueryScans,
there were some in it before setrefs.c deleted them.)

To fix, use path->subpath->rows not baserel->tuples as the number
of input tuples we'll process.  This requires estimating the quals'
selectivity afresh, which is slightly annoying; but it shouldn't
really add much cost thanks to the caching done in RestrictInfo.

This is pretty clearly a bug fix, but I'll refrain from back-patching
as people might not appreciate plan choices changing in stable branches.
The fact that it took us this long to identify the bug suggests that
it's not a major problem.

Per report from bucoo, though this is not his proposed patch.

Discussion: https://postgr.es/m/202204121457159307248@sohu.com
2022-05-04 14:44:40 -04:00
Peter Eisentraut dc2be6ed47 Remove JsonPathSpec typedef
It doesn't seem very useful, and it's a bit in the way of the planned
node support automation.

Discussion: https://www.postgresql.org/message-id/202204191140.3wsbevfhqmu3@alvherre.pgsql
2022-05-04 17:36:31 +02:00
Peter Eisentraut d47a11da9e Add missing enum tag in enum used in nodes
Similar to 983bdc4fac.

Author: Alvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://www.postgresql.org/message-id/202204191140.3wsbevfhqmu3@alvherre.pgsql
2022-05-04 17:34:22 +02:00
Peter Eisentraut cfb63b994e Simplify configure test
The test for lz4.h used AC_CHECK_HEADERS, but nothing was using the
resulting symbol HAVE_LZ4_H.  Change this to use AC_CHECK_HEADER
instead.  This was probably an oversight, seeing that the nearby
similar tests do this correctly.
2022-05-04 14:20:36 +02:00
Daniel Gustafsson 0432490d29 Rename libpq test programs with libpq_ prefix
The testclient and uri-regress programs in the libpq test suite had
quite generic names which didn't convey much meaning. Since they are
installed as part of the MSVC test runs, ensure that their purpose
is a little bit clearer by renaming with a libpq_ prefix. While at
it rename uri-regress to uri_regress to avoid mixing dash and
underscore in the same filename.

Reported-by: Noah Misch <noah@leadboat.com>
Discussion: https://postgr.es/m/20220501080706.GA1542365@rfd.leadboat.com
2022-05-04 14:15:25 +02:00
Peter Eisentraut 2e77180d45 Fix incorrect format placeholders 2022-05-04 07:57:39 +02:00
Andres Freund 8f1537d10e Fix possibility of self-deadlock in ResolveRecoveryConflictWithBufferPin().
The tests added in 9f8a050f68 failed nearly reliably on FreeBSD in CI, and
occasionally on the buildfarm. That turns out to be caused not by a bug in the
test, but by a longstanding bug in recovery conflict handling.

The standby timeout handler, used by ResolveRecoveryConflictWithBufferPin(),
executed SendRecoveryConflictWithBufferPin() inside a signal handler. A bad
idea, because the deadlock timeout handler (or a spurious latch set) could
have interrupted ProcWaitForSignal(). If unlucky that could cause a
self-deadlock on ProcArrayLock, if the deadlock check is in
SendRecoveryConflictWithBufferPin()->CancelDBBackends().

To fix, set a flag in StandbyTimeoutHandler(), and check the flag in
ResolveRecoveryConflictWithBufferPin().
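
In outline, this is the usual set-a-flag-in-the-handler pattern (sketch,
simplified; the second helper's name is made up):

  static volatile sig_atomic_t got_standby_timeout = false;

  /* timeout handler: only set a flag, do no real work here */
  static void
  StandbyTimeoutHandler(void)
  {
      got_standby_timeout = true;
  }

  /* checked from ResolveRecoveryConflictWithBufferPin()'s wait loop */
  static void
  SendRecoveryConflictIfTimedOut(void)
  {
      if (!got_standby_timeout)
          return;
      got_standby_timeout = false;
      SendRecoveryConflictWithBufferPin(PROCSIG_RECOVERY_CONFLICT_BUFFERPIN);
  }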

Subsequently the recovery conflict tests will be backpatched.

Discussion: https://postgr.es/m/20220413002626.udl7lll7f3o7nre7@alap3.anarazel.de
Backpatch: 10-
2022-05-02 18:25:00 -07:00
Andres Freund 21e184403b Add tests for recovery deadlock conflicts.
The recovery conflict tests added in 9f8a050f68 surfaced a bug in the
interaction between buffer pin and deadlock recovery conflicts. To make sure
that the bugfix won't break deadlock conflict detection, add a test for that
scenario.

031_recovery_conflict.pl will later be backpatched, with this included.

Discussion: https://postgr.es/m/20220413002626.udl7lll7f3o7nre7@alap3.anarazel.de
2022-05-02 17:19:11 -07:00
Etsuro Fujita d89f97e83e Fix typo in comment. 2022-05-02 16:45:00 +09:00
Jeff Davis ed57cac84d pg_walinspect: fix case where flush LSN is in the middle of a record.
Instability in the test for pg_walinspect revealed that
pg_get_wal_records_info_till_end_of_wal(x) would try to decode all the
records with a start LSN earlier than the flush LSN, even though that
might include a partial record at the end of the range. In that case,
read_local_xlog_page_no_wait() would return NULL when it tried to read
past the flush LSN, which would be interpreted as an error by the
caller. That caused a test failure only on a BF animal that had been
restarted recently, but could be expected to happen in the wild quite
easily depending on the alignment of various parameters.

Fix by using private data in read_local_xlog_page_no_wait() to signal
end-of-wal to the caller, so that it can be properly distinguished
from a real error.
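
Roughly, the private data carries an explicit end-of-WAL flag (sketch; the
struct and field names follow the description above and are an assumption):

  typedef struct ReadLocalXLogPageNoWaitPrivate
  {
      /*
       * Set by the page-read callback when it runs past the flush LSN; the
       * callback then returns -1, and the caller checks this flag to tell
       * benign end-of-WAL apart from a real read error.
       */
      bool        end_of_wal;
  } ReadLocalXLogPageNoWaitPrivate;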

Discussion: https://postgr.es/m/Ymd/e5eeZMNAkrXo%40paquier.xyz
Discussion: https://postgr.es/m/111657.1650910309@sss.pgh.pa.us

Authors: Thomas Munro, Bharath Rupireddy.
2022-04-30 09:05:32 -07:00
Tom Lane ccd10a9bfa Tighten enforcement of variable CONSTANT markings in plpgsql.
I noticed that plpgsql would allow assignment of a new value to a
variable even when that variable is marked CONSTANT, if the variable
is used as an output parameter in CALL or is a refcursor variable
that OPEN assigns a new value to.  Fix these oversights.

In the CALL case, the check has to be done at runtime because we
cannot know at parse time which parameters are OUT parameters.
For OPEN, it seems best to likewise enforce at runtime because
then we needn't throw error if the variable has a nonnull value
(since OPEN will only try to overwrite a null value).

Although this is surely a bug fix, no back-patch: it seems unlikely
that anyone would thank us for breaking formerly-working code in
minor releases.

Discussion: https://postgr.es/m/214453.1651182729@sss.pgh.pa.us
2022-04-30 11:54:28 -04:00
Andrew Dunstan a79153b7a2 Claim SQL standard compliance for SQL/JSON features
Discussion: https://postgr.es/m/d03d809c-d0fb-fd6a-1476-d6dc18ec940e@dunslane.net
2022-04-29 09:01:05 -04:00
Andrew Dunstan 9c3d25e178 Fix JSON_OBJECTAGG uniquefying bug
Commit f4fb45d15c contained a bug in the removal of items with null values
when unique keys are required, which misbehaved when the leading, sorted
items contained such values.  Fix that and add a test for it.

Discussion: https://postgr.es/m/CAJA4AWQ_XbSmsNbW226UqNyRLJ+wb=iQkQMj77cQyoNkqtf=2Q@mail.gmail.com
2022-04-28 15:28:20 -04:00
Etsuro Fujita 5c854e7a2c Disable asynchronous execution if using gating Result nodes.
mark_async_capable_plan(), which is called from create_append_plan() to
determine whether subplans are async-capable, failed to take into account
that the subplan created from a given subpath might include a gating Result
node if the subpath is a SubqueryScanPath or ForeignPath.  In the
SubqueryScanPath case this caused a segmentation fault; in the ForeignPath
case the Result node was unintentionally considered async-capable, making
ExecAsyncRequest() throw an error about an unrecognized node type, since we
don't currently support executing Result nodes asynchronously.  Fix by
modifying mark_async_capable_plan() to disable asynchronous execution in
such cases.  Also, adjust code in the ProjectionPath case in
mark_async_capable_plan(), for consistency with other cases, and
adjust/improve comments there.

is_async_capable_path() added in commit 27e1f1456, which was rewritten
to mark_async_capable_plan() in a later commit, has the same issue,
causing the error at execution mentioned above, so back-patch to v14
where the aforesaid commit went in.

Per report from Justin Pryzby.

Etsuro Fujita, reviewed by Zhihong Yu and Justin Pryzby.

Discussion: https://postgr.es/m/20220408124338.GK24419%40telsasoft.com
2022-04-28 15:15:00 +09:00
Michael Paquier 55b5686511 Revert recent changes with durable_rename_excl()
This reverts commits 2c902bb and ccfbd92.  Per buildfarm members
kestrel, rorqual and calliphoridae, the assertions checking that a TLI
history file should not exist when created by a WAL receiver have been
failing, and switching to durable_rename() over durable_rename_excl()
would cause the newest TLI history file to overwrite the existing one.
We need to think harder about such cases, so revert the new logic for
now.

Note that all the failures have been reported in the test
025_stuck_on_old_timeline.

Discussion: https://postgr.es/m/511362.1651116498@sss.pgh.pa.us
2022-04-28 13:08:16 +09:00
John Naylor e84f82ab5c Fix SQL syntax in comment in logical/worker.c
Euler Taveira

Discussion: https://www.postgresql.org/message-id/25f95189-eef8-43c4-9d7b-419b651963c8%40www.fastmail.com
2022-04-28 09:27:32 +07:00
Michael Paquier 2c902bbf19 Remove durable_rename_excl()
ccfbd92 has replaced all existing in-core callers of this function in
favor of durable_rename().  durable_rename_excl() is by nature unsafe on
crashes happening at the wrong time, so just remove it.

Author: Nathan Bossart
Reviewed-by: Robert Haas, Kyotaro Horiguchi, Michael Paquier
Discussion: https://postgr.es/m/20220407182954.GA1231544@nathanxps13
2022-04-28 11:10:40 +09:00
Michael Paquier ccfbd9287d Replace existing durable_rename_excl() calls with durable_rename()
durable_rename_excl() attempts to avoid overwriting any existing files
by using link() and unlink(), falling back to rename() on some platforms
(e.g., Windows where link() followed by unlink() is not concurrent-safe,
see 909b449).  Most callers of durable_rename_excl() use it just in case
there is an existing file, but it happens that for all of them we never
expect a target file to exist (WAL segment recycling, creation of
timeline history file and basic_archive).

basic_archive used durable_rename_excl() to avoid overwriting an archive
concurrently created by another server.  Now, there is a stat() call to
avoid overwriting an existing archive a couple of lines above, so note
that this change opens a small TOCTOU window in this module between the
stat() call and durable_rename().

Furthermore, as mentioned in the top comment of durable_rename_excl(),
this routine can result in multiple hard links to the same file and data
corruption, with two or more links to the same file in pg_wal/ if a
crash happens before the unlink() call during WAL recycling.
Specifically, this would produce links to the same file for the current
WAL file and the next one because the half-recycled WAL file was
re-recycled during crash recovery of a follow-up cluster restart.

This change replaces all calls to durable_rename_excl() with
durable_rename().  This removes the protection against accidentally
overwriting an existing file, but some platforms are already living
without it, and all those code paths never expect an existing file (a
couple of assertions are added to check after that, in case).

This is a bug fix, but given the unlikeliness of the problem, which involves
one or more crashes at an exceptionally bad moment, no backpatch is done.
This could be revisited in the future.

Author: Nathan Bossart
Reviewed-by: Robert Haas, Kyotaro Horiguchi, Michael Paquier
Discussion: https://postgr.es/m/20220407182954.GA1231544@nathanxps13
2022-04-28 10:11:45 +09:00
Peter Eisentraut 755df30e48 Fix incorrect format placeholders 2022-04-27 09:49:10 +02:00
Peter Eisentraut 9ddf251f94 Handle NULL fields in WRITE_INDEX_ARRAY
Unlike existing WRITE_*_ARRAY macros, WRITE_INDEX_ARRAY needs to
handle the case that the field is NULL.  We already have the
convention to print NULL fields as "<>", so we do that here as well.
There is currently no corresponding read function for this, so reading
this back in is not implemented, but it could be if needed.

Reported-by: Richard Guo <guofenglinux@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://www.postgresql.org/message-id/flat/CAMbWs4-LN%3DbF8f9eU2R94dJtF54DfDvBq%2BovqHnOQqbinYDrUw%40mail.gmail.com
2022-04-27 09:15:09 +02:00
Michael Paquier 21a10368eb Add some isolation tests for CLUSTER
This commit adds two isolation tests for CLUSTER, using:
- A normal table, making sure that CLUSTER blocks and completes if the
table is locked by a concurrent session.
- A partitioned table with a partition owned by a different user.  If
the partitioned table is locked by a concurrent session, CLUSTER on the
partitioned table should block.  If the partition owned by a different
user is locked, CLUSTER on its partitioned table should complete and
skip the partition.  3f19e17 has added an early check to ignore such a
partition with a SQL regression test, but this was not checking that
CLUSTER should not block.

Discussion: https://postgr.es/m/YlqveniXn9AI6RFZ@paquier.xyz
2022-04-26 13:41:17 +09:00
Andrew Dunstan b787c554c2 Inhibit mingw CRT's auto-globbing of command line arguments
For some reason by default the mingw C Runtime takes it upon itself to
expand program arguments that look like shell globbing characters. That
has caused much scratching of heads and mis-attribution of the causes of
some TAP test failures, so stop doing that.

This removes an inconsistency with Windows binaries built with MSVC,
which have no such behaviour.

Per suggestion from Noah Misch.

Backpatch to all live branches.

Discussion: https://postgr.es/m/20220423025927.GA1274057@rfd.leadboat.com
2022-04-25 15:47:55 -04:00
Alvaro Herrera dec8ad367e
Drop unlogged table after test is done
Another test is constructed on top of regression tests, which does not
work correctly with unlogged tables.  For now, cope with that by making
sure no unlogged table is left behind.

Per buildfarm pink after 4fb5c794e5.
2022-04-25 15:48:13 +02:00
Alvaro Herrera 4fb5c794e5
Cover brin/gin/gist/spgist ambuildempty routines in regression tests
Changing some TEMP or permanent tables to UNLOGGED is sufficient to
invoke these ambuildempty routines, none of which were previously covered
by any tests.  These changes do not otherwise affect the test suite.

Author: Amul Sul <sulamul@gmail.com>
Discussion: https://postgr.es/m/CAAJ_b95nneRCLM-=qELEdgCYSk6W_++-C+Q_t+wH3SW-hF50iw@mail.gmail.com
2022-04-25 15:00:49 +02:00
Alvaro Herrera 0bd56172b2
Always pfree strings returned by GetDatabasePath
Several places didn't do it, and in many cases it didn't matter because
it would be a small allocation in a short-lived context; but other
places may accumulate a few (for example, in CreateDatabaseUsingFileCopy,
one per tablespace).  In most databases this is highly unlikely to be
very serious either, but it seems better to make the code consistent in
case there's future copy-and-paste.

The only case of actual concern seems to be the aforementioned routine,
which is new with commit 9c08aea6a3, so there's no need to backpatch.

As pointed out by Coverity.
2022-04-25 10:32:13 +02:00
Tom Lane f819020d40 Fix incautious CTE matching in rewriteSearchAndCycle().
This function looks for a reference to the recursive WITH CTE,
but it checked only the CTE name not ctelevelsup, so that it could
seize on a lower CTE that happened to have the same name.  This
would result in planner failures later, either weird errors such as
"could not find attribute 2 in subquery targetlist", or crashes
or assertion failures.  The code also merely Assert'ed that it found
a matching entry, which is not guaranteed at all by the parser.

Per bugs #17320 and #17318 from Zhiyong Wu.
Thanks to Kyotaro Horiguchi for investigation.

Discussion: https://postgr.es/m/17320-70e37868182512ab@postgresql.org
Discussion: https://postgr.es/m/17318-2eb65a3a611d2368@postgresql.org
2022-04-23 12:16:12 -04:00
Noah Misch c1da0acbb0 Test ALIGNOF_DOUBLE==4 compatibility under ALIGNOF_DOUBLE==8.
Today's test case detected alignment problems only when executing on
AIX.  This change lets popular platforms detect the same problems.

Reviewed by Masahiko Sawada.

Discussion: https://postgr.es/m/20220415072601.GG862547@rfd.leadboat.com
2022-04-22 20:20:11 -07:00
Robert Haas a66e722cc1 Remove some recently-added pg_dump test cases.
Commit d2d3547979 included a pretty
extensive set of test cases, and some of them don't work on all
of our Windows machines. This happens because IPC::Run expands
its arguments as shell globs on a few machines, but doesn't on most
of the buildfarm. It might be good to fix that problem systematically
somehow, but in the meantime, there are enough test cases for this
commit that it seems OK to just remove the ones that are failing.

Discussion: http://postgr.es/m/3a190754-b2b0-d02b-dcfd-4ec1610ffbcb@dunslane.net
Discussion: http://postgr.es/m/CA+TgmoYRGUcFBy6VgN0+Pn4f6Wv=2H0HZLuPHqSy6VC8Ba7vdg@mail.gmail.com
2022-04-22 16:19:39 -04:00
David Rowley 99c754129d Fix performance regression in tuplesort specializations
697492434 added 3 new qsort specialization functions aimed at improving the
performance of sorting many of the common pass-by-value data types when
they're the leading or only sort key.

Unfortunately, that has caused a performance regression when sorting
datasets where many of the values being compared were equal.  What was
happening here was that we were falling back to the standard sort
comparison function to handle tiebreaks.  When the two given Datums
compared equally we would incur both the overhead of an indirect function
call to the standard comparer to perform the tiebreak and also the
standard comparer function would go and compare the leading key needlessly
all over again.

Here we improve the situation in the 3 new comparison functions.  We now
return 0 directly when the two Datums compare equally and we're performing
a 1-key sort.
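
Schematically, each specialized comparator now does roughly this (sketch):

  static int
  qsort_tuple_int32_compare(SortTuple *a, SortTuple *b, Tuplesortstate *state)
  {
      int         compare;

      compare = ApplyInt32SortComparator(a->datum1, a->isnull1,
                                         b->datum1, b->isnull1,
                                         &state->sortKeys[0]);
      if (compare != 0)
          return compare;

      /* 1-key sort: equal datums mean equal tuples, skip the tiebreak */
      if (state->onlyKey != NULL)
          return 0;

      /* multi-key sort: fall back to the full comparator for the other keys */
      return state->comparetup(a, b, state);
  }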

Here we don't do anything to help the multi-key sort case where the
leading key uses one of the sort specializations functions.  On testing
this case, even when the leading key's values are all equal, there
appeared to be no performance regression.  Let's leave it up to future
work to optimize that case so that the tiebreak function no longer
re-compares the leading key over again.

Another possible fix for this would have been to add 3 additional sort
specialization functions to handle single-key sorts for these
pass-by-value types.  The reason we didn't do that here is that we may
deem some other sort specialization to be more useful than single-key
sorts.  It may be impractical to have sort specialization functions for
every single combination of what may be useful and it was already decided
that further analysis into which ones are the most useful would be delayed
until the v16 cycle.  Let's not let this regression force our hand into
trying to make that decision for v15.

Author: David Rowley
Reviewed-by: John Naylor
Discussion: https://postgr.es/m/CA+hUKGJRbzaAOUtBUcjF5hLtaSHnJUqXmtiaLEoi53zeWSizeA@mail.gmail.com
2022-04-22 16:02:15 +12:00
Tom Lane 92e7a53752 Remove inadequate assertion check in CTE inlining.
inline_cte() expected to find exactly as many references to the
target CTE as its cterefcount indicates.  While that should be
accurate for the tree as emitted by the parser, there are some
optimizations that occur upstream of here that could falsify it,
notably removal of unused subquery output expressions.

Trying to make the accounting 100% accurate seems expensive and
doomed to future breakage.  It's not really worth it, because
all this code is protecting is downstream assumptions that every
referenced CTE has a plan.  Let's convert those assertions to
regular test-and-elog just in case there's some actual problem,
and then drop the failing assertion.

Per report from Tomas Vondra (thanks also to Richard Guo for
analysis).  Back-patch to v12 where the faulty code came in.

Discussion: https://postgr.es/m/29196a1e-ed47-c7ca-9be2-b1c636816183@enterprisedb.com
2022-04-21 17:58:52 -04:00
Tom Lane 914611ea73 Fix missed cases in libpq's error handling.
Commit 618c16707 invented an "error_result" flag in PGconn, which
intends to represent the state that we have an error condition and
need to build a PGRES_FATAL_ERROR PGresult from the message text in
conn->errorMessage, but have not yet done so.  (Postponing construction
of the error object simplifies dealing with out-of-memory conditions
and with concatenation of messages for multiple errors.)  For nearly all
purposes, this "virtual" PGresult object should act the same as if it
were already materialized.  But a couple of places in fe-protocol3.c
didn't get that memo, and were only testing conn->result as they used
to, without also checking conn->error_result.

In hopes of reducing the probability of similar mistakes in future,
I invented a pgHavePendingResult() macro that includes both tests.
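
The macro boils down to something like this (sketch):

  /* true if a materialized result or a pending "virtual" error result exists */
  #define pgHavePendingResult(conn) \
      ((conn)->result != NULL || (conn)->error_result)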

Per report from Peter Eisentraut.

Discussion: https://postgr.es/m/b52277b9-fa66-b027-4a37-fb8989c73ff8@enterprisedb.com
2022-04-21 17:12:49 -04:00
Tom Lane 2cb1272445 Rethink method for assigning OIDs to the template0 and postgres DBs.
Commit aa0105141 assigned fixed OIDs to template0 and postgres
in a very ad-hoc way.  Notably, instead of teaching Catalog.pm
about these OIDs, the unused_oids script was just hacked to
not show them as unused.  That's problematic since, for example,
duplicate_oids wouldn't report any future conflict.  Hence,
invent a macro DECLARE_OID_DEFINING_MACRO() that can be used to
define an OID that is known to Catalog.pm and will participate
in duplicate-detection as well as renumbering by renumber_oids.pl.
(We don't anticipate renumbering these particular OIDs, but we
might as well build out all the Catalog.pm infrastructure while
we're here.)

Another issue is that aa0105141 neglected to touch IsPinnedObject,
with the result that it now claimed template0 and postgres are
pinned.  The right thing to do there seems to be to teach it that
no database is pinned, since in fact DROP DATABASE doesn't check
for pinned-ness (and at least for these cases, that is an
intentional choice).  It's not clear whether this wrong answer
had any visible effect, but perhaps it could have resulted in
erroneous management of dependency entries.

In passing, rename the TemplateDbOid macro to Template1DbOid
to reduce confusion (likely we should have done that way back
when we invented template0, but we didn't), and rename the
OID macros for template0 and postgres to have a similar style.

There are no changes to postgres.bki here, so no need for a
catversion bump.

Discussion: https://postgr.es/m/2935358.1650479692@sss.pgh.pa.us
2022-04-21 16:23:15 -04:00
Tom Lane 40eba064b2 Use DECLARE_TOAST_WITH_MACRO() to simplify toast-table declarations.
This is needed so that renumber_oids.pl can handle renumbering
shared catalog declarations, which need to provide C macros for
the OIDs of the shared toast table and index.  The previous
method of writing a C macro separately was error-prone anyway.

Also teach renumber_oids.pl about DECLARE_UNIQUE_INDEX_PKEY,
as we missed doing when inventing that macro.

There are no changes to postgres.bki here, so no need for a
catversion bump.

Discussion: https://postgr.es/m/2995325.1650487527@sss.pgh.pa.us
2022-04-21 12:02:23 -04:00
Peter Geoghegan ba6af6aa0b vacuumlazy.c: MultiXactIds are MXIDs, not XMIDs.
Oversights in commits 0b018fab and f3c15cbe.
2022-04-20 18:29:02 -07:00
Peter Geoghegan 8ab0ebb9a8 Fix CLUSTER tuplesorts on abbreviated expressions.
CLUSTER sort won't use the datum1 SortTuple field when clustering
against an index whose leading key is an expression.  This makes it
unsafe to use the abbreviated keys optimization, which was missed by the
logic that sets up SortSupport state.  Affected tuplesorts output tuples
in a completely bogus order as a result (the wrong SortSupport based
comparator was used for the leading attribute).

This issue is similar to the bug fixed on the master branch by recent
commit cc58eecc5d.  But it's a far older issue, that dates back to the
introduction of the abbreviated keys optimization by commit 4ea51cdfe8.

Backpatch to all supported versions.

Author: Peter Geoghegan <pg@bowt.ie>
Author: Thomas Munro <thomas.munro@gmail.com>
Discussion: https://postgr.es/m/CA+hUKG+bA+bmwD36_oDxAoLrCwZjVtST2fqe=b4=qZcmU7u89A@mail.gmail.com
Backpatch: 10-
2022-04-20 17:17:43 -07:00
Tom Lane eafdf9de06 Disallow infinite endpoints in generate_series() for timestamps.
Such cases will lead to infinite loops, so they're of no practical
value.  The numeric variant of generate_series() already threw error
for this, so borrow its message wording.

Per report from Richard Wesley.  Back-patch to all supported branches.

Discussion: https://postgr.es/m/91B44E7B-68D5-448F-95C8-B4B3B0F5DEAF@duckdblabs.com
2022-04-20 18:08:23 -04:00
Robert Haas d2d3547979 Allow db.schema.table patterns, but complain about random garbage.
psql, pg_dump, and pg_amcheck share code to process object name
patterns like 'foo*.bar*' to match all tables with names starting in
'bar' that are in schemas starting with 'foo'. Before v14, any number
of extra name parts were silently ignored, so a command line '\d
foo.bar.baz.bletch.quux' was interpreted as '\d bletch.quux'.  In v14,
as a result of commit 2c8726c4b0, we
instead treated this as a request for table quux in a schema named
'foo.bar.baz.bletch'. That caused problems for people like Justin
Pryzby who were accustomed to copying strings of the form
db.schema.table from messages generated by PostgreSQL itself and using
them as arguments to \d.

Accordingly, revise things so that if an object name pattern contains
more parts than we're expecting, we throw an error, unless there's
exactly one extra part and it matches the current database name.
That way, thisdb.myschema.mytable is accepted as meaning just
myschema.mytable, but otherdb.myschema.mytable is an error, and so
is some.random.garbage.myschema.mytable.

Mark Dilger, per report from Justin Pryzby and discussion among
various people.

Discussion: https://www.postgresql.org/message-id/20211013165426.GD27491%40telsasoft.com
2022-04-20 11:37:29 -04:00
Peter Eisentraut 6c0f9f60f1 Fix incorrect format placeholders 2022-04-20 16:11:14 +02:00
Alvaro Herrera e70813fbc4
set_deparse_plan: Reuse variable to appease Coverity
Coverity complains that dpns->outer_plan is dereferenced (to obtain
->targetlist) when possibly NULL.  We can avoid this by using
dpns->outer_tlist instead, which was already obtained a few lines up.

The fact that we end up with
  dpns->inner_tlist = dpns->outer_tlist
is a bit suspicious-looking and maybe worthy of more investigation, but
I'll leave that for another day.

Reviewed-by: Michaël Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/202204191345.qerjy3kxi3eb@alvherre.pgsql
2022-04-20 11:44:08 +02:00
Alvaro Herrera a87e759569
Move ModifyTableContext->lockmode to UpdateContext
Should have been done this way to start with, but I failed to notice.
This way we avoid some pointless initialization, and better confine the
variable to the scope where it is really used.

Reviewed-by: Michaël Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/202204191345.qerjy3kxi3eb@alvherre.pgsql
2022-04-20 11:18:04 +02:00
Alvaro Herrera 3dcc6bf406
ExecModifyTable: use context.planSlot instead of planSlot
There's no reason to keep a separate local variable when we have a place
for it elsewhere.  This allows us to simplify some code.

Reviewed-by: Michaël Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/202204191345.qerjy3kxi3eb@alvherre.pgsql
2022-04-20 10:34:58 +02:00
Tom Lane 344a225cb9 Fix breakage in AlterFunction().
An ALTER FUNCTION command that tried to update both the function's
proparallel property and its proconfig list failed to do the former,
because it stored the new proparallel value into a tuple that was
no longer the interesting one.  Carelessness in 7aea8e4f2.

(I did not bother with a regression test, because the only likely
future breakage would be for someone to ignore the comment I added
and add some other field update after the heap_modify_tuple step.
A test using existing function properties could not catch that.)

Per report from Bryn Llewellyn.  Back-patch to all supported branches.

Discussion: https://postgr.es/m/8AC9A37F-99BD-446F-A2F7-B89AD0022774@yugabyte.com
2022-04-19 23:03:59 -04:00
Michael Paquier 83cca409ed Remove duplicated word in comment of basebackup.c
Oversight in 39969e2.

Author: Martín Marqués
Discussion: https://postgr.es/m/CABeG9LviA01oHC5h=ksLUuhMyXxmZR_tftRq6q3341CMT=j=4g@mail.gmail.com
2022-04-20 11:05:34 +09:00
Peter Eisentraut f2a2bf66c8 Fix extract epoch from interval calculation
The new numeric code for extract epoch from interval accidentally
truncated the DAYS_PER_YEAR value to an integer, leading to results
that mismatched the floating-point interval_part calculations.
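
To illustrate the bug class (illustrative values only, not the actual epoch
code):

  #include "postgres.h"
  #include "datatype/timestamp.h"     /* DAYS_PER_YEAR (365.25), SECS_PER_DAY */

  static void
  illustrate_truncation(void)
  {
      int64       truncated = (int64) DAYS_PER_YEAR * SECS_PER_DAY;  /* 365 * 86400 */
      double      exact = DAYS_PER_YEAR * SECS_PER_DAY;              /* 365.25 * 86400 */

      /* the two differ by a quarter of a day's worth of seconds */
      Assert((int64) exact - truncated == 21600);
  }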

The commit a2da77cdb4 that introduced
this actually contains the regression test change that this reverts.
I suppose this was missed at the time.

Reported-by: Joseph Koshakow <koshy44@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://www.postgresql.org/message-id/flat/CAAvxfHd5n%3D13NYA2q_tUq%3D3%3DSuWU-CufmTf-Ozj%3DfrEgt7pXwQ%40mail.gmail.com
2022-04-19 21:04:52 +02:00
Tatsuo Ishii a62bff74b1 Fix aggregate logging of pgbench.
Remove meaningless "failures" column from the aggregate logging. It
was just a sum of "serialization failures" and "deadlock failures".
Pointed out by Tom Lane. Patch reviewed by Fabien COELHO.

Discussion: https://postgr.es/m/4183048.1649536705%40sss.pgh.pa.us
2022-04-19 17:04:27 +09:00
Amit Kapila dd4ab6fd65 Fix the check to limit sync workers.
We don't allow more sync workers to be invoked once we have reached the sync
worker limit per subscription. But the check to enforce this also doesn't
allow an apply worker to be launched if it gets restarted.

This code was introduced by commit de43897122 but we caught the problem
only with the test added by recent commit c91f71b9dc which started failing
occasionally in the buildfarm.

As per buildfarm.
Diagnosed-by: Amit Kapila, Masahiko Sawada, Tomas Vondra
Author: Amit Kapila
Backpatch-through: 10
Discussion: https://postgr.es/m/CAH2L28vddB_NFdRVpuyRBJEBWjz4BSyTB=_ektNRH8NJ1jf95g@mail.gmail.com
	    https://postgr.es/m/f90d2b03-4462-ce95-a524-d91464e797c8@enterprisedb.com
2022-04-19 08:49:49 +05:30
Tom Lane 587de223f0 Add missing error handling in pg_md5_hash().
It failed to provide an error string as expected for the
admittedly-unlikely case of OOM in pg_cryptohash_create().
Also, make it initialize *errstr to NULL for success,
as pg_md5_binary() does.

Also add missing comments.  Readers should not have to
reverse-engineer the API spec for a publicly visible routine.
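
The convention, roughly (sketch; the error wording and how it is obtained are
illustrative, not the committed code):

  bool
  pg_md5_hash(const void *buff, size_t len, char *hexsum, const char **errstr)
  {
      pg_cryptohash_ctx *ctx;

      *errstr = NULL;                     /* success leaves *errstr set to NULL */
      ctx = pg_cryptohash_create(PG_MD5);
      if (ctx == NULL)
      {
          *errstr = _("out of memory");   /* previously left unset on this path */
          return false;
      }
      /* ... compute the hash, hex-encode into hexsum, free ctx ... */
      return true;
  }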
2022-04-18 20:04:55 -04:00
Tom Lane 36d4efe779 Avoid invalid array reference in transformAlterTableStmt().
Don't try to look at the attidentity field of system attributes,
because they're not there in the TupleDescAttr array.  Sometimes
this is harmless because we accidentally pick up a zero, but
otherwise we'll report "no owned sequence found" from an attempt
to alter a system attribute.  (It seems possible that a SIGSEGV
could occur, too, though I've not seen it in testing.)

It's not in this function's charter to complain that you can't
alter a system column, so instead just hard-wire an assumption
that system attributes aren't identities.  I didn't bother with
a regression test because the appearance of the bug is very
erratic.
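
The guard amounts to something like this (sketch; the helper name is made up):

  /*
   * System attributes (attnum <= 0) have no TupleDescAttr entry and are
   * never identity columns, so hard-wire that case.
   */
  static char
  get_attidentity_if_user_column(TupleDesc tupdesc, AttrNumber attnum)
  {
      if (attnum <= 0)
          return '\0';
      return TupleDescAttr(tupdesc, attnum - 1)->attidentity;
  }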

Per bug #17465 from Roman Zharkov.  Back-patch to all supported
branches.  (There's not actually a live bug before v12, because
before that get_attidentity() did the right thing anyway.
But for consistency I changed the test in the older branches too.)

Discussion: https://postgr.es/m/17465-f2a554a6cb5740d3@postgresql.org
2022-04-18 12:16:45 -04:00
Michael Paquier 1a8b110539 Fix second race condition in 002_archiving.pl with archive_cleanup_command
Checking the execution of archive_cleanup_command on a standby requires
a valid checkpoint coming from its primary, but the logic did not check
that the standby replayed up to the point of the checkpoint, causing the
test checking for the execution of archive_cleanup_command to fail.
This race was more visible in slow environments.

Issue introduced in 46dea24, so no backpatch is needed.

Author: Tom Lane
Discussion: https://postgr.es/m/4015413.1649454951@sss.pgh.pa.us
2022-04-18 13:41:40 +09:00
Michael Paquier e61efafcb8 Fix race in TAP test 002_archiving.pl when restoring history file
This test, introduced in df86e52, uses a second standby to check that
it correctly removes RECOVERYHISTORY and RECOVERYXLOG at the
end of recovery.  This standby uses the archives of the primary to
restore its contents, with some of the archive's contents coming from
the first standby previously promoted.  In slow environments, it was
possible that the test did not check what it should, as the history file
generated by the promotion of the first standby might not yet be stored in
the archives the second standby feeds on.  So, it was possible that
the second standby selected an incorrect timeline, without restoring a
history file at all.

This commit adds a wait phase to make sure that the history file
required by the second standby is archived before this cluster is
created.  This relies on poll_query_until() with pg_stat_file() and an
absolute path, something not supported in REL_10_STABLE.

While at it, this adds a new test to check that the history file has
been restored by looking at the logs of the second standby.  This
ensures that a RECOVERYHISTORY, whose removal needs to be checked,
is created in the first place.  This should make the test more robust.

This test has been introduced by df86e52, but it came in light as an
effect of the bug fixed by acf1dd42, where the extra restore_command
calls made the test much slower.

Reported-by: Andres Freund
Discussion: https://postgr.es/m/YlT23IvsXkGuLzFi@paquier.xyz
Backpatch-through: 11
2022-04-18 11:39:50 +09:00
Michael Paquier 42e44f3b38 Handle compression level in pg_receivewal for LZ4
The new option set of pg_receivewal introduced in 042a923 to control the
compression method makes it now easy to pass down various options,
including the compression level.  The change to be able to do is simple,
and requires one LZ4F_preferences_t fed to LZ4F_compressBegin().

Note that LZ4F_INIT_PREFERENCES could be used to initialize the contents
of LZ4F_preferences_t as required by LZ4, but this is only available
since v1.8.3.  memset()'ing the structure to 0 is enough.
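
A minimal, self-contained sketch of that change against the LZ4 frame API
(the function name is hypothetical):

    #include <string.h>
    #include <lz4frame.h>

    static size_t
    begin_compressed_frame(LZ4F_cctx *ctx, void *dst, size_t dstlen, int level)
    {
        LZ4F_preferences_t prefs;

        memset(&prefs, 0, sizeof(prefs));   /* portable across liblz4 versions */
        prefs.compressionLevel = level;     /* the requested compression level */

        return LZ4F_compressBegin(ctx, dst, dstlen, &prefs);
    }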

Discussion: https://postgr.es/m/YlPQGNAAa04raObK@paquier.xyz
2022-04-18 10:18:34 +09:00
Noah Misch 42dbbca58e Add a temp-install prerequisite to src/interfaces/ecpg "checktcp".
The target failed, tested $PATH binaries, or tested a stale temporary
installation.  Commit c66b438db6 missed
this.  Back-patch to v10 (all supported versions).
2022-04-16 17:43:54 -07:00
Thomas Munro acf1dd4234 Don't retry restore_command while reading ahead.
Suppress further attempts to read ahead in the WAL if we run out of
data, until the records already decoded have been replayed.  This
restores the traditional behavior for continuous archive recovery, which
is to retry the failing restore_command only every 5 seconds.  With the
coding in 5dc0418f, we would start retrying every time through the
recovery loop when our WAL decoding window hit the end of the current
segment and we tried to look ahead into a not-yet-available next file.
That was very slow.

Also change the no_readahead_until mechanism to use <= rather than <,
which seems more useful.  Otherwise we'd either get one extra unwanted
retry of restore_command, or we'd need to add 1 to an LSN.

No change in behavior for regular streaming.  That was already limited
by the flushedUpto variable, which won't be updated until we replay what
we have already.

Reported by Andres Freund while analyzing the failure of a TAP test on
build farm animal skink (investigation ongoing but probably due to
otherwise unrelated timing bugs triggered by this slowness magnified by
valgrind).

Discussion: https://postgr.es/m/20220409005910.alw46xqmmgny2sgr%40alap3.anarazel.de
2022-04-17 10:50:19 +12:00
Andres Freund 4a736a161c pgstat: Use correct lock level in pgstat_drop_all_entries().
Previously we didn't, which led to an assertion failure when resetting
partially loaded statistics. This was encountered on the buildfarm, for
as-of-yet unknown reasons.

Also tighten up a validity check when reading the stats file, verifying that
'E' signals the end of the file (rather than just stopping reading). That
check is then exercised by a test that appends to the stats file, which
crashed before the fix in pgstat_drop_all_entries().

Reported by buildfarm animals mylodon and kestrel, via Tom Lane.

Discussion: https://postgr.es/m/1656446.1650043715@sss.pgh.pa.us
2022-04-16 14:44:58 -07:00
Tom Lane 9f4f0a0dad Fix incorrect logic in HaveRegisteredOrActiveSnapshot().
This function gave the wrong answer when there's more than one
RegisteredSnapshots entry, whether or not any of them is the
CatalogSnapshot.  This leads to assertion failure in some scenarios
involving fetching toasted data using a cursor.  (As per discussion,
I'm dubious that this is the right contract to be enforcing at all;
but it surely doesn't help to be enforcing it incorrectly.)
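
A hedged sketch of the intended contract, not necessarily the exact
committed code: report true if there is an active snapshot, or at least one
registered snapshot other than the CatalogSnapshot.

    if (ActiveSnapshot != NULL)
        return true;

    if (pairingheap_is_empty(&RegisteredSnapshots))
        return false;

    /* If the only registered snapshot is the catalog snapshot, say no. */
    if (CatalogSnapshot != NULL &&
        pairingheap_is_singular(&RegisteredSnapshots))
        return false;

    return true;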

Fetching toasted data using a cursor is evidently under-tested,
so add a test case too.

Per report from Erik Rijkers.  This is new code, so no need for
back-patch.

Discussion: https://postgr.es/m/dc9dd229-ed30-6c62-4c41-d733ffff776b@xs4all.nl
2022-04-16 16:04:50 -04:00
Andrew Dunstan a17fd67d2f
Build libpq test programs under MSVC
This allows the newly added TAP tests to run.
2022-04-16 09:36:08 -04:00
Noah Misch 5fbb2d8f10 Use standard timeout, in 010_pg_basebackup.pl.
Per buildfarm member mandrill.  The test is new in v15, so no back-patch.
2022-04-15 23:15:38 -07:00
Peter Geoghegan d3609dd254 Fix multi-table VACUUM VERBOSE accounting.
Per-backend global variables like VacuumPageHit are initialized once per
VACUUM command.  This was missed by commit 49c9d9fc, which unified
VACUUM VERBOSE and autovacuum logging.  As a result of that oversight,
incorrect values were shown when multiple relations were processed by a
single VACUUM VERBOSE command.

Relations that happened to be processed later on would show "buffer
usage:" values that incorrectly included buffer accesses made while
processing earlier unrelated relations.  The same accesses were counted
multiple times.

To fix, take initial values for the tracker variables at the start of
heap_vacuum_rel(), and report delta values later on.
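
A simplified sketch of that approach, using the real per-backend counters
(the message text and buffer name are illustrative):

    /* at the start of heap_vacuum_rel() */
    int64       StartPageHit = VacuumPageHit;
    int64       StartPageMiss = VacuumPageMiss;
    int64       StartPageDirty = VacuumPageDirty;

    /* ... vacuum the relation ... */

    /* when reporting, show deltas rather than the raw running totals */
    appendStringInfo(&buf,
                     _("buffer usage: %lld hits, %lld misses, %lld dirtied\n"),
                     (long long) (VacuumPageHit - StartPageHit),
                     (long long) (VacuumPageMiss - StartPageMiss),
                     (long long) (VacuumPageDirty - StartPageDirty));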
2022-04-15 15:48:39 -07:00
Tom Lane 7129a9791e psql: fix \l display for pre-v15 databases.
With a pre-v15 server, show NULL for the "ICU Locale" column,
matching what you see in v15 when the database locale isn't ICU.
The previous coding incorrectly repeated datcollate here.

(There's an unfinished discussion about whether to consolidate
these columns in \l output, but in any case we'd want this fix
for \l+ output.)

Euler Taveira, per report from Christoph Berg

Discussion: https://postgr.es/m/YlmIFCqu+TZSW4rB@msg.df7cb.de
2022-04-15 18:31:01 -04:00
Tom Lane 6fea65508a Tighten ComputeXidHorizons' handling of walsenders.
ComputeXidHorizons (nee GetOldestXmin) thought that it could identify
walsenders by checking for proc->databaseId == 0.  Perhaps that was
safe when the code was written, but it's been wrong at least since
autovacuum was invented.  Background processes that aren't connected
to any particular database, such as the autovacuum launcher and
logical replication launcher, look like that too.

This imprecision is harmful because when such a process advertises an
xmin, the result is to hold back dead-tuple cleanup in all databases,
though it'd be sufficient to hold it back in shared catalogs (which
are the only relations such a process can access).  Aside from being
generally inefficient, this has recently been seen to cause regression
test failures in the buildfarm, as a consequence of the logical
replication launcher's startup transaction preventing VACUUM from
marking pages of a user table as all-visible.

We only want that global hold-back effect for the case where a
walsender is advertising a hot standby feedback xmin.  Therefore,
invent a new PGPROC flag that says that a process' xmin should be
considered globally, and check that instead of using the incorrect
databaseId == 0 test.  Currently only a walsender sets that flag,
and only if it is not connected to any particular database.  (This is
for bug-compatibility with the undocumented behavior of the existing
code, namely that feedback sent by a client who has connected to a
particular database would not be applied globally.  I'm not sure this
is a great definition; however, such a client is capable of issuing
plain SQL commands, and I don't think we want xmins advertised for
such commands to be applied globally.  Perhaps this could do with
refinement later.)
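
A hedged sketch of the two halves of the change, assuming the flag is named
PROC_AFFECTS_ALL_HORIZONS and follows the usual statusFlags update protocol
(surrounding code simplified):

    /* In a walsender that is not connected to a particular database: */
    LWLockAcquire(ProcArrayLock, LW_EXCLUSIVE);
    MyProc->statusFlags |= PROC_AFFECTS_ALL_HORIZONS;
    ProcGlobal->statusFlags[MyProc->pgxactoff] = MyProc->statusFlags;
    LWLockRelease(ProcArrayLock);

    /* In ComputeXidHorizons(), key on the flag instead of databaseId == 0: */
    if (statusFlags & PROC_AFFECTS_ALL_HORIZONS)
    {
        /* this xmin holds back dead-tuple cleanup in every database */
    }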

While at it, I rewrote the comment in ComputeXidHorizons, and
re-ordered the commented-upon if-tests, to make them match up
for intelligibility's sake.

This is arguably a back-patchable bug fix, but given the lack of
complaints I think it prudent to let it age awhile in HEAD first.

Discussion: https://postgr.es/m/1346227.1649887693@sss.pgh.pa.us
2022-04-15 17:50:05 -04:00
Peter Geoghegan bdb71dbe80 VACUUM VERBOSE: Show dead items for an empty table.
Be consistent about the lines that VACUUM VERBOSE outputs by including
an "index scan not needed: " line for completely empty tables. This
makes the output more readable, especially with multiple distinct VACUUM
operations processed by the same VACUUM command.  It's also more
consistent; even empty tables can use the failsafe, which wasn't
reported in the standard way until now.

Follow-up to commit 6e20f460, which taught VACUUM VERBOSE to be more
consistent about reporting on scanned pages with empty tables.
2022-04-15 14:20:56 -07:00
Peter Geoghegan 357c8455e6 Adjust VACUUM's removable cutoff log message.
The age of OldestXmin (a.k.a. "removable cutoff") when VACUUM ends often
indicates the approximate number of XIDs consumed while VACUUM ran.
However, there is at least one important exception: the cutoff could be
held back by a snapshot that was acquired before our VACUUM even began.
Successive VACUUM operations may even use exactly the same old cutoff in
extreme cases involving long held snapshots.

The log messages that described how removable cutoff aged (which were
added by commit 872770fd) created the impression that we were reporting
on how VACUUM's usable cutoff advanced while VACUUM ran, which was
misleading in these extreme cases.  Fix by using a more general wording.

Per gripe from Tom Lane.

In passing, relocate related instrumentation code for clarity.

Author: Peter Geoghegan <pg@bowt.ie>
Discussion: https://postgr.es/m/1643035.1650035653@sss.pgh.pa.us
2022-04-15 13:21:43 -07:00
Tom Lane 91998539b2 Revert "Temporarily add some probes of tenk1's relallvisible in create_index.sql."
This reverts commit 5bb2b6abc8.
Not needed anymore.
2022-04-15 13:29:39 -04:00
Andrew Dunstan f7a605f636 Small cleanups in SQL/JSON code
These are to keep Coverity happy. In one case remove a redundant NULL
check, and in another explicitly ignore a function result that is already
known.
2022-04-15 07:49:20 -04:00
Andres Freund 5cd1c40b3c pgstat: set timestamps of fixed-numbered stats after a crash.
When not loading stats at startup (i.e. when pgstat_discard_stats() gets
called), the reset timestamps of fixed-numbered stats would be left at 0.
This was an oversight in 5891c7a8ed.

Instead use pgstat_reset_after_failure() and add tests verifying that
fixed-numbered reset timestamps are set appropriately.

Reported-By: "David G. Johnston" <david.g.johnston@gmail.com>
Discussion: https://postgr.es/m/CAKFQuwamFuaQHKdhcMt4Gbw5+Hca2UE741B8gOOXoA=TtAd2Yw@mail.gmail.com
2022-04-14 17:40:25 -07:00
Alvaro Herrera 3f19e176ae
Have CLUSTER ignore partitions not owned by caller
If a partitioned table has partitions owned by roles other than the
owner of the partitioned table, don't include them in the to-be-
clustered list.  This is similar to what VACUUM FULL does (except we do
it sooner, because there is no reason to postpone it).  Add a simple
test to verify that only owned partitions are clustered.

While at it, change memory context switch-and-back to occur once per
partition instead of outside of the loop.
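
A hedged sketch of the filter, assuming the partition OIDs come from
find_all_inheritors() and that pg_class_ownercheck() is used, as VACUUM FULL
does (list handling simplified):

    List       *to_cluster = NIL;
    ListCell   *lc;

    foreach(lc, inhoids)        /* inhoids: from find_all_inheritors() */
    {
        Oid         partoid = lfirst_oid(lc);

        /* Silently skip partitions the calling role does not own. */
        if (!pg_class_ownercheck(partoid, GetUserId()))
            continue;

        to_cluster = lappend_oid(to_cluster, partoid);
    }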

Author: Justin Pryzby <pryzby@telsasoft.com>
Reviewed-by: Zhihong Yu <zyu@yugabyte.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/20220411140609.GF26620@telsasoft.com
2022-04-14 22:11:06 +02:00
Tom Lane 5bb2b6abc8 Temporarily add some probes of tenk1's relallvisible in create_index.sql.
This is to gather some more evidence about why buildfarm member wrasse
is failing.  We should revert it (or at least scale it way back) once
that's resolved.

Discussion: https://postgr.es/m/1346227.1649887693@sss.pgh.pa.us
2022-04-14 12:14:01 -04:00
Andrew Dunstan 4cd8717af3 Improve a couple of sql/json error messages
Fix the grammar in two, and add a hint to one.
2022-04-14 10:26:29 -04:00
Andrew Dunstan fcdb35c32a Fix transformJsonBehavior
Commit 1a36bc9dba contained some logic that was a little opaque and
could have involved a NULL dereference, as complained about by Coverity.
Make the logic more transparent and in doing so avoid the NULL
dereference.
2022-04-14 09:24:22 -04:00
David Rowley a00fd066b1 Add missing spaces after single-line comments
Only 1 of these 3 changes appears to be handled by pgindent. That change
is new to v15.  The remaining two appear to be left alone by pgindent. The
exact reason for that is not 100% clear to me.  It seems related to the
fact that it's a line that contains *only* a single line comment and no
actual code.  It does not seem worth investigating this in too much
detail.  In any case, these do not conform to our usual practices, so fix
them.

Author: Justin Pryzby
Discussion: https://postgr.es/m/20220411020336.GB26620@telsasoft.com
2022-04-14 09:28:56 +12:00
Tom Lane b5607b0746 Fix case sensitivity in psql's tab completion for GUC names.
Input for these should be case-insensitive, but was not completely
so.  Comparing to the similar queries for timezone names, I realized
that we'd missed forcing the comparison pattern to lower-case.
With that, it behaves as I expect.

While here, flatten the sub-selects in these queries; I don't
find that those add any readability.

Discussion: https://postgr.es/m/3369130.1649348542@sss.pgh.pa.us
2022-04-13 16:26:34 -04:00
Tom Lane 139d46ee26 Further tweak the default behavior of psql's \dconfig.
Define "parameters with non-default settings" as being those that
not only have pg_settings.source different from 'default', but
also have a current value different from the hard-wired boot_val.
Adding the latter restriction removes a number of not-very-interesting
cases where the active setting is chosen by initdb but in practice
tends to be the same all the time.

Per discussion with Jonathan Katz.

Discussion: https://postgr.es/m/YlFQLzlPi4QD0wSi@msg.df7cb.de
2022-04-13 15:03:58 -04:00
Tom Lane 7b7ed046cb Prevent access to no-longer-pinned buffer in heapam_tuple_lock().
heap_fetch() used to have a "keep_buf" parameter that told it to return
ownership of the buffer pin to the caller after finding that the
requested tuple TID exists but is invisible to the specified snapshot.
This was thoughtlessly removed in commit 5db6df0c0, which broke
heapam_tuple_lock() (formerly EvalPlanQualFetch) because that function
needs to do more accesses to the tuple even if it's invisible.  The net
effect is that we would continue to touch the page for a microsecond or
two after releasing pin on the buffer.  Usually no harm would result;
but if a different session decided to defragment the page concurrently,
we could see garbage data and mistakenly conclude that there's no newer
tuple version to chain up to.  (It's hard to say whether this has
happened in the field.  The bug was actually found thanks to a later
change that allowed valgrind to detect accesses to non-pinned buffers.)

The most reasonable way to fix this is to reintroduce keep_buf,
although I made it behave slightly differently: buffer ownership
is passed back only if there is a valid tuple at the requested TID.
In HEAD, we can just add the parameter back to heap_fetch().
To avoid an API break in the back branches, introduce an additional
function heap_fetch_extended() in those branches.

In HEAD there is an additional, less obvious API change: tuple->t_data
will be set to NULL in all cases where buffer ownership is not returned,
in particular when the tuple exists but fails the time qual (and
!keep_buf).  This is to defend against any other callers attempting to
access non-pinned buffers.  We concluded that making that change in back
branches would be more likely to introduce problems than cure any.
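
A hedged sketch of the HEAD calling convention after this change (back
branches use heap_fetch_extended() instead); relation, snapshot and tid are
assumed to be supplied by the caller:

    HeapTupleData tuple;
    Buffer      buffer;

    tuple.t_self = *tid;
    if (heap_fetch(relation, snapshot, &tuple, &buffer, true /* keep_buf */))
    {
        /* tuple is visible to the snapshot; buffer pin is held */
    }
    else if (tuple.t_data != NULL)
    {
        /* tuple exists but fails the time qual; pin is still held (keep_buf) */
    }
    else
    {
        /* no valid tuple at this TID; no pin was returned */
    }

    if (BufferIsValid(buffer))
        ReleaseBuffer(buffer);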

In passing, remove a comment about heap_fetch that was obsoleted by
9a8ee1dc6.

Per bug #17462 from Daniil Anisimov.  Back-patch to v12 where the bug
was introduced.

Discussion: https://postgr.es/m/17462-9c98a0f00df9bd36@postgresql.org
2022-04-13 13:35:07 -04:00
Alvaro Herrera 24d2b2680a
Remove extraneous blank lines before block-closing braces
These are useless and distracting.  We wouldn't have written the code
with them to begin with, so there's no reason to keep them.

Author: Justin Pryzby <pryzby@telsasoft.com>
Discussion: https://postgr.es/m/20220411020336.GB26620@telsasoft.com
Discussion: https://postgr.es/m/attachment/133167/0016-Extraneous-blank-lines.patch
2022-04-13 19:16:02 +02:00
Alvaro Herrera ed0fbc8e5a
Release cache tuple when no longer needed
There was a small buglet in commit 52e4f0cd47 whereby a tuple acquired
from cache was not released, giving rise to WARNING messages; fix that.

While at it, restructure the code a bit on stylistic grounds.
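
The general rule the fix applies, sketched with a syscache chosen only for
illustration (the actual call site is in the logical replication code):

    HeapTuple   tup = SearchSysCache1(RELOID, ObjectIdGetDatum(relid));

    if (HeapTupleIsValid(tup))
    {
        Form_pg_class relform = (Form_pg_class) GETSTRUCT(tup);

        /* ... use relform ... */

        ReleaseSysCache(tup);   /* the missing release caused the WARNINGs */
    }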

Author: Hou zj <houzj.fnst@fujitsu.com>
Reported-by: Peter Smith <smithpb2250@gmail.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Discussion: https://postgr.es/m/CAHut+PvKTyhTBtYCQsP6Ph7=o-oWRSX+v+PXXLXp81-o2bazig@mail.gmail.com
2022-04-13 18:19:38 +02:00
Andrew Dunstan 112fdb3528 Fix finalization for json_objectagg and friends
Commit f4fb45d15c misguidedly tried to free some state during aggregate
finalization for json_objectagg. This resulted in attempts to access
freed memory, especially when the function is used as a window function.
Commit 4eb9798879 attempted to ameliorate that, but in fact it should
just be ripped out, which is done here. Also add some regression tests
for json_objectagg in various flavors as a window function.

Original report from Jaime Casanova, diagnosis by Andres Freund.

Discussion: https://postgr.es/m/YkfeMNYRCGhySKyg@ahch-to
2022-04-13 10:37:43 -04:00
Peter Eisentraut a038679cd8 Fix incorrect format placeholders 2022-04-13 14:04:51 +02:00
Michael Paquier b940918dc8 Remove "recheck" argument from check_index_is_clusterable()
The last usage of this argument in this routine can be tracked down to
7e2f9062, aka 11 years ago.  Getting rid of this argument can also be an
advantage for extensions calling check_index_is_clusterable(), as it
removes any need to worry about the meaning of what a recheck would be
at this level.

Author: Justin Pryzby
Discussion: https://postgr.es/m/20220411140609.GF26620@telsasoft.com
2022-04-13 15:32:35 +09:00
Michael Paquier 042a923ad5 Rework compression options of pg_receivewal
Since babbbb5 and the introduction of LZ4 in pg_receivewal, the
compression of the WAL archived is controlled by two options:
- --compression-method, with "gzip", "none" or "lz4" as possible values.
- --compress=N to specify a compression level.  This includes a
backward-incompatible change where a value of 0 leads to a failure
instead of meaning that no compression is enforced.

This commit takes advantage of a4b5754 and 3603f7c to rework the
compression options of pg_receivewal, as of:
- The removal of --compression-method.
- The extension of --compress to use the same grammar as pg_basebackup,
with a METHOD:DETAIL format, where METHOD is "gzip", "none" or "lz4"
and DETAIL is a comma-separated list of options; the only keyword
currently supported is "level", to control the compression level.  If only
an integer is specified as the value of this option, "none" is implied
for 0 and "gzip" is implied otherwise.  This brings --compress back to
being backward-compatible with ~14, while still supporting LZ4.

This also has the advantage of centralizing the set of checks used by
pg_receivewal to validate its compression options.

Author: Michael Paquier
Reviewed-by: Robert Haas, Georgios Kokolatos
Discussion: https://postgr.es/m/YlPQGNAAa04raObK@paquier.xyz
2022-04-13 11:09:51 +09:00
Robert Haas 7fc0e7de9f Revert the addition of GetMaxBackends() and related stuff.
This reverts commits 0147fc7, 4567596, aa64f23, and 5ecd018.
There is no longer agreement that introducing this function
was the right way to address the problem. The consensus now
seems to favor trying to make a correct value for MaxBackends
available to modules executing their _PG_init() functions.

Nathan Bossart

Discussion: http://postgr.es/m/20220323045229.i23skfscdbvrsuxa@jrouhaud
2022-04-12 14:45:23 -04:00
Tom Lane 2c9381840f Remove not-very-useful early checks of __pg_log_level in logging.h.
Enforce __pg_log_level message filtering centrally in logging.c,
instead of relying on the calling macros to do it.  This is more
reliable (e.g. it works correctly for direct calls to pg_log_generic)
and it saves a percent or so of total code size because we get rid of
so many duplicate checks of __pg_log_level.

This does mean that argument expressions in a logging macro will be
evaluated even if we end up not printing anything.  That seems of
little concern for INFO and higher levels as those messages are printed
by default, and most of our frontend programs don't even offer a way to
turn them off.  I left the unlikely() checks in place for DEBUG
messages, though.
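
A minimal sketch of the central check, assuming it sits near the top of the
message-emitting code in logging.c:

    /* Drop the message here, once, instead of in every calling macro. */
    if (level < __pg_log_level)
        return;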

Discussion: https://postgr.es/m/3993549.1649449609@sss.pgh.pa.us
2022-04-12 13:25:29 -04:00