Commit Graph

4312 Commits

Author SHA1 Message Date
Alexander Korotkov f1ea98a797 Reduce non-leaf keys overlap in GiST indexes produced by a sorted build
The GiST sorted build currently chooses split points based only on page
space utilization.  That may lead to greater overlap among non-leaf keys
and, in turn, slower search queries.

This commit makes the sorted build use the opclass's picksplit method.  Once
four pages at a level have been accumulated, the picksplit method is applied
repeatedly until each split partition fits on a page.  Some of our split
algorithms could show significant performance degradation when processing
four times more data at once, but those opclasses haven't received sorted
build support and shouldn't receive it before their split algorithms are
improved.

Discussion: https://postgr.es/m/CAHqSB9jqtS94e9%3D0vxqQX5dxQA89N95UKyz-%3DA7Y%2B_YJt%2BVW5A%40mail.gmail.com
Author: Aliaksandr Kalenik, Sergei Shoulbakov, Andrey Borodin
Reviewed-by: Björn Harrtell, Darafei Praliaskouski, Andres Freund
Reviewed-by: Alexander Korotkov
2022-02-07 23:20:42 +03:00
Robert Haas 00c360a89c In basic_archive tests, insist on wal_level='replica'.
That's normally the default, but buildfarm member thorntail has
other ideas.

Nathan Bossart

Discussion: http://postgr.es/m/20220203212539.GA1082940@nathanxps13
2022-02-03 16:40:32 -05:00
Robert Haas 5ef1eefd76 Allow archiving via loadable modules.
Running a shell command for each file to be archived has a lot of
overhead and may not offer as much error checking as you want, or the
exact semantics that you want. So, offer the option to call a loadable
module for each file to be archived, rather than running a shell command.

Also, add a 'basic_archive' contrib module as an example implementation
that archives to a local directory.

Nathan Bossart, with a little bit of kibitzing by me.

Discussion: http://postgr.es/m/20220202224433.GA1036711@nathanxps13
2022-02-03 14:05:02 -05:00
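
A minimal configuration sketch for the loadable-archiving feature above,
assuming the archive_library GUC and the basic_archive.archive_directory
parameter added by this work (the directory path is a placeholder):

    -- archive_mode still requires a server restart to take effect
    ALTER SYSTEM SET archive_mode = on;
    -- use the example module instead of archive_command
    ALTER SYSTEM SET archive_library = 'basic_archive';
    -- basic_archive.archive_directory = '/mnt/server/archive' would also be
    -- configured, e.g. in postgresql.conf
    SELECT pg_reload_conf();
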
Tom Lane b426bd48ee Simplify coding around path_contains_parent_reference().
Given the existing stipulation that path_contains_parent_reference()
must only be invoked on canonicalized paths, we can simplify things
in the wake of commit c10f830c5.  It is now only possible to see
".." at the start of a relative path.  That means we can simplify
path_contains_parent_reference() itself quite a bit, and it makes
the two existing outside call sites dead code, since they'd already
checked that the path is absolute.

We could now fold path_contains_parent_reference() into its only
remaining caller path_is_relative_and_below_cwd().  But it seems
better to leave it as a separately callable function, in case any
extensions are using it.

Also document the pre-existing requirement for
path_is_relative_and_below_cwd's input to be likewise canonicalized.

Shenhao Wang and Tom Lane

Discussion: https://postgr.es/m/OSBPR01MB4214FA221FFE046F11F2AD74F2D49@OSBPR01MB4214.jpnprd01.prod.outlook.com
2022-01-31 13:53:38 -05:00
Tom Lane c10f830c51 Make canonicalize_path() more canonical.
Teach canonicalize_path() how to strip all unnecessary uses of "."
and "..", replacing the previous ad-hoc code that got rid of only
some such cases.  In particular, we can always remove all such
uses from absolute paths.

The proximate reason to do this is that Windows rejects paths
involving ".." in some cases (in particular, you can't put one in a
symlink), so we ought to be sure we don't use ".." unnecessarily.
Moreover, it seems like good cleanup on general principles.

There is other path-munging code that could be simplified now, but
we'll leave that for followup work.

It is tempting to call this a bug fix and back-patch it.  On the other
hand, the misbehavior can only be reached if a highly privileged user
does something dubious, so it's not unreasonable to say "so don't do
that".  And this patch could result in unexpected behavioral changes,
in case anybody was expecting uses of ".." to stay put.  So at least
for now, just put it in HEAD.

Shenhao Wang, editorialized a bit by me

Discussion: https://postgr.es/m/OSBPR01MB4214FA221FFE046F11F2AD74F2D49@OSBPR01MB4214.jpnprd01.prod.outlook.com
2022-01-31 12:05:37 -05:00
Alvaro Herrera b3d7d6e462 Remove xloginsert.h from xlog.h
xlog.h is directly and indirectly #included in a lot of places.  With
this change, xloginsert.h is no longer unnecessarily included in the
large number of them that don't need it.

Author: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>
Discussion: https://postgr.es/m/CALj2ACVe-W+WM5P44N7eG9C2_FmaeM8Dq5aCnD3fHt0Ba=WR6w@mail.gmail.com
2022-01-30 12:25:24 -03:00
Peter Eisentraut 43f33dc018 Add HEADER support to COPY text format
The COPY CSV format supports the HEADER option to output a header
line.  This patch adds the same option to the default text format.  On
input, the HEADER option causes the first line to be skipped, same as
with CSV.

Author: Rémi Lapeyre <remi.lapeyre@lenstra.fr>
Discussion: https://www.postgresql.org/message-id/flat/CAF1-J-0PtCWMeLtswwGV2M70U26n4g33gpe1rcKQqe6wVQDrFA@mail.gmail.com
2022-01-28 09:44:47 +01:00
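
A brief usage sketch of the new option (table and file names are hypothetical):

    -- output a header line in the default text format
    COPY mytab TO STDOUT (FORMAT text, HEADER true);
    -- on input, skip the first line, same as with CSV
    COPY mytab FROM '/tmp/mytab.txt' (FORMAT text, HEADER true);
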
Etsuro Fujita 9e283fc85d postgres_fdw: Fix handling of a pending asynchronous request in postgresReScanForeignScan().
Commit 27e1f1456 failed to process a pending asynchronous request made
for a given ForeignScan node in postgresReScanForeignScan() (if any) in
cases where we would only reset the next_tuple counter in that function,
contradicting the assumption that there should be no pending
asynchronous requests that have been made for async-capable subplans for
the parent Append node after ReScan.  This led to an assert failure in
an assert-enabled build.  I think this would also lead to mis-rewinding
the cursor in that function in the case where we have already fetched
one batch for the ForeignScan node and the asynchronous request has been
made for the second batch, because even in that case we would just reset
the counter when called from that function, so we would fail to execute
MOVE BACKWARD ALL.

To fix, modify that function to process the asynchronous request before
restarting the scan.

While at it, add a comment to a function to match other places.

Per bug #17344 from Alexander Lakhin.  Back-patch to v14 where the
aforesaid commit came in.

Patch by me.  Test case by Alexander Lakhin, adjusted by me.  Reviewed
and tested by Alexander Lakhin and Dmitry Dolgov.

Discussion: https://postgr.es/m/17344-226b78b00de73a7e@postgresql.org
2022-01-27 16:15:00 +09:00
Noah Misch ce6d79368e On sparc64+ext4, suppress test failures from known WAL read failure.
Buildfarm members kittiwake, tadarida and snapper began to fail
frequently when commits 3cd9c3b921 and
f47ed79cc8 added tests of concurrency, but
the problem was reachable before those commits.  Back-patch to v10 (all
supported versions).

Discussion: https://postgr.es/m/20220116210241.GC756210@rfd.leadboat.com
2022-01-26 18:06:19 -08:00
Etsuro Fujita 6c07f9ebce postgres_fdw: Fix subabort cleanup of connections used in asynchronous execution.
Commit 27e1f1456 resets the per-connection states of connections used to
scan foreign tables asynchronously during abort cleanup at main
transaction end, but it failed to do so during subabort cleanup at
subtransaction end, leading to a segmentation fault when re-executing an
asynchronous-foreign-table-scan query in a transaction that was
cancelled in a subtransaction of it.

Fix by modifying pgfdw_abort_cleanup() to reset the per-connection state
of a given connection also when called for subabort cleanup.  Also,
modify that function to do the reset in both the abort-cleanup and
subabort-cleanup cases if necessary, to save cycles, and improve a
comment on it a little bit.

Back-patch to v14 where the aforesaid commit came in.

Reviewed by Alexander Pyhalov

Discussion: https://postgr.es/m/CAPmGK14cCV-JA7kNsyt2EUTKvZ4xkr2LNRthi1U1C3cqfGppAw@mail.gmail.com
2022-01-21 17:45:00 +09:00
Peter Eisentraut 941460fcf7 Add Boolean node
Before, SQL-level boolean constants were represented by a string with
a cast, and internal Boolean values in DDL commands were usually
represented by Integer nodes.  This takes the place of both of these
uses, making the intent clearer and having some amount of type safety.

Reviewed-by: Pavel Stehule <pavel.stehule@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/8c1a2e37-c68d-703c-5a83-7a6077f4f997@enterprisedb.com
2022-01-17 10:38:23 +01:00
Tom Lane 134d974636 Include permissive/enforcing state in sepgsql log messages.
SELinux itself does this (at least in modern releases), and it
seems like a good idea to reduce confusion.

Dave Page

Discussion: https://postgr.es/m/CA+OCxowsQoLEYc=jN7OtNvOdX0Jg5L7nMYt++=k0X78HGq-sXg@mail.gmail.com
2022-01-12 14:23:13 -05:00
Michael Paquier b69aba7457 Improve error handling of cryptohash computations
The existing cryptohash facility was causing problems in some code paths
related to MD5 (frontend and backend) that relied on the fact that the
only type of error that could happen would be an OOM, as the MD5
implementation used in PostgreSQL 13 and older (the in-core implementation
is used there whether or not the build uses OpenSSL) could fail only under
that circumstance.

The new cryptohash facilities can fail for reasons other than OOMs, like
attempting MD5 when FIPS is enabled (upstream OpenSSL allows that up to
1.0.2, Fedora and Photon patch OpenSSL 1.1.1 to allow that), so this
would cause incorrect reports to show up.

This commit extends the cryptohash APIs so that callers of those routines
can fetch more context when an error happens, by using a new routine
called pg_cryptohash_error().  The error states are stored within each
implementation's internal context data, so that it is possible to extend
the logic depending on what's suited for an implementation.  The default
implementation requires few error states, but OpenSSL could report
various issues depending on its internal state, so more is needed in
cryptohash_openssl.c, and the code is shaped so that we are always able to
grab the necessary information.

The core code is changed to adapt to the new error routine, painting
more "const" across the call stack where the static errors are stored,
particularly in authentication code paths on variables that provide
log details.  This way, any future changes would warn if attempting to
free these strings.  The MD5 authentication code was also a bit blurry
about the handling of "logdetail" (LOG sent to the postmaster), so
improve the comments related to that while at it.

The origin of the problem is 87ae969, which introduced the centralized
cryptohash facility.  Extra changes are done for pgcrypto in v14 for the
non-OpenSSL code path to cope with the improvements done by this
commit.

Reported-by: Michael Mühlbeyer
Author: Michael Paquier
Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/89B7F072-5BBE-4C92-903E-D83E865D9367@trivadis.com
Backpatch-through: 14
2022-01-11 09:55:16 +09:00
Tom Lane 54b1cb7eb7 Fix results of index-only scans on btree_gist char(N) indexes.
If contrib/btree_gist is used to make a GIST index on a char(N)
(bpchar) column, and that column is retrieved via an index-only
scan, what came out had all trailing spaces removed.  Since
that doesn't happen in any other kind of table scan, this is
clearly a bug.  The cause is that gbt_bpchar_compress() strips
trailing spaces (using rtrim1) before a new index entry is made.
That was probably a good idea when this code was first written,
but since we invented index-only scans, it's not so good.

One answer could be to mark this opclass as incapable of index-only
scans.  But to do so, we'd need an extension module version bump,
followed by manual action by DBAs to install the updated version
of btree_gist.  And it's not really a desirable place to end up,
anyway.

Instead, let's fix the code by removing the unwanted space-stripping
action and adjusting the opclass's comparison logic to ignore
trailing spaces as bpchar normally does.  This will not hinder
cases that work today, since index searches with this logic will
act the same whether trailing spaces are stored or not.  It will
not by itself fix the problem of getting space-stripped results
from index-only scans, of course.  Users who care about that can
REINDEX affected indexes after installing this update, to immediately
replace all improperly-truncated index entries.  Otherwise, it can
be expected that the index's behavior will change incrementally as
old entries are replaced by new ones.

Per report from Alexander Lakhin.  Back-patch to all supported branches.

Discussion: https://postgr.es/m/696c995b-b37f-5526-f45d-04abe713179f@gmail.com
2022-01-08 14:54:39 -05:00
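
As the commit above suggests, affected indexes can be rebuilt after updating;
a sketch with hypothetical object names:

    CREATE EXTENSION IF NOT EXISTS btree_gist;
    CREATE INDEX t_c_gist_idx ON t USING gist (c);   -- c is a char(N) column
    -- after installing the fixed btree_gist, replace improperly-truncated entries
    REINDEX INDEX t_c_gist_idx;
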
Bruce Momjian 27b77ecf9f Update copyright for 2022
Backpatch-through: 10
2022-01-07 19:04:57 -05:00
Fujii Masao 353aa01687 postgres_fdw: Add regression test for postgres_fdw.application_name GUC.
Commit 449ab63505 added postgres_fdw.application_name GUC that specifies
a value for application_name configuration parameter used when postgres_fdw
establishes a connection to a foreign server. Also commit 6e0cb3dec1
allowed it to include escape sequences. Both commits added regression
tests for the GUC, but those tests were reverted by commits 98dbef90eb and
5e64ad3697 because they were unstable and caused some buildfarm members
to report failures.

This is the third try to add the regression test for
postgres_fdw.application_name GUC.

One of the issues making the test unstable was its use of
postgres_fdw_disconnect_all() to close the existing remote connections.
The test expected the remote connection and its corresponding backend
at the remote server to disappear just after postgres_fdw_disconnect_all()
was executed, but it could take a bit of time for them to disappear.
To make sure that they exit, this commit makes the test use
pg_terminate_backend() with a timeout at the remote server instead.
If the timeout is set to a value greater than zero, this function waits
until they are actually terminated (or until the given time has passed).

Another issue was that the test didn't take into consideration the case
where a postgres_fdw.application_name containing escape sequences was
expanded to a string longer than NAMEDATALEN. In this case it was
truncated to less than NAMEDATALEN when passed to the remote server,
but the test wrongly expected the full application_name string to be
visible. This commit changes the test so that it can handle that case.

Author: Fujii Masao
Reviewed-by: Masahiko Sawada, Hayato Kuroda, Kyotaro Horiguchi
Discussion: https://postgr.es/m/3220909.1631054766@sss.pgh.pa.us
Discussion: https://postgr.es/m/20211224.180006.2247635208768233073.horikyota.ntt@gmail.com
Discussion: https://postgr.es/m/e7b61420-a97b-8246-77c4-a0d48fba5a45@oss.nttdata.com
2022-01-07 15:31:56 +09:00
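
The waiting behavior relied on here comes from the two-argument form of
pg_terminate_backend(), run at the remote server; a rough sketch (the
application_name value is hypothetical):

    -- terminate the remote backend and wait up to 3 minutes for it to exit
    SELECT pg_terminate_backend(pid, 180000)
      FROM pg_stat_activity
     WHERE application_name = 'fdw_regress';
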
Tom Lane c2e8bd2751 Enable routine running of citext's UTF8-specific test cases.
These test cases have been commented out since citext was invented,
because at the time we had no nice way to deal with tests that
have restrictions such as requiring UTF8 encoding.  But now we do
have a convention for that, ie put them into a separate test file
with an early-exit path.  So let's enable these tests to run when
their prerequisites are satisfied.

(We may have to tighten the prerequisites beyond the "encoding = UTF8
and locale != C" checks made here.  But let's put it on the buildfarm
and see what blows up.)

Dag Lem

Discussion: https://postgr.es/m/ygezgoacs4e.fsf_-_@sid.nimrod.no
2022-01-05 13:30:07 -05:00
Tom Lane 8a2e323f20 Handle mixed returnable and non-returnable columns better in IOS.
We can revert the code changes of commit b5febc1d1 now, because
commit 9a3ddeb51 installed a real solution for the difficulty
that b5febc1d1 just dodged, namely that the planner might pick
the wrong one of several index columns nominally containing the
same value.  It only matters which one we pick if we pick one
that's not returnable, and that mistake is now foreclosed.

Although both of the aforementioned commits were back-patched,
I don't feel a need to take any risk by back-patching this one.
The cases that it improves are very corner-ish.

Discussion: https://postgr.es/m/3179992.1641150853@sss.pgh.pa.us
2022-01-03 16:12:11 -05:00
Michael Paquier 234ba62769 pg_stat_statements: Remove obsolete comment
Since 4f0b096, pgss_store() does nothing if compute_query_id is disabled
or if no other module computed a query identifier, but the top comment
of this function did not reflect that.  This behavior is already
documented in its own code path, and this just removes the inconsistent
comment.

Author: Kyotaro Horiguchi
Reviewed-by: Julien Rouhaud
Discussion: https://postgr.es/m/20211122.153823.1325120762360533122.horikyota.ntt@gmail.com
Backpatch-through: 14
2022-01-03 17:34:45 +09:00
Tom Lane cab5b9ab2c Revert changes about warnings/errors for placeholders.
Revert commits 5609cc01c, 2ed8a8cc5, and 75d22069e until we have
a less broken idea of how this should work in parallel workers.
Per buildfarm.

Discussion: https://postgr.es/m/1640909.1640638123@sss.pgh.pa.us
2021-12-27 16:01:10 -05:00
Tom Lane 5609cc01c6 Rename EmitWarningsOnPlaceholders() to MarkGUCPrefixReserved().
This seems like a clearer name for what it does now.

Provide a compatibility macro so that extensions don't have to convert
to the new name right away.

Discussion: https://postgr.es/m/116024.1640111629@sss.pgh.pa.us
2021-12-27 14:39:08 -05:00
Fujii Masao 5e64ad3697 postgres_fdw: Revert unstable tests for postgres_fdw.application_name.
Commit 6e0cb3dec1 added tests that check that escape sequences in the
postgres_fdw.application_name setting are replaced with status information
as expected. But they were unstable and caused some buildfarm
members to report failures. This commit reverts those unstable tests.
2021-12-24 17:39:59 +09:00
Fujii Masao 6e0cb3dec1 postgres_fdw: Allow postgres_fdw.application_name to include escape sequences.
The application_name used when postgres_fdw establishes a connection to
a foreign server can be specified as a connection parameter of the server
object, via the GUC postgres_fdw.application_name, or both. This commit
allows those parameters to include escape sequences that begin with the
% character. postgres_fdw replaces those escape sequences with status
information. For example, %d and %u are replaced with the database name
and user name of the local session, respectively. This feature makes it
easier to add information for tracking remote transactions or queries
into the application_name of a remote connection.

Author: Hayato Kuroda
Reviewed-by: Kyotaro Horiguchi, Masahiro Ikeda, Hou Zhijie, Fujii Masao
Discussion: https://postgr.es/m/TYAPR01MB5866FAE71C66547C64616584F5EB9@TYAPR01MB5866.jpnprd01.prod.outlook.com
Discussion: https://postgr.es/m/TYCPR01MB5870D1E8B949DAF6D3B84E02F5F29@TYCPR01MB5870.jpnprd01.prod.outlook.com
2021-12-24 16:55:11 +09:00
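
A usage sketch of the escape sequences described above (the format string is
just an example):

    -- expands to e.g. "fdw_mydb_alice" on the remote server
    SET postgres_fdw.application_name TO 'fdw_%d_%u';
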
Michael Paquier 2e577c9446 Remove assertion for ALTER TABLE .. DETACH PARTITION CONCURRENTLY
One code path related to this flavor of ALTER TABLE was asserting that the
relation to detach has to be a normal table or a partitioned table, which
would fail when the command is used with a different relation kind.

Views, sequences and materialized views cannot be part of a partition
tree, so these would cause the command to fail anyway, but the assertion
was triggered.  Foreign tables can be part of a partition tree, and
again the assertion would have failed.  The simplest solution is just to
remove this assertion, so that we get the same failure as in the
non-concurrent code path.

While at it, add a regression test in postgres_fdw for the concurrent
partition detach of a foreign table, as per a suggestion from Alexander
Lakhin.

Issue introduced in 71f4c8c.

Reported-by: Alexander Lakhin
Author: Michael Paquier, Alexander Lakhin
Reviewed-by: Peter Eisentraut, Kyotaro Horiguchi
Discussion: https://postgr.es/m/17339-a9e09aaf38a3457a@postgresql.org
Backpatch-through: 14
2021-12-22 15:38:00 +09:00
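
The scenario exercised by the new regression test looks roughly like this
(object names are hypothetical):

    -- parent is a partitioned table and part_ft is a foreign-table partition
    ALTER TABLE parent DETACH PARTITION part_ft CONCURRENTLY;
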
Tom Lane 1fada5d81e Add missing EmitWarningsOnPlaceholders() calls.
Extensions that define any custom GUCs should call
EmitWarningsOnPlaceholders after doing so, to help catch misspellings.
Many of our contrib modules hadn't gotten the memo on that, though.

Also add such calls to src/test/modules extensions that have GUCs.
While these aren't really user-facing, they should illustrate good
practice not faulty practice.

Shinya Kato

Discussion: https://postgr.es/m/524fa2c0a34f34b68fbfa90d0760d515@oss.nttdata.com
2021-12-21 12:12:24 -05:00
Tom Lane 0e6e7f0806 Merge dblink's paths test script into its main test.
There's no longer any reason to fire up a separate psql run
to create these functions.  (Some refactoring in the main
regression tests is also called for, but that will take
more thought.)

Discussion: https://postgr.es/m/1655733.1639871614@sss.pgh.pa.us
2021-12-20 16:49:13 -05:00
Tom Lane dc9c3b0ff2 Remove dynamic translation of regression test scripts, step 2.
"git mv" all the input/*.source and output/*.source files into
the corresponding sql/ and expected/ directories.  Then remove
the pg_regress and Makefile infrastructure associated with
dynamic translation.

Discussion: https://postgr.es/m/1655733.1639871614@sss.pgh.pa.us
2021-12-20 14:15:52 -05:00
Tom Lane d1029bb5a2 Remove dynamic translation of regression test scripts, step 1.
pg_regress has long had provisions for dynamically substituting path
names into regression test scripts and result files, but use of that
feature has always been a serious pain in the neck, mainly because
updating the result files requires tedious manual editing.  Let's
get rid of that in favor of passing down the paths in environment
variables.

In addition to being easier to maintain, this way is capable of
dealing with path names that require escaping at runtime, for example
paths containing single-quote marks.  (There are other stumbling
blocks in the way of actually building in a path that looks like
that, but removing this one seems like a good thing to do.)  The key
coding rule that makes that possible is to concatenate pieces of a
dynamically-variable string using psql's \set command, and then use
the :'variable' notation to quote and escape the string for the next
level of interpretation.

In hopes of making this change more transparent to "git blame",
I've split it into two steps.  This commit adds the necessary
pg_regress.c support and changes all the *.source files in-place
so that they no longer require any dynamic translation.  The next
commit will just "git mv" them into the regular sql/ and expected/
directories.

Discussion: https://postgr.es/m/1655733.1639871614@sss.pgh.pa.us
2021-12-20 14:06:15 -05:00
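
The key coding rule mentioned above looks roughly like this in a test script
(the variable and file names are illustrative):

    -- abs_srcdir is assumed to have been set from an environment variable
    \set filename :abs_srcdir '/data/onek.data'
    COPY onek FROM :'filename';
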
Michael Paquier ece8c76192 Remove assertion for replication origins in PREPARE TRANSACTION
When using replication origins, pg_replication_origin_xact_setup() is an
optional way to set an LSN and a timestamp to mark the origin, which are
additionally added to WAL for transaction commits or aborts (including 2PC
transactions).  An assertion in the code path of PREPARE TRANSACTION
assumed that this data would always be set, so it would trigger when using
replication origins without setting up an origin LSN.  Some tests are added
to provide more coverage for this kind of scenario.

Oversight in commit 1eb6d65.

Per discussion with Amit Kapila and Masahiko Sawada.

Discussion: https://postgr.es/m/YbbBfNSvMm5nIINV@paquier.xyz
Backpatch-through: 11
2021-12-14 10:58:15 +09:00
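
The scenario that tripped the assertion is roughly the following (requires
max_prepared_transactions > 0; the origin, table and GID names are
hypothetical):

    SELECT pg_replication_origin_create('test_origin');
    SELECT pg_replication_origin_session_setup('test_origin');
    -- note: no pg_replication_origin_xact_setup() call, so no origin LSN is set
    BEGIN;
    INSERT INTO t VALUES (1);
    PREPARE TRANSACTION 'gid1';
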
Tom Lane 07eee5a0dc Create a new type category for "internal use" types.
Historically we've put type "char" into the S (String) typcategory,
although calling it a string is a stretch considering it can only
store one byte.  (In our actual usage, it's more like an enum.)
This choice now seems wrong in view of the special heuristics
that parse_func.c and parse_coerce.c have for TYPCATEGORY_STRING:
it's not a great idea for "char" to have those preferential casting
behaviors.

Worse than that, recent patches inventing special-purpose types
like pg_node_tree have assigned typcategory S to those types,
meaning they also get preferential casting treatment that's designed
on the assumption that they can hold arbitrary text.

To fix, invent a new category TYPCATEGORY_INTERNAL for internal-use
types, and assign that to all these types.  I used code 'Z' for
lack of a better idea ('I' was already taken).

This change breaks one query in psql/describe.c, which now needs to
explicitly cast a catalog "char" column to text before concatenating
it with an undecorated literal.  Also, a test case in contrib/citext
now needs an explicit cast to convert citext to "char".  Since the
point of this change is to not have "char" be a surprisingly-available
cast target, these breakages seem OK.

Per report from Ian Campbell.

Discussion: https://postgr.es/m/2216388.1638480141@sss.pgh.pa.us
2021-12-11 14:10:51 -05:00
Michael Paquier 5d08137076 Fix some typos with {a,an}
One of the changes impacts the documentation, so backpatch.

Author: Peter Smith
Discussion: https://postgr.es/m/CAHut+Pu6+c+r3mY24VT7u+H+E_s6vMr5OdRiZ8NT3EOa-E5Lmw@mail.gmail.com
Backpatch-through: 14
2021-12-09 15:20:36 +09:00
Fujii Masao 815d61fcd4 postgres_fdw: Report warning when timeout expires while getting query result.
When aborting a remote transaction or sending a cancel request to a remote
server, postgres_fdw calls pgfdw_get_cleanup_result() to wait for the result
of the transaction abort query or cancel request to arrive. It fails to get
the result if the timeout expires or connection trouble occurs.

Previously postgres_fdw reported no warning message even when the timeout
expired or connection trouble occurred in pgfdw_get_cleanup_result().
This could make troubleshooting harder when such an event occurred.

This commit makes pgfdw_get_cleanup_result() tell its caller, on failure,
what trouble (timeout or connection error) occurred, and also makes its
caller report the proper warning message based on that information.

Author: Fujii Masao
Reviewed-by: Bharath Rupireddy
Discussion: https://postgr.es/m/15aa988c-722e-ad3e-c936-4420c5b2bfea@oss.nttdata.com
2021-12-08 23:31:46 +09:00
Peter Eisentraut 254c63e9ed pgrowlocks: Fix incorrect format placeholders
Transaction IDs should be printed as unsigned, similar to xidout().
2021-12-08 09:32:16 +01:00
Peter Eisentraut 814e1d9ff7 pgcrypto: Remove explicit hex encoding/decoding from tests
This was from before the hex format was available in bytea.  Now we
can remove the extra explicit encoding/decoding calls and rely on the
default output format.

Discussion: https://www.postgresql.org/message-id/flat/17dcb4f7-7ac1-e2b6-d5f7-2dfba06cd9ee%40enterprisedb.com
2021-12-08 06:06:22 +01:00
Peter Eisentraut 37b2764593 Some RELKIND macro refactoring
Add more macros to group some RELKIND_* macros:

- RELKIND_HAS_PARTITIONS()
- RELKIND_HAS_TABLESPACE()
- RELKIND_HAS_TABLE_AM()

Reviewed-by: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Alvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://www.postgresql.org/message-id/flat/a574c8f1-9c84-93ad-a9e5-65233d6fc00f%40enterprisedb.com
2021-12-03 14:08:19 +01:00
Fujii Masao 557c39bba9 postgres_fdw: Fix unexpected reporting of empty message.
pgfdw_report_error() in postgres_fdw gets a message from PGresult or
PGconn to report an error received from a remote server. Previously,
if it could get a message from neither of them, it unexpectedly reported
an empty message. The cause of this issue was that pgfdw_report_error()
didn't properly handle the case where no message could be obtained
and its local variable message_primary was set to '\0'.

This commit improves pgfdw_report_error() so that it reports the message
"could not obtain ..." when it gets no message and message_primary
is set to '\0'. This is the same behavior as when message_primary is NULL.

dblink_res_error() in dblink has the same issue, so this commit also
improves it in the same way.

Back-patch to all supported branches.

Author: Fujii Masao
Reviewed-by: Bharath Rupireddy
Discussion: https://postgr.es/m/477c16c8-7ea4-20fc-38d5-ed3a77ed616c@oss.nttdata.com
2021-12-03 17:35:29 +09:00
Tom Lane 83884682f4 psql: include intra-query "--" comments in what's sent to the server.
psql's lexer has historically deleted dash-dash (single-line) comments
from what's collected and sent to the server.  This is inconsistent
with what it does for slash-star comments, and people have complained
before that they wish such comments would be captured in the server log.
Undoing the decision completely seems like too big a behavioral change,
however.  In particular, comments on lines preceding the start of a
query are generally not thought of as being part of that query.

What we can do to improve the situation is to capture comments that
are clearly *within* a query, that is after the first non-whitespace,
non-comment token but before the query's ending semicolon or backslash
command.  This is a nearly trivial code change, and it affects only a
few regression test results.

(It is tempting to try to apply the same rule to slash-star comments.
But it's hard to see how to do that without getting strange history
behavior for comments that cross lines, especially if the user then
starts a new query on the same line as the star-slash.  In view of
the lack of complaints, let's leave that case alone.)

Discussion: https://postgr.es/m/CAJcOf-cAdMVr7azeYR7nWKsNp7qhORzc84rV6d7m7knG5Hrtsw@mail.gmail.com
2021-12-01 12:06:31 -05:00
Daniel Gustafsson ac0db34e0e Remove PG_USED_FOR_ASSERTS_ONLY from variables in general use
fsstate in process_pending_requests (in postgres_fdw.c) was added in
8998e3cafa as an assertion-only variable; 1ec7fca859 started using
the variable outside of assertions.

rd_index in get_index_column_opclass (in lsyscache.c) was introduced
in 2a6368343f, and then promptly used in the fix commit 7e04160390
shortly thereafter.

This removes the PG_USED_FOR_ASSERTS_ONLY variable decoration from
the above-mentioned variables.

Reviewed-by: Greg Nancarrow <gregn4422@gmail.com>
Discussion: https://postgr.es/m/F959106C-0F21-43A5-B2AE-D007D51ACBEE@yesql.se
2021-11-30 14:02:14 +01:00
Tom Lane 3804539e48 Replace random(), pg_erand48(), etc with a better PRNG API and algorithm.
Standardize on xoroshiro128** as our basic PRNG algorithm, eliminating
a bunch of platform dependencies as well as fundamentally-obsolete PRNG
code.  In addition, this API replacement will ease replacing the
algorithm again in future, should that become necessary.

xoroshiro128** is a few percent slower than the drand48 family,
but it can produce full-width 64-bit random values, not just 48-bit ones,
and it should be much more trustworthy.  It's likely to be noticeably
faster than the platform's random(), depending on which platform you
are thinking about; and we can have non-global state vectors easily,
unlike with random().  It is not cryptographically strong, but neither
are the functions it replaces.

Fabien Coelho, reviewed by Dean Rasheed, Aleksander Alekseev, and myself

Discussion: https://postgr.es/m/alpine.DEB.2.22.394.2105241211230.165418@pseudo
2021-11-28 21:33:07 -05:00
Peter Eisentraut fb5961fd13 Fix incorrect format placeholders
Also choose better types for the underlying variables to make this
more consistent.
2021-11-24 08:15:17 +01:00
David Rowley e502150f7d Allow Memoize to operate in binary comparison mode
Memoize would always use the hash equality operator for the cache key
types to determine if the current set of parameters were the same as some
previously cached set.  Certain types, such as floating point, where -0.0
and +0.0 differ in their binary representation but are classed as equal by
the hash equality operator, may cause problems: unless the join uses the
same operator, it's possible that whichever join operator is being used
would be able to distinguish the two values, in which case we may
accidentally return incorrect rows from the cache.

To fix this, we add a binary mode to Memoize that allows it to compare the
current set of parameters to previously cached values bit-by-bit rather
than logically using the hash equality operator.  This binary mode is
always used for LATERAL joins and is used for normal joins when any of the
join operators are not hashable.

Reported-by: Tom Lane
Author: David Rowley
Discussion: https://postgr.es/m/3004308.1632952496@sss.pgh.pa.us
Backpatch-through: 14, where Memoize was added
2021-11-24 10:06:59 +13:00
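
The floating-point case mentioned above is easy to see directly: the two
values compare equal, yet their representations differ.

    SELECT '-0.0'::float8 = '0.0'::float8 AS logically_equal,  -- true
           '-0.0'::float8::text AS a,                          -- '-0'
           '0.0'::float8::text  AS b;                          -- '0'
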
Michael Paquier 1922d7c6e1 Add SQL functions to monitor the directory contents of replication slots
This commit adds a set of functions able to look at the contents of
various paths related to replication slots:
- pg_ls_logicalsnapdir, for pg_logical/snapshots/
- pg_ls_logicalmapdir, for pg_logical/mappings/
- pg_ls_replslotdir, for pg_replslot/<slot_name>/

These are intended to be used by monitoring tools.  Unlike pg_ls_dir(),
execution permission can be granted to non-superusers.  Members of the
pg_monitor role gain access to these functions.

Bump catalog version.

Author: Bharath Rupireddy
Reviewed-by: Nathan Bossart, Justin Pryzby
Discussion: https://postgr.es/m/CALj2ACWsfizZjMN6bzzdxOk1ADQQeSw8HhEjhmVXn_Pu+7VzLw@mail.gmail.com
2021-11-23 19:29:42 +09:00
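
A quick sketch of how these functions might be used (the role and slot names
are hypothetical):

    GRANT EXECUTE ON FUNCTION pg_ls_replslotdir(text) TO monitor_role;
    SELECT * FROM pg_ls_logicalsnapdir();
    SELECT * FROM pg_ls_replslotdir('my_slot');
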
Daniel Gustafsson aa12781b0d Improve publication error messages
Commit 81d5995b4b introduced more fine-grained error messages for
incorrect relkinds for publication, but unlogged and temporary
tables were still reported using the same message.  This provides
separate error messages for these types of relpersistence.

Author: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>
Reviewed-by: Peter Eisentraut <peter.eisentraut@enterprisedb.com>
Reviewed-by: Jeevan Ladhe <jeevan.ladhe@enterprisedb.com>
Reviewed-by: Euler Taveira <euler@eulerto.com>
Discussion: https://postgr.es/m/CALj2ACW9S=AswyQHjtO6WMcsergMkCBTtzXGrM8DX26DzfeTLQ@mail.gmail.com
2021-11-17 14:40:38 +01:00
Tom Lane f8abb0f5e1 postgres_fdw: suppress casts on constants in limited cases.
When deparsing an expression of the form "remote_var OP constant",
we'd normally apply a cast to the constant to make sure that the
remote parser thinks it's of the same type we do.  However, doing
so is often not necessary, and it causes problems if the user has
intentionally declared the local column as being of a different
type than the remote column.  A plausible use-case for that is
using text to represent a type that's an enum on the remote side.
A comparison on such a column will get shipped as "var = 'foo'::text",
which blows up on the remote side because there's no enum = text
operator.  But if we simply leave off the explicit cast, the
comparison will do exactly what the user wants.

It's possible to do this without major risk of semantic problems, by
relying on the longstanding parser heuristic that "if one operand of
an operator is of type unknown, while the other one has a known type,
assume that the unknown operand is also of that type".  Hence, this
patch leaves off the cast only if (a) the operator inputs have the same
type locally; (b) the constant will print as a string literal or NULL,
both of which are initially taken as type unknown; and (c) the non-Const
input is a plain foreign Var.  Rule (c) guarantees that the remote
parser will know the type of the non-Const input; moreover, it means
that if this cast-omission does cause any semantic surprises, that can
only happen in cases where the local column has a different type than
the remote column.  That wasn't guaranteed to work anyway, and this
patch should represent a net usability gain for such cases.

One point that I (tgl) remain slightly uncomfortable with is that we
will ignore an implicit RelabelType when deciding if the non-Const input
is a plain Var.  That makes it a little squishy to argue that the remote
should resolve the Const as being of the same type as its Var, because
then our Const is not the same type as our Var.  However, if we don't do
that, then this hack won't work as desired if the user chooses to use
varchar rather than text to represent some remote column.  That seems
useful, so do it like this for now.  We might have to give up the
RelabelType-ignoring bit if any problems surface.

Dian Fay, with review and kibitzing by me

Discussion: https://postgr.es/m/C9LU294V7K4F.34LRRDU449O45@lamia
2021-11-12 11:50:47 -05:00
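
A sketch of the use-case described above (the table, server, and column names
are hypothetical):

    -- the remote "status" column is an enum; declare it as text locally
    CREATE FOREIGN TABLE ft_orders (id int, status text)
        SERVER remote_srv OPTIONS (table_name 'orders');
    -- the constant is now shipped without a ::text cast, so the remote
    -- enum-versus-literal comparison resolves as the user intends
    SELECT * FROM ft_orders WHERE status = 'shipped';
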
Peter Eisentraut 733e039153 Fix incorrect format placeholders 2021-11-10 08:13:12 +01:00
Tomas Vondra e2fbb88372 Fix gist_bool_ops to use gbtreekey2
Commit 57e3c5160b added a new GiST bool opclass, but it used gbtreekey4
to store the data, which left two bytes undefined, as reported by skink,
our valgrind animal. There was a bit more confusion, because the opclass
also used gbtreekey8 in the definition.

Fix by defining a new gbtreekey2 struct, and using it in all the places.

Discussion: https://postgr.es/m/CAE2gYzyDKJBZngssR84VGZEN=Ux=V9FV23QfPgo+7-yYnKKg4g@mail.gmail.com
2021-11-08 01:14:55 +01:00
Tom Lane 568620dfd6 contrib/sslinfo needs a fix too to make hamerkop happy.
Re-ordering the #include's is a bit problematic here because
libpq/libpq-be.h needs to include <openssl/ssl.h>.  Instead,
let's #undef the unwanted macro after all the #includes.
This is definitely uglier than the other way, but it should
work despite possible future header rearrangements.

(A look at the openssl headers indicates that X509_NAME is the
only conflicting symbol that we use.)

In passing, remove a related but long-incorrect comment in
pg_backup_archiver.h.

Discussion: https://postgr.es/m/1051867.1635720347@sss.pgh.pa.us
2021-11-07 11:33:53 -05:00
Tomas Vondra 57e3c5160b Add bool GiST opclass to btree_gist
Adds a bool opclass to the btree_gist extension, to allow creating GiST
indexes on bool columns. GiST indexes on a single bool column don't seem
particularly useful, but this allows defining exclusion constraints
involving a bool column, for example.

Author: Emre Hasegeli
Reviewed-by: Andrey Borodin
Discussion: https://postgr.es/m/CAE2gYzyDKJBZngssR84VGZEN=Ux=V9FV23QfPgo+7-yYnKKg4g@mail.gmail.com
2021-11-06 17:00:43 +01:00
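
A sketch of the exclusion-constraint use case mentioned above (the table and
column names are hypothetical):

    CREATE EXTENSION IF NOT EXISTS btree_gist;
    CREATE TABLE room_booking (
        active bool,
        during tsrange,
        EXCLUDE USING gist (active WITH =, during WITH &&)
    );
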
Robert Haas bd807be693 amcheck: Add additional TOAST pointer checks.
Expand the checks of toasted attributes to complain if the rawsize is
overlarge.  For compressed attributes, also complain if compression
appears to have expanded the attribute or if the compression method is
invalid.

Mark Dilger, reviewed by Justin Pryzby, Alexander Alekseev, Heikki
Linnakangas, Greg Stark, and me.

Discussion: http://postgr.es/m/8E42250D-586A-4A27-B317-8B062C3816A8@enterprisedb.com
2021-11-05 09:24:25 -04:00
Peter Eisentraut db7d1a7b05 pgcrypto: Remove non-OpenSSL support
pgcrypto had internal implementations of some encryption algorithms,
as an alternative to calling out to OpenSSL.  These were rarely used,
since most production installations are built with OpenSSL.  Moreover,
maintaining parallel code paths makes the code more complex and
difficult to maintain.

This patch removes these internal implementations.  Now, pgcrypto is
only built if OpenSSL support is configured.

Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Discussion: https://www.postgresql.org/message-id/flat/0b42f1df-8cba-6a30-77d7-acc241cc88c1%40enterprisedb.com
2021-11-05 14:06:59 +01:00
Tom Lane e9d9ba2a4d Avoid some other O(N^2) hazards in list manipulation.
In the same spirit as 6301c3ada, fix some more places where we were
using list_delete_first() in a loop and thereby risking O(N^2)
behavior.  It's not clear that the lists manipulated in these spots
can get long enough to be really problematic ... but it's not clear
that they can't, either, and the fixes are simple enough.

As before, back-patch to v13.

Discussion: https://postgr.es/m/CD2F0E7F-9822-45EC-A411-AE56F14DEA9F@amazon.com
2021-11-01 16:24:39 -04:00
Tom Lane a667b06683 Don't try to read a multi-GB pg_stat_statements file in one call.
Windows fails on a request to read() more than INT_MAX bytes,
and perhaps other platforms could have similar issues.  Let's
adjust this code to read at most 1GB per call.

(One would not have thought the file could get that big, but now
we have a field report of trouble, so it can.  We likely ought to
add some mechanism to limit the size of the query-texts file
separately from the size of the hash table.  That is not this
patch, though.)

Per bug #17254 from Yusuke Egashira.  It's been like this for
awhile, so back-patch to all supported branches.

Discussion: https://postgr.es/m/17254-a926c89dc03375c2@postgresql.org
2021-10-31 19:13:48 -04:00
Peter Eisentraut 237c12aabe uuid-ossp: Remove obsolete build connection with pgcrypto
unused since a8ed6bb8f4
2021-10-30 12:53:02 +02:00
Tom Lane 7f580aa5d8 Improve contrib/amcheck's tests for CREATE INDEX CONCURRENTLY.
Commits fdd965d07 and 3cd9c3b92 tested CREATE INDEX CONCURRENTLY by
launching two separate pgbench runs concurrently.  This was needed so
that only a single client thread would run CREATE INDEX CONCURRENTLY,
avoiding deadlock between two CICs.  However, there's a better way,
which is to use an advisory lock to prevent concurrent CICs.  That's
better in part because the test code is shorter and more readable, but
mostly because it automatically scales things to launch an appropriate
number of CICs relative to the number of INSERT transactions.
As committed, typically half to three-quarters of the CIC transactions
were pointless because the INSERT transactions had already stopped.

In passing, remove background_pgbench, which was added to support
these tests and isn't needed anymore.  We can always put it back
if we find a use for it later.

Back-patch to v12; older pgbench versions lack the
conditional-execution features needed for this method.

Tom Lane and Andrey Borodin

Discussion: https://postgr.es/m/139687.1635277318@sss.pgh.pa.us
2021-10-28 11:45:14 -04:00
Fujii Masao 5fedf7417b Improve HINT message that FDW reports when there are no valid options.
The foreign data wrapper's validator function provides a HINT message with
a list of valid options for the object specified in a CREATE or ALTER
command when the option given in the command is invalid. Previously
postgresql_fdw_validator() and the validator functions for postgres_fdw and
dblink_fdw worked that way even when there were no valid options for the
object, which could lead to a HINT message with an empty list (because
there were no valid options). For example, ALTER FOREIGN DATA WRAPPER
postgres_fdw OPTIONS (format 'csv') reported the following ERROR and HINT
messages. This behavior was confusing.

    ERROR: invalid option "format"
    HINT: Valid options in this context are:

There is no such issue in file_fdw. The validator function for file_fdw
reports the HINT message "There are no valid options in this context."
instead in that case.

This commit improves postgresql_fdw_validator() and the validator functions
for postgres_fdw and dblink_fdw so that they do likewise. For example,
this change causes the above ALTER FOREIGN DATA WRAPPER command to
report the following messages.

    ERROR:  invalid option "format"
    HINT:  There are no valid options in this context.

Author: Kosei Masumura
Reviewed-by: Bharath Rupireddy, Fujii Masao
Discussion: https://postgr.es/m/557d06cebe19081bfcc83ee2affc98d3@oss.nttdata.com
2021-10-27 00:46:52 +09:00
Andrew Dunstan b3b4d8e68a Move Perl test modules to a better namespace
The five modules in our TAP test framework all had names in the top
level namespace. This is unwise because, even though we're not
exporting them to CPAN, the names can leak, for example if they are
exported by the RPM build process. We therefore move the modules to the
PostgreSQL::Test namespace. In the process PostgresNode is renamed to
Cluster, and TestLib is renamed to Utils. PostgresVersion becomes simply
PostgreSQL::Version, to avoid possible confusion about what it's the
version of.

Discussion: https://postgr.es/m/aede93a4-7d92-ef26-398f-5094944c2504@dunslane.net

Reviewed by Erik Rijkers and Michael Paquier
2021-10-24 10:28:19 -04:00
Noah Misch 3cd9c3b921 Fix CREATE INDEX CONCURRENTLY for the newest prepared transactions.
The purpose of commit 8a54e12a38 was to
fix this, and it sufficed when the PREPARE TRANSACTION completed before
the CIC looked for lock conflicts.  Otherwise, things still broke.  As
before, in a cluster having used CIC while having enabled prepared
transactions, queries that use the resulting index can silently fail to
find rows.  It may be necessary to reindex to recover from past
occurrences; REINDEX CONCURRENTLY suffices.  Fix this for future index
builds by making CIC wait for arbitrarily-recent prepared transactions
and for ordinary transactions that may yet PREPARE TRANSACTION.  As part
of that, have PREPARE TRANSACTION transfer locks to its dummy PGPROC
before it calls ProcArrayClearTransaction().  Back-patch to 9.6 (all
supported versions).

Andrey Borodin, reviewed (in earlier versions) by Andres Freund.

Discussion: https://postgr.es/m/01824242-AA92-4FE9-9BA7-AEBAFFEA3D0C@yandex-team.ru
2021-10-23 18:36:38 -07:00
Noah Misch fdd965d074 Avoid race in RelationBuildDesc() affecting CREATE INDEX CONCURRENTLY.
CIC and REINDEX CONCURRENTLY assume backends see their catalog changes
no later than each backend's next transaction start.  That failed to
hold when a backend absorbed a relevant invalidation in the middle of
running RelationBuildDesc() on the CIC index.  Queries that use the
resulting index can silently fail to find rows.  Fix this for future
index builds by making RelationBuildDesc() loop until it finishes
without accepting a relevant invalidation.  It may be necessary to
reindex to recover from past occurrences; REINDEX CONCURRENTLY suffices.
Back-patch to 9.6 (all supported versions).

Noah Misch and Andrey Borodin, reviewed (in earlier versions) by Andres
Freund.

Discussion: https://postgr.es/m/20210730022548.GA1940096@gust.leadboat.com
2021-10-23 18:36:38 -07:00
Etsuro Fujita 8c7be86883 postgres_fdw: Move comments about elog level in (sub)abort cleanup.
The comments were misplaced when adding postgres_fdw.  Fix that by
moving the comments to more appropriate functions.

Author: Etsuro Fujita
Backpatch-through: 9.6
Discussion: https://postgr.es/m/CAPmGK164sAXQtC46mDFyu6d-T25Mzvh5qaRNkit06VMmecYnOA%40mail.gmail.com
2021-10-13 19:00:00 +09:00
Michael Paquier f9c4cb6868 Add more $Test::Builder::Level in the TAP tests
Incrementing the level of the call stack reported is useful for
debugging purposes, as it makes it possible to tell which part of a test
is actually failing, especially if a test is structured with subroutines
that call routines from Test::More.

This adds more incrementations of $Test::Builder::Level where debugging
gets improved (it does not make sense for some paths, like pg_rewind,
where long subroutines are used).

A note is added to src/test/perl/README about that, based on a
suggestion from Andrew Dunstan and a wording coming from both of us.

Usage of Test::Builder::Level has spread in 12, so a backpatch down to
this version is done.

Reviewed-by: Andrew Dunstan, Peter Eisentraut, Daniel Gustafsson
Discussion: https://postgr.es/m/YV1CCFwgM1RV1LeS@paquier.xyz
Backpatch-through: 12
2021-10-12 11:15:44 +09:00
Peter Geoghegan 292698f158 amcheck: Skip unlogged relations in Hot Standby.
Have verify_heapam.c treat unlogged relations as if they were simply
empty when in Hot Standby mode.  This brings it in line with
verify_nbtree.c, which has handled unlogged relations in the same way
since bugfix commit 6754fe65a4.  This was an oversight in commit
866e24d47d, which extended contrib/amcheck to check heap relations.

In passing, lower the verbosity used when reporting that a relation has
been skipped like this, from NOTICE to DEBUG1.  This is appropriate
because the skipping behavior is only an implementation detail, needed
to work around the fact that unlogged tables don't have smgr-level
storage for their main fork when in Hot Standby mode.

Affected unlogged relations should be considered "trivially verified",
not skipped over.  They are verified in the same sense that a totally
empty relation can be verified.  This behavior seems least surprising
overall, since unlogged relations on a replica will initially be empty
if and when the replica is promoted and Hot Standby ends.

Author: Mark Dilger <mark.dilger@enterprisedb.com>
Reviewed-By: Peter Geoghegan <pg@bowt.ie>
Discussion: https://postgr.es/m/CAH2-Wzk_pukOFY7JmdiFLsrz+Pd3V8OwgC1TH2Vd5BH5ZgK4bA@mail.gmail.com
Backpatch: 14-, where heapam verification was introduced.
2021-10-11 17:21:48 -07:00
Michael Paquier 68f7c4b57a Clean up more code using "(expr) ? true : false"
This is similar to fd0625c, taking care of any remaining code paths that
are worth the cleanup.  This also changes some cases using opposite
expression patterns.

Author: Justin Pryzby, Masahiko Sawada
Discussion: https://postgr.es/m/CAD21AoCdF8dnUvr-BUWWGvA_XhKSoANacBMZb6jKyCk4TYfQ2Q@mail.gmail.com
2021-10-11 09:36:42 +09:00
Etsuro Fujita 972c7c6567 postgres_fdw: Fix comments in connection.c.
Commit 27e1f1456 missed updating some comments.

Reviewed-by: Bharath Rupireddy
Backpatch-through: 14
Discussion: https://postgr.es/m/CAPmGK15Q2Nm6U%2Ba_GwskrWFEVBZ9_3VKOvRrprGufpx91M_3Sw%40mail.gmail.com
2021-10-07 18:15:00 +09:00
Tom Lane 3071bbfe44 Fix null-pointer crash in postgres_fdw's conversion_error_callback.
Commit c7b7311f6 adjusted conversion_error_callback to always use
information from the query's rangetable, to avoid doing catalog lookups
in an already-failed transaction.  However, as a result of the utterly
inadequate documentation for make_tuple_from_result_row, I failed to
realize that fsstate could be NULL in some contexts.  That led to a
crash if we got a conversion error in such a context.  Fix by falling
back to the previous coding when fsstate is NULL.  Improve the
commentary, too.

Per report from Andrey Borodin.  Back-patch to 9.6, like the previous
patch.

Discussion: https://postgr.es/m/08916396-55E4-4D68-AB3A-BD6066F9E5C0@yandex-team.ru
2021-10-06 15:50:24 -04:00
Michael Paquier 2b0da0365b pg_stat_statements: Add some tests for older versions still usable
When the newest version is installed, the backend loads objects from
the oldest complete SQL file (here 1.4) and then updates to the latest
version with transition scripts (up to 1.9 currently).  This provides
some coverage for upgrades of pg_stat_statements, but there is no test
to show how things have changed across each version.

This adds a couple of tests for the upgrade paths using objects from
each supported version, stressing the objects whose behavior has
changed across versions.

Author: Erica Zhang
Reviewed-by: Julien Rouhaud, Michael Paquier
Discussion: https://postgr.es/m/tencent_BBA974AFF61379F2345E782FD6C55891950A@qq.com
2021-10-02 17:40:13 +09:00
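
The upgrade path being exercised can be reproduced manually along these lines:

    CREATE EXTENSION pg_stat_statements VERSION '1.4';  -- oldest complete SQL script
    ALTER EXTENSION pg_stat_statements UPDATE;          -- apply transition scripts
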
Tom Lane 6bc6bd47cf Fix instability in contrib/bloom TAP tests.
It turns out that the instability complained of in commit d3c09b9b1
has an embarrassingly simple explanation.  The test script waits for
the standby to flush incoming WAL to disk, but it should wait for
the WAL to be replayed, since we are testing for the effects of that
to be visible.

While at it, use wait_for_catchup instead of reinventing that logic,
and adjust $Test::Builder::Level to improve future error reports.

Back-patch to v12 where the necessary infrastructure came in
(cf. aforesaid commit).  Also back-patch 7d1aa6bf1 so that the
test will actually get run.

Discussion: https://postgr.es/m/2854602.1632852664@sss.pgh.pa.us
2021-09-28 17:34:31 -04:00
Peter Eisentraut c3b011d991 Support amcheck of sequences
Sequences were left out of the list of relation kinds that
verify_heapam knew how to check, though it is fairly trivial to allow
them.  This commit does that, and while at it, updates pg_amcheck to
include sequences among the relations matched by table and relation
patterns.

Author: Mark Dilger <mark.dilger@enterprisedb.com>
Discussion: https://www.postgresql.org/message-id/flat/81ad4757-92c1-4aa3-7bee-f609544837e3%40enterprisedb.com
2021-09-28 15:26:25 +02:00
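
With this change a sequence can be checked like any other heap relation; a
small sketch (the sequence name is hypothetical):

    CREATE EXTENSION IF NOT EXISTS amcheck;
    CREATE SEQUENCE my_seq;
    SELECT * FROM verify_heapam('my_seq');
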
Tom Lane 7d1aa6bf1c Re-enable contrib/bloom's TAP tests.
These tests were disabled back in 2018 (commit d3c09b9b1) because of
failures observed in the buildfarm.  I've not been able to reproduce
any failure on longfin's host, though, so I'm curious whether or to
what extent we've fixed the problem.  Let's re-enable it (in HEAD
only) and see what blows up.

Discussion: https://postgr.es/m/2769443.1632773967@sss.pgh.pa.us
2021-09-27 18:48:01 -04:00
Michael Paquier e767ddcd35 Fix typos and grammar in code comments
Several mistakes have piled up in the code comments over time,
including incorrect grammar, function names and simple typos.  This
commit takes care of a portion of these.

No backpatch is done as this is only cosmetic.

Author: Justin Pryzby
Discussion: https://postgr.es/m/20210924215827.GS831@telsasoft.com
2021-09-27 14:21:28 +09:00
Fujii Masao 85c6961128 postgres_fdw: Refactor transaction rollback code to avoid code duplication.
In postgres_fdw, the pgfdw_xact_callback() and pgfdw_subxact_callback()
callback functions do almost the same thing to roll back remote toplevel
transactions and subtransactions. But previously their rollback logic was
implemented separately in each function, and in different ways, which
could decrease the readability and maintainability of the code.

To fix the issue, this commit creates a common function to roll back
remote transactions and makes those callback functions use it, which
allows us to avoid unnecessary code duplication.

Author: Fujii Masao
Reviewed-by: Zhihong Yu, Bharath Rupireddy
Discussion: https://postgr.es/m/62fbb63a-d46c-fb47-a56d-f6be1909aa44@oss.nttdata.com
2021-09-22 23:47:36 +09:00
Peter Geoghegan d7897abf9e pageinspect: Make page deletion elog less chatty.
An elog that reports the value of a transaction ID stored on a deleted
nbtree page was added by commit e5d8a999, which taught page deletion to
store full 64-bit XIDs.  It seems very chatty on further reflection, so
lower its elevel from NOTICE to DEBUG2.

Author: Peter Geoghegan <pg@bowt.ie>
Backpatch: 14-, just like the nbtree XID enhancement.
2021-09-17 14:19:51 -07:00
Noah Misch b073c3ccd0 Revoke PUBLIC CREATE from public schema, now owned by pg_database_owner.
This switches the default ACL to what the documentation has recommended
since CVE-2018-1058.  Upgrades will carry forward any old ownership and
ACL.  Sites that declined the 2018 recommendation should take a fresh
look.  Recipes for commissioning a new database cluster from scratch may
need to create a schema, grant more privileges, etc.  Out-of-tree test
suites may require such updates.

Reviewed by Peter Eisentraut.

Discussion: https://postgr.es/m/20201031163518.GB4039133@rfd.leadboat.com
2021-09-09 23:38:09 -07:00
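
For sites that explicitly want the previous behavior, or recipes that create
their own schema, the adjustments look roughly like this (the role name is
hypothetical):

    -- restore the old default, if a site really wants it
    GRANT CREATE ON SCHEMA public TO PUBLIC;
    -- or commission a dedicated schema for an application role
    CREATE SCHEMA app AUTHORIZATION app_owner;
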
Peter Eisentraut 639a86e36a Remove Value node struct
The Value node struct is a weird construct.  It is its own node type,
but most of the time, it actually has a node type of Integer, Float,
String, or BitString.  As a consequence, the struct name and the node
type don't match most of the time, and so it has to be treated
specially a lot.  There doesn't seem to be any value in the special
construct.  There is very little code that wants to accept all Value
variants but nothing else (and even if it did, this doesn't provide
any convenient way to check it), and most code wants either just one
particular node type (usually String), or it accepts a broader set of
node types besides just Value.

This change removes the Value struct and node type and replaces them
by separate Integer, Float, String, and BitString node types that are
proper node types and structs of their own and behave mostly like
normal node types.

Also, this removes the T_Null node tag, which was previously also a
possible variant of Value but wasn't actually used outside of the
Value contained in A_Const.  Replace that by an isnull field in
A_Const.

Reviewed-by: Dagfinn Ilmari Mannsåker <ilmari@ilmari.org>
Reviewed-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/5ba6bc5b-3f95-04f2-2419-f8ddb4c046fb@enterprisedb.com
2021-09-09 08:36:53 +02:00
Fujii Masao 98dbef90eb postgres_fdw: Revert unstable tests for postgres_fdw.application_name.
Commit 449ab63505 added tests that check that the postgres_fdw.application_name
GUC works as expected. But they were unstable and caused some buildfarm
members to report failures. This commit reverts those unstable tests.

Reported-by: Tom Lane as per buildfarm
Discussion: https://postgr.es/m/3220909.1631054766@sss.pgh.pa.us
2021-09-08 16:28:43 +09:00
Michael Paquier fd0625c7a9 Clean up some code using "(expr) ? true : false"
All the code paths simplified here were already using a boolean or used
an expression that led to zero or one, making the extra bits
unnecessary.

Author: Justin Pryzby
Reviewed-by: Tom Lane, Michael Paquier, Peter Smith
Discussion: https://postgr.es/m/20210428182936.GE27406@telsasoft.com
2021-09-08 09:44:04 +09:00
Fujii Masao 449ab63505 postgres_fdw: Allow application_name of remote connection to be set via GUC.
This commit adds the postgres_fdw.application_name GUC, which specifies
a value for the application_name configuration parameter used
when postgres_fdw establishes a connection to a foreign server.
This GUC setting always overrides the application_name option of
the foreign server object. This GUC is useful when we want to
specify our own application_name per remote connection.
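
For instance (the table name and application_name value here are made up),
a session could override the per-server setting like this:

    SET postgres_fdw.application_name TO 'nightly_report';
    SELECT count(*) FROM remote_orders;  -- the remote session now reports 'nightly_report'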

Previously, application_name of a remote connection could be set
basically only via the options of a server object, which meant that
every session connecting to the same foreign server basically had to
use the same application_name. Also, to change the setting we had to
execute an "ALTER SERVER ... OPTIONS ..." command, which was
inconvenient.

Author: Hayato Kuroda
Reviewed-by: Masahiro Ikeda, Fujii Masao
Discussion: https://postgr.es/m/TYCPR01MB5870D1E8B949DAF6D3B84E02F5F29@TYCPR01MB5870.jpnprd01.prod.outlook.com
2021-09-07 12:27:30 +09:00
Tom Lane 2dc53fe2a7 Refactor postgresImportForeignSchema to avoid code duplication.
Avoid repeating fragments of the query we're building, along the
same lines as recent cleanup in pg_dump.  I got annoyed about this
because aa769f80e broke my pending patch to change postgres_fdw's
collation handling, due to each of us having incompletely done
this same refactoring.  Let's finish that job in hopes of having
a more stable base.
2021-09-01 16:21:13 -04:00
Daniel Gustafsson bb466c6b09 Prohibit map and grep in void context
map and grep are not intended to be used as mutators; iterating
with side-effects should be done with for or foreach loops.  This
fixes the one occurrence of the pattern, and bumps the perlcritic
policy to severity 5 for the map and grep policies.

Author: Dagfinn Ilmari Mannsåker <ilmari@ilmari.org>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Andrew Dunstan <andrew@dunslane.net>
Reviewed-by: Julien Rouhaud <rjuju123@gmail.com>
Discussion: https://postgr.es/m/87fsvzhhc4.fsf@wibble.ilmari.org
2021-08-31 11:07:04 +02:00
Peter Geoghegan 191dce109b contrib/amcheck: Add heapam CHECK_FOR_INTERRUPTS().
Add a CHECK_FOR_INTERRUPTS() call to make heap relation verification
responsive to query cancellations.

Author: Mark Dilger <mark.dilger@enterprisedb.com>
Discussion: https://postgr.es/m/CAH2-Wzk-9RtQgb2QiuLv8j2O0j9tSFKPmmch5nWSZhguUxvbrw%40mail.gmail.com
Backpatch: 14-, where amcheck heap verification was introduced.
2021-08-26 18:42:20 -07:00
Amit Kapila 29b5905470 Fix toast rewrites in logical decoding.
Commit 325f2ec555 introduced pg_class.relwrite to skip operations on
tables created as part of a heap rewrite during DDL. It links such
transient heaps to the original relation OID via this new field in
pg_class but forgot to do anything about toast tables. So, logical
decoding was not able to skip operations on internally created toast
tables. This led to an error when we tried to decode the WAL for the
next operation, which appeared to contain toast data when it actually
had none.

To fix this, we set pg_class.relwrite for internally created toast tables
as well, which allows skipping operations on them during logical decoding.

Author: Bertrand Drouvot
Reviewed-by: David Zhang, Amit Kapila
Backpatch-through: 11, where it was introduced
Discussion: https://postgr.es/m/b5146fb1-ad9e-7d6e-f980-98ed68744a7c@amazon.com
2021-08-25 09:53:07 +05:30
Michael Paquier 32cf7f7acc Improve performance of float overflow checks in btree_gist
The current code could make unnecessary calls to isinf() (always two
for the argument values, while one could be sufficient in some cases).
zero_is_valid was never used, but the result value was still checked
against 0 in the first position of the check.

This is similar to 607f8ce.  btree_gist had simply copy-pasted the code
doing those checks from the backend float4/8 code, in the form of the
macro CHECKFLOATVAL(), to do the work.

Author: Haiying Tang
Discussion: https://postgr.es/m/OS0PR01MB611358E3A7BC3C2F874AC36BFBF39@OS0PR01MB6113.jpnprd01.prod.outlook.com
2021-08-19 10:42:44 +09:00
Tom Lane a6bd28beb0 Fix failure of btree_gin indexscans with "char" type and </<= operators.
As a result of confusion about whether the "char" type is signed or
unsigned, scans for index searches like "col < 'x'" or "col <= 'x'"
would start at the middle of the index not the left end, thus missing
many or all of the entries they should find.  Fortunately, this
is not a symptom of index corruption.  It's only the search logic
that is broken, and we can fix it without unpleasant side-effects.
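
As an illustrative sketch of the affected pattern (table and column names
are made up):

    CREATE EXTENSION btree_gin;
    CREATE TABLE t (c "char");
    CREATE INDEX ON t USING gin (c);
    SELECT * FROM t WHERE c < 'x';  -- the index search previously started mid-index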

Per report from Jason Kim.  This has been wrong since btree_gin's
beginning, so back-patch to all supported branches.

Discussion: https://postgr.es/m/20210810001649.htnltbh7c63re42p@jasonk.me
2021-08-10 18:10:29 -04:00
Daniel Gustafsson 72bbff4cd6 Add alternative output for OpenSSL 3 without legacy loaded
OpenSSL 3 introduced the concept of providers to support modularization,
and moved the outdated ciphers to the new legacy provider. In case it's
not loaded in the user's openssl.cnf file there will be a lot of
regression test failures, so add alternative outputs covering those.

Also document the need to load the legacy provider in order to use older
ciphers with OpenSSL-enabled pgcrypto.

This will be backpatched to all supported versions once there is
sufficient testing of OpenSSL 3 in the buildfarm.

Reviewed-by: Michael Paquier
Discussion: https://postgr.es/m/FEF81714-D479-4512-839B-C769D2605F8A@yesql.se
2021-08-10 15:08:46 +02:00
Daniel Gustafsson 318df80235 Disable OpenSSL EVP digest padding in pgcrypto
The PX layer in pgcrypto is handling digest padding on its own uniformly
for all backend implementations. Starting with OpenSSL 3.0.0, DecryptUpdate
doesn't flush the last block when padding is enabled, so explicitly
disable it as we don't use it.

This will be backpatched to all supported versions once there is
sufficient testing of OpenSSL 3 in the buildfarm.

Reviewed-by: Peter Eisentraut, Michael Paquier
Discussion: https://postgr.es/m/FEF81714-D479-4512-839B-C769D2605F8A@yesql.se
2021-08-10 15:01:52 +02:00
Michael Paquier 1e3445237b Fix regression test output of sepgsql
The difference is caused by 7b56584, for the tests involving a table
rewrite.

Per buildfarm member rhinoceros.

Discussion: https://postgr.es/m/YRHxXcyFjPuPTZui@paquier.xyz
2021-08-10 13:14:37 +09:00
Tom Lane 0e6aa8747d Avoid determining regexp subexpression matches, when possible.
Identifying the precise match locations for parenthesized subexpressions
is a fairly expensive task given the way our regexp engine works, both
at regexp compile time (where we must create an optimized NFA for each
parenthesized subexpression) and at runtime (where determining exact
match locations requires laborious search).

Up to now we've made little attempt to optimize this situation.  This
patch identifies cases where we know at compile time that we won't
need to know subexpression match locations, and teaches the regexp
compiler to not bother creating per-subexpression regexps for
parenthesis pairs that are not referenced by backrefs elsewhere in
the regexp.  (To preserve semantics, we obviously still have to
pin down the match locations of backref references.)  Users could
have obtained the same results before this by being careful to
write "non capturing" parentheses wherever possible, but few people
bother with that.
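
For example, both of these are now treated alike (illustrative queries):

    SELECT 'foobarfoo' ~ '(foo|bar)+';    -- group locations no longer tracked needlessly
    SELECT 'foobarfoo' ~ '(?:foo|bar)+';  -- previously needed to get the same effect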

Discussion: https://postgr.es/m/2219936.1628115334@sss.pgh.pa.us
2021-08-09 11:26:34 -04:00
Etsuro Fujita aa769f80ed postgres_fdw: Fix issues with generated columns in foreign tables.
postgres_fdw imported generated columns from the remote tables as plain
columns, and caused failures like "ERROR: cannot insert a non-DEFAULT
value into column "foo"" when inserting into the foreign tables, as it
tried to insert values into the generated columns.  To fix, we do the
following under the assumption that generated columns in a postgres_fdw
foreign table are defined so that they represent generated columns in
the underlying remote table:

* Send DEFAULT for the generated columns to the foreign server on insert
  or update, not generated column values computed on the local server.
* Add to postgresImportForeignSchema() an option "import_generated" to
  include column generated expressions in the definitions of foreign
  tables imported from a foreign server.  The option is true by default.

The assumption seems reasonable, because that would make a query of the
postgres_fdw foreign table return values for the generated columns that
are consistent with the generated expression.
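
An illustrative use of the new option (server and schema names are
hypothetical):

    IMPORT FOREIGN SCHEMA public FROM SERVER remote_srv INTO local_schema
      OPTIONS (import_generated 'false');  -- skip the generated expressions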

While here, fix another issue in postgresImportForeignSchema(): it tried
to include column generated expressions as column default expressions in
the foreign table definitions when the import_default option was enabled.

Per bug #16631 from Daniel Cherniy.  Back-patch to v12 where generated
columns were added.

Discussion: https://postgr.es/m/16631-e929fe9db0ffc7cf%40postgresql.org
2021-08-05 20:00:00 +09:00
Etsuro Fujita a8ed9bd59d Fix oversight in commit 1ec7fca859.
I failed to account for the possibility that when
ExecAppendAsyncEventWait() notifies multiple async-capable nodes using
postgres_fdw, a preceding node might invoke process_pending_request() to
process a pending asynchronous request made by a succeeding node.  In
that case the succeeding node should produce a tuple to return to the
parent Append node from tuples fetched by process_pending_request() when
notified.  Repair.

Per buildfarm via Michael Paquier.  Back-patch to v14, like the previous
commit.

Thanks to Tom Lane for testing.

Discussion: https://postgr.es/m/YQP0UPT8KmPiHTMs%40paquier.xyz
2021-08-02 12:45:00 +09:00
Tom Lane 5d44fff01e In postgres_fdw, allow CASE expressions to be pushed to the remote server.
This is simple enough except for the need to check whether CaseTestExpr
nodes have a collation that is not derived from a remote Var.  For that,
examine the CASE's "arg" expression and then pass that info down into the
recursive examination of the WHEN expressions.
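
A sketch of the kind of expression that can now be shipped (the foreign
table ft1 is hypothetical):

    EXPLAIN (VERBOSE, COSTS OFF)
    SELECT CASE WHEN c1 > 0 THEN 'positive' ELSE 'non-positive' END FROM ft1;
    -- the CASE now shows up in the ForeignScan's Remote SQL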

Alexander Pyhalov, reviewed by Gilles Darold and myself

Discussion: https://postgr.es/m/fda09032e90d85d9b726a41e03f9097f@postgrespro.ru
2021-07-30 13:39:48 -04:00
Etsuro Fujita 1ec7fca859 postgres_fdw: Fix handling of pending asynchronous requests.
A pending asynchronous request is handled by process_pending_request(),
which previously not only processed an in-progress remote query but
performed ExecForeignScan() to produce a tuple to return to the local
server asynchronously from the result of the remote query.  But that led
to a server crash when executing a query or led to an "InstrStartNode
called twice in a row" or "InstrEndLoop called on running node" failure
when doing EXPLAIN ANALYZE of it, in cases where the plan tree for it
contained multiple async-capable nodes accessing the same
initplan/subplan that contained multiple async-capable nodes scanning
the same foreign tables as for the parent async-capable nodes, as
reported by Andrey Lepikhov.  The reason is that the second step in
process_pending_request() invoked when executing the initplan/subplan
for one of the parent async-capable nodes caused recursive execution of
the initplan/subplan for another of the parent async-capable nodes.

To fix, split process_pending_request() into the two steps and postpone
the second step until ForeignAsyncConfigureWait() is called for each of
the pending asynchronous requests.  Also, in ExecAppendAsyncEventWait()
we assumed that FDWs would register at least one wait event in a
WaitEventSet created there when they were called from
ForeignAsyncConfigureWait() in that function, but allow FDWs to register
zero wait events in the WaitEventSet; modify ExecAppendAsyncEventWait()
to just return in that case.

Oversight in commit 27e1f1456.  Back-patch to v14 where that commit went
in.

Andrey Lepikhov and Etsuro Fujita

Discussion: https://postgr.es/m/fe5eaa19-1704-e4a4-76ee-3b9d37ade399@postgrespro.ru
2021-07-30 17:00:00 +09:00
Andrew Dunstan 201a76183e
Unify PostgresNode's new() and get_new_node() methods
There is only one constructor now for PostgresNode, with the idiomatic
name 'new'. The method is not exported by the class, and must be called
as "PostgresNode->new('name',[args])". All the TAP tests that use
PostgresNode are modified accordingly. Third party scripts will need
adjusting, which is a fairly mechanical process (I just used a sed
script).
2021-07-29 05:58:08 -04:00
David Rowley 245de48455 Adjust MSVC build scripts to parse Makefiles for defines
This adjusts the MSVC build scripts to look at the compile flags mentioned
in the Makefile to look for -D arguments in order to determine which
constants should be defined in Visual Studio builds.

One small anomaly that appeared as a result of this change is that the
Makefile for the ltree contrib module defined LOWER_NODE, but this was
not properly defined in the MSVC build scripts.  This meant that MSVC
builds would differ in case sensitivity in the ltree module when
compared to builds using a make build environment.  To maintain the same
behavior here we remove the -DLOWER_NODE from the Makefile and just always
define it in ltree.h for non-MSVC builds.  We need to maintain the old
behavior here as this affects the on-disk compatibility of GiST indexes
when using the ltree type.

The only other resulting change here is that REFINT_VERBOSE is now defined
for the autoinc, insert_username and moddatetime contrib modules.
Previously on MSVC, this was only defined for the refint module.  This
aligns the behavior to build environments using make as all 4 of these
modules share the same Makefile.

Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/CAApHDvpo6g5csCTjc_0C7DMvgFPomVb0Rh-AcW5afd=Ya=LRuw@mail.gmail.com
2021-07-29 12:01:23 +12:00
Fujii Masao 0e1275fb07 Avoid using ambiguous word "non-negative" in error messages.
The error messages using the word "non-negative" are confusing
because it's ambiguous whether zero is accepted or not.
This commit improves those error messages by replacing the word with
a less ambiguous phrase like "greater than zero" or
"greater than or equal to zero".

Also this commit adds a note about the word "non-negative" to
the error message style guide, to help with writing new error messages.

When postgres_fdw option fetch_size was set to zero, previously
the error message "fetch_size requires a non-negative integer value"
was reported. This error message was outright buggy. Therefore
back-patch to all supported versions where such a buggy error message
could be thrown.
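
For example (assuming a postgres_fdw server named loopback), the case that
produced the buggy message is still rejected, now with clearer wording:

    ALTER SERVER loopback OPTIONS (ADD fetch_size '0');
    -- error: the value must be greater than zero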

Reported-by: Hou Zhijie
Author: Bharath Rupireddy
Reviewed-by: Kyotaro Horiguchi, Fujii Masao
Discussion: https://postgr.es/m/OS0PR01MB5716415335A06B489F1B3A8194569@OS0PR01MB5716.jpnprd01.prod.outlook.com
2021-07-28 01:20:16 +09:00
Tom Lane 48c5c90682 Use the "pg_temp" schema alias in EXPLAIN and related output.
This patch causes EXPLAIN output to refer to objects that are in
the current session's temp schema with the "pg_temp" schema alias
rather than that schema's actual name.  This is useful for our own
testing purposes since it will stabilize EXPLAIN VERBOSE output
for such cases, allowing us to use that in regression tests.
It should be less confusing for end users too.
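
A minimal illustration (the object name is invented):

    CREATE TEMP TABLE scratch (id int);
    EXPLAIN (VERBOSE, COSTS OFF) SELECT id FROM scratch;
    -- the output now says pg_temp.scratch rather than a session-specific
    -- schema name such as pg_temp_3.scratch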

Since ruleutils.c needs to change behavior for this, the change
also leaks into a few other users of ruleutils.c, for example
pg_get_viewdef().  AFAICS that won't cause any problems.
We did find that aggressively trying to change this behavior
across-the-board would cause issues, but as long as "pg_temp"
only appears within generated SQL text, I think it'll be fine.

Along the way, make get_namespace_name_or_temp conform to the
same API as get_namespace_name, ie that it returns a palloc'd
string or NULL.  The current behavior hasn't caused any bugs
since no callers attempt to pfree the result, but if it gets
more widespread usage that could become a problem.

Amul Sul, reviewed and extended by me

Discussion: https://postgr.es/m/CAAJ_b97W=QaGmag9AhWNbmx3uEYsNkXWL+OVW1_E1D3BtgWvtw@mail.gmail.com
2021-07-27 12:03:16 -04:00
Tom Lane 0806d08d46 Harden pg_stat_statements tests against CLOBBER_CACHE_ALWAYS.
Turns out the buildfarm hasn't been testing this, which will soon change.

Julien Rouhaud, per report from me

Discussion: https://postgr.es/m/42557.1627229005@sss.pgh.pa.us
2021-07-25 23:25:15 -04:00
Amit Kapila a8fd13cab0 Add support for prepared transactions to built-in logical replication.
To add support for streaming transactions at prepare time into the
built-in logical replication, we need to do the following things:

* Modify the output plugin (pgoutput) to implement the new two-phase API
callbacks, by leveraging the extended replication protocol.

* Modify the replication apply worker, to properly handle two-phase
transactions by replaying them on prepare.

* Add a new SUBSCRIPTION option "two_phase" to allow users to enable
two-phase transactions. We enable the two_phase once the initial data sync
is over.

We however must explicitly disable replication of two-phase transactions
during replication slot creation, even if the plugin supports it. We
don't need to replicate the changes accumulated during this phase,
and moreover, we don't have a replication connection open so we don't know
where to send the data anyway.

The streaming option is not allowed with this new two_phase option. This
can be done as a separate patch.

We don't allow toggling the two_phase option of a subscription because it
can lead to an inconsistent replica. For the same reason, we don't allow
refreshing the publication once two_phase is enabled for a subscription,
unless the copy_data option is false.
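
An illustrative subscription definition using the new option (connection
string and names invented):

    CREATE SUBSCRIPTION mysub
      CONNECTION 'host=publisher dbname=postgres'
      PUBLICATION mypub
      WITH (two_phase = true);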

Author: Peter Smith, Ajin Cherian and Amit Kapila based on previous work by Nikhil Sontakke and Stas Kelvich
Reviewed-by: Amit Kapila, Sawada Masahiko, Vignesh C, Dilip Kumar, Takamichi Osumi, Greg Nancarrow
Tested-By: Haiying Tang
Discussion: https://postgr.es/m/02DA5F5E-CECE-4D9C-8B4B-418077E2C010@postgrespro.ru
Discussion: https://postgr.es/m/CAA4eK1+opiV4aFTmWWUF9h_32=HfPOW9vZASHarT0UA5oBrtGw@mail.gmail.com
2021-07-14 07:33:50 +05:30
David Rowley 83f4fcc655 Change the name of the Result Cache node to Memoize
"Result Cache" was never a great name for this node, but nobody managed
to come up with another name that anyone liked enough.  That was until
David Johnston mentioned "Node Memoization", which Tom Lane revised to
just "Memoize".  People seem to like "Memoize", so let's do the rename.

Reviewed-by: Justin Pryzby
Discussion: https://postgr.es/m/20210708165145.GG1176@momjian.us
Backpatch-through: 14, where Result Cache was introduced
2021-07-14 12:43:58 +12:00
Tom Lane d68a003912 Rename debug_invalidate_system_caches_always to debug_discard_caches.
The name introduced by commit 4656e3d66 was agreed to be unreasonably
long.  To match this change, rename initdb's recently-added
--clobber-cache option to --discard-caches.

Discussion: https://postgr.es/m/1374320.1625430433@sss.pgh.pa.us
2021-07-13 15:01:01 -04:00
Tom Lane f10f0ae420 Replace RelationOpenSmgr() with RelationGetSmgr().
The idea behind this patch is to design out bugs like the one fixed
by commit 9d523119f.  Previously, once one did RelationOpenSmgr(rel),
it was considered okay to access rel->rd_smgr directly for some
not-very-clear interval.  But since that pointer will be cleared by
relcache flushes, we had bugs arising from overreliance on a previous
RelationOpenSmgr call still being effective.

Now, very little code except that in rel.h and relcache.c should ever
touch the rd_smgr field directly.  The normal coding rule is to use
RelationGetSmgr(rel) and not expect the result to be valid for longer
than one smgr function call.  There are a couple of places where using
the function every single time seemed like overkill, but they are now
annotated with large warning comments.

Amul Sul, after an idea of mine.

Discussion: https://postgr.es/m/CANiYTQsU7yMFpQYnv=BrcRVqK_3U3mtAzAsJCaqtzsDHfsUbdQ@mail.gmail.com
2021-07-12 17:01:36 -04:00
Michael Paquier 127404fbe2 pageinspect: Improve page_header() for pages of 32kB
pd_lower, pd_upper, pd_special and the page size have been using
smallint as the return type, which could cause those fields to return
negative values in certain cases for builds configured with a page size
of 32kB.

Bump pageinspect to 1.10.  page_header() is able to handle the correct
return type of those fields at runtime when using an older version of
the extension, and some tests are added to cover that.
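
For reference, an illustrative call (the table name is invented):

    CREATE EXTENSION pageinspect;
    SELECT lower, upper, special, pagesize
      FROM page_header(get_raw_page('my_table', 0));
    -- with pageinspect 1.10 these fields use a wider integer type, so they
    -- no longer wrap to negative values on 32kB pages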

Author: Quan Zongliang
Reviewed-by: Michael Paquier, Bharath Rupireddy
Discussion: https://postgr.es/m/8b8ec36e-61fe-14f9-005d-07bc85aa4eed@yeah.net
2021-07-12 11:05:27 +09:00
Peter Eisentraut 2ed532ee8c Improve error messages about mismatching relkind
Most error messages about a relkind that was not supported or
appropriate for the command were of the pattern
    "relation \"%s\" is not a table, foreign table, or materialized view"

This style can become verbose and tedious to maintain.  Moreover, it's
not very helpful: If I'm trying to create a comment on a TOAST table,
which is not supported, then the information that I could have created
a comment on a materialized view is pointless.

Instead, make the primary error message shorter and say more
directly that what was attempted is not possible.  Then, in the detail
message, explain that the operation is not supported for the relkind
of the object.  To simplify that, add a new function
errdetail_relkind_not_supported() that does this.

In passing, make use of RELKIND_HAS_STORAGE() where appropriate,
instead of listing out the relkinds individually.

Reviewed-by: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Alvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://www.postgresql.org/message-id/flat/dc35a398-37d0-75ce-07ea-1dd71d98f8ec@2ndquadrant.com
2021-07-08 09:44:51 +02:00
Tom Lane b9734c13f1 Fix crash in postgres_fdw for provably-empty remote UPDATE/DELETE.
In 86dc90056, I'd written find_modifytable_subplan with the assumption
that if the immediate child of a ModifyTable is a Result, it must be
a projecting Result with a subplan.  However, if the UPDATE or DELETE
has a provably-constant-false WHERE clause, that's not so: we'll
generate a dummy subplan with a childless Result.  Add the missing
null-check so we don't crash on such cases.

Per report from Alexander Pyhalov.

Discussion: https://postgr.es/m/b9a6f53549456b2f3e2fd150dcd79d72@postgrespro.ru
2021-07-07 15:21:25 -04:00
Fujii Masao d854720df6 postgres_fdw: Tighten up allowed values for batch_size, fetch_size options.
Previously, values such as '100$%$#$#' and '9,223,372,' were accepted and
treated as valid integers for the postgres_fdw options batch_size and
fetch_size, whereas this is not the case with the fdw_startup_cost and
fdw_tuple_cost options, for which an error is thrown. This was because
endptr was not used while converting strings to integers using strtol.

This commit changes the logic so that it uses the parse_int function
instead of strtol, as it serves the purpose by returning false when it
is unable to convert the string to an integer. Note that
this function also rounds off values such as '100.456' to 100 and
'100.567' or '100.678' to 101.
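
Illustrative behavior under the new validation (a server named loopback is
assumed):

    ALTER SERVER loopback OPTIONS (ADD batch_size '100$%$#$#');  -- now rejected
    ALTER SERVER loopback OPTIONS (ADD batch_size '100.456');    -- accepted as 100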

While on this, use parse_real for fdw_startup_cost and fdw_tuple_cost options.

Since parse_int and parse_real are already used for reloptions and GUCs,
it is more appropriate to use them in postgres_fdw rather than using
strtol and strtod directly.

Back-patch to v14.

Author: Bharath Rupireddy
Reviewed-by: Ashutosh Bapat, Tom Lane, Kyotaro Horiguchi, Fujii Masao
Discussion: https://postgr.es/m/CALj2ACVMO6wY5Pc4oe1OCgUOAtdjHuFsBDw8R5uoYR86eWFQDA@mail.gmail.com
2021-07-07 11:13:40 +09:00
Tom Lane c7b7311f61 Avoid doing catalog lookups in postgres_fdw's conversion_error_callback.
As in 50371df26, this is a bad idea since the callback can't really
know what error is being thrown and thus whether or not it is safe
to attempt catalog accesses.  Rather than pushing said accesses into
the mainline code where they'd usually be a waste of cycles, we can
look at the query's rangetable instead.

This change does mean that we'll be printing query aliases (if any
were used) rather than the table or column's true name.  But that
doesn't seem like a bad thing: it's certainly a more useful definition
in self-join cases, for instance.  In any case, it seems unlikely that
any applications would be depending on this detail, so it seems safe
to change.

Patch by me.  Original complaint by Andres Freund; Bharath Rupireddy
noted the connection to conversion_error_callback.

Discussion: https://postgr.es/m/20210106020229.ne5xnuu6wlondjpe@alap3.anarazel.de
2021-07-06 12:36:12 -04:00
Tom Lane 8021770909 Further stabilize postgres_fdw test.
The queries involving ft1_nopw don't stably return the same row
anymore.  I surmise that an autovacuum hitting "S 1"."T 1"
right after the updates introduced by f61db909d/5843659d0 freed
some space, changing where subsequent insertions get stored.
It's only by good luck that these results were stable before,
though, since a LIMIT without ORDER BY isn't well defined,
and it's not like we've ever treated that table as append-only
in this test script.

Since we only really care whether these commands succeed or not,
just replace "SELECT *" with "SELECT 1".

Report: https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2021-06-23%2019%3A52%3A08
2021-06-24 15:02:13 -04:00
Tom Lane a443c1b2d6 Allow non-quoted identifiers as isolation test session/step names.
For no obvious reason, isolationtester has always insisted that
session and step names be written with double quotes.  This is
fairly tedious and does little for test readability, especially
since the names that people actually choose almost always look
like normal identifiers.  Hence, let's tweak the lexer to allow
SQL-like identifiers, not only double-quoted strings.

(They're SQL-like, not exactly SQL, because I didn't add any
case-folding logic.  Also there's no provision for U&"..." names,
not that anyone's likely to care.)

There is one incompatibility introduced by this change: if you write
"foo""bar" with no space, that used to be taken as two identifiers,
but now it's just one identifier with an embedded quote mark.

I converted all the src/test/isolation/ specfiles to remove
unnecessary double quotes, but stopped there because my
eyes were glazing over already.

Like 741d7f104, back-patch to all supported branches, so that this
isn't a stumbling block for back-patching isolation test changes.

Discussion: https://postgr.es/m/759113.1623861959@sss.pgh.pa.us
2021-06-23 18:41:39 -04:00
Tom Lane 4a054069a3 Improve display of query results in isolation tests.
Previously, isolationtester displayed SQL query results using some
ad-hoc code that clearly hadn't had much effort expended on it.
Field values longer than 14 characters weren't separated from
the next field, and usually caused misalignment of the columns
too.  Also there was no visual separation of a query's result
from subsequent isolationtester output.  This made test result
files confusing and hard to read.

To improve matters, let's use libpq's PQprint() function.  Although
that's long since unused by psql, it's still plenty good enough
for the purpose here.

Like 741d7f104, back-patch to all supported branches, so that this
isn't a stumbling block for back-patching isolation test changes.

Discussion: https://postgr.es/m/582362.1623798221@sss.pgh.pa.us
2021-06-23 11:13:00 -04:00
Tom Lane 741d7f1047 Use annotations to reduce instability of isolation-test results.
We've long contended with isolation test results that aren't entirely
stable.  Some test scripts insert long delays to try to force stable
results, which is not terribly desirable; but other erratic failure
modes remain, causing unrepeatable buildfarm failures.  I've spent a
fair amount of time trying to solve this by improving the server-side
support code, without much success: that way is fundamentally unable
to cope with diffs that stem from chance ordering of arrival of
messages from different server processes.

We can improve matters on the client side, however, by annotating
the test scripts themselves to show the desired reporting order
of events that might occur in different orders.  This patch adds
three types of annotations to deal with (a) test steps that might or
might not complete their waits before the isolationtester can see them
waiting; (b) test steps in different sessions that can legitimately
complete in either order; and (c) NOTIFY messages that might arrive
before or after the completion of a step in another session.  We might
need more annotation types later, but this seems to be enough to deal
with the instabilities we've seen in the buildfarm.  It also lets us
get rid of all the long delays that were previously used, cutting more
than a minute off the runtime of the isolation tests.

Back-patch to all supported branches, because the buildfarm
instabilities affect all the branches, and because it seems desirable
to keep isolationtester's capabilities the same across all branches
to simplify possible future back-patching of tests.

Discussion: https://postgr.es/m/327948.1623725828@sss.pgh.pa.us
2021-06-22 21:43:12 -04:00
Peter Eisentraut 97b7134186 amcheck: Fix code comments
Code comments were claiming that verify_heapam() was checking
privileges on the relation it was operating on, but it didn't actually
do that.  Perhaps earlier versions of the patch did that, but now the
access is regulated by privileges on the function.  Remove the wrong
comments.
2021-06-21 11:17:49 +02:00
Tom Lane 5843659d09 Stabilize test case added by commit f61db909d.
Buildfarm members ayu and tern have sometimes shown a different
plan than expected for this query.  I'd been unable to reproduce
that before today, but I finally realized what is happening.
If there is a concurrent open transaction (probably an autovacuum
run in the buildfarm, but this can also be arranged manually),
then the index entries for the rows removed by the DELETE a few
lines up are not killed promptly, causing a change in the planner's
estimate of the extremal value of ft2.c1, which moves the rowcount
estimate for "c1 > 1100" by enough to change the join plan from
nestloop to hash.

To fix, change the query condition to "c1 > 1000", causing the
hash plan to be preferred whether or not a concurrent open
transaction exists.  Since this UPDATE is tailored to be a no-op,
nothing else changes.

Report: https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=ayu&dt=2021-06-09%2022%3A45%3A48
Report: https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=ayu&dt=2021-06-13%2022%3A38%3A18
Report: https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=tern&dt=2021-06-20%2004%3A55%3A36
2021-06-20 11:48:44 -04:00
Tom Lane 7c337b6b52 Centralize the logic for protective copying of utility statements.
In the "simple Query" code path, it's fine for parse analysis or
execution of a utility statement to scribble on the statement's node
tree, since that'll just be thrown away afterwards.  However it's
not fine if the node tree is in the plan cache, as then it'd be
corrupted for subsequent executions.  Up to now we've dealt with
that by having individual utility-statement functions apply
copyObject() if they were going to modify the tree.  But that's
prone to errors of omission.  Bug #17053 from Charles Samborski
shows that CREATE/ALTER DOMAIN didn't get this memo, and can
crash if executed repeatedly from plan cache.

In the back branches, we'll just apply a narrow band-aid for that,
but in HEAD it seems prudent to have a more principled fix that
will close off the possibility of other similar bugs in future.
Hence, let's hoist the responsibility for doing copyObject up into
ProcessUtility from its children, thus ensuring that it happens for
all utility statement types.

Also, modify ProcessUtility's API so that its callers can tell it
whether a copy step is necessary.  It turns out that in all cases,
the immediate caller knows whether the node tree is transient, so
this doesn't involve a huge amount of code thrashing.  In this way,
while we lose a little bit in the execute-from-cache code path due
to sometimes copying node trees that wouldn't be mutated anyway,
we gain something in the simple-Query code path by not copying
throwaway node trees.  Statements that are complex enough to be
expensive to copy are almost certainly ones that would have to be
copied anyway, so the loss in the cache code path shouldn't be much.

(Note that this whole problem applies only to utility statements.
Optimizable statements don't have the issue because we long ago made
the executor treat Plan trees as read-only.  Perhaps someday we will
make utility statement execution act likewise, but I'm not holding
my breath.)

Discussion: https://postgr.es/m/931771.1623893989@sss.pgh.pa.us
Discussion: https://postgr.es/m/17053-3ca3f501bbc212b4@postgresql.org
2021-06-18 11:22:58 -04:00
Tomas Vondra 99cea49d65 Fix copying data into slots with FDW batching
Commit b676ac443b optimized handling of tuple slots with bulk inserts
into foreign tables, so that the slots are initialized only once and
reused for all batches. The data was however copied into the slots only
after the initialization, inserting duplicate values when the slot gets
reused. Fixed by moving the ExecCopySlot outside the init branch.

The existing postgres_fdw tests failed to catch this due to inserting
data into foreign tables without unique indexes, and then checking only
the number of inserted rows. This adds a new test with both a unique
index and a check of inserted values.

Reported-by: Alexander Pyhalov
Discussion: https://postgr.es/m/7a8cf8d56b3d18e5c0bccd6cd42d04ac%40postgrespro.ru
2021-06-16 23:49:25 +02:00
Tomas Vondra cb92703384 Adjust batch size in postgres_fdw to not use too many parameters
The FE/BE protocol identifies parameters with an Int16 index, which
limits the maximum number of parameters per query to 65535. With
batching added to postges_fdw this limit is much easier to hit, as
the whole batch is essentially a single query, making this error much
easier to hit.

The failures are a bit unpredictable, because they also depend on the
number of columns in the query. So instead of just failing, this patch
tweaks the batch_size to not exceed the maximum number of parameters.

Reported-by: Hou Zhijie <houzj.fnst@cn.fujitsu.com>
Reviewed-by: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>
Discussion: https://postgr.es/m/OS0PR01MB571603973C0AC2874AD6BF2594299%40OS0PR01MB5716.jpnprd01.prod.outlook.com
2021-06-08 20:28:31 +02:00
Tom Lane d16ebfbff7 Stabilize contrib/seg regression test.
If autovacuum comes along just after we fill table test_seg with
some data, it will update the stats to the point where we prefer
a plain indexscan over a bitmap scan, breaking the expected
output (as well as the point of the test case).  To fix, just
force a bitmap scan to be chosen here.

This has evidently been wrong since commit de1d042f5.  It's not
clear why we just recently saw any buildfarm failures due to it;
but prairiedog has failed twice on this test in the past week.
Hence, backpatch to v11 where this test case came in.
2021-06-07 14:52:42 -04:00
Etsuro Fujita f3baaf28a6 Fix rescanning of async-aware Append nodes.
In cases where run-time pruning isn't required, the synchronous and
asynchronous subplans for an async-aware Append node determined using
classify_matching_subplans() should be re-used when rescanning the node,
but the previous code re-determined them using that function repeatedly
each time when rescanning the node, leading to incorrect results in a
normal build and an Assert failure in an Assert-enabled build as that
function doesn't assume that it's called repeatedly in such cases.  Fix
the code as mentioned above.

My oversight in commit 27e1f1456.

While at it, initialize async-related pointers/variables to NULL/zero
explicitly in ExecInitAppend() and ExecReScanAppend(), just to be sure.
(The variables would have been set to zero before we get to the latter
function, but let's do so.)

Reviewed-by: Kyotaro Horiguchi
Discussion: https://postgr.es/m/CAPmGK16Q4B2_KY%2BJH7rb7wQbw54AUprp7TMekGTd2T1B62yysQ%40mail.gmail.com
2021-06-07 12:45:00 +09:00
Tom Lane f61db909df Fix postgres_fdw failure with whole-row Vars of type RECORD.
Commit 86dc90056 expects that FDWs can cope with whole-row Vars for
their tables, even if the Vars are marked with vartype RECORDOID.
Previously, whole-row Vars generated by the planner had vartype equal
to the relevant table's rowtype OID.  (The point behind this change is
to enable sharing of resjunk columns across inheritance child tables.)

It turns out that postgres_fdw fails to cope with this, though through
bad fortune none of its test cases exposed that.  Things mostly work,
but when we try to read back a value of such a Var, the expected
rowtype is not available to record_in().  Fortunately, it's not
difficult to hack up the tupdesc that controls this process to
substitute the foreign table's rowtype for RECORDOID.  Thus we can
solve the runtime problem while still sharing the resjunk column
with other tables.

Per report from Alexander Pyhalov.

Discussion: https://postgr.es/m/7817fb9ebd6661cdf9b67dec6e129a78@postgrespro.ru
2021-06-04 20:07:08 -04:00
David Rowley 7fc26d11e3 Adjust locations which have an incorrect copyright year
A few patches committed after ca3b37487 mistakenly forgot to make the
copyright year 2021.  Fix these.

Discussion: https://postgr.es/m/CAApHDvqyLmd9P2oBQYJ=DbrV8QwyPRdmXtCTFYPE08h+ip0UJw@mail.gmail.com
2021-06-04 12:19:50 +12:00
Tom Lane 889592344c Fix planner's row-mark code for inheritance from a foreign table.
Commit 428b260f8 broke planning of cases where row marks are needed
(SELECT FOR UPDATE, etc) and one of the query's tables is a foreign
table that has regular table(s) as inheritance children.  We got the
reverse case right, but apparently were thinking that foreign tables
couldn't be inheritance parents.  Not so; so we need to be able to
add a CTID junk column while adding a new child, not only a wholerow
junk column.

Back-patch to v12 where the faulty code came in.

Amit Langote

Discussion: https://postgr.es/m/CA+HiwqEmo3FV1LAQ4TVyS2h1WM=kMkZUmbNuZSCnfHvMcUcPeA@mail.gmail.com
2021-06-02 14:38:14 -04:00
Amit Kapila 6f4bdf8152 Fix assertion during streaming of multi-insert toast changes.
While decoding the multi-insert WAL we can't clean the toast until we get
the last insert of that WAL record. Now if we stream the changes before we
get the last change, the memory for toast chunks won't be released and we
expect the txn to have streamed all changes after streaming.  This
restriction is mainly to ensure the correctness of streamed transactions
and it doesn't seem worth uplifting such a restriction just to allow this
case because anyway we will stream the transaction once such an insert is
complete.

Previously we were using two different flags (one for toast tuples and
another for speculative inserts) to indicate partial changes. Now instead
we replaced both of them with a single flag to indicate partial changes.

Reported-by: Pavan Deolasee
Author: Dilip Kumar
Reviewed-by: Pavan Deolasee, Amit Kapila
Discussion: https://postgr.es/m/CABOikdN-_858zojYN-2tNcHiVTw-nhxPwoQS4quExeweQfG1Ug@mail.gmail.com
2021-05-27 07:59:43 +05:30
Alvaro Herrera cafde58b33
Allow compute_query_id to be set to 'auto' and make it default
Allowing only on/off meant that either all existing configuration
guides would become obsolete if we disabled it by default, or that we
would have to accept a performance loss in the default config if we
enabled it by default.  By allowing 'auto' as a middle ground, the
performance cost is only paid by those who enable pg_stat_statements and
similar modules.
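
With the new default, a minimal check looks like this (illustrative):

    SHOW compute_query_id;   -- auto
    -- query IDs are computed only once a module such as pg_stat_statements
    -- asks for them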

I only edited the release notes to comment-out a paragraph that is now
factually wrong; further edits are probably needed to describe the
related change in more detail.

Author: Julien Rouhaud <rjuju123@gmail.com>
Reviewed-by: Justin Pryzby <pryzby@telsasoft.com>
Discussion: https://postgr.es/m/20210513002623.eugftm4nk2lvvks3@nol
2021-05-15 14:13:09 -04:00
Etsuro Fujita a784859f44 Prevent asynchronous execution of direct foreign-table modifications.
Commits 27e1f1456 and 86dc90056, which were independently discussed,
cause a crash when executing an inherited foreign UPDATE/DELETE query
with asynchronous execution enabled, where children of an Append node
that is the direct/indirect child of the ModifyTable node are rewritten
so as to modify foreign tables directly by postgresPlanDirectModify();
as in that case the direct modifications are executed asynchronously,
which is not currently supported by asynchronous execution.  Fix by
disabling asynchronous execution of the direct modifications in that
function.

Author: Etsuro Fujita
Reviewed-by: Amit Langote
Discussion: https://postgr.es/m/CAPmGK158e9sJOfuWxfn%2B0ynrspXQU3JhNjSCbaoeSzMvnga%2Bbw%40mail.gmail.com
2021-05-13 20:00:00 +09:00
Amit Kapila fc69509131 Fix tests for replication slots stats.
Some of the tests were not considering that the slot's spill stats could be
received by the stats collector after we have reset the stats. Remove
those tests and don't check total bytes decoded and sent to output plugin
in the spilled stats test as we can send the spilled stats to the stats
collector before actually sending the changes to output plugin.

Reported-by: Tom Lane as per buildfarm
Author: Vignesh C, Sawada Masahiko
Reviewed-by: Amit Kapila
Discussion: https://postgr.es/m/20210319185247.ldebgpdaxsowiflw@alap3.anarazel.de
2021-05-13 10:23:27 +05:30
Tom Lane def5b065ff Initial pgindent and pgperltidy run for v14.
Also "make reformat-dat-files".

The only change worthy of note is that pgindent messed up the formatting
of launcher.c's struct LogicalRepWorkerId, which led me to notice that
that struct wasn't used at all anymore, so I just took it out.
2021-05-12 13:14:10 -04:00
Etsuro Fujita a363bc6da9 Fix EXPLAIN ANALYZE for async-capable nodes.
EXPLAIN ANALYZE for an async-capable ForeignScan node associated with
postgres_fdw is done just by using instrumentation for ExecProcNode()
called from the node's callbacks, causing the following problems:

1) If the remote table to scan is empty, the node is incorrectly
   considered as "never executed" by the command even if the node is
   executed, as ExecProcNode() isn't called from the node's callbacks at
   all in that case.
2) The command fails to collect timings for things other than
   ExecProcNode() done in the node, such as creating a cursor for the
   node's remote query.

To fix these problems, add instrumentation for async-capable nodes, and
modify postgres_fdw accordingly.

My oversight in commit 27e1f1456.

While at it, update a comment for the AsyncRequest struct in execnodes.h
and the documentation for the ForeignAsyncRequest API in fdwhandler.sgml
to match the code in ExecAsyncAppendResponse() in nodeAppend.c, and fix
typos in comments in nodeAppend.c.

Per report from Andrey Lepikhov, though I didn't use his patch.

Reviewed-by: Andrey Lepikhov
Discussion: https://postgr.es/m/2eb662bb-105d-fc20-7412-2f027cc3ca72%40postgrespro.ru
2021-05-12 14:00:00 +09:00
Tomas Vondra c6a01d9249 Copy the INSERT query in postgres_fdw
When executing the INSERT with batching, we may need to rebuild the
query when the batch size changes, in which case we pfree the current
string. We must not release the original string, stored in fdw_private,
because that may be needed in EXPLAIN ANALYZE. So make a copy of the SQL,
but only for INSERT queries.

Reported-by: Pavel Stehule
Discussion: https://postgr.es/m/CAFj8pRCL_Rjw-MCR6J7VX9OF7MR6PA5K8qUbrMvprW_e-aHkfQ%40mail.gmail.com
2021-05-07 22:29:43 +02:00
Andrew Dunstan 8b82de0164
Remove extraneous newlines added by perl copyright patch 2021-05-07 11:37:37 -04:00
Andrew Dunstan 8fa6e6919c
Add a copyright notice to perl files lacking one. 2021-05-07 10:56:14 -04:00
Tom Lane 1273a15bf9 Disable cache clobber to avoid breaking postgres_fdw termination test.
Commit 93f414614 improved a pre-existing test case so that it would
show whether or not termination of the "remote" worker process happened.
This soon exposed that, when debug_invalidate_system_caches_always
(nee CLOBBER_CACHE_ALWAYS) is enabled, no such termination occurs.
That's because cache invalidation forces postgres_fdw connections
to be dropped at end of transaction, so that there's no worker to
terminate.  There's a race condition as to whether the worker will
manage to get out of the BackendStatusArray before we look, but at
least on buildfarm member hyrax, it's failed twice in two attempts.

Rather than re-lobotomizing the test, let's fix this by transiently
disabling debug_invalidate_system_caches_always.  (Hooray for that
being just a GUC nowadays, rather than a compile-time option.)
If this proves not to be enough to make the test stable, we can
do the other thing instead.

Discussion: https://postgr.es/m/3854538.1620081771@sss.pgh.pa.us
2021-05-04 13:36:26 -04:00
Bruce Momjian f7a97b6ec3 Update query_id computation
Properly fix:

- the "ONLY" in FROM [ONLY] isn't hashed
- the agglevelsup field in GROUPING isn't hashed
- WITH TIES not being hashed (new in PG 13)
- "DISTINCT" in "GROUP BY [DISTINCT]" isn't hashed (new in PG 14)

Reported-by: Julien Rouhaud

Discussion: https://postgr.es/m/20210425081119.ulyzxqz23ueh3wuj@nol
2021-05-03 14:59:39 -04:00
Robert Haas 50529e5b4e amcheck: Improve some confusing reports about TOAST problems.
Don't phrase reports in terms of the number of tuples thus-far
returned by the index scan, but rather in terms of the chunk_seq
values found inside the tuples.

Patch by me, reviewed by Mark Dilger.

Discussion: http://postgr.es/m/CA+TgmoZUONCkdcdR778EKuE+f1r5Obieu63db2OgMPHaEvEPTQ@mail.gmail.com
2021-05-03 12:32:05 -04:00
Bruce Momjian 651d005e76 Revert use singular for -1 (commits 9ee7d533da and 5da9868ed9)
Turns out you can specify negative values using plurals:

	https://english.stackexchange.com/questions/9735/is-1-followed-by-a-singular-or-plural-noun

so the previous code was correct enough, and consistent with other usage
in our code.  Also add comments in the two places where this could be
confused.

Reported-by: Noah Misch

Diagnosed-by: 20210425115726.GA2353095@rfd.leadboat.com
2021-05-01 10:42:44 -04:00
Amit Kapila 51ef917303 Another try to fix the test case added by commit f5fc2f5b23.
As per analysis, it appears that the 'drop slot' message from the previous
test and the 'create slot' message of the new test are either missed or not
yet delivered to the stats collector, in which case we will still see the
stats from the old slot. This can happen rarely, which could be the reason
that we are seeing some random failures in the buildfarm. To avoid that,
we use a different slot name for the tests in
test_decoding/sql/stats.sql.

Reported-by: Tom Lane based on buildfarm reports
Author: Sawada Masahiko
Reviewed-by: Amit Kapila, Vignesh C
Discussion: https://postgr.es/m/20210319185247.ldebgpdaxsowiflw@alap3.anarazel.de
2021-04-30 07:55:42 +05:30
Fujii Masao 8e9ea08bae Don't pass "ONLY" options specified in TRUNCATE to foreign data wrapper.
Commit 8ff1c94649 allowed TRUNCATE command to truncate foreign tables.
Previously the information about "ONLY" options specified in a TRUNCATE
command was passed to the foreign data wrapper. Then postgres_fdw
constructed the TRUNCATE command to issue to the remote server and
included "ONLY" options in it based on the passed information.

On the other hand, "ONLY" options specified in SELECT, UPDATE or DELETE
have no effect when accessing or modifying the remote table, i.e.,
are not passed to the foreign data wrapper. So it's inconsistent to
make only the TRUNCATE command pass the "ONLY" options to the foreign
data wrapper. Therefore this commit changes the TRUNCATE command so that
it doesn't pass the "ONLY" options to the foreign data wrapper,
for consistency with other statements. Also this commit changes
postgres_fdw so that it never includes "ONLY" options in
the TRUNCATE command that it constructs.
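
Illustrative effect of the change (the foreign table name is invented):

    TRUNCATE ONLY ft_events;
    -- postgres_fdw now sends a plain TRUNCATE for the remote table, without ONLY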

Author: Fujii Masao
Reviewed-by: Bharath Rupireddy, Kyotaro Horiguchi, Justin Pryzby, Zhihong Yu
Discussion: https://postgr.es/m/551ed8c1-f531-818b-664a-2cecdab99cd8@oss.nttdata.com
2021-04-27 14:41:27 +09:00
Amit Kapila 3fa17d3771 Use HTAB for replication slot statistics.
Previously, we used an array of size max_replication_slots to
store stats for replication slots. But that had two problems in the cases
where a message for dropping a slot gets lost: 1) the stats for the new
slot are not recorded if the array is full, and 2) we could write beyond
the end of the array if the user reduces max_replication_slots.

This commit uses HTAB for replication slot statistics, resolving both
problems. Now, pgstat_vacuum_stat() searches for all the dead replication
slots in the stats hashtable and tells the collector to remove them. To avoid
showing the stats for the already-dropped slots, pg_stat_replication_slots
view searches slot stats by the slot name taken from pg_replication_slots.

Also, we send a message for creating a slot at slot creation, initializing
the stats. This reduces the possibility that the stats are accumulated
into the old slot stats when a message for dropping a slot gets lost.

Reported-by: Andres Freund
Author: Sawada Masahiko, test case by Vignesh C
Reviewed-by: Amit Kapila, Vignesh C, Dilip Kumar
Discussion: https://postgr.es/m/20210319185247.ldebgpdaxsowiflw@alap3.anarazel.de
2021-04-27 09:09:11 +05:30
Peter Geoghegan bb3ecc8c96 amcheck: MAXALIGN() nbtree special area offset.
This isn't strictly necessary, but in theory it might matter if in the
future the width of the nbtree special area changes -- its total size
might not be an even number of MAXALIGN() quantums, even with padding.
PageInit() MAXALIGN()s all special area offsets, but amcheck uses the
offset to perform initial basic validation of line pointers, so we don't
rely on the offset from the page header.

The real reason to do this is to set a good example for new code that
adds amcheck coverage for other index AMs.

Reported-By: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>
Discussion: https://postgr.es/m/CALj2ACUMqTR9nErh99FbOBmzCXE9=gXNqhBiwYOhejJJS1LXqQ@mail.gmail.com
2021-04-23 15:37:03 -07:00
Michael Paquier 62aa2bb293 Remove use of [U]INT64_FORMAT in some translatable strings
%lld with (long long), or %llu with (unsigned long long), are more
appropriate.  This is similar to 3286065.

Author: Kyotaro Horiguchi
Discussion: https://postgr.es/m/20210421.200000.1462448394029407895.horikyota.ntt@gmail.com
2021-04-23 13:25:49 +09:00
Etsuro Fujita bb684c82f7 Minor code cleanup in asynchronous execution support.
This is cleanup for commit 27e1f1456:

* ExecAppendAsyncEventWait(), which was modified a bit further by commit
  a8af856d3, duplicated the same nevents calculation.  Simplify the code
  a little bit to avoid the duplication.  Update comments there.
* Add an assertion to ExecAppendAsyncRequest().
* Update a comment about merging the async_capable options from input
  relations in merge_fdw_options(), per complaint from Kyotaro Horiguchi.
* Add a comment for fetch_more_data_begin().

Author: Etsuro Fujita
Discussion: https://postgr.es/m/CAPmGK1637W30Wx3MnrReewhafn6F_0J76mrJGoFXFnpPq4QfvA%40mail.gmail.com
2021-04-23 12:00:00 +09:00
Amit Kapila c64dcc7fee Fix test case added by commit f5fc2f5b23.
In the new test after resetting the stats, we were not waiting for the
stats message to be delivered. Also, we need to decode the results for
the new test, otherwise, it will show the old stats.

In passing,
a. Change docs added by commit f5fc2f5b23 as per suggestion by
Justin Pryzby.
b. Bump the PGSTAT_FILE_FORMAT_ID as commit f5fc2f5b23 changes the file
format of stats.

Reported-by: Tom Lane based on buildfarm reports
Author: Vignesh C, Justin Pryzby
Reviewed-by: Amit Kapila
Discussion: https://postgr.es/m/20210319185247.ldebgpdaxsowiflw@alap3.anarazel.de
2021-04-19 09:02:47 +05:30
Michael Paquier 7ef8b52cf0 Fix typos and grammar in comments and docs
Author: Justin Pryzby
Discussion: https://postgr.es/m/20210416070310.GG3315@telsasoft.com
2021-04-19 11:32:30 +09:00
Michael Paquier c731f9187b Replace magic constants for seek() calls in perl scripts
A couple of tests have been using 0 as a magic constant where SEEK_SET can
be used instead.  This makes the code easier to understand, and more
consistent with the changes done in 3c5b068.

Per discussion with Andrew Dunstan.

Discussion: https://postgr.es/m/YHrc24AgJQ6tQ1q0@paquier.xyz
2021-04-19 10:15:35 +09:00
Amit Kapila f5fc2f5b23 Add information of total data processed to replication slot stats.
This adds statistics about the total transaction count and total
transaction data logically sent to the decoding output plugin from
ReorderBuffer. Users can query the pg_stat_replication_slots view to check
these stats.
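
These can be queried from the view, for example (illustrative):

    SELECT slot_name, total_txns, total_bytes, spill_txns, spill_bytes
      FROM pg_stat_replication_slots;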

Suggested-by: Andres Freund
Author: Vignesh C and Amit Kapila
Reviewed-by: Sawada Masahiko, Amit Kapila
Discussion: https://postgr.es/m/20210319185247.ldebgpdaxsowiflw@alap3.anarazel.de
2021-04-16 07:34:43 +05:30
Peter Eisentraut fae65629ce Revert "psql: Show all query results by default"
This reverts commit 3a51306722.

Per discussion, this patch had too many issues to resolve at this
point of the development cycle.  We'll try again in the future.

Discussion: https://www.postgresql.org/message-id/flat/alpine.DEB.2.21.1904132231510.8961@lancre
2021-04-15 19:42:55 +02:00
Peter Eisentraut 59da8d9eb7 amcheck: Use correct format placeholder for TOAST chunk numbers
Several of these were already fixed in passing in
9acaf1a621, but one was remaining
inconsistent.
2021-04-15 08:58:03 +02:00
Robert Haas 9acaf1a621 amcheck: Reword some messages and fix an alignment problem.
We don't need to mention the attribute number in these messages, because
there's a dedicated column for that, but we should mention the toast
value ID, because that's really useful for any follow-up troubleshooting
the user wants to do. This also rewords some of the messages to hopefully
read a little better.

Also, use VARATT_EXTERNAL_GET_POINTER in case we're accessing a TOAST
pointer that isn't aligned on a platform that's fussy about alignment,
so that we don't crash while corruption-checking the user's data.

Mark Dilger, reviewed by me.

Discussion: http://postgr.es/m/7D3B9BF6-50D0-4C30-8506-1C1851C7F96F@enterprisedb.com
2021-04-14 12:46:31 -04:00
Peter Eisentraut 07e5e66742 Improve quoting in some error messages 2021-04-14 09:11:29 +02:00
Michael Paquier 93f4146144 Simplify tests of postgres_fdw terminating connections
The tests introduced in 32a9c0b for connections broken and
re-established rely on pg_terminate_backend() for their logic.  When
these were introduced, this function simply sent a signal to a backend
without waiting for the operation to complete, and the tests repeatedly
looked at pg_stat_activity to check if the operation was completed or
not.  Since aaf0432, it is possible to define a timeout to make
pg_terminate_backend() wait for a certain duration, so make use of it,
with a timeout reasonably large enough (3min) to give enough room for
the tests to pass even on slow machines.
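
A sketch of the call pattern now used (the application_name filter shown
here is hypothetical):

    SELECT pg_terminate_backend(pid, 180000)  -- wait up to 3 minutes
      FROM pg_stat_activity
     WHERE application_name = 'fdw_retry_check';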

Some measurements show that the tests of postgres_fdw are much faster
with this change.  For example, on my laptop, they now take 4s instead
of 6s.

Author: Bharath Rupireddy
Discussion: https://postgr.es/m/CALj2ACXGY_EfGrMTjKjHy2zi-u1u9rdeioU_fro0T6Jo8t56KQ@mail.gmail.com
2021-04-14 14:23:53 +09:00
Bruce Momjian 0f61727b75 Fixes for query_id feature
Ignore parallel workers in pg_stat_statements
  Oversight in 4f0b0966c8 which exposed queryid in parallel workers.
  Counters are aggregated by the main backend process so parallel workers
  would report duplicated activity, and could also report activity for the
  wrong entry as they are only aware of the top level queryid.

Fix thinko in pg_stat_get_activity when retrieving the queryid.

Remove unnecessary call to pgstat_report_queryid().

Reported-by: Amit Kapila, Andres Freund, Thomas Munro

Discussion: https://postgr.es/m/20210408051735.lfbdzun5zdlax5gd@alap3.anarazel.de p634GTSOqnDW86Owrn6qDAVosC5dJjXjp7BMfc5Gz1Q@mail.gmail.com

Author: Julien Rouhaud
2021-04-08 11:16:01 -04:00
Magnus Hagander 5844c23dc5 Merge v1.10 of pg_stat_statements into v1.9
v1.9 is already new in this version of PostgreSQL, so turn it into just
one change.

Author: Julien Rouhaud
Discussion: https://postgr.es/m/20210408120505.7zinijtdexbyghvb@nol
2021-04-08 15:15:17 +02:00
Fujii Masao 8ff1c94649 Allow TRUNCATE command to truncate foreign tables.
This commit introduces a new foreign data wrapper API for TRUNCATE.
It extends the TRUNCATE command so that it accepts foreign tables as
targets to truncate and invokes that API. It also extends postgres_fdw
so that it can issue a TRUNCATE command to foreign servers, by adding
a new routine for that TRUNCATE API.

The information about options specified in the TRUNCATE command, e.g.,
ONLY, CASCADE, etc., is passed to the FDW via the API, along with the
list of foreign tables to truncate. The FDW truncates the foreign data
sources that the passed foreign tables specify, based on that
information. For example, postgres_fdw constructs a TRUNCATE command
from them and issues it to the foreign server.

For performance, the TRUNCATE command invokes the FDW routine only once
per foreign server that the foreign tables to truncate belong to.

Author: Kazutaka Onishi, Kohei KaiGai, slightly modified by Fujii Masao
Reviewed-by: Bharath Rupireddy, Michael Paquier, Zhihong Yu, Alvaro Herrera, Stephen Frost, Ashutosh Bapat, Amit Langote, Daniel Gustafsson, Ibrar Ahmed, Fujii Masao
Discussion: https://postgr.es/m/CAOP8fzb_gkReLput7OvOK+8NHgw-RKqNv59vem7=524krQTcWA@mail.gmail.com
Discussion: https://postgr.es/m/CAJuF6cMWDDqU-vn_knZgma+2GMaout68YUgn1uyDnexRhqqM5Q@mail.gmail.com
2021-04-08 20:56:08 +09:00
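An illustrative sketch of the user-visible behavior described above, using hypothetical postgres_fdw objects (server 'loopback', remote table 'audit'):

    CREATE FOREIGN TABLE remote_audit (id int, payload text)
        SERVER loopback OPTIONS (table_name 'audit');
    -- With the new FDW TRUNCATE API, postgres_fdw forwards this as a
    -- TRUNCATE statement to the foreign server.
    TRUNCATE remote_audit;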
Magnus Hagander 6b4d23feef Track identical top vs nested queries independently in pg_stat_statements
Changing pg_stat_statements.track between 'all' and 'top' would control
whether pg_stat_statements tracked just top-level statements or also
statements inside functions, but when tracking all it would not
differentiate between the two. Being able to differentiate this is
useful both to track where the actual query is coming from, and to see
if there are differences in execution between the two.

To do this, add a boolean to the hash key indicating if the statement
was top level or not.

Experience from the pg_stat_kcache module shows that in at least some
"reasonable workloads" only <5% of the queries show up as both top level
and nested. Based on this admittedly small dataset, this patch does not
try to de-duplicate those query *texts*, and will just store one copy
for the top level and one for the nested.

Author: Julien Rouhaud
Reviewed-By: Magnus Hagander, Masahiro Ikeda
Discussion: https://postgr.es/m/20201202040516.GA43757@nol
2021-04-08 10:30:34 +02:00
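A possible way to observe the new behavior, assuming the view exposes the new hash-key flag as a 'toplevel' boolean column:

    SET pg_stat_statements.track = 'all';
    -- The same query text may now appear twice: once executed at the top
    -- level and once from inside a function.
    SELECT queryid, toplevel, calls
      FROM pg_stat_statements
     ORDER BY queryid, toplevel;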
Peter Eisentraut 2e0e066679 Update Unicode data to CLDR 39 2021-04-08 08:28:03 +02:00
Bruce Momjian 4f0b0966c8 Make use of in-core query id added by commit 5fd9dfa5f5
Use the in-core query id computation for pg_stat_activity,
log_line_prefix, and EXPLAIN VERBOSE.

Similar to other fields in pg_stat_activity, only the queryid from the
top-level statement is exposed, and if the backend's status isn't
active then the queryid from the last executed statement is displayed.

Add a %Q placeholder to include the queryid in log_line_prefix, which
will also only expose top level statements.

For EXPLAIN VERBOSE, if a query identifier has been computed, either by
enabling compute_query_id or using a third-party module, display it.

Bump catalog version.

Discussion: https://postgr.es/m/20210407125726.tkvjdbw76hxnpwfi@nol

Author: Julien Rouhaud

Reviewed-by: Alvaro Herrera, Nitin Jadhav, Zhihong Yu
2021-04-07 14:04:06 -04:00
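A brief sketch of the user-facing pieces, assuming the pg_stat_activity column is named query_id and that a query identifier is being computed (see the compute_query_id commit below):

    -- Query id of currently active backends.
    SELECT pid, query_id, query FROM pg_stat_activity WHERE state = 'active';
    -- Include the query id in log lines via the new %Q placeholder.
    ALTER SYSTEM SET log_line_prefix = '%m [%p] %Q ';
    SELECT pg_reload_conf();
    -- EXPLAIN VERBOSE prints the identifier when one has been computed.
    EXPLAIN (VERBOSE) SELECT 1;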
Robert Haas ec7ffb8096 amcheck: fix multiple problems with TOAST pointer validation
First, don't perform database access while holding a buffer lock.
When checking a heap, we can validate that TOAST pointers are sane by
performing a scan on the TOAST index and looking up the chunks that
correspond to each value ID that appears in a TOAST pointer in the main
table. But, to do that while holding a buffer lock at least risks
causing other backends to wait uninterruptibly, and probably can cause
undetected and uninterruptible deadlocks.  So, instead, make a list of
checks to perform while holding the lock, and then perform the checks
after releasing it.

Second, adjust things so that we don't try to follow TOAST pointers
for tuples that are already eligible to be pruned. The TOAST tuples
become eligible for pruning at the same time that the main tuple does,
so trying to check them may lead to spurious reports of corruption,
as observed in the buildfarm. The necessary infrastructure to decide
whether or not the tuple being checked is prunable was added by
commit 3b6c1259f9, but it wasn't
actually used for its intended purpose prior to this patch.

Mark Dilger, adjusted by me to avoid a memory leak.

Discussion: http://postgr.es/m/AC5479E4-6321-473D-AC92-5EC36299FBC2@enterprisedb.com
2021-04-07 13:39:12 -04:00
Bruce Momjian 5fd9dfa5f5 Move pg_stat_statements query jumbling to core.
Add compute_query_id GUC to control whether a query identifier should be
computed by the core (off by default).  It's therefore now possible to
disable core queryid computation and use pg_stat_statements with a
different algorithm to compute the query identifier by using a
third-party module.

To ensure that a single, well-defined source of query identifiers is
used, modules that calculate a query identifier should throw an
error if compute_query_id is enabled and a query identifier was
already calculated.

Discussion: https://postgr.es/m/20210407125726.tkvjdbw76hxnpwfi@nol

Author: Julien Rouhaud

Reviewed-by: Alvaro Herrera, Nitin Jadhav, Zhihong Yu
2021-04-07 13:06:56 -04:00
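A minimal sketch of enabling the new GUC so that in-core jumbling feeds pg_stat_statements; the settings shown are illustrative:

    ALTER SYSTEM SET compute_query_id = on;
    SELECT pg_reload_conf();
    -- pg_stat_statements now reuses the core-computed identifier instead of
    -- jumbling the query text itself.
    SELECT queryid, calls FROM pg_stat_statements LIMIT 5;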
Robert Haas 4573f6a9af amcheck: Remove duplicate XID/MXID bounds checks.
Commit 3b6c1259f9 resulted in the same
xmin and xmax bounds checking being performed in both check_tuple()
and check_tuple_visibility(). Remove the duplication.

While at it, adjust some code comments that were overlooked in that
commit.

Mark Dilger

Discussion: http://postgr.es/m/AC5479E4-6321-473D-AC92-5EC36299FBC2@enterprisedb.com
2021-04-07 12:11:44 -04:00
Peter Eisentraut 5c55dc8b47 libpq: Set Server Name Indication (SNI) for SSL connections
By default, have libpq set the TLS extension "Server Name Indication" (SNI).

This allows an SNI-aware SSL proxy to route connections.  (This
requires a proxy that is aware of the PostgreSQL protocol, not just
any SSL proxy.)

In the future, this could also allow the server to use different SSL
certificates for different host specifications.  (That would require
new server functionality.  This would be the client-side functionality
for that.)

Since SNI makes the host name appear in cleartext in the network
traffic, this might be undesirable in some cases.  Therefore, also add
a libpq connection option "sslsni" to turn it off.

Discussion: https://www.postgresql.org/message-id/flat/7289d5eb-62a5-a732-c3b9-438cee2cb709%40enterprisedb.com
2021-04-07 15:11:41 +02:00
Heikki Linnakangas d92b1cdbab Revert "Add sortsupport for gist_btree opclasses, for faster index builds."
This reverts commit 9f984ba6d2.

It was making the buildfarm unhappy; apparently, setting client_min_messages
in a regression test produces different output if log_statement='all'.
Another issue is that I now suspect the bit sortsupport function was in
fact not correct to call byteacmp(). Revert to investigate both of those
issues.
2021-04-07 14:33:21 +03:00
Heikki Linnakangas 9f984ba6d2 Add sortsupport for gist_btree opclasses, for faster index builds.
Commit 16fa9b2b30 introduced a faster way to build GiST indexes, by
sorting all the data. This commit adds the sortsupport functions needed
to make use of that feature for btree_gist.

Author: Andrey Borodin
Discussion: https://www.postgresql.org/message-id/2F3F7265-0D22-44DB-AD71-8554C743D943@yandex-team.ru
2021-04-07 13:22:05 +03:00
Peter Eisentraut 0b5e824528 Message improvement
The previous wording contained a superfluous comma.  Adjust phrasing
for grammatical correctness and clarity.
2021-04-07 07:42:44 +02:00
Michael Paquier 4c0239cb7a Remove redundant memset(0) calls for page init of some index AMs
Bloom, GIN, GiST and SP-GiST rely on PageInit() to initialize the
contents of a page, and this routine fills a page entirely with zeros
for a size of BLCKSZ, including the special space.  Those index AMs have
been using an extra memset() call to fill the special page space, or
even the whole page, with zeros, which is not necessary as PageInit()
already does this work, so let's remove them.  GiST was not making this
extra call, but had carried a commented-out call that did so since
6236991.

While on it, remove one MAXALIGN() for SP-GiST as PageInit() takes care
of that.  This makes the whole page initialization logic more consistent
across all index AMs.

Author: Bharath Rupireddy
Reviewed-by: Vignesh C, Mahendra Singh Thalor
Discussion: https://postgr.es/m/CALj2ACViOo2qyaPT7krWm4LRyRTw9kOXt+g6PfNmYuGA=YHj9A@mail.gmail.com
2021-04-07 14:35:26 +09:00
Fujii Masao a3740c48eb postgres_fdw: Allow partitions specified in LIMIT TO to be imported.
Commit f49bcd4ef3 disallowed postgres_fdw from importing table partitions.
Because all data can be accessed through the partitioned table which
is the root of the partitioning hierarchy, importing only the partitioned
table should allow access to all the data without creating extra objects.
This is a reasonable default when importing a whole schema. But there
may be cases where users want to explicitly import one of
a partitioned table's partitions.

For that use case, this commit allows postgres_fdw to import tables or
foreign tables which are partitions of some other table only when they
are explicitly specified in LIMIT TO clause.  It doesn't change
the behavior that any partitions not specified in LIMIT TO are
automatically excluded in IMPORT FOREIGN SCHEMA command.

Author: Matthias van de Meent
Reviewed-by: Bernd Helmle, Amit Langote, Michael Paquier, Fujii Masao
Discussion: https://postgr.es/m/CAEze2Whwg4i=mzApMe+PXxCEfgoZmHGqdqQFW7J4bmj_5p6t1A@mail.gmail.com
2021-04-07 02:32:10 +09:00
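An illustrative IMPORT FOREIGN SCHEMA call, with hypothetical server, schema, and partition names; without LIMIT TO the partition would still be excluded:

    IMPORT FOREIGN SCHEMA public
        LIMIT TO (measurement_y2021m01)
        FROM SERVER remote INTO local_parts;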
Peter Eisentraut 3a51306722 psql: Show all query results by default
Previously, psql printed only the last result if a command string
returned multiple result sets.  Now it prints all of them.  The
previous behavior can be obtained by setting the psql variable
SHOW_ALL_RESULTS to off.

Author: Fabien COELHO <coelho@cri.ensmp.fr>
Reviewed-by: "Iwata, Aya" <iwata.aya@jp.fujitsu.com>
Reviewed-by: Daniel Verite <daniel@manitou-mail.org>
Reviewed-by: Peter Eisentraut <peter.eisentraut@2ndquadrant.com>
Reviewed-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Reviewed-by: vignesh C <vignesh21@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/alpine.DEB.2.21.1904132231510.8961@lancre
2021-04-06 17:10:24 +02:00
Andres Freund 1d9c5d0ce2 Do not rely on pgstat.h to indirectly include storage/ headers.
An upcoming patch might remove the (now indirect) proc.h
include (which in turn includes other headers), and it's cleaner for
the modified files to include their dependencies directly anyway...

Discussion: https://postgr.es/m/20210402194458.2vu324hkk2djq6ce@alap3.anarazel.de
2021-04-02 20:02:47 -07:00
Fujii Masao b1be3074ac postgres_fdw: Add option to control whether to keep connections open.
This commit adds a new option keep_connections that controls
whether postgres_fdw keeps the connections to the foreign server open
so that the subsequent queries can re-use them. This option can only be
specified for a foreign server. The default is on. If set to off,
all connections to the foreign server will be discarded
at the end of the transaction. Closed connections will be re-established
when they are needed by future queries using a foreign table.

This option is useful, for example, when users want to prevent
the connections from eating up the foreign server's connection
capacity.

Author: Bharath Rupireddy
Reviewed-by: Alexey Kondratov, Vignesh C, Fujii Masao
Discussion: https://postgr.es/m/CALj2ACVvrp5=AVp2PupEm+nAC8S4buqR3fJMmaCoc7ftT0aD2A@mail.gmail.com
2021-04-02 19:45:42 +09:00
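A sketch of toggling the new option on a hypothetical foreign server:

    -- Discard cached connections to this server at transaction end.
    ALTER SERVER loopback OPTIONS (ADD keep_connections 'off');
    -- Restore the default behavior later.
    ALTER SERVER loopback OPTIONS (SET keep_connections 'on');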
Fujii Masao 98e5bd103f Fix typos in comments.
Author: Masahiko Sawada
Discussion: https://postgr.es/m/CAD21AoA1YL7t0nzVSEySx6zOaE7xO3r0jyu8hkitGL2_XbaMxQ@mail.gmail.com
2021-04-02 16:26:36 +09:00
David Rowley 9eacee2e62 Add Result Cache executor node (take 2)
Here we add a new executor node type named "Result Cache".  The planner
can include this node type in the plan to have the executor cache the
results from the inner side of parameterized nested loop joins.  This
allows caching of tuples for sets of parameters so that in the event that
the node sees the same parameter values again, it can just return the
cached tuples instead of rescanning the inner side of the join all over
again.  Internally, result cache uses a hash table in order to quickly
find tuples that have been previously cached.

For certain data sets, this can significantly improve the performance of
joins.  The best cases for using this new node type are for join problems
where a large portion of the tuples from the inner side of the join have
no join partner on the outer side of the join.  In such cases, hash join
would have to hash values that are never looked up, thus bloating the hash
table and possibly causing it to multi-batch.  Merge joins would have to
skip over all of the unmatched rows.  If we use a nested loop join with a
result cache, then we only cache tuples that have at least one join
partner on the outer side of the join.  The benefits of using a
parameterized nested loop with a result cache increase when there are
fewer distinct values being looked up and the number of lookups of each
value is large.  Also, hash probes to look up the cache can be much faster
than the hash probe in a hash join as it's common that the result cache's
hash table is much smaller than the hash join's due to result cache only
caching useful tuples rather than all tuples from the inner side of the
join.  This variation in hash probe performance is more significant when
the hash join's hash table no longer fits into the CPU's L3 cache, but the
result cache's hash table does.  The apparent "random" access of hash
buckets with each hash probe can cause a poor L3 cache hit ratio for large
hash tables.  Smaller hash tables generally perform better.

The hash table used for the cache limits itself to not exceeding work_mem
* hash_mem_multiplier in size.  We maintain a dlist of keys for this cache
and when we're adding new tuples and realize we've exceeded the memory
budget, we evict cache entries starting with the least recently used ones
until we have enough memory to add the new tuples to the cache.

For parameterized nested loop joins, we now consider using one of these
result cache nodes in between the nested loop node and its inner node.  We
determine when this might be useful based on cost, which is primarily
driven off of what the expected cache hit ratio will be.  Estimating the
cache hit ratio relies on having good distinct estimates on the nested
loop's parameters.

For now, the planner will only consider using a result cache for
parameterized nested loop joins.  This works for both normal joins and
also for LATERAL type joins to subqueries.  It is possible to use this new
node for other uses in the future.  For example, to cache results from
correlated subqueries.  However, that's not done here due to some
difficulties obtaining a distinct estimation on the outer plan to
calculate the estimated cache hit ratio.  Currently we plan the inner plan
before planning the outer plan so there is no good way to know if a result
cache would be useful or not since we can't estimate the number of times
the subplan will be called until the outer plan is generated.

The functionality being added here is newly introducing a dependency on
the return value of estimate_num_groups() during the join search.
Previously, during the join search, we only ever needed to perform
selectivity estimations.  With this commit, we need to use
estimate_num_groups() in order to estimate what the hit ratio on the
result cache will be.   In simple terms, if we expect 10 distinct values
and we expect 1000 outer rows, then we'll estimate the hit ratio to be
99%.  Since cache hits are very cheap compared to scanning the underlying
nodes on the inner side of the nested loop join, then this will
significantly reduce the planner's cost for the join.   However, it's
fairly easy to see here that things will go bad when estimate_num_groups()
incorrectly returns a value that's significantly lower than the actual
number of distinct values.  If this happens then that may cause us to make
use of a nested loop join with a result cache instead of some other join
type, such as a merge or hash join.  Our distinct estimations have been
known to be a source of trouble in the past, so the extra reliance on them
here could cause the planner to choose slower plans than it did prior
to having this feature.  Distinct estimations are also fairly hard to
make accurately when several tables have been joined already or when a
WHERE clause filters out a set of values that are correlated to the
expressions we're estimating the number of distinct values for.

For now, the costing we perform during query planning for result caches
does put quite a bit of faith in the distinct estimations being accurate.
When these are accurate then we should generally see faster execution
times for plans containing a result cache.  However, in the real world, we
may find that we need to either change the costings to put less trust in
the distinct estimations being accurate or perhaps even disable this
feature by default.  There's always an element of risk when we teach the
query planner to do new tricks that it decides to use that new trick at
the wrong time and causes a regression.  Users may opt to get the old
behavior by turning the feature off using the enable_resultcache GUC.
Currently, this is enabled by default.  It remains to be seen if we'll
maintain that setting for the release.

Additionally, the name "Result Cache" is the best name I could think of
for this new node at the time I started writing the patch.  Nobody seems
to strongly dislike the name. A few people did suggest other names but no
other name seemed to dominate in the brief discussion that there was about
names. Let's allow the beta period to see if the current name pleases
enough people.  If there's some consensus on a better name, then we can
change it before the release.  Please see the 2nd discussion link below
for the discussion on the "Result Cache" name.

Author: David Rowley
Reviewed-by: Andy Fan, Justin Pryzby, Zhihong Yu, Hou Zhijie
Tested-By: Konstantin Knizhnik
Discussion: https://postgr.es/m/CAApHDvrPcQyQdWERGYWx8J%2B2DLUNgXu%2BfOSbQ1UscxrunyXyrQ%40mail.gmail.com
Discussion: https://postgr.es/m/CAApHDvq=yQXr5kqhRviT2RhNKwToaWr9JAN5t+5_PzhuRJ3wvg@mail.gmail.com
2021-04-02 14:10:56 +13:00
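A hedged sketch of how the new node might surface in a plan; the tables are hypothetical and whether the planner picks a Result Cache depends on its estimates:

    SET enable_resultcache = on;  -- the default
    EXPLAIN (COSTS OFF)
    SELECT *
      FROM orders o,
           LATERAL (SELECT * FROM order_items i WHERE i.order_id = o.id) s;
    -- With few distinct o.id values, the plan may show a "Result Cache"
    -- node between the Nested Loop and the inner index scan.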
Stephen Frost c9c41c7a33 Rename Default Roles to Predefined Roles
The term 'default roles' wasn't quite apt as these roles aren't able to
be modified or removed after installation, so rename them to be
'Predefined Roles' instead, adding an entry into the newly added
Obsolete Appendix to help users of current releases find the new
documentation.

Bruce Momjian and Stephen Frost

Discussion: https://postgr.es/m/157742545062.1149.11052653770497832538%40wrigleys.postgresql.org
and https://www.postgresql.org/message-id/20201120211304.GG16415@tamriel.snowman.net
2021-04-01 15:32:06 -04:00
Robert Haas 3b6c1259f9 amcheck: Fix verify_heapam's tuple visibility checking rules.
We now follow the order of checks from HeapTupleSatisfies* more
closely to avoid coming to erroneous conclusions.

Mark Dilger and Robert Haas

Discussion: http://postgr.es/m/CA+Tgmob6sii0yTvULYJ0Vq4w6ZBmj7zUhddL3b+SKDi9z9NA7Q@mail.gmail.com
2021-04-01 13:36:28 -04:00
David Rowley 28b3e3905c Revert b6002a796
This removes "Add Result Cache executor node".  It seems that something
weird is going on with the tracking of cache hits and misses as
highlighted by many buildfarm animals.  It's not yet clear what the
problem is, as other parts of the plan indicate that the cache did work
correctly; it's just the hits and misses that were being reported as 0.

This is an especially bad time to have the buildfarm so broken, so
reverting before too many more animals go red.

Discussion: https://postgr.es/m/CAApHDvq_hydhfovm4=izgWs+C5HqEeRScjMbOgbpC-jRAeK3Yw@mail.gmail.com
2021-04-01 13:33:23 +13:00
David Rowley b6002a796d Add Result Cache executor node
Here we add a new executor node type named "Result Cache".  The planner
can include this node type in the plan to have the executor cache the
results from the inner side of parameterized nested loop joins.  This
allows caching of tuples for sets of parameters so that in the event that
the node sees the same parameter values again, it can just return the
cached tuples instead of rescanning the inner side of the join all over
again.  Internally, result cache uses a hash table in order to quickly
find tuples that have been previously cached.

For certain data sets, this can significantly improve the performance of
joins.  The best cases for using this new node type are for join problems
where a large portion of the tuples from the inner side of the join have
no join partner on the outer side of the join.  In such cases, hash join
would have to hash values that are never looked up, thus bloating the hash
table and possibly causing it to multi-batch.  Merge joins would have to
skip over all of the unmatched rows.  If we use a nested loop join with a
result cache, then we only cache tuples that have at least one join
partner on the outer side of the join.  The benefits of using a
parameterized nested loop with a result cache increase when there are
fewer distinct values being looked up and the number of lookups of each
value is large.  Also, hash probes to look up the cache can be much faster
than the hash probe in a hash join as it's common that the result cache's
hash table is much smaller than the hash join's due to result cache only
caching useful tuples rather than all tuples from the inner side of the
join.  This variation in hash probe performance is more significant when
the hash join's hash table no longer fits into the CPU's L3 cache, but the
result cache's hash table does.  The apparent "random" access of hash
buckets with each hash probe can cause a poor L3 cache hit ratio for large
hash tables.  Smaller hash tables generally perform better.

The hash table used for the cache limits itself to not exceeding work_mem
* hash_mem_multiplier in size.  We maintain a dlist of keys for this cache
and when we're adding new tuples and realize we've exceeded the memory
budget, we evict cache entries starting with the least recently used ones
until we have enough memory to add the new tuples to the cache.

For parameterized nested loop joins, we now consider using one of these
result cache nodes in between the nested loop node and its inner node.  We
determine when this might be useful based on cost, which is primarily
driven off of what the expected cache hit ratio will be.  Estimating the
cache hit ratio relies on having good distinct estimates on the nested
loop's parameters.

For now, the planner will only consider using a result cache for
parameterized nested loop joins.  This works for both normal joins and
also for LATERAL type joins to subqueries.  It is possible to use this new
node for other uses in the future.  For example, to cache results from
correlated subqueries.  However, that's not done here due to some
difficulties obtaining a distinct estimation on the outer plan to
calculate the estimated cache hit ratio.  Currently we plan the inner plan
before planning the outer plan so there is no good way to know if a result
cache would be useful or not since we can't estimate the number of times
the subplan will be called until the outer plan is generated.

The functionality being added here is newly introducing a dependency on
the return value of estimate_num_groups() during the join search.
Previously, during the join search, we only ever needed to perform
selectivity estimations.  With this commit, we need to use
estimate_num_groups() in order to estimate what the hit ratio on the
result cache will be.   In simple terms, if we expect 10 distinct values
and we expect 1000 outer rows, then we'll estimate the hit ratio to be
99%.  Since cache hits are very cheap compared to scanning the underlying
nodes on the inner side of the nested loop join, then this will
significantly reduce the planner's cost for the join.   However, it's
fairly easy to see here that things will go bad when estimate_num_groups()
incorrectly returns a value that's significantly lower than the actual
number of distinct values.  If this happens then that may cause us to make
use of a nested loop join with a result cache instead of some other join
type, such as a merge or hash join.  Our distinct estimations have been
known to be a source of trouble in the past, so the extra reliance on them
here could cause the planner to choose slower plans than it did prior
to having this feature.  Distinct estimations are also fairly hard to
make accurately when several tables have been joined already or when a
WHERE clause filters out a set of values that are correlated to the
expressions we're estimating the number of distinct values for.

For now, the costing we perform during query planning for result caches
does put quite a bit of faith in the distinct estimations being accurate.
When these are accurate then we should generally see faster execution
times for plans containing a result cache.  However, in the real world, we
may find that we need to either change the costings to put less trust in
the distinct estimations being accurate or perhaps even disable this
feature by default.  There's always an element of risk when we teach the
query planner to do new tricks that it decides to use that new trick at
the wrong time and causes a regression.  Users may opt to get the old
behavior by turning the feature off using the enable_resultcache GUC.
Currently, this is enabled by default.  It remains to be seen if we'll
maintain that setting for the release.

Additionally, the name "Result Cache" is the best name I could think of
for this new node at the time I started writing the patch.  Nobody seems
to strongly dislike the name. A few people did suggest other names but no
other name seemed to dominate in the brief discussion that there was about
names. Let's allow the beta period to see if the current name pleases
enough people.  If there's some consensus on a better name, then we can
change it before the release.  Please see the 2nd discussion link below
for the discussion on the "Result Cache" name.

Author: David Rowley
Reviewed-by: Andy Fan, Justin Pryzby, Zhihong Yu
Tested-By: Konstantin Knizhnik
Discussion: https://postgr.es/m/CAApHDvrPcQyQdWERGYWx8J%2B2DLUNgXu%2BfOSbQ1UscxrunyXyrQ%40mail.gmail.com
Discussion: https://postgr.es/m/CAApHDvq=yQXr5kqhRviT2RhNKwToaWr9JAN5t+5_PzhuRJ3wvg@mail.gmail.com
2021-04-01 12:32:22 +13:00
Tom Lane 8998e3cafa Silence compiler warning in non-assert builds.
Per buildfarm.
2021-03-31 16:50:45 -04:00
Tom Lane 86dc90056d Rework planning and execution of UPDATE and DELETE.
This patch makes two closely related sets of changes:

1. For UPDATE, the subplan of the ModifyTable node now only delivers
the new values of the changed columns (i.e., the expressions computed
in the query's SET clause) plus row identity information such as CTID.
ModifyTable must re-fetch the original tuple to merge in the old
values of any unchanged columns.  The core advantage of this is that
the changed columns are uniform across all tables of an inherited or
partitioned target relation, whereas the other columns might not be.
A secondary advantage, when the UPDATE involves joins, is that less
data needs to pass through the plan tree.  The disadvantage of course
is an extra fetch of each tuple to be updated.  However, that seems to
be very nearly free in context; even worst-case tests don't show it to
add more than a couple percent to the total query cost.  At some point
it might be interesting to combine the re-fetch with the tuple access
that ModifyTable must do anyway to mark the old tuple dead; but that
would require a good deal of refactoring and it seems it wouldn't buy
all that much, so this patch doesn't attempt it.

2. For inherited UPDATE/DELETE, instead of generating a separate
subplan for each target relation, we now generate a single subplan
that is just exactly like a SELECT's plan, then stick ModifyTable
on top of that.  To let ModifyTable know which target relation a
given incoming row refers to, a tableoid junk column is added to
the row identity information.  This gets rid of the horrid hack
that was inheritance_planner(), eliminating O(N^2) planning cost
and memory consumption in cases where there were many unprunable
target relations.

Point 2 of course requires point 1, so that there is a uniform
definition of the non-junk columns to be returned by the subplan.
We can't insist on uniform definition of the row identity junk
columns however, if we want to keep the ability to have both
plain and foreign tables in a partitioning hierarchy.  Since
it wouldn't scale very far to have every child table have its
own row identity column, this patch includes provisions to merge
similar row identity columns into one column of the subplan result.
In particular, we can merge the whole-row Vars typically used as
row identity by FDWs into one column by pretending they are type
RECORD.  (It's still okay for the actual composite Datums to be
labeled with the table's rowtype OID, though.)

There is more that can be done to file down residual inefficiencies
in this patch, but it seems to be committable now.

FDW authors should note several API changes:

* The argument list for AddForeignUpdateTargets() has changed, and so
has the method it must use for adding junk columns to the query.  Call
add_row_identity_var() instead of manipulating the parse tree directly.
You might want to reconsider exactly what you're adding, too.

* PlanDirectModify() must now work a little harder to find the
ForeignScan plan node; if the foreign table is part of a partitioning
hierarchy then the ForeignScan might not be the direct child of
ModifyTable.  See postgres_fdw for sample code.

* To check whether a relation is a target relation, it's no
longer sufficient to compare its relid to root->parse->resultRelation.
Instead, check it against all_result_relids or leaf_result_relids,
as appropriate.

Amit Langote and Tom Lane

Discussion: https://postgr.es/m/CA+HiwqHpHdqdDn48yCEhynnniahH78rwcrv1rEX65-fsZGBOLQ@mail.gmail.com
2021-03-31 11:52:37 -04:00
Etsuro Fujita 27e1f14563 Add support for asynchronous execution.
This implements asynchronous execution, which runs multiple parts of a
non-parallel-aware Append concurrently rather than serially to improve
performance when possible.  Currently, the only node type that can be
run concurrently is a ForeignScan that is an immediate child of such an
Append.  In the case where such ForeignScans access data on different
remote servers, this runs those ForeignScans concurrently, overlapping
the remote operations so that they are performed simultaneously, which
improves performance especially when the operations involve
time-consuming ones such as remote joins and remote aggregations.

We may extend this to other node types such as joins or aggregates over
ForeignScans in the future.

This also adds the support for postgres_fdw, which is enabled by the
table-level/server-level option "async_capable".  The default is false.

Robert Haas, Kyotaro Horiguchi, Thomas Munro, and myself.  This commit
is mostly based on the patch proposed by Robert Haas, but also uses
stuff from the patch proposed by Kyotaro Horiguchi and from the patch
proposed by Thomas Munro.  Reviewed by Kyotaro Horiguchi, Konstantin
Knizhnik, Andrey Lepikhov, Movead Li, Thomas Munro, Justin Pryzby, and
others.

Discussion: https://postgr.es/m/CA%2BTgmoaXQEt4tZ03FtQhnzeDEMzBck%2BLrni0UWHVVgOTnA6C1w%40mail.gmail.com
Discussion: https://postgr.es/m/CA%2BhUKGLBRyu0rHrDCMC4%3DRn3252gogyp1SjOgG8SEKKZv%3DFwfQ%40mail.gmail.com
Discussion: https://postgr.es/m/20200228.170650.667613673625155850.horikyota.ntt%40gmail.com
2021-03-31 18:45:00 +09:00
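A sketch of enabling the new option, on a hypothetical server or foreign table:

    ALTER SERVER remote1 OPTIONS (ADD async_capable 'true');
    -- Or per foreign table:
    ALTER FOREIGN TABLE remote_part1 OPTIONS (ADD async_capable 'true');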
Amit Kapila 13cb5bd846 Remove extra semicolon in postgres_fdw tests.
Author: Suraj Kharage
Reviewed-by: Bharath Rupireddy, Vignesh C
Discussion: https://postgr.es/m/CAF1DzPWRfxUeH-wShz7P_pK5Tx6M_nEK+TkS8gn5ngvg07Q5=g@mail.gmail.com
2021-03-31 10:36:39 +05:30
Bruce Momjian 9ee7d533da adjust dblink regression expected output for commit 5da9868ed9
Seems the -1/singular output is used in the dblink regression tests.

Reported-by: Álvaro Herrera

Discussion: https://postgr.es/m/20210330231506.GA10666@alvherre.pgsql
2021-03-30 19:46:31 -04:00
David Rowley ed934d4fa3 Allow estimate_num_groups() to pass back further details about the estimation
Here we add a new output parameter to estimate_num_groups() to allow it to
inform the caller of additional, possibly useful information about the
estimation.

The new output parameter is a struct that currently contains just a single
field with a set of flags.  This was done rather than having the flags as
an output parameter to allow future fields to be added without having to
change the signature of the function at a later date when we want to pass
back further information that might not be suitable to store in the flags
field.

It seems reasonable that one day in the future the planner would want
to know more about the estimation. For example, how many individual sets
of statistics was the estimation generated from?  The planner may want to
take that into account if we ever want to consider risks as well as costs
when generating plans.

For now, there's only 1 flag we set in the flags field.  This is to
indicate if the estimation fell back on using the hard-coded constants in
any part of the estimation. Callers may like to change their behavior if
this is set, and this gives them the ability to do so.  Callers may pass
the flag pointer as NULL if they have no interest in obtaining any
additional information about the estimate.

We're not adding any actual usages of these flags here.  Some follow-up
commits will make use of this feature.  Additionally, we're also not
making any changes to add support for clauselist_selectivity() and
clauselist_selectivity_ext().  However, if this is required in the future
then the same struct being added here should be fine to use as a new
output argument for those functions too.

Author: David Rowley
Discussion: https://postgr.es/m/CAApHDvqQqpk=1W-G_ds7A9CsXX3BggWj_7okinzkLVhDubQzjA@mail.gmail.com
2021-03-30 20:52:46 +13:00
Amit Kapila f64ea6dc5c Add an xid argument to the filter_prepare callback for output plugins.
Along with gid, this provides a different way to identify the transaction.
Users that use the xid in some way when preparing transactions can use it
to filter prepared transactions. The later commands COMMIT PREPARED and
ROLLBACK PREPARED carry both identifiers, giving an output plugin the
choice of which to use.

Author: Markus Wanner
Reviewed-by: Vignesh C, Amit Kapila
Discussion: https://postgr.es/m/ee280000-7355-c4dc-e47b-2436e7be959c@enterprisedb.com
2021-03-30 10:34:43 +05:30
Etsuro Fujita bc2797ebb1 Update obsolete comment.
Back-patch to all supported branches.

Author: Etsuro Fujita
Discussion: https://postgr.es/m/CAPmGK17DwzaSf%2BB71dhL2apXdtG-OmD6u2AL9Cq2ZmAR0%2BzapQ%40mail.gmail.com
2021-03-30 13:00:00 +09:00
Peter Eisentraut 22e1943f13 pgcrypto: Check for error return of px_cipher_decrypt()
This has previously not been a problem (that anyone ever reported),
but in future OpenSSL versions (3.0.0), where legacy ciphers are/can
be disabled, this is the place where this is reported.  So we need to
catch the error here, otherwise the higher-level functions would
return garbage.  The nearby encryption code already handled errors
similarly.

Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Discussion: https://www.postgresql.org/message-id/9e9c431c-0adc-7a6d-9b1a-915de1ba3fe7@enterprisedb.com
2021-03-23 11:48:37 +01:00
Robert Haas bbe0a81db6 Allow configurable LZ4 TOAST compression.
There is now a per-column COMPRESSION option which can be set to pglz
(the default, and the only option up until now) or lz4. Or, if you
like, you can set the new default_toast_compression GUC to lz4, and
then that will be the default for new table columns for which no value
is specified. We don't have lz4 support in the PostgreSQL code, so
to use lz4 compression, PostgreSQL must be built --with-lz4.

In general, TOAST compression means compression of individual column
values, not the whole tuple, and those values can either be compressed
inline within the tuple or compressed and then stored externally in
the TOAST table, so those properties also apply to this feature.

Prior to this commit, a TOAST pointer has two unused bits as part of
the va_extsize field, and a compressed datum has two unused bits as
part of the va_rawsize field. These bits are unused because the length
of a varlena is limited to 1GB; we now use them to indicate the
compression type that was used. This means we only have bit space for
2 more built-in compression types, but we could work around that
problem, if necessary, by introducing a new vartag_external value for
any further types we end up wanting to add. Hopefully, it won't be
too important to offer a wide selection of algorithms here, since
each one we add not only takes more coding but also adds a build
dependency for every packager. Nevertheless, it seems worth doing
at least this much, because LZ4 gets better compression than PGLZ
with less CPU usage.

It's possible for LZ4-compressed datums to leak into composite type
values stored on disk, just as it is for PGLZ. It's also possible for
LZ4-compressed attributes to be copied into a different table via SQL
commands such as CREATE TABLE AS or INSERT .. SELECT.  It would be
expensive to force such values to be decompressed, so PostgreSQL has
never done so. For the same reasons, we also don't force recompression
of already-compressed values even if the target table prefers a
different compression method than was used for the source data.  These
architectural decisions are perhaps arguable but revisiting them is
well beyond the scope of what seemed possible to do as part of this
project.  However, it's relatively cheap to recompress as part of
VACUUM FULL or CLUSTER, so this commit adjusts those commands to do
so, if the configured compression method of the table happens not to
match what was used for some column value stored therein.

Dilip Kumar. The original patches on which this work was based were
written by Ildus Kurbangaliev, and those patches were based on
even earlier work by Nikita Glukhov, but the design has since changed
very substantially, since allowing a potentially large number of
compression methods that could be added and dropped on a running
system proved too problematic given some of the architectural issues
mentioned above; the choice of which specific compression method to
add first is now different; and a lot of the code has been heavily
refactored.  More recently, Justin Pryzby helped quite a bit with
testing and reviewing and this version also includes some code
contributions from him. Other design input and review from Tomas
Vondra, Álvaro Herrera, Andres Freund, Oleg Bartunov, Alexander
Korotkov, and me.

Discussion: http://postgr.es/m/20170907194236.4cefce96%40wp.localdomain
Discussion: http://postgr.es/m/CAFiTN-uUpX3ck%3DK0mLEk-G_kUQY%3DSNOTeqdaNRR9FMdQrHKebw%40mail.gmail.com
2021-03-19 15:10:38 -04:00
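An illustrative use of the new settings, assuming the server was built --with-lz4; the object names and exact DDL spelling are examples:

    SET default_toast_compression = 'lz4';
    CREATE TABLE docs (id int, body text COMPRESSION lz4);
    -- Per-column override; VACUUM FULL recompresses existing values that
    -- don't match the column's configured method.
    ALTER TABLE docs ALTER COLUMN body SET COMPRESSION pglz;
    VACUUM FULL docs;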
Andres Freund 5f79580ad6 Fix memory lifetime issues of replication slot stats.
When accessing replication slot stats, introduced in 9868167500,
pgstat_read_statsfiles() reads the data into newly allocated
memory. Unfortunately the current memory context at that point is the
caller's, leading to leaks and use-after-free dangers.

The fix is trivial, explicitly use pgStatLocalContext. There's some
potential for further improvements, but that's outside of the scope of
this bugfix.

No backpatch necessary, feature is only in HEAD.

Author: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/20210317230447.c7uc4g3vbs4wi32i@alap3.anarazel.de
2021-03-17 16:21:46 -07:00
Peter Geoghegan 65445469d6 amcheck: Reduce debug message verbosity.
Empty sibling pages can occasionally be much more common than any other
event that we report on at elevel DEBUG1.  Increase the elevel for
relevant cases to DEBUG2 to avoid overwhelming the user with relatively
insignificant details.
2021-03-16 13:11:17 -07:00
Robert Haas 4078ce65a0 Fix a confusing amcheck corruption message.
Don't complain about the last TOAST chunk number being different
from what we expected if there are no TOAST chunks at all.
In such a case, saying that the final chunk number is 0 is not
really accurate, and the fact the value is missing from the
TOAST table is reported separately anyway.

Mark Dilger

Discussion: http://postgr.es/m/AA5506CE-7D2A-42E4-A51D-358635E3722D@enterprisedb.com
2021-03-16 15:42:50 -04:00
Michael Paquier 0ba71107ef Revert changes for SSL compression in libpq
This partially reverts 096bbf7 and 9d2d457, undoing the libpq changes as
they could cause breakage in distributions that share a single libpq
version across multiple major versions of Postgres for extensions and
applications linking to it.

Note that the backend is unchanged here, and it still disables SSL
compression while simplifying the underlying catalogs that tracked if
compression was enabled or not for a SSL connection.

Per discussion with Tom Lane and Daniel Gustafsson.

Discussion: https://postgr.es/m/YEbq15JKJwIX+S6m@paquier.xyz
2021-03-10 09:35:42 +09:00
Michael Paquier 096bbf7c93 Switch back sslcompression to be a normal input field in libpq
Per buildfarm member crake, any installation including a postgres_fdw server
with this option set would fail to pg_upgrade properly, as the
option got hidden in f9264d1 by becoming a debug option, making the
restore of the FDW server fail.

This changes back the option in libpq to be visible, but still inactive
to fix this upgrade issue.

Discussion: https://postgr.es/m/YEbq15JKJwIX+S6m@paquier.xyz
2021-03-09 19:52:36 +09:00
Michael Paquier f9264d1524 Remove support for SSL compression
PostgreSQL disabled compression as of e3bdb2d and the documentation
has recommended against using it ever since.  Additionally, SSL compression has
been disabled in OpenSSL since version 1.1.0, and was disabled in many
distributions long before that.  The most recent TLS version, TLSv1.3,
disallows compression at the protocol level.

This commit removes the feature itself, removing support for the libpq
parameter sslcompression (parameter still listed for compatibility
reasons with existing connection strings, just ignored), and removes
the equivalent field in pg_stat_ssl and de facto PgBackendSSLStatus.

Note that, on top of removing the ability to activate compression by
configuration, compression is actively disabled in both frontend and
backend to avoid overrides from local configurations.

A TAP test is added for deprecated SSL parameters to check
backwards compatibility.

Bump catalog version.

Author: Daniel Gustafsson
Reviewed-by: Peter Eisentraut, Magnus Hagander, Michael Paquier
Discussion:  https://postgr.es/m/7E384D48-11C5-441B-9EC3-F7DB1F8518F6@yesql.se
2021-03-09 11:16:47 +09:00
Tom Lane 1265a9c8f8 Add binary I/O capability for cube datatype.
We can adjust the not-yet-released cube--1.4--1.5.sql upgrade
rather than making a whole new version.

KaiGai Kohei

Discussion: https://postgr.es/m/CAOP8fzZO4y60QPTK=RGDXeVeVHV9tLHKOsh7voUOoUouVCPV8A@mail.gmail.com
2021-03-06 12:04:05 -05:00
Tom Lane 112d411fbe Remove deprecated containment operators for contrib types.
Since PG 8.2, @ and ~ have been deprecated aliases for the containment
operators @> and <@.  It seems like enough time has passed to actually
remove them, so do so.

This completes the project begun in commit 2f70fdb06.  Note that in
the core types, the relation to the preferred operator names was
reversed from what it is in these contrib modules.  The confusion
that induced was a large part of the reason for deprecation.

Justin Pryzby

Discussion: https://postgr.es/m/20201027032511.GF9241@telsasoft.com
2021-03-05 10:45:41 -05:00
Amit Kapila 19890a064e Add option to enable two_phase commits via pg_create_logical_replication_slot.
Commit 0aa8a01d04 extends the output plugin API to allow decoding of
prepared xacts and allowed the user to enable/disable the two-phase option
via pg_logical_slot_get_changes(). This can lead to a problem: if changes
are first fetched via pg_logical_slot_get_changes() without the two_phase
option enabled, the prepared transaction will not be returned even though
the prepare happened after the consistent snapshot. The next time changes
are fetched with the two_phase option enabled, the prepare can be skipped
because the start decoding point has moved by then, so the user will only
get the commit prepared.

Allow this option to be enabled/disabled at slot creation time, with a
default of false. This will break existing slots, which is fine in a major
release.

Author: Ajin Cherian
Reviewed-by: Amit Kapila and Vignesh C
Discussion: https://postgr.es/m/d0f60d60-133d-bf8d-bd70-47784d8fabf3@enterprisedb.com
2021-03-03 07:34:11 +05:30
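A minimal sketch, assuming the new flag is the fourth (positional) argument of pg_create_logical_replication_slot:

    SELECT pg_create_logical_replication_slot('twophase_slot',
                                               'test_decoding',
                                               false,  -- temporary
                                               true);  -- enable two-phase decoding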
Amit Kapila 8bdb1332eb Avoid repeated decoding of prepared transactions after a restart.
In commit a271a1b50e, we allowed decoding at prepare time and the prepare
was decoded again if there is a restart after decoding it. It was done
that way because we can't distinguish between the cases where we have not
decoded the prepare because it was prior to consistent snapshot or we have
decoded it earlier but restarted. To distinguish between these two cases,
we have introduced an initial_consistent_point at the slot level which is
an LSN at which we found a consistent point at the time of slot creation.
This is also the point where we have exported a snapshot for the initial
copy. So, prepared transactions prior to this point are sent along with
the commit prepared.

This commit bumps SNAPBUILD_VERSION because of a change in SnapBuild. It
will break existing slots, which is fine in a major release.

Author: Ajin Cherian, based on idea by Andres Freund
Reviewed-by: Amit Kapila and Vignesh C
Discussion: https://postgr.es/m/d0f60d60-133d-bf8d-bd70-47784d8fabf3@enterprisedb.com
2021-03-01 09:11:18 +05:30
Noah Misch 388b959315 Raise a timeout to 180s, in contrib/test_decoding.
Per buildfarm member hornet.  The test is new in v14, so no back-patch.
2021-02-27 07:02:56 -08:00
Peter Geoghegan e5d8a99903 Use full 64-bit XIDs in deleted nbtree pages.
Otherwise we risk "leaking" deleted pages by making them non-recyclable
indefinitely.  Commit 6655a729 did the same thing for deleted pages in
GiST indexes.  That work was used as a starting point here.

Stop storing an XID indicating the oldest bpto.xact across all deleted
though unrecycled pages in nbtree metapages.  There is no longer any
reason to care about that condition/the oldest XID.  It only ever made
sense when wraparound was something _bt_vacuum_needs_cleanup() had to
consider.

The btm_oldest_btpo_xact metapage field has been repurposed and renamed.
It is now btm_last_cleanup_num_delpages, which is used to remember how
many non-recycled deleted pages remain from the last VACUUM (in practice
its value is usually the precise number of pages that were _newly
deleted_ during the specific VACUUM operation that last set the field).

The general idea behind storing btm_last_cleanup_num_delpages is to use
it to give _some_ consideration to non-recycled deleted pages inside
_bt_vacuum_needs_cleanup() -- though never too much.  We only really
need to avoid leaving a truly excessive number of deleted pages in an
unrecycled state forever.  We only do this to cover certain narrow cases
where no other factor makes VACUUM do a full scan, and yet the index
continues to grow (and so actually misses out on recycling existing
deleted pages).

These metapage changes result in a clear user-visible benefit: We no
longer trigger full index scans during VACUUM operations solely due to
the presence of only 1 or 2 known deleted (though unrecycled) blocks
from a very large index.  All that matters now is keeping the costs and
benefits in balance over time.

Fix an issue that has been around since commit 857f9c36, which added the
"skip full scan of index" mechanism (i.e. the _bt_vacuum_needs_cleanup()
logic).  The accuracy of btm_last_cleanup_num_heap_tuples accidentally
hinged upon _when_ the source value gets stored.  We now always store
btm_last_cleanup_num_heap_tuples in btvacuumcleanup().  This fixes the
issue because IndexVacuumInfo.num_heap_tuples (the source field) is
expected to accurately indicate the state of the table _after_ the
VACUUM completes inside btvacuumcleanup().

A backpatchable fix cannot easily be extracted from this commit.  A
targeted fix for the issue will follow in a later commit, though that
won't happen today.

I (pgeoghegan) have chosen to remove any mention of deleted pages in the
documentation of the vacuum_cleanup_index_scale_factor GUC/param, since
the presence of deleted (though unrecycled) pages is no longer of much
concern to users.  The vacuum_cleanup_index_scale_factor description in
the docs now seems rather unclear in any case, and it should probably be
rewritten in the near future.  Perhaps some passing mention of page
deletion will be added back at the same time.

Bump XLOG_PAGE_MAGIC due to nbtree WAL records using full XIDs now.

Author: Peter Geoghegan <pg@bowt.ie>
Reviewed-By: Masahiko Sawada <sawada.mshk@gmail.com>
Discussion: https://postgr.es/m/CAH2-WznpdHvujGUwYZ8sihX=d5u-tRYhi-F4wnV2uN2zHpMUXw@mail.gmail.com
2021-02-24 18:41:34 -08:00
Michael Paquier bcf2667bf6 Fix some typos, grammar and style in docs and comments
The portions fixing the documentation are backpatched where needed.

Author: Justin Pryzby
Discussion: https://postgr.es/m/20210210235557.GQ20012@telsasoft.com
backpatch-through: 9.6
2021-02-24 16:13:17 +09:00
Alvaro Herrera 8deb6b38dc Reinstate HEAP_XMAX_LOCK_ONLY|HEAP_KEYS_UPDATED as allowed
Commit 866e24d47d added an assert that HEAP_XMAX_LOCK_ONLY and
HEAP_KEYS_UPDATED cannot appear together, on the faulty assumption that
the latter necessarily referred to an update and not a tuple lock; but
that's wrong, because SELECT FOR UPDATE can use precisely that
combination, as evidenced by the amcheck test case added here.

Remove the Assert(), and also patch amcheck's verify_heapam.c to not
complain if the combination is found.  Also, out of overabundance of
caution, update (across all branches) README.tuplock to be more explicit
about this.

Author: Julien Rouhaud <rjuju123@gmail.com>
Reviewed-by: Mahendra Singh Thalor <mahi6run@gmail.com>
Reviewed-by: Dilip Kumar <dilipbalaut@gmail.com>
Discussion: https://postgr.es/m/20210124061758.GA11756@nol
2021-02-23 17:30:21 -03:00
Peter Eisentraut 6f6f284c7e Simplify printing of LSNs
Add a macro LSN_FORMAT_ARGS for use in printf-style printing of LSNs.
Convert all applicable code to use it.

Reviewed-by: Ashutosh Bapat <ashutosh.bapat@enterprisedb.com>
Reviewed-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://www.postgresql.org/message-id/flat/CAExHW5ub5NaTELZ3hJUCE6amuvqAtsSxc7O+uK7y4t9Rrk23cw@mail.gmail.com
2021-02-23 10:27:02 +01:00
Tom Lane 6ee479abfc Fix invalid array access in trgm_regexp.c.
Brown-paper-bag bug in 08c0d6ad6: I missed one place that needed
to guard against RAINBOW arc colors.  Remarkably, nothing noticed
the invalid array access except buildfarm member thorntail.

Thanks to Noah Misch for assistance with tracking this down.
2021-02-21 19:46:46 -05:00
Tom Lane 08c0d6ad65 Invent "rainbow" arcs within the regex engine.
Some regular expression constructs, most notably the "." match-anything
metacharacter, produce a sheaf of parallel NFA arcs covering all
possible colors (that is, character equivalence classes).  We can make
a noticeable improvement in the space and time needed to process large
regexes by replacing such cases with a single arc bearing the special
color code "RAINBOW".  This requires only minor additional complication
in places such as pull() and push().

Callers of pg_reg_getoutarcs() must now be prepared for the possibility
of seeing a RAINBOW arc.  For the one known user, contrib/pg_trgm,
that's a net benefit since it cuts the number of arcs to be dealt with,
and the handling isn't any different than for other colors that contain
too many characters to be dealt with individually.

This is part of a patch series that in total reduces the regex engine's
runtime by about a factor of four on a large corpus of real-world regexes.

Patch by me, reviewed by Joel Jacobson

Discussion: https://postgr.es/m/1340281.1613018383@sss.pgh.pa.us
2021-02-20 18:11:56 -05:00
Peter Eisentraut f5465fade9 Allow specifying CRL directory
Add another method to specify CRLs, hashed directory method, for both
server and client side.  This offers a means for server or libpq to
load only CRLs that are required to verify a certificate.  The CRL
directory is specified by separate GUC variables or connection options
ssl_crl_dir and sslcrldir, alongside the existing ssl_crl_file and
sslcrl, so both methods can be used at the same time.

Author: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/20200731.173911.904649928639357911.horikyota.ntt@gmail.com
2021-02-18 07:59:10 +01:00
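A sketch of pointing the server at a hashed CRL directory (the path is illustrative); the client-side counterpart is the sslcrldir connection option:

    ALTER SYSTEM SET ssl_crl_dir = '/etc/postgresql/crl';
    SELECT pg_reload_conf();
    -- Client side, e.g.:
    --   "host=db.example.com sslmode=verify-full sslcrldir=/etc/postgresql/crl"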
Tomas Vondra 927f453a94 Fix tuple routing to initialize batching only for inserts
A cross-partition update on a partitioned table is implemented as a
delete followed by an insert. With foreign partitions, this was however
causing issues, because the FDW and core may disagree on when to enable
batching.  postgres_fdw was only allowing batching for plain inserts
(CMD_INSERT) while core was trying to batch the insert component of the
cross-partition update.  Fix by restricting core to apply batching only
to plain CMD_INSERT queries.

It's possible to allow batching for cross-partition updates, but that
will require more extensive changes, so better to leave that for a
separate patch.

Author: Amit Langote
Reviewed-by: Tomas Vondra, Takayuki Tsunakawa
Discussion: https://postgr.es/m/20200628151002.7x5laxwpgvkyiu3q@development
2021-02-18 00:03:45 +01:00
Peter Eisentraut 0e392fcc0d Use errmsg_internal for debug messages
An inconsistent set of debug-level messages was not using
errmsg_internal(), thus uselessly exposing the messages to translation
work.  Fix those.
2021-02-17 11:33:25 +01:00