Commit Graph

6355 Commits

Author SHA1 Message Date
Fujii Masao 74527c3e02 Add tab-completion for CREATE FOREIGN TABLE.
Unlike CREATE TABLE, CREATE FOREIGN TABLE is not allowed inside
CREATE SCHEMA, so Matches() is used instead of TailMatches() for
the tab-completion.

Author: Tang <tanghy.fnst@fujitsu.com>
Reviewed-by: Fujii Masao
Discussion: https://postgr.es/m/OS0PR01MB61137E96E0551278782D11CDFB519@OS0PR01MB6113.jpnprd01.prod.outlook.com
2022-01-15 11:22:24 +09:00
Tom Lane 98e93a1fc9 Clean up messy API for src/port/thread.c.
The point of this patch is to reduce inclusion spam by not needing
to #include <netdb.h> or <pwd.h> in port.h (which is read by every
compile in our tree).  To do that, we must remove port.h's
declarations of pqGetpwuid and pqGethostbyname.

pqGethostbyname is only used, and is only ever likely to be used,
in src/port/getaddrinfo.c --- which isn't even built on most
platforms, making pqGethostbyname dead code for most people.
Hence, deal with that by just moving it into getaddrinfo.c.

To clean up pqGetpwuid, invent a couple of simple wrapper
functions with less-messy APIs.  This allows removing some
duplicate error-handling code, too.

In passing, remove thread.c from the MSVC build, since it
contains nothing we use on Windows.

Noted while working on 376ce3e40.

Discussion: https://postgr.es/m/1634252654444.90107@mit.edu
2022-01-11 13:46:20 -05:00
Thomas Munro f3e78069db Make EXEC_BACKEND more convenient on Linux and FreeBSD.
Try to disable ASLR when building in EXEC_BACKEND mode, to avoid random
memory mapping failures while testing.  For developer use only, no
effect on regular builds.

Suggested-by: Andres Freund <andres@anarazel.de>
Tested-by: Bossart, Nathan <bossartn@amazon.com>
Discussion: https://postgr.es/m/20210806032944.m4tz7j2w47mant26%40alap3.anarazel.de
2022-01-11 00:04:33 +13:00
Tom Lane 376ce3e404 Prefer $HOME when looking up the current user's home directory.
When we need to identify the home directory on non-Windows, first
consult getenv("HOME").  If that's empty or unset, fall back
on our previous method of checking the <pwd.h> database.

Preferring $HOME allows the user to intentionally point at some
other directory, and it seems to be in line with the behavior of
most other utilities.  However, we shouldn't rely on it completely,
as $HOME is likely to be unset when running as a daemon.

Anders Kaseorg

Discussion: https://postgr.es/m/1634252654444.90107@mit.edu
2022-01-09 19:19:02 -05:00
Michael Paquier fe594abf7c Fix issues with describe queries of extended statistics in psql
This addresses some problems in the describe queries used for extended
statistics:
- Two schema qualifications for the text type were missing for \dX.
- The list of extended statistics listed for a table through \d was
ordered based on the object OIDs, but it is more consistent with the
other commands to order by namespace and then by object name.
- A couple of aliases were not used in \d.  These are removed.

This is similar to commits 1f092a3 and 07f8a9e.
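
For instance, the per-table list could be produced by a query of roughly
this shape (a sketch, not the exact SQL that \d issues; the table name is
hypothetical):

    SELECT es.stxnamespace::pg_catalog.regnamespace::pg_catalog.text AS schema,
           es.stxname AS name
    FROM pg_catalog.pg_statistic_ext es
    WHERE es.stxrelid = 'some_table'::pg_catalog.regclass
    ORDER BY 1, 2;  -- namespace first, then object name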

Author: Justin Pryzby
Discussion: https://postgr.es/m/20220107022235.GA14051@telsasoft.com
Backpatch-through: 14
2022-01-08 16:44:45 +09:00
Michael Paquier d0d62262d3 Fix thinko coming from 000f3adf
pg_basebackup.c relies on the compression level to not be 0 to decide if
compression should be used, but 000f3adf missed the fact that the
default compression (Z_DEFAULT_COMPRESSION) is -1, which would be used
if specifying --gzip without --compress.

While at it, add some coverage for --gzip, as this is rather easy to
miss.

Reported-by: Christoph Berg
Discussion: https://postgr.es/m/YdhRDMLjabtXOnhY@msg.df7cb.de
2022-01-08 09:12:21 +09:00
Bruce Momjian 27b77ecf9f Update copyright for 2022
Backpatch-through: 10
2022-01-07 19:04:57 -05:00
Michael Paquier 50e144193c Add TAP tests for pg_basebackup with compression
pg_basebackup is able to use gzip to compress the contents of backups
with the tar format, but there were no tests for that.  This adds a
minimalistic set of tests to check the contents of such base backups,
including sanity checks on the contents generated with gzip commands.
The tests are skipped if Postgres is compiled --without-zlib, and the
checks based on the gzip command are skipped if nothing can be found,
following the same model as the existing tests for pg_receivewal.

Reviewed-by: Georgios Kokolatos
Discussion: https://postgr.es/m/Yb3GEgWwcu4wZDuA@paquier.xyz
2022-01-07 14:13:35 +09:00
Michael Paquier 000f3adfdc Refactor tar method of walmethods.c to rely on the compression method
Since d62bcc8, the directory method of walmethods.c uses the compression
method to determine which code path to take.  The tar method, used by
pg_basebackup --format=t, was inconsistent regarding that, as it relied
on the compression level to check if no compression or gzip should be
used.  This commit makes the code more consistent as a whole in this
file, making the tar logic use a compression method rather than
assigning COMPRESSION_NONE that would be ignored.

The options of pg_basebackup are planned to be reworked, but we are not
yet sure of the shape they should take, as this depends in part on the
integration of server-side compression for base backups, so this is left
out for the moment.  This change has the benefit of making it easier to
integrate new compression methods in the future for the tar method of
walmethods.c, for client-side compression.

Reviewed-by: Georgios Kokolatos
Discussion: https://postgr.es/m/Yb3GEgWwcu4wZDuA@paquier.xyz
2022-01-07 13:48:59 +09:00
Tom Lane 328dfbdabd Extend psql's \lo_list/\dl to be able to print large objects' ACLs.
The ACL is printed when you add + to the command, similarly to
various other psql backslash commands.

Along the way, move the code for this into describe.c,
where it is a better fit (and can share some code).

Pavel Luzanov, reviewed by Georgios Kokolatos

Discussion: https://postgr.es/m/6d722115-6297-bc53-bb7f-5f150e765299@postgrespro.ru
2022-01-06 13:09:05 -05:00
Alvaro Herrera f4566345cf
Create foreign key triggers in partitioned tables too
While user-defined triggers defined on a partitioned table have
a catalog definition for both it and its partitions, internal
triggers used by foreign keys defined on partitioned tables only
have a catalog definition for their partitions.  This commit fixes
that so that partitioned tables get the foreign key triggers too,
just like user-defined triggers.  Moreover, like user-defined
triggers, partitions' internal triggers will now also have their
tgparentid set appropriately.  This is to allow subsequent commit(s)
to make foreign-key-related events fire, in some cases, using the
parent table's triggers instead of the partitions' own.

This also changes what tgisinternal means in some cases.  Currently,
it means either that the trigger is an internal implementation object
of a foreign key constraint, or a "child" trigger on a partition
cloned from the trigger on the parent.  This commit changes it to
mean only the former, to avoid confusion.  As for the latter, it can
be identified by tgparentid being nonzero, which is now true both for
user-defined triggers and for foreign keys' internal triggers.
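
For illustration, a catalog query of this shape shows both flags (the
partition name is hypothetical):

    -- user-defined and foreign-key internal triggers cloned onto a
    -- partition now both carry a nonzero tgparentid
    SELECT tgname, tgisinternal, tgparentid
    FROM pg_catalog.pg_trigger
    WHERE tgrelid = 'measurement_y2021'::pg_catalog.regclass;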

Author: Amit Langote <amitlangote09@gmail.com>
Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
Reviewed-by: Arne Roland <A.Roland@index.de>
Discussion: https://postgr.es/m/CA+HiwqG7LQSK+n8Bki8tWv7piHD=PnZro2y6ysU2-28JS6cfgQ@mail.gmail.com
2022-01-05 19:00:13 -03:00
Peter Eisentraut 56a3e848c7 pg_dump: Refactor dumpDatabase()
Rearrange the version-dependent pieces in the new more modular style.
2022-01-04 16:19:48 +01:00
Tom Lane dfe67c0e85 Tab completion: don't offer valid constraints in VALIDATE CONSTRAINT.
Improve psql so that "ALTER TABLE foo VALIDATE CONSTRAINT <TAB>"
only offers not-convalidated entries.  While it's not formally
wrong to offer validated ones, there's not much point either,
and it can save some typing if we incorporate this knowledge.
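
For reference, the set of candidates corresponds to a query of roughly
this shape (a sketch with a hypothetical table name, not necessarily the
exact SQL psql issues):

    SELECT conname
    FROM pg_catalog.pg_constraint
    WHERE conrelid = 'foo'::pg_catalog.regclass  -- table named in the command
      AND NOT convalidated;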

David Fetter, reviewed by Aleksander Alekseev

Discussion: https://postgr.es/m/20210427002433.GB17834@fetter.org
2022-01-03 18:14:01 -05:00
Tom Lane 3e6e86abca pg_dump: avoid unsafe function calls in getPolicies().
getPolicies() had the same disease I fixed in other places in
commit e3fcbbd62, i.e., it was calling pg_get_expr() for
expressions on tables that we don't necessarily have lock on.
To fix, restrict the query to only collect interesting rows,
rather than doing the filtering on the client side.

Like the previous patch, apply to HEAD only for now.

Discussion: https://postgr.es/m/2273648.1634764485@sss.pgh.pa.us
Discussion: https://postgr.es/m/7d7eb6128f40401d81b3b7a898b6b4de@W2012-02.nidsa.loc
2021-12-31 12:47:57 -05:00
Tom Lane d5e8930f50 pg_dump: minor performance improvements from eliminating sub-SELECTs.
Get rid of the "username_subquery" mechanism in favor of doing
local lookups of role names from role OIDs.  The PG backend isn't
terribly smart about scalar SubLinks in SELECT output lists,
so this offers a small performance improvement, at least in
installations with more than a couple of users.  In any case
the old method didn't make for particularly readable SQL code.
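
Schematically, the change replaces a per-row scalar sub-SELECT with a
plain column fetch plus a client-side lookup (both queries below are
illustrative sketches, not pg_dump's actual SQL):

    -- before: owner name resolved by the server, once per output row
    SELECT c.oid, c.relname,
           (SELECT rolname FROM pg_catalog.pg_roles r
            WHERE r.oid = c.relowner) AS owner
    FROM pg_catalog.pg_class c;

    -- after: fetch the bare OID; map it to a name from a single
    -- up-front read of pg_roles on the client side
    SELECT c.oid, c.relname, c.relowner
    FROM pg_catalog.pg_class c;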

While at it, I removed the various custom warning messages about
failing to find an object's owner, in favor of just fatal'ing
in the local lookup function.  AFAIK there is no reason any
longer to treat that as anything but a catalog-corruption case,
and certainly no reason to make translators deal with a dozen
different messages where one would do.  (If it turns out that
fatal() is indeed a bad idea, we can back off to issuing
pg_log_warning() and returning an empty string, resulting in
the same behavior as before, except more consistent.)

Also drop an entirely unnecessary sub-SELECT to check on the
pg_depend status of a sequence relation: we already have a
LEFT JOIN to fetch the row of interest in the FROM clause.

Discussion: https://postgr.es/m/2460369.1640903318@sss.pgh.pa.us
2021-12-31 11:39:26 -05:00
Tom Lane 5e65df64d6 pg_dump: make dumpPublication et al. less unlike sibling functions.
dumpPublication, dumpPublicationNamespace, dumpPublicationTable, and
dumpSubscription failed to check dataOnly.  This is just a latent bug,
because pg_backup_archiver.c would filter out the ArchiveEntry later;
but they're wasting cycles in data-only dumps, and the omission might
become a live bug someday.  In any case, it's not good to have some
dumpFoo functions do this and some not.

On the same reasoning, make dumpPublicationNamespace follow the
same pattern as every other dumpFoo function for checking the
DUMP_COMPONENT_DEFINITION flag.  (Since 5209c0ba0, we wouldn't
even get here if that flag isn't set, so checking it is just
pro forma right now.  But it might not be so forever.)

Since this is just cosmetic and/or future-proofing, no need for
back-patch.
2021-12-30 19:40:53 -05:00
Tom Lane c7cf73eb7b Minor cleanup/optimization in pg_dump.
In the wake of commits 05649b88c and 5209c0ba0, findComments() and
findSecLabels() no longer use their "Archive *fout" arguments,
so get rid of those.

While doing that, I noticed that there's no very good reason why
dumpCompositeTypeColComments() should be doing its own query to fetch
the column names of the composite type, when the calling function has
just fetched the same data.  Tweak it to use that query result.  This
probably doesn't save a lot for most people, because since 5209c0ba0
we won't get into this code at all unless the composite type has at
least one comment.  Nonetheless, it's a wasted query.
2021-12-30 14:29:32 -05:00
Peter Eisentraut 113fa3945f Fix incorrect format placeholders 2021-12-29 10:08:41 +01:00
Tom Lane 0f2abd0544 Add help & tab-complete support for psql's \getenv.
I forgot about these details in 33d3eeadb :-(.
Noted by Christoph Berg.

Discussion: https://postgr.es/m/YcI8i/mduMi91uXY@msg.df7cb.de
2021-12-21 16:18:41 -05:00
Tom Lane 33d3eeadb2 Add a \getenv command to psql.
\getenv fetches the value of an environment variable into a psql
variable.  This is the inverse of the \setenv command that was added
over ten years ago.  We'd not seen a compelling use-case for \getenv
at the time, but upcoming regression test refactoring provides a
sufficient reason to add it now.
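
Usage is symmetric with \setenv; a minimal psql session might look like
this (the environment variable chosen here is just an example):

    \getenv home HOME
    \echo :home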

Discussion: https://postgr.es/m/1655733.1639871614@sss.pgh.pa.us
2021-12-20 13:17:58 -05:00
Peter Eisentraut e2c52beecd pg_dump: Refactor getIndexes()
Rearrange the version-dependent pieces in the new more modular style.

Discussion: https://www.postgresql.org/message-id/flat/67a28a3f-7b79-a5a9-fcc7-947b170e66f0%40enterprisedb.com
2021-12-20 11:18:01 +01:00
Tom Lane b1c169caf0 Remove some more dead code in pg_dump.
Coverity complained that parts of dumpFunc() and buildACLCommands()
were now unreachable, as indeed they are.  Remove 'em.

In passing, make dumpFunc's handling of protrftypes less gratuitously
different from other fields.
2021-12-19 17:18:34 -05:00
Michael Paquier 22592e10b4 Fix typo in TAP tests of pg_receivewal
Introduced in d62bcc8, noticed while hacking in the area.
2021-12-18 16:30:01 +09:00
Michael Paquier 3d5ffccb6d Add option -N/--no-sync to pg_upgrade
This is an option consistent with what the other tools of src/bin/
(pg_checksums, pg_dump, pg_rewind and pg_basebackup) provide, which is
useful for limiting the I/O effort when testing things.  This is not
to be used in a production environment.

All the regression tests of pg_upgrade are updated to use this new
option.  This happens to cut at most a couple of seconds in environments
constrained on I/O, by avoiding a flush of data folder for the new
cluster upgraded.

Author: Michael Paquier
Reviewed-by: Peter Eisentraut
Discussion: https://postgr.es/m/YbrhzuBmBxS/DkfX@paquier.xyz
2021-12-18 16:18:45 +09:00
Tom Lane cf0cab868a Remove psql support for server versions preceding 9.2.
Per discussion, we'll limit support for old servers to those branches
that can still be built easily on modern platforms, which as of now
is 9.2 and up.

Aside from removing code that is dead per the assumption of
server >= 9.2, I tweaked the startup warning for unsupported versions
to complain about too-old servers as well as too-new ones.  The
warning that "Some psql features might not work" applies precisely
to both cases.

Discussion: https://postgr.es/m/2923349.1634942313@sss.pgh.pa.us
2021-12-16 14:02:28 -05:00
Tom Lane c49d926833 Clean up some more freshly-dead code in pg_dump and pg_upgrade.
I missed a few things in 30e7c175b and e469f0aaf,
as noted by Justin Pryzby.

Discussion: https://postgr.es/m/2923349.1634942313@sss.pgh.pa.us
2021-12-16 12:01:59 -05:00
Tom Lane 2a712066d0 Remove pg_dump's --no-synchronized-snapshots switch.
Server versions for which there was a plausible reason to
use this switch are all out of support now.  Leaving it
around would accomplish little except to let careless DBAs
shoot themselves in the foot.

Discussion: https://postgr.es/m/556122.1639520324@sss.pgh.pa.us
2021-12-15 18:44:47 -05:00
Peter Eisentraut bf9a55c107 pg_checksums: Fix data type
Segment numbers should be int, not BlockNumber (see also buffile.c).
Likely no harm, but better for consistency.
2021-12-15 10:29:27 +01:00
Tom Lane e469f0aaf3 Remove pg_upgrade support for upgrading from pre-9.2 servers.
Per discussion, we'll limit support for old servers to those branches
that can still be built easily on modern platforms, which as of now
is 9.2 and up.

Discussion: https://postgr.es/m/2923349.1634942313@sss.pgh.pa.us
2021-12-14 19:17:55 -05:00
Tom Lane 30e7c175b8 Remove pg_dump/pg_dumpall support for dumping from pre-9.2 servers.
Per discussion, we'll limit support for old servers to those branches
that can still be built easily on modern platforms, which as of now
is 9.2 and up.  Remove over a thousand lines of code dedicated to
dumping from older server versions.  (As in previous changes of
this sort, we aren't removing pg_restore's ability to read older
archive files ... though it's fair to wonder how that might be
tested nowadays.)  This cleans up some dead code left behind by
commit 989596152.

Discussion: https://postgr.es/m/2923349.1634942313@sss.pgh.pa.us
2021-12-14 17:09:07 -05:00
Andres Freund 45f52709d8 Make PG_TEST_USE_UNIX_SOCKETS work for tap tests on windows.
We need to replace Windows-style \ path separators with / when putting socket
directories either in postgresql.conf or libpq connection strings, otherwise
they are interpreted as escapes.

Author: Andres Freund <andres@anarazel.de>
Reviewed-By: Peter Eisentraut <peter.eisentraut@enterprisedb.com>
Discussion: https://postgr.es/m/4da250a5-4222-1522-f14d-8a72bcf7e38e@enterprisedb.com
2021-12-13 11:29:51 -08:00
Robert Haas 64da07c41a Default to log_checkpoints=on, log_autovacuum_min_duration=10m
The idea here is that when a performance problem is known to have
occurred at a certain point in time, it's a good thing if there is
some information available from the logs to help figure out what
might have happened around that time.

This change attracted an above-average amount of dissent, because
it means that a server with default settings will produce some amount
of log output even if nothing has gone wrong. However, by my count,
the mailing list discussion had about twice as many people in favor
of the change as opposed. The reasons for believing that the extra
log output is not an issue in practice are: (1) the rate at which
messages can be generated by this setting is bounded to one every
few minutes on a properly-configured system and (2) production
systems tend to have a lot more junk in the log than that, due to
failed connection attempts, ERROR messages generated by application
activity, and the like.

Bharath Rupireddy, reviewed by Fujii Masao and by me. Many other
people commented on the thread, but as far as I can see that was
discussion of the merits of the change rather than review of the
patch.

Discussion: https://postgr.es/m/CALj2ACX-rW_OeDcp4gqrFUAkf1f50Fnh138dmkd0JkvCNQRKGA@mail.gmail.com
2021-12-13 09:48:48 -05:00
Tom Lane 07eee5a0dc Create a new type category for "internal use" types.
Historically we've put type "char" into the S (String) typcategory,
although calling it a string is a stretch considering it can only
store one byte.  (In our actual usage, it's more like an enum.)
This choice now seems wrong in view of the special heuristics
that parse_func.c and parse_coerce.c have for TYPCATEGORY_STRING:
it's not a great idea for "char" to have those preferential casting
behaviors.

Worse than that, recent patches inventing special-purpose types
like pg_node_tree have assigned typcategory S to those types,
meaning they also get preferential casting treatment that's designed
on the assumption that they can hold arbitrary text.

To fix, invent a new category TYPCATEGORY_INTERNAL for internal-use
types, and assign that to all these types.  I used code 'Z' for
lack of a better idea ('I' was already taken).
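
The effect is visible in the catalog; a query like the following (type
names shown are examples from this commit) now reports category 'Z'
rather than 'S':

    SELECT typname, typcategory
    FROM pg_catalog.pg_type
    WHERE typname IN ('char', 'pg_node_tree');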

This change breaks one query in psql/describe.c, which now needs to
explicitly cast a catalog "char" column to text before concatenating
it with an undecorated literal.  Also, a test case in contrib/citext
now needs an explicit cast to convert citext to "char".  Since the
point of this change is to not have "char" be a surprisingly-available
cast target, these breakages seem OK.

Per report from Ian Campbell.

Discussion: https://postgr.es/m/2216388.1638480141@sss.pgh.pa.us
2021-12-11 14:10:51 -05:00
Andrew Dunstan 745b99c644
Revert "Check that we have a working tar before trying to use it"
This reverts commit f920f7e799.

The patch in effect fixed a problem we didn't have and caused another
instead.

Backpatch to release 14 like original

Discussion: https://postgr.es/m/3655283.1638977975@sss.pgh.pa.us
2021-12-08 16:45:39 -05:00
Andrew Dunstan f920f7e799
Check that we have a working tar before trying to use it
Issue exposed by commit edc2332550 and the buildfarm.

Backpatch to release 14 where this usage started.
2021-12-08 10:24:07 -05:00
Tom Lane 65aaed22a8 Account for TOAST data while scheduling parallel dumps.
In parallel mode, pg_dump tries to order the table-data-dumping
jobs with the largest tables first.  However, it was only
consulting the pg_class.relpages value to determine table size.
This ignores TOAST data, and so we could make poor scheduling
decisions in cases where some large tables are mostly TOASTed
data while others have very little.  To fix, add in the relpages
value for the TOAST table as well.

This patch also fixes a potential integer-overflow issue that
could result in poor scheduling on machines where off_t is
only 32 bits wide.  Such platforms are probably extinct in the
wild, but we do still nominally support them, so repair.

Per complaint from Hans Buschmann.

Discussion: https://postgr.es/m/7d7eb6128f40401d81b3b7a898b6b4de@W2012-02.nidsa.loc
2021-12-06 13:23:07 -05:00
Tom Lane be85727a3d Use PREPARE/EXECUTE for repetitive per-object queries in pg_dump.
For objects such as functions, pg_dump issues the same secondary
data-collection query against each object to be dumped.  This can't
readily be refactored to avoid the repetitive queries, but we can
PREPARE these queries to reduce planning costs.

This patch applies the idea to functions, aggregates, operators, and
data types.  While it could be carried further, the remaining sorts of
objects aren't likely to appear in typical databases enough times to
be worth worrying over.  Moreover, doing the PREPARE is likely to be a
net loss if there aren't at least some dozens of objects to apply the
prepared query to.
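
The pattern is the standard PREPARE/EXECUTE one; schematically
(statement name, columns and OID below are illustrative only, not
pg_dump's actual query):

    PREPARE getFuncInfo(pg_catalog.oid) AS
      SELECT proname, pronargs, prorettype
      FROM pg_catalog.pg_proc
      WHERE oid = $1;

    EXECUTE getFuncInfo('16384');  -- repeated once per function to be dumped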

Discussion: https://postgr.es/m/7d7eb6128f40401d81b3b7a898b6b4de@W2012-02.nidsa.loc
2021-12-06 13:14:29 -05:00
Tom Lane 9895961529 Avoid per-object queries in performance-critical paths in pg_dump.
Instead of issuing a secondary data-collection query against each
table to be dumped, issue just one query, with a WHERE clause
restricting it to be applied to only the tables we intend to dump.
Likewise for indexes, constraints, and triggers.  This greatly
reduces the number of queries needed to dump a database containing
many tables.  It might seem that WHERE clauses listing many target
OIDs could be inefficient, but at least on recent server versions
this provides a very substantial speedup.
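
The shape of the restriction is roughly the following (a sketch with
made-up OIDs, not the exact query):

    SELECT c.oid, c.relname
    FROM pg_catalog.pg_class c
    JOIN unnest('{16384,16391}'::pg_catalog.oid[]) AS src(tbloid)
      ON c.oid = src.tbloid;  -- only the tables selected for dumping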

(In principle the same thing could be done with other object types
such as functions; but that would require significant refactoring
of pg_dump, so those will be tackled in a different way in a
following patch.)

The new WHERE clauses depend on the unnest() function, which is
only present in 8.4 and above.  We could implement them differently
for older servers, but there is an ongoing discussion that will
probably result in dropping pg_dump support for servers before 9.2,
so that seems like it'd be wasted work.  For now, just bump the
server version check to require >= 8.4, without stopping to remove
any of the code that's thereby rendered dead.  We'll mop that
situation up soon.

Patch by me, based on an idea from Andres Freund.

Discussion: https://postgr.es/m/7d7eb6128f40401d81b3b7a898b6b4de@W2012-02.nidsa.loc
2021-12-06 13:07:31 -05:00
Tom Lane e3fcbbd623 Postpone calls of unsafe server-side functions in pg_dump.
Avoid calling pg_get_partkeydef(), pg_get_expr(relpartbound),
and regtypeout until we have lock on the relevant tables.
The existing coding is at serious risk of failure if there
are any concurrent DROP TABLE commands going on --- including
drops of other sessions' temp tables.

Arguably this is a bug fix that should be back-patched, but it's
moderately invasive and we've not had all that many complaints
about such failures.  Let's just put it in HEAD for now.

Discussion: https://postgr.es/m/2273648.1634764485@sss.pgh.pa.us
Discussion: https://postgr.es/m/7d7eb6128f40401d81b3b7a898b6b4de@W2012-02.nidsa.loc
2021-12-06 12:49:49 -05:00
Tom Lane 0c9d84427f Rethink pg_dump's handling of object ACLs.
Throw away most of the existing logic for this, as it was very
inefficient thanks to expensive sub-selects executed to collect
ACL data that we very possibly would have no interest in dumping.
Reduce the ACL handling in the initial per-object-type queries
to be just collection of the catalog ACL fields, as it was
originally.  Fetch pg_init_privs data separately in a single
scan of that catalog, and do the merging calculations on the
client side.  Remove the separate code path used for pre-9.6
source servers; there is no good reason to treat them differently
from newer servers that happen to have empty pg_init_privs.

Discussion: https://postgr.es/m/2273648.1634764485@sss.pgh.pa.us
Discussion: https://postgr.es/m/7d7eb6128f40401d81b3b7a898b6b4de@W2012-02.nidsa.loc
2021-12-06 12:39:45 -05:00
Tom Lane 5209c0ba0b Refactor pg_dump's tracking of object components to be dumped.
Split the DumpableObject.dump bitmask field into separate bitmasks
tracking which components are requested to be dumped (in the
existing "dump" field) and which components exist for the particular
object (in the new "components" field).  This gets rid of some
klugy and easily-broken logic that involved setting bits and later
clearing them.  More importantly, it restores the originally intended
behavior that pg_dump's secondary data-gathering queries should not
be executed for objects we have no interest in dumping.  That
optimization got broken when the dump flag was turned into a bitmask,
because irrelevant bits tended to remain set in many cases.  Since
the "components" field starts from a minimal set of bits and is
added onto as needed, ANDing it with "dump" provides a reliable
indicator of what we actually have to dump, without having to
complicate the logic that manages the request bits.  This makes
a significant difference in the number of queries needed when,
for example, there are many functions in extensions.

Discussion: https://postgr.es/m/2273648.1634764485@sss.pgh.pa.us
Discussion: https://postgr.es/m/7d7eb6128f40401d81b3b7a898b6b4de@W2012-02.nidsa.loc
2021-12-06 12:25:48 -05:00
Peter Eisentraut 37b2764593 Some RELKIND macro refactoring
Add more macros to group some RELKIND_* macros:

- RELKIND_HAS_PARTITIONS()
- RELKIND_HAS_TABLESPACE()
- RELKIND_HAS_TABLE_AM()

Reviewed-by: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Alvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://www.postgresql.org/message-id/flat/a574c8f1-9c84-93ad-a9e5-65233d6fc00f%40enterprisedb.com
2021-12-03 14:08:19 +01:00
Tom Lane a7da419810 Add configure probe for rl_variable_bind().
Some exceedingly ancient readline libraries lack this function, causing
commit 3d858af07 to fail.  Per buildfarm (via Michael Paquier).

Discussion: https://postgr.es/m/E1msTLm-0007Cm-Ri@gemulon.postgresql.org
2021-12-02 13:06:27 -05:00
Peter Eisentraut a22d6a2cb6 pg_dump: Add missing relkind case
Checking for RELKIND_MATVIEW was forgotten in
guessConstraintInheritance().  This isn't a live problem, since
flagInhTables() checks which relkinds can have parents, and those
entries will have numParents==0 after that.  But after discussion it
was felt that this place should be kept consistent with
flagInhTables() and flagInhAttrs().

Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://www.postgresql.org/message-id/flat/a574c8f1-9c84-93ad-a9e5-65233d6fc00f@enterprisedb.com
2021-12-02 16:46:28 +01:00
Michael Paquier f2c52eeba9 pg_waldump: Emit stats summary when interrupted by SIGINT
Previously, pg_waldump would not display its statistics summary if it
got interrupted by SIGINT (say, a simple Ctrl+C).  With this commit it
gains a signal handler for SIGINT, trapping the signal so as to exit at
the earliest convenience and display the stats summary before exiting.
This makes the reports more interactive, similarly to strace -c.

This new behavior makes the combination of the options --stats and
--follow much more useful, as the user will get a report for any
invocation of pg_waldump in such a case.  Information about the LSN
range of the stats computed is added as a header to the report
displayed.

This implementation comes from a suggestion by Álvaro Herrera and
myself, following a complaint by the author of this patch about --stats
and --follow not being useful together originally.

As documented, this is not supported on Windows, though its support
would be possible by catching the terminal events associated to Ctrl+C,
for example (this may require a more centralized implementation, as
other tools could benefit from a common API).

Author: Bharath Rupireddy
Discussion: https://postgr.es/m/CALj2ACUUx3PcK2z9h0_m7vehreZAUbcmOky9WSEpe8TofhV=PQ@mail.gmail.com
2021-12-02 13:52:16 +09:00
Michael Paquier 0df9641d39 Move into separate file all the SQL queries used in pg_upgrade tests
The existing pg_upgrade/test.sh and the buildfarm code have been holding
the same set of SQL queries when doing cross-version upgrade tests to
adapt the objects created by the regression tests before the upgrade
(mostly, incompatible or non-existing objects need to be dropped from
the origin, perhaps re-created).

This moves all those SQL queries into a new, separate, file with a set
of \if clauses to handle the version checks depending on the old version
of the cluster to-be-upgraded.

The long-term plan is to make the buildfarm code re-use this new SQL
file, so that committers are able to fix any compatibility issues in the
tests of pg_upgrade with a refresh of the core code, without having to
poke at the buildfarm client.  Note that this is only able to handle the
main regression test suite, and that nothing is done yet for contrib
modules (these have more issues, like their database names).

A backpatch down to 10 is done, adapting the version checks, as this
script only needs to be backward-compatible; this makes it possible
to clean up a maximum amount of code within the buildfarm client.

Author: Justin Pryzby, Michael Paquier
Discussion: https://postgr.es/m/20201206180248.GI24052@telsasoft.com
Backpatch-through: 10
2021-12-02 10:31:20 +09:00
Tom Lane 3d858af07e psql: initialize comment-begin setting to a useful value by default.
Readline's meta-# command is supposed to insert a comment marker
at the start of the current line.  However, the default marker is
"#" which is entirely unhelpful for SQL.  Set it to "-- " instead.
(This setting can still be overridden in one's ~/.inputrc file,
so this change won't affect people who have already taken steps
to make the command useful.)

Discussion: https://postgr.es/m/CAJcOf-cAdMVr7azeYR7nWKsNp7qhORzc84rV6d7m7knG5Hrtsw@mail.gmail.com
2021-12-01 12:24:50 -05:00
Tom Lane c2f654930e psql: treat "--" comments between queries as separate history entries.
If we've not yet collected any non-whitespace, non-comment token for a
new query, flush the current input line to history before reading
another line.  This aligns psql's history behavior with the observation
that lines containing only comments are generally not thought of as
being part of the next query.  psql's prompting behavior is consistent
with that view, too, since it won't change the prompt until you
enter something that's neither whitespace nor a "--" comment.

Greg Nancarrow, simplified a bit by me

Discussion: https://postgr.es/m/CAJcOf-cAdMVr7azeYR7nWKsNp7qhORzc84rV6d7m7knG5Hrtsw@mail.gmail.com
2021-12-01 12:18:25 -05:00
Michael Paquier 9270778f46 Improve psql tab completion for various DROP commands
The following improvements are done:
- Handling of RESTRICT/CASCADE for DROP OWNED, matviews and policies.
- Handling of DROP TRANSFORM

This is a continuation of the work done in 0cd6d3b and f44ceb4.
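
The statement shapes these completions target look like the following
(object names are hypothetical):

    DROP OWNED BY some_role CASCADE;
    DROP MATERIALIZED VIEW some_matview RESTRICT;
    DROP POLICY some_policy ON some_table CASCADE;
    DROP TRANSFORM FOR hstore LANGUAGE plpython3u;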

Author: Ken Kato
Reviewed-by: Asif Rehman
Discussion: https://postgr.es/m/0fafb73f3a0c6bcec817a25ca9d5a853@oss.nttdata.com
2021-12-01 10:50:51 +09:00
Michael Paquier 6fb7c5d67c Centralize timestamp computation of control file on updates
This commit moves the timestamp computation of the control file within
the routine of src/common/ in charge of updating the backend's control
file, which is shared by multiple frontend tools (pg_rewind,
pg_checksums and pg_resetwal) and the backend itself.

This change has the direct effect of updating the control file's
timestamp when writing the control file in pg_rewind and pg_checksums,
which is helpful to keep track of control file updates for those
operations, something the backend also logs at startup.  The previous
behavior is arguably a bug, as ControlFileData->time should be updated
each time a new version of the control file is written, but this is a
behavior change so no backpatch is done.

Author: Amul Sul
Reviewed-by: Nathan Bossart, Michael Paquier, Bharath Rupireddy
Discussion: https://postgr.es/m/CAAJ_b97nd_ghRpyFV9Djf9RLXkoTbOUqnocq11WGq9TisX09Fw@mail.gmail.com
2021-11-29 13:36:13 +09:00
Tom Lane 3804539e48 Replace random(), pg_erand48(), etc with a better PRNG API and algorithm.
Standardize on xoroshiro128** as our basic PRNG algorithm, eliminating
a bunch of platform dependencies as well as fundamentally-obsolete PRNG
code.  In addition, this API replacement will ease replacing the
algorithm again in future, should that become necessary.

xoroshiro128** is a few percent slower than the drand48 family,
but it can produce full-width 64-bit random values, not only 48-bit,
and it should be much more trustworthy.  It's likely to be noticeably
faster than the platform's random(), depending on which platform you
are thinking about; and we can have non-global state vectors easily,
unlike with random().  It is not cryptographically strong, but neither
are the functions it replaces.

Fabien Coelho, reviewed by Dean Rasheed, Aleksander Alekseev, and myself

Discussion: https://postgr.es/m/alpine.DEB.2.22.394.2105241211230.165418@pseudo
2021-11-28 21:33:07 -05:00
Michael Paquier f44ceb46ec Improve psql tab completion for views, FDWs, sequences and transforms
The following improvements are done:
- Addition of type completion for ALTER SEQUENCE AS.
- Ignore ALTER for transforms, as the command is not supported.
- Addition of more completion for ALTER FOREIGN DATA WRAPPER.
- Addition of options related to columns in ALTER VIEW.

This is a continuation of the work done in 0cd6d3b.

Author: Ken Kato
Discussion: https://postgr.es/m/9497ae9ca1b31eb9b1e97aded1c2ab07@oss.nttdata.com
2021-11-29 10:28:29 +09:00
Michael Paquier f79962d826 Remove useless LZ4 system call on failure when writing file header
If an error occurs when writing the LZ4 file header, LZ4F_compressEnd()
was called in the error code path of write(), followed by
LZ4F_freeCompressionContext() to finish the cleanup.  The code as-is was
not broken, but the LZ4F_compressEnd() call proves not to be necessary, as
there are no contents to flush at this stage, so remove it.

Per gripe from Jeevan Ladhe and Robert Haas.

Discussion: https://postgr.es/m/CAOgcT0PE33wbD7giAT1OSkNJt=p-vu8huq++qh=ny9O=SCP5aA@mail.gmail.com
2021-11-24 20:12:54 +09:00
Tom Lane b55f2b6926 Adjust pg_dump's priority ordering for casts.
When a stored expression depends on a user-defined cast, the backend
records the dependency as being on the cast's implementation function
--- or indeed, if there's no cast function involved but just
RelabelType or CoerceViaIO, no dependency is recorded at all.  This
is problematic for pg_dump, which is at risk of dumping things in the
wrong order leading to restore failures.  Given the lack of previous
reports, the risk isn't that high, but it can be demonstrated if the
cast is used in some view whose rowtype is then used as an input or
result type for some other function.  (That results in the view
getting hoisted into the functions portion of the dump, ahead of
the cast.)

A logically bulletproof fix for this would require including the
cast's OID in the parsed form of the expression, whence it could be
extracted by dependency.c, and then the stored dependency would force
pg_dump to do the right thing.  Such a change would be fairly invasive,
and certainly not back-patchable.  Moreover, since we'd prefer that
an expression using cast syntax be equal() to one doing the same
thing by explicit function call, the cast OID field would have to
have special ignored-by-comparisons semantics, making things messy.

So, let's instead fix this by a very simple hack in pg_dump: change
the object-type priority order so that casts are initially sorted
before functions, immediately after types.  This fixes the problem
in a fairly direct way for casts that have no implementation function.
For those that do, the implementation function will be hoisted to just
before the cast by the dependency sorting step, so that we still have
a valid dump order.  (I'm not sure that this provides a full guarantee
of no problems; but since it's been like this for many years without
any previous reports, this is probably enough to fix it in practice.)

Per report from Дмитрий Иванов.
Back-patch to all supported branches.

Discussion: https://postgr.es/m/CAPL5KHoGa3uvyKp6z6m48LwCnTsK+LRQ_mcA4uKGfqAVSEjV_A@mail.gmail.com
2021-11-22 17:16:29 -05:00
Tom Lane 0b126c6a4b Fix pg_dump --inserts mode for generated columns with dropped columns.
If a table contains a generated column that's preceded by a dropped
column, dumpTableData_insert failed to account for the dropped
column, and would emit DEFAULT placeholder(s) in the wrong column(s).
This resulted in failures at restore time.  The default COPY code path
did not have this bug, likely explaining why it wasn't noticed sooner.

While we're fixing this, we can be a little smarter about the
situation: (1) avoid unnecessarily fetching the values of generated
columns, (2) omit generated columns from the output, too, if we're
using --column-inserts.  While these modes aren't expected to be
as high-performance as the COPY path, we might as well be as
efficient as we can; it doesn't add much complexity.

Per report from Дмитрий Иванов.
Back-patch to v12 where generated columns came in.

Discussion: https://postgr.es/m/CAPL5KHrkBniyQt5e1rafm5DdXvbgiiqfEQEJ9GjtVzN71Jj5pA@mail.gmail.com
2021-11-22 15:25:48 -05:00
Alvaro Herrera 2fed48f48f
Be more specific about OOM in XLogReaderAllocate
A couple of spots can benefit from an added errdetail(), which matches
what we were already doing in other places; and those that cannot
withstand errdetail() can get a more descriptive primary message.

Author: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Reviewed-by: Julien Rouhaud <rjuju123@gmail.com>
Discussion: https://postgr.es/m/CALj2ACV+cX1eM03GfcA=ZMLXh5fSn1X1auJLz3yuS1duPSb9QA@mail.gmail.com
2021-11-22 13:43:43 -03:00
Tom Lane 282b6d00ab pg_receivewal, pg_recvlogical: allow canceling initial password prompt.
Previously it was impossible to terminate these programs via control-C
while they were prompting for a password.  We can fix that trivially
for their initial password prompts, by moving setup of the SIGINT
handler from just before to just after their initial GetConnection()
calls.

This fix doesn't permit escaping out of later re-prompts, but those
should be exceedingly rare, since the user's password or the server's
authentication setup would have to have changed meanwhile.  We
considered applying a fix similar to commit 46d665bc2, but that
seemed more complicated than it'd be worth.  Moreover, this way is
back-patchable, which that wasn't.

The misbehavior exists in all supported versions, so back-patch to all.

Tom Lane and Nathan Bossart

Discussion: https://postgr.es/m/747443.1635536754@sss.pgh.pa.us
2021-11-21 14:13:35 -05:00
Tom Lane 46d665bc26 Allow psql's other uses of simple_prompt() to be interrupted by ^C.
This fills in the work left un-done by 5f1148224.  \prompt can
be canceled out of now, and so can password prompts issued during
\connect.  (We don't need to do anything for password prompts
issued during startup, because we aren't yet trapping SIGINT
at that point.)

Nathan Bossart

Discussion: https://postgr.es/m/747443.1635536754@sss.pgh.pa.us
2021-11-19 12:11:46 -05:00
Michael Paquier 0cd6d3b3c5 Improve psql tab completion for transforms, domains and sequences
The following improvements are done:
- Addition of some tab completion for CREATE DOMAIN.
- Addition of some tab completion for CREATE TRANSFORM.
- Addition of type completion for CREATE SEQUENCE AS.
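
These completions target statement shapes such as the following (names
are hypothetical):

    CREATE DOMAIN us_postal_code AS text CHECK (VALUE ~ '^\d{5}$');
    CREATE SEQUENCE small_seq AS smallint;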

Author: Ken Kato
Reviewed-by: Kyotaro Horiguchi, Michael Paquier
Discussion: https://postgr.es/m/8d370135aef066659eef8e8fbfa6315b@oss.nttdata.com
2021-11-19 11:02:15 +09:00
Tom Lane 5f1148224b Provide a variant of simple_prompt() that can be interrupted by ^C.
Up to now, you couldn't escape out of psql's \password command
by typing control-C (or other local spelling of SIGINT).  This
is pretty user-unfriendly, so improve it.  To do so, we have to
modify the functions provided by pg_get_line.c; but we don't
want to mess with psql's SIGINT handler setup, so provide an
API that lets that handler cause the cancel to occur.

This relies on the assumption that we won't do any major harm by
longjmp'ing out of fgets().  While that's obviously a little shaky,
we've long had the same assumption in the main input loop, and few
issues have been reported.

psql has some other simple_prompt() calls that could usefully
be improved the same way; for now, just deal with \password.

Nathan Bossart, minor tweaks by me

Discussion: https://postgr.es/m/747443.1635536754@sss.pgh.pa.us
2021-11-17 19:09:54 -05:00
Tom Lane 248c3a937d Clean up error handling in pg_basebackup's walmethods.c.
The error handling here was a mess, as a result of a fundamentally
bad design (relying on errno to keep its value much longer than is
safe to assume) as well as a lot of just plain sloppiness, both as
to noticing errors at all and as to reporting the correct errno.
Moreover, the recent addition of LZ4 compression broke things
completely, because liblz4 doesn't use errno to report errors.

To improve matters, keep the error state in the DirectoryMethodData or
TarMethodData struct, and add a string field so we can handle cases
that don't set errno.  (The tar methods already had a version of this,
but it can be done more efficiently since all these cases use a
constant error string.)  Make the dir and tar methods handle errors
in basically identical ways, which they didn't before.

This requires copying errno into the state struct in a lot of places,
which is a bit tedious, but it has the virtue that we can get rid of
ad-hoc code to save and restore errno in a number of places ... not
to mention that it fixes other places that should've saved/restored
errno but neglected to.

In passing, fix some pointlessly static buffers to be ordinary
local variables.

There remains an issue about exactly how to handle errors from
fsync(), but that seems like material for its own patch.

While the LZ4 problems are new, all the rest of this is fixes for
old bugs, so backpatch to v10 where walmethods.c was introduced.

Patch by me; thanks to Michael Paquier for review.

Discussion: https://postgr.es/m/1343113.1636489231@sss.pgh.pa.us
2021-11-17 14:16:34 -05:00
Tom Lane 3cac2c8caa Handle close() failures more robustly in pg_dump and pg_basebackup.
Coverity complained that applying get_gz_error after a failed gzclose,
as we did in one place in pg_basebackup, is unsafe.  I think it's
right: it's entirely likely that the call is touching freed memory.
Change that to inspect errno, as we do for other gzclose calls.

Also, be careful to initialize errno to zero immediately before any
gzclose() call where we care about the error status.  (There are
some calls where we don't, because we already failed at some previous
step.)  This ensures that we don't get a misleadingly irrelevant
error code if gzclose() fails in a way that doesn't set errno.
We could work harder at that, but it looks to me like all such cases
are basically can't-happen if we're not misusing zlib, so it's
not worth the extra notational cruft that would be required.

Also, fix several places that simply failed to check for close-time
errors at all, mostly at some remove from the close or gzclose itself;
and one place that did check but didn't bother to report the errno.

Back-patch to v12.  These mistakes are older than that, but between
the frontend logging API changes that happened in v12 and the fact
that frontend code can't rely on %m before that, the patch would need
substantial revision to work in older branches.  It doesn't quite
seem worth the trouble given the lack of related field complaints.

Patch by me; thanks to Michael Paquier for review.

Discussion: https://postgr.es/m/1343113.1636489231@sss.pgh.pa.us
2021-11-17 13:08:25 -05:00
Peter Eisentraut 303d4eb1c5 Fix incorrect format placeholders 2021-11-17 07:30:30 +01:00
Tom Lane d6eb5a0c25 Make psql's \password default to CURRENT_USER, not PQuser(conn).
The documentation says plainly that \password acts on "the current user"
by default.  What it actually acted on, or tried to, was the username
used to log into the current session.  This is not the same thing if
one has since done SET ROLE or SET SESSION AUTHORIZATION.  Aside from
the possible surprise factor, it's quite likely that the current role
doesn't have permissions to set the password of the original role.

To fix, use "SELECT CURRENT_USER" to get the role name to act on.
(This syntax works with servers at least back to 7.0.)  Also, in
hopes of reducing confusion, include the role name that will be
acted on in the password prompt.
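
For example, after switching roles the two notions diverge (role names
below are hypothetical):

    SET ROLE alice;
    SELECT current_user, session_user;  -- current_user is now alice,
                                        -- session_user is still the login role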

The discrepancy from the documentation makes this a bug, so
back-patch to all supported branches.

Patch by me; thanks to Nathan Bossart for review.

Discussion: https://postgr.es/m/747443.1635536754@sss.pgh.pa.us
2021-11-12 14:55:32 -05:00
Robert Haas 5a1007a508 Have the server properly terminate tar archives.
Earlier versions of PostgreSQL featured a version of pg_basebackup
that wanted to edit tar archives but was too dumb to parse them
properly. The server made things easier for the client by failing
to add the two blocks of zero bytes that ought to end a tar file,
leaving it up to the client to do that.

But since commit 23a1c6578c, we
don't need this hack any more, because pg_basebackup is now smarter
and can parse tar files even if they are properly terminated! So
change the server to always properly terminate the tar files. Older
versions of pg_basebackup can't talk to new servers anyway, so
there's no compatibility break.

On the pg_basebackup side, we still need to add the terminating
zero bytes if we're talking to an older server, but not when the
server is v15+. Hopefully at some point we'll be able to remove
some of this compatibility cruft, but it seems best to hang on to
it for now.

In passing, add a file header comment to bbstreamer_tar.c, to make
it clearer what's going on here.

Discussion: http://postgr.es/m/CA+TgmoZbNzsWwM4BE5Jb_qHncY817DYZwGf+2-7hkMQ27ZwsMQ@mail.gmail.com
2021-11-09 14:29:15 -05:00
Amit Kapila b3812d0b9b Rename some enums to use TABLE instead of REL.
Commit 5a2832465f introduced some enums to represent all tables in schema
publications and used REL in their names. Use TABLE instead of REL in
those enums to avoid confusion with other objects like SEQUENCES that can
be part of a publication in the future.

In passing, (a) change one of the newly introduced error messages to
make it consistent for Create and Alter commands, (b) add missing alias in
one of the SQL Statements that is used to print publications associated
with the table.

Reported-by: Tomas Vondra, Peter Smith
Author: Vignesh C
Reviewed-by: Hou Zhijie, Peter Smith
Discussion: https://www.postgresql.org/message-id/CALDaNm0OANxuJ6RXqwZsM1MSY4s19nuH3734j4a72etDwvBETQ%40mail.gmail.com
2021-11-09 08:39:33 +05:30
Robert Haas 57b5a9646d Minimal fix for unterminated tar archive problem.
Commit 23a1c6578c improved
pg_basebackup's ability to parse tar archives, but also arranged
to parse them only when we need to make some modification to the
contents of the archive. That's a problem, because the server
doesn't actually terminate tar archives. When the new parsing
logic was engaged, pg_basebackup would properly terminate the
tar file, but when it was skipped, pg_basebackup would just write
whatever it got from the server, meaning that the terminator
was missing.

Most versions of tar are willing to overlook the missing terminator, but
the AIX buildfarm animals were not. Fix by inventing a new kind of
bbstreamer that just blindly adds a terminator, and using it whenever we
don't parse the tar archive.

Discussion: http://postgr.es/m/CA+TgmoZbNzsWwM4BE5Jb_qHncY817DYZwGf+2-7hkMQ27ZwsMQ@mail.gmail.com
2021-11-08 16:36:06 -05:00
Tom Lane b0cf5444f9 Fix incorrect format placeholder.
Per buildfarm warnings.
2021-11-08 14:32:29 -05:00
Robert Haas ccf289745d Remove tests added by bd807be693.
The buildfarm is unhappy. It's not obvious why it doesn't like
these tests, but let's remove them until we figure it out.

Discussion: http://postgr.es/m/462618.1636171009@sss.pgh.pa.us
2021-11-07 15:32:32 -05:00
Tom Lane 568620dfd6 contrib/sslinfo needs a fix too to make hamerkop happy.
Re-ordering the #include's is a bit problematic here because
libpq/libpq-be.h needs to include <openssl/ssl.h>.  Instead,
let's #undef the unwanted macro after all the #includes.
This is definitely uglier than the other way, but it should
work despite possible future header rearrangements.

(A look at the openssl headers indicates that X509_NAME is the
only conflicting symbol that we use.)

In passing, remove a related but long-incorrect comment in
pg_backup_archiver.h.

Discussion: https://postgr.es/m/1051867.1635720347@sss.pgh.pa.us
2021-11-07 11:33:53 -05:00
Tomas Vondra dafcf887da Mark mystreamer variable as PG_USED_FOR_ASSERTS_ONLY
Silences warnings about an unused variable when built without asserts.
2021-11-06 16:32:11 +01:00
Robert Haas 23a1c6578c Introduce 'bbstreamer' abstraction to modularize pg_basebackup.
pg_basebackup knows how to do quite a few things with a backup that it
gets from the server, like just write out the files, or compress them
first, or even parse the tar format and inject a modified
postgresql.auto.conf file into the archive generated by the server.
Unfortunately, this makes pg_basebackup.c a very large source file, and
also somewhat difficult to enhance, because for example the knowledge
that the server is sending us a 'tar' file rather than some other sort
of archive is spread all over the place rather than centralized.

In an effort to improve this situation, this commit invents a new
'bbstreamer' abstraction. Each archive received from the server is
fed to a bbstreamer which may choose to dispose of it or pass it
along to some other bbstreamer. Chunks may also be "labelled"
according to whether they are part of the payload data of a file
in the archive or part of the archive metadata.

So, for example, if we want to take a tar file, modify the
postgresql.auto.conf file it contains, and then gzip the result
and write it out, we can use a bbstreamer_tar_parser to parse the
tar file received from the server, a bbstreamer_recovery_injector
to modify the contents of postgresql.auto.conf, a
bbstreamer_tar_archiver to replace the tar headers for the file
modified in the previous step with newly-built ones that are
correct for the modified file, and a bbstreamer_gzip_writer to
gzip and write the resulting data. Only the objects with "tar"
in the name know anything about the tar archive format, and in
theory we could re-archive using some other format rather than
"tar" if somebody wanted to write the code.

These changes do add a substantial amount of code, but I think the
result is a lot more maintainable and extensible. pg_basebackup.c
itself shrinks by roughly a third, with a lot of the complexity
previously contained there moving into the newly-added files.

Patch by me. The larger patch series of which this is a part has been
reviewed and tested at various times by Andres Freund, Sumanta
Mukherjee, Dilip Kumar, Suraj Kharage, Dipesh Pandit, Tushar Ahuja,
Mark Dilger, Sergei Kornilov, and Jeevan Ladhe.

Discussion: https://postgr.es/m/CA+TgmoZGwR=ZVWFeecncubEyPdwghnvfkkdBe9BLccLSiqdf9Q@mail.gmail.com
Discussion: https://postgr.es/m/CA+TgmoZvqk7UuzxsX1xjJRmMGkqoUGYTZLDCH8SmU1xTPr1Xig@mail.gmail.com
2021-11-05 10:26:18 -04:00
Robert Haas bd807be693 amcheck: Add additional TOAST pointer checks.
Expand the checks of toasted attributes to complain if the rawsize is
overlarge.  For compressed attributes, also complain if compression
appears to have expanded the attribute or if the compression method is
invalid.

Mark Dilger, reviewed by Justin Pryzby, Alexander Alekseev, Heikki
Linnakangas, Greg Stark, and me.

Discussion: http://postgr.es/m/8E42250D-586A-4A27-B317-8B062C3816A8@enterprisedb.com
2021-11-05 09:24:25 -04:00
Michael Paquier a5b336b8b9 Improve psql tab completion for COMMENT
Completion is added for more object types, like domain constraints, text
search-ish objects or policies.  Moreover, the area is reorganized,
changing the list of objects supported by COMMENT to be in the same
order as the documentation to ease future additions.
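
The newly covered statement shapes include, for example (object names
are hypothetical):

    COMMENT ON POLICY some_policy ON some_table IS 'tenant row filter';
    COMMENT ON CONSTRAINT zip_format ON DOMAIN us_postal_code IS 'five digits';
    COMMENT ON TEXT SEARCH CONFIGURATION english IS 'default configuration';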

Author: Ken Kato
Reviewed-by: Fujii Masao, Shinya Kato, Suraj Khamkar, Michael Paquier
Discussion: https://postgr.es/m/6e0c2f3f657b229bea32d098d118f307@oss.nttdata.com
2021-11-05 15:25:36 +09:00
Michael Paquier babbbb595d Add support for LZ4 compression in pg_receivewal
pg_receivewal gains a new option, --compression-method=lz4, available
when the code is compiled with --with-lz4.  Similarly to gzip, this
gives the possibility to compress archived WAL segments with LZ4.  This
option is not compatible with --compress.

The implementation uses LZ4 frames, and is compatible with simple lz4
commands.  Like gzip, using --synchronous ensures that any data will be
flushed to disk within the current .partial segment, so that it is
possible to retrieve as much WAL data as possible even from a
non-completed segment (this requires completing the partial file with
zeros up to the WAL segment size supported by the backend after
decompression, but this is the same as gzip).

The calculation of the streaming start LSN is able to transparently find
and check LZ4-compressed segments.  Contrary to gzip where the
uncompressed size is directly stored in the object read, the LZ4 chunk
protocol does not store the uncompressed data size by default.  There is
contentSize that can be used with LZ4 frames, but that would not help when
using an archive that includes segments compressed with the defaults of
the "lz4" command, where this is not stored.  So, this commit has taken
the most extensible approach by decompressing the already-archived
segment to check its uncompressed size, through a blank output buffer in
chunks of 64kB (no actual performance difference noticed with 8kB, 16kB
or 32kB, and the operation in itself is actually fast).

Tests have been added to verify the creation and correctness of the
generated LZ4 files.  The latter is achieved by the use of command
"lz4", if found in the environment.

The tar-based WAL method in walmethods.c, used now only by
pg_basebackup, does not know yet about LZ4.  Its code could be extended
for this purpose.

Author: Georgios Kokolatos
Reviewed-by: Michael Paquier, Jian Guo, Magnus Hagander, Dilip Kumar
Discussion: https://postgr.es/m/ZCm1J5vfyQ2E6dYvXz8si39HQ2gwxSZ3IpYaVgYa3lUwY88SLapx9EEnOf5uEwrddhx2twG7zYKjVeuP5MwZXCNPybtsGouDsAD1o2L_I5E=@pm.me
2021-11-05 11:33:25 +09:00
Michael Paquier 9588622945 Fix some thinkos with pg_receivewal --compression-method
The option name was incorrect in one of the error messages, and the
short option 'I' was used in the code but we did not intend things to be
this way.  While at it, fix the documentation to refer to a "method",
and not a "level".

Oversights in commit d62bcc8, that I have detected after more review of
the LZ4 patch for pg_receivewal.
2021-11-04 12:32:37 +09:00
Michael Paquier d62bcc8b07 Rework compression options of pg_receivewal
Since cada1af, pg_receivewal includes the option --compress, to allow
the compression of WAL segments using gzip, with a value of 0 (the
default) meaning that no compression is used.

This commit introduces a new option, called --compression-method, able
to use as values "none", the default, and "gzip", to make things more
extensible.  The case of --compress=0 becomes fuzzy with this option
layer, so we have made the choice to make pg_receivewal return an error
when using "none" and a non-zero compression level, meaning that the
authorized values of --compress are now [1,9] instead of [0,9].  Not
specifying --compress with "gzip" as compression method makes
pg_receivewal use the default of zlib instead (Z_DEFAULT_COMPRESSION).

The code in charge of finding the streaming start LSN when scanning the
existing archives is refactored and made more extensible.  While at it,
rename "compression" to "compression_level" in walmethods.c, to reduce
the confusion with the introduction of the compression method, even if
the tar method used by pg_basebackup does not rely on the compression
method (yet, at least), but just on the compression level (this area
could be improved more, actually).

This is in preparation for an upcoming patch that adds LZ4 support to
pg_receivewal.

Author: Georgios Kokolatos
Reviewed-by: Michael Paquier, Jian Guo, Magnus Hagander, Dilip Kumar,
Robert Haas
Discussion: https://postgr.es/m/ZCm1J5vfyQ2E6dYvXz8si39HQ2gwxSZ3IpYaVgYa3lUwY88SLapx9EEnOf5uEwrddhx2twG7zYKjVeuP5MwZXCNPybtsGouDsAD1o2L_I5E=@pm.me
2021-11-04 11:10:31 +09:00
Peter Eisentraut ef6f047d2c Fix incorrect format placeholder 2021-11-03 07:34:28 +01:00
Fujii Masao d8dba4d030 pgbench: Fix typo in comment.
Discussion: https://postgr.es/m/f9041ec2-46b6-1b41-0e84-9c8a1e2d6bda@oss.nttdata.com
2021-11-02 23:08:02 +09:00
Fujii Masao cd29be5459 pgbench: Improve error-handling in pgbench.
Previously, failures of the initial connection or of opening the logfile
caused pgbench to proceed with the benchmark, report the incomplete
results and exit with status 2.  It made no sense to proceed with the
benchmark when pgbench could not start as prescribed.

This commit improves pgbench so that such early errors occurring while
starting the benchmark make pgbench exit immediately with status 1.

Author: Yugo Nagata
Reviewed-by: Fabien COELHO, Kyotaro Horiguchi, Fujii Masao
Discussion: https://postgr.es/m/TYCPR01MB5870057375ACA8A73099C649F5349@TYCPR01MB5870.jpnprd01.prod.outlook.com
2021-11-02 22:49:57 +09:00
Michael Paquier 0f9b9938a0 Add TAP test for pg_receivewal with timeline switch
pg_receivewal is able to follow a timeline switch, but this was not
tested.  This test uses an empty archive location with a restart done
from a slot, making its implementation a tad simpler than if it reused
an existing archive directory.

Author: Ronan Dunklau
Reviewed-by: Kyotaro Horiguchi, Michael Paquier
Discussion: https://postgr.es/m/18708360.4lzOvYHigE@aivenronan
2021-11-01 13:16:04 +09:00
Tom Lane b21415595c Doc: improve README files associated with TAP tests.
Rearrange src/test/perl/README so that the first section is more
clearly "how to run these tests", and the rest "how to write new
tests".  Add some basic info there about debugging test failures.
Then, add cross-refs to that README from other READMEs that
describe how to run TAP tests.

Per suggestion from Kevin Burke, though this is not his original
patch.

Discussion: https://postgr.es/m/CAKcy5eiSbwiQnmCfnOnDCVC7B8fYyev3E=6pvvECP9pLE-Fcuw@mail.gmail.com
2021-10-31 18:12:44 -04:00
Peter Eisentraut fd2706589a pg_dump: Refactor messages
This reduces the number of separate messages for translation.
2021-10-30 19:25:10 +02:00
Michael Paquier d680992af5 Speed up TAP tests of pg_receivewal
This commit improves the speed of those tests by 25~30%, using some
simple ideas to reduce the amount of data written by pg_receivewal:
- Use a segment size of 1MB.  While reducing the amount of data zeroed
by pg_receivewal for the new segments, this improves the code coverage
with a non-default segment size.
- In the last test involving a slot's restart_lsn, generate a checkpoint
to advance the redo LSN and the WAL retained by the slot created,
reducing the number of segments that need to be archived.  This accounts
for most of the gain.
- Minimize the amount of data inserted into the dummy table.

Reviewed-by: Ronan Dunklau
Discussion: https://postgr.es/m/YXqYKAdVEqmyTltK@paquier.xyz
2021-10-29 10:41:44 +09:00
Magnus Hagander eff61383b9 Clarify that --system reindexes system catalogs *only*
Make this more clear both in the help message and docs.

Reviewed-By: Michael Paquier
Backpatch-through: 9.6
Discussion: https://postgr.es/m/CABUevEw6Je0WUFTLhPKOk4+BoBuDrE-fKw3N4ckqgDBMFu4paA@mail.gmail.com
2021-10-27 16:28:11 +02:00
Michael Paquier 70bfc5ae53 Add test for copy of shared dependencies from template database
As 98ec35b has proved, there has never been any coverage in this area of
the code.  This commit adds a new TAP test with a template database that
includes a small set of shared dependencies copied to a new database.
The test is added in createdb, where we have never tested that -T
generates a query with TEMPLATE, either.
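
For reference, the createdb option maps to the generated command roughly
as follows (database names are illustrative):

  createdb -T template_sales sales_copy

which issues something like:

  CREATE DATABASE sales_copy TEMPLATE template_sales;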

Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/YXDTl+PfSnqmbbkE@paquier.xyz
2021-10-27 16:02:19 +09:00
Amit Kapila 5a2832465f Allow publishing the tables of schema.
A new option "FOR ALL TABLES IN SCHEMA" in Create/Alter Publication allows
one or more schemas to be specified, whose tables are selected by the
publisher for sending the data to the subscriber.

The new syntax allows specifying both the tables and schemas. For example:
CREATE PUBLICATION pub1 FOR TABLE t1,t2,t3, ALL TABLES IN SCHEMA s1,s2;
OR
ALTER PUBLICATION pub1 ADD TABLE t1,t2,t3, ALL TABLES IN SCHEMA s1,s2;

A new system table "pg_publication_namespace" has been added, to maintain
the schemas that the user wants to publish through the publication.
The output plugin (pgoutput) is modified to publish the changes if the
relation is part of a schema publication.

Updates pg_dump to identify and dump schema publications.  Updates the \d
family of commands to display schema publications; the \dRp+ variant now
also displays the associated schemas, if any.
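
For instance, with the publication created above, the schema mappings
could be inspected with (illustrative):

  \dRp+ pub1
  SELECT * FROM pg_publication_namespace;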

Author: Vignesh C, Hou Zhijie, Amit Kapila
Syntax-Suggested-by: Tom Lane, Alvaro Herrera
Reviewed-by: Greg Nancarrow, Masahiko Sawada, Hou Zhijie, Amit Kapila, Haiying Tang, Ajin Cherian, Rahila Syed, Bharath Rupireddy, Mark Dilger
Tested-by: Haiying Tang
Discussion: https://www.postgresql.org/message-id/CALDaNm0OANxuJ6RXqwZsM1MSY4s19nuH3734j4a72etDwvBETQ@mail.gmail.com
2021-10-27 07:44:52 +05:30
Michael Paquier f61e1dd2ce Allow pg_receivewal to stream from a slot's restart LSN
Prior to this patch, when running pg_receivewal, the streaming start
point would be the current location of the archives if anything is
found in the local directory where WAL segments are written, falling
back to the current WAL flush location, as reported by an
IDENTIFY_SYSTEM command, if there are no archives.

If for some reason the WAL files written by pg_receivewal were moved, it
is better to restart from where we left off, which is the replication
slot's restart_lsn, rather than skipping right to the current flush
location, so as to avoid holes in the WAL backed up.  This commit changes
pg_receivewal to use the following sequence of methods to determine the
starting streaming LSN:
- Scan the local archives.
- Use the slot's restart_lsn, if supported by the backend and if a slot
is defined.
- Fallback to the current flush LSN as reported by IDENTIFY_SYSTEM.

To keep compatibility with older server versions, we only attempt to use
READ_REPLICATION_SLOT if the backend version is at least 15, and
fallback to the older behavior of streaming from the current flush
LSN if the command is not supported.
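
For reference, a slot's restart_lsn can be inspected with the new
replication command over a replication connection, for example
(slot name and connection string are illustrative):

  psql "replication=true" -c "READ_REPLICATION_SLOT myslot;"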

Some TAP tests are added to cover this feature.

Author: Ronan Dunklau
Reviewed-by: Kyotaro Horiguchi, Michael Paquier, Bharath Rupireddy
Discussion: https://postgr.es/m/18708360.4lzOvYHigE@aivenronan
2021-10-26 09:30:37 +09:00
Tom Lane 70bef49400 Fix minor memory leaks in pg_dump.
I found these by running pg_dump under "valgrind --leak-check=full".

The changes in flagInhIndexes() and getIndexes() replace the allocation
of an array, of which only some elements were used, with individual
allocations of just the objects actually needed.  The previous coding
wasted some memory, but more importantly it confused valgrind's leak
tracking.

collectComments() and collectSecLabels() remain major blots on
the valgrind report, because they don't PQclear their query
results, in order to avoid a lot of strdup's.  That's a dubious
tradeoff, but I'll leave it alone here; an upcoming patch will
modify those functions enough to justify changing the tradeoff.
2021-10-24 12:38:26 -04:00
Andrew Dunstan b3b4d8e68a Move Perl test modules to a better namespace
The five modules in our TAP test framework all had names in the top
level namespace. This is unwise because, even though we're not
exporting them to CPAN, the names can leak, for example if they are
exported by the RPM build process. We therefore move the modules to the
PostgreSQL::Test namespace. In the process PostgresNode is renamed to
Cluster, and TestLib is renamed to Utils. PostgresVersion becomes simply
PostgreSQL::Version, to avoid possible confusion about what it's the
version of.

Discussion: https://postgr.es/m/aede93a4-7d92-ef26-398f-5094944c2504@dunslane.net

Reviewed by Erik Rijkers and Michael Paquier
2021-10-24 10:28:19 -04:00
Noah Misch fdd965d074 Avoid race in RelationBuildDesc() affecting CREATE INDEX CONCURRENTLY.
CIC and REINDEX CONCURRENTLY assume backends see their catalog changes
no later than each backend's next transaction start.  That failed to
hold when a backend absorbed a relevant invalidation in the middle of
running RelationBuildDesc() on the CIC index.  Queries that use the
resulting index can silently fail to find rows.  Fix this for future
index builds by making RelationBuildDesc() loop until it finishes
without accepting a relevant invalidation.  It may be necessary to
reindex to recover from past occurrences; REINDEX CONCURRENTLY suffices.
Back-patch to 9.6 (all supported versions).

Noah Misch and Andrey Borodin, reviewed (in earlier versions) by Andres
Freund.

Discussion: https://postgr.es/m/20210730022548.GA1940096@gust.leadboat.com
2021-10-23 18:36:38 -07:00
Tom Lane 92316a4582 In pg_dump, use simplehash.h to look up dumpable objects by OID.
Create a hash table that indexes dumpable objects by CatalogId
(that is, catalog OID + object OID).  Use this to replace the
former catalogIdMap array, as well as various other single-
catalog index arrays, and also the extension membership map.

In principle this should be faster for databases with many objects,
since lookups are now O(1) not O(log N).  However, it seems that these
lookups are pretty much negligible in context, so that no overall
performance change can be measured.  But having only one lookup
data structure to maintain makes the code simpler and more flexible,
so let's do it anyway.

Discussion: https://postgr.es/m/2595220.1634855245@sss.pgh.pa.us
2021-10-22 17:19:03 -04:00
Tom Lane 2acc84c6fd pg_dump: fix mis-dumping of non-global default privileges.
Non-global default privilege entries should be dumped as-is,
not made relative to the default ACL for their object type.
This would typically only matter if one had revoked some
on-by-default privileges in a global entry, and then wanted
to grant them again in a non-global entry.
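
A hypothetical sequence that hits this case:

  ALTER DEFAULT PRIVILEGES REVOKE EXECUTE ON FUNCTIONS FROM PUBLIC;
  ALTER DEFAULT PRIVILEGES IN SCHEMA s1 GRANT EXECUTE ON FUNCTIONS TO PUBLIC;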

Per report from Boris Korzun.  This is an old bug, so back-patch
to all supported branches.

Neil Chen, test case by Masahiko Sawada

Discussion: https://postgr.es/m/111621616618184@mail.yandex.ru
Discussion: https://postgr.es/m/CAA3qoJnr2+1dVJObNtfec=qW4Z0nz=A9+r5bZKoTSy5RDjskMw@mail.gmail.com
2021-10-22 15:22:25 -04:00
Tom Lane 4438eb4a49 pg_dump: Reorganize getTables()
Along the same lines as 047329624, ed2c7f65b and daa9fe8a5, reduce
code duplication by having just one copy of the parts of the query
that are the same across all server versions; and make the
conditionals control the smallest possible amount of code.
This also gets rid of the confusing assortment of different ways
to accomplish the same result that we had here before.

While at it, make sure all three relevant parts of the function
list the fields in the same order.  This is just neatnik-ism,
of course.

Discussion: https://postgr.es/m/1240992.1634419055@sss.pgh.pa.us
2021-10-19 17:22:22 -04:00
Daniel Gustafsson 998d060f3d Fix bug in TOC file error message printing
If the blob TOC file cannot be parsed, the error message was failing
to print the filename as the variable holding it was shadowed by the
destination buffer for parsing.  When the filename fails to parse,
the error will print an empty string:

 ./pg_restore -d foo -F d dump
 pg_restore: error: invalid line in large object TOC file "": ..

..instead of the intended error message:

 ./pg_restore -d foo -F d dump
 pg_restore: error: invalid line in large object TOC file "dump/blobs.toc": ..

Fix by renaming both variables, as the shared name was too generic to
convey what either variable held.

Backpatch all the way down to 9.6.

Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/A2B151F5-B32B-4F2C-BA4A-6870856D9BDE@yesql.se
Backpatch-through: 9.6
2021-10-19 12:59:54 +02:00
Daniel Gustafsson 1d7641d51a Fix sscanf limits in pg_basebackup and pg_dump
Make sure that the string parsing is limited by the size of the
destination buffer.

In pg_basebackup the values sent from the server are limited to two
characters, so there was no risk of overflow.

In pg_dump the buffer is bounded by MAXPGPATH, and thus the limit
must be inserted via preprocessor expansion and the buffer increased
by one to account for the terminator. There is no risk of overflow
here, since in this case, the buffer scanned is smaller than the
destination buffer.
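
A self-contained sketch of the stringification trick (the constant value
and input line are illustrative, not the actual pg_dump code):

  #include <stdio.h>

  #define MAXPGPATH 1024
  #define CppAsString(x)  #x
  #define CppAsString2(x) CppAsString(x)

  int
  main(void)
  {
      unsigned int oid;
      char         fname[MAXPGPATH + 1];   /* +1 for the terminating NUL */

      /* The field width must appear inside the format string literal,
       * hence the two-level macro expansion and stringification, which
       * turns the format into "%u %1024s". */
      if (sscanf("16384 blob_16384.dat",
                 "%u %" CppAsString2(MAXPGPATH) "s", &oid, fname) == 2)
          printf("oid=%u file=%s\n", oid, fname);
      return 0;
  }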

Backpatch the pg_basebackup fix to 11 where it was introduced, and
the pg_dump fix all the way down to 9.6.

Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/B14D3D7B-F98C-4E20-9459-C122C67647FB@yesql.se
Backpatch-through: 11 and 9.6
2021-10-19 12:59:50 +02:00
Michael Paquier 384f1abdb9 Fix portability issues in new TAP tests of psql
The tests added by c0280bc and d9ddc50 in 001_basic.pl introduced
commands calling psql directly, making them sensitive to the
environment.  One issue was that those commands lacked -X, so a local
.psqlrc would be read, causing all those tests to fail if psql cannot
properly parse that file.

TAP tests should be designed so that they run in an isolated fashion,
without any dependency on the environment where they are run.  As
PostgresNode::psql already provides all the facilities those new tests
need, switch to that instead of calling plain psql commands where
interactions with a backend are needed.  The test is slightly refactored
to check for the expected patterns on stdout and stderr, keeping the
same amount of coverage as previously.

Reported-by: Peter Geoghegan
Discussion: https://postgr.es/m/CAH2-Wzn8ftvcDPwomn+y04JJzbT=TG7TN=QsmSEATUOW-ZuvQQ@mail.gmail.com
2021-10-18 09:51:21 +09:00
Tom Lane 40dfac4fc4 Avoid core dump in pg_dump when dumping from pre-8.3 server.
Commit f0e21f2f6 missed adding a tgisinternal output column
to getTriggers' query for pre-8.3 servers.  Back-patch to v11,
like that commit.
2021-10-16 15:02:55 -04:00
Tom Lane e2ff7d9a83 Make pg_dump acquire lock on partitioned tables that are to be dumped.
It was clearly the intent to do so all along, but the original coding
fat-fingered this by checking the wrong array element.  We fixed it
in passing in 403a3d91c, but that later got reverted, and we forgot
to keep this bug fix.

Most of the time this'd be relatively harmless, since once we lock
any of the partitioned table's leaf partitions, that would suffice
to prevent major DDL on the partitioned table itself.  However, a
childless partitioned table would get dumped with no relevant lock
whatsoever, possibly allowing dump failure or inconsistent output.

Unlike 403a3d91c, there are no versioning concerns, since every server
version that has partitioned tables will allow you to lock one.

Back-patch to v10 where partitioned tables were introduced.

Discussion: https://postgr.es/m/1018205.1634346327@sss.pgh.pa.us
2021-10-16 12:23:57 -04:00
Peter Geoghegan cd3f429d95 Remove unstable pg_amcheck tests.
Recent pg_amcheck bugfix commit d2bf06db added a test case that the
buildfarm has shown to be non-portable.  It doesn't particularly seem
worth keeping anyway.  Remove it.

Discussion: https://postgr.es/m/CAH2-Wz=7HKJ9WzAh7+M0JfwJ1yfT9qoE+KPa3P7iGToPOtGhXg@mail.gmail.com
Backpatch: 14-, just like the original commit.
2021-10-14 14:50:26 -07:00