Commit Graph

6757 Commits

Robert Haas bbc1376b39 Teach verify_heapam() to validate update chains within a page.
Prior to this commit, we only considered each tuple or line pointer
on the page in isolation, but now we can do some validation of a line
pointer against its successor. For example, a redirect line pointer
shouldn't point to another redirect line pointer, and if a tuple
is HOT-updated, the result should be a heap-only tuple.
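
For example, assuming the amcheck extension is installed and a table name of
my_table (both illustrative), the new checks are exercised by an ordinary
verification call:
    psql -d mydb -c "CREATE EXTENSION IF NOT EXISTS amcheck" \
                 -c "SELECT * FROM verify_heapam('my_table')"
    # any broken update chain is reported as a (blkno, offnum, attnum, msg) row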

Himanshu Upadhyaya and Robert Haas, reviewed by Aleksander Alekseev,
Andres Freund, and Peter Geoghegan.
2023-03-22 08:48:54 -04:00
Tom Lane b0d8f2d983 Add SHELL_ERROR and SHELL_EXIT_CODE magic variables to psql.
These are set after a \! command or a backtick substitution.
SHELL_ERROR is just "true" for error (nonzero exit status) or "false"
for success, while SHELL_EXIT_CODE records the actual exit status
following standard shell/system(3) conventions.
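
For example (illustrative commands; any shell command would do):
    psql <<'EOF'
    \! test -d /nonexistent
    \echo :SHELL_ERROR :SHELL_EXIT_CODE
    EOF
    # prints "true 1", since the command failed with exit status 1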

Corey Huinker, reviewed by Maxim Orlov and myself

Discussion: https://postgr.es/m/CADkLM=cWao2x2f+UDw15W1JkVFr_bsxfstw=NGea7r9m4j-7rQ@mail.gmail.com
2023-03-21 13:03:56 -04:00
Daniel Gustafsson 106f26a849 Avoid using atooid for numerical comparisons which aren't Oids
The check for the number of roles in the target cluster for an upgrade
selects the existing roles and performs a COUNT(*) over the result.  A
value of one is the expected query result value indicating that only
the install user is present in the new cluster.  The result was converted
with atooid(), the function for converting a string containing an Oid into
a numeric value, which avoids potential overflow but makes the code less
readable since the value isn't actually an Oid at all.

Discussion: https://postgr.es/m/41AB5F1F-4389-4B25-9668-5C430375836C@yesql.se
2023-03-21 12:57:21 +01:00
Peter Eisentraut 4c8044c044 pg_waldump: Allow hexadecimal values for -t/--timeline option
This makes it easier to specify values taken directly from WAL file
names.
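
For instance, the timeline taken from a segment file name such as
0000002A00000001000000CE (hypothetical) can now be passed as hexadecimal:
    pg_waldump --timeline=0x0000002A 0000002A00000001000000CE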

The option parsing is arranged in the style of option_parse_int() (but
we need to parse unsigned int), to allow future refactoring in the
same manner.

Reviewed-by: Sébastien Lardière <sebastien@lardiere.net>
Discussion: https://www.postgresql.org/message-id/flat/8fef346e-2541-76c3-d768-6536ae052993@lardiere.net
2023-03-21 08:05:23 +01:00
Tom Lane 064709f803 Simplify and speed up pg_dump's creation of parent-table links.
Instead of trying to optimize this by skipping creation of the
links for tables we don't plan to dump, just create them all in
bulk with a single scan over the pg_inherits data.  The previous
approach was more or less O(N^2) in the number of pg_inherits
entries, not to mention being way too complicated.

Also, don't create useless TableAttachInfo objects.
It's silly to create a TableAttachInfo object that we're not
going to dump, when we know perfectly well at creation time
that it won't be dumped.

Patch by me; thanks to Julien Rouhaud for review.

Discussion: https://postgr.es/m/1376149.1675268279@sss.pgh.pa.us
2023-03-17 13:43:10 -04:00
Tom Lane bc8cd50fef Fix pg_dump for hash partitioning on enum columns.
Hash partitioning on an enum is problematic because the hash codes are
derived from the OIDs assigned to the enum values, which will almost
certainly be different after a dump-and-reload than they were before.
This means that some rows probably end up in different partitions than
before, causing restore to fail because of partition constraint
violations.  (pg_upgrade dodges this problem by using hacks to force
the enum values to keep the same OIDs, but that's neither possible nor
desirable for pg_dump.)

Users can work around that by specifying --load-via-partition-root,
but since that's a dump-time not restore-time decision, one might
find out the need for it far too late.  Instead, teach pg_dump to
apply that option automatically when dealing with a partitioned
table that has hash-on-enum partitioning.
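
The manual workaround looked roughly like this (database name hypothetical);
with this commit the equivalent behavior is applied automatically for
hash-on-enum partitioned tables:
    pg_dump --load-via-partition-root -Fc -f mydb.dump mydb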

Also deal with a pre-existing issue for --load-via-partition-root
mode: in a parallel restore, we try to TRUNCATE target tables just
before loading them, in order to enable some backend optimizations.
This is bad when using --load-via-partition-root because (a) we're
likely to suffer deadlocks from restore jobs trying to restore rows
into other partitions than they came from, and (b) if we miss getting
a deadlock we might still lose data due to a TRUNCATE removing rows
from some already-completed restore job.

The fix for this is conceptually simple: just don't TRUNCATE if we're
dealing with a --load-via-partition-root case.  The tricky bit is for
pg_restore to identify those cases.  In dumps using COPY commands we
can inspect each COPY command to see if it targets the nominal target
table or some ancestor.  However, in dumps using INSERT commands it's
pretty impractical to examine the INSERTs in advance.  To provide a
solution for that going forward, modify pg_dump to mark TABLE DATA
items that are using --load-via-partition-root with a comment.
(This change also responds to a complaint from Robert Haas that
the dump output for --load-via-partition-root is pretty confusing.)
pg_restore checks for the special comment as well as checking the
COPY command if present.  This will fail to identify the combination
of --load-via-partition-root and --inserts in pre-existing dump files,
but that should be a pretty rare case in the field.  If it does
happen you will probably get a deadlock failure that you can work
around by not using parallel restore, which is the same as before
this bug fix.

Having done this, there seems no remaining reason for the alarmism
in the pg_dump man page about combining --load-via-partition-root
with parallel restore, so remove that warning.

Patch by me; thanks to Julien Rouhaud for review.  Back-patch to
v11 where hash partitioning was introduced.

Discussion: https://postgr.es/m/1376149.1675268279@sss.pgh.pa.us
2023-03-17 13:31:40 -04:00
Peter Eisentraut 95a828378e Fix incorrect format placeholders
Small fixup for 9637badd9f.
2023-03-17 07:48:24 +01:00
Michael Paquier 6f9ee74d45 Improve handling of psql \watch's interval argument
A failure to parse the interval value given to the \watch command was
silently replaced by a 1s interval between two queries, which can be
confusing.  This commit improves the error handling, and a couple of
tests are added to check for:
- An incorrect value.
- An out-of-range value.
- A negative value.

A value of zero now works, meaning that there is no pause between two
queries in a \watch loop.  No backpatch is done, as it could break
existing applications.

Author: Andrey Borodin
Reviewed-by: Kyotaro Horiguchi, Nathan Bossart, Michael Paquier
Discussion: https://postgr.es/m/CAAhFRxiZ2-n_L1ErMm9AZjgmUK=qS6VHb+0SaMn8sqqbhF7How@mail.gmail.com
2023-03-16 09:32:36 +09:00
Tom Lane a563c24c95 Allow pg_dump to include/exclude child tables automatically.
This patch adds new pg_dump switches
    --table-and-children=pattern
    --exclude-table-and-children=pattern
    --exclude-table-data-and-children=pattern
which act the same as the existing --table, --exclude-table, and
--exclude-table-data switches, except that any partitions or
inheritance child tables of the table(s) matching the pattern
are also included or excluded.
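
For example (pattern and database name hypothetical):
    # dump a partitioned table together with all of its partitions
    pg_dump --table-and-children='measurement' -Fc -f measurement.dump mydb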

Gilles Darold, reviewed by Stéphane Tachoires

Discussion: https://postgr.es/m/5aa393b5-5f67-8447-b83e-544516990ee2@migops.com
2023-03-14 16:09:03 -04:00
Andrew Dunstan 9f8377f7a2 Add a DEFAULT option to COPY FROM
This allows specifying a string which, when an input field matches it,
causes the column's default value to be inserted.  The advantage of this
is that the default can be inserted in some rows and not in others, for
which non-default data is available.

The file_fdw extension is also modified to allow use of this option.
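
A minimal sketch of the option (table and marker string are illustrative):
    printf '1,__default__\n2,5\n' > /tmp/items.csv
    psql <<'EOF'
    CREATE TEMP TABLE items (id int, qty int DEFAULT 1);
    \copy items FROM '/tmp/items.csv' WITH (FORMAT csv, DEFAULT '__default__')
    TABLE items;   -- row 1 gets qty = 1 from the column default
    EOF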

Israel Barth Rubio

Discussion: https://postgr.es/m/CAO_rXXAcqesk6DsvioOZ5zmeEmpUN5ktZf-9=9yu+DTr0Xr8Uw@mail.gmail.com
2023-03-13 10:01:56 -04:00
Peter Eisentraut d72900bded Improve support for UNICODE collation on older ICU
The recently added standard collation UNICODE (0d21d4b9bc) doesn't
give consistent results on some build farm members with old ICU
versions.  Apparently, the ICU locale specification 'und' (language
tag style) misbehaves on some older ICU versions.  Replacing it with
'' (ICU locale ID style) fixes it at least on some OS versions.  Let's
see what the build farm says.
2023-03-13 09:08:58 +01:00
Andres Freund a4f23f9b3c pg_amcheck: Minor test speedups
Freezing the relation N times and fetching the tuples one-by-one isn't that
cheap.  On my machine this reduces test times by a bit less than one second;
on Windows CI it's a few seconds.

Reviewed-by: Mark Dilger <mark.dilger@enterprisedb.com>
Discussion: https://postgr.es/m/20230309001558.b7shzvio645ebdta@awork3.anarazel.de
2023-03-11 15:41:47 -08:00
Andres Freund 4f5d461e04 amcheck: Fix FullTransactionIdFromXidAndCtx() for xids before epoch 0
64bit xids can't represent xids before epoch 0 (see also be504a3e97). When
FullTransactionIdFromXidAndCtx() was passed such an xid, it'd create a 64bit
xid far into the future. Noticed while adding assertions in the course of
investigating be504a3e97, as amcheck's tests create such xids.

To fix the issue, just return FirstNormalFullTransactionId in this case. A
freshly initdb'd cluster already has a newer horizon. The most minimal version
of this fix would make the messages for some detected corruptions inaccurate in
a different way. To make those cases accurate, switch
FullTransactionIdFromXidAndCtx() to use the 32bit modulo difference between
xid and nextxid to compute the 64bit xid, yielding sensible "in the future" /
"in the past" answers.

Reviewed-by: Mark Dilger <mark.dilger@enterprisedb.com>
Discussion: https://postgr.es/m/20230108002923.cyoser3ttmt63bfn@awork3.anarazel.de
Backpatch: 14-, where heapam verification was introduced
2023-03-11 14:12:52 -08:00
Jeff Davis c45dc7ffbb initdb: derive encoding from locale for ICU; similar to libc.
Previously, the default encoding was derived from the locale when
using libc; while the default was always UTF-8 when using ICU. That
would throw an error when the locale was not compatible with UTF-8.

This commit causes initdb to derive the default encoding from the
locale for both providers. If --no-locale is specified (or if the
locale is C or POSIX), the default encoding will be UTF-8 for ICU
(because ICU does not support SQL_ASCII) and SQL_ASCII for libc.

Per buildfarm failure on system "hoverfly" related to commit
27b62377b4.

Discussion: https://postgr.es/m/d191d5841347301a8f1238f609471ddd957fc47e.camel%40j-davis.com
2023-03-10 10:51:24 -08:00
Peter Eisentraut 0d21d4b9bc Add standard collation UNICODE
This adds a new predefined collation named UNICODE, which sorts by the
default Unicode collation algorithm specifications, per SQL standard.

This only works if ICU support is built.
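
For example, in an ICU-enabled build (sample strings are arbitrary):
    psql -c 'SELECT w FROM (VALUES ($$bank$$), ($$Bänk$$)) AS t(w)
             ORDER BY w COLLATE "UNICODE"'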

Reviewed-by: Jeff Davis <pgsql@j-davis.com>
Discussion: https://www.postgresql.org/message-id/flat/1293e382-2093-a2bf-a397-c04e8f83d3c2@enterprisedb.com
2023-03-10 13:35:43 +01:00
Jeff Davis 8da2ec31ee Fix test failure caused in 27b62377b4.
Per buildfarm system "prion".
2023-03-09 15:34:41 -08:00
Jeff Davis 27b62377b4 Use ICU by default at initdb time.
If the ICU locale is not specified, initialize the default collator
and retrieve the locale name from that.

Discussion: https://postgr.es/m/510d284759f6e943ce15096167760b2edcb2e700.camel@j-davis.com
Reviewed-by: Peter Eisentraut
2023-03-09 10:52:41 -08:00
Jeff Davis 206b44bb24 Fix 9637badd9f.
Discussion: https://postgr.es/m/0a364430-266e-1e1a-d5d8-1a5273c9ddb6@dunslane.net
Reported-by: Andrew Dunstan
2023-03-09 10:26:47 -08:00
Jeff Davis 9637badd9f pg_upgrade: copy locale and encoding information to new cluster.
Previously, pg_upgrade checked that the old and new clusters were
compatible, including the locale and encoding. But the new cluster was
just created, and only template0 from the new cluster will be
preserved (template1 and postgres are both recreated during the
upgrade process).

Because template0 is not sensitive to locale or encoding, just update
the pg_database entry to be the same as template0 from the original
cluster.

This commit makes it easier to change the default initdb locale or
encoding settings without causing needless incompatibilities.

Discussion: https://postgr.es/m/d62b2874-729b-d26a-2d0a-0d64f509eca4@enterprisedb.com
Reviewed-by: Peter Eisentraut
2023-03-09 08:28:05 -08:00
Andres Freund 8bf826528a meson: tests: Adjust with_icu/ZSTD env vars for pg_dump, pg_basebackup
396d348b0 omitted adding with_icu to the pg_dump tests under
meson. Conversely, e6927270c exported ZSTD for pg_basebackup's tests, despite
pg_basebackup's ZSTD support not having any tests.

Reported-by: Justin Pryzby <pryzby@telsasoft.com>
Discussion: https://postgr.es/m/20230226225239.GL1653@telsasoft.com
2023-03-08 17:04:15 -08:00
Peter Eisentraut 30a53b7929 Allow tailoring of ICU locales with custom rules
This exposes the ICU facility to add custom collation rules to a
standard collation.

New options are added to CREATE COLLATION, CREATE DATABASE, createdb,
and initdb to set the rules.
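
For example, a collation tailored so that "w" sorts right after "v" (the
collation name and rule string are illustrative; the rule uses ICU tailoring
syntax):
    psql -c "CREATE COLLATION de_tailored
             (provider = icu, locale = 'de', rules = '&V << w <<< W')"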

Reviewed-by: Laurenz Albe <laurenz.albe@cybertec.at>
Reviewed-by: Daniel Verite <daniel@manitou-mail.org>
Discussion: https://www.postgresql.org/message-id/flat/821c71a4-6ef0-d366-9acf-bb8e367f739f@enterprisedb.com
2023-03-08 16:56:37 +01:00
Peter Eisentraut 2a71ad64cb Break up long GETTEXT_FILES lists
One file per line seems best.  We already did this in some cases.
This adopts the same format everywhere (except in some cases where the
list reasonably fits on one line).
2023-03-08 15:05:43 +01:00
Daniel Gustafsson d3406d8036 Fix handling of default option values in createuser
Add a description of which one is the default between the two complementary
options --bypassrls and --replication in the help text and docs.
Correspondingly, let the command always include the tokens for every option
of that kind in the SQL command sent to the server.  Tests are updated
accordingly.

Also fix the checks of some trivalue vars which were using literal zero
to check for the default value instead of the enum label TRI_DEFAULT.  While
not a bug, since TRI_DEFAULT is defined as zero, the fix improves readability
(and avoids bugs if the enum is changed).

Author: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Discussion: https://postgr.es/m/20220810.151243.1073197628358749087.horikyota.ntt@gmail.com
2023-03-06 14:16:32 +01:00
Michael Paquier 4211fbd841 Add PROCESS_MAIN to VACUUM
Disabling this option is useful to run VACUUM (with or without FULL) on
only the toast table of a relation, bypassing the main relation.  This
option is enabled by default.

Running VACUUM directly on a toast table was already possible without
this feature, by using the non-deterministic name of the toast relation
(of the form pg_toast.pg_toast_N, where N is the OID of the parent
relation) in the VACUUM command, and it required a scan of pg_class to
know the name of the toast table.  So this feature is basically a
shortcut to be able to run VACUUM or VACUUM FULL on a toast relation,
using only the name of the parent relation.

A new switch called --no-process-main is added to vacuumdb, to work as
an equivalent of PROCESS_MAIN.
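
For example, to process only the toast table of a relation (names
hypothetical):
    psql -d mydb -c 'VACUUM (PROCESS_MAIN FALSE) my_table'
    vacuumdb --no-process-main --table=my_table mydb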

Regression tests are added to cover VACUUM and VACUUM FULL, looking at
pg_stat_all_tables.vacuum_count to see how many vacuums have run on
each table, main or toast.

Author: Nathan Bossart
Reviewed-by: Masahiko Sawada
Discussion: https://postgr.es/m/20221230000028.GA435655@nathanxps13
2023-03-06 16:41:05 +09:00
Michael Paquier ce340e530d Revise pg_pwrite_zeros()
The following changes are made to pg_pwrite_zeros(), the API able to
write a series of zeros using vectored I/O:
- Add an "offset" parameter, to write the given size starting at this
position (the 'p' of "pwrite" seems to mean position, though POSIX does
not spell that out directly); hence the name of the routine would be
misleading if it could not handle offsets.
- Avoid memset() of "zbuffer" on every call.
- Avoid initialization of the whole IOV array if not needed.
- Group the trailing write() call with the main write() call,
simplifying the function logic.

Author: Andres Freund
Reviewed-by: Michael Paquier, Bharath Rupireddy
Discussion: https://postgr.es/m/20230215005525.mrrlmqrxzjzhaipl@awork3.anarazel.de
2023-03-06 13:21:33 +09:00
Tom Lane 3dfae91f7a Show "internal name" not "source code" in psql's \df+ command.
Our previous habit of showing the full function body is really
pretty unfriendly for tabular viewing of functions, and now that
we have \sf and \ef commands there seems no good reason why \df+
has to do it.  It still seems to make sense to show prosrc for
internal and C-language functions, since in those cases prosrc
is just the C function name; but then let's rename the column to
"Internal name" which is a more accurate descriptor.

Isaac Morland

Discussion: https://postgr.es/m/CAMsGm5eqKc6J1=Lwn=ZONG=6ZDYWRQ4cgZQLqMuZGB1aVt_JBg@mail.gmail.com
2023-03-02 17:15:13 -05:00
Daniel Gustafsson 2f80c95740 Mark options as deprecated in usage output
Some deprecated options were not marked as such in usage output.  This
commit marks them across the installed binaries in an attempt to provide
consistent markup.

Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
Discussion: https://postgr.es/m/062C6A8A-A4E8-4F52-9E31-45F0C9E9915E@yesql.se
2023-03-02 14:36:37 +01:00
Daniel Gustafsson 7ab1bc2939 Fix outdated references to guc.c
Commit 0a20ff54f split out the GUC variables from guc.c into a new file
guc_tables.c. This updates comments referencing guc.c regarding variables
which are now in guc_tables.c.

Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/6B50C70C-8C1F-4F9A-A7C0-EEAFCC032406@yesql.se
2023-03-02 13:49:39 +01:00
Tomas Vondra 6095069b40 Improve wording in pg_dump compression docs
A couple minor corrections in pg_dump comments and docs, related to the
recently introduced compression API.

Reported-by: Justin Pryzby
Discussion: https://postgr.es/m/20230227044910.GO1653@telsasoft.com
2023-03-01 16:11:38 +01:00
Tomas Vondra 34ce114374 Fix condition in pg_dump TAP test
The condition checking compression support was parenthesized
incorrectly after adding lz4, so fix that.

Reported-by: Justin Pryzby
Discussion: https://postgr.es/m/20230227044910.GO1653@telsasoft.com
2023-03-01 15:58:25 +01:00
Tom Lane 128dd9f9ec Fix logic buglets in pg_dump's flagInhAttrs().
As it stands, flagInhAttrs() can make changes in table properties that
change decisions made at other tables during other iterations of its
loop.  This is a pretty bad idea, since we visit the tables in OID
order which is not necessarily related to inheritance relationships.
So far as I can tell, the consequences are just cosmetic: we might
dump DEFAULT or GENERATED expressions that we don't really need to
because they match properties of the parent.  Nonetheless, it's buggy,
and somebody might someday add functionality here that fails less
benignly when the traversal order varies.

One issue is that when we decide we needn't dump a particular
GENERATED expression, we physically unlink the struct for it,
so that it will now look like the table has no such expression,
causing the wrong choice to be made at any child visited later.
We can improve that by instead clearing the dobj.dump flag,
and taking care to check that flag when it comes time to dump
the expression or not.

The other problem is that if we decide we need to fake up a DEFAULT
NULL clause to override a default that would otherwise get inherited,
we modify the data structure in the reverse fashion, creating an
attrdefs entry where there hadn't been one.  It's harder to avoid
doing that, but since the backend won't report a plain "DEFAULT NULL"
property we can modify the code to recognize ones we just added.

Add some commentary to perhaps forestall future mistakes of the
same ilk.

Since the effects of this seem only cosmetic, no back-patch.

Discussion: https://postgr.es/m/1506298.1676323579@sss.pgh.pa.us
2023-02-28 18:06:45 -05:00
Daniel Gustafsson d959523257 Disallow NULLS NOT DISTINCT indexes for primary keys
A unique index which is created with non-distinct NULLS cannot be
used for backing a primary key constraint.  Make sure to disallow
such table alterations and teach pg_dump to drop the non-distinct
NULLS clause on indexes where this has been set.
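
For example, this sequence is now rejected (names hypothetical):
    psql <<'EOF'
    CREATE TABLE t (a int);
    CREATE UNIQUE INDEX t_a_key ON t (a) NULLS NOT DISTINCT;
    ALTER TABLE t ADD CONSTRAINT t_pkey PRIMARY KEY USING INDEX t_a_key;  -- error
    EOF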

Bug: 17720
Reported-by: Reiner Peterke <zedaardv@drizzle.com>
Reviewed-by: Peter Eisentraut <peter.eisentraut@enterprisedb.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/17720-dab8ee0fa85d316d@postgresql.org
2023-02-24 11:09:50 +01:00
Daniel Gustafsson 94851e4b90 pg_dump: Remove "blob" terminology
Commit e9960732a9 accidentally introduced the blob terminology in
error messages which had previously been altered by commit 35ce24c33
from "blob" to "LO". This reverts back to "LO".

Reported-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Discussion: https://postgr.es/m/20230224.163127.68506240520261483.horikyota.ntt@gmail.com
Discussion: https://www.postgresql.org/message-id/flat/868a381f-4650-9460-1726-1ffd39a270b4%40enterprisedb.com
2023-02-24 08:49:28 +01:00
Tomas Vondra 0da243fed0 Add LZ4 compression to pg_dump
Expand pg_dump's compression streaming and file APIs to support the lz4
algorithm. The newly added compress_lz4.{c,h} files cover all the
functionality of the aforementioned APIs. Minor changes were necessary
in various pg_backup_* files, where code for the 'lz4' file suffix has
been added, as well as pg_dump's compression option parsing.

Author: Georgios Kokolatos
Reviewed-by: Michael Paquier, Rachel Heaton, Justin Pryzby, Shi Yu, Tomas Vondra
Discussion: https://postgr.es/m/faUNEOpts9vunEaLnmxmG-DldLSg_ql137OC3JYDmgrOMHm1RvvWY2IdBkv_CRxm5spCCb_OmKNk2T03TMm0fBEWveFF9wA1WizPuAgB7Ss%3D%40protonmail.com
2023-02-23 21:19:26 +01:00
Tomas Vondra e9960732a9 Introduce a generic pg_dump compression API
Switch pg_dump to use the Compression API, implemented by bf9aa490db.

The CompressFileHandle replaces the cfp* family of functions with a
struct of callbacks for accessing (compressed) files. This allows adding
new compression methods simply by introducing a new struct instance with
appropriate implementation of the callbacks.

Archives compressed using custom compression methods store an identifier
of the compression algorithm in their header instead of the compression
level. The header version is bumped.

Author: Georgios Kokolatos
Reviewed-by: Michael Paquier, Rachel Heaton, Justin Pryzby, Tomas Vondra
Discussion: https://postgr.es/m/faUNEOpts9vunEaLnmxmG-DldLSg_ql137OC3JYDmgrOMHm1RvvWY2IdBkv_CRxm5spCCb_OmKNk2T03TMm0fBEWveFF9wA1WizPuAgB7Ss%3D%40protonmail.com
2023-02-23 18:33:40 +01:00
Tomas Vondra 03d02f54a6 Prepare pg_dump internals for additional compression methods
Commit bf9aa490db introduced a compression API in compress_io.{c,h} to
make reuse easier, and allow adding more compression algorithms.
However, pg_backup_archiver.c was not switched to this API and continued
to call the compression directly.

This commit teaches pg_backup_archiver.c about the compression API, so
that it can benefit from bf9aa490db (simpler code, easier addition of
new compression methods).

Author: Georgios Kokolatos
Reviewed-by: Michael Paquier, Rachel Heaton, Justin Pryzby, Tomas Vondra
Discussion: https://postgr.es/m/faUNEOpts9vunEaLnmxmG-DldLSg_ql137OC3JYDmgrOMHm1RvvWY2IdBkv_CRxm5spCCb_OmKNk2T03TMm0fBEWveFF9wA1WizPuAgB7Ss%3D%40protonmail.com
2023-02-23 15:38:40 +01:00
Heikki Linnakangas 009eeee746 pg_rewind: Fix determining TLI when server was just promoted.
If the source server was just promoted, and it hasn't written the
checkpoint record yet, pg_rewind considered the server to be still on
the old timeline. Because of that, it would claim incorrectly that no
rewind is required. Fix that by looking at minRecoveryPointTLI in the
control file in addition to the ThisTimeLineID on the checkpoint.

This has been a known issue since forever, and we had worked around it
in the regression tests by issuing a checkpoint after each promotion,
before running pg_rewind. But that was always quite hacky, so better
to fix this properly. This doesn't add any new tests for this, but
removes the previously-added workarounds from the existing tests, so
that they should occasionally hit this codepath again.

This is arguably a bug fix, but don't backpatch because we haven't
really treated it as a bug so far. Also, the patch didn't apply
cleanly to v13 and below. I'm sure it could be made to work on
v13, but it doesn't seem worth the risk and effort.

Reviewed-by: Kyotaro Horiguchi, Ibrar Ahmed, Aleksander Alekseev
Discussion: https://www.postgresql.org/message-id/9f568c97-87fe-a716-bd39-65299b8a60f4%40iki.fi
2023-02-23 15:22:53 +02:00
Peter Eisentraut f198f0a48c pg_dump: Remove some dead code
Client-side tracking of atttypmod has been unused since 64f3524, when
server-side format_type() started being used exclusively.  So remove
this dead code.

Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://www.postgresql.org/message-id/flat/144be239-c893-9361-704f-ac85b5b98d1a%40enterprisedb.com
2023-02-22 08:23:56 +01:00
Michael Paquier 8bf5af2ee6 Fix small memory leak in psql's \bind command
psql_scan_slash_option() returns a malloc()'d result through a
PQExpBuffer, and exec_command_bind() was doing an extra allocation of
this option for no effect.

Introduced in 5b66de3.

Author: Kyotaro Horiguchi
Reviewed-by: Corey Huinker
Discussion: https://postgr.es/m/20230221.115555.89096938631423206.horikyota.ntt@gmail.com
2023-02-22 14:22:13 +09:00
Alvaro Herrera 038f586d5f pgbench: Prepare commands in pipelines in advance
Failing to do so results in an error when a pgbench script tries to
start a serializable transaction inside a pipeline, because by the time
BEGIN ISOLATION LEVEL SERIALIZABLE is executed, we're already in a
transaction that has acquired a snapshot, so the server rightfully
complains.
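
A minimal script that exercises this case (script and database names
hypothetical):
    cat > serializable_pipeline.sql <<'EOF'
    \startpipeline
    BEGIN ISOLATION LEVEL SERIALIZABLE;
    SELECT 1;
    END;
    \endpipeline
    EOF
    pgbench -n -M prepared -f serializable_pipeline.sql -t 10 mydb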

We can work around that by preparing all commands in the pipeline before
actually starting the pipeline.  This changes the existing code in two
aspects: first, we now prepare each command individually at the point
where that command is about to be executed; previously, we would prepare
all commands in a script as soon as the first command of that script
would be executed.  It's hard to see that this would make much of a
difference (particularly since it only affects the first time to execute
each script in a client), but I didn't actually try to measure it.

Secondly, we no longer use PQsendPrepare() in pipeline mode, but only
PQprepare().  There's no specific reason for this change other than no
longer needing to do things differently in pipeline mode.  (Previously we
had no choice, because in pipeline mode PQprepare() could not be used.)

Backpatch to 14, where pgbench got support for pipeline mode.

Reported-by: Yugo NAGATA <nagata@sraoss.co.jp>
Discussion: https://postgr.es/m/20210716153013.fc53b1c780b06fccc07a7f0d@sraoss.co.jp
2023-02-21 10:56:37 +01:00
Alvaro Herrera a1acdacada Add wait_for_replay_catchup wrapper to Cluster.pm
This simplifies a few lines of Perl test code a bit.

Author: Bertrand Drouvot
Discussion: https://postgr.es/m/846724b5-0723-f4c2-8b13-75301ec7509e@gmail.com
2023-02-13 11:52:19 +01:00
Michael Paquier ef7002dbe0 Fix various typos in code and tests
Most of these are recent, and the documentation portions are new as of
v16 so there is no need for a backpatch.

Author: Justin Pryzby
Discussion: https://postgr.es/m/20230208155644.GM1653@telsasoft.com
2023-02-09 14:43:53 +09:00
Peter Eisentraut aa69541046 Remove useless casts to (void *) in arguments of some system functions
The affected functions are: bsearch, memcmp, memcpy, memset, memmove,
qsort, repalloc

Reviewed-by: Corey Huinker <corey.huinker@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/fd9adf5d-b1aa-e82f-e4c7-263c30145807%40enterprisedb.com
2023-02-07 06:57:59 +01:00
Michael Paquier d07c2948bf Add support for progress reporting to pg_verifybackup
This adds a new option to pg_verifybackup called -P/--progress, showing
every second some information about the progress of the checksum
verification based on the data of a backup manifest.
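
For example (backup directory hypothetical):
    pg_basebackup -D /path/to/backup
    pg_verifybackup --progress /path/to/backup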

Similarly to what is done for pg_rewind and pg_basebackup, the
information printed in the progress report consists of the current
amount of data computed and the total amount of data that will be
computed.  Note that files found with an incorrect size do not have
their checksum verified, hence their size is not appended to the total
amount of data estimated during the first scan of the manifest data
(such incorrect sizes could be overly high, for one, falsifying the
progress report).

Author: Masahiko Sawada
Discussion: https://postgr.es/m/CAD21AoC5+JOgMd4o3z_oxw0f8JDSsCYY7zSbhe-O9x7f33rw_A@mail.gmail.com
2023-02-06 14:40:31 +09:00
Thomas Munro c289117505 Try to fix pg_upgrade test on Windows, again.
Further to commit 54e72b66e, if rmtree() fails while cleaning up in
pg_upgrade, try again.  This gives our Windows unlink() wrapper a chance
to reach its wait-for-the-other-process-to-go-away logic, if the first
go around initiated the unlink of a file that a concurrently exiting
program still has open.

Discussion: https://postgr.es/m/CA%2BhUKGKCVy2%3Do%3Dd8c2Va6a_3Rpf_KkhUitkWCZ3hzuO2VwLMXA%40mail.gmail.com
2023-02-01 14:40:25 +13:00
Michael Paquier 783d8abc3b Fix behavior with pg_restore -l and compressed dumps
pg_restore -l has always been able to read the TOC data of a dump even
if its binary has no support for compression, for both compressed and
uncompressed dumps.  5e73a60 has introduced a backward-incompatible
behavior by switching a warning to a hard error in the code path reading
the header data of a dump, preventing the TOC items from being listed when
pg_restore -l, built without support for compression, is used on a compressed
dump.  Most modern systems should have support for zlib, but it is still
possible that somebody relies on the past behavior when copying
over a dump where binaries are not built with zlib support (most likely
some WIN32 flavors these days, though most environments should provide
that).
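
For example (database name hypothetical):
    pg_dump -Fc -f mydb.dump mydb   # custom format, compressed with zlib when available
    pg_restore -l mydb.dump         # lists the TOC even if this pg_restore was built without zlib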

There is no easy way to have a regression test for this pattern, as it
requires a mix of dump/restore commands with different compilation
options, with and without compression.  One possibility I see here would
be to have a command-line option that enforces a non-compression check
for a build that supports compression, but that does not seem worth the
cost, either.

Reported-by: Justin Pryzby
Author: Georgios Kokolatos
Discussion: https://postgr.es/m/20230125180020.GF22427@telsasoft.com
2023-01-27 10:19:50 +09:00
Andres Freund 25b2aba0c3 Zero initialize uses of instr_time about to trigger compiler warnings
These are all not necessary from a correctness POV. However, in the near
future instr_time will be simplified to an int64, at which point gcc would
otherwise start to warn about the changed places.

Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/20230116023639.rn36vf6ajqmfciua@awork3.anarazel.de
2023-01-20 21:16:47 -08:00
Tom Lane 74739d1d3f Avoid harmless warning from pg_dump --if-exists mode.
If the public schema has a non-default owner (perhaps due to
dropping and recreating it) then use of pg_dump's "--if-exists"
option results in a warning message:

warning: could not find where to insert IF EXISTS in statement "-- *not* dropping schema, since initdb creates it"

This is harmless since the dump output is the same either way,
but nonetheless it's undesirable.  It's the fault of commit
a7a7be1f2, which created situations where a TOC entry's "defn"
or "dropStmt" fields could be just comments.  Although that
commit fixed up the kluges in pg_backup_archiver.c that munge defn
strings, it missed doing so for the one that munges dropStmts.

Per bug# 17753 from Justin Zhang.

Discussion: https://postgr.es/m/17753-9c8773631747ee1c@postgresql.org
2023-01-19 19:32:50 -05:00
Michael Paquier 6ba6306256 Fix failure with perlcritic in psql's create_help.pl
No buildfarm members have reported that yet, but a recently-refreshed
Debian host did.

Reviewed-by: Andrew Dunstan
Discussion: https://postgr.es/m/Y8ey5z4Nav62g4/K@paquier.xyz
Backpatch-through: 11
2023-01-19 10:01:53 +09:00
Tom Lane 47bb9db759 Get rid of the "new" and "old" entries in a view's rangetable.
The rule system needs "old" and/or "new" pseudo-RTEs in rule actions
that are ON INSERT/UPDATE/DELETE.  Historically it's put such entries
into the ON SELECT rules of views as well, but those are really quite
vestigial.  The only thing we've used them for is to carry the
view's relid forward to AcquireExecutorLocks (so that we can
re-lock the view to verify it hasn't changed before re-using a plan)
and to carry its relid and permissions data forward to execution-time
permissions checks.  What we can do instead of that is to retain
these fields of the RTE_RELATION RTE for the view even after we
convert it to an RTE_SUBQUERY RTE.  This requires a tiny amount of
extra complication in the planner and AcquireExecutorLocks, but on
the other hand we can get rid of the logic that moves that data from
one place to another.

The principal immediate benefit of doing this, aside from a small
saving in the pg_rewrite data for views, is that these pseudo-RTEs
no longer trigger ruleutils.c's heuristic about qualifying variable
names when the rangetable's length is more than 1.  That results
in quite a number of small simplifications in regression test outputs,
which are all to the good IMO.

Bump catversion because we need to dump a few more fields of
RTE_SUBQUERY RTEs.  While those will always be zeroes anyway in
stored rules (because we'd never populate them until query rewrite)
they are useful for debugging, and it seems like we'd better make
sure to transmit such RTEs accurately in plans sent to parallel
workers.  I don't think the executor actually examines these fields
after startup, but someday it might.

This is a second attempt at committing 1b4d280ea.  The difference
from the first time is that now we can add some filtering rules to
AdjustUpgrade.pm to allow cross-version upgrade testing to pass
despite all the cosmetic changes in CREATE VIEW outputs.

Amit Langote (filtering rules by me)

Discussion: https://postgr.es/m/CA+HiwqEf7gPN4Hn+LoZ4tP2q_Qt7n3vw7-6fJKOf92tSEnX6Gg@mail.gmail.com
Discussion: https://postgr.es/m/891521.1673657296@sss.pgh.pa.us
2023-01-18 13:23:57 -05:00