Commit Graph

1235 Commits

Author SHA1 Message Date
Peter Eisentraut f198f0a48c pg_dump: Remove some dead code
Client-side tracking of atttypmod has been unused since 64f3524, when
server-side format_type() started being used exclusively.  So remove
this dead code.

Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://www.postgresql.org/message-id/flat/144be239-c893-9361-704f-ac85b5b98d1a%40enterprisedb.com
2023-02-22 08:23:56 +01:00
John Naylor 15eb122880 Remove redundant relkind check
Ranier Vilela

Reviewed by Justin Pryzby
Discussion: https://www.postgresql.org/message-id/CAEudQAp2R2fbbi0OHHhv_n4%3DCh0t1VtjObR9YMqtGKHJ%2BfaUFQ%40mail.gmail.com
2023-01-17 14:25:01 +07:00
Tom Lane 8bf6ec3ba3 Improve handling of inherited GENERATED expressions.
In both partitioning and traditional inheritance, require child
columns to be GENERATED if and only if their parent(s) are.
Formerly we allowed the case of an inherited column being
GENERATED when its parent isn't, but that results in inconsistent
behavior: the column can be directly updated through an UPDATE
on the parent table, leading to it containing a user-supplied
value that might not match the generation expression.  This also
fixes an oversight that we enforced partition-key-columns-can't-
be-GENERATED against parent tables, but not against child tables
that were dynamically attached to them.

Also, remove the restriction that the child's generation expression
be equivalent to the parent's.  In the wake of commit 3f7836ff6,
there doesn't seem to be any reason that we need that restriction,
since generation expressions are always computed per-table anyway.
By removing this, we can also allow a child to merge multiple
inheritance parents with inconsistent generation expressions, by
overriding them with its own expression, much as we've long allowed
for DEFAULT expressions.
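
As a sketch of the newly-allowed override (hypothetical tables):

CREATE TABLE parent (a int, b int GENERATED ALWAYS AS (a * 2) STORED);
CREATE TABLE child (b int GENERATED ALWAYS AS (a + 1) STORED)
  INHERITS (parent);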

Since we're rejecting a case that we used to accept, this doesn't
seem like a back-patchable change.  Given the lack of field
complaints about the inconsistent behavior, it's likely that no
one is doing this anyway, but we won't change it in minor releases.

Amit Langote and Tom Lane

Discussion: https://postgr.es/m/2793383.1672944799@sss.pgh.pa.us
2023-01-11 15:55:02 -05:00
Amit Kapila 216a784829 Perform apply of large transactions by parallel workers.
Currently, for large transactions, the publisher sends the data in
multiple streams (changes divided into chunks depending upon
logical_decoding_work_mem), and then on the subscriber-side, the apply
worker writes the changes into temporary files and once it receives the
commit, it reads from those files and applies the entire transaction. To
improve the performance of such transactions, we can instead allow them to
be applied via parallel workers.

In this approach, we assign a new parallel apply worker (if available) as
soon as the xact's first stream is received and the leader apply worker
will send changes to this new worker via shared memory. The parallel apply
worker will directly apply the change instead of writing it to temporary
files. However, if the leader apply worker times out while attempting to
send a message to the parallel apply worker, it will switch to
"partial serialize" mode -  in this mode, the leader serializes all
remaining changes to a file and notifies the parallel apply workers to
read and apply them at the end of the transaction. We use a non-blocking
way to send the messages from the leader apply worker to the parallel
apply to avoid deadlocks. We keep this parallel apply assigned till the
transaction commit is received and also wait for the worker to finish at
commit. This preserves commit ordering and avoid writing to and reading
from files in most cases. We still need to spill if there is no worker
available.

This patch also extends the SUBSCRIPTION 'streaming' parameter so that the
user can control whether to apply the streaming transaction in a parallel
apply worker or spill the change to disk. The user can set the streaming
parameter to 'on/off', or 'parallel'. The parameter value 'parallel' means
the streaming will be applied via a parallel apply worker, if available.
The parameter value 'on' means the streaming transaction will be spilled
to disk. The default value is 'off' (same as current behaviour).
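
For example, a subscription might opt in like this (hypothetical names,
mirroring the usage examples elsewhere in this log):

CREATE SUBSCRIPTION sub1 CONNECTION 'dbname=postgres port=9999'
PUBLICATION pub1 WITH (streaming = parallel);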

In addition, the patch extends the logical replication STREAM_ABORT
message so that abort_lsn and abort_time can also be sent which can be
used to update the replication origin in parallel apply worker when the
streaming transaction is aborted. Because this message extension is needed
to support parallel streaming, parallel streaming is not supported for
publications on servers < PG16.

Author: Hou Zhijie, Wang wei, Amit Kapila with design inputs from Sawada Masahiko
Reviewed-by: Sawada Masahiko, Peter Smith, Dilip Kumar, Shi yu, Kuroda Hayato, Shveta Mallik
Discussion: https://postgr.es/m/CAA4eK1+wyN6zpaHUkCLorEWNx75MG0xhMwcFhvjqm2KURZEAGw@mail.gmail.com
2023-01-09 07:52:45 +05:30
Tom Lane 5f53b42cfd During pg_dump startup, acquire table locks in batches.
Combine multiple LOCK TABLE commands to reduce the number of
round trips to the server.  This is particularly helpful when
dumping from a remote server, but it seems useful even without
that.  In particular, shortening the time from seeing a table
in pg_class to acquiring lock on it reduces the window for
trouble from concurrent DDL.
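
Illustratively, one batched round trip might look like this
(hypothetical table names):

LOCK TABLE public.t1, public.t2, public.t3 IN ACCESS SHARE MODE;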

Aleksander Alekseev, reviewed by Fabrízio de Royes Mello,
Gilles Darold, and Andres Freund

Discussion: https://postgr.es/m/CAJ7c6TO4z1+OBa-R+fC8FnaUgbEWJUf2Kq=nRngTW5EXtKru2g@mail.gmail.com
2023-01-03 17:56:44 -05:00
Bruce Momjian c8e1ba736b Update copyright for 2023
Backpatch-through: 11
2023-01-02 15:00:37 -05:00
Alexander Korotkov 096dd80f3c Add USER SET parameter values for pg_db_role_setting
The USER SET flag specifies that the variable should be set on behalf of an
ordinary role.  That lets ordinary roles set placeholder variables whose
permission requirements are not yet known.  Such a value won't be used if
the variable finally turns out to require superuser privileges.
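
A sketch of the intended usage (hypothetical role and placeholder
variable):

ALTER ROLE app_user SET myext.api_key = 'secret' USER SET;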

The new flags are stored in the pg_db_role_setting.setuser array.  Catversion
is bumped.

This commit is inspired by the previous work by Steve Chavez.

Discussion: https://postgr.es/m/CAPpHfdsLd6E--epnGqXENqLP6dLwuNZrPMcNYb3wJ87WR7UBOQ%40mail.gmail.com
Author: Alexander Korotkov, Steve Chavez
Reviewed-by: Pavel Borisov, Steve Chavez
2022-12-09 13:12:20 +03:00
Peter Eisentraut 35ce24c333 pg_dump: Remove "blob" terminology
For historical reasons, pg_dump refers to large objects as "BLOBs".
This term is not used anywhere else in PostgreSQL, and it also means
something different in the SQL standard and other SQL systems.

This patch renames internal functions, code comments, documentation,
etc. to use the "large object" or "LO" terminology instead.  There is
no functionality change, so the archive format still uses the name
"BLOB" for the archive entry.  Additional long command-line options
are added with the new naming.
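(The long options in question are presumably --large-objects and
--no-large-objects, alongside the historical --blobs and --no-blobs.)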

Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Discussion: https://www.postgresql.org/message-id/flat/868a381f-4650-9460-1726-1ffd39a270b4%40enterprisedb.com
2022-12-05 08:52:55 +01:00
Michael Paquier 5e73a60488 Switch pg_dump to use compression specifications
Compression specifications are currently used by pg_basebackup and
pg_receivewal, and let the user control the compression method and level
in an extended way.  As an effect of this commit,
pg_dump's -Z/--compress is now able to use more than just an integer, as
of the grammar "method[:detail]".

The method can be either "none" or "gzip", and can optionally take a
detail string.  If the detail string is only an integer, it defines the
compression level.  A comma-separated list of keywords can also be used
when the method allows for more options; the only keyword supported now
is "level".

The change is backward-compatible, hence specifying only an integer
leads to no compression for a level of 0 and gzip compression when the
level is greater than 0.

Most of the code changes are straightforward, as pg_dump was relying on
an integer tracking the compression level to check for gzip or no
compression.  These are changed to use a compression specification and
the algorithm stored in it.

As of this change, note that the dump format is not bumped because there
is no need yet to track the compression algorithm in the TOC entries.
Hence, we still rely on the compression level to make the difference
when reading them.  This will be mandatory once a new compression method
is added, though.

In order to keep the code simpler when parsing the compression
specification, pg_dump now fails hard when gzip is requested via
-Z/--compress but its support is not compiled in, rather than silently
enforcing no compression with nothing but a warning to tell the user.
Like before this commit, archive and custom formats
are compressed by default when the code is compiled with gzip, and left
uncompressed without gzip.

Author: Georgios Kokolatos
Reviewed-by: Michael Paquier
Discussion: https://postgr.es/m/O4mutIrCES8ZhlXJiMvzsivT7ztAMja2lkdL1LJx6O5f22I2W8PBIeLKz7mDLwxHoibcnRAYJXm1pH4tyUNC4a8eDzLn22a6Pb1S74Niexg=@pm.me
2022-12-02 10:45:02 +09:00
Tom Lane 26ee7fb368 pg_dump: fix failure to dump comments on constraints in some cases.
Thinko in commit 5209c0ba0: I checked the wrong object's
DUMP_COMPONENT_COMMENT bit in two places.

Per bug #17675 from Franz-Josef Färber.

Discussion: https://postgr.es/m/17675-c69c001e06390867@postgresql.org
2022-11-02 11:30:04 -04:00
Robert Haas a448e49bcb Revert 56-bit relfilenode change and follow-up commits.
There are still some alignment-related failures in the buildfarm,
which might or might not be able to be fixed quickly, but I've also
just realized that it increased the size of many WAL records by 4 bytes
because a block reference contains a RelFileLocator. The effect of that
hasn't been studied or discussed, so revert for now.
2022-09-28 09:55:28 -04:00
Robert Haas 05d4cbf9b6 Increase width of RelFileNumbers from 32 bits to 56 bits.
RelFileNumbers are now assigned using a separate counter, instead of
being assigned from the OID counter. This counter never wraps around:
if all 2^56 possible RelFileNumbers are used, an internal error
occurs. As the cluster is limited to 2^64 total bytes of WAL, this
limitation should not cause a problem in practice.

If the counter were 64 bits wide rather than 56 bits wide, we would
need to increase the width of the BufferTag, which might adversely
impact buffer lookup performance. Also, this lets us use bigint for
pg_class.relfilenode and other places where these values are exposed
at the SQL level without worrying about overflow.

This should remove the need to keep "tombstone" files around until
the next checkpoint when relations are removed. We do that to keep
RelFileNumbers from being recycled, but now that won't happen
anyway. However, this patch doesn't actually change anything in
this area; it just makes it possible for a future patch to do so.

Dilip Kumar, based on an idea from Andres Freund, who also reviewed
some earlier versions of the patch. Further review and some
wordsmithing by me. Also reviewed at various points by Ashutosh
Sharma, Vignesh C, Amul Sul, Álvaro Herrera, and Tom Lane.

Discussion: http://postgr.es/m/CA+Tgmobp7+7kmi4gkq7Y+4AM9fTvL+O1oQ4-5gFTT+6Ng-dQ=g@mail.gmail.com
2022-09-27 13:25:21 -04:00
Robert Haas 2f47715cc8 Move RelFileNumber declarations to common/relpath.h.
Previously, these were declared in postgres_ext.h, but they are not
needed nearly so widely as the OID declarations, so that doesn't
necessarily make sense. Also, because postgres_ext.h is included
before most of c.h has been processed, the previous location creates
some problems for a pending patch.

Patch by me, reviewed by Dilip Kumar.

Discussion: http://postgr.es/m/CA+TgmoYc8oevMqRokZQ4y_6aRn-7XQny1JBr5DyWR_jiFtONHw@mail.gmail.com
2022-09-27 12:01:57 -04:00
Peter Geoghegan 20e69daa13 Harmonize parameter names in pg_dump/pg_dumpall.
Make sure that function declarations use names that exactly match the
corresponding names from function definitions in pg_dump/pg_dumpall
related code.

Affected code happens to be inconsistent in how it applies conventions
around how Archive and Archive Handle variables are named.  Significant
code churn is required to fully fix those inconsistencies, so take the
least invasive approach possible: treat function definition names as
authoritative, and mechanically adjust corresponding names from function
definitions to match.

Like other recent commits that cleaned up function parameter names, this
commit was written with help from clang-tidy.

Author: Peter Geoghegan <pg@bowt.ie>
Reviewed-By: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/CAH2-Wzmma+vzcO6gr5NYDZ+sx0G14aU-UrzFutT2FoRaisVCUQ@mail.gmail.com
2022-09-22 16:41:23 -07:00
Alvaro Herrera 790bf615dd Remove ALL keyword from TABLES IN SCHEMA for publication
This may be a bit too subtle, but removing that word from there makes
this clause no longer a perfect parallel of the GRANT variant "ALL
TABLES IN SCHEMA": indeed, for publications what we record is the schema
itself, not the tables therein, which means that any tables added to the
schema in the future are also published.  This is completely different
to what GRANT does, which is to affect only the tables that exist when the
command is executed.
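
For example (hypothetical schema and role), the contrast is:

CREATE PUBLICATION pub1 FOR TABLES IN SCHEMA s1;
    -- also covers tables added to s1 in the future
GRANT SELECT ON ALL TABLES IN SCHEMA s1 TO some_role;
    -- affects only the tables existing now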

There isn't resounding support for this change, but there are a few
positive votes and no opposition.  Because the time to 15 RC1 is very
short, let's get this out now.

Backpatch to 15.

Discussion: https://postgr.es/m/2729c9e2-9aac-8cda-f2f4-34f2bcc18f4e
2022-09-22 19:02:25 +02:00
David Rowley 8b26769bc4 Fix an assortment of improper usages of string functions
In a similar effort to f736e188c and 110d81728, fixup various usages of
string functions where a more appropriate function is available and more
fit for purpose.

These changes include:

1. Use cstring_to_text_with_len() instead of cstring_to_text() when
   working with a StringInfoData and the length can easily be obtained.
2. Use appendStringInfoString() instead of appendStringInfo() when no
   formatting is required.
3. Use pstrdup(...) instead of psprintf("%s", ...)
4. Use pstrdup(...) instead of psprintf(...) (with no formatting)
5. Use appendPQExpBufferChar() instead of appendPQExpBufferStr() when the
   length of the string being appended is 1.
6. appendStringInfoChar() instead of appendStringInfo() when no formatting
   is required and string is 1 char long.
7. Use appendPQExpBufferStr(b, .) instead of appendPQExpBuffer(b, "%s", .)
8. Don't use pstrdup when it's fine to just point to the string constant.

I (David) did find other cases of #8 but opted to use #4 instead as I
wasn't certain enough that applying #8 was ok (e.g. in hba.c).

Author: Ranier Vilela, David Rowley
Discussion: https://postgr.es/m/CAApHDvo2j2+RJBGhNtUz6BxabWWh2Jx16wMUMWKUjv70Ver1vg@mail.gmail.com
2022-09-06 13:19:44 +12:00
Peter Eisentraut 396d348b04 pg_dump: Dump colliculocale
This was forgotten when the new column was introduced.

Author: Marina Polyakova <m.polyakova@postgrespro.ru>
Reviewed-by: Julien Rouhaud <rjuju123@gmail.com>
Discussion: https://www.postgresql.org/message-id/7ad26354e75259f59c4a6c6997b8ee32%40postgrespro.ru
2022-08-24 20:13:52 +02:00
David Rowley 421892a192 Further reduce warnings with -Wshadow=compatible-local
In a similar effort to f01592f91, here we're targeting fixing the
warnings that -Wshadow=compatible-local produces that we can fix by moving
a variable to an inner scope to stop that variable from being shadowed by
another variable declared somewhere later in the function.

All of the warnings being fixed here are fixed by changing the scope of
variables being used as iterators for "for" loops.  In each instance,
the fix happens to be changing the for loop to use the C99 type
initialization.  Much of this code likely pre-dates our use of C99.

Reducing the scope of the outer scoped variable seems like the safest way
to fix these.  Renaming seems more likely to risk patches using the wrong
variable.  Reducing the scope is more likely to result in a compilation
failure after applying some future patch rather than introducing bugs with
it.

By my count, this takes the warning count from 129 down to 114.

Author: Justin Pryzby
Discussion: https://postgr.es/m/CAApHDvrwLGBP%2BYw9vriayyf%3DXR4uPWP5jr6cQhP9au_kaDUhbA%40mail.gmail.com
2022-08-24 12:27:12 +12:00
David Rowley f01592f915 Remove shadowed local variables that are new in v15
Compiling with -Wshadow=compatible-local yields quite a few warnings about
local variables being shadowed by compatible local variables in an inner
scope.  Of course, this is perfectly valid in C, but we have had bugs in
the past as a result of developers failing to notice this.  af7d270dd is a
recent example.

Here we do a cleanup of warnings we receive from -Wshadow=compatible-local
for code which is new to PostgreSQL 15.  We've yet to have the discussion
about if we actually ever want to run that as a standard compilation flag.
We'll need to at least get the number of warnings down to something easier
to manage before we can realistically consider if we want this or not.
This commit is the first step towards reducing the warnings.

The changes being made here are all fairly trivial.  Because of that, and
the fact that v15 is still in beta, this is being back-patched into 15.
It seems more risky not to do this as the risk of future bugs is increased
by the additional conflicts that this commit could cause for any future
bug fixes touching the same areas as this commit.

Author: Justin Pryzby
Discussion: https://postgr.es/m/20220817145434.GC26426%40telsasoft.com
Backpatch-through: 15
2022-08-20 11:40:44 +12:00
Robert Haas 4374699639 Fix brown paper bag bug in bbe08b8869.
We must issue the TRUNCATE command first and update relfrozenxid
and relminmxid afterward; otherwise, TRUNCATE overwrites the
previously-set values.

Add a test case like I should have done the first time.

Per buildfarm report from TestUpgradeXversion.pm, by way of Tom
Lane.
2022-07-29 16:31:57 -04:00
Robert Haas 80d6907219 Fix mistake in bbe08b8869.
The earlier commit used pg_class.relfilenode where it should have
used pg_class.oid. This could lead to emitting an UPDATE statement
into the dump that would update nothing (or the wrong thing) when
executed in the new cluster, resulting in relfrozenxid and
relminmxid being improperly carried forward for pg_largeobject.

Noticed by Dilip Kumar.

Discussion: http://postgr.es/m/CAFiTN-ty1Gzs6stk2vt9BJiq0m0hzf=aPnh3a-4Z3Tk5GzoENw@mail.gmail.com
2022-07-29 11:20:07 -04:00
Robert Haas bbe08b8869 Use TRUNCATE to preserve relfilenode for pg_largeobject + index.
Commit 9a974cbcba arranged to preserve
the relfilenode of user tables across pg_upgrade, but failed to notice
that pg_upgrade treats pg_largeobject as a user table and thus it needs
the same treatment. Otherwise, large objects will appear to vanish
after a pg_upgrade.

Commit d498e052b4 fixed this problem
by teaching pg_dump to UPDATE pg_class.relfilenode for pg_largeobject
and its index. However, because an UPDATE on the catalog rows doesn't
change anything on disk, this can leave stray files behind in the new
cluster. They will normally be empty, but it's a little bit untidy.

Hence, this commit arranges to do the same thing using DDL. Specifically,
it makes TRUNCATE work for the pg_largeobject catalog when in
binary-upgrade mode, and it then uses that command in binary-upgrade
dumps as a way of setting pg_class.relfilenode for pg_largeobject and
its index. That way, the old files are removed from the new cluster.
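
As a sketch, a binary-upgrade dump now emits something like the
following for pg_largeobject (illustrative values; per the follow-up
fix above, the TRUNCATE must precede the horizon updates, or it would
overwrite them):

TRUNCATE TABLE pg_catalog.pg_largeobject;
UPDATE pg_catalog.pg_class
SET relfrozenxid = '718', relminmxid = '1'
WHERE oid = 'pg_catalog.pg_largeobject'::pg_catalog.regclass;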

Discussion: http://postgr.es/m/CA+TgmoYYMXGUJO5GZk1-MByJGu_bB8CbOL6GJQC8=Bzt6x6vDg@mail.gmail.com
2022-07-28 16:03:42 -04:00
Amit Kapila 366283961a Allow users to skip logical replication of data having origin.
This patch adds a new SUBSCRIPTION parameter "origin". It specifies
whether the subscription will request the publisher to only send changes
that don't have an origin or send changes regardless of origin. Setting it
to "none" means that the subscription will request the publisher to only
send changes that have no origin associated. Setting it to "any" means
that the publisher sends changes regardless of their origin. The default
is "any".
Usage:
CREATE SUBSCRIPTION sub1 CONNECTION 'dbname=postgres port=9999'
PUBLICATION pub1 WITH (origin = none);

This can be used to avoid loops (infinite replication of the same data)
among replication nodes.

This feature can filter only the replication data that originates from
WAL; for the initial sync (the initial copy of table data) there is no
such facility, since origin can be distinguished only for data coming
from WAL. As a follow-up patch, we are planning to forbid the initial
sync if the origin is specified as none and we notice that the
publication tables were also replicated from other publishers, to avoid
duplicate data or loops.

We forbid creating replication origins named 'none' or 'any', to avoid
confusion with the option values of the same names.

Author: Vignesh C, Amit Kapila
Reviewed-By: Peter Smith, Amit Kapila, Dilip Kumar, Shi yu, Ashutosh Bapat, Hayato Kuroda
Discussion: https://postgr.es/m/CALDaNm0gwjY_4HFxvvty01BOT01q_fJLKQ3pWP9=9orqubhjcQ@mail.gmail.com
2022-07-21 08:47:38 +05:30
Robert Haas d498e052b4 Preserve relfilenode of pg_largeobject and its index across pg_upgrade.
Commit 9a974cbcba did this for user
tables, but pg_upgrade treats pg_largeobject as a user table, and so
needs the same treatment. Without this fix, if you rewrite the
pg_largeobject table and then perform an upgrade with pg_upgrade, the
table will apparently be empty on the new cluster, while all of your
objects will end up with an orphaned file.

With this fix, instead of the old cluster's pg_largeobject files ending
up orphaned, the original files from the new cluster do. That's mostly
harmless because we expect the table to be empty, but we might want
to arrange to remove them as part of the upgrade. Since we're still
debating the best way of doing that, I (rhaas) have decided to postpone
dealing with that problem and get the basic fix committed.

Justin Pryzby, reviewed by Shruthi Gowda and by me.

Discussion: http://postgr.es/m/20220701231413.GI13040@telsasoft.com
2022-07-08 10:20:27 -04:00
Robert Haas b0a55e4329 Change internal RelFileNode references to RelFileNumber or RelFileLocator.
We have been using the term RelFileNode to refer to either (1) the
integer that is used to name the sequence of files for a certain relation
within the directory set aside for that tablespace/database combination;
or (2) that value plus the OIDs of the tablespace and database; or
occasionally (3) the whole series of files created for a relation
based on those values. Using the same name for more than one thing is
confusing.

Replace RelFileNode with RelFileNumber when we're talking about just the
single number, i.e. (1) from above, and with RelFileLocator when we're
talking about all the things that are needed to locate a relation's files
on disk, i.e. (2) from above. In the places where we refer to (3) as
a relfilenode, instead refer to "relation storage".

Since there is a ton of SQL code in the world that knows about
pg_class.relfilenode, don't change the name of that column, or of other
SQL-facing things that derive their name from it.

On the other hand, do adjust closely-related internal terminology. For
example, the structure member names dbNode and spcNode appear to be
derived from the fact that the structure itself was called RelFileNode,
so change those to dbOid and spcOid. Likewise, various variables with
names like rnode and relnode get renamed appropriately, according to
how they're being used in context.

Hopefully, this is clearer than before. It is also preparation for
future patches that intend to widen the relfilenumber fields from their
current width of 32 bits. Variables that store a relfilenumber are now
declared as type RelFileNumber rather than type Oid; right now, these
are the same, but that can now more easily be changed.

Dilip Kumar, per an idea from me. Reviewed also by Andres Freund.
I fixed some whitespace issues, changed a couple of words in a
comment, and made one other minor correction.

Discussion: http://postgr.es/m/CA+TgmoamOtXbVAQf9hWFzonUo6bhhjS6toZQd7HZ-pmojtAmag@mail.gmail.com
Discussion: http://postgr.es/m/CA+Tgmobp7+7kmi4gkq7Y+4AM9fTvL+O1oQ4-5gFTT+6Ng-dQ=g@mail.gmail.com
Discussion: http://postgr.es/m/CAFiTN-vTe79M8uDH1yprOU64MNFE+R3ODRuA+JWf27JbhY4hJw@mail.gmail.com
2022-07-06 11:39:09 -04:00
Peter Eisentraut 02c408e21a Remove redundant null pointer checks before free()
Per applicable standards, free() with a null pointer is a no-op.
Systems that don't observe that are ancient and no longer relevant.
Some PostgreSQL code already required this behavior, so this change
does not introduce any new requirements, just makes the code more
consistent.

Discussion: https://www.postgresql.org/message-id/flat/dac5d2d0-98f5-94d9-8e69-46da2413593d%40enterprisedb.com
2022-07-03 11:47:15 +02:00
Tom Lane 23e7b38bfe Pre-beta mechanical code beautification.
Run pgindent, pgperltidy, and reformat-dat-files.
I manually fixed a couple of comments that pgindent uglified.
2022-05-12 15:17:30 -04:00
Tom Lane 2cb1272445 Rethink method for assigning OIDs to the template0 and postgres DBs.
Commit aa0105141 assigned fixed OIDs to template0 and postgres
in a very ad-hoc way.  Notably, instead of teaching Catalog.pm
about these OIDs, the unused_oids script was just hacked to
not show them as unused.  That's problematic since, for example,
duplicate_oids wouldn't report any future conflict.  Hence,
invent a macro DECLARE_OID_DEFINING_MACRO() that can be used to
define an OID that is known to Catalog.pm and will participate
in duplicate-detection as well as renumbering by renumber_oids.pl.
(We don't anticipate renumbering these particular OIDs, but we
might as well build out all the Catalog.pm infrastructure while
we're here.)

Another issue is that aa0105141 neglected to touch IsPinnedObject,
with the result that it now claimed template0 and postgres are
pinned.  The right thing to do there seems to be to teach it that
no database is pinned, since in fact DROP DATABASE doesn't check
for pinned-ness (and at least for these cases, that is an
intentional choice).  It's not clear whether this wrong answer
had any visible effect, but perhaps it could have resulted in
erroneous management of dependency entries.

In passing, rename the TemplateDbOid macro to Template1DbOid
to reduce confusion (likely we should have done that way back
when we invented template0, but we didn't), and rename the
OID macros for template0 and postgres to have a similar style.

There are no changes to postgres.bki here, so no need for a
catversion bump.

Discussion: https://postgr.es/m/2935358.1650479692@sss.pgh.pa.us
2022-04-21 16:23:15 -04:00
Robert Haas d2d3547979 Allow db.schema.table patterns, but complain about random garbage.
psql, pg_dump, and pg_amcheck share code to process object name
patterns like 'foo*.bar*' to match all tables with names starting in
'bar' that are in schemas starting with 'foo'. Before v14, any number
of extra name parts were silently ignored, so a command line '\d
foo.bar.baz.bletch.quux' was interpreted as '\d bletch.quux'.  In v14,
as a result of commit 2c8726c4b0, we
instead treated this as a request for table quux in a schema named
'foo.bar.baz.bletch'. That caused problems for people like Justin
Pryzby who were accustomed to copying strings of the form
db.schema.table from messages generated by PostgreSQL itself and using
them as arguments to \d.

Accordingly, revise things so that if an object name pattern contains
more parts than we're expecting, we throw an error, unless there's
exactly one extra part and it matches the current database name.
That way, thisdb.myschema.mytable is accepted as meaning just
myschema.mytable, but otherdb.myschema.mytable is an error, and so
is some.random.garbage.myschema.mytable.

Mark Dilger, per report from Justin Pryzby and discussion among
various people.

Discussion: https://www.postgresql.org/message-id/20211013165426.GD27491%40telsasoft.com
2022-04-20 11:37:29 -04:00
Tom Lane 9a374b77fb Improve frontend error logging style.
Get rid of the separate "FATAL" log level, as it was applied
so inconsistently as to be meaningless.  This mostly involves
s/pg_log_fatal/pg_log_error/g.

Create a macro pg_fatal() to handle the common use-case of
pg_log_error() immediately followed by exit(1).  Various
modules had already invented either this or equivalent macros;
standardize on pg_fatal() and apply it where possible.

Invent the ability to add "detail" and "hint" messages to a
frontend message, much as we have long had in the backend.

Except where rewording was needed to convert existing coding
to detail/hint style, I have (mostly) resisted the temptation
to change existing message wording.

Patch by me.  Design and patch reviewed at various stages by
Robert Haas, Kyotaro Horiguchi, Peter Eisentraut and
Daniel Gustafsson.

Discussion: https://postgr.es/m/1363732.1636496441@sss.pgh.pa.us
2022-04-08 14:55:14 -04:00
Tomas Vondra 2c7ea57e56 Revert "Logical decoding of sequences"
This reverts a sequence of commits, implementing features related to
logical decoding and replication of sequences:

 - 0da92dc530
 - 80901b3291
 - b779d7d8fd
 - d5ed9da41d
 - a180c2b34d
 - 75b1521dae
 - 2d2232933b
 - 002c9dd97a
 - 05843b1aa4

The implementation has issues, mostly due to combining transactional and
non-transactional behavior of sequences. It's not clear how this could
be fixed, but it'll require reworking a significant part of the patch.

Discussion: https://postgr.es/m/95345a19-d508-63d1-860a-f5c2f41e8d40@enterprisedb.com
2022-04-07 20:06:36 +02:00
Peter Eisentraut 344d62fb9a Unlogged sequences
Add support for unlogged sequences.  Unlike for unlogged tables, this
is not a performance feature.  It allows sequences associated with
unlogged tables to be excluded from replication.

A new subcommand ALTER SEQUENCE ... SET LOGGED/UNLOGGED is added.

An identity/serial sequence now automatically gets and follows the
persistence level (logged/unlogged) of its owning table.  (The
sequences owned by temporary tables were already temporary through the
separate mechanism in RangeVarAdjustRelationPersistence().)  But you
can still change the persistence of an owned sequence separately.
Also, pg_dump and pg_upgrade preserve the persistence of existing
sequences.
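
For example (hypothetical sequence name):

CREATE UNLOGGED SEQUENCE seq1;
ALTER SEQUENCE seq1 SET LOGGED;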

Discussion: https://www.postgresql.org/message-id/flat/04e12818-2f98-257c-b926-2845d74ed04f%402ndquadrant.com
2022-04-07 16:18:00 +02:00
Tomas Vondra 923def9a53 Allow specifying column lists for logical replication
This allows specifying an optional column list when adding a table to
logical replication. The column list may be specified after the table
name, enclosed in parentheses. Columns not included in this list are not
sent to the subscriber, allowing the schema on the subscriber to be a
subset of the publisher schema.
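
For example (hypothetical table and columns):

CREATE PUBLICATION pub_users FOR TABLE users (id, name);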

For UPDATE/DELETE publications, the column list needs to cover all
REPLICA IDENTITY columns. For INSERT publications, the column list is
arbitrary and may omit some REPLICA IDENTITY columns. Furthermore, if
the table uses REPLICA IDENTITY FULL, a column list is not allowed.

The column list can contain only simple column references. Complex
expressions, function calls etc. are not allowed. This restriction could
be relaxed in the future.

During the initial table synchronization, only columns included in the
column list are copied to the subscriber. If the subscription has
several publications, containing the same table with different column
lists, columns specified in any of the lists will be copied.

This means all columns are replicated if the table has no column list
at all (which is treated as a column list with all columns), or when one
of the publications is defined as FOR ALL TABLES (possibly IN SCHEMA,
matching the schema of the table).

For partitioned tables, publish_via_partition_root determines whether
the column list for the root or the leaf relation will be used. If the
parameter is 'false' (the default), the list defined for the leaf
relation is used. Otherwise, the column list for the root partition
will be used.

Psql commands \dRp+ and \d <table-name> now display any column lists.

Author: Tomas Vondra, Alvaro Herrera, Rahila Syed
Reviewed-by: Peter Eisentraut, Alvaro Herrera, Vignesh C, Ibrar Ahmed,
Amit Kapila, Hou zj, Peter Smith, Wang wei, Tang, Shi yu
Discussion: https://postgr.es/m/CAH2L28vddB_NFdRVpuyRBJEBWjz4BSyTB=_ektNRH8NJ1jf95g@mail.gmail.com
2022-03-26 01:01:27 +01:00
Tomas Vondra 75b1521dae Add decoding of sequences to built-in replication
This commit adds support for decoding of sequences to the built-in
replication (the infrastructure was added by commit 0da92dc530).

The syntax and behavior mostly mimics handling of tables, i.e. a
publication may be defined as FOR ALL SEQUENCES (replicating all
sequences in a database), FOR ALL SEQUENCES IN SCHEMA (replicating
all sequences in a particular schema) or individual sequences.

To publish sequence modifications, the publication has to include
'sequence' action. The protocol is extended with a new message,
describing sequence increments.

A new system view pg_publication_sequences lists all the sequences
added to a publication, both directly and indirectly. Various psql
commands (\d and \dRp) are improved to also display publications
including a given sequence, or sequences included in a publication.

Author: Tomas Vondra, Cary Huang
Reviewed-by: Peter Eisentraut, Amit Kapila, Hannu Krosing, Andres
             Freund, Petr Jelinek
Discussion: https://postgr.es/m/d045f3c2-6cfb-06d3-5540-e63c320df8bc@enterprisedb.com
Discussion: https://postgr.es/m/1710ed7e13b.cd7177461430746.3372264562543607781@highgo.ca
2022-03-24 18:49:27 +01:00
Amit Kapila 208c5d65bb Add ALTER SUBSCRIPTION ... SKIP.
This feature allows skipping the transaction on subscriber nodes.

If an incoming change violates a constraint, logical replication stops
until it's resolved. Currently, users need to either manually resolve the
conflict by updating a subscriber-side database or use the function
pg_replication_origin_advance() to skip the conflicting transaction. This
commit introduces a simpler way to skip the conflicting transactions.

The user can specify LSN by ALTER SUBSCRIPTION ... SKIP (lsn = XXX),
which allows the apply worker to skip the transaction finished at
specified LSN. The apply worker skips all data modification changes within
the transaction.
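
For example (hypothetical subscription and LSN):

ALTER SUBSCRIPTION sub1 SKIP (lsn = '0/14C0378');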

Author: Masahiko Sawada
Reviewed-by: Takamichi Osumi, Hou Zhijie, Peter Eisentraut, Amit Kapila, Shi Yu, Vignesh C, Greg Nancarrow, Haiying Tang, Euler Taveira
Discussion: https://postgr.es/m/CAD21AoDeScrsHhLyEPYqN3sydg6PxAPVBboK=30xJfUVihNZDA@mail.gmail.com
2022-03-22 07:11:19 +05:30
Peter Eisentraut f2553d4306 Add option to use ICU as global locale provider
This adds the option to use ICU as the default locale provider for
either the whole cluster or a database.  New options for initdb,
createdb, and CREATE DATABASE are used to select this.

Since some (legacy) code still uses the libc locale facilities
directly, we still need to set the libc global locale settings even if
ICU is otherwise selected.  So pg_database now has three
locale-related fields: the existing datcollate and datctype, which are
always set, and a new daticulocale, which is only set if ICU is
selected.  A similar change is made in pg_collation for consistency,
but in that case, only the libc-related fields or the ICU-related
field is set, never both.
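
A sketch of the resulting CREATE DATABASE options (hypothetical names;
as noted above, the libc locale still has to be provided):

CREATE DATABASE db1 TEMPLATE template0
    LOCALE_PROVIDER icu ICU_LOCALE 'en-US' LOCALE 'en_US.utf8';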

Reviewed-by: Julien Rouhaud <rjuju123@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/5e756dd6-0e91-d778-96fd-b1bcb06c161a%402ndquadrant.com
2022-03-17 11:13:16 +01:00
Amit Kapila 705e20f855 Optionally disable subscriptions on error.
Logical replication apply workers for a subscription can easily get stuck
in an infinite loop of attempting to apply a change, triggering an error
(such as a constraint violation), exiting with the error written to the
subscription server log, and restarting.

To partially remedy the situation, this patch adds a new subscription
option named 'disable_on_error'. To be consistent with old behavior, this
option defaults to false. When true, both the tablesync worker and apply
worker catch any errors thrown and disable the subscription in order to
break the loop. The error is still written to the logs.
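
For example (hypothetical names, mirroring the usage examples elsewhere
in this log):

CREATE SUBSCRIPTION sub1 CONNECTION 'dbname=postgres port=9999'
PUBLICATION pub1 WITH (disable_on_error = true);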

Once the subscription is disabled, users can either manually resolve the
conflict/error or skip the conflicting transaction by using the
pg_replication_origin_advance() function. After resolving the conflict,
users need to re-enable the subscription to allow the apply process to
proceed.

Author: Osumi Takamichi and Mark Dilger
Reviewed-by: Greg Nancarrow, Vignesh C, Amit Kapila, Wang wei, Tang Haiying, Peter Smith, Masahiko Sawada, Shi Yu
Discussion : https://postgr.es/m/DB35438F-9356-4841-89A0-412709EBD3AB%40enterprisedb.com
2022-03-14 09:32:40 +05:30
Amit Kapila 52e4f0cd47 Allow specifying row filters for logical replication of tables.
This feature adds row filtering for publication tables. When a publication
is defined or modified, an optional WHERE clause can be specified. Rows
that don't satisfy this WHERE clause will be filtered out. This allows a
set of tables to be partially replicated. The row filter is per table. A
new row filter can be added simply by specifying a WHERE clause after the
table name. The WHERE clause must be enclosed by parentheses.
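
For example (hypothetical table and predicate):

CREATE PUBLICATION active_orders FOR TABLE orders
WHERE (status = 'active');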

The row filter WHERE clause for a table added to a publication that
publishes UPDATE and/or DELETE operations must contain only columns that
are covered by REPLICA IDENTITY. The row filter WHERE clause for a table
added to a publication that publishes INSERT can use any column. If the
row filter evaluates to NULL, it is regarded as "false". The WHERE clause
only allows simple expressions that don't have user-defined functions,
user-defined operators, user-defined types, user-defined collations,
non-immutable built-in functions, or references to system columns. These
restrictions could be addressed in the future.

If you choose to do the initial table synchronization, only data that
satisfies the row filters is copied to the subscriber. If the subscription
has several publications in which a table has been published with
different WHERE clauses, rows that satisfy ANY of the expressions will be
copied. If a subscriber is a pre-15 version, the initial table
synchronization won't use row filters even if they are defined in the
publisher.

The row filters are applied before publishing the changes. If the
subscription has several publications in which the same table has been
published with different filters (for the same publish operation), those
expressions get OR'ed together so that rows satisfying any of the
expressions will be replicated.

This means all the other filters become redundant if (a) one of the
publications has no filter at all, (b) one of the publications was
created using FOR ALL TABLES, (c) one of the publications was created
using FOR ALL TABLES IN SCHEMA and the table belongs to that same schema.

If your publication contains a partitioned table, the publication
parameter publish_via_partition_root determines if it uses the partition's
row filter (if the parameter is false, the default) or the root
partitioned table's row filter.

Psql commands \dRp+ and \d <table-name> will display any row filters.

Author: Hou Zhijie, Euler Taveira, Peter Smith, Ajin Cherian
Reviewed-by: Greg Nancarrow, Haiying Tang, Amit Kapila, Tomas Vondra, Dilip Kumar, Vignesh C, Alvaro Herrera, Andres Freund, Wei Wang
Discussion: https://www.postgresql.org/message-id/flat/CAHE3wggb715X%2BmK_DitLXF25B%3DjE6xyNCH4YOwM860JR7HarGQ%40mail.gmail.com
2022-02-22 08:11:50 +05:30
Peter Eisentraut 37851a8b83 Database-level collation version tracking
This adds to database objects the same version tracking that collation
objects have.  There is a new pg_database column datcollversion that
stores the version, a new function
pg_database_collation_actual_version() to get the version from the
operating system, and a new subcommand ALTER DATABASE ... REFRESH
COLLATION VERSION.
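
For example (hypothetical database name):

SELECT datcollversion,
       pg_database_collation_actual_version(oid)
FROM pg_database WHERE datname = 'db1';
ALTER DATABASE db1 REFRESH COLLATION VERSION;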

This was not originally added together with pg_collation.collversion,
since originally version tracking was only supported for ICU, and ICU
on a database-level is not currently supported.  But we now have
version tracking for glibc (since PG13), FreeBSD (since PG14), and
Windows (since PG13), so this is useful to have now.

Reviewed-by: Julien Rouhaud <rjuju123@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/f0ff3190-29a3-5b39-a179-fa32eee57db6%40enterprisedb.com
2022-02-14 08:27:26 +01:00
Peter Eisentraut 4c5c41b4d9 Remove unnecessary resetPQExpBuffer call
Oversight in e2c52beecd.

Author: Julien Rouhaud <rjuju123@gmail.com>
Reviewed-by: Nathan Bossart <nathandbossart@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/20220209025007.eogz2aivcnvw46ym%40jrouhaud
2022-02-10 12:23:40 +01:00
Peter Eisentraut 94aa7cc5f7 Add UNIQUE null treatment option
The SQL standard has been ambiguous about whether null values in
unique constraints should be considered equal or not.  Different
implementations have different behaviors.  In the SQL:202x draft, this
has been formalized by making this implementation-defined and adding
an option on unique constraint definitions UNIQUE [ NULLS [NOT]
DISTINCT ] to choose a behavior explicitly.

This patch adds this option to PostgreSQL.  The default behavior
remains UNIQUE NULLS DISTINCT.  Making this happen in the btree code
is pretty easy; most of the patch is just to carry the flag around to
all the places that need it.

The CREATE UNIQUE INDEX syntax extension is not from the standard;
it's my own invention.
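
For example (hypothetical table), in both the constraint and the index
syntax:

CREATE TABLE t (x int UNIQUE NULLS NOT DISTINCT, y int);
CREATE UNIQUE INDEX ON t (y) NULLS NOT DISTINCT;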

I named all the internal flags, catalog columns, etc. in the negative
("nulls not distinct") so that the default PostgreSQL behavior is the
default if the flag is false.

Reviewed-by: Maxim Orlov <orlovmg@gmail.com>
Reviewed-by: Pavel Borisov <pashkin.elfe@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/84e5ee1b-387e-9a54-c326-9082674bde78@enterprisedb.com
2022-02-03 11:48:21 +01:00
Robert Haas aa01051418 pg_upgrade: Preserve database OIDs.
Commit 9a974cbcba arranged to preserve
relfilenodes and tablespace OIDs. For similar reasons, also arrange
to preserve database OIDs.

One problem is that, up until now, the OIDs assigned to the template0
and postgres databases have not been fixed. This could be a problem
when upgrading, because pg_upgrade might try to migrate a database
from the old cluster to the new cluster while keeping the OID and find
a different database with that OID, resulting in a failure. If it finds
a database with the same name and the same OID that's OK: it will be
dropped and recreated. But the same OID and a different name is a
problem.

To prevent that, fix the OIDs for postgres and template0 to specific
values less than 16384. To avoid running afoul of this rule, these
values should not be changed in future releases. It's not a problem
that these OIDs aren't fixed in existing releases, because the OIDs
that we're assigning here weren't used for either of these databases
in any previous release. Thus, there's no chance that an upgrade of
a cluster from any previous release will collide with the OIDs we're
assigning here. And going forward, the OIDs will always be fixed, so
the only potential collision is with a system database having the
same name and the same OID, which is OK.

This patch lets users assign a specific OID to a database as well,
provided that it is not less than 16384. I (rhaas) thought
it might be better not to expose this capability to users, but the
consensus was otherwise, so the syntax is documented. Letting users
assign OIDs below 16384 would not be OK, though, because a
user-created database with a low-numbered OID might collide with a
system-created database in a future release. We therefore prohibit
that.
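
For example (illustrative OID; anything below 16384 is rejected):

CREATE DATABASE db1 WITH OID = 50000;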

Shruthi KC, based on an earlier patch from Antonin Houska, reviewed
and with some adjustments by me.

Discussion: http://postgr.es/m/CA+TgmoYgTwYcUmB=e8+hRHOFA0kkS6Kde85+UNdon6q7bt1niQ@mail.gmail.com
Discussion: http://postgr.es/m/CAASxf_Mnwm1Dh2vd5FAhVX6S1nwNSZUB1z12VddYtM++H2+p7w@mail.gmail.com
2022-01-24 14:23:43 -05:00
Tom Lane ac7df108cf pg_dump: avoid useless query in binary_upgrade_set_type_oids_by_type_oid
Commit 6df7a9698 wrote appendPQExpBuffer where it should have
written printfPQExpBuffer.  This resulted in re-issuing the
previous query along with the desired one, which very accidentally
had no negative consequences except for some wasted cycles.

Back-patch to v14 where that came in.

Discussion: https://postgr.es/m/1714711.1642962663@sss.pgh.pa.us
2022-01-23 13:54:24 -05:00
Tom Lane 353708e1fb Clean up recent Coverity complaints.
Commit 5c649fe15 introduced a memory leak into pg_basebackup's
parse_compress_options.  (I simplified nearby code while at it.)

Commit 9a974cbcb introduced a memory leak into pg_dump's
binary_upgrade_set_pg_class_oids.

Coverity also complained about a call of SnapBuildProcessChange that
ignored the result, unlike every other call of that function.  This
is evidently intentional, so add a (void) cast to indicate that.
(It's also old, dating to b89e15105; I suppose the reason it showed
up now is 7a5f6b474's recent rearrangement of nearby code.)
2022-01-23 12:51:38 -05:00
Robert Haas 9a974cbcba pg_upgrade: Preserve relfilenodes and tablespace OIDs.
Currently, database OIDs, relfilenodes, and tablespace OIDs can all
change when a cluster is upgraded using pg_upgrade. It seems better
to preserve them, because (1) it makes troubleshooting pg_upgrade
easier, since you don't have to do a lot of work to match up files
in the old and new clusters, (2) it allows 'rsync' to save bandwidth
when used to re-sync a cluster after an upgrade, and (3) if we ever
encrypt or sign blocks, we would likely want to use a nonce that
depends on these values.

This patch only arranges to preserve relfilenodes and tablespace
OIDs. Database OIDs have a similar issue, but there are some tricky
points in that case that do not apply here, so that problem is left
for another patch.

Shruthi KC, based on an earlier patch from Antonin Houska, reviewed
and with some adjustments by me.

Discussion: http://postgr.es/m/CA+TgmoYgTwYcUmB=e8+hRHOFA0kkS6Kde85+UNdon6q7bt1niQ@mail.gmail.com
2022-01-17 13:40:27 -05:00
Michael Paquier 2158628864 Add support for --no-table-access-method in pg_{dump,dumpall,restore}
The logic is similar to default_tablespace in some ways: when this new
option is specified, no SET queries on default_table_access_method are
generated before dumping or restoring an object (tables and materialized
views support table AMs).

This option is useful to enforce the use of a default access method even
if some tables included in a dump use an AM different than the system's
default.

There are already two cases in the TAP tests of pg_dump with a table and
a materialized view that use a non-default table AM, and these are
extended to check that the new option does not generate SET clauses on
default_table_access_method.

Author: Justin Pryzby
Discussion: https://postgr.es/m/20211207153930.GR17618@telsasoft.com
2022-01-17 14:51:46 +09:00
Bruce Momjian 27b77ecf9f Update copyright for 2022
Backpatch-through: 10
2022-01-07 19:04:57 -05:00
Alvaro Herrera f4566345cf Create foreign key triggers in partitioned tables too
While user-defined triggers defined on a partitioned table have
a catalog definition for both it and its partitions, internal
triggers used by foreign keys defined on partitioned tables only
have a catalog definition for its partitions.  This commit fixes
that so that partitioned tables get the foreign key triggers too,
just like user-defined triggers.  Moreover, like user-defined
triggers, partitions' internal triggers will now also have their
tgparentid set appropriately.  This is to allow subsequent commit(s)
to make foreign-key-related events fire, in some cases, using the
parent table's triggers instead of the partitions' own.

This also changes what tgisinternal means in some cases.  Currently,
it means either that the trigger is an internal implementation object
of a foreign key constraint, or a "child" trigger on a partition
cloned from the trigger on the parent.  This commit changes it to
only mean the former to avoid confusion.  As for the latter, it can
be told by tgparentid being nonzero, which is now true both for user-
defined and foreign key's internal triggers.

Author: Amit Langote <amitlangote09@gmail.com>
Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
Reviewed-by: Arne Roland <A.Roland@index.de>
Discussion: https://postgr.es/m/CA+HiwqG7LQSK+n8Bki8tWv7piHD=PnZro2y6ysU2-28JS6cfgQ@mail.gmail.com
2022-01-05 19:00:13 -03:00
Peter Eisentraut 56a3e848c7 pg_dump: Refactor dumpDatabase()
Rearrange the version-dependent pieces in the new more modular style.
2022-01-04 16:19:48 +01:00
Tom Lane 3e6e86abca pg_dump: avoid unsafe function calls in getPolicies().
getPolicies() had the same disease I fixed in other places in
commit e3fcbbd62, i.e., it was calling pg_get_expr() for
expressions on tables that we don't necessarily have lock on.
To fix, restrict the query to only collect interesting rows,
rather than doing the filtering on the client side.

Like the previous patch, apply to HEAD only for now.

Discussion: https://postgr.es/m/2273648.1634764485@sss.pgh.pa.us
Discussion: https://postgr.es/m/7d7eb6128f40401d81b3b7a898b6b4de@W2012-02.nidsa.loc
2021-12-31 12:47:57 -05:00
Tom Lane d5e8930f50 pg_dump: minor performance improvements from eliminating sub-SELECTs.
Get rid of the "username_subquery" mechanism in favor of doing
local lookups of role names from role OIDs.  The PG backend isn't
terribly smart about scalar SubLinks in SELECT output lists,
so this offers a small performance improvement, at least in
installations with more than a couple of users.  In any case
the old method didn't make for particularly readable SQL code.
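
For reference, the removed mechanism interpolated a scalar sub-select
of roughly this shape into each per-object query (a sketch, not the
literal code):

SELECT (SELECT rolname FROM pg_catalog.pg_roles WHERE oid = relowner)
       AS rolname, ...

Now only the owner OID is fetched, and it is resolved to a name on the
client side.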

While at it, I removed the various custom warning messages about
failing to find an object's owner, in favor of just fatal'ing
in the local lookup function.  AFAIK there is no reason any
longer to treat that as anything but a catalog-corruption case,
and certainly no reason to make translators deal with a dozen
different messages where one would do.  (If it turns out that
fatal() is indeed a bad idea, we can back off to issuing
pg_log_warning() and returning an empty string, resulting in
the same behavior as before, except more consistent.)

Also drop an entirely unnecessary sub-SELECT to check on the
pg_depend status of a sequence relation: we already have a
LEFT JOIN to fetch the row of interest in the FROM clause.

Discussion: https://postgr.es/m/2460369.1640903318@sss.pgh.pa.us
2021-12-31 11:39:26 -05:00
Tom Lane 5e65df64d6 pg_dump: make dumpPublication et al. less unlike sibling functions.
dumpPublication, dumpPublicationNamespace, dumpPublicationTable, and
dumpSubscription failed to check dataOnly.  This is just a latent bug,
because pg_backup_archiver.c would filter out the ArchiveEntry later;
but they're wasting cycles in data-only dumps, and the omission might
become a live bug someday.  In any case, it's not good to have some
dumpFoo functions do this and some not.

On the same reasoning, make dumpPublicationNamespace follow the
same pattern as every other dumpFoo function for checking the
DUMP_COMPONENT_DEFINITION flag.  (Since 5209c0ba0, we wouldn't
even get here if that flag isn't set, so checking it is just
pro forma right now.  But it might not be so forever.)

Since this is just cosmetic and/or future-proofing, no need for
back-patch.
2021-12-30 19:40:53 -05:00
Tom Lane c7cf73eb7b Minor cleanup/optimization in pg_dump.
In the wake of commits 05649b88c and 5209c0ba0, findComments() and
findSecLabels() no longer use their "Archive *fout" arguments,
so get rid of those.

While doing that, I noticed that there's no very good reason why
dumpCompositeTypeColComments() should be doing its own query to fetch
the column names of the composite type, when the calling function has
just fetched the same data.  Tweak it to use that query result.  This
probably doesn't save a lot for most people, because since 5209c0ba0
we won't get into this code at all unless the composite type has at
least one comment.  Nonetheless, it's a wasted query.
2021-12-30 14:29:32 -05:00
Peter Eisentraut e2c52beecd pg_dump: Refactor getIndexes()
Rearrange the version-dependent pieces in the new more modular style.

Discussion: https://www.postgresql.org/message-id/flat/67a28a3f-7b79-a5a9-fcc7-947b170e66f0%40enterprisedb.com
2021-12-20 11:18:01 +01:00
Tom Lane b1c169caf0 Remove some more dead code in pg_dump.
Coverity complained that parts of dumpFunc() and buildACLCommands()
were now unreachable, as indeed they are.  Remove 'em.

In passing, make dumpFunc's handling of protrftypes less gratuitously
different from other fields.
2021-12-19 17:18:34 -05:00
Tom Lane c49d926833 Clean up some more freshly-dead code in pg_dump and pg_upgrade.
I missed a few things in 30e7c175b and e469f0aaf,
as noted by Justin Pryzby.

Discussion: https://postgr.es/m/2923349.1634942313@sss.pgh.pa.us
2021-12-16 12:01:59 -05:00
Tom Lane 2a712066d0 Remove pg_dump's --no-synchronized-snapshots switch.
Server versions for which there was a plausible reason to
use this switch are all out of support now.  Leaving it
around would accomplish little except to let careless DBAs
shoot themselves in the foot.

Discussion: https://postgr.es/m/556122.1639520324@sss.pgh.pa.us
2021-12-15 18:44:47 -05:00
Tom Lane 30e7c175b8 Remove pg_dump/pg_dumpall support for dumping from pre-9.2 servers.
Per discussion, we'll limit support for old servers to those branches
that can still be built easily on modern platforms, which as of now
is 9.2 and up.  Remove over a thousand lines of code dedicated to
dumping from older server versions.  (As in previous changes of
this sort, we aren't removing pg_restore's ability to read older
archive files ... though it's fair to wonder how that might be
tested nowadays.)  This cleans up some dead code left behind by
commit 989596152.

Discussion: https://postgr.es/m/2923349.1634942313@sss.pgh.pa.us
2021-12-14 17:09:07 -05:00
Tom Lane 65aaed22a8 Account for TOAST data while scheduling parallel dumps.
In parallel mode, pg_dump tries to order the table-data-dumping
jobs with the largest tables first.  However, it was only
consulting the pg_class.relpages value to determine table size.
This ignores TOAST data, and so we could make poor scheduling
decisions in cases where some large tables are mostly TOASTed
data while others have very little.  To fix, add in the relpages
value for the TOAST table as well.
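
A sketch of the adjusted size estimate (not the literal query text):

SELECT c.oid, c.relpages + COALESCE(t.relpages, 0) AS est_pages
FROM pg_class c
LEFT JOIN pg_class t ON t.oid = c.reltoastrelid;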

This patch also fixes a potential integer-overflow issue that
could result in poor scheduling on machines where off_t is
only 32 bits wide.  Such platforms are probably extinct in the
wild, but we do still nominally support them, so repair.

Per complaint from Hans Buschmann.

Discussion: https://postgr.es/m/7d7eb6128f40401d81b3b7a898b6b4de@W2012-02.nidsa.loc
2021-12-06 13:23:07 -05:00
Tom Lane be85727a3d Use PREPARE/EXECUTE for repetitive per-object queries in pg_dump.
For objects such as functions, pg_dump issues the same secondary
data-collection query against each object to be dumped.  This can't
readily be refactored to avoid the repetitive queries, but we can
PREPARE these queries to reduce planning costs.

This patch applies the idea to functions, aggregates, operators, and
data types.  While it could be carried further, the remaining sorts of
objects aren't likely to appear in typical databases enough times to
be worth worrying over.  Moreover, doing the PREPARE is likely to be a
net loss if there aren't at least some dozens of objects to apply the
prepared query to.
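
Schematically (hypothetical statement name, columns, and OID):

PREPARE getFunc(pg_catalog.oid) AS
  SELECT proname, provolatile FROM pg_catalog.pg_proc WHERE oid = $1;
EXECUTE getFunc(16402);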

Discussion: https://postgr.es/m/7d7eb6128f40401d81b3b7a898b6b4de@W2012-02.nidsa.loc
2021-12-06 13:14:29 -05:00
Tom Lane 9895961529 Avoid per-object queries in performance-critical paths in pg_dump.
Instead of issuing a secondary data-collection query against each
table to be dumped, issue just one query, with a WHERE clause
restricting it to be applied to only the tables we intend to dump.
Likewise for indexes, constraints, and triggers.  This greatly
reduces the number of queries needed to dump a database containing
many tables.  It might seem that WHERE clauses listing many target
OIDs could be inefficient, but at least on recent server versions
this provides a very substantial speedup.
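
Schematically, the new-style query looks something like this (a sketch
with hypothetical OIDs; it relies on unnest(), as discussed below):

    -- one query for all tables of interest, not one query per table
    SELECT t.tgrelid, t.tgname, pg_catalog.pg_get_triggerdef(t.oid)
    FROM unnest('{16385,16402,16419}'::pg_catalog.oid[]) AS src(tbloid)
    JOIN pg_catalog.pg_trigger t ON t.tgrelid = src.tbloid;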

(In principle the same thing could be done with other object types
such as functions; but that would require significant refactoring
of pg_dump, so those will be tackled in a different way in a
following patch.)

The new WHERE clauses depend on the unnest() function, which is
only present in 8.4 and above.  We could implement them differently
for older servers, but there is an ongoing discussion that will
probably result in dropping pg_dump support for servers before 9.2,
so that seems like it'd be wasted work.  For now, just bump the
server version check to require >= 8.4, without stopping to remove
any of the code that's thereby rendered dead.  We'll mop that
situation up soon.

Patch by me, based on an idea from Andres Freund.

Discussion: https://postgr.es/m/7d7eb6128f40401d81b3b7a898b6b4de@W2012-02.nidsa.loc
2021-12-06 13:07:31 -05:00
Tom Lane e3fcbbd623 Postpone calls of unsafe server-side functions in pg_dump.
Avoid calling pg_get_partkeydef(), pg_get_expr(relpartbound),
and regtypeout until we have lock on the relevant tables.
The existing coding is at serious risk of failure if there
are any concurrent DROP TABLE commands going on --- including
drops of other sessions' temp tables.
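
The safe ordering looks roughly like this (a sketch with a hypothetical
table name):

    BEGIN;
    LOCK TABLE public.parted_tbl IN ACCESS SHARE MODE;
    -- only now is deparsing safe: the table can no longer be dropped
    SELECT pg_catalog.pg_get_expr(relpartbound, oid)
    FROM pg_catalog.pg_class
    WHERE oid = 'public.parted_tbl'::pg_catalog.regclass;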

Arguably this is a bug fix that should be back-patched, but it's
moderately invasive and we've not had all that many complaints
about such failures.  Let's just put it in HEAD for now.

Discussion: https://postgr.es/m/2273648.1634764485@sss.pgh.pa.us
Discussion: https://postgr.es/m/7d7eb6128f40401d81b3b7a898b6b4de@W2012-02.nidsa.loc
2021-12-06 12:49:49 -05:00
Tom Lane 0c9d84427f Rethink pg_dump's handling of object ACLs.
Throw away most of the existing logic for this, as it was very
inefficient thanks to expensive sub-selects executed to collect
ACL data that we very possibly would have no interest in dumping.
Reduce the ACL handling in the initial per-object-type queries
to be just collection of the catalog ACL fields, as it was
originally.  Fetch pg_init_privs data separately in a single
scan of that catalog, and do the merging calculations on the
client side.  Remove the separate code path used for pre-9.6
source servers; there is no good reason to treat them differently
from newer servers that happen to have empty pg_init_privs.
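
The separate pass amounts to a single scan along these lines, with the
merging against per-object ACLs done afterwards in pg_dump itself:

    -- collect all initial-privilege data in one pass
    SELECT objoid, classoid, objsubid, privtype, initprivs
    FROM pg_catalog.pg_init_privs;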

Discussion: https://postgr.es/m/2273648.1634764485@sss.pgh.pa.us
Discussion: https://postgr.es/m/7d7eb6128f40401d81b3b7a898b6b4de@W2012-02.nidsa.loc
2021-12-06 12:39:45 -05:00
Tom Lane 5209c0ba0b Refactor pg_dump's tracking of object components to be dumped.
Split the DumpableObject.dump bitmask field into separate bitmasks
tracking which components are requested to be dumped (in the
existing "dump" field) and which components exist for the particular
object (in the new "components" field).  This gets rid of some
klugy and easily-broken logic that involved setting bits and later
clearing them.  More importantly, it restores the originally intended
behavior that pg_dump's secondary data-gathering queries should not
be executed for objects we have no interest in dumping.  That
optimization got broken when the dump flag was turned into a bitmask,
because irrelevant bits tended to remain set in many cases.  Since
the "components" field starts from a minimal set of bits and is
added onto as needed, ANDing it with "dump" provides a reliable
indicator of what we actually have to dump, without having to
complicate the logic that manages the request bits.  This makes
a significant difference in the number of queries needed when,
for example, there are many functions in extensions.

Discussion: https://postgr.es/m/2273648.1634764485@sss.pgh.pa.us
Discussion: https://postgr.es/m/7d7eb6128f40401d81b3b7a898b6b4de@W2012-02.nidsa.loc
2021-12-06 12:25:48 -05:00
Peter Eisentraut 37b2764593 Some RELKIND macro refactoring
Add more macros to group some RELKIND_* macros:

- RELKIND_HAS_PARTITIONS()
- RELKIND_HAS_TABLESPACE()
- RELKIND_HAS_TABLE_AM()

Reviewed-by: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Alvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://www.postgresql.org/message-id/flat/a574c8f1-9c84-93ad-a9e5-65233d6fc00f%40enterprisedb.com
2021-12-03 14:08:19 +01:00
Peter Eisentraut a22d6a2cb6 pg_dump: Add missing relkind case
Checking for RELKIND_MATVIEW was forgotten in
guessConstraintInheritance().  This isn't a live problem, since it is
checked in flagInhTables() which relkinds can have parents, and those
entries will have numParents==0 after that.  But after discussion it
was felt that this place should be kept consistent with
flagInhTables() and flagInhAttrs().

Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://www.postgresql.org/message-id/flat/a574c8f1-9c84-93ad-a9e5-65233d6fc00f@enterprisedb.com
2021-12-02 16:46:28 +01:00
Tom Lane 0b126c6a4b Fix pg_dump --inserts mode for generated columns with dropped columns.
If a table contains a generated column that's preceded by a dropped
column, dumpTableData_insert failed to account for the dropped
column, and would emit DEFAULT placeholder(s) in the wrong column(s).
This resulted in failures at restore time.  The default COPY code path
did not have this bug, likely explaining why it wasn't noticed sooner.
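
A minimal sketch of the failing case (hypothetical table):

    CREATE TABLE t (a int, dropme int,
                    g int GENERATED ALWAYS AS (a * 2) STORED);
    ALTER TABLE t DROP COLUMN dropme;
    -- pg_dump --inserts previously emitted the DEFAULT placeholder for
    -- the wrong column position here, breaking the restore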

While we're fixing this, we can be a little smarter about the
situation: (1) avoid unnecessarily fetching the values of generated
columns, (2) omit generated columns from the output, too, if we're
using --column-inserts.  While these modes aren't expected to be
as high-performance as the COPY path, we might as well be as
efficient as we can; it doesn't add much complexity.

Per report from Дмитрий Иванов.
Back-patch to v12 where generated columns came in.

Discussion: https://postgr.es/m/CAPL5KHrkBniyQt5e1rafm5DdXvbgiiqfEQEJ9GjtVzN71Jj5pA@mail.gmail.com
2021-11-22 15:25:48 -05:00
Amit Kapila b3812d0b9b Rename some enums to use TABLE instead of REL.
Commit 5a2832465f introduced some enums to represent "all tables in
schema" publications and used REL in their names.  Use TABLE instead of REL in
those enums to avoid confusion with other objects like SEQUENCES that can
be part of a publication in the future.

In passing: (a) change one of the newly introduced error messages to
be consistent between the CREATE and ALTER commands, and (b) add a
missing alias in one of the SQL statements used to print the
publications associated with a table.

Reported-by: Tomas Vondra, Peter Smith
Author: Vignesh C
Reviewed-by: Hou Zhijie, Peter Smith
Discussion: https://www.postgresql.org/message-id/CALDaNm0OANxuJ6RXqwZsM1MSY4s19nuH3734j4a72etDwvBETQ%40mail.gmail.com
2021-11-09 08:39:33 +05:30
Peter Eisentraut fd2706589a pg_dump: Refactor messages
This reduces the number of separate messages for translation.
2021-10-30 19:25:10 +02:00
Amit Kapila 5a2832465f Allow publishing the tables of a schema.
A new option "FOR ALL TABLES IN SCHEMA" in CREATE/ALTER PUBLICATION allows
one or more schemas to be specified; the publisher selects the tables in
those schemas and sends their data to the subscriber.

The new syntax allows specifying both the tables and schemas. For example:
CREATE PUBLICATION pub1 FOR TABLE t1,t2,t3, ALL TABLES IN SCHEMA s1,s2;
or
ALTER PUBLICATION pub1 ADD TABLE t1,t2,t3, ALL TABLES IN SCHEMA s1,s2;

A new system table "pg_publication_namespace" has been added to track
the schemas that the user wants to publish through the publication.
The output plugin (pgoutput) is modified to publish the changes if the
relation is part of a schema publication.

pg_dump is updated to identify and dump schema publications.  The \d
family of commands is updated to display schema publications, and the
\dRp+ variant now displays the associated schemas, if any.

Author: Vignesh C, Hou Zhijie, Amit Kapila
Syntax-Suggested-by: Tom Lane, Alvaro Herrera
Reviewed-by: Greg Nancarrow, Masahiko Sawada, Hou Zhijie, Amit Kapila, Haiying Tang, Ajin Cherian, Rahila Syed, Bharath Rupireddy, Mark Dilger
Tested-by: Haiying Tang
Discussion: https://www.postgresql.org/message-id/CALDaNm0OANxuJ6RXqwZsM1MSY4s19nuH3734j4a72etDwvBETQ@mail.gmail.com
2021-10-27 07:44:52 +05:30
Tom Lane 70bef49400 Fix minor memory leaks in pg_dump.
I found these by running pg_dump under "valgrind --leak-check=full".

The changes in flagInhIndexes() and getIndexes() replace allocation of
an array of which we use only some elements by individual allocations
of just the actually-needed objects.  The previous coding wasted some
memory, but more importantly it confused valgrind's leak tracking.

collectComments() and collectSecLabels() remain major blots on
the valgrind report, because they don't PQclear their query
results, in order to avoid a lot of strdup's.  That's a dubious
tradeoff, but I'll leave it alone here; an upcoming patch will
modify those functions enough to justify changing the tradeoff.
2021-10-24 12:38:26 -04:00
Tom Lane 92316a4582 In pg_dump, use simplehash.h to look up dumpable objects by OID.
Create a hash table that indexes dumpable objects by CatalogId
(that is, catalog OID + object OID).  Use this to replace the
former catalogIdMap array, as well as various other single-
catalog index arrays, and also the extension membership map.

In principle this should be faster for databases with many objects,
since lookups are now O(1) not O(log N).  However, it seems that these
lookups are pretty much negligible in context, so that no overall
performance change can be measured.  But having only one lookup
data structure to maintain makes the code simpler and more flexible,
so let's do it anyway.

Discussion: https://postgr.es/m/2595220.1634855245@sss.pgh.pa.us
2021-10-22 17:19:03 -04:00
Tom Lane 2acc84c6fd pg_dump: fix mis-dumping of non-global default privileges.
Non-global default privilege entries should be dumped as-is,
not made relative to the default ACL for their object type.
This would typically only matter if one had revoked some
on-by-default privileges in a global entry, and then wanted
to grant them again in a non-global entry.
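
For example (a sketch with a hypothetical schema), the second entry
below must be dumped as-is rather than relative to the first:

    ALTER DEFAULT PRIVILEGES
        REVOKE EXECUTE ON FUNCTIONS FROM PUBLIC;    -- global entry
    ALTER DEFAULT PRIVILEGES IN SCHEMA app
        GRANT EXECUTE ON FUNCTIONS TO PUBLIC;       -- non-global entry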

Per report from Boris Korzun.  This is an old bug, so back-patch
to all supported branches.

Neil Chen, test case by Masahiko Sawada

Discussion: https://postgr.es/m/111621616618184@mail.yandex.ru
Discussion: https://postgr.es/m/CAA3qoJnr2+1dVJObNtfec=qW4Z0nz=A9+r5bZKoTSy5RDjskMw@mail.gmail.com
2021-10-22 15:22:25 -04:00
Tom Lane 4438eb4a49 pg_dump: Reorganize getTables()
Along the same lines as 047329624, ed2c7f65b and daa9fe8a5, reduce
code duplication by having just one copy of the parts of the query
that are the same across all server versions; and make the
conditionals control the smallest possible amount of code.
This also gets rid of the confusing assortment of different ways
to accomplish the same result that we had here before.

While at it, make sure all three relevant parts of the function
list the fields in the same order.  This is just neatnik-ism,
of course.

Discussion: https://postgr.es/m/1240992.1634419055@sss.pgh.pa.us
2021-10-19 17:22:22 -04:00
Tom Lane 40dfac4fc4 Avoid core dump in pg_dump when dumping from pre-8.3 server.
Commit f0e21f2f6 missed adding a tgisinternal output column
to getTriggers' query for pre-8.3 servers.  Back-patch to v11,
like that commit.
2021-10-16 15:02:55 -04:00
Tom Lane e2ff7d9a83 Make pg_dump acquire lock on partitioned tables that are to be dumped.
It was clearly the intent to do so all along, but the original coding
fat-fingered this by checking the wrong array element.  We fixed it
in passing in 403a3d91c, but that later got reverted, and we forgot
to keep this bug fix.

Most of the time this'd be relatively harmless, since once we lock
any of the partitioned table's leaf partitions, that would suffice
to prevent major DDL on the partitioned table itself.  However, a
childless partitioned table would get dumped with no relevant lock
whatsoever, possibly allowing dump failure or inconsistent output.

Unlike 403a3d91c, there are no versioning concerns, since every server
version that has partitioned tables will allow you to lock one.

Back-patch to v10 where partitioned tables were introduced.

Discussion: https://postgr.es/m/1018205.1634346327@sss.pgh.pa.us
2021-10-16 12:23:57 -04:00
Noah Misch b073c3ccd0 Revoke PUBLIC CREATE from public schema, now owned by pg_database_owner.
This switches the default ACL to what the documentation has recommended
since CVE-2018-1058.  Upgrades will carry forward any old ownership and
ACL.  Sites that declined the 2018 recommendation should take a fresh
look.  Recipes for commissioning a new database cluster from scratch may
need to create a schema, grant more privileges, etc.  Out-of-tree test
suites may require such updates.
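
A site that wants the old behavior back can simply re-grant it; as a
sketch:

    -- restore the pre-change default, accepting the CVE-2018-1058 risks
    GRANT CREATE ON SCHEMA public TO PUBLIC;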

Reviewed by Peter Eisentraut.

Discussion: https://postgr.es/m/20201031163518.GB4039133@rfd.leadboat.com
2021-09-09 23:38:09 -07:00
Tom Lane 072e2f8a62 Avoid useless malloc/free traffic around getFormattedTypeName().
Coverity complained that one caller of getFormattedTypeName() failed
to free the returned string.  Which is true, but rather than fixing
that one, let's get rid of this tedious and error-prone requirement.
Now that getFormattedTypeName() caches its result, strdup'ing that
result and expecting the caller to free it accomplishes little except
to waste cycles.  We do create a leak in the case where getTypes didn't
make a TypeInfo for the type, but that basically shouldn't ever happen.

Back-patch, as commit 6c450a861 was.  This isn't a particularly
interesting bug fix, but the API change seems like a hazard for
future back-patching activity if we don't back-patch it.
2021-09-08 15:09:42 -04:00
Tom Lane bd3611db5a In pg_dump, avoid doing per-table queries for RLS policies.
For no particularly good reason, getPolicies() queried pg_policy
separately for each table.  We can collect all the policies in
a single query instead, and attach them to the correct TableInfo
objects using findTableByOid() lookups.  On the regression
database, this reduces the number of queries substantially, and
provides a visible savings even when running against a local
server.
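
Schematically, the per-table queries collapse into one catalog scan
along these lines (a simplified sketch):

    -- fetch every policy in one pass; match to tables client-side
    SELECT pol.polrelid, pol.polname, pol.polcmd, pol.polpermissive,
           pg_catalog.pg_get_expr(pol.polqual, pol.polrelid) AS qual
    FROM pg_catalog.pg_policy pol;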

Per complaint from Hubert Depesz Lubaczewski.  Since this is such
a simple fix and can have a visible performance benefit, back-patch
to all supported branches.

Discussion: https://postgr.es/m/20210826084430.GA26282@depesz.com
2021-08-31 15:04:05 -04:00
Tom Lane 6c450a861f Cache the results of format_type() queries in pg_dump.
There's long been a "TODO: there might be some value in caching
the results" annotation on pg_dump's getFormattedTypeName function;
but we hadn't gotten around to checking what it was costing us to
repetitively look up type names.  It turns out that when dumping the
current regression database, about 10% of the total number of queries
issued are duplicative format_type() queries.  However, Hubert Depesz
Lubaczewski reported a not-unusual case where these account for over
half of the queries issued by pg_dump.  Individually these queries
aren't expensive, but when network lag is a factor, they add up to a
problem.  We can very easily add some caching to getFormattedTypeName
to solve it.
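
Each duplicative lookup is a tiny query of this shape; caching the
result per type OID avoids reissuing it:

    SELECT pg_catalog.format_type('1184'::pg_catalog.oid, NULL);
    -- returns "timestamp with time zone"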

Since this is such a simple fix and can have a visible performance
benefit, back-patch to all supported branches.

Discussion: https://postgr.es/m/20210826084430.GA26282@depesz.com
2021-08-31 13:53:49 -04:00
Daniel Gustafsson d782d5987e Avoid invoking PQfnumber in loop constructs
When looping over the resultset from a SQL query, extracting the field
number before the loop body to avoid repeated calls to PQfnumber is an
established pattern.  On very wide tables this can have a performance
impact, but it won't be noticeable in the common case. This fixes a few
queries which were extracting the field number in the loop body.

Author: Hou Zhijie <houzj.fnst@fujitsu.com>
Reviewed-by: Nathan Bossart <bossartn@amazon.com>
Discussion: https://postgr.es/m/OS0PR01MB57164C392783F29F6D0ECA0B94139@OS0PR01MB5716.jpnprd01.prod.outlook.com
2021-08-27 16:24:33 +02:00
Michael Paquier 6f164e6d17 Unify parsing logic for command-line integer options
Most of the integer options for command-line binaries now make use of a
single routine able to do the job, fixing issues with the detection of
sloppy values caused, for example, by the use of atoi(), which silently
accepts strings that begin with digits but carry trailing junk
characters.

This commit cuts down the number of strings requiring translation by 26
per my count, switching the code to have two error types for invalid and
out-of-range values instead.

Much more could be done here, with float or even int64 options, but
int32 was the most appealing case as it is possible to rely on strtol()
to do the job reliably.  Note that there are some exceptions for now,
like pg_ctl or pg_upgrade that use their own logging logic.  A couple of
negative TAP tests required some adjustments for the new errors
generated.

pg_dump and pg_restore tracked the maximum number of parallel jobs
within the option parsing.  The code is refactored a bit to track that
in the code dedicated to parallelism instead.

Author: Kyotaro Horiguchi, Michael Paquier
Reviewed-by: David Rowley, Álvaro Herrera
Discussion: https://postgr.es/m/CALj2ACXqdG9WhqVoJ9zYf-iZt7sgK7Szv5USs=he6NnWQ2ofTA@mail.gmail.com
2021-07-24 18:35:03 +09:00
Alvaro Herrera f0e21f2f61
Fix pg_dump for disabled triggers on partitioned tables
pg_dump failed to preserve the 'enabled' flag (which can be not only
disabled, but also REPLICA or ALWAYS) for partitions that had it
changed from their respective parents.  Handle that by including a
definition for such triggers in the dump, but replacing the standard
CREATE TRIGGER line with an ALTER TRIGGER line.
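
For a partition whose trigger state diverges from its parent, the dump
now carries something along these lines (a sketch with hypothetical
names; the trigger itself is cloned from the parent, so only the
changed state is replayed):

    ALTER TABLE ONLY public.measurement_y2021
        DISABLE TRIGGER audit_trg;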

Backpatch to 11, where these triggers can exist.  In branches 11 and 12,
pick up a few test lines from commit b9b408c487 to verify that
pg_upgrade is okay with these arrangements.

Co-authored-by: Justin Pryzby <pryzby@telsasoft.com>
Co-authored-by: Álvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://postgr.es/m/20200930223450.GA14848@telsasoft.com
2021-07-16 17:29:22 -04:00
Amit Kapila a8fd13cab0 Add support for prepared transactions to built-in logical replication.
To add support for streaming transactions at prepare time into the
built-in logical replication, we need to do the following things:

* Modify the output plugin (pgoutput) to implement the new two-phase API
callbacks, by leveraging the extended replication protocol.

* Modify the replication apply worker, to properly handle two-phase
transactions by replaying them on prepare.

* Add a new SUBSCRIPTION option "two_phase" to allow users to enable
two-phase transactions (see the sketch below).  Two-phase mode is
enabled once the initial data sync is over.
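
A hedged sketch of enabling the option (hypothetical connection string
and names):

    CREATE SUBSCRIPTION mysub
        CONNECTION 'host=publisher dbname=src'
        PUBLICATION mypub
        WITH (two_phase = on);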

However, we must explicitly disable replication of two-phase transactions
during replication slot creation, even if the plugin supports it.  We
don't need to replicate the changes accumulated during this phase,
and moreover, we don't have a replication connection open so we don't know
where to send the data anyway.

The streaming option is not yet allowed together with this new two_phase
option; that can be done in a separate patch.

We don't allow toggling the two_phase option of a subscription, because
that can lead to an inconsistent replica.  For the same reason, we don't
allow refreshing the publication once two_phase is enabled for a
subscription, unless the copy_data option is false.

Author: Peter Smith, Ajin Cherian and Amit Kapila based on previous work by Nikhil Sontakke and Stas Kelvich
Reviewed-by: Amit Kapila, Sawada Masahiko, Vignesh C, Dilip Kumar, Takamichi Osumi, Greg Nancarrow
Tested-By: Haiying Tang
Discussion: https://postgr.es/m/02DA5F5E-CECE-4D9C-8B4B-418077E2C010@postgrespro.ru
Discussion: https://postgr.es/m/CAA4eK1+opiV4aFTmWWUF9h_32=HfPOW9vZASHarT0UA5oBrtGw@mail.gmail.com
2021-07-14 07:33:50 +05:30
Noah Misch 7ac10f6920 Dump COMMENT ON SCHEMA public.
As we do for other attributes of the public schema, omit the COMMENT
command when its payload would match what initdb had installed.  For
dumps that do carry this new COMMENT command, non-superusers restoring
them are likely to get an error.
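
Concretely, initdb installs a known comment on the public schema, and
only a differing comment is worth dumping.  As a sketch:

    -- matches what initdb installed, so the dump omits it:
    COMMENT ON SCHEMA public IS 'standard public schema';
    -- a customized comment like this one does get dumped:
    COMMENT ON SCHEMA public IS 'application objects live here';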

Reviewed by Asif Rehman.

Discussion: https://postgr.es/m/ab48a34c-60f6-e388-502a-3e5fe46a2dae@postgresfriends.org
2021-06-28 18:34:55 -07:00
Noah Misch a7a7be1f2f Dump public schema ownership and security labels.
As a side effect, this corrects dumps of public schema ACLs in databases
where the DBA dropped and recreated that schema.

Reviewed by Asif Rehman.

Discussion: https://postgr.es/m/20201229134924.GA1431748@rfd.leadboat.com
2021-06-28 18:34:55 -07:00
Tom Lane e6241d8e03 Rethink definition of pg_attribute.attcompression.
Redefine '\0' (InvalidCompressionMethod) as meaning "if we need to
compress, use the current setting of default_toast_compression".
This allows '\0' to be a suitable default choice regardless of
datatype, greatly simplifying code paths that initialize tupledescs
and the like.  It seems like a more user-friendly approach as well,
because now the default compression choice doesn't migrate into table
definitions, meaning that changing default_toast_compression is
usually sufficient to flip an installation's behavior; one needn't
tediously issue per-column ALTER SET COMPRESSION commands.

Along the way, fix a few minor bugs and documentation issues
with the per-column-compression feature.  Adopt more robust
APIs for SetIndexStorageProperties and GetAttributeCompression.

Bump catversion because typical contents of attcompression will now
be different.  We could get away without doing that, but it seems
better to ensure v14 installations all agree on this.  (We already
forced initdb for beta2, anyway.)

Discussion: https://postgr.es/m/626613.1621787110@sss.pgh.pa.us
2021-05-27 13:24:27 -04:00
Peter Eisentraut 09ae329957 Message style improvements 2021-05-14 10:26:41 +02:00
Thomas Munro ec48314708 Revert per-index collation version tracking feature.
Design problems were discovered in the handling of composite types and
record types that would cause some relevant versions not to be recorded.
Misgivings were also expressed about the use of the pg_depend catalog
for this purpose.  We're out of time for this release so we'll revert
and try again.

Commits reverted:

1bf946bd: Doc: Document known problem with Windows collation versions.
cf002008: Remove no-longer-relevant test case.
ef387bed: Fix bogus collation-version-recording logic.
0fb0a050: Hide internal error for pg_collation_actual_version(<bad OID>).
ff942057: Suppress "warning: variable 'collcollate' set but not used".
d50e3b1f: Fix assertion in collation version lookup.
f24b1569: Rethink extraction of collation dependencies.
257836a7: Track collation versions for indexes.
cd6f479e: Add pg_depend.refobjversion.
7d1297df: Remove pg_collation.collversion.

Discussion: https://postgr.es/m/CA%2BhUKGLhj5t1fcjqAu8iD9B3ixJtsTNqyCCD4V0aTO9kAKAjjA%40mail.gmail.com
2021-05-07 21:10:11 +12:00
Peter Eisentraut 3cbea581c7 Remove unused function argument
This was already unused in the initial commit
257836a755.  Apparently, it was used in
an earlier proposed patch version.
2021-04-26 07:05:21 +02:00
Tom Lane 1111b2668d Undo decision to allow pg_proc.prosrc to be NULL.
Commit e717a9a18 changed the longstanding rule that prosrc is NOT NULL
because when a SQL-language function is written in SQL-standard style,
we don't currently have anything useful to put there.  This seems a poor
decision though, as it could easily have negative impacts on external
PLs (opening them to crashes they didn't use to have, for instance).
SQL-function-related code can just as easily test "is prosqlbody not
null" as "is prosrc null", so there's no real gain there either.
Hence, revert the NOT NULL marking removal and adjust related logic.
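
The equivalent client-side test, as a sketch:

    -- identify SQL-standard-body functions without relying on NULL prosrc
    SELECT proname, prosqlbody IS NOT NULL AS has_standard_body
    FROM pg_catalog.pg_proc
    WHERE proname = 'add';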

For now, we just put an empty string into prosrc for SQL-standard
functions.  Maybe we'll have a better idea later, although the
history of things like pg_attrdef.adsrc suggests that it's not
easy to maintain a string equivalent of a node tree.

This also adds an assertion that queryDesc->sourceText != NULL
to standard_ExecutorStart.  We'd been silently relying on that
for awhile, so let's make it less silent.

Also fix some overlooked documentation and test cases.

Discussion: https://postgr.es/m/2197698.1617984583@sss.pgh.pa.us
2021-04-15 17:17:20 -04:00
Michael Paquier 344487e2db Tweak behavior of pg_dump --extension with configuration tables
6568cef, which introduced the option, behaved inconsistently when it
comes to configuration tables set up by pg_extension_config_dump: the
data of all configuration tables would be included in a dump even for
extensions not listed by a set of --extension switches.
When an extension was not listed, the contents dumped also varied
depending on the schema where the extension was installed.  For example,
an extension installed under the public schema would have its
configuration data not dumped even when not listed with --extension,
which was inconsistent with the case of an extension installed in a
non-public schema, where the configuration data would be dumped.

Per discussion with Noah, we have settled on the simple rule of dumping
the configuration data of an extension only if it is listed in
--extension (the default is unchanged and backward-compatible: dump
everything if no extensions are directly listed).  This avoids some
weird cases where the dump contents depended on a --schema switch.
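
For reference, an extension nominates such configuration tables from
its install script; a sketch with a hypothetical table name:

    -- run inside an extension script: have pg_dump include this
    -- table's data when the extension is dumped
    SELECT pg_catalog.pg_extension_config_dump('my_ext_config', '');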

More tests are added to cover the gap, cross-checking the behavior with
various --schema settings when an extension is not listed.

Reported-by: Noah Misch
Reviewed-by: Noah Misch
Discussion: https://postgr.es/m/20210404220802.GA728316@rfd.leadboat.com
2021-04-15 10:03:46 +09:00
Peter Eisentraut e717a9a18b SQL-standard function body
This adds support for writing CREATE FUNCTION and CREATE PROCEDURE
statements for language SQL with a function body that conforms to the
SQL standard and is portable to other implementations.

Instead of the PostgreSQL-specific AS $$ string literal $$ syntax,
this allows writing out the SQL statements making up the body
unquoted, either as a single statement:

    CREATE FUNCTION add(a integer, b integer) RETURNS integer
        LANGUAGE SQL
        RETURN a + b;

or as a block

    CREATE PROCEDURE insert_data(a integer, b integer)
    LANGUAGE SQL
    BEGIN ATOMIC
      INSERT INTO tbl VALUES (a);
      INSERT INTO tbl VALUES (b);
    END;

The function body is parsed at function definition time and stored as
expression nodes in a new pg_proc column prosqlbody.  So at run time,
no further parsing is required.

However, this form does not support polymorphic arguments, because
no further parse analysis is done at call time.

Dependencies between the function and the objects it uses are fully
tracked.

A new RETURN statement is introduced.  This can only be used inside
function bodies.  Internally, it is treated much like a SELECT
statement.

psql needs some new intelligence to keep track of function body
boundaries so that it doesn't send off statements when it sees
semicolons that are inside a function body.

Tested-by: Jaime Casanova <jcasanov@systemguards.com.ec>
Reviewed-by: Julien Rouhaud <rjuju123@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/1c11f1eb-f00c-43b7-799d-2d44132c02d7@2ndquadrant.com
2021-04-07 21:47:55 +02:00
Michael Paquier 6568cef26e Add support for --extension in pg_dump
When specified, only extensions matching the given pattern are included
in dumps.  Similarly to --table and --schema, a perfect match is
required when --strict-names is used.  Also, like those two options,
this new option offers no guarantee that dependent objects have been
dumped, so a restore may fail on a clean database.

Tests are added in test_pg_dump/, checking a set of positive and
negative cases, with and without an extension's contents included in
the generated dump.

Author: Guillaume Lelarge
Reviewed-by: David Fetter, Tom Lane, Michael Paquier, Asif Rehman,
Julien Rouhaud
Discussion: https://postgr.es/m/CAECtzeXOt4cnMU5+XMZzxBPJ_wu76pNy6HZKPRBL-j7yj1E4+g@mail.gmail.com
2021-03-31 09:12:34 +09:00
Tom Lane 54bb91c30e Further tweaking of pg_dump's handling of default_toast_compression.
As committed in bbe0a81db, pg_dump from a pre-v14 server effectively
acts as though you'd said --no-toast-compression.  I think the right
thing is for it to act as though default_toast_compression is set to
"pglz", instead, so that the tables' toast compression behavior is
preserved.  You can always get the other behavior, if you want that,
by giving the switch.

Discussion: https://postgr.es/m/1112852.1616609702@sss.pgh.pa.us
2021-03-30 10:57:57 -04:00
Tom Lane aa25d1089a Fix up pg_dump's handling of per-attribute compression options.
The approach used in commit bbe0a81db would've been disastrous for
portability of dumps.  Instead handle non-default compression options
in separate ALTER TABLE commands.  This reduces chatter for the
common case where most columns are compressed the same way, and it
makes it possible to restore the dump to a server that lacks any
knowledge of per-attribute compression options (so long as you're
willing to ignore syntax errors from the ALTER TABLE commands).
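
So a dump now looks roughly like this for a column with a non-default
setting (hypothetical names):

    CREATE TABLE public.blobs (
        id integer,
        payload text
    );
    -- emitted separately; servers without the feature can skip it
    ALTER TABLE ONLY public.blobs
        ALTER COLUMN payload SET COMPRESSION lz4;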

There's a whole lot left to do to mop up after bbe0a81db, but
I'm fast-tracking this part because we need to see if it's
enough to make the buildfarm's cross-version-upgrade tests happy.

Justin Pryzby and Tom Lane

Discussion: https://postgr.es/m/20210119190720.GL8560@telsasoft.com
2021-03-20 15:01:10 -04:00
Robert Haas bbe0a81db6 Allow configurable LZ4 TOAST compression.
There is now a per-column COMPRESSION option which can be set to pglz
(the default, and the only option up until now) or lz4. Or, if you
like, you can set the new default_toast_compression GUC to lz4, and
then that will be the default for new table columns for which no value
is specified. We don't have lz4 support in the PostgreSQL code, so
to use lz4 compression, PostgreSQL must be built --with-lz4.
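
The user-visible surface, as a sketch:

    -- choose the method per column at table creation time
    CREATE TABLE images (raster bytea COMPRESSION lz4);
    -- or flip the default for new columns that don't specify one
    SET default_toast_compression = 'lz4';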

In general, TOAST compression means compression of individual column
values, not the whole tuple, and those values can either be compressed
inline within the tuple or compressed and then stored externally in
the TOAST table, so those properties also apply to this feature.

Prior to this commit, a TOAST pointer has two unused bits as part of
the va_extsize field, and a compressed datum has two unused bits as
part of the va_rawsize field. These bits are unused because the length
of a varlena is limited to 1GB; we now use them to indicate the
compression type that was used. This means we only have bit space for
2 more built-in compression types, but we could work around that
problem, if necessary, by introducing a new vartag_external value for
any further types we end up wanting to add. Hopefully, it won't be
too important to offer a wide selection of algorithms here, since
each one we add not only takes more coding but also adds a build
dependency for every packager. Nevertheless, it seems worth doing
at least this much, because LZ4 gets better compression than PGLZ
with less CPU usage.

It's possible for LZ4-compressed datums to leak into composite type
values stored on disk, just as it is for PGLZ. It's also possible for
LZ4-compressed attributes to be copied into a different table via SQL
commands such as CREATE TABLE AS or INSERT .. SELECT.  It would be
expensive to force such values to be decompressed, so PostgreSQL has
never done so. For the same reasons, we also don't force recompression
of already-compressed values even if the target table prefers a
different compression method than was used for the source data.  These
architectural decisions are perhaps arguable but revisiting them is
well beyond the scope of what seemed possible to do as part of this
project.  However, it's relatively cheap to recompress as part of
VACUUM FULL or CLUSTER, so this commit adjusts those commands to do
so, if the configured compression method of the table happens not to
match what was used for some column value stored therein.

Dilip Kumar. The original patches on which this work was based were
written by Ildus Kurbangaliev, and those patches were based on
even earlier work by Nikita Glukhov, but the design has since changed
very substantially, since allowing a potentially large number of
compression methods that could be added and dropped on a running
system proved too problematic given some of the architectural issues
mentioned above; the choice of which specific compression method to
add first is now different; and a lot of the code has been heavily
refactored.  More recently, Justin Pryzby helped quite a bit with
testing and reviewing and this version also includes some code
contributions from him. Other design input and review from Tomas
Vondra, Álvaro Herrera, Andres Freund, Oleg Bartunov, Alexander
Korotkov, and me.

Discussion: http://postgr.es/m/20170907194236.4cefce96%40wp.localdomain
Discussion: http://postgr.es/m/CAFiTN-uUpX3ck%3DK0mLEk-G_kUQY%3DSNOTeqdaNRR9FMdQrHKebw%40mail.gmail.com
2021-03-19 15:10:38 -04:00
Peter Eisentraut 6499008150 pg_dump: Add const decorations
Add const decorations to the *info arguments of the dump* functions,
to clarify that they don't modify that argument.  Many other nearby
functions modify their arguments, so this can help clarify these
different APIs a bit.

Discussion: https://www.postgresql.org/message-id/flat/012d3030-9a2c-99a1-ed2d-988978b5632f%40enterprisedb.com
2021-02-10 13:21:47 +01:00
Peter Eisentraut 0bf83648a5 pg_dump: Fix dumping of inherited generated columns
Generation expressions of generated columns are always inherited, so
there is no need to set them separately in child tables, and there is
no syntax to do so either.  The code previously used the code paths
for the handling of default values, for which different rules apply;
in particular it might want to set a default value explicitly for an
inherited column.  This resulted in unrestorable dumps.  For generated
columns, just skip them in inherited tables.
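
A sketch of the case in question (hypothetical tables):

    CREATE TABLE parent (a int,
                         g int GENERATED ALWAYS AS (a + 1) STORED);
    CREATE TABLE child () INHERITS (parent);
    -- g's generation expression is inherited automatically; the dump
    -- must not re-create it on child via a DEFAULT-style clause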

Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://www.postgresql.org/message-id/flat/15830.1575468847%40sss.pgh.pa.us
2021-02-03 11:27:13 +01:00
Peter Eisentraut 2592be8be5 Fix typo 2021-01-29 09:43:21 +01:00