postgresql/src/bin/pg_dump/pg_backup_archiver.h
/*-------------------------------------------------------------------------
*
* pg_backup_archiver.h
*
* Private interface to the pg_dump archiver routines.
* It is NOT intended that these routines be called by any
* dumper directly.
*
* See the headers to pg_restore for more details.
*
* Copyright (c) 2000, Philip Warner
* Rights are granted to use this software in any way so long
* as this notice is not removed.
*
* The author is not responsible for loss or damages that may
* result from its use.
*
*
* IDENTIFICATION
* src/bin/pg_dump/pg_backup_archiver.h
*
*-------------------------------------------------------------------------
*/
#ifndef __PG_BACKUP_ARCHIVE__
#define __PG_BACKUP_ARCHIVE__
#include <time.h>
#include "libpq-fe.h"
#include "pg_backup.h"
#include "pqexpbuffer.h"
#define LOBBUFSIZE 16384
/* Data block types */
#define BLK_DATA 1
#define BLK_BLOBS 3
/* Encode version components into a convenient integer <maj><min><rev> */
#define MAKE_ARCHIVE_VERSION(major, minor, rev) (((major) * 256 + (minor)) * 256 + (rev))
#define ARCHIVE_MAJOR(version) (((version) >> 16) & 255)
#define ARCHIVE_MINOR(version) (((version) >> 8) & 255)
#define ARCHIVE_REV(version) (((version) ) & 255)
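/*
 * Worked example: MAKE_ARCHIVE_VERSION(1, 16, 0) packs to
 * ((1 * 256 + 16) * 256 + 0) = 69632 = 0x11000, and the accessor macros
 * invert it:
 *
 *		Assert(ARCHIVE_MAJOR(69632) == 1);
 *		Assert(ARCHIVE_MINOR(69632) == 16);
 *		Assert(ARCHIVE_REV(69632) == 0);
 */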
/* Historical version numbers (checked in code) */
#define K_VERS_1_0 MAKE_ARCHIVE_VERSION(1, 0, 0)
#define K_VERS_1_2 MAKE_ARCHIVE_VERSION(1, 2, 0) /* Allow No ZLIB */
#define K_VERS_1_3 MAKE_ARCHIVE_VERSION(1, 3, 0) /* BLOBS */
#define K_VERS_1_4 MAKE_ARCHIVE_VERSION(1, 4, 0) /* Date & name in header */
#define K_VERS_1_5 MAKE_ARCHIVE_VERSION(1, 5, 0) /* Handle dependencies */
#define K_VERS_1_6 MAKE_ARCHIVE_VERSION(1, 6, 0) /* Schema field in TOCs */
#define K_VERS_1_7 MAKE_ARCHIVE_VERSION(1, 7, 0) /* File Offset size in
* header */
#define K_VERS_1_8 MAKE_ARCHIVE_VERSION(1, 8, 0) /* change interpretation
* of ID numbers and
* dependencies */
#define K_VERS_1_9 MAKE_ARCHIVE_VERSION(1, 9, 0) /* add default_with_oids
* tracking */
#define K_VERS_1_10 MAKE_ARCHIVE_VERSION(1, 10, 0) /* add tablespace */
#define K_VERS_1_11 MAKE_ARCHIVE_VERSION(1, 11, 0) /* add toc section
* indicator */
#define K_VERS_1_12 MAKE_ARCHIVE_VERSION(1, 12, 0) /* add separate BLOB
* entries */
#define K_VERS_1_13 MAKE_ARCHIVE_VERSION(1, 13, 0) /* change search_path
* behavior */
#define K_VERS_1_14 MAKE_ARCHIVE_VERSION(1, 14, 0) /* add tableam */
#define K_VERS_1_15 MAKE_ARCHIVE_VERSION(1, 15, 0) /* add
* compression_algorithm
* in header */
#define K_VERS_1_16 MAKE_ARCHIVE_VERSION(1, 16, 0) /* BLOB METADATA entries
* and multiple BLOBS */
/* Current archive version number (the format we can output) */
#define K_VERS_MAJOR 1
#define K_VERS_MINOR 16
#define K_VERS_REV 0
#define K_VERS_SELF MAKE_ARCHIVE_VERSION(K_VERS_MAJOR, K_VERS_MINOR, K_VERS_REV)
/* Newest format we can read */
#define K_VERS_MAX MAKE_ARCHIVE_VERSION(K_VERS_MAJOR, K_VERS_MINOR, 255)
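/*
 * A reader should accept any archive version from K_VERS_1_0 up through
 * K_VERS_MAX.  A minimal validity check along these lines (a sketch; the
 * real test lives in ReadHead()):
 *
 *		if (version < K_VERS_1_0 || version > K_VERS_MAX)
 *			pg_fatal("unsupported version (%d.%d) in file header",
 *					 ARCHIVE_MAJOR(version), ARCHIVE_MINOR(version));
 */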
/* Flags to indicate disposition of offsets stored in files */
#define K_OFFSET_POS_NOT_SET 1
#define K_OFFSET_POS_SET 2
#define K_OFFSET_NO_DATA 3
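/*
 * Sketch of how these flags appear on output: each stored offset is
 * preceded by one disposition byte, then offSize bytes emitted
 * least-significant first (cf. WriteOffset()):
 *
 *		AH->WriteBytePtr(AH, wasSet ? K_OFFSET_POS_SET : K_OFFSET_POS_NOT_SET);
 *		for (off = 0; off < sizeof(pgoff_t); off++)
 *			AH->WriteBytePtr(AH, (o >> (off * 8)) & 0xFF);
 */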
/*
* Special exit values from worker children. We reserve 0 for normal
* success; 1 and other small values should be interpreted as crashes.
*/
#define WORKER_OK 0
#define WORKER_CREATE_DONE 10
#define WORKER_INHIBIT_DATA 11
#define WORKER_IGNORED_ERRORS 12
typedef struct _archiveHandle ArchiveHandle;
typedef struct _tocEntry TocEntry;
struct ParallelState;
#define READ_ERROR_EXIT(fd) \
do { \
if (feof(fd)) \
pg_fatal("could not read from input file: end of file"); \
else \
pg_fatal("could not read from input file: %m"); \
} while (0)
#define WRITE_ERROR_EXIT \
do { \
pg_fatal("could not write to output file: %m"); \
} while (0)
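/*
 * Typical use, where fp is the stdio handle for the archive file:
 *
 *		if (fread(buf, 1, len, fp) != len)
 *			READ_ERROR_EXIT(fp);
 *		if (fwrite(buf, 1, len, fp) != len)
 *			WRITE_ERROR_EXIT;
 */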
typedef enum T_Action
{
ACT_DUMP,
ACT_RESTORE,
} T_Action;
typedef void (*ClosePtrType) (ArchiveHandle *AH);
typedef void (*ReopenPtrType) (ArchiveHandle *AH);
typedef void (*ArchiveEntryPtrType) (ArchiveHandle *AH, TocEntry *te);
typedef void (*StartDataPtrType) (ArchiveHandle *AH, TocEntry *te);
typedef void (*WriteDataPtrType) (ArchiveHandle *AH, const void *data, size_t dLen);
typedef void (*EndDataPtrType) (ArchiveHandle *AH, TocEntry *te);
typedef void (*StartLOsPtrType) (ArchiveHandle *AH, TocEntry *te);
typedef void (*StartLOPtrType) (ArchiveHandle *AH, TocEntry *te, Oid oid);
typedef void (*EndLOPtrType) (ArchiveHandle *AH, TocEntry *te, Oid oid);
typedef void (*EndLOsPtrType) (ArchiveHandle *AH, TocEntry *te);
typedef int (*WriteBytePtrType) (ArchiveHandle *AH, const int i);
typedef int (*ReadBytePtrType) (ArchiveHandle *AH);
typedef void (*WriteBufPtrType) (ArchiveHandle *AH, const void *c, size_t len);
typedef void (*ReadBufPtrType) (ArchiveHandle *AH, void *buf, size_t len);
typedef void (*WriteExtraTocPtrType) (ArchiveHandle *AH, TocEntry *te);
typedef void (*ReadExtraTocPtrType) (ArchiveHandle *AH, TocEntry *te);
typedef void (*PrintExtraTocPtrType) (ArchiveHandle *AH, TocEntry *te);
typedef void (*PrintTocDataPtrType) (ArchiveHandle *AH, TocEntry *te);
typedef void (*PrepParallelRestorePtrType) (ArchiveHandle *AH);
typedef void (*ClonePtrType) (ArchiveHandle *AH);
typedef void (*DeClonePtrType) (ArchiveHandle *AH);
typedef int (*WorkerJobDumpPtrType) (ArchiveHandle *AH, TocEntry *te);
typedef int (*WorkerJobRestorePtrType) (ArchiveHandle *AH, TocEntry *te);
typedef size_t (*CustomOutPtrType) (ArchiveHandle *AH, const void *buf, size_t len);
typedef enum
{
SQL_SCAN = 0, /* normal */
SQL_IN_SINGLE_QUOTE, /* '...' literal */
SQL_IN_DOUBLE_QUOTE, /* "..." identifier */
} sqlparseState;
typedef struct
{
sqlparseState state; /* see above */
bool backSlash; /* next char is backslash quoted? */
PQExpBuffer curCmd; /* incomplete line (NULL if not created) */
} sqlparseInfo;
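/*
 * A rough sketch of the transitions the scanner makes per input character
 * (the actual logic lives in pg_backup_db.c):
 *
 *		case SQL_SCAN:
 *			if (ch == '\'')
 *				state = SQL_IN_SINGLE_QUOTE;
 *			else if (ch == '"')
 *				state = SQL_IN_DOUBLE_QUOTE;
 *			else if (ch == ';')
 *				... statement complete, dispatch curCmd ...
 *			break;
 *		case SQL_IN_SINGLE_QUOTE:
 *			if (ch == '\'' && !backSlash)
 *				state = SQL_SCAN;
 *			break;
 */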
typedef enum
{
STAGE_NONE = 0,
STAGE_INITIALIZING,
STAGE_PROCESSING,
STAGE_FINALIZING,
} ArchiverStage;
typedef enum
{
OUTPUT_SQLCMDS = 0, /* emitting general SQL commands */
OUTPUT_COPYDATA, /* writing COPY data */
OUTPUT_OTHERDATA, /* writing data as INSERT commands */
} ArchiverOutput;
/*
* For historical reasons, ACL items are interspersed with everything else in
* a dump file's TOC; typically they're right after the object they're for.
* However, we need to restore data before ACLs, as otherwise a read-only
* table (ie one where the owner has revoked her own INSERT privilege) causes
* data restore failures. On the other hand, matview REFRESH commands should
* come out after ACLs, as otherwise non-superuser-owned matviews might not
* be able to execute. (If the permissions at the time of dumping would not
* allow a REFRESH, too bad; we won't fix that for you.) We also want event
* triggers to be restored after ACLs, so that they can't mess those up.
*
* These considerations force us to make three passes over the TOC,
* restoring the appropriate subset of items in each pass. We assume that
* the dependency sort resulted in an appropriate ordering of items within
* each subset.
*
* XXX This mechanism should be superseded by tracking dependencies on ACLs
* properly; but we'll still need it for old dump files even after that.
*/
typedef enum
{
RESTORE_PASS_MAIN = 0, /* Main pass (most TOC item types) */
RESTORE_PASS_ACL, /* ACL item types */
RESTORE_PASS_POST_ACL, /* Event trigger and matview refresh items */
#define RESTORE_PASS_LAST RESTORE_PASS_POST_ACL
} RestorePass;
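/*
 * A sketch of how a TOC entry's "desc" string is mapped to a pass (cf.
 * _tocEntryRestorePass() in pg_backup_archiver.c):
 *
 *		if (strcmp(te->desc, "ACL") == 0 ||
 *			strcmp(te->desc, "ACL LANGUAGE") == 0 ||
 *			strcmp(te->desc, "DEFAULT ACL") == 0)
 *			return RESTORE_PASS_ACL;
 *		if (strcmp(te->desc, "EVENT TRIGGER") == 0 ||
 *			strcmp(te->desc, "MATERIALIZED VIEW DATA") == 0)
 *			return RESTORE_PASS_POST_ACL;
 *		return RESTORE_PASS_MAIN;
 */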
#define REQ_SCHEMA 0x01 /* want schema */
#define REQ_DATA 0x02 /* want data */
#define REQ_SPECIAL 0x04 /* for special TOC entries */
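/*
 * These are combined into a TOC entry's "reqs" bit mask; for example, a
 * restore step that only cares about an object's data checks
 *
 *		if ((te->reqs & REQ_DATA) != 0)
 *			... restore this entry's data ...
 */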
struct _archiveHandle
{
Archive public; /* Public part of archive */
int version; /* Version of file */
char *archiveRemoteVersion; /* When reading an archive, the
* version of the dumped DB */
char *archiveDumpVersion; /* When reading an archive, the version of
* the dumper */
size_t intSize; /* Size of an integer in the archive */
size_t offSize; /* Size of a file offset in the archive -
* Added V1.7 */
ArchiveFormat format; /* Archive format */
sqlparseInfo sqlparse; /* state for parsing INSERT data */
time_t createDate; /* Date archive created */
/*
* Fields used when discovering archive format. For tar format, we load
* the first block into the lookahead buffer, and verify that it looks
* like a tar header. The tar module must then consume bytes from the
* lookahead buffer before reading any more from the file. For custom
* format, we load only the "PGDMP" marker into the buffer, and then set
* readHeader after confirming it matches. The buffer is vestigial in
* this case, as the subsequent code just checks readHeader and doesn't
* examine the buffer.
*/
int readHeader; /* Set if we already read "PGDMP" marker */
char *lookahead; /* Buffer used when reading header to discover
* format */
size_t lookaheadSize; /* Allocated size of buffer */
size_t lookaheadLen; /* Length of valid data in lookahead */
size_t lookaheadPos; /* Current read position in lookahead buffer */
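	/*
	 * A sketch of the read discipline implied above: satisfy reads from the
	 * lookahead buffer first, then fall through to the underlying file.
	 *
	 *		while (len > 0 && AH->lookaheadPos < AH->lookaheadLen)
	 *		{
	 *			*p++ = AH->lookahead[AH->lookaheadPos++];
	 *			len--;
	 *		}
	 *		if (len > 0 && fread(p, 1, len, fp) != len)
	 *			READ_ERROR_EXIT(fp);
	 */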
ArchiveEntryPtrType ArchiveEntryPtr; /* Called for each metadata object */
StartDataPtrType StartDataPtr; /* Called when table data is about to be
* dumped */
WriteDataPtrType WriteDataPtr; /* Called to send some table data to the
* archive */
EndDataPtrType EndDataPtr; /* Called when table data dump is finished */
WriteBytePtrType WriteBytePtr; /* Write a byte to output */
ReadBytePtrType ReadBytePtr; /* Read a byte from an archive */
WriteBufPtrType WriteBufPtr; /* Write a buffer of output to the archive */
ReadBufPtrType ReadBufPtr; /* Read a buffer of input from the archive */
ClosePtrType ClosePtr; /* Close the archive */
ReopenPtrType ReopenPtr; /* Reopen the archive */
WriteExtraTocPtrType WriteExtraTocPtr; /* Write extra TOC entry data
* associated with the current
* archive format */
ReadExtraTocPtrType ReadExtraTocPtr; /* Read extra info associated with
* archive format */
PrintExtraTocPtrType PrintExtraTocPtr; /* Extra TOC info for format */
PrintTocDataPtrType PrintTocDataPtr;
StartLOsPtrType StartLOsPtr;
EndLOsPtrType EndLOsPtr;
StartLOPtrType StartLOPtr;
EndLOPtrType EndLOPtr;
SetupWorkerPtrType SetupWorkerPtr;
WorkerJobDumpPtrType WorkerJobDumpPtr;
WorkerJobRestorePtrType WorkerJobRestorePtr;
PrepParallelRestorePtrType PrepParallelRestorePtr;
ClonePtrType ClonePtr; /* Clone format-specific fields */
DeClonePtrType DeClonePtr; /* Clean up cloned fields */
CustomOutPtrType CustomOutPtr; /* Alternative script output routine */
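	/*
	 * Each format module points the function pointers above at its own
	 * static handlers from its init routine; an abridged sketch of what
	 * e.g. InitArchiveFmt_Custom() sets up:
	 *
	 *		AH->ArchiveEntryPtr = _ArchiveEntry;
	 *		AH->StartDataPtr = _StartData;
	 *		AH->WriteDataPtr = _WriteData;
	 *		AH->EndDataPtr = _EndData;
	 *		AH->WriteBytePtr = _WriteByte;
	 *		AH->ReadBytePtr = _ReadByte;
	 *		AH->WriteBufPtr = _WriteBuf;
	 *		AH->ReadBufPtr = _ReadBuf;
	 *		AH->ClosePtr = _CloseArchive;
	 *		...
	 */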
/* Stuff for direct DB connection */
char *archdbname; /* DB name *read* from archive */
char *savedPassword; /* password for ropt->username, if known */
char *use_role;
PGconn *connection;
/* If connCancel isn't NULL, SIGINT handler will send a cancel */
PGcancel *volatile connCancel;
int connectToDB; /* Flag to indicate if direct DB connection is
* required */
ArchiverOutput outputKind; /* Flag for what we're currently writing */
bool pgCopyIn; /* Currently in libpq 'COPY IN' mode. */
int loFd;
bool writingLO;
int loCount; /* # of LOs restored */
char *fSpec; /* Archive File Spec */
FILE *FH; /* General purpose file handle */
void *OF; /* Output file */
struct _tocEntry *toc; /* Header of circular list of TOC entries */
int tocCount; /* Number of TOC entries */
DumpId maxDumpId; /* largest DumpId among all TOC entries */
/* arrays created after the TOC list is complete: */
struct _tocEntry **tocsByDumpId; /* TOCs indexed by dumpId */
DumpId *tableDataId; /* TABLE DATA ids, indexed by table dumpId */
struct _tocEntry *currToc; /* Used when dumping data */
pg_compress_specification compression_spec; /* Requested specification for
* compression */
bool dosync; /* data requested to be synced on sight */
DataDirSyncMethod sync_method;
ArchiveMode mode; /* File mode - r or w */
void *formatData; /* Header data specific to file format */
/* these vars track state to avoid sending redundant SET commands */
char *currUser; /* current username, or NULL if unknown */
char *currSchema; /* current schema, or NULL */
char *currTablespace; /* current tablespace, or NULL */
char *currTableAm; /* current table access method, or NULL */
void *lo_buf;
size_t lo_buf_used;
size_t lo_buf_size;
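	/*
	 * A sketch of how lo_buf is used while restoring a large object:
	 * ahwrite() appends incoming bytes and flushes full buffers to the
	 * server with lo_write() (the flush helper, dump_lo_buf(), lives in
	 * pg_backup_archiver.c):
	 *
	 *		if (AH->lo_buf_used + size > AH->lo_buf_size)
	 *			dump_lo_buf(AH);
	 *		memcpy((char *) AH->lo_buf + AH->lo_buf_used, ptr, size);
	 *		AH->lo_buf_used += size;
	 */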
int noTocComments;
ArchiverStage stage;
ArchiverStage lastErrorStage;
RestorePass restorePass; /* used only during parallel restore */
	struct _tocEntry *currentTE;	/* TOC entry currently being processed */
	struct _tocEntry *lastErrorTE;	/* TOC entry that last produced an error */
};
struct _tocEntry
{
struct _tocEntry *prev;
struct _tocEntry *next;
CatalogId catalogId;
DumpId dumpId;
teSection section;
bool hadDumper; /* Archiver was passed a dumper routine (used
* in restore) */
char *tag; /* index tag */
char *namespace; /* null or empty string if not in a schema */
char *tablespace; /* null if not in a tablespace; empty string
* means use database default */
char *tableam; /* table access method, only for TABLE tags */
char *owner;
char *desc;
char *defn;
char *dropStmt;
char *copyStmt;
DumpId *dependencies; /* dumpIds of objects this one depends on */
int nDeps; /* number of dependencies */
DataDumperPtr dataDumper; /* Routine to dump data for object */
const void *dataDumperArg; /* Arg for above routine */
void *formatData; /* TOC Entry data specific to file format */
/* working state while dumping/restoring */
pgoff_t dataLength; /* item's data size; 0 if none or unknown */
int reqs; /* do we need schema and/or data of object
* (REQ_* bit mask) */
bool created; /* set for DATA member if TABLE was created */
/* working state (needed only for parallel restore) */
struct _tocEntry *pending_prev; /* list links for pending-items list; */
struct _tocEntry *pending_next; /* NULL if not in that list */
int depCount; /* number of dependencies not yet restored */
DumpId *revDeps; /* dumpIds of objects depending on this one */
int nRevDeps; /* number of such dependencies */
DumpId *lockDeps; /* dumpIds of objects this one needs lock on */
int nLockDeps; /* number of such dependencies */
};
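/*
 * Illustrative sketch, not part of this header: during parallel restore,
 * a scheduler is expected to decrement depCount on each reverse-dependent
 * entry as a TOC entry completes, moving entries whose count reaches zero
 * off the pending list.  Roughly as below, where make_ready() is a
 * hypothetical helper standing in for that list manipulation:
 *
 *	static void
 *	mark_restored(ArchiveHandle *AH, TocEntry *te)
 *	{
 *		for (int i = 0; i < te->nRevDeps; i++)
 *		{
 *			TocEntry   *dep = getTocEntryByDumpId(AH, te->revDeps[i]);
 *
 *			if (dep != NULL && --dep->depCount == 0)
 *				make_ready(dep);		// hypothetical: unlink from pending list
 *		}
 *	}
 */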
extern int parallel_restore(ArchiveHandle *AH, TocEntry *te);
extern void on_exit_close_archive(Archive *AHX);
extern void warn_or_exit_horribly(ArchiveHandle *AH, const char *fmt,...) pg_attribute_printf(2, 3);
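/*
 * Illustrative example, assuming a FILE *fp and a buffer ptr/len in scope:
 * format modules report failures through warn_or_exit_horribly(), which
 * warns or aborts depending on the archive's exit-on-error setting.
 *
 *	if (fwrite(ptr, 1, len, fp) != len)
 *		warn_or_exit_horribly(AH, "could not write to output file: %m");
 */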
/* Options for ArchiveEntry */
typedef struct _archiveOpts
{
const char *tag;
const char *namespace;
const char *tablespace;
const char *tableam;
const char *owner;
const char *description;
teSection section;
const char *createStmt;
const char *dropStmt;
const char *copyStmt;
const DumpId *deps;
int nDeps;
DataDumperPtr dumpFn;
const void *dumpArg;
} ArchiveOpts;
#define ARCHIVE_OPTS(...) &(ArchiveOpts){__VA_ARGS__}
/* Called to add a TOC entry */
extern TocEntry *ArchiveEntry(Archive *AHX, CatalogId catalogId,
DumpId dumpId, ArchiveOpts *opts);
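/*
 * Typical call pattern, shown as a hedged sketch rather than a quote of
 * any particular caller: ARCHIVE_OPTS builds a compound-literal
 * ArchiveOpts so the caller names only the fields it needs, with all
 * other fields zero-initialized.  Here fout, q, and delq stand for the
 * dumper's archive handle and query buffers; nilCatalogId and
 * createDumpId() come from the pg_dump side.
 *
 *	ArchiveEntry(fout, nilCatalogId, createDumpId(),
 *				 ARCHIVE_OPTS(.tag = "mytable",
 *							  .namespace = "public",
 *							  .owner = "postgres",
 *							  .description = "TABLE",
 *							  .section = SECTION_PRE_DATA,
 *							  .createStmt = q->data,
 *							  .dropStmt = delq->data));
 */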
extern void WriteHead(ArchiveHandle *AH);
extern void ReadHead(ArchiveHandle *AH);
extern void WriteToc(ArchiveHandle *AH);
extern void ReadToc(ArchiveHandle *AH);
extern void WriteDataChunks(ArchiveHandle *AH, struct ParallelState *pstate);
extern void WriteDataChunksForTocEntry(ArchiveHandle *AH, TocEntry *te);
extern ArchiveHandle *CloneArchive(ArchiveHandle *AH);
extern void DeCloneArchive(ArchiveHandle *AH);
extern int TocIDRequired(ArchiveHandle *AH, DumpId id);
extern TocEntry *getTocEntryByDumpId(ArchiveHandle *AH, DumpId id);
extern bool checkSeek(FILE *fp);
#define appendStringLiteralAHX(buf,str,AH) \
appendStringLiteral(buf, str, (AH)->public.encoding, (AH)->public.std_strings)
#define appendByteaLiteralAHX(buf,str,len,AH) \
appendByteaLiteral(buf, str, len, (AH)->public.std_strings)
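/*
 * Minimal usage sketch, assuming an ArchiveHandle *AH in scope: the
 * wrappers supply the archive's encoding and std_strings settings so
 * callers need not repeat them.
 *
 *	PQExpBuffer buf = createPQExpBuffer();
 *
 *	appendPQExpBufferStr(buf, "COMMENT ON TABLE mytable IS ");
 *	appendStringLiteralAHX(buf, "user-supplied text", AH);
 *	appendPQExpBufferChar(buf, ';');
 */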
/*
* Mandatory routines for each supported format
*/
extern size_t WriteInt(ArchiveHandle *AH, int i);
extern int ReadInt(ArchiveHandle *AH);
extern char *ReadStr(ArchiveHandle *AH);
extern size_t WriteStr(ArchiveHandle *AH, const char *c);
extern int	ReadOffset(ArchiveHandle *AH, pgoff_t *o);
extern size_t WriteOffset(ArchiveHandle *AH, pgoff_t o, int wasSet);
extern void StartRestoreLOs(ArchiveHandle *AH);
extern void StartRestoreLO(ArchiveHandle *AH, Oid oid, bool drop);
extern void EndRestoreLO(ArchiveHandle *AH, Oid oid);
extern void EndRestoreLOs(ArchiveHandle *AH);
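/*
 * Call-sequence sketch (illustrative): a format module restoring large
 * objects brackets the whole group and each individual LO, feeding data
 * through ahwrite() in between.
 *
 *	StartRestoreLOs(AH);
 *	StartRestoreLO(AH, oid, drop);
 *	ahwrite(data, 1, len, AH);		// repeat per chunk; data/len assumed
 *	EndRestoreLO(AH, oid);
 *	EndRestoreLOs(AH);
 */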
extern void InitArchiveFmt_Custom(ArchiveHandle *AH);
extern void InitArchiveFmt_Null(ArchiveHandle *AH);
extern void InitArchiveFmt_Directory(ArchiveHandle *AH);
extern void InitArchiveFmt_Tar(ArchiveHandle *AH);
extern bool isValidTarHeader(char *header);
extern void ReconnectToServer(ArchiveHandle *AH, const char *dbname);
extern void IssueCommandPerBlob(ArchiveHandle *AH, TocEntry *te,
const char *cmdBegin, const char *cmdEnd);
extern void IssueACLPerBlob(ArchiveHandle *AH, TocEntry *te);
extern void DropLOIfExists(ArchiveHandle *AH, Oid oid);
extern void ahwrite(const void *ptr, size_t size, size_t nmemb, ArchiveHandle *AH);
extern int	ahprintf(ArchiveHandle *AH, const char *fmt,...) pg_attribute_printf(2, 3);
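/*
 * Sketch of intended use (illustrative; data and len are assumed locals):
 * all archiver output funnels through ahwrite()/ahprintf() so the active
 * output routine decides where the bytes go: plain file, compressed
 * stream, or database connection.
 *
 *	ahprintf(AH, "-- Dumped by pg_dump version %s\n", PG_VERSION);
 *	ahwrite(data, 1, len, AH);
 */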
#endif