postgresql/src/bin/pg_dump/pg_backup_directory.c

/*-------------------------------------------------------------------------
 *
 * pg_backup_directory.c
 *
 * A directory format dump is a directory, which contains a "toc.dat" file
 * for the TOC, and a separate file for each data entry, named "<oid>.dat".
 * Large objects are stored in separate files named "blob_<oid>.dat",
 * and there's a plain-text TOC file for each BLOBS TOC entry named
 * "blobs_<dumpID>.toc" (or just "blobs.toc" in archive versions before 16).
 *
 * If compression is used, each data file is individually compressed and the
 * ".gz" suffix is added to the filenames. The TOC files are never
 * compressed by pg_dump, however they are accepted with the .gz suffix too,
 * in case the user has manually compressed them with 'gzip'.
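 *
 * For example (the OIDs and dump IDs below are purely illustrative), a
 * compressed directory dump might therefore contain:
 *
 *		toc.dat             -- the table of contents
 *		3345.dat.gz         -- data for one table
 *		blobs_769.toc       -- TOC file for one BLOBS group entry
 *		blob_16401.dat.gz   -- data for one large object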
 *
 * NOTE: This format is identical to the files written in the tar file in
 * the 'tar' format, except that we don't write the restore.sql file (TODO),
 * and the tar format doesn't support compression. Please keep the formats in
 * sync.
 *
 *
 * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
 * Portions Copyright (c) 1994, Regents of the University of California
 * Portions Copyright (c) 2000, Philip Warner
 *
 * Rights are granted to use this software in any way so long
 * as this notice is not removed.
 *
 * The author is not responsible for loss or damages that may
 * result from its use.
 *
 * IDENTIFICATION
 *		src/bin/pg_dump/pg_backup_directory.c
 *
 *-------------------------------------------------------------------------
 */
#include "postgres_fe.h"

#include <dirent.h>
#include <sys/stat.h>

#include "common/file_utils.h"
#include "compress_io.h"
#include "parallel.h"
#include "pg_backup_utils.h"

typedef struct
{
	/*
	 * Our archive location. This is basically what the user specified as his
	 * backup file but of course here it is a directory.
	 */
	char *directory;

	CompressFileHandle *dataFH; /* currently open data file */
	CompressFileHandle *LOsTocFH;	/* file handle for blobs_NNN.toc */
	ParallelState *pstate;		/* for parallel backup / restore */
} lclContext;

typedef struct
{
	char *filename;				/* filename excluding the directory (basename) */
} lclTocEntry;

/* prototypes for private functions */
static void _ArchiveEntry(ArchiveHandle *AH, TocEntry *te);
static void _StartData(ArchiveHandle *AH, TocEntry *te);
static void _EndData(ArchiveHandle *AH, TocEntry *te);
static void _WriteData(ArchiveHandle *AH, const void *data, size_t dLen);
static int _WriteByte(ArchiveHandle *AH, const int i);
static int _ReadByte(ArchiveHandle *AH);
static void _WriteBuf(ArchiveHandle *AH, const void *buf, size_t len);
static void _ReadBuf(ArchiveHandle *AH, void *buf, size_t len);
static void _CloseArchive(ArchiveHandle *AH);
static void _ReopenArchive(ArchiveHandle *AH);
static void _PrintTocData(ArchiveHandle *AH, TocEntry *te);
static void _WriteExtraToc(ArchiveHandle *AH, TocEntry *te);
static void _ReadExtraToc(ArchiveHandle *AH, TocEntry *te);
static void _PrintExtraToc(ArchiveHandle *AH, TocEntry *te);
static void _StartLOs(ArchiveHandle *AH, TocEntry *te);
static void _StartLO(ArchiveHandle *AH, TocEntry *te, Oid oid);
static void _EndLO(ArchiveHandle *AH, TocEntry *te, Oid oid);
static void _EndLOs(ArchiveHandle *AH, TocEntry *te);
static void _LoadLOs(ArchiveHandle *AH, TocEntry *te);
static void _PrepParallelRestore(ArchiveHandle *AH);
static void _Clone(ArchiveHandle *AH);
static void _DeClone(ArchiveHandle *AH);
static int _WorkerJobRestoreDirectory(ArchiveHandle *AH, TocEntry *te);
static int _WorkerJobDumpDirectory(ArchiveHandle *AH, TocEntry *te);
static void setFilePath(ArchiveHandle *AH, char *buf,
						const char *relativeFilename);

/*
 * Init routine required by ALL formats. This is a global routine
 * and should be declared in pg_backup_archiver.h
 *
 * Its task is to create any extra archive context (using AH->formatData),
 * and to initialize the supported function pointers.
 *
 * It should also prepare whatever its input source is for reading/writing,
 * and in the case of a read mode connection, it should load the Header & TOC.
 */
void
InitArchiveFmt_Directory(ArchiveHandle *AH)
{
	lclContext *ctx;

	/* Assuming static functions, this can be copied for each format. */
	AH->ArchiveEntryPtr = _ArchiveEntry;
	AH->StartDataPtr = _StartData;
	AH->WriteDataPtr = _WriteData;
	AH->EndDataPtr = _EndData;
	AH->WriteBytePtr = _WriteByte;
	AH->ReadBytePtr = _ReadByte;
	AH->WriteBufPtr = _WriteBuf;
	AH->ReadBufPtr = _ReadBuf;
	AH->ClosePtr = _CloseArchive;
	AH->ReopenPtr = _ReopenArchive;
	AH->PrintTocDataPtr = _PrintTocData;
	AH->ReadExtraTocPtr = _ReadExtraToc;
	AH->WriteExtraTocPtr = _WriteExtraToc;
	AH->PrintExtraTocPtr = _PrintExtraToc;

	AH->StartLOsPtr = _StartLOs;
	AH->StartLOPtr = _StartLO;
	AH->EndLOPtr = _EndLO;
	AH->EndLOsPtr = _EndLOs;

	AH->PrepParallelRestorePtr = _PrepParallelRestore;
	AH->ClonePtr = _Clone;
	AH->DeClonePtr = _DeClone;

	AH->WorkerJobRestorePtr = _WorkerJobRestoreDirectory;
	AH->WorkerJobDumpPtr = _WorkerJobDumpDirectory;

	/* Set up our private context */
	ctx = (lclContext *) pg_malloc0(sizeof(lclContext));
	AH->formatData = (void *) ctx;

	ctx->dataFH = NULL;
	ctx->LOsTocFH = NULL;

	/*
	 * Now open the TOC file
	 */
	if (!AH->fSpec || strcmp(AH->fSpec, "") == 0)
		pg_fatal("no output directory specified");

	ctx->directory = AH->fSpec;

	if (AH->mode == archModeWrite)
	{
		struct stat st;
		bool is_empty = false;

		/* we accept an empty existing directory */
		if (stat(ctx->directory, &st) == 0 && S_ISDIR(st.st_mode))
		{
			DIR *dir = opendir(ctx->directory);

			if (dir)
			{
				struct dirent *d;

				is_empty = true;
				while (errno = 0, (d = readdir(dir)))
				{
					if (strcmp(d->d_name, ".") != 0 && strcmp(d->d_name, "..") != 0)
					{
						is_empty = false;
						break;
					}
				}

				if (errno)
					pg_fatal("could not read directory \"%s\": %m",
							 ctx->directory);

				if (closedir(dir))
					pg_fatal("could not close directory \"%s\": %m",
							 ctx->directory);
			}
		}

		if (!is_empty && mkdir(ctx->directory, 0700) < 0)
			pg_fatal("could not create directory \"%s\": %m",
					 ctx->directory);
	}
	else
	{							/* Read Mode */
		char fname[MAXPGPATH];
		CompressFileHandle *tocFH;

		setFilePath(AH, fname, "toc.dat");

		tocFH = InitDiscoverCompressFileHandle(fname, PG_BINARY_R);
		if (tocFH == NULL)
			pg_fatal("could not open input file \"%s\": %m", fname);

		ctx->dataFH = tocFH;

		/*
		 * The TOC of a directory format dump shares the format code of the
		 * tar format.
		 */
		AH->format = archTar;
		ReadHead(AH);
		AH->format = archDirectory;
		ReadToc(AH);

		/* Nothing else in the file, so close it again... */
		if (!EndCompressFileHandle(tocFH))
			pg_fatal("could not close TOC file: %m");
		ctx->dataFH = NULL;
	}
}

/*
 * Called by the Archiver when the dumper creates a new TOC entry.
 *
 * We determine the filename for this entry.
 */
static void
_ArchiveEntry(ArchiveHandle *AH, TocEntry *te)
{
	lclTocEntry *tctx;
	char fn[MAXPGPATH];

	tctx = (lclTocEntry *) pg_malloc0(sizeof(lclTocEntry));

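	/*
	 * Data-carrying entries get a file of their own: a "BLOBS" group gets a
	 * per-group TOC file named after its dump ID, any other entry with a
	 * data dumper gets "<dumpId>.dat", and the rest need no file at all.
	 */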
	if (strcmp(te->desc, "BLOBS") == 0)
	{
		snprintf(fn, MAXPGPATH, "blobs_%d.toc", te->dumpId);
		tctx->filename = pg_strdup(fn);
	}
	else if (te->dataDumper)
	{
		snprintf(fn, MAXPGPATH, "%d.dat", te->dumpId);
		tctx->filename = pg_strdup(fn);
	}
	else
		tctx->filename = NULL;

	te->formatData = (void *) tctx;
}

/*
 * Called by the Archiver to save any extra format-related TOC entry
 * data.
 *
 * Use the Archiver routines to write data - they are non-endian, and
 * maintain other important file information.
 */
static void
_WriteExtraToc(ArchiveHandle *AH, TocEntry *te)
{
	lclTocEntry *tctx = (lclTocEntry *) te->formatData;

	/*
	 * A dumpable object has set tctx->filename, any other object has not.
	 * (see _ArchiveEntry).
	 */
	if (tctx->filename)
		WriteStr(AH, tctx->filename);
	else
		WriteStr(AH, "");
}

/*
 * Called by the Archiver to read any extra format-related TOC data.
 *
 * Needs to match the order defined in _WriteExtraToc, and should also
 * use the Archiver input routines.
 */
static void
_ReadExtraToc(ArchiveHandle *AH, TocEntry *te)
{
	lclTocEntry *tctx = (lclTocEntry *) te->formatData;

	if (tctx == NULL)
	{
		tctx = (lclTocEntry *) pg_malloc0(sizeof(lclTocEntry));
		te->formatData = (void *) tctx;
	}

	tctx->filename = ReadStr(AH);
	if (strlen(tctx->filename) == 0)
	{
		free(tctx->filename);
		tctx->filename = NULL;
	}
}

/*
 * Called by the Archiver when restoring an archive to output a comment
 * that includes useful information about the TOC entry.
 */
static void
_PrintExtraToc(ArchiveHandle *AH, TocEntry *te)
{
	lclTocEntry *tctx = (lclTocEntry *) te->formatData;

	if (AH->public.verbose && tctx->filename)
		ahprintf(AH, "-- File: %s\n", tctx->filename);
}

/*
 * Called by the archiver when saving TABLE DATA (not schema). This routine
 * should save whatever format-specific information is needed to read
 * the archive back.
 *
 * It is called just prior to the dumper's 'DataDumper' routine being called.
 *
 * We create the data file for writing.
 */
static void
_StartData(ArchiveHandle *AH, TocEntry *te)
{
	lclTocEntry *tctx = (lclTocEntry *) te->formatData;
	lclContext *ctx = (lclContext *) AH->formatData;
	char fname[MAXPGPATH];

	setFilePath(AH, fname, tctx->filename);

	ctx->dataFH = InitCompressFileHandle(AH->compression_spec);
	if (!ctx->dataFH->open_write_func(fname, PG_BINARY_W, ctx->dataFH))
		pg_fatal("could not open output file \"%s\": %m", fname);
}

/*
 * Called by archiver when dumper calls WriteData. This routine is
 * called for both LO and table data; it is the responsibility of
 * the format to manage each kind of data using StartLO/StartData.
 *
 * It should only be called from within a DataDumper routine.
 *
 * We write the data to the open data file.
 */
static void
_WriteData(ArchiveHandle *AH, const void *data, size_t dLen)
{
	lclContext *ctx = (lclContext *) AH->formatData;
	CompressFileHandle *CFH = ctx->dataFH;

	errno = 0;
	if (dLen > 0 && !CFH->write_func(data, dLen, CFH))
	{
		/* if write didn't set errno, assume problem is no disk space */
		if (errno == 0)
			errno = ENOSPC;
		pg_fatal("could not write to output file: %s",
				 CFH->get_error_func(CFH));
	}
}

/*
 * Called by the archiver when a dumper's 'DataDumper' routine has
 * finished.
 *
 * We close the data file.
 */
static void
_EndData(ArchiveHandle *AH, TocEntry *te)
{
	lclContext *ctx = (lclContext *) AH->formatData;

	/* Close the file */
	if (!EndCompressFileHandle(ctx->dataFH))
		pg_fatal("could not close data file: %m");
	ctx->dataFH = NULL;
}

/*
 * Print data for a given file (can be a LO as well)
 */
static void
_PrintFileData(ArchiveHandle *AH, char *filename)
{
	size_t cnt = 0;
	char *buf;
	size_t buflen;
	CompressFileHandle *CFH;

	if (!filename)
		return;

	CFH = InitDiscoverCompressFileHandle(filename, PG_BINARY_R);
	if (!CFH)
		pg_fatal("could not open input file \"%s\": %m", filename);

	buflen = DEFAULT_IO_BUFFER_SIZE;
	buf = pg_malloc(buflen);
	while (CFH->read_func(buf, buflen, &cnt, CFH) && cnt > 0)
	{
		ahwrite(buf, 1, cnt, AH);
	}

	free(buf);

	if (!EndCompressFileHandle(CFH))
		pg_fatal("could not close data file \"%s\": %m", filename);
}

/*
 * Print data for a given TOC entry
 */
static void
_PrintTocData(ArchiveHandle *AH, TocEntry *te)
{
	lclTocEntry *tctx = (lclTocEntry *) te->formatData;

	if (!tctx->filename)
		return;

	if (strcmp(te->desc, "BLOBS") == 0)
_LoadLOs(AH, te);
else
{
char fname[MAXPGPATH];
setFilePath(AH, fname, tctx->filename);
_PrintFileData(AH, fname);
}
}
static void
_LoadLOs(ArchiveHandle *AH, TocEntry *te)
{
Oid oid;
lclContext *ctx = (lclContext *) AH->formatData;
lclTocEntry *tctx = (lclTocEntry *) te->formatData;
CompressFileHandle *CFH;
char tocfname[MAXPGPATH];
char line[MAXPGPATH];
StartRestoreLOs(AH);
/*
* Note: before archive v16, there was always only one BLOBS TOC entry,
* now there can be multiple. We don't need to worry what version we are
* reading though, because tctx->filename should be correct either way.
*/
setFilePath(AH, tocfname, tctx->filename);
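/*
 * InitDiscoverCompressFileHandle is presumed to detect the compression
 * method from whichever file actually exists on disk (the plain file or a
 * compressed variant of it), so no compression specification is passed in.
 */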
CFH = ctx->LOsTocFH = InitDiscoverCompressFileHandle(tocfname, PG_BINARY_R);
if (ctx->LOsTocFH == NULL)
pg_fatal("could not open large object TOC file \"%s\" for input: %m",
tocfname);
/* Read the LOs TOC file line-by-line, and process each LO */
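/*
 * Each line is expected to pair a large object OID with the name of its
 * data file, separated by a space (for example "16397 blob_16397.dat",
 * where the OID is purely illustrative); the sscanf format below relies
 * on that layout.
 */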
while ((CFH->gets_func(line, MAXPGPATH, CFH)) != NULL)
{
char lofname[MAXPGPATH + 1];
char path[MAXPGPATH];
/* Can't overflow because line and lofname are the same length */
if (sscanf(line, "%u %" CppAsString2(MAXPGPATH) "s\n", &oid, lofname) != 2)
pg_fatal("invalid line in large object TOC file \"%s\": \"%s\"",
tocfname, line);
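/*
 * The third argument asks StartRestoreLO to drop any pre-existing large
 * object first; ropt->dropSchema is presumably set when --clean was given.
 */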
StartRestoreLO(AH, oid, AH->public.ropt->dropSchema);
snprintf(path, MAXPGPATH, "%s/%s", ctx->directory, lofname);
_PrintFileData(AH, path);
EndRestoreLO(AH, oid);
}
if (!CFH->eof_func(CFH))
pg_fatal("error reading large object TOC file \"%s\"",
tocfname);
if (!EndCompressFileHandle(ctx->LOsTocFH))
pg_fatal("could not close large object TOC file \"%s\": %m",
tocfname);
ctx->LOsTocFH = NULL;
EndRestoreLOs(AH);
}
/*
* Write a byte of data to the archive.
* Called by the archiver to do integer & byte output to the archive.
* These routines are only used to read & write the headers & TOC.
*/
static int
_WriteByte(ArchiveHandle *AH, const int i)
{
unsigned char c = (unsigned char) i;
lclContext *ctx = (lclContext *) AH->formatData;
CompressFileHandle *CFH = ctx->dataFH;
errno = 0;
if (!CFH->write_func(&c, 1, CFH))
{
/* if write didn't set errno, assume problem is no disk space */
if (errno == 0)
errno = ENOSPC;
pg_fatal("could not write to output file: %s",
CFH->get_error_func(CFH));
}
return 1;
}
/*
* Read a byte of data from the archive.
* Called by the archiver to read bytes & integers from the archive.
* These routines are only used to read & write headers & TOC.
* EOF should be treated as a fatal error.
*/
static int
_ReadByte(ArchiveHandle *AH)
{
lclContext *ctx = (lclContext *) AH->formatData;
CompressFileHandle *CFH = ctx->dataFH;
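/*
 * getc_func is expected to report read errors, including unexpected EOF,
 * via pg_fatal (per the contract noted above), so its result can be
 * returned directly.
 */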
return CFH->getc_func(CFH);
}
/*
* Write a buffer of data to the archive.
* Called by the archiver to write a block of bytes to the TOC or a data file.
*/
static void
_WriteBuf(ArchiveHandle *AH, const void *buf, size_t len)
{
lclContext *ctx = (lclContext *) AH->formatData;
CompressFileHandle *CFH = ctx->dataFH;
errno = 0;
if (!CFH->write_func(buf, len, CFH))
{
/* if write didn't set errno, assume problem is no disk space */
if (errno == 0)
errno = ENOSPC;
pg_fatal("could not write to output file: %s",
CFH->get_error_func(CFH));
}
}
/*
* Read a block of bytes from the archive.
*
* Called by the archiver to read a block of bytes from the archive
*/
static void
_ReadBuf(ArchiveHandle *AH, void *buf, size_t len)
{
lclContext *ctx = (lclContext *) AH->formatData;
CompressFileHandle *CFH = ctx->dataFH;
/*
* If there was an I/O error, we already exited in read_func(), so here we
* exit on short reads.
*/
if (!CFH->read_func(buf, len, NULL, CFH))
pg_fatal("could not read from input file: end of file");
}
/*
* Close the archive.
*
* When writing the archive, this is the routine that actually starts
* the process of saving it to files. No data should be written prior
* to this point, since the user could sort the TOC after creating it.
*
* If an archive is to be written, this routine must call:
* WriteHead to save the archive header
* WriteToc to save the TOC entries
* WriteDataChunks to save all data & LOs.
*/
static void
_CloseArchive(ArchiveHandle *AH)
{
lclContext *ctx = (lclContext *) AH->formatData;
if (AH->mode == archModeWrite)
{
CompressFileHandle *tocFH;
pg_compress_specification compression_spec = {0};
char fname[MAXPGPATH];
setFilePath(AH, fname, "toc.dat");
/* this will actually fork the processes for a parallel backup */
ctx->pstate = ParallelBackupStart(AH);
/* The TOC is always created uncompressed */
compression_spec.algorithm = PG_COMPRESSION_NONE;
tocFH = InitCompressFileHandle(compression_spec);
if (!tocFH->open_write_func(fname, PG_BINARY_W, tocFH))
pg_fatal("could not open output file \"%s\": %m", fname);
ctx->dataFH = tocFH;
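/*
 * With dataFH pointing at the TOC file, _WriteByte and _WriteBuf (which
 * WriteHead and WriteToc below go through) write into toc.dat.
 */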
/*
* Write 'tar' in the format field of the toc.dat file. The directory
* is compatible with 'tar', so there's no point having a different
* format code for it.
*/
AH->format = archTar;
WriteHead(AH);
AH->format = archDirectory;
WriteToc(AH);
if (!EndCompressFileHandle(tocFH))
pg_fatal("could not close TOC file: %m");
WriteDataChunks(AH, ctx->pstate);
ParallelBackupEnd(AH, ctx->pstate);
/*
* In directory mode, there is no need to sync all the entries
* individually. Just recurse once through all the files generated.
*/
if (AH->dosync)
sync_dir_recurse(ctx->directory, AH->sync_method);
}
AH->FH = NULL;
}
/*
* Reopen the archive's file handle.
*/
static void
_ReopenArchive(ArchiveHandle *AH)
{
/*
* Our TOC is in memory, our data files are opened by each child anyway as
* they are separate. We support reopening the archive by just doing
* nothing.
*/
}
/*
* LO support
*/
/*
* Called by the archiver when starting to save BLOB DATA (not schema).
* It is called just prior to the dumper's DataDumper routine.
*
* We open the large object TOC file here, so that we can append a line to
* it for each LO.
*/
static void
_StartLOs(ArchiveHandle *AH, TocEntry *te)
{
lclContext *ctx = (lclContext *) AH->formatData;
lclTocEntry *tctx = (lclTocEntry *) te->formatData;
pg_compress_specification compression_spec = {0};
char fname[MAXPGPATH];
setFilePath(AH, fname, tctx->filename);
/* The LO TOC file is never compressed */
compression_spec.algorithm = PG_COMPRESSION_NONE;
ctx->LOsTocFH = InitCompressFileHandle(compression_spec);
if (!ctx->LOsTocFH->open_write_func(fname, "ab", ctx->LOsTocFH))
pg_fatal("could not open output file \"%s\": %m", fname);
}
/*
* Called by the archiver when we're about to start dumping a LO.
*
* We create a file to write the LO to.
*/
static void
_StartLO(ArchiveHandle *AH, TocEntry *te, Oid oid)
{
lclContext *ctx = (lclContext *) AH->formatData;
char fname[MAXPGPATH];
snprintf(fname, MAXPGPATH, "%s/blob_%u.dat", ctx->directory, oid);
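/*
 * The per-blob data file is written through a compress file handle created
 * with AH->compression_spec, so it honors the archive's compression
 * setting (unlike the LOs TOC file, which is always uncompressed).
 */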
ctx->dataFH = InitCompressFileHandle(AH->compression_spec);
if (!ctx->dataFH->open_write_func(fname, PG_BINARY_W, ctx->dataFH))
pg_fatal("could not open output file \"%s\": %m", fname);
}
/*
* Called by the archiver when the dumper is finished writing a LO.
*
* We close the LO file and write an entry to the LO TOC file for it.
*/
static void
_EndLO(ArchiveHandle *AH, TocEntry *te, Oid oid)
{
lclContext *ctx = (lclContext *) AH->formatData;
CompressFileHandle *CFH = ctx->LOsTocFH;
char buf[50];
int len;
/* Close the BLOB data file itself */
if (!EndCompressFileHandle(ctx->dataFH))
pg_fatal("could not close LO data file: %m");
ctx->dataFH = NULL;
/* register the LO in blobs_NNN.toc */
len = snprintf(buf, sizeof(buf), "%u blob_%u.dat\n", oid, oid);
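/* each line looks like "<oid> blob_<oid>.dat", e.g. "16384 blob_16384.dat" */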
if (!CFH->write_func(buf, len, CFH))
{
/* if write didn't set errno, assume problem is no disk space */
if (errno == 0)
errno = ENOSPC;
pg_fatal("could not write to LOs TOC file: %s",
CFH->get_error_func(CFH));
}
}
/*
* Called by the archiver when finishing saving BLOB DATA.
*
* We close the LOs TOC file.
*/
static void
_EndLOs(ArchiveHandle *AH, TocEntry *te)
{
lclContext *ctx = (lclContext *) AH->formatData;
if (!EndCompressFileHandle(ctx->LOsTocFH))
pg_fatal("could not close LOs TOC file: %m");
ctx->LOsTocFH = NULL;
}
/*
* Gets a relative file name and prepends the output directory, writing the
* result to buf. The caller needs to make sure that buf is MAXPGPATH bytes
* big. Can't use a static char[MAXPGPATH] inside the function because we run
* multithreaded on Windows.
*/
static void
setFilePath(ArchiveHandle *AH, char *buf, const char *relativeFilename)
{
lclContext *ctx = (lclContext *) AH->formatData;
char *dname;
dname = ctx->directory;
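/* the two +1 terms account for the '/' separator and the trailing NUL */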
if (strlen(dname) + 1 + strlen(relativeFilename) + 1 > MAXPGPATH)
pg_fatal("file name too long: \"%s\"", dname);
strcpy(buf, dname);
strcat(buf, "/");
strcat(buf, relativeFilename);
}
/*
* Prepare for parallel restore.
*
* The main thing that needs to happen here is to fill in TABLE DATA and BLOBS
* TOC entries' dataLength fields with appropriate values to guide the
* ordering of restore jobs. The source of said data is format-dependent,
* as is the exact meaning of the values.
*
* A format module might also choose to do other setup here.
*/
static void
_PrepParallelRestore(ArchiveHandle *AH)
{
TocEntry *te;
for (te = AH->toc->next; te != AH->toc; te = te->next)
{
lclTocEntry *tctx = (lclTocEntry *) te->formatData;
char fname[MAXPGPATH];
struct stat st;
/*
* A dumpable object has set tctx->filename; any other object has not
* (see _ArchiveEntry).
*/
if (tctx->filename == NULL)
continue;
/* We may ignore items not due to be restored */
if ((te->reqs & REQ_DATA) == 0)
continue;
/*
* Stat the file and, if successful, put its size in dataLength. When
* using compression, the physical file size might not be a very good
* guide to the amount of work involved in restoring the file, but we
* only need an approximate indicator of that.
*/
setFilePath(AH, fname, tctx->filename);
if (stat(fname, &st) == 0)
te->dataLength = st.st_size;
else if (AH->compression_spec.algorithm != PG_COMPRESSION_NONE)
{
if (AH->compression_spec.algorithm == PG_COMPRESSION_GZIP)
strlcat(fname, ".gz", sizeof(fname));
else if (AH->compression_spec.algorithm == PG_COMPRESSION_LZ4)
strlcat(fname, ".lz4", sizeof(fname));
else if (AH->compression_spec.algorithm == PG_COMPRESSION_ZSTD)
strlcat(fname, ".zst", sizeof(fname));
if (stat(fname, &st) == 0)
te->dataLength = st.st_size;
}
/*
* If this is a BLOBS entry, what we stat'd was blobs_NNN.toc, which
* most likely is a lot smaller than the actual blob data. We don't
* have a cheap way to estimate how much smaller, but fortunately it
* doesn't matter too much as long as we get the LOs processed
* reasonably early. Arbitrarily scale up by a factor of 1K.
*/
if (strcmp(te->desc, "BLOBS") == 0)
te->dataLength *= 1024;
}
}
/*
* Clone format-specific fields during parallel restoration.
*/
static void
_Clone(ArchiveHandle *AH)
{
lclContext *ctx = (lclContext *) AH->formatData;
AH->formatData = (lclContext *) pg_malloc(sizeof(lclContext));
memcpy(AH->formatData, ctx, sizeof(lclContext));
ctx = (lclContext *) AH->formatData;
/*
* TOC-entry-local state isn't an issue because any one TOC entry is
* touched by just one worker child.
*/
/*
* We also don't copy the ParallelState pointer (pstate); only the leader
* process ever writes to it.
*/
}
static void
_DeClone(ArchiveHandle *AH)
{
lclContext *ctx = (lclContext *) AH->formatData;
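/* free the per-worker copy of the format-specific state made by _Clone */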
free(ctx);
}
/*
* This function is executed in the child of a parallel backup for a
* directory-format archive and dumps the actual data for one TOC entry.
*/
static int
_WorkerJobDumpDirectory(ArchiveHandle *AH, TocEntry *te)
{
/*
* The return value is not meaningful here; we either succeed or fail and
* die horribly. A failure will be detected by the parent when the child
* dies unexpectedly.
*/
WriteDataChunksForTocEntry(AH, te);
return 0;
}
/*
* This function is executed in the child of a parallel restore from a
* directory-format archive and restores the actual data for one TOC entry.
*/
static int
_WorkerJobRestoreDirectory(ArchiveHandle *AH, TocEntry *te)
{
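/*
 * All the real work is done by the archiver-level parallel_restore();
 * we just pass its result code back to the caller.
 */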
return parallel_restore(AH, te);
}