Commit Graph

12 Commits

Author SHA1 Message Date
Robert Haas cd64dc42d1 pg_combinebackup: Fix incorrect tablespace handling.
The previous coding mangled the pathname calculation for
incremental files located in user-defined tablespaces.

Enhance the test cases to cover such scenarios, as I should have
done originally. Thanks to Andres Freund for alerting me to the
lack of test coverage.

Discussion: http://postgr.es/m/CA+TgmoYdXTjo9iQeoipTccDpWZzvBNS6EndY2uARM+T4yG_yDg@mail.gmail.com
2024-04-19 13:30:42 -04:00
Daniel Gustafsson 950d4a2cb1 Fix typos and duplicate words
This fixes various typos, duplicated words, and tiny bits of whitespace
mainly in code comments but also in docs.

Author: Daniel Gustafsson <daniel@yesql.se>
Author: Heikki Linnakangas <hlinnaka@iki.fi>
Author: Alexander Lakhin <exclusion@gmail.com>
Author: David Rowley <dgrowleyml@gmail.com>
Author: Nazir Bilal Yavuz <byavuz81@gmail.com>
Discussion: https://postgr.es/m/3F577953-A29E-4722-98AD-2DA9EFF2CBB8@yesql.se
2024-04-18 21:28:07 +02:00
Tomas Vondra 079d94ab34 Check HAVE_COPY_FILE_RANGE before calling copy_file_range
Fix a mistake in ac81101551 - write_reconstructed_file() called
copy_file_range() without properly checking HAVE_COPY_FILE_RANGE.

Reported by several macOS machines in the buildfarm. Also reported by
cfbot, but I missed that failure before committing.
2024-04-05 19:38:20 +02:00
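
A minimal sketch of the guard pattern this fix restores, assuming the
HAVE_COPY_FILE_RANGE symbol from PostgreSQL's build configuration; the
helper below is hypothetical, not the actual write_reconstructed_file()
code:

    /*
     * Copy one block, using copy_file_range() only when the build
     * detected it; otherwise fall back to a plain pread()/pwrite() loop.
     */
    #define _GNU_SOURCE
    #include <unistd.h>

    static ssize_t
    copy_block(int srcfd, int dstfd, off_t offset, size_t len)
    {
    #ifdef HAVE_COPY_FILE_RANGE
        off_t       src_off = offset;
        off_t       dst_off = offset;

        /* kernel-side copy; may reflink on CoW filesystems */
        return copy_file_range(srcfd, &src_off, dstfd, &dst_off, len, 0);
    #else
        char        buf[8192];
        size_t      done = 0;

        /* portable fallback: ordinary read/write copy */
        while (done < len)
        {
            size_t      chunk = len - done < sizeof(buf) ? len - done
                                                         : sizeof(buf);
            ssize_t     nread = pread(srcfd, buf, chunk,
                                      offset + (off_t) done);

            if (nread <= 0)
                return -1;
            if (pwrite(dstfd, buf, (size_t) nread,
                       offset + (off_t) done) != nread)
                return -1;
            done += (size_t) nread;
        }
        return (ssize_t) len;
    #endif
    }
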
Tomas Vondra ac81101551 Allow using copy_file_range in write_reconstructed_file
This commit allows using copy_file_range() for efficient combining of
data from multiple files, instead of simply reading/writing the blocks.
Depending on the filesystem and other factors (size of the increment,
distribution of modified blocks etc.) this may be faster than the
block-by-block copy, but more importantly it enables various features
provided by CoW filesystems.

If a checksum needs to be calculated for the file, the same strategy as
when copying whole files applies: copy_file_range() copies the blocks,
but the file is also read for the checksum calculation.

While the checksum calculation is rarely needed when cloning whole
files, it is needed almost always when reconstructing a file from
multiple backups (the only exception is when the user specifies
--no-manifest).

Author: Tomas Vondra
Reviewed-by: Thomas Munro, Jakub Wartak, Robert Haas
Discussion: https://postgr.es/m/3024283a-7491-4240-80d0-421575f6bb23%40enterprisedb.com
2024-04-05 19:19:36 +02:00
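
A minimal sketch of the checksum strategy described above; the helper
and checksum_update() are hypothetical placeholders, not the actual
reconstruct.c code:

    /*
     * Copy one block via copy_file_range(), then read it back so the
     * output file's checksum can still be computed.
     */
    #define _GNU_SOURCE
    #include <unistd.h>

    extern void checksum_update(const void *data, size_t len); /* placeholder */

    static int
    copy_block_with_checksum(int srcfd, int dstfd, off_t src_off,
                             off_t dst_off, size_t blcksz, char *buf)
    {
        off_t       in_off = src_off;
        off_t       out_off = dst_off;

        /* kernel copies (or reflinks) the block into the output file */
        if (copy_file_range(srcfd, &in_off, dstfd, &out_off, blcksz, 0) < 0)
            return -1;          /* a real version would loop on short copies */

        /* the data must still pass through userspace for the checksum */
        if (pread(srcfd, buf, blcksz, src_off) != (ssize_t) blcksz)
            return -1;
        checksum_update(buf, blcksz);
        return 0;
    }
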
Tomas Vondra f8ce4ed78c Allow copying files using clone/copy_file_range
Adds --clone/--copy-file-range options to pg_combinebackup, to allow
copying files using file cloning or copy_file_range(). These methods may
be faster than the standard block-by-block copy, but the main advantage
is that they enable various features provided by CoW filesystems.

This commit only uses these copy methods for files that did not change
and can be copied as a whole from a single backup.

These new copy methods may not be available on all platforms; if the
requested method is unavailable, the command fails immediately (even if
no files would be copied as a whole). This early failure seems better
than failing later, when trying to copy the first whole file after
already performing a lot of work on earlier files.

If the requested copy method is available, but a checksum needs to be
recalculated (e.g. because of a different checksum type), the file is
still copied using the requested method, but it is also read for the
checksum calculation. Depending on the filesystem this may be more
expensive than just performing the simple copy, but it does enable the
CoW benefits.

Initial patch by Jakub Wartak, various reworks and improvements by me.

Author: Tomas Vondra, Jakub Wartak
Reviewed-by: Thomas Munro, Jakub Wartak, Robert Haas
Discussion: https://postgr.es/m/3024283a-7491-4240-80d0-421575f6bb23%40enterprisedb.com
2024-04-05 18:01:32 +02:00
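
A minimal, Linux-flavoured sketch of the two whole-file copy methods;
copy_whole_file() is hypothetical, and unlike the real tool (which
reports an unsupported method up front) it simply falls back to
copy_file_range():

    /* Clone or copy an entire file that is unchanged since the prior backup. */
    #define _GNU_SOURCE
    #include <stdbool.h>
    #include <sys/ioctl.h>
    #include <unistd.h>
    #ifdef __linux__
    #include <linux/fs.h>           /* FICLONE */
    #endif

    static int
    copy_whole_file(int srcfd, int dstfd, off_t filesize, bool use_clone)
    {
    #if defined(__linux__) && defined(FICLONE)
        if (use_clone)
            return ioctl(dstfd, FICLONE, srcfd);    /* CoW clone in one call */
    #endif
        /* --copy-file-range: the kernel moves the bytes; loop on short copies */
        while (filesize > 0)
        {
            ssize_t     copied = copy_file_range(srcfd, NULL, dstfd, NULL,
                                                 (size_t) filesize, 0);

            if (copied <= 0)
                return -1;
            filesize -= copied;
        }
        return 0;
    }
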
Tomas Vondra 10e3226ba1 Align blocks in incremental backups to BLCKSZ
Align blocks stored in incremental files to BLCKSZ, so that the
incremental backups work well with CoW filesystems.

The header of the incremental file is padded with \0 to a multiple of
BLCKSZ, so that the block data (each block being BLCKSZ bytes) is
aligned to BLCKSZ. The padding is added only to files containing block
data, so files with just the header remain small. This adds a bit of
extra space, but as the number of blocks increases the overhead quickly
becomes negligible. And as the padding is \0 bytes, it compresses
extremely well.

The alignment is important for CoW filesystems, which usually require
blocks to be aligned to the filesystem page size for features like block
sharing, deduplication, etc. to work well. With the variable-sized
header, the blocks in the incremental files were not aligned at all,
negating those CoW benefits.

This matters even for non-CoW filesystems, for example when placed on a
RAID array. If the block is not aligned, it may easily span multiple
devices, causing read and write amplification.

It might be better to align the blocks to the filesystem page size
rather than BLCKSZ, but we have no good way to determine that. Even if
we determined the page size at the time of taking the backup, the backup
may later be moved to a different filesystem. For now BLCKSZ seems
sufficient: the filesystem page is usually 4K, so the default BLCKSZ of
8K is already aligned to that.

Author: Tomas Vondra
Reviewed-by: Robert Haas, Jakub Wartak
Discussion: https://postgr.es/m/3024283a-7491-4240-80d0-421575f6bb23%40enterprisedb.com
2024-04-05 16:30:01 +02:00
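
A minimal sketch of the padding rule, using a hypothetical helper (the
real logic lives in the server-side incremental backup code):

    /* Size of an incremental file's header once padded with \0 bytes. */
    #include <stddef.h>

    #define BLCKSZ 8192             /* PostgreSQL default block size */

    static size_t
    padded_header_size(size_t header_len, unsigned num_blocks)
    {
        if (num_blocks == 0)
            return header_len;      /* header-only files stay small */

        /* round up to the next BLCKSZ boundary */
        return ((header_len + BLCKSZ - 1) / BLCKSZ) * BLCKSZ;
    }

With the default 8K BLCKSZ, a 20-byte header followed by one block is
padded to 8192 bytes, so the block itself starts at offset 8192 and
every subsequent block stays BLCKSZ-aligned.
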
Peter Eisentraut 97d85be365 Make the order of the header file includes consistent
Similar to commit 7e735035f2.

Author: Richard Guo <guofenglinux@gmail.com>
Reviewed-by: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/CAMbWs4-WhpCFMbXCjtJ%2BFzmjfPrp7Hw1pk4p%2BZpU95Kh3ofZ1A%40mail.gmail.com
2024-03-13 15:07:00 +01:00
Peter Eisentraut 6612185883 Fix incorrect format placeholders
2024-03-13 06:40:32 +01:00
Robert Haas 3d5c332a3d Repair various defects in dc21234005.
pg_combinebackup had various problems:

* strncpy was used in various places where strlcpy should be used
  instead, to avoid any possibility of the result not being
  \0-terminated.
* scan_for_existing_tablespaces() failed to close the directory,
  and an error when opening the directory was reported with the
  wrong pathname.
* write_reconstructed_file() contained some redundant and therefore
  dead code.
* flush_manifest() didn't check the result of pg_checksum_update()
  as we do in other places, and misused a local pathname variable
  that shouldn't exist at all.

In pg_basebackup, the wrong variable name was used in one place,
due to a copy and paste that was not properly adjusted.

In blkreftable.c, the loop incorrectly doubled chunkno instead of
max_chunks. Fix that. Also remove a nearby assertion per repeated
off-list complaints from Tom Lane.

Per Coverity and subsequent code inspection by me and by Tom Lane.

Discussion: http://postgr.es/m/CA+Tgmobvqqj-DW9F7uUzT-cQqs6wcVb-Xhs=w=hzJnXSE-kRGw@mail.gmail.com
2024-01-11 13:06:10 -05:00
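
A minimal sketch of the blkreftable.c growth bug, using a simplified
structure rather than the real block reference table data structures,
and assuming the capacity starts out nonzero:

    #include <stdlib.h>

    typedef struct chunk_array
    {
        unsigned   *chunks;
        unsigned    chunkno;        /* chunks currently in use */
        unsigned    max_chunks;     /* allocated capacity */
    } chunk_array;

    static void
    add_chunk(chunk_array *a, unsigned value)
    {
        if (a->chunkno >= a->max_chunks)
        {
            /* double the capacity (max_chunks), not the index (chunkno) */
            a->max_chunks *= 2;
            a->chunks = realloc(a->chunks, a->max_chunks * sizeof(unsigned));
            if (a->chunks == NULL)
                abort();            /* error handling kept to a minimum */
        }
        a->chunks[a->chunkno++] = value;
    }
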
Bruce Momjian 29275b1d17 Update copyright for 2024
Reported-by: Michael Paquier

Discussion: https://postgr.es/m/ZZKTDPxBBMt3C0J9@paquier.xyz

Backpatch-through: 12
2024-01-03 20:49:05 -05:00
Tom Lane 98c6231d19 Fix incorrect data type choices in some read and write calls.
Recently-introduced code in reconstruct.c was using "unsigned"
to store the result of read(), pg_pread(), or write().  This is
completely bogus: it breaks subsequent tests for the result being
negative, as we're being reminded of by a chorus of buildfarm
warnings.  Switch to "int" as was doubtless intended.  (There are
several other uses of "unsigned" in this file that also look poorly
chosen to me, but for now I'm just trying to clean up the buildfarm.)

A larger problem is that "int" is not necessarily wide enough to hold
the result: per POSIX, all these functions return ssize_t.  In places
where the requested read or write length clearly fits in int, that's
academic.  It may be academic anyway as long as we constrain
individual data files to 1GB, since even a readv or writev-like
operation would then not be responsible for transferring more than
1GB.  Nonetheless it seems like trouble waiting to happen, so I made
a pass over readv and writev calls and fixed the result variables
where that seemed appropriate.  We might want to think about changing
some of the fd.c functions to return ssize_t too, for future-proofing;
but I didn't tackle that here.

Discussion: https://postgr.es/m/1672202.1703441340@sss.pgh.pa.us
2023-12-27 11:02:53 -05:00
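
A minimal sketch of the bug class being fixed; read_exact() is a
hypothetical helper, not code from reconstruct.c:

    #include <unistd.h>

    static int
    read_exact(int fd, char *buf, size_t len)
    {
        ssize_t     rb = read(fd, buf, len);    /* not "unsigned rb = ..." */

        if (rb < 0)                 /* only meaningful if rb is signed */
            return -1;              /* read error */
        if ((size_t) rb != len)
            return -1;              /* short read or EOF */
        return 0;
    }

With "unsigned rb", the -1 returned on error would convert to a huge
positive value and the "rb < 0" test could never fire.
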
Robert Haas dc21234005 Add support for incremental backup.
To take an incremental backup, you use the new replication command
UPLOAD_MANIFEST to upload the manifest for the prior backup. This
prior backup could either be a full backup or another incremental
backup.  You then use BASE_BACKUP with the INCREMENTAL option to take
the backup.  pg_basebackup now has an --incremental=PATH_TO_MANIFEST
option to trigger this behavior.

An incremental backup is like a regular full backup except that
some relation files are replaced with files with names like
INCREMENTAL.${ORIGINAL_NAME}, and the backup_label file contains
additional lines identifying it as an incremental backup. The new
pg_combinebackup tool can be used to reconstruct a data directory
from a full backup and a series of incremental backups.

Patch by me.  Reviewed by Matthias van de Meent, Dilip Kumar, Jakub
Wartak, Peter Eisentraut, and Álvaro Herrera. Thanks especially to
Jakub for incredibly helpful and extensive testing.

Discussion: http://postgr.es/m/CA+TgmoYOYZfMCyOXFyC-P+-mdrZqm5pP2N7S-r0z3_402h9rsA@mail.gmail.com
2023-12-20 09:49:12 -05:00
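
A hedged sketch of the resulting workflow from the client side
(directory names are illustrative; pg_basebackup issues UPLOAD_MANIFEST
and BASE_BACKUP with the INCREMENTAL option under the hood):

    # take a full backup; a backup_manifest is written into the directory
    pg_basebackup -D /backups/full

    # take an incremental backup driven by the prior backup's manifest
    pg_basebackup -D /backups/incr1 \
        --incremental=/backups/full/backup_manifest

    # reconstruct a complete data directory from the chain
    pg_combinebackup /backups/full /backups/incr1 -o /backups/restored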