/*-------------------------------------------------------------------------
 *
 * pgtar.h
 *	  Functions for manipulating tarfile datastructures (src/port/tar.c)
 *
 *
 * Portions Copyright (c) 1996-2022, PostgreSQL Global Development Group
 * Portions Copyright (c) 1994, Regents of the University of California
 *
 * src/include/pgtar.h
 *
 *-------------------------------------------------------------------------
 */
#ifndef PG_TAR_H
#define PG_TAR_H

#define TAR_BLOCK_SIZE	512

enum tarError
{
	TAR_OK = 0,
	TAR_NAME_TOO_LONG,
	TAR_SYMLINK_TOO_LONG
};

extern enum tarError tarCreateHeader(char *h, const char *filename,
									 const char *linktarget, pgoff_t size,
									 mode_t mode, uid_t uid, gid_t gid,
									 time_t mtime);
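
/*
 * Illustrative sketch (not part of this header): filling a TAR_BLOCK_SIZE
 * header buffer for a regular file.  The path, size, mode, ids, and mtime
 * below are hypothetical example values.
 *
 *		char		hdr[TAR_BLOCK_SIZE];
 *		enum tarError rc;
 *
 *		rc = tarCreateHeader(hdr, "base/1/1234", NULL, (pgoff_t) 8192,
 *							 0600, (uid_t) 0, (gid_t) 0, time(NULL));
 *		if (rc != TAR_OK)
 *			... handle TAR_NAME_TOO_LONG or TAR_SYMLINK_TOO_LONG ...
 *
 * Pass a non-NULL linktarget only when the member is a symbolic link.
 */
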
extern uint64 read_tar_number(const char *s, int len);
extern void print_tar_number(char *s, int len, uint64 val);
extern int tarChecksum(char *header);
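
/*
 * Illustrative sketch (not part of this header): the numeric-field helpers
 * operate on raw header bytes, so callers address fields by their ustar
 * offsets; for example, the member size field is 12 bytes at offset 124 and
 * the checksum field is 8 bytes at offset 148.  "hdr" is a hypothetical
 * TAR_BLOCK_SIZE buffer already holding a header.
 *
 *		uint64		size = read_tar_number(&hdr[124], 12);
 *
 *		print_tar_number(&hdr[148], 8, tarChecksum(hdr));
 *
 * read_tar_number understands both the octal and the base-256 encodings;
 * print_tar_number writes values in octal when they fit in the field and
 * falls back to base-256 for oversized values (e.g. members exceeding 8GB).
 */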

/*
 * Compute the number of padding bytes required for an entry in a tar
 * archive.  We must pad out to a multiple of TAR_BLOCK_SIZE.  Since that's
 * a power of 2, we can use TYPEALIGN().
 */
static inline size_t
tarPaddingBytesRequired(size_t len)
{
	return TYPEALIGN(TAR_BLOCK_SIZE, len) - len;
}
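
/*
 * Illustrative sketch (not part of this header): after emitting a member's
 * data, a writer pads out to the next TAR_BLOCK_SIZE boundary with zero
 * bytes.  "filesize" and the write step are hypothetical.
 *
 *		size_t		pad = tarPaddingBytesRequired((size_t) filesize);
 *
 *		if (pad > 0)
 *			... write "pad" zero bytes after the member data ...
 *
 * For example, a 1000-byte member needs 24 padding bytes to reach 1024.
 */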

#endif							/* PG_TAR_H */