postgresql/src/backend/access
Alvaro Herrera fa2fa99552 Permit dump/reload of not-too-large >1GB tuples
Our documentation states that our maximum field size is 1 GB, and that
our maximum row size is 1.6 TB.  However, while this might be attainable
in theory with enough contortions, it is not workable in practice; for
starters, pg_dump fails to dump tables containing rows larger than 1 GB,
even if individual columns are well below the limit; and even if one
does manage to manufacture a dump file containing a row that large, the
server refuses to load it anyway.

This commit enables dumping and reloading of such tuples, provided two
conditions are met:

1. no single column is larger than 1 GB (in output size -- for bytea
   this includes the formatting overhead)
2. the whole row is not larger than 2 GB

There are three related changes to enable this:

a. StringInfo's API now has two additional functions that allow creating
a string that grows beyond the typical 1 GB limit (a "long" string).
ABI compatibility is maintained.  We still limit these strings to 2 GB,
though, for reasons explained below.
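
For illustration, here is a standalone sketch of the underlying
technique: a buffer that grows by doubling, with the hard cap chosen
per string.  Everything here (Buf, buf_enlarge, long_ok, the limit
constants) is hypothetical and is not the actual stringinfo.c code;
the sketch also assumes a 64-bit size_t for brevity.

    #include <stdbool.h>
    #include <stdlib.h>

    #define NORMAL_LIMIT ((size_t) 0x3FFFFFFF)  /* ~1 GB, like MaxAllocSize */
    #define LONG_LIMIT   ((size_t) 0x7FFFFFFF)  /* ~2 GB, signed 32-bit max */

    typedef struct
    {
        char   *data;
        size_t  len;        /* bytes currently used */
        size_t  maxlen;     /* bytes currently allocated */
        bool    long_ok;    /* may this buffer grow past 1 GB? */
    } Buf;

    /* Make room for "needed" more bytes; false if the cap would be hit. */
    static bool
    buf_enlarge(Buf *buf, size_t needed)
    {
        size_t  limit = buf->long_ok ? LONG_LIMIT : NORMAL_LIMIT;
        size_t  newlen;
        char   *newdata;

        if (needed > limit - buf->len)
            return false;               /* request exceeds the hard cap */
        if (buf->len + needed <= buf->maxlen)
            return true;                /* already large enough */

        newlen = (buf->maxlen > 0) ? buf->maxlen : 1024;
        while (newlen < buf->len + needed)
            newlen *= 2;                /* grow by doubling */
        if (newlen > limit)
            newlen = limit;             /* clamp the final doubling */

        newdata = realloc(buf->data, newlen);
        if (newdata == NULL)
            return false;
        buf->data = newdata;
        buf->maxlen = newlen;
        return true;
    }

The real patch must of course keep StringInfoData's layout unchanged
to preserve ABI compatibility; the struct above is purely illustrative.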

b. COPY now uses long StringInfos, so that pg_dump doesn't choke
trying to emit rows longer than 1 GB.
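
Conceptually, the COPY-side change is just a different initializer for
the per-row output buffer.  A hypothetical fragment (the function and
field names here are assumptions, not necessarily what copy.c uses):

    /* before: ordinary StringInfo, growth capped near 1 GB */
    initStringInfo(&cstate->line_buf);

    /* after: "long" StringInfo, may grow up to the 2 GB limit */
    initLongStringInfo(&cstate->line_buf);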

c. heap_form_tuple now uses the MCXT_ALLOC_HUGE flag in its allocation
for the input tuple, which means that large tuples are accepted on
input.  Note that at this point we do not apply any further limit to the
input tuple size.
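
A sketch of what change (c) amounts to, abbreviated from the shape of
heap_form_tuple(); the "before" line and the exact flag combination
are assumptions:

    /* before: plain palloc, which errors out at 1 GB (MaxAllocSize) */
    tuple = (HeapTuple) palloc0(HEAPTUPLESIZE + len);

    /* after: explicitly opt in to "huge" (> 1 GB) allocations */
    tuple = (HeapTuple) MemoryContextAllocExtended(CurrentMemoryContext,
                                                   HEAPTUPLESIZE + len,
                                                   MCXT_ALLOC_HUGE | MCXT_ALLOC_ZERO);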

The main reason to limit to 2 GB is that the FE/BE protocol uses a
32-bit length word to describe each row; and because the documentation
is ambiguous about its signedness and libpq considers it signed, we
cannot use the highest-order bit.  Additionally, the StringInfo API uses
"int" (4 bytes wide on most platforms) in many places, so raising the
limit any further would mean changing that API too, which has lots of
fallout.
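
The arithmetic behind the cap, as a standalone check: read as a signed
32-bit integer, the largest usable length word is INT32_MAX bytes,
just under 2 GiB, and setting the high-order bit would make the value
negative.

    #include <stdint.h>
    #include <stdio.h>

    int
    main(void)
    {
        int32_t max_len = INT32_MAX;    /* 2^31 - 1 = 2147483647 */

        printf("largest signed length word: %d bytes (~%.2f GiB)\n",
               (int) max_len, max_len / (double) (1 << 30));

        /* With the high-order bit set, a signed reader sees a negative
         * length, so the full 4 GB unsigned range is not available. */
        printf("high bit set reads as: %d\n", INT32_MIN);
        return 0;
    }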

Backpatch to 9.5, the oldest branch that has
MemoryContextAllocExtended, a necessary piece of infrastructure.  We
could apply this to 9.4 with very little additional effort, but going
any further back would require backpatching "huge" allocations too.

This is the largest set of changes we could find that can be
back-patched without breaking compatibility with existing systems.
Fixing a bigger set of problems (for example, dumping tuples bigger than
2 GB, or dumping fields bigger than 1 GB) would require changing the FE/BE
protocol and/or changing the StringInfo API in an ABI-incompatible way,
neither of which would be back-patchable.

Authors: Daniel Vérité, Álvaro Herrera
Reviewed by: Tomas Vondra
Discussion: https://postgr.es/m/20160229183023.GA286012@alvherre.pgsql
2016-12-02 00:34:01 -03:00
brin Rewrite PageIndexDeleteNoCompact into a form that only deletes 1 tuple. 2016-09-09 19:00:59 -04:00
common Permit dump/reload of not-too-large >1GB tuples 2016-12-02 00:34:01 -03:00
gin Replace the built-in GIN array opclasses with a single polymorphic opclass. 2016-09-26 14:52:44 -04:00
gist Fix outdated comments, GIST search queue is not an RBTree anymore. 2016-09-20 11:38:25 +03:00
hash Improve hash index bucket split behavior. 2016-11-30 15:39:21 -05:00
heap Fix WAL-logging of FSM and VM truncation. 2016-10-19 14:26:05 +03:00
index Add SQL-accessible functions for inspecting index AM properties. 2016-08-13 18:31:14 -04:00
nbtree Avoid pin scan for replay of XLOG_BTREE_VACUUM in all cases 2016-11-17 13:31:30 -03:00
rmgrdesc Remove useless code. 2016-09-23 10:44:50 -04:00
spgist Add macros to make AllocSetContextCreate() calls simpler and safer. 2016-08-27 17:50:38 -04:00
tablesample Update copyright for 2016 2016-01-02 13:33:40 -05:00
transam Bring some clarity to the defaults for the xxx_flush_after parameters. 2016-11-25 18:36:10 -05:00
Makefile TABLESAMPLE, SQL Standard and extensible 2015-05-15 14:37:10 -04:00