Commit Graph

132 Commits

Author SHA1 Message Date
Andres Freund 3b925e905d tableam: Add pg_dump support.
This adds pg_dump support for table AMs in a similar manner to how
tablespaces are handled. That is, instead of specifying the AM for
every CREATE TABLE etc, emit SET default_table_access_method
statements. That makes it easier to change the AM for all/most tables
in a dump, and allows restore to succeed even if some AM is not
available.

This increases the dump archive version, as a table's/matview's AM
needs to be tracked therein.

Author: Dimitri Dolgov, Andres Freund
Discussion:
    https://postgr.es/m/20180703070645.wchpu5muyto5n647@alap3.anarazel.de
    https://postgr.es/m/20190304234700.w5tmhducs5wxgzls@alap3.anarazel.de
2019-03-06 09:54:38 -08:00
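
For illustration, a sketch of the dump-script pattern described in the
commit above; the table, column, and AM names are hypothetical:

    -- the AM is set once via GUC, not repeated on every CREATE TABLE
    SET default_table_access_method = heap;

    CREATE TABLE public.accounts (
        id      integer,
        balance numeric
    );
    -- if the AM were unavailable at restore time, only the SET would
    -- fail; the CREATE TABLE statements carry no AM clause themselves
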
Alvaro Herrera 19455c9f56 pg_dump: Fix ArchiveEntry handling of some empty values
Commit f831d4acc changed what pg_dump emits for some empty fields: they
were output as empty strings before, but as NULL pointers afterwards.  That
makes old pg_restore crash on such files, which is
unacceptable.  Return to the original representation by explicitly
setting those struct members to "" where needed; remove some no longer
needed checks for NULL input.

We can declutter the code a little by returning to NULLs when we next
update the archive version, so add a note to remind us later.

Discussion: https://postgr.es/m/20190225074539.az6j3u464cvsoxh6@depesz.com
Reported-by: hubert depesz lubaczewski
Author: Dmitry Dolgov
2019-02-28 17:19:44 -03:00
Alvaro Herrera f831d4accd Add ArchiveOpts to pass options to ArchiveEntry
The ArchiveEntry function has a number of arguments that can be
considered optional.  Split them out into a separate struct, to make the
API more flexible for changes.

Author: Dmitry Dolgov
Discussion: https://postgr.es/m/CA+q6zcXRxPE+qp6oerQWJ3zS061WPOhdxeMrdc-Yf-2V5vsrEw@mail.gmail.com
2019-02-01 11:29:42 -03:00
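
A minimal sketch of the options-struct pattern the commit above
introduces; the struct fields, macro shape, and function signature here
are illustrative, not the actual pg_dump definitions:

    typedef struct ArchiveOpts
    {
        const char *tag;
        const char *owner;
        const char *createStmt;
        const char *dropStmt;
    } ArchiveOpts;

    #define ARCHIVE_OPTS(...) &(ArchiveOpts){__VA_ARGS__}

    extern void ArchiveEntry(void *fout, int catId, int dumpId,
                             ArchiveOpts *opts);

    static void
    dump_one_object(void *fout, int catId, int dumpId)
    {
        /* call sites name only the fields they need; new optional
         * arguments can be added without touching every caller */
        ArchiveEntry(fout, catId, dumpId,
                     ARCHIVE_OPTS(.tag = "mytable",
                                  .owner = "alice"));
    }
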
Andres Freund 578b229718 Remove WITH OIDS support, change oid catalog column visibility.
Previously tables declared WITH OIDS, including a significant fraction
of the catalog tables, stored the oid column not as a normal column,
but as part of the tuple header.

This special column was not shown by default, which was somewhat odd,
as it's often (consider e.g. pg_class.oid) one of the more important
parts of a row.  Neither pg_dump nor COPY included the contents of the
oid column by default.

The fact that the oid column was not an ordinary column necessitated a
significant amount of special case code to support oid columns. That
was already painful for the existing code, and the upcoming work aiming
to make table storage pluggable would have required expanding and
duplicating that "specialness" significantly.

WITH OIDS has been deprecated since 2005 (commit ff02d0a05280e0).
Remove it.

Removing includes:
- CREATE TABLE and ALTER TABLE syntax for declaring the table to be
  WITH OIDS has been removed (WITH (oids[ = true]) will error out)
- pg_dump does not support dumping tables declared WITH OIDS and will
  issue a warning when dumping one (and ignore the oid column).
- restoring a pg_dump archive with pg_restore will warn when
  restoring a table with oid contents (and ignore the oid column)
- COPY will refuse to load binary dump that includes oids.
- pg_upgrade will error out when encountering tables declared WITH
  OIDS, they have to be altered to remove the oid column first.
- Functionality to access the oid of the last inserted row (like
  plpgsql's RESULT_OID, spi's SPI_lastoid, ...) has been removed.

The syntax for declaring a table WITHOUT OIDS (or WITH (oids = false)
for CREATE TABLE) is still supported. While that requires a bit of
support code, it seems unnecessary to break applications / dumps that
do not use oids, and are explicit about not using them.

The biggest user of WITH OID columns was postgres' catalog. This
commit changes all 'magic' oid columns to be columns that are normally
declared and stored. To reduce unnecessary query breakage, all the
newly added columns are still named 'oid', even if a table's column
naming scheme would indicate 'reloid' or such.  This obviously
requires adapting a lot of code, mostly replacing oid access via
HeapTupleGetOid() with access to the underlying Form_pg_*->oid column.

The bootstrap process now assigns oids for all oid columns in
genbki.pl that do not have an explicit value (starting at the largest
oid previously used); only oids assigned later will be above
FirstBootstrapObjectId. As the oid column is now a normal column, the
special bootstrap syntax for oids has been removed.

Oids are not automatically assigned during insertion anymore; all
backend code explicitly assigns oids with GetNewOidWithIndex(). For
the rare case that insertions into the catalog via SQL are called for,
the new pg_nextoid() function can be used (which only works on catalog
tables).

The fact that oid columns on system tables are now normal columns
means that they will be included in the set of columns expanded
by * (i.e. SELECT * FROM pg_class will now include the table's oid,
previously it did not). It'd not technically be hard to hide the oid
column by default, but that'd mean confusing behavior would either
have to be carried forward forever, or it'd cause breakage down the
line.

While it's not unlikely that further adjustments are needed, the
scope/invasiveness of the patch makes it worthwhile to merge this
now. It's painful to maintain externally, too complicated to commit
after the code freeze, and a dependency of a number of other
patches.

Catversion bump, for obvious reasons.

Author: Andres Freund, with contributions by John Naylor
Discussion: https://postgr.es/m/20180930034810.ywp2c7awz7opzcfr@alap3.anarazel.de
2018-11-20 16:00:17 -08:00
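
A few SQL consequences of the commit above, restating what the message
says (table names hypothetical):

    -- system-table oids are now ordinary columns, so * expansion includes them
    SELECT oid, relname FROM pg_class WHERE relname = 'pg_class';

    -- still accepted: being explicit about not using oids remains harmless
    CREATE TABLE t (a int) WITHOUT OIDS;

    -- now an error
    CREATE TABLE t2 (a int) WITH (oids = true);
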
Peter Eisentraut 6178f3cb79 pg_dump: Fix dumping of WITH OIDS tables
A table with OIDs that was the first in the dump output would not get
dumped with OIDs enabled.  Fix that.

The reason was that the currWithOids flag was declared to be bool but
actually also takes a -1 value for "don't know yet".  But under
stdbool.h semantics, that is coerced to true, so the required SET
default_with_oids command is not output again.  Change the variable
type to char to fix that.

Reported-by: Derek Nelson <derek@pipelinedb.com>
2018-11-13 09:41:20 +01:00
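
The stdbool.h pitfall behind the bug above, as a standalone sketch
(currWithOids is the flag named in the message; the rest is
illustrative):

    #include <stdbool.h>
    #include <stdio.h>

    int
    main(void)
    {
        bool currWithOids = -1;   /* meant as a third, "don't know yet" state */

        /* under stdbool.h semantics the -1 is coerced to true, so the flag
         * already looks like "WITH OIDS active" and the required
         * SET default_with_oids command would be skipped */
        printf("%d\n", currWithOids == true);   /* prints 1 */
        return 0;
    }
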
Tom Lane 548e50976c Improve parallel scheduling logic in pg_dump/pg_restore.
Previously, the way this worked was that a parallel pg_dump would
re-order the TABLE_DATA items in the dump's TOC into decreasing size
order, and separately re-order (some of) the INDEX items into decreasing
size order.  Then pg_dump would dump the items in that order.  Later,
parallel pg_restore just followed the TOC order.  This method had lots
of deficiencies:

* TOC ordering randomly differed between parallel and non-parallel
dumps, and was hard to predict in the former case, causing problems
for building stable pg_dump test cases.

* Parallel restore only followed a well-chosen order if the dump had
been done in parallel; in particular, this never happened for restore
from custom-format dumps.

* The best order for restore isn't necessarily the same as for dump,
and it's not really static either because of locking considerations.

* TABLE_DATA and INDEX items aren't the only things that might take a lot
of work during restore.  Scheduling was particularly stupid for the BLOBS
item, which might require lots of work during dump as well as restore,
but was left to the end in either case.

This patch removes the logic that changed the TOC order, fixing the
test instability problem.  Instead, we sort the parallelizable items
just before processing them during a parallel dump.  Independently
of that, parallel restore prioritizes the ready-to-execute tasks
based on the size of the underlying table.  In the case of dependent
tasks such as index, constraint, or foreign key creation, the largest
relevant table is used as the metric for estimating the task length.
(This is pretty crude, but it should be enough to avoid the case we
want to avoid, which is ending the run with just a few large tasks
such that we can't make use of all N workers.)

Patch by me, responding to a complaint from Peter Eisentraut,
who also reviewed the patch.

Discussion: https://postgr.es/m/5137fe12-d0a2-4971-61b6-eb4e7e8875f8@2ndquadrant.com
2018-09-14 17:31:51 -04:00
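
A sketch of the ready-task prioritization described above, with
hypothetical type and field names (the real code tracks this per TOC
entry):

    typedef struct ReadyTask
    {
        const char *desc;
        long        dataLength;   /* size of the largest relevant table */
    } ReadyTask;

    /* descending by size: hand big jobs to workers first, so the run
     * doesn't end with a few large tasks that can't use all N workers */
    static int
    ready_task_cmp(const void *a, const void *b)
    {
        long la = ((const ReadyTask *) a)->dataLength;
        long lb = ((const ReadyTask *) b)->dataLength;

        return (la < lb) - (la > lb);
    }

Passing this comparator to qsort() over the ready list would yield the
hand-out order.
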
Tom Lane 3d2aed664e Avoid using unsafe search_path settings during dump and restore.
Historically, pg_dump has "set search_path = foo, pg_catalog" when
dumping an object in schema "foo", and has also caused that setting
to be used while restoring the object.  This is problematic because
functions and operators in schema "foo" could capture references meant
to refer to pg_catalog entries, both in the queries issued by pg_dump
and those issued during the subsequent restore run.  That could
result in dump/restore misbehavior, or in privilege escalation if a
nefarious user installs trojan-horse functions or operators.

This patch changes pg_dump so that it does not change the search_path
dynamically.  The emitted restore script sets the search_path to what
was used at dump time, and then leaves it alone thereafter.  Created
objects are placed in the correct schema, regardless of the active
search_path, by dint of schema-qualifying their names in the CREATE
commands, as well as in subsequent ALTER and ALTER-like commands.

Since this change requires a change in the behavior of pg_restore
when processing an archive file made according to this new convention,
bump the archive file version number; old versions of pg_restore will
therefore refuse to process files made with new versions of pg_dump.

Security: CVE-2018-1058
2018-02-26 10:18:21 -05:00
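
What the new convention looks like in an emitted script, sketched with
hypothetical object names (the search_path shown empty, as in a typical
modern dump):

    SELECT pg_catalog.set_config('search_path', '', false);

    -- objects land in the right schema via qualification, not search_path
    CREATE TABLE foo.t (a integer);
    ALTER TABLE foo.t OWNER TO alice;
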
Tom Lane 160a4f62ee In pg_dump, force reconnection after issuing ALTER DATABASE SET command(s).
The folly of not doing this was exposed by the buildfarm: in some cases,
the GUC settings applied through ALTER DATABASE SET may be essential to
interpreting the reloaded data correctly.  Another argument why we can't
really get away with the scheme proposed in commit b3f840120 is that it
cannot work for parallel restore: even if the parent process manages to
hang onto the previous GUC state, worker processes would see the state
post-ALTER-DATABASE.  (Perhaps we could have dodged that bullet by
delaying DATABASE PROPERTIES restoration to the end of the run, but
that does nothing for the data semantics problem.)

This leaves us with no solution for the default_transaction_read_only issue
that commit 4bd371f6f intended to work around, other than "you gotta remove
such settings before dumping/upgrading".  However, in view of the fact that
parallel restore broke that hack years ago and no one has noticed, it's
fair to question how many people care.  I'm unexcited about adding a large
dollop of new complexity to handle that corner case.

This would be a one-liner fix, except it turns out that ReconnectToServer
tries to optimize away "redundant" reconnections.  While that may have been
valuable when coded, a quick survey of current callers shows that there are
no cases where that's actually useful, so just remove that check.  While at
it, remove the function's useless return value.

Discussion: https://postgr.es/m/12453.1516655001@sss.pgh.pa.us
2018-01-23 10:55:16 -05:00
Tom Lane 3eb9a5e7c4 Fix pg_dump/pg_restore to emit REFRESH MATERIALIZED VIEW commands last.
Because we push all ACL (i.e. GRANT/REVOKE) restore steps to the end,
materialized view refreshes were occurring while the permissions on
referenced objects were still at defaults.  This led to failures if,
say, an MV owned by user A reads from a table owned by user B, even
if B had granted the necessary privileges to A.  We've had multiple
complaints about that type of restore failure, most recently from
Jordan Gigov.

The ideal fix for this would be to start treating ACLs as dependency-
sortable objects, rather than hard-wiring anything about their dump order
(the existing approach is a messy kluge dating to commit dc0e76ca3).
But that's going to be a rather major change, and it certainly wouldn't
lead to a back-patchable fix.  As a short-term solution, convert the
existing two-pass hack (ie, normal objects then ACLs) to a three-pass hack,
ie, normal objects then ACLs then matview refreshes.  Because this happens
in RestoreArchive(), it will also fix the problem when restoring from an
existing archive-format dump.

(Note this means that if a matview refresh would have failed under the
permissions prevailing at dump time, it'll fail during restore as well.
We'll define that as user error rather than something we should try
to work around.)

To avoid performance loss in parallel restore, we need the matview
refreshes to still be parallelizable.  Hence, clean things up enough
so that both ACLs and matviews are handled by the parallel restore
infrastructure, instead of reverting back to serial restore for ACLs.
There is still a final serial step, but it shouldn't normally have to
do anything; it's only there to try to recover if we get stuck due to
some problem like unresolved circular dependencies.

Patch by me, but it owes something to an earlier attempt by Kevin Grittner.
Back-patch to 9.3 where materialized views were introduced.

Discussion: https://postgr.es/m/28572.1500912583@sss.pgh.pa.us
2017-08-03 17:36:39 -04:00
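
The resulting restore order, sketched with hypothetical objects (the
matview is created empty in the first pass; exact statement forms
vary):

    -- pass 1: normal objects
    CREATE TABLE b.src (x integer);
    CREATE MATERIALIZED VIEW a.mv AS SELECT x FROM b.src WITH NO DATA;

    -- pass 2: ACLs
    GRANT SELECT ON TABLE b.src TO mv_owner;

    -- pass 3: matview refreshes, now that privileges are in place
    REFRESH MATERIALIZED VIEW a.mv;
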
Tom Lane c7b8998ebb Phase 2 of pgindent updates.
Change pg_bsd_indent to follow upstream rules for placement of comments
to the right of code, and remove pgindent hack that caused comments
following #endif to not obey the general rule.

Commit e3860ffa4d wasn't actually using
the published version of pg_bsd_indent, but a hacked-up version that
tried to minimize the amount of movement of comments to the right of
code.  The situation of interest is where such a comment has to be
moved to the right of its default placement at column 33 because there's
code there.  BSD indent has always moved right in units of tab stops
in such cases --- but in the previous incarnation, indent was working
in 8-space tab stops, while now it knows we use 4-space tabs.  So the
net result is that in about half the cases, such comments are placed
one tab stop left of before.  This is better all around: it leaves
more room on the line for comment text, and it means that in such
cases the comment uniformly starts at the next 4-space tab stop after
the code, rather than sometimes one and sometimes two tabs after.

Also, ensure that comments following #endif are indented the same
as comments following other preprocessor commands such as #else.
That inconsistency turns out to have been self-inflicted damage
from a poorly-thought-through post-indent "fixup" in pgindent.

This patch is much less interesting than the first round of indent
changes, but also bulkier, so I thought it best to separate the effects.

Discussion: https://postgr.es/m/E1dAmxK-0006EE-1r@gemulon.postgresql.org
Discussion: https://postgr.es/m/30527.1495162840@sss.pgh.pa.us
2017-06-21 15:19:25 -04:00
Bruce Momjian a6fd7b7a5f Post-PG 10 beta1 pgindent run
perltidy run not included.
2017-05-17 16:31:56 -04:00
Tom Lane c079673dcb Preventive maintenance in advance of pgindent run.
Reformat various places in which pgindent will make a mess, and
fix a few small violations of coding style that I happened to notice
while perusing the diffs from a pgindent dry run.

There is one actual bug fix here: the need-to-enlarge-the-buffer code
path in icu_convert_case was obviously broken.  Perhaps it's unreachable
in our usage?  Or maybe this is just sadly undertested.
2017-05-16 20:36:35 -04:00
Peter Eisentraut 4be613f692 pg_dump: Rename some typedefs to avoid name conflicts
In struct _archiveHandle, some of the fields have the same name as a
typedef.  This is kind of confusing, so rename the types so they have
names distinct from the struct fields.  In C++, the previous coding
changes the meaning of the typedef in the scope of the struct, causing
warnings and possibly other problems.

Reviewed-by: Andres Freund <andres@anarazel.de>
2017-04-06 14:16:54 -04:00
Andrew Dunstan 96a7128b7b Sync pg_dump and pg_dumpall output
Before exiting, any files written are fsync'ed. A --no-sync option is also
provided for a faster exit if desired.

Michael Paquier.

Reviewed by Albe Laurenz

Discussion: https://postgr.es/m/CAB7nPqS1uZ=Ov+UruW6jr3vB-S_DLVMPc0dQpV-fTDjmm0ZQMg@mail.gmail.com
2017-03-22 10:20:13 -04:00
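
Usage is a single switch; a hypothetical invocation:

    pg_dump --no-sync -f out.sql mydb    # trade durability for a faster exit
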
Tom Lane fcf70e0dbc Re-pgindent src/bin/pg_dump/*
Cleanup for recent patches --- it's not much change, but I got annoyed
while re-indenting the view-rule fix I'm working on.
2016-11-17 14:36:59 -05:00
Peter Eisentraut 8c035e55c4 pg_dump: Simplify internal archive version handling
The ArchiveHandle structure contained the archive format version number
twice, once as a single field and once split into components.  Simplify
that by just keeping the single field and adding some macros to extract
the components.  Introduce some macros for composing version numbers, to
eliminate the repeated use of magic formulas.  Drop the unused trailing
zero byte from the run-time composite version representation.

reviewed by Tom Lane
2016-10-25 17:02:22 -04:00
Tom Lane fb03d08a89 Rationalize parallel dump/restore's handling of worker cmd/status messages.
The existing APIs for creating and parsing command and status messages are
rather messy; for example, archive-format modules have to provide code
for constructing command messages, which is entirely pointless since
the code to read them is hard-wired in WaitForCommands() and hence
no format-specific variation is actually possible.  But there's little
foreseeable reason to need format-specific variation anyway.

The situation for status messages is no better; at least those are both
constructed and parsed by format-specific code, but said code is quite
redundant since there's no actual need for format-specific variation.

To add insult to injury, the first API involves returning pointers to
static buffers, which is bad, while the second involves returning pointers
to malloc'd strings, which is safer but randomly inconsistent.

Hence, get rid of the MasterStartParallelItem and MasterEndParallelItem
APIs, and instead write centralized functions that construct and parse
command and status messages.  If we ever do need more flexibility, these
functions can be the standard implementations of format-specific
callback methods, but that's a long way off if it ever happens.

Tom Lane, reviewed by Kevin Grittner

Discussion: <17340.1464465717@sss.pgh.pa.us>
2016-09-27 13:56:04 -04:00
Tom Lane b7b8cc0cfc Redesign parallel dump/restore's wait-for-workers logic.
The ListenToWorkers/ReapWorkerStatus APIs were messy and hard to use.
Instead, make DispatchJobForTocEntry register a callback function that
will take care of state cleanup, doing whatever had been done by the caller
of ReapWorkerStatus in the old design.  (This callback is essentially just
the old mark_work_done function in the restore case, and a trivial test for
worker failure in the dump case.)  Then we can have ListenToWorkers call
the callback immediately on receipt of a status message, and return the
worker to WRKR_IDLE state; so the WRKR_FINISHED state goes away.

This allows us to design a unified wait-for-worker-messages loop:
WaitForWorkers replaces EnsureIdleWorker and EnsureWorkersFinished as well
as the mess in restore_toc_entries_parallel.  Also, we no longer need the
fragile API spec that the caller of DispatchJobForTocEntry is responsible
for ensuring there's an idle worker, since DispatchJobForTocEntry can just
wait until there is one.

In passing, I got rid of the ParallelArgs struct, which was a net negative
in terms of notational verboseness, and didn't seem to be providing any
noticeable amount of abstraction either.

Tom Lane, reviewed by Kevin Grittner

Discussion: <1188.1464544443@sss.pgh.pa.us>
2016-09-27 13:22:39 -04:00
Tom Lane e652273e07 Redesign handling of SIGTERM/control-C in parallel pg_dump/pg_restore.
Formerly, Unix builds of pg_dump/pg_restore would trap SIGINT and similar
signals and set a flag that was tested in various data-transfer loops.
This was prone to errors of omission (cf commit 3c8aa6654); and even if
the client-side response was prompt, we did nothing that would cause
long-running SQL commands (e.g. CREATE INDEX) to terminate early.
Also, the master process would effectively do nothing at all upon receipt
of SIGINT; the only reason it seemed to work was that in typical scenarios
the signal would also be delivered to the child processes.  We should
support termination when a signal is delivered only to the master process,
though.

Windows builds had no console interrupt handler, so they would just fall
over immediately at control-C, again leaving long-running SQL commands to
finish unmolested.

To fix, remove the flag-checking approach altogether.  Instead, allow the
Unix signal handler to send a cancel request directly and then exit(1).
In the master process, also have it forward the signal to the children.
On Windows, add a console interrupt handler that behaves approximately
the same.  The main difference is that a single execution of the Windows
handler can send all the cancel requests since all the info is available
in one process, whereas on Unix each process sends a cancel only for its
own database connection.

In passing, fix an old problem that DisconnectDatabase tends to send a
cancel request before exiting a parallel worker, even if nothing went
wrong.  This is at least a waste of cycles, and could lead to unexpected
log messages, or maybe even data loss if it happened in pg_restore (though
in the current code the problem seems to affect only pg_dump).  The cause
was that after a COPY step, pg_dump was leaving libpq in PGASYNC_BUSY
state, causing PQtransactionStatus() to report PQTRANS_ACTIVE.  That's
normally harmless because the next PQexec() will silently clear the
PGASYNC_BUSY state; but in a parallel worker we might exit without any
additional SQL commands after a COPY step.  So add an extra PQgetResult()
call after a COPY to allow libpq to return to PGASYNC_IDLE state.

This is a bug fix, IMO, so back-patch to 9.3 where parallel dump/restore
were introduced.

Thanks to Kyotaro Horiguchi for Windows testing and code suggestions.

Original-Patch: <7005.1464657274@sss.pgh.pa.us>
Discussion: <20160602.174941.256342236.horiguchi.kyotaro@lab.ntt.co.jp>
2016-06-02 13:28:17 -04:00
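
The post-COPY fix in the last part of the message above amounts to an
extra drain of libpq results; a minimal sketch using the libpq calls
the message names:

    #include <libpq-fe.h>

    /* After a COPY step, consume any remaining results so libpq returns
     * to idle state; otherwise PQtransactionStatus() keeps reporting
     * PQTRANS_ACTIVE and a parallel worker's exit path may send a
     * spurious cancel request. */
    static void
    drain_copy_results(PGconn *conn)
    {
        PGresult   *res;

        while ((res = PQgetResult(conn)) != NULL)
            PQclear(res);
    }
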
Tom Lane 5b5fea2a11 Access pg_dump's options structs through Archive struct, not directly.
Rather than passing around DumpOptions and RestoreOptions as separate
arguments, add fields to struct Archive to carry pointers to these objects,
and access them through those fields when needed.  There already was a
RestoreOptions pointer in Archive, though for no obvious reason it was part
of the "private" struct rather than out where pg_dump.c could see it.

Doing this allows reversion of quite a lot of parameter-addition changes
made in commit 0eea8047bf, which is a good thing IMO because this will
reduce the code delta between 9.4 and 9.5, probably easing a few future
back-patch efforts.  Moreover, the previous commit only added a DumpOptions
argument to functions that had to have it at the time, which means we could
anticipate still more code churn (and more back-patch hazard) as the
requirement spread further.  I'd hit exactly that problem in my upcoming
patch to fix extension membership marking, which is what motivated me to
do this.
2016-01-13 17:48:33 -05:00
Andres Freund bbfd7edae5 Add macros wrapping all usage of gcc's __attribute__.
Until now __attribute__() was defined to be empty for all compilers but
gcc. That's problematic because it prevents using it in other compilers,
which is necessary e.g. for atomics portability.  It's also just
generally dubious to do so in a header as widely included as c.h.

Instead add pg_attribute_format_arg, pg_attribute_printf,
pg_attribute_noreturn macros which are implemented in the compilers that
understand them. Also add pg_attribute_aligned and pg_attribute_packed,
but don't provide fallbacks, since they can affect functionality.

This means that external code that, possibly unwittingly, relied on
__attribute__ defined to be empty on !gcc compilers may now run into
warnings or errors on those compilers. But there shouldn't be many
occurrences of that, and it's hard to work around...

Discussion: 54B58BA3.8040302@ohmu.fi
Author: Oskari Saarenmaa, with some minor changes by me.
2015-03-11 14:30:01 +01:00
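
A simplified sketch of the macro scheme described above (the real
definitions in c.h cover more compilers and more attributes):

    #if defined(__GNUC__)
    #define pg_attribute_noreturn() __attribute__((noreturn))
    #define pg_attribute_printf(f, a) __attribute__((format(printf, f, a)))
    #else
    /* safe to fall back to nothing: these only aid diagnostics */
    #define pg_attribute_noreturn()
    #define pg_attribute_printf(f, a)
    #endif
    /* attributes that affect functionality (packing, alignment) get no
     * fallback, so unsupported compilers fail loudly instead of silently
     * miscompiling */

    extern void exit_horribly(const char *fmt, ...)
        pg_attribute_printf(1, 2) pg_attribute_noreturn();
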
Alvaro Herrera 0eea8047bf pg_dump: Reduce use of global variables
Most pg_dump.c global variables, which were passed down individually to
dumping routines, are now grouped as members of the new DumpOptions
struct, which is used as a local variable and passed down into routines
that need it.  This helps future development efforts; in particular it
is said to enable a mode in which a parallel pg_dump run can output
multiple streams, and have them restored in parallel.

Also take the opportunity to clean up the pg_dump header files somewhat,
to avoid circularity.

Author: Joachim Wieland, revised by Álvaro Herrera
Reviewed by Peter Eisentraut
2014-10-14 15:00:55 -03:00
Bruce Momjian 0a78320057 pgindent run for 9.4
This includes removing tabs after periods in C comments, which was
applied to back branches, so this change should not affect backpatching.
2014-05-06 12:12:18 -04:00
Bruce Momjian 14ea89366f Properly detect read and write errors in pg_dump/dumpall, and pg_restore
Previously some I/O errors were ignored.
2014-05-05 20:27:16 -04:00
Bruce Momjian 9af4159fce pgindent run for release 9.3
This is the first run of the Perl-based pgindent script.  Also update
pgindent instructions.
2013-05-29 16:58:43 -04:00
Tom Lane 846681fdd5 Fix some unportable constructs in parallel pg_dump code.
Didn't compile on semi-obsolete gcc, and probably not on not-gcc-at-all
either.
2013-03-24 15:35:37 -04:00
Andrew Dunstan 9e257a181c Add parallel pg_dump option.
New infrastructure is added which creates a set number of workers
(threads on Windows, forked processes on Unix). Jobs are then
handed out to these workers by the master process as needed.
pg_restore is adjusted to use this new infrastructure in place of the
old setup which created a new worker for each step on the fly. Parallel
dumps acquire a snapshot clone in order to stay consistent, if
available.

The parallel option is selected by the -j / --jobs command line
parameter of pg_dump.

Joachim Wieland, lightly editorialized by Andrew Dunstan.
2013-03-24 11:27:20 -04:00
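
Typical usage of the new option (hypothetical database names; parallel
dumps require the directory output format):

    pg_dump -j 4 -Fd -f dumpdir mydb
    pg_restore -j 4 -d newdb dumpdir
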
Bruce Momjian 927d61eeff Run pgindent on 9.2 source tree in preparation for first 9.3
commit-fest.
2012-06-10 15:20:04 -04:00
Tom Lane 4317e0246c Rewrite --section option to decouple it from --schema-only/--data-only.
The initial implementation of pg_dump's --section option supposed that the
existing --schema-only and --data-only options could be made equivalent to
--section settings.  This is wrong, though, due to dubious but long since
set-in-stone decisions about where to dump SEQUENCE SET items, as seen in
bug report from Martin Pitt.  (And I'm not totally convinced there weren't
other bugs, either.)  Undo that coupling and instead drive --section
filtering off current-section state tracked as we scan through the TOC
list to call _tocEntryRequired().

To make sure those decisions don't shift around and hopefully save a few
cycles, run _tocEntryRequired() only once per TOC entry and save the result
in a new TOC field.  This required minor rejiggering of ACL handling but
also allows a far cleaner implementation of inhibit_data_for_failed_table.

Also, to ensure that pg_dump and pg_restore have the same behavior with
respect to the --section switches, add _tocEntryRequired() filtering to
WriteToc() and WriteDataChunks(), rather than trying to implement section
filtering in an entirely orthogonal way in dumpDumpableObject().  This
required adjusting the handling of the special ENCODING and STDSTRINGS
items, but they were pretty weird before anyway.

Minor other code review for the patch, too.
2012-05-29 23:22:14 -04:00
Tom Lane c89bdf7690 Eliminate some more O(N^2) behaviors in pg_dump/pg_restore.
This patch fixes three places (which AFAICT is all of them) where runtime
was O(N^2) in the number of TOC entries, by using an index array to replace
linear searches of the TOC list.  This performance issue is a bit less bad
than those recently fixed, because it depends on the number of items dumped
not the number in the source database, so the problem can be dodged by
doing partial dumps.

The previous coding already had an instance of one of the two index arrays
needed, but it was only calculated in parallel-restore cases; now we need
it all the time.  I also chose to move the arrays into the ArchiveHandle
data structure, to make this code a bit more ready for the day that we
try to sling multiple ArchiveHandles around in pg_dump or pg_restore.

Since we still need some server-side work before pg_dump can really cope
nicely with tens of thousands of tables, there's probably little point in
back-patching.
2012-05-28 20:38:28 -04:00
Alvaro Herrera 9d23a70d51 pg_dump: get rid of die_horribly
The old code was using either exit_horribly or die_horribly, depending on
whether it had an ArchiveHandle on which to close the connection or not;
but there were places that were passing a NULL ArchiveHandle to
die_horribly, and other places that used exit_horribly while having an
AH available.  So there wasn't all that much consistency.

Improve the situation by keeping only one of the routines, and instead
of having to pass the AH down from the caller, arrange for it to be
present for an on_exit_nicely callback to operate on.

Author: Joachim Wieland
Some tweaks by me

Per a suggestion from Robert Haas, in the ongoing "parallel pg_dump"
saga.
2012-03-20 18:58:00 -03:00
Peter Eisentraut 19f45565f5 pg_dump: Remove undocumented "files" output format
This was for demonstration only, and since it is now creating compiler
warnings from zlib without an obvious fix (see also
d923125b77), let's just remove it.  The
"directory" format is presumably similar enough anyway.
2012-03-20 20:39:59 +02:00
Robert Haas 1631598ea2 pg_dump: Further reduce reliance on global variables.
This is another round of refactoring to make things simpler for parallel
pg_dump.  pg_dump.c now issues SQL queries through the relevant Archive
object, rather than relying on the global variable g_conn.  This commit
isn't quite enough to get rid of g_conn entirely, but it makes a big
dent in its utilization and, along the way, manages to be slightly less
code than before.
2012-02-07 10:07:02 -05:00
Robert Haas 96abd81744 Remove dead declaration. 2012-02-06 12:09:20 -05:00
Peter Eisentraut 88a6ac9f93 pg_dump: Add GCC noreturn attribute to appropriate functions
This is a small help to the compiler and static analyzers.
2012-01-31 20:49:10 +02:00
Tom Lane f3316a05b5 Fix pg_restore's direct-to-database mode for INSERT-style table data.
In commit 6545a901aa, I removed the mini SQL
lexer that was in pg_backup_db.c, thinking that it had no real purpose
beyond separating COPY data from SQL commands, which purpose had been
obsoleted by long-ago fixes in pg_dump's archive file format.
Unfortunately this was in error: that code was also used to identify
command boundaries in INSERT-style table data, which is run together as a
single string in the archive file for better compressibility.  As a result,
direct-to-database restores from archive files made with --inserts or
--column-inserts fail in our latest releases, as reported by Dick Visser.

To fix, restore the mini SQL lexer, but simplify it by adjusting the
calling logic so that it's only required to cope with INSERT-style table
data, not arbitrary SQL commands.  This allows us to not have to deal with
SQL comments, E'' strings, or dollar-quoted strings, none of which have
ever been emitted by dumpTableData_insert.

Also, fix the lexer to cope with standard-conforming strings, which was the
actual bug that the previous patch was meant to solve.

Back-patch to all supported branches.  The previous patch went back to 8.2,
which unfortunately means that the EOL release of 8.2 contains this bug,
but I don't think we're doing another 8.2 release just because of that.
2012-01-06 13:04:09 -05:00
Andrew Dunstan a4cd6abcc9 Add --section option to pg_dump and pg_restore.
Valid values are pre-data, data and post-data. The option can be
given more than once. --schema-only is equivalent to
--section=pre-data --section=post-data. --data-only is equivalent
to --section=data.

Andrew Dunstan, reviewed by Joachim Wieland and Josh Berkus.
2011-12-16 19:09:38 -05:00
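
Examples of the equivalences spelled out above (hypothetical
database/file names):

    # --schema-only equivalent
    pg_dump --section=pre-data --section=post-data -f schema.sql mydb
    # --data-only equivalent
    pg_dump --section=data -f data.sql mydb
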
Tom Lane 0195e5c4ab Clean up after recent pg_dump patches.
Fix entirely broken handling of va_list printing routines, update some
out-of-date comments, fix some bogus inclusion orders, fix NLS declarations,
fix missed realloc calls.
2011-11-29 20:41:54 -05:00
Bruce Momjian 8b08deb0d1 Simplify the pg_dump/pg_restore error reporting macros, and allow
pg_dumpall to use the same memory allocation functions as the others.
2011-11-29 16:34:45 -05:00
Tom Lane 6545a901aa Fix pg_restore's direct-to-database mode for standard_conforming_strings.
pg_backup_db.c contained a mini SQL lexer with which it tried to identify
boundaries between SQL commands, but that code was not designed to cope
with standard_conforming_strings, and would get the wrong answer if a
backslash immediately precedes a closing single quote in such a string,
as per report from Julian Mehnle.  The bug only affects direct-to-database
restores from archive files made with standard_conforming_strings = on.

Rather than complicating the code some more to try to fix that, let's just
rip it all out.  The only reason it was needed was to cope with COPY data
embedded into ordinary archive entries, which was a layout that was used
only for about the first three weeks of the archive format's existence,
and never in any production release of pg_dump.  Instead, just rely on the
archive file layout to tell us whether we're printing COPY data or not.

This bug represents a data corruption hazard in all releases in which
standard_conforming_strings can be turned on, ie 8.2 and later, so
back-patch to all supported branches.
2011-07-28 14:06:57 -04:00
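
The failure case described above, sketched: with standard-conforming
strings the backslash is an ordinary character, so the quote after it
ends the literal, but the old mini-lexer assumed the backslash escaped
the quote and misread the command boundary.

    SET standard_conforming_strings = on;
    -- the string value is a\ ; the quote following the backslash
    -- closes the literal
    INSERT INTO t VALUES ('a\');
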
Andrew Dunstan c02d5b7c27 Use a macro variable PG_PRINTF_ATTRIBUTE for the style used for checking printf type functions.
The style is set to "printf" for backwards compatibility everywhere except
on Windows, where it is set to "gnu_printf", which eliminates hundreds of
false error messages from modern versions of gcc arising from %m and %ll{d,u}
formats.
2011-04-28 10:56:14 -04:00
Bruce Momjian bf50caf105 pgindent run before PG 9.1 beta 1. 2011-04-10 11:42:00 -04:00
Heikki Linnakangas 7f508f1c6b Add 'directory' format to pg_dump. The new directory format is compatible
with the 'tar' format, in that untarring a tar format archive produces a
valid directory format archive.

Joachim Wieland and Heikki Linnakangas
2011-01-23 23:10:15 +02:00
Tom Lane e2627258c3 Suppress possibly-uninitialized-variable warnings from gcc 4.5.
It appears that gcc 4.5 can issue such warnings for whole structs, not
just scalar variables as in the past.  Refactor some pg_dump code slightly
so that the OutputContext local variables are always initialized, even
if they won't be used.  It's cheap enough to not be worth worrying about.
2011-01-22 17:56:42 -05:00
Tom Lane 663fc32e26 Eliminate O(N^2) behavior in parallel restore with many blobs.
With hundreds of thousands of TOC entries, the repeated searches in
reduce_dependencies() become the dominant cost.  Get rid of that searching
by constructing reverse-dependency lists, which we can do in O(N) time
during the fix_dependencies() preprocessing.  I chose to store the reverse
dependencies as DumpId arrays for consistency with the forward-dependency
representation, and keep the previously-transient tocsByDumpId[] array
around to locate actual TOC entry structs quickly from dump IDs.

While this fixes the slow case reported by Vlad Arkhipov, there is still
a potential for O(N^2) behavior with sufficiently many tables:
fix_dependencies itself, as well as mark_create_done and
inhibit_data_for_failed_table, are doing repeated searches to deal with
table-to-table-data dependencies.  Possibly this work could be extended
to deal with that, although the latter two functions are also used in
non-parallel restore where we currently don't run fix_dependencies.

Another TODO is that we fail to parallelize restore of multiple blobs
at all.  This appears to require changes in the archive format to fix.

Back-patch to 9.0 where the problem was reported.  8.4 has potential issues
as well; but since it doesn't create a separate TOC entry for each blob,
it's at much less risk of having enough TOC entries to cause real problems.
2010-12-09 13:03:11 -05:00
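
A sketch of the O(N) preprocessing described above, with illustrative
names rather than the actual pg_dump structures; it assumes byId[] has
a valid entry for every DumpId from 1 to maxId:

    #include <stdlib.h>

    typedef int DumpId;

    typedef struct TocEntry
    {
        DumpId  dumpId;
        DumpId *dependencies;   /* forward deps: entries this one waits on */
        int     nDeps;
        DumpId *revDeps;        /* reverse deps, built below */
        int     nRevDeps;
    } TocEntry;

    /* Invert the forward-dependency arrays into per-entry reverse lists
     * in time proportional to the number of edges, replacing the
     * repeated searches in reduce_dependencies(): */
    static void
    build_reverse_deps(TocEntry **byId, DumpId maxId)
    {
        /* first pass: count reverse edges per target */
        for (DumpId id = 1; id <= maxId; id++)
            for (int i = 0; i < byId[id]->nDeps; i++)
                byId[byId[id]->dependencies[i]]->nRevDeps++;

        for (DumpId id = 1; id <= maxId; id++)
        {
            byId[id]->revDeps = malloc(byId[id]->nRevDeps * sizeof(DumpId));
            byId[id]->nRevDeps = 0;     /* reused as an insertion cursor */
        }

        /* second pass: record each forward edge in its target's list */
        for (DumpId id = 1; id <= maxId; id++)
            for (int i = 0; i < byId[id]->nDeps; i++)
            {
                TocEntry *dep = byId[byId[id]->dependencies[i]];

                dep->revDeps[dep->nRevDeps++] = id;
            }
    }
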
Heikki Linnakangas bf9aa490db Refactor the pg_dump zlib code from pg_backup_custom.c to a separate file,
to make it easier to reuse that code. There are no user-visible changes.

This is in preparation for the patch to add a new archive format, a directory,
to perform a custom-like dump but with each table being dumped to a separate
file (that in turn is a prerequisite for parallel pg_dump). This also makes it
easier to add new compression methods in the future, and makes the
pg_backup_custom.c code easier to read, when the compression-related code is
factored out.

Joachim Wieland, with heavy editorialization by me.
2010-12-02 21:39:03 +02:00
Magnus Hagander 9f2e211386 Remove cvs keywords from all files. 2010-09-20 22:08:53 +02:00
Bruce Momjian 65e806cba1 pgindent run for 9.0 2010-02-26 02:01:40 +00:00
Tom Lane c0d5be5d6a Fix up pg_dump's treatment of large object ownership and ACLs. We now emit
a separate archive entry for each BLOB, and use pg_dump's standard methods
for dealing with its ownership, ACL if any, and comment if any.  This means
that switches like --no-owner and --no-privileges do what they're supposed
to.  Preliminary testing says that performance is still reasonable even
with many blobs, though we'll have to see how that shakes out in the field.

KaiGai Kohei, revised by me
2010-02-18 01:29:10 +00:00
Itagaki Takahiro 84f910a707 Additional fixes for large object access control.
Use pg_largeobject_metadata.oid instead of pg_largeobject.loid
to enumerate existing large objects in pg_dump, pg_restore, and
contrib modules.
2009-12-14 00:39:11 +00:00