Commit Graph

1854 Commits

Author SHA1 Message Date
Peter Eisentraut 3e6639a465 pg_dump: Remove obsolete handling of sequence names
There was code that attempted to check whether the sequence name stored
inside the sequence was the same as the name in pg_class.  But that code
was already ifdef'ed out, and now that the sequence no longer stores its
own name, it's altogether obsolete, so remove it.
2016-12-23 10:55:06 -05:00
Stephen Frost 2259bf672c Fix dumping of casts and transforms using built-in functions
In pg_dump.c dumpCast() and dumpTransform(), we would happily ignore the
cast or transform if it happened to use a built-in function because we
weren't including the information about built-in functions when querying
pg_proc from getFuncs().

Modify the query in getFuncs() to also gather information about
functions which are used by user-defined casts and transforms (where
"user-defined" means "has an OID >= FirstNormalObjectId").  This also
adds to the TAP regression tests for 9.6 and master to cover these
types of objects.

Back-patch all the way for casts, back to 9.5 for transforms.

Discussion: https://www.postgresql.org/message-id/flat/20160504183952.GE10850%40tamriel.snowman.net
2016-12-21 13:47:06 -05:00
Stephen Frost 19990918d3 For 8.0 servers, get last built-in oid from pg_database
We didn't start ensuring that all built-in objects had OIDs less than
16384 until 8.1, so for 8.0 servers we still need to query the value out
of pg_database.  We need this, in particular, to distinguish which casts
were built-in and which were user-defined.

For HEAD, we only worry about going back to 8.0; for the back-branches,
we also ensure that 7.0-7.4 work.

Discussion: https://www.postgresql.org/message-id/flat/20160504183952.GE10850%40tamriel.snowman.net
2016-12-21 13:47:06 -05:00
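On such old servers the value comes from a per-database column; a sketch of
the kind of query involved (not necessarily the exact one pg_dump issues):

    SELECT datlastsysoid
      FROM pg_database
     WHERE datname = current_database();
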
Peter Eisentraut 1753b1b027 Add pg_sequence system catalog
Move sequence metadata (start, increment, etc.) into a proper system
catalog instead of storing it in the sequence heap object.  This
separates the metadata from the sequence data.  Sequence metadata is now
operated on transactionally by DDL commands, whereas previously
rollbacks of sequence-related DDL commands would be ignored.

Reviewed-by: Andreas Karlsson <andreas@proxel.se>
2016-12-20 08:28:18 -05:00
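The new catalog can be queried directly; a minimal sketch, assuming the
documented pg_sequence column names (the sequence name is hypothetical):

    CREATE SEQUENCE myseq INCREMENT BY 2 START WITH 10;

    SELECT seqstart, seqincrement, seqmax, seqmin, seqcycle
      FROM pg_sequence
     WHERE seqrelid = 'myseq'::regclass;
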
Heikki Linnakangas 2560d244b4 Fix quoting and a compiler warning in dumping partitions.
The partition name needs to be quoted in the ATTACH PARTITION command
constructed in binary-upgrade mode.

Also silence a compiler warning about a set-but-unused variable when
building without --enable-cassert.
2016-12-08 14:10:10 +02:00
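In other words, a partition whose name requires quoting has to come out
roughly like this (table and partition names hypothetical, bounds invented):

    ALTER TABLE measurement ATTACH PARTITION "Jan 2017 Data"
        FOR VALUES FROM ('2017-01-01') TO ('2017-02-01');
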
Robert Haas f0e44751d7 Implement table partitioning.
Table partitioning is like table inheritance and reuses much of the
existing infrastructure, but there are some important differences.
The parent is called a partitioned table and is always empty; it may
not have indexes or non-inherited constraints, since those make no
sense for a relation with no data of its own.  The children are called
partitions and contain all of the actual data.  Each partition has an
implicit partitioning constraint.  Multiple inheritance is not
allowed, and partitioning and inheritance can't be mixed.  Partitions
can't have extra columns and may not allow nulls unless the parent
does.  Tuples inserted into the parent are automatically routed to the
correct partition, so tuple-routing ON INSERT triggers are not needed.
Tuple routing isn't yet supported for partitions which are foreign
tables, and it doesn't handle updates that cross partition boundaries.

Currently, tables can be range-partitioned or list-partitioned.  List
partitioning is limited to a single column, but range partitioning can
involve multiple columns.  A partitioning "column" can be an
expression.

Because table partitioning is less general than table inheritance, it
is hoped that it will be easier to reason about properties of
partitions, and therefore that this will serve as a better foundation
for a variety of possible optimizations, including query planner
optimizations.  The tuple routing which this patch does, based on
the implicit partitioning constraints, is an example of this, but it
seems likely that many other useful optimizations are also possible.

Amit Langote, reviewed and tested by Robert Haas, Ashutosh Bapat,
Amit Kapila, Rajkumar Raghuwanshi, Corey Huinker, Jaime Casanova,
Rushabh Lathia, Erik Rijkers, among others.  Minor revisions by me.
2016-12-07 13:17:55 -05:00
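A minimal sketch of the declarative syntax this introduces (table and
column names hypothetical):

    CREATE TABLE measurement (
        logdate  date NOT NULL,
        peaktemp int
    ) PARTITION BY RANGE (logdate);

    -- Partitions hold the actual data; the parent stays empty.
    CREATE TABLE measurement_y2016 PARTITION OF measurement
        FOR VALUES FROM ('2016-01-01') TO ('2017-01-01');

    -- Rows inserted into the parent are routed to the matching partition.
    INSERT INTO measurement VALUES ('2016-12-07', 3);
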
Stephen Frost 093129c9d9 Add support for restrictive RLS policies
We have had support for restrictive RLS policies since 9.5, but they
were only available through extensions which use the appropriate hooks.
This adds support into the grammar, catalog, psql and pg_dump for
restrictive RLS policies, thus reducing the cases where an extension is
necessary.

In passing, also move away from using "AND"d and "OR"d in comments.
As pointed out by Alvaro, it's not really appropriate to attempt
to make verbs out of "AND" and "OR", so reword those comments which
attempted to.

Reviewed By: Jeevan Chalke, Dean Rasheed
Discussion: https://postgr.es/m/20160901063404.GY4028@tamriel.snowman.net
2016-12-05 15:50:55 -05:00
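A minimal sketch of the restrictive-policy syntax added here (table,
policy, and column names hypothetical):

    CREATE TABLE accounts (owner text, balance numeric);
    ALTER TABLE accounts ENABLE ROW LEVEL SECURITY;

    -- Permissive policies are ORed together ...
    CREATE POLICY accounts_owner ON accounts
        USING (owner = current_user);

    -- ... while a restrictive policy is ANDed on top of them.
    CREATE POLICY accounts_not_negative ON accounts
        AS RESTRICTIVE
        USING (balance >= 0);
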
Stephen Frost 4fafa579b0 Add --no-blobs option to pg_dump
Add an option to exclude blobs when running pg_dump.  By default, blobs
are included but this option can be used to exclude them while keeping
the rest of the dump.

Comment updates and regression tests from me.

Author: Guillaume Lelarge
Reviewed-by: Amul Sul
Discussion: https://postgr.es/m/VisenaEmail.48.49926ea6f91dceb6.15355a48249@tc7-visena
2016-11-29 11:09:35 -05:00
Stephen Frost 8f91f323b4 Clean up pg_dump tests, re-enable BLOB testing
Add a loop to check that each test covers all of the pg_dump runs.  We
(I) had been a bit sloppy when adding new runs and not making sure to
mark if they should be under like or unlike for each test; this loop
makes sure that the test system will complain if any are forgotten in
the future.

The loop also correctly handles the 'catch all' cases, which are used to
avoid running unnecessary specific checks when a single catch-all can be
done (eg: a no-acl run should not have any GRANT commands).

Also, re-enable the testing of blobs, but use lo_from_bytea() instead of
trying to be cute and writing out to a file and then reading it back in
with psql, which proved to be difficult for some buildfarm members.
This allows us to add support for testing the --no-blobs option which
will be getting added shortly, provided the buildfarm doesn't blow up on
this.
2016-11-18 14:21:33 -05:00
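For reference, lo_from_bytea() creates a large object entirely server-side,
along these lines (a hypothetical snippet, not taken from the test suite):

    -- Passing 0 lets the server pick the large object's OID.
    SELECT lo_from_bytea(0, 'test blob contents'::bytea);
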
Tom Lane d8c05aff56 Fix pg_dump's handling of circular dependencies in views.
pg_dump's traditional solution for breaking a circular dependency involving
a view was to create the view with CREATE TABLE and then later issue CREATE
RULE "_RETURN" ... to convert the table to a view, relying on the backend's
very very ancient code that supports making views that way.  We've wanted
to get rid of that kluge for a long time, but the thing that finally
motivates doing something about it is the recognition that this method
fails with the --clean option, because it leads to issuing DROP RULE
"_RETURN" followed by DROP TABLE --- and the backend won't let you drop a
view's _RETURN rule.

Instead, let's break circular dependencies by initially creating the view
using CREATE VIEW AS SELECT NULL::columntype AS columnname, ... (so that
it has the right column names and types to support external references,
but no dependencies beyond the column data types), and then later dumping
the ON SELECT rule using the spelling CREATE OR REPLACE VIEW.  This method
wasn't available when this code was originally written, but it's been
possible since PG 7.3, so it seems fine to start relying on it now.

To solve the --clean problem, make the dropStmt for an ON SELECT rule
be CREATE OR REPLACE VIEW with the same dummy target list as above.
In this way, during the DROP phase, we first reduce the view to have
no extra dependencies, and then we can drop it entirely when we've
gotten rid of whatever had a circular dependency on it.

(Note: this should work adequately well with the --if-exists option, since
the CREATE OR REPLACE VIEW will go through whether the view exists or not.
It could fail if the view exists with a conflicting column set, but we
don't really support --clean against a non-matching database anyway.)

This allows cleaning up some other kluges inside pg_dump, notably that
we don't need a notion of reloptions attached to a rule anymore.

Although this is a bug fix, commit to HEAD only for now.  The problem's
existed for a long time and we've had relatively few complaints, so it
doesn't really seem worth taking risks to fix it in the back branches.
We might revisit that choice if no problems emerge.

Discussion: <19092.1479325184@sss.pgh.pa.us>
2016-11-17 15:25:59 -05:00
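A sketch of the two-step approach described above, for a hypothetical view v
whose real definition depends on a table t (assumed to have matching integer
and text columns):

    -- Step 1: create a dependency-free placeholder with the right column
    -- names and types (its only dependencies are the column data types).
    CREATE VIEW v AS
        SELECT NULL::integer AS id, NULL::text AS name;

    -- Step 2: once the referenced objects exist, install the real
    -- definition by replacing the view's ON SELECT rule.
    CREATE OR REPLACE VIEW v AS
        SELECT id, name FROM t;
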
Tom Lane ac888986fc Improve pg_dump/pg_restore --create --if-exists logic.
Teach it not to complain if the dropStmt attached to an archive entry
is actually spelled CREATE OR REPLACE VIEW, since that will happen due to
an upcoming bug fix.  Also, if it doesn't recognize a dropStmt, have it
print a WARNING and then emit the dropStmt unmodified.  That seems like a
much saner behavior than Assert'ing or dumping core due to a null-pointer
dereference, which is what would happen before :-(.

Back-patch to 9.4 where this option was introduced.

Discussion: <19092.1479325184@sss.pgh.pa.us>
2016-11-17 14:59:13 -05:00
Tom Lane fcf70e0dbc Re-pgindent src/bin/pg_dump/*
Cleanup for recent patches --- it's not much change, but I got annoyed
while re-indenting the view-rule fix I'm working on.
2016-11-17 14:36:59 -05:00
Peter Eisentraut a7e5457db8 pg_upgrade: Upgrade sequence data via pg_dump
Previously, pg_upgrade migrated sequence data like tables by copying the
on-disk file.  This does not allow any changes in the on-disk format for
sequences.  It's simpler to just have pg_dump set the new sequence
values as it normally does.  To do that, create a hidden submode in
pg_dump that dumps sequence data even when a schema-only dump is
requested, and trigger that submode in binary upgrade mode.  (This new
submode could easily be exposed as a command-line option, but it has
limited use outside of pg_dump and would probably cause some confusion,
so we don't do that at this time.)

Reviewed-by: Anastasia Lubennikova <a.lubennikova@postgrespro.ru>
Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
2016-11-13 21:44:58 -05:00
Peter Eisentraut 27d2c12328 pg_dump: Separate table and sequence data object types
Instead of handling both sequence data and table data internally as
"table data", handle sequences separately under a "sequence set" type.
We already handled materialized view data differently, so it makes the
code somewhat cleaner to handle each relation kind separately at the top
level.

This does not change the output format, since there already was a
separate "SEQUENCE SET" archive entry type.  A noticeable difference is
that SEQUENCE SET entries now always appear after TABLE DATA entries.
And in parallel mode there is less sorting to do, because the sequence
data entries are no longer considered table data.

Reviewed-by: Anastasia Lubennikova <a.lubennikova@postgrespro.ru>
Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
2016-11-13 21:44:58 -05:00
Peter Eisentraut 8c035e55c4 pg_dump: Simplify internal archive version handling
The ArchiveHandle structure contained the archive format version number
twice, once as a single field and once split into components.  Simplify
that by just keeping the single field and adding some macros to extract
the components.  Introduce some macros for composing version numbers, to
eliminate the repeated use of magic formulas.  Drop the unused trailing
zero byte from the run-time composite version representation.

reviewed by Tom Lane
2016-10-25 17:02:22 -04:00
Tom Lane c08521eb55 Remove dead code in pg_dump.
I'm not sure if this provision for "pg_backup" behaving a bit differently
from "pg_dump" ever did anything useful in a released version.  But it's
definitely dead code now.

Michael Paquier
2016-10-13 16:08:16 -04:00
Tom Lane 0a4bf6b192 Fix pg_dumpall regression test to be locale-independent.
The expected results in commit b4fc64578 seem to have been generated
in a non-C locale, which just points up the fact that the ORDER BY
clause was locale-sensitive.

Per buildfarm.
2016-10-13 10:46:22 -04:00
Andres Freund b4fc645787 Make pg_dumpall's database ACL query independent of hash table order.
Previously GRANT order on databases was not well defined, due to the use
of EXCEPT without an ORDER BY.  Add an ORDER BY, adapt test output.

I don't, at the moment, see reason to backpatch this.
2016-10-12 18:29:57 -07:00
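The underlying SQL point: an ORDER BY attached to a set operation sorts the
operation's overall result, which is what makes the output deterministic.
A generic sketch (not the actual pg_dumpall query):

    SELECT datname FROM pg_database
    EXCEPT
    SELECT datname FROM pg_database WHERE datistemplate
    ORDER BY 1;  -- sorts the result of the EXCEPT, not just the second branch
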
Tom Lane c0a3b211bc pg_dump's getTypes() needn't retrieve typinput or typoutput anymore.
Commit 64f3524e2 removed the stanza of code that examined these values.
I failed to notice they were unnecessary because my compiler didn't
warn about the un-read variables.  Noted by Peter Eisentraut.
2016-10-12 15:11:31 -04:00
Tom Lane 64f3524e2c Remove pg_dump/pg_dumpall support for dumping from pre-8.0 servers.
The need for dumping from such ancient servers has decreased to about nil
in the field, so let's remove all the code that catered to it.  Aside
from removing a lot of boilerplate variant queries, this allows us to not
have to cope with servers that don't have (a) schemas or (b) pg_depend.
That means we can get rid of assorted squishy code around that.  There
may be some nonobvious additional simplifications possible, but this patch
already removes about 1500 lines of code.

I did not remove the ability for pg_restore to read custom-format archives
generated by these old versions (and light testing says that that does
still work).  If you have an old server, you probably also have a pg_dump
that will work with it; but if you have only an old custom-format backup
file, that might be all you have.

It'd be possible at this point to remove fmtQualifiedId()'s version
argument, but I refrained since that would affect code outside pg_dump.

Discussion: <2661.1475849167@sss.pgh.pa.us>
2016-10-12 12:20:02 -04:00
Tom Lane 4806f26f9e Fix pg_dump to work against pre-9.0 servers again.
getBlobs' queries for pre-9.0 servers were broken in two ways:
the 7.x/8.x query uses DISTINCT so it can't have unspecified-type
NULLs in the target list, and both that query and the 7.0 one
failed to provide the correct output column labels, so that the
subsequent code to extract data from the PGresult would fail.

Back-patch to 9.6 where the breakage was introduced (by commit 23f34fa4b).

Amit Langote and Tom Lane

Discussion: <0a3e7a0e-37bd-8427-29bd-958135862f0a@lab.ntt.co.jp>
2016-10-07 09:51:18 -04:00
Tom Lane e8bdee2770 Add ALTER EXTENSION ADD/DROP ACCESS METHOD, and use it in pg_upgrade.
Without this, an extension containing an access method is not properly
dumped/restored during pg_upgrade --- the AM ends up not being a member
of the extension after upgrading.

Another oversight in commit 473b93287, reported by Andrew Dunstan.

Report: <f7ac29f3-515c-2a44-21c5-ec925053265f@dunslane.net>
2016-10-02 14:31:28 -04:00
Tom Lane 0109ab2760 Make struct ParallelSlot private within pg_dump/parallel.c.
The only field of this struct that other files have any need to touch
is the pointer to the TocEntry a worker is working on.  (Well,
pg_backup_archiver.c is actually looking at workerStatus too, but that
can be finessed by specifying that the TocEntry pointer is NULL for a
non-busy worker.)

Hence, move out the TocEntry pointers to a separate array within
struct ParallelState, and then we can make struct ParallelSlot private.

I noted the possibility of this previously, but hadn't got round to
actually doing it.

Discussion: <1188.1464544443@sss.pgh.pa.us>
2016-09-27 14:29:12 -04:00
Tom Lane fb03d08a89 Rationalize parallel dump/restore's handling of worker cmd/status messages.
The existing APIs for creating and parsing command and status messages are
rather messy; for example, archive-format modules have to provide code
for constructing command messages, which is entirely pointless since
the code to read them is hard-wired in WaitForCommands() and hence
no format-specific variation is actually possible.  But there's little
foreseeable reason to need format-specific variation anyway.

The situation for status messages is no better; at least those are both
constructed and parsed by format-specific code, but said code is quite
redundant since there's no actual need for format-specific variation.

To add insult to injury, the first API involves returning pointers to
static buffers, which is bad, while the second involves returning pointers
to malloc'd strings, which is safer but randomly inconsistent.

Hence, get rid of the MasterStartParallelItem and MasterEndParallelItem
APIs, and instead write centralized functions that construct and parse
command and status messages.  If we ever do need more flexibility, these
functions can be the standard implementations of format-specific
callback methods, but that's a long way off if it ever happens.

Tom Lane, reviewed by Kevin Grittner

Discussion: <17340.1464465717@sss.pgh.pa.us>
2016-09-27 13:56:04 -04:00
Tom Lane b7b8cc0cfc Redesign parallel dump/restore's wait-for-workers logic.
The ListenToWorkers/ReapWorkerStatus APIs were messy and hard to use.
Instead, make DispatchJobForTocEntry register a callback function that
will take care of state cleanup, doing whatever had been done by the caller
of ReapWorkerStatus in the old design.  (This callback is essentially just
the old mark_work_done function in the restore case, and a trivial test for
worker failure in the dump case.)  Then we can have ListenToWorkers call
the callback immediately on receipt of a status message, and return the
worker to WRKR_IDLE state; so the WRKR_FINISHED state goes away.

This allows us to design a unified wait-for-worker-messages loop:
WaitForWorkers replaces EnsureIdleWorker and EnsureWorkersFinished as well
as the mess in restore_toc_entries_parallel.  Also, we no longer need the
fragile API spec that the caller of DispatchJobForTocEntry is responsible
for ensuring there's an idle worker, since DispatchJobForTocEntry can just
wait until there is one.

In passing, I got rid of the ParallelArgs struct, which was a net negative
in terms of notational verboseness, and didn't seem to be providing any
noticeable amount of abstraction either.

Tom Lane, reviewed by Kevin Grittner

Discussion: <1188.1464544443@sss.pgh.pa.us>
2016-09-27 13:22:39 -04:00
Alvaro Herrera 51c3e9fade Include <sys/select.h> where needed
<sys/select.h> is required by POSIX.1-2001 to get the prototype of
select(2), but nearly no systems enforce that because older standards
let you get away with including some other headers.  Recent OpenBSD
hacking has removed that frail touch of friendliness, however, which
broke some compiles; fix all the way back to 9.1 by adding the required
standard header.  Only vacuumdb.c was reported to fail, but it seems easier
to fix the whole lot in one fell swoop.

Per bug #14334 by Sean Farrell.
2016-09-27 01:05:21 -03:00
Tom Lane 12f6eadffd Fix incorrect logic for excluding range constructor functions in pg_dump.
Faulty AND/OR nesting in the WHERE clause of getFuncs' SQL query led to
dumping range constructor functions if they are part of an extension
and we're in binary-upgrade mode.  Actually, we don't want to dump them
separately even then, since CREATE TYPE AS RANGE will create the range's
constructor functions regardless.  Per report from Andrew Dunstan.

It looks like this mistake was introduced by me, in commit b985d4877, in
perhaps-overzealous refactoring to reduce code duplication.  I'm suitably
embarrassed.

Report: <34854939-02d7-f591-5677-ce2994104599@dunslane.net>
2016-09-23 13:49:26 -04:00
Peter Eisentraut 8b845520fb Add tests for various connection string issues
Add tests for consistent support of connection strings in frontend
programs as well as proper handling of unusual characters in database
and user names.  These tests were developed for the issues of
CVE-2016-5424.

To allow testing of names with spaces, change the pg_regress
command-line options --create-role and --dbname to split their arguments
by comma only, not space or comma as before.  Only commas were actually
used in existing uses.

Noah Misch, Michael Paquier, Peter Eisentraut
2016-09-22 12:00:00 -04:00
Peter Eisentraut 46b55e7f85 pg_restore: Add -N option to exclude schemas
This is similar to the -N option in pg_dump, except that it doesn't take
a pattern, just like the existing -n option in pg_restore.

From: Michael Banck <michael.banck@credativ.de>
2016-09-20 12:00:00 -04:00
Tom Lane df5d9bb8d5 Allow pg_dump to dump non-extension members of an extension-owned schema.
Previously, if a schema was created by an extension, a normal pg_dump run
(not --binary-upgrade) would summarily skip every object in that schema.
In a case where an extension creates a schema and then users create other
objects within that schema, this does the wrong thing: we want pg_dump
to skip the schema but still create the non-extension-owned objects.

There's no easy way to fix this pre-9.6, because in earlier versions the
"dump" status for a schema is just a bool and there's no way to distinguish
"dump me" from "dump my members".  However, as of 9.6 we do have enough
state to represent that, so this is a simple correction of the logic in
selectDumpableNamespace.

In passing, make some cosmetic fixes in nearby code.

Martín Marqués, reviewed by Michael Paquier

Discussion: <99581032-71de-6466-c325-069861f1947d@2ndquadrant.com>
2016-09-08 13:12:01 -04:00
Tom Lane e97e9c57bd Don't print database's tablespace in pg_dump -C --no-tablespaces output.
If the database has a non-default tablespace, we emitted a TABLESPACE
clause in the CREATE DATABASE command emitted by -C, even if
--no-tablespaces was also specified.  This seems wrong, and it's
inconsistent with what pg_dumpall does, so change it.  Per bug #14315
from Danylo Hlynskyi.

Back-patch to 9.5.  The bug is much older, but it'd be a more invasive
change before 9.5 because dumpDatabase() hasn't got an easy way to get
to the outputNoTablespaces flag.  Doesn't seem worth the work given
the lack of previous complaints.

Report: <20160908081953.1402.75347@wrigleys.postgresql.org>
2016-09-08 10:48:03 -04:00
Tom Lane 9daec77e16 Simplify correct use of simple_prompt().
The previous API for this function had it returning a malloc'd string.
That meant that callers had to check for NULL return, which few of them
were doing, and it also meant that callers had to remember to free()
the string later, which required extra logic in most cases.

Instead, make simple_prompt() write into a buffer supplied by the caller.
Anywhere that the maximum required input length is reasonably small,
which is almost all of the callers, we can just use a local or static
array as the buffer instead of dealing with malloc/free.

A fair number of callers used "pointer == NULL" as a proxy for "haven't
requested the password yet".  Maintaining the same behavior requires
adding a separate boolean flag for that, which adds back some of the
complexity we save by removing free()s.  Nonetheless, this nets out
at a small reduction in overall code size, and considerably less code
than we would have had if we'd added the missing NULL-return checks
everywhere they were needed.

In passing, clean up the API comment for simple_prompt() and get rid
of a very-unnecessary malloc/free in its Windows code path.

This is nominally a bug fix, but it does not seem worth back-patching,
because the actual risk of an OOM failure in any of these places seems
pretty tiny, and all of them are client-side not server-side anyway.

This patch is by me, but it owes a great deal to Michael Paquier
who identified the problem and drafted a patch for fixing it the
other way.

Discussion: <CAB7nPqRu07Ot6iht9i9KRfYLpDaF2ZuUv5y_+72uP23ZAGysRg@mail.gmail.com>
2016-08-30 17:02:02 -04:00
Tom Lane b5bce6c1ec Final pgindent + perltidy run for 9.6. 2016-08-15 13:42:51 -04:00
Peter Eisentraut 34927b2920 Translation updates
Source-Git-URL: git://git.postgresql.org/git/pgtranslation/messages.git
Source-Git-Hash: cda21c1d7b160b303dc21dfe9d4169f2c8064c60
2016-08-08 11:08:00 -04:00
Noah Misch fcd15f1358 Obstruct shell, SQL, and conninfo injection via database and role names.
Due to simplistic quoting and confusion of database names with conninfo
strings, roles with the CREATEDB or CREATEROLE option could escalate to
superuser privileges when a superuser next ran certain maintenance
commands.  The new coding rule for PQconnectdbParams() calls, documented
at conninfo_array_parse(), is to pass expand_dbname=true and wrap
literal database names in a trivial connection string.  Escape
zero-length values in appendConnStrVal().  Back-patch to 9.1 (all
supported versions).

Nathan Bossart, Michael Paquier, and Noah Misch.  Reviewed by Peter
Eisentraut.  Reported by Nathan Bossart.

Security: CVE-2016-5424
2016-08-08 10:07:46 -04:00
Noah Misch 41f18f021a Promote pg_dumpall shell/connstr quoting functions to src/fe_utils.
Rename these newly-extern functions with terms more typical of their new
neighbors.  No functional changes; a subsequent commit will use them in
more places.  Back-patch to 9.1 (all supported versions).  Back branches
lack src/fe_utils, so instead rename the functions in place; the
subsequent commit will copy them into the other programs using them.

Security: CVE-2016-5424
2016-08-08 10:07:46 -04:00
Noah Misch bd65371851 Fix Windows shell argument quoting.
The incorrect quoting may have permitted arbitrary command execution.
At a minimum, it gave broader control over the command line to actors
supposed to have control over a single argument.  Back-patch to 9.1 (all
supported versions).

Security: CVE-2016-5424
2016-08-08 10:07:46 -04:00
Noah Misch 142c24c234 Reject, in pg_dumpall, names containing CR or LF.
These characters prematurely terminate Windows shell command processing,
causing the shell to execute a prefix of the intended command.  The
chief alternative to rejecting these characters was to bypass the
Windows shell with CreateProcess(), but the ability to use such names
has little value.  Back-patch to 9.1 (all supported versions).

This change formally revokes support for these characters in database
names and roles names.  Don't document this; the error message is
self-explanatory, and too few users would benefit.  A future major
release may forbid creation of databases and roles so named.  For now,
check only at known weak points in pg_dumpall.  Future commits will,
without notice, reject affected names from other frontend programs.

Also extend the restriction to pg_dumpall --dbname=CONNSTR arguments and
--file arguments.  Unlike the effects on role name arguments and
database names, this does not reflect a broad policy change.  A
migration to CreateProcess() could lift these two restrictions.

Reviewed by Peter Eisentraut.

Security: CVE-2016-5424
2016-08-08 10:07:46 -04:00
Tom Lane e2e95f5ef3 Fix pg_dump's handling of public schema with both -c and -C options.
Since -c plus -C requests dropping and recreating the target database
as a whole, not dropping individual objects in it, we should assume that
the public schema already exists and need not be created.  The previous
coding considered only the state of the -c option, so it would emit
"CREATE SCHEMA public" anyway, leading to an unexpected error in restore.

Back-patch to 9.2.  Older versions did not accept -c with -C so the
issue doesn't arise there.  (The logic being patched here dates to 8.0,
cf commit 2193121fa, so it's not really wrong that it didn't consider
the case at the time.)

Note that versions before 9.6 will still attempt to emit REVOKE/GRANT
on the public schema; but that happens without -c/-C too, and doesn't
seem to be the focus of this complaint.  I considered extending this
stanza to also skip the public schema's ACL, but that would be a
misfeature, as it'd break cases where users intentionally changed that
ACL.  The real fix for this aspect is Stephen Frost's work to not dump
built-in ACLs, and that's not going to get back-ported.

Per bugs #13804 and #14271.  Solution found by David Johnston and later
rediscovered by me.

Report: <20151207163520.2628.95990@wrigleys.postgresql.org>
Report: <20160801021955.1430.47434@wrigleys.postgresql.org>
2016-08-02 12:49:40 -04:00
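Roughly, with both -c and -C the emitted script already recreates the whole
database, so its built-in public schema can be assumed; a simplified sketch
of the script shape (database name hypothetical, options omitted):

    DROP DATABASE mydb;
    CREATE DATABASE mydb;
    \connect mydb
    -- no "CREATE SCHEMA public;" is emitted at this point anymore
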
Stephen Frost f9e439b1ca Correctly handle owned sequences with extensions
With the refactoring of pg_dump to handle components, getOwnedSeqs needs
to be a bit more intelligent regarding which components to dump when.
Specifically, we can't simply use the owning table's components as the
set of components to dump, since the table might include only certain
components while all components of the sequence should be dumped; for
example, when the table is a member of an extension while the sequence
is not.

Handle this by combining the set of components to be dumped for the
sequence explicitly and those to be dumped for the table when setting
the components to be dumped for the sequence.

Also add a number of regression tests around this to, hopefully, catch
any future changes which break the expected behavior.

Discovered by: Philippe BEAUDOIN
Reviewed by: Michael Paquier
2016-07-31 10:57:15 -04:00
Peter Eisentraut 7d67606569 Translation updates
Source-Git-URL: git://git.postgresql.org/git/pgtranslation/messages.git
Source-Git-Hash: 3d71988dffd3c0798a8864c55ca4b7833b48abb1
2016-07-18 12:07:49 -04:00
Tom Lane 18555b1323 Establish conventions about global object names used in regression tests.
To ensure that "make installcheck" can be used safely against an existing
installation, we need to be careful about what global object names
(database, role, and tablespace names) we use; otherwise we might
accidentally clobber important objects.  There's been a weak consensus that
test databases should have names including "regression", and that test role
names should start with "regress_", but we didn't have any particular rule
about tablespace names; and neither of the other rules was followed with
any consistency either.

This commit moves us a long way towards having a hard-and-fast rule that
regression test databases must have names including "regression", and that
test role and tablespace names must start with "regress_".  It's not
completely there because I did not touch some test cases in rolenames.sql
that test creation of special role names like "session_user".  That will
require some rethinking of exactly what we want to test, whereas the intent
of this patch is just to hit all the cases in which the needed renamings
are cosmetic.

There is no enforcement mechanism in this patch either, but if we don't
add one we can expect that the tests will soon be violating the convention
again.  Again, that's not such a cosmetic change and it will require
discussion.  (But I did use a quick-hack enforcement patch to find these
cases.)

Discussion: <16638.1468620817@sss.pgh.pa.us>
2016-07-17 18:42:43 -04:00
Stephen Frost 47f5bb9f53 Correctly dump database and tablespace ACLs
Dump out the appropriate GRANT/REVOKE commands for databases and
tablespaces from pg_dumpall to replicate the current state.

This was broken during the changes to buildACLCommands for 9.6+
servers for pg_init_privs.
2016-07-17 09:04:46 -04:00
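These are statements of the following shape (role, database, and tablespace
names hypothetical):

    REVOKE ALL ON DATABASE appdb FROM PUBLIC;
    GRANT CONNECT, TEMPORARY ON DATABASE appdb TO app_user;
    GRANT CREATE ON TABLESPACE fastspace TO app_user;
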
Tom Lane f8ace5477e Fix type-safety problem with parallel aggregate serial/deserialization.
The original specification for this called for the deserialization function
to have signature "deserialize(serialtype) returns transtype", which is a
security violation if transtype is INTERNAL (which it always would be in
practice) and serialtype is not (which ditto).  The patch blithely overrode
the opr_sanity check for that, which was sloppy-enough work in itself,
but the indisputable reason this cannot be allowed to stand is that CREATE
FUNCTION will reject such a signature and thus it'd be impossible for
extensions to create parallelizable aggregates.

The minimum fix to make the signature type-safe is to add a second, dummy
argument of type INTERNAL.  But to lock it down a bit more and make misuse
of INTERNAL-accepting functions less likely, let's get rid of the ability
to specify a "serialtype" for an aggregate and just say that the only
useful serialtype is BYTEA --- which, in practice, is the only interesting
value anyway, due to the usefulness of the send/recv infrastructure for
this purpose.  That means we only have to allow "serialize(internal)
returns bytea" and "deserialize(bytea, internal) returns internal" as
the signatures for these support functions.

In passing fix bogus signature of int4_avg_combine, which I found thanks
to adding an opr_sanity check on combinefunc signatures.

catversion bump due to removing pg_aggregate.aggserialtype and adjusting
signatures of assorted built-in functions.

David Rowley and Tom Lane

Discussion: <27247.1466185504@sss.pgh.pa.us>
2016-06-22 16:52:41 -04:00
Peter Eisentraut 47981a4665 Translation updates
Source-Git-URL: git://git.postgresql.org/git/pgtranslation/messages.git
Source-Git-Hash: 0c374f8d25ed31833a10d24252bc928d41438838
2016-06-20 09:48:08 -04:00
Tom Lane 8383486f10 Force idle_in_transaction_session_timeout off in pg_dump and autovacuum.
We disable statement_timeout and lock_timeout during dump and restore, to
prevent any global settings that might exist from breaking routine backups.
Commit c6dda1f48 should have added idle_in_transaction_session_timeout to
that list, but failed to.

Another place where these timeouts get turned off is autovacuum.  While
I doubt an idle timeout could fire there, it seems better to be safe than
sorry.

pg_dump issue noted by Bernd Helmle, the other one found by grepping.

Report: <352F9B77DB5D3082578D17BB@eje.land.credativ.lan>
2016-06-15 10:53:03 -04:00
Noah Misch 3be0a62ffe Finish pgindent run for 9.6: Perl files. 2016-06-12 04:19:56 -04:00
Robert Haas 4bc424b968 pgindent run for 9.6 2016-06-09 18:02:36 -04:00
Stephen Frost 562f06f3f0 pg_dump only selected components of ACCESS METHODs
dumpAccessMethod() didn't get the memo that we now have a bitfield for
the components which should be dumped instead of a simple boolean.

Correct that by checking if the relevant bit is set for each component
being dumped out (and not dumping it out if it isn't set).

This corrects an issue where CREATE ACCESS METHOD commands were being
included in non-binary-upgrades when an extension included an access
method (as the bloom extensions does).

Also add a regression test to make sure that we only dump out the
ACCESS METHOD commands, when they are part of an extension, when doing
a binary upgrade.

Pointed out by Thom Brown.
2016-06-07 09:56:02 -04:00
Tom Lane 6c72a28e5c Suppress -Wunused-result warnings about write(), again.
Adopt the same solution as in commit aa90e148ca, but this time
let's put the ugliness inside the write_stderr() macro, instead of
expecting each call site to deal with it.  Back-port that decision
into psql/common.c where I got the macro from in the first place.

Per gripe from Peter Eisentraut.
2016-06-03 11:29:38 -04:00
Greg Stark e1623c3959 Fix various common misspellings.
Mostly these are just comments but there are a few in documentation
and a handful in code and tests. Hopefully this doesn't cause too much
unnecessary pain for backpatching. I relented from some of the most
common like "thru" for that reason. The rest don't seem numerous
enough to cause problems.

Thanks to Kevin Lyda's tool https://pypi.python.org/pypi/misspellings
2016-06-03 16:08:45 +01:00
Tom Lane e652273e07 Redesign handling of SIGTERM/control-C in parallel pg_dump/pg_restore.
Formerly, Unix builds of pg_dump/pg_restore would trap SIGINT and similar
signals and set a flag that was tested in various data-transfer loops.
This was prone to errors of omission (cf commit 3c8aa6654); and even if
the client-side response was prompt, we did nothing that would cause
long-running SQL commands (e.g. CREATE INDEX) to terminate early.
Also, the master process would effectively do nothing at all upon receipt
of SIGINT; the only reason it seemed to work was that in typical scenarios
the signal would also be delivered to the child processes.  We should
support termination when a signal is delivered only to the master process,
though.

Windows builds had no console interrupt handler, so they would just fall
over immediately at control-C, again leaving long-running SQL commands to
finish unmolested.

To fix, remove the flag-checking approach altogether.  Instead, allow the
Unix signal handler to send a cancel request directly and then exit(1).
In the master process, also have it forward the signal to the children.
On Windows, add a console interrupt handler that behaves approximately
the same.  The main difference is that a single execution of the Windows
handler can send all the cancel requests since all the info is available
in one process, whereas on Unix each process sends a cancel only for its
own database connection.

In passing, fix an old problem that DisconnectDatabase tends to send a
cancel request before exiting a parallel worker, even if nothing went
wrong.  This is at least a waste of cycles, and could lead to unexpected
log messages, or maybe even data loss if it happened in pg_restore (though
in the current code the problem seems to affect only pg_dump).  The cause
was that after a COPY step, pg_dump was leaving libpq in PGASYNC_BUSY
state, causing PQtransactionStatus() to report PQTRANS_ACTIVE.  That's
normally harmless because the next PQexec() will silently clear the
PGASYNC_BUSY state; but in a parallel worker we might exit without any
additional SQL commands after a COPY step.  So add an extra PQgetResult()
call after a COPY to allow libpq to return to PGASYNC_IDLE state.

This is a bug fix, IMO, so back-patch to 9.3 where parallel dump/restore
were introduced.

Thanks to Kyotaro Horiguchi for Windows testing and code suggestions.

Original-Patch: <7005.1464657274@sss.pgh.pa.us>
Discussion: <20160602.174941.256342236.horiguchi.kyotaro@lab.ntt.co.jp>
2016-06-02 13:28:17 -04:00
Tom Lane 763eec6b6d Clean up some minor inefficiencies in parallel dump/restore.
Parallel dump did a totally pointless query to find out the name of each
table to be dumped, which it already knows.  Parallel restore runs issued
lots of redundant SET commands because _doSetFixedOutputState() was invoked
once per TOC item rather than just once at connection start.  While the
extra queries are insignificant if you're dumping or restoring large
tables, it still seems worth getting rid of them.

Also, give the responsibility for selecting the right client_encoding for
a parallel dump worker to setup_connection() where it naturally belongs,
instead of having ad-hoc code for that in CloneArchive().  And fix some
minor bugs like use of strdup() where pg_strdup() would be safer.

Back-patch to 9.3, mostly to keep the branches in sync in an area that
we're still finding bugs in.

Discussion: <5086.1464793073@sss.pgh.pa.us>
2016-06-01 16:14:21 -04:00
Tom Lane 3c8aa6654a Fix missing abort checks in pg_backup_directory.c.
Parallel restore from directory format failed to respond to control-C
in a timely manner, because there were no checkAborting() calls in the
code path that reads data from a file and sends it to the backend.
If any worker was in the midst of restoring data for a large table,
you'd just have to wait.

This fix doesn't do anything for the problem of aborting a long-running
server-side command, but at least it fixes things for data transfers.

Back-patch to 9.3 where parallel restore was introduced.
2016-05-29 13:18:48 -04:00
Tom Lane 210981a4a9 Remove pg_dump/parallel.c's useless "aborting" flag.
This was effectively dead code, since the places that tested it could not
be reached after we entered the on-exit-cleanup routine that would set it.
It seems to have been a leftover from a design in which error abort would
try to send fresh commands to the workers --- a design which could never
have worked reliably, of course.  Since the flag is not cross-platform, it
complicates reasoning about the code's behavior, which we could do without.

Although this is effectively just cosmetic, back-patch anyway, because
there are some actual bugs in the vicinity of this behavior.

Discussion: <15583.1464462418@sss.pgh.pa.us>
2016-05-29 13:00:09 -04:00
Tom Lane 6b3094c26f Lots of comment-fixing, and minor cosmetic cleanup, in pg_dump/parallel.c.
The commentary in this file was in extremely sad shape.  The author(s)
had clearly never heard of the project convention that a function header
comment should provide an API spec of some sort for that function.  Much
of it was flat out wrong, too --- maybe it was accurate when written, but
if so it had not been updated to track subsequent code revisions.  Rewrite
and rearrange to try to bring it up to speed, and annotate some of the
places where more work is needed.  (I've refrained from actually fixing
anything of substance ... yet.)

Also, rename a couple of functions for more clarity as to what they do,
do some very minor code rearrangement, remove some pointless Asserts,
fix an incorrect Assert in readMessageFromPipe, and add a missing socket
close in one error exit from pgpipe().  The last would be a bug if we
tried to continue after pgpipe() failure, but since we don't, it's just
cosmetic at present.

Although this is only cosmetic, back-patch to 9.3 where parallel.c was
added.  It's sufficiently invasive that it'll pose a hazard for future
back-patching if we don't.

Discussion: <25239.1464386067@sss.pgh.pa.us>
2016-05-28 14:02:11 -04:00
Tom Lane 807b45375b Clean up thread management in parallel pg_dump for Windows.
Since we start the worker threads with _beginthreadex(), we should use
_endthreadex() to terminate them.  We got this right in the normal-exit
code path, but not so much during an error exit from a worker.
In addition, be sure to apply CloseHandle to the thread handle after
each thread exits.

It's not clear that these oversights cause any user-visible problems,
since the pg_dump run is about to terminate anyway.  Still, it's clearly
better to follow Microsoft's API specifications than ignore them.

Also a few cosmetic cleanups in WaitForTerminatingWorkers(), including
being a bit less random about where to cast between uintptr_t and HANDLE,
and being sure to clear the worker identity field for each dead worker
(not that false matches should be possible later, but let's be careful).

Original observation and patch by Armin Schöffmann, cosmetic improvements
by Michael Paquier and me.  (Armin's patch also included closing sockets
in ShutdownWorkersHard(), but that's been dealt with already in commit
df8d2d8c4.)  Back-patch to 9.3 where parallel pg_dump was introduced.

Discussion: <zarafa.570306bd.3418.074bf1420d8f2ba2@root.aegaeon.de>
2016-05-27 12:02:09 -04:00
Magnus Hagander d74048defc Make pg_dump error cleanly with -j against hot standby
Getting a synchronized snapshot is not supported on a hot standby node,
but one is taken by default when using -j with multiple sessions.  Trying
to do so previously failed, but with a server error that would also end up
in the log.  Instead, properly detect this case and give a better error
message.
2016-05-26 22:14:23 +02:00
Tom Lane cae2bb1986 Make pg_dump behave more sanely when built without HAVE_LIBZ.
For some reason the code to emit a warning and switch to uncompressed
output was placed down in the guts of pg_backup_archiver.c.  This is
definitely too late in the case of parallel operation (and I rather
wonder if it wasn't too late for other purposes as well).  Put it in
pg_dump.c's option-processing logic, which seems a much saner place.

Also, the default behavior with custom or directory output format was
to emit the warning telling you the output would be uncompressed.  This
seems unhelpful, so silence that case.

Back-patch to 9.3 where parallel dump was introduced.

Kyotaro Horiguchi, adjusted a bit by me

Report: <20160526.185551.242041780.horiguchi.kyotaro@lab.ntt.co.jp>
2016-05-26 11:51:04 -04:00
Tom Lane df8d2d8c42 In Windows pg_dump, ensure idle workers will shut down during error exit.
The Windows coding of ShutdownWorkersHard() thought that setting termEvent
was sufficient to make workers exit after an error.  But that only helps
if a worker is busy and passes through checkAborting().  An idle worker
will just sit, resulting in pg_dump failing to exit until the user gives up
and hits control-C.  We should close the write end of the command pipe
so that idle workers will see socket EOF and exit, as the Unix coding was
already doing.

Back-patch to 9.3 where parallel pg_dump was introduced.

Kyotaro Horiguchi
2016-05-26 10:50:30 -04:00
Tom Lane 9abd64ec99 Fix broken error handling in parallel pg_dump/pg_restore.
In the original design for parallel dump, worker processes reported errors
by sending them up to the master process, which would print the messages.
This is unworkably fragile for a couple of reasons: it risks deadlock if a
worker sends an error at an unexpected time, and if the master has already
died for some reason, the user will never get to see the error at all.
Revert that idea and go back to just always printing messages to stderr.
This approach means that if all the workers fail for similar reasons (eg,
bad password or server shutdown), the user will see N copies of that
message, not only one as before.  While that's slightly annoying, it's
certainly better than not seeing any message; not to mention that we
shouldn't assume that only the first failure is interesting.

An additional problem in the same area was that the master failed to
disable SIGPIPE (at least until much too late), which meant that sending a
command to an already-dead worker would cause the master to crash silently.
That was bad enough in itself but was made worse by the total reliance on
the master to print errors: even if the worker had reported an error, you
would probably not see it, depending on timing.  Instead disable SIGPIPE
right after we've forked the workers, before attempting to send them
anything.

Additionally, the master relies on seeing socket EOF to realize that a
worker has exited prematurely --- but on Windows, there would be no EOF
since the socket is attached to the process that includes both the master
and worker threads, so it remains open.  Make archive_close_connection()
close the worker end of the sockets so that this acts more like the Unix
case.  It's not perfect, because if a worker thread exits without going
through exit_nicely() the closures won't happen; but that's not really
supposed to happen.

This has been wrong all along, so back-patch to 9.3 where parallel dump
was introduced.

Report: <2458.1450894615@sss.pgh.pa.us>
2016-05-25 12:40:12 -04:00
Stephen Frost 018eb027f1 Do not DROP default roles in pg_dumpall -c
When pulling the list of roles to drop, exclude roles whose names
begin with "pg_" (as we do when we are dumping the roles out to
recreate them).

Also add regression tests to cover pg_dumpall -c and this specific
issue.

Noticed by Rushabh Lathia.  Patch by me.
2016-05-24 23:31:55 -04:00
Stephen Frost 2e8b4bf804 Qualify table usage in dumpTable() and use regclass
All of the other tables used in the query in dumpTable(), which collects
column-level ACLs, are qualified, so we should also qualify the reference
to pg_init_privs, the related sub-select against pg_class, and the other
queries added by the pg_dump catalog-ACLs work.

Also, use ::regclass (or ::pg_catalog.regclass, where appropriate)
instead of using a poorly constructed query to get the OID for various
catalog tables.

Issues identified by Noah and Alvaro, patch by me.
2016-05-24 20:10:16 -04:00
Peter Eisentraut 48aaba4acf Translation updates
Source-Git-URL: git://git.postgresql.org/git/pgtranslation/messages.git
Source-Git-Hash: 17bf3e8564abf600274789fcc90e72532d5e7c05
2016-05-09 10:04:41 -04:00
Tom Lane b818088408 In new pg_dump TAP tests, remove trailing "$" from regexps using /m.
It emerges that some Perl versions before 5.8.9 have a bug with regexps
that use the /m flag and contain "$".  This is the reason why jacana
is still failing on HEAD, and I was able to duplicate the failure on
prairiedog's host.  There's no real need for "$" in these patterns,
since they are already matching through the statement-terminating
semicolons (or matching an explicit \n in some cases).  So just
remove it.

Note: the reason jacana hasn't actually reported any failures in the
last little while is that the way the pg_dump TAP tests are set up, any
failure of this sort results in echoing the entire pg_dump dump output
to stderr.  Since there were about a hundred such failures, that resulted
in a 30MB log file which choked the buildfarm upload script.  There is
room for improvement here :-(.

Per off-list discussion with Andrew and Stephen.
2016-05-07 16:36:50 -04:00
Tom Lane 74a73b1722 Clean up after pg_dump test runs.
The tmp_check directory needs to be removed by "make clean",
and also ignored by .gitignore.
2016-05-06 22:28:01 -04:00
Stephen Frost 0f97c722bb Disable BLOB test in pg_dump TAP tests
Buildfarm member jacana appears to have an issue with running this
test.  It's not entirely clear to me why, but rather than try to
fight with it, just disable it for now.

None of the other tests try to write out from psql directly as
this test does, so it seems likely that the rest of the tests will
be fine (as they have been on numerous other systems).
2016-05-06 21:24:31 -04:00
Stephen Frost c778e27e13 Correct query in pg_dumpall:dumpRoles
We need to use a new version branch, due to the 9.5 addition of bypassrls,
when adding the clause that excludes pg_* roles from being dumped by
pg_dumpall.

Pointed out by Noah, patch by me.
2016-05-06 16:15:52 -04:00
Stephen Frost eccfeeb631 Remove MODULES_big from test_pg_dump
The Makefile for test_pg_dump shouldn't have a MODULES_big line
because there's no actual compiled bit for that extension.  Hopefully
this will fix the Windows buildfarm members which were complaining.

In passing, also add the 'prove_installcheck' bit to the pg_dump and
test_pg_dump Makefiles, to get the buildfarm members to actually run
those tests.
2016-05-06 15:26:57 -04:00
Stephen Frost 6bd356c33a Add TAP tests for pg_dump
This TAP test suite will create a new cluster, populate it based on
the 'create_sql' values in the '%tests' hash, run all of the runs
defined in the '%pgdump_runs' hash, and then for each test in the
'%tests' hash, compare each run's output the the regular expression
defined for the test under the 'like' and 'unlike' functions, as
appropriate.

While this test suite covers a fair bit of ground (67% of pg_dump.c
and quite a bit of the other files in src/bin/pg_dump), there is
still quite a bit which remains to be added to provide better code
coverage.  Still, this is quite a bit better than we had, and has
found a few bugs already (note that the CREATE TRANSFORM test is
commented out, as it is currently failing).

Idea for using the TAP system from Tom, though all of the code is mine.
2016-05-06 14:06:50 -04:00
Stephen Frost e1b120a8cb Only issue LOCK TABLE commands when necessary
Reviewing the cases where we need to LOCK a given table during a dump,
it was pointed out by Tom that we really don't need to LOCK a table if
we are only looking to dump the ACL for it, or certain other
components.  After reviewing the queries run for all of the component
pieces, a list of components were determined to not require LOCK'ing
of the table.

This implements a check to avoid LOCK'ing those tables.

Initial complaint from Rushabh Lathia, discussed with Robert and Tom,
the patch is mine.
2016-05-06 14:06:50 -04:00
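For context, the per-table lock pg_dump takes is the weakest one available,
along these lines (table name hypothetical):

    BEGIN;
    LOCK TABLE public.orders IN ACCESS SHARE MODE;
    -- ... read the table's definition and, if requested, its data ...
    COMMIT;

Skipping even that lock is worthwhile when only components such as the ACL
are being dumped, since those are read from the catalogs alone.
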
Stephen Frost 5d589993ca pg_dump performance and other fixes
Do not try to dump objects which do not have ACLs when only ACLs are
being requested.  This results in a significant performance improvement
as we can avoid querying for further information on these objects when
we don't need to.

When limiting the components to dump for an extension, consider what
components have been requested.  Initially, we incorrectly hard-coded
the components of the extension objects to dump, which would mean that
we wouldn't dump some components even when they were asked for, and in
other cases we would dump components which weren't requested.

Correct defaultACLs to use 'dump_contains' instead of 'dump'.  The
defaultACL is considered a member of the namespace and should be
dumped based on the same set of components that the other objects in
the schema are, not based on what we're dumping for the namespace
itself (which might not include ACLs, if the namespace has just the
default or initial ACL).

Use DUMP_COMPONENT_ACL for from-initdb objects, to allow users to
change their ACLs, should they wish to.  This just extends what we
are doing for the pg_catalog namespace to objects which are not
members of namespaces.

Due to column ACLs being treated a bit differently from other ACLs
(they are actually reset to NULL when all privileges are revoked),
adjust the query which gathers column-level ACLs to consider all of
the ACL-relevant columns.
2016-05-06 14:06:50 -04:00
Stephen Frost 64d60c8bf0 Correct pg_dump WHERE clause for functions/aggregates
The query to grab the function/aggregate information is now joining
to pg_init_privs, so we can simplify (and correct) the WHERE clause
used to determine if a given function's ACL has changed from the
initial ACL on the function.

Bug found by Noah, patch by me.
2016-05-06 14:06:50 -04:00
Dean Rasheed 93a8c6fd6c Move and rename fmtReloptionsArray().
Move fmtReloptionsArray() from pg_dump.c to string_utils.c so that it
is available to other frontend code. In particular psql's \ev and \sv
commands need it to handle view reloptions. Also rename the function
to appendReloptionsArray(), which is a more accurate description of
what it does.

Author: Dean Rasheed
Reviewed-by: Peter Eisentraut
Discussion: http://www.postgresql.org/message-id/CAEZATCWZjCgKRyM-agE0p8ax15j9uyQoF=qew7D2xB6cF76T8A@mail.gmail.com
2016-05-06 12:45:36 +01:00
Peter Eisentraut 3019f432d6 pg_dump: Message style improvements
forgotten in b6dacc173b
2016-04-26 21:37:06 -04:00
Peter Eisentraut b6dacc173b pg_dump: Message style improvements 2016-04-25 17:16:59 -04:00
Robert Haas b4e0f18382 Add pg_dump support for the new PARALLEL option for aggregates.
This was an oversight in commit 41ea0c2376.

Fabrízio de Royes Mello, per a report from Tushar Ahuja
2016-04-20 23:06:06 -04:00
Tom Lane 6cead413bb Fix pg_dump so pg_upgrade'ing an extension with simple opfamilies works.
As reported by Michael Feld, pg_upgrade'ing an installation having
extensions with operator families that contain just a single operator class
failed to reproduce the extension membership of those operator families.
This caused no immediate ill effects, but would create problems when later
trying to do a plain dump and restore, because the seemingly-not-part-of-
the-extension operator families would appear separately in the pg_dump
output, and then would conflict with the families created by loading the
extension.  This has been broken ever since extensions were introduced,
and many of the standard contrib extensions are affected, so it's a bit
astonishing nobody complained before.

The cause of the problem is a perhaps-ill-considered decision to omit
such operator families from pg_dump's output on the grounds that the
CREATE OPERATOR CLASS commands could recreate them, and having explicit
CREATE OPERATOR FAMILY commands would impede loading the dump script into
pre-8.3 servers.  Whatever the merits of that decision when 8.3 was being
written, it looks like a poor tradeoff now.  We can fix the pg_upgrade
problem simply by removing that code, so that the operator families are
dumped explicitly (and then will be properly made to be part of their
extensions).

Although this fixes the behavior of future pg_upgrade runs, it does nothing
to clean up existing installations that may have improperly-linked operator
families.  Given the small number of complaints to date, maybe we don't
need to worry about providing an automated solution for that; anyone who
needs to clean it up can do so with manual "ALTER EXTENSION ADD OPERATOR
FAMILY" commands, or even just ignore the duplicate-opfamily errors they
get during a pg_restore.  In any case we need this fix.

Back-patch to all supported branches.

Discussion: <20228.1460575691@sss.pgh.pa.us>
2016-04-13 18:58:14 -04:00
Tom Lane 074050f16a pg_dump: add missing "destroyPQExpBuffer(query)" in dumpForeignServer().
Coverity complained about this resource leak (why now, I don't know,
since it's been like that a long time).  Our general policy in pg_dump
is that PQExpBuffers are worth cleaning up, so do it here too.  But
don't bother with a back-patch, because it seems unlikely that very
many databases contain enough FOREIGN SERVER objects to notice.
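
The pattern in question, roughly (names are illustrative and the query text
is elided; only the final destroy call was missing):

    PQExpBuffer query = createPQExpBuffer();

    appendPQExpBufferStr(query, "SELECT ... FROM pg_foreign_server ...");
    res = ExecuteSqlQueryForSingleRow(fout, query->data);
    /* ... build the archive entry ... */
    PQclear(res);
    destroyPQExpBuffer(query);      /* the previously missing cleanup */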
2016-04-11 00:00:08 -04:00
Stephen Frost 293007898d Reserve the "pg_" namespace for roles
This will prevent users from creating roles which begin with "pg_" and
will check for those roles before allowing an upgrade using pg_upgrade.

This will allow for default roles to be provided at initdb time.

Reviews by José Luis Tallón and Robert Haas
2016-04-08 16:56:27 -04:00
Stephen Frost fa6075e551 Fix improper usage of 'dump' bitmap
Now that 'dump' is a bitmap, we can't simply set it to 'true'.

Noticed while debugging the prior issue.
2016-04-08 16:30:02 -04:00
Stephen Frost 689f9a0588 In dumpTable, re-instate the skipping logic
Pretty sure I removed this based on some incorrect thinking that it was
no longer possible to reach this point for a table which will not be
dumped, but that's clearly wrong.

Pointed out on IRC by Erik Rijkers.
2016-04-08 15:00:44 -04:00
Teodor Sigaev 8b99edefca Revert CREATE INDEX ... INCLUDING ...
It's not ready yet, revert two commits
690c543550 - unstable test output
386e3d7609 - patch itself
2016-04-08 21:52:13 +03:00
Teodor Sigaev 386e3d7609 CREATE INDEX ... INCLUDING (column[, ...])
Indexes (currently only B-tree) can now contain "extra" columns which do
not participate in the index structure; they are just stored in leaf
tuples.  This allows a single index to satisfy an index-only scan that
would otherwise need two or more indexes.

Author: Anastasia Lubennikova with minor editorializing by me
Reviewers: David Rowley, Peter Geoghegan, Jeff Janes
2016-04-08 19:45:59 +03:00
Stephen Frost 23f34fa4ba In pg_dump, include pg_catalog and extension ACLs, if changed
Now that all of the infrastructure exists, add in the ability to
dump out the ACLs of the objects inside of pg_catalog or the ACLs
for objects which are members of extensions, but only if they have
been changed from their original values.

The original values are tracked in pg_init_privs.  When pg_dump'ing
9.6-and-above databases, we will dump out the ACLs for all objects
in pg_catalog and the ACLs for all extension members, where the ACL
has been changed from the original value which was set during either
initdb or CREATE EXTENSION.

This should not change dumps against pre-9.6 databases.

Reviews by Alexander Korotkov, Jose Luis Tallon
2016-04-06 21:45:32 -04:00
Stephen Frost d217b2c360 In pg_dump, split "dump" into "dump" and "dump_contains"
Historically, the "dump" component of the namespace has been used
to decide if the objects inside of the namespace should be dumped
also.  Given that "dump" is now a bitmask and may be partial, and
we may want to dump out all components of the namespace object but
only some of the components of objects contained in the namespace,
create a "dump_contains" bitmask which will represent what components
of the objects inside of a namespace should be dumped out.

No behavior change here, but in preparation for a change where we
will dump out just the ACLs of objects in pg_catalog, but we might
not dump out the ACL of the pg_catalog namespace itself (for instance,
when it hasn't been changed from the value set at initdb time).

Reviews by Alexander Korotkov, Jose Luis Tallon
2016-04-06 21:45:32 -04:00
Stephen Frost a9f0e8e5a2 In pg_dump, use a bitmap to represent what to include
pg_dump has historically used a simple boolean 'dump' value to indicate
if a given object should be included in the dump or not.  Instead, use
a bitmap which breaks down the components of an object into their
distinct pieces and use that bitmap to only include the components
requested.

This does not include any behavioral change, but is in preparation for
the change to dump out just ACLs for objects in pg_catalog.
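
A minimal sketch of the idea; the component names and bit values below are
illustrative rather than an exact copy of pg_dump.h:

    #define DUMP_COMPONENT_NONE         0
    #define DUMP_COMPONENT_DEFINITION   (1 << 0)
    #define DUMP_COMPONENT_DATA         (1 << 1)
    #define DUMP_COMPONENT_COMMENT      (1 << 2)
    #define DUMP_COMPONENT_ACL          (1 << 3)
    #define DUMP_COMPONENT_ALL          0xFFFF

    /* a pg_catalog function whose ACL was changed: dump only that piece */
    finfo->dobj.dump = DUMP_COMPONENT_ACL;

    /* later, each emitter checks its own bit */
    if (finfo->dobj.dump & DUMP_COMPONENT_ACL)
    {
        /* emit the object's GRANT/REVOKE commands here */
    }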

Reviews by Alexander Korotkov, Jose Luis Tallon
2016-04-06 21:45:32 -04:00
Peter Eisentraut 3b3fcc4eea pg_dump: Add table qualifications to some tags
Some object types have names that are only unique for one table.  But
for those we generally didn't put the table name into the dump TOC tag.
So it was impossible to identify these objects if the same name was used
for multiple tables.  This affects policies, column defaults,
constraints, triggers, and rules.

Fix by adding the table name to the TOC tag, so that it now reads
"$schema $table $object".

Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
2016-04-06 12:13:11 -04:00
Robert Haas 5fe5a2cee9 Allow aggregate transition states to be serialized and deserialized.
This is necessary infrastructure for supporting parallel aggregation
for aggregates whose transition type is "internal".  Such values
can't be passed between cooperating processes, because they are
just pointers.

David Rowley, reviewed by Tomas Vondra and by me.
2016-03-29 15:04:05 -04:00
Alvaro Herrera 37732a2555 Fix minor leak in pg_dump for ACCESS METHOD.
Bug reported by Coverity.

Author: Michaël Paquier
2016-03-28 14:27:41 -03:00
Teodor Sigaev dabd255d58 Fix comment in pg_dump.
It was missed in 473b932870,
CREATE ACCESS METHOD

Alexander Korotkov
2016-03-28 19:17:28 +03:00
Tom Lane 7caaeaf360 Link libpq after libpgfeutils to satisfy Windows linker.
Some of the non-MSVC Windows buildfarm members seem to need this to avoid
getting "undefined symbol" errors on libpgfeutils' references to libpq.
I could understand that if libpq were a static library, but surely it is
not?  Oh well, at least the extra reference is no more harmful than it is
for libpgcommon or libpgport.
2016-03-24 20:45:31 -04:00
Tom Lane 588d963b00 Create src/fe_utils/, and move stuff into there from pg_dump's dumputils.
Per discussion, we want to create a static library and put the stuff into
it that until now has been shared across src/bin/ directories by ad-hoc
methods like symlinking a source file.  This commit creates the library and
populates it with a couple of files that contain the widely-useful portions
of pg_dump's dumputils.c file.  dumputils.c survives, because it has some
stuff that didn't seem appropriate for fe_utils, but it's significantly
smaller and is no longer referenced from any other directory.

Follow-on patches will move more stuff into fe_utils.

The Mkvcbuild.pm hacking here is just a best guess; we'll see how the
buildfarm likes it.
2016-03-24 15:55:57 -04:00
Alvaro Herrera 473b932870 Support CREATE ACCESS METHOD
This enables external code to create access methods.  This is useful so
that extensions can add their own access methods which can be formally
tracked for dependencies, so that DROP operates correctly.  Also, having
explicit support makes pg_dump work correctly.

Currently only index AMs are supported, but we expect different types to
be added in the future.

Authors: Alexander Korotkov, Petr Jelínek
Reviewed-By: Teodor Sigaev, Petr Jelínek, Jim Nasby
Commitfest-URL: https://commitfest.postgresql.org/9/353/
Discussion: https://www.postgresql.org/message-id/CAPpHfdsXwZmojm6Dx+TJnpYk27kT4o7Ri6X_4OSWcByu1Rm+VA@mail.gmail.com
2016-03-23 23:01:35 -03:00
Tom Lane 2c6af4f442 Move keywords.c/kwlookup.c into src/common/.
Now that we have src/common/ for code shared between frontend and backend,
we can get rid of (most of) the klugy ways that the keyword table and
keyword lookup code were formerly shared between different uses.
This is a first step towards a more general plan of getting rid of
special-purpose kluges for sharing code in src/bin/.

I chose to merge kwlookup.c back into keywords.c, as it once was, and
always has been so far as keywords.h is concerned.  We could have
kept them separate, but there is no place that uses ScanKeywordLookup
without also wanting access to the backend's keyword list, so there
seems little point.

ecpg is still a bit weird, but at least now the trickiness is documented.

I think that the MSVC build script should require no adjustments beyond
what's done here ... but we'll soon find out.
2016-03-23 20:22:08 -04:00
Robert Haas 3aff33aa68 Fix typos.
Oskari Saarenmaa
2016-03-15 18:06:11 -04:00
Peter Eisentraut a914a04142 pg_dump: Fix inconsistent sscanf() conversions
It was using %u to read, into a signed integer variable, a value that had
earlier been produced by snprintf with %d.  This seems to work in practice
but is incorrect.
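
In miniature, the mismatch looks like this (a self-contained sketch, not
the pg_dump code itself):

    #include <stdio.h>

    int
    main(void)
    {
        char buf[32];
        int  val = -42;

        snprintf(buf, sizeof(buf), "%d", val);  /* written as a signed int */
        /* sscanf(buf, "%u", &val);   wrong: %u requires an unsigned int * */
        sscanf(buf, "%d", &val);                /* matching conversion */
        printf("%d\n", val);
        return 0;
    }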

found by cppcheck
2016-02-18 20:12:38 -05:00
Tom Lane 0ed707e9b7 In pg_dump, ensure that view triggers are processed after view rules.
If a view is split into CREATE TABLE + CREATE RULE to break a circular
dependency, then any triggers on the view must be dumped/reloaded after
the CREATE RULE; else the backend may reject the CREATE TRIGGER because
it's the wrong type of trigger for a plain table.  This works all right
in plain dump/restore because of pg_dump's sorting heuristic that places
triggers after rules.  However, when using parallel restore, the ordering
must be enforced by a dependency --- and we didn't have one.

Fixing this is a mere matter of adding an addObjectDependency() call,
except that we need to be able to find all the triggers belonging to the
view relation, and there was no easy way to do that.  Add fields to
pg_dump's TableInfo struct to remember where the associated TriggerInfo
struct(s) are.
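
Roughly, the fix amounts to something like this (field names are
assumptions based on the description above):

    int  i;

    /*
     * Make every trigger on the view depend on the view's _RETURN rule,
     * so parallel restore cannot run CREATE TRIGGER before CREATE RULE.
     */
    for (i = 0; i < tbinfo->numTriggers; i++)
        addObjectDependency(&tbinfo->triggers[i].dobj, ruleinfo->dobj.dumpId);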

Per bug report from Dennis Kögel.  The failure can be exhibited at least
as far back as 9.1, so back-patch to all supported branches.
2016-02-04 00:26:10 -05:00
Robert Haas 025b2f3392 Fix cross-version pg_dump for aggregate combine functions.
Fixes a defect in commit a7de3dc5c3.

David Rowley, per report from Jeff Janes, who also checked that the
fix works.
2016-01-27 21:45:07 -05:00
Alvaro Herrera df43fcf457 pg_dump: Fix quoting of domain constraint names
The original code was adding double quotes to an already-quoted
identifier, leading to nonsensical results.  Remove the quoting call.

I introduced the broken code in 7eca575d1c of 9.5 era, so backpatch to
9.5.

Report and patch by Elvis Pranskevichus
Reviewed by Michael Paquier
2016-01-22 20:04:35 -03:00
Robert Haas a7de3dc5c3 Support multi-stage aggregation.
Aggregate nodes now have two new modes: a "partial" mode where they
output the unfinalized transition state, and a "finalize" mode where
they accept unfinalized transition states rather than individual
values as input.

These new modes are not used anywhere yet, but they will be necessary
for parallel aggregation.  The infrastructure also figures to be
useful for cases where we want to aggregate local data and remote
data via the FDW interface, and want to bring back partial aggregates
from the remote side that can then be combined with locally generated
partial aggregates to produce the final value.  It may also be useful
even when neither FDWs nor parallelism are in play, as explained in
the comments in nodeAgg.c.

David Rowley and Simon Riggs, reviewed by KaiGai Kohei, Heikki
Linnakangas, Haribabu Kommi, and me.
2016-01-20 13:46:50 -05:00
Tom Lane 57ce9acc04 Remove dead code in pg_dump.
Coverity quite reasonably complained that this check for fout==NULL
occurred after we'd already dereferenced fout.  However, the check
is just dead code since there is no code path by which CreateArchive
can return a null pointer.  Errors such as can't-open-that-file are
reported down inside CreateArchive, and control doesn't return.
So let's silence the warning by removing the dead code, rather than
continuing to pretend it does something.

Coverity didn't complain about this before 5b5fea2a1, so back-patch
to 9.5 like that patch.
2016-01-17 11:38:40 -05:00
Tom Lane e72d7d8531 Handle extension members when first setting object dump flags in pg_dump.
pg_dump's original approach to handling extension member objects was to
run around and clear (or set) their dump flags rather late in its data
collection process.  Unfortunately, quite a lot of code expects those flags
to be valid before that; which was an entirely reasonable expectation
before we added extensions.  In particular, this explains Karsten Hilbert's
recent report of pg_upgrade failing on a database in which an extension
has been installed into the pg_catalog schema.  Its objects are initially
marked as not-to-be-dumped on the strength of their schema, and later we
change them to must-dump because we're doing a binary upgrade of their
extension; but we've already skipped essential tasks like making associated
DO_SHELL_TYPE objects.

To fix, collect extension membership data first, and incorporate it in the
initial setting of the dump flags, so that those are once again correct
from the get-go.  This has the undesirable side effect of slightly
lengthening the time taken before pg_dump acquires table locks, but testing
suggests that the increase in that window is not very much.

Along the way, get rid of ugly special-case logic for deciding whether
to dump procedural languages, FDWs, and foreign servers; dump decisions
for those are now correct up-front, too.

In 9.3 and up, this also fixes erroneous logic about when to dump event
triggers (basically, they were *always* dumped before).  In 9.5 and up,
transform objects had that problem too.

Since this problem came in with extensions, back-patch to all supported
versions.
2016-01-13 18:55:27 -05:00
Tom Lane 5b5fea2a11 Access pg_dump's options structs through Archive struct, not directly.
Rather than passing around DumpOptions and RestoreOptions as separate
arguments, add fields to struct Archive to carry pointers to these objects,
and access them through those fields when needed.  There already was a
RestoreOptions pointer in Archive, though for no obvious reason it was part
of the "private" struct rather than out where pg_dump.c could see it.

Doing this allows reversion of quite a lot of parameter-addition changes
made in commit 0eea8047bf, which is a good thing IMO because this will
reduce the code delta between 9.4 and 9.5, probably easing a few future
back-patch efforts.  Moreover, the previous commit only added a DumpOptions
argument to functions that had to have it at the time, which means we could
anticipate still more code churn (and more back-patch hazard) as the
requirement spread further.  I'd hit exactly that problem in my upcoming
patch to fix extension membership marking, which is what motivated me to
do this.
2016-01-13 17:48:33 -05:00
Tom Lane 26905e009b Run pgindent on src/bin/pg_dump/*
To ease doing indent fixups on a couple of patches I have in progress.
2016-01-13 15:48:54 -05:00
Tom Lane b416c0bb62 Teach pg_dump to quote reloption values safely.
Commit c7e27becd2 fixed this on the backend side, but we neglected
the fact that several code paths in pg_dump were printing reloptions
values that had not gotten massaged by ruleutils.  Apply essentially the
same quoting logic in those places, too.
2016-01-02 19:04:45 -05:00
Bruce Momjian ee94300446 Update copyright for 2016
Backpatch certain files through 9.1
2016-01-02 13:33:40 -05:00
Tom Lane 96cd61a169 Fix factual and grammatical errors in comments for struct _tableInfo.
Amit Langote, further adjusted by me
2015-12-24 10:42:58 -05:00
Tom Lane 1aa41e3eae In pg_dump, remember connection passwords no matter how we got them.
When pg_dump prompts the user for a password, it remembers the password
for possible re-use by parallel worker processes.  However, libpq might
have extracted the password from a connection string originally passed
as "dbname".  Since we don't record the original form of dbname but
break it down to host/port/etc, the password gets lost.  Fix that by
retrieving the actual password from the PGconn.
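
A sketch of the idea, assuming a variable that caches the password for the
parallel workers:

    /*
     * After connecting, remember whatever password libpq actually used,
     * even if it came from a connection string passed as "dbname".
     */
    const char *pwd = PQpass(conn);

    if (pwd && pwd[0] != '\0')
        savedPassword = pg_strdup(pwd);     /* variable name is illustrative */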

(It strikes me that this whole approach is rather broken, as it will also
lose other information such as options that might have been present in
the connection string.  But we'll leave that problem for another day.)

In passing, get rid of rather silly use of malloc() for small fixed-size
arrays.

Back-patch to 9.3 where parallel pg_dump was introduced.

Report and fix by Zeus Kronion, adjusted a bit by Michael Paquier and me
2015-12-23 14:25:53 -05:00
Peter Eisentraut 30c0c4bf12 Remove unnecessary escaping in C character literals
'\"' is more commonly written simply as '"'.
2015-12-22 22:43:46 -05:00
Tom Lane 00cdd83521 Adopt the GNU convention for handling tar-archive members exceeding 8GB.
The POSIX standard for tar headers requires archive member sizes to be
printed in octal with at most 11 digits, limiting the representable file
size to 8GB.  However, GNU tar and apparently most other modern tars
support a convention in which oversized values can be stored in base-256,
allowing any practical file to be a tar member.  Adopt this convention
to remove two limitations:
* pg_dump with -Ft output format failed if the contents of any one table
exceeded 8GB.
* pg_basebackup failed if the data directory contained any file exceeding
8GB.  (This would be a fatal problem for installations configured with a
table segment size of 8GB or more, and it has also been seen to fail when
large core dump files exist in the data directory.)

File sizes under 8GB are still printed in octal, so that no compatibility
issues are created except in cases that would have failed entirely before.
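
In outline, the writer side of the convention looks roughly like this (a
sketch, not the exact src/port code):

    #include <stdint.h>
    #include <stdio.h>

    /*
     * Write val into a len-byte tar header field: octal if it fits,
     * otherwise the GNU base-256 form flagged by 0x80 in the first byte.
     */
    static void
    write_tar_number(char *s, int len, uint64_t val)
    {
        if (val < ((uint64_t) 1 << ((len - 1) * 3)))
            snprintf(s, len, "%0*llo", len - 1, (unsigned long long) val);
        else
        {
            int     i;

            s[0] = (char) 0x80;         /* marks the base-256 representation */
            for (i = len - 1; i >= 1; i--)
            {
                s[i] = (char) (val & 0xFF);
                val >>= 8;
            }
        }
    }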

In addition, this patch fixes several bugs in the same area:

* In 9.3 and later, we'd defined tarCreateHeader's file-size argument as
size_t, which meant that on 32-bit machines it would write a corrupt tar
header for file sizes between 4GB and 8GB, even though no error was raised.
This broke both "pg_dump -Ft" and pg_basebackup for such cases.

* pg_restore from a tar archive would fail on tables of size between 4GB
and 8GB, on machines where either "size_t" or "unsigned long" is 32 bits.
This happened even with an archive file not affected by the previous bug.

* pg_basebackup would fail if there were files of size between 4GB and 8GB,
even on 64-bit machines.

* In 9.3 and later, "pg_basebackup -Ft" failed entirely, for any file size,
on 64-bit big-endian machines.

In view of these potential data-loss bugs, back-patch to all supported
branches, even though removal of the documented 8GB limit might otherwise
be considered a new feature rather than a bug fix.
2015-11-21 20:21:31 -05:00
Stephen Frost 088c83363a ALTER TABLE .. FORCE ROW LEVEL SECURITY
To allow users to force RLS to always be applied, even for table owners,
add ALTER TABLE .. FORCE ROW LEVEL SECURITY.

row_security=off overrides FORCE ROW LEVEL SECURITY, to ensure pg_dump
output is complete (by default).

Also add SECURITY_NOFORCE_RLS context to avoid data corruption when
ALTER TABLE .. FORCE ROW LEVEL SECURITY is being used. The
SECURITY_NOFORCE_RLS security context is used only during referential
integrity checks and is only considered in check_enable_rls() after we
have already checked that the current user is the owner of the relation
(which should always be the case during referential integrity checks).

Back-patch to 9.5 where RLS was added.
2015-10-04 21:05:08 -04:00
Tom Lane 8ab4a6bd3f Fix pg_dump to handle inherited NOT VALID check constraints correctly.
This case seems to have been overlooked when unvalidated check constraints
were introduced, in 9.2.  The code would attempt to dump such constraints
over again for each child table, even though adding them to the parent
table is sufficient.

In 9.2 and 9.3, also fix contrib/pg_upgrade/Makefile so that the "make
clean" target fully cleans up after a failed test.  This evidently got
dealt with at some point in 9.4, but it wasn't back-patched.  I ran into
it while testing this fix ...

Per bug #13656 from Ingmar Brouns.
2015-10-01 16:20:13 -04:00
Peter Eisentraut 883af819c1 pg_dump: Fix some messages
Make quoting style match existing style.  Improve plural support.
2015-09-27 20:29:40 -04:00
Noah Misch 8346218c02 Restrict file mode creation mask during tmpfile().
Per Coverity.  Back-patch to 9.0 (all supported versions).

Michael Paquier, reviewed (in earlier versions) by Heikki Linnakangas.
2015-09-20 20:42:27 -04:00
Robert Haas 7aea8e4f2d Determine whether it's safe to attempt a parallel plan for a query.
Commit 924bcf4f16 introduced a framework
for parallel computation in PostgreSQL that makes most but not all
built-in functions safe to execute in parallel mode.  In order to have
parallel query, we'll need to be able to determine whether that query
contains functions (either built-in or user-defined) that cannot be
safely executed in parallel mode.  This requires those functions to be
labeled, so this patch introduces an infrastructure for that.  Some
functions currently labeled as safe may need to be revised depending on
how pending issues related to heavyweight locking under parallelism
are resolved.

Parallel plans can't be used except for the case where the query will
run to completion.  If portal execution were suspended, the parallel
mode restrictions would need to remain in effect during that time, but
that might make other queries fail.  Therefore, this patch introduces
a framework that enables consideration of parallel plans only when it
is known that the plan will be run to completion.  This probably needs
some refinement; for example, at bind time, we do not know whether a
query run via the extended protocol will be executed to completion or
run with a limited fetch count.  Having the client indicate its
intentions at bind time would constitute a wire protocol break.  Some
contexts in which parallel mode would be safe are not adjusted by this
patch; the default is not to try parallel plans except from call sites
that have been updated to say that such plans are OK.

This commit doesn't introduce any parallel paths or plans; it just
provides a way to determine whether they could potentially be used.
I'm committing it on the theory that the remaining parallel sequential
scan patches will also get committed to this release, hopefully in the
not-too-distant future.

Robert Haas and Amit Kapila.  Reviewed (in earlier versions) by Noah
Misch.
2015-09-16 15:38:47 -04:00
Peter Eisentraut 5878a377ba Review program help output for wording and formatting 2015-09-16 00:59:28 -04:00
Peter Eisentraut 000a21336b Fix whitespace 2015-09-15 15:20:13 -04:00
Teodor Sigaev 0f75928516 Fix wrong comment in commit d02426029b
Per gripe from Robert Haas
2015-09-15 09:33:22 +03:00
Teodor Sigaev d02426029b Check existence of table/schema for -t/-n option (pg_dump/pg_restore)
The patch provides a command line option --strict-names which requires that
at least one table/schema match each -t/-n option.

Pavel Stehule <pavel.stehule@gmail.com>
2015-09-14 16:19:49 +03:00
Bruce Momjian 7f8d090b89 pg_dump, pg_upgrade: allow postgres/template1 tablespace moves
Modify pg_dump to restore postgres/template1 databases to non-default
tablespaces by switching out of the database to be moved, then switching
back.

Also, to fix potential cases where the old/new tablespaces might not
match, fix pg_upgrade to process new/old tablespaces separately in all
cases.

Report by Marti Raudsepp

Patch by Marti Raudsepp, me

Backpatch through 9.0
2015-09-11 15:51:11 -04:00
Andres Freund a8015fe7f5 Use the correct type for TableInfo->relreplident.
relreplident was mistakenly stored as a bool.  That works today because c.h
typedefs bool to a char, but it isn't very future-proof.

Discussion: 20150812084351.GD8470@awork2.anarazel.de
Backpatch: 9.4 where replica identity was introduced.
2015-08-15 16:18:44 +02:00
Tom Lane b861678f50 Fix privilege dumping from servers too old to have that type of privilege.
pg_dump produced fairly silly GRANT/REVOKE commands when dumping types from
pre-9.2 servers, and when dumping functions or procedural languages from
pre-7.3 servers.  Those server versions lack the typacl, proacl, and/or
lanacl columns respectively, and pg_dump substituted default values that
were in fact incorrect.  We ended up revoking all the owner's own
privileges for the object while granting all privileges to PUBLIC.
Of course the owner would then have those privileges again via PUBLIC, so
long as she did not try to revoke PUBLIC's privileges; which may explain
the lack of field reports.  Nonetheless this is pretty silly behavior.

The stakes were raised by my recent patch to make pg_dump dump shell types,
because 9.2 and up pg_dump would proceed to emit bogus GRANT/REVOKE
commands for a shell type if dumping from a pre-9.2 server; and the server
will not accept GRANT/REVOKE commands for a shell type.  (Perhaps it
should, but that's a topic for another day.)  So the resulting dump script
wouldn't load without errors.

The right thing to do is to act as though these objects have default
privileges (null ACL entries), which causes pg_dump to print no
GRANT/REVOKE commands at all for them.  That fixes the silly results
and also dodges the problem with shell types.

In passing, modify getProcLangs() to be less creatively different about
how to handle missing columns when dumping from older server versions.
Every other data-acquisition function in pg_dump does that by substituting
appropriate default values in the version-specific SQL commands, and I see
no reason why this one should march to its own drummer.  Its use of
"SELECT *" was likewise not conformant with anyplace else, not to mention
it's not considered good SQL style for production queries.

Back-patch to all supported versions.  Although 9.0 and 9.1 pg_dump don't
have the issue with typacl, they are more likely than newer versions to be
used to dump from ancient servers, so we ought to fix the proacl/lanacl
issues all the way back.
2015-08-10 20:10:15 -04:00
Tom Lane 3bdd7f90fc Fix pg_dump to dump shell types.
Per discussion, it really ought to do this.  The original choice to
exclude shell types was probably made in the dark ages before we made
it harder to accidentally create shell types; but that was in 7.3.

Also, cause the standard regression tests to leave a shell type behind,
for convenience in testing the case in pg_dump and pg_upgrade.

Back-patch to all supported branches.
2015-08-04 19:34:12 -04:00
Joe Conway e0d4a290f4 Fix pg_dump output of policies.
pg_dump neglected to wrap parentheses around USING and WITH CHECK
expressions -- fixed.  Reported by Noah Misch.
2015-07-27 20:24:18 -07:00
Andrew Dunstan caef94d59f Restore use of zlib default compression in pg_dump directory mode.
This was broken by commit 0e7e355f27 and
friends, which overlooked the fact that gzopen() treats the "-1" in the
mode argument as an invalid character '-' (which it ignores) followed by
a flag selecting compression level 1. Now, when this value is encountered
no compression level flag is passed to gzopen, leaving it to use the zlib
default.
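
A sketch of the corrected mode-string handling (variable names are
illustrative):

    char    mode[8];

    if (compression >= 1 && compression <= 9)
        snprintf(mode, sizeof(mode), "wb%d", compression);  /* e.g. "wb9" */
    else
        snprintf(mode, sizeof(mode), "wb");     /* let zlib pick its default */

    fp = gzopen(path, mode);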

Also, enforce the documented allowed range for pg_dump's -Z option,
namely 0 .. 9, and remove some consequently dead code from
pg_backup_tar.c.

Problem reported by Marc Mamin.

Backpatch to 9.1, like the patch that introduced the bug.
2015-07-25 17:14:36 -04:00
Tom Lane bcc87b6b00 Fix assorted memory leaks.
Per Coverity (not that any of these are so non-obvious that they should not
have been caught before commit).  The extent of leakage is probably minor
to unnoticeable, but a leak is a leak.  Back-patch as necessary.

Michael Paquier
2015-07-12 16:26:08 -04:00
Tom Lane 5671aaca87 Improve pg_restore's -t switch to match all types of relations.
-t will now match views, foreign tables, materialized views, and sequences,
not only plain tables.  This is more useful, and also more consistent with
the behavior of pg_dump's -t switch, which has always matched all relation
types.

We're still not there on matching pg_dump's behavior entirely, so mention
that in the docs.

Craig Ringer, reviewed by Pavel Stehule
2015-07-02 18:13:34 -04:00
Heikki Linnakangas a3fd7afe30 Remove "const" from convertTSFunction()'s return type.
There's no particular reason to mark it as such. The other convert*
functions have no const either.
2015-07-02 21:11:17 +03:00
Heikki Linnakangas f712289ffa Plug some trivial memory leaks in pg_dump and pg_upgrade.
There's no point in trying to free every small allocation in these
programs that are used in a one-shot fashion, but these ones seem like
an improvement on readability grounds.

Michael Paquier, per Coverity report.
2015-07-02 20:58:51 +03:00
Heikki Linnakangas 7b156c1e07 Don't emit a spurious space at end of line in pg_dump of event triggers.
Backpatch to 9.3 and above, where event triggers were added.
2015-07-02 12:50:29 +03:00
Heikki Linnakangas f92d6a540a Use appendStringInfoString/Char et al where appropriate.
Patch by David Rowley. Backpatch to 9.5, as some of the calls were new in
9.5, and keeping the code in sync with master makes future backpatching
easier.
2015-07-02 12:36:03 +03:00
Peter Eisentraut c5e5d444de Translation updates
Source-Git-URL: git://git.postgresql.org/git/pgtranslation/messages.git
Source-Git-Hash: fb7e72f46cfafa1b5bfe4564d9686d63a1e6383f
2015-06-28 23:56:55 -04:00
Fujii Masao 232cd63b1f Remove -i/--ignore-version option from pg_dump, pg_dumpall and pg_restore.
The commit c22ed3d523 turned
the -i/--ignore-version options into no-ops and marked as deprecated.
Considering we shipped that in 8.4, it's time to remove all trace of
those switches, per discussion. We'd still have to wait a couple releases
before it'd be safe to use -i for something else, but it'd be a start.
2015-06-04 19:54:43 +09:00
Bruce Momjian 807b9e0dff pgindent run for 9.5 2015-05-23 21:35:49 -04:00
Heikki Linnakangas fa60fb63e5 Fix more typos in comments.
Patch by CharSyam, plus a few more I spotted with grep.
2015-05-20 19:45:43 +03:00
Heikki Linnakangas 4fc72cc7bb Collection of typo fixes.
Use "a" and "an" correctly, mostly in comments. Two error messages were
also fixed (they were just elogs, so no translation work required). Two
function comments in pg_proc.h were also fixed. Etsuro Fujita reported one
of these, but I found a lot more with grep.

Also fix a few other typos spotted while grepping for the a/an typos.
For example, "consists out of ..." -> "consists of ...". Plus a "though"/
"through" mixup reported by Euler Taveira.

Many of these typos were in old code, which would be nice to backpatch to
make future backpatching easier. But much of the code was new, and I didn't
feel like crafting separate patches for each branch. So no backpatching.
2015-05-20 16:56:22 +03:00
Bruce Momjian c71e273402 pg_dump: suppress "Tablespace:" comment for default tablespaces
Report by Hans Ginzel
2015-05-11 11:45:43 -04:00
Magnus Hagander aa7cf3eef4 Fix minor resource leak in pg_dump
Michael Paquier, spotted using Coverity
2015-05-07 11:41:13 +02:00
Peter Eisentraut cac7658205 Add transforms feature
This provides a mechanism for specifying conversions between SQL data
types and procedural languages.  As examples, there are transforms
for hstore and ltree for PL/Perl and PL/Python.

reviews by Pavel Stěhule and Andres Freund
2015-04-26 10:33:14 -04:00
Peter Eisentraut 30982be4e5 Integrate pg_upgrade_support module into backend
Previously, these functions were created in a schema "binary_upgrade",
which was deleted after pg_upgrade was finished.  Because we don't want
to keep that schema around permanently, move them to pg_catalog but
rename them with a binary_upgrade_... prefix.

The provided functions are only small wrappers around global variables
that were added specifically for pg_upgrade use, so keeping the module
separate does not create any modularity.

The functions still check that they are only called in binary upgrade
mode, so it is not possible to call these during normal operation.

Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
2015-04-14 19:26:37 -04:00
Tom Lane 785941cdc3 Tweak __attribute__-wrapping macros for better pgindent results.
This improves on commit bbfd7edae5 by
making two simple changes:

* pg_attribute_noreturn now takes parentheses, ie pg_attribute_noreturn().
Likewise pg_attribute_unused(), pg_attribute_packed().  This reduces
pgindent's tendency to misformat declarations involving them.

* attributes are now always attached to function declarations, not
definitions.  Previously some places were taking creative shortcuts,
which were not merely candidates for bad misformatting by pgindent
but often were outright wrong anyway.  (It does little good to put a
noreturn annotation where callers can't see it.)  In any case, if
we would like to believe that these macros can be used with non-gcc
compilers, we should avoid gratuitous variance in usage patterns.

I also went through and manually improved the formatting of a lot of
declarations, and got rid of excessively repetitive (and now obsolete
anyway) comments informing the reader what pg_attribute_printf is for.
2015-03-26 14:03:25 -04:00
Tom Lane cb1ca4d800 Allow foreign tables to participate in inheritance.
Foreign tables can now be inheritance children, or parents.  Much of the
system was already ready for this, but we had to fix a few things of
course, mostly in the area of planner and executor handling of row locks.

As side effects of this, allow foreign tables to have NOT VALID CHECK
constraints (and hence to accept ALTER ... VALIDATE CONSTRAINT), and to
accept ALTER SET STORAGE and ALTER SET WITH/WITHOUT OIDS.  Continuing to
disallow these things would've required bizarre and inconsistent special
cases in inheritance behavior.  Since foreign tables don't enforce CHECK
constraints anyway, a NOT VALID one is a complete no-op, but that doesn't
mean we shouldn't allow it.  And it's possible that some FDWs might have
use for SET STORAGE or SET WITH OIDS, though doubtless they will be no-ops
for most.

An additional change in support of this is that when a ModifyTable node
has multiple target tables, they will all now be explicitly identified
in EXPLAIN output, for example:

 Update on pt1  (cost=0.00..321.05 rows=3541 width=46)
   Update on pt1
   Foreign Update on ft1
   Foreign Update on ft2
   Update on child3
   ->  Seq Scan on pt1  (cost=0.00..0.00 rows=1 width=46)
   ->  Foreign Scan on ft1  (cost=100.00..148.03 rows=1170 width=46)
   ->  Foreign Scan on ft2  (cost=100.00..148.03 rows=1170 width=46)
   ->  Seq Scan on child3  (cost=0.00..25.00 rows=1200 width=46)

This was done mainly to provide an unambiguous place to attach "Remote SQL"
fields, but it is useful for inherited updates even when no foreign tables
are involved.

Shigeru Hanada and Etsuro Fujita, reviewed by Ashutosh Bapat and Kyotaro
Horiguchi, some additional hacking by me
2015-03-22 13:53:21 -04:00
Bruce Momjian 0c8fa710b6 C comment: clarify SQL command mention
Patch by Amit Langote
2015-03-20 18:30:30 -04:00
Andres Freund bbfd7edae5 Add macros wrapping all usage of gcc's __attribute__.
Until now __attribute__() was defined to be empty for all compilers but
gcc. That's problematic because it prevents using it in other compilers,
which is necessary e.g. for atomics portability.  It's also just
generally dubious to do so in a header as widely included as c.h.

Instead add pg_attribute_format_arg, pg_attribute_printf,
pg_attribute_noreturn macros which are implemented in the compilers that
understand them. Also add pg_attribute_noreturn and pg_attribute_packed,
but don't provide fallbacks, since they can affect functionality.

This means that external code that, possibly unwittingly, relied on
__attribute__ defined to be empty on !gcc compilers may now run into
warnings or errors on those compilers. But there shouldn't be many
occurrences of that, and it's hard to work around...

Discussion: 54B58BA3.8040302@ohmu.fi
Author: Oskari Saarenmaa, with some minor changes by me.
2015-03-11 14:30:01 +01:00
Tom Lane e3bfe6d84d Rethink function argument sorting in pg_dump.
Commit 7b583b20b1 created an unnecessary
dump failure hazard by applying pg_get_function_identity_arguments()
to every function in the database, even those that won't get dumped.
This could result in snapshot-related problems if concurrent sessions are,
for example, creating and dropping temporary functions, as noted by Marko
Tiikkaja in bug #12832.  While this is by no means pg_dump's only such
issue with concurrent DDL, it's unfortunate that we added a new failure
mode for cases that used to work, and even more so that the failure was
created for basically cosmetic reasons (ie, to sort overloaded functions
more deterministically).

To fix, revert that patch and instead sort function arguments using
information that pg_dump has available anyway, namely the names of the
argument types.  This will produce a slightly different sort ordering for
overloaded functions than the previous coding; but applying strcmp
directly to the output of pg_get_function_identity_arguments really was
a bit odd anyway.  The sorting will still be name-based and hence
independent of possibly-installation-specific OID assignments.  A small
additional benefit is that sorting now works regardless of server version.

Back-patch to 9.3, where the previous commit appeared.
2015-03-06 13:27:46 -05:00
Stephen Frost ebd092bc2a Fix pg_dump handling of extension config tables
Since 9.1, we've provided extensions with a way to denote
"configuration" tables- tables created by an extension which the user
may modify.  By marking these as "configuration" tables, the extension
is asking for the data in these tables to be pg_dump'd (tables which
are not marked in this way are assumed to be entirely handled during
CREATE EXTENSION and are not included at all in a pg_dump).

Unfortunately, pg_dump neglected to consider foreign key relationships
between extension configuration tables and therefore could end up
trying to reload the data in an order which would cause FK violations.

This patch teaches pg_dump about these dependencies, so that the data
dumped out is done so in the best order possible.  Note that there's no
way to handle circular dependencies, but those have yet to be seen in
the wild.

The release notes for this should include a caution to users that
existing pg_dump-based backups may be invalid due to this issue.  The
data is all there, but restoring from it will require extracting the
data for the configuration tables and then loading them in the correct
order by hand.

Discussed initially back in bug #6738, more recently brought up by
Gilles Darold, who provided an initial patch which was further reworked
by Michael Paquier.  Further modifications and documentation updates
by me.

Back-patch to 9.1 where we added the concept of extension configuration
tables.
2015-03-02 14:12:21 -05:00
Tom Lane 09d8d110a6 Use FLEXIBLE_ARRAY_MEMBER in a bunch more places.
Replace some bogus "x[1]" declarations with "x[FLEXIBLE_ARRAY_MEMBER]".
Aside from being more self-documenting, this should help prevent bogus
warnings from static code analyzers and perhaps compiler misoptimizations.

This patch is just a down payment on eliminating the whole problem, but
it gets rid of a lot of easy-to-fix cases.

Note that the main problem with doing this is that one must no longer rely
on computing sizeof(the containing struct), since the result would be
compiler-dependent.  Instead use offsetof(struct, lastfield).  Autoconf
also warns against spelling that offsetof(struct, lastfield[0]).
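
A small sketch of the before/after pattern (the struct here is made up for
illustration):

    typedef struct
    {
        int         nitems;
        ItemPointerData items[FLEXIBLE_ARRAY_MEMBER];   /* was items[1] */
    } ItemList;

    /* sizeof(ItemList) is no longer reliable; size the allocation like this */
    ItemList   *list = (ItemList *)
        palloc(offsetof(ItemList, items) + nitems * sizeof(ItemPointerData));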

Michael Paquier, review and additional fixes by me.
2015-02-20 00:11:42 -05:00
Tom Lane 297b2c1ef9 Fix placement of "SET row_security" command issuance in pg_dump.
Somebody apparently threw darts at the code to decide where to insert
these.  They certainly didn't proceed by adding them where other similar
SETs were handled.  This at least broke pg_restore, and perhaps other
use-cases too.
2015-02-18 12:23:40 -05:00
Tom Lane 0e7e355f27 Fix failure to honor -Z compression level option in pg_dump -Fd.
cfopen() and cfopen_write() failed to pass the compression level through
to zlib, so that you always got the default compression level if you got
any at all.

In passing, also fix these and related functions so that the correct errno
is reliably returned on failure; the original coding supposes that free()
cannot change errno, which is untrue on at least some platforms.

Per bug #12779 from Christoph Berg.  Back-patch to 9.1 where the faulty
code was introduced.

Michael Paquier
2015-02-18 11:43:00 -05:00
Bruce Momjian 866f3017a8 pg_upgrade: preserve freeze info for postgres/template1 dbs
pg_database.datfrozenxid and pg_database.datminmxid were not preserved
for the 'postgres' and 'template1' databases.  This could cause missing
clog file errors on access to user tables and indexes after upgrades in
these databases.

Backpatch through 9.0
2015-02-11 21:02:44 -05:00
Tom Lane 9179444d07 Fix more memory leaks in failure path in buildACLCommands.
We already had one go at this issue in commit d73b7f973d, but we
failed to notice that buildACLCommands also leaked several PQExpBuffers
along with a simply malloc'd string.  This time let's try to make the
fix a bit more future-proof by eliminating the separate exit path.

It's still not exactly critical because pg_dump will curl up and die on
failure; but since the amount of the potential leak is now several KB,
it seems worth back-patching as far as 9.2 where the previous fix landed.

Per Coverity, which evidently is smarter than clang's static analyzer.
2015-02-11 18:35:23 -05:00
Tom Lane 9feefedf9e Fix pg_dump's heuristic for deciding which casts to dump.
Back in 2003 we had a discussion about how to decide which casts to dump.
At the time pg_dump really only considered an object's containing schema
to decide what to dump (ie, dump whatever's not in pg_catalog), and so
we chose a complicated idea involving whether the underlying types were to
be dumped (cf commit a6790ce857).  But users
are allowed to create casts between built-in types, and we failed to dump
such casts.  Let's get rid of that heuristic, which has accreted even more
ugliness since then, in favor of just looking at the cast's OID to decide
if it's a built-in cast or not.
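
The new rule is essentially an OID range check, along these lines (a sketch;
the field names are assumptions):

    /*
     * OIDs below FirstNormalObjectId (16384) are assigned during initdb,
     * so anything at or above that is a user-defined cast and must be dumped.
     */
    if (cast->dobj.catId.oid >= (Oid) FirstNormalObjectId)
        cast->dobj.dump = true;
    else
        cast->dobj.dump = false;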

In passing, also fix some really ancient code that supposed that it had to
manufacture a dependency for the cast on its cast function; that's only
true when dumping from a pre-7.3 server.  This just resulted in some wasted
cycles and duplicate dependency-list entries with newer servers, but we
might as well improve it.

Per gripes from a number of people, most recently Greg Sabino Mullane.
Back-patch to all supported branches.
2015-02-10 22:38:15 -05:00
Peter Eisentraut f8948616c9 Translation updates
Source-Git-URL: git://git.postgresql.org/git/pgtranslation/messages.git
Source-Git-Hash: 19c72ea8d856d7b1d4f5d759a766c8206bf9ce53
2015-02-01 23:23:40 -05:00
Kevin Grittner cff1bd2a3c Allow pg_dump to use jobs and serializable transactions together.
Since 9.3, when the --jobs option was introduced, using it together
with the --serializable-deferrable option generated multiple
errors.  We can get correct behavior by allowing the connection
which acquires the snapshot to use SERIALIZABLE, READ ONLY,
DEFERRABLE and pass that to the workers running the other
connections using REPEATABLE READ, READ ONLY.  This is a bit of a
kluge since the SERIALIZABLE behavior is achieved by running some
of the participating connections at a different isolation level,
but it is a simple and safe change, suitable for back-patching.
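
Concretely, the connections end up issuing something like the following (a
simplified sketch of the SQL involved, both statements shown against the same
connection handle for brevity):

    /* leader: take the snapshot with full SERIALIZABLE guarantees */
    ExecuteSqlStatement(fout,
        "SET TRANSACTION ISOLATION LEVEL "
        "SERIALIZABLE, READ ONLY, DEFERRABLE");

    /* workers: attach to the exported snapshot at REPEATABLE READ */
    ExecuteSqlStatement(fout,
        "SET TRANSACTION ISOLATION LEVEL "
        "REPEATABLE READ, READ ONLY");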

This will be followed by a proposal for a more invasive fix with
some slight behavioral changes on just the master branch, based on
suggestions from Andres Freund, but the kluge will be applied to
master until something is agreed along those lines.

Back-patched to 9.3, where the --jobs option was added.

Based on report from Alexander Korotkov
2015-01-30 08:57:24 -06:00
Tom Lane fd496129d1 Clean up some mess in row-security patches.
Fix unsafe coding around PG_TRY in RelationBuildRowSecurity: can't change
a variable inside PG_TRY and then use it in PG_CATCH without marking it
"volatile".  In this case though it seems saner to avoid that by doing
a single assignment before entering the TRY block.

I started out just intending to fix that, but the more I looked at the
row-security code the more distressed I got.  This patch also fixes
incorrect construction of the RowSecurityPolicy cache entries (there was
not sufficient care taken to copy pass-by-ref data into the cache memory
context) and a whole bunch of sloppiness around the definition and use of
pg_policy.polcmd.  You can't use nulls in that column because initdb will
mark it NOT NULL --- and I see no particular reason why a null entry would
be a good idea anyway, so changing initdb's behavior is not the right
answer.  The internal value of '\0' wouldn't be suitable in a "char" column
either, so after a bit of thought I settled on using '*' to represent ALL.
Chasing those changes down also revealed that somebody wasn't paying
attention to what the underlying values of ACL_UPDATE_CHR etc really were,
and there was a great deal of lackadaisicalness in the catalogs.sgml
documentation for pg_policy and pg_policies too.

This doesn't pretend to be a complete code review for the row-security
stuff, it just fixes the things that were in my face while dealing with
the bugs in RelationBuildRowSecurity.
2015-01-24 16:16:22 -05:00
Tom Lane 586dd5d6a5 Replace a bunch more uses of strncpy() with safer coding.
strncpy() has a well-deserved reputation for being unsafe, so make an
effort to get rid of nearly all occurrences in HEAD.

A large fraction of the remaining uses were passing length less than or
equal to the known strlen() of the source, in which case no null-padding
can occur and the behavior is equivalent to memcpy(), though doubtless
slower and certainly harder to reason about.  So just use memcpy() in
these cases.

In other cases, use either StrNCpy() or strlcpy() as appropriate (depending
on whether padding to the full length of the destination buffer seems
useful).
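
In miniature (buffer size and names are illustrative):

    char    namebuf[64];

    /* risky: if src is 64 bytes or longer, namebuf is left unterminated */
    strncpy(namebuf, src, sizeof(namebuf));

    /* when src is known to fit, a plain copy is clearer */
    memcpy(namebuf, src, strlen(src) + 1);

    /* when truncation is possible, strlcpy always NUL-terminates */
    strlcpy(namebuf, src, sizeof(namebuf));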

I left a few strncpy() calls alone in the src/timezone/ code, to keep it
in sync with upstream (the IANA tzcode distribution).  There are also a
few such calls in ecpg that could possibly do with more analysis.

AFAICT, none of these changes are more than cosmetic, except for the four
occurrences in fe-secure-openssl.c, which are in fact buggy: an overlength
source leads to a non-null-terminated destination buffer and ensuing
misbehavior.  These don't seem like security issues, first because no stack
clobber is possible and second because if your values of sslcert etc are
coming from untrusted sources then you've got problems way worse than this.
Still, it's undesirable to have unpredictable behavior for overlength
inputs, so back-patch those four changes to all active branches.
2015-01-24 13:05:42 -05:00
Alvaro Herrera a179232047 vacuumdb: enable parallel mode
This mode allows vacuumdb to open several server connections to vacuum
or analyze several tables simultaneously.

Author: Dilip Kumar.  Some reworking by Álvaro Herrera
Reviewed by: Jeff Janes, Amit Kapila, Magnus Hagander, Andres Freund
2015-01-23 15:02:45 -03:00
Andres Freund 525b84c576 Fix use of already freed memory when dumping a database's security label.
pg_dump.c:dumpDatabase() called ArchiveEntry() with the results of a
query that had been PQclear()ed a couple of lines earlier.

Backpatch to 9.2, where security labels for shared objects were
introduced.
2015-01-18 16:04:10 +01:00
Tom Lane 44096f1c66 Fix portability breakage in pg_dump.
Commit 0eea8047bf introduced some overly
optimistic assumptions about what could be in a local struct variable's
initializer.  (This might in fact be valid code according to C99, but I've
got at least one pre-C99 compiler that falls over on those nonconstant
address expressions.)  There is no reason whatsoever for main()'s workspace
to not be static, so revert long_options[] to a static and make the
DumpOptions struct static as well.
2015-01-11 13:28:26 -05:00
Bruce Momjian 4baaf863ec Update copyright for 2015
Backpatch certain files through 9.0
2015-01-06 11:43:47 -05:00
Tom Lane adfc157dd9 Fix broken pg_dump code for dumping comments on event triggers.
This never worked, I think.  Per report from Marc Munro.

In passing, fix funny spacing in the COMMENT ON command as a result of
excess space in the "label" string.
2015-01-05 19:27:04 -05:00
Alvaro Herrera a609d96778 Revert "Use a bitmask to represent role attributes"
This reverts commit 1826987a46.

The overall design was deemed unacceptable, in discussion following the
previous commit message; we might find some parts of it still
salvageable, but I don't want to be on the hook for fixing it, so let's
wait until we have a new patch.
2014-12-23 15:35:49 -03:00
Alvaro Herrera 1826987a46 Use a bitmask to represent role attributes
The previous representation using a boolean column for each attribute
would not scale as well as we want to add further attributes.

Extra auxiliary functions are added to go along with this change, to
make up for the lost convenience of access offered by the old representation.

Catalog version bumped due to change in catalogs and the new functions.

Author: Adam Brightwell, minor tweaks by Álvaro
Reviewed by: Stephen Frost, Andres Freund, Álvaro Herrera
2014-12-23 10:22:09 -03:00
Alvaro Herrera 7eca575d1c get_object_address: separate domain constraints from table constraints
Apart from enabling comments on domain constraints, this enables a
future project to replicate object dropping to remote servers: with the
current mechanism there's no way to distinguish between the two types of
constraints, so there's no way to know what to drop.

Also added support for the domain constraint comments in psql's \dd and
pg_dump.

Catalog version bumped due to the change in ObjectType enum.
2014-12-23 09:06:44 -03:00
Peter Eisentraut ee3bec5e22 Translation updates 2014-12-15 00:25:35 -05:00
Tom Lane 06d5803ffa Fix assorted confusion between Oid and int32.
In passing, also make some debugging elog's in pgstat.c a bit more
consistently worded.

Back-patch as far as applicable (9.3 or 9.4; none of these mistakes are
really old).

Mark Dilger identified and patched the type violations; the message
rewordings are mine.
2014-12-11 15:41:15 -05:00
Stephen Frost 143b39c185 Rename pg_rowsecurity -> pg_policy and other fixes
As pointed out by Robert, we should really have named pg_rowsecurity
pg_policy, as the objects stored in that catalog are policies.  This
patch fixes that and updates the column names to start with 'pol' to
match the new catalog name.

The security consideration for COPY with row level security, also
pointed out by Robert, has also been addressed by remembering and
re-checking the OID of the relation initially referenced during COPY
processing, to make sure it hasn't changed under us by the time we
finish planning out the query which has been built.

Robert and Alvaro also commented on missing OCLASS and OBJECT entries
for POLICY (formerly ROWSECURITY or POLICY, depending) in various
places.  This patch fixes that too, which also happens to add the
ability to COMMENT on policies.

In passing, attempt to improve the consistency of messages, comments,
and documentation as well.  This removes various incarnations of
'row-security', 'row-level security', 'Row-security', etc, in favor
of 'policy', 'row level security' or 'row_security' as appropriate.

Happy Thanksgiving!
2014-11-27 01:15:57 -05:00
Tom Lane 8b13e5c6c0 Fix some bogus direct uses of realloc().
pg_dump/parallel.c was using realloc() directly with no error check.
While the odds of an actual failure here seem pretty low, Coverity
complains about it, so fix by using pg_realloc() instead.
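
The change is mechanical; roughly:

    /* before: a failed realloc returns NULL and the result is unchecked */
    ptr = realloc(ptr, newsize);

    /* after: pg_realloc() reports the out-of-memory error and exits */
    ptr = pg_realloc(ptr, newsize);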

While looking for other instances, I noticed a couple of places in
psql that hadn't gotten the memo about the availability of pg_realloc.
These aren't bugs, since they did have error checks, but verbosely
inconsistent code is not a good thing.

Back-patch as far as 9.3.  9.2 did not have pg_dump/parallel.c, nor
did it have pg_realloc available in all frontend code.
2014-11-18 13:28:06 -05:00
Simon Riggs be1cc8f46f Add pg_dump --snapshot option
Allows pg_dump to use a snapshot previously defined by a concurrent
session that has either used pg_export_snapshot() or obtained a
snapshot when creating a logical slot. When this option is used with
parallel pg_dump, the snapshot defined by this option is used and no
new snapshot is taken.

Simon Riggs and Michael Paquier
2014-11-17 22:15:07 +00:00
Peter Eisentraut 7466a1b75f Translation updates 2014-11-16 21:32:51 -05:00
Tom Lane be09ceb218 Fix pg_dumpall to restore its ability to dump from ancient servers.
Fix breakage induced by commits d8d3d2a4f3
and 463f2625a5fb183b6a8925ccde98bb3889f921d9: pg_dumpall has crashed when
attempting to dump from pre-8.1 servers since then, due to faulty
construction of the query used for dumping roles from older servers.
The query was erroneous as of the earlier commit, but it wasn't exposed
unless you tried to use --binary-upgrade, which you presumably wouldn't
with a pre-8.1 server.  However commit 463f2625a made it fail always.

In HEAD, also fix additional breakage induced in the same query by
commit 491c029dbc, which evidently wasn't
tested against pre-8.1 servers either.

The bug is only latent in 9.1 because 463f2625a hadn't landed yet, but
it seems best to back-patch all branches containing the faulty query.

Gilles Darold
2014-11-13 18:19:26 -05:00
Tom Lane f455fcfdb8 Avoid unportable strftime() behavior in pg_dump/pg_dumpall.
Commit ad5d46a449 thought that we could
get around the known portability issues of strftime's %Z specifier by
using %z instead.  However, that idea seems to have been innocent of
any actual research, as it certainly missed the facts that
(1) %z is not portable to pre-C99 systems, and
(2) %z doesn't actually act differently from %Z on Windows anyway.

Per failures on buildfarm member hamerkop.

While at it, centralize the code defining what strftime format we
want to use in pg_dump; three copies of that string seems a bit much.
2014-10-26 20:59:21 -04:00
Tom Lane 5c38a1d4ec Fix core dump in pg_dump --binary-upgrade on zero-column composite type.
This reverts nearly all of commit 28f6cab61a
in favor of just using the typrelid we already have in pg_dump's TypeInfo
struct for the composite type.  As coded, it'd crash if the composite type
had no attributes, since then the query would return no rows.

Back-patch to all supported versions.  It seems to not really be a problem
in 9.0 because that version rejects the syntax "create type t as ()", but
we might as well keep the logic similar in all affected branches.

Report and fix by Rushabh Lathia.
2014-10-17 12:49:00 -04:00
Tom Lane 7584649a1c Re-pgindent src/bin/pg_dump/*.
Seems to have gotten rather messy lately, as a consequence of a couple
of large recent commits.
2014-10-17 12:19:05 -04:00
Stephen Frost 389573fd19 Fix pg_dump for UPDATE policies
pg_dump had the wrong character for update and so was failing when
attempts were made to pg_dump databases with UPDATE policies.

Pointed out by Fujii Masao (thanks!)
2014-10-17 08:07:46 -04:00
Alvaro Herrera 076d29a1ee Blind attempt at fixing Win32 pg_dump issues
Per buildfarm failures
2014-10-14 17:33:36 -03:00
Alvaro Herrera 0eea8047bf pg_dump: Reduce use of global variables
Most pg_dump.c global variables, which were passed down individually to
dumping routines, are now grouped as members of the new DumpOptions
struct, which is used as a local variable and passed down into routines
that need it.  This helps future development efforts; in particular it
is said to enable a mode in which a parallel pg_dump run can output
multiple streams, and have them restored in parallel.

Also take the opportunity to clean up the pg_dump header files somewhat,
to avoid circularity.

Author: Joachim Wieland, revised by Álvaro Herrera
Reviewed by Peter Eisentraut
2014-10-14 15:00:55 -03:00
Peter Eisentraut 1ec4a970fe Translation updates 2014-10-05 23:23:50 -04:00
Stephen Frost 78d72563ef Fix CreatePolicy, pg_dump -v; psql and doc updates
Peter G pointed out that valgrind was, rightfully, complaining about
CreatePolicy() ending up copying beyond the end of the parsed policy
name.  Name is a fixed-size type and we need to use namein (through
DirectFunctionCall1()) to flush out the entire array before we pass
it down to heap_form_tuple.
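
The fix uses the Name input function so the full fixed-size array gets
initialized; schematically (the statement field and the attribute constant
below are stand-ins, not the real identifiers):

    /* namein pads/validates the C string into a full Name value */
    values[Anum_polname - 1] =              /* Anum_polname is a stand-in */
        DirectFunctionCall1(namein, CStringGetDatum(stmt->policy_name));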

Michael Paquier pointed out that pg_dump --verbose was missing a
newline and Fabrízio de Royes Mello further pointed out that the
schema was also missing from the messages, so fix those also.

Also, based on an off-list comment from Kevin, rework the psql \d
output to facilitate copy/pasting into a new CREATE or ALTER POLICY
command.

Lastly, improve the pg_policies view and update the documentation for
it, along with a few other minor doc corrections based on an off-list
discussion with Adam Brightwell.
2014-10-03 16:31:53 -04:00
Alvaro Herrera fd02931a6c Fix pg_dump's --if-exists for large objects
This was born broken in 9067310cc5.

Per trouble report from Joachim Wieland.

Pavel Stěhule and Álvaro Herrera
2014-09-30 12:06:37 -03:00
Stephen Frost ff27fcfa0a Fix relcache for policies, and doc updates
Andres pointed out that there was an extra ';' in equalPolicies, which
made me realize that my prior testing with CLOBBER_CACHE_ALWAYS was
insufficient (it didn't always catch the issue, just most of the time).
Thanks to that, a different issue was discovered, specifically in
equalRSDescs.  This change corrects equalRSDescs to return 'true' once
all policies have been confirmed logically identical.  After stepping
through both functions to ensure correct behavior, I ran this for
about 12 hours of CLOBBER_CACHE_ALWAYS runs of the regression tests
with no failures.

In addition, correct a few typos in the documentation which were pointed
out by Thom Brown (thanks!) and improve the policy documentation further
by adding a fleshed-out usage example based on a Unix passwd file.

Lastly, clean up a few comments in the regression tests and pg_dump.h.
2014-09-26 12:46:26 -04:00
Robert Haas 07d46a8963 Fix identify_locking_dependencies for schema-only dumps.
Without this fix, parallel restore of a schema-only dump can deadlock,
because when the dump is schema-only, the dependency will still be
pointing at the TABLE item rather than the TABLE DATA item.

Robert Haas and Tom Lane
2014-09-26 11:21:35 -04:00
Stephen Frost 6550b901fe Code review for row security.
Buildfarm member tick identified an issue where the policies in the
relcache for a relation were being replaced underneath a running
query, leading to segfaults while processing the policies to be added
to a query.  Similar to how TupleDesc RuleLocks are handled, add in an
equalRSDesc() function to check if the policies have actually changed
and, if not, swap back the rsdesc field (using the original instead of
the temporarily built one; the whole structure is swapped and then
specific fields swapped back).  This now passes a CLOBBER_CACHE_ALWAYS
for me and should resolve the buildfarm error.

In addition to addressing this, add a new chapter in Data Definition
under Privileges which explains row security and provides examples of
its usage, change \d to always list policies (even if row security is
disabled, but note that it is disabled, or enabled with no policies),
rework check_role_for_policy (it really didn't need the entire policy,
but it did need to be using has_privs_of_role()), and change the field
in pg_class to relrowsecurity from relhasrowsecurity, based on
Heikki's suggestion.  Also from Heikki, only issue SET ROW_SECURITY in
pg_restore when talking to a 9.5+ server, list Bypass RLS in \du, and
document --enable-row-security options for pg_dump and pg_restore.

Lastly, fix a number of minor whitespace and typo issues from Heikki,
Dimitri, add a missing #include, per Peter E, fix a few minor
variable-assigned-but-not-used and resource leak issues from Coverity
and add tab completion for role attribute bypassrls as well.
2014-09-24 16:32:22 -04:00
Stephen Frost 491c029dbc Row-Level Security Policies (RLS)
Building on the updatable security-barrier views work, add the
ability to define policies on tables to limit the set of rows
which are returned from a query and which are allowed to be added
to a table.  Expressions defined by the policy for filtering are
added to the security barrier quals of the query, while expressions
defined to check records being added to a table are added to the
with-check options of the query.

New top-level commands are CREATE/ALTER/DROP POLICY and are
controlled by the table owner.  Row Security is able to be enabled
and disabled by the owner on a per-table basis using
ALTER TABLE .. ENABLE/DISABLE ROW SECURITY.

Per discussion, ROW SECURITY is disabled on tables by default and
must be enabled for policies on the table to be used.  If no
policies exist on a table with ROW SECURITY enabled, a default-deny
policy is used and no records will be visible.

By default, row security is applied at all times except for the
table owner and the superuser.  A new GUC, row_security, is added
which can be set to ON, OFF, or FORCE.  When set to FORCE, row
security will be applied even for the table owner and superusers.
When set to OFF, row security will be disabled when allowed and an
error will be thrown if the user does not have rights to bypass row
security.

Per discussion, pg_dump sets row_security = OFF by default to ensure
that exports and backups will have all data in the table or will
error if there are insufficient privileges to bypass row security.
A new option has been added to pg_dump, --enable-row-security, to
ask pg_dump to export with row security enabled.

A new role capability, BYPASSRLS, which can only be set by the
superuser, is added to allow other users to be able to bypass row
security using row_security = OFF.

Many thanks to the various individuals who have helped with the
design, particularly Robert Haas for his feedback.

Authors include Craig Ringer, KaiGai Kohei, Adam Brightwell, Dean
Rasheed, with additional changes and rework by me.

Reviewers have included all of the above, Greg Smith,
Jeff McCormick, and Robert Haas.
2014-09-19 11:18:35 -04:00
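
A minimal sketch of the commands described above (table, policy, and column
names are illustrative):

    CREATE TABLE accounts (id int, owner text, balance numeric);
    ALTER TABLE accounts ENABLE ROW SECURITY;

    -- rows returned by queries are filtered by USING; rows being added are
    -- validated by WITH CHECK
    CREATE POLICY account_owner ON accounts
        USING (owner = current_user)
        WITH CHECK (owner = current_user);

    -- pg_dump runs with SET row_security = off by default; pass
    -- --enable-row-security to dump only the rows visible under the policies
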
Bruce Momjian ad5d46a449 Report timezone offset in pg_dump/pg_dumpall
Use consistent format for all such displays.

Report by Gavin Flower
2014-09-05 19:22:31 -04:00
Heikki Linnakangas 2bde29739d Show schema names in pg_dump verbose output.
Fabrízio de Royes Mello, reviewed by Michael Paquier
2014-08-26 11:50:48 +03:00
Peter Eisentraut 256bfb2c9a doc: Improve pg_restore help output
Add a note that some options can be specified multiple times to select
multiple objects to restore.  This replaces the somewhat confusing use
of plurals in the option descriptions themselves.
2014-08-23 00:25:28 -04:00
Peter Eisentraut f25e0bf5e0 Small message fixes 2014-08-09 00:07:00 -04:00
Tom Lane c8e2e0e712 Fix a performance problem in pg_dump's dump order selection logic.
findDependencyLoops() was not bright about cases where there are multiple
dependency paths between the same two dumpable objects.  In most scenarios
this did not hurt us too badly; but since the introduction of section
boundary pseudo-objects in commit a1ef01fe16,
it was possible for this code to take unreasonable amounts of time (tens
of seconds on a database with a couple thousand objects), as reported in
bug #11033 from Joe Van Dyk.  Joe's particular problem scenario involved
"pg_dump -a" mode with long chains of foreign key constraints, but I think
that similar problems could arise with other situations as long as there
were enough objects.  To fix, add a flag array that lets us notice when we
arrive at the same object again while searching from a given start object.
This simple change seems to be enough to eliminate the performance problem.

Back-patch to 9.1, like the patch that introduced section boundary objects.
2014-07-25 19:48:42 -04:00
Peter Eisentraut cac0d5193b Translation updates 2014-07-21 01:08:04 -04:00
Peter Eisentraut f9ddcf7543 Add missing source files to nls.mk
These are files under common/ that have been moved around.  Updating
these manually is not satisfactory, but it's the only solution at the
moment.
2014-07-15 10:10:42 -04:00
Tom Lane 7700597b34 In pg_dump, show server and pg_dump versions with or without --verbose.
We used to print this information only in verbose mode, but it's argued
that it's useful enough to print always; one reason being that this
provides some documentation about which Postgres versions the dump is
meant to reload into.

Jing Wang, reviewed by Jeevan Chalke
2014-07-07 19:02:45 -04:00
Bruce Momjian a61daa14d5 pg_upgrade: preserve database and relation minmxid values
Also set these values for pre-9.3 old clusters that don't have values to
preserve.

Analysis by Alvaro

Backpatch through 9.3
2014-07-02 15:29:38 -04:00
Tom Lane fbb1d7d73f Allow CREATE/ALTER DATABASE to manipulate datistemplate and datallowconn.
Historically these database properties could be manipulated only by
manually updating pg_database, which is error-prone and only possible for
superusers.  But there seems no good reason not to allow database owners to
set them for their databases, so invent CREATE/ALTER DATABASE options to do
that.  Adjust a couple of places that were doing it the hard way to use the
commands instead.

Vik Fearing, reviewed by Pavel Stehule
2014-07-01 20:10:38 -04:00
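
A sketch of the new syntax (the database name is illustrative); previously
these required a manual UPDATE of pg_database:

    ALTER DATABASE app_template IS_TEMPLATE true;         -- pg_database.datistemplate
    ALTER DATABASE app_template ALLOW_CONNECTIONS false;  -- pg_database.datallowconn
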
Bruce Momjian ac608fe758 Use type pgsocket for Windows pipe emulation socket calls
This prevents several compiler warnings on Windows.
2014-06-16 15:24:38 -04:00
Tom Lane c81e63d85f Fix pg_restore's processing of old-style BLOB COMMENTS data.
Prior to 9.0, pg_dump handled comments on large objects by dumping a bunch
of COMMENT commands into a single BLOB COMMENTS archive object.  With
sufficiently many such comments, some of the commands would likely get
split across bufferloads when restoring, causing failures in
direct-to-database restores (though no problem would be evident in text
output).  This is the same type of issue we have with table data dumped as
INSERT commands, and it can be fixed in the same way, by using a mini SQL
lexer to figure out where the command boundaries are.  Fortunately, the
COMMENT commands are no more complex to lex than INSERTs, so we can just
re-use the existing lexer for INSERTs.

Per bug #10611 from Jacek Zalewski.  Back-patch to all active branches.
2014-06-12 20:14:32 -04:00
Noah Misch d098b236f3 Fix typos in comments. 2014-06-11 19:50:29 -04:00
Peter Eisentraut e136271a94 Translation updates 2014-05-10 22:16:59 -04:00
Bruce Momjian 4335c95815 Fix improperly passed file descriptors
Fix for commit 14ea89366f

Report by Andres Freund
2014-05-06 12:20:51 -04:00
Bruce Momjian 0a78320057 pgindent run for 9.4
This includes removing tabs after periods in C comments, which was
applied to back branches, so this change should not affect backpatching.
2014-05-06 12:12:18 -04:00
Bruce Momjian 55d5ff825f Fix detection of short tar files, broken by commit 14ea89366f
Report by Noah Misch
2014-05-06 10:01:20 -04:00
Bruce Momjian 14ea89366f Properly detect read and write errors in pg_dump/dumpall, and pg_restore
Previously some I/O errors were ignored.
2014-05-05 20:27:16 -04:00
Tom Lane e03485ae8a Fix case of pg_dump -Fc to an unseekable file (such as a pipe).
This was accidentally broken in commits cfa1b4a711/5e8e794e3b.
It saves a line or so to call ftello unconditionally in _CloseArchive,
but we have to expect that it might fail if we're not in hasSeek mode.
Per report from Bernd Helmle.

In passing, improve _getFilePos to print an appropriate message if
ftello fails unexpectedly, rather than just a vague complaint about
"ftell mismatch".
2014-05-05 11:26:41 -04:00
Heikki Linnakangas a692ee5870 Replace SYSTEMQUOTEs with Windows-specific wrapper functions.
It's easy to forget using SYSTEMQUOTEs when constructing command strings
for system() or popen(). Even if we fix all the places missing it now, it is
bound to be forgotten again in the future. Introduce wrapper functions that
do the extra quoting for you, and get rid of SYSTEMQUOTEs in all the
callers.

We previously used SYSTEMQUOTEs in all the hard-coded command strings, and
this doesn't change the behavior of those. But user-supplied commands, like
archive_command, restore_command, COPY TO/FROM PROGRAM calls, as well as
pgbench's \shell, will now gain an extra pair of quotes. That is desirable,
but if you have existing scripts or config files that include an extra
pair of quotes, those might need to be adjusted.

Reviewed by Amit Kapila and Tom Lane
2014-05-05 16:07:40 +03:00
Tom Lane f0fedfe82c Allow polymorphic aggregates to have non-polymorphic state data types.
Before 9.4, such an aggregate couldn't be declared, because its final
function would have to have polymorphic result type but no polymorphic
argument, which CREATE FUNCTION would quite properly reject.  The
ordered-set-aggregate patch found a workaround: allow the final function
to be declared as accepting additional dummy arguments that have types
matching the aggregate's regular input arguments.  However, we failed
to notice that this problem applies just as much to regular aggregates,
despite the fact that we had a built-in regular aggregate array_agg()
that was known to be undeclarable in SQL because its final function
had an illegal signature.  So what we should have done, and what this
patch does, is to decouple the extra-dummy-arguments behavior from
ordered-set aggregates and make it generally available for all aggregate
declarations.  We have to put this into 9.4 rather than waiting till
later because it slightly alters the rules for declaring ordered-set
aggregates.

The patch turned out a bit bigger than I'd hoped because it proved
necessary to record the extra-arguments option in a new pg_aggregate
column.  I'd thought we could just look at the final function's pronargs
at runtime, but that didn't work well for variadic final functions.
It's probably just as well though, because it simplifies life for pg_dump
to record the option explicitly.

While at it, fix array_agg() to have a valid final-function signature,
and add an opr_sanity test to notice future deviations from polymorphic
consistency.  I also marked the percentile_cont() aggregates as not
needing extra arguments, since they don't.
2014-04-23 19:17:41 -04:00
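
A hedged sketch of the now-permitted declaration, reusing the built-in
array_agg support functions (the aggregate name is illustrative; creating an
aggregate over an internal state type may require superuser):

    CREATE AGGREGATE my_array_agg(anynonarray) (
        SFUNC = array_agg_transfn,
        STYPE = internal,              -- non-polymorphic state type
        FINALFUNC = array_agg_finalfn,
        FINALFUNC_EXTRA                -- final function receives an extra dummy
                                       -- argument of the aggregate's input type,
                                       -- making its polymorphic signature legal
    );
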
Tom Lane cad4fe6455 Use AF_UNSPEC not PF_UNSPEC in getaddrinfo calls.
According to the Single Unix Spec and assorted man pages, you're supposed
to use the constants named AF_xxx when setting ai_family for a getaddrinfo
call.  In a few places we were using PF_xxx instead.  Use of PF_xxx
appears to be an ancient BSD convention that was not adopted by later
standardization.  On BSD and most later Unixen, it doesn't matter much
because those constants have equivalent values anyway; but nonetheless
this code is not per spec.

In the same vein, replace PF_INET by AF_INET in one socket() call, which
wasn't even consistent with the other socket() call in the same function
let alone the remainder of our code.

Per investigation of a Cygwin trouble report from Marco Atzeri.  It's
probably a long shot that this will fix his issue, but it's wrong in
any case.
2014-04-16 13:21:20 -04:00
Tom Lane a9d9acbf21 Create infrastructure for moving-aggregate optimization.
Until now, when executing an aggregate function as a window function
within a window with moving frame start (that is, any frame start mode
except UNBOUNDED PRECEDING), we had to recalculate the aggregate from
scratch each time the frame head moved.  This patch allows an aggregate
definition to include an alternate "moving aggregate" implementation
that includes an inverse transition function for removing rows from
the aggregate's running state.  As long as this can be done successfully,
runtime is proportional to the total number of input rows, rather than
to the number of input rows times the average frame length.

This commit includes the core infrastructure, documentation, and regression
tests using user-defined aggregates.  Follow-on commits will update some
of the built-in aggregates to use this feature.

David Rowley and Florian Pflug, reviewed by Dean Rasheed; additional
hacking by me
2014-04-12 12:03:30 -04:00
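
A minimal example of the new CREATE AGGREGATE options, using built-in int4
arithmetic as the forward and inverse transition functions (the aggregate
name is illustrative):

    CREATE AGGREGATE running_sum(int4) (
        SFUNC    = int4pl,   -- ordinary aggregation
        STYPE    = int4,
        MSFUNC   = int4pl,   -- moving-aggregate forward transition
        MINVFUNC = int4mi,   -- inverse transition: remove a row from the running state
        MSTYPE   = int4
    );
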
Robert Haas 59202fae04 Fix some compiler warnings that clang emits with -pedantic.
Andres Freund
2014-04-04 11:29:50 -04:00
Tom Lane 62215de292 Fix dumping of a materialized view that depends on a table's primary key.
It is possible for a view or materialized view to depend on a table's
primary key, if the view query relies on functional dependency to
abbreviate a GROUP BY list.  This is problematic for pg_dump since we
ordinarily want to dump view definitions in the pre-data section but
indexes in post-data.  pg_dump knows how to deal with this situation for
regular views, by breaking the view's ON SELECT rule apart from the view
proper.  But it had not been taught what to do about materialized views,
and in fact mistakenly dumped them as regular views in such cases, as
seen in bug #9616 from Jesse Denardo.

If we had CREATE OR REPLACE MATERIALIZED VIEW, we could fix this in a
manner analogous to what's done for regular views; but we don't yet,
and we'd not back-patch such a thing into 9.3 anyway.  As a hopefully-
temporary workaround, break the circularity by postponing the matview
into post-data altogether when this case occurs.
2014-03-29 17:34:00 -04:00
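
The problematic shape, sketched with illustrative names: the view's GROUP BY
relies on functional dependency on the primary key, so the matview depends on
the table's primary-key index:

    CREATE TABLE orders (id int PRIMARY KEY, customer text, amount numeric);

    CREATE MATERIALIZED VIEW order_totals AS
        SELECT id, customer, sum(amount) AS total
        FROM orders
        GROUP BY id;   -- customer may be selected because id is the primary key
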
Bruce Momjian 1494931d73 Remove MinGW readdir/errno bug workaround fixed on 2003-10-10 2014-03-21 13:47:37 -04:00
Bruce Momjian 6f03927fce Properly check for readdir/closedir() failures
Clear errno before calling readdir() and handle old MinGW errno bug
while adding full test coverage for readdir/closedir failures.

Backpatch through 8.4.
2014-03-21 13:45:11 -04:00
Tom Lane 19f2d6cdae Fix pg_dumpall option parsing: -i doesn't take an argument.
This used to work properly, but got fat-fingered in commit
3dee636e04.  Per bug #9620 from
Nicolas Payart.
2014-03-18 10:38:25 -04:00
Simon Riggs 77049443a1 Correct copy/pasto in comment for REPLICA IDENTITY 2014-03-09 09:05:16 +00:00
Bruce Momjian b44fc39fce pg_dump: make argument combination error exit code consistent
Per report from Pavel Golub
2014-03-05 18:15:49 -05:00
Tom Lane 114b26c06f Remove unused field "evttype".
Apparent oversight in commit 3855968f.
2014-03-05 11:57:53 -05:00
Alvaro Herrera 9067310cc5 pg_dump et al: Add --if-exists option
This option makes pg_dump, pg_dumpall and pg_restore inject an IF EXISTS
clause to each DROP command they emit.  (In pg_dumpall, the clause is
not added to individual object drops, but rather to the CREATE DATABASE
commands, as well as CREATE ROLE and CREATE TABLESPACE.)

This allows for a better user dump experience when using --clean in case
some objects do not already exist.  Per bug #7873 by Dave Rolsky.

Author: Pavel Stěhule
Reviewed-by: Jeevan Chalke, Álvaro Herrera, Josh Kupershmidt
2014-03-03 15:02:18 -03:00
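
With --clean --if-exists, the emitted DROP commands look along these lines
(object names are illustrative):

    DROP TABLE IF EXISTS public.accounts;
    DROP SCHEMA IF EXISTS app;
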
Stephen Frost b1aebbb6a8 Various Coverity-spotted fixes
A number of issues were identified by the Coverity scanner and are
addressed in this patch.  None of these appear to be security issues
and many are mostly cosmetic changes.

Short comments for each of the changes follows.

Correct the semi-colon placement in be-secure.c regarding SSL retries.
Remove a useless comparison-to-NULL in proc.c (value is dereferenced
  prior to this check and therefore can't be NULL).
Add checking of chmod() return values to initdb.
Fix a couple minor memory leaks in initdb.
Fix memory leak in pg_ctl; involves freeing the config file contents.
Use an int to capture fgetc() return instead of an enum in pg_dump.
Fix minor memory leaks in pg_dump.
  (note minor change to convertOperatorReference()'s API)
Check fclose()/remove() return codes in psql.
Check fstat(), find_my_exec() return codes in psql.
Various ECPG memory leak fixes.
Check find_my_exec() return in ECPG.
Explicitly ignore pqFlush return in libpq error-path.
Change PQfnumber() to avoid doing an strdup() when no changes required.
Remove a few useless check-against-NULL's (value deref'd beforehand).
Check rmtree(), malloc() results in pg_regress.
Also check get_alternative_expectfile() return in pg_regress.
2014-03-01 22:14:14 -05:00
Bruce Momjian d613861b95 pg_dump: fix subtle memory leak in func and arg signature processing 2014-02-24 12:32:41 -05:00
Tom Lane 60ff2fdd99 Centralize getopt-related declarations in a new header file pg_getopt.h.
We used to have externs for getopt() and its API variables scattered
all over the place.  Now that we find we're going to need to tweak the
variable declarations for Cygwin, it seems like a good idea to have
just one place to tweak.

In this commit, the variables are declared "#ifndef HAVE_GETOPT_H".
That may or may not work everywhere, but we'll soon find out.

Andres Freund
2014-02-15 14:31:30 -05:00
Bruce Momjian 32be1c8e90 Remove use of sscanf in pg_upgrade, and add C comment to pg_dump
Per report from Jackie Chang
2014-02-15 11:50:56 -05:00
Stephen Frost dfb1e9bdc0 Further pg_dump / ftello improvements
Make ftello error-checking consistent to all calls and remove a
bit of ftello-related code which has been #if 0'd out since 2001.

Note that we are not concerned with the ftello() call under
snprintf() failing as it is just building a string to call
exit_horribly() with; printing -1 in such a case is fine.
2014-02-09 18:28:14 -05:00
Stephen Frost 5e8e794e3b Focus on ftello result < 0 instead of errno
Rather than reset errno (or just hope that its cleared already),
check just the result of the ftello for < 0 to determine if there
was an issue.

Oversight by me, pointed out by Tom.
2014-02-09 13:29:36 -05:00
Stephen Frost cfa1b4a711 Minor pg_dump improvements
Improve pg_dump by checking results on various fgetc() calls which
previously were unchecked, ditto for ftello.  Also clean up a couple
of very minor memory leaks by waiting to allocate structures until
after the initial check(s).

Issues spotted by Coverity.
2014-02-08 21:25:47 -05:00
Bruce Momjian 5168c76964 pg_restore: make help output plural for multi-enabled options
per report from Josh Kupershmidt
2014-01-31 22:29:01 -05:00
Stephen Frost 152d24f5dd Fix minor leak in pg_dump
Move allocation to after we check the remote server version, to avoid
a possible, very minor, memory leak.  This makes us more consistent
throughout as most places in pg_dump are done in the same way (due, in
part, to previous fixes like this).

Spotted by the Coverity scanner.
2014-01-26 17:58:48 -05:00
Stephen Frost 6794a9f9a1 Avoid minor leak in parallel pg_dump
During parallel pg_dump, a worker process closing the connection caused
a minor memory leak (particularly minor as we are likely about to exit
anyway).  Instead, free the memory in this case prior to returning NULL
to indicate connection closed.

Spotted by the Coverity scanner.

Back patch to 9.3 where this was introduced.
2014-01-24 15:10:08 -05:00
Bruce Momjian bb953ad164 Fix pg_dumpall on pre-8.1 servers
rolname did not exist in pg_shadow.

Backpatch to 9.3

Report by Andrew Gierth via IRC
2014-01-12 22:25:36 -05:00
Bruce Momjian 7e04792a1c Update copyright for 2014
Update all files in head, and files COPYRIGHT and legal.sgml in all back
branches.
2014-01-07 16:05:30 -05:00
Heikki Linnakangas 10a82cda67 Remove bogus -K option from pg_dump.
I added it to the getopt call by accident in commit
691e595dd9.

Amit Kapila
2014-01-06 12:30:19 +02:00
Tom Lane c01bc51f8d Fix broken support for event triggers as extension members.
CREATE EVENT TRIGGER forgot to mark the event trigger as a member of its
extension, and pg_dump didn't pay any attention anyway when deciding
whether to dump the event trigger.  Per report from Moshe Jacobson.

Given the obvious lack of testing here, it's rather astonishing that
ALTER EXTENSION ADD/DROP EVENT TRIGGER work, but they seem to.
2013-12-30 14:00:02 -05:00
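
The membership commands in question, with illustrative extension and trigger
names:

    ALTER EXTENSION my_extension ADD EVENT TRIGGER log_ddl;
    ALTER EXTENSION my_extension DROP EVENT TRIGGER log_ddl;
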
Kevin Grittner 47f50262e7 Don't attempt to limit target database for pg_restore.
There was an apparent attempt to limit the target database for
pg_restore to version 7.1.0 or later.  Due to a leading zero this
was interpreted as an octal number, which allowed targets with
version numbers down to 2.87.36.  The lowest actual release above
that was 6.0.0, so that was effectively the limit.

Since the success of the restore attempt will depend primarily
on what statements were generated by the dump run, we don't want
pg_restore trying to guess whether a given target should be allowed
based on version number.  Allow a connection to any version.  Since
it is very unlikely that anyone would be using a recent version of
pg_restore to restore to a pre-6.0 database, this has little to no
practical impact, but it makes the code less confusing to read.

Issue reported and initial patch suggestion from Joel Jacobson
based on an article by Andrey Karpov reporting on issues found by
PVS-Studio static code analyzer.  Final patch based on analysis by
Tom Lane.  Back-patch to all supported branches.
2013-12-29 15:17:52 -06:00
Tom Lane 8d65da1f01 Support ordered-set (WITHIN GROUP) aggregates.
This patch introduces generic support for ordered-set and hypothetical-set
aggregate functions, as well as implementations of the instances defined in
SQL:2008 (percentile_cont(), percentile_disc(), rank(), dense_rank(),
percent_rank(), cume_dist()).  We also added mode() though it is not in the
spec, as well as versions of percentile_cont() and percentile_disc() that
can compute multiple percentile values in one pass over the data.

Unlike the original submission, this patch puts full control of the sorting
process in the hands of the aggregate's support functions.  To allow the
support functions to find out how they're supposed to sort, a new API
function AggGetAggref() is added to nodeAgg.c.  This allows retrieval of
the aggregate call's Aggref node, which may have other uses beyond the
immediate need.  There is also support for ordered-set aggregates to
install cleanup callback functions, so that they can be sure that
infrastructure such as tuplesort objects gets cleaned up.

In passing, make some fixes in the recently-added support for variadic
aggregates, and make some editorial adjustments in the recent FILTER
additions for aggregates.  Also, simplify use of IsBinaryCoercible() by
allowing it to succeed whenever the target type is ANY or ANYELEMENT.
It was inconsistent that it dealt with other polymorphic target types
but not these.

Atri Sharma and Andrew Gierth; reviewed by Pavel Stehule and Vik Fearing,
and rather heavily editorialized upon by Tom Lane
2013-12-23 16:11:35 -05:00
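
A usage sketch of the new aggregates (table and column names are
illustrative): direct arguments precede WITHIN GROUP, and the aggregated
column is given in its ORDER BY clause:

    CREATE TABLE employees (department text, salary numeric);

    SELECT percentile_cont(0.5) WITHIN GROUP (ORDER BY salary)               AS median_salary,
           percentile_disc(ARRAY[0.25, 0.75]) WITHIN GROUP (ORDER BY salary) AS quartiles,
           mode() WITHIN GROUP (ORDER BY department)                         AS most_common_dept
    FROM employees;
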
Heikki Linnakangas 30b96549ab Mark variables 'static' where possible. Move GinFuzzySearchLimit to ginget.c
Per "clang -Wmissing-variable-declarations" output, posted by Andres Freund.
I didn't silence all those warnings, though, only the most obvious cases.
2013-12-16 11:41:17 +02:00
Peter Eisentraut 3e3520cf7a Translation updates 2013-12-02 00:17:07 -05:00
Kevin Grittner 4bd371f6f8 Fix pg_dumpall to work for databases flagged as read-only.
pg_dumpall's charter is to be able to recreate a database cluster's
contents in a virgin installation, but it was failing to honor that
contract if the cluster had any ALTER DATABASE SET
default_transaction_read_only settings.  By including a SET command
for the connection for each connection opened by pg_dumpall output,
errors are avoided and the source cluster is successfully
recreated.

There was discussion of whether to also set this for the connection
applying pg_dump output, but it was felt that it was both less
appropriate in that context, and far easier to work around.

Backpatch to all supported branches.
2013-11-30 11:24:56 -06:00
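
A sketch of the failure mode and the workaround now emitted (the database
name is illustrative):

    ALTER DATABASE reporting SET default_transaction_read_only = on;

    -- pg_dumpall output now issues, for each connection it opens:
    SET default_transaction_read_only = off;
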
Heikki Linnakangas fea437681d Spell SQL keywords in uppercase in pg_dump's query.
The server won't care, but let's be consistent.

David Rowley.
2013-11-18 18:34:51 +02:00
Heikki Linnakangas 32ceba3ea7 Replace appendPQExpBuffer(..., <constant>) with appendPQExpBufferStr
Arguably makes the code a bit more readable, and might give a small
performance gain.

David Rowley
2013-11-18 18:34:51 +02:00
Tom Lane 6cb86143e8 Allow aggregates to provide estimates of their transition state data size.
Formerly the planner had a hard-wired rule of thumb for guessing the amount
of space consumed by an aggregate function's transition state data.  This
estimate is critical to deciding whether it's OK to use hash aggregation,
and in many situations the built-in estimate isn't very good.  This patch
adds a column to pg_aggregate wherein a per-aggregate estimate can be
provided, overriding the planner's default, and infrastructure for setting
the column via CREATE AGGREGATE.

It may be that additional smarts will be required in future, perhaps even
a per-aggregate estimation function.  But this is already a step forward.

This is extracted from a larger patch to improve the performance of numeric
and int8 aggregates.  I (tgl) thought it was worth reviewing and committing
this infrastructure separately.  In this commit, all built-in aggregates
are given aggtransspace = 0, so no behavior should change.

Hadi Moshayedi, reviewed by Pavel Stehule and Tomas Vondra
2013-11-16 16:03:40 -05:00
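
A hedged sketch of the new option, reusing built-in numeric support functions
as they exist on recent servers (the aggregate name and size are illustrative;
creating an aggregate over an internal state type may require superuser):

    CREATE AGGREGATE my_numeric_avg(numeric) (
        SFUNC     = numeric_avg_accum,
        STYPE     = internal,
        FINALFUNC = numeric_avg,
        SSPACE    = 1024    -- estimated transition-state size in bytes,
                            -- consulted when costing hash aggregation
    );
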
Tom Lane 97e1ec4670 Speed up printing of INSERT statements in pg_dump.
In --inserts and especially --column-inserts mode, we can get a useful
speedup by generating the common prefix of all a table's INSERT commands
just once, and then printing the prebuilt string for each row.  This avoids
multiple invocations of fmtId() and other minor fooling around.

David Rowley
2013-11-15 18:02:06 -05:00
Peter Eisentraut 001e114b8d Fix whitespace issues found by git diff --check, add gitattributes
Set per file type attributes in .gitattributes to fine-tune whitespace
checks.  With the associated cleanups, the tree is now clean for git
2013-11-10 14:48:29 -05:00
Robert Haas 07cacba983 Add the notion of REPLICA IDENTITY for a table.
Pending patches for logical replication will use this to determine
which columns of a tuple ought to be considered as its candidate key.

Andres Freund, with minor, mostly cosmetic adjustments by me
2013-11-08 12:30:43 -05:00
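
The user-facing DDL built on this notion looks as follows (table and index
names are illustrative; parts of the syntax may have landed in follow-on
commits):

    CREATE TABLE accounts (id int NOT NULL, owner text);
    CREATE UNIQUE INDEX accounts_id_idx ON accounts (id);

    -- identify rows in the change stream by a unique index over NOT NULL columns ...
    ALTER TABLE accounts REPLICA IDENTITY USING INDEX accounts_id_idx;
    -- ... or log the entire old row
    ALTER TABLE accounts REPLICA IDENTITY FULL;
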
Tom Lane 3147acd63e Use improved vsnprintf calling logic in more places.
When we are using a C99-compliant vsnprintf implementation (which should be
most places, these days) it is worth the trouble to make use of its report
of how large the buffer needs to be to succeed.  This patch adjusts
stringinfo.c and some miscellaneous usages in pg_dump to do that, relying
on the logic recently added in libpgcommon's psprintf.c.  Since these
places want to know the number of bytes written once we succeed, modify the
API of pvsnprintf() to report that.

There remains near-duplicate logic in pqexpbuffer.c, but since that code
is in libpq, psprintf.c's approach of exit()-on-error isn't appropriate
for use there.  Also note that I didn't bother touching the multitude
of places that call (v)snprintf without any attempt to provide a resizable
buffer.

Release-note-worthy incompatibility: the API of appendStringInfoVA()
changed.  If there's any third-party code that's calling that directly,
it will need tweaking along the same lines as in this patch.

David Rowley and Tom Lane
2013-10-24 21:43:57 -04:00
Tom Lane 2c66f9924c Replace pg_asprintf() with psprintf().
This eliminates an awkward coding pattern that's also unnecessarily
inconsistent with backend coding.  psprintf() is now the thing to
use everywhere.
2013-10-22 19:40:26 -04:00
Peter Eisentraut f39418e9b3 Switch dependency order of libpgcommon and libpgport
Continuing 63f32f3416, libpgcommon should
depend on libpgport, but not vice versa.  But wait_result_to_str() in
wait_error.c depends on pstrdup() in libpgcommon.  So move exec.c and
wait_error.c from libpgport to libpgcommon.  Also switch the link order
in the place that's actually used by the failing ecpg builds.

The function declarations have been left in port.h for now.  That should
perhaps be separated sometime.
2013-10-17 22:02:35 -04:00
Peter Eisentraut 5b6d08cd29 Add use of asprintf()
Add asprintf(), pg_asprintf(), and psprintf() to simplify string
allocation and composition.  Replacement implementations taken from
NetBSD.

Reviewed-by: Álvaro Herrera <alvherre@2ndquadrant.com>
Reviewed-by: Asif Naeem <anaeem.it@gmail.com>
2013-10-13 00:09:18 -04:00
Peter Eisentraut 0b109c822b Translation updates 2013-10-07 16:51:52 -04:00
Tom Lane 0d3f4406df Allow aggregate functions to be VARIADIC.
There's no inherent reason why an aggregate function can't be variadic
(even VARIADIC ANY) if its transition function can handle the case.
Indeed, this patch to add the feature touches none of the planner or
executor, and little of the parser; the main missing stuff was DDL and
pg_dump support.

It is true that variadic aggregates can create the same sort of ambiguity
about parameters versus ORDER BY keys that was complained of when we
(briefly) had both one- and two-argument forms of string_agg().  However,
the policy formed in response to that discussion only said that we'd not
create any built-in aggregates with varying numbers of arguments, not that
we shouldn't allow users to do it.  So the logical extension of that is
we can allow users to make variadic aggregates as long as we're wary about
shipping any such in core.

In passing, this patch allows aggregate function arguments to be named, to
the extent of remembering the names in pg_proc and dumping them in pg_dump.
You can't yet call an aggregate using named-parameter notation.  That seems
like a likely future extension, but it'll take some work, and it's not what
this patch is really about.  Likewise, there's still some work needed to
make window functions handle VARIADIC fully, but I left that for another
day.

initdb forced because of new aggvariadic field in Aggref parse nodes.
2013-09-03 17:08:46 -04:00
Peter Eisentraut 6a007fa1eb Translation updates 2013-09-02 02:43:18 -04:00
Heikki Linnakangas da85fb4747 Accept multiple -I, -P, -T and -n options in pg_restore.
We already did this for -t (--table) in 9.3, but missed the other similar
options. For consistency, allow all of them to be specified multiple times.

Unfortunately it's too late to sneak this into 9.3, so commit to master
only.
2013-08-28 09:43:34 +03:00
Peter Eisentraut a2f2e902b8 Translation updates 2013-08-18 23:41:03 -04:00
Bruce Momjian 808f8f5d6d pg_dump: avoid schema qualification for ALTER ... OWNER
We already use search_path to specify the schema, so there is no need
for pg_dump to schema-qualify the name.  Also remove dead code.
2013-08-13 11:45:56 -04:00
Bruce Momjian 8eb29194fc pg_dump/pg_dumpall: remove unnecessary SQL trailing semicolons
Patch by Ian Lawrence Barwick
2013-07-31 11:37:17 -04:00
Stephen Frost 4cbe3ac3e8 WITH CHECK OPTION support for auto-updatable VIEWs
For simple views which are automatically updatable, this patch allows
the user to specify what level of checking should be done on records
being inserted or updated.  For 'LOCAL CHECK', new tuples are validated
against the conditionals of the view they are being inserted into, while
for 'CASCADED CHECK' the new tuples are validated against the
conditionals for all views involved (from the top down).

This option is part of the SQL specification.

Dean Rasheed, reviewed by Pavel Stehule
2013-07-18 17:10:16 -04:00
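
A minimal sketch (names illustrative): inserts and updates through the view
must keep satisfying the view's own WHERE clause with LOCAL, and the
conditions of all underlying views as well with CASCADED:

    CREATE TABLE expenses (id int, submitter text, amount numeric);

    CREATE VIEW my_expenses AS
        SELECT * FROM expenses WHERE submitter = current_user
        WITH LOCAL CHECK OPTION;

    -- an INSERT or UPDATE through my_expenses that would produce a row with a
    -- different submitter is now rejected
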
Peter Eisentraut 233bfe0673 Fix PQconninfoParse error message handling
The returned error message already includes a newline, but the callers
were adding their own when printing it out.
2013-07-15 20:04:14 -04:00
Stephen Frost 3355443fb1 Check version before allocating PQExpBuffer
In pg_dump.c:getEventTriggers, check what major version we are on
before calling createPQExpBuffer() to avoid leaking that bit of
memory.

Leak discovered by the Coverity scanner.

Back-patch to 9.3 where support for dumping event triggers was
added.
2013-07-14 21:17:59 -04:00
Stephen Frost 234e4cf6e1 During parallel pg_dump, free commands from master
The command strings read by the child processes during parallel
pg_dump, after being read and handled, were not being free'd.
This patch corrects this relatively minor memory leak.

Leak found by the Coverity scanner.

Back patch to 9.3 where parallel pg_dump was introduced.
2013-07-14 14:35:26 -04:00
Peter Eisentraut e852c5df2d pg_dump: Formatting cleanup of new messages 2013-07-11 21:48:09 -04:00
Noah Misch 02d2b694ee Update messages, comments and documentation for materialized views.
All instances of the verbiage were lagging the code.  Back-patch to 9.3,
where materialized views were introduced.
2013-07-05 15:37:51 -04:00
Fujii Masao 2ef085d0e6 Get rid of pg_class.reltoastidxid.
Treat TOAST index just the same as normal one and get the OID
of TOAST index from pg_index but not pg_class.reltoastidxid.
This change allows us to handle multiple TOAST indexes,
which is required infrastructure for the upcoming
REINDEX CONCURRENTLY feature.

Patch by Michael Paquier, reviewed by Andres Freund and me.
2013-07-04 03:24:09 +09:00
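
A table's TOAST index(es) can now be located through pg_index rather than the
removed column, along these lines (the table name is illustrative):

    SELECT i.indexrelid::regclass AS toast_index
    FROM pg_class c
    JOIN pg_index i ON i.indrelid = c.reltoastrelid
    WHERE c.oid = 'public.big_table'::regclass;
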
Peter Eisentraut 614ce64f6c pg_restore: Error about incompatible options
This mirrors the equivalent error cases in pg_dump.
2013-07-02 20:07:35 -04:00
Robert Haas 568d4138c6 Use an MVCC snapshot, rather than SnapshotNow, for catalog scans.
SnapshotNow scans have the undesirable property that, in the face of
concurrent updates, the scan can fail to see either the old or the new
versions of the row.  In many cases, we work around this by requiring
DDL operations to hold AccessExclusiveLock on the object being
modified; in some cases, the existing locking is inadequate and random
failures occur as a result.  This commit doesn't change anything
related to locking, but will hopefully pave the way to allowing lock
strength reductions in the future.

The major issue that has held us back from making this change in the past
is that taking an MVCC snapshot is significantly more expensive than
using a static special snapshot such as SnapshotNow.  However, testing
of various worst-case scenarios reveals that this problem is not
severe except under fairly extreme workloads.  To mitigate those
problems, we avoid retaking the MVCC snapshot for each new scan;
instead, we take a new snapshot only when invalidation messages have
been processed.  The catcache machinery already requires that
invalidation messages be sent before releasing the related heavyweight
lock; else other backends might rely on locally-cached data rather
than scanning the catalog at all.  Thus, making snapshot reuse
dependent on the same guarantees shouldn't break anything that wasn't
already subtly broken.

Patch by me.  Review by Michael Paquier and Andres Freund.
2013-07-02 09:47:01 -04:00
Tom Lane 9ef86cd994 Mark index-constraint comments with correct dependency in pg_dump.
When there's a comment on an index that was created with UNIQUE or PRIMARY
KEY constraint syntax, we need to label the comment as depending on the
constraint not the index, since only the constraint object actually appears
in the dump.  This incorrect dependency can lead to parallel pg_restore
trying to restore the comment before the index has been created, per bug
#8257 from Lloyd Albin.

This patch fixes pg_dump to produce the right dependency in dumps made
in the future.  Usually we also try to hack pg_restore to work around
bogus dependencies, so that existing (wrong) dumps can still be restored in
parallel mode; but that doesn't seem practical here since there's no easy
way to relate the constraint dump entry to the comment after the fact.

Andres Freund
2013-06-27 13:54:50 -04:00
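
The shape that exposed the bug (names illustrative): a comment placed on the
index backing a constraint must be attached to the constraint's dump entry,
since only the constraint appears in the dump:

    CREATE TABLE items (id int PRIMARY KEY);
    COMMENT ON INDEX items_pkey IS 'backs the items primary key';
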
Andrew Dunstan 81166a2f7e Properly dump dropped foreign table cols in binary-upgrade mode.
In binary upgrade mode, we need to recreate and then drop dropped
columns so that all the columns get the right attribute number. This is
true for foreign tables as well as for native tables. For foreign
tables we have been getting the first part right but not the second,
leading to bogus columns in the upgraded database. Fix this all the way
back to 9.1, where foreign tables were introduced.
2013-06-25 13:46:34 -04:00
Peter Eisentraut ce18b01159 Translation updates 2013-06-24 14:16:44 -04:00
Fujii Masao f69aece6f4 Fix pg_restore -l with the directory archive to display the correct format name.
Back-patch to 9.1 where the directory archive was introduced.
2013-06-16 05:07:02 +09:00
Joe Conway 33a4466f76 Fix ordering of obj id for Rules and EventTriggers in pg_dump.
getSchemaData() must identify extension member objects and mark them
as not to be dumped. This must happen after reading all objects that can be
direct members of extensions, but before we begin to process table subsidiary
objects. Both rules and event triggers were wrong in this regard.

Backport rules portion of patch to 9.1 -- event triggers do not exist prior to 9.3.
Suggested fix by Tom Lane, initial complaint and patch by me.
2013-06-09 17:30:39 -07:00
Peter Eisentraut 01497e738e Add new source files to nls.mk 2013-05-31 20:03:39 -04:00
Bruce Momjian 9af4159fce pgindent run for release 9.3
This is the first run of the Perl-based pgindent script.  Also update
pgindent instructions.
2013-05-29 16:58:43 -04:00
Tom Lane 1d6c72a55b Move materialized views' is-populated status into their pg_class entries.
Previously this state was represented by whether the view's disk file had
zero or nonzero size, which is problematic for numerous reasons, since it's
breaking a fundamental assumption about heap storage.  This was done to
allow unlogged matviews to revert to unpopulated status after a crash
despite our lack of any ability to update catalog entries post-crash.
However, this poses enough risk of future problems that it seems better to
not support unlogged matviews until we can find another way.  Accordingly,
revert that choice as well as a number of existing kluges forced by it
in favor of creating a pg_class.relispopulated flag column.
2013-05-06 13:27:22 -04:00
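
The populated state is now visible directly in the catalog, for example:

    SELECT relname, relispopulated
    FROM pg_class
    WHERE relkind = 'm';   -- materialized views
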
Simon Riggs b2ad82dafa Execute SET TRANSACTION SNAPSHOT during pg_dump
Previous coding set the SQL buffer but never executed it

Bug noted by me during beta testing
2013-05-06 15:37:17 +01:00
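
A sketch of the command sequence involved (the snapshot identifier is a
placeholder; in practice it comes from pg_export_snapshot() in the exporting
session):

    BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
    SET TRANSACTION SNAPSHOT '00000003-0000001B-1';
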
Peter Eisentraut 539ecc9241 Translation updates 2013-05-05 22:34:23 -04:00
Peter Eisentraut bbb4db4e04 pg_dump: Improve message formatting 2013-04-27 23:06:37 -04:00
Joe Conway b42ea7981c Ensure that user-created rows in extension tables get dumped when requested
Rows are dumped if the table is explicitly requested with a -t/--table switch
of the table itself, or by a -n/--schema switch of the schema containing the
extension table.  Patch reviewed by Vibhor Kumar and Dimitri Fontaine.
Backpatched to 9.1 when the extension management facility was added.
2013-04-26 12:02:40 -07:00
Kevin Grittner 52e6e33ab4 Create a distinction between a populated matview and a scannable one.
The intent was that being populated would, long term, be just one
of the conditions which could affect whether a matview was
scannable; being populated should be necessary but not always
sufficient to scan the relation.  Since only CREATE and REFRESH
currently determine the scannability, names and comments
accidentally conflated these concepts, leading to confusion.

Also add missing locking for the SQL function which allows a
test for scannability, and fix a modularity violation.

Per complaints from Tom Lane, although it's not clear that these
will satisfy his concerns.  Hopefully this will at least better
frame the discussion.
2013-04-09 13:02:49 -05:00
Tom Lane aa02864f64 Must check indisready not just indisvalid when dumping from 9.2 server.
9.2 uses a kluge representation of "indislive"; we have to account for
that when examining pg_index.  Simplest solution is to check indisready
for 9.0 and 9.1 as well; that's harmless though unnecessary, so it's
not worth making a version distinction for.

Fixes oversight in commit 683abc73df,
as noted by Andres Freund.
2013-03-28 22:09:12 -04:00
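
The catalog filter pg_dump's index queries need, sketched as a standalone
query (the table name is illustrative): skip indexes that are invalid, e.g.
from a failed CREATE INDEX CONCURRENTLY, or not yet ready:

    SELECT i.indexrelid::regclass
    FROM pg_index i
    WHERE i.indrelid = 'public.accounts'::regclass
      AND i.indisvalid
      AND i.indisready;
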
Heikki Linnakangas 7800a71291 Move some pg_dump function around.
Move functions used only by pg_dump and pg_restore from dumputils.c to a new
file, pg_backup_utils.c. dumputils.c is linked into psql and some programs
in bin/scripts, so it seems good to keep it slim. The parallel functionality
is moved to parallel.c, as is exit_horribly, because the interesting code in
exit_horribly is parallel-related.

This refactoring gets rid of the on_exit_msg_func function pointer. It was
problematic, because a modern gcc version with -Wmissing-format-attribute
complained if it wasn't marked with PF_PRINTF_ATTRIBUTE, but the ancient gcc
version that Tom Lane's old HP-UX box has didn't accept that attribute on a
function pointer, and gave an error. We still use a similar function pointer
trick for the getLocalPQBuffer() function, to use a thread-local version of it
in parallel mode on Windows, but that dodges the problem because it doesn't
take printf-like arguments.
2013-03-27 18:10:40 +02:00
Tom Lane 683abc73df Ignore invalid indexes in pg_dump.
Dumping invalid indexes can cause problems at restore time, for example
if the reason the index creation failed was because it tried to enforce
a uniqueness condition not satisfied by the table's data.  Also, if the
index creation is in fact still in progress, it seems reasonable to
consider it to be an uncommitted DDL change, which pg_dump wouldn't be
expected to dump anyway.

Back-patch to all active versions, and teach them to ignore invalid
indexes in servers back to 8.2, where the concept was introduced.

Michael Paquier
2013-03-26 17:43:19 -04:00
Heikki Linnakangas 625b237f79 Fix pg_dump against 9.1/9.2 servers.
The parallel pg_dump patch forgot to add relpages column to 9.1/9.2 version
of the getTables() query.

Reported by Bernd Helmle.
2013-03-26 15:44:26 +02:00
Heikki Linnakangas 901b89e37b Get rid of obsolete parse_version helper function.
For getting the server's version in numeric form, use PQserverVersion().
It does the exact same parsing as dumputils.c's parse_version(), and has
been around in libpq for a long time. For the client's version, just use
the PG_VERSION_NUM constant.
2013-03-26 15:32:02 +02:00
Andrew Dunstan ec143f9405 Fix a small logic bug in adjusted parallel restore code. 2013-03-25 22:52:28 -04:00
Heikki Linnakangas ea988ee8c8 Add PF_PRINTF_ATTRIBUTE to on_exit_msg_fmt.
Per warning from -Wmissing-format-attribute.
2013-03-25 10:06:03 +02:00
Tom Lane 846681fdd5 Fix some unportable constructs in parallel pg_dump code.
Didn't compile on semi-obsolete gcc, and probably not on not-gcc-at-all
either.
2013-03-24 15:35:37 -04:00
Andrew Dunstan 9e257a181c Add parallel pg_dump option.
New infrastructure is added which creates a set number of workers
(threads on Windows, forked processes on Unix). Jobs are then
handed out to these workers by the master process as needed.
pg_restore is adjusted to use this new infrastructure in place of the
old setup which created a new worker for each step on the fly. Parallel
dumps acquire a snapshot clone in order to stay consistent, if
available.

The parallel option is selected by the -j / --jobs command line
parameter of pg_dump.

Joachim Wieland, lightly editorialized by Andrew Dunstan.
2013-03-24 11:27:20 -04:00
Tom Lane d43837d030 Add lock_timeout configuration parameter.
This GUC allows limiting the time spent waiting to acquire any one
heavyweight lock.

In support of this, improve the recently-added timeout infrastructure
to permit efficiently enabling or disabling multiple timeouts at once.
That reduces the performance hit from turning on lock_timeout, though
it's still not zero.

Zoltán Böszörményi, reviewed by Tom Lane,
Stephen Frost, and Hari Babu
2013-03-16 23:22:57 -04:00
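
A usage sketch (the table name is illustrative):

    SET lock_timeout = '2s';
    -- errors out instead of waiting indefinitely if the lock is not granted in time
    LOCK TABLE accounts IN ACCESS EXCLUSIVE MODE;
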
Kevin Grittner a18b72adcd Fix bug in dumping prior releases due to MV REFRESH dependency checking.
Reports and suggested patches from Fujii Masao and Andrew Dunstan.

Andrew Dunstan
2013-03-13 20:20:32 -05:00
Peter Eisentraut 9795113916 Add fe_memutils.c to nls.mk where used 2013-03-06 23:45:16 -05:00
Kevin Grittner cfa3df3de1 Fix broken pg_dump for 9.0 and 9.1 caused by the MV patch.
Per report and suggestion from Bernd Helmle
2013-03-06 09:51:49 -06:00
Kevin Grittner 3bf3ab8c56 Add materialized view relations.
A materialized view has a rule just like a view and a heap and
other physical properties like a table.  The rule is only used to
populate the table, references in queries refer to the
materialized data.

This is a minimal implementation, but should still be useful in
many cases.  Currently data is only populated "on demand" by the
CREATE MATERIALIZED VIEW and REFRESH MATERIALIZED VIEW statements.
It is expected that future releases will add incremental updates
with various timings, and that a more refined concept of defining
what is "fresh" data will be developed.  At some point it may even
be possible to have queries use a materialized view in place of
references to underlying tables, but that requires the other
above-mentioned features to be working first.

Much of the documentation work by Robert Haas.
Review by Noah Misch, Thom Brown, Robert Haas, Marko Tiikkaja
Security review by KaiGai Kohei, with a decision on how best to
implement sepgsql still pending.
2013-03-03 18:23:31 -06:00
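
A minimal sketch of the new objects (names are illustrative):

    CREATE TABLE sales (region text, amount numeric);

    CREATE MATERIALIZED VIEW sales_by_region AS
        SELECT region, sum(amount) AS total
        FROM sales
        GROUP BY region;

    -- data is repopulated only on demand
    REFRESH MATERIALIZED VIEW sales_by_region;
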
Heikki Linnakangas 2953cd6d17 Only quote libpq connection string values that need quoting.
There's no harm in excessive quoting per se, but avoiding it makes the strings
nicer to read. The values can get quite unwieldy when they're first quoted
within single-quotes when included in the connection string, and then all
the single-quotes are escaped when the connection string is passed as a
shell argument.
2013-02-25 19:53:04 +02:00
Heikki Linnakangas 3dee636e04 Add -d option to pg_dumpall, for specifying a connection string.
Like with pg_basebackup and pg_receivexlog, it's a bit strange to call the
option -d/--dbname, when in fact you cannot pass a database name in it.

Original patch by Amit Kapila, heavily modified by me.
2013-02-25 19:39:10 +02:00
Heikki Linnakangas 691e595dd9 Add -d/--dbname option to pg_dump.
You could already pass a database name just by passing it as the last
option, without -d. This is an alias for that, like the -d/--dbname option
in psql and many other client applications. For consistency.
2013-02-25 19:39:04 +02:00
Heikki Linnakangas f435cd1d38 Fix pg_dumpall with database names containing =
If a database name contained a '=' character, pg_dumpall failed. The problem
was in the way pg_dumpall passes the database name to pg_dump on the
command line. If it contained a '=' character, pg_dump would interpret it
as a libpq connection string instead of a plain database name.

To fix, pass the database name to pg_dump as a connection string,
"dbname=foo", with the database name escaped if necessary.

Back-patch to all supported branches.
2013-02-20 17:08:54 +02:00
Heikki Linnakangas 2930c05634 Don't pass NULL to fprintf, if a bogus connection string is given to pg_dump.
Back-patch to all supported branches.
2013-02-20 16:33:24 +02:00
Alvaro Herrera 8396447cdb Create libpgcommon, and move pg_malloc et al to it
libpgcommon is a new static library to allow sharing code among the
various frontend programs and backend; this lets us eliminate duplicate
implementations of common routines.  We avoid libpgport, because that's
intended as a place for porting issues; per discussion, it seems better
to keep them separate.

The first use case, and the only implemented by this patch, is pg_malloc
and friends, which many frontend programs were already using.

At the same time, we can use this to provide palloc emulation functions
for the frontend; this way, some palloc-using files in the backend can
also be used by the frontend cleanly.  To do this, we change palloc() in
the backend to be a function instead of a macro on top of
MemoryContextAlloc().  This was previously believed to cause loss of
performance, but this implementation has been tweaked by Tom and Andres
so that on modern compilers it provides a slight improvement over the
previous one.

This lets us clean up some places that were already doing this with
localized hacks.

Most of the pg_malloc/palloc changes in this patch were authored by
Andres Freund. Zoltán Böszörményi also independently provided a form of
that.  libpgcommon infrastructure was authored by Álvaro.
2013-02-12 11:21:05 -03:00
Magnus Hagander be926474be Make pg_dump exclude unlogged table data on hot standby slaves
Noted by Joe Van Dyk
2013-01-25 09:46:07 +01:00
Tom Lane 26d905a12d Use SET TRANSACTION READ ONLY in pg_dump, if server supports it.
This currently does little except serve as documentation.  (The one case
where it has a performance benefit, SERIALIZABLE mode in 9.1 and up, was
already using READ ONLY mode.)  However, it's possible that it might have
performance benefits in future, and in any case it seems like good
practice since it would catch any accidentally non-read-only operations.

Pavan Deolasee
2013-01-19 17:56:40 -05:00
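
A sketch of the transaction setup this enables (the exact isolation level
pg_dump chooses depends on the server version):

    BEGIN;
    SET TRANSACTION ISOLATION LEVEL REPEATABLE READ, READ ONLY;
    -- any accidentally non-read-only operation issued during the dump now fails
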
Magnus Hagander f3af53441e Support multiple -t/--table arguments for more commands
On top of the previous support in pg_dump, add support to specify
multiple tables (by using the -t option multiple times) to
pg_restore, clusterdb, reindexdb and vacuumdb.

Josh Kupershmidt, reviewed by Karl O. Pinc
2013-01-17 11:24:47 +01:00
Peter Eisentraut 36bdfa52a0 Get rid of pg_dump's README
It was largely full of outdated and incorrect information.  Move the few
notes which were still relevant into header comments of pg_backup_tar.c
and pg_dumpall.c.

Josh Kupershmidt
2013-01-16 23:49:54 -05:00
Magnus Hagander 794397ae1d Move tar function headers to pgtar.h
This makes it possible to include them only where they are used, so
we can avoid the conflict of the uid_t and gid_t datatypes that happened
in plperl (since plperl doesn't need the tar functions)
2013-01-02 20:34:08 +01:00