Commit Graph

4902 Commits

Author SHA1 Message Date
Stephen Frost 2259bf672c Fix dumping of casts and transforms using built-in functions
In pg_dump.c dumpCast() and dumpTransform(), we would happily ignore the
cast or transform if it happened to use a built-in function because we
weren't including the information about built-in functions when querying
pg_proc from getFuncs().

Modify the query in getFuncs() to also gather information about
functions which are used by user-defined casts and transforms (where
"user-defined" means "has an OID >= FirstNormalObjectId").  This also
adds to the TAP regression tests for 9.6 and master to cover these
types of objects.

Back-patch all the way for casts, back to 9.5 for transforms.

Discussion: https://www.postgresql.org/message-id/flat/20160504183952.GE10850%40tamriel.snowman.net
2016-12-21 13:47:06 -05:00
Stephen Frost 19990918d3 For 8.0 servers, get last built-in oid from pg_database
We didn't start ensuring that all built-in objects had OIDs less than
16384 until 8.1, so for 8.0 servers we still need to query the value out
of pg_database.  We need this, in particular, to distinguish which casts
were built-in and which were user-defined.

For HEAD, we only worry about going back to 8.0; for the back-branches,
we also ensure that 7.0-7.4 work.

Discussion: https://www.postgresql.org/message-id/flat/20160504183952.GE10850%40tamriel.snowman.net
2016-12-21 13:47:06 -05:00
Fujii Masao ecbdc4c555 Forbid invalid combination of options in pg_basebackup.
Commit 56c7d8d455 allowed pg_basebackup
to stream WAL in tar mode. However, WAL streaming in tar mode works
only when - (dash) is not specified as the output directory; that is,
the combination of the three options "-D -", "-F t" and "-X stream" is
invalid. Previously, even when those options were specified together,
the pg_basebackup background process unexpectedly started streaming
WAL and then exited with an error.

This commit changes pg_basebackup so that it errors out on such an
invalid combination of options at the beginning.

Reviewed by Magnus Hagander, and patch by me.
2016-12-21 20:27:37 +09:00
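For illustration, a minimal sketch of the combination that is now rejected up front (output paths are invented):

    # rejected at startup: tar-format output to stdout ("-D -") cannot also stream WAL
    pg_basebackup -D - -F t -X stream

    # workable alternatives: write the tar files to a directory, or fetch WAL at the end
    pg_basebackup -D /path/to/backup -F t -X stream
    pg_basebackup -D - -F t -X fetch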
Peter Eisentraut 1753b1b027 Add pg_sequence system catalog
Move sequence metadata (start, increment, etc.) into a proper system
catalog instead of storing it in the sequence heap object.  This
separates the metadata from the sequence data.  Sequence metadata is now
operated on transactionally by DDL commands, whereas previously
rollbacks of sequence-related DDL commands would be ignored.

Reviewed-by: Andreas Karlsson <andreas@proxel.se>
2016-12-20 08:28:18 -05:00
Magnus Hagander 8cb545bfd4 Add missing newline in message 2016-12-15 16:45:31 +01:00
Robert Haas 06e184876b psql: Fix incorrect version check for table partitioning.
Table partitioning was added in 10, not 9.6.

Fabrízio de Royes Mello, per report from Jeff Janes
2016-12-12 11:57:58 -05:00
Heikki Linnakangas 2560d244b4 Fix quoting and a compiler warning in dumping partitions.
Partition name needs to be quoted in the ATTACH PARTITION command
constructed in binary-upgrade mode.

Silence a compiler warning about a set-but-unused variable in builds
without --enable-cassert.
2016-12-08 14:10:10 +02:00
Robert Haas f0e44751d7 Implement table partitioning.
Table partitioning is like table inheritance and reuses much of the
existing infrastructure, but there are some important differences.
The parent is called a partitioned table and is always empty; it may
not have indexes or non-inherited constraints, since those make no
sense for a relation with no data of its own.  The children are called
partitions and contain all of the actual data.  Each partition has an
implicit partitioning constraint.  Multiple inheritance is not
allowed, and partitioning and inheritance can't be mixed.  Partitions
can't have extra columns and may not allow nulls unless the parent
does.  Tuples inserted into the parent are automatically routed to the
correct partition, so tuple-routing ON INSERT triggers are not needed.
Tuple routing isn't yet supported for partitions which are foreign
tables, and it doesn't handle updates that cross partition boundaries.

Currently, tables can be range-partitioned or list-partitioned.  List
partitioning is limited to a single column, but range partitioning can
involve multiple columns.  A partitioning "column" can be an
expression.

Because table partitioning is less general than table inheritance, it
is hoped that it will be easier to reason about properties of
partitions, and therefore that this will serve as a better foundation
for a variety of possible optimizations, including query planner
optimizations.  The tuple routing which this patch does based on the
implicit partitioning constraints is an example of this, but it seems
likely that many other useful optimizations are also possible.

Amit Langote, reviewed and tested by Robert Haas, Ashutosh Bapat,
Amit Kapila, Rajkumar Raghuwanshi, Corey Huinker, Jaime Casanova,
Rushabh Lathia, Erik Rijkers, among others.  Minor revisions by me.
2016-12-07 13:17:55 -05:00
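As a rough sketch of the new syntax (table, column, and partition names are invented for illustration), a range-partitioned table and one of its partitions might be declared like this:

    psql -d mydb <<'SQL'
    CREATE TABLE measurement (
        logdate   date NOT NULL,
        peaktemp  int
    ) PARTITION BY RANGE (logdate);

    -- rows inserted into "measurement" are routed into this partition automatically
    CREATE TABLE measurement_y2017 PARTITION OF measurement
        FOR VALUES FROM ('2017-01-01') TO ('2018-01-01');
    SQL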
Stephen Frost 093129c9d9 Add support for restrictive RLS policies
We have had support for restrictive RLS policies since 9.5, but they
were only available through extensions which use the appropriate hooks.
This adds support into the grammar, catalog, psql and pg_dump for
restrictive RLS policies, thus reducing the cases where an extension is
necessary.

In passing, also move away from using "AND"d and "OR"d in comments.
As pointed out by Alvaro, it's not really appropriate to attempt
to make verbs out of "AND" and "OR", so reword those comments which
attempted to.

Reviewed By: Jeevan Chalke, Dean Rasheed
Discussion: https://postgr.es/m/20160901063404.GY4028@tamriel.snowman.net
2016-12-05 15:50:55 -05:00
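A hedged sketch of the new grammar (table, column, and policy names are hypothetical): permissive policies are combined with OR, while a policy created AS RESTRICTIVE is ANDed with that result.

    psql -d mydb <<'SQL'
    CREATE TABLE accounts (id int, owner text, is_locked boolean);
    ALTER TABLE accounts ENABLE ROW LEVEL SECURITY;

    -- permissive (the default): rows pass if any permissive policy allows them
    CREATE POLICY acct_owner ON accounts
        USING (owner = current_user);

    -- restrictive: additionally required of every visible row
    CREATE POLICY acct_unlocked ON accounts AS RESTRICTIVE
        USING (NOT is_locked);
    SQL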
Robert Haas 2b959d4957 Reduce the default for max_worker_processes back to 8.
Commit b460f5d669 -- at my suggestion --
increased the default value of max_worker_processes from 8 to 16, on
the theory that this would be harmless and convenient for users.
Unfortunately, this caused some buildfarm machines with low connection
limits to start failing, so apparently it's not harmless after all.
2016-12-05 10:53:21 -05:00
Robert Haas b460f5d669 Add max_parallel_workers GUC.
Increase the default value of the existing max_worker_processes GUC
from 8 to 16, and add a new max_parallel_workers GUC with a maximum
of 8.  This way, even if the maximum amount of parallel query is
happening, there is still room for background workers that do other
things, as originally envisioned when max_worker_processes was added.

Julien Rouhaud, reviewed by Amit Kapila and revised by me.
2016-12-02 07:42:58 -05:00
Stephen Frost 4fafa579b0 Add --no-blobs option to pg_dump
Add an option to exclude blobs when running pg_dump.  By default, blobs
are included but this option can be used to exclude them while keeping
the rest of the dump.

Comment updates and regression tests from me.

Author: Guillaume Lelarge
Reviewed-by: Amul Sul
Discussion: https://postgr.es/m/VisenaEmail.48.49926ea6f91dceb6.15355a48249@tc7-visena
2016-11-29 11:09:35 -05:00
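For example, assuming a database named mydb and an invented output filename, the new option might be used like this:

    # dump everything except large objects, in custom format
    pg_dump --no-blobs -Fc -f mydb_noblobs.dump mydb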
Tom Lane 404e667580 Fix busted tab-completion pattern for ALTER TABLE t ALTER c DROP ...
Evidently a thinko in commit 9b181b036.

Kyotaro Horiguchi
2016-11-28 11:51:30 -05:00
Tom Lane dbdfd114f3 Bring some clarity to the defaults for the xxx_flush_after parameters.
Instead of confusingly stating platform-dependent defaults for these
parameters in the comments in postgresql.conf.sample (with the main
entry being a lie on Linux), teach initdb to install the correct
platform-dependent value in postgresql.conf, similarly to the way
we handle other platform-dependent defaults.  This won't do anything
for existing 9.6 installations, but since it's effectively only a
documentation improvement, that seems OK.

Since this requires initdb to have access to the default values,
move the #define's for those to pg_config_manual.h; the original
placement in bufmgr.h is unworkable because that file can't be
included by frontend programs.

Adjust the default value for wal_writer_flush_after so that it is 1MB
regardless of XLOG_BLCKSZ, conforming to what is stated in both the
SGML docs and postgresql.conf.  (We could alternatively make it scale
with XLOG_BLCKSZ, but I'm not sure I see the point.)

Copy-edit related SGML documentation.

Fabien Coelho and Tom Lane, per a gripe from Tomas Vondra.

Discussion: <30ebc6e3-8358-09cf-44a8-578252938424@2ndquadrant.com>
2016-11-25 18:36:10 -05:00
Stephen Frost 8f91f323b4 Clean up pg_dump tests, re-enable BLOB testing
Add a loop to check that each test covers all of the pg_dump runs.  We
(I) had been a bit sloppy when adding new runs, not making sure to mark
whether they should be under like or unlike for each test; this loop
makes sure that the test system will complain if any are forgotten in
the future.

The loop also correctly handles the 'catch all' cases, which are used to
avoid running unnecessary specific checks when a single catch-all can be
done (eg: a no-acl run should not have any GRANT commands).

Also, re-enable the testing of blobs, but use lo_from_bytea() instead of
trying to be cute and writing out to a file and then reading it back in
with psql, which proved to be difficult for some buildfarm members.
This allows us to add support for testing the --no-blobs option which
will be getting added shortly, provided the buildfarm doesn't blow up on
this.
2016-11-18 14:21:33 -05:00
Tom Lane d8c05aff56 Fix pg_dump's handling of circular dependencies in views.
pg_dump's traditional solution for breaking a circular dependency involving
a view was to create the view with CREATE TABLE and then later issue CREATE
RULE "_RETURN" ... to convert the table to a view, relying on the backend's
very very ancient code that supports making views that way.  We've wanted
to get rid of that kluge for a long time, but the thing that finally
motivates doing something about it is the recognition that this method
fails with the --clean option, because it leads to issuing DROP RULE
"_RETURN" followed by DROP TABLE --- and the backend won't let you drop a
view's _RETURN rule.

Instead, let's break circular dependencies by initially creating the view
using CREATE VIEW AS SELECT NULL::columntype AS columnname, ... (so that
it has the right column names and types to support external references,
but no dependencies beyond the column data types), and then later dumping
the ON SELECT rule using the spelling CREATE OR REPLACE VIEW.  This method
wasn't available when this code was originally written, but it's been
possible since PG 7.3, so it seems fine to start relying on it now.

To solve the --clean problem, make the dropStmt for an ON SELECT rule
be CREATE OR REPLACE VIEW with the same dummy target list as above.
In this way, during the DROP phase, we first reduce the view to have
no extra dependencies, and then we can drop it entirely when we've
gotten rid of whatever had a circular dependency on it.

(Note: this should work adequately well with the --if-exists option, since
the CREATE OR REPLACE VIEW will go through whether the view exists or not.
It could fail if the view exists with a conflicting column set, but we
don't really support --clean against a non-matching database anyway.)

This allows cleaning up some other kluges inside pg_dump, notably that
we don't need a notion of reloptions attached to a rule anymore.

Although this is a bug fix, commit to HEAD only for now.  The problem's
existed for a long time and we've had relatively few complaints, so it
doesn't really seem worth taking risks to fix it in the back branches.
We might revisit that choice if no problems emerge.

Discussion: <19092.1479325184@sss.pgh.pa.us>
2016-11-17 15:25:59 -05:00
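A sketch of the two-step pattern described above, with view, table, and column names invented for illustration; pg_dump now emits roughly this shape when it has to break a dependency loop:

    psql -d mydb <<'SQL'
    -- first pass: right column names and types, but no dependencies beyond them
    CREATE VIEW v AS
        SELECT NULL::integer AS id, NULL::text AS name;

    -- later, once the objects it references exist, install the real definition
    -- (assumes base_table has matching column names and types)
    CREATE OR REPLACE VIEW v AS
        SELECT id, name FROM base_table;
    SQL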
Tom Lane ac888986fc Improve pg_dump/pg_restore --create --if-exists logic.
Teach it not to complain if the dropStmt attached to an archive entry
is actually spelled CREATE OR REPLACE VIEW, since that will happen due to
an upcoming bug fix.  Also, if it doesn't recognize a dropStmt, have it
print a WARNING and then emit the dropStmt unmodified.  That seems like a
much saner behavior than Assert'ing or dumping core due to a null-pointer
dereference, which is what would happen before :-(.

Back-patch to 9.4 where this option was introduced.

Discussion: <19092.1479325184@sss.pgh.pa.us>
2016-11-17 14:59:13 -05:00
Tom Lane fcf70e0dbc Re-pgindent src/bin/pg_dump/*
Cleanup for recent patches --- it's not much change, but I got annoyed
while re-indenting the view-rule fix I'm working on.
2016-11-17 14:36:59 -05:00
Robert Haas 56eba9b8a1 pgbench: Increase maximum size of log filename from 64 to MAXPGPATH.
Commit 41124a91e6 allowed the
transaction log file prefix to be changed but left in place the
existing 64-character limit on the total length of a log file name.
It's possible that could be inconvenient for somebody, so increase the
limit to MAXPGPATH, which ought to be enough for anybody.

Per a suggestion from Tom Lane.
2016-11-15 09:11:51 -05:00
Peter Eisentraut a7e5457db8 pg_upgrade: Upgrade sequence data via pg_dump
Previously, pg_upgrade migrated sequence data like tables by copying the
on-disk file.  This does not allow any changes in the on-disk format for
sequences.  It's simpler to just have pg_dump set the new sequence
values as it normally does.  To do that, create a hidden submode in
pg_dump that dumps sequence data even when a schema-only dump is
requested, and trigger that submode in binary upgrade mode.  (This new
submode could easily be exposed as a command-line option, but it has
limited use outside of pg_dump and would probably cause some confusion,
so we don't do that at this time.)

Reviewed-by: Anastasia Lubennikova <a.lubennikova@postgrespro.ru>
Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
2016-11-13 21:44:58 -05:00
Peter Eisentraut 27d2c12328 pg_dump: Separate table and sequence data object types
Instead of handling both sequence data and table data internally as
"table data", handle sequences separately under a "sequence set" type.
We already handled materialized view data differently, so it makes the
code somewhat cleaner to handle each relation kind separately at the top
level.

This does not change the output format, since there already was a
separate "SEQUENCE SET" archive entry type.  A noticeable difference is
that SEQUENCE SET entries now always appear after TABLE DATA entries.
And in parallel mode there is less sorting to do, because the sequence
data entries are no longer considered table data.

Reviewed-by: Anastasia Lubennikova <a.lubennikova@postgrespro.ru>
Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
2016-11-13 21:44:58 -05:00
Robert Haas 41124a91e6 pgbench: Allow the transaction log file prefix to be changed.
Masahiko Sawada, reviewed by Fabien Coelho and Beena Emerson, with
a bit of wordsmithing and cosmetic adjustment by me.
2016-11-09 16:28:43 -05:00
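Assuming the new switch is spelled --log-prefix and the database name is invented, a run might look like this:

    # write per-client transaction logs as run1_log.<pid> instead of the default pgbench_log.<pid>
    pgbench -c 4 -T 60 -l --log-prefix=run1_log mydb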
Robert Haas 577f0bdd2b psql: Tab completion for renaming enum values.
For ALTER TYPE .. RENAME, add "VALUE" to the list of possible
completions.  Complete ALTER TYPE .. RENAME VALUE with possible
enum values.  After that, complete with "TO".

Dagfinn Ilmari Mannsåker, reviewed by Artur Zakirov.
2016-11-08 16:30:51 -05:00
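The command being completed, with a hypothetical enum type and labels:

    psql -d mydb -c "ALTER TYPE mood RENAME VALUE 'sad' TO 'melancholy';"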
Noah Misch 650b967076 Change qr/foo$/m to qr/foo\n/m, for Perl 5.8.8.
In each case, absence of a trailing newline would itself constitute a
PostgreSQL bug.  Therefore, this slightly enhances the changed tests.
This works around a bug that last appeared in Perl 5.8.8, fixing
src/test/modules/test_pg_dump when run against that version.  Commit
e7293e3271 worked around the bug, but the
subsequent addition of test_pg_dump introduced affected code.  As that
commit had shown, slight increases in pattern complexity can suppress
the bug.  This commit edits qr/foo$/m patterns too complex to encounter
the bug today, for style consistency and robustness against unrelated
pattern changes.  Back-patch to 9.6, where test_pg_dump was introduced.

As of this writing, a fresh MSYS installation includes an affected Perl
5.8.8.  The Perl 5.8.8 in Red Hat Enterprise Linux 5.11 carries a patch
that renders it unaffected, but the Perl 5.8.5 of Red Hat Enterprise
Linux 4.4 is affected.
2016-11-07 20:27:30 -05:00
Peter Eisentraut 77517ba59f pg_upgrade: Add NLS
Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
2016-11-07 10:12:52 -05:00
Peter Eisentraut 48dbcbf22c pg_rewind pg_upgrade: Fix translation markers
In pg_log_v(), we need to translate the fmt before processing, not the
formatted message afterwards.
2016-11-07 09:31:26 -05:00
Kevin Grittner 927d7bb6b1 Improve tab completion for CREATE TRIGGER.
This includes support for the new REFERENCING clause.
2016-11-04 11:02:07 -05:00
Peter Eisentraut 69d590fffb pg_xlogdump: Add NLS
Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
2016-11-04 10:40:05 -04:00
Peter Eisentraut 59fa9d2d9d pg_test_timing: Add NLS
Also straighten out use of time unit abbreviations a bit.

Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
2016-11-04 10:40:05 -04:00
Peter Eisentraut a39255d766 pg_test_fsync: Add NLS
Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
2016-11-04 10:40:05 -04:00
Peter Eisentraut 632fbe772c pg_archivecleanup: Add NLS
Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
2016-11-04 10:40:05 -04:00
Peter Eisentraut a0f357e570 psql: Split up "Modifiers" column in \d and \dD
Make separate columns "Collation", "Nullable", "Default".

Reviewed-by: Kuntal Ghosh <kuntalghosh.2007@gmail.com>
2016-11-03 14:02:46 -04:00
Robert Haas 1d15d0db50 psql: Tab-complete LOCK [TABLE] ... IN {ACCESS|ROW|SHARE}.
Suggest the lock modes that begin with the word in question.

Thomas Munro, reviewed by Marllius Ribeiro.  Comments tweaked by me.
2016-11-03 11:42:13 -04:00
Magnus Hagander a775406ec4 Fix memory leak in tar file padding
Spotted by Coverity, patch by Michael Paquier
2016-10-30 14:10:39 +01:00
Peter Eisentraut 8c035e55c4 pg_dump: Simplify internal archive version handling
The ArchiveHandle structure contained the archive format version number
twice, once as a single field and once split into components.  Simplify
that by just keeping the single field and adding some macros to extract
the components.  Introduce some macros for composing version numbers, to
eliminate the repeated use of magic formulas.  Drop the unused trailing
zero byte from the run-time composite version representation.

reviewed by Tom Lane
2016-10-25 17:02:22 -04:00
Magnus Hagander 78d109150b Free walmethods before exiting
Not strictly necessary since we quit right after, but it could become
important in the future if we do restarts etc.

Michael Paquier with nitpicking from me
2016-10-25 19:00:12 +02:00
Magnus Hagander 8c46f0c9ce Don't fsync() files when --no-sync is specified
Michael Paquier
2016-10-25 19:00:01 +02:00
Magnus Hagander 2dde01ccbf Use ssize_t where signed results can happen
Noted by Alexander Korotkov
2016-10-24 20:10:18 +02:00
Magnus Hagander eade082b12 Rename walmethod fsync method to sync
Using the name fsync clashed with the #define we have on Windows that
redefines it to _commit. Naming it sync should remove that conflict.

Per all the Windows buildfarm members
2016-10-23 18:04:34 +02:00
Magnus Hagander a5c17c1dce Fix obviously too quickly applied fix to zlib issue 2016-10-23 16:07:31 +02:00
Magnus Hagander 9ae6713cdf Fix walmethods.c build without libz
Per numerous buildfarm members
2016-10-23 16:00:42 +02:00
Magnus Hagander d97a59a4c5 Remove extra comma at end of enum list
C99-specific feature, and wasn't intentional in the first place.

Per buildfarm member mylodon
2016-10-23 15:57:25 +02:00
Magnus Hagander 56c7d8d455 Allow pg_basebackup to stream transaction log in tar mode
This will write the received transaction log into a file called
pg_wal.tar(.gz) next to the other tarfiles instead of writing it to
base.tar. When using fetch mode, the transaction log is still written to
base.tar like before, and when used against a pre-10 server, the file
is named pg_xlog.tar.

To do this, implement a new concept of a "walmethod", which is
responsible for writing the WAL. Two implementations exist, one that
writes to a plain directory (which is also used by pg_receivexlog) and
one that writes to a tar file with optional compression.

Reviewed by Michael Paquier
2016-10-23 15:23:11 +02:00
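For example (the output directory is invented):

    # writes base.tar plus pg_wal.tar(.gz) side by side;
    # against a pre-10 server the WAL tarball is named pg_xlog.tar
    pg_basebackup -D /path/to/backup -F t -X stream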
Robert Haas f82ec32ac3 Rename "pg_xlog" directory to "pg_wal".
"xlog" is not a particularly clear abbreviation for "write-ahead log",
and it sometimes confuses users into believing that the contents of the
"pg_xlog" directory are not critical data, leading to unpleasant
consequences.  So, rename the directory to "pg_wal".

This patch modifies pg_upgrade and pg_basebackup to understand both
the old and new directory layouts; the former is necessary given the
purpose of the tool, while the latter merely avoids an unnecessary
backward-compatibility break.

We may wish to consider renaming other programs, switches, and
functions which still use the old "xlog" naming to also refer to
"wal".  However, that's still under discussion, so let's do just this
much for now.

Discussion: CAB7nPqTeC-8+zux8_-4ZD46V7YPwooeFxgndfsq5Rg8ibLVm1A@mail.gmail.com

Michael Paquier
2016-10-20 11:32:18 -04:00
Peter Eisentraut e5a9bcb529 Use pg_ctl promote -w in TAP tests
Switch TAP tests to use the new wait mode of pg_ctl promote.  This
allows avoiding extra logic with poll_query_until() to be sure that a
promoted standby is ready for read-write queries.

From: Michael Paquier <michael.paquier@gmail.com>
2016-10-19 09:18:50 -04:00
Peter Eisentraut 5d58c07a44 initdb pg_basebackup: Rename --noxxx options to --no-xxx
--noclean and --nosync were the only options spelled without a hyphen,
so change this for consistency with other options.  The options in
pg_basebackup have not been in a release, so we just rename them.  For
initdb, we retain the old variants.

Vik Fearing and me
2016-10-19 08:48:48 -04:00
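With the rename, the options are spelled like this (the target directories are invented; initdb still accepts the old --noclean/--nosync spellings):

    initdb --no-clean --no-sync -D /tmp/test_pgdata
    pg_basebackup --no-clean --no-sync -D /tmp/test_backup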
Peter Eisentraut caf936b09f pg_ctl: Add long option for -o
Now all normally used options are covered by long options as well.
2016-10-19 08:48:48 -04:00
Peter Eisentraut 0be22457d7 pg_ctl: Add long options for -w and -W
From: Vik Fearing <vik@2ndquadrant.fr>
2016-10-19 08:48:22 -04:00
Tom Lane c08521eb55 Remove dead code in pg_dump.
I'm not sure if this provision for "pg_backup" behaving a bit differently
from "pg_dump" ever did anything useful in a released version.  But it's
definitely dead code now.

Michael Paquier
2016-10-13 16:08:16 -04:00
Tom Lane 0a4bf6b192 Fix pg_dumpall regression test to be locale-independent.
The expected results in commit b4fc64578 seem to have been generated
in a non-C locale, which just points up the fact that the ORDER BY
clause was locale-sensitive.

Per buildfarm.
2016-10-13 10:46:22 -04:00
Andres Freund b4fc645787 Make pg_dumpall's database ACL query independent of hash table order.
Previously GRANT order on databases was not well defined, due to the use
of EXCEPT without an ORDER BY.  Add an ORDER BY, adapt test output.

I don't, at the moment, see reason to backpatch this.
2016-10-12 18:29:57 -07:00
Tom Lane c0a3b211bc pg_dump's getTypes() needn't retrieve typinput or typoutput anymore.
Commit 64f3524e2 removed the stanza of code that examined these values.
I failed to notice they were unnecessary because my compiler didn't
warn about the un-read variables.  Noted by Peter Eisentraut.
2016-10-12 15:11:31 -04:00
Tom Lane 64f3524e2c Remove pg_dump/pg_dumpall support for dumping from pre-8.0 servers.
The need for dumping from such ancient servers has decreased to about nil
in the field, so let's remove all the code that catered to it.  Aside
from removing a lot of boilerplate variant queries, this allows us to not
have to cope with servers that don't have (a) schemas or (b) pg_depend.
That means we can get rid of assorted squishy code around that.  There
may be some nonobvious additional simplifications possible, but this patch
already removes about 1500 lines of code.

I did not remove the ability for pg_restore to read custom-format archives
generated by these old versions (and light testing says that that does
still work).  If you have an old server, you probably also have a pg_dump
that will work with it; but if you have an old custom-format backup file,
that might be all you have.

It'd be possible at this point to remove fmtQualifiedId()'s version
argument, but I refrained since that would affect code outside pg_dump.

Discussion: <2661.1475849167@sss.pgh.pa.us>
2016-10-12 12:20:02 -04:00
Tom Lane 4806f26f9e Fix pg_dump to work against pre-9.0 servers again.
getBlobs' queries for pre-9.0 servers were broken in two ways:
the 7.x/8.x query uses DISTINCT so it can't have unspecified-type
NULLs in the target list, and both that query and the 7.0 one
failed to provide the correct output column labels, so that the
subsequent code to extract data from the PGresult would fail.

Back-patch to 9.6 where the breakage was introduced (by commit 23f34fa4b).

Amit Langote and Tom Lane

Discussion: <0a3e7a0e-37bd-8427-29bd-958135862f0a@lab.ntt.co.jp>
2016-10-07 09:51:18 -04:00
Heikki Linnakangas 0d4d7d6185 Don't allow both --source-server and --source-pgdata args to pg_rewind.
They are supposed to be mutually exclusive, but there was no check for
that.

Michael Banck

Discussion: <20161007103414.GD12247@nighthawk.caipicrew.dd-dns.de>
2016-10-07 14:35:17 +03:00
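The two source options are mutually exclusive; pick exactly one (paths and connection string are invented):

    # either point at a running source server ...
    pg_rewind --target-pgdata=/path/to/old_primary --source-server='host=new_primary dbname=postgres'

    # ... or at a stopped source data directory, but not both
    pg_rewind --target-pgdata=/path/to/old_primary --source-pgdata=/path/to/new_primary_data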
Heikki Linnakangas d7eb76b908 Disable synchronous commits in pg_rewind.
If you point pg_rewind to a server that is using synchronous replication,
with "pg_rewind --source-server=...", and the replication is not working
for some reason, pg_rewind will get stuck because it creates a temporary
table, which needs to be replicated. You could call broken replication a
pilot error, but pg_rewind is often used in special circumstances, when
there are changes to the replication setup.

We don't do any "real" updates, and we don't care about fsyncing or
replicating the operations on the temporary tables, so fix that by
setting synchronous_commit off.

Michael Banck, Michael Paquier. Backpatch to 9.5, where pg_rewind was
introduced.

Discussion: <20161005143938.GA12247@nighthawk.caipicrew.dd-dns.de>
2016-10-06 13:24:46 +03:00
Tom Lane 83c2492002 Enforce a specific order for probing library loadability in pg_upgrade.
pg_upgrade checks whether all the shared libraries used in the old cluster
are also available in the new one by issuing LOAD for each library name.
Previously, it cared not what order it did the LOADs in.  Ideally it
should not have to care, but currently the transform modules in contrib
fail unless both the language and datatype modules they depend on are
loaded first.  A backend-side solution for that looks possible but
probably not back-patchable, so as a stopgap measure, let's do the LOAD
tests in order by library name length.  That should fix the problem for
reasonably-named transform modules, eg "hstore_plpython" will be loaded
after both "hstore" and "plpython".  (Yeah, it's a hack.)

In a larger sense, having a predictable order of these probes is a good
thing, since it will make upgrades predictably work or not work in the
face of inter-library dependencies.  Also, this patch replaces O(N^2)
de-duplication logic with O(N log N) logic, which could matter in
installations with very many databases.  So I don't foresee reverting this
even after we have a proper fix for the library-dependency problem.

In passing, improve a couple of SQL queries used here.

Per complaint from Andrew Dunstan that pg_upgrade'ing the transform contrib
modules failed.  Back-patch to 9.5 where transform modules were introduced.

Discussion: <f7ac29f3-515c-2a44-21c5-ec925053265f@dunslane.net>
2016-10-03 10:07:49 -04:00
Tom Lane e8bdee2770 Add ALTER EXTENSION ADD/DROP ACCESS METHOD, and use it in pg_upgrade.
Without this, an extension containing an access method is not properly
dumped/restored during pg_upgrade --- the AM ends up not being a member
of the extension after upgrading.

Another oversight in commit 473b93287, reported by Andrew Dunstan.

Report: <f7ac29f3-515c-2a44-21c5-ec925053265f@dunslane.net>
2016-10-02 14:31:28 -04:00
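The new syntax, with hypothetical extension and access method names:

    psql -d mydb -c 'ALTER EXTENSION my_extension ADD ACCESS METHOD my_index_am;'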
Tom Lane 728ceba938 Avoid leaking FDs after an fsync failure.
Fixes errors introduced in commit bc34223bc, as detected by Coverity.

In passing, report ENOSPC for a short write while padding a new wal file in
open_walfile, make certain that close_walfile closes walfile in all cases,
and improve a couple of comments.

Michael Paquier and Tom Lane
2016-10-02 12:33:46 -04:00
Tom Lane f002ed2b8e Improve error reporting in pg_upgrade's file copying/linking/rewriting.
The previous design for this had copyFile(), linkFile(), and
rewriteVisibilityMap() returning strerror strings, with the caller
producing one-size-fits-all error messages based on that.  This made it
impossible to produce messages that described the failures with any degree
of precision, especially not short-read problems since those don't set
errno at all.

Since pg_upgrade has no intention of continuing after any error in this
area, let's fix this by just letting these functions call pg_fatal() for
themselves, making it easy for each point of failure to have a suitable
error message.  Taking this approach also allows dropping cleanup code
that was unnecessary and was often rather sloppy about preserving errno.
To not lose relevant info that was reported before, pass in the schema name
and table name of the current table so that they can be included in the
error reports.

An additional problem was the use of getErrorText(), which was flat out
wrong for all but a couple of call sites, because it unconditionally did
"_dosmaperr(GetLastError())" on Windows.  That's only appropriate when
reporting an error from a Windows-native API, which only a couple of
the callers were actually doing.  Thus, even the reported strerror string
would be unrelated to the actual failure in many cases on Windows.
To fix, get rid of getErrorText() altogether, and just have call sites
do strerror(errno) instead, since that's the way all the rest of our
frontend programs do it.  Add back the _dosmaperr() calls in the two
places where that's actually appropriate.

In passing, make assorted messages hew more closely to project style
guidelines, notably by removing initial capitals in not-complete-sentence
primary error messages.  (I didn't make any effort to clean up places
I didn't have another reason to touch, though.)

Per discussion of a report from Thomas Kellerer.  Back-patch to 9.6,
but no further; given the relative infrequency of reports of problems
here, it's not clear it's worth adapting the patch to older branches.

Patch by me, but with credit to Alvaro Herrera for spotting the issue
with getErrorText's misuse of _dosmaperr().

Discussion: <nsjrbh$8li$1@blaine.gmane.org>
2016-09-30 20:40:56 -04:00
Tom Lane 5afcd2aa74 Fix multiple portability issues in pg_upgrade's rewriteVisibilityMap().
This is new code in 9.6, and evidently we missed out testing it as
thoroughly as it should have been.  Bugs fixed here:

1. Use binary not text mode to open the files on Windows.  Before, if
the visibility map chanced to contain two bytes that looked like \r\n,
Windows' read() would convert that to \n, which both corrupts the map
data and causes the file to look shorter than it should.  Unless you
were *very* unlucky and had an exact multiple of 8K such occurrences
in each VM file, this would cause pg_upgrade to report a failure,
though with a rather obscure error message.

2. The code for copying rebuilt bytes into the output was simply wrong.
It chanced to work okay on little-endian machines but would emit the
bytes in the wrong order on big-endian, leading to silent corruption
of the visibility map data.

3. The code was careless about alignment of the working buffers.  Given
all three of an alignment-picky architecture, a compiler that chooses
to put the new_vmbuf[] local variable at an odd starting address, and
a checksum-enabled database, pg_upgrade would dump core.

Point one was reported by Thomas Kellerer, the other two detected by
code-reading.

Point two is much the nastiest of these issues from an impact standpoint,
though fortunately it affects only a minority of users.  The Windows issue
will definitely bite people, but it seems quite unlikely that there would
be undetected corruption from that.

In addition, I failed to resist the temptation to do some minor cosmetic
adjustments, mostly improving the comments.

It would be a good idea to try to improve the error reporting here, but
that seems like material for a separate patch.

Discussion: <nsjrbh$8li$1@blaine.gmane.org>
2016-09-30 20:40:55 -04:00
Magnus Hagander 3d39244e6e Retry opening new segments in pg_xlogdump --follow
There is a small window between when the server closes out the existing
segment and the new one is created. Put a loop around the open call in
this case to make sure we wait for the new file to actually appear.
2016-09-30 11:22:00 +02:00
Peter Eisentraut 6ed2d8584c pg_basebackup: Add --nosync option
This is useful for testing, similar to initdb's --nosync.

From: Michael Paquier <michael.paquier@gmail.com>
2016-09-29 12:00:00 -04:00
Peter Eisentraut bc34223bc1 pg_basebackup pg_receivexlog: Issue fsync more carefully
Several places weren't careful about fsyncing in the way that is needed.
See 1d4a0ab1 and 606e0f98 for details about the required fsyncs.

This adds a couple of functions in src/common/ that have equivalents
in the backend: durable_rename() and fsync_parent_path().

From: Michael Paquier <michael.paquier@gmail.com>
2016-09-29 12:00:00 -04:00
Peter Eisentraut bf5bb2e85b Move fsync routines of initdb into src/common/
The intention is to use those in other utilities such as pg_basebackup
and pg_receivexlog.

From: Michael Paquier <michael.paquier@gmail.com>
2016-09-29 12:00:00 -04:00
Peter Eisentraut 6ad8ac6026 Exclude additional directories in pg_basebackup
The list of files and directories that pg_basebackup excludes from the
backup was somewhat incomplete and unorganized.  Change that by driving
the exclusions from tables.  Clean up some code around it.  Also
document the exclusions in more detail so that users of pg_start_backup
can make use of it as well.

The contents of these directories are now excluded from the backup:
pg_dynshmem, pg_notify, pg_serial, pg_snapshots, pg_subtrans

Also fix a bug where pg_replslot or pg_stat_tmp being a symlink would
cause a corrupt tar header to be created.  Now such symlinks are
included in the backup as empty directories.  Bug found by Ashutosh
Sharma <ashu.coek88@gmail.com>.

From: David Steele <david@pgmasters.net>
Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
2016-09-28 12:00:00 -04:00
Peter Eisentraut e79e6c4da1 Fix CRC check handling in get_controlfile
The previous patch broke this by returning NULL for a failed CRC check,
which pg_controldata would then try to read.  Fix by returning the
result of the CRC check in a separate argument.

Michael Paquier and myself
2016-09-28 12:00:00 -04:00
Tom Lane 0109ab2760 Make struct ParallelSlot private within pg_dump/parallel.c.
The only field of this struct that other files have any need to touch
is the pointer to the TocEntry a worker is working on.  (Well,
pg_backup_archiver.c is actually looking at workerStatus too, but that
can be finessed by specifying that the TocEntry pointer is NULL for a
non-busy worker.)

Hence, move out the TocEntry pointers to a separate array within
struct ParallelState, and then we can make struct ParallelSlot private.

I noted the possibility of this previously, but hadn't got round to
actually doing it.

Discussion: <1188.1464544443@sss.pgh.pa.us>
2016-09-27 14:29:12 -04:00
Tom Lane fb03d08a89 Rationalize parallel dump/restore's handling of worker cmd/status messages.
The existing APIs for creating and parsing command and status messages are
rather messy; for example, archive-format modules have to provide code
for constructing command messages, which is entirely pointless since
the code to read them is hard-wired in WaitForCommands() and hence
no format-specific variation is actually possible.  But there's little
foreseeable reason to need format-specific variation anyway.

The situation for status messages is no better; at least those are both
constructed and parsed by format-specific code, but said code is quite
redundant since there's no actual need for format-specific variation.

To add insult to injury, the first API involves returning pointers to
static buffers, which is bad, while the second involves returning pointers
to malloc'd strings, which is safer but randomly inconsistent.

Hence, get rid of the MasterStartParallelItem and MasterEndParallelItem
APIs, and instead write centralized functions that construct and parse
command and status messages.  If we ever do need more flexibility, these
functions can be the standard implementations of format-specific
callback methods, but that's a long way off if it ever happens.

Tom Lane, reviewed by Kevin Grittner

Discussion: <17340.1464465717@sss.pgh.pa.us>
2016-09-27 13:56:04 -04:00
Tom Lane b7b8cc0cfc Redesign parallel dump/restore's wait-for-workers logic.
The ListenToWorkers/ReapWorkerStatus APIs were messy and hard to use.
Instead, make DispatchJobForTocEntry register a callback function that
will take care of state cleanup, doing whatever had been done by the caller
of ReapWorkerStatus in the old design.  (This callback is essentially just
the old mark_work_done function in the restore case, and a trivial test for
worker failure in the dump case.)  Then we can have ListenToWorkers call
the callback immediately on receipt of a status message, and return the
worker to WRKR_IDLE state; so the WRKR_FINISHED state goes away.

This allows us to design a unified wait-for-worker-messages loop:
WaitForWorkers replaces EnsureIdleWorker and EnsureWorkersFinished as well
as the mess in restore_toc_entries_parallel.  Also, we no longer need the
fragile API spec that the caller of DispatchJobForTocEntry is responsible
for ensuring there's an idle worker, since DispatchJobForTocEntry can just
wait until there is one.

In passing, I got rid of the ParallelArgs struct, which was a net negative
in terms of notational verboseness, and didn't seem to be providing any
noticeable amount of abstraction either.

Tom Lane, reviewed by Kevin Grittner

Discussion: <1188.1464544443@sss.pgh.pa.us>
2016-09-27 13:22:39 -04:00
Alvaro Herrera 51c3e9fade Include <sys/select.h> where needed
<sys/select.h> is required by POSIX.1-2001 to get the prototype of
select(2), but nearly no systems enforce that because older standards
let you get away with including some other headers.  Recent OpenBSD
hacking has removed that frail touch of friendliness, however, which
broke some compiles; fix all the way back to 9.1 by adding the required
standard header.  Only vacuumdb.c was reported to fail, but it seems
easier to fix the whole lot in one fell swoop.

Per bug #14334 by Sean Farrell.
2016-09-27 01:05:21 -03:00
Tom Lane 9779bda86c Fix newly-introduced issues in pgbench.
The result of FD_ISSET() doesn't necessarily fit in a bool, though
assigning it to one might accidentally work depending on platform and which
socket FD number is being inquired of.  Rewrite to test it with if(),
rather than making any specific assumption about the result width,
to match the way every other such call in PG is written.

Don't break out of the input_mask-filling loop after finding the first
client that we're waiting for results from.  That mostly breaks parallel
query management.

Also, if we choose not to call select(), be sure to clear out any bits
the mask-filling loop might have set, so that we don't accidentally call
doCustom for clients we don't know have input.  Doing so would likely
be harmless, but it's a waste of cycles and doesn't seem to be intended.

Make this_usec wide enough.  (Yeah, the value would usually fit in an
int, but then why are we using int64 everywhere else?)

Minor cosmetic adjustments, mostly comment improvements.

Problems introduced by commit 12788ae49.  The first issue was discovered
by buildfarm testing, the others by code review.
2016-09-26 20:23:50 -04:00
Heikki Linnakangas 12788ae49e Refactor script execution state machine in pgbench.
The doCustom() function had grown into quite a mess. Rewrite it, in a more
explicit state machine style, for readability.

This also fixes one minor bug: if a script consisted entirely of meta
commands, doCustom() never returned to the caller, so progress reports
with the -P option were not printed. I don't want to backpatch this
refactoring, and the bug is quite insignificant, so only commit this to
master, and leave the bug unfixed in back-branches.

Review and original bug report by Fabien Coelho.

Discussion: <alpine.DEB.2.20.1607090850120.3412@sto>
2016-09-26 10:56:02 +03:00
Tom Lane da6c4f6ca8 Refer to OS X as "macOS", except for the port name which is still "darwin".
We weren't terribly consistent about whether to call Apple's OS "OS X"
or "Mac OS X", and the former is probably confusing to people who aren't
Apple users.  Now that Apple has rebranded it "macOS", follow their lead
to establish a consistent naming pattern.  Also, avoid the use of the
ancient project name "Darwin", except as the port code name which does not
seem desirable to change.  (In short, this patch touches documentation and
comments, but no actual code.)

I didn't touch contrib/start-scripts/osx/, either.  I suspect those are
obsolete and due for a rewrite, anyway.

I dithered about whether to apply this edit to old release notes, but
those were responsible for quite a lot of the inconsistencies, so I ended
up changing them too.  Anyway, Apple's being ahistorical about this,
so why shouldn't we be?
2016-09-25 15:40:57 -04:00
Tom Lane 12f6eadffd Fix incorrect logic for excluding range constructor functions in pg_dump.
Faulty AND/OR nesting in the WHERE clause of getFuncs' SQL query led to
dumping range constructor functions if they are part of an extension
and we're in binary-upgrade mode.  Actually, we don't want to dump them
separately even then, since CREATE TYPE AS RANGE will create the range's
constructor functions regardless.  Per report from Andrew Dunstan.

It looks like this mistake was introduced by me, in commit b985d4877, in
perhaps-overzealous refactoring to reduce code duplication.  I'm suitably
embarrassed.

Report: <34854939-02d7-f591-5677-ce2994104599@dunslane.net>
2016-09-23 13:49:26 -04:00
Peter Eisentraut 6fa51c79c7 pg_ctl: Add promote wait option to help output
pointed out by Masahiko Sawada <sawada.mshk@gmail.com>
2016-09-23 12:00:00 -04:00
Peter Eisentraut 8b845520fb Add tests for various connection string issues
Add tests for consistent support of connection strings in frontend
programs as well as proper handling of unusual characters in database
and user names.  These tests were developed for the issues of
CVE-2016-5424.

To allow testing of names with spaces, change the pg_regress
command-line options --create-role and --dbname to split their arguments
by comma only, not space or comma as before.  Only commas were actually
used in existing uses.

Noah Misch, Michael Paquier, Peter Eisentraut
2016-09-22 12:00:00 -04:00
Peter Eisentraut e7010ce479 pg_ctl: Add wait option to promote action
When waiting is selected for the promote action, look into pg_control
until the state changes, then use the PQping-based waiting until the
server is reachable.

Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
2016-09-21 12:00:00 -04:00
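With the wait option, promotion does not return until the promoted server is reachable (the data directory path is invented):

    pg_ctl promote -D /path/to/standby_data -w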
Peter Eisentraut c1dc51d484 pg_ctl: Detect current standby state from pg_control
pg_ctl used to determine whether a server was in standby mode by looking
for a recovery.conf file.  With this change, it instead looks into
pg_control, which is potentially more accurate.  There are also
occasional discussions about removing recovery.conf, so this removes one
dependency.

Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
2016-09-21 12:00:00 -04:00
Peter Eisentraut eb5089a05b pg_ctl: Add tests for promote action
Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
2016-09-21 12:00:00 -04:00
Heikki Linnakangas 2a7f4f7643 Print test parameters like "foo: 123", and results like "foo = 123".
The way "latency average" was printed was differently if it was calculated
from the overall run time or was measured on a per-transaction basis.
Also, the per-script weight is a test parameter, rather than a result, so
use the "weight: %f" style for that.

Backpatch to 9.6, since the inconsistency on "latency average" was
introduced there.

Fabien Coelho

Discussion: <alpine.DEB.2.20.1607131015370.7486@sto>
2016-09-21 13:24:13 +03:00
Heikki Linnakangas 65c6556384 Fix pgbench's calculation of average latency, when -T is not used.
If the test duration was given as a number of transactions (-t or no
option), rather than as a duration (-T), the latency average was always
printed as 0.
It has been broken ever since the display of latency average was added,
in 9.4.

Fabien Coelho

Discussion: <alpine.DEB.2.20.1607131015370.7486@sto>
2016-09-21 13:14:48 +03:00
Peter Eisentraut 46b55e7f85 pg_restore: Add -N option to exclude schemas
This is similar to the -N option in pg_dump, except that it doesn't take
a pattern, just like the existing -n option in pg_restore.

From: Michael Banck <michael.banck@credativ.de>
2016-09-20 12:00:00 -04:00
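For example, assuming a custom-format archive and a schema to skip (all names invented):

    # restore everything except objects in schema "legacy"
    pg_restore -d newdb -N legacy mydb.dump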
Heikki Linnakangas 40c3fe4980 Fix latency calculation when there are \sleep commands in the script.
We can't use txn_scheduled to hold the sleep-until time for \sleep, because
that interferes with calculation of the latency of the transaction as a whole.

Backpatch to 9.4, where this bug was introduced.

Fabien COELHO

Discussion: <alpine.DEB.2.20.1608231622170.7102@lancre>
2016-09-19 22:55:43 +03:00
Peter Eisentraut 9083353b15 pg_basebackup: Clean created directories on failure
Like initdb, clean up created data and xlog directories, unless the new
-n/--noclean option is specified.

Tablespace directories are not cleaned up, but a message is written
about that.

Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
2016-09-12 12:00:00 -04:00
Kevin Grittner 63c1a87194 Fix recent commit for tab-completion of database template.
The details of commit 52803098ab were
based on a misunderstanding of how role inheritance allows use of a
database as a template.  While the CREATEDB privilege is not inherited,
the database ownership privileges are.

Pointed out by Vitaly Burovoy and Tom Lane.
Fix provided by Tom Lane, reviewed by Vitaly Burovoy.
2016-09-12 09:22:57 -05:00
Kevin Grittner 52803098ab psql tab completion for CREATE DATABASE ... TEMPLATE ...
Sehrope Sarkuni, reviewed by Merlin Moncure & Vitaly Burovoy
with some editing by me
2016-09-11 15:37:27 -05:00
Tom Lane df5d9bb8d5 Allow pg_dump to dump non-extension members of an extension-owned schema.
Previously, if a schema was created by an extension, a normal pg_dump run
(not --binary-upgrade) would summarily skip every object in that schema.
In a case where an extension creates a schema and then users create other
objects within that schema, this does the wrong thing: we want pg_dump
to skip the schema but still create the non-extension-owned objects.

There's no easy way to fix this pre-9.6, because in earlier versions the
"dump" status for a schema is just a bool and there's no way to distinguish
"dump me" from "dump my members".  However, as of 9.6 we do have enough
state to represent that, so this is a simple correction of the logic in
selectDumpableNamespace.

In passing, make some cosmetic fixes in nearby code.

Martín Marqués, reviewed by Michael Paquier

Discussion: <99581032-71de-6466-c325-069861f1947d@2ndquadrant.com>
2016-09-08 13:12:01 -04:00
Tom Lane e97e9c57bd Don't print database's tablespace in pg_dump -C --no-tablespaces output.
If the database has a non-default tablespace, we emitted a TABLESPACE
clause in the CREATE DATABASE command emitted by -C, even if
--no-tablespaces was also specified.  This seems wrong, and it's
inconsistent with what pg_dumpall does, so change it.  Per bug #14315
from Danylo Hlynskyi.

Back-patch to 9.5.  The bug is much older, but it'd be a more invasive
change before 9.5 because dumpDatabase() hasn't got an easy way to get
to the outputNoTablespaces flag.  Doesn't seem worth the work given
the lack of previous complaints.

Report: <20160908081953.1402.75347@wrigleys.postgresql.org>
2016-09-08 10:48:03 -04:00
Tom Lane a2ee579b6d Repair whitespace in initdb message.
What used to be four spaces somehow turned into a tab and a couple of
spaces in commit a00c58314, no doubt from overhelpful emacs autoindent.
Noted by Peter Eisentraut.
2016-09-06 13:26:43 -04:00
Tom Lane 6591f4226c Improve readability of the output of psql's \timing command.
In addition to the existing decimal-milliseconds output value,
display the same value in mm:ss.fff format if it exceeds one second.
Tack on hours and even days fields if the interval is large enough.
This avoids needing mental arithmetic to convert the values into
customary time units.

Corey Huinker, reviewed by Gerdan Santos; bikeshedding by many

Discussion: <CADkLM=dbC4R8sbbuFXQVBFWoJGQkTEW8RWnC0PbW9nZsovZpJQ@mail.gmail.com>
2016-09-03 15:29:03 -04:00
Kevin Grittner 76f9dd4fa8 Improve tab completion for BEGIN & START|SET TRANSACTION.
Andreas Karlsson with minor change by me for SET TRANSACTION
SNAPSHOT.
2016-09-01 16:10:30 -05:00
Tom Lane 052cc223d5 Fix a bunch of places that called malloc and friends with no NULL check.
Where possible, use palloc or pg_malloc instead; otherwise, insert
explicit NULL checks.

Generally speaking, these are places where an actual OOM is quite
unlikely, either because they're in client programs that don't
allocate all that much, or they're very early in process startup
so that we'd likely have had a fork() failure instead.  Hence,
no back-patch, even though this is nominally a bug fix.

Michael Paquier, with some adjustments by me

Discussion: <CAB7nPqRu07Ot6iht9i9KRfYLpDaF2ZuUv5y_+72uP23ZAGysRg@mail.gmail.com>
2016-08-30 18:22:43 -04:00
Tom Lane 9daec77e16 Simplify correct use of simple_prompt().
The previous API for this function had it returning a malloc'd string.
That meant that callers had to check for NULL return, which few of them
were doing, and it also meant that callers had to remember to free()
the string later, which required extra logic in most cases.

Instead, make simple_prompt() write into a buffer supplied by the caller.
Anywhere that the maximum required input length is reasonably small,
which is almost all of the callers, we can just use a local or static
array as the buffer instead of dealing with malloc/free.

A fair number of callers used "pointer == NULL" as a proxy for "haven't
requested the password yet".  Maintaining the same behavior requires
adding a separate boolean flag for that, which adds back some of the
complexity we save by removing free()s.  Nonetheless, this nets out
at a small reduction in overall code size, and considerably less code
than we would have had if we'd added the missing NULL-return checks
everywhere they were needed.

In passing, clean up the API comment for simple_prompt() and get rid
of a very-unnecessary malloc/free in its Windows code path.

This is nominally a bug fix, but it does not seem worth back-patching,
because the actual risk of an OOM failure in any of these places seems
pretty tiny, and all of them are client-side not server-side anyway.

This patch is by me, but it owes a great deal to Michael Paquier
who identified the problem and drafted a patch for fixing it the
other way.

Discussion: <CAB7nPqRu07Ot6iht9i9KRfYLpDaF2ZuUv5y_+72uP23ZAGysRg@mail.gmail.com>
2016-08-30 17:02:02 -04:00
Tom Lane 37f6fd1eaa Fix initdb misbehavior when user mis-enters superuser password.
While testing simple_prompt() revisions, I happened to notice that
current initdb behaves rather badly when --pwprompt is specified and
the user miskeys the second password.  It complains about the mismatch,
does "rm -rf" on the data directory, and exits.  The problem is that
since commit c4a8812cf, there's a standalone backend sitting waiting
for commands at that point.  It gets unhappy about its datadir having
gone away, and spews a PANIC message at the user, which is not nice.
(And the shell then adds to the mess with meaningless bleating about a
core dump...)  We don't really want that sort of thing to happen unless
there's an internal failure in initdb, which this surely is not.

The best fix seems to be to move the collection of the password
earlier, so that it's done essentially as part of argument collection,
rather than at the rather ad-hoc time it was done before.

Back-patch to 9.6 where the problem was introduced.
2016-08-30 15:25:01 -04:00
Alvaro Herrera 8e1e3f958f Split hash.h → hash_xlog.h
Since the hash AM is going to be revamped to have WAL, this is a good
opportunity to clean up the include file a little bit to avoid including
a lot of extra stuff in the future.

Author: Amit Kapila
2016-08-29 18:55:49 -03:00
Simon Riggs 49340627f9 Fix pg_receivexlog --synchronous
Make pg_receivexlog work correctly with --synchronous without slots

Backpatch to 9.5

Gabriele Bartolini, reviewed by Michael Paquier and Simon Riggs
2016-08-29 12:16:18 +01:00
Noah Misch 0395198728 Build libpgfeutils before src/bin/pg_basebackup programs.
Oversight in commit 9132c01429.
2016-08-23 23:40:38 -04:00
Noah Misch b6418a0919 Build libpgfeutils before pg_isready.
Every program having -lpgfeutils in LDFLAGS must have this dependency,
whether or not the program uses a libpgfeutils symbol.  Back-patch to
9.6, where libpgfeutils was introduced.
2016-08-23 23:40:38 -04:00
Tom Lane 234309fa87 initdb now needs submake-libpq and submake-libpgfeutils.
More fallout from commit a00c58314.  Pointed out by Michael Paquier.
2016-08-22 08:01:12 -04:00
Noah Misch 9132c01429 Retire escapeConnectionParameter().
It is redundant with appendConnStrVal(), which became an extern function
in commit 41f18f021a.  This changes the
handling of out-of-memory and of certain inputs for which quoting is
optional, but pg_basebackup has no need for unusual treatment thereof.
2016-08-21 22:05:57 -04:00
Tom Lane a00c583147 Make initdb's suggested "pg_ctl start" command line more reliable.
The original coding here was not nearly careful enough about quoting
special characters, and it didn't get corner cases right for constructing
the pg_ctl path either.  Use join_path_components() and appendShellString()
to do it honestly, so that the string will more likely work if blindly
copied-and-pasted.

While at it, teach appendShellString() not to quote strings that clearly
don't need it, so that the output from initdb doesn't become uglier than
it was before in typical cases where quoting is not needed.

Ryan Murphy, reviewed by Michael Paquier and myself

Discussion: <CAHeEsBeAe1FeBypT3E8R1ZVZU0e8xv3A-7BHg6bEOi=jZny2Uw@mail.gmail.com>
2016-08-20 15:05:25 -04:00
Tom Lane 6471045230 Allow empty queries in pgbench.
This might have been too much of a foot-gun before 9.6, but with the
new commands-end-at-semicolons parsing rule, the only way to get an
empty query into a script is to explicitly write an extra ";".
So we may as well allow the case.

Fabien Coelho

Patch: <alpine.DEB.2.20.1607090922170.3412@sto>
2016-08-19 17:32:59 -04:00
Tom Lane c5d4f40cb5 Update line count totals for psql help displays.
As usual, we've been pretty awful about maintaining these counts.
They're not all that critical, perhaps, but let's get them right
at release time.  Also fix 9.5, which I notice is just as bad.
It's probably wrong further back, but the lack of --help=foo
options before 9.5 makes it too painful to count.
2016-08-18 16:04:35 -04:00
Tom Lane 8019b5a89c Improve psql's tab completion for \l.
Offer a list of database names; formerly no help was offered.

Ian Barwick, reviewed by Gerdan Santos

Patch: <5724132E.1030804@2ndquadrant.com>
2016-08-18 11:29:16 -04:00
Tom Lane 49917dbd76 Improve psql's tab completion for ALTER EXTENSION foo UPDATE ...
Offer a list of available versions for that extension.  Formerly, since
there was no special support for this, it triggered off the UPDATE
keyword and offered a list of table names --- not too helpful.

Jeff Janes, reviewed by Gerdan Santos

Patch: <CAMkU=1z0gxEOLg2BWa69P4X4Ot8xBxipGUiGkXe_tC+raj79-Q@mail.gmail.com>
2016-08-18 11:17:10 -04:00
Magnus Hagander a79a685622 Update Windows timezone mapping from Windows 7 and 10
This adds a couple of new timezones that are present in the newer
versions of Windows. It also updates comments to reference UTC rather
than GMT, as this change has been made in Windows.

Michael Paquier
2016-08-18 12:32:42 +02:00
Magnus Hagander 0921554657 Disable update_process_title by default on Windows
The performance overhead of this can be significant on Windows, and most
people don't have the tools to view it anyway as Windows does not have
native support for process titles.

Discussion: <0A3221C70F24FB45833433255569204D1F5BE3E8@G01JPEXMBYT05>

Takayuki Tsunakawa
2016-08-17 10:43:16 +02:00
Tom Lane 7f61fd10ce Fix assorted places in psql to print version numbers >= 10 in new style.
This is somewhat cosmetic, since as long as you know what you are looking
at, "10.0" is a serviceable substitute for "10".  But there is a potential
for confusion between version numbers with minor numbers and those without
--- we don't want people asking "why is psql saying 10.0 when my server is
10.2".  Therefore, back-patch as far as practical, which turns out to be
9.3.  I could have redone the patch to use fprintf(stderr) in place of
psql_error(), but it seems more work than is warranted for branches that
will be EOL or nearly so by the time v10 comes out.

Although only psql seems to contain any code that needs this, I chose
to put the support function into fe_utils, since it seems likely we'll
need it in other client programs in future.  (In 9.3-9.5, use dumputils.c,
the predecessor of fe_utils/string_utils.c.)

In HEAD, also fix the backend code that whines about loadable-library
version mismatch.  I don't see much need to back-patch that.
2016-08-16 15:58:45 -04:00
Tom Lane ca9112a424 Stamp HEAD as 10devel.
This is a good bit more complicated than the average new-version stamping
commit, because it includes various adjustments in pursuit of changing
from three-part to two-part version numbers.  It's likely some further
work will be needed around that change; but this is enough to get through
the regression tests, at least in Unix builds.

Peter Eisentraut and Tom Lane
2016-08-15 13:49:49 -04:00
Tom Lane b5bce6c1ec Final pgindent + perltidy run for 9.6. 2016-08-15 13:42:51 -04:00
Peter Eisentraut 34927b2920 Translation updates
Source-Git-URL: git://git.postgresql.org/git/pgtranslation/messages.git
Source-Git-Hash: cda21c1d7b160b303dc21dfe9d4169f2c8064c60
2016-08-08 11:08:00 -04:00
Noah Misch fcd15f1358 Obstruct shell, SQL, and conninfo injection via database and role names.
Due to simplistic quoting and confusion of database names with conninfo
strings, roles with the CREATEDB or CREATEROLE option could escalate to
superuser privileges when a superuser next ran certain maintenance
commands.  The new coding rule for PQconnectdbParams() calls, documented
at conninfo_array_parse(), is to pass expand_dbname=true and wrap
literal database names in a trivial connection string.  Escape
zero-length values in appendConnStrVal().  Back-patch to 9.1 (all
supported versions).

Nathan Bossart, Michael Paquier, and Noah Misch.  Reviewed by Peter
Eisentraut.  Reported by Nathan Bossart.

Security: CVE-2016-5424
2016-08-08 10:07:46 -04:00
Noah Misch 41f18f021a Promote pg_dumpall shell/connstr quoting functions to src/fe_utils.
Rename these newly-extern functions with terms more typical of their new
neighbors.  No functional changes; a subsequent commit will use them in
more places.  Back-patch to 9.1 (all supported versions).  Back branches
lack src/fe_utils, so instead rename the functions in place; the
subsequent commit will copy them into the other programs using them.

Security: CVE-2016-5424
2016-08-08 10:07:46 -04:00
Noah Misch bd65371851 Fix Windows shell argument quoting.
The incorrect quoting may have permitted arbitrary command execution.
At a minimum, it gave broader control over the command line to actors
supposed to have control over a single argument.  Back-patch to 9.1 (all
supported versions).

Security: CVE-2016-5424
2016-08-08 10:07:46 -04:00
Noah Misch 142c24c234 Reject, in pg_dumpall, names containing CR or LF.
These characters prematurely terminate Windows shell command processing,
causing the shell to execute a prefix of the intended command.  The
chief alternative to rejecting these characters was to bypass the
Windows shell with CreateProcess(), but the ability to use such names
has little value.  Back-patch to 9.1 (all supported versions).

This change formally revokes support for these characters in database
names and roles names.  Don't document this; the error message is
self-explanatory, and too few users would benefit.  A future major
release may forbid creation of databases and roles so named.  For now,
check only at known weak points in pg_dumpall.  Future commits will,
without notice, reject affected names from other frontend programs.

Also extend the restriction to pg_dumpall --dbname=CONNSTR arguments and
--file arguments.  Unlike the effects on role name arguments and
database names, this does not reflect a broad policy change.  A
migration to CreateProcess() could lift these two restrictions.

Reviewed by Peter Eisentraut.

Security: CVE-2016-5424
2016-08-08 10:07:46 -04:00
Noah Misch c400717172 Field conninfo strings throughout src/bin/scripts.
These programs nominally accepted conninfo strings, but they would
proceed to use the original dbname parameter as though it were an
unadorned database name.  This caused "reindexdb dbname=foo" to issue an
SQL command that always failed, and other programs printed a conninfo
string in error messages that purported to print a database name.  Fix
both problems by using PQdb() to retrieve actual database names.
Continue to print the full conninfo string when reporting a connection
failure.  It is informative there, and if the database name is the sole
problem, the server-side error message will include the name.  Beyond
those user-visible fixes, this allows a subsequent commit to synthesize
and use conninfo strings without that implementation detail leaking into
messages.  As a side effect, the "vacuuming database" message now
appears after, not before, the connection attempt.  Back-patch to 9.1
(all supported versions).

Reviewed by Michael Paquier and Peter Eisentraut.

Security: CVE-2016-5424
2016-08-08 10:07:46 -04:00
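
Roughly, with this fix a conninfo string behaves as one would expect (the
connection parameters and the database name "foo" are placeholders):

    # the positional argument may now be a conninfo string; the actual
    # database name is taken from the established connection via PQdb()
    reindexdb "host=localhost port=5432 dbname=foo"
    vacuumdb "dbname=foo"
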
Noah Misch 9d924e9a64 Introduce a psql "\connect -reuse-previous=on|off" option.
The decision to reuse values of parameters from a previous connection
has been based on whether the new target is a conninfo string.  Add this
means of overriding that default.  This feature arose as one component
of a fix for security vulnerabilities in pg_dump, pg_dumpall, and
pg_upgrade, so back-patch to 9.1 (all supported versions).  In 9.3 and
later, comment paragraphs that required update had already-incorrect
claims about behavior when no connection is open; fix those problems.

Security: CVE-2016-5424
2016-08-08 10:07:46 -04:00
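
A quick sketch of the new switch; "otherdb" and the connection URI are
hypothetical, and psql -c is used here only to keep the examples one-liners:

    # keep host/port/user from the previous connection, switch only the database
    psql -d postgres -c '\connect -reuse-previous=on otherdb'
    # take everything from the connection string instead of reusing anything
    psql -d postgres -c '\connect -reuse-previous=off postgresql://replica.example.com/postgres'
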
Noah Misch 984e5beb38 Sort out paired double quotes in \connect, \password and \crosstabview.
In arguments, these meta-commands wrongly treated each pair as closing
the double quoted string.  Make the behavior match the documentation.
This is a compatibility break, but I more expect to find software with
untested reliance on the documented behavior than software reliant on
today's behavior.  Back-patch to 9.1 (all supported versions).

Reviewed by Tom Lane and Peter Eisentraut.

Security: CVE-2016-5424
2016-08-08 10:07:46 -04:00
Tom Lane e2e95f5ef3 Fix pg_dump's handling of public schema with both -c and -C options.
Since -c plus -C requests dropping and recreating the target database
as a whole, not dropping individual objects in it, we should assume that
the public schema already exists and need not be created.  The previous
coding considered only the state of the -c option, so it would emit
"CREATE SCHEMA public" anyway, leading to an unexpected error in restore.

Back-patch to 9.2.  Older versions did not accept -c with -C so the
issue doesn't arise there.  (The logic being patched here dates to 8.0,
cf commit 2193121fa, so it's not really wrong that it didn't consider
the case at the time.)

Note that versions before 9.6 will still attempt to emit REVOKE/GRANT
on the public schema; but that happens without -c/-C too, and doesn't
seem to be the focus of this complaint.  I considered extending this
stanza to also skip the public schema's ACL, but that would be a
misfeature, as it'd break cases where users intentionally changed that
ACL.  The real fix for this aspect is Stephen Frost's work to not dump
built-in ACLs, and that's not going to get back-ported.

Per bugs #13804 and #14271.  Solution found by David Johnston and later
rediscovered by me.

Report: <20151207163520.2628.95990@wrigleys.postgresql.org>
Report: <20160801021955.1430.47434@wrigleys.postgresql.org>
2016-08-02 12:49:40 -04:00
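
A minimal sketch of the option combination in question (database and file
names are placeholders):

    # emit DROP DATABASE / CREATE DATABASE and reconnect; with this fix the
    # script no longer emits CREATE SCHEMA public for the recreated database
    pg_dump -c -C -f mydb.sql mydb
    # restore while connected to some other database, e.g. postgres
    psql -d postgres -f mydb.sql
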
Fujii Masao 74d8c95b74 Fix pg_basebackup so that it accepts 0 as a valid compression level.
The help message for pg_basebackup specifies that the numbers 0 through 9
are accepted as valid values of the -Z option. But, previously, -Z 0 was
rejected as an invalid compression level.

Per discussion, it's better to make pg_basebackup treat 0 as valid
compression level meaning no compression, like pg_dump.

Back-patch to all supported versions.

Reported-By: Jeff Janes
Reviewed-By: Amit Kapila
Discussion: CAMkU=1x+GwjSayc57v6w87ij6iRGFWt=hVfM0B64b1_bPVKRqg@mail.gmail.com
2016-08-01 17:36:14 +09:00
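
For example, something along these lines is now accepted (the target
directory is a placeholder):

    # tar-format base backup; -Z 0 now means "no compression", as in pg_dump
    pg_basebackup -D /var/backups/base -F t -Z 0
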
Stephen Frost f9e439b1ca Correctly handle owned sequences with extensions
With the refactoring of pg_dump to handle components, getOwnedSeqs needs
to be a bit more intelligent regarding which components to dump when.
Specifically, we can't simply use the owning table's components as the
set of components to dump, since the table might include only certain
components while all components of the sequence should be dumped; for
example, when the table is a member of an extension while the sequence
is not.

Handle this by combining the set of components to be dumped for the
sequence explicitly and those to be dumped for the table when setting
the components to be dumped for the sequence.

Also add a number of regression tests around this to, hopefully, catch
any future changes which break the expected behavior.

Discovered by: Philippe BEAUDOIN
Reviewed by: Michael Paquier
2016-07-31 10:57:15 -04:00
Tom Lane ed0b228d7a Guard against empty buffer in gets_fromFile()'s check for a newline.
Per the fgets() specification, it cannot return without reading some data
unless it reports EOF or error.  So the code here assumed that the data
buffer would necessarily be nonempty when we go to check for a newline
having been read.  However, Agostino Sarubbo noticed that this could fail
to be true if the first byte of the data is a NUL (\0).  The fgets() API
doesn't really work for embedded NULs, which is something I don't feel
any great need for us to worry about since we generally don't allow NULs
in SQL strings anyway.  But we should not access off the end of our own
buffer if the case occurs.  Normally this would just be a harmless read,
but if you were unlucky the byte before the buffer would contain '\n'
and we'd overwrite it with '\0', and if you were really unlucky that
might be valuable data and psql would crash.

Agostino reported this to pgsql-security, but after discussion we concluded
that it isn't worth treating as a security bug; if you can control the
input to psql you can do far more interesting things than just maybe-crash
it.  Nonetheless, it is a bug, so back-patch to all supported versions.
2016-07-28 18:57:24 -04:00
Tom Lane d9e74959a7 Register atexit hook only once in pg_upgrade.
start_postmaster() registered stop_postmaster_atexit as an atexit(3)
callback each time through, although the obvious intention was to do
so only once per program run.  The extra registrations were harmless,
so long as we didn't exceed ATEXIT_MAX, but still it's a bug.

Artur Zakirov, with bikeshedding by Kyotaro Horiguchi and me

Discussion: <d279e817-02b5-caa6-215f-cfb05dce109a@postgrespro.ru>
2016-07-28 11:39:10 -04:00
Peter Eisentraut 7d67606569 Translation updates
Source-Git-URL: git://git.postgresql.org/git/pgtranslation/messages.git
Source-Git-Hash: 3d71988dffd3c0798a8864c55ca4b7833b48abb1
2016-07-18 12:07:49 -04:00
Tom Lane 18555b1323 Establish conventions about global object names used in regression tests.
To ensure that "make installcheck" can be used safely against an existing
installation, we need to be careful about what global object names
(database, role, and tablespace names) we use; otherwise we might
accidentally clobber important objects.  There's been a weak consensus that
test databases should have names including "regression", and that test role
names should start with "regress_", but we didn't have any particular rule
about tablespace names; and neither of the other rules was followed with
any consistency either.

This commit moves us a long way towards having a hard-and-fast rule that
regression test databases must have names including "regression", and that
test role and tablespace names must start with "regress_".  It's not
completely there because I did not touch some test cases in rolenames.sql
that test creation of special role names like "session_user".  That will
require some rethinking of exactly what we want to test, whereas the intent
of this patch is just to hit all the cases in which the needed renamings
are cosmetic.

There is no enforcement mechanism in this patch either, but if we don't
add one we can expect that the tests will soon be violating the convention
again.  Again, that's not such a cosmetic change and it will require
discussion.  (But I did use a quick-hack enforcement patch to find these
cases.)

Discussion: <16638.1468620817@sss.pgh.pa.us>
2016-07-17 18:42:43 -04:00
Stephen Frost 47f5bb9f53 Correctly dump database and tablespace ACLs
Dump out the appropriate GRANT/REVOKE commands for databases and
tablespaces from pg_dumpall to replicate what the current state is.

This was broken during the changes to buildACLCommands for 9.6+
servers for pg_init_privs.
2016-07-17 09:04:46 -04:00
Magnus Hagander 00e0b67a58 Remove reference to range mode in pg_xlogdump error
pg_xlogdump doesn't have any other mode, so it's just confusing to
include this in the error message as it indicates there might be another
mode.
2016-07-14 15:39:01 +02:00
Peter Eisentraut b9fc9f7c3c Put some things in a better order in psql help 2016-07-12 18:11:45 -04:00
Tom Lane a670c24c38 Improve output of psql's \df+ command.
Add display of proparallel (parallel-safety) when the server is >= 9.6,
and display of proacl (access privileges) for all server versions.
Minor tweak of column ordering to keep related columns together.

Michael Paquier

Discussion: <CAB7nPqTR3Vu3xKOZOYqSm-+bSZV0kqgeGAXD6w5GLbkbfd5Q6w@mail.gmail.com>
2016-07-11 12:35:08 -04:00
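
For instance, a call such as the following shows the extra columns when run
against a 9.6-or-later server (the function pattern is arbitrary):

    # \df+ output now includes parallel-safety and access privileges
    psql -d postgres -c '\df+ pg_catalog.string_agg'
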
Magnus Hagander ae7d78c3e2 Add missing newline in error message 2016-07-11 13:53:17 +02:00
Peter Eisentraut bd406af168 psql: Improve \crosstabview error messages 2016-06-24 01:08:08 -04:00
Andrew Dunstan 562e449724 Add tab completion for pager_min_lines to psql.
This was inadvertently omitted from commit
7655f4ccea. Mea culpa.

Backpatched to 9.5 where pager_min_lines was introduced.
2016-06-23 16:10:15 -04:00
Tom Lane f8ace5477e Fix type-safety problem with parallel aggregate serial/deserialization.
The original specification for this called for the deserialization function
to have signature "deserialize(serialtype) returns transtype", which is a
security violation if transtype is INTERNAL (which it always would be in
practice) and serialtype is not (which ditto).  The patch blithely overrode
the opr_sanity check for that, which was sloppy-enough work in itself,
but the indisputable reason this cannot be allowed to stand is that CREATE
FUNCTION will reject such a signature and thus it'd be impossible for
extensions to create parallelizable aggregates.

The minimum fix to make the signature type-safe is to add a second, dummy
argument of type INTERNAL.  But to lock it down a bit more and make misuse
of INTERNAL-accepting functions less likely, let's get rid of the ability
to specify a "serialtype" for an aggregate and just say that the only
useful serialtype is BYTEA --- which, in practice, is the only interesting
value anyway, due to the usefulness of the send/recv infrastructure for
this purpose.  That means we only have to allow "serialize(internal)
returns bytea" and "deserialize(bytea, internal) returns internal" as
the signatures for these support functions.

In passing fix bogus signature of int4_avg_combine, which I found thanks
to adding an opr_sanity check on combinefunc signatures.

catversion bump due to removing pg_aggregate.aggserialtype and adjusting
signatures of assorted built-in functions.

David Rowley and Tom Lane

Discussion: <27247.1466185504@sss.pgh.pa.us>
2016-06-22 16:52:41 -04:00
Peter Eisentraut 47981a4665 Translation updates
Source-Git-URL: git://git.postgresql.org/git/pgtranslation/messages.git
Source-Git-Hash: 0c374f8d25ed31833a10d24252bc928d41438838
2016-06-20 09:48:08 -04:00
Alvaro Herrera 3b5a2a8856 Reword bogus comment 2016-06-16 12:43:35 -04:00
Alvaro Herrera b000afea65 Remove unused prototype
Commit 6f56b41ac0 removed function get_pg_database_relfilenode but left
its prototype in place.  Remove it.
2016-06-16 12:06:51 -04:00
Tom Lane 9901d8ac2e Use strftime("%c") to format timestamps in psql's \watch command.
This allows the timestamps to follow local conventions (in particular,
they respond to the LC_TIME environment setting).  In C locale you get
the same results as before.  It seems like a good idea to do this now not
later because we already changed the format of \watch headers for 9.6.

Also, increase the buffer sizes a tad to ensure there's enough space for
translated strings.

Discussion: <20160612145532.GA22965@postgresql.kr>
2016-06-15 19:31:13 -04:00
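
A rough sketch; GNU coreutils timeout is used here only so that the example
terminates on its own (interactively you would cancel with Ctrl+C):

    # leave the query unterminated, then start the watch loop; the header
    # timestamp now follows LC_TIME
    printf 'SELECT now() AS server_time\n\\watch 2\n' | timeout 7 psql -d postgres
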
Tom Lane 8383486f10 Force idle_in_transaction_session_timeout off in pg_dump and autovacuum.
We disable statement_timeout and lock_timeout during dump and restore, to
prevent any global settings that might exist from breaking routine backups.
Commit c6dda1f48 should have added idle_in_transaction_session_timeout to
that list, but failed to.

Another place where these timeouts get turned off is autovacuum.  While
I doubt an idle timeout could fire there, it seems better to be safe than
sorry.

pg_dump issue noted by Bernd Helmle, the other one found by grepping.

Report: <352F9B77DB5D3082578D17BB@eje.land.credativ.lan>
2016-06-15 10:53:03 -04:00
Noah Misch 3be0a62ffe Finish pgindent run for 9.6: Perl files. 2016-06-12 04:19:56 -04:00
Robert Haas 4bc424b968 pgindent run for 9.6 2016-06-09 18:02:36 -04:00
Robert Haas c9ce4a1c61 Eliminate "parallel degree" terminology.
This terminology provoked widespread complaints.  So, instead, rename
the GUC max_parallel_degree to max_parallel_workers_per_gather
(leaving room for a possible future GUC max_parallel_workers that acts
as a system-wide limit), and rename the parallel_degree reloption to
parallel_workers.  Rename structure members to match.

These changes create a dump/restore hazard for users of PostgreSQL
9.6beta1 who have set the reloption (or applied the GUC using ALTER
USER or ALTER DATABASE).
2016-06-09 10:00:26 -04:00
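
Roughly, the renamed knobs look like this ("big_table" is a throwaway
example table):

    # renamed GUC: cap on workers per Gather node (formerly max_parallel_degree)
    psql -d postgres -c 'SHOW max_parallel_workers_per_gather'
    # renamed reloption (formerly parallel_degree)
    psql -d postgres -c 'CREATE TABLE IF NOT EXISTS big_table (id int)'
    psql -d postgres -c 'ALTER TABLE big_table SET (parallel_workers = 4)'
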
Alvaro Herrera 8b3746208c Add missing translate_columns array entry
This omission caused an assertion error in \dA+.
2016-06-07 18:03:31 -04:00
Alvaro Herrera 4f04b66f97 Fix loose ends for SQL ACCESS METHOD objects
COMMENT ON ACCESS METHOD was missing; add it, along psql tab-completion
support for it.

psql was also missing a way to list existing access methods; the new \dA
command does that.

Also add tab-completion support for DROP ACCESS METHOD.

Author: Michael Paquier
Discussion: https://www.postgresql.org/message-id/CAB7nPqTzdZdu8J7EF8SXr_R2U5bSUUYNOT3oAWBZdEoggnwhGA@mail.gmail.com
2016-06-07 17:59:34 -04:00
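
A small sketch of the new pieces (the comment text is arbitrary, and
COMMENT ON ACCESS METHOD requires superuser):

    # list access methods (new \dA command; \dA+ gives more detail)
    psql -d postgres -c '\dA'
    # COMMENT ON ACCESS METHOD is now supported
    psql -d postgres -c "COMMENT ON ACCESS METHOD brin IS 'block range index'"
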
Peter Eisentraut 5c6d2a5e7c Message style and wording fixes 2016-06-07 14:18:55 -04:00
Peter Eisentraut d8c2dccfdb psql: Add missing file to nls.mk
crosstabview.c was not added to nls.mk when it was added.  Also remove
redundant gettext markers, since psql_error() is already registered as a
gettext keyword.
2016-06-07 10:58:46 -04:00
Peter Eisentraut 298706de26 pgbench: Fix order in --help output
The new option --progress-timestamp was just added at the end.  Put it
in alphabetical order.
2016-06-07 10:41:20 -04:00
Stephen Frost 562f06f3f0 pg_dump only selected components of ACCESS METHODs
dumpAccessMethod() didn't get the memo that we now have a bitfield for
the components which should be dumped instead of a simple boolean.

Correct that by checking if the relevant bit is set for each component
being dumped out (and not dumping it out if it isn't set).

This corrects an issue where CREATE ACCESS METHOD commands were being
included in non-binary-upgrades when an extension included an access
method (as the bloom extension does).

Also add a regression test to make sure that we only dump out the
ACCESS METHOD commands, when they are part of an extension, when doing
a binary upgrade.

Pointed out by Thom Brown.
2016-06-07 09:56:02 -04:00
Robert Haas e191a69005 pg_upgrade: Don't overwrite existing files.
For historical reasons, copyFile and rewriteVisibilityMap took a force
argument which was always passed as true, meaning that any existing
file should be overwritten.  However, it seems much safer to instead
fail if a file we need to write already exists.

While we're at it, remove the "force" argument altogether, since it was
never passed as anything other than true (and now we would never pass
it as anything other than false, if we kept it).

Noted by Andres Freund during post-commit review of the patch that added
rewriteVisibilityMap, commit 7087166a88,
but this also changes the behavior when copying files without rewriting
them.

Patch by Masahiko Sawada.
2016-06-06 09:51:56 -04:00
Robert Haas aba8943082 pg_upgrade: Improve error checking in rewriteVisibilityMap.
In the old logic, if read() were to return an error, we'd silently stop
rewriting the visibility map at that point in the file.  That's safe,
but reporting the error is better, so do that instead.

Report by Andres Freund.  Patch by Masahiko Sawada, with one correction
by me.
2016-06-06 06:17:10 -04:00
Tom Lane 6c72a28e5c Suppress -Wunused-result warnings about write(), again.
Adopt the same solution as in commit aa90e148ca, but this time
let's put the ugliness inside the write_stderr() macro, instead of
expecting each call site to deal with it.  Back-port that decision
into psql/common.c where I got the macro from in the first place.

Per gripe from Peter Eisentraut.
2016-06-03 11:29:38 -04:00
Greg Stark e1623c3959 Fix various common misspellings.
Mostly these are just comments but there are a few in documentation
and a handful in code and tests. Hopefully this doesn't cause too much
unnecessary pain for backpatching. I relented from some of the most
common like "thru" for that reason. The rest don't seem numerous
enough to cause problems.

Thanks to Kevin Lyda's tool https://pypi.python.org/pypi/misspellings
2016-06-03 16:08:45 +01:00
Tom Lane e652273e07 Redesign handling of SIGTERM/control-C in parallel pg_dump/pg_restore.
Formerly, Unix builds of pg_dump/pg_restore would trap SIGINT and similar
signals and set a flag that was tested in various data-transfer loops.
This was prone to errors of omission (cf commit 3c8aa6654); and even if
the client-side response was prompt, we did nothing that would cause
long-running SQL commands (e.g. CREATE INDEX) to terminate early.
Also, the master process would effectively do nothing at all upon receipt
of SIGINT; the only reason it seemed to work was that in typical scenarios
the signal would also be delivered to the child processes.  We should
support termination when a signal is delivered only to the master process,
though.

Windows builds had no console interrupt handler, so they would just fall
over immediately at control-C, again leaving long-running SQL commands to
finish unmolested.

To fix, remove the flag-checking approach altogether.  Instead, allow the
Unix signal handler to send a cancel request directly and then exit(1).
In the master process, also have it forward the signal to the children.
On Windows, add a console interrupt handler that behaves approximately
the same.  The main difference is that a single execution of the Windows
handler can send all the cancel requests since all the info is available
in one process, whereas on Unix each process sends a cancel only for its
own database connection.

In passing, fix an old problem that DisconnectDatabase tends to send a
cancel request before exiting a parallel worker, even if nothing went
wrong.  This is at least a waste of cycles, and could lead to unexpected
log messages, or maybe even data loss if it happened in pg_restore (though
in the current code the problem seems to affect only pg_dump).  The cause
was that after a COPY step, pg_dump was leaving libpq in PGASYNC_BUSY
state, causing PQtransactionStatus() to report PQTRANS_ACTIVE.  That's
normally harmless because the next PQexec() will silently clear the
PGASYNC_BUSY state; but in a parallel worker we might exit without any
additional SQL commands after a COPY step.  So add an extra PQgetResult()
call after a COPY to allow libpq to return to PGASYNC_IDLE state.

This is a bug fix, IMO, so back-patch to 9.3 where parallel dump/restore
were introduced.

Thanks to Kyotaro Horiguchi for Windows testing and code suggestions.

Original-Patch: <7005.1464657274@sss.pgh.pa.us>
Discussion: <20160602.174941.256342236.horiguchi.kyotaro@lab.ntt.co.jp>
2016-06-02 13:28:17 -04:00
Tom Lane 763eec6b6d Clean up some minor inefficiencies in parallel dump/restore.
Parallel dump did a totally pointless query to find out the name of each
table to be dumped, which it already knows.  Parallel restore runs issued
lots of redundant SET commands because _doSetFixedOutputState() was invoked
once per TOC item rather than just once at connection start.  While the
extra queries are insignificant if you're dumping or restoring large
tables, it still seems worth getting rid of them.

Also, give the responsibility for selecting the right client_encoding for
a parallel dump worker to setup_connection() where it naturally belongs,
instead of having ad-hoc code for that in CloneArchive().  And fix some
minor bugs like use of strdup() where pg_strdup() would be safer.

Back-patch to 9.3, mostly to keep the branches in sync in an area that
we're still finding bugs in.

Discussion: <5086.1464793073@sss.pgh.pa.us>
2016-06-01 16:14:21 -04:00
Tom Lane 3c8aa6654a Fix missing abort checks in pg_backup_directory.c.
Parallel restore from directory format failed to respond to control-C
in a timely manner, because there were no checkAborting() calls in the
code path that reads data from a file and sends it to the backend.
If any worker was in the midst of restoring data for a large table,
you'd just have to wait.

This fix doesn't do anything for the problem of aborting a long-running
server-side command, but at least it fixes things for data transfers.

Back-patch to 9.3 where parallel restore was introduced.
2016-05-29 13:18:48 -04:00
Tom Lane 210981a4a9 Remove pg_dump/parallel.c's useless "aborting" flag.
This was effectively dead code, since the places that tested it could not
be reached after we entered the on-exit-cleanup routine that would set it.
It seems to have been a leftover from a design in which error abort would
try to send fresh commands to the workers --- a design which could never
have worked reliably, of course.  Since the flag is not cross-platform, it
complicates reasoning about the code's behavior, which we could do without.

Although this is effectively just cosmetic, back-patch anyway, because
there are some actual bugs in the vicinity of this behavior.

Discussion: <15583.1464462418@sss.pgh.pa.us>
2016-05-29 13:00:09 -04:00
Tom Lane 6b3094c26f Lots of comment-fixing, and minor cosmetic cleanup, in pg_dump/parallel.c.
The commentary in this file was in extremely sad shape.  The author(s)
had clearly never heard of the project convention that a function header
comment should provide an API spec of some sort for that function.  Much
of it was flat out wrong, too --- maybe it was accurate when written, but
if so it had not been updated to track subsequent code revisions.  Rewrite
and rearrange to try to bring it up to speed, and annotate some of the
places where more work is needed.  (I've refrained from actually fixing
anything of substance ... yet.)

Also, rename a couple of functions for more clarity as to what they do,
do some very minor code rearrangement, remove some pointless Asserts,
fix an incorrect Assert in readMessageFromPipe, and add a missing socket
close in one error exit from pgpipe().  The last would be a bug if we
tried to continue after pgpipe() failure, but since we don't, it's just
cosmetic at present.

Although this is only cosmetic, back-patch to 9.3 where parallel.c was
added.  It's sufficiently invasive that it'll pose a hazard for future
back-patching if we don't.

Discussion: <25239.1464386067@sss.pgh.pa.us>
2016-05-28 14:02:11 -04:00
Tom Lane 807b45375b Clean up thread management in parallel pg_dump for Windows.
Since we start the worker threads with _beginthreadex(), we should use
_endthreadex() to terminate them.  We got this right in the normal-exit
code path, but not so much during an error exit from a worker.
In addition, be sure to apply CloseHandle to the thread handle after
each thread exits.

It's not clear that these oversights cause any user-visible problems,
since the pg_dump run is about to terminate anyway.  Still, it's clearly
better to follow Microsoft's API specifications than ignore them.

Also a few cosmetic cleanups in WaitForTerminatingWorkers(), including
being a bit less random about where to cast between uintptr_t and HANDLE,
and being sure to clear the worker identity field for each dead worker
(not that false matches should be possible later, but let's be careful).

Original observation and patch by Armin Schöffmann, cosmetic improvements
by Michael Paquier and me.  (Armin's patch also included closing sockets
in ShutdownWorkersHard(), but that's been dealt with already in commit
df8d2d8c4.)  Back-patch to 9.3 where parallel pg_dump was introduced.

Discussion: <zarafa.570306bd.3418.074bf1420d8f2ba2@root.aegaeon.de>
2016-05-27 12:02:09 -04:00
Magnus Hagander d74048defc Make pg_dump error cleanly with -j against hot standby
A synchronized snapshot is taken by default when using -j with multiple
sessions, but this is not supported on a hot standby node. Previously,
trying to do so still failed, but with a server error that would also go
in the log. Instead, properly detect this case and give a better error
message.
2016-05-26 22:14:23 +02:00
Tom Lane cae2bb1986 Make pg_dump behave more sanely when built without HAVE_LIBZ.
For some reason the code to emit a warning and switch to uncompressed
output was placed down in the guts of pg_backup_archiver.c.  This is
definitely too late in the case of parallel operation (and I rather
wonder if it wasn't too late for other purposes as well).  Put it in
pg_dump.c's option-processing logic, which seems a much saner place.

Also, the default behavior with custom or directory output format was
to emit the warning telling you the output would be uncompressed.  This
seems unhelpful, so silence that case.

Back-patch to 9.3 where parallel dump was introduced.

Kyotaro Horiguchi, adjusted a bit by me

Report: <20160526.185551.242041780.horiguchi.kyotaro@lab.ntt.co.jp>
2016-05-26 11:51:04 -04:00
Tom Lane df8d2d8c42 In Windows pg_dump, ensure idle workers will shut down during error exit.
The Windows coding of ShutdownWorkersHard() thought that setting termEvent
was sufficient to make workers exit after an error.  But that only helps
if a worker is busy and passes through checkAborting().  An idle worker
will just sit, resulting in pg_dump failing to exit until the user gives up
and hits control-C.  We should close the write end of the command pipe
so that idle workers will see socket EOF and exit, as the Unix coding was
already doing.

Back-patch to 9.3 where parallel pg_dump was introduced.

Kyotaro Horiguchi
2016-05-26 10:50:30 -04:00
Tom Lane 9abd64ec99 Fix broken error handling in parallel pg_dump/pg_restore.
In the original design for parallel dump, worker processes reported errors
by sending them up to the master process, which would print the messages.
This is unworkably fragile for a couple of reasons: it risks deadlock if a
worker sends an error at an unexpected time, and if the master has already
died for some reason, the user will never get to see the error at all.
Revert that idea and go back to just always printing messages to stderr.
This approach means that if all the workers fail for similar reasons (eg,
bad password or server shutdown), the user will see N copies of that
message, not only one as before.  While that's slightly annoying, it's
certainly better than not seeing any message; not to mention that we
shouldn't assume that only the first failure is interesting.

An additional problem in the same area was that the master failed to
disable SIGPIPE (at least until much too late), which meant that sending a
command to an already-dead worker would cause the master to crash silently.
That was bad enough in itself but was made worse by the total reliance on
the master to print errors: even if the worker had reported an error, you
would probably not see it, depending on timing.  Instead disable SIGPIPE
right after we've forked the workers, before attempting to send them
anything.

Additionally, the master relies on seeing socket EOF to realize that a
worker has exited prematurely --- but on Windows, there would be no EOF
since the socket is attached to the process that includes both the master
and worker threads, so it remains open.  Make archive_close_connection()
close the worker end of the sockets so that this acts more like the Unix
case.  It's not perfect, because if a worker thread exits without going
through exit_nicely() the closures won't happen; but that's not really
supposed to happen.

This has been wrong all along, so back-patch to 9.3 where parallel dump
was introduced.

Report: <2458.1450894615@sss.pgh.pa.us>
2016-05-25 12:40:12 -04:00
Stephen Frost 018eb027f1 Do not DROP default roles in pg_dumpall -c
When pulling the list of roles to drop, exclude roles whose names
begin with "pg_" (as we do when we are dumping the roles out to
recreate them).

Also add regression tests to cover pg_dumpall -c and this specific
issue.

Noticed by Rushabh Lathia.  Patch by me.
2016-05-24 23:31:55 -04:00
Stephen Frost 2e8b4bf804 Qualify table usage in dumpTable() and use regclass
All of the other tables used in the query in dumpTable(), which is
collecting column-level ACLs, are qualified, so we should also qualify
the reference to pg_init_privs, the related sub-select against pg_class,
and the other queries added by the pg_dump catalog-ACLs work.

Also, use ::regclass (or ::pg_catalog.regclass, where appropriate)
instead of using a poorly constructed query to get the OID for various
catalog tables.

Issues identified by Noah and Alvaro, patch by me.
2016-05-24 20:10:16 -04:00
Tom Lane 1087aa2314 Fix typo in TAP test identification string.
Michael Paquier
2016-05-23 20:04:27 -04:00
Peter Eisentraut a50b605aa4 psql: Message style improvements 2016-05-21 22:17:00 -04:00
Tom Lane 16ea51a263 Pin the built-in index access methods.
This was overlooked in commit 473b93287, which introduced DROP ACCESS
METHOD.  Although that command is restricted to superusers, we don't want
even superusers dropping the built-in methods; "DROP ACCESS METHOD btree"
in particular is unrecoverable from.  Pin these objects in the same way
that other initdb-created objects are pinned.

I chose to bump catversion for this fix.  That's not absolutely necessary
perhaps, but it will ensure that no 9.6 production systems are missing
the pin entries.
2016-05-19 14:40:02 -04:00
Peter Eisentraut 48aaba4acf Translation updates
Source-Git-URL: git://git.postgresql.org/git/pgtranslation/messages.git
Source-Git-Hash: 17bf3e8564abf600274789fcc90e72532d5e7c05
2016-05-09 10:04:41 -04:00
Stephen Frost 6f69b96390 Wording quibbles regarding initdb username
Use disallowed instead of reserved, cannot instead of can not, and
double quotes instead of single quotes.

Also add a test to cover the bug which started this discussion.

Per discussion with Tom.
2016-05-08 12:58:21 -04:00
Stephen Frost 7df974ee0b Disallow superuser names starting with 'pg_' in initdb
As with CREATE ROLE, disallow users from specifying initial
superuser names which begin with 'pg_' in initdb.

Per discussion with Tom.
2016-05-08 11:55:44 -04:00
Tom Lane b818088408 In new pg_dump TAP tests, remove trailing "$" from regexps using /m.
It emerges that some Perl versions before 5.8.9 have a bug with regexps
that use the /m flag and contain "$".  This is the reason why jacana
is still failing on HEAD, and I was able to duplicate the failure on
prairiedog's host.  There's no real need for "$" in these patterns,
since they are already matching through the statement-terminating
semicolons (or matching an explicit \n in some cases).  So just
remove it.

Note: the reason jacana hasn't actually reported any failures in the
last little while is that the way the pg_dump TAP tests are set up, any
failure of this sort results in echoing the entire pg_dump dump output
to stderr.  Since there were about a hundred such failures, that resulted
in a 30MB log file which choked the buildfarm upload script.  There is
room for improvement here :-(.

Per off-list discussion with Andrew and Stephen.
2016-05-07 16:36:50 -04:00
Tom Lane 74a73b1722 Clean up after pg_dump test runs.
The tmp_check directory needs to be removed by "make clean",
and also ignored by .gitignore.
2016-05-06 22:28:01 -04:00
Tom Lane 1a2c17f8e2 Fix pg_upgrade to not fail when new-cluster TOAST rules differ from old.
This patch essentially reverts commit 4c6780fd17, in favor of a much
simpler solution for the case where the new cluster would choose to create
a TOAST table but the old cluster doesn't have one: just don't create a
TOAST table.

The existing code failed in at least two different ways if the situation
arose: (1) ALTER TABLE RESET didn't grab an exclusive lock, so that the
lock sanity check in create_toast_table failed; (2) pg_upgrade did not
provide a pg_type OID for the new toast table, so that the crosscheck in
TypeCreate failed.  While both these problems were introduced by later
patches, they show that the hack being used to cause TOAST table creation
is overwhelmingly fragile (and untested).  I also note that before the
TypeCreate crosscheck was added, the code would have resulted in assigning
an indeterminate pg_type OID to the toast table, possibly causing a later
OID conflict in that catalog; so that it didn't really work even when
committed.

If we simply don't create a TOAST table, there will only be a problem if
the code tries to store a tuple that's wider than a page, and field
compression isn't sufficient to get it under a page.  Given that the TOAST
creation threshold is intended to be about a quarter of a page, it's very
hard to believe that cross-version differences in the do-we-need-a-toast-
table heuristic could result in an observable problem.  So let's just
follow the old version's conclusion about whether a TOAST table is needed.

(If we ever do change needs_toast_table() so much that this conclusion
doesn't apply, we can devise a solution at that time, and hopefully do
it in a less klugy way than 4c6780fd17 did.)

Back-patch to 9.3, like the previous patch.

Discussion: <8110.1462291671@sss.pgh.pa.us>
2016-05-06 22:05:56 -04:00
Stephen Frost 0f97c722bb Disable BLOB test in pg_dump TAP tests
Buildfarm member jacana appears to have an issue with running this
test.  It's not entirely clear to me why, but rather than try to
fight with it, just disable it for now.

None of the other tests try to write out from psql directly as
this test does, so it seems likely that the rest of the tests will
be fine (as they have been on numerous other systems).
2016-05-06 21:24:31 -04:00
Stephen Frost c778e27e13 Correct query in pg_dumpall:dumpRoles
We need to use a new version branch in the query, due to the 9.5
addition of bypassrls, when adding the clause that excludes pg_* roles
from being dumped by pg_dumpall.

Pointed out by Noah, patch by me.
2016-05-06 16:15:52 -04:00
Stephen Frost eccfeeb631 Remove MODULES_big from test_pg_dump
The Makefile for test_pg_dump shouldn't have a MODULES_big line
because there's no actual compiled bit for that extension.  Hopefully
this will fix the Windows buildfarm members which were complaining.

In passing, also add the 'prove_installcheck' bit to the pg_dump and
test_pg_dump Makefiles, to get the buildfarm members to actually run
those tests.
2016-05-06 15:26:57 -04:00
Tom Lane 73b9952e82 Improve pg_upgrade's report about failure to match up old and new tables.
Ordinarily, pg_upgrade shouldn't have any difficulty in matching up all
the relations it sees in the old and new databases.  If it does, however,
it just goes belly-up with a pretty unhelpful error message.  That seemed
fine as long as we expected the case never to occur in the wild, but
Alvaro reported that it had been seen in a database whose pg_largeobject
table had somehow acquired a TOAST table.  That doesn't quite seem like
a case that pg_upgrade actually needs to handle, but it would be good if
the report were more diagnosable.  Hence, extend the logic to print out
as much information as we can about the mismatch(es) before we quit.

In passing, improve the readability of get_rel_infos()'s data collection
query, which had suffered seriously from lets-not-bother-to-update-comments
syndrome, and generally was unnecessarily disrespectful to readers.

It could be argued that this is a bug fix, but given that we have so few
reports, I don't feel a need to back-patch; at least not before this has
baked awhile in HEAD.
2016-05-06 14:45:01 -04:00
Stephen Frost 6bd356c33a Add TAP tests for pg_dump
This TAP test suite will create a new cluster, populate it based on
the 'create_sql' values in the '%tests' hash, run all of the runs
defined in the '%pgdump_runs' hash, and then for each test in the
'%tests' hash, compare each run's output to the regular expression
defined for the test under the 'like' and 'unlike' functions, as
appropriate.

While this test suite covers a fair bit of ground (67% of pg_dump.c
and quite a bit of the other files in src/bin/pg_dump), there is
still quite a bit which remains to be added to provide better code
coverage.  Still, this is quite a bit better than we had, and has
found a few bugs already (note that the CREATE TRANSFORM test is
commented out, as it is currently failing).

Idea for using the TAP system from Tom, though all of the code is mine.
2016-05-06 14:06:50 -04:00
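
Assuming the tree was configured with --enable-tap-tests, the suite can be
run with something like:

    # run the pg_dump TAP suite from the top of the source tree
    make -C src/bin/pg_dump check
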
Stephen Frost e1b120a8cb Only issue LOCK TABLE commands when necessary
Reviewing the cases where we need to LOCK a given table during a dump,
it was pointed out by Tom that we really don't need to LOCK a table if
we are only looking to dump the ACL for it, or certain other
components.  After reviewing the queries run for all of the component
pieces, a list of components were determined to not require LOCK'ing
of the table.

This implements a check to avoid LOCK'ing those tables.

Initial complaint from Rushabh Lathia, discussed with Robert and Tom,
the patch is mine.
2016-05-06 14:06:50 -04:00
Stephen Frost 5d589993ca pg_dump performance and other fixes
Do not try to dump objects which do not have ACLs when only ACLs are
being requested.  This results in a significant performance improvement
as we can avoid querying for further information on these objects when
we don't need to.

When limiting the components to dump for an extension, consider what
components have been requested.  Initially, we incorrectly hard-coded
the components of the extension objects to dump, which would mean that
we wouldn't dump some components even when they were asked for, and in
other cases we would dump components which weren't requested.

Correct defaultACLs to use 'dump_contains' instead of 'dump'.  The
defaultACL is considered a member of the namespace and should be
dumped based on the same set of components that the other objects in
the schema are, not based on what we're dumping for the namespace
itself (which might not include ACLs, if the namespace has just the
default or initial ACL).

Use DUMP_COMPONENT_ACL for from-initdb objects, to allow users to
change their ACLs, should they wish to.  This just extends what we
are doing for the pg_catalog namespace to objects which are not
members of namespaces.

Due to column ACLs being treated a bit differently from other ACLs
(they are actually reset to NULL when all privileges are revoked),
adjust the query which gathers column-level ACLs to consider all of
the ACL-relevant columns.
2016-05-06 14:06:50 -04:00
Stephen Frost 64d60c8bf0 Correct pg_dump WHERE clause for functions/aggregates
The query to grab the function/aggregate information is now joining
to pg_init_privs, so we can simplify (and correct) the WHERE clause
used to determine if a given function's ACL has changed from the
initial ACL on the function.

Bug found by Noah, patch by me.
2016-05-06 14:06:50 -04:00
Tom Lane 6b8b4e4d83 Fix pgbench's parsing of double values to notice trailing garbage.
Noted by Fabien Coelho, though this isn't exactly his proposed patch.
(The technique used here is borrowed from the zic sources.)
2016-05-06 11:08:48 -04:00
Tom Lane 9515299485 Improve handling of numeric-valued variables in pgbench.
The previous coding always stored variable values as strings, doing
conversion on-the-fly when a numeric value was needed or a number was to be
assigned.  This was a bit inefficient and risked loss of precision for
floating-point values.  The precision aspect had been hacked around by
printing doubles in "%.18e" format, which is ugly and has machine-dependent
results.  Instead, arrange to preserve an assigned numeric value in the
original binary numeric format, converting to string only when and if
needed.  When we do need to convert a double to string, convert in "%g"
format with DBL_DIG precision, which is the standard way to do it and
produces the least surprising results in most cases.

The implementation supports storing both a string value and a numeric
value for any one variable, with lazy conversion between them.  I also
arranged for lazy re-sorting of the variable array when new variables are
added.  That was mainly to allow a clean refactoring of putVariable()
into two levels of subroutine, but it may allow us to save a few sorts.

Discussion: <9188.1462475559@sss.pgh.pa.us>
2016-05-06 11:01:05 -04:00
Dean Rasheed 9b66aa006f Fix psql's \ev and \sv commands so that they handle view reloptions.
Commit 8eb6407aae added support for
editing and showing view definitions, but neglected to account for
view options such as security_barrier and WITH CHECK OPTION which are
not returned by pg_get_viewdef() and so need special handling.

Author: Dean Rasheed
Reviewed-by: Peter Eisentraut
Discussion: http://www.postgresql.org/message-id/CAEZATCWZjCgKRyM-agE0p8ax15j9uyQoF=qew7D2xB6cF76T8A@mail.gmail.com
2016-05-06 12:48:27 +01:00
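
A small sketch of the case that was mishandled (the view name is
arbitrary):

    # create a view with a reloption, then show it; \sv and \ev now include
    # the WITH (security_barrier) clause, which pg_get_viewdef() does not return
    psql -d postgres -c 'CREATE OR REPLACE VIEW sb_view WITH (security_barrier) AS SELECT 1 AS x'
    psql -d postgres -c '\sv sb_view'
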
Dean Rasheed 93a8c6fd6c Move and rename fmtReloptionsArray().
Move fmtReloptionsArray() from pg_dump.c to string_utils.c so that it
is available to other frontend code. In particular psql's \ev and \sv
commands need it to handle view reloptions. Also rename the function
to appendReloptionsArray(), which is a more accurate description of
what it does.

Author: Dean Rasheed
Reviewed-by: Peter Eisentraut
Discussion: http://www.postgresql.org/message-id/CAEZATCWZjCgKRyM-agE0p8ax15j9uyQoF=qew7D2xB6cF76T8A@mail.gmail.com
2016-05-06 12:45:36 +01:00
Tom Lane 7a622b2731 Rename pgbench min/max to least/greatest, and fix handling of double args.
These functions behave like the backend's least/greatest functions,
not like min/max, so the originally-chosen names invite confusion.
Per discussion, rename to least/greatest.

I also took it upon myself to make them return double if any input is
double.  The previous behavior of silently coercing all inputs to int
surely does not meet the principle of least astonishment.

Copy-edit some of the other new functions' documentation, too.
2016-05-05 14:51:00 -04:00
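
A rough illustration in a pgbench script (the file name and values are
placeholders):

    # \set expressions: least/greatest return double if any argument is double
    printf '%s\n' '\set lo least(7, 10)' '\set hi greatest(2.5, 7)' \
      'SELECT :lo, :hi;' > exprs.sql
    pgbench -n -f exprs.sql -t 1 postgres
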
Andrew Dunstan 0fb54de9aa Support building with Visual Studio 2015
Adjust the way we detect the locale. As a result, the minimum Windows
version supported by VS2015 and later is Windows Vista. Add some tweaks
to remove new compiler warnings. Remove documentation references to the
now obsolete msysGit.

Michael Paquier, somewhat edited by me, reviewed by Christian Ullrich.

Backpatch to 9.5
2016-04-29 08:09:07 -04:00
Peter Eisentraut 3019f432d6 pg_dump: Message style improvements
forgotten in b6dacc173b
2016-04-26 21:37:06 -04:00
Peter Eisentraut b6dacc173b pg_dump: Message style improvements 2016-04-25 17:16:59 -04:00
Peter Eisentraut 63417b4b2e Update GETTEXT_FILES after config and controldata refactoring 2016-04-24 20:58:11 -04:00
Robert Haas b4e0f18382 Add pg_dump support for the new PARALLEL option for aggregates.
This was an oversight in commit 41ea0c2376.

Fabrízio de Royes Mello, per a report from Tushar Ahuja
2016-04-20 23:06:06 -04:00
Tom Lane 9603a32594 Avoid code duplication in \crosstabview.
In commit 6f0d6a507 I added a duplicate copy of psqlscanslash's identifier
downcasing code, but actually it's not hard to split that out as a callable
subroutine and avoid the duplication.
2016-04-17 11:37:58 -04:00
Peter Eisentraut c313687673 psql: Add new gettext trigger 2016-04-15 20:23:41 -04:00
Tom Lane 6f0d6a5078 Rethink \crosstabview's argument parsing logic.
\crosstabview interpreted its arguments in an unusual way, including
doing case-insensitive matching of unquoted column names, which is
surely not the right thing.  Rip that out in favor of doing something
equivalent to the dequoting/case-folding rules used by other psql
commands.  To keep it simple, change the syntax so that the optional
sort column is specified as a separate argument, instead of the
also-quite-unusual syntax that attached it to the colH argument with
a colon.

Also, rework the error messages to be closer to project style.
2016-04-14 22:54:31 -04:00
Tom Lane 6cead413bb Fix pg_dump so pg_upgrade'ing an extension with simple opfamilies works.
As reported by Michael Feld, pg_upgrade'ing an installation having
extensions with operator families that contain just a single operator class
failed to reproduce the extension membership of those operator families.
This caused no immediate ill effects, but would create problems when later
trying to do a plain dump and restore, because the seemingly-not-part-of-
the-extension operator families would appear separately in the pg_dump
output, and then would conflict with the families created by loading the
extension.  This has been broken ever since extensions were introduced,
and many of the standard contrib extensions are affected, so it's a bit
astonishing nobody complained before.

The cause of the problem is a perhaps-ill-considered decision to omit
such operator families from pg_dump's output on the grounds that the
CREATE OPERATOR CLASS commands could recreate them, and having explicit
CREATE OPERATOR FAMILY commands would impede loading the dump script into
pre-8.3 servers.  Whatever the merits of that decision when 8.3 was being
written, it looks like a poor tradeoff now.  We can fix the pg_upgrade
problem simply by removing that code, so that the operator families are
dumped explicitly (and then will be properly made to be part of their
extensions).

Although this fixes the behavior of future pg_upgrade runs, it does nothing
to clean up existing installations that may have improperly-linked operator
families.  Given the small number of complaints to date, maybe we don't
need to worry about providing an automated solution for that; anyone who
needs to clean it up can do so with manual "ALTER EXTENSION ADD OPERATOR
FAMILY" commands, or even just ignore the duplicate-opfamily errors they
get during a pg_restore.  In any case we need this fix.

Back-patch to all supported branches.

Discussion: <20228.1460575691@sss.pgh.pa.us>
2016-04-13 18:58:14 -04:00
Tom Lane 7a5f8b5c59 Improve coding of column-name parsing in psql's new crosstabview.c.
Coverity complained about this code, not without reason because it was
rather messy.  Adjust it to not scribble on the passed string; that adds
one malloc/free cycle per column name, which is going to be insignificant
in context.  We can actually const-ify both the string argument and the
PGresult.

Daniel Verité, with some further cleanup by me
2016-04-12 12:52:42 -04:00
Tom Lane 074050f16a pg_dump: add missing "destroyPQExpBuffer(query)" in dumpForeignServer().
Coverity complained about this resource leak (why now, I don't know,
since it's been like that a long time).  Our general policy in pg_dump
is that PQExpBuffers are worth cleaning up, so do it here too.  But
don't bother with a back-patch, because it seems unlikely that very
many databases contain enough FOREIGN SERVER objects to notice.
2016-04-11 00:00:08 -04:00
Alvaro Herrera c09b18f21c Support \crosstabview in psql
\crosstabview is a completely different way to display results from a
query: instead of a vertical display of rows, the data values are placed
in a grid where the column and row headers come from the data itself,
similar to a spreadsheet.

The sort order of the horizontal header can be specified by using
another column in the query, and the vertical header takes its ordering
from the order in which its values appear in the query results.

This only allows displaying a single value in each cell.  If more than
one value corresponds to the same cell, an error is thrown.  Merging of
values can be done in the query itself, if necessary.  This may be
revisited in the future.

Author: Daniel Verité
Reviewed-by: Pavel Stehule, Dean Rasheed
2016-04-08 20:23:18 -03:00
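
A self-contained sketch; the query is left unterminated so that
\crosstabview executes the query buffer, much like \g, and the
three-argument form shown is the reworked syntax described above:

    # first column: vertical header, second: horizontal header, third: cell values
    printf '%s\n' \
      'SELECT i % 3 AS row_hdr, i % 4 AS col_hdr, i AS val FROM generate_series(1, 12) AS g(i)' \
      '\crosstabview row_hdr col_hdr val' | psql -d postgres
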
Stephen Frost 293007898d Reserve the "pg_" namespace for roles
This will prevent users from creating roles which begin with "pg_" and
will check for those roles before allowing an upgrade using pg_upgrade.

This will allow for default roles to be provided at initdb time.

Reviews by José Luis Tallón and Robert Haas
2016-04-08 16:56:27 -04:00
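
For instance, something like the following is now rejected:

    # role names starting with "pg_" are reserved for future default roles
    psql -d postgres -c 'CREATE ROLE pg_myrole' || echo 'rejected, as expected'
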
Stephen Frost fa6075e551 Fix improper usage of 'dump' bitmap
Now that 'dump' is a bitmap, we can't simply set it to 'true'.

Noticed while debugging the prior issue.
2016-04-08 16:30:02 -04:00
Stephen Frost 689f9a0588 In dumpTable, re-instate the skipping logic
Pretty sure I removed this based on some incorrect thinking that it was
no longer possible to reach this point for a table which will not be
dumped, but that's clearly wrong.

Pointed out on IRC by Erik Rijkers.
2016-04-08 15:00:44 -04:00
Teodor Sigaev 8b99edefca Revert CREATE INDEX ... INCLUDING ...
It's not ready yet; revert two commits:
690c543550 - unstable test output
386e3d7609 - patch itself
2016-04-08 21:52:13 +03:00
Tom Lane 34c33a1f00 Add BSD authentication method.
Create a "bsd" auth method that works the same as "password" so far as
clients are concerned, but calls the BSD Authentication service to
check the password.  This is currently only available on OpenBSD.

Marisa Emerson, reviewed by Thomas Munro
2016-04-08 13:52:06 -04:00
Teodor Sigaev 386e3d7609 CREATE INDEX ... INCLUDING (column[, ...])
Now indexes (but only B-tree for now) can contain "extra" column(s) which
don't participate in the index structure; they are just stored in leaf
tuples. This allows an index-only scan to be satisfied by a single index
instead of two or more indexes.

Author: Anastasia Lubennikova with minor editorializing by me
Reviewers: David Rowley, Peter Geoghegan, Jeff Janes
2016-04-08 19:45:59 +03:00
Robert Haas 25fe8b5f1a Add a 'parallel_degree' reloption.
The code that estimates what parallel degree should be used for the
scan of a relation is currently rather stupid, so add a parallel_degree
reloption that can be used to override the planner's rather limited
judgement.

Julien Rouhaud, reviewed by David Rowley, James Sewell, Amit Kapila,
and me.  Some further hacking by me.
2016-04-08 11:14:56 -04:00
Stephen Frost 23f34fa4ba In pg_dump, include pg_catalog and extension ACLs, if changed
Now that all of the infrastructure exists, add in the ability to
dump out the ACLs of the objects inside of pg_catalog or the ACLs
for objects which are members of extensions, but only if they have
been changed from their original values.

The original values are tracked in pg_init_privs.  When pg_dump'ing
9.6-and-above databases, we will dump out the ACLs for all objects
in pg_catalog and the ACLs for all extension members, where the ACL
has been changed from the original value which was set during either
initdb or CREATE EXTENSION.

This should not change dumps against pre-9.6 databases.

Reviews by Alexander Korotkov, Jose Luis Tallon
2016-04-06 21:45:32 -04:00
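
A hedged illustration of an ACL change that such a dump would now include
(the role name is hypothetical):

    -- pg_sleep() is executable by PUBLIC after initdb; changing that ACL
    -- deviates from the recorded initial privileges, so pg_dump emits it
    REVOKE EXECUTE ON FUNCTION pg_catalog.pg_sleep(double precision) FROM PUBLIC;
    GRANT EXECUTE ON FUNCTION pg_catalog.pg_sleep(double precision) TO maintenance_role;
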
Stephen Frost d217b2c360 In pg_dump, split "dump" into "dump" and "dump_contains"
Historically, the "dump" component of the namespace has been used
to decide whether the objects inside the namespace should also be
dumped.  Given that "dump" is now a bitmask and may be partial, and
we may want to dump out all components of the namespace object but
only some of the components of objects contained in the namespace,
create a "dump_contains" bitmask which will represent what components
of the objects inside of a namespace should be dumped out.

No behavior change here, but in preparation for a change where we
will dump out just the ACLs of objects in pg_catalog, but we might
not dump out the ACL of the pg_catalog namespace itself (for instance,
when it hasn't been changed from the value set at initdb time).

Reviews by Alexander Korotkov, Jose Luis Tallon
2016-04-06 21:45:32 -04:00
Stephen Frost a9f0e8e5a2 In pg_dump, use a bitmap to represent what to include
pg_dump has historically used a simple boolean 'dump' value to indicate
if a given object should be included in the dump or not.  Instead, use
a bitmap which breaks down the components of an object into their
distinct pieces and use that bitmap to only include the components
requested.

This does not include any behavioral change, but is in preparation for
the change to dump out just ACLs for objects in pg_catalog.

Reviews by Alexander Korotkov, Jose Luis Tallon
2016-04-06 21:45:32 -04:00
Stephen Frost 6c268df127 Add new catalog called pg_init_privs
This new catalog holds the privileges which the system was
initialized with at initdb time, along with any permissions set
by extensions at CREATE EXTENSION time.  This allows pg_dump
(and any other similar use-cases) to detect when the privileges
set on initdb-created or extension-created objects have been
changed from what they were set to at initdb/extension-creation
time and handle those changes appropriately.

Reviews by Alexander Korotkov, Jose Luis Tallon
2016-04-06 21:45:32 -04:00
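
A hedged query sketch against the new catalog (column names assumed):

    -- privileges recorded at initdb or CREATE EXTENSION time
    SELECT classoid::regclass AS catalog, objoid, privtype, initprivs
      FROM pg_init_privs
     LIMIT 5;
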
Peter Eisentraut 3b3fcc4eea pg_dump: Add table qualifications to some tags
Some object types have names that are unique only within one table.  But
for those we generally didn't put the table name into the dump TOC tag.
So it was impossible to identify these objects if the same name was used
for multiple tables.  This affects policies, column defaults,
constraints, triggers, and rules.

Fix by adding the table name to the TOC tag, so that it now reads
"$schema $table $object".

Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
2016-04-06 12:13:11 -04:00
Simon Riggs 3fe3511d05 Generic Messages for Logical Decoding
API and mechanism to allow generic messages to be inserted into WAL that are
intended to be read by logical decoding plugins. This commit adds an optional
new callback to the logical decoding API.

Messages are either text or bytea. Messages can be transactional, or not, and
are identified by a prefix to allow multiple concurrent decoding plugins.

(Not to be confused with Generic WAL records, which are intended to allow crash
recovery of extensible objects.)

Author: Petr Jelinek and Andres Freund
Reviewers: Artur Zakirov, Tomas Vondra, Simon Riggs
Discussion: 5685F999.6010202@2ndquadrant.com
2016-04-06 10:05:41 +01:00
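
A hedged SQL sketch of emitting such a message (prefix and payload are
arbitrary):

    -- transactional message tagged with the 'myapp' prefix; decoding
    -- plugins interested in that prefix can pick it up via the new callback
    SELECT pg_logical_emit_message(true, 'myapp', 'checkpoint reached');
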
Tom Lane 2bbe9112ae Add a \gexec command to psql for evaluation of computed queries.
\gexec executes the just-entered query, like \g, but instead of printing
the results it takes each field as a SQL command to send to the server.
Computing a series of queries to be executed is a fairly common thing,
but up to now you always had to resort to kluges like writing the queries
to a file and then inputting the file.  Now it can be done with no
intermediate step.

The implementation is fairly straightforward except for its interaction
with FETCH_COUNT.  ExecQueryUsingCursor isn't capable of being called
recursively, and even if it were, its need to create a transaction
block interferes unpleasantly with the desired behavior of \gexec after
a failure of a generated query (i.e., that it can continue).  Therefore,
disable use of ExecQueryUsingCursor when doing the master \gexec query.
We can still apply it to individual generated queries, however, and there
might be some value in doing so.

While testing this feature's interaction with single-step mode, I (tgl) was
led to conclude that SendQuery needs to recognize SIGINT (cancel_pressed)
as a negative response to the single-step prompt.  Perhaps that's a
back-patchable bug fix, but for now I just included it here.

Corey Huinker, reviewed by Jim Nasby, Daniel Vérité, and myself
2016-04-04 15:25:16 -04:00
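
A hedged psql sketch of \gexec (the target schema is an assumption):

    -- each value produced by the query is sent to the server as a command
    SELECT format('ANALYZE %I.%I', schemaname, tablename)
      FROM pg_tables
     WHERE schemaname = 'public' \gexec
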
Tom Lane 3cc38ca7d2 Add psql \errverbose command to see last server error at full verbosity.
Often, upon getting an unexpected error in psql, one's first wish is that
the verbosity setting had been higher; for example, to be able to see the
schema-name field or the server code location info.  Up to now the only way
has been to adjust the VERBOSITY variable and repeat the failing query.
That's a pain, and it doesn't work if the error isn't reproducible.

This commit adds a psql feature that redisplays the most recent server
error at full verbosity, without needing to make any variable changes or
re-execute the failed command.  We just need to hang onto the latest error
PGresult in case the user executes \errverbose, and then apply libpq's
new PQresultVerboseErrorMessage() function to it.  This will consume
some trivial amount of psql memory, but otherwise the cost when the
feature isn't used should be negligible.

Alex Shulgin, reviewed by Daniel Vérité, some improvements by me
2016-04-03 12:29:55 -04:00
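
A hedged psql sketch of the new command (verbose output abridged):

    SELECT 1/0;
    ERROR:  division by zero
    \errverbose
    -- the same error is shown again as if VERBOSITY were 'verbose',
    -- including the SQLSTATE and the server source location
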
Alvaro Herrera 5cb882675a pgbench: Remove unused parameter
For some reason this parameter was introduced as unused in 3da0dfb4b1,
and has never been used for anything.  Remove it.

Author: Fabien Coelho
2016-04-01 17:11:18 -03:00
Teodor Sigaev 65578341af Add Generic WAL interface
This interface is designed to give extensions access to WAL, so that
they can implement new access methods, for example.  Previously this was
impossible because restoring from custom WAL would need to access the
system catalogs to find a custom redo function.  This patch provides a
generic way to describe changes to a page with the standard layout.

Bump XLOG_PAGE_MAGIC because of new record type.

Author: Alexander Korotkov, with help from Petr Jelinek, Markus Nullmeier,
	and minor editorializing by me
Reviewers: Petr Jelinek, Alvaro Herrera, Teodor Sigaev, Jim Nasby,
	Michael Paquier
2016-04-01 12:21:48 +03:00
Robert Haas 5fe5a2cee9 Allow aggregate transition states to be serialized and deserialized.
This is necessary infrastructure for supporting parallel aggregation
for aggregates whose transition type is "internal".  Such values
can't be passed between cooperating processes, because they are
just pointers.

David Rowley, reviewed by Tomas Vondra and by me.
2016-03-29 15:04:05 -04:00
Alvaro Herrera a1c935d3b7 pgbench: allow a script weight of zero
This refines the previous weight range and allows a script to be "turned
off" by passing a zero weight, which is useful when scripting multiple
pgbench runs.

I did not apply the suggested warning when a script uses zero weight; we
use the principle elsewhere that if there's nothing to be done, do
nothing quietly.

Adjust docs accordingly.

Author: Jeff Janes, Fabien Coelho
2016-03-29 14:47:10 -03:00
Robert Haas ad9566470b pgbench: Remove \setrandom.
You can now do the same thing via \set using the appropriate function,
either random(), random_gaussian(), or random_exponential(), depending
on the desired distribution.  This is not backward-compatible, but per
discussion, it's worth it to avoid having the old syntax hang around
forever.

Fabien Coelho, reviewed by Michael Paquier, and adjusted by me.
2016-03-29 12:08:49 -04:00
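
A hedged pgbench script sketch of the replacement syntax (formerly written
as "\setrandom aid 1 :naccounts"; the gaussian parameter is arbitrary):

    \set naccounts 100000 * :scale
    \set aid random(1, :naccounts)
    \set delta random_gaussian(-5000, 5000, 2.5)
    SELECT abalance FROM pgbench_accounts WHERE aid = :aid;
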
Tom Lane 656ee84890 Fix portability issues in 86c43f4e22.
INT64_MIN/MAX should be spelled PG_INT64_MIN/MAX, per well established
convention in our sources.  Less obviously, a symbol named DOUBLE causes
problems on Windows builds, so rename that to DOUBLE_CONST; and rename
INTEGER to INTEGER_CONST for consistency.

Also, get rid of incorrect/obsolete hand-munging of yycolumn, and fix
the grammar for float constants to handle expected cases such as ".1".

First two items by Michael Paquier, second two by me.
2016-03-29 00:53:53 -04:00
Robert Haas 86c43f4e22 pgbench: Support double constants and functions.
The new functions are pi(), random(), random_exponential(),
random_gaussian(), and sqrt().  I was worried that this would be
slower than before, but, if anything, it actually turns out to be
slightly faster, because we now express the built-in pgbench scripts
using fewer lines; each \setrandom can be merged into a subsequent
\set.

Fabien Coelho
2016-03-28 20:45:57 -04:00
Tom Lane 1f4e9da624 Sync tzload() and tzparse() APIs with IANA release tzcode2016c.
This brings us a bit closer to matching upstream, but since it affects
files outside src/timezone/, we might choose not to back-patch it.
Hence keep it separate from the main update patch.
2016-03-28 17:19:29 -04:00
Alvaro Herrera cad3edef4f pg_rewind: Improve internationalization
This is mostly cosmetic since two of the three changes are debug
messages, and the third one is just a progress indicator.

Author: Michaël Paquier
2016-03-28 14:33:00 -03:00
Alvaro Herrera 37732a2555 Fix minor leak in pg_dump for ACCESS METHOD.
Bug reported by Coverity.

Author: Michaël Paquier
2016-03-28 14:27:41 -03:00
Teodor Sigaev 559e7a0a6d psql tab-complete for CREATE/DROP ACCESS METHOD
Alexander Korotkov
2016-03-28 19:32:13 +03:00
Teodor Sigaev dabd255d58 Fix comment in pg_dump.
It was missed in 473b932870 (CREATE ACCESS METHOD).

Alexander Korotkov
2016-03-28 19:17:28 +03:00
Andres Freund 408f043853 pg_rewind: fsync target data directory.
Previously pg_rewind did not fsync any files. That's problematic, given
that the target directory is modified. If the database was started
afterwards, 2ce439f33 luckily already caused the data directory to be
synced to disk at postmaster startup, reducing the scope of the problem.

To fix, use initdb -S at the end of the pg_rewind run. It doesn't seem
worthwhile to duplicate the code into pg_rewind, and initdb -S is
already used that way by pg_upgrade.

Reported-By: Andres Freund
Author: Michael Paquier, somewhat edited by me
Discussion: 20160310034352.iuqgvpmg5qmnxtkz@alap3.anarazel.de
    CAB7nPqSytVG1o4S3S2pA1O=692ekurJ+fckW2PywEG3sNw54Ow@mail.gmail.com
Backpatch: 9.5, where pg_rewind was introduced
2016-03-27 23:46:25 +02:00
Andres Freund a6c845946d pg_rewind: Close backup_label file descriptor.
This was a relatively harmless leak, as createBackupLabel() is only
called once per pg_rewind invocation.

Author: Michael Paquier
Reported-By: Michael Paquier
Discussion: CAB7nPqRnOw30gOXe2_SPLjh37bgm4V+txbYAPwoXb97nGQ297w@mail.gmail.com
Backpatch: 9.5, where pg_rewind was introduced
2016-03-27 22:48:31 +02:00
Tom Lane 7caaeaf360 Link libpq after libpgfeutils to satisfy Windows linker.
Some of the non-MSVC Windows buildfarm members seem to need this to avoid
getting "undefined symbol" errors on libpgfeutils' references to libpq.
I could understand that if libpq were a static library, but surely it is
not?  Oh well, at least the extra reference is no more harmful than it is
for libpgcommon or libpgport.
2016-03-24 20:45:31 -04:00
Tom Lane c1156411ad Move psql's psqlscan.l into src/fe_utils.
This completes (at least for now) the project of getting rid of ad-hoc
linkages among the src/bin/ subdirectories.  Everything they share is now
in src/fe_utils/ and is included from a static library at link time.

A side benefit is that we can restore the FLEX_NO_BACKUP check for
psqlscanslash.l.  We might need to think of another way to do that check
if we ever need to build two lexers with that property in the same source
directory, but there's no foreseeable reason to need that.
2016-03-24 20:28:47 -04:00
Tom Lane d65bea26a8 Move psql's print.c and mbprint.c into src/fe_utils.
Just turning the crank ...
2016-03-24 18:27:28 -04:00
Tom Lane 588d963b00 Create src/fe_utils/, and move stuff into there from pg_dump's dumputils.
Per discussion, we want to create a static library and put the stuff into
it that until now has been shared across src/bin/ directories by ad-hoc
methods like symlinking a source file.  This commit creates the library and
populates it with a couple of files that contain the widely-useful portions
of pg_dump's dumputils.c file.  dumputils.c survives, because it has some
stuff that didn't seem appropriate for fe_utils, but it's significantly
smaller and is no longer referenced from any other directory.

Follow-on patches will move more stuff into fe_utils.

The Mkvcbuild.pm hacking here is just a best guess; we'll see how the
buildfarm likes it.
2016-03-24 15:55:57 -04:00
Alvaro Herrera 473b932870 Support CREATE ACCESS METHOD
This enables external code to create access methods.  This is useful so
that extensions can add their own access methods which can be formally
tracked for dependencies, so that DROP operates correctly.  Also, having
explicit support makes pg_dump work correctly.

Currently only index AMs are supported, but we expect different types to
be added in the future.

Authors: Alexander Korotkov, Petr Jelínek
Reviewed-By: Teodor Sigaev, Petr Jelínek, Jim Nasby
Commitfest-URL: https://commitfest.postgresql.org/9/353/
Discussion: https://www.postgresql.org/message-id/CAPpHfdsXwZmojm6Dx+TJnpYk27kT4o7Ri6X_4OSWcByu1Rm+VA@mail.gmail.com
2016-03-23 23:01:35 -03:00
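
A hedged SQL sketch of the new command (my_index_am, my_am_handler, and the
table are hypothetical; the handler must be an index AM handler function
provided by an extension):

    CREATE ACCESS METHOD my_index_am TYPE INDEX HANDLER my_am_handler;
    CREATE INDEX ON my_table USING my_index_am (my_column);
    -- DROP ACCESS METHOD my_index_am CASCADE;  -- would drop dependent indexes
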
Tom Lane 2c6af4f442 Move keywords.c/kwlookup.c into src/common/.
Now that we have src/common/ for code shared between frontend and backend,
we can get rid of (most of) the klugy ways that the keyword table and
keyword lookup code were formerly shared between different uses.
This is a first step towards a more general plan of getting rid of
special-purpose kluges for sharing code in src/bin/.

I chose to merge kwlookup.c back into keywords.c, as it once was, and
always has been so far as keywords.h is concerned.  We could have
kept them separate, but there is noplace that uses ScanKeywordLookup
without also wanting access to the backend's keyword list, so there
seems little point.

ecpg is still a bit weird, but at least now the trickiness is documented.

I think that the MSVC build script should require no adjustments beyond
what's done here ... but we'll soon find out.
2016-03-23 20:22:08 -04:00
Tom Lane b283096534 Allow the delay in psql's \watch command to be a fractional second.
Instead of just "2" seconds, allow e.g. "2.5" seconds.  Per request
from Alvaro Herrera.  No docs change since the docs didn't say you
couldn't do this already.
2016-03-21 18:34:18 -04:00
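
A hedged psql sketch of a fractional delay (the query is arbitrary; note
there is no trailing semicolon before \watch):

    SELECT count(*) AS sessions FROM pg_stat_activity
    \watch 2.5
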
Tom Lane dea2b5960a Improve header output from psql's \watch command.
Include the \pset title string if there is one, and shorten the prefab
part of the header to be "timestamp (every Ns)".  Per suggestion by
David Johnston.

Michael Paquier and Tom Lane
2016-03-21 18:18:13 -04:00
Tom Lane b6afae71aa Use %option bison-bridge in psql/pgbench lexers.
The point of this change is to use %pure-parser in pgbench's exprparse.y.
The immediate reason is that it turns out very ancient versions of bison
have a bug with the combination of a reentrant lexer and non-reentrant
parser.  We could consider dropping support for such ancient bisons; but
considering that we might well need exprparse.y to be reentrant some day,
it seems better to make it so right now than to move the portability
goalposts.  (AFAICT there's no particular performance consequence to this
change, either, so there's no good reason not to do it.)

Now, %pure-parser assumes that the called lexer is built with %option
bison-bridge.  Because we're assuming bitwise compatibility of yyscan_t
(yyguts_t) data structures among all the psql/pgbench lexers, that
requirement propagates back to psql's lexers as well.  But it's just a
few lines of change on that side too; and if psqlscan.l is to set the
baseline for a possibly-large family of lexers, it should err on the
side of including, not omitting, useful features.
2016-03-20 21:59:03 -04:00
Tom Lane 68ab8e8ba4 SQL commands in pgbench scripts are now ended by semicolons, not newlines.
To allow multiline SQL commands in scripts, adopt the same rules psql uses
to decide what is the end of a SQL command, to wit, an unquoted semicolon
not encased in parentheses.  Do this by importing the same flex lexer that
psql uses, since coping with stuff like dollar-quoted literals is hard to
get right without going the full nine yards.

This makes use of the infrastructure added in commit 0ea9efbe9e to
support independently-written flex lexers scanning the same PsqlScanState
input-buffer data structure.  Since that infrastructure isn't very
friendly to ad-hoc parsing code such as strtok(), improve exprscan.l
so that it can parse either whitespace-separated words or expression
tokens, on demand, and rewrite pgbench.c's backslash-command parsing
code to always use the lexer to fetch tokens.

It's still the case that pgbench backslash commands extend to the end
of the line, no more and no less.  That could be changed in a fairly
localized way now, and there was some interest in doing so, but it
seems like material for a separate patch.

In passing, make some marginal cleanups in syntax error reporting,
const-ify a few data structures that could use it, and run some of
this code through pgindent.

I can't tell whether the MSVC build scripts need to be taught explicitly
about the changes here or not, but the buildfarm will soon tell us.

Kyotaro Horiguchi and Tom Lane
2016-03-20 12:58:51 -04:00
Tom Lane 429ee5a822 Make pgbench's expression lexer reentrant.
This is a necessary preliminary step for making it play with psqlscan.l
given the way I set up the lexer input-buffer sharing mechanism in commit
0ea9efbe9e.

I've not tried to make it *actually* reentrant; there are still some static
variables lying about.  But flex thinks it's reentrant, and that's what
counts.

In support of that, fix exprparse.y to pass through the yyscan_t from the
caller.  Also do some minor code beautification, like not casting away
const.
2016-03-19 16:35:41 -04:00
Alvaro Herrera 1038bc91ca pgbench: Silence new compiler warnings
The original coding in 7bafffea64 and previous wasn't all that great
anyway.

Reported by Jeff Janes and Tom Lane
2016-03-19 16:16:39 -03:00
Tom Lane 21c8ee7946 Sync backend/parser/scan.l with bin/psql/psqlscan.l.
Make some minor formatting adjustments to make it easier to diff these
files and see that they indeed implement the same flex rules (at least
to the extent that we want them to be the same).

(Someday it'd be nice to make ecpg's pgc.l more easily diff'able too,
but today is not that day.)

Also run relevant parts of these files and psqlscanslash.l through
pgindent.

No actual behavioral changes here, just obsessive neatnik-ism.
2016-03-19 14:36:22 -04:00
Alvaro Herrera 7bafffea64 pgbench: Allow changing weights for scripts
Previously, all scripts had the same probability of being chosen when
multiple of them were specified via -b, -f, -N, -S.  With this commit,
-b and -f now search for an "@" in the script name and use the integer
found after it as the drawing probability for that script.

(One disadvantage is that if you have scripts whose names contain @, you
are now forced to specify "@1" at the end; otherwise the name's @ is
confused with a weight separator.  We don't expect many pgbench scripts
with @ in their names in the wild, so this shouldn't be too serious a
problem.)

While at it, rework the interface between addScript, process_file,
process_builtin, and findBuiltin.  It had gotten a bit out of hand with
recent commits.

Author: Fabien Coelho
Reviewed-By: Andres Freund, Robert Haas, Álvaro Herrera, Michaël Paquier
Discussion: http://www.postgresql.org/message-id/alpine.DEB.2.10.1603160721240.1666@sto
2016-03-19 12:32:42 -03:00
Tom Lane b46d9beb65 With ancient gcc, skip pg_attribute_printf() on function pointer.
Buildfarm results show that the ability to attach pg_attribute_printf
decoration to a function pointer appeared somewhere between gcc 2.95.3
and gcc 4.0.1.  Guess that it was there in 4.0.
2016-03-19 10:59:20 -04:00
Tom Lane ff0a7e6167 Use yylex_init not yylex_init_extra().
Older versions of flex don't have the latter.  Per buildfarm.
2016-03-19 01:02:18 -04:00
Tom Lane a3e39f8363 Suppress FLEX_NO_BACKUP check for psqlscanslash.l.
The existing infrastructure for FLEX_NO_BACKUP doesn't work reliably
when two lexers are built in parallel in the same directory.  We can
probably fix that, but as a short-term workaround, just don't make
the check for psqlscanslash.l.

Per buildfarm.
2016-03-19 00:43:46 -04:00
Tom Lane 0ea9efbe9e Split psql's lexer into two separate .l files for SQL and backslash cases.
This gets us to a point where psqlscan.l can be used by other frontend
programs for the same purpose psql uses it for, ie to detect when it's
collected a complete SQL command from input that is divided across
line boundaries.  Moreover, other programs can supply their own lexers
for backslash commands of their own choosing.  A follow-on patch will
use this in pgbench.

The end result here is roughly the same as in Kyotaro Horiguchi's
0001-Make-SQL-parser-part-of-psqlscan-independent-from-ps.patch, although
the details of the method for switching between lexers are quite different.
Basically, in this patch we share the entire PsqlScanState, YY_BUFFER_STATE
stack, *and* yyscan_t between different lexers.  The only thing we need
to do to switch to a different lexer is to make sure the start_state is
valid for the new lexer.  This works because flex doesn't keep any other
persistent state that depends on the specific lexing tables generated for
a particular .l file.  (We are assuming that both lexers are built with
the same flex version, or at least versions that are compatible with
respect to the contents of yyscan_t; but that doesn't seem likely to
be a big problem in practice, considering how slowly flex changes.)

Aside from being more efficient than Horiguchi-san's original solution,
this avoids possible corner-case changes in semantics: the original code
was capable of popping the input buffer stack while still staying in
backslash-related parsing states.  I'm not sure that that equates to any
useful user-visible behaviors, but I'm not sure it doesn't either, so
I'm loath to assume that we only need to consider the topmost buffer when
parsing a backslash command.

I've attempted to update the MSVC build scripts for the added .l file,
but will rely on the buildfarm to see if I missed anything.

Kyotaro Horiguchi and Tom Lane
2016-03-19 00:24:55 -04:00
Tom Lane 27199058d9 Convert psql's flex lexer to be re-entrant, and make it compile standalone.
Change psqlscan.l to specify '%option reentrant', adjust internal APIs
to match, and get rid of its internal static variables.  While this is
good cleanup in an abstract sense, the reason to do it right now is that
it seems the only practical way to support use of separate flex lexers
with common PsqlScanState infrastructure.  If we build two non-reentrant
lexers then we are going to have problems with dangling buffer pointers
in whichever lexer isn't active when we transition from one buffer to
another, as well as curious side-effects if we try to share any code
between the files.  (Horiguchi-san had a different solution to that in his
pending patch, but I find it ugly and probably broken for corner cases.)

Depending on which version of flex you're using, this may result in getting
a "warning: unused variable 'yyg'" warning from psqlscan, similar to the
one you'd have seen for a long time in backend/parser/scan.l.  I put a
local -Wno-error into CFLAGS for the file, for the convenience of those
who compile with -Werror.

Also, stop compiling psqlscan as part of mainloop.c, and make it a
standalone build target instead.  This is a lot cleaner than before, though
it doesn't really change much in practice as of this commit.  (I'm not sure
whether the MSVC build scripts will need some help with this part, but the
buildfarm will soon tell us.)
2016-03-18 21:22:02 -04:00
Peter Eisentraut b555ed8102 Merge wal_level "archive" and "hot_standby" into new name "replica"
The distinction between "archive" and "hot_standby" existed only because
at the time "hot_standby" was added, there was some uncertainty about
stability.  That was a long time ago.  We would like to move forward
with simplifying the replication configuration, but this distinction is
in the way, because a primary server cannot tell (without asking a
standby or predicting the future) which one of these would be the
appropriate level.

Pick a new name for the combined setting to make it clearer that it
covers all (non-logical) backup and replication uses.  The old values
are still accepted but are converted internally.

Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
Reviewed-by: David Steele <david@pgmasters.net>
2016-03-18 23:56:03 +01:00
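
A hedged SQL sketch of switching to the new value (wal_level still requires
a server restart to take effect; the old spellings remain accepted and are
converted internally, per the commit):

    ALTER SYSTEM SET wal_level = replica;
    -- formerly 'archive' or 'hot_standby'; restart the server afterwards
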
Tom Lane 4e1d2a1708 Decouple psqlscan.l from surrounding program.
Remove assorted external references from psqlscan.l in preparation for
making it usable by other frontend programs.  This mostly involves
getting rid of direct calls to psql_error() and GetVariable() in favor
of introducing a callback-functions struct to encapsulate variable
fetching and error printing.  In addition, pass the current encoding
and standard-strings status as additional parameters to psql_scan_setup
instead of looking directly at "pset" or calling additional functions.

I did not bother to change some references to psql_error that are in
functions that will soon migrate to a psql-specific backslash-command
lexer.  Other than that, this version of psqlscan.l is capable of
compiling standalone.  It still depends on assorted src/common functions
as well as some encoding-related libpq functions, but we expect that
all programs using it will be happy with those dependencies.

Kyotaro Horiguchi, somewhat editorialized on by me
2016-03-18 15:05:59 -04:00
Tom Lane 3422feccca Clean up some misplaced #includes.
Random .h files have no business including postgres-fe.h (or postgres.h).
If that wasn't the first #include done by the calling .c file, it's the
.c file that's broken.  Noted while prepping Kyotaro Horiguchi's psql
lexer refactoring patch.
2016-03-18 13:43:17 -04:00
Tom Lane 47211af17a Fix "pgbench -C -M prepared".
This didn't work because when we dropped and re-established a database
connection, we did not bother to reset session-specific state such as
the statements-are-prepared flags.

The st->prepared[] array certainly needs to be flushed, and I cleared a
couple of other fields as well that couldn't possibly retain meaningful
state for a new connection.

In passing, fix some bogus comments and strange field order choices.

Per report from Robins Tharakan.
2016-03-16 23:18:07 -04:00