Commit Graph

24362 Commits

Author SHA1 Message Date
Heikki Linnakangas
3dee636e04 Add -d option to pg_dumpall, for specifying a connection string.
Like with pg_basebackup and pg_receivexlog, it's a bit strange to call the
option -d/--dbname, when in fact you cannot pass a database name in it.

Original patch by Amit Kapila, heavily modified by me.
2013-02-25 19:39:10 +02:00
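
For illustration, a minimal invocation of the new option, with a hypothetical host and user in the connection string, might look like:

    pg_dumpall -d "host=db1.example.com port=5432 user=postgres" > all.sql
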
Heikki Linnakangas
691e595dd9 Add -d/--dbname option to pg_dump.
You could already pass a database name just by passing it as the last
option, without -d. This is an alias for that, like the -d/--dbname option
in psql and many other client applications. For consistency.
2013-02-25 19:39:04 +02:00
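
For illustration, assuming a hypothetical database named mydb, the new spelling is equivalent to passing the name as the trailing argument:

    pg_dump -d mydb -f mydb.sql
    pg_dump mydb -f mydb.sql
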
Andrew Dunstan
a64e33f030 Redo MSVC build implementation for pg_xlogdump.
The previous commit didn't work on MSVC editions earlier than
Visual Studio 2011, apparently. This works by copying files into the
contrib directory, and making provision to clean them up, which should
work on all editions.
2013-02-25 12:00:53 -05:00
Heikki Linnakangas
aa05c37e82 Add -d option to pg_basebackup and pg_receivexlog, for connection string.
Without this, there's no way to pass arbitrary libpq connection parameters
to these applications. It's a bit strange that the option is called
-d/--dbname, when in fact you can *not* pass a database name in it, but it's
consistent with other client applications where a connection string is also
passed using -d.

Original patch by Amit Kapila, heavily modified by me.
2013-02-25 14:59:33 +02:00
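
For illustration, a hypothetical standby setup might pass arbitrary libpq parameters this way:

    pg_basebackup -D /var/lib/postgresql/standby \
        -d "host=primary.example.com user=replicator sslmode=require"
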
Andrew Dunstan
786170d74f Provide MSVC build setup for pg_xlogdump. 2013-02-24 20:28:42 -05:00
Peter Eisentraut
ca9c666602 Correct tense in log message 2013-02-23 23:30:14 -05:00
Peter Eisentraut
4f36292669 Add quotes to messages 2013-02-22 23:33:07 -05:00
Alvaro Herrera
639ed4e84b Add pg_xlogdump contrib program
This program relies on rm_desc backend routines and the xlogreader
infrastructure to emit human-readable rendering of WAL records.

Author: Andres Freund, with many reworks by Álvaro
Reviewed (in a much earlier version) by Peter Eisentraut
2013-02-22 16:56:55 -03:00
Alvaro Herrera
af0a4c5924 Blind attempt at fixing the non-MSVC Windows builds
Apparently, they need -DBUILDING_DLL for the Assert() declarations to
work correctly.
2013-02-22 11:51:15 -03:00
Heikki Linnakangas
6c4f6664b2 Fix thinko in previous commit.
We must still initialize minRecoveryPoint if we start straight with archive
recovery, e.g. when recovering from a normal base backup taken with
pg_start/stop_backup. Otherwise we never consider the system consistent.
2013-02-22 13:12:43 +02:00
Heikki Linnakangas
abf5c5c9a4 If recovery.conf is created after "pg_ctl stop -m i", do crash recovery.
If you create a base backup using an atomic filesystem snapshot, and try to
perform PITR starting from that base backup, or if you just kill a master
server and create recovery.conf to put it into standby mode, we don't know
how far we need to recover before reaching consistency. Normally in crash
recovery, we replay all the WAL present in pg_xlog, and assume that we're
consistent after that. And normally in archive recovery, minRecoveryPoint,
backupEndRequired, or backupEndPoint is set in the control file, indicating
how far we need to replay to reach consistency. But if the server was
previously up and running normally, and you kill -9 it or take an atomic
filesystem snapshot, none of those fields are set in the control file.

The solution is to perform crash recovery first, replaying all the WAL in
pg_xlog. After that's done, we assume that the system is consistent like in
normal crash recovery, and switch to archive recovery mode after that.

Per report from Kyotaro HORIGUCHI. In his scenario, recovery.conf was
created after "pg_ctl stop -m i". I'm not sure we need to support that exact
scenario, but we should support backing up using a filesystem snapshot,
which looks identical.

This issue goes back to at least 9.0, where hot standby was introduced and
we started to track when consistency is reached. In 9.1 and 9.2, we would
open up for hot standby too early, and queries could briefly see an
inconsistent state. But 9.2 made it more visible, as we started to PANIC if
we see a reference to a non-existing page during recovery, if we've already
reached consistency. This is a fairly big patch, so back-patch to 9.2 only,
where the issue is more visible. We can consider back-patching further after
this has received some more testing in 9.2 and master.
2013-02-22 12:32:41 +02:00
Alvaro Herrera
a730183926 Move relpath() to libpgcommon
This enables non-backend code, such as pg_xlogdump, to use it easily.
The previous location, in src/backend/catalog/catalog.c, made that
essentially impossible because that file depends on many backend-only
facilities; so this needs to live separately.
2013-02-21 22:46:17 -03:00
Alvaro Herrera
6e3fd96463 Remove useless variable
Per Jeff Janes
2013-02-21 11:46:46 -03:00
Tom Lane
54a2786835 Need to decorate XactIsoLevel as PGDLLIMPORT for postgres_fdw.
Per buildfarm.
2013-02-21 09:28:42 -05:00
Tom Lane
699d70b2ec Teach MSVC build system about postgres_fdw.
Per buildfarm.
2013-02-21 06:43:15 -05:00
Tom Lane
d0d75c4022 Add postgres_fdw contrib module.
There's still a lot of room for improvement, but it basically works,
and we need this to be present before we can do anything much with the
writable-foreign-tables patch.  So let's commit it and get on with testing.

Shigeru Hanada, reviewed by KaiGai Kohei and Tom Lane
2013-02-21 05:27:16 -05:00
Heikki Linnakangas
f435cd1d38 Fix pg_dumpall with database names containing =
If a database name contained a '=' character, pg_dumpall failed. The problem
was in the way pg_dumpall passes the database name to pg_dump on the
command line. If it contained a '=' character, pg_dump would interpret it
as a libpq connection string instead of a plain database name.

To fix, pass the database name to pg_dump as a connection string,
"dbname=foo", with the database name escaped if necessary.

Back-patch to all supported branches.
2013-02-20 17:08:54 +02:00
Heikki Linnakangas
2930c05634 Don't pass NULL to fprintf, if a bogus connection string is given to pg_dump.
Back-patch to all supported branches.
2013-02-20 16:33:24 +02:00
Heikki Linnakangas
5d6899dbae Fix yet another typo in comment.
Etsuro Fujita
2013-02-20 12:31:26 +02:00
Alvaro Herrera
a40d09e27f Move ExceptionalCondition back to postgres.h
It needs to be defined in the backend even when assertions are not
enabled.  It's cleaner to put it back, than create a separate #ifdef
section in c.h.

Per trouble report from Jeff Janes
2013-02-18 18:53:32 -03:00
Alvaro Herrera
187492b6c2 Split pgstat file in smaller pieces
We now write one file per database and one global file, instead of
having the whole thing in a single huge file.  This reduces the I/O that
must be done when partial data is required -- which is all the time,
because each process only needs information on its own database anyway.
Also, the autovacuum launcher does not need data about tables and
functions in each database; having the global stats for all DBs is
enough.

Catalog version bumped because we have a new subdir under PGDATA.

Author: Tomas Vondra.  Some rework by Álvaro
Testing by Jeff Janes
Other discussion by Heikki Linnakangas, Tom Lane.
2013-02-18 18:12:52 -03:00
Peter Eisentraut
9475db3a4e Add ALTER ROLE ALL SET command
This generalizes the existing ALTER ROLE ... SET and ALTER DATABASE
... SET functionality to allow creating settings that apply to all users
in all databases.

reviewed by Pavel Stehule
2013-02-17 23:45:36 -05:00
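
For illustration, a hypothetical setting applied to all users in all databases might look like:

    ALTER ROLE ALL SET log_lock_waits = on;
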
Bruce Momjian
17f1523932 Warn about initdb using mount-points
Add code to detect and warn about trying to initdb or create pg_xlog on
mount points.
2013-02-16 18:52:50 -05:00
Heikki Linnakangas
1bd42cd70a Better fix for "unarchived WAL files get deleted on crash recovery" bug.
Revert my earlier fix for the bug that unarchived WAL files get deleted on
crash recovery, commit c9cc7e05c6. We create
a .done file for files streamed or restored from archive, so the WAL file
recycling logic used during normal operation works just as well during
archive recovery.

Per Fujii Masao's suggestion.
2013-02-15 19:33:31 +02:00
Simon Riggs
c2f79ba269 Force archive_status of .done for xlogs created by dearchival/replication.
This is a forward-patch of commit 6f4b8a4f4f,
applied to 9.2 back in August. The plan was to do something else in master,
but it looks like it's not going to happen, so let's just apply the 9.2
solution to master as well.

Fujii Masao
2013-02-15 19:28:06 +02:00
Heikki Linnakangas
c9cc7e05c6 Don't delete unarchived WAL files during crash recovery.
Bug reported by Jehan-Guillaume (ioguix) de Rorthais. This was introduced
with the change to keep WAL files restored from archive in pg_xlog, in 9.2.
2013-02-15 17:43:59 +02:00
Peter Eisentraut
8e6c8da16a pgindent: Fix order in instructions
The previous order of steps didn't literally work, because git clean
-fdx would delete the downloaded typedefs.list.  Also, pgindent needs to
be called with a path when one is at the top of the build tree.
2013-02-14 21:40:05 -05:00
Tom Lane
fdaf44862b Invent pre-commit/pre-prepare/pre-subcommit events for xact callbacks.
Currently it's only possible for loadable modules to get control during
post-commit cleanup of a transaction.  That doesn't work too well if they
want to do something that could throw an error; for example, an FDW might
need to issue a remote commit, which could well fail.  To improve matters,
extend the existing APIs for XactCallback and SubXactCallback functions
to provide new pre-commit events for this purpose.

The release notes will need to mention that existing callback functions
should be checked to make sure they don't do something unwanted when one
of the new event types occurs.  In the examples within our source tree,
contrib/sepgsql was fine but plpgsql had been a bit too cute.
2013-02-14 20:35:08 -05:00
Tom Lane
71627f3d19 Fix CVE-2013-0255 properly.
Revert commit ab0f7b6089 (in HEAD only)
in favor of the proper solution, which is to declare enum_recv() correctly
in the system catalogs.  It should be declared to take type "internal"
not "cstring".

Also improve the type_sanity regression test, which should have caught
this typo, so that it actually would.  Most of the relevant checks on
the signature of type I/O functions should not have been restricted to
basetypes/pseudotypes, as they should apply to any type's I/O functions.
2013-02-13 16:20:01 -05:00
Tom Lane
cd89965aab Fix bogus when-to-deregister-from-listener-array logic.
Since a backend adds itself to the global listener array during
Exec_ListenPreCommit, it's inappropriate for it to remove itself during
Exec_UnlistenCommit or Exec_UnlistenAllCommit --- that leads to failure
when committing a transaction that did UNLISTEN then LISTEN, since we end
up not registered though we should be.  (This leads to missing later
notifications, or to Assert failures in assert-enabled builds.)  Instead
deal with deregistering at the bottom of AtCommit_Notify, when we know the
final state of the listenChannels list.

Also, simplify the representation of registration status by replacing the
transient backendHasExecutedInitialListen flag with an amRegisteredListener
flag.

Per report from Greg Sabino Mullane.  Back-patch to 9.0, where the problem
was introduced during the LISTEN/NOTIFY rewrite.
2013-02-13 12:48:05 -05:00
Heikki Linnakangas
fdf9e21196 Update visibility map in the second phase of vacuum.
There's a high chance that a page becomes all-visible when the second phase
of vacuum removes all the dead tuples on it, so it makes sense to check for
that. Otherwise the visibility map won't get updated until the next vacuum.

Pavan Deolasee, reviewed by Jeff Janes.
2013-02-13 17:52:10 +02:00
Alvaro Herrera
0e81ddde2c Rename "string" pstrdup argument to "in"
The former name collides with a symbol also used in the isolation test's
parser, causing assorted failures in certain platforms.
2013-02-12 12:43:09 -03:00
Alvaro Herrera
0f980b0e17 Don't build libpgcommon_srv.a just yet
It's empty, and some archivers do not support that case.
2013-02-12 12:21:27 -03:00
Alvaro Herrera
8396447cdb Create libpgcommon, and move pg_malloc et al to it
libpgcommon is a new static library to allow sharing code among the
various frontend programs and backend; this lets us eliminate duplicate
implementations of common routines.  We avoid libpgport, because that's
intended as a place for porting issues; per discussion, it seems better
to keep them separate.

The first use case, and the only implemented by this patch, is pg_malloc
and friends, which many frontend programs were already using.

At the same time, we can use this to provide palloc emulation functions
for the frontend; this way, some palloc-using files in the backend can
also be used by the frontend cleanly.  To do this, we change palloc() in
the backend to be a function instead of a macro on top of
MemoryContextAlloc().  This was previously believed to cause loss of
performance, but this implementation has been tweaked by Tom and Andres
so that on modern compilers it provides a slight improvement over the
previous one.

This lets us clean up some places that were previously making do with
localized hacks.

Most of the pg_malloc/palloc changes in this patch were authored by
Andres Freund. Zoltán Böszörményi also independently provided a form of
that.  libpgcommon infrastructure was authored by Álvaro.
2013-02-12 11:21:05 -03:00
Peter Eisentraut
0cb1fac3b1 Add noreturn attributes to some error reporting functions 2013-02-12 07:13:22 -05:00
Heikki Linnakangas
62401db45c Support unlogged GiST index.
The reason this wasn't supported before was that GiST indexes need an
increasing sequence to detect concurrent page-splits. In a regular WAL-
logged GiST index, the LSN of the page-split record is used for that
purpose, and in a temporary index, we can get away with a backend-local
counter. Neither of those methods works for an unlogged relation.

To provide such an increasing sequence of numbers, create a "fake LSN"
counter that is saved and restored across shutdowns. On recovery, unlogged
relations are blown away, so the counter doesn't need to survive that
either.

Jeevan Chalke, based on discussions with Robert Haas, Tom Lane and me.
2013-02-11 23:07:09 +02:00
Heikki Linnakangas
b669f416ce Fix checkpoint after fast promotion.
The intention was to request a regular online checkpoint immediately after
end of recovery, when performing "fast promotion". However, because the
checkpoint was requested before other backends were allowed to write WAL,
the checkpointer process performed a restartpoint rather than a checkpoint.

Delay the RequestCheckPoint call until after recovery has truly ended, so
that you get a real checkpoint.
2013-02-11 22:22:08 +02:00
Heikki Linnakangas
7803e9327d Include previous TLI in end-of-recovery and shutdown checkpoint records.
This isn't used for anything but a sanity check at the moment, but it could
be highly valuable for debugging purposes. It could also be used to recreate
timeline history by traversing WAL, which seems useful.
2013-02-11 18:16:25 +02:00
Tom Lane
c352ea2d74 Further cleanup of gistsplit.c.
After further reflection I was unconvinced that the existing coding is
guaranteed to return valid union datums in every code path for multi-column
indexes.  Fix that by forcing a gistunionsubkey() call at the end of the
recursion.  Having done that, we can remove some clearly-redundant calls
elsewhere.  This should be a little faster for multi-column indexes (since
the previous coding would uselessly do such a call for each column while
unwinding the recursion), as well as much harder to break.

Also, simplify the handling of cases where one side or the other of a
primary split contains only don't-care tuples.  The previous coding used a
very ugly hack in removeDontCares() that essentially forced one random
tuple to be treated as non-don't-care, providing a random initial choice of
seed datum for the secondary split.  It seems unlikely that that method
will give better-than-random splits.  Instead, treat such a split as
degenerate and just let the next column determine the split, the same way
that we handle fully degenerate cases where the two sides produce identical
union datums.
2013-02-10 16:21:26 -05:00
Tom Lane
db3d7e9f0d Remove useless picksplit-doesn't-support-secondary-split log spam.
This LOG message was put in over five years ago with the evident
expectation that we'd make all GiST opclasses support secondary split
directly.  However, no such thing ever happened, and indeed the number of
opclasses supporting it decreased to zero in 9.2.  The reason is that
improving on the default implementation isn't that easy --- the
opclass-specific code that did exist, before 9.2, doesn't appear to have
been any improvement over the default.

Hence, remove the message altogether.  There's certainly no point in
nagging users about this in released branches, but I doubt that we'll
ever implement complete opclass-specific support anyway.
2013-02-10 13:07:40 -05:00
Tom Lane
dacc185f52 Remove vestigial secondary-split support in gist_box_picksplit().
Not only is this implementation of secondary-split not better than the
default implementation in gistsplit.c, it's actually worse.  The gistsplit.c
code at least looks to see if switching the left and right sides would make
a better merge with the previously-split tuples, while this doesn't.

In any case it's rather useless to support secondary split only in an edge
case.  There used to be more complete support for it here (in chooseLR()),
but that was removed in commit 7f3bd86843.
It appears to me though that the chooseLR() code was really isomorphic to
the default implementation, since it was still based on choosing the cheaper
way of adding two sub-split vectors that had been chosen without regard to
the primary split initially.  I think an implementation of secondary split
that could beat the default implementation would have to be pretty fully
integrated into the split algorithm, not plastered on at the end.

Back-patch to 9.2, but not further; previous branches have the chooseLR()
code which I don't feel a great need to mess with.  This is mainly so we
just have two behaviors and not three among the various branches (IOW, this
patch is cleanup for commit 7f3bd86843e5aad84585a57d3f6b80db3c609916's
incomplete removal of secondary-split support).
2013-02-10 12:40:09 -05:00
Tom Lane
0fd0f3688b Document and clean up gistsplit.c.
Improve comments, rename some variables and functions, slightly simplify
a couple of APIs, in an attempt to make this code readable by people other
than its original author.

Even though this is essentially just cosmetic, back-patch to all active
branches, because otherwise it's going to make back-patching future fixes
in this file very painful.
2013-02-10 11:58:15 -05:00
Tom Lane
a187c96d26 Reduce log level of picksplit-doesn't-support-secondary-split whining.
This was agreed to back in 2007, but never actually done.

Josh Hansen
2013-02-09 12:17:55 -05:00
Peter Eisentraut
0343a59d11 psql: Improve unaligned expanded output for zero rows
This used to erroneously print an empty line.  Now it prints nothing.
2013-02-09 00:11:58 -05:00
Peter Eisentraut
8ade58a4ea psql: Improve expanded print output in tuples-only mode
When there are zero result rows, in expanded mode, "(No rows)" is
printed.  So far, there was no way to turn this off.  Now, when
tuples-only mode is turned on, nothing is printed in this case.
2013-02-09 00:11:58 -05:00
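
For illustration of these two psql changes, with expanded and tuples-only modes both on, a query returning zero rows now produces no output at all (hypothetical session; psql's mode-confirmation messages omitted):

    regression=# \x
    regression=# \t
    regression=# SELECT * FROM pg_class WHERE false;
    regression=#
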
Tom Lane
c61e26ee3e Add support for ALTER RULE ... RENAME TO.
Ali Dar, reviewed by Dean Rasheed.
2013-02-08 23:58:40 -05:00
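
For illustration, with a hypothetical rule and table:

    ALTER RULE log_inserts ON orders RENAME TO audit_inserts;
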
Tom Lane
f806c191a3 Simplify box_overlap computations.
Given the assumption that a box's high coordinates are not less than its
low coordinates, the tests in box_ov() are overly complicated and can be
reduced to about half as much work.  Since many other functions in
geo_ops.c rely on that assumption, there doesn't seem to be a good reason
not to use it here.

Per discussion of Alexander Korotkov's GiST fix, which was already using
the simplified logic (in a non-fuzzy form, but the equivalence holds just
as well for fuzzy).
2013-02-08 18:26:08 -05:00
Tom Lane
3c29b196b0 Fix gist_box_same and gist_point_consistent to handle fuzziness correctly.
While there's considerable doubt that we want fuzzy behavior in the
geometric operators at all (let alone as currently implemented), nobody is
stepping forward to redesign that stuff.  In the meantime it behooves us
to make sure that index searches agree with the behavior of the underlying
operators.  This patch fixes two problems in this area.

First, gist_box_same was using fuzzy equality, but it really needs to use
exact equality to prevent not-quite-identical upper index keys from being
treated as identical, which for example would prevent an existing upper
key from being extended by an amount less than epsilon.  This would result
in inconsistent indexes.  (The next release notes will need to recommend
that users reindex GiST indexes on boxes, polygons, circles, and points,
since all four opclasses use gist_box_same.)

Second, gist_point_consistent used exact comparisons for upper-page
comparisons in ~= searches, when it needs to use fuzzy comparisons to
ensure it finds all matches; and it used fuzzy comparisons for point <@ box
searches, when it needs to use exact comparisons because that's what the
<@ operator (rather inconsistently) does.

The added regression test cases illustrate all three misbehaviors.

Back-patch to all active branches.  (8.4 did not have GiST point_ops,
but it still seems prudent to apply the gist_box_same patch to it.)

Alexander Korotkov, reviewed by Noah Misch
2013-02-08 18:03:17 -05:00
Alvaro Herrera
381d4b70a9 Clean up c.h / postgres.h after Assert() move
Per Tom
2013-02-08 12:50:58 -03:00
Alvaro Herrera
5766228bc6 Fix Xmax freeze conditions
I broke this in 0ac5ad5134; previously, freezing a tuple marked with an
IS_MULTI xmax was not necessary.

Per brokenness report from Jeff Janes.
2013-02-08 12:50:58 -03:00
Magnus Hagander
c572bfaf39 Fix another typo in a comment
Noted by Thom Brown
2013-02-08 15:42:01 +01:00
Peter Eisentraut
cf4d67e819 Exclude access/rmgrlist.h from cpluspluscheck
It is not meant to be included standalone.
2013-02-08 07:01:21 -05:00
Peter Eisentraut
4760142146 scripts: Add build prerequisite on libpgport
Without this, building in src/bin/scripts directly will fail if
libpgport wasn't built first.  Other bin components are handled the same
way.

Phil Sorber
2013-02-08 06:43:54 -05:00
Magnus Hagander
733701d274 Fix typo in comment
Etsuro Fujita
2013-02-08 11:45:42 +01:00
Tom Lane
bcc6c4c291 Fix performance issue in EXPLAIN (ANALYZE, TIMING OFF).
Commit af7914c662, which added the TIMING
option to EXPLAIN, had an oversight: if the TIMING option is disabled
then control in InstrStartNode() goes through an elog(DEBUG2) call, which
typically does nothing but takes a noticeable amount of time to do it.
Tweak the logic to avoid that.

In HEAD, also change the elog(DEBUG2)'s in instrument.c to elog(ERROR).
It's not very clear why they weren't like that to begin with, but this
episode shows that not complaining more vociferously about misuse is
likely to do little except allow bugs to remain hidden.

While at it, adjust some code that was making possibly-dangerous
assumptions about flag bits being in the rightmost byte of the
instrument_options word.

Problem reported by Pavel Stehule (via Tomas Vondra).
2013-02-07 22:53:00 -05:00
Tom Lane
166d534fcd Repair bugs in GiST page splitting code for multi-column indexes.
When considering a non-last column in a multi-column GiST index,
gistsplit.c tries to improve on the split chosen by the opclass-specific
pickSplit function by considering penalties for the next column.  However,
there were two bugs in this code: it failed to recompute the union keys for
the leftmost index columns, even though these might well change after
reassigning tuples; and it included the old union keys in the recomputation
for the columns it did recompute, so that those keys couldn't get smaller
even if they should.  The first problem could result in an invalid index
in which searches wouldn't find index entries that are in fact present;
the second would make the index less efficient to search.

Both of these errors were caused by misuse of gistMakeUnionItVec, whose
API was designed in a way that just begged such errors to be made.  There
is no situation in which it's safe or useful to compute the union keys for
a subset of the index columns, and there is no caller that wants any
previous union keys to be included in the computation; so the undocumented
choice to treat the union keys as in/out rather than pure output parameters
is a waste of code as well as being dangerous.

Hence, rather than just making a minimal patch, I've changed the API of
gistMakeUnionItVec to remove the "startkey" parameter (it now always
processes all index columns) and treat the attr/isnull arrays as purely
output parameters.

In passing, also get rid of a couple of unnecessary and dangerous uses
of static variables in gistutil.c.  It's remarkable that the one in
gistMakeUnionKey hasn't given us portability troubles before now, because
in addition to posing a re-entrancy hazard, it was unsafely assuming that
a static char[] array would have at least Datum alignment.

Per investigation of a trouble report from Tomas Vondra.  (There are also
some bugs in contrib/btree_gist to be fixed, but that seems like material
for a separate patch.)  Back-patch to all supported branches.
2013-02-07 17:44:02 -05:00
Tom Lane
c5aad8dc14 Fix possible failure to send final transaction counts to stats collector.
Normally, we suppress sending a tabstats message to the collector unless
there were some actual table stats to send.  However, during backend exit
we should force out the message if there are any transaction commit/abort
counts to send, else the session's last few commit/abort counts will never
get reported at all.  We had logic for this, but the short-circuit test
at the top of pgstat_report_stat() ignored the "force" flag, with the
consequence that session-ending transactions that touched no database-local
tables would not get counted.  Seems to be an oversight in my commit
641912b4d1, which added the "force" flag.
That was back in 8.3, so back-patch to all supported versions.
2013-02-07 14:44:00 -05:00
Simon Riggs
072521b8c8 Rely only on checkpoint 1 at end of recovery.
Searching for checkpoint 2 (previous) is not
correct in all cases.

Bug report from Heikki Linnakangas
2013-02-07 16:33:05 +00:00
Andrew Dunstan
e1c1e21732 Enable building with Microsoft Visual Studio 2012.
Backpatch to release 9.2

Brar Piening and Noah Misch, reviewed by Craig Ringer.
2013-02-06 14:52:29 -05:00
Alvaro Herrera
5a1cd89f8f Split out list of XLog resource managers
The new rmgrlist.h header, containing all necessary data
about built-in resource managers, allows other pieces of code to
access them.

In particular, this allows a future pg_xlogdump program to extract
rm_desc function pointers, without having to keep a duplicate list of
them.
2013-02-06 08:47:28 -03:00
Alvaro Herrera
cb9b66d31a Improve error message wording
The wording changes applied in 0ac5ad513 were universally disliked.

Per gripe from Andrew Dunstan
2013-02-06 00:19:53 -03:00
Tom Lane
ab0f7b6089 Prevent execution of enum_recv() from SQL.
This function was misdeclared to take cstring when it should take internal.
This at least allows crashing the server, and in principle an attacker
might be able to use the function to examine the contents of server memory.

The correct fix is to adjust the system catalog contents (and fix the
regression tests that should have caught this but failed to).  However,
asking users to correct the catalog contents in existing installations
is a pain, so as a band-aid fix for the back branches, install a check
in enum_recv() to make it throw error if called with a cstring argument.
We will later revert this in HEAD in favor of correcting the catalogs.

Our thanks to Sumit Soni (via Secunia SVCRP) for reporting this issue.

Security: CVE-2013-0255
2013-02-04 16:25:01 -05:00
Simon Riggs
f480e29449 Reset vacuum_defer_cleanup_age to PGC_SIGHUP.
Revert commit 84725aa5ef
2013-02-04 16:39:55 +00:00
Simon Riggs
bd56e74127 Reset master xmin when hot_standby_feedback disabled.
If the walsender has the standby's xmin, then ensure we reset the value
to 0 when we change from hot_standby_feedback=on to
hot_standby_feedback=off.
2013-02-04 10:29:22 +00:00
Tom Lane
62e666400d Perform line wrapping and indenting by default in ruleutils.c.
This patch changes pg_get_viewdef() and allied functions so that
PRETTY_INDENT processing is always enabled.  Per discussion, only the
PRETTY_PAREN processing (that is, stripping of "unnecessary" parentheses)
poses any real forward-compatibility risk, so we may as well make dump
output look as nice as we safely can.

Also, set the default wrap length to zero (i.e., wrap after each SELECT
or FROM list item), since there's no very principled argument for the
former default of 80-column wrapping, and most people seem to agree this
way looks better.

Marko Tiikkaja, reviewed by Jeevan Chalke, further hacking by Tom Lane
2013-02-03 15:56:45 -05:00
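
For illustration, the wrapped and indented output can be seen by decompiling any view, e.g. a hypothetical my_view:

    SELECT pg_get_viewdef('my_view'::regclass);
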
Peter Eisentraut
330ed4ac6c PL/Python: Add result object str handler
This is intended so that say plpy.debug(rv) prints something useful for
debugging query execution results.

reviewed by Steve Singer
2013-02-03 00:31:01 -05:00
Tom Lane
d2d153fdb0 Create a psql command \gset to store query results into psql variables.
This eases manipulation of query results in psql scripts.

Pavel Stehule, reviewed by Piyush Newe, Shigeru Hanada, and Tom Lane
2013-02-02 17:06:38 -05:00
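
For illustration, a hypothetical session storing query output into psql variables named after the output columns and reusing them:

    regression=# SELECT count(*) AS nrows, current_database() AS db FROM pg_class \gset
    regression=# \echo :db has :nrows rows in pg_class
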
Tom Lane
101d6ae755 Prevent "\g filename" from affecting subsequent commands after an error.
In the previous coding, psql's state variable saying that output should
go to a file was only reset after successful completion of a query
returning tuples.  Thus for example,

regression=# select 1/0
regression-# \g somefile
ERROR:  division by zero
regression=# select 1/2;
regression=#

... huh, I wonder where that output went.  Even more oddly, the state
was not reset even if it's the file that's causing the failure:

regression=# select 1/2 \g /foo
/foo: Permission denied
regression=# select 1/2;
/foo: Permission denied
regression=# select 1/2;
/foo: Permission denied

This seems to me not to satisfy the principle of least surprise.
\g is certainly not documented in a way that suggests its effects are
at all persistent.

To fix, adjust the code so that the flag is reset at exit from SendQuery
no matter what happened.

Noted while reviewing the \gset patch, which had comparable issues.
Arguably this is a bug fix, but I'll refrain from back-patching for now.
2013-02-02 14:22:17 -05:00
Simon Riggs
84725aa5ef Mark vacuum_defer_cleanup_age as PGC_POSTMASTER.
Following bug analysis of #7819 by Tom Lane
2013-02-02 18:49:54 +00:00
Bruce Momjian
e8ae019661 Adjust COPY FREEZE error message to be more accurate and consistent.
Per suggestions from Noah and Tom.
2013-02-02 12:56:52 -05:00
Alvaro Herrera
e1d25de35a Move Assert() definitions to c.h
This way, they can be used by frontend and backend code.  We already
supported that, but doing it this way allows us to mix true frontend
files with backend files compiled in frontend environment.

Author: Andres Freund
2013-02-01 17:50:04 -03:00
Alvaro Herrera
dd1569da67 Fix typo in freeze_table_age implementation
The original code used freeze_min_age instead of freeze_table_age.  The
main consequence of this mistake is that lowering freeze_min_age would
cause full-table scans to occur much more frequently, which causes
serious issues because the number of writes required is much larger.
That feature (freeze_min_age) is supposed to affect only how soon tuples
are frozen; some pages should still be skipped due to the visibility
map.

Backpatch to 8.4, where the freeze_table_age feature was introduced.

Report and patch from Andres Freund
2013-02-01 12:00:40 -03:00
Alvaro Herrera
9ee00ef4c7 Fill tuple before HeapSatisfiesHOTAndKeyUpdate
Failing to do this results in almost all updates to system catalogs
being non-HOT updates, because the OID column would differ (not having
been set for the new tuple), which is an indexed column.

While at it, make sure to set the tableoid early in both old and new
tuples as well.  This isn't of much consequence, since that column is
seldom (never?) indexed.

Report and patch from Andres Freund.
2013-02-01 10:43:09 -03:00
Peter Eisentraut
5839052693 Add CREATE RECURSIVE VIEW syntax
This is specified in the SQL standard.  The CREATE RECURSIVE VIEW
specification is transformed into a normal CREATE VIEW statement with a
WITH RECURSIVE clause.

reviewed by Abhijit Menon-Sen and Stephen Frost
2013-01-31 22:31:58 -05:00
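
For illustration, a hypothetical recursive view and the ordinary form it is transformed into:

    CREATE RECURSIVE VIEW nums (n) AS
        VALUES (1)
      UNION ALL
        SELECT n + 1 FROM nums WHERE n < 100;

    -- equivalent to:
    CREATE VIEW nums AS
      WITH RECURSIVE nums (n) AS (
          VALUES (1)
        UNION ALL
          SELECT n + 1 FROM nums WHERE n < 100
      )
      SELECT n FROM nums;
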
Peter Eisentraut
b1980f6d03 PL/Tcl: Fix compiler warnings with Tcl 8.6
Some constification was added in the Tcl APIs, so add the modifiers in
PL/Tcl as well.
2013-01-31 22:08:53 -05:00
Alvaro Herrera
b78647a0e6 Restrict infomask bits to set on multixacts
We must only set the bit(s) for the strongest lock held in the tuple;
otherwise, a multixact containing members with exclusive lock and
key-share lock will behave as though only a share lock is held.

This bug was introduced in commit 0ac5ad5134, somewhere along
development, when we allowed a singleton FOR SHARE lock to be
implemented without a MultiXact by using a multi-bit pattern.
I overlooked that GetMultiXactIdHintBits() needed to be tweaked as well.
Previously, we could have the bits for FOR KEY SHARE and FOR UPDATE
simultaneously set and it wouldn't cause a problem.

Per report from digoal@126.com
2013-01-31 19:35:31 -03:00
Simon Riggs
3f0ab05233 Switch timelines if we crash soon after promotion.
Previous patch to skip checkpoints at end of recovery didn't
correctly perform crash recovery, fumbling the timeline switch.
Now we record the minRecoveryPointTLI of the newly selected
timeline, so that we crash recover to the correct timeline.

Bug report from Fujii Masao, investigated by me.
2013-01-31 19:29:32 +00:00
Tom Lane
9afc58396a Reject nonzero day fields in AT TIME ZONE INTERVAL functions.
It's not sensible for an interval that's used as a time zone value to be
larger than a day.  When we changed the interval type to contain a separate
day field, check_timezone() was adjusted to reject nonzero day values, but
timetz_izone(), timestamp_izone(), and timestamptz_izone() evidently were
overlooked.

While at it, make the error messages for these three cases consistent.
2013-01-31 12:12:23 -05:00
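
For illustration:

    SELECT TIMESTAMPTZ '2013-01-31 12:00+00' AT TIME ZONE INTERVAL '-05:00';  -- still accepted
    SELECT TIMESTAMPTZ '2013-01-31 12:00+00' AT TIME ZONE INTERVAL '1 day';   -- now rejected with an error
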
Magnus Hagander
bfb8a8d381 Properly zero-pad the day-of-year part of the win32 build number
This ensures the version number increases over time. The first three digits
in the version number are still set to the actual PostgreSQL version
number, but the last one is intended to be an ever-increasing build number,
which previously failed when it changed between 1-, 2- and 3-digit values.

Noted by Deepak
2013-01-31 15:06:45 +01:00
Tom Lane
2ab218b576 Don't use spi_priv.h in plpython.
There may once have been a reason to violate modularity like that,
but it doesn't appear that there is one anymore.
2013-01-30 20:11:58 -05:00
Tom Lane
0900ac2d0d Fix plpgsql's reporting of plan-time errors in possibly-simple expressions.
exec_simple_check_plan and exec_eval_simple_expr attempted to call
GetCachedPlan directly.  This meant that if an error was thrown during
planning, the resulting context traceback would not include the line
normally contributed by _SPI_error_callback.  This is already inconsistent,
but just to be really odd, a re-execution of the very same expression
*would* show the additional context line, because we'd already have cached
the plan and marked the expression as non-simple.

The problem is easy to demonstrate in 9.2 and HEAD because planning of a
cached plan doesn't occur at all until GetCachedPlan is done.  In earlier
versions, it could only be an issue if initial planning had succeeded, then
a replan was forced (already somewhat improbable for a simple expression),
and the replan attempt failed.  Since the issue is mainly cosmetic in older
branches anyway, it doesn't seem worth the risk of trying to fix it there.
It is worth fixing in 9.2 since the instability of the context printout can
affect the results of GET STACKED DIAGNOSTICS, as per a recent discussion
on pgsql-novice.

To fix, introduce a SPI function that wraps GetCachedPlan while installing
the correct callback function.  Use this instead of calling GetCachedPlan
directly from plpgsql.

Also introduce a wrapper function for extracting a SPI plan's
CachedPlanSource list.  This lets us stop including spi_priv.h in
pl_exec.c, which was never a very good idea from a modularity standpoint.

In passing, fix a similar inconsistency that could occur in SPI_cursor_open,
which was also calling GetCachedPlan without setting up a context callback.
2013-01-30 20:02:23 -05:00
Tom Lane
670a6c7a22 Fix grammar for subscripting or field selection from a sub-SELECT result.
Such cases should work, but the grammar failed to accept them because of
our ancient precedence hacks to convince bison that extra parentheses
around a sub-SELECT in an expression are unambiguous.  (Formally, they
*are* ambiguous, but we don't especially care whether they're treated as
part of the sub-SELECT or part of the expression.  Bison cares, though.)
Fix by adding a redundant-looking production for this case.

This is a fine example of why fixing shift/reduce conflicts via
precedence declarations is more dangerous than it looks: you can easily
cause the parser to reject cases that should work.

This has been wrong since commit 3db4056e22
or maybe before, and apparently some people have been working around it
by inserting no-op casts.  That method introduces a dump/reload hazard,
as illustrated in bug #7838 from Jan Mate.  Hence, back-patch to all
active branches.
2013-01-30 14:17:48 -05:00
Peter Eisentraut
574f764321 pg_regress: Allow overriding diff options
By setting the environment variable PG_REGRESS_DIFF_OPTS, custom diff
options can be passed.

reviewed by Jeevan Chalke
2013-01-29 22:59:45 -05:00
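
For illustration, a hypothetical override that makes regression diffs unified:

    PG_REGRESS_DIFF_OPTS='-u' make check
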
Peter Eisentraut
5bb2ddc0af entab: Fix some compiler warnings 2013-01-29 22:21:21 -05:00
Tom Lane
991f3e5ab3 Provide database object names as separate fields in error messages.
This patch addresses the problem that applications currently have to
extract object names from possibly-localized textual error messages,
if they want to know for example which index caused a UNIQUE_VIOLATION
failure.  It adds new error message fields to the wire protocol, which
can carry the name of a table, table column, data type, or constraint
associated with the error.  (Since the protocol spec has always instructed
clients to ignore unrecognized field types, this should not create any
compatibility problem.)

Support for providing these new fields has been added to just a limited set
of error reports (mainly, those in the "integrity constraint violation"
SQLSTATE class), but we will doubtless add them to more calls in future.

Pavel Stehule, reviewed and extensively revised by Peter Geoghegan, with
additional hacking by Tom Lane.
2013-01-29 17:08:26 -05:00
Heikki Linnakangas
c9d7dbacd3 Skip truncating ON COMMIT DELETE ROWS temp tables, if the transaction hasn't
touched any temporary tables.

We could try harder, and keep track of whether we've inserted to any temp
tables, rather than accessed them, and which temp tables have been inserted
to. But this is dead simple, and already covers many interesting scenarios.
2013-01-29 10:43:33 +02:00
Simon Riggs
fd4ced5230 Fast promote mode skips checkpoint at end of recovery.
pg_ctl promote -m fast will skip the checkpoint at end of recovery so that we
can achieve very fast failover when the apply delay is low. Write new WAL record
XLOG_END_OF_RECOVERY to allow us to switch timeline correctly for downstream log
readers. If we skip synchronous end of recovery checkpoint we request a normal
spread checkpoint so that the window of re-recovery is low.

Simon Riggs and Kyotaro Horiguchi, with input from Fujii Masao.
Review by Heikki Linnakangas
2013-01-29 00:06:15 +00:00
Alvaro Herrera
ee22c55f5a REASSIGN OWNED: handle shared objects, too
Give away ownership of shared objects (databases, tablespaces) along
with local objects, per original code intention.  Try to make the
documentation clearer, too.

Per discussion about DROP OWNED's brokenness, in bug #7748.

This is not backpatched because it'd require some refactoring of the
ALTER/SET OWNER code for databases and tablespaces.
2013-01-28 18:45:50 -03:00
Alvaro Herrera
ec41b8edc1 DROP OWNED: don't try to drop tablespaces/databases
My "fix" for bugs #7578 and #6116 on DROP OWNED at fe3b5eb08a not only
misstated that it applied to REASSIGN OWNED (which it did not affect),
but it also failed to fix the problems fully, because I didn't test the
case of owned shared objects.  Thus I created a new bug, reported by
Thomas Kellerer as #7748, which would cause DROP OWNED to fail with a
not-for-user-consumption error message.  The code would attempt to drop
the database, which not only fails to work because the underlying code
does not support that, but is a pretty dangerous and undesirable thing
to be doing as well.

This patch fixes that bug by having DROP OWNED only attempt to process
shared objects when grants on them are found, ignoring ownership.

Backpatch to 8.3, which is as far as the previous bug was backpatched.
2013-01-28 18:40:51 -03:00
Heikki Linnakangas
316186f289 Handle SPIErrors raised directly in PL/Python code.
If a PL/Python function raises an SPIError (or one of its subclasses)
directly with python's raise statement, treat it the same as an SPIError
generated internally. In particular, if the user sets the sqlstate
attribute, preserve that.

Oskari Saarenmaa and Jan Urbański, reviewed by Karl O. Pinc.
2013-01-28 09:46:23 +02:00
Michael Meskes
96bb29dc44 Made ecpglib use translated messages.
Bug reported and fixed by Chen Huajun <chenhj@cn.fujitsu.com>.
2013-01-27 13:48:12 +01:00
Tom Lane
2378d79ab2 Make LATERAL implicit for functions in FROM.
The SQL standard does not have general functions-in-FROM, but it does
allow UNNEST() there (see the <collection derived table> production),
and the semantics of that are defined to include lateral references.
So spec compliance requires allowing lateral references within UNNEST()
even without an explicit LATERAL keyword.  Rather than making UNNEST()
a special case, it seems best to extend this flexibility to any
function-in-FROM.  We'll still allow LATERAL to be written explicitly
for clarity's sake, but it's now a noise word in this context.

In theory this change could result in a change in behavior of existing
queries, by allowing what had been an outer reference in a function-in-FROM
to be captured by an earlier FROM-item at the same level.  However, all
pre-9.3 PG releases have a bug that causes them to match variable
references to earlier FROM-items in preference to outer references (and
then throw an error).  So no previously-working query could contain the
type of ambiguity that would risk a change of behavior.

Per a suggestion from Andrew Gierth, though I didn't use his patch.
2013-01-26 16:18:42 -05:00
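
For illustration, assuming a hypothetical table with an array column, the lateral reference now works with or without the keyword:

    SELECT t.id, x.tag
    FROM articles t, unnest(t.tags) AS x(tag);
    -- same as: FROM articles t, LATERAL unnest(t.tags) AS x(tag)
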
Bruce Momjian
8865fe0ad3 Update comments in new DROP IF EXISTS code; commit message update
The handling of DROP IF EXISTS with a missing schema in commit
7e2322dff3 applies not only to tables, but
also to DROP IF EXISTS for indexes, views, sequences, and foreign tables
with missing schemas.  Yeah!
2013-01-26 14:51:59 -05:00
Bruce Momjian
51cfb87ae2 Update LookupExplicitNamespace() comments; commit message update
Also, commit 7e2322dff3 affected DROP
TABLE IF EXISTS, not CREATE TABLE IF EXISTS.
2013-01-26 13:47:50 -05:00
Bruce Momjian
4deb57de7d Issue ERROR if FREEZE mode can't be honored by COPY
Previously, a FREEZE mode that could not be honored was simply ignored.  This
also issues an appropriate error message based on the cause of the failure,
per suggestion from Tom.  An additional regression test case is added.
2013-01-26 13:33:24 -05:00
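
For illustration, with a hypothetical table and data file, FREEZE is honored only when the table was created (or truncated) in the current subtransaction; otherwise COPY now raises an ERROR instead of silently ignoring the option:

    BEGIN;
    CREATE TABLE copy_target (id int, val text);
    COPY copy_target FROM '/tmp/data.csv' WITH (FORMAT csv, FREEZE);
    COMMIT;
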
Bruce Momjian
7e2322dff3 Allow CREATE TABLE IF EXISTS to succeed if the schema is nonexistent
Previously, CREATE TABLE IF EXISTS threw an error if the schema was
nonexistent.  This is fixed by passing 'missing_ok' to the function that
looks up the schema OID.
2013-01-26 13:24:50 -05:00
Tom Lane
08be00fabe Fix plpython's handling of functions used as triggers on multiple tables.
plpython tried to use a single cache entry for a trigger function, but it
needs a separate cache entry for each table the trigger is applied to,
because there is table-dependent data in there.  This was done correctly
before 9.1, but commit 46211da1b8 broke it
by simplifying the lookup key from "function OID and triggered table OID"
to "function OID and is-trigger boolean".  Go back to using both OIDs
as the lookup key.  Per bug report from Sandro Santilli.

Andres Freund
2013-01-25 16:59:36 -05:00
Tom Lane
0d5fbdc157 Change plan caching to honor, not resist, changes in search_path.
In the initial implementation of plan caching, we saved the active
search_path when a plan was first cached, then reinstalled that path
anytime we needed to reparse or replan.  The idea of that was to try to
reselect the same referenced objects, in somewhat the same way that views
continue to refer to the same objects in the face of schema or name
changes.  Of course, that analogy doesn't bear close inspection, since
holding the search_path fixed doesn't cope with object drops or renames.
Moreover sticking with the old path seems to create more surprises than
it avoids.  So instead of doing that, consider that the cached plan depends
on search_path, and force reparse/replan if the active search_path is
different than it was when we last saved the plan.

This gets us fairly close to having "transparency" of plan caching, in the
sense that the cached statement acts the same as if you'd just resubmitted
the original query text for another execution.  There are still some corner
cases where this fails though: a new object added in the search path
schema(s) might capture a reference in the query text, but we'd not realize
that and force a reparse.  We might try to fix that in the future, but for
the moment it looks too expensive and complicated.
2013-01-25 14:14:41 -05:00
Robert Haas
a37e83c0a9 Make it easy to time out pg_isready, and make the default 3 seconds.
Along the way, add a missing line to the help message.

Phil Sorber, reviewed by Fujii Masao
2013-01-25 12:03:37 -05:00
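
For illustration, checking a hypothetical host with an explicit timeout:

    pg_isready -h db.example.com -p 5432 -t 3
    echo $?   # exit status reflects whether the server is accepting connections
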
Heikki Linnakangas
8936867627 Add prosecdef to \df+ output.
Jon Erdman, reviewed by Phil Sorber and Stephen Frost.
2013-01-25 17:22:26 +02:00
Heikki Linnakangas
ba1cc6501e Add some randomness to the choice of which GiST page to insert to.
When descending the tree for an insert, and there are multiple equally good
pages we could insert to, make the choice at random. Previously, we would
always choose the tuple with lowest offset number. That meant that when two
non-leaf pages overlap - in the extreme case they might have exactly the same
key - all but the first such page went unused. That wasn't optimal for space
usage; if you deleted some tuples from the non-first pages, the space would
never be reused.

With this patch, the other pages are sometimes chosen too, although there's
still a heavy bias towards low-offset tuples, so that we don't lose cache
locality when doing a lot of inserts with similar keys.

Original idea by Alexander Korotkov, although this patch version was written
by me and copy-edited by Tom Lane.
2013-01-25 16:58:38 +02:00
Magnus Hagander
be926474be Make pg_dump exclude unlogged table data on hot standby slaves
Noted by Joe Van Dyk
2013-01-25 09:46:07 +01:00
Tom Lane
760f3c043a Fix concat() and format() to handle VARIADIC-labeled arguments correctly.
Previously, the VARIADIC labeling was effectively ignored, but now these
functions act as though the array elements had all been given as separate
arguments.

Pavel Stehule
2013-01-25 00:19:56 -05:00
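
For illustration:

    SELECT concat(VARIADIC ARRAY['a', 'b', 'c']);      -- 'abc', as if concat('a', 'b', 'c')
    SELECT format('%s-%s', VARIADIC ARRAY['x', 'y']);  -- 'x-y'
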
Tom Lane
2ddc600f8f Fix SPI documentation for new handling of ExecutorRun's count parameter.
Since 9.0, the count parameter has only limited the number of tuples
actually returned by the executor.  It doesn't affect the behavior of
INSERT/UPDATE/DELETE unless RETURNING is specified, because without
RETURNING, the ModifyTable plan node doesn't return control to execMain.c
for each tuple.  And we only check the limit at the top level.

While this behavioral change was unintentional at the time, discussion of
bug #6572 led us to the conclusion that we prefer the new behavior anyway,
and so we should just adjust the docs to match rather than change the code.
Accordingly, do that.  Back-patch as far as 9.0 so that the docs match the
code in each branch.
2013-01-24 18:34:00 -05:00
Andrew Dunstan
1068771abf Use correct output device for Windows prompts.
This ensures that mapping of non-ascii prompts
to the correct code page occurs.

Bug report and original patch from Alexander Law,
reviewed and reworked by Noah Misch.

Backpatch to all live branches.
2013-01-24 16:01:31 -05:00
Alvaro Herrera
74ebba84ae Redefine HEAP_XMAX_IS_LOCKED_ONLY
Tuples marked SELECT FOR UPDATE in a cluster that's later processed by
pg_upgrade would have a different infomask bit pattern than those
produced by 9.3dev; that bit pattern was being seen as "dead" by HEAD
(because they would fail the "is this tuple locked" test, and so the
visibility rules would think they're updated, even though there's no
HEAP_UPDATED version of them).  In other words, some rows could silently
disappear after pg_upgrade.

With this new definition, those tuples become visible again.

This is breakage resulting from my commit 0ac5ad5134.
2013-01-24 16:10:02 -03:00
Alvaro Herrera
6772c1e542 Make output identical to pg_resetxlog's 2013-01-24 11:55:10 -03:00
Simon Riggs
5c54f63fd6 Fix rare missing cancellations in Hot Standby.
The machinery around XLOG_HEAP2_CLEANUP_INFO failed
to correctly pass through the necessary information
on latestRemovedXid, avoiding cancellations in some
infrequent concurrent update/cleanup scenarios.

Backpatchable fix to 9.0

Detailed bug report and fix by Noah Misch,
backpatchable version by me.
2013-01-24 14:19:29 +00:00
Heikki Linnakangas
168d315703 Also fix rotation of csvlog on Windows.
Backpatch to 9.2, like the previous fix.
2013-01-24 11:41:30 +02:00
Tom Lane
8556869f2f Fix failure to rotate postmaster log file for size reasons on Windows.
When we eliminated "unnecessary" wakeups of the syslogger process, we
broke size-based logfile rotation on Windows, because on that platform
data transfer is done in a separate thread.  While non-Windows platforms
would recheck the output file size after every log message, Windows only
did so when the control thread woke up for some other reason, which might
be quite infrequent.  Per bug #7814 from Tsunezumi.  Back-patch to 9.2
where the problem was introduced.

Jeff Janes
2013-01-23 22:08:01 -05:00
Alvaro Herrera
ca5db759b8 isolationtester: add a few fflush(stderr) calls
The lack of them is causing failures in some BF members.

Per Andrew Dunstan.
2013-01-23 13:30:14 -03:00
Robert Haas
ac2e967362 pg_isready
New command-line utility to test whether a server is ready to
accept connections.

Phil Sorber, reviewed by Michael Paquier and Peter Eisentraut
2013-01-23 11:01:20 -05:00
Alvaro Herrera
0ac5ad5134 Improve concurrency of foreign key locking
This patch introduces two additional lock modes for tuples: "SELECT FOR
KEY SHARE" and "SELECT FOR NO KEY UPDATE".  These don't block each
other, in contrast with already existing "SELECT FOR SHARE" and "SELECT
FOR UPDATE".  UPDATE commands that do not modify the values stored in
the columns that are part of the key of the tuple now grab a SELECT FOR
NO KEY UPDATE lock on the tuple, allowing them to proceed concurrently
with tuple locks of the FOR KEY SHARE variety.

Foreign key triggers now use FOR KEY SHARE instead of FOR SHARE; this
means the concurrency improvement applies to them, which is the whole
point of this patch.

The added tuple lock semantics require some rejiggering of the multixact
module, so that the locking level that each transaction is holding can
be stored alongside its Xid.  Also, multixacts now need to persist
across server restarts and crashes, because they can now represent not
only tuple locks, but also tuple updates.  This means we need more
careful tracking of lifetime of pg_multixact SLRU files; since they now
persist longer, we require more infrastructure to figure out when they
can be removed.  pg_upgrade also needs to be careful to copy
pg_multixact files over from the old server to the new, or at least part
of multixact.c state, depending on the versions of the old and new
servers.

Tuple time qualification rules (HeapTupleSatisfies routines) need to be
careful not to consider tuples with the "is multi" infomask bit set as
being only locked; they might need to look up MultiXact values (i.e.
possibly do pg_multixact I/O) to find out the Xid that updated a tuple,
whereas they previously were assured to only use information readily
available from the tuple header.  This is considered acceptable, because
the extra I/O would involve cases that would previously cause some
commands to block waiting for concurrent transactions to finish.

Another important change is the fact that locking tuples that have
previously been updated causes the future versions to be marked as
locked, too; this is essential for correctness of foreign key checks.
This causes additional WAL-logging, also (there was previously a single
WAL record for a locked tuple; now there are as many records as there are
updated copies of the tuple.)

With all this in place, contention related to tuples being checked by
foreign key rules should be much reduced.

As a bonus, the old behavior that a subtransaction grabbing a stronger
tuple lock than the parent (sub)transaction held on a given tuple and
later aborting caused the weaker lock to be lost, has been fixed.

Many new spec files were added for isolation tester framework, to ensure
overall behavior is sane.  There's probably room for several more tests.

There were several reviewers of this patch; in particular, Noah Misch
and Andres Freund spent considerable time in it.  Original idea for the
patch came from Simon Riggs, after a problem report by Joel Jacobson.
Most code is from me, with contributions from Marti Raudsepp, Alexander
Shulgin, Noah Misch and Andres Freund.

This patch was discussed in several pgsql-hackers threads; the most
important start at the following message-ids:
	AANLkTimo9XVcEzfiBR-ut3KVNDkjm2Vxh+t8kAmWjPuv@mail.gmail.com
	1290721684-sup-3951@alvh.no-ip.org
	1294953201-sup-2099@alvh.no-ip.org
	1320343602-sup-2290@alvh.no-ip.org
	1339690386-sup-8927@alvh.no-ip.org
	4FE5FF020200002500048A3D@gw.wicourts.gov
	4FEAB90A0200002500048B7D@gw.wicourts.gov
2013-01-23 12:04:59 -03:00
Robert Haas
601e2935e2 Update comments and output for event_trigger regression test. 2013-01-23 06:49:30 -05:00
Heikki Linnakangas
52906f175a Implement pg_unreachable() on MSVC. 2013-01-23 12:53:55 +02:00
Heikki Linnakangas
990fe3c4ed Fix more issues with cascading replication and timeline switches.
When a standby server follows the master using WAL archive, and it chooses
a new timeline (recovery_target_timeline='latest'), it only fetches the
timeline history file for the chosen target timeline, not any other history
files that might be missing from pg_xlog. For example, if the current
timeline is 2, and we choose 4 as the new recovery target timeline, the
history file for timeline 3 is not fetched, even if it's part of this
server's history. That's enough for the standby itself - the history file
for timeline 4 includes timeline 3 as well - but if a cascading standby
server wants to recover to timeline 3, it needs the history file. To fix,
when a new recovery target timeline is chosen, try to copy any missing
history files from the archive to pg_xlog between the old and new target
timeline.

A second similar issue was with the WAL files. When a standby recovers from
archive, and it reaches a segment that contains a switch to a new timeline,
recovery fetches only the WAL file labelled with the new timeline's ID. The
file from the new timeline contains a copy of the WAL from the old timeline
up to the point where the switch happened, and recovery recovers it from the
new file. But in streaming replication, walsender only tries to read it
from the old timeline's file. To fix, change walsender to read it from the
new file, so that it behaves the same as recovery in that sense, and doesn't
try to open the possibly nonexistent file with the old timeline's ID.
2013-01-23 10:19:20 +02:00
Robert Haas
ddef9a0028 Fix a few small bugs in yesterday's event trigger patch.
Dimitri Fontaine
2013-01-22 21:37:01 -05:00
Tom Lane
75b39e7909 Add infrastructure for storing a VARIADIC ANY function's VARIADIC flag.
Originally we didn't bother to mark FuncExprs with any indication whether
VARIADIC had been given in the source text, because there didn't seem to be
any need for it at runtime.  However, because we cannot fold a VARIADIC ANY
function's arguments into an array (since they're not necessarily all the
same type), we do actually need that information at runtime if VARIADIC ANY
functions are to respond unsurprisingly to use of the VARIADIC keyword.
Add the missing field, and also fix ruleutils.c so that VARIADIC ANY
function calls are dumped properly.

Extracted from a larger patch that also fixes concat() and format() (the
only two extant VARIADIC ANY functions) to behave properly when VARIADIC is
specified.  This portion seems appropriate to review and commit separately.

Pavel Stehule
2013-01-21 20:26:15 -05:00
Robert Haas
841a5150c5 Add ddl_command_end support for event triggers.
Dimitri Fontaine, with slight changes by me
2013-01-21 18:00:24 -05:00
Alvaro Herrera
765cbfdc92 Refactor ALTER some-obj RENAME implementation
Remove duplicate implementations of catalog munging and miscellaneous
privilege checks.  Instead rely on already existing data in
objectaddress.c to do the work.

Author: KaiGai Kohei, changes by me
Reviewed by: Robert Haas, Álvaro Herrera, Dimitri Fontaine
2013-01-21 12:06:41 -03:00
Tom Lane
8f0d8f481e Fix one-byte buffer overrun in PQprintTuples().
This bug goes back to the original Postgres95 sources.  Its significance
to modern PG versions is marginal, since we have not used PQprintTuples()
internally in a very long time, and it doesn't seem to have ever been
documented either.  Still, it *is* exposed to client apps, so somebody
out there might possibly be using it.

Xi Wang
2013-01-20 23:43:46 -05:00
Tom Lane
535e69a43f Fix error-checking typo in check_TSCurrentConfig().
The code failed to detect an out-of-memory failure.

Xi Wang
2013-01-20 23:09:35 -05:00
Tom Lane
d5b31cc32b Fix an O(N^2) performance issue for sessions modifying many relations.
AtEOXact_RelationCache() scanned the entire relation cache at the end of
any transaction that created a new relation or assigned a new relfilenode.
Thus, clients such as pg_restore had an O(N^2) performance problem that
would start to be noticeable after creating 10000 or so tables.  Since
typically only a small number of relcache entries need any cleanup, we
can fix this by keeping a small list of their OIDs and doing hash_searches
for them.  We fall back to the full-table scan if the list overflows.

Ideally, the maximum list length would be set at the point where N
hash_searches would cost just less than the full-table scan.  Some quick
experimentation says that point might be around 50-100; I (tgl)
conservatively set MAX_EOXACT_LIST = 32.  For the case that we're worried
about here, which is short single-statement transactions, it's unlikely
there would ever be more than about a dozen list entries anyway; so it's
probably not worth being too tense about the value.

We could avoid the hash_searches by instead keeping the target relcache
entries linked into a list, but that would be noticeably more complicated
and bug-prone because of the need to maintain such a list in the face of
relcache entry drops.  Since a relcache entry can only need such cleanup
after a somewhat-heavyweight filesystem operation, trying to save a
hash_search per cleanup doesn't seem very useful anyway --- it's the scan
over all the not-needing-cleanup entries that we wish to avoid here.

Jeff Janes, reviewed and tweaked a bit by Tom Lane
2013-01-20 13:45:10 -05:00
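A minimal standalone sketch of the bounded-list-with-overflow idea described in this commit; the function names, the Oid typedef, and the MAX_EOXACT_LIST value here are illustrative stand-ins, not the actual relcache code:

    #include <stdbool.h>
    #include <stdio.h>

    #define MAX_EOXACT_LIST 32

    typedef unsigned int Oid;

    static Oid  eoxact_list[MAX_EOXACT_LIST];
    static int  eoxact_list_len = 0;
    static bool eoxact_list_overflowed = false;

    /* Remember one relation needing end-of-transaction cleanup. */
    static void
    remember_for_cleanup(Oid relid)
    {
        if (eoxact_list_len < MAX_EOXACT_LIST)
            eoxact_list[eoxact_list_len++] = relid;
        else
            eoxact_list_overflowed = true;   /* too many: full scan later */
    }

    /* At end of transaction, clean up only what was remembered,
     * unless the list overflowed. */
    static void
    at_eoxact_cleanup(void (*per_oid)(Oid), void (*full_scan)(void))
    {
        if (eoxact_list_overflowed)
            full_scan();                     /* rare fallback: scan everything */
        else
            for (int i = 0; i < eoxact_list_len; i++)
                per_oid(eoxact_list[i]);     /* common case: a few lookups */

        eoxact_list_len = 0;
        eoxact_list_overflowed = false;
    }

    static void clean_one(Oid relid) { printf("clean up %u\n", relid); }
    static void scan_everything(void) { printf("full cache scan\n"); }

    int
    main(void)
    {
        remember_for_cleanup(16384);
        remember_for_cleanup(16385);
        at_eoxact_cleanup(clean_one, scan_everything);
        return 0;
    }

The point is only the shape of the fallback: targeted lookups in the common case, one full scan when the bookkeeping gives up.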
Tom Lane
26d905a12d Use SET TRANSACTION READ ONLY in pg_dump, if server supports it.
This currently does little except serve as documentation.  (The one case
where it has a performance benefit, SERIALIZABLE mode in 9.1 and up, was
already using READ ONLY mode.)  However, it's possible that it might have
performance benefits in future, and in any case it seems like good
practice since it would catch any accidentally non-read-only operations.

Pavan Deolasee
2013-01-19 17:56:40 -05:00
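As a hedged illustration (not pg_dump's code), any libpq client can request the same protection for itself; the connection string below is a placeholder:

    #include <stdio.h>
    #include <libpq-fe.h>

    int
    main(void)
    {
        PGconn *conn = PQconnectdb("dbname=postgres");  /* placeholder */

        if (PQstatus(conn) != CONNECTION_OK)
        {
            fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
            PQfinish(conn);
            return 1;
        }

        const char *setup[] = {
            "BEGIN",
            "SET TRANSACTION ISOLATION LEVEL REPEATABLE READ, READ ONLY",
        };

        for (int i = 0; i < 2; i++)
        {
            PGresult *res = PQexec(conn, setup[i]);

            if (PQresultStatus(res) != PGRES_COMMAND_OK)
                fprintf(stderr, "%s failed: %s", setup[i], PQerrorMessage(conn));
            PQclear(res);
        }

        /* ... run read-only queries here; any accidental write now errors out ... */

        PQclear(PQexec(conn, "COMMIT"));
        PQfinish(conn);
        return 0;
    }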
Tom Lane
4b94cfb564 Modernize string literal syntax in tutorial example.
Un-double the backslashes in the LIKE patterns, since
standard_conforming_strings is now the default.  Just to be sure, include
a command to set standard_conforming_strings to ON in the example.

Back-patch to 9.1, where standard_conforming_strings became the default.

Josh Kupershmidt, reviewed by Jeff Janes
2013-01-19 17:20:32 -05:00
Andrew Dunstan
9f10f7dc57 Make pgxs build executables with the right suffix.
Complaint and patch from Zoltán Böszörményi.

When cross-compiling, the native make doesn't know
about the Windows .exe suffix, so it only builds with
it when explicitly told to do so.

The native make will not see the link between the target
name and the built executable, and might thus do unnecessary
work, but that's a bigger problem than this one, if in fact
we consider it a problem at all.

Back-patch to all live branches.
2013-01-19 14:54:29 -05:00
Tom Lane
c2a14bc7c9 Protect against SnapshotNow race conditions in pg_tablespace scans.
Use of SnapshotNow is known to expose us to race conditions if the tuple(s)
being sought could be updated by concurrently-committing transactions.
CREATE DATABASE and DROP DATABASE are particularly exposed because they do
heavyweight filesystem operations during their scans of pg_tablespace,
so that the scans run for a very long time compared to most.  Furthermore,
the potential consequences of a missed or twice-visited row are nastier
than average:

* createdb() could fail with a bogus "file already exists" error, or
  silently fail to copy one or more tablespace's worth of files into the
  new database.

* remove_dbtablespaces() could miss one or more tablespaces, thus failing
  to free filesystem space for the dropped database.

* check_db_file_conflict() could likewise miss a tablespace, leading to an
  OID conflict that could result in data loss either immediately or in
  future operations.  (This seems of very low probability, though, since a
  duplicate database OID would be unlikely to start with.)

Hence, it seems worth fixing these three places to use MVCC snapshots, even
though this will someday be superseded by a generic solution to SnapshotNow
race conditions.

Back-patch to all active branches.

Stephen Frost and Tom Lane
2013-01-18 18:06:20 -05:00
Bruce Momjian
530bbfac57 Rename new latex longtable function name, for consistency 2013-01-18 14:02:58 -05:00
Robert Haas
d8c3896626 Unbreak lock conflict detection for Hot Standby.
This got broken in the original fast-path locking patch, because
I failed to account for the fact that Hot Standby startup process
might take a strong relation lock on a relation in a database to
which it is not bound, and confused MyDatabaseId with the database
ID of the relation being locked.

Report and diagnosis by Andres Freund.  Final form of patch by me.
2013-01-18 11:52:28 -05:00
Alvaro Herrera
8c17144c75 Fix off-by-one bug in xlog reading logic
Bug reported by Michael Paquier

Author: Andres Freund
2013-01-18 11:19:53 -03:00
Bruce Momjian
74a82bafe4 psql latex fixes
Remove extra line at bottom of table for new 'latex' mode border=3.
Also update 'latex'-longtable 'tableattr' docs to say
'whitespace-separated' instead of 'space'.
2013-01-18 08:30:31 -05:00
Heikki Linnakangas
6f7cddc7ae Now that START_REPLICATION returns the next timeline's ID after reaching end
of timeline, take advantage of that in walreceiver.

The startup process is still in control of choosing the target timeline, by
scanning the timeline history files present in pg_xlog, but walreceiver now
uses the next timeline's ID to fetch its history file immediately after it
has finished streaming the old timeline. Before, the standby would first try
to restart streaming on the old timeline, which fetches the missing timeline
history file as a side-effect, and only then restart from the new timeline.
This patch eliminates the extra iteration, which speeds up the timeline
switch and reduces the noise in the log caused by the extra restart on the
old timeline.
2013-01-18 11:59:34 +02:00
Heikki Linnakangas
2ff6555313 Use the right timeline when beginning to stream from master.
The xlogreader refactoring broke the logic to decide which timeline to start
streaming from. XLogPageRead() uses the timeline history to check which
timeline the requested WAL position falls into. However, after the
refactoring, XLogPageRead() is always first called with the first page in
the segment, to verify the segment header, and only then with the actual WAL
position we're interested in. That first read of the segment's header made
XLogPageRead() always start streaming from the old timeline containing
the segment header, not the timeline containing the actual record, if there
was a timeline switch within the segment.

I thought I fixed this yesterday, but that fix was too narrow and only fixed
this for the corner-case that the timeline switch happened in the first page
of the segment. To fix this more robustly, pass explicitly the position of
the record we're actually interested in to XLogPageRead, and use that to
decide which timeline to read from, rather than deduce it from the page and
offset.

Per report from Fujii Masao.
2013-01-18 11:46:49 +02:00
Heikki Linnakangas
88228e6f1d When xlogreader asks the callback function to read a page, make sure we
get a large enough part of the page to include the beginning of the next
record we're interested in. The XLogPageRead callback uses the requested
length to decide which timeline to stream WAL from, and if the first call
is short, and the page contains a timeline switch, we'll repeatedly try
to stream that page from the old timeline, and never get across the
timeline switch.
2013-01-17 23:46:33 +02:00
Heikki Linnakangas
3684a534ef I added a result set to START_STREAMING command, but neglected walreceiver.
The patch to allow pg_receivexlog to switch timeline added a result set
after copy has ended in START_STREAMING command, to return the next
timeline's ID to the client. But walreceiver didn't get the memo, and threw
an error on the unexpected result set. Fix.
2013-01-17 23:45:45 +02:00
Alvaro Herrera
279628a0a7 Accelerate end-of-transaction dropping of relations
When relations are dropped, at end of transaction we need to remove the
files and clean the buffer pool of buffers containing pages of those
relations.  Previously we would scan the buffer pool once per relation
to clean up buffers.  When there are many relations to drop, the
repeated scans make this process slow; so we now instead pass a list of
relations to drop and scan the pool once, checking each buffer against
the passed list.  When the number of relations is larger than a
threshold (which as of this patch is being set to 20 relations) we sort
the array before starting, and bsearch the array; when it's smaller, we
simply scan the array linearly each time, because that's faster.  The
exact optimal threshold value depends on many factors, but the
difference is not likely to be significant enough to justify making it
user-settable.

This has been measured to be a significant win (a 15x win when dropping
100,000 relations; an extreme case, but reportedly a real one).

Author: Tomas Vondra, some tweaks by me
Reviewed by: Robert Haas, Shigeru Hanada, Andres Freund, Álvaro Herrera
2013-01-17 16:13:17 -03:00
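A self-contained sketch of the threshold idea, with an invented DROP_SORT_THRESHOLD and made-up OIDs; it only models the membership test, not the buffer manager itself:

    #include <stdbool.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define DROP_SORT_THRESHOLD 20

    typedef unsigned int Oid;

    static int
    oid_cmp(const void *a, const void *b)
    {
        Oid x = *(const Oid *) a;
        Oid y = *(const Oid *) b;

        return (x > y) - (x < y);
    }

    /* Is this buffer's relation in the list of relations being dropped? */
    static bool
    is_being_dropped(Oid relid, const Oid *drop_list, int ndrop)
    {
        if (ndrop >= DROP_SORT_THRESHOLD)
            return bsearch(&relid, drop_list, ndrop, sizeof(Oid), oid_cmp) != NULL;

        for (int i = 0; i < ndrop; i++)      /* small list: linear scan wins */
            if (drop_list[i] == relid)
                return true;
        return false;
    }

    int
    main(void)
    {
        Oid drops[] = {16390, 16385, 16402};
        int ndrop = 3;

        if (ndrop >= DROP_SORT_THRESHOLD)    /* large list: sort once up front */
            qsort(drops, ndrop, sizeof(Oid), oid_cmp);

        Oid buffer_rels[] = {16384, 16385, 16400, 16402};
        for (int i = 0; i < 4; i++)          /* one pass over the "buffer pool" */
            printf("buffer %d: %s\n", i,
                   is_being_dropped(buffer_rels[i], drops, ndrop) ? "drop" : "keep");
        return 0;
    }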
Heikki Linnakangas
0b6329130e Make pg_receivexlog and pg_basebackup -X stream work across timeline switches.
This mirrors the changes done earlier to the server in standby mode. When
receivelog reaches the end of a timeline, as reported by the server, it
fetches the timeline history file of the next timeline, and restarts
streaming from the new timeline by issuing a new START_STREAMING command.

When pg_receivexlog crosses a timeline, it leaves the .partial suffix on the
last segment on the old timeline. This helps you to tell apart a partial
segment left in the directory because of a timeline switch, and a completed
segment. If you just follow a single server, it won't make a difference, but
it can be significant in more complicated scenarios where new WAL is still
generated on the old timeline.

This includes two small changes to the streaming replication protocol:
First, when you reach the end of timeline while streaming, the server now
sends the TLI of the next timeline in the server's history to the client.
pg_receivexlog uses that as the next timeline, so that it doesn't need to
parse the timeline history file like a standby server does. Second, when
BASE_BACKUP command sends the begin and end WAL positions, it now also sends
the timeline IDs corresponding to the positions.
2013-01-17 20:23:00 +02:00
Tom Lane
8ae35e9180 Improve memory space management in tuplesort and tuplestore.
The code originally just doubled the size of the tuple-pointer array so
long as that would fit in allowedMem.  This could result in failing to use
as much as half of allowedMem, if (as is typical) the last doubling attempt
didn't quite fit.  Worse, we might double the array size but be unable to
use most of the added slots, because there was no room left within the
allowedMem limit for tuples the slots should point to.  To fix, double only
so long as we've used less than half of allowedMem in total.  Then do one
more array enlargement, but scale it based on total memory consumption so
far.  This will work nicely as long as the average tuple size is reasonably
stable, and in any case should be better than the old method.

This change will result in large sort operations consuming a larger
fraction of work_mem than they typically did in the past.  The release
notes should mention that users may want to revisit their work_mem
settings, if they'd tuned those settings based on the old behavior of
sorting.

Jeff Janes, reviewed by Peter Geoghegan and Robert Haas
2013-01-17 13:12:56 -05:00
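A rough standalone sketch of that growth rule, using invented numbers and a hypothetical next_array_size() helper rather than the tuplesort internals:

    #include <stdio.h>

    /* Double while under half the budget; otherwise do one final growth
     * scaled by average memory consumed per slot so far. */
    static int
    next_array_size(int current_slots, long mem_used, long allowed_mem)
    {
        if (mem_used < allowed_mem / 2)
            return current_slots * 2;              /* cheap doubling phase */

        long per_slot = mem_used / current_slots;
        long remaining = allowed_mem - mem_used;
        long extra = remaining / (per_slot > 0 ? per_slot : 1);

        return current_slots + (int) extra;
    }

    int
    main(void)
    {
        long allowed_mem = 4L * 1024 * 1024;       /* pretend work_mem = 4MB */
        int  slots = 1024;
        long mem_used = 512L * 1024;               /* tuples + pointers so far */

        printf("grow to %d slots\n", next_array_size(slots, mem_used, allowed_mem));
        return 0;
    }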
Heikki Linnakangas
1296d5c53c Fix a couple of error-handling bugs in the xlogreader patch.
XLogReadRecord should reset its state on every error, to make sure it
re-reads the page on next call. It was inconsistent in that some errors did
that, but some did not.

In ReadRecord(), don't give up on an error if we're in standby mode. The
loop was set up to retry, but the checks within the loop broke out of the
loop on any error.

Andres Freund, with some tweaking by me.
2013-01-17 19:27:04 +02:00
Bruce Momjian
b14f81bc9a Add a latex-longtable output format to psql
latex longtable is more powerful than the 'tabular' output format
'latex' uses.  Also add border=3 support to 'latex'.
2013-01-17 11:39:38 -05:00
Magnus Hagander
8ef6961685 Silence compiler warnings 2013-01-17 16:10:33 +01:00
Heikki Linnakangas
9ee4d06f3f Make GiST indexes on-disk compatible with 9.2 again.
The patch that turned XLogRecPtr into a uint64 inadvertently changed the
on-disk format of GiST indexes, because the NSN field in the GiST page
opaque is an XLogRecPtr. That breaks pg_upgrade. Revert the format of that
field back to the two-field struct that XLogRecPtr was before. This is the
same we did to LSNs in the page header to avoid changing on-disk format.

Bump catversion, as this invalidates any existing GiST indexes built on
9.3devel.
2013-01-17 16:46:16 +02:00
Magnus Hagander
bba486f372 Base the default SSL ciphers on DEFAULT instead of ALL
It's better to start from what the OpenSSL people consider a good
default and then remove insecure things (low encryption, exportable
encryption and md5 at this point) from that, instead of starting
from everything that exists and remove from that. We trust the
OpenSSL people to make good choices about what the default is.
2013-01-17 15:04:44 +01:00
Magnus Hagander
4eebf1309f Make size-output fixed length in pg_basebackup verbose mode
This way the line doesn't shift right as the amount of data processed
increases.
2013-01-17 14:43:33 +01:00
Magnus Hagander
d7e9ca7ff7 Truncate filenames at the leading end in pg_basebackup verbose output
When truncating at the end, like before, the output would often end up
just showing the path instead of the filename.

Also increase the length of the filename by 5, which still keeps us at
less than 80 characters in most outputs.
2013-01-17 14:38:49 +01:00
Magnus Hagander
f3af53441e Support multiple -t/--table arguments for more commands
On top of the previous support in pg_dump, add support to specify
multiple tables (by using the -t option multiple times) to
pg_restore, clusterdb, reindexdb and vacuumdb.

Josh Kupershmidt, reviewed by Karl O. Pinc
2013-01-17 11:24:47 +01:00
Peter Eisentraut
36bdfa52a0 Get rid of pg_dump's README
It was largely full of outdated and incorrect information.  Move the few
notes which were still relevant into header comments of pg_backup_tar.c
and pg_dumpall.c.

Josh Kupershmidt
2013-01-16 23:49:54 -05:00
Alvaro Herrera
7fcbf6a405 Split out XLog reading as an independent facility
This new facility can not only be used by xlog.c to carry out crash
recovery, but also by external programs.  By supplying a function to
read XLog pages from somewhere, all the WAL reading can be used for
completely different purposes.

For the standard backend use, the behavior should be pretty much the
same as previously.  As for non-backend programs, a hypothetical
pg_xlogdump program is now closer to reality, but some more backend
support is still necessary.

This patch was originally submitted by Andres Freund in a different
form, but Heikki Linnakangas opted for and authored another design of
the concept.  Andres has advanced the patch since Heikki's initial
version.  Review and some (mostly cosmetics) changes by me.
2013-01-16 16:12:53 -03:00
Heikki Linnakangas
8606dd8190 Make \? help message more clear when not connected.
On second thought, "none" could mislead one into thinking that you're
connected to a database with that name. Duplicate the whole string, so that
it can be more easily translated. In back-branches, though, just use an empty string
in place of the database name, to avoid adding a translatable string.
2013-01-15 22:23:14 +02:00
Heikki Linnakangas
b04ce529fd Don't pass NULL to fprintf, if not currently connected to a database.
Backpatch all the way to 8.3. Fixes bug #7811, per report and diagnosis by
Meng Qingzhong.
2013-01-15 19:23:47 +02:00
Alvaro Herrera
7ac5760fa2 Rework order of checks in ALTER / SET SCHEMA
When attempting to move an object into the schema in which it already
was, for most object classes we were correctly complaining about
exactly that ("object is already in schema"); but for some other object
classes, such as functions, we were instead complaining of a name
collision ("object already exists in schema").  The latter is wrong and
misleading, per complaint from Robert Haas in
CA+TgmoZ0+gNf7RDKRc3u5rHXffP=QjqPZKGxb4BsPz65k7qnHQ@mail.gmail.com

To fix, refactor the way these checks are done.  As a bonus, the
resulting code is smaller and can also share some code with Rename
cases.

While at it, remove use of getObjectDescriptionOids() in error messages.
These are normally disallowed because of translatability considerations,
but this one had slipped through since 9.1.  (Not sure that this is
worth backpatching, though, as it would create some untranslated
messages in back branches.)

This is loosely based on a patch by KaiGai Kohei, heavily reworked by
me.
2013-01-15 13:23:43 -03:00
Heikki Linnakangas
ffda05977a Give a proper error message if connecting to incompatible server.
The WAL streaming message format changed in 9.3, so 9.3 pg_basebackup or
pg_receivexlog won't work against older servers.
2013-01-15 15:45:52 +02:00
Tom Lane
1b794d3f32 Fix hash_update_hash_key() to handle same-bucket case correctly.
Original coding would corrupt the hashtable if the item being updated was
at the end of its bucket chain and the new hash key hashed to that same
bucket.  Diagnosis and fix by Heikki Linnakangas.
2013-01-14 21:57:15 -05:00
Heikki Linnakangas
3f4b1749a8 Return value of lseek() can be negative on failure.
Because the return value of lseek() was assigned to an unsigned size_t
variable, we'd fail to notice an error return code -1. Compiler gave a
warning about this.

Andres Freund
2013-01-15 00:42:37 +02:00
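A minimal example of the correct pattern: keep the result in a signed off_t so the -1 failure value is detectable (the path is a placeholder):

    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>

    int
    main(void)
    {
        int fd = open("/tmp/example.dat", O_RDONLY);    /* placeholder path */

        if (fd < 0)
            return 1;

        off_t where = lseek(fd, 0, SEEK_END);           /* not a size_t! */

        if (where < 0)
        {
            perror("lseek");                            /* failure is visible */
            close(fd);
            return 1;
        }

        printf("file is %lld bytes\n", (long long) where);
        close(fd);
        return 0;
    }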
Tom Lane
325c54b69c Fix obsolete SQL syntax in comment.
This was legal back in the days of add_missing_from, though perhaps
never good style.  It's not legal anymore ...

Jan Urbański
2013-01-14 15:48:12 -05:00
Tom Lane
5c4eb9166e Reject out-of-range dates in to_date().
Dates outside the supported range could be entered, but would not print
reasonably, and operations such as conversion to timestamp wouldn't behave
sanely either.  Since this has the potential to result in undumpable table
data, it seems worth back-patching.

Hitoshi Harada
2013-01-14 15:19:48 -05:00
Tom Lane
7127293a5d Add new timezone abbrevation "FET".
This seems to have been invented in 2011 to represent GMT+3, non daylight
savings rules, as now used in Europe/Kaliningrad and Europe/Minsk.
There are no conflicts so might as well add it to the Default list.
Per bug #7804 from Ruslan Izmaylov.
2013-01-14 14:45:40 -05:00
Alvaro Herrera
692079e5dc Remove spurious space
Andres Freund
2013-01-14 12:08:59 -03:00
Tom Lane
2065dd2834 Prevent very-low-probability PANIC during PREPARE TRANSACTION.
The code in PostPrepare_Locks supposed that it could reassign locks to
the prepared transaction's dummy PGPROC by deleting the PROCLOCK table
entries and immediately creating new ones.  This was safe when that code
was written, but since we invented partitioning of the shared lock table,
it's not safe --- another process could steal away the PROCLOCK entry in
the short interval when it's on the freelist.  Then, if we were otherwise
out of shared memory, PostPrepare_Locks would have to PANIC, since it's
too late to back out of the PREPARE at that point.

Fix by inventing a dynahash.c function to atomically update a hashtable
entry's key.  (This might possibly have other uses in future.)

This is an ancient bug that in principle we ought to back-patch, but the
odds of someone hitting it in the field seem really tiny, because (a) the
risk window is small, and (b) nobody runs servers with maxed-out lock
tables for long, because they'll be getting non-PANIC out-of-memory errors
anyway.  So fixing it in HEAD seems sufficient, at least until the new
code has gotten some testing.
2013-01-13 22:20:22 -05:00
Peter Eisentraut
9d2cd99a60 Make spelling more uniform 2013-01-13 21:42:03 -05:00
Tom Lane
24dd0502a1 Update comments for elog_start().
Forgot I was going to do this as part of the previous patch ...
2013-01-13 18:50:48 -05:00
Tom Lane
b853eb9718 Improve handling of ereport(ERROR) and elog(ERROR).
In commit 71450d7fd6, we added code to inform
suitably-intelligent compilers that ereport() doesn't return if the elevel
is ERROR or higher.  This patch extends that to elog(), and also fixes a
double-evaluation hazard that the previous commit created in ereport(),
as well as reducing the emitted code size.

The elog() improvement requires the compiler to support __VA_ARGS__, which
should be available in just about anything nowadays since it's required by
C99.  But our minimum language baseline is still C89, so add a configure
test for that.

The previous commit assumed that ereport's elevel could be evaluated twice,
which isn't terribly safe --- there are already counterexamples in xlog.c.
On compilers that have __builtin_constant_p, we can use that to protect the
second test, since there's no possible optimization gain if the compiler
doesn't know the value of elevel.  Otherwise, use a local variable inside
the macros to prevent double evaluation.  The local-variable solution is
inferior because (a) it leads to useless code being emitted when elevel
isn't constant, and (b) it increases the optimization level needed for the
compiler to recognize that subsequent code is unreachable.  But it seems
better than not teaching non-gcc compilers about unreachability at all.

Lastly, if the compiler has __builtin_unreachable(), we can use that
instead of abort(), resulting in a noticeable code savings since no
function call is actually emitted.  However, it seems wise to do this only
in non-assert builds.  In an assert build, continue to use abort(), so that
the behavior will be predictable and debuggable if the "impossible"
happens.

These changes involve making the ereport and elog macros emit do-while
statement blocks not just expressions, which forces small changes in
a few call sites.

Andres Freund, Tom Lane, Heikki Linnakangas
2013-01-13 18:40:09 -05:00
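A simplified, hedged sketch of the unreachability part of this technique; my_elog and my_unreachable are illustrative names, and the double-evaluation protection described above is deliberately omitted:

    #include <stdio.h>
    #include <stdlib.h>

    #define MY_ERROR 20

    #if defined(__GNUC__) && !defined(USE_ASSERT_CHECKING)
    #define my_unreachable() __builtin_unreachable()
    #else
    #define my_unreachable() abort()    /* predictable in assert builds */
    #endif

    static void
    report(int elevel, const char *msg)
    {
        fprintf(stderr, "%s\n", msg);
        if (elevel >= MY_ERROR)
            exit(1);                    /* fatal reports never return */
    }

    /* do-while statement block, plus a hint that fatal calls never return */
    #define my_elog(elevel, msg) \
        do { \
            report((elevel), (msg)); \
            if ((elevel) >= MY_ERROR) \
                my_unreachable(); \
        } while (0)

    static int
    parse_flag(const char *s)
    {
        if (s[0] == 'y')
            return 1;
        if (s[0] == 'n')
            return 0;
        my_elog(MY_ERROR, "unrecognized flag");
        return -1;    /* kept for portability; a compiler that understands
                       * the hint can prove this line is dead */
    }

    int
    main(void)
    {
        printf("%d\n", parse_flag("yes"));
        return 0;
    }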
Andrew Dunstan
4ae5ee6c9b Extend and improve use of EXTRA_REGRESS_OPTS.
This is now used by ecpg tests, and not clobbered by pg_upgrade
tests. This change won't affect anything that doesn't set this
environment variable, but will enable the buildfarm to control
exactly what port regression test installs will be running on,
and thus to detect possible rogue postmasters more easily.

Backpatch to release 9.2 where EXTRA_REGRESS_OPTS was first used.
2013-01-12 08:28:58 -05:00
Tom Lane
31f38f28b0 Redesign the planner's handling of index-descent cost estimation.
Historically we've used a couple of very ad-hoc fudge factors to try to
get the right results when indexes of different sizes would satisfy a
query with the same number of index leaf tuples being visited.  In
commit 21a39de580 I tweaked one of these
fudge factors, with results that proved disastrous for larger indexes.
Commit bf01e34b55 fudged it some more,
but still with not a lot of principle behind it.

What seems like a better way to address these issues is to explicitly model
index-descent costs, since that's what's really at stake when considering
different indexes with similar leaf-page-level costs.  We tried that once
long ago, and found that charging random_page_cost per page descended
through was way too much, because upper btree levels tend to stay in cache
in real-world workloads.  However, there's still CPU costs to think about,
and the previous fudge factors can be seen as a crude attempt to account
for those costs.  So this patch replaces those fudge factors with explicit
charges for the number of tuple comparisons needed to descend the index
tree, plus a small charge per page touched in the descent.  The cost
multipliers are chosen so that the resulting charges are in the vicinity of
the historical (pre-9.2) fudge factors for indexes of up to about a million
tuples, while not ballooning unreasonably beyond that, as the old fudge
factor did (even more so in 9.2).

To make this work accurately for btree indexes, add some code that allows
extraction of the known root-page height from a btree.  There's no
equivalent number readily available for other index types, but we can use
the log of the number of index pages as an approximate substitute.

This seems like too much of a behavioral change to risk back-patching,
but it should improve matters going forward.  In 9.2 I'll just revert
the fudge-factor change.
2013-01-11 12:56:58 -05:00
Andrew Dunstan
7fb97ecd13 Detect Windows perl linkage parameters in configure script.
This means we can now construct a configure test for the library
presence. Previously these parameters were only figured out at
build time in plperl's GnuMakefile.
2013-01-09 17:49:23 -05:00
Magnus Hagander
6e650a55cd Properly install ecpg_compat and pgtypes libraries on msvc
JiangGuiqing
2013-01-09 17:29:59 +01:00
Magnus Hagander
b5ed1376c6 Don't attempt to write recovery.conf when -R is not specified
Fixes segmentation fault during regular use.

Fujii Masao
2013-01-09 16:57:32 +01:00
Tom Lane
c00dc337b8 Fix potential corruption of lock table in CREATE/DROP INDEX CONCURRENTLY.
If VirtualXactLock() has to wait for a transaction that holds its VXID lock
as a fast-path lock, it must first convert the fast-path lock to a regular
lock.  It failed to take the required "partition" lock on the main
shared-memory lock table while doing so.  This is the direct cause of the
assert failure in GetLockStatusData() recently observed in the buildfarm,
but more worryingly it could result in arbitrary corruption of the shared
lock table if some other process were concurrently engaged in modifying the
same partition of the lock table.  Fortunately, VirtualXactLock() is only
used by CREATE INDEX CONCURRENTLY and DROP INDEX CONCURRENTLY, so the
opportunities for failure are fewer than they might have been.

In passing, improve some comments and be a bit more consistent about
order of operations.
2013-01-08 18:25:58 -05:00
Peter Eisentraut
f31d5baff6 Fix typo 2013-01-07 21:34:30 -05:00
Andrew Dunstan
74570db99c Fix a logic bug in pgindent. 2013-01-07 12:26:27 -05:00
Robert Haas
a0dc23f205 Fix incorrect error message when schema-CREATE permission is absent.
Report by me.  Fix by KaiGai Kohei.
2013-01-07 11:54:59 -05:00
Magnus Hagander
915a29a10c Add support for generating minimal recovery.conf when doing base backups
Adds commandline option -R to pg_basebackup that creates a recovery.conf which
enables standby mode using the same parameters that pg_basebackup used to
connect to the master, and writes it into the output directory (or injects it
in the tar file when tar format is used).

Zoltan Boszormenyi, modified by Magnus Hagander, reviewed by Amit Kapila & Fujii Masao
2013-01-05 16:54:06 +01:00
Magnus Hagander
940d136661 Centralize single quote escaping in src/port/quotes.c
For code-reuse in upcoming functionality in pg_basebackup.

Zoltan Boszormenyi
2013-01-05 15:40:19 +01:00
Peter Eisentraut
fc8745070a PL/Python: Make build on OS X more flexible
The PL/Python build on OS X was previously hardcoded to use the system
installation of Python, ignoring whatever was specified to configure.
Except that it would use the header files from configure, which could
lead to mismatches.  It was not possible to build against a custom
Python installation.

Now, we check in configure how the specified Python installation was
built and use that, supporting framework and non-framework builds.
2013-01-05 08:56:14 -05:00
Peter Eisentraut
7e938e3c56 Revert "PL/Python: Remove workaround for returning booleans in Python <2.3"
This reverts commit be0dfbad36.

The previous information that Py_RETURN_TRUE and Py_RETURN_FALSE are
supported in Python 2.3 is wrong.  They require Python 2.4.  Update the
comment about that.
2013-01-05 08:50:58 -05:00
Peter Eisentraut
49e7a26d67 Make some spelling more consistent 2013-01-05 08:25:21 -05:00
Tom Lane
94afbd5831 Invent a "one-shot" variant of CachedPlans for better performance.
SPI_execute() and related functions create a CachedPlan, execute it once,
and immediately discard it, so that the functionality offered by
plancache.c is of no value in this code path.  And performance measurements
show that the extra data copying and invalidation checking done by
plancache.c slows down simple queries by 10% or more compared to 9.1.
However, enough of the SPI code is shared with functions that do need plan
caching that it seems impractical to bypass plancache.c altogether.
Instead, let's invent a variant version of cached plans that preserves
99% of the API but doesn't offer any of the actual functionality, nor the
overhead.  This puts SPI_execute() performance back on par, or maybe even
slightly better, than it was before.  This change should resolve recent
complaints of performance degradation from Dong Ye, Pavel Stehule, and
others.

By avoiding data copying, this change also reduces the amount of memory
needed to execute many-statement SPI_execute() strings, as for instance in
a recent complaint from Tomas Vondra.

An additional benefit of this change is that multi-statement SPI_execute()
query strings are now processed fully serially, that is we complete
execution of earlier statements before running parse analysis and planning
on following ones.  This eliminates a long-standing POLA violation, in that
DDL that affects the behavior of a later statement will now behave as
expected.

Back-patch to 9.2, since this was a performance regression compared to 9.1.
(In 9.2, place the added struct fields so as to avoid changing the offsets
of existing fields.)

Heikki Linnakangas and Tom Lane
2013-01-04 17:42:19 -05:00
Heikki Linnakangas
b0daba57bb Tolerate timeline switches while "pg_basebackup -X fetch" is running.
If you take a base backup from a standby server with "pg_basebackup -X
fetch", and the timeline switches while the backup is being taken, the
backup used to fail with an error "requested WAL segment %s has already
been removed". This is because the server-side code that sends over the
required WAL files would not construct the WAL filename with the correct
timeline after a switch.

Fix that by using readdir() to scan pg_xlog for all the WAL segments in the
range, regardless of timeline.

Also, include all timeline history files in the backup, if taken with
"-X fetch". That fixes another related bug: If a timeline switch happened
just before the backup was initiated in a standby, the WAL segment
containing the initial checkpoint record contains WAL from the older
timeline too. Recovery will not accept that without a timeline history file
that lists the older timeline.

Backpatch to 9.2. Versions prior to that were not affected as you could not
take a base backup from a standby before 9.2.
2013-01-03 19:51:00 +02:00
Heikki Linnakangas
ee994272ca Delay reading timeline history file until it's fetched from master.
Streaming replication can fetch any missing timeline history files from the
master, but recovery would read the timeline history file for the target
timeline before reading the checkpoint record, and before walreceiver has
had a chance to fetch it from the master. Delay reading it, and the sanity
checks involving timeline history, until after reading the checkpoint
record.

There is at least one scenario where this makes a difference: if you take
a base backup from a standby server right after a timeline switch, the
WAL segment containing the initial checkpoint record will begin with an
older timeline ID. Without the timeline history file, recovering that file
will fail as the older timeline ID is not recognized to be an ancestor of
the target timeline. If you try to recover from such a backup, using only
streaming replication to fetch the WAL, this patch is required for that to
work.
2013-01-03 10:41:58 +02:00
Alvaro Herrera
84f6fb81b8 Fix IsUnderPostmaster/EXEC_BACKEND confusion 2013-01-02 18:39:20 -03:00
Alvaro Herrera
15658911d9 Set MaxBackends only on bootstrap and standalone modes
... not on auxiliary processes.  I managed to overlook the fact that I
had disabled assertions on my HEAD checkout long ago.

Hopefully this will turn the buildfarm green again, and put an end to
today's silliness.
2013-01-02 17:57:56 -03:00
Magnus Hagander
794397ae1d Move tar function headers to pgtar.h
This makes it possible to include them only where they are used, so
we can avoid the conflict of the uid_t and gid_t datatypes that happened
in plperl (since plperl doesn't need the tar functions)
2013-01-02 20:34:08 +01:00
Alvaro Herrera
dfbba2c86c Make sure MaxBackends is always set
Auxiliary and bootstrap processes weren't getting it, causing initdb to
fail completely.
2013-01-02 14:39:11 -03:00
Alvaro Herrera
cdbc0ca48c Fix background workers for EXEC_BACKEND
Commit da07a1e8 was broken for EXEC_BACKEND because I failed to realize
that the MaxBackends recomputation needed to be duplicated by
subprocesses in SubPostmasterMain.  However, instead of having the value
be recomputed at all, it's better to assign the correct value at
postmaster initialization time, and have it be propagated to exec'ed
backends via BackendParameters.

MaxBackends stays as zero until after modules in
shared_preload_libraries have had a chance to register bgworkers, since
the value is going to be untrustworthy till that's finished.

Heikki Linnakangas and Álvaro Herrera
2013-01-02 12:01:14 -03:00
Heikki Linnakangas
d194d7a526 Fix bug in streaming replication over multiple tli switches.
After receiving some WAL over streaming replication, try to open the file
from the timeline we're currently receiving, not recoveryTargetTLI. They
are usually the same, which is why this wasn't noticed before, but you'd get
an error if there has been more than one timeline switch between the
current point in WAL and the recovery target.
2013-01-02 14:35:15 +02:00
Heikki Linnakangas
4ffd589f44 Fix silly typo in code, which broke the check for reaching consistency. 2013-01-02 13:44:59 +02:00
Bruce Momjian
bd61a623ac Update copyrights for 2013
Fully update git head, and update back branches in ./COPYRIGHT and
legal.sgml files.
2013-01-01 17:15:01 -05:00
Magnus Hagander
b10e9ae9f3 Add new file to MSVC build system as well 2013-01-01 18:29:48 +01:00
Magnus Hagander
f5d4bdd3a5 Unify some tar functionality across different parts
Move some of the tar functionality that existed mostly duplicated
in both pg_dump and the walsender basebackup functionality into
port/tar.c instead, so it can be used from both. It will also be
used by pg_basebackup in the future, which would've caused a third
copy of it around.

Zoltan Boszormenyi and Magnus Hagander
2013-01-01 18:15:57 +01:00
Tom Lane
2ffa740be9 Fix ruleutils to cope with conflicts from adding/dropping/renaming columns.
In commit 11e131854f, we improved the
rule/view dumping code so that it would produce valid query representations
even if some of the tables involved in a query had been renamed since the
query was parsed.  This patch extends that idea to fix problems that occur
when individual columns are renamed, or added or dropped.  As before, the
core of the fix is to assign unique new aliases when a name conflict has
been created.  This is complicated by the JOIN USING feature, which
requires the same column alias to be used in both input relations, but we
can handle that with a sufficiently complex approach to assigning aliases.

A fortiori, this patch takes care of situations where the query didn't have
unique column names to begin with, such as in a recent complaint from Bryan
Nuse.  (Because of expansion of "SELECT *", re-parsing a dumped query can
require column name uniqueness even though the original text did not.)
2012-12-31 15:13:26 -05:00
Peter Eisentraut
0e690209ee Fix compiler warning about uninitialized variable 2012-12-31 00:13:40 -05:00
Heikki Linnakangas
60df192aea Keep timeline history files restored from archive in pg_xlog.
The cascading standby patch in 9.2 changed the way WAL files are treated
when restored from the archive. Before, they were restored under a temporary
filename, and not kept in pg_xlog, but after the patch, they were copied
under pg_xlog. This is necessary for a cascading standby to find them, but
it also means that if the archive goes offline and a standby is restarted,
it can recover back to where it was using the files in pg_xlog. It also
means that if you take an offline backup from a standby server, it includes
all the required WAL files in pg_xlog.

However, the same change was not made to timeline history files, so if the
WAL segment containing the checkpoint record contains a timeline switch, you
will still get an error if you try to restart recovery without the archive,
or recover from an offline backup taken from the standby.

With this patch, timeline history files restored from archive are copied
into pg_xlog like WAL files are, so that pg_xlog contains all the files
required to recover. This is a corner-case pre-existing issue in 9.2, but
even more important in master where it's possible for a standby to follow a
timeline switch through streaming replication. To make that possible, the
timeline history files must be present in pg_xlog.
2012-12-30 14:29:45 +02:00
Robert Haas
82b1b213ca Adjust more backend functions to return OID rather than void.
This is again intended to support extensions to the event trigger
functionality.  This may go a bit further than we need for that
purpose, but there's some value in being consistent, and the OID
may be useful for other purposes also.

Dimitri Fontaine
2012-12-29 07:55:37 -05:00
Alvaro Herrera
5ab3af46dd Remove obsolete XLogRecPtr macros
This gets rid of XLByteLT, XLByteLE, XLByteEQ and XLByteAdvance.
These were useful for brevity when XLogRecPtrs were split in
xlogid/xrecoff; but now that they are simple uint64's, they are just
clutter.  The only downside to making this change would be ease of
backporting patches, but that has been negated by other substantive
changes to the involved code anyway.  The clarity of simpler expressions
makes the change worthwhile.

Most of the changes are mechanical, but in a couple of places, the patch
author chose to invert the operator sense, making the code flow more
logical (and more in line with preceding comments).

Author: Andres Freund
Eyeballed by Dimitri Fontaine and Alvaro Herrera
2012-12-28 13:06:15 -03:00
Alvaro Herrera
24eca7977e Assign InvalidXLogRecPtr instead of MemSet(0)
For consistency.

Author: Andres Freund
2012-12-27 18:33:03 -03:00
Alvaro Herrera
eaa1f7220a Remove unused NextLogPage macro
Commit 061e7efb1b did away with its last caller, but neglected to remove
the actual definition.

Author: Andres Freund
2012-12-27 18:23:23 -03:00
Tom Lane
3f88b08003 Fix some minor issues in view pretty-printing.
Code review for commit 2f582f76b1: don't use
a static variable for what ought to be a deparse_context field, fix
non-multibyte-safe test for spaces, avoid useless and potentially O(N^2)
(though admittedly with a very small constant) calculations of wrap
positions when we aren't going to wrap.
2012-12-24 17:52:19 -05:00
Simon Riggs
c2b3218064 Update comments on rd_newRelfilenodeSubid.
Ensure comments accurately reflect state of code
given new understanding, and recent changes.
Include example code from Noah Misch to
illustrate how rd_newRelfilenodeSubid can be
reset deterministically. No code changes.
2012-12-24 17:07:06 +00:00
Simon Riggs
ae9aba69a8 Keep rd_newRelfilenodeSubid across overflow.
Teach RelationCacheInvalidate() to keep rd_newRelfilenodeSubid across rel cache
message overflows, so that behaviour is now fully deterministic.

Noah Misch
2012-12-24 16:43:22 +00:00
Simon Riggs
42fa810c14 Fix more weird compiler messages caused
by unmatched function prototypes.

Andres Freund
2012-12-24 16:25:26 +00:00
Simon Riggs
2dcb2ebee2 Add function prototype from previous commit. 2012-12-24 09:18:42 +00:00
Robert Haas
c504513f83 Adjust many backend functions to return OID rather than void.
Extracted from a larger patch by Dimitri Fontaine.  It is hoped that
this will provide infrastructure for enriching the new event trigger
functionality, but it seems possibly useful for other purposes as
well.
2012-12-23 18:37:58 -05:00
Tom Lane
31bc839724 Prevent failure when RowExpr or XmlExpr is parse-analyzed twice.
transformExpr() is required to cope with already-transformed expression
trees, for various ugly-but-not-quite-worth-cleaning-up reasons.  However,
some of its newer subroutines hadn't gotten the memo.  This accounts for
bug #7763 from Norbert Buchmuller: transformRowExpr() was overwriting the
previously determined type of a RowExpr during CREATE TABLE LIKE INCLUDING
INDEXES.  Additional investigation showed that transformXmlExpr had the
same kind of problem, but all the other cases seem to be safe.

Andres Freund and Tom Lane
2012-12-23 14:07:24 -05:00
Heikki Linnakangas
1ff92eea14 Fix sloppiness in the timeline switch over streaming replication patch.
Here's another attempt at fixing the logic that decides how far the WAL can
be streamed, which was still broken if the timeline changed while streaming.
You would get an assertion failure. The way the logic is now written is more
readable, too.

Thom Brown reported the assertion failure.
2012-12-21 20:08:12 +02:00
Heikki Linnakangas
36e4456d78 Fix race condition if a file is removed while pg_basebackup is running.
If a relation file was removed when the server-side counterpart of
pg_basebackup was just about to open it to send it to the client, you'd
get a "could not open file" error. Fix that.

Backpatch to 9.1, this goes back to when pg_basebackup was introduced.
2012-12-21 15:34:15 +02:00
Heikki Linnakangas
d57a97343e Forgot to remove extern declaration of GetRecoveryTargetTLI()
Fujii Masao
2012-12-21 09:29:03 +02:00
Peter Eisentraut
740ee42da5 Make some messages more consistent in style 2012-12-21 00:10:46 -05:00
Peter Eisentraut
a0bfb7b36e Fix grammatical mistake in error message 2012-12-20 23:36:13 -05:00
Tom Lane
343c2a865b Fix pg_extension_config_dump() to handle update cases more sanely.
If pg_extension_config_dump() is executed again for a table already listed
in the extension's extconfig, the code was blindly making a new array entry.
This does not seem useful.  Fix it to replace the existing array entry
instead, so that it's possible for extension update scripts to alter the
filter conditions for configuration tables.

In addition, teach ALTER EXTENSION DROP TABLE to check for an extconfig
entry for the target table, and remove it if present.  This is not a 100%
solution because it's allowed for an extension update script to just
summarily DROP a member table, and that code path doesn't go through
ExecAlterExtensionContentsStmt.  We could probably make that case clean
things up if we had to, but it would involve sticking a very ugly wart
somewhere in the guts of dependency.c.  Since on the whole it seems quite
unlikely that extension updates would want to remove pre-existing
configuration tables, making the case possible with an explicit command
seems sufficient.

Per bug #7756 from Regina Obe.  Back-patch to 9.1 where extensions were
introduced.
2012-12-20 16:31:42 -05:00
Heikki Linnakangas
343ee00b73 Fix recycling of WAL segments after switching timeline during recovery.
This was broken before, we would recycle old WAL segments on wrong timeline
after the recovery target timeline had changed, but my recent commit to
not initialize ThisTimeLineID at all in a standby's checkpointer process
broke this completely.

The problem is that when installing a recycled WAL segment as a future one,
ThisTimeLineID is used to construct the filename. To fix, always update
ThisTimeLineID to the current timeline being recovered, before recycling
WAL segments at a restartpoint.

This still leaves a small window where we might install WAL segments under
wrong timeline ID, if the timeline is changed just as we're about to start
recycling. Also, even if we're replaying timeline X at the moment, there's
no guarantee that we'll need as many WAL segments on that timeline as we
recycle. We might be just about to reach the point where we switch to next
timeline, so might only need one more WAL segment on the current timeline.
We'll live with the waste in that situation.

Bug pointed out by Fujii Masao. 9.1 and 9.2 had the same issue, when
recovery target timeline was changed, but I committed a slightly different
version of this patch on those branches.
2012-12-20 22:00:58 +02:00
Bruce Momjian
dc9896a245 Avoid using NAMEDATALEN in pg_upgrade
Because the client encoding might not match the server encoding,
pg_upgrade can't allocate NAMEDATALEN bytes for storage of database,
relation, and namespace identifiers.  Instead pg_strdup() the memory and
free it.

Also add C comment in initdb.c about safe NAMEDATALEN usage.
2012-12-20 13:56:31 -05:00
Heikki Linnakangas
af275a12df Follow TLI of last replayed record, not recovery target TLI, in walsenders.
Most of the time, the last replayed record comes from the recovery target
timeline, but there is a corner case where it makes a difference. When
the startup process scans for a new timeline, and decides to change recovery
target timeline, there is a window where the recovery target TLI has already
been bumped, but there are no WAL segments from the new timeline in pg_xlog
yet. For example, if we have just replayed up to point 0/30002D8, on
timeline 1, there is a WAL file called 000000010000000000000003 in pg_xlog
that contains the WAL up to that point. When recovery switches recovery
target timeline to 2, a walsender can immediately try to read WAL from
0/30002D8, from timeline 2, so it will try to open WAL file
000000020000000000000003. However, that doesn't exist yet - the startup
process hasn't copied that file from the archive yet nor has the walreceiver
streamed it yet, so walsender fails with error "requested WAL segment
000000020000000000000003 has already been removed". That's harmless, in that
the standby will try to reconnect later and by that time the segment is
already created, but error messages that should be ignored are not good.

To fix that, have walsender track the TLI of the last replayed record,
instead of the recovery target timeline. That way walsender will not try to
read anything from timeline 2, until the WAL segment has been created and at
least one record has been replayed from it. The recovery target timeline is
now xlog.c's internal affair, it doesn't need to be exposed in shared memory
anymore.

This fixes the error reported by Thom Brown. depesz reported the same error message,
but I'm not sure if this fixes his scenario.
2012-12-20 14:39:04 +02:00
Heikki Linnakangas
1a11d4609e Don't set ThisTimeLineID in checkpointer & bgwriter during recovery.
We used to set it to the current recovery target timeline, but the recovery
target timeline can change during recovery, leaving ThisTimeLineID at an
old value. That seems worse than always leaving it at zero to begin with.

AFAICS there was no good reason to set it in the first place. ThisTimeLineID
is not needed in checkpointer or bgwriter process, until it's time to write
the end-of-recovery checkpoint, and at that point ThisTimeLineID is updated
anyway.
2012-12-20 14:39:04 +02:00
Heikki Linnakangas
e43f947bf3 Check if we've reached end-of-backup point also if no redo is required.
If you restored from a backup taken from a standby, and the last record in
the backup is the checkpoint record, ie. there is no redo required except
for the checkpoint record, we would fail to notice that we've reached the
end-of-backup point, and the database is consistent. The result was an
error "WAL ends before end of online backup". To fix, move the
have-we-reached-end-of-backup check into CheckRecoveryConsistency(), which
is already responsible for similar checks with minRecoveryPoint, and is
called in the right places.

Backpatch to 9.2, this check and bug did not exist before that.
2012-12-19 14:22:00 +02:00
Peter Eisentraut
f2b88080db Rename SQL feature S403 to ARRAY_MAX_CARDINALITY
In an earlier version of the standard, this was called just
"MAX_CARDINALITY".
2012-12-19 07:14:27 -05:00
Peter Eisentraut
6925e38dad pg_basebackup: Small message punctuation improvements 2012-12-19 07:01:11 -05:00
Andrew Dunstan
9ac749ceb5 Don't include postgres.h in postgres_fe.h for cpluspluscheck.
Error exposed by recent Assert changes.

Complaint from Peter Eisentraut.
2012-12-18 16:30:14 -05:00
Peter Eisentraut
1a5f04dd7e Remove allow_nonpic_in_shlib
This was used in a time when a shared libperl or libpython was difficult
to come by.  That is obsolete, and the idea behind the flag was never
fully portable anyway and will likely fail on more modern CPU
architectures.
2012-12-18 01:13:59 -05:00
Tom Lane
6919b7e329 Fix failure to ignore leftover temp tables after a server crash.
During crash recovery, we remove disk files belonging to temporary tables,
but the system catalog entries for such tables are intentionally not
cleaned up right away.  Instead, the first backend that uses a temp schema
is expected to clean out any leftover objects therein.  This approach
requires that we be careful to ignore leftover temp tables (since any
actual access attempt would fail), *even if their BackendId matches our
session*, if we have not yet established use of the session's corresponding
temp schema.  That worked fine in the past, but was broken by commit
debcec7dc3 which incorrectly removed the
rd_islocaltemp relcache flag.  Put it back, and undo various changes
that substituted tests like "rel->rd_backend == MyBackendId" for use
of a state-aware flag.  Per trouble report from Heikki Linnakangas.

Back-patch to 9.1 where the erroneous change was made.  In the back
branches, be careful to add rd_islocaltemp in a spot in the struct that
was alignment padding before, so as not to break existing add-on code.
2012-12-17 20:15:32 -05:00
Tom Lane
c299477229 Fix filling of postmaster.pid in bootstrap/standalone mode.
We failed to ever fill the sixth line (LISTEN_ADDR), which caused the
attempt to fill the seventh line (SHMEM_KEY) to fail, so that the shared
memory key never got added to the file in standalone mode.  This has been
broken since we added more content to our lock files in 9.1.

To fix, tweak the logic in CreateLockFile to add an empty LISTEN_ADDR
line in standalone mode.  This is a tad grotty, but since that function
already knows almost everything there is to know about the contents of
lock files, it doesn't seem that it's any better to hack it elsewhere.

It's not clear how significant this bug really is, since a standalone
backend should never have any children and thus it seems not critical
to be able to check the nattch count of the shmem segment externally.
But I'm going to back-patch the fix anyway.

This problem had escaped notice because of an ancient (and in hindsight
pretty dubious) decision to suppress LOG-level messages by default in
standalone mode; so that the elog(LOG) complaint in AddToDataDirLockFile
that should have warned of the problem didn't do anything.  Fixing that
is material for a separate patch though.
2012-12-16 15:02:49 -05:00
Andrew Dunstan
3717f0837b Tidy up from frontend Assert change.
Quiet compiler warnings noted by Peter Eisentraut.
2012-12-16 12:22:57 -05:00
Magnus Hagander
c1f856a17f Properly copy fmgroids.h after clean on Win32
Craig Ringer
2012-12-16 14:56:51 +01:00
Andrew Dunstan
1c382655ad Provide Assert() for frontend code.
Per discussion on-hackers. psql is converted to use the new code.

Follows a suggestion from Heikki Linnakangas.
2012-12-14 18:03:07 -05:00
Robert Haas
75758a6ff0 Update comment in heapgetpage() regarding PD_ALL_VISIBLE vs. Hot Standby.
Pavan Deolasee, slightly modified by me
2012-12-14 15:44:38 -05:00
Peter Eisentraut
fdb67eb2b6 NLS: Use msgmerge --previous option
It provides some additional help to translators.
2012-12-13 23:12:12 -05:00
Heikki Linnakangas
abfd192b1b Allow a streaming replication standby to follow a timeline switch.
Before this patch, streaming replication would refuse to start replicating
if the timeline in the primary doesn't exactly match the standby. The
situation where it doesn't match is when you have a master, and two
standbys, and you promote one of the standbys to become new master.
Promoting bumps up the timeline ID, and after that bump, the other standby
would refuse to continue.

There's significantly more timeline related logic in streaming replication
now. First of all, when a standby connects to primary, it will ask the
primary for any timeline history files that are missing from the standby.
The missing files are sent using a new replication command TIMELINE_HISTORY,
and stored in standby's pg_xlog directory. Using the timeline history files,
the standby can follow the latest timeline present in the primary
(recovery_target_timeline='latest'), just as it can follow new timelines
appearing in an archive directory.

START_REPLICATION now takes a TIMELINE parameter, to specify exactly which
timeline to stream WAL from. This allows the standby to request the primary
to send over WAL that precedes the promotion. The replication protocol is
changed slightly (in a backwards-compatible way although there's little hope
of streaming replication working across major versions anyway), to allow
replication to stop when the end of the timeline is reached, putting the walsender
back into accepting a replication command.

Many thanks to Amit Kapila for testing and reviewing various versions of
this patch.
2012-12-13 19:17:32 +02:00
Heikki Linnakangas
527668717a Make xlog_internal.h includable in frontend context.
This makes unnecessary the ugly hack used to #include postgres.h in
pg_basebackup.

Based on Alvaro Herrera's patch
2012-12-13 14:59:13 +02:00
Heikki Linnakangas
6264cd3d69 In multi-insert, don't go into infinite loop on a huge tuple and fillfactor.
If a tuple is larger than page size minus space reserved for fillfactor,
heap_multi_insert would never find a page that it fits in and repeatedly ask
for a new page from RelationGetBufferForTuple. If a tuple is too large to
fit on any page, taking fillfactor into account, RelationGetBufferForTuple
will always expand the relation. In a normal insert, heap_insert will accept
that and put the tuple on the new page. heap_multi_insert, however, does a
fillfactor check of its own, and doesn't accept the newly-extended page
RelationGetBufferForTuple returns, even though there is no other choice to
make the tuple fit.

Fix that by making the logic in heap_multi_insert more like the heap_insert
logic. The first tuple is always put on the page RelationGetBufferForTuple
gives us, and the fillfactor check is only applied to the subsequent tuples.

Report from David Gould, although I didn't use his patch.
2012-12-12 13:54:42 +02:00
Tom Lane
691c5ebf79 Add defenses against integer overflow in dynahash numbuckets calculations.
The dynahash code requires the number of buckets in a hash table to fit
in an int; but since we calculate the desired hash table size dynamically,
there are various scenarios where we might calculate too large a value.
The resulting overflow can lead to infinite loops, division-by-zero
crashes, etc.  I (tgl) had previously installed some defenses against that
in commit 299d171652, but that covered only one
call path.  Moreover it worked by limiting the request size to work_mem,
but in a 64-bit machine it's possible to set work_mem high enough that the
problem appears anyway.  So let's fix the problem at the root by installing
limits in the dynahash.c functions themselves.

Trouble report and patch by Jeff Davis.
2012-12-11 22:09:05 -05:00
Tom Lane
cd3413ec36 Disable event triggers in standalone mode.
Per discussion, this seems necessary to allow recovery from broken event
triggers, or broken indexes on pg_event_trigger.

Dimitri Fontaine
2012-12-11 19:28:31 -05:00
Kevin Grittner
b19e4250b4 Fix performance problems with autovacuum truncation in busy workloads.
In situations where there are over 8MB of empty pages at the end of
a table, the truncation work for trailing empty pages takes longer
than deadlock_timeout, and there is frequent access to the table by
processes other than autovacuum, there was a problem with the
autovacuum worker process being canceled by the deadlock checking
code. The truncation work done by autovacuum up to that point was
lost, and the attempt was retried by a later autovacuum worker. The
attempts could continue indefinitely without making progress,
consuming resources and blocking other processes for up to
deadlock_timeout each time.

This patch has the autovacuum worker checking whether it is
blocking any other process at 20ms intervals. If such a condition
develops, the autovacuum worker will persist the work it has done
so far, release its lock on the table, and sleep in 50ms intervals
for up to 5 seconds, hoping to be able to re-acquire the lock and
try again. If it is unable to get the lock in that time, it moves
on and a worker will try to continue later from the point this one
left off.

While this patch doesn't change the rules about when and what to
truncate, it does cause the truncation to occur sooner, with less
blocking, and with the consumption of fewer resources when there is
contention for the table's lock.

The only user-visible change other than improved performance is
that the table size during truncation may change incrementally
instead of just once.

This problem exists in all supported versions but is infrequently
reported, although some reports of performance problems when
autovacuum runs might be caused by this. Initial commit is just the
master branch, but this should probably be backpatched once the
build farm and general developer usage confirm that there are no
surprising effects.

Jan Wieck
2012-12-11 14:33:08 -06:00
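The following standalone C sketch models only the retry strategy quoted above (lock primitives are stubs and all names are illustrative, not the autovacuum code): check for blocked processes every 20ms, and if any are found, persist progress, release the lock, and retry for up to 5 seconds in 50ms steps before handing the rest to a later worker.

    #include <stdbool.h>
    #include <stdio.h>
    #include <unistd.h>

    #define CHECK_INTERVAL_MS   20      /* how often to look for blocked processes */
    #define RETRY_INTERVAL_MS   50      /* sleep between re-acquisition attempts */
    #define RETRY_LIMIT_MS      5000    /* give up after this long */

    static bool someone_is_waiting(void)      { return true; }   /* stub */
    static bool try_reacquire_exclusive(void) { return false; }  /* stub: never re-granted */
    static void persist_progress(void)        { puts("progress persisted"); }
    static void release_lock(void)            { puts("lock released"); }

    /* Returns true if truncation may continue, false if a later worker must resume. */
    static bool yield_and_retry(void)
    {
        int     waited_ms = 0;

        persist_progress();
        release_lock();
        while (waited_ms < RETRY_LIMIT_MS)
        {
            usleep(RETRY_INTERVAL_MS * 1000);
            waited_ms += RETRY_INTERVAL_MS;
            if (try_reacquire_exclusive())
                return true;            /* got the lock back, keep truncating */
        }
        return false;                   /* leave the rest for a later worker */
    }

    int main(void)
    {
        /* One unit of truncation work, then check for blocked processes. */
        usleep(CHECK_INTERVAL_MS * 1000);
        if (someone_is_waiting() && !yield_and_retry())
            puts("suspending truncation; a later worker continues from here");
        return 0;
    }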
Heikki Linnakangas
970fb12de1 Consistency check should compare last record replayed, not last record read.
EndRecPtr is the last record that we've read, but not necessarily yet
replayed. CheckRecoveryConsistency should compare minRecoveryPoint with the
last replayed record instead. This caused recovery to think it's reached
consistency too early.

Now that we do the check in CheckRecoveryConsistency correctly, we have to
move the call of that function to after redoing a record. The current place,
after reading a record but before replaying it, is wrong. In particular, if
there are no more records after the one ending at minRecoveryPoint, we don't
enter hot standby until one extra record is generated and read by the
standby, and CheckRecoveryConsistency is called. These two bugs conspired
to make the code appear to work correctly, except for the small window
between reading the last record that reaches minRecoveryPoint, and
replaying it.

In passing, rename recoveryLastRecPtr, which is the last record
replayed, to lastReplayedEndRecPtr. This makes it slightly less confusing
with replayEndRecPtr, which is the last record read that we're about to
replay.

Original report from Kyotaro HORIGUCHI, further diagnosis by Fujii Masao.
Backpatch to 9.0, where Hot Standby subtly changed the test from
"minRecoveryPoint < EndRecPtr" to "minRecoveryPoint <= EndRecPtr". The
former works because where the test is performed, we have always read one
more record than we've replayed.
2012-12-11 18:54:02 +02:00
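An illustrative standalone C snippet (the type and positions are made-up values, not real WAL pointers) contrasting the two comparisons discussed above: testing against the last record read declares consistency too early, while testing against the last record replayed does not.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    typedef uint64_t XLogRecPtr;        /* stand-in for the real type */

    int main(void)
    {
        XLogRecPtr  minRecoveryPoint      = 0x3000;
        XLogRecPtr  lastReadEndRecPtr     = 0x3000;     /* read, not yet replayed */
        XLogRecPtr  lastReplayedEndRecPtr = 0x2F00;
        bool        consistent;

        /* wrong: based on the last record read */
        consistent = (minRecoveryPoint <= lastReadEndRecPtr);
        printf("using last read:     %s\n", consistent ? "consistent" : "not yet");

        /* right: based on the last record actually replayed */
        consistent = (minRecoveryPoint <= lastReplayedEndRecPtr);
        printf("using last replayed: %s\n", consistent ? "consistent" : "not yet");
        return 0;
    }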
Andrew Dunstan
ad69bd052f Add mode where contrib installcheck runs each module in a separately named database.
Normally each module is tested in a database named contrib_regression,
which is dropped and recreated at the beginning of each pg_regress run.
This new mode, enabled by adding USE_MODULE_DB=1 to the make command
line, runs most modules in a database with the module name embedded in
it.

This will make testing pg_upgrade on clusters with the contrib modules
a lot easier.

Second attempt at this, this time accommodating make versions older
than 3.82.

Still to be done: adapt to the MSVC build system.

Backpatch to 9.0, which is the earliest version it is reasonably
possible to test upgrading from.
2012-12-11 11:52:45 -05:00
Heikki Linnakangas
7bffc9b7bf Update minimum recovery point on truncation.
If a file is truncated, we must update minRecoveryPoint. Once a file is
truncated, there's no going back; it would not be safe to stop recovery
at a point earlier than that anymore.

Per report from Kyotaro HORIGUCHI. Backpatch to 8.4. Before that,
minRecoveryPoint was not updated during recovery at all.
2012-12-10 16:57:16 +02:00
Heikki Linnakangas
6be799664a Fix the tracking of min recovery point timeline.
Forgot to update it at the right place. Also, consider a checkpoint record
that switches to a new timeline to be on the new timeline.

This fixes erroneous "requested timeline 2 does not contain minimum recovery
point" errors, pointed out by Amit Kapila while testing another patch.
2012-12-10 16:04:26 +02:00
Tom Lane
b46c92112b Fix assorted bugs in privileges-for-types patch.
Commit 729205571e added privileges on data
types, but there were a number of oversights.  The implementation of
default privileges for types missed a few places, and pg_dump was
utterly innocent of the whole concept.  Per bug #7741 from Nathan Alden,
and subsequent wider investigation.
2012-12-09 00:08:23 -05:00
Tom Lane
a99c42f291 Support automatically-updatable views.
This patch makes "simple" views automatically updatable, without the need
to create either INSTEAD OF triggers or INSTEAD rules.  "Simple" views
are those classified as updatable according to SQL-92 rules.  The rewriter
transforms INSERT/UPDATE/DELETE commands on such views directly into an
equivalent command on the underlying table, which will generally have
noticeably better performance than is possible with either triggers or
user-written rules.  A view that has INSTEAD OF triggers or INSTEAD rules
continues to operate the same as before.

For the moment, security_barrier views are not considered simple.
Also, we do not support WITH CHECK OPTION.  These features may be
added in future.

Dean Rasheed, reviewed by Amit Kapila
2012-12-08 18:26:21 -05:00
Simon Riggs
ef754fb51b Correct xmax test for COPY FREEZE 2012-12-07 14:18:47 +00:00
Simon Riggs
1f023f9297 Optimize COPY FREEZE with CREATE TABLE also.
Jeff Davis, additional test by me
2012-12-07 13:26:52 +00:00
Simon Riggs
1eb6cee499 Clarify that COPY FREEZE is not a hard rule.
Remove message when FREEZE not honoured,
clarify reasons in comments and docs.
2012-12-07 12:59:05 +00:00
Tom Lane
31a891857a Improve pl/pgsql to support composite-type expressions in RETURN.
For some reason lost in the mists of prehistory, RETURN was only coded to
allow a simple reference to a composite variable when the function's return
type is composite.  Allow an expression instead, while preserving the
efficiency of the original code path in the case where the expression is
indeed just a composite variable's name.  Likewise for RETURN NEXT.

As is true in various other places, the supplied expression must yield
exactly the number and data types of the required columns.  There was some
discussion of relaxing that for pl/pgsql, but no consensus yet, so this
patch doesn't address that.

Asif Rehman, reviewed by Pavel Stehule
2012-12-06 23:09:52 -05:00
Alvaro Herrera
da07a1e856 Background worker processes
Background workers are postmaster subprocesses that run arbitrary
user-specified code.  They can request shared memory access as well as
backend database connections; or they can just use plain libpq frontend
database connections.

Modules listed in shared_preload_libraries can register background
workers in their _PG_init() function; this is early enough that it's not
necessary to provide an extra GUC option, because the necessary extra
resources can be allocated early on.  Modules can install more than one
bgworker, if necessary.

Care is taken that these extra processes do not interfere with other
postmaster tasks: only one such process is started on each ServerLoop
iteration.  This means a large number of them could be waiting to be
started up and postmaster is still able to quickly service external
connection requests.  Also, shutdown sequence should not be impacted by
a worker process that's reasonably well behaved (i.e. promptly responds
to termination signals.)

The current implementation lets worker processes specify their start
time, i.e. at what point in the server startup process they are to be
started: right after postmaster start (in which case they mustn't ask
for shared memory access), when consistent state has been reached
(useful during recovery in a HOT standby server), or when recovery has
terminated (i.e. when normal backends are allowed).

In case of a bgworker crash, actions to take depend on registration
data: if shared memory was requested, then all other connections are
taken down (as well as other bgworkers), just as if it were a regular
backend crashing.  The bgworker itself is restarted, too, within a
configurable timeframe (which can be configured to be never).

More features to add to this framework can be imagined without much
effort, and have been discussed, but this seems good enough as a useful
unit already.

An elementary sample module is supplied.

Author: Álvaro Herrera

This patch is loosely based on prior patches submitted by KaiGai Kohei,
and unsubmitted code by Simon Riggs.

Reviewed by: KaiGai Kohei, Markus Wanner, Andres Freund,
Heikki Linnakangas, Simon Riggs, Amit Kapila
2012-12-06 17:47:30 -03:00
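The registration API itself is not reproduced here; the standalone C sketch below (names purely illustrative) models just the scheduling rule described above, starting at most one pending worker per pass of the postmaster's main loop so connection handling stays responsive.

    #include <stdbool.h>
    #include <stdio.h>

    #define NWORKERS 5

    static bool started[NWORKERS];

    /* Start at most one registered-but-not-yet-running worker. */
    static bool start_one_pending_worker(void)
    {
        for (int i = 0; i < NWORKERS; i++)
        {
            if (!started[i])
            {
                started[i] = true;
                printf("started worker %d\n", i);
                return true;            /* only one per loop iteration */
            }
        }
        return false;
    }

    int main(void)
    {
        /* Each iteration stands in for one pass of the server's main loop. */
        for (int iter = 0; iter < 7; iter++)
        {
            if (!start_one_pending_worker())
                printf("iteration %d: nothing left to start\n", iter);
            /* ... service connection requests here ... */
        }
        return 0;
    }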
Tom Lane
e31d524867 Fix intermittent crash in DROP INDEX CONCURRENTLY.
When deleteOneObject closes and reopens the pg_depend relation,
we must see to it that the relcache pointer held by the calling function
(typically performMultipleDeletions) is updated.  Usually the relcache
entry is retained so that the pointer value doesn't change, which is why
the problem had escaped notice ... but after a cache flush event there's
no guarantee that the same memory will be reassigned.  To fix, change
the recursive functions' APIs so that we pass around a "Relation *"
not just "Relation".

Per investigation of occasional buildfarm failures.  This is trivial
to reproduce with -DCLOBBER_CACHE_ALWAYS, which points up the sad
lack of any buildfarm member running that way on a regular basis.
2012-12-05 23:42:51 -05:00
Alvaro Herrera
5e15cdb2ae Update comment at top of index_create
I neglected to update it in commit f4c4335.

Michael Paquier
2012-12-05 23:09:46 -03:00
Tom Lane
af4aba2f05 Ensure recovery pause feature doesn't pause unless users can connect.
If we're not in hot standby mode, then there's no way for users to connect
to reset the recoveryPause flag, so we shouldn't pause.  The code was aware
of this but the test to see if pausing was safe was seriously inadequate:
it wasn't paying attention to reachedConsistency, and besides what it was
testing was that we could legally enter hot standby, not that we have
done so.  Get rid of that in favor of checking LocalHotStandbyActive,
which because of the coding in CheckRecoveryConsistency is tantamount to
checking that we have told the postmaster to enter hot standby.

Also, move the recoveryPausesHere() call that reacts to asynchronous
recoveryPause requests so that it's not in the middle of application of a
WAL record.  I put it next to the recoveryStopsHere() call --- in future
those are going to need to interact significantly, so this seems like a
good waystation.

Also, don't bother trying to read another WAL record if we've already
decided not to continue recovery.  This was no big deal when the code was
written originally, but now that reading a record might entail actions like
fetching an archive file, it seems a bit silly to do it like that.

Per report from Jeff Janes and subsequent discussion.  The pause feature
needs quite a lot more work, but this gets rid of some indisputable bugs,
and seems safe enough to back-patch.
2012-12-05 18:27:50 -05:00
Heikki Linnakangas
d67b06fe3e Oops, meant to change the comment in writeTimeLineHistory. 2012-12-05 21:00:59 +02:00
Simon Riggs
6aa2e49a87 Must not reach consistency before XLOG_BACKUP_RECORD
When waiting for an XLOG_BACKUP_RECORD the minRecoveryPoint
will be incorrect, so we must not declare recovery as consistent
before we have seen the record. Major bug allowing recovery to end
too early in some cases, allowing people to see inconsistent db.
This patch to HEAD and 9.2, other fix required for 9.1 and 9.0

Simon Riggs and Andres Freund, bug report by Jeff Janes
2012-12-05 13:28:03 +00:00
Tom Lane
cdf498c5d7 Attempt to un-break Windows builds with USE_LDAP.
The buildfarm shows this case is entirely broken, and I'm betting the
reason is lack of any include file.
2012-12-04 17:25:51 -05:00
Michael Meskes
ac99ca68d7 Include isinf.o in libecpg if isinf() is not available on the system.
Patch done by Jiang Guiqing <jianggq@cn.fujitsu.com>.
2012-12-04 16:44:22 +01:00
Heikki Linnakangas
90991c40eb Downgrade a status message from LOG to DEBUG2.
I never intended this to be anything other than a debugging aid, but forgot
to change the level before committing.
2012-12-04 17:29:44 +02:00
Heikki Linnakangas
32f4de0adf Write exact xlog position of timeline switch in the timeline history file.
This allows us to do some more rigorous sanity checking for various
incorrect point-in-time recovery scenarios, and provides more information
for debugging purposes. It will also come handy in the upcoming patch to
allow timeline switches to be replicated by streaming replication.
2012-12-04 17:29:07 +02:00
Bruce Momjian
a84c30dda5 In initdb.c, move auth warning code into main() from secondary function. 2012-12-04 09:52:00 -05:00
Peter Eisentraut
ec8d1e32dd Fix build of LDAP URL feature
Some code was not ifdef'ed out for non-LDAP builds.

patch from Bruce Momjian
2012-12-04 06:42:25 -05:00
Heikki Linnakangas
5ce108bf32 Track the timeline associated with minRecoveryPoint, for more sanity checks.
This allows recovery to notice certain incorrect recovery scenarios.
If a server has recovered to point X on timeline 5, and you restart
recovery, it better be on timeline 5 when it reaches point X again, not on
some timeline with a higher ID. This can happen, e.g., if a standby server
is shut down, a new timeline appears in the WAL archive, and the standby
server is restarted. It will try to follow the new timeline, which is wrong
because some WAL on the old timeline was already replayed before shutdown.

Requires an initdb (or at least pg_resetxlog), because this adds a field to
the control file.
2012-12-04 11:31:00 +02:00
Peter Eisentraut
aa2fec0a18 Add support for LDAP URLs
Allow specifying LDAP authentication parameters as RFC 4516 LDAP URLs.
2012-12-03 23:31:02 -05:00
Bruce Momjian
26374f2a0f In initdb.c, rename some newly created functions, and move the directory
creation and xlog symlink creation to separate functions.

Per suggestions from Andrew Dunstan.
2012-12-03 23:22:56 -05:00
Bruce Momjian
630cd14426 Add initdb --sync-only option to sync the data directory to durable
storage.

Have pg_upgrade use it, and enable server options fsync=off and
full_page_writes=off.

Document that users turning fsync from off to on should run initdb
--sync-only.

[ Previous commit was incorrectly applied as a git merge. ]
2012-12-03 22:47:59 -05:00
Bruce Momjian
25d1ed04a2 Revert initdb --sync-only patch that had incorrect commit messages. 2012-12-03 22:46:51 -05:00
Bruce Momjian
cd7569a546 dummy commit 2012-12-03 22:45:02 -05:00
Bruce Momjian
db00d837c1 In pg_upgrade, fix bug where no users were dumped in pg_dumpall
binary-upgrade mode;  instead only skip dumping the current user.

This bug was introduced during the removal of split_old_dump().  Bug
discovered during local testing.
2012-12-03 19:43:02 -05:00
Andrew Dunstan
fc5c1bbbeb Revert "Add mode where contrib installcheck runs each module in a separately named database."
This reverts commit e2b3c21b05.
2012-12-03 15:00:51 -05:00
Simon Riggs
62656617db Avoid holding vmbuffer pin after VACUUM.
During VACUUM, if we pause to perform a cycle
of index cleanup we drop the vmbuffer pin,
so we should do the same thing when the heap
scan completes. This avoids holding the vmbuffer
pin across the main index cleanup in VACUUM,
which could be minutes or hours longer than
necessary for correctness.

Bug report and suggested fix from Pavan Deolasee
2012-12-03 18:53:31 +00:00
Andrew Dunstan
d5652e50d5 Attempt to unbreak MSVC builds broken by f21bb9cfb5.
We can't use type uint, so use uint32.
2012-12-03 10:23:22 -05:00
Simon Riggs
f21bb9cfb5 Refactor inCommit flag into generic delayChkpt flag.
Rename PGXACT->inCommit flag into delayChkpt flag,
and generalise comments to allow use in other situations,
such as the forthcoming potential use in checksum patch.
Replace wait loop to look for VXIDs with delayChkpt set.
No user-visible changes, no behaviour changes at present.

Simon Riggs, reviewed and rebased by Jeff Davis
2012-12-03 13:13:53 +00:00
Simon Riggs
7a764990d8 Clarify locking for PageGetLSN() in XLogCheckBuffer() 2012-12-03 12:20:31 +00:00
Simon Riggs
1c563a2ae1 Clarify when to use PageSetLSN/PageGetLSN().
Update README to explain prerequisites for
correct access to LSN fields of a page.
Independent chunk removed from checksums
patch to reduce size of patch.
2012-12-03 11:59:25 +00:00
Heikki Linnakangas
a068c391ab Refactor the code implementing standby-mode logic.
It is now easier to see that it's a state machine, making the code easier
to understand overall.
2012-12-03 12:32:44 +02:00
Andrew Dunstan
e2b3c21b05 Add mode where contrib installcheck runs each module in a separately named database.
Normally each module is tested in a database named contrib_regression,
which is dropped and recreated at the beginning of each pg_regress run.
This mode, enabled by adding USE_MODULE_DB=1 to the make command line,
runs most modules in a database with the module name embedded in it.

This will make testing pg_upgrade on clusters with the contrib modules
a lot easier.

Still to be done: adapt to the MSVC build system.

Backpatch to 9.0, which is the earliest version it is reasonably possible
to test upgrading from.
2012-12-02 17:20:38 -05:00
Tom Lane
fc75d4f81c Update time zone data files to tzdata release 2012j.
DST law changes in Cuba, Israel, Jordan, Libya, Palestine, Western Samoa,
and portions of Brazil.
2012-12-02 16:35:23 -05:00
Simon Riggs
5457a130d3 Reduce scope of changes for COPY FREEZE.
Allow support only for freezing tuples by explicit
command. Previous coding mistakenly extended
slightly beyond what was agreed as correct on -hackers.
So essentially a partial revert of earlier work,
leaving just the COPY FREEZE command.
2012-12-02 20:52:52 +00:00
Tom Lane
3114cb60a1 Don't advance checkPoint.nextXid near the end of a checkpoint sequence.
This reverts commit c11130690d in favor of
actually fixing the problem: namely, that we should never have been
modifying the checkpoint record's nextXid at this point to begin with.
The nextXid should match the state as of the checkpoint's logical WAL
position (ie the redo point), not the state as of its physical position.
It's especially bogus to advance it in some wal_levels and not others.
In any case there is no need for the checkpoint record to carry the
same nextXid shown in the XLOG_RUNNING_XACTS record just emitted by
LogStandbySnapshot, as any replay operation will already have adopted
that value as current.

This fixes bug #7710 from Tarvi Pillessaar, and probably also explains bug
#6291 from Daniel Farina, in that if a checkpoint were in progress at the
instant of XID wraparound, the epoch bump would be lost as reported.
(And, of course, these days there's at least a 50-50 chance of a checkpoint
being in progress at any given instant.)

Diagnosed by me and independently by Andres Freund.  Back-patch to all
branches supporting hot standby.
2012-12-02 15:20:41 -05:00
Simon Riggs
5c11725867 Rearrange storage of data in xl_running_xacts.
Previously we stored all xids mixed together.
Now we store top-level xids first, followed
by all subxids. Also skip logging any subxids
if the snapshot is suboverflowed, since there
are potentially large numbers of them and they
are not useful in that case anyway. Has value
in the envisaged design for decoding of WAL.
No planned effect on Hot Standby.

Andres Freund, reviewed by me
2012-12-02 19:39:37 +00:00
Simon Riggs
c11130690d XidEpoch++ if wraparound during checkpoint.
If wal_level = hot_standby we update the checkpoint nextxid,
though in the case where a wraparound occurred half-way through
a checkpoint we would neglect updating the epoch also. Updating
the nextxid is arguably the wrong thing to do, but changing that
may introduce subtle bugs into hot standby startup, while updating
the value doesn't cause any known bugs yet. Minimal fix now to
HEAD and backbranches, wider fix later in HEAD.

Bug reported in #6291 by Daniel Farina and slightly differently in

Cause analysis and recommended fixes from Tom Lane and Andres Freund.

Applied patch is minimal version of Andres Freund's work.
2012-12-02 14:57:44 +00:00
Simon Riggs
9f98704b82 Clarify operation of online checkpoints.
The previous comments were left in place, but were too obscure
for such an important aspect of the system.
2012-12-02 13:09:55 +00:00
Tatsuo Ishii
53edb8dc02 Fix psql crash while parsing a SQL file whose encoding is different from
the client encoding and the client encoding is not a *safe* one. One such
example is a file encoding of UTF-8 with client encoding SJIS. Patch
contributed by Jiang Guiqing.
2012-12-02 21:11:15 +09:00
Tom Lane
c35fea1026 Prevent passing gmake's environment variables down through pg_regress.
When we do "make install" to create a temp installation, we don't want
that instance of make to try to communicate with any instance of make
that might be calling us.  This is known to cause problems if the
upper make has a -jN flag, and in principle could cause problems even
without that.  Unset the relevant environment variables to prevent such
issues.

Andres Freund
2012-12-01 17:23:49 -05:00
Tom Lane
b1346822f3 Make sure sharedir/extension/ directory is created when needed.
The previous coding worked as long as MODULEDIR wasn't set explicitly,
because we create sharedir/$(datamoduledir) and the default value of
that is "extension".  But if some other value is specified for MODULEDIR
then the installation directory needed for the control file wasn't made.

Cédric Villemain
2012-12-01 16:04:39 -05:00
Tom Lane
7b90469b71 Allow adding values to an enum type created in the current transaction.
Normally it is unsafe to allow ALTER TYPE ADD VALUE in a transaction block,
because instances of the value could be added to indexes later in the same
transaction, and then they would still be accessible even if the
transaction rolls back.  However, we can allow this if the enum type itself
was created in the current transaction, because then any such indexes would
have to go away entirely on rollback.

The reason for allowing this is to support pg_upgrade's new usage of
pg_restore --single-transaction: in --binary-upgrade mode, pg_dump emits
enum types as a succession of ALTER TYPE ADD VALUE commands so that it can
preserve the values' OIDs.  The support is a bit limited, so we'll leave
it undocumented.

Andres Freund
2012-12-01 14:27:30 -05:00
Simon Riggs
02aea36414 Second tweak of COPY FREEZE 2012-12-01 14:55:35 +00:00
Simon Riggs
ddf509eb4a Tweak tests in COPY FREEZE 2012-12-01 13:46:41 +00:00
Simon Riggs
8de72b66a2 COPY FREEZE and mark committed on fresh tables.
When a relfilenode is created in this subtransaction or
a committed child transaction and it cannot otherwise
be seen by our own process, mark tuples committed ahead
of transaction commit for all COPY commands in same
transaction. If FREEZE specified on COPY
and pre-conditions met then rows will also be frozen.
Both options designed to avoid revisiting rows after commit,
increasing performance of subsequent commands after
data load and upgrade. pg_restore changes later.

Simon Riggs, review comments from Heikki Linnakangas, Noah Misch and design
input from Tom Lane, Robert Haas and Kevin Grittner
2012-12-01 12:54:20 +00:00
Alvaro Herrera
113d25c4e6 Change test ExceptionalCondition to return void
Commit 81107282a changed it in assert.c, but overlooked this other file.
2012-11-30 19:24:21 -03:00
Bruce Momjian
b86327c1c5 Split initdb.c main() code into multiple functions, for easier
maintenance.
2012-11-30 16:45:08 -05:00
Bruce Momjian
12ee6ec71f In pg_upgrade, dump each database separately and use
--single-transaction to restore each database schema.  This yields
performance improvements for databases with many tables.  Also, remove
split_old_dump() as it is no longer needed.
2012-11-30 16:30:13 -05:00
Bruce Momjian
bd9c8e741b Move long_options structures to the top of main() functions, for
consistency.

Per suggestion from Tom.
2012-11-30 14:49:55 -05:00
Tom Lane
da63fec7db Add missing buffer lock acquisition in GetTupleForTrigger().
If we had not been holding buffer pin continuously since the tuple was
initially fetched by the UPDATE or DELETE query, it would be possible for
VACUUM or a page-prune operation to move the tuple while we're trying to
copy it.  This would result in a garbage "old" tuple value being passed to
an AFTER ROW UPDATE or AFTER ROW DELETE trigger.  The preconditions for
this are somewhat improbable, and the timing constraints are very tight;
so it's not so surprising that this hasn't been reported from the field,
even though the bug has been there a long time.

Problem found by Andres Freund.  Back-patch to all active branches.
2012-11-30 13:55:55 -05:00
Magnus Hagander
65c3bf19fd Add libpq function PQconninfo()
This allows a caller to get back the exact conninfo array that was
used to create a connection, including parameters read from the
environment.

In doing this, restructure how options are copied from the conninfo
to the actual connection.

Zoltan Boszormenyi and Magnus Hagander
2012-11-30 15:11:08 +09:00
Tom Lane
4af446e7cd Produce a more useful error message for over-length Unix socket paths.
The length of a socket path name is constrained by the size of struct
sockaddr_un, and there's not a lot we can do about it since that is a
kernel API.  However, it would be a good thing if we produced an
intelligible error message when the user specifies a socket path that's too
long --- and getaddrinfo's standard API is too impoverished to do this in
the natural way.  So insert explicit tests at the places where we construct
a socket path name.  Now you'll get an error that makes sense and even
tells you what the limit is, rather than something generic like
"Non-recoverable failure in name resolution".

Per trouble report from Jeremy Drake and a fix idea from Andrew Dunstan.
2012-11-29 19:57:01 -05:00
Simon Riggs
d3fe59939c Correctly init fast path fields on PGPROC 2012-11-29 22:15:52 +00:00
Simon Riggs
f1e57a4ec9 Cleanup VirtualXact at end of Hot Standby. 2012-11-29 21:59:11 +00:00
Robert Haas
7a2fe9bd03 Basic binary heap implementation.
There are probably other places where this can be used, but for now,
this just makes MergeAppend use it, so that this code will have test
coverage.  There is other work in the queue that will use this, as
well.

Abhijit Menon-Sen, reviewed by Andres Freund, Robert Haas, Álvaro
Herrera, Tom Lane, and others.
2012-11-29 11:16:59 -05:00
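The commit's binaryheap API is not reproduced here; as a rough standalone C illustration of the data structure itself, here is a minimal fixed-capacity min-heap with sift-up on insert and sift-down on removal (this sketch assumes the caller stays within capacity).

    #include <stdio.h>
    #include <stdlib.h>

    typedef struct
    {
        int    *items;
        int     size;
        int     capacity;
    } MinHeap;

    static void swap_ints(int *a, int *b) { int t = *a; *a = *b; *b = t; }

    static MinHeap *heap_create(int capacity)
    {
        MinHeap *h = malloc(sizeof(MinHeap));

        h->items = malloc(sizeof(int) * capacity);
        h->size = 0;
        h->capacity = capacity;
        return h;
    }

    static void heap_push(MinHeap *h, int value)    /* caller must respect capacity */
    {
        int     i = h->size++;

        h->items[i] = value;
        while (i > 0 && h->items[(i - 1) / 2] > h->items[i])    /* sift up */
        {
            swap_ints(&h->items[i], &h->items[(i - 1) / 2]);
            i = (i - 1) / 2;
        }
    }

    static int heap_pop(MinHeap *h)                 /* undefined if the heap is empty */
    {
        int     top = h->items[0];
        int     i = 0;

        h->items[0] = h->items[--h->size];
        for (;;)                                    /* sift down */
        {
            int     l = 2 * i + 1, r = 2 * i + 2, smallest = i;

            if (l < h->size && h->items[l] < h->items[smallest]) smallest = l;
            if (r < h->size && h->items[r] < h->items[smallest]) smallest = r;
            if (smallest == i) break;
            swap_ints(&h->items[i], &h->items[smallest]);
            i = smallest;
        }
        return top;
    }

    int main(void)
    {
        MinHeap *h = heap_create(8);

        heap_push(h, 5); heap_push(h, 1); heap_push(h, 3);
        while (h->size > 0)
            printf("%d\n", heap_pop(h));            /* prints 1, 3, 5 */
        free(h->items); free(h);
        return 0;
    }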
Michael Meskes
086cf1458c When processing nested structure pointer variables, ecpg always expected an
array datatype, which of course is wrong.

Applied patch by Muhammad Usama <m.usama@gmail.com> to fix this.
2012-11-29 17:12:00 +01:00
Tom Lane
1fc698cf14 Suppress parallel build in interfaces/ecpg/preproc/.
This is to see if it will stop intermittent build failures on buildfarm
member okapi.  We know that gmake 3.82 has some problems with sometimes
not honoring dependencies in parallel builds, and it seems likely that
this is more of the same.  Since the vast bulk of the work in the preproc
directory is associated with creating preproc.c and then preproc.o,
parallelism buys us hardly anything here anyway.

Also, make both this .NOTPARALLEL and the one previously added in
interfaces/ecpg/Makefile be conditional on "ifeq ($(MAKE_VERSION),3.82)".
The known bug in gmake is fixed upstream and should not be present in
3.83 and up, and there's no reason to think it affects older releases.
2012-11-28 22:19:46 -05:00
Tom Lane
3c84046490 Fix assorted bugs in CREATE/DROP INDEX CONCURRENTLY.
Commit 8cb53654db, which introduced DROP
INDEX CONCURRENTLY, managed to break CREATE INDEX CONCURRENTLY via a poor
choice of catalog state representation.  The pg_index state for an index
that's reached the final pre-drop stage was the same as the state for an
index just created by CREATE INDEX CONCURRENTLY.  This meant that the
(necessary) change to make RelationGetIndexList ignore about-to-die indexes
also made it ignore freshly-created indexes; which is catastrophic because
the latter do need to be considered in HOT-safety decisions.  Failure to
do so leads to incorrect index entries and subsequently wrong results from
queries depending on the concurrently-created index.

To fix, add an additional boolean column "indislive" to pg_index, so that
the freshly-created and about-to-die states can be distinguished.  (This
change obviously is only possible in HEAD.  This patch will need to be
back-patched, but in 9.2 we'll use a kluge consisting of overloading the
formerly-impossible state of indisvalid = true and indisready = false.)

In addition, change CREATE/DROP INDEX CONCURRENTLY so that the pg_index
flag changes they make without exclusive lock on the index are made via
heap_inplace_update() rather than a normal transactional update.  The
latter is not very safe because moving the pg_index tuple could result in
concurrent SnapshotNow scans finding it twice or not at all, thus possibly
resulting in index corruption.  This is a pre-existing bug in CREATE INDEX
CONCURRENTLY, which was copied into the DROP code.

In addition, fix various places in the code that ought to check to make
sure that the indexes they are manipulating are valid and/or ready as
appropriate.  These represent bugs that have existed since 8.2, since
a failed CREATE INDEX CONCURRENTLY could leave a corrupt or invalid
index behind, and we ought not try to do anything that might fail with
such an index.

Also fix RelationReloadIndexInfo to ensure it copies all the pg_index
columns that are allowed to change after initial creation.  Previously we
could have been left with stale values of some fields in an index relcache
entry.  It's not clear whether this actually had any user-visible
consequences, but it's at least a bug waiting to happen.

In addition, do some code and docs review for DROP INDEX CONCURRENTLY;
some cosmetic code cleanup but mostly addition and revision of comments.

This will need to be back-patched, but in a noticeably different form,
so I'm committing it to HEAD before working on the back-patch.

Problem reported by Amit Kapila, diagnosis by Pavan Deolasee,
fix by Tom Lane and Andres Freund.
2012-11-28 21:26:01 -05:00
Alvaro Herrera
1577b46b7c Split out rmgr rm_desc functions into their own files
This is necessary (but not sufficient) to have them compilable outside
of a backend environment.
2012-11-28 13:01:15 -03:00
Heikki Linnakangas
dd7353dde8 If we don't have a backup-end-location, don't claim we've reached it.
This was apparently a typo, which caused recovery to think that it
immediately reached the end of backup, and allowed the database to start
up too early.

Reported by Jeff Janes. Backpatch to 9.2, where this code was introduced.
2012-11-28 15:14:27 +02:00
Tom Lane
e78d288c89 Add explicit casts in ilist.h's inline functions.
Needed to silence C++ errors, per report from Peter Eisentraut.

Andres Freund
2012-11-27 10:58:37 -05:00
Heikki Linnakangas
1f67078ea3 Add OpenTransientFile, with automatic cleanup at end-of-xact.
Files opened with BasicOpenFile or PathNameOpenFile are not automatically
cleaned up on error. That puts unnecessary burden on callers that only want
to keep the file open for a short time. There is AllocateFile, but that
returns a buffered FILE * stream, which in many cases is not the nicest API
to work with. So add a function called OpenTransientFile, which returns an
unbuffered fd that's cleaned up like the FILE* returned by AllocateFile().

This plugs a few rare fd leaks in error cases:

1. copy_file() - fixed by using OpenTransientFile instead of BasicOpenFile
2. XLogFileInit() - fixed by adding close() calls to the error cases. Can't
   use OpenTransientFile here because the fd is supposed to persist over
   transaction boundaries.
3. lo_import/lo_export - fixed by using OpenTransientFile instead of
   PathNameOpenFile.

In addition to plugging those leaks, this replaces many BasicOpenFile() calls
with OpenTransientFile() that were not leaking, because the code meticulously
closed the file on error. That wasn't strictly necessary, but IMHO it's good
for robustness.

The same leaks exist in older versions, but given the rarity of the issues,
I'm not backpatching this. Not yet, anyway - it might be good to backpatch
later, after this mechanism has had some more testing in master branch.
2012-11-27 10:25:50 +02:00
Tom Lane
532994299e Revert patch for taking fewer snapshots.
This reverts commit d573e239f0, "Take fewer
snapshots".  While that seemed like a good idea at the time, it caused
execution to use a snapshot that had been acquired before locking any of
the tables mentioned in the query.  This created user-visible anomalies
that were not present in any prior release of Postgres, as reported by
Tomas Vondra.  While this whole area could do with a redesign (since there
are related cases that have anomalies anyway), it doesn't seem likely that
any future patch would be reasonably back-patchable; and we don't want 9.2
to exhibit a behavior that's subtly unlike either past or future releases.
Hence, revert to prior code while we rethink the problem.
2012-11-26 15:55:43 -05:00
Tom Lane
d3237e04ca Fix SELECT DISTINCT with index-optimized MIN/MAX on inheritance trees.
In a query such as "SELECT DISTINCT min(x) FROM tab", the DISTINCT is
pretty useless (there being only one output row), but nonetheless it
shouldn't fail.  But it could fail if "tab" is an inheritance parent,
because planagg.c's code for fixing up equivalence classes after making the
index-optimized MIN/MAX transformation wasn't prepared to find child-table
versions of the aggregate expression.  The least ugly fix seems to be
to add an option to mutate_eclass_expressions() to skip child-table
equivalence class members, which aren't used anymore at this stage of
planning so it's not really necessary to fix them.  Since child members
are ignored in many cases already, it seems plausible for
mutate_eclass_expressions() to have an option to ignore them too.

Per bug #7703 from Maxim Boguk.

Back-patch to 9.1.  Although the same code exists before that, it cannot
encounter child-table aggregates AFAICS, because the index optimization
transformation cannot succeed on inheritance trees before 9.1 (for lack
of MergeAppend).
2012-11-26 12:57:58 -05:00
Michael Meskes
c50b8a4637 Applied patch by Chen Huajun <chenhj@cn.fujitsu.com> to make ecpg able to cope
with very long structs.
2012-11-23 14:39:27 +01:00
Tom Lane
455b8887cf Fix pg_resetxlog to use correct path to postmaster.pid.
Since we've already chdir'd into the data directory, the file should
be referenced as just "postmaster.pid", without prefixing the directory
path.  This is harmless in the normal case where an absolute PGDATA path
is used, but quite dangerous if a relative path is specified, since the
program might then fail to notice an active postmaster.

Reported by Hari Babu.  This got broken in my commit
eb5949d190, so patch all active versions.
2012-11-22 11:24:29 -05:00
Heikki Linnakangas
24c19e6bf9 Avoid bogus "out-of-sequence timeline ID" errors in standby-mode.
When the startup process opens a WAL segment after replaying part of it, it
validates the first page of the WAL segment, even though the page it's
really interested in is later in the file. As part of the validation, it checks
that the TLI on the page header is >= the TLI it saw on the last page it
read. If the segment contains a timeline switch, and we have already
replayed it, and then re-open the WAL segment (because streaming
replication got disconnected and reconnected, for example), the TLI check
will fail when the first page is validated. Fix that by relaxing the TLI
check when re-opening a WAL segment.

Backpatch to 9.0. Earlier versions had the same code, but before standby
mode was introduced in 9.0, recovery never tried to re-read a segment after
partially replaying it.

Reported by Amit Kapila, while testing a new feature.
2012-11-22 11:44:44 +02:00
Tom Lane
27b2c6a1ef Don't launch new child processes after we've been told to shut down.
Once we've received a shutdown signal (SIGINT or SIGTERM), we should not
launch any more child processes, even if we get signals requesting such.
The normal code path for spawning backends has always understood that,
but the postmaster's infrastructure for hot standby and autovacuum didn't
get the memo.  As reported by Hari Babu in bug #7643, this could lead to
failure to shut down at all in some cases, such as when SIGINT is received
just before the startup process sends PMSIGNAL_RECOVERY_STARTED: we'd
launch a bgwriter and checkpointer, and then those processes would have no
idea that they ought to quit.  Similarly, launching a new autovacuum worker
would result in waiting till it finished before shutting down.

Also, switch the order of the code blocks in reaper() that detect startup
process crash versus shutdown termination.  Once we've sent it a signal,
we should not consider that exit(1) is surprising.  This is just a cosmetic
fix since shutdown occurs correctly anyway, but better not to log a phony
complaint about startup process crash.

Back-patch to 9.0.  Some parts of this might be applicable before that,
but given the lack of prior complaints I'm not going to worry too much
about older branches.
2012-11-21 15:19:30 -05:00
Heikki Linnakangas
5cb0e33597 Speed up operations on numeric, mostly by avoiding palloc() overhead.
In many functions, a NumericVar was initialized from an input Numeric, to be
passed as input to a calculation function. When the NumericVar is not
modified, the digits array of the NumericVar can point directly to the digits
array in the original Numeric, and we can avoid a palloc() and memcpy(). Add
init_var_from_num() function to initialize a var like that.

Remove dscale argument from get_str_from_var(), as all the callers just
passed the dscale of the variable. That means that the rounding it used to
do was not actually necessary, and get_str_from_var() no longer scribbles on
its input. That makes it safer in general, and allows us to use the new
init_var_from_num() function in e.g numeric_out().

Also modified numericvar_to_int8() to not scribble on its input either. It
creates a temporary copy to avoid that. To compensate, the callers no longer
need to create a temporary copy, so the net # of pallocs is the same, but this
is nicer.

In passing, use a constant for the number 10 in get_str_from_var_sci(),
when calculating 10^exponent. Saves a palloc() and some cycles to convert
integer 10 to numeric.

Original patch by Kyotaro HORIGUCHI, with further changes by me. Reviewed
by Pavel Stehule.
2012-11-21 15:53:35 +02:00
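A standalone C sketch of the aliasing idea above (the struct and names are illustrative, not numeric.c's NumericVar): a read-only variable can point directly at the source digits, and an owned copy is made only when the digits will be modified.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    typedef struct
    {
        int          ndigits;
        const short *digits;    /* may alias the source when read-only */
        short       *buf;       /* owned storage, NULL when aliasing */
    } NumVar;

    static void init_var_readonly(NumVar *var, const short *src, int ndigits)
    {
        var->ndigits = ndigits;
        var->digits = src;      /* no allocation, no copy */
        var->buf = NULL;
    }

    static void init_var_copy(NumVar *var, const short *src, int ndigits)
    {
        var->buf = malloc(sizeof(short) * ndigits);
        memcpy(var->buf, src, sizeof(short) * ndigits);
        var->ndigits = ndigits;
        var->digits = var->buf;
    }

    int main(void)
    {
        short   src[] = {1, 2, 3, 4};
        NumVar  ro, rw;

        init_var_readonly(&ro, src, 4);     /* cheap: for read-only use */
        init_var_copy(&rw, src, 4);         /* only when digits will be changed */
        printf("%d %d\n", ro.digits[0], rw.digits[3]);
        free(rw.buf);
        return 0;
    }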
Tom Lane
1f7cb5c309 Improve handling of INT_MIN / -1 and related cases.
Some platforms throw an exception for this division, rather than returning
a necessarily-overflowed result.  Since we were testing for overflow after
the fact, an exception isn't nice.  We can avoid the problem by treating
division by -1 as negation.

Add some regression tests so that we'll find out if any compilers try to
optimize away the overflow check conditions.

This ought to be back-patched, but I'm going to see what the buildfarm
reports about the regression tests first.

Per discussion with Xi Wang, though this is different from the patch he
submitted.
2012-11-19 12:24:25 -05:00
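A standalone C sketch of the approach described above (not the server's division functions; names are illustrative): treat division by -1 as negation, so overflow is detected before the hardware division instruction can trap.

    #include <limits.h>
    #include <stdio.h>
    #include <stdlib.h>

    static int safe_div32(int arg1, int arg2)
    {
        if (arg2 == -1)
        {
            if (arg1 == INT_MIN)        /* the only unrepresentable quotient */
            {
                fprintf(stderr, "integer out of range\n");
                exit(1);
            }
            return -arg1;               /* division by -1 is just negation */
        }
        if (arg2 == 0)
        {
            fprintf(stderr, "division by zero\n");
            exit(1);
        }
        return arg1 / arg2;
    }

    int main(void)
    {
        printf("%d\n", safe_div32(INT_MIN, 2));     /* ordinary case */
        printf("%d\n", safe_div32(7, -1));          /* negation path: -7 */
        return 0;
    }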
Heikki Linnakangas
644a0a6379 Fix archive_cleanup_command.
When I moved ExecuteRecoveryCommand() from xlog.c to xlogarchive.c, I didn't
realize that it's called from the checkpoint process, not the startup
process. I tried to use InRedo variable to decide whether or not to attempt
cleaning up the archive (must not do so before we have read the initial
checkpoint record), but that variable is only valid within the startup
process.

Instead, let ExecuteRecoveryCommand() always clean up the archive, and add
an explicit argument to RestoreArchivedFile() to say whether that's allowed
or not. The caller knows better.

Reported by Erik Rijkers, diagnosis by Fujii Masao. Only 9.3devel is
affected.
2012-11-19 10:14:20 +02:00
Tom Lane
b6e3798f3a Limit values of archive_timeout, post_auth_delay, auth_delay.milliseconds.
The previous definitions of these GUC variables allowed them to range
up to INT_MAX, but in point of fact the underlying code would suffer
overflows or other errors with large values.  Reduce the maximum values
to something that won't misbehave.  There's no apparent value in working
harder than this, since very large delays aren't sensible for any of
these.  (Note: the risk with archive_timeout is that if we're late
checking the state, the timestamp difference it's being compared to
might overflow.  So we need some amount of slop; the choice of INT_MAX/2
is arbitrary.)

Per followup investigation of bug #7670.  Although this isn't a very
significant fix, might as well back-patch.
2012-11-18 17:15:06 -05:00
Tom Lane
d038966ddb Fix syslogger to not fail when log_rotation_age exceeds 2^31 milliseconds.
We need to avoid calling WaitLatch with timeouts exceeding INT_MAX.
Fortunately a simple clamp will do the trick, since no harm is done if
the wait times out before it's really time to rotate the log file.
Per bug #7670 (probably bug #7545 is the same thing, too).

In passing, fix bogus definition of log_rotation_age's maximum value in
guc.c --- it was numerically right, but only because MINS_PER_HOUR and
SECS_PER_MINUTE have the same value.

Back-patch to 9.2.  Before that, syslogger wasn't using WaitLatch.
2012-11-18 16:16:39 -05:00
Tom Lane
14ddff44c2 Assert that WaitLatch's timeout is not more than INT_MAX milliseconds.
The behavior with larger values is unspecified by the Single Unix Spec.
It appears that BSD-derived kernels report EINVAL, although Linux does not.
If waiting for longer intervals is desired, the calling code has to do
something to limit the delay; we can't portably fix it here since "long"
may not be any wider than "int" in the first place.

Part of response to bug #7670, though this change doesn't fix that
(in fact, it converts the problem from an ERROR into an Assert failure).
No back-patch since it's just an assertion addition.
2012-11-18 15:39:51 -05:00
Tom Lane
1746ba9256 Improve check_partial_indexes() to consider join clauses in proof attempts.
Traditionally check_partial_indexes() has only looked at restriction
clauses while trying to prove partial indexes usable in queries.  However,
join clauses can also be used in some cases; mainly, that a strict operator
on "x" proves an "x IS NOT NULL" index predicate, even if the operator is
in a join clause rather than a restriction clause.  Adding this code fixes
a regression in 9.2, because previously we would take join clauses into
account when considering whether a partial index could be used in a
nestloop inner indexscan path.  9.2 doesn't handle nestloop inner
indexscans in the same way, and this consideration was overlooked in the
rewrite.  Moving the work to check_partial_indexes() is a better solution
anyway, since the proof applies whether or not we actually use the index
in that particular way, and we don't have to do it over again for each
possible outer relation.  Per report from Dave Cramer.
2012-11-15 19:29:05 -05:00
Tom Lane
a235b85a0b Fix the int8 and int2 cases of (minimum possible integer) % (-1).
The correct answer for this (or any other case with arg2 = -1) is zero,
but some machines throw a floating-point exception instead of behaving
sanely.  Commit f9ac414c35 dealt with this
in int4mod, but overlooked the fact that it also happens in int8mod
(at least on my Linux x86_64 machine).  Protect int2mod as well; it's
not clear whether any machines fail there (mine does not) but since the
test is so cheap it seems better safe than sorry.  While at it, simplify
the original guard in int4mod: we need only check for arg2 == -1, we
don't need to check arg1 explicitly.

Xi Wang, with some editing by me.
2012-11-14 17:30:00 -05:00
Bruce Momjian
3bdfd9cb9e Adjust find_status for newer Linux 'nm' output format. 2012-11-13 21:08:07 -05:00
Tom Lane
273986bf0d Fix memory leaks in record_out() and record_send().
record_out() leaks memory: it fails to free the strings returned by the
per-column output functions, and also is careless about detoasted values.
This results in a query-lifespan memory leakage when returning composite
values to the client, because printtup() runs the output functions in the
query-lifespan memory context.  Fix it to handle these issues the same way
printtup() does.  Also fix a similar leakage in record_send().

(At some point we might want to try to run output functions in
shorter-lived memory contexts, so that we don't need a zero-leakage policy
for them.  But that would be a significantly more invasive patch, which
doesn't seem like material for back-patching.)

In passing, use appendStringInfoCharMacro instead of appendStringInfoChar
in the innermost data-copying loop of record_out, to try to shave a few
cycles from this function's runtime.

Per trouble report from Carlos Henrique Reimer.  Back-patch to all
supported versions.
2012-11-13 14:45:26 -05:00
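The real fix works with backend memory contexts; as a loose standalone C analogy (all names hypothetical), the essential point is that per-column strings produced inside the loop have to be released there, or they accumulate for the life of the surrounding context.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Stand-in for a per-column output function; the caller owns the result. */
    static char *column_to_text(int col)
    {
        char   *s = malloc(32);

        snprintf(s, 32, "col%d", col);
        return s;
    }

    int main(void)
    {
        char    record[256] = "(";

        for (int col = 0; col < 3; col++)
        {
            char   *value = column_to_text(col);

            if (col > 0)
                strcat(record, ",");
            strcat(record, value);
            free(value);            /* the fix: release each column's string */
        }
        strcat(record, ")");
        puts(record);
        return 0;
    }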
Simon Riggs
d9fad1076d Skip searching for subxact locks at commit.
At commit all standby locks are released
for the top-level transaction, so searching
for locks for each subtransaction is both
pointless and costly (N^2) in the presence
of many AccessExclusiveLocks.
2012-11-13 16:00:19 -03:00
Simon Riggs
68f7fe140b Clarify docs on hot standby lock release
Andres Freund and Simon Riggs
2012-11-13 15:54:01 -03:00
Tom Lane
3bbf668de9 Fix multiple problems in WAL replay.
Most of the replay functions for WAL record types that modify more than
one page failed to ensure that those pages were locked correctly to ensure
that concurrent queries could not see inconsistent page states.  This is
a hangover from coding decisions made long before Hot Standby was added,
when it was hardly necessary to acquire buffer locks during WAL replay
at all, let alone hold them for carefully-chosen periods.

The key problem was that RestoreBkpBlocks was written to hold lock on each
page restored from a full-page image for only as long as it took to update
that page.  This was guaranteed to break any WAL replay function in which
there was any update-ordering constraint between pages, because even if the
nominal order of the pages is the right one, any mixture of full-page and
non-full-page updates in the same record would result in out-of-order
updates.  Moreover, it wouldn't work for situations where there's a
requirement to maintain lock on one page while updating another.  Failure
to honor an update ordering constraint in this way is thought to be the
cause of bug #7648 from Daniel Farina: what seems to have happened there
is that a btree page being split was rewritten from a full-page image
before the new right sibling page was written, and because lock on the
original page was not maintained it was possible for hot standby queries to
try to traverse the page's right-link to the not-yet-existing sibling page.

To fix, get rid of RestoreBkpBlocks as such, and instead create a new
function RestoreBackupBlock that restores just one full-page image at a
time.  This function can be invoked by WAL replay functions at the points
where they would otherwise perform non-full-page updates; in this way, the
physical order of page updates remains the same no matter which pages are
replaced by full-page images.  We can then further adjust the logic in
individual replay functions if it is necessary to hold buffer locks
for overlapping periods.  A side benefit is that we can simplify the
handling of concurrency conflict resolution by moving that code into the
record-type-specific functions; there's no more need to contort the code
layout to keep conflict resolution in front of the RestoreBkpBlocks call.

In connection with that, standardize on zero-based numbering rather than
one-based numbering for referencing the full-page images.  In HEAD, I
removed the macros XLR_BKP_BLOCK_1 through XLR_BKP_BLOCK_4.  They are
still there in the header files in previous branches, but are no longer
used by the code.

In addition, fix some other bugs identified in the course of making these
changes:

spgRedoAddNode could fail to update the parent downlink at all, if the
parent tuple is in the same page as either the old or new split tuple and
we're not doing a full-page image: it would get fooled by the LSN having
been advanced already.  This would result in permanent index corruption,
not just transient failure of concurrent queries.

Also, ginHeapTupleFastInsert's "merge lists" case failed to mark the old
tail page as a candidate for a full-page image; in the worst case this
could result in torn-page corruption.

heap_xlog_freeze() was inconsistent about using a cleanup lock or plain
exclusive lock: it did the former in the normal path but the latter for a
full-page image.  A plain exclusive lock seems sufficient, so change to
that.

Also, remove gistRedoPageDeleteRecord(), which has been dead code since
VACUUM FULL was rewritten.

Back-patch to 9.0, where hot standby was introduced.  Note however that 9.0
had a significantly different WAL-logging scheme for GIST index updates,
and it doesn't appear possible to make that scheme safe for concurrent hot
standby queries, because it can leave inconsistent states in the index even
between WAL records.  Given the lack of complaints from the field, we won't
work too hard on fixing that branch.
2012-11-12 22:05:53 -05:00
Heikki Linnakangas
dbdf9679d7 Use correct text domain for translating errcontext() messages.
errcontext() is typically used in an error context callback function, not
within an ereport() invocation like e.g errmsg and errdetail are. That means
that the message domain that the TEXTDOMAIN magic in ereport() determines
is not the right one for the errcontext() calls. The message domain needs to
be determined by the C file containing the errcontext() call, not the file
containing the ereport() call.

Fix by turning errcontext() into a macro that passes the TEXTDOMAIN to use
for the errcontext message. "errcontext" was used in a few places as a
variable or struct field name, I had to rename those out of the way, now
that errcontext is a macro.

We've had this problem all along, but it doesn't seem worth
backporting. It's a fairly minor issue, and turning errcontext from a
function to a macro requires at least a recompile of any external code that
calls errcontext().
2012-11-12 17:07:29 +02:00
Heikki Linnakangas
c9d44a75d4 Silence "expression result unused" warnings in AssertVariableIsOfTypeMacro
At least clang 3.1 generates those warnings. Prepend (void) to avoid them,
like we have in AssertMacro.
2012-11-12 15:02:40 +02:00
Tom Lane
34f3b396a6 Check for stack overflow in transformSetOperationTree().
Since transformSetOperationTree() recurses, it can be driven to stack
overflow with enough UNION/INTERSECT/EXCEPT clauses in a query.  Add a
check to ensure it fails cleanly instead of crashing.  Per report from
Matthew Gerber (though it's not clear whether this is the only thing
going wrong for him).

Historical note: I think the reasoning behind not putting a check here in
the beginning was that the check in transformExpr() ought to be sufficient
to guard the whole parser.  However, because transformSetOperationTree()
recurses all the way to the bottom of the set-operation tree before doing
any analysis of the statement's expressions, that check doesn't save it.
2012-11-11 19:56:10 -05:00
Alvaro Herrera
fa12cb7f02 Remove leftover LWLockRelease() call
This code was refactored in d5497b95 but an extra LWLockRelease call was
left behind.

Per report from Erik Rijkers
2012-11-09 10:19:34 -03:00
Tom Lane
3e7fdcffd6 Fix WaitLatch() to return promptly when the requested timeout expires.
If the sleep is interrupted by a signal, we must recompute the remaining
time to wait; otherwise, a steady stream of non-wait-terminating interrupts
could delay return from WaitLatch indefinitely.  This has been shown to be
a problem for the autovacuum launcher, and there may well be other places
now or in the future with similar issues.  So we'd better make the function
robust, even though this'll add at least one gettimeofday call per wait.

Back-patch to 9.2.  We might eventually need to fix 9.1 as well, but the
code is quite different there, and the usage of WaitLatch in 9.1 is so
limited that it's not clearly important to do so.

Reported and diagnosed by Jeff Janes, though I rewrote his patch rather
heavily.
2012-11-08 20:04:48 -05:00
Tom Lane
dcc55dd21a Rename ResolveNew() to ReplaceVarsFromTargetList(), and tweak its API.
This function currently lacks the option to throw error if the provided
targetlist doesn't have any matching entry for a Var to be replaced.
Two of the four existing call sites would be better off with an error,
as would the usage in the pending auto-updatable-views patch, so it seems
past time to extend the API to support that.  To do so, replace the "event"
parameter (historically of type CmdType, though it was declared plain int)
with a special-purpose enum type.

It's unclear whether this function might be called by third-party code.
Since many C compilers wouldn't warn about a call site continuing to use
the old calling convention, rename the function to forcibly break any
such code that hasn't been updated.  The old name was none too well chosen
anyhow.
2012-11-08 16:52:49 -05:00
Tom Lane
75af5ae9c0 Don't trash input list structure in does_not_exist_skipping().
The trigger and rule cases need to split up the input name list, but
they mustn't corrupt the passed-in data structure, since it could be part
of a cached utility-statement parsetree.  Per bug #7641.
2012-11-08 11:34:32 -05:00
Heikki Linnakangas
a9dad56441 Teach pg_basebackup and pg_receivexlog to reply to server keepalives.
Without this, the connection will be killed after timeout if
wal_sender_timeout is set in the server.

Original patch by Amit Kapila, modified by me to fit recent changes in the
code.
2012-11-08 10:28:52 +02:00
Tom Lane
9e45e03886 Fix missing inclusions.
Some platforms require including <netinet/in.h> and/or <arpa/inet.h> to
use htonl() and ntohl().  Per build failure locally.
2012-11-07 18:06:03 -05:00
Bruce Momjian
aa69670e42 Add URLs to document why DLLIMPORT is needed on Windows.
Per email from Craig Ringer
2012-11-07 15:01:25 -05:00
Alvaro Herrera
4ee5c40b06 Don't try to use an unopened relation
Commit 4c9d0901 mistakenly introduced a call to
TransferPredicateLocksToHeapRelation() on an index relation that had
been closed a few lines above.  Moving up an index_open() call that's
below is enough to fix the problem.

Discovered by me while testing an unrelated patch.
2012-11-07 16:23:39 -03:00
Heikki Linnakangas
add6c3179a Make the streaming replication protocol messages architecture-independent.
We used to send structs wrapped in CopyData messages, which works as long as
the client and server agree on things like endianness, timestamp format and
alignment. That's good enough for running a standby server, which has to run
on the same platform anyway, but it's useful for tools like pg_receivexlog
to work across platforms.

This breaks protocol compatibility of streaming replication, but we never
promised it would be compatible across versions anyway.
2012-11-07 19:09:13 +02:00
Tom Lane
5ed6546cf7 Fix handling of inherited check constraints in ALTER COLUMN TYPE.
This case got broken in 8.4 by the addition of an error check that
complains if ALTER TABLE ONLY is used on a table that has children.
We do use ONLY for this situation, but it's okay because the necessary
recursion occurs at a higher level.  So we need to have a separate
flag to suppress recursion without making the error check.

Reported and patched by Pavan Deolasee, with some editorial adjustments by
me.  Back-patch to 8.4, since this is a regression of functionality that
worked in earlier branches.
2012-11-05 13:36:16 -05:00
Tom Lane
ef28e05ac5 Fix bogus handling of $(X) (i.e., ".exe") in isolationtester Makefile.
I'm not sure why commit 1eb1dde049 seems
to have made this start to fail on Cygwin when it never did before ---
but nonetheless, the coding was pretty bogus, and different from the way we
handle $(X) everywhere else.  Per buildfarm.
2012-11-01 19:48:53 -04:00
Tom Lane
19e36477b3 Limit the number of rel sets considered in consider_index_join_outer_rels.
In bug #7626, Brian Dunavant exposes a performance problem created by
commit 3b8968f252: that commit attempted to
consider *all* possible combinations of indexable join clauses, but if said
clauses join to enough different relations, there's an exponential increase
in the number of outer-relation sets considered.

In Brian's example, all the clauses come from the same equivalence class,
which means it's redundant to use more than one of them in an indexscan
anyway.  So we can prevent the problem in this class of cases (which is
probably the majority of real examples) by rejecting combinations that
would only serve to add a known-redundant clause.

But that still leaves us exposed to exponential growth of planning time
when the query has a lot of non-equivalence join clauses that are usable
with the same index.  I chose to prevent such cases by setting an upper
limit on the number of relation sets considered, equal to ten times the
number of index clauses considered so far.  (This sliding limit still
allows new relsets to be added on as we move to additional index columns,
which is probably more important than considering even more combinations of
clauses for the previous column.)  This should keep the amount of work done
roughly linear rather than exponential in the apparent query complexity.
This part of the fix is pretty ad-hoc; but without a clearer idea of
real-world cases for which this would result in markedly inferior plans,
it's hard to see how to do better.
2012-11-01 14:08:42 -04:00
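
A minimal sketch of the sliding cap described above, assuming a simple counter-based check; the function and variable names are illustrative, not the planner's actual data structures:

    #include <stdio.h>

    /* Stop generating new outer-relation sets once their count exceeds ten
     * times the number of index clauses considered so far, keeping the work
     * roughly linear in the apparent query complexity. */
    static int
    may_consider_relset(int nrelsets, int nclauses_considered)
    {
        if (nrelsets >= 10 * nclauses_considered)
            return 0;               /* limit reached; skip this combination */
        return 1;                   /* ok to consider another combination */
    }

    int
    main(void)
    {
        int relsets = 0, clauses = 3;

        while (may_consider_relset(relsets, clauses))
            relsets++;
        printf("considered %d relation sets for %d clauses\n", relsets, clauses);
        return 0;
    }
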
Peter Eisentraut
1eb1dde049 Have make never delete intermediate files automatically
Several hacks in certain modes already thought this was a bad idea, so
just disable it globally.
2012-10-31 23:33:35 -04:00
Alvaro Herrera
2f1692d213 Fix erroneous choice of timeline variable, too 2012-10-31 17:05:55 -03:00
Alvaro Herrera
9b8dd7e8aa Fix erroneous choices of segNo variables
Commit dfda6eba (which changed segment numbers to use a single 64 bit
variable instead of log/seg) introduced a couple of bogus choices of
exactly which log segment number variable to use in each case.

This is currently pretty harmless; in one place, the bogus number was
only being used in an error message for a pretty unlikely condition
(failure to fsync a WAL segment file).  In the other, it was using a
global variable instead of the local variable; but all callsites were
passing the value of the global variable anyway.

No need to backpatch because that commit is not on earlier branches.
2012-10-31 11:05:28 -03:00
Alvaro Herrera
04f28bdb84 Fix ALTER EXTENSION / SET SCHEMA
In its original conception, it was leaving some objects in the old
schema, but without their proper pg_depend entries; this meant that the
old schema could be dropped, causing future pg_dump calls to fail on the
affected database.  This was originally reported by Jeff Frost as #6704;
there have been other complaints elsewhere that can probably be traced
to this bug.

To fix, be more consistent about altering a table's subsidiary objects
along with the table itself; this requires some restructuring in how tables
are relocated when altering an extension -- hence the new
AlterTableNamespaceInternal routine which encapsulates it for both the
ALTER TABLE and the ALTER EXTENSION cases.

There was another bug lurking here, which was unmasked after fixing the
previous one: certain objects would be reached twice via the dependency
graph, and the second attempt to move them would cause the entire
operation to fail.  Per discussion, it seems the best fix for this is to
do more careful tracking of objects already moved: we now maintain a
list of moved objects, to avoid attempting to do it twice for the same
object.

Authors: Alvaro Herrera, Dimitri Fontaine
Reviewed by Tom Lane
2012-10-31 10:52:55 -03:00
Peter Eisentraut
4af3dda136 Preserve intermediate .c files in coverage mode
The introduction of the .y -> .c pattern rule causes some .c files such
as bootparse.c to be considered intermediate files in the .y -> .c -> .o
rule chain, which make would automatically delete.  But in coverage
mode, the processing tools such as genhtml need those files, so mark
them as "precious" so that make preserves them.
2012-10-28 10:35:46 -04:00
Kevin Grittner
6868ed7491 Throw error if expiring tuple is again updated or deleted.
This prevents surprising behavior when a FOR EACH ROW trigger
BEFORE UPDATE or BEFORE DELETE directly or indirectly updates or
deletes the the old row.  Prior to this patch the requested action
on the row could be silently ignored while all triggered actions
based on the occurence of the requested action could be committed.
One example of how this could happen is if the BEFORE DELETE
trigger for a "parent" row deleted "children" which had trigger
functions to update summary or status data on the parent.

This also prevents similar surprising problems if the query has a
volatile function which updates a target row while it is already
being updated.

There are related issues present in FOR UPDATE cursors and READ
COMMITTED queries which are not handled by this patch.  These
issues need further evaluation to determine what change, if any, is
needed.

Where the new error messages are generated, in most cases the best
fix will be to move code from the BEFORE trigger to an AFTER
trigger.  Where this is not feasible, the trigger can avoid the
error by re-issuing the triggering statement and returning NULL.

Documentation changes will be submitted in a separate patch.

Kevin Grittner and Tom Lane with input from Florian Pflug and
Robert Haas, based on problems encountered during conversion of
Wisconsin Circuit Court trigger logic to plpgsql triggers.
2012-10-26 14:55:36 -05:00
Tom Lane
17804fa71b Prefer actual constants to pseudo-constants in equivalence class machinery.
generate_base_implied_equalities_const() should prefer plain Consts over
other em_is_const eclass members when choosing the "pivot" value that
all the other members will be equated to.  This makes it more likely that
the generated equalities will be useful in constraint-exclusion proofs.
Per report from Rushabh Lathia.
2012-10-26 14:19:34 -04:00
Tom Lane
5a39114fe7 In pg_dump, dump SEQUENCE SET items in the data not pre-data section.
Represent a sequence's current value as a separate TableDataInfo dumpable
object, so that it can be dumped within the data section of the archive
rather than in pre-data.  This fixes an undesirable inconsistency between
the meanings of "--data-only" and "--section=data", and also fixes dumping
of sequences that are marked as extension configuration tables, as per a
report from Marko Kreen back in July.  The main cost is that we do one more
SQL query per sequence, but that's probably not very meaningful in most
databases.

Back-patch to 9.1, since it has the extension configuration issue even
though not the --section switch.
2012-10-26 12:12:42 -04:00
Tom Lane
bf01e34b55 Tweak genericcostestimate's fudge factor for index size.
To provide some bias against using a large index when a small one would do
as well, genericcostestimate adds a "fudge factor", which for a long time
was random_page_cost * index_pages/10000.  However, this can grow to be the
dominant term in indexscan cost estimates when the index involved is large
enough, a behavior that was never intended.  Change to a ln(1 + n/10000)
formulation, which has nearly the same behavior up to a few hundred pages
but tails off significantly thereafter.  (A log curve seems correct on
first principles, since what we're trying to account for here is index
descent costs, which are typically logarithmic.)  Per bug #7619 from Niko
Kiirala.

Possibly this change should get back-patched, but I'm hesitant to mess with
cost estimates in stable branches.
2012-10-24 16:25:40 -04:00
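
The shape of the change is easy to see numerically. A small standalone comparison of the old linear fudge factor and the new logarithmic one, using the constants quoted in the message (taking random_page_cost as its default 4.0 is an assumption here):

    #include <math.h>
    #include <stdio.h>

    int
    main(void)
    {
        const double random_page_cost = 4.0;
        const double sizes[] = {100, 1000, 100000, 10000000};
        int i;

        for (i = 0; i < 4; i++)
        {
            double pages = sizes[i];
            double old_fudge = random_page_cost * pages / 10000.0;
            double new_fudge = log(1.0 + pages / 10000.0);

            /* The two formulas track each other up to a few hundred pages,
             * then the logarithmic one tails off instead of dominating. */
            printf("%10.0f pages: old=%10.2f  new=%6.2f\n",
                   pages, old_fudge, new_fudge);
        }
        return 0;
    }
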
Tom Lane
a4e8680a6c When converting a table to a view, remove its system columns.
Views should not have any pg_attribute entries for system columns.
However, we forgot to remove such entries when converting a table to a
view.  This could lead to crashes later on, if someone attempted to
reference such a column, as reported by Kohei KaiGai.

Patch in HEAD only.  This bug has been there forever, but in the back
branches we will have to defend against existing mis-converted views,
so it doesn't seem worthwhile to change the conversion code too.
2012-10-24 13:39:37 -04:00
Alvaro Herrera
f4c4335a4a Add context info to OAT_POST_CREATE security hook
... and have sepgsql use it to determine whether to check permissions
during certain operations.  Indexes that are being created as a result
of REINDEX, for instance, do not need to have their permissions checked;
they were already checked when the index was created.

Author: KaiGai Kohei, slightly revised by me
2012-10-23 18:24:24 -03:00
Kevin Grittner
4c9d0901f1 Correct predicate locking for DROP INDEX CONCURRENTLY.
For the non-concurrent case there is an AccessExclusiveLock lock
on both the index and the heap at a time during which no other
process is using either, before which the index is maintained and
used for scans, and after which the index is no longer used or
maintained.  Predicate locks can safely be moved from the index to
the related heap relation under the protection of these locks.
This was done prior to the introduction of DROP INDEX CONCURRENTLY
and continues to be done for non-concurrent index drops.

For concurrent index drops, the predicate locks must be moved when
there are no index scans in progress on that index and no more can
subsequently start, and before heap inserts stop maintaining the
index.  As long as these conditions are guaranteed when the
TransferPredicateLocksToHeapRelation() function is called,
stronger locks are not needed for correctness.

Kevin Grittner based on questions by Tom Lane in reviewing the
DROP INDEX CONCURRENTLY patch and in cooperation with Andres
Freund and Simon Riggs.
2012-10-21 16:35:42 -05:00
Tom Lane
edef20f6e1 Fix pg_dump's handling of DROP DATABASE commands in --clean mode.
In commit 4317e0246c, I accidentally broke
this behavior while rearranging code to ensure that --create wouldn't
affect whether a DATABASE entry gets put into archive-format output.
Thus, 9.2 would issue a DROP DATABASE command in --clean mode, which is
either useless or dangerous depending on the usage scenario.
It should not do that, and no longer does.

A bright spot is that this refactoring makes it easy to allow the
combination of --clean and --create to work sensibly, ie, emit DROP
DATABASE then CREATE DATABASE before reconnecting.  Ordinarily we'd
consider that a feature addition and not back-patch it, but it seems
silly to not include the extra couple of lines required in the 9.2
version of the code.

Per report from Guillaume Lelarge, though this is slightly more extensive
than his proposed patch.
2012-10-20 16:58:32 -04:00
Tom Lane
5d1abe64e6 Fix UtilityContainsQuery() to handle CREATE TABLE AS EXECUTE correctly.
The code seems to have been written to handle the pre-parse-analysis
representation, where an ExecuteStmt would appear directly under
CreateTableAsStmt.  But in reality the function is only run on
already-parse-analyzed statements, so there will be a Query node in
between.  We'd not noticed the bug because the function is generally
not used at all except in extended query protocol.

Per report from Robert Haas and Rushabh Lathia.
2012-10-19 18:33:45 -04:00
Tom Lane
4e32f8cd14 Fix hash_search to avoid corruption of the hash table on out-of-memory.
An out-of-memory error during expand_table() on a palloc-based hash table
would leave a partially-initialized entry in the table.  This would not be
harmful for transient hash tables, since they'd get thrown away anyway at
transaction abort.  But for long-lived hash tables, such as the relcache
hash, this would effectively corrupt the table, leading to crash or other
misbehavior later.

To fix, rearrange the order of operations so that table enlargement is
attempted before we insert a new entry, rather than after adding it
to the hash table.

Problem discovered by Hitoshi Harada, though this is a bit different
from his proposed patch.
2012-10-19 15:24:03 -04:00
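
A minimal sketch of the reordering, with stand-ins for dynahash's internals; the helper names are illustrative only:

    #include <stdbool.h>
    #include <stdio.h>
    #include <stdlib.h>

    typedef struct Entry
    {
        int          key;
        struct Entry *next;
    } Entry;

    /* Stand-in for table enlargement: may fail under memory pressure. */
    static bool
    try_expand_table(void)
    {
        void *bigger = malloc(1024);    /* pretend this is the new bucket array */

        if (bigger == NULL)
            return false;
        free(bigger);
        return true;
    }

    static Entry *
    insert_entry(Entry **bucket, int key)
    {
        Entry *e;

        /*
         * Order matters: attempt the (possibly failing) enlargement before
         * creating and linking the new entry.  Inserting first and expanding
         * afterwards is what allowed an out-of-memory failure to leave a
         * partially initialized entry in the table.
         */
        if (!try_expand_table())
            return NULL;                /* clean failure, table untouched */

        e = malloc(sizeof(Entry));
        if (e == NULL)
            return NULL;
        e->key = key;
        e->next = *bucket;
        *bucket = e;
        return e;
    }

    int
    main(void)
    {
        Entry *bucket = NULL;

        if (insert_entry(&bucket, 42) != NULL)
            puts("inserted");
        return 0;
    }
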
Tom Lane
0d6895051a Fix ruleutils to print "INSERT INTO foo DEFAULT VALUES" correctly.
Per bug #7615 from Marko Tiikkaja.  Apparently nobody ever tried this
case before ...
2012-10-19 13:39:51 -04:00
Simon Riggs
da85727565 Fix orphan on cancel of drop index concurrently.
Canceling DROP INDEX CONCURRENTLY during the wait could allow an orphaned
index to be left behind which could not be dropped.

Backpatch to 9.2

Andres Freund, tested by Abhijit Menon-Sen
2012-10-19 09:56:29 +01:00
Tom Lane
002191a1a3 Further cleanup of catcache.c ilist changes.
Remove useless duplicate initialization of bucket headers, don't use a
dlist_mutable_iter in a performance-critical path that doesn't need it,
make some other cosmetic changes for consistency's sake.
2012-10-18 19:30:43 -04:00
Tom Lane
dc5aeca168 Remove unnecessary "head" arguments from some dlist/slist functions.
dlist_delete, dlist_insert_after, dlist_insert_before, slist_insert_after
do not need access to the list header, and indeed insisting on that negates
one of the main advantages of a doubly-linked list.

In consequence, revert addition of "cache_bucket" field to CatCTup.
2012-10-18 19:04:20 -04:00
Tom Lane
8f8d746478 Code review for inline-list patch.
Make foreach macros less syntactically dangerous, and fix some typos in
evidently-never-tested ones.  Add missing slist_next_node and
slist_head_node functions.  Fix broken dlist_check code.  Assorted comment
improvements.
2012-10-18 16:47:07 -04:00
Heikki Linnakangas
2a49585e2b Further tweaking of the readfile() function in pg_ctl.
Don't leak a file descriptor if the file is empty or we can't read its size.

Expect there to be a newline at the end of the last line, too. If there
isn't, ignore anything after the last newline. This makes it a tiny bit
more robust in case the file is appended to concurrently, so that we don't
return the last line if it hasn't been fully written yet. And this makes
the code a bit less obscure, anyway. Per Tom Lane's suggestion.

Backpatch to all supported branches.
2012-10-18 22:26:26 +03:00
Simon Riggs
160984c8c8 Isolation test for DROP INDEX CONCURRENTLY
covering the recent concurrency changes.

Abhijit Menon-Sen
2012-10-18 19:41:40 +01:00
Simon Riggs
2f0e480d02 Re-think guts of DROP INDEX CONCURRENTLY.
Concurrent behaviour was flawed when using
a two-step process, so add an additional
phase of processing to ensure concurrency
for both SELECTs and INSERT/UPDATE/DELETEs.

Backpatch to 9.2

Andres Freund, tweaked by me
2012-10-18 18:58:30 +01:00
Tom Lane
72a4231f0c Fix planning of non-strict equivalence clauses above outer joins.
If a potential equivalence clause references a variable from the nullable
side of an outer join, the planner needs to take care that derived clauses
are not pushed to below the outer join; else they may use the wrong value
for the variable.  (The problem arises only with non-strict clauses, since
if an upper clause can be proven strict then the outer join will get
simplified to a plain join.)  The planner attempted to prevent this type
of error by checking that potential equivalence clauses aren't
outerjoin-delayed as a whole, but actually we have to check each side
separately, since the two sides of the clause will get moved around
separately if it's treated as an equivalence.  Bugs of this type can be
demonstrated as far back as 7.4, even though releases before 8.3 had only
a very ad-hoc notion of equivalence clauses.

In addition, we neglected to account for the possibility that such clauses
might have nonempty nullable_relids even when not outerjoin-delayed; so the
equivalence-class machinery lacked logic to compute correct nullable_relids
values for clauses it constructs.  This oversight was harmless before 9.2
because we were only using RestrictInfo.nullable_relids for OR clauses;
but as of 9.2 it could result in pushing constructed equivalence clauses
to incorrect places.  (This accounts for bug #7604 from Bill MacArthur.)

Fix the first problem by adding a new test check_equivalence_delay() in
distribute_qual_to_rels, and fix the second one by adding code in
equivclass.c and called functions to set correct nullable_relids for
generated clauses.  Although I believe the second part of this is not
currently necessary before 9.2, I chose to back-patch it anyway, partly to
keep the logic similar across branches and partly because it seems possible
we might find other reasons why we need valid values of nullable_relids in
the older branches.

Add regression tests illustrating these problems.  In 9.0 and up, also
add test cases checking that we can push constants through outer joins,
since we've broken that optimization before and I nearly broke it again
with an overly simplistic patch for this problem.
2012-10-18 12:30:10 -04:00
Alvaro Herrera
7b583b20b1 pg_dump: Output functions deterministically sorted
Implementation idea from Tom Lane

Author: Joel Jacobson
Reviewed by Joachim Wieland
2012-10-18 12:23:27 -03:00
Simon Riggs
5ad72cee7e Revert tests for drop index concurrently. 2012-10-18 15:27:12 +01:00
Simon Riggs
4e206744dc Add isolation tests for DROP INDEX CONCURRENTLY.
Backpatch to 9.2 to ensure bugs are fixed.

Abhijit Menon-Sen
2012-10-18 13:37:09 +01:00
Tom Lane
ff3f9c8de5 Close un-owned SMgrRelations at transaction end.
If an SMgrRelation is not "owned" by a relcache entry, don't allow it to
live past transaction end.  This design allows the same SMgrRelation to be
used for blind writes of multiple blocks during a transaction, but ensures
that we don't hold onto such an SMgrRelation indefinitely.  Because an
SMgrRelation typically corresponds to open file descriptors at the fd.c
level, leaving it open when there's no corresponding relcache entry can
mean that we prevent the kernel from reclaiming deleted disk space.
(While CacheInvalidateSmgr messages usually fix that, there are cases
where they're not issued, such as DROP DATABASE.  We might want to add
some more sinval messaging for that, but I'd be inclined to keep this
type of logic anyway, since allowing VFDs to accumulate indefinitely
for blind-written relations doesn't seem like a good idea.)

This code replaces a previous attempt towards the same goal that proved
to be unreliable.  Back-patch to 9.1 where the previous patch was added.
2012-10-17 12:38:21 -04:00
Tom Lane
9bacf0e373 Revert "Use "transient" files for blind writes, take 2".
This reverts commit fba105b109.
That approach had problems with the smgr-level state not tracking what
we really want to happen, and with the VFD-level state not tracking the
smgr-level state very well either.  In consequence, it was still possible
to hold kernel file descriptors open for long-gone tables (as in recent
report from Tore Halset), and yet there were also cases of FDs being closed
undesirably soon.  A replacement implementation will follow.
2012-10-17 12:37:08 -04:00
Alvaro Herrera
a66ee69add Embedded list interface
Provide a common implementation of embedded singly-linked and
doubly-linked lists.  "Embedded" in the sense that the nodes'
next/previous pointers exist within some larger struct; this design
choice reduces memory allocation overhead.

Most of the implementation uses inlineable functions (where supported),
for performance.

Some existing uses of both types of lists have been converted to the new
code, for demonstration purposes.  Other uses can (and probably will) be
converted in the future.  Since dllist.c is unused after this conversion,
it has been removed.

Author: Andres Freund
Some tweaks by me
Reviewed by Tom Lane, Peter Geoghegan
2012-10-17 11:31:20 -03:00
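
The idea of an embedded list is easy to illustrate standalone. The real interface added by this commit lives in the backend's ilist code; the simplified names and macro below are stand-ins, not that API:

    #include <stddef.h>
    #include <stdio.h>

    /* The node lives inside the payload struct, so no separate list cell
     * needs to be allocated. */
    typedef struct list_node
    {
        struct list_node *prev;
        struct list_node *next;
    } list_node;

    typedef struct
    {
        list_node head;             /* circular, with the head as sentinel */
    } list_head;

    #define container_of(ptr, type, member) \
        ((type *) ((char *) (ptr) - offsetof(type, member)))

    static void
    list_init(list_head *h)
    {
        h->head.prev = h->head.next = &h->head;
    }

    static void
    list_push(list_head *h, list_node *n)
    {
        n->next = h->head.next;
        n->prev = &h->head;
        h->head.next->prev = n;
        h->head.next = n;
    }

    typedef struct
    {
        int         value;
        list_node   node;           /* embedded; no extra allocation */
    } Item;

    int
    main(void)
    {
        list_head   h;
        Item        a = {1, {NULL, NULL}};
        Item        b = {2, {NULL, NULL}};
        list_node  *cur;

        list_init(&h);
        list_push(&h, &a.node);
        list_push(&h, &b.node);

        for (cur = h.head.next; cur != &h.head; cur = cur->next)
            printf("%d\n", container_of(cur, Item, node)->value);
        return 0;
    }
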
Bruce Momjian
22cc3b35f4 When outputting the session id in log_line_prefix (%c) or in CSV log
output mode, zero-pad the hex digits after the period so that they are
always at least four digits long.
2012-10-16 12:37:59 -04:00
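
A small illustration of the padding change, assuming the session id is printed as two hex fields separated by a period, as the message describes; the example values are made up:

    #include <stdio.h>

    int
    main(void)
    {
        long start_part = 0x5073c593;   /* example first hex field */
        int  second_part = 0x2a;        /* example field after the period */

        printf("unpadded: %lx.%x\n", start_part, second_part);   /* 5073c593.2a   */
        printf("padded:   %lx.%04x\n", start_part, second_part); /* 5073c593.002a */
        return 0;
    }
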
Tom Lane
b72bd3d1c6 alter_generic regression test cannot run concurrently with privileges test.
... because the latter plays games with the privileges for language SQL.
It looks like running alter_generic in parallel with "misc" is OK though.

Also, adjust serial_schedule to maintain the same test ordering (up to
parallelism) as parallel_schedule.
2012-10-15 12:18:52 -04:00
Heikki Linnakangas
7d3ed5ae78 Fix typo in comment.
Fujii Masao
2012-10-15 13:01:31 +03:00
Heikki Linnakangas
ff6c78c480 Remove comment that is no longer true.
AddToDataDirLockFile() supports out-of-order updates of the lockfile
nowadays.
2012-10-15 11:03:39 +03:00
Heikki Linnakangas
5c89684e08 Fix race condition in pg_ctl reading postmaster.pid.
If postmaster changed postmaster.pid while pg_ctl was reading it, pg_ctl
could overrun the buffer it allocated for the file. Fix by reading the
whole file to memory with one read() call.

initdb contains an identical copy of the readfile() function, but the files
that initdb reads are static, not modified concurrently. Nevertheless, add
a simple bounds-check there, if only to silence static analysis tools.

Per report from Dave Vitek. Backpatch to all supported branches.
2012-10-15 10:36:32 +03:00
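
A minimal sketch of the safer reading pattern described above (size the buffer from fstat(), then fill it with a single read()); the function name and file handling details are illustrative, not pg_ctl's actual code:

    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/stat.h>
    #include <unistd.h>

    static char *
    slurp_file(const char *path)
    {
        int         fd;
        struct stat st;
        char       *buf;
        ssize_t     nread;

        fd = open(path, O_RDONLY);
        if (fd < 0)
            return NULL;
        if (fstat(fd, &st) < 0)
        {
            close(fd);              /* don't leak the descriptor on failure */
            return NULL;
        }
        buf = malloc(st.st_size + 1);
        if (buf == NULL)
        {
            close(fd);
            return NULL;
        }
        /* One read() for the whole file: a concurrent rewrite cannot make us
         * overrun the buffer, since we never read more than we allocated. */
        nread = read(fd, buf, st.st_size);
        close(fd);
        if (nread < 0)
        {
            free(buf);
            return NULL;
        }
        buf[nread] = '\0';          /* use the actual byte count, not st_size */
        return buf;
    }

    int
    main(void)
    {
        char *contents = slurp_file("postmaster.pid");

        if (contents != NULL)
        {
            fputs(contents, stdout);
            free(contents);
        }
        return 0;
    }
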
Tom Lane
e81e8f9342 Split up process latch initialization for more-fail-soft behavior.
In the previous coding, new backend processes would attempt to create their
self-pipe during the OwnLatch call in InitProcess.  However, pipe creation
could fail if the kernel is short of resources; and the system does not
recover gracefully from a FATAL error right there, since we have armed the
dead-man switch for this process and not yet set up the on_shmem_exit
callback that would disarm it.  The postmaster then forces an unnecessary
database-wide crash and restart, as reported by Sean Chittenden.

There are various ways we could rearrange the code to fix this, but the
simplest and sanest seems to be to split out creation of the self-pipe into
a new function InitializeLatchSupport, which must be called from a place
where failure is allowed.  For most processes that gets called in
InitProcess or InitAuxiliaryProcess, but processes that don't call either
but still use latches need their own calls.

Back-patch to 9.1, which has only a part of the latch logic that 9.2 and
HEAD have, but nonetheless includes this bug.
2012-10-14 22:59:56 -04:00
Tom Lane
8b728e5c6e Fix oversight in new code for printing rangetable aliases.
In commit 11e131854f, I missed the case of
a CTE RTE that doesn't have a user-defined alias, but does have an
alias assigned by set_rtable_names().  Per report from Peter Eisentraut.

While at it, refactor slightly to reduce code duplication.
2012-10-12 16:14:43 -04:00
Bruce Momjian
49ec613201 In our source code, make a copy of getopt's 'optarg' string arguments,
rather than just storing a pointer.
2012-10-12 13:35:43 -04:00
Tom Lane
a29f7ed554 Get rid of COERCE_DONTCARE.
We don't need this hack any more.
2012-10-12 13:35:00 -04:00
Tom Lane
71e58dcfb9 Make equal() ignore CoercionForm fields for better planning with casts.
This change ensures that the planner will see implicit and explicit casts
as equivalent for all purposes, except in the minority of cases where
there's actually a semantic difference (as reflected by having a 3-argument
cast function).  In particular, this fixes cases where the EquivalenceClass
machinery failed to consider two references to a varchar column as
equivalent if one was implicitly cast to text but the other was explicitly
cast to text, as seen in bug #7598 from Vaclav Juza.  We have had similar
bugs before in other parts of the planner, so I think it's time to fix this
problem at the core instead of continuing to band-aid around it.

Remove set_coercionform_dontcare(), which represents the band-aid
previously in use for allowing matching of index and constraint expressions
with inconsistent cast labeling.  (We can probably get rid of
COERCE_DONTCARE altogether, but I don't think removing that enum value in
back branches would be wise; it's possible there's third party code
referring to it.)

Back-patch to 9.2.  We could go back further, and might want to once this
has been tested more; but for the moment I won't risk destabilizing plan
choices in long-since-stable branches.
2012-10-12 12:11:22 -04:00
Andrew Dunstan
e583ffe947 Unbreak MSVC builds after recent Makefile refactoring.
Based on a suggestion by Peter Eisentraut.
2012-10-11 12:36:42 -04:00
Tom Lane
4816d2ea32 Fix cross-type case in partial row matching for hashed subplans.
When hashing a subplan like "WHERE (a, b) NOT IN (SELECT x, y FROM ...)",
findPartialMatch() attempted to match rows using the hashtable's internal
equality operators, which of course are for x and y's datatypes.  What we
need to use are the potentially cross-type operators for a=x, b=y, etc.
Failure to do that leads to wrong answers or even crashes.  The scope for
problems is limited to cases where we have different types with compatible
hash functions (else we'd not be using a hashed subplan), but for example
int4 vs int8 can cause the problem.

Per bug #7597 from Bo Jensen.  This has been wrong since the hashed-subplan
code was written, so patch all the way back.
2012-10-11 12:22:13 -04:00
Heikki Linnakangas
6f60fdd701 Improve replication connection timeouts.
Rename replication_timeout to wal_sender_timeout, and add a new setting
called wal_receiver_timeout that does the same at the walreceiver side.
There was previously no timeout in walreceiver, so if the network went down,
for example, the walreceiver could take a long time to notice that the
connection was lost. Now with the two settings, both sides of a replication
connection will detect a broken connection similarly.

It is no longer necessary to manually set wal_receiver_status_interval to
a value smaller than the timeout. Both wal sender and receiver now
automatically send a "ping" message if more than 1/2 of the configured
timeout has elapsed, and it hasn't received any messages from the other end.

Amit Kapila, heavily edited by me.
2012-10-11 17:48:08 +03:00
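
A minimal sketch of the keepalive rule described above (ping after half the timeout, give up after the full timeout); the names and millisecond bookkeeping are illustrative:

    #include <stdbool.h>
    #include <stdio.h>

    typedef enum { KEEP_WAITING, SEND_PING, CONNECTION_DEAD } KeepaliveAction;

    /* Decide what to do based on how long it has been since the last message
     * from the other end of the replication connection. */
    static KeepaliveAction
    check_keepalive(long now_ms, long last_reply_ms, long timeout_ms,
                    bool ping_sent)
    {
        if (now_ms - last_reply_ms >= timeout_ms)
            return CONNECTION_DEAD;
        if (!ping_sent && now_ms - last_reply_ms >= timeout_ms / 2)
            return SEND_PING;
        return KEEP_WAITING;
    }

    int
    main(void)
    {
        long timeout = 60000;       /* e.g. wal_sender_timeout = 60s */

        printf("%d\n", check_keepalive(10000, 0, timeout, false)); /* 0: wait */
        printf("%d\n", check_keepalive(35000, 0, timeout, false)); /* 1: ping */
        printf("%d\n", check_keepalive(61000, 0, timeout, true));  /* 2: dead */
        return 0;
    }
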
Peter Eisentraut
8521d13194 Refactor flex and bison make rules
Numerous flex and bison make rules have appeared in the source tree
over time, and they are all virtually identical, so we can replace
them by pattern rules with some variables for customization.

Users of pgxs will also be able to benefit from this.
2012-10-11 06:57:04 -04:00
Peter Eisentraut
ab112068b6 Remove _FORTIFY_SOURCE
Apparently, on some glibc versions this causes warnings when
optimization is not enabled.

Altogether, there appear to be too many incompatibilities surrounding
this.
2012-10-10 21:42:38 -04:00
Tom Lane
864db11683 Update obsolete comment.
We no longer use GetNewOidWithIndex on pg_largeobject; rather,
pg_largeobject_metadata's regular OID column is considered the repository
of OIDs for large objects.  The special functionality is still needed for
TOAST tables however.
2012-10-10 17:04:37 -04:00
Tom Lane
a80889a735 Set procost to 10 for each of the pg_foo_is_visible() functions.
The idea here is to make sure the planner will evaluate these functions
last, not first, among the filter conditions in psql pattern search and
tab-completion queries.  We've discussed this several times, and there
was consensus to do it back in August, but we didn't want to do it just
before a release.  Now seems like a safer time.

No catversion bump, since this catalog change doesn't create a backend
incompatibility nor any regression test result changes.
2012-10-10 12:19:25 -04:00
Tom Lane
3f88fa971a Fix PGXS support for building loadable modules on AIX.
Building a shlib on AIX requires use of the mkldexport.sh script, but we
failed to install that, preventing its use from non-source-tree contexts.
Also, Makefile.aix had the wrong idea about where to find the installed
copy of the postgres.imp symbol file used by AIX.

Per report from John Pierce.  Patch all the way back, since this has been
broken since the beginning of PGXS.
2012-10-09 21:04:06 -04:00
Tom Lane
7e0cce0265 Remove unnecessary overhead in backend's large-object operations.
Do read/write permissions checks at most once per large object descriptor,
not once per lo_read or lo_write call as before.  The repeated tests were
quite useless in the read case since the snapshot-based tests were
guaranteed to produce the same answer every time.  In the write case,
the extra tests could in principle detect revocation of write privileges
after a series of writes has started --- but there's a race condition there
anyway, since we'd check privileges before performing and certainly before
committing the write.  So there's no real advantage to checking every
single time, and we might as well redefine it as "only check the first
time".

On the same reasoning, remove the LargeObjectExists checks in inv_write
and inv_truncate.  We already checked existence when the descriptor was
opened, and checking again doesn't provide any real increment of safety
that would justify the cost.
2012-10-09 16:38:00 -04:00
Heikki Linnakangas
2d8c81ac86 Fix silly bug in previous refactoring.
I extracted the refactoring patch from a larger patch that contained other
changes too, but missed one unintentional change and didn't test enough...
2012-10-09 19:33:12 +03:00
Heikki Linnakangas
ff8f160bf4 Put the logic to wait for WAL in standby mode to a separate function.
This is just refactoring with no user-visible effect, to make the code more
readable.
2012-10-09 19:20:17 +03:00
Alvaro Herrera
f46baf601d Rename USE_INLINE to PG_USE_INLINE
The former name was too likely to conflict with symbols from external
headers; and, as seen in recent buildfarm failures in member spoonbill,
it has now happened at least in plpython.
2012-10-09 11:17:33 -03:00
Heikki Linnakangas
0b77aebabf Remove stray newline in comment. 2012-10-09 13:06:48 +03:00
Tom Lane
bc433317ae Fix lo_import and lo_export to return useful error messages more often.
I found that these functions tend to return -1 while leaving an empty error
message string in the PGconn, if they suffer some kind of I/O error on the
file.  The reason is that lo_close, which thinks it's executed a perfectly
fine SQL command, clears the errorMessage.  The minimum-change workaround
is to reorder operations here so that we don't fill the errorMessage until
after lo_close.
2012-10-08 21:52:34 -04:00
Tom Lane
f52c5165e1 Fix lo_export usage in example programs.
lo_export returns -1, not zero, on failure.
2012-10-08 21:19:54 -04:00
Tom Lane
0e924c007d Fix lo_read, lo_write, lo_truncate to cope with "size_t" length parameters.
libpq defines these functions as accepting "size_t" lengths ... but the
underlying backend functions expect signed int32 length parameters, and so
will misinterpret any value exceeding INT_MAX.  Fix the libpq side to throw
error rather than possibly doing something unexpected.

This is a bug of long standing, but I doubt it's worth back-patching.  The
problem is really pretty academic anyway with lo_read/lo_write, since any
caller expecting sane behavior would have to have provided a multi-gigabyte
buffer.  It's slightly more pressing with lo_truncate, but still we haven't
supported large objects over 2GB until now.
2012-10-08 21:19:53 -04:00
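
A minimal sketch of the client-side guard, assuming the wire protocol carries a signed 32-bit length as the message states; the function name is illustrative:

    #include <limits.h>
    #include <stddef.h>
    #include <stdio.h>

    /* Reject size_t lengths that cannot be represented in a signed 32-bit
     * field, rather than letting them be misinterpreted on the wire. */
    static int
    check_lo_length(size_t len)
    {
        if (len > (size_t) INT_MAX)
        {
            fprintf(stderr, "length argument %zu exceeds integer range\n", len);
            return -1;
        }
        return 0;
    }

    int
    main(void)
    {
        printf("%d\n", check_lo_length(1024));
        printf("%d\n", check_lo_length((size_t) INT_MAX + 1));
        return 0;
    }
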
Peter Eisentraut
b6d4522296 Remove generation of repl_gram.h
It was apparently never necessary.
2012-10-08 20:36:46 -04:00
Tom Lane
26fe56481c Code review for 64-bit-large-object patch.
Fix broken-on-bigendian-machines byte-swapping functions, add missed update
of alternate regression expected file, improve error reporting, remove some
unnecessary code, sync testlo64.c with current testlo.c (it seems to have
been cloned from a very old copy of that), assorted cosmetic improvements.
2012-10-08 18:24:32 -04:00
Alvaro Herrera
878daf2e72 Fix thinko in previous commit
Since postgres.h includes palloc.h, definitions that affect the latter
must be present before the former is included.

Per buildfarm results
2012-10-08 18:33:08 -03:00
Alvaro Herrera
976fa10d20 Add support for easily declaring static inline functions
We already had those, but they forced modules to spell out the function
bodies twice.  Eliminate some duplicates we had already grown.

Extracted from a somewhat larger patch from Andres Freund.
2012-10-08 16:28:01 -03:00
Robert Haas
08c8058ce9 Add #define for UUIDOID.
Phil Sorber and Thom Brown. Reviewed by Albe Laurenz.
2012-10-08 10:15:15 -04:00
Heikki Linnakangas
b28cc92d7d Say ANALYZE, not VACUUM, in error message on analyze in hot standby.
Tomonaru Katsumata
2012-10-08 14:17:27 +03:00
Heikki Linnakangas
9c0e2b9182 Fix walsender handling of postmaster shutdown, to not go into endless loop.
This bug was introduced by my patch to use the regular die/quickdie signal
handlers in walsender processes. I tried to make walsender exit at next
CHECK_FOR_INTERRUPTS() by setting ProcDiePending, but that's not enough; you
need to set InterruptPending too. On second thought, it was not a very good
way to make walsender exit anyway, so use proc_exit(0) instead.

Also, send a CommandComplete message before exiting; that's what we did
before, and you get a nicer error message in the standby that way.

Reported by Thom Brown.
2012-10-08 13:32:14 +03:00
Tom Lane
95d035e66d Autoconfiscate selection of 64-bit int type for 64-bit large object API.
Get rid of the fundamentally indefensible assumption that "long long int"
exists and is exactly 64 bits wide on every platform Postgres runs on.
Instead let the configure script select the type to use for "pg_int64".

This is a bit of a pain in the rear since we do not want to pollute client
namespace with all the random symbols that pg_config.h defines; instead
we have to create a separate generated header file, "pg_config_ext.h".
But now that the infrastructure is there, we might have the ability to
add some other stuff that's long been wanting in this area.
2012-10-07 21:52:43 -04:00
Andrew Dunstan
ea72bb8ae5 Fix typo in previous MSC commit. 2012-10-07 19:56:26 -04:00
Andrew Dunstan
33a7101281 Quiet a few MSC compiler warnings. 2012-10-07 17:31:10 -04:00
Tatsuo Ishii
7e2f8ed2b0 Fix compilation errors on the Windows platform. Fix wrong usage of the
INT64CONST macro. Change lo_hton64 and lo_ntoh64 so they do not use int32_t
and uint32_t.
2012-10-07 23:30:31 +09:00
Tatsuo Ishii
b51a65f5bf Bump up catalog version due to 64-bit large object API functions
addition.
2012-10-07 09:36:20 +09:00