Commit Graph

14565 Commits

Author SHA1 Message Date
Peter Eisentraut
79af9a1d26 Fix namespace handling in xpath function
Previously, the xml value resulting from an xpath query would not have
namespace declarations if the namespace declarations were attached to
an ancestor element in the input xml value.  That meant the output value
was not correct XML.  Fix that by running the result value through
xmlCopyNode(), which produces the correct namespace declarations.
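
As a rough illustration (not the actual patch), libxml2's xmlCopyNode()
with a recursive copy materializes inherited namespace declarations on the
copied node; the helper below is hypothetical:

    #include <libxml/tree.h>

    /* Hypothetical helper: copy an xpath result node so that namespace
     * declarations inherited from ancestor elements end up declared on
     * the copy itself.  The second argument (1) requests a recursive
     * copy of children and properties. */
    static xmlNodePtr
    copy_xpath_result_node(xmlNodePtr node)
    {
        return xmlCopyNode(node, 1);
    }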

Author: Ali Akbar <the.apaan@gmail.com>
2015-01-06 23:06:13 -05:00
Andres Freund
3fabed0705 Correctly handle relcache invalidation corner case during logical decoding.
When using a historic snapshot for logical decoding it can validly
happen that a relation that's in the relcache isn't visible to that
historic snapshot.  E.g. if a newly created relation is referenced in
the query that uses the SQL interface for logical decoding and a
sinval reset occurs.

The earlier commit that fixed the error handling for that corner case
already improves the situation, as an ERROR is better than hitting an
assertion... But it's obviously not good enough.  So additionally
allow that case without an error if a historic snapshot is set up.
That won't allow an invalid entry to stay in the cache, because a) it's
already marked invalid and will thus be rebuilt during the next access,
and b) the syscaches will be reset at the end of decoding.
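
A minimal sketch of that idea (not the literal relcache.c change),
assuming the existing HistoricSnapshotActive() check is what detects that
logical decoding is in progress:

    /* In the relcache rebuild path, when the catalog row for the
     * relation cannot be found: tolerate it during logical decoding
     * instead of erroring out; the entry is already marked invalid
     * and will be rebuilt on the next access. */
    if (!HeapTupleIsValid(pg_class_tuple))
    {
        if (HistoricSnapshotActive())
            return;
        elog(ERROR, "could not find pg_class tuple for relation %u", relid);
    }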

There might be prettier solutions to handle this case, but all that we
could think of so far end up being much more complex than this quite
simple fix.

This fixes the assertion failures reported by the buildfarm (markhor,
tick, leech) after the introduction of new regression tests in
89fd41b390. The failures there weren't actually directly caused by
CLOBBER_CACHE_ALWAYS, but the extraordinarily long runtimes due to it
led to sinval resets triggering the behaviour.

Discussion: 22459.1418656530@sss.pgh.pa.us

Backpatch to 9.4 where logical decoding was introduced.
2015-01-07 00:19:37 +01:00
Andres Freund
31912d01d8 Improve relcache invalidation handling of currently invisible relations.
The corner case where a relcache invalidation tried to rebuild the
entry for a referenced relation but couldn't find it in the catalog
wasn't handled correctly.

The code tried to RelationCacheDelete/RelationDestroyRelation the
entry. That didn't work when assertions are enabled because the latter
contains an assertion ensuring the refcount is zero. It's also more
generally a bad idea, because by virtue of being referenced somebody
might actually look at the entry, which is possible if the error is
trapped and handled via a subtransaction abort.

Instead just error out, without deleting the entry. As the entry is
marked invalid, the worst that can happen is that the invalid (and at
some point unused) entry lingers in the relcache.

Discussion: 22459.1418656530@sss.pgh.pa.us

There should be no way to hit this case before 9.4, where logical
decoding introduced a bug that can hit it. But since the code for
handling the corner case is there, it should do something halfway
sane, so backpatch all the way back.  The logical decoding bug will be
handled in a separate commit.
2015-01-07 00:18:00 +01:00
Bruce Momjian
4baaf863ec Update copyright for 2015
Backpatch certain files through 9.0
2015-01-06 11:43:47 -05:00
Fujii Masao
9f1d7313aa Fix typo in comment.
Report by Amit Kapila
2015-01-05 16:35:26 +09:00
Alvaro Herrera
d5e3d1e969 Fix thinko in lock mode enum
Commit 0e5680f473 contained a thinko
mixing LOCKMODE with LockTupleMode.  This caused misbehavior in the case
where a tuple is marked with a multixact with at most a FOR SHARE lock,
and another transaction tries to acquire a FOR NO KEY EXCLUSIVE lock;
this case should block but doesn't.

Include a new isolation tester spec file to explicitly try all the
tuple lock combinations; without the fix it shows the problem:

    starting permutation: s1_begin s1_lcksvpt s1_tuplock2 s2_tuplock3 s1_commit
    step s1_begin: BEGIN;
    step s1_lcksvpt: SELECT * FROM multixact_conflict FOR KEY SHARE; SAVEPOINT foo;
    a

    1
    step s1_tuplock2: SELECT * FROM multixact_conflict FOR SHARE;
    a

    1
    step s2_tuplock3: SELECT * FROM multixact_conflict FOR NO KEY UPDATE;
    a

    1
    step s1_commit: COMMIT;

With the fixed code, step s2_tuplock3 blocks until session 1 commits,
which is the correct behavior.

All other cases behave correctly.

Backpatch to 9.3, like the commit that introduced the problem.
2015-01-04 15:48:29 -03:00
Andres Freund
2ea95959af Add error handling for failing fstat() calls in copy.c.
These calls are pretty much guaranteed not to fail unless something
has gone horribly wrong, and even in that case we'd just error out a
short time later.  But since several code checkers complain about the
missing check, it seems worthwhile to fix it nonetheless.
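
The added check looks roughly like this (the error wording is
illustrative, not necessarily the committed text):

    struct stat st;

    /* Report fstat() failure instead of silently ignoring it. */
    if (fstat(fileno(cstate->copy_file), &st) != 0)
        ereport(ERROR,
                (errcode_for_file_access(),
                 errmsg("could not stat file \"%s\": %m",
                        cstate->filename)));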

Pointed out by Coverity.
2015-01-04 16:47:23 +01:00
Andres Freund
14570c2828 Remove superfluous variable from xlogreader's XLogFindNextRecord().
Pointed out by Coverity.

Since this is merely, and debatably, cosmetic, I'm not backpatching
this.
2015-01-04 15:35:46 +01:00
Andres Freund
2c0a485896 Prevent WAL files created by pg_basebackup -x/X from being archived again.
WAL (and timeline history) files created by pg_basebackup did not
maintain the new base backup's archive status. That's currently not a
problem if the new node is used as a standby - but if that node is
promoted, all still-existing files can get archived again.  With a high
wal_keep_segments setting that can happen a significant time later -
which is quite confusing.

Change both the backend (for the -x/-X fetch case) and pg_basebackup
(for -X stream) itself to always mark WAL/timeline files included in
the base backup as .done. That's in line with walreceiver.c doing so.

The verbosity of the pg_basebackup changes shows pretty clearly that it
needs some refactoring, but that'd result in changes that wouldn't be
backpatchable.

Backpatch to 9.1 where pg_basebackup was introduced.

Discussion: 20141205002854.GE21964@awork2.anarazel.de
2015-01-03 20:54:12 +01:00
Andres Freund
ccb161b66a Add pg_string_endswith as the start of a string helper library in src/common.
Backpatch to 9.3, where src/common was introduced, because a bugfix
that needs to be backpatched requires the function. Earlier branches
will have to duplicate the code.
2015-01-03 20:54:12 +01:00
Tom Lane
d6657d2a10 Treat negative values of recovery_min_apply_delay as having no effect.
At one point in the development of this feature, it was claimed that
allowing negative values would be useful to compensate for timezone
differences between master and slave servers.  That was based on a mistaken
assumption that commit timestamps are recorded in local time; but of course
they're in UTC.  Nor is a negative apply delay likely to be a sane way of
coping with server clock skew.  However, the committed patch still treated
negative delays as doing something, and the timezone misapprehension
survived in the user documentation as well.

If recovery_min_apply_delay were a proper GUC we'd just set the minimum
allowed value to be zero; but for the moment it seems better to treat
negative settings as if they were zero.

In passing do some extra wordsmithing on the parameter's documentation,
including correcting a second misstatement that the parameter affects
processing of Restore Point records.

Issue noted by Michael Paquier, who also provided the code patch; doc
changes by me.  Back-patch to 9.4 where the feature was introduced.
2015-01-03 13:14:03 -05:00
Tom Lane
a486841eb1 Print more information about getObjectIdentityParts() failures.
This might help us debug what's happening on some buildfarm members.

In passing, reduce the message from ereport to elog --- it doesn't seem
like this should be a user-facing case, so not worth translating.
2014-12-31 14:44:43 -05:00
Alvaro Herrera
ba66c9d068 Add missing pstrdup calls
The one for the OCLASS_COLLATION case was noticed by
CLOBBER_CACHE_ALWAYS buildfarm members; the others I spotted by manual
code inspection.

Also remove a redundant check.
2014-12-31 13:19:40 -03:00
Alvaro Herrera
72dd233d3e pg_event_trigger_dropped_objects: Add name/args output columns
These columns can be passed to pg_get_object_address() and used to
reconstruct the dropped objects' identities in a remote server containing
similar objects, so that the drop can be replicated.

Reviewed by Stephen Frost, Heikki Linnakangas, Abhijit Menon-Sen, Andres
Freund.
2014-12-30 17:41:46 -03:00
Alvaro Herrera
a676201490 Add pg_identify_object_as_address
This function returns object type and objname/objargs arrays, which can
be passed to pg_get_object_address.  This is especially useful because
the textual representation can be copied to a remote server in order to
obtain the corresponding OID-based address.  In essence, this function
is the inverse of recently added pg_get_object_address().

Catalog version bumped due to the addition of the new function.

Also add docs to pg_get_object_address.
2014-12-30 15:41:50 -03:00
Alvaro Herrera
3f88672a4e Use TypeName to represent type names in certain commands
In COMMENT, DROP, SECURITY LABEL, and the new pg_get_object_address
function, we were representing types as a list of names, same as other
objects; but types are special objects that require their own
representation to be totally accurate.  In the original COMMENT code we
had a note about fixing it which was lost in the course of c10575ff00.
Change all those places to use TypeName instead, as suggested by that
comment.

Right now the original coding doesn't cause any bugs, so no backpatch.
It is more problematic for proposed future code that operates with object
addresses from the SQL interface; type details such as array-ness are
lost when working with the degraded representation.

Thanks to Petr Jelínek and Dimitri Fontaine for offlist help on finding
a solution to a shift/reduce grammar conflict.
2014-12-30 13:57:23 -03:00
Tom Lane
9a11df1449 Remove duplicate assignment in new pg_get_object_address() function.
Noted by Coverity.
2014-12-28 12:03:32 -05:00
Alvaro Herrera
6630420fc9 Restrict name list len for domain constraints
This avoids an ugly-looking "cache lookup failure" message.

Ugliness pointed out by Andres Freund.
2014-12-26 14:31:37 -03:00
Alvaro Herrera
0e5680f473 Grab heavyweight tuple lock only before sleeping
We were trying to acquire the lock even when we were subsequently
not sleeping in some other transaction, which opens us up unnecessarily
to deadlocks.  In particular, this is troublesome if an update tries to
lock an updated version of a tuple and finds itself doing EvalPlanQual
update chain walking; more than two sessions doing this concurrently
will find themselves sleeping on each other because the HW tuple lock
acquisition in heap_lock_tuple called from EvalPlanQualFetch races with
the same tuple lock being acquired in heap_update -- one of these
sessions sleeps on the other one to finish while holding the tuple lock,
and the other one sleeps on the tuple lock.

Per trouble report from Andrew Sackville-West in
http://www.postgresql.org/message-id/20140731233051.GN17765@andrew-ThinkPad-X230

His scenario can be simplified down to a relatively simple
isolationtester spec file which I don't include in this commit; the
reason is that the current isolationtester is not able to deal with more
than one blocked session concurrently and it blocks instead of raising
the expected deadlock.  In the future, if we improve isolationtester, it
would be good to include the spec file in the isolation schedule.  I
posted it in
http://www.postgresql.org/message-id/20141212205254.GC1768@alvh.no-ip.org

Hat tip to Mark Kirkwood, who helped diagnose the trouble.
2014-12-26 13:52:27 -03:00
Andres Freund
740a4ec7f4 Blindly fix a dtrace probe in lwlock.c for a removed local variable.
Per buildfarm member locust.
2014-12-25 19:48:46 +01:00
Tom Lane
966115c305 Temporarily revert "Move pg_lzcompress.c to src/common."
This reverts commit 60838df922.
That change needs a bit more thought to be workable.  In view of
the potentially machine-dependent stuff that went in today,
we need all of the buildfarm to be testing those other changes.
2014-12-25 13:22:55 -05:00
Andres Freund
d72731a704 Lockless StrategyGetBuffer clock sweep hot path.
StrategyGetBuffer() has proven to be a bottleneck in a number of
buffer acquisition heavy workloads. To some degree this has already
been alleviated by 5d7962c6, but it still can be quite a heavy
bottleneck.  The problem is that in unfortunate usage patterns a
single StrategyGetBuffer() call will have to look at a large number of
buffers - in turn making it likely that the process will be put to
sleep while still holding the spinlock.

Replace most of the usage of the buffer_strategy_lock spinlock for the
clock sweep by an atomic nextVictimBuffer variable. That variable,
modulo NBuffers, is the current hand of the clock sweep. The buffer
clock-sweep then only needs to acquire the spinlock after a
wraparound. And even then only in the process that did the wrapping
around. That alleviates nearly all the contention on the relevant
spinlock, although significant contention on the cacheline can still
exist.
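
A heavily simplified sketch of the lock-free hot path; GetBufferDescriptor()
here just stands for mapping the counter to a buffer slot, and the real
wraparound handling is more involved:

    /* Advance the clock hand without taking buffer_strategy_lock. */
    uint32      counter;
    BufferDesc *buf;

    counter = pg_atomic_fetch_add_u32(&StrategyControl->nextVictimBuffer, 1);
    buf = GetBufferDescriptor(counter % NBuffers);

    /* Only the process whose increment wrapped past NBuffers still takes
     * the spinlock, to keep completePasses and related counters sane. */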

Reviewed-By: Robert Haas and Amit Kapila

Discussion: 20141010160020.GG6670@alap3.anarazel.de,
    20141027133218.GA2639@awork2.anarazel.de
2014-12-25 18:26:25 +01:00
Andres Freund
ab5194e6f6 Improve LWLock scalability.
The old LWLock implementation had the problem that concurrent lock
acquisitions required exclusively acquiring a spinlock. Often that
could lead to acquirers waiting behind the spinlock, even if the
actual LWLock was free.

The new implementation doesn't acquire the spinlock when acquiring the
lock itself. Instead the new atomic operations are used to atomically
manipulate the state. Only the waitqueue, used solely in the slow
path, is still protected by the spinlock. Check lwlock.c's header for
an explanation about the used algorithm.

For some common workloads on larger machines this can yield
significant performance improvements, particularly in read-mostly
workloads.

Reviewed-By: Amit Kapila and Robert Haas
Author: Andres Freund

Discussion: 20130926225545.GB26663@awork2.anarazel.de
2014-12-25 17:24:30 +01:00
Andres Freund
7882c3b0b9 Convert the PGPROC->lwWaitLink list into a dlist instead of open coding it.
Besides being shorter and much easier to read, it changes the logic in
LWLockRelease() to release all shared lockers when waking up any. This
can yield some significant performance improvements - and the fairness
isn't really much worse than before, as we always allowed new shared
lockers to jump the queue.
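
A generic illustration of the dlist API from lib/ilist.h that the wait
list now uses (not the literal lwlock.c code):

    dlist_head  waiters;
    dlist_mutable_iter iter;

    dlist_init(&waiters);
    dlist_push_tail(&waiters, &proc->lwWaitLink);       /* enqueue a waiter */

    dlist_foreach_modify(iter, &waiters)                /* wake them up */
    {
        PGPROC *waiter = dlist_container(PGPROC, lwWaitLink, iter.cur);

        dlist_delete(&waiter->lwWaitLink);
        /* ... clear lwWaiting and unlock the waiter's semaphore ... */
    }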
2014-12-25 17:24:30 +01:00
Andres Freund
570bd2b3fd Add capability to suppress CONTEXT: messages to elog machinery.
Hiding context messages usually is not a good idea - except for rather
verbose debugging/development utensils like LOCK_DEBUG. There the
amount of repeated context messages just bloats the log without adding
information.
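
Assuming the new capability is the errhidecontext() auxiliary function,
usage in such debug output might look like:

    ereport(LOG,
            (errmsg("debug message emitted very frequently"),
             errhidecontext(true)));    /* suppress the CONTEXT: line */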
2014-12-25 17:24:30 +01:00
Fujii Masao
4a5593197b Remove duplicate include of slot.h.
Back-patch to 9.4, where this problem was added.
2014-12-25 22:47:53 +09:00
Fujii Masao
60838df922 Move pg_lzcompress.c to src/common.
Exposing compression and decompression APIs of pglz makes possible its
use by extensions and contrib modules. pglz_decompress contained a call
to elog to emit an error message in case of corrupted data. This function
is changed to return a status code to let its callers return an error instead.

This commit is required for the upcoming WAL compression feature so that
the WAL reader facility can decompress the WAL data by using pglz_decompress.

Michael Paquier
2014-12-25 20:46:14 +09:00
Fujii Masao
3b6ca123b5 Remove unused fields from ReindexStmt.
fe263d1 changed the REINDEX logic so that those fields are not used at all,
but forgot to remove them.

Sawada Masahiko
2014-12-24 21:40:47 +09:00
Andres Freund
cd5ebe1edd Suppress MSVC warning in typeStringToTypeName function.
MSVC doesn't realize ereport(ERROR) doesn't return.

David Rowley
2014-12-24 12:30:08 +01:00
Alvaro Herrera
a609d96778 Revert "Use a bitmask to represent role attributes"
This reverts commit 1826987a46.

The overall design was deemed unacceptable, in discussion following the
previous commit message; we might find some parts of it still
salvageable, but I don't want to be on the hook for fixing it, so let's
wait until we have a new patch.
2014-12-23 15:35:49 -03:00
Alvaro Herrera
d7ee82e50f Add SQL-callable pg_get_object_address
This allows access to get_object_address from SQL, which is useful to
obtain OID addressing information from data equivalent to that emitted
by the parser.  This is necessary infrastructure for a project to let
replication systems propagate object dropping events to remote servers,
where the schema might be different from that of the server originating
the DROP.

This patch also adds support for OBJECT_DEFAULT to get_object_address;
that is, it is now possible to refer to a column's default value.

Catalog version bumped due to the new function.

Reviewed by Stephen Frost, Heikki Linnakangas, Robert Haas, Andres
Freund, Abhijit Menon-Sen, Adam Brightwell.
2014-12-23 15:31:29 -03:00
Alvaro Herrera
1826987a46 Use a bitmask to represent role attributes
The previous representation using a boolean column for each attribute
would not scale well as we add further attributes.

Extra auxiliary functions are added to go along with this change, to
make up for the convenient access that the old representation provided.

Catalog version bumped due to change in catalogs and the new functions.

Author: Adam Brightwell, minor tweaks by Álvaro
Reviewed by: Stephen Frost, Andres Freund, Álvaro Herrera
2014-12-23 10:22:09 -03:00
Alvaro Herrera
7eca575d1c get_object_address: separate domain constraints from table constraints
Apart from enabling comments on domain constraints, this enables a
future project to replicate object dropping to remote servers: with the
current mechanism there's no way to distinguish between the two types of
constraints, so there's no way to know what to drop.

Also added support for the domain constraint comments in psql's \dd and
pg_dump.

Catalog version bumped due to the change in ObjectType enum.
2014-12-23 09:06:44 -03:00
Peter Eisentraut
584e35d17c Change local_preload_libraries to PGC_USERSET
This allows it to be used with ALTER ROLE SET.

Although the old setting of PGC_BACKEND prevented changes after session
start, after discussion it was more useful to allow ALTER ROLE SET
instead and just document that changes during a session have no effect.
This is similar to how session_preload_libraries works already.

An alternative would be to change things to allow PGC_BACKEND and
PGC_SU_BACKEND settings to be changed by ALTER ROLE SET.  But that might
need further research (e.g., log_connections would probably not work).

based on patch by Kyotaro Horiguchi
2014-12-22 23:05:46 -05:00
Heikki Linnakangas
955557ddcc Move rbtree.c from src/backend/utils/misc to src/backend/lib.
We have other general-purpose data structures in src/backend/lib, so it
seems like a better home for the red-black tree as well.
2014-12-22 17:52:08 +02:00
Heikki Linnakangas
e7032610f7 Use a pairing heap for the priority queue in kNN-GiST searches.
This performs slightly better, uses less memory, and needs slightly
less code in GiST than the Red-Black tree previously used.
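
For reference, the pairing-heap interface (lib/pairingheap.h) is used
roughly as below; the comparator and item type are hypothetical:

    /* MyItem embeds a pairingheap_node as its first field, so the node
     * returned by the heap can simply be cast back to the item. */
    typedef struct MyItem
    {
        pairingheap_node ph_node;
        double           distance;
    } MyItem;

    pairingheap *queue = pairingheap_allocate(my_item_cmp, NULL);

    pairingheap_add(queue, &item->ph_node);             /* push */

    while (!pairingheap_is_empty(queue))
    {
        MyItem *next = (MyItem *) pairingheap_remove_first(queue);
        /* ... visit next ... */
    }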

Reviewed by Peter Geoghegan
2014-12-22 12:05:57 +02:00
Heikki Linnakangas
2ef6c66a2b Fix file descriptor leak at end of recovery.
XLogFileInit() returns a file descriptor, which needs to be closed. The leak
was short-lived, since the startup process exits shortly afterwards, but it
was clearly a bug, nevertheless.

Per Coverity report.
2014-12-21 21:51:59 +02:00
Alvaro Herrera
0ee98d1cbf pg_event_trigger_dropped_objects: add behavior flags
Add "normal" and "original" flags as output columns to the
pg_event_trigger_dropped_objects() function.  With this it's possible to
distinguish which objects, among those listed, need to be explicitly
referenced when trying to replicate a deletion.

This is necessary so that the list of objects can be pruned to the
minimum necessary to replicate the DROP command in a remote server that
might have slightly different schema (for instance, TOAST tables and
constraints with different names and such.)

Catalog version bumped due to change of function definition.

Reviewed by: Abhijit Menon-Sen, Stephen Frost, Heikki Linnakangas,
Robert Haas.
2014-12-19 15:00:45 -03:00
Heikki Linnakangas
5c805d0a81 Fix timestamp in end-of-recovery WAL records.
We used time(NULL) to set a TimestampTz field, which gave bogus results.
Noticed while looking at pg_xlogdump output.

Backpatch to 9.3 and above, where the fast promotion was introduced.
2014-12-19 17:04:20 +02:00
Andres Freund
37de8de9e3 Prevent potentially hazardous compiler/cpu reordering during lwlock release.
In LWLockRelease() (and in 9.4+ LWLockUpdateVar()) we release enqueued
waiters using PGSemaphoreUnlock(). As there are other sources of such
unlocks, backends only wake up if MyProc->lwWaiting is set to false,
which is only done in the aforementioned functions.

Before this commit there were dangers because the store to lwWaiting
could become visible before the store to lwWaitLink. This could happen
both due to compiler reordering (on most compilers) and, on some
platforms, due to the CPU reordering stores.

The possible consequence of this is that a backend stops waiting
before lwWaitLink is set to NULL. If that backend then tries to
acquire another lock and has to wait there the list could become
corrupted once the lwWaitLink store is finally performed.

Add a write memory barrier to prevent that issue.
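
A simplified sketch of the release-side ordering with the new barrier
(not the exact patch):

    proc->lwWaitLink = NULL;        /* unlink the waiter first */
    pg_write_barrier();             /* make the unlink visible ... */
    proc->lwWaiting = false;        /* ... before the flag the waiter tests */
    PGSemaphoreUnlock(&proc->sem);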

Unfortunately the barrier support was only added in 9.2. Given that
the issue has not knowingly been observed in practice, it seems
sufficient to prohibit compiler reordering using volatile for 9.0 and
9.1. Actual problems due to compiler reordering are more likely
anyway.

Discussion: 20140210134625.GA15246@awork2.anarazel.de
2014-12-19 14:29:52 +01:00
Alvaro Herrera
cd6e66572b Use %u to print out BlockNumber variables
Per Tom Lane
2014-12-18 17:59:00 -03:00
Alvaro Herrera
35192f0626 Have VACUUM log number of skipped pages due to pins
Author: Jim Nasby, some kibitzing by Heikki Linnakangas.
Discussion leading to current behavior and precise wording fueled by
thoughts from Robert Haas and Andres Freund.
2014-12-18 17:18:33 -03:00
Tom Lane
4a14f13a0a Improve hash_create's API for selecting simple-binary-key hash functions.
Previously, if you wanted anything besides C-string hash keys, you had to
specify a custom hashing function to hash_create().  Nearly all such
callers were specifying tag_hash or oid_hash, which is tedious and rather
error-prone, since a caller could easily miss the opportunity to optimize
by using hash_uint32 when appropriate.  Replace this with a design whereby
callers using simple binary-data keys just specify HASH_BLOBS and don't
need to mess with specific support functions.  hash_create() itself will
take care of optimizing when the key size is four bytes.
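
A minimal sketch of the new calling convention; MyEntry is a hypothetical
entry type whose key must be its first field:

    typedef struct MyEntry
    {
        Oid         key;            /* 4-byte binary key; must come first */
        int         count;
    } MyEntry;

    HASHCTL     ctl;
    HTAB       *htab;

    memset(&ctl, 0, sizeof(ctl));
    ctl.keysize = sizeof(Oid);
    ctl.entrysize = sizeof(MyEntry);

    /* HASH_BLOBS: the key is raw binary data; hash_create() picks
     * hash_uint32 for 4-byte keys and tag_hash otherwise. */
    htab = hash_create("my entry table", 128, &ctl, HASH_ELEM | HASH_BLOBS);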

This nets out saving a few hundred bytes of code space, and offers
a measurable performance improvement in tidbitmap.c (which was not
exploiting the opportunity to use hash_uint32 for its 4-byte keys).
There might be some wins elsewhere too, I didn't analyze closely.

In future we could look into offering a similar optimized hashing function
for 8-byte keys.  Under this design that could be done in a centralized
and machine-independent fashion, whereas getting it right for keys of
platform-dependent sizes would've been notationally painful before.

For the moment, the old way still works fine, so as not to break source
code compatibility for loadable modules.  Eventually we might want to
remove tag_hash and friends from the exported API altogether, since there's
no real need for them to be explicitly referenced from outside dynahash.c.

Teodor Sigaev and Tom Lane
2014-12-18 13:36:36 -05:00
Heikki Linnakangas
ba94518aad Change how first WAL segment on new timeline after promotion is created.
Two changes:

1. When copying a WAL segment from old timeline to create the first segment
on the new timeline, only copy up to the point where the timeline switch
happens, and zero-fill the rest. This avoids corner cases where we might
think that the copied WAL from the previous timeline belong to the new
timeline.

2. If the timeline switch happens at a segment boundary, don't copy the
whole old segment to the new timeline. It's pointless, because it's 100%
identical to the old segment.
2014-12-18 20:23:03 +02:00
Fujii Masao
38628db8d8 Add memory barriers for PgBackendStatus.st_changecount protocol.
The st_changecount protocol needs memory barriers to ensure that
the apparent order of execution is as it desires. Otherwise,
for example, the CPU might rearrange the code so that st_changecount
is incremented twice before the modification on a machine with
weak memory ordering. This surprising result can lead to bugs.

This commit introduces the macros to load and store st_changecount
with the memory barriers. These are called before and after
PgBackendStatus entries are modified or copied into private memory,
in order to prevent CPU from reordering PgBackendStatus access.
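
The writer side of the protocol then has roughly this shape (shown
without the wrapping macros introduced by this commit):

    beentry->st_changecount++;      /* odd: readers see a change in flight */
    pg_write_barrier();

    /* ... modify PgBackendStatus fields ... */

    pg_write_barrier();
    beentry->st_changecount++;      /* even again: entry is consistent */
    Assert((beentry->st_changecount & 1) == 0);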

Per discussion on pgsql-hackers, we decided not to back-patch this
to 9.4 or before until we get an actual bug report about this.

Patch by me. Review by Robert Haas.
2014-12-18 23:07:51 +09:00
Fujii Masao
19e065c049 Ensure variables live across calls in generate_series(numeric, numeric).
In generate_series_step_numeric(), the variables "start_num"
and "stop_num" may potentially be freed before the next call,
so they should be put in a location that survives across calls.
Previously they were not, which could cause incorrect behavior of
generate_series(numeric, numeric). This commit fixes the problem by
copying them into multi_call_memory_ctx.
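
The general set-returning-function pattern being referred to;
make_series_state() is a hypothetical helper:

    if (SRF_IS_FIRSTCALL())
    {
        FuncCallContext *funcctx = SRF_FIRSTCALL_INIT();
        MemoryContext    oldcontext;

        /* Anything that must survive until SRF_RETURN_DONE() has to be
         * allocated in multi_call_memory_ctx, not the per-call context. */
        oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
        funcctx->user_fctx = make_series_state(start_num, stop_num);
        MemoryContextSwitchTo(oldcontext);
    }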

Andrew Gierth
2014-12-18 21:13:52 +09:00
Fujii Masao
26674c923d Remove odd blank line in comment.
Etsuro Fujita
2014-12-18 17:33:38 +09:00
Andres Freund
c303e9e7e5 Fix (re-)starting from a basebackup taken off a standby after a failure.
When starting up from a basebackup taken off a standby, extra logic has
to be applied to compute the point where the data directory is
consistent. Normal base backups use a WAL record for that purpose, but
that isn't possible on a standby.

That logic had an error check ensuring that the cluster's control file
indicates being in recovery. Unfortunately that check was too strict,
disregarding the fact that the control file could also indicate that
the cluster was shut down while in recovery.

That's possible when a cluster starting from a basebackup is shut
down before the backup label has been removed. When everything goes
well that's a short window, but when either restore_command or
primary_conninfo isn't configured correctly the window can get much
wider. That's because in between reading and unlinking the label we
restore the last checkpoint from WAL, which can need additional WAL.

To fix, simply also allow starting when the control file indicates
"shutdown in recovery". There are nicer fixes imaginable, but they'd be
more invasive.

Backpatch to 9.2 where support for taking basebackups from standbys
was added.
2014-12-18 08:47:27 +01:00
Tom Lane
fc2ac1fb41 Allow CHECK constraints to be placed on foreign tables.
As with NOT NULL constraints, we consider that such constraints are merely
reports of constraints that are being enforced by the remote server (or
other underlying storage mechanism).  Their only real use is to allow
planner optimizations, for example in constraint-exclusion checks.  Thus,
the code changes here amount to little more than removal of the error that
was formerly thrown for applying CHECK to a foreign table.

(In passing, do a bit of cleanup of the ALTER FOREIGN TABLE reference page,
which had accumulated some weird decisions about ordering etc.)

Shigeru Hanada and Etsuro Fujita, reviewed by Kyotaro Horiguchi and
Ashutosh Bapat.
2014-12-17 17:00:53 -05:00
Tom Lane
c977b8cffc Fix poorly worded error message.
Adam Brightwell, per report from Martín Marqués.
2014-12-17 13:14:53 -05:00